As many of you are aware, we’ve been discussing next steps for CM3 (+ESM3) evaluation and how we can leverage the ESM meetings to drive progress. Excitingly, both at the workshop and at recent meetings there has been strong community interest in contributing to CM3 evaluation and development (CM3 being the precursor to ESM3). We have had a few discussions already and think we can draw inspiration from the successful elements of similar groups.
Before moving forward, we’d like to check in with the community about the best path ahead. There are several groups to coordinate (e.g. the ESM working group, ACCESS-NRI, CSIRO, 21st Century CoE, others?) and we’d like anyone interested to participate!
We’ll also announce an agenda shortly. Feel free to reply here or email me (chris.bull@anu.edu.au) if there’s anything particular you’d like discussed.
Thanks everyone for contributing to the program so far. Here is the current agenda for next week’s meeting:
Outlook and plan for ESM3 and related models (update from Andy Hogg) (3 mins)
Update on current CM3 status at ACCESS-NRI (discussion led by Kieran) (5 mins + 2 mins questions)
ACCESS Consortium/NESP development efforts and application/evaluation interests (Harun/Christine/Dougie/Pearse?) (5 mins + 5 mins questions)
How will the process of model evaluation work (e.g. example role from ESM1.6 and ACCESS-OM2)? (discussion led by Chris/Wilma/Romain) (5 mins + 5 mins questions)
1. Outlook and plan for ESM3 and related models
Andy Hogg shared a roadmap for developments in AM3, OM3, CABLE, and WOMBAT, outlining how these will be merged into CM3 and eventually ESM3.
2. Update on CM3 current status
@kieranricardo presented the latest updates to CM3. The major instabilities encountered over the last year have now been resolved, meaning that the model is in a good state for more in-depth evaluation.
Several issues remain to be addressed, including a cold bias at the start of simulations, small issues in the water balance, salinity drift in the Baltic Sea, and excessive Northern Hemisphere sea ice volumes. Recent parameter changes have greatly improved the top-of-atmosphere energy imbalance; further tuning may bring additional improvements.
Future model developments include merging CABLE3 and WOMBAT, adding pseudo iceberg fluxes, atmospheric parameter changes, bathymetry edits, and adding penetrating shortwave radiation. Other work includes creating a pre-industrial (PI) control simulation and improving data output and access (ESM Datastore).
3. ACCESS Consortium/NESP development efforts and application/evaluation interests
@ctychung and @Harun_Rashid presented on initial CM3 evaluation work being completed under the NESP project. This work focuses on climate drivers, including ENSO, IOD, SAM, and MJO.
Initial analysis of a short CM3 simulation showed that biases in SAT, precipitation, and radiation fluxes may be slightly improved compared to CM2.
Discussion highlighted the ENSO tuning work that has been done for CM2, which will help inform tuning for ENSO in CM3.
4. How will the process of model evaluation work
The proposed evaluation framework centres on a shared GitHub repository for analysis notebooks, with GitHub issues used to share and discuss metrics and figures.
Monthly discussions during the ESM WG can then be used to share recent analysis and figures, and discuss development directions arising from the analysis.
Proposal for a CM3 evaluation hackathon in 2026
The ACCESS Hive forum can be used for meeting announcements, minutes, and for providing support for people who want to get involved.
Shared Overleaf documents can be used to produce any evaluation papers.
5. Model Evaluation and Diagnostics Team plans
@rbeucher presented plans from the Model Evaluation and Diagnostics Team, including extending ENSO evaluation and CMORisation workflows to CM3.
@aekiss and @cbull presented takeaways from OM2 and ESM1.6 evaluation, highlighting aspects that can be carried over to the CM3 evaluation, including the use of GitHub repositories for collaborative analysis, the use of the ACCESS-hive forum to coordinate meetings, and using GitHub as a tool for experiment provenance.
6. The access-cm3-paper-1 evaluation repository
@cbull presented on the ACCESS-Community-Hub/access-cm3-paper-1 repository, which aims to provide a simple workflow for sharing analysis, keeping GitHub features as simple as possible.
Notebook templates will allow analyses to easily be rerun in one go on new experiments, and OM3 evaluation notebooks will be transferable to CM3.
Diagnostics and figures could be split into core analysis, required as a core part of the model evaluation, and bonus analysis, which draws on community members’ specialised expertise. Anyone is welcome to contribute to either type of analysis.
For organising analysis code, three categories are proposed:
Polished Python: based on the template notebooks, including infrastructure for automatic rerunning on new experiments.
Sandbox Python: for one-off figures and analysis; not required to follow the templates.
Non-Python: for analysis done in languages other than Python.
The repository has usage instructions here. For help with getting started, request access to the repository. There may be an ACCESS-NRI buddy system, where ACCESS-NRI members can guide contributors through the steps for getting started.
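As a rough sketch of what the "Polished Python" automatic-rerun infrastructure might look like (all function, notebook, and experiment names below are hypothetical illustrations, not taken from the actual repository):

```python
from pathlib import PurePosixPath

def plan_reruns(notebooks, experiments, out_dir="results"):
    """Pair every template notebook with every experiment, recording the
    output path each rerun would write to. Purely illustrative: the real
    repository layout and naming may differ."""
    jobs = []
    for nb in notebooks:
        for exp in experiments:
            out = PurePosixPath(out_dir) / exp / PurePosixPath(nb).name
            jobs.append({"notebook": nb, "experiment": exp, "output": str(out)})
    return jobs

if __name__ == "__main__":
    # Hypothetical template notebooks and experiment names
    for job in plan_reruns(["sst_bias.ipynb", "energy_budget.ipynb"],
                           ["cm3-run-a", "cm3-run-b"]):
        print(f"{job['notebook']} on {job['experiment']} -> {job['output']}")
```

In practice, a tool such as papermill could then execute each planned job, parameterising the template notebook with the experiment name; the sketch above only shows the bookkeeping that makes rerunning "in one go" possible.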
7. Hypothetical paper ideas
A brief discussion was held around papers that could be written at the end of the evaluation process. Options raised included a single large paper, or a model development paper accompanied by several focused evaluation papers.
8. Closing summary/action items
A summary of this meeting will be presented at the ESM Working Group meeting on 9/10/2025. Open points for discussion will be continued there, including changing the day/time of working group meetings.
Roles for coordinating meetings and the GitHub repository will be assigned.
Data from recent CM3 runs and early analysis will be shared with the ESM Working Group.
9. Closing discussion
Questions were raised around compatibility with CMORisation and tools like ESMValTool. The current idea is to aim for a low barrier to entry and, in the long term, convert analyses to use ESMValTool where possible.
Interest was raised in using the framework to contribute directly to model development, rather than focusing only on an evaluation paper. The community evaluation could be carried out in two phases, with the first focusing on evaluation to inform model development, and the second evaluating the final model for publication. Fortnightly ACCESS-NRI coupled development meetings could also be opened to members of the community who are interested in contributing directly to the model’s technical development.
This post is an open wiki; please add any corrections and clarifications.