Tuesday 12th November: 1.30pm - 3pm. ACCESS-OM3 model evaluation paper metrics I. RE: Model Live Diagnostics vs ESMValTool etc.
People present: @helen, @aekiss, @minghangli, @ezhilsabareesh8, @anton, @cbull, @rbeucher, @CharlesTurner
Topics covered (briefly): the Model Live Diagnostics tool, available for monitoring simulations as they run, and ESMValTool for evaluating/comparing model output against observations/other models. Datasets and metrics we should start with to evaluate ACCESS-OM3.
Next actions:
Example run of OM3 (@ezhilsabareesh8)
Observational datasets (@cbull). Could ask people such as @RyanHolmes, @Pearse, @Matthis_Auger for input on the most appropriate/updated datasets.
Example evaluation notebook for OM2 that would need to be ESMValCore-ised (@anton)
Template for ACCESS-OM3 evaluation metrics (@anton)
Summary note written by: @cbull
Summary of @rbeucher's talk on the available tools and support offered by the MED team.
@rbeucher's presentation slides are here:
OceanTeamWorkshopMED.pptx (6.9 MB)
Model evaluation and diagnostics
Two major use cases:
- Model development (ensure model behaviour is sensible)
- Data user perspective (this has been most of the team’s focus)
Validation (basic checks against observations) vs evaluation (in-depth comparison using research-specific metrics). Evaluation is complicated! It sits at the intersection of theory, observations and models. Challenges in model evaluation: data format compatibility, data accessibility and documentation, and differing use cases.
Supported tools
MED Conda environments, ESMValTool-Workflow, ILAMB-Workflow, ACCESS MED Diagnostics, ACCESS-NRI Intake Catalogue, Data Replicas
Multi-stage evaluation strategy involves three stages: initial diagnostics (interactive checks as the model runs, ideally using Model Live Diagnostics), intermediate evaluation, and detailed diagnostics.
Re: Model Live Diagnostics. A Python-based tool that can be imported in an ARE Jupyter session. You just need to specify the model type (OM3 is not currently supported, but support could be added) and the output folder. It can also compare multiple simulations, as long as they are of the same type and available in the access-nri-intake catalogue.
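For reference, a minimal sketch of what starting a monitoring session looks like. The package, class, and argument names below are assumptions based on the med_diagnostics documentation and should be double-checked:

```python
# Minimal sketch of starting a Model Live Diagnostics session from an ARE
# Jupyter notebook. Package/class/argument names are assumptions based on
# the med_diagnostics docs; verify against the tool's documentation.
import med_diagnostics

# Point the session at a supported model type and the output directory of a
# running simulation (OM3 is not supported yet, so ACCESS-OM2 is used here).
session = med_diagnostics.session.CreateModelDiagnosticsSession(
    model_type="ACCESS-OM2",           # must be a supported model type
    data_dir="/path/to/model/output",  # hypothetical output folder
)
```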
Re: Intermediate evaluation. Climate aspects over longer simulation periods in which comparisons are made with previous model configurations, CMIP6 etc. E.g. COSIMA recipes.
Re: Detailed diagnostics. Examine specific processes with dedicated diagnostics. Idea: in-depth analysis of specific climate processes. Suggested tool is ESMValTool.
Re: ESMValTool-Workflow. Likely overkill for model development but useful for analysis. Pro: large community with many applications. Con: steep learning curve (CLI-based).
Re: CMIP fast track evaluation strategy. Historically, ESMValTool did not have many ocean diagnostics. Work is ongoing to port COSIMA ocean recipes to ESMValTool. The ESMValCore Python API is used for pre-processing. The aspiration is to add recipes for ENSO and sea ice. Showed an example of the ENSO evaluation recipes that have already been created.
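For context, a minimal sketch of what pre-processing with the ESMValCore Python API looks like (the input file here is hypothetical; regrid() and climate_statistics() are standard esmvalcore.preprocessor functions):

```python
# Minimal sketch of pre-processing with the ESMValCore Python API.
import iris
from esmvalcore.preprocessor import climate_statistics, regrid

cube = iris.load_cube("tos_Omon_ACCESS-OM2_historical.nc")  # hypothetical file

# Regrid to a regular 1x1 degree grid, then average over the full period.
cube = regrid(cube, target_grid="1x1", scheme="linear")
cube = climate_statistics(cube, operator="mean", period="full")
```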
Re: IOMB-Workflow. The International Ocean Model Benchmarking (IOMB) package evaluates marine BGC models against observations.
Re: Replica datasets. A considerable number of datasets have been processed by the MED team to make comparisons easier. Datasets such as WOA are currently part of the collection, and additional variables could be added, as could newer versions. Romain had imagined having a specific sub-collection for COSIMA data. Ship-like sections have been used in OM2 in the past. The team has also developed a tool for on-the-fly CMORisation.
The MED team can help with regridding new datasets, so we can send them a wishlist. For example: NOAA passive microwave data (sea ice concentration, Antarctic/Arctic sea ice thickness).
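For datasets not yet in the replica collection, regridding onto the model grid might look like the following xESMF sketch. This is one common approach, not necessarily the MED team's tooling, and the file names are hypothetical:

```python
# Sketch of regridding an observational dataset onto a model grid with xESMF.
# Both datasets need lat/lon coordinates for the Regridder to work.
import xarray as xr
import xesmf as xe

obs = xr.open_dataset("nsidc_cdr_seaice_conc.nc")  # hypothetical obs file
model = xr.open_dataset("access_om3_grid.nc")      # hypothetical target grid

regridder = xe.Regridder(obs, model, "bilinear")   # build obs -> model regridder
obs_on_model_grid = regridder(obs)                 # apply to the whole dataset
```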
Model Live Diagnostics tool. Can do live plots and spatial plots. Users can add their own diagnostics, which, once accepted, become part of the tool for everyone.
The ENSO recipes are a proof-of-concept example that the team is developing using ESMValCore (the current assumption is that the data is in the catalogue); see:
GitHub - ACCESS-NRI/ACCESS-ENSO-recipes: Recipes and metrics for evaluating ENSO in ACCESS, which is looking to create figures like https://clivar.org/news/clivar-2020-enso-metrics-package
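Since the recipes assume the data lives in the ACCESS-NRI Intake Catalogue, here is a minimal sketch of pulling data from it. The experiment name is hypothetical, and the exact API should be checked against the access-nri-intake documentation:

```python
# Sketch of loading model output via the ACCESS-NRI Intake Catalogue
# (access-nri-intake package). The experiment name is hypothetical.
import intake

catalog = intake.cat.access_nri                       # the ACCESS-NRI catalogue
esm_datastore = catalog["om2_example_experiment"]     # hypothetical experiment
ds = esm_datastore.search(variable="temp").to_dask()  # lazy-load as xarray
```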
ACCESS-OM3 evaluation datasets we could ask the MED team to turn into replica datasets.
There is a list and discussion of metrics at ACCESS-OM3 evaluation, the following is based on that.
ERSST v4 (SST comparisons)
CNES-CLS13 and AVISO SSALTO/DUACS (mean and variability of SSH comparison). Updated product? Perhaps this: https://www.aviso.altimetry.fr/en/data/products/sea-surface-height-products/global/gridded-sea-level-heights-and-derived-variables.html
i.e. the delayed-time “allsat” product
WOA23 (T and S comparisons). Already on Gadi (/g/data/ik11/observations/woa23); see the loading sketch after this list.
NSIDC CDR v4 (sea ice concentration). NOAA/NSIDC Climate Data Record of Passive Microwave Sea Ice Concentration, Version 4 | National Snow and Ice Data Center
NSIDC sea ice index v3 (area and extent). Sea Ice Index, Version 3 | National Snow and Ice Data Center
Tropical Atmosphere Ocean (TAO) array data. Want: subsurface temperature, fixed-depth currents, subsurface salinity. Related: Forum post
Chlorophyll (Sauzede et al., 2016, JGR Oceans)
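As a concrete starting point for the WOA23 T and S comparisons above, a minimal sketch. The directory is the Gadi path noted in the list; the file name and variable names ("t_an" is the WOA convention for objectively analysed temperature) are assumptions to verify:

```python
# Sketch of a basic temperature comparison against WOA23.
import xarray as xr

woa = xr.open_dataset(
    "/g/data/ik11/observations/woa23/woa23_decav_t00_04.nc",  # hypothetical file
    decode_times=False,  # WOA climatology time units are often non-standard
)
model = xr.open_dataset("access_om3_temp_annual_mean.nc")  # hypothetical output

# Model annual mean minus the WOA23 climatology, assuming both fields are on
# (or have been regridded to) the same grid.
bias = model["temp"].mean("time") - woa["t_an"].squeeze()
```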
Thursday 14th November: 9.45am – 10.15am. ACCESS-OM3 model evaluation paper metrics II (invited: @AndyHoggANU) and global sanity checks I (e.g. freshwater budget)
People present: @helen, @aekiss, @minghangli, @ezhilsabareesh8, @anton, @cbull, @AndyHoggANU
Existing GitHub repository that needs to be updated.
Also see figure scripts from OM2: