Models needing performance benchmarks for the upcoming 2026 NCMAS applications

The NCMAS 2026 Call for Applications will open next month, and a closing date of 31 October 2025 has now been announced.

We’re interested in knowing which models people in the community would like to include in their applications. This allows us (the Ocean and Software transformation teams) to plan and complete the appropriate benchmarking.

Below is a poll in which we ask anyone who is completing an NCMAS application to indicate if there is a particular model they would like performance data on.

  • release-MC_25km_jra_ryf (OM3)
  • dev-MCW_100km_jra_ryf (OM3)
  • dev-MC_4km_jra_ryf+regionalpanan (OM3)
  • dev-MC_4km_jra_ryf+regionalpanan_isf (OM3)
  • dev-MC_100km_jra_ryf+wombatlite (OM3)
  • dev-MC_25km_jra_ryf+wombatlite (OM3)
  • dev-1deg_jra55_ryf+wombatlite (OM2)

The poll closes on 2025-09-29, and voters are not shown publicly. Apart from the WOMBAT option, OM2 configurations are mostly absent from the list above because performance data already exists for those models.

If there are other configurations you’d like to use that are not on this list, please post them below. Prefer not to discuss this publicly? Send an email to chris.bull@anu.edu.au or post here anonymously.

Most of the branches are named according to the following naming scheme:

{dev|release}-{MODEL_COMPONENTS}_{nominal_resolution}km_{forcing_data}_{forcing_method}[+{modifier}]

where {MODEL_COMPONENTS} is an acronym specifying the active model components in the following order:

  • M: MOM6
  • C: CICE6
  • W: WW3
  • r: a regional configuration

and the nominal resolution is given in kilometers, corresponding to the nominal resolution in degrees as follows:

  • 100km: 1°
  • 25km: 0.25°
  • 10km: 0.1°
  • 8km: 1/12°

NB: the dev-MC_4km_jra_ryf+regionalpanan configuration above uses the global 8km grid but is ~4km in the Southern Ocean, and isf stands for ice shelf.
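For anyone scripting against these branch names, here is a minimal parsing sketch in Python. The regular expression, field names, and the parse_branch helper are illustrative assumptions based on the scheme described above, not an official ACCESS-NRI tool; note that the OM2 branch dev-1deg_jra55_ryf+wombatlite follows an older scheme that this pattern will not match.

```python
import re

# Illustrative sketch only: pattern and field names are assumptions
# based on the naming scheme described above, not an official tool.
BRANCH_RE = re.compile(
    r"^(?P<stage>dev|release)-"      # dev or release
    r"(?P<components>[MCWr]+)_"      # active model components, e.g. MC, MCW
    r"(?P<resolution>\d+)km_"        # nominal resolution in km
    r"(?P<forcing_data>[a-z0-9]+)_"  # forcing dataset, e.g. jra
    r"(?P<forcing_method>[a-z]+)"    # forcing method, e.g. ryf
    r"(?:\+(?P<modifier>\w+))?$"     # optional modifier, e.g. wombatlite
)

# Nominal resolution in km -> nominal resolution in degrees, from the list above
NOMINAL_DEGREES = {100: "1°", 25: "0.25°", 10: "0.1°", 8: "1/12°"}

def parse_branch(name: str) -> dict:
    """Split a branch name like 'dev-MC_25km_jra_ryf+wombatlite' into fields."""
    match = BRANCH_RE.match(name)
    if match is None:
        raise ValueError(f"branch name does not follow the scheme: {name!r}")
    fields = match.groupdict()
    fields["nominal_degrees"] = NOMINAL_DEGREES.get(int(fields["resolution"]))
    return fields

print(parse_branch("dev-MC_4km_jra_ryf+regionalpanan_isf"))
# {'stage': 'dev', 'components': 'MC', 'resolution': '4', 'forcing_data': 'jra',
#  'forcing_method': 'ryf', 'modifier': 'regionalpanan_isf',
#  'nominal_degrees': None}   <- 4km is the special pan-Antarctic case above
```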


It might help to have some interpretation of what these names mean. What are MC and MCW? I know “isf” means ice shelf, but others may not.

Thanks @adele-morrison, great point. I’ve added an explanation, hope that’s now clearer?


I was not expecting the “results” page for this poll to be so complicated! It’s some sort of instant runoff voting system?!


An aside, and hopefully a genuinely naive question, but is there a thread explaining why ERA5 is not being considered in the near term? Maybe it’s general knowledge at this level, but I’m a bit in the dark.

Hi Dan,

Not naive at all, but I’m unaware of an existing thread. There are a few reasons for this, the main one being that JRA55 is the forcing currently planned for the initial OMIP runs.

Were you at the workshop in Melbourne? There was a presentation on this:
Gokhan Danabasoglu (NSF National Center for Atmospheric Research)
CFORCE: Creating the next generation datasets for forcing ocean – sea-ice coupled models

Having said this, we will eventually be using a version of ERA5 updated to work with ocean–sea-ice models. There has been some prototype work using ERA5, which you can see here:

  1. Swapping from JRA55 to ERA5 atmospheric forcing · Issue #548 · ACCESS-NRI/access-om3-configs · GitHub
  2. GitHub - ACCESS-NRI/access-om3-configs at 109-MC_100km_era_iaf

I think @mmr0 and @ezhilsabareesh8 would be very keen for help if you’d like to be involved!

Chris.


Thanks very much for clarifying, Chris; that helps me understand the current context around JRA55 and the initial OMIP runs. Unfortunately, I wasn’t at the Melbourne workshop, and that presentation sounds like it would have been a good source of information. I’ll have a read of the GitHub issue and repo you linked.

I’ll need to keep my main focus on my CICE6-standalone work for now, but I’d certainly be interested in following the ERA5 prototype developments as they progress, especially given the overlap with my fast ice experiments.

Thanks again for pointing me in the right direction.

Respectfully,
Dan

The results are in!

Preference   Configuration
1            release-MC_25km_jra_ryf (OM3)
1            dev-MCW_100km_jra_ryf (OM3)
1            dev-MCW_100km_jra_ryf (OM3)
1            dev-MC_4km_jra_ryf+regionalpanan (OM3)
1            dev-MC_4km_jra_ryf+regionalpanan (OM3)
1            dev-MC_4km_jra_ryf+regionalpanan_isf (OM3)
1            dev-MC_4km_jra_ryf+regionalpanan_isf (OM3)
1            dev-MC_4km_jra_ryf+regionalpanan_isf (OM3)
1            dev-MC_25km_jra_ryf+wombatlite (OM3)
2            release-MC_25km_jra_ryf (OM3)
2            release-MC_25km_jra_ryf (OM3)
2            release-MC_25km_jra_ryf (OM3)
2            release-MC_25km_jra_ryf (OM3)
2            dev-MC_4km_jra_ryf+regionalpanan (OM3)
2            dev-MC_4km_jra_ryf+regionalpanan (OM3)
2            dev-MC_4km_jra_ryf+regionalpanan_isf (OM3)
2            dev-MC_4km_jra_ryf+regionalpanan_isf (OM3)
2            dev-MC_100km_jra_ryf+wombatlite (OM3)
3            release-MC_25km_jra_ryf (OM3)
3            release-MC_25km_jra_ryf (OM3)
3            dev-MC_4km_jra_ryf+regionalpanan (OM3)
3            dev-MC_4km_jra_ryf+regionalpanan_isf (OM3)
3            dev-MC_25km_jra_ryf+wombatlite (OM3)
3            dev-MC_25km_jra_ryf+wombatlite (OM3)

Thanks to the 9 people who voted!

How does one read this table? Is 1 the most preferred, or least preferred? And why are there lots of duplicates?

The poll allowed voters to rank their preferences (rather than just vote for one model). In the table, I’ve included each voter’s top three preferences: 1 is the most preferred, and you can see there are nine first-preference votes. There aren’t nine votes by the time you get to preference 3, as people were not required to give a full set of preferences.

Make more sense?
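If it helps, here is a rough Python sketch of how the table tallies up per preference. The rows are transcribed by hand from the table above; each (preference, configuration) pair is one voter’s ranked choice, which is why the table has duplicates.

```python
from collections import Counter

# Each (preference, configuration) pair is one voter's ranked choice,
# transcribed from the results table above.
rows = [
    (1, "release-MC_25km_jra_ryf"),
    (1, "dev-MCW_100km_jra_ryf"), (1, "dev-MCW_100km_jra_ryf"),
    (1, "dev-MC_4km_jra_ryf+regionalpanan"),
    (1, "dev-MC_4km_jra_ryf+regionalpanan"),
    (1, "dev-MC_4km_jra_ryf+regionalpanan_isf"),
    (1, "dev-MC_4km_jra_ryf+regionalpanan_isf"),
    (1, "dev-MC_4km_jra_ryf+regionalpanan_isf"),
    (1, "dev-MC_25km_jra_ryf+wombatlite"),
    (2, "release-MC_25km_jra_ryf"), (2, "release-MC_25km_jra_ryf"),
    (2, "release-MC_25km_jra_ryf"), (2, "release-MC_25km_jra_ryf"),
    (2, "dev-MC_4km_jra_ryf+regionalpanan"),
    (2, "dev-MC_4km_jra_ryf+regionalpanan"),
    (2, "dev-MC_4km_jra_ryf+regionalpanan_isf"),
    (2, "dev-MC_4km_jra_ryf+regionalpanan_isf"),
    (2, "dev-MC_100km_jra_ryf+wombatlite"),
    (3, "release-MC_25km_jra_ryf"), (3, "release-MC_25km_jra_ryf"),
    (3, "dev-MC_4km_jra_ryf+regionalpanan"),
    (3, "dev-MC_4km_jra_ryf+regionalpanan_isf"),
    (3, "dev-MC_25km_jra_ryf+wombatlite"),
    (3, "dev-MC_25km_jra_ryf+wombatlite"),
]

# Count how many voters gave each configuration at each preference level
tallies: dict[int, Counter] = {}
for pref, config in rows:
    tallies.setdefault(pref, Counter())[config] += 1

for pref in sorted(tallies):
    print(f"Preference {pref}: {sum(tallies[pref].values())} votes")
    for config, n in tallies[pref].most_common():
        print(f"  {n} x {config}")
```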

(I don’t think I’ll use this particular poll type in the future, as it is rather convoluted!)