Forcing ACCESS-OM2 using ESM1.5 data

Thanks Aidan, so… say I wanted to recompile ACCESS-OM2 myself from this repo, could I make my own exe with extra traceback flags?

The easiest way, and what we’re doing with a lot of the model development work, is to use the build machinery to do it all for you by creating a pull request with the modifications you need.

However, this requires write access to the ACCESS-OM2 repo, as you need to make a branch there for the pull request to do its magic. If you’re interested, we can certainly add you as a collaborator.

The other option is to build the model yourself on Gadi.

Note that you don’t need to create a development package for MOM5 in this instance, as all the changes you want to make will be in the spack.yaml, i.e. at the environment level.

Regardless of which way you choose to go, you will need to add some FFLAGS to the spack.yaml.

This is a different model, but you can see the pattern for how to add additional build flags in the spack.yaml.

You can add them to whichever dependencies you want: just MOM, or MOM, CICE and netCDF, whatever. Spack will build it for you.
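
For example, something along these lines (an untested sketch; the package name and the Intel flags here are just placeholders, so check the ACCESS-NRI Spack packages for what is actually available):

# Spack compiler flags go on the spec itself, e.g. in the specs: section of spack.yaml:
#   - mom5 fflags="-g -traceback -check all"
# You can sanity-check the same spec syntax on the command line first:
spack spec mom5 fflags="-g -traceback -check all"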

Thanks Aidan. I may try the “build on Gadi” approach next week. Great to know the instructions are there for DIY compiling.

Feel free to try both approaches and compare which might suit each use case.

DIY building is good for quickly checking code changes. The deployment auto-build approach is good for minimal effort, but also for reproducibility and checking your work, and particularly for sharing with others. It creates a pre-release build that anyone can use with simple module use/module load steps, and it can stay deployed in prerelease for as long as it is useful.
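
Using a pre-release build then looks something like this (a sketch only; the module path and version string are assumptions, so use whatever the deployment comments on your pull request point to):

module use /g/data/vk83/prerelease/modules   # assumed location of the pre-release module tree
module avail access-om2                      # list the available pre-release builds
module load access-om2/pr<NNN>               # hypothetical pre-release module name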

An overdue update and follow-up:
Thanks all for the guidance so far, and it was nice to meet many people IRL at the workshop last week.
OM2 now runs, and the Jupyter notebooks for preparing the forcing files are here, along with the access-om2 config files I used: GitHub - ongqingyee/esm-forced-om2.

A few issues remaining:

  1. I found differences in the thermocline at the equator between ESM and OM2, which could be a problem (Slide 12 below).
    Ellie Ong - 11 Sep - Parallel 3 Forum 1 - 1130.pptx (8.2 MB)
    Some possible reasons are:
    a. using 3-hourly forcing as opposed to a higher frequency (e.g. 1-hourly), especially for wind speeds, and instantaneous shortwave radiative fluxes, as suggested by the NCAR group. I remember hearing, though, that coupling in ESM uses 3-hourly data, at least for shortwave; not sure about wind speeds @spencerwong? Will check this if option b fails.
    b. a different vertical grid in OM2 vs. ESM. Currently the ESM restart is interpolated to the OM2 grid, but as suggested here, the ESM grid might represent the thermocline better. @dougiesquire, you mentioned that changing the OM2 grid back to the ESM one was not too hard?
  2. This config runs very slowly: 3 hours for 1 model year. I've set up the config file again from scratch, and with just the ESM forcing applied (i.e. without the ocean initial condition changes) it is still very slow. I was recommended to change the forcing files one variable at a time, so I will go back and try this (the namcouple remapping files need to be changed one at a time too).

Hi @ongqingyee, great job getting this working. It’s really exciting to hear that it’s running!

Just double-checked this and can confirm ESM1.5 used a 3-hourly coupling timestep. This is briefly mentioned in the “2.3 Ocean, sea-ice and coupling” section of the Ziehn et al. 2020 paper on ESM1.5, and is configured in the namcouple coupling configuration file, where each of the atmosphere->ice/ocean fields uses a 10800 s coupling period.

Cheers,
Spencer

I’ve done a run which only changes the shortwave radiation relative to the NRI release 1deg_jra55_ryf run (/home/561/qo9901/access-om2/1deg_esm_ind_forc). It took 23 minutes to run one year, so for my other runs with multiple ESM forcings the time cost is probably compounding. Has anyone seen such an increase in runtime with different atmospheric forcings? @dougiesquire @aekiss

======================================================================================
                  Resource Usage on 2025-09-16 15:26:01:
   Job Id:             149793173.gadi-pbs
   Project:            fy29
   Exit Status:        0
   Service Units:      225.92
   NCPUs Requested:    288                    NCPUs Used: 288
                                           CPU Time Used: 92:36:01
   Memory Requested:   1000.0GB              Memory Used: 248.4GB
   Walltime requested: 03:00:00            Walltime Used: 00:23:32
   JobFS requested:    600.0MB                JobFS used: 8.16MB
======================================================================================

For context, the JRA reanalysis forcing has dimensions (time: 2920, lat: 320, lon: 640, bnds: 2), and the ESM forcing I am now applying has dimensions (time: 2920, lat: 145, lon: 192, bnds: 2).

If the chunking is bad for time-slice access it can make it very slow to read.

What is the on disk chunking of your modified forcing files? You can find this out using ncdump -hs.
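
For example (the filename is just a placeholder for one of your modified forcing files):

ncdump -hs modified_forcing.nc | grep -E '_ChunkSizes|_DeflateLevel|_Storage'
# the per-variable _ChunkSizes attribute gives the on-disk chunk shape (time, lat, lon)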

Amazing, I rechunked the ESM forcing to use 1 chunk per time step and it sped up to 11 min for 1 model year. The NRI model release took 13 min per model year for me (I believe this is typical?). Thank you @Aidan!
fld_s01i235:_ChunkSizes = 1, 145, 192 ;
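
For anyone following along, the rechunking can be done with something like nccopy (a sketch only; the dimension names are assumed to be time/lat/lon as in the ncdump output):

nccopy -c time/1,lat/145,lon/192 esm_forcing.nc esm_forcing_rechunked.nc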

Great! I assume this is the chunk size after rechunking? What was it before you did this out of interest? Just wondering what others should look to avoid.

Yes, below is what I had previously. This chunking might have come from ESM1.5; I haven’t checked, but the time chunking would be the problem (the dimension order is time, lat, lon).
fld_s01i235:_ChunkSizes = 974, 49, 64 ;

Thanks, and yes that is disastrously bad for reading time-slices as is required for this application.

The netCDF library has to read and decompress chunks spanning 974 time steps (974 × 49 × 64) to extract every single time slice, so it ends up re-reading and re-decoding the same chunks up to 974 times each.
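
Rough numbers, assuming 4-byte floats and the chunk shape above:

# each 974 x 49 x 64 chunk holds 974 time steps of a 49 x 64 tile and must be
# decompressed again for every one of those time slices
echo $(( 974 * 49 * 64 * 4 ))   # ~12.2 MB uncompressed per chunk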

I’ve started trying to change the vertical grid in OM2 to follow that of ESM. I replaced the ocean inputs in config.yaml to match those of the ESM1.5 NRI release, i.e. replacing the OM2 input files with these.

I am now getting this error, even though I am using the ESM mask and restart (I definitely checked the latter), so I’m not entirely sure where I went wrong. There is also no topo file in the ESM config, which I thought might be an issue? Any insight would be great, thanks! :slight_smile:

FATAL from PE   101: MPP_DEFINE_DOMAINS2D: incorrect number of PEs assigned for this layout and maskmap. Use 240 PEs for this domain decomposition for mom_domain

(the same FATAL message is repeated by several other PEs: 72, 74, 100, 102, 134, ...)

Dir I’m using: /home/561/qo9901/access-om2/1deg_esm1p5_hist_vgrid

Hi Ellie,

I think this is probably an issue of the “old” grid_spec format used by ESM1.5 vs the “new” grid_spec used by OM2, which specifies mosaic files and designates topog.nc as a standalone file.

I can have a quick look at your config this afternoon.

Regards, Dave

As part of the ESM1.6 development, we recently moved from the legacy grid format to mosaic grid format - see this PR.

As part of this, we created an ocean_vgrid.nc file for the ESM1.5/1.6 grid that you should be able to use with the OM2 mosaic grid:

/g/data/vk83/configurations/inputs/access-esm1p6/modern/share/ocean/grids/vertical/global.1deg/2025.07.29/ocean_vgrid.nc

Thanks Dougie! @ongqingyee : I was going to suggest that you keep all the grid inputs (including grid_spec.nc) the same as in OM2 except for updating the ocean_vgrid.nc as Dougie mentioned. Please let us know if that fixes the problem.
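
That is, point the ocean input list in config.yaml at Dougie’s file and leave the rest of the grid inputs alone. You can sanity-check the replacement grid first (path copied from Dougie’s post above):

ncdump -h /g/data/vk83/configurations/inputs/access-esm1p6/modern/share/ocean/grids/vertical/global.1deg/2025.07.29/ocean_vgrid.nc | head -n 20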

Thanks Dougie!

With a lot of Dave’s help (thank you!) this now works. We changed the vgrid and topography to the ESM1.6 format, but the remapping files also had to be changed to account for the ocean mask being different in ESM1.5 and OM2. Rough remapping notes that will become instructions are here for now.

Unfortunately this does not solve my problem that the OM2 mixed layer is shallower than in ESM.

I was wondering, though, whether the way namcouple works in OM2 could be having an effect? @spencerwong said that atmos → ice coupling is 3-hourly in ESM, but the ice → ocean coupling looks faster in ESM (1-hourly).

The corresponding line in the OM2 namcouple doesn’t have this, and I’m not sure what it means:
strsu_io u_flux -1 -1 0 i2o.nc IGNORED

cheers!

2 Likes

Hi @ongqingyee, I’ve had a bit more of a look at the coupling timesteps. As you mentioned, ESM sends UM data to CICE every 3 hours, and data from CICE to MOM every hour, i.e. every ice/ocean timestep.

As a result, MOM receives updated values for the CICE variables (e.g. ice concentration and melt fluxes) every timestep, and only receives new values for the UM variables (e.g. SW flux) every third timestep. In the intermediate timesteps, it just re-receives the last set of atmospheric data.

The chk_i2o_flds flag in the ice/input_ice.nml namelist can be used to save a copy of the coupling data sent from CICE to MOM. The following shows sums of the ice concentration and SW flux exported by CICE in a short ESM1.5 run. While the ice data gets new values every timestep, the atmosphere data only gets new values every third timestep:

For OM2, I’m not confident about what the -1 value in the namcouple file means. However, saving the ice->ocean coupling data from a short run of the 1deg_jra55_iaf configuration, it looks similar: flat atmospheric data every three hours, and ice data that is updated every ice/ocean timestep:

(Not too sure what aice does at the start)

The ice/ocean timestep for this configuration is 1.5 hours though, so MOM essentially receives new atmospheric data every second timestep in this case.

(Small caveat: In ESM1.5, the atmospheric data appears to stay exactly flat for every 3 hour block, while for OM2 it changes a tiny bit at the ice margins. I’m not too sure what causes this difference)
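
If you want to repeat this check yourself, something like the following is enough (a sketch; I haven’t quoted the exact namelist group that chk_i2o_flds sits in, so just search for it):

grep -n chk_i2o_flds ice/input_ice.nml   # set this to .true. in the control directory
# then run a short segment and CICE will save the ice->ocean coupling fields it sends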

Thank you for the great clarification @spencerwong!

I believe that the 1.5-hour vs 1-hour ice/ocean timestep difference between OM2 and ESM respectively is what is causing the subsurface differences I see. I have tried to remedy this by changing ice_ocean_timestep from 5400 s to 3600 s in accessom2.nml, but that didn’t fix the subsurface biases. I am not sure whether the time lag between atmos/ice and ice/ocean coupling is the same in ESM and OM2 though? @sofarrell might know more? And @dougiesquire, do you know where else DT needs to change in OM2 to match ESM? Cheers!
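
For reference, a quick way to confirm the setting (the namelist group isn’t shown here):

grep -n ice_ocean_timestep accessom2.nml   # changed from 5400 to 3600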

Hi @ongqingyee, apologies I missed your talk at the ACCESS meeting; I was in a clashing session. I just looked at the ppt, and the thing that interested me most is the slides you didn’t show, which had the differences in the thermocline. One question: I haven’t closely followed all the discussions on setting up these runs (I have seen some of it on other threads), but did you check all the parameters in the OM2 setup vs the ESM1.5 setup, particularly the mixing parameters? Getting the vertical resolution in the thermocline the same in the two models may also be an issue. OM2 has an updated setup with more resolution near the surface, whereas the ESM1.5 setup has more resolution in the thermocline.
In relation to the time stepping, the ice-ocean timestep of 1 hour would be the optimum to use. I guess the same interpolation that OM2 uses for the JRA-55 (3-hourly) forcing is being applied to your 3-hourly saved ESM forcing when you use it in place of the OM2 fields, but I will need to look back at your Hive posts. There is a lag in the ESM model in how it sees the updated atmospheric forcing, but that would carry over to your forced runs as well.
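
A quick way to eyeball the parameter differences mentioned above (a sketch only; the two paths are placeholders for your OM2 control directory and an ESM1.5 control directory, and the ocean namelist file name is assumed to be ocean/input.nml in both):

diff <(sort OM2_CONTROL/ocean/input.nml) <(sort ESM15_CONTROL/ocean/input.nml) | less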