I am interested in undertaking long (years) simulations using ACCESS-rAM3, so want to make compute as efficient as possible. I will only be using the first nest ~12 km resolution over Australia (GAL9).
By default the model timestep in rAM3 for this 12 km domain is 120 s.
The help dialogue for rg01_rs01_m01_dt says:
A typical choice for 4.4 km, 1.5 km and 300 m resolution models is
120 s, 60 s and 12 s, respectively… For RA1-T, the timesteps for 4.4 km and 1.5 km models can be pushed to 180 s and 75 s respectively.
This implies the 12 km timestep can be increased, along with the radiation timesteps, for which the radiation help dialogue says:
The prognostic radiation time-step (the first value to be specified) must be an integer multiple (typically 3 or 4 times) of the diagnostic radiation time-step. The diagnostic radiation time-step must be an integer multiple (typically 5 times) of the model time-step.
My proposed changes are:
| | rAM3 default | proposed |
|---|---|---|
| model timestep (dt) | 120 s | 300 s |
| radiation timestep (prognostic) | 1800 s | 3600 s |
| radiation timestep (diagnostic) | 600 s | 900 s |
This fits reasonably well with the suggestions in the help dialogue (prognostic is 4 times diagnostic, diagnostic is 3 times model timestep).
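The integer-multiple rules from the help dialogue are easy to sanity-check before submitting a run. Below is a minimal sketch (the `check_timesteps` helper is hypothetical, not part of the model suite) that verifies both the default and proposed combinations from the table above:

```python
def check_timesteps(dt_model, dt_rad_diag, dt_rad_prog):
    """Return the (diagnostic/model, prognostic/diagnostic) ratios.

    Raises ValueError if either ratio is not an integer, per the help
    dialogue: the prognostic radiation timestep must be an integer
    multiple (typically 3-4x) of the diagnostic one, and the diagnostic
    one an integer multiple (typically 5x) of the model timestep.
    """
    if dt_rad_diag % dt_model != 0:
        raise ValueError("diagnostic radiation step is not a multiple of model dt")
    if dt_rad_prog % dt_rad_diag != 0:
        raise ValueError("prognostic radiation step is not a multiple of diagnostic")
    return dt_rad_diag // dt_model, dt_rad_prog // dt_rad_diag

# rAM3 default: diagnostic is 5x model dt, prognostic is 3x diagnostic
print(check_timesteps(120, 600, 1800))  # (5, 3)

# proposed: diagnostic is 3x model dt, prognostic is 4x diagnostic
print(check_timesteps(300, 900, 3600))  # (3, 4)
```

Both combinations satisfy the integer-multiple constraints; the proposed settings trade a smaller diagnostic/model ratio (3 vs the "typical" 5) for a larger prognostic/diagnostic ratio (4 vs 3).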
My questions:
Was there a strong basis for the default rAM3 12 km domain having a shorter model timestep than the RNS recommendation?
Any concerns with my proposed changes?
Any other suggestions for increased efficiency for longer runs?
Suggestions for startdump creation, say every month or year, to allow restarts?
Some combinations of time steps don’t automatically work and/or complicate the model output.
We did originally have a longer timestep, but I believe the shorter timestep, for a shorter run (48 hours), made the output easier to work with and likely gave a more accurate outcome (although I am sure there are lots of different opinions).
You will also need to change your number of CPUs etc., and you would possibly benefit from moving to longer cycles.
I doubt I can add much to the startdump creation strategy (because it is subjective) beyond what you may have already thought about, but perhaps those who do longer runs have a stronger opinion.
“Accurate” is a strong word. Model timestep is certainly a source of model error and uncertainty, particularly in fast physics processes.
From a physics perspective, the greater the deviation between your dynamics and radiation timestep, the less physically-consistent your simulation becomes. (You are allowing convection and cloud processes to occur without having a radiative impact).
Your proposed changes actually bring this ratio into better alignment. Whether or not the results are less accurate / acceptably different (as compared in e.g. a short case study run) is something that should probably be tested.
Hi @bethanwhite, I agree that "accurate" is a strong word, but coming from a background in data assimilation, my understanding is that having initial conditions that are as accurate as possible, and watching sources of error growth, is important.
That said, you are the modelling expert. If you think that the settings in the Lismore example need to be changed, please feel free to give a better suggestion.
I’m talking about Mat’s question, not the Lismore case. (I take it as given that the Lismore settings are well-tested).
Initial conditions are pretty much irrelevant in a multi-year simulation like the one Mat's proposed. It's not an operational forecast or a case study simulation, and the model will have long forgotten its ICs by the end of several years. What you want is for the model to be as physically self-consistent as possible, because the question you're really exploring is: what's going on in the world of the model's physics? (Especially in a case like this with a large domain where the centre is far removed from the boundary forcing.) So, having the radiation timestep equal to the dynamics timestep would be ideal (equally, for any other physics scheme or coupled model component that runs on a different / longer timestep). But there are always decisions to be made around compute time and efficiency.
I think “accurate” really depends on the question you’re asking / process you’re studying, even in short case study runs. “Does it get a similar amount of rain as was observed in a similar place to observed” is a completely different question from “was that rain produced through realistic processes”. There are studies that show a large impact of timestep on microphysical pathways and thus precip generation (this is a nice / scary example) - in this case, “accurate” really is a nuanced word, because of the severe lack of observational constraints in microphysics. If you care about rain amounts and location it might be accurate, but if you care about processes and pathways it might not be.
I’d argue that error growth from physical inconsistencies and uncertainties matters even more in these longer runs. An operational forecast system benefits from DA to pull the model back to the observed state every cycle, a short weather-scale case study run doesn’t give the error sources very much time to grow, while a multi-year run really will see the impacts of error growth on the model’s physics.
Hi @bethanwhite, I agree that the settings needed for climatic runs are very different to short-term weather runs. I was not trying to imply that they should both be the same; I was just unsure whether you were commenting on the Lismore (weather-timescale) run specifically.
Results: I tested the 12 km pan-Australian domain (386 x 490 grid points) with a model timestep of 300 s for a free-running period of 6 months in 2012 with 10 day cycles. Over this time I had 2-3 BiCGSTAB failures, for which I had to reduce the timestep to 200 s for that cycle. Interestingly, the walltime for the dt=200 s cycles was only perhaps 5% longer than for dt=300 s, which I think indicates the 300 s solver was struggling to converge.
Conclusion: for 12 km resolution use dt=200 s or less.