As a new user I want to be able to start a new perturbation experiment from an existing ACCESS-OM2 control run.
How do I identify which control run to choose?
Which experiment do I clone?
How do I know where to branch my experiment?
How do I determine which restart files to use for my chosen branch point?
Where can I find those restart files on disk?
How do I configure payu to use the correct restart files?
How do I know I’ve done things correctly?
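On the payu configuration question above: payu supports pointing a new experiment at an existing restart directory via its `restart` option in `config.yaml`. A minimal sketch, assuming that option and using hypothetical experiment names and paths (check the payu documentation and your site's actual directory layout):

```yaml
# config.yaml (fragment) -- experiment name and restart path are hypothetical
experiment: 1deg_jra55_ryf_perturb                  # name for the new perturbation run
restart: /scratch/x00/control/archive/restart500    # restart dir from the chosen control-run branch point
```

With `restart` set, the first run of the new experiment warm-starts from the control run's restart files rather than from a cold start.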
At the COSIMA meeting discussing the scope of an ACCESS-NRI release of ACCESS-OM2, one use-case identified as useful for ACCESS-NRI to assist with was getting new users up and running with new experiments from existing control runs.

Talk to supervisor
Ask data owner
Ask data owner
Ask data owner: they will tell the user to consult the git log of the experiment repository, which records the run numbers, and to check the restart manifest for the restart file directory
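The "git log plus restart manifest" advice can be sketched concretely. The sketch below builds a stand-in experiment repository so the inspection commands can run anywhere; the commit message, manifest layout, and paths are assumptions modelled on payu's behaviour (payu commits after each run and writes manifests under `manifests/`), so check them against the real experiment repository:

```shell
# Build a toy experiment repo; in practice you would cd into a clone of the
# real control experiment instead.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
mkdir -p manifests
# Illustrative restart manifest entry (real payu manifests carry more detail):
printf 'work/INPUT/restart.nc:\n  fullpath: /scratch/x00/control/restart500/restart.nc\n' \
    > manifests/restart.yaml
git add -A
git -c user.name=demo -c user.email=demo@example.com commit -q -m "1deg_jra55_ryf: run 500"

# Step 1: find the branch point -- run numbers appear in the commit messages
git log --oneline
# Step 2: find where the restart files for that run live on disk
cat manifests/restart.yaml
```

Checking out the commit for the chosen run number then gives the configuration as it was at the branch point, and the manifest gives the matching restart directory.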
I have made this a wiki, with the intention that it should be edited to better reflect the experience of a new user (which I am not). So feel free to dive in and change as required.
I encourage everyone to create more user stories, and not just ones related to COSIMA; just put them in the correct category. User stories are a great way to capture workflows and how they might be blocked or inefficient, so that we can improve them.
ACCESS-NRI would like to use user stories as qualitative measures of impact and improvement for the community. In many cases what we do can’t be well measured with metrics, but the community will “just know” it is a lot easier than it used to be. User stories are a way to capture this, by documenting the improvement in the workflow for a particular user story.
Looks good to me Aidan. One additional thing that could be added to the first list (I’ve now added it):
How do I know I’ve done things correctly (i.e. the only difference between my new simulation and the previous control simulation is what I want)? [Side note: are our simulations bitwise reproducible? I can’t remember.]
Yes, they should be bitwise reproducible in a deterministic sense: the same model configuration with the same inputs will produce the same outputs. Some of the models are known to not be bitwise reproducible if, for example, the processor layout is changed, but that is a more stringent reproducibility criterion.
The issue @adele157 linked to is weird, and I would say anomalous, but we just don’t know why.
I think we should add an initial step for every forked experiment: fork the experiment, run it without changes, and confirm the outputs are unchanged. That confirms that, once changes are made, the control run is a valid comparison.
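That initial "fork, run unchanged, compare" check could be as simple as a recursive diff of the two output directories. A toy sketch (the directory names and file contents are placeholders; in practice you would compare the real control and fork output directories, perhaps restricted to the model output files):

```shell
# Build two stand-in "output" trees so the comparison can be demonstrated
# anywhere; replace these with the real control and fork output paths.
tmp=$(mktemp -d)
mkdir -p "$tmp/control/output000" "$tmp/fork/output000"
echo "ocean data" > "$tmp/control/output000/ocean.nc"
echo "ocean data" > "$tmp/fork/output000/ocean.nc"

# diff -r exits 0 only if every file in both trees is bitwise identical
if diff -r "$tmp/control/output000" "$tmp/fork/output000" > /dev/null; then
    echo "outputs identical"
else
    echo "outputs differ"
fi
```

If the trees differ, `diff -r` (without the redirect) lists exactly which files changed, which is also a useful first debugging step when a perturbation behaves unexpectedly.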