As a developer for CABLE, I want to know how my code changes affect the simulation outputs. That is, I want to ensure there is no impact on unrelated parts of the code or on unrelated diagnostic variables, and to evaluate the effect on the related diagnostic variables.
- How do I know what configurations to run for testing?
- How do I set up the codes and file structures to run all configurations correctly?
- How do I find what analysis to run?
- How do I find evaluation results from previous developments?
- How do I compare my results to other developments?
- How do I share and archive my analysis results?
Current Workflow (as of 17/01/2023):
- Each user figures it out on their own or within their team.
- Each user is responsible for their own setup.
- Users might be able to use modelevaluation.org or ILAMB if it fits their needs. Most of the time, users write their own analysis scripts or need to know someone who can share one (a minimal sketch of such a script is given after this list).
- Ask the original developer, if they are known. In most cases, the analysis will not be available and will need to be recreated from scratch.
- Each user has to work this out on their own. Usually, the simulations and analyses of the various developments don't match and need to be redone by the user before any comparison is possible.
- Each user shares their results as they see fit and is responsible for archiving them. In practice, this means results are usually shared only between a few people and are not archived.
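
To illustrate the current ad hoc approach, below is a minimal sketch of the kind of comparison script a user might write today. The file paths and the variable name (`Qle`) are hypothetical examples, not part of any agreed workflow; the sketch only assumes two CABLE output files in NetCDF format and uses xarray.

```python
# Minimal sketch of an ad hoc comparison script between two CABLE runs.
# File paths and the variable name ("Qle") are hypothetical; adapt them to
# the actual outputs being compared.
import xarray as xr


def compare_variable(ref_path, dev_path, var_name):
    """Summarise the difference in one diagnostic variable between two runs."""
    ref = xr.open_dataset(ref_path)
    dev = xr.open_dataset(dev_path)

    diff = dev[var_name] - ref[var_name]
    print(f"{var_name}: max |diff| = {abs(diff).max().item():.3e}, "
          f"mean diff = {diff.mean().item():.3e}")

    # A max |diff| of exactly zero suggests the code change did not affect this
    # diagnostic; any non-zero difference needs scientific evaluation.
    return diff


if __name__ == "__main__":
    compare_variable("reference_run/cable_out.nc",
                     "modified_run/cable_out.nc",
                     "Qle")
```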