Continuous Testing of COSIMA Recipes

Hi All,

At the last COSIMA Workshop, people mentioned the need for continuous testing of the COSIMA Recipes.
This is definitely something where ACCESS-NRI can help.

I am starting this thread to gather the requirements for testing.

There are different levels of testing possible, from basic to more advanced; all are achievable, but with different technical requirements:

  1. Check that all the recipes can be run.
  2. Check the outputs. This can be done by checking the values of selected metrics, the plots (images) produced, or other outputs (levels 1 and 2 are sketched just after this list).
  3. Check that the recipes scale. This is more involved and can be specific to each recipe. It also requires compute resources that will have to be managed by ACCESS-NRI.
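
For levels 1 and 2, here is a minimal sketch of what such a test suite might look like, assuming the recipes are Jupyter notebooks run with papermill and that each recipe writes a small `metrics.json` file of key values (that file is a convention we would have to agree on, and the directory names below are made up):

```python
# Minimal sketch of testing levels 1 and 2 with pytest + papermill.
# Assumptions: recipes are notebooks under DocumentedExamples/, and each
# recipe writes a metrics.json of key values (a hypothetical convention).
import json
from pathlib import Path

import papermill as pm
import pytest

RECIPE_DIR = Path("DocumentedExamples")   # hypothetical recipe location
REFERENCE_DIR = Path("test/references")   # hypothetical stored reference metrics
RECIPES = sorted(RECIPE_DIR.glob("*.ipynb"))


@pytest.mark.parametrize("recipe", RECIPES, ids=lambda p: p.stem)
def test_recipe_runs(recipe, tmp_path):
    """Level 1: the recipe executes top to bottom without raising."""
    pm.execute_notebook(str(recipe), str(tmp_path / recipe.name))


@pytest.mark.parametrize("recipe", RECIPES, ids=lambda p: p.stem)
def test_recipe_metrics(recipe, tmp_path):
    """Level 2: key values written by the recipe match stored references."""
    pm.execute_notebook(str(recipe), str(tmp_path / recipe.name), cwd=str(tmp_path))
    produced = json.loads((tmp_path / "metrics.json").read_text())
    expected = json.loads((REFERENCE_DIR / f"{recipe.stem}.json").read_text())
    for key, value in expected.items():
        assert produced[key] == pytest.approx(value, rel=1e-6)
```

Image-based comparison of the plots is also possible (e.g. with matplotlib's image-comparison utilities), but checking a few agreed numeric metrics per recipe is usually less fragile.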

Anything compute intensive will have to run on NCI infrastructure, which throws up some issues with security. Nothing that can’t be dealt with, but it isn’t necessarily trivial.

It is worth considering what checks could be done on GitHub runners:

  • Linting code
  • Updating kernel information to the latest current version (it is often the case that the recipes are not run for a while and the conda/analysis3 version they used is no longer available; see the sketch after this list)
  • Code style checks
  • ???
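
As a rough sketch of the kernel check, something like this could run on a GitHub runner using nbformat (the kernel name and directory below are assumptions):

```python
# Rough sketch: flag notebooks whose kernelspec is not the current
# conda/analysis3 kernel. The kernel name and directory are assumptions.
from pathlib import Path

import nbformat

CURRENT_KERNEL = "python3"  # hypothetical name of the latest analysis3 kernel


def outdated_kernels(recipe_dir="DocumentedExamples"):
    """Return (path, kernel) pairs for notebooks with a stale kernelspec."""
    stale = []
    for path in sorted(Path(recipe_dir).glob("**/*.ipynb")):
        nb = nbformat.read(path, as_version=4)
        kernel = nb.metadata.get("kernelspec", {}).get("name", "")
        if kernel != CURRENT_KERNEL:
            stale.append((path, kernel))
    return stale


if __name__ == "__main__":
    for path, kernel in outdated_kernels():
        print(f"{path}: kernelspec '{kernel}' should be updated to '{CURRENT_KERNEL}'")
```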

Another thing that was mentioned was the possibility of a more formal review process for new recipes. This could be implemented simply by requiring reviewer approval before PRs can be merged (if that isn't already the case), and I expect ACCESS-NRI could help out with reviews. This has the added benefit that reviewers can consider things like scalability during their review.

We do currently require review before merging. However, we've struggled to get these reviews happening in a timely fashion this year (until @navidcy's recent awesome review contributions). It would be great if ACCESS-NRI could help, @dougiesquire. I think we still need science reviewers, though, to make sure the science is correct and the documentation is sufficient - maybe one COSIMA science reviewer and one ACCESS-NRI reviewer?


Yes, but I also see in the Google docs that not everyone is on board with GitHub.
Some open journals like JOSS have a submission process backed by a review process on GitHub. In my former role, I implemented something similar for people to submit their "recipes".
It is still in a demo state, but I wonder if it would help collect the scripts that people have in their drawers:

A system like this would allow us to:

  • Gather metadata (useful for the community, since it can be used to generate reports on research activities; see the sketch after this list)
  • Automatically generate DOIs
  • Enforce versioning
  • Provide a community space where people are rewarded for submitting recipes or doing reviews.
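
Purely as an illustration, the per-recipe metadata such a system could collect might look like this (every field name here is an assumption, not an existing schema):

```python
# Illustrative sketch of per-recipe submission metadata; all field names
# are hypothetical, not an existing schema.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class RecipeSubmission:
    title: str
    authors: List[str]
    version: str                       # enforced versioning, e.g. "1.2.0"
    doi: Optional[str] = None          # minted automatically on acceptance
    keywords: List[str] = field(default_factory=list)
    model_configurations: List[str] = field(default_factory=list)  # e.g. "ACCESS-OM2-01"
    reviewers: List[str] = field(default_factory=list)
```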

That’s definitely something doable.


One review is already hard enough to get; if we require two, wouldn't that create a bottleneck?

On the other hand, it's always useful to have a second pair of eyes that can catch code that is scientifically correct but computationally inefficient (e.g., using for loops instead of xarray's native functionality).
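
For example, a toy illustration of the kind of thing a reviewer might catch (the dataset and variable names are made up):

```python
# Toy illustration: a Python loop over time steps vs xarray's native reduction.
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"sst": (("time", "y", "x"), np.random.rand(120, 100, 100))},
    coords={"time": np.arange(120)},
)

# Scientifically correct, but loops over time steps in Python
mean_loop = xr.concat(
    [ds["sst"].isel(time=t).mean() for t in range(ds.sizes["time"])], dim="time"
)

# Same result using xarray's native (vectorised, dask-friendly) reduction
mean_native = ds["sst"].mean(dim=("y", "x"))

xr.testing.assert_allclose(mean_loop, mean_native)
```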

We (ACCESS-NRI, and the Model Evaluation team in particular) can help with checking the code, testing, and working on efficiency.
