I resonate with @Thomas-Moore's comment. But cosima-recipes is not a Python package but rather a collection of notebooks. These notebooks use Python packages which, if tested properly, should catch changes in behaviour; and if a change is intentional, the package should issue a deprecation warning (e.g., "method `your_fav_method()` will behave differently from version X.Y.Z") or something along those lines.
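As a rough sketch of what I mean (the function name and version are placeholders from the example above), a package can emit such a warning with Python's built-in `warnings` module:

```python
import warnings

def your_fav_method(dataarray):
    # Warn users now; the actual change of behaviour lands in version X.Y.Z.
    warnings.warn(
        "your_fav_method() will behave differently from version X.Y.Z; "
        "see the changelog for details.",
        FutureWarning,
        stacklevel=2,
    )
    return dataarray  # current behaviour, unchanged for now
```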
Regression tests (as @anton suggests) are a way to catch issues like that, and we could discuss implementing some that run automatically once a week on the HPC.
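For example, a minimal sketch of such a regression test with `pytest` (the file names and the diagnostic are hypothetical):

```python
import numpy as np
import xarray as xr

def test_temperature_time_mean_regression():
    # Recompute a diagnostic from model output and compare it against a
    # reference computed earlier with a known-good software environment;
    # if an upstream package silently changes behaviour, this test fails.
    current = xr.open_dataset("ocean_month.nc")["temp"].mean("time")
    reference = xr.open_dataarray("temp_time_mean_reference.nc")
    np.testing.assert_allclose(current.values, reference.values, rtol=1e-12)
```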
I also think it's a very good idea to try to convey to people the notion of "testing the boundaries" of a method/function they write (comment by @willrhobbs). This is an extremely useful concept. I hadn't really thought about it in a formal way until @willrhobbs discussed it. I don't think we should enforce this in the notebooks, since that would raise the barrier for newcomers contributing to the recipes even higher. But it's such a useful concept that one should at least have it in the back of their mind.
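To make the idea concrete, here is a toy sketch (the function and its edge case are made up) of what "testing the boundaries" could look like:

```python
import pytest

def relative_anomaly(x, reference):
    """Return (x - reference) / reference."""
    return (x - reference) / reference

def test_relative_anomaly_boundaries():
    # interior of the domain: the obvious case works
    assert relative_anomaly(2.0, 1.0) == 1.0
    # boundary of the domain: what happens when reference == 0?
    with pytest.raises(ZeroDivisionError):
        relative_anomaly(1.0, 0.0)
```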
I very often see code that is not general enough and whose limitations are neither documented nor asserted. For example, someone writes a method/function that works only for a very particular case and will fail if things are slightly different. Then someone else, who naively sees that such a method/function exists, uses it for their case and gets nonsense results. For example,
```python
def compute_zonal_mean(dataarray):
    return dataarray.mean('xt_ocean')
```
might suggest that this function computes the zonal mean. But in reality it does so only for data arrays that have `xt_ocean` as a coordinate, and it also assumes that `xt_ocean` runs along constant latitude values. This function will silently give wrong results if used in the Arctic (where the tripolar grid is curvilinear, so `xt_ocean` no longer follows latitude circles) and will fail outright with MOM6 or MITgcm output.
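A sketch of a safer version that at least documents and asserts its limitations (the docstring wording assumes a MOM5-style grid, as in the example above):

```python
def compute_zonal_mean(dataarray):
    """Mean along the 'xt_ocean' dimension.

    Warning: this is a true zonal mean only where 'xt_ocean' runs along
    constant latitude; do not use it in the (curvilinear) tripolar region.
    """
    if "xt_ocean" not in dataarray.dims:
        raise ValueError(
            "expected a dimension named 'xt_ocean'; "
            f"got dimensions {tuple(dataarray.dims)}"
        )
    return dataarray.mean("xt_ocean")
```

This way a MOM6 or MITgcm user gets an informative error instead of nonsense results.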
I'd like to touch on these issues at the July 1st Workshop that we are organising.