Science configurations to use for CABLE evaluation

For the CABLE evaluation tool in development, a science configuration is a set of options for the CABLE namelist file. The tool supports running an ensemble of default science configurations. So far, the default configurations have been chosen fairly arbitrarily rather than for their scientific interest.

The aim of this topic is to change this default set so that we evaluate CABLE against a range of settings of scientific interest.

The tool uses a base namelist file to specify the namelist options. Each science configuration sets a subset of options that replace the base options for that configuration.
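As a sketch of how such overrides might work (the option names come from this thread, but the merge logic and variable names below are illustrative guesses, not the tool's actual data structures):

```python
# Illustrative sketch only: a configuration overrides a subset of the
# base namelist options, here modelled as flattened "group%option" keys.

# Base namelist options (hypothetical flattened representation).
base_options = {
    "cable_user%FWSOIL_SWITCH": "standard",
    "cable_user%GS_SWITCH": "medlyn",
}

def apply_config(base, overrides):
    """Return the base options with a configuration's overrides applied."""
    merged = dict(base)
    merged.update(overrides)
    return merged

# A science configuration only needs to set the options it changes;
# everything else falls back to the base namelist.
haverd = apply_config(base_options, {"cable_user%FWSOIL_SWITCH": "Haverd2013"})
```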

The current default science configurations are all the combinations of:

```
cable_user%FWSOIL_SWITCH = ['standard', 'Haverd2013']
cable_user%GS_SWITCH = ['medlyn', 'leuning']
```
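For illustration, the cross-product of those two switches gives four configurations. A minimal Python sketch (not the tool's actual code):

```python
import itertools

# The two switches and their candidate values, as listed above.
options = {
    "cable_user%FWSOIL_SWITCH": ["standard", "Haverd2013"],
    "cable_user%GS_SWITCH": ["medlyn", "leuning"],
}

# One dict of namelist overrides per science configuration:
# every combination of one value per switch.
configs = [
    dict(zip(options, values))
    for values in itertools.product(*options.values())
]

print(len(configs))  # 2 x 2 = 4 configurations
```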

I propose to focus first on evaluation for offline simulations, but I am happy to receive input on how to set up better evaluation for coupled simulations as well (AMIP, CM and ESM).

Do we want the ability to run all combinations of all options in CABLE? Would this be done at some flux sites, or globally with several met. forcing sources? If we do so, what analysis would we want from this:

  • ensure all combinations run to completion, or
  • a high-level score of scientific performance against observations, or
  • something else?

The major issue with this approach is the time needed to run the analysis and the amount of data created. It is probably not feasible to run the full set of options routinely (daily or more often) while developing a new feature. What subsets of science configurations would be sufficient, for flux sites and for global runs, to accept a new development in CABLE? It could be a different subset for flux sites and for spatial runs.

Please also consider the type of analysis you’d like to see. What information would be useful for judging the impact of a new development in CABLE? I know some developments will need tailored analysis to be evaluated fully, but there is a core set of diagnostics for CABLE that we will want to check routinely.

This list of science configurations will evolve as CABLE evolves, but let’s start by considering the current options.


My feeling is that it is probably more important to start with shorter tests (i.e. flux towers) for a wide range of configurations, so that if a proposed change is going to break some configurations the user finds out early and can identify the issue before development gets too advanced. That said, there are obvious advantages to having a very fast evaluation turnaround for development cycles, so perhaps we ultimately need:

  1. a minimal configuration suite (2-4 configs?) for quick development testing that allows developers to get fast feedback (towers initially, but we might want to extend this to global runs),

  2. instructions that make it clear to users that this minimal set is not enough for ticket submission but is intended for development, and

  3. a comprehensive configuration suite, which is used for ticket submission

In terms of configurations, I suspect @aukkola will have thoughts, but I gather we initially want to think about science choices like those you list here, as well as CASA C, CN and CNP configurations, and groundwater once it’s integrated.

With that configuration breadth under control, I think we could integrate global runs as an additional test in both the minimal and comprehensive configuration suites.

A separate conversation is how we want to collate analysis results to aid decision making - what does the ‘trunk committee’ want to know about a proposed change in terms of the tradeoff between performance and functionality, for example?

Once that’s out of the way ( :stuck_out_tongue_winking_eye:) I guess we need to think about all of the systems that CABLE plugs into as part of the testing somehow - ACCESS configurations, WRF, LIS, etc.


Some quick feedback on the analyses we are currently using for CABLE:

  • add a summary table like the ILAMB dashboard.
  • more variables: see whether we can cover the same variables as ILAMB: radiation, hydrology and carbon.
  • include comparisons between science configurations, not just between source codes as we have now. This could be done via summary tables.
  • better legends and titles for the plots, to make clearer what they represent.
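On the summary-table idea, here is a minimal sketch of what a per-configuration dashboard might look like. The configuration names and metric values are invented purely for illustration; they are not real evaluation results:

```python
# Hypothetical scores (e.g. a normalised skill score) per configuration
# and per ILAMB-style variable group; all numbers are made up.
scores = {
    "standard+medlyn":    {"radiation": 0.82, "hydrology": 0.71, "carbon": 0.64},
    "Haverd2013+leuning": {"radiation": 0.80, "hydrology": 0.75, "carbon": 0.66},
}

variables = ["radiation", "hydrology", "carbon"]

# Fixed-width text table: one row per science configuration.
header = "configuration".ljust(20) + "".join(v.rjust(12) for v in variables)
rows = [
    name.ljust(20) + "".join(f"{scores[name][v]:12.2f}" for v in variables)
    for name in scores
]
table = "\n".join([header] + rows)
print(table)
```

A table like this makes the configuration-to-configuration comparison explicit, rather than only comparing source code versions.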