Analysis on ME.org keeps failing 42 site test

The first time I tried this was from home, and the download from Gadi and upload to ME.org were problematic. The analysis then threw this error at me. I repeated the download/upload from behind CSIRO’s firewall, and it all went through without stumbling, but the analysis threw the same error. Can someone who can read the log make sense of what caused this, fix it, or suggest another approach? I will try a smaller analysis with fewer sites to see how that goes.

Seemingly the same error: an out-of-bounds error. Which is even stranger, as python, ferret etc. seem fine for a handful of fields/sites - i.e. there could be one field or site somewhere that is wrong.
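
One way to hunt for a single bad field or site before re-uploading is to scan every per-site NetCDF output for non-finite or fill-value-sized numbers. Below is a minimal sketch, assuming the outputs sit in an `outputs/` directory as one NetCDF file per site (the directory name, file pattern and threshold are assumptions - adjust them to the actual layout):

```python
# Sketch: flag suspicious values in per-site NetCDF model outputs.
# Assumes files matching outputs/*.nc; change the glob pattern as needed.
import glob
import numpy as np
import xarray as xr

THRESHOLD = 1e20  # values this large are usually fill values leaking through

for path in sorted(glob.glob("outputs/*.nc")):
    ds = xr.open_dataset(path, decode_times=False)
    for name, var in ds.data_vars.items():
        vals = var.values
        if not np.issubdtype(vals.dtype, np.number):
            continue  # skip non-numeric variables
        n_bad = int(np.count_nonzero(~np.isfinite(vals)) +
                    np.count_nonzero(np.abs(vals) > THRESHOLD))
        if n_bad:
            print(f"{path}: {name} has {n_bad} non-finite/out-of-range values")
    ds.close()
```

Any file/variable it prints would be a candidate for the one output that is tripping the analysis.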

Hi Jhan, do you have the link to the log files for the failed analysis on modelevaluation.org?


https://modelevaluation.org/modelOutput/display/iPzJpFvq5H6uqK5ao

Is this what you mean?

Hi Jhan.

A few things aren’t quite right with the way this was set up:

  • no benchmarks were specified - that’s why it failed
  • you submitted your 42 site simulations to the 5 site test

I added benchmarks and it ran fine for the 5 site test: ModelEvaluation.org

Gab

Thanks Gab,

It looks like all the output files are still there, in which case how can I do the 42 site test? By setting the benchmark do you mean the experiment?

If you look at the profile page for this Model Output (i.e. the link above), you see that in the ‘Details’ box, you’ve specified ‘Five site test’ … you’ll need to change that to ‘Forty two site test’.
Next, if you scroll down past the list of files, you’ll see a ‘Benchmarks’ box - this is where you specify benchmarks. Benchmarks in this case are other Model Outputs in meorg that have been submitted to the same Experiment already … in this case I think I previously uploaded the empirical models - 1lin, 3km27, LSTM - that were used in the PLUMBER2 paper, for example. You could alternatively include other LSMs - like JULES, CLM, ORCHIDEE - potentially…

OK, Thanks - it seems to be running at least :slight_smile:

Once again, you have no benchmarks specified… they need to be reset, since you changed experiments…

I think for the first one I used those same 3 benchmarks. For the next couple I went back to “Edit”, selected something from the pulldown menu and hit “Save”. But it looks like I should’ve hit “Add benchmark”. I didn’t hit that because I interpreted it as adding a 2nd benchmark. At least on this 3rd attempt I seem to have a different error.

The problem is detailed in the log file:


I changed the benchmarks to be other simple Model Outputs for the 42 sites (rather than a different multi-branch, multi-configuration CABLE output), and am rerunning…

Worked fine :slight_smile:

Excellent. Thanks Gab :slight_smile: Where can I find out about the test_configs?

Not entirely sure what you’re asking here Jhan… do you mean which plots to look at?