Dear ACCESS community,
In several recent meetings, and during last week’s Workshop, an important topic has come up repeatedly: how can data users and those involved in model evaluation provide meaningful feedback on issues they spot in model outputs? Many of these users are not developers, rarely run the models themselves, and typically have limited knowledge of model operations. They are, however, well placed to identify bugs, biases, drift, and other performance issues when comparing model outputs with observations or with other models.
I believe it would be useful to create a dedicated space where the community can assess the strengths and limitations of the ACCESS models. Having a centralised feedback loop would allow us to collect input on model performance that goes beyond individual research projects and contribute to an ongoing assessment. Many potential users are also unaware of the data products available, and providing a platform for model evaluation feedback would bridge this gap as well.
To facilitate this, I propose setting up new Model Evaluation and ACCESS-Data categories on this forum. These categories could serve a role similar to bug trackers in software development, where users report problematic aspects of model outputs. This would allow us to identify and address specific issues affecting multiple domains such as the ocean, atmosphere, and land models, and improve our overall understanding of model behaviour.
I believe that ACCESS-Hive is the best platform for hosting such a tool. It’s already a trusted hub for the community, and expanding it to include a model feedback system would provide a structured, familiar environment for users to report and discuss issues. This would also enhance collaboration across the board, enabling more seamless interaction between data users, model evaluators, and developers.
At present, I don’t think this level of detailed feedback can be adequately captured in the existing Working Groups, as they focus more on ACCESS configurations in general. Issues that span multiple domains could slip through the cracks unless we have a dedicated space to highlight them. Such a forum would also help foster more collaborative engagement, encouraging the community to contribute to refining the models over time.
I welcome your thoughts on this idea and look forward to hearing how we can improve the process of model evaluation and feedback for everyone.