On 3 November 2017, the Institute of Atmospheric Physics, Chinese Academy of Sciences issued a press release on the quality checking of climate models:
A new method to evaluate overall performance of a climate model
Many climate-related studies, such as detection and attribution of historical climate change, projections of future climate and environments, and adaptation to future climate change, heavily rely on the performance of climate models. Concisely summarizing and evaluating model performance becomes increasingly important for climate model intercomparison and application, especially when more and more climate models participate in international model intercomparison projects.
Most current model evaluation metrics, e.g., root mean square error (RMSE), correlation coefficient, and standard deviation, measure model performance in simulating individual variables. However, one often needs to evaluate a model’s overall performance in simulating multiple variables. To fill this gap, an article published in Geosci. Model Dev. presents a new multivariable integrated evaluation (MVIE) method.
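To make the first-level metrics concrete, here is a minimal sketch of how the per-variable statistics named above can be computed with NumPy. The helper `single_variable_metrics` and the sample arrays are illustrative assumptions, not code from the paper:

```python
import numpy as np

def single_variable_metrics(model, obs):
    """First-level metrics for one variable: RMSE, correlation,
    RMS value, and standard deviation.

    `model` and `obs` are 1-D arrays of co-located values, e.g. a
    simulated and an observed field flattened over grid points.
    """
    rmse = np.sqrt(np.mean((model - obs) ** 2))   # root mean square error
    corr = np.corrcoef(model, obs)[0, 1]          # Pearson correlation
    rms = np.sqrt(np.mean(model ** 2))            # RMS value of the field
    std = np.std(model)                           # standard deviation
    return {"rmse": rmse, "corr": corr, "rms": rms, "std": std}

# Toy example: a model field that tracks the observations closely
obs = np.array([1.0, 2.0, 3.0, 4.0])
model = np.array([1.1, 1.9, 3.2, 3.8])
m = single_variable_metrics(model, obs)
```

Each variable (temperature, precipitation, and so on) would get its own such set of numbers; the MVIE levels discussed next are about combining them.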
“The MVIE includes three levels of statistical metrics, which can provide a comprehensive and quantitative evaluation of model performance,” says XU, the first author of the study from the Institute of Atmospheric Physics, Chinese Academy of Sciences. The first level of metrics, including the commonly used correlation coefficient, RMS value, and RMSE, measures model performance in terms of individual variables. The second level of metrics, comprising four newly developed statistical quantities, provides an integrated evaluation of model performance in simulating multiple fields. The third level of metrics, the multivariable integrated evaluation index (MIEI), further summarizes the three statistical quantities of the second level into a single index and can be used to rank the performance of various climate models. Unlike the commonly used RMSE-based metrics, the MIEI satisfies the criterion that a model performance index should vary monotonically as the model performance improves.
According to the study, each higher level of metrics is derived from, and concisely summarizes, the level below it. “Inevitably, the higher level of metrics loses detailed statistical information in contrast to the lower level of metrics,” XU notes, and therefore suggests: “To provide a more comprehensive and detailed evaluation of model performance, one can use all three levels of metrics.”
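The idea of collapsing many per-variable scores into one rankable number can be sketched as follows. This is a deliberately simple toy index, NOT the MIEI formula from the paper: it just shows the general pattern of normalizing each variable's RMSE by the observed spread and aggregating:

```python
import numpy as np

def toy_overall_index(model_fields, obs_fields):
    """Toy single-index summary over multiple variables (illustrative,
    not the MIEI): root-mean of per-variable normalized squared RMSEs.

    0 means a perfect match; larger values mean worse overall skill.
    Normalizing by np.std(obs) puts variables with different units
    (K, mm/day, ...) on a comparable footing.
    """
    scores = []
    for model, obs in zip(model_fields, obs_fields):
        nrmse = np.sqrt(np.mean((model - obs) ** 2)) / np.std(obs)
        scores.append(nrmse ** 2)
    return np.sqrt(np.mean(scores))

obs_fields = [np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, 0.0])]
shifted = [f + 0.5 for f in obs_fields]   # a uniformly biased "model"
```

Note that such an index, like the MIEI, trades detail for comparability: two models with the same index can have very different per-variable error patterns, which is why the study recommends consulting all three levels.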
It is most welcome that climate modellers are now finally taking the quality checking of their models seriously. For the period before the Little Ice Age, unfortunately, the models still lack any “skill” whatsoever. Consistent application of such quality tests should make this clear quickly.
Ancell et al. (2018) showed that some models are dominated by chaos: small changes in the initial values lead to very different results:
Seeding Chaos: The Dire Consequences of Numerical Noise in NWP Perturbation Experiments
Perturbation experiments are a common technique used to study how differences between model simulations evolve within chaotic systems. Such perturbation experiments include modifications to initial conditions (including those involved with data assimilation), boundary conditions, and model parameterizations. We have discovered, however, that any difference between model simulations produces a rapid propagation of very small changes throughout all prognostic model variables at a rate many times the speed of sound. The rapid propagation seems to be due to the model’s higher-order spatial discretization schemes, allowing the communication of numerical error across many grid points with each time step. This phenomenon is found to be unavoidable within the Weather Research and Forecasting (WRF) Model even when using techniques such as digital filtering or numerical diffusion. These small differences quickly spread across the entire model domain. While these errors initially are on the order of a millionth of a degree with respect to temperature, for example, they can grow rapidly through nonlinear chaotic processes where moist processes are occurring. Subsequent evolution can produce within a day significant changes comparable in magnitude to high-impact weather events such as regions of heavy rainfall or the existence of rotating supercells. Most importantly, these unrealistic perturbations can contaminate experimental results, giving the false impression that realistic physical processes play a role. This study characterizes the propagation and growth of this type of noise through chaos, shows examples for various perturbation strategies, and discusses the important implications for past and future studies that are likely affected by this phenomenon.
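The growth of tiny perturbations described in the abstract can be illustrated with the classic Lorenz-63 system — a standard toy analogue of atmospheric chaos, not the WRF model studied in the paper. The integrator and parameter values below are the textbook ones; the experiment mimics the paper's setup by perturbing one variable by a millionth-of-a-degree-scale amount:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (classic
    chaotic toy model with the standard parameter values)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # tiny "numerical noise" in x

# Integrate both trajectories for 30 model time units
for _ in range(3000):
    a, b = lorenz_step(a), lorenz_step(b)

diff = np.abs(a - b).max()   # the initially invisible gap has grown
```

After the loop, `diff` is many orders of magnitude larger than the initial 1e-9 offset: the two trajectories have decorrelated, which is the same qualitative mechanism — exponential error growth through nonlinear dynamics — that contaminates the WRF perturbation experiments.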
See also the discussion of the paper on WUWT.
Eos also addressed the limits of climate modelling on 26 February 2018. A group led by Kenneth Carslaw writes there:
Climate Models Are Uncertain, but We Can Do Something About It
Model simulations of many climate phenomena remain highly uncertain despite scientific advances and huge amounts of data. Scientists must do more to tackle model uncertainty head-on.
Model uncertainty is one of the biggest challenges we face in Earth system science, yet comparatively little effort is devoted to fixing it. A well-known example of persistent model uncertainty is aerosol radiative forcing of climate, for which the uncertainty range has remained essentially unchanged through all Intergovernmental Panel on Climate Change assessment reports since 1995. From the carbon cycle to ice sheets, each community will no doubt have its own examples. We argue that the huge and successful effort to develop physical understanding of the Earth system needs to be complemented by greater effort to understand and reduce model uncertainty. Without such reductions in uncertainty, the science we do will not, by itself, be sufficient to provide robust information for governments, policy makers, and the public at large.