Antonio Gallo, Gaetano La Bella
The accuracy of climate datasets is essential for understanding climate change and predicting its future course, one of the most pressing and complex environmental challenges facing the scientific community in the 21st century.
While the Earth continues to naturally vary its “breathing”, as widely documented in paleoclimatic studies, it is imperative to have climate datasets that are not only complete in their temporal sequence but also representative of the entire Earth’s surface, internally coherent, and faithful to the reality they describe.
Climate datasets play a crucial role not only in representing, monitoring and quantifying current and past climate changes, but also as key inputs for climate models used to forecast future trends. These forecasts underpin governments’ strategic decisions in planning and implementing economic policies aimed at mitigating climate change.
Internationally renowned institutions, including NASA (GISTEMP), NOAA, the Met Office Hadley Centre together with the Climatic Research Unit (HadCRUT), and Berkeley Earth, have collected, reorganized and made available global temperature datasets, constantly updated and reconstructed over time series extending back roughly two centuries. These datasets are a fundamental resource for climatologists, who use them to define the global temperature curve and identify the evolutionary trends of the climate at the planetary level.

Despite their relevance, the analysis of these climate datasets highlights discrepancies, sometimes significant, in the results obtained. These differences derive mainly from the different mathematical methodologies adopted for the homogenization and filtering of the data, which are necessary to guarantee continuity and representativeness over areas larger than the original acquisition locations.
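To make the role of these methodological choices concrete, the sketch below illustrates, in highly simplified form, the anomaly-and-gridding step that all such products share in some variant: station temperatures are converted into anomalies relative to a baseline period and then averaged over an area-weighted grid. It is a minimal, illustrative sketch, not the actual pipeline of any of the groups named above; the 1961–1990 baseline, the 5° grid and the synthetic data are assumptions made purely for demonstration.

```python
# Simplified sketch of the anomaly + gridding step common to global
# temperature products. NOT the exact pipeline of GISTEMP, NOAA,
# HadCRUT or Berkeley Earth; baseline, grid size and toy data are
# illustrative assumptions.
import numpy as np

def station_anomalies(monthly_temps, years, base_start=1961, base_end=1990):
    """Convert absolute station temperatures (n_years x 12) to anomalies
    relative to the per-month normals of the chosen baseline period."""
    base = (years >= base_start) & (years <= base_end)
    climatology = np.nanmean(monthly_temps[base], axis=0)   # monthly normals
    return monthly_temps - climatology

def global_mean(gridded_anom, lat_centers):
    """Area-weighted global mean of a lat x lon anomaly grid (cos(lat) weights)."""
    w = np.cos(np.deg2rad(lat_centers))[:, None] * np.ones_like(gridded_anom)
    w = np.where(np.isnan(gridded_anom), 0.0, w)            # skip empty cells
    return np.nansum(gridded_anom * w) / w.sum()

rng = np.random.default_rng(0)

# Toy station: 60 years of monthly temperatures with a mild warming trend.
years = np.arange(1951, 2011)
temps = 14.0 + 0.01 * (years - 1951)[:, None] + rng.normal(0.0, 0.5, (60, 12))
anom = station_anomalies(temps, years)          # anomalies vs. 1961-1990 normals
print(f"station mean anomaly, 2001-2010: {anom[-10:].mean():+.2f} °C")

# Toy grid: 5-degree cells (36 x 72), with roughly 30% of cells unobserved.
lat_centers = np.arange(-87.5, 90.0, 5.0)
grid = rng.normal(0.5, 0.3, size=(36, 72))
grid[rng.random(grid.shape) < 0.3] = np.nan
print(f"area-weighted global mean anomaly: {global_mean(grid, lat_centers):+.2f} °C")
```

Even in this toy setting, changing the baseline period, the grid resolution or the treatment of empty cells shifts the resulting global mean, and it is precisely at these steps that the published products diverge from one another.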
These discrepancies, which manifest differently in each dataset, significantly affect the reconstruction of the curves that describe the trend of global warming. This impact is reflected not only in the precision of climate forecasting models, but also in mitigation and adaptation strategies, with significant repercussions for communities that are largely unaware of the implications of such uncertainties.
This article aims to highlight how the process of homogenization of climate data, although considered essential to ensure the coherence of temperature time series, may have introduced, occasionally or inadvertently, errors in the estimation of climate parameters. Such errors may in turn have contributed to the formulation and diffusion of climate models potentially distorted with respect to reality, compromising their accuracy and reliability.
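As a concrete, deliberately simplified illustration of what a homogenization adjustment does, the sketch below detects a single step change in a candidate-minus-reference difference series and removes it. It is a didactic toy, not the pairwise algorithms actually operated by the institutes discussed here; the break test, the threshold and the synthetic station are assumptions for illustration only.

```python
# Toy relative-homogenization step: find one break in the
# candidate-minus-reference difference series and remove it as a step.
# Didactic sketch only; break test and threshold are illustrative assumptions.
import numpy as np

def adjust_single_break(candidate, reference, min_seg=10, threshold=0.3):
    """Shift the pre-break segment of `candidate` so its mean difference
    to `reference` matches the post-break segment (classic step correction)."""
    diff = candidate - reference
    n = len(diff)
    best_k, best_gap = None, 0.0
    for k in range(min_seg, n - min_seg):
        gap = abs(diff[:k].mean() - diff[k:].mean())
        if gap > best_gap:
            best_k, best_gap = k, gap
    if best_gap < threshold:                 # no adjustment if the step is small
        return candidate.copy(), None
    adjusted = candidate.copy()
    adjusted[:best_k] += diff[best_k:].mean() - diff[:best_k].mean()
    return adjusted, best_k

# Synthetic example: a station with a -0.5 °C instrument change in year 30.
rng = np.random.default_rng(1)
years = np.arange(1960, 2020)
reference = 0.015 * (years - 1960) + rng.normal(0, 0.1, years.size)
candidate = reference + rng.normal(0, 0.1, years.size)
candidate[:30] -= 0.5                        # artificial inhomogeneity
fixed, break_idx = adjust_single_break(candidate, reference)

raw_trend = np.polyfit(years, candidate, 1)[0] * 10
adj_trend = np.polyfit(years, fixed, 1)[0] * 10
print(f"break at index {break_idx}; raw trend {raw_trend:+.3f} °C/decade, "
      f"adjusted {adj_trend:+.3f} °C/decade")
```

When the detected step corresponds to a genuine instrument or site change, the correction restores the underlying trend; when the test is triggered spuriously, the very same operation shifts the trend instead, which is the kind of error in the estimation of climate parameters discussed above.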
Starting from an in-depth analysis of the specific techniques used by the institutions cited for the measurement and homogenization of climate data, this work aims to identify and describe accurately the anomalies generated by, and present in, global temperature datasets. The goal is to bring these aspects to the attention of the scientific community, proposing a moment of shared reflection that can define guidelines and scientific tools applicable to a coherent and critical review of the methodologies adopted so far.
We are fully aware of the complexity that characterizes climate systems and, even more, of the crucial importance of having homogeneous, accurate and complete datasets. Only through solid and reliable datasets will it be possible to support, with coherence and concreteness, any political and scientific decisions related to the ongoing changes.
Furthermore, this proposal will allow for the development of future projections based on renewed, unbiased models, capable of providing a more reliable and useful perspective for effectively addressing the global challenges related to climate change.
The value of this work lies precisely in the invitation to the entire scientific community to join in a constructive discussion on a topic of such great global relevance. This discussion should go beyond any economic or political interest, and should place lucid, rigorous and impartial scientific reasoning at the center of the debates.
The aim is to stimulate a critical review of the currently dominant climate models, which have been defined, as described, on datasets whose quality presents margins of uncertainty. This review would represent a necessary step toward restoring solidity and transparency to climate research, offering more adequate tools for interpreting the present and looking clearly toward the future of our planet.