The added value of the multi-system spread information for ocean heat content and steric sea level investigations in the CMEMS GREP ensemble reanalysis product

Published in: Climate Dynamics

Abstract

Since 2016, the Copernicus Marine Environment Monitoring Service (CMEMS) has produced and disseminated an ensemble of four global ocean reanalyses produced at eddy-permitting resolution for the period from 1993 to present, called GREP (Global ocean Reanalysis Ensemble Product). This dataset offers the possibility to investigate the potential benefits of a multi-system approach for ocean reanalyses, since the four reanalyses span by construction the same spatial and temporal scales. In particular, our investigations focus on the added value of the information on the ensemble spread, implicitly contained in the GREP ensemble, for temperature, salinity, and steric sea level studies. It is shown that, in spite of the small ensemble size, the spread is capable of estimating the flow-dependent uncertainty in the ensemble mean, although proper re-scaling is needed to achieve reliability. The GREP members also exhibit larger consistency (smaller spread) than their predecessors, suggesting advancement of the reanalysis vintage with time. The uncertainty information is crucial for monitoring the climate of the ocean, even at the regional level, as GREP shows consistency with CMEMS high-resolution regional products and complements the regional estimates with uncertainty estimates. Further applications of the spread include monitoring the impact of changes in ocean observing networks and the use of multi-model ensemble anomalies in hybrid ensemble-variational retrospective analysis systems, which outperform static covariances and represent a promising application of GREP. Overall, the spread information of the GREP product is found to contribute significantly to the crucial requirement of uncertainty estimates for climatic datasets.


Acknowledgements

Data from the reanalyses presented in this work are available from the Copernicus Marine Environment Monitoring Service (CMEMS, http://marine.copernicus.eu/). Part of this work was supported by the EOS COST Action (“Evaluation of Ocean Synthesis”, http://eos-cost.eu/) through its Short Term Scientific Missions program. The full C-GLORS dataset is available at http://c-glors.cmcc.it. This work has received funding from the Copernicus Marine Environment Monitoring Service (CMEMS). The EN4 subsurface ocean temperature and salinity data were quality-controlled and distributed by the U.K. Met Office. The authors declare no conflicts of interest. We are grateful to four anonymous reviewers for their help in improving the quality of the manuscript.

Author information


Corresponding author

Correspondence to Andrea Storto.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (DOCX 1409 KB)

Appendix 1: Assessment of GREP

The GREP product has been extensively validated and the main outcomes are included in the CMEMS Quality Information Document (QUID), available at http://marine.copernicus.eu/documents/QUID/CMEMS-GLO-QUID-001-026.pdf. Here, we focus only on skill-score statistics of the monthly mean fields of temperature and salinity, which constitute the dataset used throughout this work. We base our assessment on the so-called GODAE OceanView CLASS4 metrics (Ryan et al. 2015), i.e. observation-based statistics that estimate the reanalysis accuracy in observation space, inherited from the MERSEA project (Crosnier and Le Provost 2007) and then adopted by the GODAE near-real-time inter-comparison exercises (Hernandez et al. 2009). In accordance with the availability of the disseminated data, we use monthly mean data for extracting the model equivalents from the GREP reanalyses. For the sake of comparison, we introduce three observation-only products that do not make use of any dynamical model (also referred to as objective analyses, OA, with their ensemble mean, OA-EM) and the World Ocean Atlas 2013 (WOA13) monthly climatology (Locarnini et al. 2013; Zweng et al. 2013), computed over the period 1955–2012. In particular, regarding the OAs, we calculated CLASS4 metrics for the Met Office EN4 objective analyses (Good et al. 2013), the Ifremer CORA objective analyses (Cabanes et al. 2013) and the CLS ARMOR objective analyses (Guinehut et al. 2012), within the GREP period. While a large number of validation metrics exists (see e.g. Hernandez et al. 2015 for a detailed discussion), the goal of this exercise is to provide a basic assessment of the performance of the GREP-EM temperature and salinity through commonly adopted observation-based skill-score metrics.
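The CLASS4 approach above reduces to computing root-mean-square misfits between observations and colocated model equivalents. A minimal sketch, using hypothetical arrays (`obs` for profile observations and `model_equiv` for monthly mean reanalysis values interpolated to the observation locations; the names and values are illustrative, not from the paper):

```python
import numpy as np

def class4_rmse(obs, model_equiv):
    """RMSE of model-minus-observation misfits in observation space."""
    obs = np.asarray(obs, dtype=float)
    model_equiv = np.asarray(model_equiv, dtype=float)
    misfit = model_equiv - obs
    return float(np.sqrt(np.mean(misfit ** 2)))

# Example: temperature observations vs. colocated model equivalents (degC)
obs = [10.2, 11.0, 9.8, 10.5]
model = [10.0, 11.3, 9.9, 10.4]
print(class4_rmse(obs, model))
```

In practice the misfits would be binned by depth layer (e.g. 0–700 m, 700–2000 m), region, and month before averaging, but the statistic itself is this simple quadratic mean of the misfits in observation space.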

The Met Office EN4 (v4.2.0) observational data are used for this evaluation. EN4 includes profiles from MBTs and XBTs, Argo floats, CTDs, moorings, and sea-mammal-borne sensors. Only observations flagged as “good” are used. Note that, strictly speaking, these data are not independent, as they are assimilated by three out of the four reanalyses and one objective analysis. Observational datasets may indeed differ notably because of different data sub-sampling, quality-control procedures, and correction procedures (e.g. XBT fall-rate corrections), especially before the deployment of Argo floats. Therefore, the assessment presented hereafter has obvious limitations and serves only to verify how close the reanalysis ensemble mean is to a reference dataset, rather than to quantify the accuracy of either the reanalyses or the objective analyses.

First, we validate the use of monthly means for the observation misfit statistics, which might seem inappropriate given that the model fields actually used as background in the data assimilation systems have a higher than monthly temporal resolution. For simplicity, we focus on one product only, cglo, to verify the impact of sub-monthly variability on the skill scores during an observation-rich period. Figure S5 compares the cglo RMSE timeseries for two periods (1993–1998 and 2010–2015), computed from either daily or monthly mean fields of temperature and salinity. For both variables, differences are small, mainly because the spatial representativeness error, rather than the temporal representativeness error, dominates the RMSE budget. During the early period 1993–1998, differences are always less than 5% and 10% for salinity and temperature, respectively. During the recent period, the relative difference is of the order of 2% for salinity, with occasional peaks up to 8% in 2013–2014. Differences in temperature are slightly larger, with an average value of 11% and a peak of 22% in 2014. The temporal behavior of the curves is almost identical, which indicates, together with the small differences, that the use of monthly means does not compromise the statistics. Note also that relative differences are greater during the latest years, in spite of the smaller absolute RMSE values linked to the much denser observational sampling in both time and space.

In Fig. 10, we show yearly values (1993–2015) of the global RMSE statistics for temperature and salinity in the top 700 m and in the 700–2000 m layer, for the GREP-EM, the OA-EM, and the WOA13 monthly climatology, along with the monthly number of observations (green bars). Dashed lines represent, for GREP and OA separately, the ensemble average of the RMSE timeseries from the individual members. The RMSE of the ensemble mean is always smaller than the ensemble mean of the RMSEs, especially during the first decade and in the deep layer. This confirms the effectiveness of the ensemble approach within data-sparse regions or periods.
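That the RMSE of the ensemble mean cannot exceed the average of the members' RMSEs follows from the convexity of the quadratic norm (averaging the members cancels part of their independent errors). An illustrative check with synthetic data (not the paper's code; the observations and four "members" below are randomly generated stand-ins):

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

rng = np.random.default_rng(0)
obs = rng.normal(size=500)  # synthetic "observations"
# Four synthetic "reanalyses": truth plus independent errors
members = [obs + rng.normal(scale=0.5, size=500) for _ in range(4)]

ens_mean = np.mean(members, axis=0)
rmse_of_mean = rmse(ens_mean, obs)                       # RMSE of the ensemble mean
mean_of_rmse = np.mean([rmse(m, obs) for m in members])  # ensemble average of RMSEs

print(rmse_of_mean <= mean_of_rmse)  # always True, by convexity
```

The gap between the two quantities is largest when member errors are weakly correlated, which is consistent with the larger benefit seen in data-sparse decades, where the members diverge most.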

Fig. 10

RMSE of in-situ observations against monthly mean fields from the GREP-EM (black), the objective-analysis ensemble mean (OA-EM), which is the average of the CORA, EN4 and ARMOR objective analyses, and the WOA13 monthly climatology, for temperature and salinity in the layers 0–700 m and 700–2000 m. The dashed lines report, for both GREP and OA-EM, the ensemble average of the RMSE timeseries from the individual members (as opposed to the RMSE of the ensemble mean). Values are yearly. The green bars (right-hand axis) show the total number of monthly observations. Reanalysis, objective analysis and climatology data are monthly means, while the validating in-situ profiles are extracted from the Met Office EN4 dataset

To clearly identify the behavior of the skill scores, Fig. 11 shows the differences between the average RMSE of the individual ORAs (AVE) and the RMSE of the ensemble mean (black), between the RMSE of OA-EM and the RMSE of GREP-EM (blue), and between the RMSE of WOA13 and the RMSE of GREP-EM (red). Positive values indicate that GREP-EM outperforms the other timeseries.

Fig. 11

Differences between the average of the RMSE timeseries from individual reanalyses and the RMSE of GREP (AVE − GREP-EM), between the RMSEs of OA-EM and GREP-EM (OA-EM − GREP-EM) and between the RMSEs of WOA13 and GREP-EM (WOA13 − GREP-EM). Values are yearly. Reanalysis, objective analysis and climatology data are monthly means, while the validating in-situ profiles are extracted from the Met Office EN4 dataset

In the upper ocean, salinity skill scores are characterized by OA-EM errors smaller than those of GREP-EM until around 2010. This suggests that the lack of in-situ observations is crucial for the salinity RMSE. For temperature, the WOA13 RMSE is significantly larger than those of GREP-EM and OA-EM, which exhibit very similar behavior within the top 700 m. In the deeper ocean, from 700 to 2000 m, OA-EM accuracy is higher than that of GREP-EM for the first simulated years, until 2000; thereafter, GREP-EM outperforms the ensemble of objective analyses. Figures S6 and S7 show analyses similar to Fig. 10 but for three latitudinal bands: the Southern Extra-Tropics (60°S–20°S), the Tropics (20°S–20°N) and the Northern Extra-Tropics (20°N–60°N). The figures suggest that for temperature, especially in the Tropics, the reanalyses significantly outperform the OA-EM ensemble. Salinity skill scores of GREP-EM in the Southern Extra-Tropics, particularly in the upper ocean, are worse than those of OA-EM and the WOA13 climatology during the first decade of the reanalysis.

Overall, within data-sparse periods, OA-EM generally shows the smallest errors, likely due to the use of a climatological background within the objective analyses, while GREP-EM performs at least as well as the objective analyses after 2000, when the deployment of Argo floats started. For the deep ocean, the added value of Argo floats after 2006, which are able to constrain the heat-content evolution, results in better scores for GREP-EM than for OA-EM. Note, however, that different time periods imply different spatial sampling of the observations (notably, before the Argo deployment the RMSE is representative mostly of the skill in the Northern Hemisphere) and should be interpreted accordingly.

About this article


Cite this article

Storto, A., Masina, S., Simoncelli, S. et al. The added value of the multi-system spread information for ocean heat content and steric sea level investigations in the CMEMS GREP ensemble reanalysis product. Clim Dyn 53, 287–312 (2019). https://doi.org/10.1007/s00382-018-4585-5
