Experimental Precipitation Subseasonal Forecast Historical Skill
Subseasonal skill scores based on the historical performance of each model and their
multi-model ensemble.
The different skill scores are mapped by calendar month. The forecast lead times
are combined over weeks 2-3 and weeks 3-4 from the forecast start time (i.e.,
14-day periods covering, respectively, days 8 to 21 and days 15 to 28 after the forecast
is issued). Forecast skill scores combine start times by calendar month and across
the years 1999 to 2010.
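The sketch below illustrates, under assumed weekly reforecast start dates (the actual S2S start-date calendars differ by model), how the two lead-time windows map onto days after the start and how starts are grouped by calendar month; it is not the Maproom's own code.

```python
# Minimal sketch (hypothetical start dates) of the two 14-day lead windows
# and the grouping of start dates by calendar month over 1999-2010.
import pandas as pd

# Hypothetical weekly reforecast start dates over the verification period.
starts = pd.date_range("1999-01-01", "2010-12-31", freq="7D")

# Week 2-3 covers days 8-21 after the start; week 3-4 covers days 15-28.
lead_windows = {"week 2-3": (8, 21), "week 3-4": (15, 28)}

for name, (first_day, last_day) in lead_windows.items():
    # Verification period associated with each start date for this window.
    periods = [(s + pd.Timedelta(days=first_day), s + pd.Timedelta(days=last_day))
               for s in starts]
    print(name, "first verification window:", periods[0])

# Skill is then computed separately for the starts falling in each calendar month.
starts_by_month = starts.to_series().groupby(starts.month)
print({month: len(group) for month, group in starts_by_month})
```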
The probabilistic forecasts shown here are obtained from the statistical calibration
of three models from the Subseasonal to Seasonal (S2S) Prediction Project database
(Vitart et al., 2017), which are combined with equal weights to form multi-model ensemble
(MME) precipitation tercile probability forecasts. Individual model forecasts are calibrated
separately for each grid point, start time, and lead time using extended logistic regression (ELR;
Vigaud et al., 2017) based on the historical performance of each model. They thus provide
reliable intra-seasonal climate information with regard to a wide range of climate
risks of concern to decision-making communities, for which subseasonal forecasts
are particularly well suited.
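The following sketch shows the two steps just described, assuming hypothetical arrays of ensemble-mean reforecasts and matching observations for a single grid point, start month, and lead window, and using scikit-learn's logistic regression purely as an illustration; it is not the Maproom's implementation.

```python
# Minimal ELR + equal-weight MME sketch; inputs are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def elr_tercile_probs(ens_mean, obs, ens_mean_new):
    """Extended logistic regression in the spirit of Vigaud et al. (2017):
    one logistic regression with the tercile threshold as an extra predictor,
    fit jointly to non-exceedance indicators at the two tercile breaks."""
    terciles = np.quantile(obs, [1 / 3, 2 / 3])
    X, y = [], []
    for q in terciles:
        X.append(np.column_stack([ens_mean, np.full_like(ens_mean, q)]))
        y.append((obs <= q).astype(int))
    model = LogisticRegression().fit(np.vstack(X), np.concatenate(y))
    # Cumulative probabilities P(obs <= lower tercile) and P(obs <= upper tercile).
    p_low = model.predict_proba(np.column_stack(
        [ens_mean_new, np.full_like(ens_mean_new, terciles[0])]))[:, 1]
    p_upp = model.predict_proba(np.column_stack(
        [ens_mean_new, np.full_like(ens_mean_new, terciles[1])]))[:, 1]
    # Below-normal, near-normal, above-normal probabilities.
    return np.column_stack([p_low, p_upp - p_low, 1.0 - p_upp])

# Equal-weight MME: average the calibrated tercile probabilities of the
# (here hypothetical) three individual models.
# mme_probs = np.mean([probs_model1, probs_model2, probs_model3], axis=0)
```

Because the same regression is used for both thresholds, the cumulative probabilities cannot cross, so the three category probabilities stay non-negative and sum to one.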
These skill score diagnostic maps give a sense of where and when (i.e., for forecasts
issued in which months of the year and at which weekly lead times) subseasonal forecasts
may have the potential to provide useful information.
The actual forecasts whose historical performance these skill scores measure
can be found in the Experimental Precipitation Subseasonal Forecast Maproom.
Skill score definitions (a minimal computational sketch of these scores follows the list):
- RPSS: Ranked Probability Skill Scores (RPSS; Epstein, 1969; Murphy, 1969, 1971; Weigel
et al., 2007) quantify the extent to which the calibrated predictions improve upon
climatological frequencies. RPSS values tend to be small, even for skillful forecasts;
the approximate relationship between RPSS and correlation is such that an RPSS value
of 0.1 corresponds to a correlation of about 0.44 (Tippett et al., 2010).
- Spearman rank correlation: the Spearman anomaly correlation coefficient is the rank correlation
between MME forecast and observed anomalies, which is particularly appropriate for
verifying probabilistic forecasts.
- ACC: the anomaly correlation coefficient is the correlation between MME forecast and
observed anomalies.
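The sketch below illustrates the three scores for a single grid point, assuming hypothetical inputs: tercile probability forecasts of shape (n, 3), observed tercile categories coded 0/1/2, and forecast and observed anomaly series for the correlation scores. It is an illustration of the score definitions, not the Maproom's verification code.

```python
# Minimal scoring sketch with hypothetical inputs.
import numpy as np
from scipy.stats import spearmanr, pearsonr

def rpss(fcst_probs, obs_cat):
    """Ranked Probability Skill Score against the climatological
    (1/3, 1/3, 1/3) tercile forecast."""
    n, k = fcst_probs.shape
    obs_onehot = np.eye(k)[obs_cat]                     # observed category as 0/1
    cum_f = np.cumsum(fcst_probs, axis=1)
    cum_o = np.cumsum(obs_onehot, axis=1)
    rps_fcst = np.mean(np.sum((cum_f - cum_o) ** 2, axis=1))
    cum_c = np.cumsum(np.full((n, k), 1.0 / k), axis=1)
    rps_clim = np.mean(np.sum((cum_c - cum_o) ** 2, axis=1))
    return 1.0 - rps_fcst / rps_clim

# Correlation scores between a deterministic summary of the MME forecast
# (e.g., its ensemble-mean anomaly) and observed anomalies:
# spearman_acc = spearmanr(fcst_anom, obs_anom).correlation  # rank correlation
# acc = pearsonr(fcst_anom, obs_anom)[0]                     # anomaly correlation
```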
References:
- Epstein, E.S., 1969: A Scoring System for Probability Forecasts of Ranked Categories. J. Appl. Meteor., 8, 985–987
- Murphy, A.H., 1969: On the “Ranked Probability Score”. J. Appl. Meteor., 8, 988–989
- Murphy, A.H., 1971: A Note on the Ranked Probability Score. J. Appl. Meteor., 10, 155–156
- Tippett, M.K., A.G. Barnston, and T. DelSole, 2010: Comments on “Finite Samples and Uncertainty Estimates for Skill Measures for Seasonal
Prediction”. Mon. Wea. Rev., 138, 1487–1493
- Vitart, F., C. Ardilouze, A. Bonet, A. Brookshaw, M. Chen, C. Codorean, M. Déqué,
L. Ferranti, E. Fucile, M. Fuentes, H. Hendon, J. Hodgson, H. Kang, A. Kumar, H. Lin,
G. Liu, X. Liu, P. Malguzzi, I. Mallas, M. Manoussakis, D. Mastrangelo, C. MacLachlan,
P. McLean, A. Minami, R. Mladek, T. Nakazawa, S. Najm, Y. Nie, M. Rixen, A.W. Robertson,
P. Ruti, C. Sun, Y. Takaya, M. Tolstykh, F. Venuti, D. Waliser, S. Woolnough, T. Wu,
D. Won, H. Xiao, R. Zaripov, and L. Zhang, 2017: The Subseasonal to Seasonal (S2S) Prediction Project Database. Bull. Amer. Meteor. Soc., 98, 163–173
- Vigaud, N., A.W. Robertson, and M.K. Tippett, 2017: Multimodel Ensembling of Subseasonal Precipitation Forecasts over North America. Mon. Wea. Rev., 145, 3913–3928
- Weigel, A.P., M.A. Liniger, and C. Appenzeller, 2007: The Discrete Brier and Ranked Probability Skill Scores. Mon. Wea. Rev., 135, 118–124