
Forecast verification


Forecast verification is a subfield of the climate, atmospheric and ocean sciences dealing with validating, verifying and determining the predictive power of prognostic model forecasts. Because of the complexity of these models, forecast verification goes a good deal beyond simple measures of statistical association or mean error calculations.

To determine the value of a forecast, we need to measure it against some baseline, or minimally accurate, forecast. There are many types of forecast that, while producing impressive-looking skill scores, are nonetheless naive. A "persistence" forecast, for example, can rival even those of the most sophisticated models: "What is the weather going to be like today? The same as it was yesterday." This could be considered analogous to a "control" experiment. Another example is the climatological forecast: "What is the weather going to be like today? The same as it was, on average, at this time of year over the past 75 years."
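To make the baseline idea concrete, here is a minimal sketch in Python, assuming synthetic data and a mean-squared-error skill score (one standard choice, where 1 is a perfect forecast and 0 is no better than the reference); the names `mse` and `skill_score` are illustrative, not from any particular library:

```python
import numpy as np

def mse(forecast, observed):
    """Mean squared error between a forecast and the observations."""
    return np.mean((forecast - observed) ** 2)

def skill_score(forecast, reference, observed):
    """MSE skill score: 1 is perfect, 0 is no better than the
    reference forecast, and negative values are worse than it."""
    return 1.0 - mse(forecast, observed) / mse(reference, observed)

# Hypothetical daily temperatures: a seasonal cycle plus weather noise.
rng = np.random.default_rng(42)
days = np.arange(365)
obs = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, 365)
model = obs + rng.normal(0, 1, 365)        # an imperfect model forecast

persistence = obs[:-1]                     # "same as it was yesterday"
climatology = np.full(365, obs.mean())     # long-term mean, every day

print("skill vs persistence:", skill_score(model[1:], persistence, obs[1:]))
print("skill vs climatology:", skill_score(model, climatology, obs))
```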

The second example suggests a good method of normalizing a forecast before applying any skill measure. Most weather variables cycle, since the Earth is forced by a highly regular energy source. A numerical weather model must accurately reproduce both the seasonal cycle and (if finely resolved enough) the diurnal cycle, but reproducing these cycles adds no information content, since the same cycles are easily predicted from climatological data. The climatological cycles should therefore be removed from both the model output and the "truth" data, so that a skill score applied afterward is more meaningful.
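A minimal sketch of this normalization, using synthetic data and a hypothetical `remove_climatology` helper: the day-of-year mean is subtracted from both model output and "truth" before any measure of association is applied, so that the shared, easily predicted cycle no longer inflates the result:

```python
import numpy as np

def remove_climatology(series, day_of_year, n_days=365):
    """Subtract each calendar day's long-term mean, leaving anomalies."""
    clim = np.array([series[day_of_year == d].mean() for d in range(n_days)])
    return series - clim[day_of_year]

# Hypothetical: ten years of daily data dominated by the seasonal cycle.
rng = np.random.default_rng(1)
day = np.tile(np.arange(365), 10)
cycle = 10 * np.sin(2 * np.pi * day / 365)
truth = cycle + rng.normal(0, 1, day.size)
model = cycle + rng.normal(0, 1, day.size)   # model noise independent of truth

truth_anom = remove_climatology(truth, day)
model_anom = remove_climatology(model, day)

# The raw correlation is inflated by the shared, easily predicted cycle;
# the anomaly correlation isolates any genuine predictive information.
print("raw correlation:    ", np.corrcoef(model, truth)[0, 1])
print("anomaly correlation:", np.corrcoef(model_anom, truth_anom)[0, 1])
```

With independent noise in model and truth, the raw correlation is close to 1 while the anomaly correlation is close to 0, illustrating that the cycle itself carries no forecast skill.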

One way of thinking about it is, "how much does the forecast reduce our uncertainty?" Tang et al. (2005) used the conditional entropy to characterize the uncertainty of ensemble predictions of the El Niño/Southern Oscillation (ENSO):

$$H(X \mid Y) = -\sum_{y} p(y) \sum_{x} p(x \mid y) \log p(x \mid y)$$

where $X$ is the predicted quantity, $Y$ is the information the prediction is conditioned on, and a lower conditional entropy means the forecast leaves less residual uncertainty.
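As a concrete illustration (a minimal sketch with hypothetical probabilities, not the actual data or computation of Tang et al.), the conditional entropy of a categorical ENSO forecast can be computed from a joint distribution over actual and forecast states:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum p log p in nats, skipping zero entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def conditional_entropy(joint):
    """H(X|Y) = -sum_{x,y} p(x,y) log p(x|y) for a joint table joint[x, y]."""
    joint = np.asarray(joint, dtype=float)
    p_y = joint.sum(axis=0)                       # marginal p(y)
    return sum(py * entropy(joint[:, j] / py)     # entropy of each p(x|y)
               for j, py in enumerate(p_y) if py > 0)

# Hypothetical joint distribution p(x, y) over three ENSO states
# (La Niña, neutral, El Niño): actual state x (rows) vs forecast y (columns).
joint = np.array([[0.20, 0.05, 0.00],
                  [0.05, 0.30, 0.05],
                  [0.00, 0.05, 0.30]])

print("H(X)   =", entropy(joint.sum(axis=1)))    # uncertainty with no forecast
print("H(X|Y) =", conditional_entropy(joint))    # uncertainty given the forecast
```

The difference H(X) - H(X|Y) is the mutual information between forecast and outcome, i.e., precisely the reduction in uncertainty the forecast provides.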