ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


  • 1
    Publication Date: 2018-03-09
    Description: Verification in the presence of observation errors is approached from the Bayesian point of view. Like data assimilation (DA), Bayesian verification is shown to have a robust foundation established by Bayesian inference. Together, DA and Bayesian verification form two different levels of Bayesian inference. Evaluation of a model is equivalent to inference on the plausibility of this model given observations. Relative performances of different models are measured by ratios of posterior plausibilities, which become ratios of likelihoods when no prior information is available. These ratios are called Bayes factors and are the standard verification method in Bayesian model comparison. Since verification scores are used intensively in numerical weather prediction, verification scores derived from likelihoods are proposed to replace Bayes factors in Bayesian verification. Under the two requirements that verification scores be both strictly proper and local, the logarithm score, i.e. the log-likelihood, and its linear transformations are shown to be the unique class. Log-likelihoods in Bayesian verification are determined by the form of the forecast probability distributions from models. The empirical form is preferable because of its flexibility in incorporating not only observation errors but also other uncertainties, such as observation biases or observation error variances, into the calculation to obtain closed forms for the log-likelihoods. When applied to observations with Gaussian errors, the logarithm score induces the weighted mean squared error, which is non-dimensional and can be used for both univariate and multivariate observations. The most interesting application of Bayesian verification is a new explanation for rank histograms, quantifying the flatness of a rank histogram by a metric that turns out to be the Kullback-Leibler divergence between the rank distribution observed in reality and a uniform rank distribution. It is worth noting that these two very different metrics both derive from the logarithm score. (A brief illustrative sketch of these two metrics follows this record.)
    Print ISSN: 0035-9009
    Electronic ISSN: 1477-870X
    Topics: Geography, Physics
    Published by Wiley
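The abstract above names two concrete quantities: the logarithm score, which for Gaussian observation errors reduces to a weighted mean squared error, and a rank-histogram flatness metric given by the Kullback-Leibler divergence between the observed rank distribution and a uniform one. The following is a minimal Python sketch of both, assuming the predictive distribution for an observation is Gaussian with variance sigma_f^2 + sigma_o^2 (forecast variance plus observation-error variance); the function names and toy data are illustrative and not taken from the paper.

```python
import numpy as np

def log_score_gaussian(y, mu, sigma_f, sigma_o):
    """Logarithm score for a Gaussian forecast N(mu, sigma_f^2) verified
    against an observation y with Gaussian observation error N(0, sigma_o^2).

    The predictive distribution for y is N(mu, sigma_f^2 + sigma_o^2); the
    quadratic term of its log-density is the non-dimensional weighted
    squared error (y - mu)^2 / (sigma_f^2 + sigma_o^2)."""
    var = sigma_f**2 + sigma_o**2
    weighted_se = (y - mu)**2 / var          # weighted squared error term
    return -0.5 * (np.log(2.0 * np.pi * var) + weighted_se)

def rank_histogram_flatness(ranks, n_bins):
    """Kullback-Leibler divergence between the empirical rank distribution
    and the uniform distribution over n_bins ranks (0 = perfectly flat)."""
    counts = np.bincount(ranks, minlength=n_bins).astype(float)
    p = counts / counts.sum()                # observed rank frequencies
    q = 1.0 / n_bins                         # uniform rank probability
    mask = p > 0                             # empty bins contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q)))

# Toy usage with synthetic, illustrative values only
rng = np.random.default_rng(0)
print(log_score_gaussian(y=1.3, mu=1.0, sigma_f=0.5, sigma_o=0.2))
ranks = rng.integers(0, 11, size=1000)       # ranks of obs within a 10-member ensemble
print(rank_histogram_flatness(ranks, n_bins=11))
```

A KL value of zero corresponds to a perfectly flat rank histogram; larger values indicate systematic departures from flatness, such as an under- or over-dispersive ensemble.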