ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Publication Date: 2015-10-01
    Description: Synoptic observations are often treated as error-free representations of the true state of the real world. For example, when observations are used to verify numerical weather prediction (NWP) forecasts, forecast–observation differences (the total error) are often entirely attributed to forecast inaccuracy. Such simplification is no longer justifiable for short-lead forecasts made with increasingly accurate higher-resolution models. For example, at least 25% of t + 6 h individual Met Office site-specific (postprocessed) temperature forecasts now typically have total errors of less than 0.2 K, which are comparable to typical instrument measurement errors of around 0.1 K. In addition to instrument errors, uncertainty is introduced by measurements not being taken concurrently with the forecasts. For example, synoptic temperature observations in the United Kingdom are typically taken 10 min before the hour, whereas forecasts are generally extracted as instantaneous values on the hour. This study develops a simple yet robust statistical modeling procedure for assessing how serially correlated subhourly variations limit the forecast accuracy that can be achieved. The methodology is demonstrated by application to synoptic temperature observations sampled every minute at several locations around the United Kingdom. Results show that subhourly variations lead to sizeable forecast errors of 0.16–0.44 K for observations taken 10 min before the forecast issue time. The magnitude of this error depends on spatial location and the annual cycle, with the greater errors occurring in the warmer seasons and at inland sites. This important source of uncertainty consists of a bias due to the diurnal cycle, plus irreducible uncertainty due to unpredictable subhourly variations that fundamentally limit forecast accuracy.
    Print ISSN: 0027-0644
    Electronic ISSN: 1520-0493
    Topics: Geography, Geosciences, Physics
  • 2
    Publication Date: 2015-02-01
    Description: Fire danger indices are used in many countries to estimate the potential fire danger and to issue warnings to local regions. The McArthur fire danger rating system is used in Australia. The McArthur forest fire danger index (FFDI) uses only meteorological elements. It combines information on wind speed, temperature, relative humidity, and recent rainfall to produce a weather index of fire potential. This index is converted into fire danger categories to serve as warnings to the local population and to estimate potential fire-suppression difficulty. FFDI values above the threshold of 75 are rated as extreme. The spatial behavior of large values of the FFDI is modeled to investigate whether a varying threshold across space may serve as a better guide for determining the onset of elevated fire danger. The authors modify and apply a statistical method that was recently developed for spatial extreme events, using a “max-stable” process to model FFDI data at approximately 17 000 data sites. The method that is described here produces a quantile map that can be employed as a spatially varying fire danger threshold. It is found that a spatially varying threshold may serve to more accurately represent high fire danger, and an adjustment is proposed that varies by local government area. Temporal change was also investigated, and evidence was found of a recent increase in extreme fire danger in southwestern Australia.
    Print ISSN: 1558-8424
    Electronic ISSN: 1558-8432
    Topics: Geography, Physics
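
The first record describes quantifying how a 10 min gap between observation and forecast valid time limits achievable accuracy. The following is a minimal sketch, not the authors' code, of one way that comparison could be set up from 1-minute temperature observations: pair each on-the-hour value with the reading taken 10 min earlier, then split the difference into a mean (the diurnal-cycle bias) and a spread (the irreducible subhourly variability). The data layout (a pandas Series of temperatures in kelvin with a DatetimeIndex) is assumed for illustration.

import numpy as np
import pandas as pd

def timing_offset_error(minute_obs: pd.Series, offset_minutes: int = 10) -> dict:
    """Summarize the error from using a reading taken `offset_minutes` before the hour.

    minute_obs: 1-min temperature series (K) indexed by observation time.
    """
    # Values exactly on the hour (the nominal forecast valid time).
    on_hour = minute_obs[minute_obs.index.minute == 0]
    # Values taken `offset_minutes` before the hour (e.g. at :50).
    early = minute_obs[minute_obs.index.minute == 60 - offset_minutes].copy()
    # Shift the early readings forward so they align with the following hour.
    early.index = early.index + pd.Timedelta(minutes=offset_minutes)
    diff = (on_hour - early).dropna()
    return {
        "bias_K": diff.mean(),          # systematic part, driven by the diurnal cycle
        "spread_K": diff.std(ddof=1),   # irreducible subhourly variability
        "rmse_K": np.sqrt((diff ** 2).mean()),
    }

Applied to minute-sampled station data, the "rmse_K" value plays the role of the 0.16-0.44 K timing-offset error quoted in the abstract; the split into bias and spread mirrors the paper's distinction between the diurnal-cycle component and the unpredictable remainder.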
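The second record works with gridded values of the McArthur forest fire danger index rather than computing the index itself, but the FFDI it refers to is commonly evaluated from a regression approximation of the Mk5 meter. The sketch below uses that commonly quoted form; the coefficients are the standard published approximation and the example inputs are illustrative, so both should be checked against an authoritative source before use.

import math

def ffdi(temp_c: float, rel_humidity_pct: float, wind_kmh: float, drought_factor: float) -> float:
    """McArthur forest fire danger index (dimensionless).

    drought_factor: 0-10 measure of fuel dryness reflecting recent rainfall.
    """
    return 2.0 * math.exp(
        -0.450
        + 0.987 * math.log(drought_factor)
        - 0.0345 * rel_humidity_pct
        + 0.0338 * temp_c
        + 0.0234 * wind_kmh
    )

# Example: a hot, dry, windy day with fully dried fuel; values above 75
# fall in the "extreme" category discussed in the abstract.
print(round(ffdi(temp_c=40.0, rel_humidity_pct=10.0, wind_kmh=50.0, drought_factor=10.0), 1))

The paper's contribution is then to model the spatial extremes of such FFDI values with a max-stable process, yielding a quantile map that can replace the single fixed threshold of 75.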