Traditional frequency analysis of annual precipitation requires fitting a probability model to yearly precipitation totals. There are three potential problems with this approach: a long record (at least 25 to 30 years) is required to fit the model, years with missing data cannot be used, and the data need to be homogeneous. To overcome these limitations, we test an alternative methodology proposed by Eagleson (1978), based on the derived distribution approach (DDA). This approach allows for better estimation of the probability density function (pdf) of annual rainfall without requiring long records, provided that high-resolution precipitation data are available to derive external storm properties. The DDA combines marginal pdfs for storm depth and inter-arrival time to obtain an analytical formulation of the distribution of annual precipitation under the assumption of independence between events. We tested the DDA at two temperate locations in different climates (Concepción, Chile, and Lugano, Switzerland), quantifying the effects of record length. Our results show that, compared to fitting a normal or log-normal distribution, the DDA significantly reduces the uncertainty in annual precipitation estimates (especially interannual variability) when only short records are available. The DDA also reduces the bias in annual precipitation quantiles with high return periods. We also show that using precipitation data aggregated every 24 h, as commonly available at most weather stations, introduces a noticeable bias in the DDA. Our results point to the tangible benefits of installing high-resolution (hourly or sub-hourly) precipitation gauges at previously ungauged locations. We show that the DDA, in combination with high-resolution gauging, provides more accurate and less uncertain estimates of long-term precipitation statistics, such as interannual variability and quantiles of annual precipitation with high return periods, even for records as short as 5 years.
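The core idea of the DDA, combining event-scale distributions to derive the distribution of annual totals, can be illustrated with a minimal Monte Carlo sketch. This is not the analytical formulation of Eagleson (1978) or the parameterization used in the study; it simply assumes, for illustration, exponential inter-arrival times (so the storm count per year is Poisson) and exponential storm depths, with all parameter values hypothetical:

```python
import random


def simulate_annual_totals(n_years, storms_per_year=40.0, mean_depth=25.0, seed=0):
    """Sketch of the derived-distribution idea: annual precipitation is the
    sum of independent storm depths, with the number of storms per year
    determined by exponential inter-arrival times (a Poisson process).
    Parameter values are illustrative, not taken from the study."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_years):
        # Count storms arriving within one year via exponential inter-arrivals.
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(storms_per_year)  # mean inter-arrival: 1/rate years
            if t > 1.0:
                break
            n += 1
        # Sum exponentially distributed storm depths (mm) for this year.
        totals.append(sum(rng.expovariate(1.0 / mean_depth) for _ in range(n)))
    return totals


# Many simulated years approximate the derived distribution of annual totals;
# for this compound-Poisson sketch, E[annual total] = rate * mean_depth.
totals = simulate_annual_totals(10_000)
mean_total = sum(totals) / len(totals)
print(round(mean_total))  # close to 40 * 25 = 1000 mm
```

Because the event-scale parameters (arrival rate, mean depth) can be estimated from only a few years of high-resolution data, the derived distribution stabilizes long-term statistics such as quantiles much faster than fitting a distribution directly to the short series of annual totals.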