Stochastic precipitation modeling using circulation patterns to analyze climate impact on floods

This paper presents the conditioning of a precipitation model on objectively classified circulation patterns (CPs). The application of CPs is considered useful with regard to improving model accuracy and preparing a downscaling model that uses CPs classified from climate model data. As this study aims to produce rainfall as input for derived flood frequency analyses, the validation focuses on extreme values and precipitation events. The analysis is carried out with modifications of a well-tested alternating renewal precipitation model.


Introduction
Extreme value statistics of floods are of key importance in climate impact analyses. Changes in floods cannot be derived directly from climate models. Therefore it is necessary to obtain long time series of future high-resolution precipitation data as input for rainfall-runoff models.
Different downscaling methods have been developed to utilize climate model data on a local scale. First, there are dynamical downscaling methods, which use regional climate models (RCMs) to obtain regional data that is consistent with the large-scale data gained from general circulation models (GCMs). An alternative is statistical downscaling, which uses statistical relations between regional data and selected GCM parameters to assess changes on a local scale.
For climate impact analyses focusing on floods in mesoscale catchments, some features are of major interest. The precipitation data must be available in a high temporal resolution (e.g. hourly) and for a long time period. A long time period is necessary to minimize the sampling error. This effect is shown in Fig. 1, which compares bootstrap confidence intervals for different sample sizes of simulated discharge values. A larger sample produces a significantly smaller confidence band and, hence, minimizes the uncertainty.
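The sampling-error effect shown in Fig. 1 can be illustrated with a small bootstrap sketch. The data here are synthetic exponential draws standing in for simulated discharge values, and all function names are illustrative, not the paper's actual implementation:

```python
import random
import statistics

def bootstrap_ci(sample, n_boot=2000, alpha=0.05, seed=42):
    """Bootstrap confidence interval for the mean of a sample."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(sample, k=len(sample)))
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]          # lower percentile
    hi = means[int((1 - alpha / 2) * n_boot) - 1]  # upper percentile
    return lo, hi

# A larger sample yields a markedly narrower confidence band:
rng = random.Random(0)
small = [rng.expovariate(1.0) for _ in range(50)]
large = [rng.expovariate(1.0) for _ in range(5000)]
lo_s, hi_s = bootstrap_ci(small)
lo_l, hi_l = bootstrap_ci(large)
```

With 100 times the sample size, the bootstrap confidence band shrinks by roughly a factor of ten, which is the motivation for simulating long synthetic series.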
This is where we see the main disadvantages of existing models. In northern Germany the major RCM is the REMO model (see e.g. Jacob and Podzun, 1997). This model has a comparatively short output time series, and only few realizations are available. Although different statistical downscaling data sets are available, e.g. WETTREG (Spekat et al., 2007) or STAR (Werner and Gerstengarbe, 1997), most of these concentrate on a daily time step, which is insufficient for flood modeling.
Therefore alternatives are desirable that feature long simulation periods and high-resolution data. Here a precipitation model is proposed whose parameters are conditioned on special GCM predictors. This model has to be tested to ensure that it delivers usable flood model input data. The hybrid precipitation model according to Haberlandt et al. (2008) was chosen as the basis of the new downscaling approach. The model parameters will be estimated in relation to circulation patterns (CPs). As CPs can be obtained using climate model pressure data as input for an objective CP classification (see Bárdossy et al., 2002), a downscaling is thus possible. Nevertheless, recent research concluded that the use of CPs alone is not sufficient to accomplish a downscaling. This is briefly discussed in the conclusions and will hopefully be addressed in a later article.
This work presents the results of the precipitation model conditioned on circulation patterns.

Methods
The precipitation model creates synthetic data at the locations of observed precipitation. For hydrological use it is possible to regionalize the model to places without observations; this is not discussed in this paper, though. The model consists of two separate parts. The first part is a statistical precipitation generator consisting of an alternating renewal model (ARM), which simulates rainfall for single points. The second part is an optimization algorithm that reproduces the spatial dependencies of the synthetic data with a simulated annealing resampling (SA).

First part: Alternating-Renewal-Processes-Model (univariate)
The ARM differentiates between external and internal structures. The method's core is the external part, which defines rainfall as events (Fig. 2). These events are subdivided into wet spells and dry spells. The wet spells are defined through their length (wsd) and mean intensity (wsi). They are illustrated as red empty boxes. The dry spells are defined only by their length (dsd).
In the synthesis the spells (wsd + wsi and dsd) are generated alternately with theoretical distribution functions (Table 1). The fitting is accomplished with L-moments. In order to choose the best-fitting distribution function, several goodness-of-fit tests and graphical analyses were carried out. In this study, different distributions were found to fit best than in Haberlandt et al. (2008), which is plausible as Haberlandt et al. (2008) focused on a low mountain range, while this study considers stations from the coastal area (0 m a.m.s.l.) up to mountainous areas (609 m a.m.s.l.). The dependency of wsd and wsi is considered with a copula function. Copulas allow the dependence between the two variables to be considered while using univariate distribution functions. The internal structure of the precipitation events is reproduced with statistical model profiles (see Haberlandt, 1998). In Fig. 2 they are shown as grey filled boxes. To account for the differences between summer and winter rainfall, separate parameter sets are used for the two seasons.
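The alternating generation of dry and wet spells can be sketched as follows. The distribution choices and parameters below are placeholders, not the ones fitted in Table 1, and the copula dependence between wsd and wsi is omitted for brevity:

```python
import random

def generate_events(n_events, rng=None):
    """Alternating renewal sketch: alternately draw a dry-spell duration (dsd),
    then a wet-spell duration (wsd) and mean wet-spell intensity (wsi).
    Distributions are illustrative stand-ins for the fitted ones."""
    rng = rng or random.Random(1)
    t = 0.0
    events = []
    for _ in range(n_events):
        dsd = rng.expovariate(1 / 20.0)     # dry spell length [h]
        wsd = rng.weibullvariate(4.0, 1.2)  # wet spell length [h]
        wsi = rng.lognormvariate(0.0, 0.5)  # mean intensity [mm/h]
        t += dsd                            # dry spell elapses first
        # store event start, duration, intensity and total amount (wsa)
        events.append((t, wsd, wsi, wsd * wsi))
        t += wsd                            # advance past the wet spell
    return events
```

In the full model each spell variable would be drawn from its fitted distribution, with wsi drawn conditionally on wsd via the copula.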

Second part: Simulated Annealing
The second part of the model is a simulated annealing algorithm (SA). This non-linear optimization method is used to reproduce the spatio-temporal dependencies in the synthetic rainfall data. For more information on the algorithm see Bárdossy (1998). The principle of the algorithm is: 1. calculation of the objective function for the original state of the time series; 2. two events are switched; 3. calculation of the objective function for the new state of the time series; 4. decision about the acceptance of the switch according to the SA scheme (a worsening of the objective function is allowed to a certain degree, so that the global optimum can be found); 5. repetition of the procedure until the objective function is minimal.
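The five steps above can be sketched generically. This is a minimal SA resampling loop, not the paper's implementation; the objective function, cooling schedule and acceptance rule are illustrative assumptions:

```python
import math
import random

def simulated_annealing(series, objective, n_iter=5000, t0=1.0,
                        cooling=0.999, rng=None):
    """Sketch of SA resampling: swap two events, keep the swap if the
    objective improves, and accept worsenings with a temperature-dependent
    probability so the search can escape local optima."""
    rng = rng or random.Random(0)
    series = list(series)
    current = objective(series)                          # step 1
    temp = t0
    for _ in range(n_iter):
        i, j = rng.randrange(len(series)), rng.randrange(len(series))
        series[i], series[j] = series[j], series[i]      # step 2: swap
        new = objective(series)                          # step 3
        if new <= current or rng.random() < math.exp((current - new) / temp):
            current = new                                # step 4: accept
        else:
            series[i], series[j] = series[j], series[i]  # undo the swap
        temp *= cooling                                  # cool down
    return series, current
```

Swapping events (rather than perturbing values) preserves the univariate statistics produced by the ARM while rearranging the temporal structure.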
The elements of the objective function are shown in the Appendix. The individual parts are combined into a single objective function using weights. A detailed discussion can be found in Haberlandt et al. (2008). The 1st station is optimized univariately, regarding only the CP transition probabilities. The 2nd station is compared to the 1st station, the 3rd station to both other stations, and so on until the nth station is compared to the 1st up to the (n − 1)th station.

Circulation pattern
Circulation patterns (CPs) can be defined using the mean pressure distribution at sea level or in the middle troposphere over a large area and a time period of several days (Werner and Gerstengarbe, 2010). There are two principal ways to define CPs: they can be classified either subjectively by experts or by means of an objective classification. Here the fuzzy-rule-based method of Bárdossy et al. (2002) is used to realize an objective classification, which has two advantages. First, it is possible to define the objective function in a way that is especially fitted to the hydrological purpose. Secondly, the automatic classification allows fast processing of hundreds of years of data using reanalysis or GCM data as input. Thus the model can be used to project future CPs at a daily resolution.
As pressure data over a very large area (in this case Europe and its surroundings) is necessary, NCAR reanalysis data (Kistler et al., 2001) is used for the CP classification. For climate projections, ECHAM GCM data (Roeckner et al., 2003) will be used to classify CPs for current and future climate in the following studies. Under the assumption that a change in precipitation is caused by a change in the CP frequency, it is possible to combine the precipitation model with the CP projection to form a downscaling model. For the ARM, the conditioning on CPs is done simply by using a specific parameter set (see Sect. 2.1) for each CP. The frequency of the CPs is calculated from the empirical probability distribution. In the SA algorithm the sequence and synchrony of the CPs are considered by adjusting the objective function: for the first station the temporal sequence and for all other stations the synchrony of the CPs had to be included. This study was carried out with 8 CPs.
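The conditioning mechanism for the ARM can be sketched as a lookup of one parameter set per CP, together with the empirical CP frequencies. All parameter values and field names below are hypothetical placeholders:

```python
import random
from collections import Counter

def cp_frequencies(cp_series):
    """Empirical probability distribution of daily CP labels."""
    counts = Counter(cp_series)
    n = len(cp_series)
    return {cp: c / n for cp, c in counts.items()}

# One hypothetical ARM parameter set per CP (8 CPs, as in this study);
# the values are placeholders, not fitted parameters.
cp_params = {cp: {"mean_dsd": 15.0 + cp,
                  "mean_wsd": 4.0,
                  "mean_wsi": 1.0 + 0.1 * cp}
             for cp in range(8)}

def draw_event(cp, rng=None):
    """Conditioning sketch: select the parameter set of the current CP
    and draw one dry spell and one wet spell from it."""
    rng = rng or random.Random(0)
    p = cp_params[cp]
    dsd = rng.expovariate(1 / p["mean_dsd"])
    wsd = rng.expovariate(1 / p["mean_wsd"])
    wsi = rng.expovariate(1 / p["mean_wsi"])
    return dsd, wsd, wsi
```

Replacing the reanalysis-based CP series by a GCM-classified one, while keeping the per-CP parameter sets fixed, is what turns the conditioned model into a downscaling model.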

Study area and data
The study area is the north German federal state of Lower Saxony (Niedersachsen), while a validation with a hydrological model is planned for the Aller-Leine catchment (see Fig. 3). As the data availability for hourly rainfall is not ideal, a compromise between record length and station density is necessary. As a result, a core data set of twenty recording precipitation stations with observation lengths of 11-15 yr and at least 11 overlapping years has been selected. This core set consists only of data from the German Meteorological Service (DWD). The extended data set additionally considers 9 stations with lengths of 7-9 yr and data from Meteomedia AG.

Analyses and results
The first step was to fit the precipitation model without CPs to the new study area. Afterwards, the results of the original ARM were compared to the CP-conditioned model results. For the validation, statistical rainfall characteristics and extreme values were considered. Figure 4 compares observed short-term and synthetic 1000-yr precipitation data of the model conditioned on CPs. The statistical properties of the events are, overall, reproduced satisfactorily. Only the dsd moments are slightly underestimated in the synthesis. As a result of the dsd underestimation, the mean yearly event number is overestimated. Combined with the correctly modeled average wet spell amounts, the overestimation of the number of wet spells leads to a slight overestimation of the mean yearly precipitation. Changing the dsd distribution function may solve this issue.
Table 2 shows the comparison of statistical parameters of the synthetic precipitation data modeled with and without CPs. The results are mixed, as the accuracy of the reproduction of some parameters is improved while it is reduced for others. Nevertheless, more than half of the parameters improve, and the changes in the improved parameters are larger (higher delta) than those in the parameters that are reproduced less accurately. Thus we can conclude that, overall, the use of CPs leads to an improvement in the reproduction of rainfall time series. Only the accuracy of the estimation of the dsd values has been slightly reduced. The synthesis of wsd and wsa (wet spell amount) is significantly improved by including CPs. For the variable wsi, the lower-order statistical moments are modeled slightly better and the higher-order moments slightly less accurately.
Figure 5 compares observed and simulated extremes of hourly and 3-hourly sums for both model types. Overall, the extremes are reproduced quite well. Hourly extremes are reproduced slightly better than extreme values of longer rainfall durations. The conditioning of the model on CPs did not improve the simulation of hourly extremes. However, the 3-hourly extremes are simulated significantly better, as visible in the example given in the figure. This visual observation was confirmed by statistical tests. So, overall, the reproduction of extreme values in the synthetic data could be improved by using CPs.
The results of the simulated annealing resampling are not measurably affected by considering CPs.

Conclusions
In this paper a precipitation model was conditioned on circulation patterns. This is the first step towards a stochastic downscaling of high-resolution precipitation with projected CPs. The conditioned model was compared to the original version. The CPs were classified using reanalysis data. Through the conditioning of the precipitation model, an improvement of performance with respect to the reproduction of statistical parameters and extreme values was achieved.
An analysis carried out by Haberlandt et al. (2011) found that not only the occurrence frequency of the CPs will change in the future but also their internal structure. This means that, e.g., a wet CP can get wetter or drier. Therefore the conditioning of a precipitation model on CPs alone is not sufficient for downscaling. The internal changes of the CPs have to be addressed too. For this reason a simple statistical-dynamical approach will be developed.

Elements of the simulated annealing objective function
Probability of bivariate occurrence:

P_xy = n_11 / (n_00 + n_01 + n_10 + n_11),

where n_ab is the number of intervals with rain state a at station x and rain state b at station y (1: raining, 0: not raining), and z_x is the precipitation at station x.

Bivariate correlation (over intervals that are wet at both stations):

r_ij = cov(z_i, z_j) / sqrt( var(z_i) · var(z_j) ), for z_i > 0, z_j > 0.
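The two elements can be computed directly from a pair of station time series. This is a literal transcription of the formulas above, assuming the correlation is the standard Pearson coefficient restricted to wet-wet intervals (the numerator of this term is garbled in the source and reconstructed here):

```python
import math

def bivariate_occurrence(x, y):
    """P(rain at both stations) = n11 / (n00 + n01 + n10 + n11)."""
    n11 = sum(1 for a, b in zip(x, y) if a > 0 and b > 0)
    return n11 / len(x)  # denominator is the total number of intervals

def wet_wet_correlation(x, y):
    """Pearson correlation of intensities over intervals wet at both stations."""
    pairs = [(a, b) for a, b in zip(x, y) if a > 0 and b > 0]
    zi = [a for a, _ in pairs]
    zj = [b for _, b in pairs]
    mi, mj = sum(zi) / len(zi), sum(zj) / len(zj)
    cov = sum((a - mi) * (b - mj) for a, b in pairs) / len(pairs)
    vi = sum((a - mi) ** 2 for a in zi) / len(zi)
    vj = sum((b - mj) ** 2 for b in zj) / len(zj)
    return cov / math.sqrt(vi * vj)
```

In the SA step, both elements would be recomputed after every event swap and combined with the other weighted terms of the objective function.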

Fig. 3 .
Fig. 3. Study area Lower Saxony (Niedersachsen) and the precipitation network, with two stations indicated for the extreme value comparison (see Fig. 5); the stations shown in Fig. 5 are marked with slightly larger points than the other stations.

Table 1 .
Methods used in the ARM.

Table 2 .
Percentage deviation between observed and simulated statistical moments, with and without using CPs, as mean values over all stations (Var: variance; Skew: skewness; Kurt: kurtosis); grey indicates improved and dark grey reduced accuracy through using CPs; light grey indicates that no changes occurred.