
UPTEC W 18 015

Degree project, 30 credits. April 2018

Hydrometeorological extremes in the Adige river basin, Italy

David Gozzi


ABSTRACT

Hydrometeorological extremes in the Adige river basin, Italy

David Gozzi

This study aimed at describing the characteristics of daily precipitation and discharge extremes in the Adige river basin at the city of Trento. Annual maximum series for the period 1975−2014 were analyzed in terms of trends, seasonality indices and L-moments.

A Mann-Kendall trend analysis showed a weak but significant signal of decreasing extremes; the percentages of sites with significant negative trends were overall larger than the significance levels. Precipitation extremes were characterized primarily by autumn storms, while floods had a stronger seasonality with peaks occurring predominantly in June and July, which indicated that the timing of floods was not solely explained by rainfall maxima.

The Adige basin was found to be a homogenous region with respect to precipitation, but the results did not support a corresponding assumption for discharge. A regional frequency analysis was performed for precipitation data and found both the Pearson type III and generalized normal distributions to be adequate regional frequency distributions. The extreme daily precipitation at Trento with a 100-year return period was estimated to be between 114 and 148 mm/d.

Keywords: Hydrometeorological extremes, precipitation, discharge, floods, seasonality, L-moments, regional frequency analysis, trend analysis, Adige river.

Department of Earth Sciences, Program of Air, Water and Landscape Sciences, Uppsala University, Villavägen 16, SE-752 36, Uppsala, Sweden.


REFERAT

Hydrometeorological extremes in the Adige river basin, Italy

David Gozzi

The characteristics of extremes of daily precipitation and discharge in the drainage basin of the Adige river at the city of Trento were investigated. Annual maximum series for the period 1975–2014 were analyzed with respect to trends, seasonality indices and L-moments. A Mann-Kendall trend analysis suggested a weak but significant signal of decreasing extremes, as the proportion of stations with significant negative trends was overall larger than the significance level. Extreme precipitation was characterized mainly by autumn storms, whereas discharge showed a stronger seasonality, with maxima occurring predominantly in June and July. The discharge extremes could therefore not be explained by precipitation maxima alone. The basin could be regarded as a homogenous region with respect to precipitation, but the results did not support a corresponding assumption for discharge. A regional frequency analysis was carried out for the precipitation data and showed that the Pearson type III and the generalized normal distribution were suitable regional frequency distributions. The extreme daily precipitation over Trento with a return period of 100 years was estimated to be between 114 and 148 mm/d.

Keywords: Hydrometeorological extremes, precipitation, discharge, seasonality indices, L-moments, regional frequency analysis, trend analysis, the Adige river.

Department of Earth Sciences, Program of Air, Water and Landscape Sciences, Uppsala University, Villavägen 16, SE-752 36 Uppsala, Sweden.


PREFACE

Korbinian Breinl has been the supervisor for this thesis and Giuliano Di Baldassarre the academic supervisor, both at the Department of Earth Sciences at Uppsala University.

The author wishes to thank them for their guidance over the course of the project, and also to acknowledge the University of Trento, which provided the data used in the study.

Copyright © David Gozzi and the Department of Earth Sciences, Program of Air, Water and Landscape Sciences, Uppsala University

UPTEC W 18 015, ISSN 1401-5765

Published digitally at the Department of Earth Sciences, Uppsala University, Uppsala 2018


POPULÄRVETENSKAPLIG SAMMANFATTNING

The Alps are a region frequently exposed to extreme weather, and they have experienced several severe floods in recent years as a result of heavy rainfall. Several of Europe's major rivers have their sources here, which means that the effects also reach areas downstream. In November 2014, a powerful storm caused fatalities and extensive damage in Switzerland and northern Italy.

After rainstorms in August 2005, embankments along several rivers in southern Germany failed and many people were forced to flee or be evacuated. At the same time, several people died in Austria and Switzerland after the rains triggered landslides.

In Italy, the storm of 1966 is regarded as one of the most serious weather disasters of the twentieth century. The storm caused extensive damage and many deaths in the central and north-eastern parts of the country. Among other places, the town of Trento in the Alps, situated on Italy's second longest river, the Adige, was flooded. To counter the damaging effects of the Adige, levees have been built along the river's course through the Lagarina valley, and several dams in the area can attenuate high flows. The area also holds many hydrological and meteorological stations measuring discharge, precipitation, temperature and other variables.

To prevent future floods and to dimension infrastructure and other structures, it is important to understand the probability that an event of a given magnitude will occur. Such an estimate requires knowledge of the probability distribution, that is, the mathematical description of how likely observations of different magnitudes are. Statistical analysis of measured hydrometeorological variables is a common way of determining such a probability distribution, and it is therefore an important part of flood risk assessments.

In this work, extremes of precipitation and discharge were investigated in the drainage basin of the Adige river down to the city of Trento. The analyses were carried out on series of annual maxima, that is, the largest daily values observed each year during the period. The methods behind the results can briefly be described as follows. (1) The probability distribution of precipitation was derived through a regional frequency analysis. A common problem with environmental data is that observed series of annual maxima are short. Regional frequency analysis rests on the idea that series from different stations can be pooled and assessed together if they are sufficiently similar, which makes the method suitable for short series. The mathematical parameters of the distribution were estimated with so-called L-moments, which compared with other methods have been shown to perform well when the amount of data is small. (2) A trend analysis was carried out to examine whether the annual maxima have increased or decreased during the observed period. The Mann-Kendall method was used since it does not require the series to be normally distributed, which is rarely the case for environmental data. (3) The seasonal variation of the annual maxima of precipitation and discharge was described with seasonality indices. These measure the time of year at which the maxima occur on average, and how much the date varies.


Three main results can be highlighted. (1) The probability distribution for precipitation indicated that the maximum daily precipitation that can be expected to fall over Trento in a 100-year period is between 114 and 148 mm/d. This can be used in further studies as input to models that estimate the river's extreme flows, or in assessments of the effects of climate change on water resources in the region. The results also indicate (2) that the magnitude of the extremes decreased during the studied period 1975–2014, and (3) that, in addition to extreme precipitation, snowmelt most likely governs the time of year at which the highest flows occur.


TABLE OF CONTENTS

Abstract
Referat
Preface
Populärvetenskaplig sammanfattning
1. Introduction
1.1. Purpose and research questions
1.2. Limitations
2. Theory
2.1. Block maxima
2.2. Flood generation processes
2.3. Trend analysis
2.4. Seasonality analysis
2.5. Probability theory
2.6. Estimators
2.7. Moments
2.8. L-moments
2.9. Regional frequency analysis
2.9.1. Index flood method
2.9.2. Identification of homogenous regions
2.9.3. Discordancy
2.9.4. Homogeneity test
2.9.5. Choice of regional frequency distribution
2.9.6. Estimation of a regional frequency distribution and its accuracy
3. Data and methods
3.1. Database
3.2. Procedure for developing annual maximum series
3.3. Catchment delineation
3.4. Screening of data
3.5. Study area
3.6. L-moment and regional analysis methodology
3.7. Trend analysis and seasonality analysis methodology
4. Results
4.1. Trend analysis of annual maximum series
4.1.1. Sensitivity analysis
4.2. L-moments summary
4.3. Discordancy test
4.4. Regional homogeneity
4.5. Regional frequency analysis
4.6. L-moments and mean annual precipitation
4.7. Seasonality analysis
4.7.1. Trends in timing of maxima
5. Discussion
5.1. Trend analysis of annual maximum series
5.1.1. Sensitivity analysis
5.2. L-moments summary
5.3. Regional homogeneity
5.4. Regional frequency analysis
5.5. L-moments and mean annual precipitation
5.6. Seasonality analysis
5.7. Uncertainties
5.7.1. Data
5.7.2. Serial correlation
5.7.3. Selection procedure
5.8. A note on scales
6. Conclusions
7. References
Appendix


1. INTRODUCTION

The Adige river in the north-eastern Italian Alps had its latest severe flood in 1966, following a cyclonic storm considered to be the most important hydrometeorological event in Italy in the last century (Malguzzi et al., 2006). The synoptic-scale storm caused damage and casualties over the whole of central and northeastern Italy, including the flooding of the town of Trento on the Adige river.

Extreme precipitation and discharge events such as this pose considerable risks to human life, the economy and infrastructure. In the Alpine region, large floods have been shown to be more frequent than in the past and may become even more frequent under global warming (Allamano, 2009). However, predictions are particularly difficult to make here, since data are sparse and the spatial variability of the hydrological environment is significant (Parajka, 2005). Even so, the spatial and temporal patterns of the extremes need to be characterized for flood risk analyses, assessments of climate change effects and water resource management.

An important characteristic of hydrometeorological extremes is the seasonality, i.e. the tendency for events to occur in certain parts of the year. Seasonality has an impact on both the precipitation inputs to a catchment and its soil wetness and therefore has a great influence on the magnitude and timing of annual maximum discharge peaks (Blöschl et al., 2013).

The seasonality of hydrological processes has been the focus of recent studies on both the European scale (Blöschl et al., 2017; Parajka et al., 2010) and the scale of Alpine catchments (Turkington et al., 2016).

A common way to characterize extremes is by estimating the frequency of events. Knowledge of the magnitude and probable frequency of recurrence is needed for planning decisions. Especially for engineering purposes, frequency analysis of extreme hydrometeorological events is needed for proper design of structures such as dams, levees, waterworks and sewage disposal plants (Dalrymple, 1960).

The statistical approach to frequency analysis of floods has been under debate since its introduction (see Klemeš (2000) and references therein), where a major criticism concerns the extrapolation beyond the range of observations for higher return periods. The issue of short records can be mitigated by so-called regional frequency analysis. Related samples of data can be analyzed together, as a region, if the event frequencies are similar (Hosking and Wallis, 1997). Thus, the large sampling errors associated with short records can be reduced.

The use of L-moment statistics is a common approach in regionalization studies (e.g. Adamkowski, 2000; Hailegeorgis et al., 2013). Conventional moments such as mean, variance, skewness and kurtosis describe the scale and shape of a probability distribution; L-moments are analogues of these but have been shown to characterize a wider range of distributions and to perform better for small samples (Hosking and Wallis, 1997).

Central to regionalization is the concept of homogeneity, i.e. sufficiently similar frequency distributions among the pooled samples. Regional analysis studies of extreme precipitation have found significant relationships between L-moments and mean annual precipitation, and successfully used this to group sites into homogenous regions (e.g. Schaefer, 1990; Di Baldassarre et al., 2006).

The Italian VA.PI. project identified a nationwide approach for frequency analysis of extreme rainfall and floods based on data records up to the 1980s (COST, 2012). The resulting reference procedure for regional flood frequency estimation adopts the use of hierarchical regions. Italy is delineated into regions of the first level, where shape parameters are considered constant, and sub-areas of the second level, where dispersion (e.g. variance) is assumed constant. The Adige basin belongs to the Triveneto region, which is considered homogenous at both the first and second level. Manfreda and Fiorentino (2008) applied the VA.PI. procedure to the Adige river and assumed the flood data to be homogenous, while a recent study of Triveneto found this larger region to be heterogeneous (Persiano et al., 2016). It is therefore interesting to revisit the homogeneity assumption for the Adige basin.

1.1. PURPOSE AND RESEARCH QUESTIONS

This study aimed at describing the characteristics of daily precipitation and discharge extremes in the Adige river basin down to the city of Trento. Annual maximum (AMAX) series for the recent 40-year period 1975-2014 were developed and analyzed in terms of temporal trends, their distributional properties and the seasonality of the events.

The research questions were as follows.

1. Do the AMAX series exhibit any trends?

2. (a) Can the catchment be considered a homogenous region with respect to precipitation and discharge? (b) If so, which frequency distribution is in accordance with the data? (c) And what are the expected precipitation depths or discharge peaks for return periods up to 100 years?

3. (a) What do the sample L-moments say about the spatial distribution of the extremes in the catchment? (b) Is there a relationship between L-moments of precipitation extremes and the mean annual precipitation?

4. (a) What is the seasonality of annual maxima and how strong is it? (b) What does the seasonality say about precipitation as a driving process of floods in the basin?

(c) Have there been any shifts in the seasonal timing of extremes?

1.2. LIMITATIONS

- Only daily data have been analyzed. Available sub-daily series were found to be too short.


- Results can only describe flood characteristics in terms of precipitation, and not with regard to other flood generation processes such as soil moisture and snowmelt.

- The minimum required record length (20 years) was kept short due to scarcity of discharge data.

2. THEORY

2.1. BLOCK MAXIMA

The block maxima (BM) method is one of two fundamental approaches in extreme value theory, the other being peak-over-threshold (Ferreira and Haan, 2015). The BM method consists of dividing the observation period into equally sized and non-overlapping periods and restricting the analysis to the maximum observations in each period – e.g. annual maxima. Ferreira and Haan (2015) compared the two methods applied with L-moments (see section 2.8) and suggested that the BM method is generally more efficient under many practical conditions.

2.2. FLOOD GENERATION PROCESSES

Extreme rainfall processes are a form of climate forcing on flood generation (Blöschl et al., 2013). Events may be produced by smaller scale convective storms, covering a few kilometers and lasting a few hours or less with high intensities. They can also be produced by atmospheric mechanisms on larger scales caused by dynamic uplifting or, in mountainous regions, by orographic effects. These storms cover larger spatial scales, have a longer duration and lower intensities. In colder regions, snowmelt and rain-on-snow events are important generation processes as well (Merz and Blöschl, 2003).

The storm duration relative to the mean response time of a catchment is a key influence on flood peaks. The largest floods typically occur when the storm duration is equal to or greater than the response time of the catchment, since this may give rise to a resonance effect (Blöschl et al., 2013).

2.3. TREND ANALYSIS

The Mann-Kendall (MK) test is a nonparametric test for monotonic trends. Mann (1945) used the significance test of Kendall's correlation coefficient τ with time as the independent variable. The test statistic S is computed for all possible data pairs and measures the monotonic dependence of the dependent variable y on time. It compares the number of pairs in which y increases with time with the number of pairs in which y decreases with time.

The null hypothesis of no change is rejected if S, and therefore Kendall's τ of y versus time, is found to be significantly different from zero (Helsel and Hirsch, 2012).

The MK trend test is often suitable for environmental data since it is robust to outliers and missing values, and no assumption of normality is required (Helsel and Hirsch, 2012).

However, there must be no serial correlation for the p-values of the trend significance test to be correct. A positive serial correlation increases the likelihood of detecting a significant trend when none may exist, i.e. a type I error in the significance test (Yue et al., 2002). The reverse is true for negative serial correlation, which may cause underestimation of significant trends (type II error). Moreover, the presence of trends affects the detection of serial correlation. Yue et al. (2002) suggest a modified MK test for serially correlated data which includes detrending the series prior to pre-whitening (i.e. removal of serial dependence), to accurately estimate the serial correlation.

The rate of change of a trend can be assessed by the nonparametric slope estimator of Theil (1950) and Sen (1968). Significance is assessed through a test of the slope coefficient β, in a similar manner as for Kendall's τ. Blöschl et al. (2017) use an adjusted β estimator for trend estimation of the timing of annual maximum events, which accounts for the circular nature of dates. The slope estimator is calculated as the median of the differences between dates D over all possible pairs i and j in the AMAX series,

$$\beta = \operatorname{median}\left(\frac{D_j - D_i + k}{j - i}\right), \qquad
k = \begin{cases} -\bar{m} & \text{if } D_j - D_i > \bar{m}/2 \\ \bar{m} & \text{if } D_j - D_i < -\bar{m}/2 \\ 0 & \text{otherwise,} \end{cases} \tag{1}$$

where k adjusts for circularity and m̄ is the average number of days per year (to account for leap years). D is given in day numbers (January 1 corresponding to D = 1 and December 31 to D = 365 or D = 366). The unit of β is days per year.
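The circular slope estimator of Eq. 1 is simple to implement directly. The base-R sketch below is a minimal illustration of the idea (the thesis used the modifiedmk package for the Mann-Kendall tests themselves); the function name and the example series are hypothetical.

```r
# Minimal sketch of the circular Theil-Sen slope estimator of Eq. 1.
# 'doy' holds the day-of-year of the annual maximum, one value per year.
circular_sen_slope <- function(doy, m_bar = 365.25) {
  n <- length(doy)
  slopes <- c()
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      d <- doy[j] - doy[i]
      # wrap differences larger than half a year around the calendar (Eq. 1)
      k <- if (d > m_bar / 2) -m_bar else if (d < -m_bar / 2) m_bar else 0
      slopes <- c(slopes, (d + k) / (j - i))
    }
  }
  median(slopes)  # beta, in days per year
}

# Hypothetical example: maxima drifting earlier by about half a day per year
set.seed(1)
doy <- round(180 - 0.5 * (1:40) + rnorm(40, sd = 10))
circular_sen_slope(doy)
```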

2.4. SEASONALITY ANALYSIS

The seasonality analysis of annual maximum precipitation and discharge is based on directional statistics (Mardia, 1972) which can account for the fact that the first and last days of the year have adjacent values in the time series. Bayliss and Jones (1993) adapted the directional statistics for analysis of extreme hydrological events and introduced indices that reflect the mean date of occurrence of the events and its variability. Hall and Blöschl (2017) use the following procedure to estimate these indices.

The date of occurrence Di of an event in year i is expressed as an angular value θi by plotting it on a unit circle in polar coordinates:

$$\theta_i = D_i \, \frac{2\pi}{m_i}, \qquad 0 \le \theta_i \le 2\pi, \tag{2}$$

where Di=1 corresponds to January 1 and Di = mi to December 31, and mi is the number of days in year i. The mean x- and y-components of the sample of events are obtained by

$$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} \cos(\theta_i), \tag{3}$$

$$\bar{y} = \frac{1}{n} \sum_{i=1}^{n} \sin(\theta_i), \tag{4}$$

where n is the total number of events at a station. The mean date of occurrence Dmean at a station is defined as

$$D_{\mathrm{mean}} = \begin{cases}
\tan^{-1}\!\left(\dfrac{\bar{y}}{\bar{x}}\right) \cdot \dfrac{\bar{m}}{2\pi} & \text{if } \bar{x} > 0,\ \bar{y} \ge 0 \\[6pt]
\left(\tan^{-1}\!\left(\dfrac{\bar{y}}{\bar{x}}\right) + \pi\right) \cdot \dfrac{\bar{m}}{2\pi} & \text{if } \bar{x} \le 0 \\[6pt]
\left(\tan^{-1}\!\left(\dfrac{\bar{y}}{\bar{x}}\right) + 2\pi\right) \cdot \dfrac{\bar{m}}{2\pi} & \text{if } \bar{x} > 0,\ \bar{y} < 0
\end{cases} \tag{5}$$

The variability r of the mean date of occurrence around the average date is

$$r = \sqrt{\bar{x}^2 + \bar{y}^2}, \qquad 0 \le r \le 1. \tag{6}$$

r = 0 corresponds to events being widely dispersed throughout the year, and r=1 to events occurring on the same day of the year.
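As an illustration of Eqs. 2–6, the two seasonality indices can be computed in a few lines of base R (the thesis implemented this step in MATLAB). The sketch below simplifies Eq. 2 by using the average year length for every year, and atan2 replaces the three quadrant cases of Eq. 5; the function name and example dates are hypothetical.

```r
# Seasonality indices: mean date of occurrence D_mean and concentration r.
seasonality_indices <- function(doy, m_bar = 365.25) {
  theta <- doy * 2 * pi / m_bar              # Eq. 2, dates as angles on the unit circle
  x <- mean(cos(theta))                      # Eq. 3
  y <- mean(sin(theta))                      # Eq. 4
  ang <- atan2(y, x)                         # equivalent to the quadrant cases of Eq. 5
  if (ang < 0) ang <- ang + 2 * pi
  list(D_mean = ang * m_bar / (2 * pi),      # mean date of occurrence (day of year)
       r = sqrt(x^2 + y^2))                  # Eq. 6: 0 = dispersed, 1 = same day each year
}

# Hypothetical example: maxima clustered in early July (around day 185)
seasonality_indices(c(180, 190, 175, 200, 185))
```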

2.5. PROBABILITY THEORY

The following sections are to a large degree an account of the L-moment approach to regional frequency analysis outlined in the influential work of Hosking and Wallis (1997), beginning with a note on probability theory.

Environmental data are often regarded as observations of random variables, generally denoted by X, and it is very rare that these values are equally likely to be observed (Hosking and Wallis, 1997). In probability theory, the relative frequency with which the values of X occur is described by its probability distribution. The cumulative distribution function,

𝐹(𝑥) = Pr[𝑋 ≤ 𝑥] where 0 ≤ 𝐹(𝑥) ≤ 1, (7)

of a probability distribution describes the probability that the random variable, or observation, is lower than a specific value x. If F(x) is a continuous function, which is often the case for environmental variables, it has an inverse function x(F) called the quantile function of X. x(p) is called the quantile of non-exceedance, given a probability p that X does not exceed the value x(p). For example, the discharge of a river might have a probability of 1% of exceeding 100 m3/s; 100 m3/s would then be the value of x(0.99).

In frequency analysis, the object is to estimate the quantiles belonging to the distribution of the random variable of interest (Hosking and Wallis, 1997). The quantiles may also be expressed in terms of the return period, which is common in environmental and engineering practice. A quantile of return period T, XT, is an observation, or event magnitude, that has the average probability 1/T of being exceeded by any single event. For annual data, an event magnitude with a return period of T years is equivalent to an annual non-exceedance probability

𝐹(𝑋𝑇) = 1 − 1/𝑇, (8)

and the magnitude of such an event is given by

𝑋𝑇 = 𝑥(1 − 1/𝑇). (9)


For engineering purposes, a return period of interest may be the design life of a structure, and thus the quantity XT can be referred to as the design event. When flood and precipitation data are analyzed, the design event is called the design flood and design storm, respectively.

2.6. ESTIMATORS

To estimate the quantiles of the variable of interest, a distribution for the variable must first be known (Hosking and Wallis, 1997). It is common to assume that a suitable distribution can be defined apart from a set of unknown parameters. These usually include location and shape parameters, which are estimated from the observed data. A common measure of performance for the estimator of a parameter θ is the root mean square error (RMSE),

$$\mathrm{RMSE}(\hat{\theta}) = \left\{ E\left(\hat{\theta} - \theta\right)^2 \right\}^{1/2}, \tag{10}$$

which has the same units as the parameter. A dimensionless measure, the relative RMSE, is obtained by the ratio of RMSE and 𝜃.

2.7. MOMENTS

The moments of a probability distribution are used for describing its scale and shape (Hosking and Wallis, 1997). The first moment is the mean, the center location of the distribution and the expected value of a random variable X,

𝜇 = 𝐸(𝑋). (11)

Higher order moments are given by

$$\mu_r = E(X - \mu)^r \quad \text{with } r = 2, 3, \ldots \tag{12}$$

The second moment is the variance which measures the dispersion around the mean value,

$$\sigma^2 = \mu_2 = E(X - \mu)^2. \tag{13}$$

The coefficient of variation (CV) measures dispersion as a proportion of the mean,

$$C_V = \sigma / \mu, \tag{14}$$

and is a useful alternative to the variance. Estimating the shape of a distribution involves higher order moments, such as skewness and kurtosis. Skewness,

$$\gamma = \mu_3 / \mu_2^{3/2}, \tag{15}$$

contains the third moment and measures whether the distribution is concentrated at the left and has a longer right tail (positive skew), or vice versa (negative skew). The fourth moment is included in kurtosis,

$$\kappa = \mu_4 / \mu_2^{2}, \tag{16}$$

which is related to the influence of extreme values on the variance, i.e. kurtosis increases as more of the variance is due to the presence of outliers (Westfall, 2014).

2.8. L-MOMENTS

Probability weighted moments (PWMs) were developed by Greenwood (1979) as an alternative to ordinary moments when estimating parameters of distributions whose inverse form is explicitly defined. However, the PWMs can only indirectly be interpreted as measures of scale and shape of a probability distribution. To overcome this, Hosking (1990) defined L-moments as linear combinations of the PWMs (the "L" refers to this fact, that they are linear combinations). L-moments have good performance for small samples and computational simplicity compared to other estimation methods such as maximum likelihood, making them popular for applications to hydrologic extremes (Katz, 2002).

Analogous to ordinary moments, useful L-moments for summarizing data are the L-location (λ1), which is the same as the mean of the distribution, and the L-moment ratios L-CV (τ), L-skewness (τ3) and L-kurtosis (τ4). The L-moment ratios are given by the formula

𝜏𝑟 = 𝜆𝑟/𝜆2 with 𝑟 = 3, 4, … (17)

where λ2 is the scale measure L-scale, while L-CV is defined separately as τ = λ2/λ1. The L-moment ratios are therefore dimensionless.

Sample L-moments are denoted lr, and sample ratios by tr. See Hosking and Wallis (1997) for detailed definitions.
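For reference, the sample L-moments and L-moment ratios can be computed from the unbiased probability-weighted-moment estimators in a few lines of base R; the thesis used the lmom package, whose samlmu function returns the same quantities. The sketch and its function name are illustrative only.

```r
# Sample L-moments l1, l2 and the ratios t (L-CV), t3 (L-skewness), t4 (L-kurtosis)
sample_lmom_ratios <- function(x) {
  x <- sort(x); n <- length(x); j <- seq_len(n)
  # unbiased probability-weighted moments b0..b3
  b0 <- mean(x)
  b1 <- sum((j - 1) / (n - 1) * x) / n
  b2 <- sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
  b3 <- sum((j - 1) * (j - 2) * (j - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
  l1 <- b0
  l2 <- 2 * b1 - b0
  l3 <- 6 * b2 - 6 * b1 + b0
  l4 <- 20 * b3 - 30 * b2 + 12 * b1 - b0
  c(l_1 = l1, t = l2 / l1, t_3 = l3 / l2, t_4 = l4 / l2)
}

# 'amax' would be an annual maximum series (a numeric vector) for one site:
# sample_lmom_ratios(amax)
```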

The L-moment ratio diagram is a convenient way of comparing sample L-moment ratios with population values of frequency distributions (Hosking and Wallis, 1997). The values are plotted on a graph whose axes are L-skewness and L-kurtosis. Two-parameter distributions plot as points, and three-parameter distributions plot as lines, with different points on the line equivalent to different values of the shape parameter. For a homogenous region (see definition below), it is useful to plot the regional average L-moment ratios – the average of at-site ratios in a region weighted by record length – to assess which distribution it resembles. In this way, the L-moment ratio diagram provides a visual assessment of the dispersion of the at-site L-moment ratios and can be used as a graphical tool to guide the selection of a suitable parent distribution in a regional frequency analysis.

2.9. REGIONAL FREQUENCY ANALYSIS

It is often a problem when estimating quantiles from annual data that record lengths are too short compared to the return period of interest (Hosking and Wallis, 1997). Generally, a record length at least as long as the return period is needed for reliable estimates, which seldom is the case for environmental observations.

Regional frequency analysis is a way to mitigate the problem of short records (Hosking and Wallis, 1997). It works by pooling together data from sites which are deemed similar enough, i.e. the at-site frequency distributions are approximately the same. In other words, a region is a group of sites whose observations are assumed to be drawn from the same distribution. Quantile estimates are then made from the larger dataset of the region, ideally with better accuracy than the at-site estimates.


The similarity of the at-site distributions is referred to as the homogeneity of the region, or heterogeneity if they are too dissimilar (Hosking and Wallis, 1997). A regional analysis involves assigning sites to regions, testing if the proposed regions are homogenous and finding suitable frequency distributions that fit the regional datasets.

2.9.1. Index flood method

One way to conduct a regional analysis is by the index flood method of Dalrymple (1960).

As the name suggests, the method was developed for flood frequencies, but it can be used for any kind of data. It consists of two steps, where the first is to develop a dimensionless frequency curve, referred to as the regional growth curve in later works (e.g. by Schaefer, 1990; Hosking and Wallis, 1997; Di Baldassarre et al., 2006). The growth curve represents the ratio of an event magnitude of any frequency, i.e. the quantiles, to an index flood. The index flood is defined for each site in a region and commonly taken to be the at-site mean (Dalrymple, 1960). When applied to precipitation data, this is instead called the index storm.

The second step involves relating the index flood to some physical characteristic to enable the prediction of the index flood, and thus also the quantiles, at any point within a region.

This makes it possible to assign any site with that characteristic to a frequency curve.

Because of this, regionalization is a vital tool in the field of flood prediction at ungauged sites (Blöschl et al., 2013).

In the approach to the index flood method of Hosking and Wallis (1997), the index flood at site i is estimated by the sample mean of the data. The parameters of the at-site distributions are found by estimating L-moments from the sample data. The at-site estimates are combined in the regional average to give the parameters of the regional growth curve.

If 𝑄̂i(F) is the estimated quantile function of the frequency distribution for site i belonging to a homogenous region, then quantile estimates are calculated as

$$\hat{Q}_i(F) = \hat{\mu}_i \, \hat{q}(F), \tag{18}$$

where 𝜇̂i and 𝑞̂(F) are the estimated index flood and regional growth curve.

Given that the regional growth curve is correctly specified and that frequency distributions at different sites indeed are identical apart from a scale factor, the procedure of Hosking and Wallis (1997) assumes that observations are identically distributed, not serially dependent and that there is no cross-correlation between sites. Although these assumptions may not be exactly satisfied in practice, Hosking and Wallis argue the procedure to be appropriately robust to departures from the assumptions.

2.9.2. Identification of homogenous regions

A crucial step in regionalization is the identification of homogenous regions. There are numerous ways to delineate sites into regions and several involve subjective judgement.

Hosking and Wallis (1997) argue that formation of regions should not be based on at-site statistics, but rather on site characteristics. These are, in principle, quantities that can be known about a site without measurements having been carried out, such as location, elevation and other physical properties. Mean annual precipitation may also be considered as such a characteristic.

At-site statistics should instead be used in testing of homogeneity of a proposed set of regions (Hosking and Wallis, 1997). Otherwise, if for example L-CV is used for grouping, there would be a tendency to group together sites with high outliers, even though these outliers might be due to random fluctuations that happened to affect one site but not its neighbors.

2.9.3. Discordancy

The discordancy measure of Hosking and Wallis (1997) is used to identify sites which are inconsistent with a group of sites as a whole. This is measured by the at-site L-moment ratios and summarized in the discordancy measure D. The critical value of D, above which a site is considered discordant, is 3 for regions with 15 or more sites (the critical value increases from 1.33 to 3 as the number of sites in a region increases from 5 to 15). A site flagged as discordant should be scrutinized for errors in the data or put under consideration for removal from the region.

2.9.4. Homogeneity test

All sites in a homogenous region have equal population L-moment ratios, but due to sampling variability the at-site ratios will differ. The question is if the dispersion of the observed L-moment ratios is larger than what would be expected. The homogeneity test proposed by Hosking and Wallis (1997) compares the dispersion of at-site L-CV to the statistics of a homogenous region, obtained from Monte Carlo simulations. In the test, the flexible four-parameter Kappa distribution is fitted to the regional average L-moment ratios calculated from the sites (see appendix Eq. A26–A28). A large number of realizations are simulated with this distribution, with the same number of sites and record lengths as the samples. The simulation results are used to calculate the heterogeneity measure

$$H = \frac{V - \mu_V}{\sigma_V}, \tag{19}$$

where V is the weighted standard deviation of at-site sample L-CV, and µV and σV are the mean and standard deviation of simulated V. The region may be considered acceptably homogenous if H≤1, possibly heterogeneous for 1≤H≤2, and definitely heterogeneous if H≥2.

It is also possible to use a heterogeneity measure based on L-skewness and L-kurtosis.

Hosking and Wallis (1997) refer to this measure as V3 and consider it appropriate for procedures based on hierarchical regions. However, compared to the measure based on L-CV, they find that this measure lacks power to discriminate between homogenous and heterogeneous regions.

In this study, the H-statistic of the homogeneity test corresponding to V3 is termed H3 and the H-statistic based solely on L-CV is referred to as H1. Unless specified otherwise, ho- mogeneity refers to H1.
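In practice, the discordancy measure, the heterogeneity measures and the goodness-of-fit statistic described in the next section are all computed by one function of the lmomRFA package used in this study. The sketch below is a minimal illustration; the list amax_list of annual maximum series is an assumption, and the exact arguments should be checked against the package documentation.

```r
library(lmomRFA)

# 'amax_list' is assumed to be a named list with one AMAX vector per site
regdata <- regsamlmu(amax_list)       # at-site sample L-moments and record lengths
tst <- regtst(regdata, nsim = 10000)  # Monte Carlo simulation of a homogenous region

tst$D  # discordancy measure for each site (critical value 3 for 15 or more sites)
tst$H  # heterogeneity measures; the first element corresponds to H1 in Eq. 19
tst$Z  # goodness-of-fit measure Z for the candidate distributions (|Z| <= 1.64)
```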

2.9.5. Choice of regional frequency distribution

Hosking and Wallis (1997) suggest considering many families of distributions as candidates for a regional dataset. The set of distributions suggested by Hosking (2015) is able to adapt to a wide range of properties of the data, and comprises the generalized logistic, generalized extreme value, generalized Pareto, generalized normal, Pearson type III and the Gumbel distribution (see appendix for formulas).

Given a set of candidate distributions, a goodness-of-fit test may be used to assess which distribution gives the best fit to the data. The test for regional distributions of Hosking and Wallis (1997) is based on the notion that the regional average L-moment ratios summarize the L-statistics of a homogenous region. Given homogeneity, the scatter of at-site values in the L-moment ratio diagram should represent no more than sampling variability.

The location and scale parameters of the distributions are estimated by the regional aver- age mean (L-location) and L-CV. The test compares the differences between L-skewness and L-kurtosis of the fitted distributions, and the corresponding regional averages.

To assess the significance of the differences, simulations are used to calculate a sampling variability for the regional averages. The simulations are in principle the same as for the homogeneity test, and those computations can be used again here to calculate the standard deviation and bias of the regional averages. The goodness-of-fit measure Z reflects a fit acceptable at the 10% significance level for |Z| ≤ 1.64. This assumes that Z has a standard normal distribution, which is only accurate if the region is perfectly homogenous and if there is no serial correlation or cross-correlation present in the data. Should several distributions be found to be acceptable, their growth curves can be compared. If these are approximately equal, then any of the distributions is adequate.

2.9.6. Estimation of a regional frequency distribution and its accuracy

In the L-moment procedure of Hosking and Wallis (1997), a regional frequency distribution is fitted by equating the L-moment ratios of a suitable distribution to the regional averages calculated from the samples. For three-parameter distributions, the sample L-moment ratios t, t3 and t4 are computed and used to estimate the regional quantile function q̂(F). Since the quantile estimates for each site are scaled by the index flood, the regional average mean, l1R, is set to 1. The estimate of the quantile with non-exceedance probability F at site i is calculated as

$$\hat{Q}_i(F) = l_1^{(i)} \, \hat{q}(F). \tag{20}$$

The accuracy of the quantile estimates can be assessed in Monte Carlo simulations of a synthetic region that matches the region used for the estimates in terms of heterogeneity and intersite dependence. The variation of the at-site L-CV in the synthetic region is chosen to match the observed H1-value. A correlation matrix can be used to describe the observed correlation pattern of the data, but if no specific pattern is discernible between the sites, they can be assumed to be equicorrelated. The average cross-correlation between all sites is then used. A large number of realizations of the synthetic region are made with the same number of sites, record lengths and regional distribution as the region used for estimates. A relative RMSE of the quantiles may be computed from the simulations. An interval around the estimated growth curve can also be calculated, within which a given ratio of simulated values fall. This is referred to as error bounds in Hosking and Wallis (1997).
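Fitting the regional distribution and evaluating the growth curve then follow directly with lmomRFA. The sketch below continues the illustration above and applies the index-flood scaling of Eq. 20 explicitly; the simulation-based RMSE and error bounds of a full accuracy assessment would additionally require the package's regional simulation routines. The index-storm value is the one reported later for site 90462.

```r
# Fit the Pearson type III distribution to the regional average L-moments
rfd <- regfit(regdata, "pe3")

# Regional growth curve for selected non-exceedance probabilities
f <- c(0.50, 0.90, 0.98, 0.99)   # return periods of 2, 10, 50 and 100 years
growth <- regquant(f, rfd)

# Eq. 20: scale by the index storm (the at-site mean) to obtain at-site quantiles
index_storm <- 66.7              # mm/d, site 90462 near Trento
index_storm * growth             # about 129 mm/d for the 100-year event
```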

3. DATA AND METHODS

3.1. DATABASE

The database used in this study was created with the involvement of researchers at the University of Trento. The software DB Browser for SQLite (v. 3.9.1) was used to extract data from the database.

The database contains hydrological and meteorological time series data as well as spatial information for the Adige basin. Precipitation and discharge data were available in daily, hourly and sub-hourly (5 min, 10 min and 30 min) time series. However, the majority of hourly and sub-hourly series were initiated after the year 2000 and no sub-daily series extended further back than 1977. The availability of daily data was better, with records going as far back as the 1920s. The most recent records end in early 2015, and roughly half of all records end before 2010. There were also historic records of monthly series which were not considered in this study.

Based on the availability of data records (summarized in Table 1) it was decided that a suitable study period would be the 40-year period 1975-2014. It was also decided that the focus should be on daily data and that sub-daily records could be included after aggregation into daily series.

Table 1. Number of available records in database and summary of record lengths.

Variable Available records Mean (yrs) Min (yrs) Max (yrs) Std. (yrs)

Precipitation 643 29 0.02 94 29

Discharge 128 18 0.01 89 19

3.2. PROCEDURE FOR DEVELOPING ANNUAL MAXIMUM SERIES

The extracted time series were tested to assess if they were suitable for developing AMAX series. Primarily, the years used in the AMAX series should not have too many missing values, otherwise it is plausible that the maximum value that year was not recorded. The main idea in the implemented selection procedure was to use the information in the seasonality of annual maxima, together with a threshold for missing values. The steps are explained in detail below and were performed in MATLAB (v. 2017b); a simplified sketch of part of the procedure is given after the list. Selected records are summarized in Table 2.


1. First, every time series (daily and sub-daily) for precipitation and discharge sites with more than 15 years between the reported start and end dates were extracted as csv-files from the database.

2. For each time series, the years in the period 1975-2014 with less than 5% missing values were selected. This threshold was included to get a reliable first estimate of the seasonality indices.

3. Sub-daily series were then aggregated to daily series. For precipitation series, this meant finding the sum of values recorded over 24 hours. However, there seemed to be some inconsistencies in the database regarding which 24 h-period was defined as a day. For several sites, both daily and sub-daily series were available, which allowed for comparisons between the aggregated daily series and the database counterparts. For 10 min-data, values were summed between 9 a.m. the first day to 9 a.m. the following day (the date of the second day was assigned to the value). Although there were examples of 5 min-series where summing between midnight to midnight or 10 a.m. to 10 a.m. yielded values closer to the database daily series, 9 a.m. to 9 a.m. was more common and thus used for all series. For 60 min data, sums between 10 a.m. to 10 a.m. gave values most similar to the database daily series. In the case of discharge data, the mean value was calculated for each day, defined as midnight to midnight.

4. To construct AMAX series, the largest daily event was selected for each hydrological year among the selected calendar years. The hydrological year was defined as October 1 to September 30 the following calendar year. The resulting series were checked so that no two maxima occurred within one week, to ensure that they were not generated by the same storm event, i.e. to minimize serial correlation.

5. The seasonality of each AMAX series was then analyzed to find the mean date of occurrence, Dmean, and its variability, r.

6. The information from the seasonality analysis was used for a more rigorous selection criterion. The extracted time series were tested one more time. For each series, a window was defined as +/-10 days/r around Dmean, i.e. the size of the window was inversely proportional to the variability of the mean date of occurrence (0 ≤ r ≤ 1). The average window size was +/-26 days for precipitation and +/-20 days for discharge. Years with less than 0.05% missing values within the window were considered to have complete records. Time series with at least 20 complete years were selected.

7. Steps 3–5 were repeated for time series selected in step 6 to construct the AMAX series used in further analysis. In the case where daily and aggregated sub-daily series both met the criteria in step 6, the daily series were chosen since these tended to have longer records and this minimized the uncertainty introduced by the aggregation. Before generating the final AMAX series, it was checked in GIS software that only sites located within the Adige basin were used.
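The selection procedure itself was implemented in MATLAB; as a minimal illustration of step 4, the base-R sketch below extracts annual maxima over hydrological years (October 1 to September 30) from a daily series. The data frame daily and its column names are assumptions.

```r
# 'daily' is assumed to have a Date column 'date' and a numeric column 'precip' (mm/d)
# Label each hydrological year by the calendar year in which it ends (Oct-Sep)
yr <- as.integer(format(daily$date, "%Y"))
mo <- as.integer(format(daily$date, "%m"))
hyear <- yr + (mo >= 10)

# Annual maximum series: the largest daily value in each hydrological year
amax <- tapply(daily$precip, hyear, max, na.rm = TRUE)
```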


Table 2. Number of selected records and summary of their record lengths.

Variable Selected records Mean (yrs) Min (yrs) Max (yrs) Std. (yrs)

Precipitation 84 32 21 39 5.5

Discharge 17 31 20 39 5.7

3.3. CATCHMENT DELINEATION

Discharge time series were available in units of m3/s and were converted to mm/d in the AMAX series for consistency with the precipitation data. The specific discharge, i.e. the value in m3/s divided by the sub-catchment area in m2, was calculated for each discharge site and then transformed from m/s to mm/d. This scaling affects L-location but not L-CV and higher order L-moment ratios.
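The conversion to specific discharge is a one-liner; the numbers below are hypothetical (a peak of 400 m3/s over a 1,000 km2 sub-catchment) and only illustrate the arithmetic.

```r
q_m3s   <- 400                              # hypothetical peak discharge in m3/s
area_m2 <- 1000 * 1e6                       # 1,000 km2 expressed in m2
q_mmd   <- q_m3s / area_m2 * 1000 * 86400   # m/s -> mm/d
q_mmd                                       # about 34.6 mm/d
```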

The Adige basin at the city of Trento and its sub-catchments were delineated using the digital elevation model Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) from the U.S. Geological Survey and the National Geospatial-Intelligence Agency (U.S. Geological Survey, 2015). The spatial resolution used was 7.5 arc-seconds. Delineation was performed in QGIS (v. 2.18.7) with the SAGA (v. 2.3.2) algorithms Fill sinks, Strahler order, Channel network and drainage basins and Upslope area.

3.4. SCREENING OF DATA

Time series in the database were labelled with a quality assessment. Each value in the time series was associated with one of 53 different quality codes. These codes were grouped into a flag equal to 0 or 1. Data with flag=0 were described as missing, incomplete or suspect, whereas data with flag=1 contained descriptions such as good or complete, but also unverified or interpolated from hydrological records of the Italian Hydrological Service. Test results for L-moments were first generated including all available data. An examination of suspicious outliers showed that these all belonged to data with flag=0.

All data with flag=1 were included in this study. While the inclusion of unverified data introduced uncertainty, the trade-off for longer records was deemed necessary. As an example, precipitation site 90269 was found to have 34 complete years but would lose 16 years of available data in the study period if unverified data were excluded. Overall, the assessment is that unverified data represent a minority of the analyzed data.

The lag-one serial correlation coefficient was computed for the AMAX series. Precipitation sites had an average value of 0.024, with a standard deviation of 0.17. For discharge, the values were -0.09 and 0.14, respectively.

3.5. STUDY AREA

The Adige river is the second longest river in Italy (Encyclopædia Britannica, 1998). It rises from mountain lakes below the Resia pass in the north-eastern Italian Alps and flows south and east to Bolzano, where it joins the Isarco River. Past Bolzano it flows south through the Lagarina Valley, passing Trento, the capital of the Trentino-Alto Adige region.


The drainage area of the Adige at Trento is 9,800 km2. It is a mountainous region, with the highest peaks predominantly in the western part and at the northern edge along the Austrian border (Figure 1). The low-lying river valley is mainly oriented in the north-south direction.

Mean annual precipitation varies between 500 and 1,260 mm with an average of 830 mm.

The main direction of storms is from the south and west during autumn, and the catchment response time is approximately 20 h (Manfreda and Fiorentino, 2008). This response time makes the analysis of daily data suitable for detecting the largest flood peaks.

The river is modified by human action, with dikes built on both its sides in its path through the valley (Alkema et al., 2003), and the main river and its tributaries being exploited for hydroelectric power generation. Dams in use today were built before the 1960s (Zolezzi et al., 2009) and the locations of the 25 reservoirs are marked in Figure 1.

Figure 1. Map of the Adige basin down to Trento showing elevation. Locations of selected pre- cipitation (circles) and discharge (squares) sites are plotted, together with reservoirs (triangles) in the river network. Coordinate system WGS 84/UTM zone 32N, EPSG: 32632. Coordinates in decimals.


3.6. L-MOMENT AND REGIONAL ANALYSIS METHODOLOGY

The development and analysis of L-moments and the regional frequency analysis were performed in R (v. 3.4.2), with the packages lmom (Hosking, 2015) and lmomRFA (Hosking, 2017).

3.7. TREND ANALYSIS AND SEASONALITY ANALYSIS METHODOLOGY

Trend analysis of the AMAX series was performed in R with the packages data.table (Dowle et al., 2017) and modifiedmk (Patakamuri, 2017).

A sensitivity analysis was performed to examine the influence of gaps (i.e. missing years) in the AMAX series on the detection of significant trends; a simplified sketch of the procedure is given after the list below.

1. First, AMAX series with no missing years were selected and the sensitivity test was performed on this set of series.

2. A percentage of the AMAX values was randomly removed from each series, and the MK trend test was then performed on these series with artificial gaps.

3. The test was repeated with 5%, 10%, 15%, 20% and 25% of AMAX values removed. For each percentage of artificial gaps, the number of significant trends was stored.

4. Since the algorithm involved the generation of random numbers, it was implemented with a Monte Carlo procedure. For each percentage of artificial gaps, results were averaged over 1000 outcomes of the algorithm.

5. Finally, the difference between the results with artificial gaps and the original series, i.e. with no artificial gaps, was compared to assess the reliability of trend results for series which contained gaps.
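A compact version of this Monte Carlo procedure is sketched below in base R, using the Kendall test in stats::cor.test as a stand-in for the modifiedmk functions used in the thesis. The object amax_complete (a list of gap-free AMAX vectors) is an assumption, and positive and negative trends are counted together for brevity.

```r
# Count significant MK trends (5% level here) after randomly removing a share of values
count_sig <- function(series_list, gap_frac, alpha = 0.05) {
  sum(sapply(series_list, function(x) {
    keep <- sort(sample(seq_along(x), round((1 - gap_frac) * length(x))))
    p <- suppressWarnings(cor.test(x[keep], keep, method = "kendall"))$p.value
    p < alpha
  }))
}

# Average number of significant trends over 1000 realizations with 10% artificial gaps
set.seed(42)
mean(replicate(1000, count_sig(amax_complete, gap_frac = 0.10)))
```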

Calculations of β-values (Eq. 1) for the trend analysis of mean date of occurrence, as well as the seasonality analysis, were implemented in MATLAB (v. 2017b).

4. RESULTS

Results are summarized in the following sections. See Tables A1 and A2 in appendix for detailed results for each site.

4.1. TREND ANALYSIS OF ANNUAL MAXIMUM SERIES

The MK trend analyses of the AMAX series performed at the 10%, 5% and 1% significance levels displayed a pattern where a minority of sites exhibited significant trends, and these were predominantly negative (Table 3). The percentages of AMAX series with significant negative trends were larger than the significance levels, with the exception of precipitation series at the 5% level. There, 4 out of 84 series (4.8%) showed significant negative trends. Record lengths of AMAX series with significant trends at the 1% level were all above 30 years.

All AMAX series were included in the data summary in Figure 2, but for the estimation of distributions, series with significant trends at the 5% level were excluded.


Table 3. Trend analysis of annual maximum series for selected significance levels, with number of significant positive (+) and negative (-) trends for each variable and the number of samples (n).

Variable               Trend   Significance level
                               10%    5%     1%
Precipitation (n=84)   +        2      2      0
                       -       11      4      3
Discharge (n=17)       +        0      0      0
                       -        5      3      2

4.1.1. Sensitivity analysis

36 out of the 84 precipitation AMAX series, and 10 out of 17 discharge series, were found to have no missing years and were thus used in the sensitivity analysis. The number of significant trends decreased when artificial gaps were introduced, and the number generally decreased more for larger gaps (Tables 4 and 5).

Furthermore, the p-values for each site were averaged over the outcomes of the Monte Carlo simulations performed for each percentage of artificial gaps. This analysis showed that no site developed significant trends with increasing gaps, but rather the opposite: the average p-values increased with increasing artificial gaps for the sites that had significant trends when no artificial gaps had been introduced, leading to sites losing significance as the artificial gaps increased (Tables A3 and A4 in appendix).

Table 4. Sensitivity analysis of precipitation trend results, with the number of significant positive (+) and negative (-) trends for different percentages of AMAX values randomly removed (artificial gaps). The analysis was performed on a subsample of 36 series which had no gaps in record. Note that the numbers of trends expected to be significant by chance are 3.6, 1.8 and 0.36 at the 10%, 5% and 1% significance levels, respectively. Results are based on 1000 realizations of the algorithm. Trend results for the 36 series with no years removed (0% artificial gaps) are included for comparison.

Artificial gaps Trend Significance level 10% 5% 1%

0% + 1 1 0

- 4 2 1

5% + 1.2 1.0 0

- 3.4 1.6 0.5

10% + 1.5 0.9 0

- 3.3 1.5 0.3

15% + 1.5 0.8 0

- 3.1 1.4 0.3

20% + 1.6 0.8 0

- 3 1.3 0.2

25% + 1.6 0.8 0

- 2.9 1.2 0.1


Table 5. Sensitivity analysis of discharge trend results, with the number of significant positive (+) and negative (-) trends for different percentages of AMAX values randomly removed (artificial gaps). The analysis was performed on a subsample of 10 series which had no gaps in record. Note that the numbers of trends expected to be significant by chance are 1, 0.5 and 0.1 at the 10%, 5% and 1% significance levels, respectively. Results are based on 1000 realizations of the algorithm. Trend results for the 10 series with no years removed (0% artificial gaps) are included for comparison.

Artificial gaps Trend Significance level 10% 5% 1%

0% + 0 0 0

- 3 2 1

5% + 0 0 0

- 2.4 1.4 0.3

10% + 0 0 0

- 2.2 1.3 0.3

15% + 0 0 0

- 2.0 1.1 0.2

20% + 0 0 0

- 1.7 0.9 0.2

25% + 0 0 0

- 1.5 0.8 0.1

4.2. L-MOMENTS SUMMARY

The spatial distributions of L-location, L-CV and L-skewness are summarized in Figure 2. Both color and size of symbols reflect the magnitude of values, where the smallest and largest symbols belong to the sites with the lowest and highest values, respectively. The precipitation L-moment values were typically higher in the central part of the basin. For discharge, the L-location and L-skewness were mainly higher for sites belonging to the smaller sub-catchments in the northern part of the basin. Beyond this, other geographic patterns are difficult to discern.

For discharge, the L-moment ratio diagram (Figure 3) revealed that the regional average had the best agreement with the generalized normal (GNO) distribution. In the case of precipitation, the regional average was closest to the line of Pearson type III (PE3), but also near the generalized normal, generalized extreme value (GEV) and Gumbel (G) distributions (Figure 4). In both cases, the scatter of sample points covers the set of distributions.


Figure 2. At-site values of L-location (1st row), L-CV (2nd row) and L-skewness (3rd row) for precipitation (left column) and discharge (right column) sites. Symbol size increases with increasing values. Values are color-categorised in five levels according to magnitude (see legends). Units of L-location are mm/d.


Figure 3. L-moment ratio diagram for discharge comparing sample L-kurtosis and L-skewness (hollow circles) and the regional average (filled circle) to values of the frequency distributions generalized logistic (GLO), generalized extreme value (GEV), generalized Pareto (GPA), generalized normal (GNO), Pearson type III (PE3) and Gumbel (G).

Figure 4. L-moment ratio diagram for precipitation. See description for Figure 3.

4.3. DISCORDANCY TEST

Two precipitation series were found to be discordant, with D larger than the critical value 3.0, which warranted checking these series for potential errors. The first series, belonging to site 90104, had unverified values for the years 1975-1980, and subsequently verified data quality from 1981 to 2013. The annual maxima of 1975-1980 were in good agreement with the rest of the period. The reason for discordancy was the high L-CV of this series, likely caused by the extreme value in 1999 of 190 mm/d, which was the highest measured at any site. The second series, from site 90014, had the smallest maximum value of all series, and all values in the time series were verified apart from a short interval in March 2013. Based on this, neither series was excluded. No discharge series was found to be discordant.

4.4. REGIONAL HOMOGENEITY

The heterogeneity measure of Hosking and Wallis (1997) was used to assess whether the sites in the study area could be treated as a single homogenous region. The region was considered acceptably homogenous for H≤1, possibly heterogeneous for 1≤H≤2 and definitely heterogeneous for H≥2. With 10 000 realizations of the algorithm, the values of H1 were found to be 0.27 for precipitation and 1.50 for discharge, with similar magnitudes for H3 (Table 6). In other words, the whole basin was homogenous with respect to precipitation but possibly not for discharge. Therefore, only precipitation data were considered in the regional frequency analysis.

Table 6. Heterogeneity test statistics. H1 is based on L-CV and H3 on L-skewness and L-kurtosis.

Variable        H1      H3
Precipitation   0.27    -0.28
Discharge       1.50    -1.86

4.5. REGIONAL FREQUENCY ANALYSIS

A more formal way of identifying suitable frequency distributions than inspection of L-moment ratio diagrams is to use the goodness-of-fit test described in section 2.9.5. Both the PE3 and the GNO distributions had acceptable fits (Table 7). Growth curves for the distributions were compared and found to be approximately equal. Both distributions may be adequate, but PE3 was selected for quantile estimates on account of the lower absolute Z-value.

The Gumbel distribution was not tested, as it was not included in the R package. However, judging from the relative distances to the regional average in Figure 4, Gumbel should have a Z-value very similar to that of GEV.


Table 7. Test statistics for goodness-of-fit test. The fit is considered acceptable if |Z| is less than 1.64.

Distribution Z

GLO 7.30

GEV 2.03

GNO 1.32

PE3 -0.50

GPA -9.72

The regional frequency distribution was fitted by the method of L-moments. The population L-moments of the region were equated to the regional average L-moment ratios from the sample data (Table 8).

Table 8. Regional average L-location (l1), L-CV (t), L-skewness (t3) and L-kurtosis (t4).

l1R     tR      t3R     t4R
1       0.166   0.172   0.136

The PE3 distribution was parametrized by the conventional moments mean μ, standard deviation σ, and skewness γ. These parameters were estimated from the regional L-moments (Table 9). Expressions for the L-moments in terms of parameters of PE3 and the remaining distributions may be found in Hosking and Wallis (1997).

Table 9. Fitted parameters for regional Pearson type III distribution.

Distribution    μ       σ       γ
PE3             1       0.305   1.045

Monte Carlo simulations were used to assess the accuracy of the quantile estimates from the fitted regional distribution, accounting for heterogeneity and inter-site dependency.

The variation of the at-site L-CV in the simulated region was set to the L-CV range of the samples, divided by 4.1 in order to match the observed H1=0.27. Sites were assumed to be equicorrelated. The average cross-correlation was calculated as 0.360 and included in the simulations.

The estimated regional growth curve of the region with 90% error bounds is plotted in Figure 5 and summarized for selected values of the non-exceedance probability F in Table 10. The index storm is scaled by the growth curve to give at-site quantile estimates. An event with a return period of 100 years (F=0.99) was estimated to be up to twice the magnitude of the index storm.


Figure 5. Regional growth curve for precipitation with 90% error bounds (dashed lines) for return periods up to 100 years.

Table 10. Regional quantile estimates q̂(F) with regional average RMSE and 90% error bounds (lower and upper) for selected non-exceedance probabilities F. Results based on 10 000 realizations.

F q̂(F) RMSE Lower bound (0.05) Upper bound (0.95)

0.01 0.53 0.03 0.48 0.59

0.50 0.95 0.01 0.94 0.96

0.80 1.23 0.01 1.21 1.25

0.90 1.41 0.03 1.36 1.46

0.98 1.78 0.06 1.68 1.89

0.99 1.93 0.08 1.81 2.07

Event magnitudes with return periods of up to 100 years were estimated for annual maximum precipitation at site 90462 located just outside Trento, using Eq. 20 and the index storm of 66.7 mm/d (Table 11). The quantile estimate for the site had an upper bound of 147.6 mm/d for the 100-year storm. The time series showed that a similar amount, 134 mm, fell over January 31 – February 1 in 1986. This did not coincide with a notable increase in the series of the nearby discharge site 90415 in Trento.


Table 11. Estimated annual maximum precipitation Q̂(F) at site 90462 on the outskirts of Trento for return periods of up to 100 years. Results represent the regional quantile estimates and error metrics in Table 10 scaled by the index storm.

Return period (yrs)   F      Q̂(F) (mm/d)   RMSE (mm/d)   Lower bound (0.05) (mm/d)   Upper bound (0.95) (mm/d)
1                     0.01   35.0          2.9           30.7                        40.4
2                     0.50   63.2          4.2           56.6                        70.4
5                     0.80   82.0          5.6           73.4                        91.9
10                    0.90   94.0          6.6           83.9                        105.8
50                    0.98   118.9         9.1           105.2                       135.4
100                   0.99   128.8         10.2          113.7                       147.6
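As a quick check of Eq. 20, scaling the growth-curve values of Table 10 by the index storm of 66.7 mm/d reproduces the central estimates of Table 11 to within the rounding of the tabulated growth-curve values.

```r
index_storm <- 66.7                               # mm/d, site 90462
growth <- c(0.53, 0.95, 1.23, 1.41, 1.78, 1.93)   # regional quantiles from Table 10
round(index_storm * growth, 1)                    # 35.4 63.4 82.0 94.0 118.7 128.7
```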

4.6. L-MOMENTS AND MEAN ANNUAL PRECIPITATION

A visual inspection of the relationship between sample L-CV and mean annual precipitation (MAP) (Figure 6), as well as L-skewness and MAP (Figure 7), gives some indication of a decrease of the L-moments with increasing MAP. When the relationship was formalized by fitting a linear model to the data, it was in both cases very weak (adjusted R² = 3.1·10⁻³ for L-CV and 2.4·10⁻² for L-skewness) and only significant at the 10% level for L-skewness (p-value = 0.27 for L-CV and 0.09 for L-skewness). The slope was in both cases negative.

Figure 6. Sample L-CV of precipitation plotted against mean annual precipitation. The line represents a linear regression model (R² = 3.1·10⁻³, p-value = 0.27).
