THE APPLICATION OF CROSS-SPECTRAL ANALYSIS TO HYDROLOGIC TIME SERIES

By

Ignacio Rodríguez-Iturbe

September 1967
HYDROLOGY PAPERS
COLORADO STATE UNIVERSITY
FORT COLLINS, COLORADO
ACKNOWLEDGMENTS
This paper is primarily based on research performed by the author during studies toward a Doctor of Philosophy degree at Colorado State University. The author wishes to extend his sincere appreciation to his major professor and adviser, Dr. Vujica Yevjevich, Professor of Civil Engineering, for his counsel and encouragement. Special thanks are given to Dr. Mohammed M. Siddiqui, Professor in the Department of Mathematics and Statistics, whose guidance has been invaluable in the author's work.
The writer gratefully acknowledges the U. S. National Science Foundation (grant GK-1661), under whose sponsorship this research was supported with V. Yevjevich as the principal investigator and from which the writer's graduate research assistantship was awarded. The National Center for Atmospheric Research, Boulder, Colorado, has also substantially helped the study by allowing free computer time to the project.
Financial assistance for the author's academic training was also received from La Universidad del Zulia, Maracaibo, Venezuela, which kindly allowed the author a leave of absence in order to finish his graduate studies.
TABLE OF CONTENTS

Abstract

I    Introduction
     1.1 Significance of the Study
     1.2 Main Objectives of This Study

II   General Mathematical Techniques for Cross-Spectral Analysis
     2.1 Stationary Random Processes
     2.2 Spectral Density Functions
     2.3 Coherence Functions
     2.4 Partial Coherence Functions
     2.5 Application of Partial Coherence Functions
     2.6 Procedure of Computation
     2.7 Confidence Limits

III  Mathematical Techniques of Spectral Analysis for Linear Systems
     3.1 Frequency Response Functions
     3.2 Single Input Linear Systems
     3.3 Multiple Input Linear Systems

IV   Theory of Cross-Spectral Analysis of Linearly Dependent Stochastic Processes
     4.1 Moving Average Processes
     4.2 Autoregressive Processes
     4.3 Mathematical Development of the Cross-Spectral Characteristics of Filtered Series

V    Data Assembly and Procedure for the Analysis of Hydrologic Series by Cross-Spectral Techniques
     5.1 Data Selection
     5.2 Method of Analysis

VI   Application of Cross-Spectral Techniques to Hydrologic Time Series
     6.1 Analysis of Monthly Precipitation Data
     6.2 Analysis of Monthly Runoff Data
     6.3 Joint Analysis of Monthly Rainfall and Monthly Runoff in a Watershed
     6.4 Analysis of Annual Precipitation Data
     6.5 The Application of Cross-Spectral Analysis in the Study of Hydrologic Stochastic Processes

VII  Conclusions

Bibliography
Appendix 1
Appendix 2
Appendix 3

LIST OF FIGURES
2.1  Example of erroneous high coherence
3.1  Linear system with noise in the measurement of the output
3.2  Linear system with noise in the measurement of the input and output
3.3  Multiple input linear system
5.1  Geographic distribution of stations used in the analysis
6.1  Coherence functions for monthly data of temperature, atmospheric pressure and precipitation at Eureka (California)
6.2  Partial coherence functions for monthly data of temperature, atmospheric pressure and precipitation at Eureka (California)
6.3  Coherence functions for monthly data of temperature, atmospheric pressure and precipitation at San Francisco (California)
6.4  Partial coherence functions for monthly data of temperature, atmospheric pressure and precipitation at San Francisco (California)
6.5  Coherence functions for monthly data of temperature, atmospheric pressure and precipitation at San Diego (California)
6.6  Partial coherence functions for monthly data of temperature, atmospheric pressure and precipitation at San Diego (California)
6.7  Coherence functions for monthly data of temperature, atmospheric pressure and precipitation at Portland (Oregon)
6.8  Partial coherence functions for monthly data of temperature, atmospheric pressure and precipitation at Portland (Oregon)
6.9  Coherence functions for monthly data of temperature, atmospheric pressure and precipitation at Tatoosh Island (Washington)
6.10 Partial coherence functions for monthly data of temperature, atmospheric pressure and precipitation at Tatoosh Island (Washington)
6.11 Phase and partial phase diagrams for monthly data of temperature and atmospheric pressure at Eureka (California)
6.12 Phase and partial phase diagrams for monthly data of temperature and precipitation at Eureka (California)
6.13 Phase and partial phase diagrams for monthly data of atmospheric pressure and precipitation at Eureka (California)
6.14 Examples of coherence functions for monthly data of region 1
6.15 Examples of coherence functions for monthly data of region 2
6.16 Average coherence functions for monthly data of regions 1 and 2 with their respective variances
6.17 Examples of cross-correlation functions for monthly data of region 1
6.18 Average cross-correlation function for monthly data of region 1 and variance of the previous function
6.19 Examples of cross-correlation functions for monthly data of region 2
6.20 Average cross-correlation function for monthly data of region 2 and variance of the previous function
6.21 Mean annual precipitation at San Diego (California) and Tatoosh Island (Washington).
6.22 Examples of gain functions for monthly data of region 1.
6.23 Examples of gain functions for monthly data of region 2.
6.24 Examples of phase functions for monthly data of regions 1 and 2
6.25 Examples of coherence functions for monthly data of region 3
6.26 Examples of cross-correlation functions for monthly data of region 3
6.27 Examples of coherence functions for monthly runoff series
6.28 Examples of cross-correlation functions for monthly runoff series
6.29 Average coherence function for monthly runoff series and variance of the previous function .
6.30 Average cross-correlation function for monthly runoff series and variance of the previous function
6.31 Examples of phase functions for monthly runoff series
6.32 Examples of gain functions for monthly runoff series
6.33 Coherence and phase functions between monthly precipitation at Auburn (California) and monthly runoff of the Middle Fork American River at Auburn
6.34 Comparison of spectra obtained in the joint analysis of monthly precipitation at Auburn and monthly runoff of the Middle Fork American River at Auburn
6.35 Gain function of monthly runoff of the Middle Fork American River at Auburn based on monthly precipitation at Auburn
6.36 Comparison of the actual and predicted annual cycle of the Middle Fork American River at Auburn .
6.37 Examples of coherence functions for annual data of region 1
6.38 Examples of cross-correlation functions for annual data of region 1
6.39 Power spectrum of the Wolf River's annual standardized flows
6.40 Power spectrum of the Fox River's annual standardized flows
6.41 Coherence function between the annual flows of the Wolf and the Fox Rivers
6.42 Boise River spectral densities: (1) daily flows, (2) stochastic components of daily flows (after Quimpo, 1967).
6.43 Coherence function between stochastic components of daily flows of the Boise and St. Maries Rivers
LIST OF TABLES

6.1 Cross-spectral characteristics as functions of distance for monthly precipitation series (region no. 1)
6.2 Cross-spectral characteristics as functions of distance for monthly precipitation series (region no. 2)
6.3 Cross-spectral characteristics as functions of distance for monthly runoff series
ABSTRACT
The main objective of this paper is to study the potentials of cross-spectrum and multiple cross-spectrum techniques for the analysis of hydrologic data.
Groups of precipitation and runoff stations were selected in different climatic environments, and complete cross-spectral analyses were performed between those stations. The coherence and partial coherence functions were used for the study of frequency correlations between the series, and they show that there exists a very strong correlation between the annual cycles of the stations. Along the Pacific Coast of the United States the annual cycle in precipitation appears to be basically the same up to distances of 1000 km.
Cyclic regression analysis with the use of the gain and phase functions is shown to work correctly in hydrologic time series. This type of regression may be very useful in regions where frequency components account for a large percentage of the variance of the series.
Cross-spectral characteristics of the moving average and autoregressive processes are shown to be a powerful tool in testing and analyzing these types of generating processes in hydrology. Of special significance is the coherence between two first-order autoregressive processes, which is shown to be equal to a constant independent of frequency.
The effects of smoothing or pre-filtering on the cross-spectral properties of two series are studied, and recommendations are made for this practice, which is frequently used in hydrology.
THE APPLICATION OF CROSS-SPECTRAL ANALYSIS TO HYDROLOGIC TIME SERIES
by Ignacio Rodríguez-Iturbe*
CHAPTER I
INTRODUCTION
1. Significance of the Study. Many studies have been performed with regard to the spectral characteristics of hydrologic data, but problems involving the simultaneous behavior of two or more series have not been worked on in a wide variety of fields of application, although enough has been done to point the way and suggest the possibilities of hydrologic spectral analyses.
There is an increasing number of problems in the geophysical sciences which can be approached and solved by multiple regression analysis. They can also be effectively studied by multiple spectral techniques, which are precise analogs of multiple regression in spirit and, if care is taken in choice, in the algebraic form of their basic equations (Tukey, 1961). The differences which arise in the development stem from: (a) the fact that regression goes on separately at each frequency, and (b) the fact that regression coefficients take complex values rather than real values, which enables one to learn more about the underlying relationship.

In studying time series, as in its more classical situations, regression analysis is a more sensitive and powerful form of analysis than variance component analysis whenever there is a suitable regression variable. As a consequence, one major reason for learning about spectrum analysis is as a foundation for learning about cross-spectrum analysis (Tukey, 1961).
2. Main Objectives of This Study. The main objective of this study is to look into the possibilities of the still young techniques of cross-spectrum and multiple cross-spectrum for the analysis of hydrologic data.

Two general approaches may be taken when analyzing interrelations between hydrologic data. The first approach is to represent a given process by a multidimensional model and then study the characteristics of this model, specifically the covariance and spectral matrices. The second approach is to consider each series as a realization of the process and study the interrelations between these realizations. Both approaches are considered here.
Five specific aspects are specially stressed:

(1) The use of the coherence and the partial coherence (defined in Chapter II-3) as correlation measures between the different frequency components of hydrologic time series, and how these measures compare with the classical time-domain methods.

(2) Frequency regression analysis of hydrologic time series by using the gain function (defined in Chapter III-1) as a regression coefficient at each frequency.

(3) Potentials in using cross-spectral analysis to study input-output relationships in hydrologic systems.

(4) Cross-spectral characteristics of generating processes in common usage in hydrology.

(5) Effects of smoothing of time series on the coherence and phase functions between two series.
CHAPTER II

GENERAL MATHEMATICAL TECHNIQUES FOR CROSS-SPECTRAL ANALYSIS

1. Stationary Random Processes. A random process {x_k(t)}, −∞ < t < ∞, is an ensemble of real-valued or complex-valued functions which can be characterized through its probability structure. The variable t can represent any characteristic of a process, although for convenience it will be interpreted as time in the following discussion. Each particular function x_k(t), where t is a variable and k is fixed, is called a sample function. In practice a sample function may be thought of as the observed result of a single experiment.

A particular sample function x_k(t) is, in general, not suitable for representing the entire random process. It is one of the main goals of statistics to estimate the properties of the entire process on the basis of particular sample functions.
Consider two arbitrary random processes {x_k(t)} and {y_k(t)} with mean values

    μ_x(t) = E[x_k(t)]    (2-1)

    μ_y(t) = E[y_k(t)]    (2-2)

Their autocovariance functions are defined, at arbitrary fixed values of t and t − τ, by

    σ_x(t, t−τ) = E[(x_k(t) − μ_x(t)) (x_k(t−τ) − μ_x(t−τ))]    (2-3)

    σ_y(t, t−τ) = E[(y_k(t) − μ_y(t)) (y_k(t−τ) − μ_y(t−τ))]    (2-4)

Similarly, the cross-covariance function is defined by

    σ_xy(t, t−τ) = E[(x_k(t) − μ_x(t)) (y_k(t−τ) − μ_y(t−τ))]    (2-5)

In general, all the preceding quantities vary for different values of t and τ.
Other statistical quantities can be defined over the ensemble by fixing three or more times. The probability structure is thus described in finer and finer detail by increasing the number of fixed times. If all possible probability distributions involving x_k(t) are independent of the absolute times t_1, t_2, ..., t_n, ..., and are only functions of the intervals between them, then the process is said to be strongly stationary. If only the first n probability distributions are independent of the absolute times, the process is called nth-order stationary. In order to prove nth-order stationarity it is only necessary to prove that the nth probability density is independent of absolute times, because the first (n−1) probability densities are obtained from the nth density by successive integrations.

In the special case of a Gaussian process, the mean value and the covariance function provide a complete description of the underlying probability structure. In this case second-order stationarity, or weak stationarity, is equivalent to strong stationarity, because the former implies that the mean and covariance function are independent of absolute times, and this in turn implies that all the possible probability distributions are independent of absolute times, since all of them may be derived from the mean value and the covariance function.
2. Spectral Density Functions. It will now be assumed that, for the two stationary processes {x_k(t)} and {y_k(t)}, the functions σ_x(τ), σ_y(τ) and σ_xy(τ) exist and have Fourier transforms S_x(f), S_y(f) and S_xy(f) given by

    S_x(f) = ∫_{−∞}^{∞} σ_x(τ) e^{−2πifτ} dτ    (2-6)

    S_y(f) = ∫_{−∞}^{∞} σ_y(τ) e^{−2πifτ} dτ    (2-7)

    S_xy(f) = ∫_{−∞}^{∞} σ_xy(τ) e^{−2πifτ} dτ    (2-8)

S_x(f) and S_y(f) are defined as the power spectra of the stochastic processes {x_k(t)} and {y_k(t)}. S_xy(f) is defined as the cross-spectrum function between these processes.
It is convenient to define the so-called physically realizable one-sided power spectra and cross-spectrum functions. These functions, given by

    G_x(f) = 2 S_x(f),  0 ≤ f < ∞, otherwise zero    (2-9)

    G_y(f) = 2 S_y(f),  0 ≤ f < ∞, otherwise zero    (2-10)

    G_xy(f) = 2 S_xy(f),  0 ≤ f < ∞, otherwise zero    (2-11)

are the quantities measured by direct procedures in practice.
In the case of real-valued processes all the previous equations may be simplified. The real-valued two-sided power spectrum is obtained from equation 2-6 as

    S_x(f) = ∫_{−∞}^{∞} σ_x(τ) cos 2πfτ dτ    (2-12)

Due to the fact that the covariance is an even function,

    S_x(f) = 2 ∫_{0}^{∞} σ_x(τ) cos 2πfτ dτ    (2-13)

and

    G_x(f) = 4 ∫_{0}^{∞} σ_x(τ) cos 2πfτ dτ    (2-14)

for 0 ≤ f < ∞, otherwise zero.

The physically realizable one-sided cross-spectrum function can also be expressed as

    G_xy(f) = 2 ∫_{0}^{∞} σ_xy(τ) e^{−2πifτ} dτ    (2-15)
and, being a complex number, it can be written as

    G_xy(f) = C_xy(f) − i Q_xy(f)    (2-16)

where C_xy(f) and Q_xy(f) are the co-spectrum and the quadrature spectrum, respectively. Following Bendat and Piersol (1966), the co-spectrum can be thought of as the average product of x(t) and y(t) within a narrow frequency interval, between f and f + Δf, divided by the frequency interval Δf. The quadrature spectrum is the same except that either x(t) or y(t), but not both, is shifted in time sufficiently to produce a 90-degree phase shift at the frequency f. In this manner, C_xy(f) is a measure of the in-phase covariance, and Q_xy(f) is a measure of the out-of-phase covariance. In more practical words, the co-spectrum measures the contribution of oscillations of different frequencies to the total cross-covariance at lag zero between two time series. The quadrature spectrum measures the contribution of the different harmonics to the total cross-covariance between the series when all the harmonics of the series x(t) are delayed by a quarter period but the series y(t) remains unchanged (Panofsky and Brier, 1958).
Equation 2-16 may be inverted to give the cross-covariance function

    σ_xy(τ) = ∫_{0}^{∞} [C_xy(f) cos 2πfτ + Q_xy(f) sin 2πfτ] df    (2-17)

Because σ_xy(τ) satisfies the relation σ_xy(τ) = σ_yx(−τ), the co-spectrum may be expressed as

    C_xy(f) = ∫_{0}^{∞} [σ_xy(τ) + σ_yx(τ)] cos 2πfτ dτ = C_xy(−f)    (2-18)

meaning that C_xy(f) displays symmetry about the ordinate.
Similarly, the quadrature spectrum may be expressed as

    Q_xy(f) = ∫_{0}^{∞} [σ_xy(τ) − σ_yx(τ)] sin 2πfτ dτ = −Q_xy(−f)    (2-19)

meaning that Q_xy(f) is an odd function. From equations 2-18 and 2-19 and from the definitions

    G_yx(f) = C_yx(f) − i Q_yx(f)

    G_xy(f) = C_xy(f) − i Q_xy(f)

one obtains

    C_xy(f) = (1/2) [G_xy(f) + G_yx(f)]    (2-20)

    Q_xy(f) = (1/2i) [G_yx(f) − G_xy(f)]    (2-21)

An alternative way to describe G_xy(f) is by the complex polar form

    G_xy(f) = |G_xy(f)| e^{−iθ_xy(f)},  0 ≤ f < ∞    (2-22)

where

    |G_xy(f)| = [C²_xy(f) + Q²_xy(f)]^{1/2}

and

    θ_xy(f) = tan⁻¹ [Q_xy(f) / C_xy(f)]    (2-23)

which is called the phase function.

The physical meaning of G_xy(f) and θ_xy(f), and the role they play in a linear system, will be explained in Chapter III-2. By interchanging x(t) and y(t) one finds that C_yx(f) = C_xy(f) and Q_yx(f) = −Q_xy(f). Therefore, one can write

    G_yx(f) = G*_xy(f)    (2-24)

and

    G_yx(f) = G_xy(−f)    (2-25)

where G*_xy(f) denotes the complex conjugate of G_xy(f).
3. Coherence Functions. The coherence function is a real-valued quantity γ²_xy(f) defined as

    γ²_xy(f) = |S_xy(f)|² / [S_x(f) S_y(f)]    (2-26)

For a better understanding of the coherence function it is useful to make an analogy with the classical results of correlation and regression analysis.
In statistical analysis of real variables, the correlation coefficient between two variables x and y with mean values of zero is defined as

    ρ_xy = E[xy] / (σ_x σ_y) = cov[x, y] / (σ_x σ_y)    (2-27)

where σ²_x and σ²_y represent the variances of x and y, respectively.

Similarly, if complex numbers X and Y are being considered, the square of the correlation coefficient becomes

    ρ²_xy = |E[XY*]|² / (E[XX*] E[YY*])    (2-28)

where the (*) symbol represents the complex conjugate of the term in question. Rewriting equation 2-28 one gets

    ρ²_xy = E[XY*] E[X*Y] / (E[XX*] E[YY*])    (2-29)

From equation 2-29 it is seen that the coherence function may be thought of as a correlation coefficient squared if we replace σ_xy with S_xy(f), σ²_x with S_x(f), and σ²_y with S_y(f). We will proceed to show the meaning of these changes.
Cramér's representation of a stationary process with zero mean gives

    x(t) = ∫_{−∞}^{∞} e^{2πitf} dz_x(f)    (2-30)

where we have written dz_x(f) for z_x(df), and z_x(f) is an orthogonal set function with

    E |dz_x(f)|² = dS_x(f) = S_x(f) df    (2-31)

S_x(f) being the power spectrum of the process {x(t)}. Similarly we have

    E |dz_y(f)|² = dS_y(f) = S_y(f) df    (2-32)

and

    E [dz_x(f) · dz*_y(f)] = dS_xy(f) = S_xy(f) df    (2-33)

Comparing equations 2-31, 2-32 and 2-33 with the expressions
    σ²_x = E[xx*] = E|x|²    (2-34)

    σ²_y = E[yy*] = E|y|²    (2-35)

    σ_xy = E[xy*]    (2-36)

it may be seen that the coherence function can be interpreted as a correlation coefficient squared between the spectral variables z_x(f) and z_y(f), calculated at each frequency f.

It should now be clear that it is often advantageous to study correlation problems in the frequency domain rather than in the time domain. Working in the frequency domain, any stationary series can be considered as a sum of components or frequency bands, each component being statistically independent of the others. One of the important things that the theory of stationary processes tells us is that not only is the component with center f_j independent of all the other components of the process, but it is also independent of all components of another process except for the component centered on f_j. In this manner, when the coherence between two time series is calculated, one looks for correlations between them in a very small range of frequencies. On the other hand, with the cross-covariance function one is looking for correlations between the two processes considering each one as a whole.
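As a numerical illustration of this point (not part of the original paper's computations; the sample size and segment length are arbitrary choices), a simple segment-averaged coherence estimate approaches one at every frequency when one series is an exact linear function of the other, and stays small when the two series are independent:

```python
import numpy as np

def coherence(x, y, seg_len=128):
    """Segment-averaged squared coherence |S_xy|^2 / (S_x S_y).
    Averaging over segments is essential: with a single segment the
    estimate is identically one whatever the true coherence."""
    n_seg = len(x) // seg_len
    win = np.hanning(seg_len)
    Sxx = np.zeros(seg_len // 2 + 1)
    Syy = np.zeros(seg_len // 2 + 1)
    Sxy = np.zeros(seg_len // 2 + 1, dtype=complex)
    for k in range(n_seg):
        xs = x[k * seg_len:(k + 1) * seg_len] * win
        ys = y[k * seg_len:(k + 1) * seg_len] * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Sxx += np.abs(X) ** 2
        Syy += np.abs(Y) ** 2
        Sxy += X * np.conj(Y)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(0)
x = rng.standard_normal(2048)
coh_linear = coherence(x, 2.0 * x)                    # exact linear relation
coh_indep = coherence(x, rng.standard_normal(2048))   # independent series
```

With sixteen averaged segments the expected coherence between independent series is roughly the reciprocal of the number of segments, which is why a significance level (Section 7) is needed before interpreting small values.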
4. Partial Coherence Functions. Consider two real-valued stationary processes {x(t)} and {y(t)} and assume that the mean values are zero in order to simplify the notation. The residual random variable Δy(t) of y(t) from x(t) is defined by

    Δy(t) = y(t) − ŷ(t)    (2-37)

where ŷ(t) is the least squares prediction of y(t) from x(t),

    ŷ(t) = (σ_xy / σ_xx) x(t)    (2-38)

Consider now three real-valued stationary random processes {x_1(t)}, {x_2(t)} and {y(t)}, where the mean values are assumed to be zero. One can define the partial correlation coefficient ρ_1y·2 by

    ρ_1y·2 = σ_Δx₁Δy / (σ_Δx₁Δx₁ σ_ΔyΔy)^{1/2} = σ_1y·2 / (σ_11·2 σ_yy·2)^{1/2}    (2-39)

where

    σ_Δx₁Δx₁ = σ_11·2 = σ_11 (1 − ρ²_12)    (2-40)

    σ_ΔyΔy = σ_yy·2 = σ_yy (1 − ρ²_2y)    (2-41)

    σ_Δx₁Δy = σ_1y·2 = σ_1y [1 − σ_12 σ_2y / (σ_22 σ_1y)]    (2-42)

Here Δx_1(t) and Δy(t) are the residuals of x_1(t) and y(t) after least squares prediction from x_2(t).

Similar to the partial correlation coefficient in the time domain, it is possible to define in the frequency domain a partial coherence function between x_1(t) and y(t), with x_2(t) removed from both at every t by least squares prediction, as

    γ²_1y·2(f) = |S_1y·2(f)|² / [S_11·2(f) S_yy·2(f)]    (2-43)

The terms in equation 2-43 are called residual or partial spectra and are defined by

    S_1y·2(f) = S_1y(f) [1 − S_12(f) S_2y(f) / (S_22(f) S_1y(f))]    (2-44)

    S_11·2(f) = S_11(f) [1 − γ²_12(f)]    (2-45)

    S_yy·2(f) = S_yy(f) [1 − γ²_2y(f)]    (2-46)

The proof that the partial coherence is nothing else but an analog of the partial correlation coefficient between the spectral variables, calculated at each frequency f, can be carried out by following the same procedure used for the normal coherence.
The case of multiple processes is only a generalization of the three-variable case explained before. The partial coherence function between x_1(t) and y(t), with x_2(t), x_3(t), ..., x_n(t) removed at every t by least squares prediction, is defined by

    γ²_1y·23...n(f) = |S_1y·23...n(f)|² / [S_11·23...n(f) S_yy·23...n(f)]    (2-47)

The definition and calculation of the partial spectra of formula 2-47 have been done in matrix form by Goodman (1965), in a form very suitable for the use of high-speed digital computers. Their meaning is essentially the same as those of formula 2-43.

Similarly to the development made for the partial coherence function, it is possible to define the partial phase function between x_1(t) and y(t), with x_2(t), x_3(t), ..., x_n(t) removed at every t by least squares prediction, as

    θ_1y·23...n(f) = tan⁻¹ [Imag. part of S_1y·23...n(f) / Real part of S_1y·23...n(f)]    (2-48)
5. Application of Partial Coherence Functions. When more than two variables are being considered, the partial coherence function, rather than the ordinary coherence, gives a quantitative indication of the degree of linear dependence between the variables. An example of erroneous high coherence is shown in Figure 2.1.

Assume that a coherence function value near unity is computed between the variables x_1(t) and y(t). One would be inclined to believe that there is a linear system relating these two variables.

Figure 2.1 Example of erroneous high coherence (Bendat and Piersol, 1966).
Suppose there is a third variable x_2(t) which is highly coherent with x_1(t) and also passes through a linear system to make up y(t). In this type of situation, the high coherence computed between x_1(t) and y(t) might only be due to the fact that x_2(t) is highly coherent with x_1(t). If this is in fact the situation, the partial coherence between x_1(t) and y(t) will be very low.

On the other hand, the opposite situation can exist. If two uncorrelated inputs x_1(t) and x_2(t) pass through existing linear systems to make up the output y(t), the coherence functions γ²_1y(f) and γ²_2y(f) will appear less than unity, since there will exist a contribution due to the other input which will appear as noise. If the partial coherences are computed, the effects of the other input will be subtracted out and the true coherence will be obtained.
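The first of these two situations can be reproduced numerically. In the hypothetical sketch below (illustrative only; the signal-to-noise ratios, the lag and the segment counts are arbitrary choices), x_2 drives both x_1 and y, so the ordinary coherence between x_1 and y is high while the partial coherence with x_2 removed, computed from the residual spectra of equations 2-44 to 2-46, is near zero:

```python
import numpy as np

def avg_spectra(series, seg_len=256):
    """Segment-averaged spectral matrix entries S[a, b] = <A conj(B)>."""
    win = np.hanning(seg_len)
    n_seg = len(series[0]) // seg_len
    ffts = []
    for s in series:
        segs = s[:n_seg * seg_len].reshape(n_seg, seg_len) * win
        ffts.append(np.fft.rfft(segs, axis=1))
    S = {}
    for a in range(len(series)):
        for b in range(len(series)):
            S[a, b] = (ffts[a] * np.conj(ffts[b])).mean(axis=0)
    return S

rng = np.random.default_rng(1)
n = 1 << 14
x2 = rng.standard_normal(n)                        # common driving input
x1 = x2 + 0.1 * rng.standard_normal(n)             # highly coherent with x2
y = np.roll(x2, 5) + 0.1 * rng.standard_normal(n)  # driven by x2, not by x1

S = avg_spectra([x1, y, x2])                       # indices: 0 = x1, 1 = y, 2 = x2

# Ordinary coherence between x1 and y (equation 2-26): spuriously high.
coh = np.abs(S[0, 1]) ** 2 / (S[0, 0].real * S[1, 1].real)

# Residual spectra with x2 removed (equations 2-44 to 2-46), then
# the partial coherence of equation 2-43: near zero.
S1y_2 = S[0, 1] - S[0, 2] * S[2, 1] / S[2, 2]
S11_2 = S[0, 0].real - np.abs(S[0, 2]) ** 2 / S[2, 2].real
Syy_2 = S[1, 1].real - np.abs(S[2, 1]) ** 2 / S[2, 2].real
partial = np.abs(S1y_2) ** 2 / (S11_2 * Syy_2)
```

Once the common input x_2 is predicted out, only the two independent noise terms remain, and the partial coherence collapses to its sampling bias.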
6. Procedure of Computation. All the computations were carried out on a CDC 6600 digital computer.

The first step in computing the spectrum is the calculation of the autocovariance function of the series according to the formula

    σ̂_xx(i) = [1/(N−i)] Σ_{t=1}^{N−i} x_t x_{t+i} − [1/(N−i)²] (Σ_{t=i+1}^{N} x_t)(Σ_{t=1}^{N−i} x_t)    (2-49)

for i = 0, 1, 2, ..., m, where m is the maximum number of lags and N the number of observations.

Next, the finite cosine series transform function of the autocovariances is calculated according to the formula (Blackman and Tukey, 1958)

    G_x(i) = 2 Σ_{j=0}^{m} σ̂'_xx(j) cos (πij/m)    (2-50)

where

    σ̂'_xx(0) = σ̂_xx(0)

    σ̂'_xx(i) = 2 σ̂_xx(i)  for 1 ≤ i ≤ m−1    (2-51)

    σ̂'_xx(m) = σ̂_xx(m)
The spectrum is calculated according to the spectral window formulas (Blackman and Tukey, 1958):

    Ĝ_x(0) = 0.5 G_x(0) + 0.5 G_x(1)

    Ĝ_x(i) = 0.25 G_x(i−1) + 0.50 G_x(i) + 0.25 G_x(i+1),  i = 1, 2, ..., m−1    (2-52)

    Ĝ_x(m) = 0.5 G_x(m−1) + 0.5 G_x(m)

In computing the cross-spectra, the first step is the calculation of the cross-covariance functions of series x and y,

    σ̂_xy(i) = [1/(N−i)] Σ_{t=1}^{N−i} x_t y_{t+i} − [1/(N−i)²] (Σ_{t=i+1}^{N} y_t)(Σ_{t=1}^{N−i} x_t)    (2-53)

    σ̂_yx(i) = [1/(N−i)] Σ_{t=1}^{N−i} y_t x_{t+i} − [1/(N−i)²] (Σ_{t=i+1}^{N} x_t)(Σ_{t=1}^{N−i} y_t)    (2-54)

Next, the cross-covariance transform functions are computed according to the formulas (Granger and Hatanaka, 1964):

    C_xy(i) = Σ_{j=0}^{m} [σ̂'_xy(j) + σ̂'_yx(j)] cos (πij/m)    (2-55)

    Q_xy(i) = Σ_{j=0}^{m} [σ̂'_xy(j) − σ̂'_yx(j)] sin (πij/m)    (2-56)

for i = 0, 1, 2, ..., m, where

    σ̂'_xy(0) = σ̂_xy(0)

    σ̂'_xy(i) = 2 σ̂_xy(i)  for 1 ≤ i ≤ m−1    (2-57)

    σ̂'_xy(m) = σ̂_xy(m)

and similarly for σ̂'_yx.
In order to obtain the real and imaginary parts of the cross-spectrum, the cross-covariance transform functions are weighted according to the spectral window formulas:

    Ĉ_xy(0) = 0.5 C_xy(0) + 0.5 C_xy(1)

    Ĉ_xy(i) = 0.25 C_xy(i−1) + 0.50 C_xy(i) + 0.25 C_xy(i+1),  i = 1, 2, ..., m−1    (2-58)

    Ĉ_xy(m) = 0.5 C_xy(m−1) + 0.5 C_xy(m)

and

    Q̂_xy(0) = 0.5 Q_xy(0) + 0.5 Q_xy(1)

    Q̂_xy(i) = 0.25 Q_xy(i−1) + 0.50 Q_xy(i) + 0.25 Q_xy(i+1),  i = 1, 2, ..., m−1    (2-59)

    Q̂_xy(m) = 0.5 Q_xy(m−1) + 0.5 Q_xy(m)

The resulting Ĉ_xy(i) and Q̂_xy(i) are the estimated co-spectrum and quadrature spectrum, respectively.
The gain of series y on series x, being similar to a regression coefficient at each frequency, is computed by

    |Ĥ(i)| = [Ĉ²_xy(i) + Q̂²_xy(i)]^{1/2} / Ĝ_x(i)    (2-60)

for i = 0, 1, 2, ..., m. The phase is estimated by

    θ̂(i) = tan⁻¹ [Q̂_xy(i) / Ĉ_xy(i)]    (2-61)

for i = 0, 1, 2, ..., m, and the coherence becomes

    γ̂²(i) = [Ĉ²_xy(i) + Q̂²_xy(i)] / [Ĝ_x(i) Ĝ_y(i)]    (2-62)

for i = 0, 1, 2, ..., m.

Frequently, unstable values of the gain are obtained in the higher frequencies of the analysis. These values reflect no more than the rounding-off error in the division of two small quantities (Jenkins, 1963). This unreliability carries over into the coherence function and the phase angle, and it is common to obtain nonsense values for these functions in the higher frequencies of the analysis. Because of this, the gain, the coherence and the phase should always be interpreted in the light of the information given by the cross-amplitude curve, defined as

    |Ĝ_xy(i)| = [Ĉ²_xy(i) + Q̂²_xy(i)]^{1/2}    (2-63)

for i = 0, 1, 2, ..., m.
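The computational chain of equations 2-49 to 2-62 can be sketched as follows. This is a reconstruction for illustration, not the original CDC 6600 program, and the transcription of some constant factors in the lag-window formulas is uncertain, so treat the constants as indicative. A useful built-in check, independent of those factors, is that running the routine with y = x must return a coherence of one and a zero quadrature spectrum at every frequency:

```python
import numpy as np

def autocov(x, m):
    """Mean-corrected autocovariances, equation 2-49."""
    N = len(x)
    return np.array([
        (x[:N - i] * x[i:]).sum() / (N - i)
        - x[i:].sum() * x[:N - i].sum() / (N - i) ** 2
        for i in range(m + 1)])

def crosscov(x, y, m):
    """Cross-covariances sigma_xy(i), equation 2-53."""
    N = len(x)
    return np.array([
        (x[:N - i] * y[i:]).sum() / (N - i)
        - y[i:].sum() * x[:N - i].sum() / (N - i) ** 2
        for i in range(m + 1)])

def _prime(c):
    """Endpoint weighting of equations 2-51 and 2-57."""
    cp = 2.0 * c.copy()
    cp[0], cp[-1] = c[0], c[-1]
    return cp

def _hann(g):
    """Spectral window smoothing of equations 2-52, 2-58 and 2-59."""
    out = np.empty_like(g)
    out[0] = 0.5 * g[0] + 0.5 * g[1]
    out[1:-1] = 0.25 * g[:-2] + 0.5 * g[1:-1] + 0.25 * g[2:]
    out[-1] = 0.5 * g[-2] + 0.5 * g[-1]
    return out

def cross_spectral(x, y, m):
    i = np.arange(m + 1)
    cosine = np.cos(np.pi * np.outer(i, i) / m)
    sine = np.sin(np.pi * np.outer(i, i) / m)
    Gx = _hann(2.0 * cosine @ _prime(autocov(x, m)))   # eqs 2-50, 2-52
    Gy = _hann(2.0 * cosine @ _prime(autocov(y, m)))
    sxy, syx = _prime(crosscov(x, y, m)), _prime(crosscov(y, x, m))
    C = _hann(cosine @ (sxy + syx))                    # eqs 2-55, 2-58
    Q = _hann(sine @ (sxy - syx))                      # eqs 2-56, 2-59
    gain = np.sqrt(C ** 2 + Q ** 2) / Gx               # eq 2-60
    phase = np.arctan2(Q, C)                           # eq 2-61
    coh = (C ** 2 + Q ** 2) / (Gx * Gy)                # eq 2-62
    return Gx, Gy, C, Q, gain, phase, coh

rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
Gx, Gy, C, Q, gain, phase, coh = cross_spectral(x, x, m=20)
```

With y = x the two cross-covariance directions coincide, so Q vanishes identically and Ĉ reduces to Ĝ_x, which forces the estimated coherence to one at every lag index.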
7. Confidence Limits. The distribution of the cross-spectral estimates has been studied by Goodman (1957), with the main assumption being that the process (x_t, y_t) has a bivariate normal distribution. If the sample size is N and the cross-spectrum is estimated over m frequency bands, the distribution of the estimated coherence, γ̂²(f), when the true coherence is zero at a given frequency, is given by

    F(u) = 1 − (1 − u)^{n−1},  n = N/m    (2-64)

Equation 2-64 enables one to fix confidence limits for the coherence. Tables which give these limits have been presented by Granger and Hatanaka (1964).

Goodman's work also provides a frequency function for the estimated phase angle, θ̂(f). This frequency function is extremely complicated, but two important simplifications are noted by Granger and Hatanaka (1964):

i) When the true coherence is zero, θ̂(f) is rectangularly distributed over the entire admissible range of values.

ii) When the true coherence is one, the variance of θ̂(f) is zero.

Jenkins (1962) deduces that θ̂(f) is approximately normally distributed, with mean θ(f) and variance given by

    Var(θ̂(f)) ≈ (1/2) (km/N) [1/γ²(f) − 1]    (2-65)

where m and N are the same as in equation 2-64 and k is a constant associated with the particular spectral window used. The values of k are described by Parzen (1961).

Jenkins' approach has been used here to fix the confidence limits for the phase estimates.
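Under the zero-coherence distribution quoted above (the sketch below assumes the form F(u) = 1 − (1 − u)^(n−1) with n = N/m equivalent degrees of freedom; it is an illustrative reconstruction, not a table from Granger and Hatanaka), the (1 − α) significance level for the estimated coherence follows by inverting F:

```python
# Threshold u* such that P(estimated coherence > u*) = alpha when the
# true coherence is zero, from F(u) = 1 - (1 - u)**(n - 1), n = N/m.
def coherence_threshold(N, m, alpha=0.05):
    n = N / m          # equivalent degrees of freedom (assumed form)
    return 1.0 - alpha ** (1.0 / (n - 1.0))

# Hypothetical example: 600 observations analysed over m = 30 lags.
u_star = coherence_threshold(600, 30, alpha=0.05)
```

Estimated coherence values below the threshold are consistent with a true coherence of zero at that frequency; note that the threshold rises quickly as the number of lags m approaches the sample size N.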
CHAPTER III
MATHEMATICAL TECHNIQUES OF SPECTRAL ANALYSIS FOR LINEAR SYSTEMS

1. Frequency Response Functions. A physically realizable, constant-parameter linear system is defined by the convolution integral

    y(t) = ∫_{0}^{∞} h(τ) x(t−τ) dτ    (3-1)

In practice the measured output may also contain a noise term n(t), which arises because the input and output variables may not be well controlled; n(t) may also include quadratic and higher terms omitted in the linear approximation.
The value of the output y(t) is given as a weighted linear sum over the entire history of the input x(t). The weighting function h(τ) associated with the system is defined as the output or response of the system to a unit impulse function, and is measured as a function of time, τ, from the moment of occurrence of the impulse input.

The dynamic characteristics of this type of system can be represented by the Fourier transform of h(τ),

    H(f) = ∫_{0}^{∞} h(τ) e^{−i2πfτ} dτ    (3-2)

The frequency response function is of great interest since it contains both amplitude magnification and phase shift information. Since H(f) is complex-valued, it can be expressed as

    H(f) = |H(f)| e^{−iφ(f)}    (3-3)

The absolute value |H(f)| is called the system gain factor, and the angle φ(f) is called the system phase factor. From equations 3-1 and 3-3 it is easily shown that the response of the system to a sinusoidal input of the type

    x(t) = a sin (2πft + θ)    (3-4)

can be expressed as

    y(t) = a |H(f)| sin [2πft + θ + φ(f)]    (3-5)

Therefore, the gain |H(f)| measures the amplitude magnification at frequency f when the input is a sinusoid of frequency f, while φ(f) gives the corresponding phase shift.
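Relation 3-5 is easy to verify numerically. For a hypothetical two-point smoothing filter h = (0.5, 0.5), chosen only for illustration, the discrete analog of H(f) is 0.5(1 + e^(−i2πf)), with gain cos(πf) and a phase lag of πf, i.e. half a sampling interval; a sinusoid passed through the filter comes out with exactly this attenuation and shift:

```python
import numpy as np

f = 0.1                                  # cycles per sampling interval
t = np.arange(200, dtype=float)
x = np.sin(2 * np.pi * f * t)            # input sinusoid (equation 3-4)

h = np.array([0.5, 0.5])                 # illustrative weighting function
y = np.convolve(x, h)[:len(x)]           # filter output

gain = np.cos(np.pi * f)                 # |H(f)| for this filter
y_pred = gain * np.sin(2 * np.pi * f * (t - 0.5))   # equation 3-5 form

# After the first sample (start-up of the convolution) the two agree.
err = np.abs(y[1:] - y_pred[1:]).max()
```

The agreement is exact here because 0.5[sin a + sin b] = sin((a+b)/2) cos((a−b)/2); for a general filter the same gain and phase follow from the transform in equation 3-2.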
2. Single Input Linear Systems. From the start, it is good to notice that most of the responses in geophysical systems are nonlinear. When the deviations from the linear case are not too large, the output can be written in the form

    y(t) = ∫_{0}^{∞} h(τ) x(t−τ) dτ + n(t)    (3-6)

where n(t) is the noise term discussed above. If x(t) and y(t) may be regarded as stationary time series, and n(t) can be neglected, the following relations can be shown to hold for the system represented by equation 3-1 (Enochson, 1964):
G (f)
xy H(f) G X (f)
From equation 3-8 we get:
and X
e
(c)=
q,(r) xy (3-7) (3-8) (3-9) (3-10)Equation 3-7 contains only the ga:in factor and in this manner it only gives amplitude information. Equation 3-8 is actually a pair of equations containing both the gain and the phase factor. By means of
equa-tion 3-8, if the input and corresponding output of a
system are known, we can estimate H(f) which will be of great importance in predicting future responses of
the system. If the input x(t) in equation 3-6 is of
the type x(t) • a cos (2rrft + t), the ouput of the system is:
y(t) = a |H(f)| cos[2πft + θ + φ(f)] + n(t)     (3-11)

where the spectrum of the residual term n(t) is given by Jenkins (1963) as

G_nn(f) = G_yy(f) [1 − γ²_xy(f)]     (3-12)

G_nn(f) will give an idea of possible other periodicities in the series y(t) which are not shared by x(t).
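The estimation procedure implied by equation 3-8 can be sketched numerically. In the example below (not part of the original study; the impulse response, record length, and the SciPy spectral estimators are assumptions made for illustration), H(f) is estimated as the ratio of the estimated cross-spectrum to the estimated input spectrum:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
n = 20000
x = rng.standard_normal(n)              # stationary random input x(t)

# Hypothetical linear system with impulse response h = (0.5, 0.3)
h = np.array([0.5, 0.3])
y = signal.lfilter(h, [1.0], x)

# Equation 3-8: H(f) = G_xy(f) / G_x(f), using Welch-type estimates
f, Gxy = signal.csd(x, y, nperseg=1024)
_, Gx = signal.welch(x, nperseg=1024)
H_est = Gxy / Gx

# Compare with the exact frequency response of the impulse response h
_, H_true = signal.freqz(h, worN=2 * np.pi * f)
err = np.max(np.abs(H_est - H_true))
print(err)
```

Both the gain |H(f)| and the phase of the estimate are recovered, which is the content of equations 3-9 and 3-10.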
The frequency response function for a constant parameter linear system is a function of frequency only. If the system were nonlinear, the weighting function h(τ) would be a function of the applied input, h_x(τ), and then the frequency response function would be a function of both frequency and applied input. If the parameters of the system were not constant, the dynamic properties would have to be described by a time-varying weighting function, h(τ, t), which is defined as the output of the system at any time t to a unit impulse input at time t − τ. In this case the frequency response function would be a function of both frequency and time.
For a linear system, equations 3-7 and 3-8 may be substituted into the definition of coherence (equation 2-26), giving

γ²_xy(f) = |G_xy(f)|² / [G_x(f) G_y(f)] = 1     (3-13)

Thus, the coherence function may be thought of as a measure of linear relationship in the sense that it attains a theoretical maximum of one for all f in a single input linear system.
Goodman et al. (1961) examined a single input linear system with the assumption that there was noise in the measurement of the output.
Figure 3.1 Linear system with noise in the measurement of the output (Goodman et al., 1961).
Assuming η(t) and x(t) statistically uncorrelated, and all three processes x(t), y(t) and η(t) stationary Gaussian noises, the effect of the disturbance η(t) appears only in the coherence γ²_xy(f), which is now of the form

γ²_xy(f) = 1 / [1 + G_ηη(f)/G_yy(f)]     (3-14)

It is seen from equation 3-14 that the coherence decreases as the size of the disturbance increases.

Enochson (1964) considered a general case of noise in both input and output measuring devices. Assuming that a measured input x(t) and a measured output y(t) are composed of true signals u(t) and v(t) and uncorrelated noise components n(t) and m(t), respectively, as shown below,
Figure 3.2 Linear system with noise in the measurement of the input and output (Enochson, 1964).
then the "desired" coherence function is

γ²_uv(f) = |G_uv(f)|² / [G_u(f) G_v(f)]     (3-15)

but the measured coherence function will be

γ²_xy(f) = γ²_uv(f) / {[1 + G_nn(f)/G_uu(f)] [1 + G_mm(f)/G_vv(f)]}     (3-16)

Thus, theoretically, the measured coherence function will always be less than the desired coherence function.
The concepts outlined above are most important in the analysis of multiple hydrologic time series and in the design of adequate hydrologic instrumentation
(Eagleson and Shack, 1966).
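Equation 3-14 can be checked by simulation. The sketch below (parameters and filter weights are arbitrary assumptions, not taken from the paper) adds increasing amounts of measurement noise to a linear output and watches the mean coherence fall:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
n = 40000
x = rng.standard_normal(n)
y = signal.lfilter([0.7, 0.3], [1.0], x)    # noiseless linear output y(t)

means = []
for noise_scale in (0.0, 1.0, 3.0):
    y_meas = y + noise_scale * rng.standard_normal(n)   # y'(t) = y(t) + eta(t)
    f, coh = signal.coherence(x, y_meas, nperseg=512)
    means.append(coh.mean())
print(means)   # mean coherence drops as G_eta/G_yy grows
```

With no disturbance the coherence is near one at all frequencies; as the noise-to-signal spectral ratio grows, the coherence decreases exactly as equation 3-14 predicts.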
3. Multiple Input Linear Systems. Constant parameter linear systems responding to multiple inputs from stationary random processes will now be considered. It will be assumed that N inputs are occurring with a single output being measured.

Figure 3.3 Multiple input linear system.

The output may be considered as the sum of N partial (non-measured) outputs y_i(t), i = 1, 2, …, N. That is,

y(t) = Σ_{i=1}^{N} y_i(t)     (3-17)

where y_i(t) is defined as that part of the output which is produced by the i-th input when all the other inputs are zero.
The cross-spectral relations between the inputs and the output can be expressed concisely with matrix notation. The following formulation of results is contained in Enochson (1964) and in Bendat and Piersol (1966).

First define an N-dimensional input vector

[x(t)] = [x_1(t), x_2(t), …, x_N(t)]     (3-18)

Let [H(f)] be an N-dimensional frequency response function vector

[H(f)] = [H_1(f), H_2(f), …, H_N(f)]     (3-19)

Next, define an N-dimensional cross-spectrum vector of the output y(t) with the inputs x_i(t),

[S_xy(f)] = [S_1y(f), S_2y(f), …, S_Ny(f)]     (3-20)

where

S_iy(f) = S_{x_i y}(f),   i = 1, 2, …, N     (3-21)

Finally, define the N × N cross-spectral matrix of the inputs x_i(t):

             [ S_11(f)  S_12(f)  …  S_1N(f) ]
[S_xx(f)] =  [ S_21(f)  S_22(f)  …  S_2N(f) ]     (3-22)
             [ S_N1(f)  S_N2(f)  …  S_NN(f) ]

where

S_ij(f) = S_{x_i x_j}(f),   i, j = 1, 2, …, N     (3-23)

The fundamental equation for multiple input, constant parameter linear systems can be written as:

S_y(f) = [H*'(f)] [S_xx(f)] [H(f)]     (3-24)

where [H*'(f)] denotes the complex conjugate transpose vector of [H(f)].

The basic equation which gives the transfer functions H_j(f) in the case of multiple correlated inputs is

[S_xy(f)] = [S_xx(f)] [H(f)]     (3-25)

Equation 3-25 may be rewritten as the system of equations

S_iy(f) = Σ_{j=1}^{N} H_j(f) S_ij(f),   i = 1, 2, …, N     (3-26)

Solving equation 3-25 for [H(f)] we get

[H(f)] = [S_xx(f)]⁻¹ [S_xy(f)]     (3-27)

Equation 3-27 gives each H_i(f) as a function of the input-output cross-spectra and holds whether or not the inputs are correlated.

The solution of equation 3-27 has been presented by Goodman (1965) in the form

H_1(f) = S_{1y·23…N}(f) / S_{11·23…N}(f)
   ⋮
H_N(f) = S_{Ny·12…(N−1)}(f) / S_{NN·12…(N−1)}(f)     (3-28)

where the subscripts following the dot indicate partial spectra and cross-spectra, with the linear effects of the remaining inputs removed. Equation 3-28 will be used in Chapter VI, where monthly rainfall in different parts of a watershed is considered as the multiple input vector which gives as total output the monthly runoff at the outlet.
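Equation 3-25 can be solved numerically frequency by frequency. In the sketch below (the two FIR responses, the common noise component, and the sample sizes are illustrative assumptions, not data from the paper), the transfer functions of a two-input system with correlated inputs are recovered from estimated spectra:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
n = 60000
w = rng.standard_normal((3, n))
x1 = w[0] + 0.5 * w[2]            # two inputs correlated through a common component
x2 = w[1] + 0.5 * w[2]

# Output produced by two hypothetical FIR frequency responses H1 and H2
y = signal.lfilter([0.8, 0.2], [1.0], x1) + signal.lfilter([-0.4, 0.6], [1.0], x2)

nseg = 1024
f, S11 = signal.csd(x1, x1, nperseg=nseg)
_, S22 = signal.csd(x2, x2, nperseg=nseg)
_, S12 = signal.csd(x1, x2, nperseg=nseg)
_, S1y = signal.csd(x1, y, nperseg=nseg)
_, S2y = signal.csd(x2, y, nperseg=nseg)

# Equation 3-25 solved at every frequency: [Sxx(f)] [H(f)] = [Sxy(f)]
H = np.empty((len(f), 2), dtype=complex)
for k in range(len(f)):
    Sxx = np.array([[S11[k], S12[k]], [np.conj(S12[k]), S22[k]]])
    H[k] = np.linalg.solve(Sxx, np.array([S1y[k], S2y[k]]))

_, H1_true = signal.freqz([0.8, 0.2], worN=2 * np.pi * f)
err = np.max(np.abs(H[:, 0] - H1_true))
print(err)
```

Because the full cross-spectral matrix of the inputs is inverted at each frequency, the correct transfer functions are recovered even though the inputs are correlated, which is the point of equation 3-27.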
Chapter IV
THEORY OF CROSS-SPECTRAL ANALYSIS OF LINEARLY DEPENDENT STOCHASTIC PROCESSES
1. Moving Average Process. The moving average process of m-th order is defined by

y(t) = Σ_{j=0}^{m} a_j x(t − j)     (4-1)

where x(t) is a random process uncorrelated with x(t − j) for all j > 0, and the a's are weights assigned to each past value of x(t).
This process may be used, at least as a first approximation, as the generating scheme for certain hydrologic phenomena. For example, consider that runoff for a given interval of time is a function of all climatic factors, present and past, since the beginning of time. The dominant factor is effective precipitation, which is defined as total precipitation less all losses. Because the effect of effective precipitation on the present runoff decreases with an increase in antecedency, each value of effective precipitation must be given a weight whose value decreases with an increase in antecedency. If the present runoff is essentially independent of the effective precipitation beyond the m-th antecedent interval of time, then runoff may be represented as being generated by a moving average of extent m of effective precipitation (Matalas, 1966).
From equations 3-7 and 3-8 we can write the spectrum of the process y(t) as

G_y(f) = |Σ_{t=0}^{m} a_t e^{−i2πft}|² G_x(f)     (4-2)

and the cross-spectral density function G_xy(f) as:

G_xy(f) = Σ_{t=0}^{m} a_t e^{−i2πft} G_x(f)     (4-3)

where the form of G_y(f) has been studied by Siddiqui (1962). Making

H(f) = Σ_{t=0}^{m} a_t e^{−i2πft}     (4-4)

the spectral matrix of the process y(t) can be written as:

[ G_xx(f)  G_xy(f) ]   [ G_x(f)         H(f) G_x(f)    ]
[                  ] = [                                ]     (4-5)
[ G_yx(f)  G_yy(f) ]   [ H*(f) G_x(f)   |H(f)|² G_x(f) ]

where use has been made of equation 2-24.

From equation 4-5 one gets,

γ²_xy(f) = H(f) H*(f) / |H(f)|² = 1     (4-6)

The coherence function is one for all f essentially because the process y(t) is a deterministic linear function of the process x(t). Equation 4-6 provides a simple means of testing the validity of assuming a moving average process.
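A numerical check of equation 4-6 (the weights and record length below are illustrative choices, not from the paper): a series generated as a moving average of an uncorrelated input shows coherence near one at all frequencies.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
x = rng.standard_normal(50000)            # uncorrelated input ("effective precipitation")
a = np.array([0.5, 0.25, 0.15, 0.10])     # weights decreasing with antecedency
y = signal.lfilter(a, [1.0], x)           # moving average "runoff" (equation 4-1)

f, coh = signal.coherence(x, y, nperseg=256)
print(coh.min())   # close to one at every frequency
```

A sample coherence that dips well below one at some frequencies would argue against a pure moving average generating scheme, which is the test equation 4-6 suggests.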
2. Autoregressive Processes. In a wide variety of geophysical problems, multidimensional types of autoregressive schemes are of common application. The n-dimensional autoregressive process of m-th order is defined by

x_i(t) = Σ_{j=1}^{m} Σ_{k=1}^{n} a_ik^(j) x_k(t−j) + z_i(t),   i = 1, 2, …, n     (4-7)

or, in matrix form, [x(t)] = Σ_{j=1}^{m} [a^(j)] [x(t−j)] + [z(t)], where [z_j(t)] is a random component uncorrelated with [x_j(t−k)] and [z_j(t−k)] for all k > 0. In practical cases, the random component [z_j(t)] may be interpreted as the residuals which represent the action of events, other than [x_j(t−k)], affecting [x_j(t)].

Examples of this type of process occur frequently in hydrology. In the case of representing runoff as a moving average process of effective precipitation, m may happen to be very large. It would be a mistake to try to reduce the order of the moving average, because it is convenient to consider the effective precipitation for all antecedent intervals of time, however small their contribution may be to the present runoff. In this case, the generating scheme for the runoff can be represented by an autoregressive process which involves far fewer coefficients than the moving average process (Matalas, 1966).
River flows can be represented in many instances by autoregressive processes (Yevjevich 1964, Roesner 1965, Quimpo 1966).
In the analysis of the cross-spectral characteristics of autoregressive processes, the mathematical complications increase very rapidly with the order of the process. Fortunately, most of the autoregressive processes used in hydrology are the first or second order Markov linear processes. The analysis and results obtained for the cross-spectral characteristics of Markov linear processes are believed to be new by the author.
First order Markov linear process. Let us assume that a certain process, such as annual runoff, can be represented by a first order model,

[x(t)] = [a] [x(t−1)] + [z(t)]     (4-8)

where the terms are the matrices of equation 4-7 when m = 1. Two realizations of this process can be expressed by

[ x_1(t) ]   [ a_11  a_12 ] [ x_1(t−1) ]   [ z_1(t) ]
[ x_2(t) ] = [ a_21  a_22 ] [ x_2(t−1) ] + [ z_2(t) ]     (4-9)

which results from making m = 1, n = 2 in equation 4-7.

Equation 4-9 is equivalent to the system:

x_1(t) = a_11 x_1(t−1) + a_12 x_2(t−1) + z_1(t)     (4-10)

x_2(t) = a_21 x_1(t−1) + a_22 x_2(t−1) + z_2(t)     (4-11)
Quenouille (1957) shows the coefficient matrix of a first order autoregressive scheme to be equal to:

[a_ij] = [σ_ij(1)] [σ_ij(0)]⁻¹     (4-12)

where [σ_ij(1)] is the covariance matrix for lag one,

            [ σ_11(1)  σ_12(1) ]
[σ_ij(1)] = [ σ_21(1)  σ_22(1) ]     (4-13)

and similarly

            [ σ_11(0)  σ_12(0) ]
[σ_ij(0)] = [ σ_21(0)  σ_22(0) ]     (4-14)

with the inverse of [σ_ij(0)] denoted by [σ_ij(0)]⁻¹. From equation 4-12 it is directly obtained:
a_11 = [σ_11(1) σ_22(0) − σ_12(1) σ_21(0)] / [σ_11(0) σ_22(0) − σ_21(0) σ_12(0)]
     = [ρ_11(1) − ρ_12(1) ρ_12(0)] / [1 − ρ_12²(0)]     (4-15)

where the correlation coefficients are defined by

ρ_ij(k) = σ_ij(k) / [σ_ii(0) σ_jj(0)]^{1/2}     (4-16)

The terms σ_11(0) and σ_22(0) represent the variances of x_1(t) and x_2(t), respectively:

σ_11(0) = σ_1²     (4-17)

σ_22(0) = σ_2²     (4-18)

Similarly we can obtain:

a_12 = [−σ_11(1) σ_12(0) + σ_12(1) σ_11(0)] / [σ_11(0) σ_22(0) − σ_21(0) σ_12(0)]
     = (σ_1/σ_2) [ρ_12(1) − ρ_11(1) ρ_12(0)] / [1 − ρ_12²(0)]     (4-19)

a_21 = [σ_21(1) σ_22(0) − σ_22(1) σ_21(0)] / [σ_11(0) σ_22(0) − σ_21(0) σ_12(0)]
     = (σ_2/σ_1) [ρ_21(1) − ρ_22(1) ρ_12(0)] / [1 − ρ_12²(0)]     (4-20)

and the analogous expression for a_22.     (4-21)

Assuming z_1(t) uncorrelated with x_1(t−s) and multiplying equation 4-10 by x_1(t−s), one gets after taking expected values:

σ_11(s) = a_11 σ_11(s−1) + a_12 σ_21(s−1)     (4-22)
Assuming now that one has two first order Markov linear processes, or in other words that the coefficients a_12 and a_21 are identically equal to zero, further simplifications are possible. Substituting in equation 4-22 the well known result (Kendall, 1966):

σ_11(s) = σ_1² ρ_11^s(1)     (4-23)

one gets for the cross-covariance the recursion

σ_21(s) = ρ_22(1) σ_21(s−1),   s > 0     (4-24)

which can be written as:

σ_21(s) = σ_21(0) ρ_22^s(1),   s ≥ 0     (4-25)

From equation 4-25 it becomes, for negative lags:

σ_21(s) = σ_21(0) ρ_11^{−s}(1),   s ≤ 0     (4-26)

Taking the Fourier transform of σ_21(s),

G_21(f) = Σ_{s=−∞}^{∞} σ_21(s) e^{−i2πfs}     (4-27)

we get the cross-spectrum between the series x_1(t) and x_2(t):

G_21(f) = σ_21(0) [1/(1 − ρ_22(1) e^{−i2πf}) + 1/(1 − ρ_11(1) e^{i2πf}) − 1]     (4-28)

or

G_21(f) = σ_21(0) [1 − ρ_11(1) ρ_22(1)] / {[1 − ρ_11(1) e^{i2πf}] [1 − ρ_22(1) e^{−i2πf}]}     (4-29)

Equation 4-29 gives the cross-spectrum between two first order Markov linear processes.

Siddiqui (1962) shows that the spectrum of a first order Markov linear process is equal to:

G_11(f) = σ_1² [1 − ρ_11²(1)] / {[1 − ρ_11(1) e^{i2πf}] [1 − ρ_11(1) e^{−i2πf}]}     (4-30)

which can be written as:

G_11(f) = σ_1² [1 − ρ_11²(1)] / [1 − 2 ρ_11(1) cos 2πf + ρ_11²(1)]     (4-31)

Similarly,

G_22(f) = σ_2² [1 − ρ_22²(1)] / {[1 − ρ_22(1) e^{i2πf}] [1 − ρ_22(1) e^{−i2πf}]}     (4-32)
Using equations 4-29, 4-31 and 4-32 and making some simplifications, the coherence between the two processes is found to be equal to:

γ²_12(f) = [1 − ρ_11(1) ρ_22(1)]² ρ_12²(0) / {[1 − ρ_11²(1)] [1 − ρ_22²(1)]}     (4-33)

Equation 4-33 shows the coherence function between two first order autoregressive processes to be equal to a constant, independent of frequency*. This result should provide a valuable tool in the analysis of this type of processes.
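The constancy of equation 4-33 can be verified by simulation. In the sketch below (the parameter values and the use of SciPy estimators are assumptions for illustration), two first order Markov processes are driven by innovations correlated only at lag zero; the estimated coherence is flat and close to the value given by equation 4-33:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
n = 200000
rho1, rho2, c = 0.7, 0.4, 0.6     # lag-one correlations; innovation correlation

z1 = rng.standard_normal(n)
z2 = c * z1 + np.sqrt(1 - c**2) * rng.standard_normal(n)   # corr(z1, z2) = c
x1 = signal.lfilter([1.0], [1.0, -rho1], z1)               # first order Markov process
x2 = signal.lfilter([1.0], [1.0, -rho2], z2)

f, coh = signal.coherence(x1, x2, nperseg=512)

# Equation 4-33: a constant, independent of frequency
rho12_0 = np.corrcoef(x1, x2)[0, 1]
gamma2 = (1 - rho1 * rho2) ** 2 * rho12_0 ** 2 / ((1 - rho1 ** 2) * (1 - rho2 ** 2))
print(gamma2, coh.mean(), coh.std())
```

The standard deviation of the estimated coherence across frequencies is at the level of sampling noise only, confirming that the theoretical coherence carries no frequency dependence.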
Second order Markov linear process. The method that will be presented now can be used to study the spectral matrix of any autoregressive process regardless of its order or dimension.

Quimpo and Yevjevich (1967) have shown that 2nd order autoregressive schemes may be used to fit the patterns in the sequence of daily river flows after the periodic component has been removed from the series.

The equation for a two-dimensional 2nd order autoregressive process is obtained by making n = 2 and m = 2 in equation 4-7:

[ x_1(t) ]   [ a_11  a_12 ] [ x_1(t−1) ]   [ b_11  b_12 ] [ x_1(t−2) ]   [ z_1(t) ]
[ x_2(t) ] = [ a_21  a_22 ] [ x_2(t−1) ] + [ b_21  b_22 ] [ x_2(t−2) ] + [ z_2(t) ]     (4-34)

which is equivalent to the system:

x_1(t) = a_11 x_1(t−1) + a_12 x_2(t−1) + b_11 x_1(t−2) + b_12 x_2(t−2) + z_1(t)     (4-35)

x_2(t) = a_21 x_1(t−1) + a_22 x_2(t−1) + b_21 x_1(t−2) + b_22 x_2(t−2) + z_2(t)     (4-36)
In order to obtain the coefficient matrix [a_ij] as was done for the 1st order Markov process, it will be necessary to transform the 2nd order process to a 1st order process. This, in turn, can be accomplished by defining

x_3(t) = x_1(t−1)     (4-37)

and

x_4(t) = x_2(t−1)     (4-38)

* This result was first pointed out to the author by Dr. M. M. Siddiqui of Colorado State University.

Equations 4-34, 4-37, and 4-38 can be written as:

[ x_1(t) ]   [ a_11  a_12  b_11  b_12 ] [ x_1(t−1) ]   [ z_1(t) ]
[ x_2(t) ]   [ a_21  a_22  b_21  b_22 ] [ x_2(t−1) ]   [ z_2(t) ]
[ x_3(t) ] = [ 1     0     0     0    ] [ x_3(t−1) ] + [ 0      ]     (4-39)
[ x_4(t) ]   [ 0     1     0     0    ] [ x_4(t−1) ]   [ 0      ]

which can be expressed:

[x_j(t)] = [c_jk] [x_k(t−1)] + [z_j(t)]     (4-40)

or

[x(t)] = [c] [x(t−1)] + [z(t)]     (4-41)

where

[c] = [ [a]  [b] ]
      [ [I]  [0] ]     (4-42)

with [a] and [b] the coefficient matrices of equation 4-34.
and [x_j(t)] now represents a four-dimensional 1st order process for which we can obtain [α_ij]:

[α_ij] = [σ_ij(1)] [σ_ij(0)]⁻¹     (4-43)

where [σ_ij(1)] and [σ_ij(0)] now represent four by four covariance matrices.

Using straightforward relations like:

σ_13(1) = Cov[x_1(t) x_3(t−1)] = Cov[x_1(t) x_1(t−2)] = σ_11(2)

we can express [σ_ij(1)] and [σ_ij(0)] by:

            [ σ_11(1)  σ_12(1)  σ_11(2)  σ_12(2) ]
[σ_ij(1)] = [ σ_21(1)  σ_22(1)  σ_21(2)  σ_22(2) ]     (4-44)
            [ σ_11(0)  σ_12(0)  σ_11(1)  σ_12(1) ]
            [ σ_21(0)  σ_22(0)  σ_21(1)  σ_22(1) ]

and similarly,

            [ σ_11(0)  σ_12(0)  σ_11(1)  σ_12(1) ]
[σ_ij(0)] = [ σ_21(0)  σ_22(0)  σ_21(1)  σ_22(1) ]     (4-45)
            [ σ_11(1)  σ_21(1)  σ_11(0)  σ_12(0) ]
            [ σ_12(1)  σ_22(1)  σ_21(0)  σ_22(0) ]

After the calculation of [α_ij], we may return to the original model of equation 4-34 which can be
written:

[ 1 − a_11 D − b_11 D²     −a_12 D − b_12 D²    ] [ x_1(t) ]   [ z_1(t) ]
[ −a_21 D − b_21 D²        1 − a_22 D − b_22 D² ] [ x_2(t) ] = [ z_2(t) ]     (4-46)

or in shorter notation:

[p_ij(D)] [x_j(t)] = [z_i(t)]     (4-47)

where the operator D is defined by:

Dⁿ x(t) = x(t−n)     (4-48)

The spectral matrix of the process is now obtained using a formula given by Whittle (1954):

[F_ij(f)] = [v_ij(z)]⁻¹ [q_ij] [v*'_ij(z)]⁻¹     (4-49)

where

z = e^{2πfi}

and [v_ij(z)] is obtained from [p_ij(D)] by replacing D with z,     (4-50)

with [v*'_ij(z)] standing for the complex conjugate transpose matrix of [v_ij(z)].

Equation 4-49 is valid provided the series x_1(t) and x_2(t) have unit variance.

In the case studied here [q_ij] is the unit matrix, therefore

[F_ij(f)] = [v_ij(z)]⁻¹ [v*'_ij(z)]⁻¹     (4-51)

and

[p_ij(D)]⁻¹ = adj [p_ij(D)] / det [p_ij(D)]     (4-52)
The spectral matrix of the process can then be written out element by element from equation 4-51; the resulting expressions for F_11(f), F_22(f) and F_12(f) are ratios of trigonometric polynomials in cos 2πf and cos 4πf (equations 4-53 through 4-56). The coherence function of the process can then be expressed as:

γ²_12(f) = |F_12(f)|² / [F_11(f) F_22(f)]     (4-57)

After going through some algebra, γ²(f) reduces to a ratio of trigonometric polynomials in cos 2πf, cos 4πf, cos 6πf and cos 8πf (equation 4-58), with constant coefficients that are algebraic functions of the coefficients a_ij and b_ij of equation 4-34. In contrast with the first order result of equation 4-33, the coherence between second order autoregressive processes is therefore frequency dependent.
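The spectral matrix of equation 4-51 can be evaluated directly. The sketch below (the coefficient matrices are arbitrary illustrative choices, and the sign convention chosen for z does not affect the coherence) computes F(f) = v(z)⁻¹ (v(z)⁻¹)*' for a bivariate 2nd order process and shows the coherence varying with frequency:

```python
import numpy as np

# Illustrative lag-one and lag-two coefficient matrices (not from the paper)
A = np.array([[0.5, 0.1], [0.2, 0.3]])
B = np.array([[-0.3, 0.0], [0.1, -0.2]])

def spectral_matrix(f):
    """Equation 4-51 with q = I: F(f) = v(z)^-1 (v(z)^-1)*', v(z) = I - A z - B z^2."""
    z = np.exp(-2j * np.pi * f)
    vinv = np.linalg.inv(np.eye(2) - A * z - B * z**2)
    return vinv @ vinv.conj().T

freqs = np.linspace(0.01, 0.49, 25)
coh = np.array([
    abs(F[0, 1]) ** 2 / (F[0, 0].real * F[1, 1].real)
    for F in map(spectral_matrix, freqs)
])
print(coh.min(), coh.max())   # frequency dependent, unlike the first order case
```

Because F(f) is Hermitian positive semidefinite, the computed coherence always lies between zero and one, while its variation over the band distinguishes the second order case from the constant coherence of equation 4-33.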
3. Mathematical Development of the Cross-Spectral Characteristics of Filtered Series. The smoothing of time series by moving average schemes and other types of filters is a practice sometimes applied in hydrology and other geophysical sciences. It is therefore of great practical importance to understand clearly the different effects that filters can have on the cross-spectral characteristics of time series. In this chapter, we will study the behavior of the cross-spectral density function, the coherence function and the phase function when one or both of the series on which the analysis will be performed have been prefiltered.
Let us have two random input functions x_1(t) and x_2(t) related to two output functions y_1(t) and y_2(t) through linear filter functions h_1(t) and h_2(t) by means of a simple convolution,

y_1(t) = ∫_{−∞}^{∞} h_1(u) x_1(t−u) du     (4-59)

y_2(t) = ∫_{−∞}^{∞} h_2(s) x_2(t−s) ds     (4-60)

The cross-covariance function between the filtered output functions is:

σ_{y_1 y_2}(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} y_1(t) y_2(t+τ) dt     (4-61)

which can be written:

σ_{y_1 y_2}(τ) = ∫_{−∞}^{∞} h_1(u) du ∫_{−∞}^{∞} h_2(s) ds · lim_{T→∞} (1/2T) ∫_{−T}^{T} x_1(t−u) x_2(t+τ−s) dt
              = ∫_{−∞}^{∞} h_1(u) du ∫_{−∞}^{∞} h_2(s) ds · σ_{x_1 x_2}(τ+u−s)     (4-62)

In order to go from the time domain to the frequency domain, we take Fourier transforms of both sides of equation 4-62,
(1/2π) ∫_{−∞}^{∞} σ_{y_1 y_2}(τ) e^{−iωτ} dτ = (1/2π) ∫_{−∞}^{∞} e^{−iωτ} dτ ∫_{−∞}^{∞} h_1(u) du ∫_{−∞}^{∞} h_2(s) σ_{x_1 x_2}(τ+u−s) ds     (4-63)

where ω represents the angular frequency (radians per unit of time).

The left hand side of equation 4-63 is the cross-spectrum between y_1(t) and y_2(t): G_{y_1 y_2}(ω).

Putting ξ = τ + u − s in equation 4-63 we get:

G_{y_1 y_2}(ω) = ∫_{−∞}^{∞} h_1(u) e^{iωu} du · ∫_{−∞}^{∞} h_2(s) e^{−iωs} ds · (1/2π) ∫_{−∞}^{∞} σ_{x_1 x_2}(ξ) e^{−iωξ} dξ

Therefore,

G_{y_1 y_2}(ω) = ∫_{−∞}^{∞} h_1(u) e^{iωu} du · ∫_{−∞}^{∞} h_2(s) e^{−iωs} ds · G_{x_1 x_2}(ω)     (4-64)

Equation 4-64 gives the cross-spectrum of the filtered series as a function of the cross-spectrum of the original series and of the linear operators h_1(u) and h_2(s).
The transfer function of a filter h(t) is defined as the Fourier transform of h(t):

R(ω) = ∫_{−∞}^{∞} h(t) e^{−iωt} dt = |R(ω)| e^{−iφ(ω)}
     = ∫_{−∞}^{∞} h(t) cos ωt dt − i ∫_{−∞}^{∞} h(t) sin ωt dt
     = Re{R(ω)} + i Im{R(ω)}     (4-65)

where

|R(ω)| = [(Re{R(ω)})² + (Im{R(ω)})²]^{1/2}     (4-66)

and

φ(ω) = tan⁻¹ (Im{R(ω)} / Re{R(ω)})     (4-67)

The angle φ(ω) represents the phase shift which the filtering function h(t) produces at the frequency ω.
For smoothing and filtering functions having (n + m + 1) discrete weights, the transfer function is computed by the following form of equation 4-65:

R(ω) = Σ_{k=−n}^{m} h_k cos ωk − i Σ_{k=−n}^{m} h_k sin ωk     (4-68)
In the spectral analysis of a time series, the effect of applying a filter to the series is to multiply the power spectrum by |R(ω)|² (Siddiqui, 1962). When analyzing a single time series, the phase shifts will have no effect on the spectral analysis, since the spectrum suppresses all phase information. This is not the case in cross-spectral analysis, where the phase diagram is a very useful one. So, it is highly desirable that smoothing and filtering functions do not shift the phase of waves of any frequency. The shift angle can be made equal to zero by requiring that the imaginary part of R(ω) be zero. This, in turn, can be accomplished by requiring the filter function h(t) to be even, for if h(t) is even, the terms containing the sines in equations 4-65 and 4-68 are zero, and R(ω) is a pure real quantity computed by

R(ω) = 2 ∫_0^∞ h(t) cos ωt dt     (4-69)

for continuous h(t) functions, or by

R(ω) = Σ_{k=−n}^{n} h_k cos ωk = h_0 + 2 Σ_{k=1}^{n} h_k cos ωk     (4-70)

for smoothing and filtering functions having (2n + 1) discrete weights.
Using the definition of R(ω), equation 4-65, we can write equation 4-64 as

G_{y_1 y_2}(ω) = R*_{h_1}(ω) · R_{h_2}(ω) · G_{x_1 x_2}(ω)     (4-71)

From equation 4-71 it is seen that the cross-spectrum of the filtered series will be different from the cross-spectrum of the original series.
It is of fundamental interest to know if linear filters like those of equations 4-59 and 4-60 will change the coherence between the series. It is known (Siddiqui, 1962) that the individual spectra of the filtered series are equal to

G_{y_1}(ω) = |R_{h_1}(ω)|² G_{x_1}(ω)     (4-72)

and

G_{y_2}(ω) = |R_{h_2}(ω)|² G_{x_2}(ω)     (4-73)

In this manner the coherence γ²_{y_1 y_2}(ω) can be written

γ²_{y_1 y_2}(ω) = |G_{y_1 y_2}(ω)|² / [G_{y_1}(ω) G_{y_2}(ω)]
               = |R_{h_1}(ω)|² |R_{h_2}(ω)|² |G_{x_1 x_2}(ω)|² / [|R_{h_1}(ω)|² G_{x_1}(ω) · |R_{h_2}(ω)|² G_{x_2}(ω)]
               = γ²_{x_1 x_2}(ω)     (4-74)

Equation 4-74 shows that the coherence function of the filtered series is the same as the coherence function between the original series.
The main conclusions of this chapter with regard to the use of linear filters before any cross-spectral analysis is performed may be summarized as follows:

a. The phase function of the filtered series is different from the phase function of the original series, except if both filters are even, in which case the phase function will remain unchanged.

b. The cross-spectrum of the filtered series is different from the cross-spectrum of the original series, their relation being given by equation 4-71.

c. The coherence function will remain unchanged.
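Conclusion (c), equation 4-74, can be checked numerically. In this sketch (the series construction and filter weights are illustrative assumptions; the filters are chosen without nulls in the band so that the spectral estimates stay stable), two related series are each passed through a different linear filter and the estimated coherence is essentially unchanged:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
n = 100000
common = rng.standard_normal(n)
x1 = common + rng.standard_normal(n)                                     # two series
x2 = signal.lfilter([0.6, 0.4], [1.0], common) + rng.standard_normal(n)  # sharing a part

# Different linear filters applied to each series (equations 4-59, 4-60)
y1 = signal.lfilter([0.7, 0.3], [1.0], x1)
y2 = signal.lfilter([0.5, 0.3, 0.2], [1.0], x2)

f, coh_x = signal.coherence(x1, x2, nperseg=256)
_, coh_y = signal.coherence(y1, y2, nperseg=256)
diff = np.max(np.abs(coh_x - coh_y))
print(diff)   # coherence is (nearly) unchanged by the filtering
```

In practice the caveat matters: near frequencies where a filter's transfer function vanishes, the filtered series has essentially no power and the sample coherence becomes numerically unstable, even though the theoretical identity 4-74 still holds.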
CHAPTER V
DATA ASSEMBLY AND PROCEDURE FOR THE ANALYSIS OF
HYDROLOGIC SERIES BY CROSS-SPECTRAL TECHNIQUES
1. Data Selection. One of the aspects this dissertation was directly concerned with was the study of the frequency correlations between hydrologic time series and the characteristics of the gain and phase functions between them. To attempt this, several stations were chosen and complete cross-spectral analyses were performed between them and groups of other stations in the same or different environment.

Precipitation data consisted of annual and monthly series. Only monthly runoff data were analyzed. Figure 5.1 shows the location of the stations used in the analysis. A detailed description of these stations is given in the appendix.
In the analysis of annual precipitation, there were 27 stations with an average length of data of 62 years per station. There were 41 stations with monthly precipitation data averaging 57 years per station. The rainfall stations were divided into five regions, in each of which one base station was fixed. Complete cross-spectral analyses were then made between the base stations and all the stations in the region. The characteristics of the base stations are as follows:
Region No. 1 Pacific Coast (California, Oregon, Washington) with 14 stations and the base station: San Diego. Lat: 32.733 Long: 117.167 Period of records: 1850-1960.

Region No. 2 Valley Environment (California, Oregon) with 8 stations and the base station: Sonora. Lat: 37.983 Long: 120.383 Period of records: 1888-1960.

Region No. 3 Mountain Environment (Colorado, Wyoming, Idaho, Montana) with 6 stations and the base station: Durango. Lat: 37.280 Long: 107.880 Period of records: 1895-1960.

Region No. 4 Gulf of Mexico (Texas, Louisiana) with 7 stations and the base station: New Orleans. Lat: 29.95 Long: 90.07 Period of records: 1870-1960.

Region No. 5 Plain Regions (Texas, Oklahoma, Kansas) with 6 stations and the base station: Lampasas. Lat: 31.05 Long: 98.18 Period of records: 1895-1960.
There were 37 runoff stations, all of them located west of the 117° meridian. The average length of series among these stations was 36 years. The base station characteristics are:

Middle Fork American River near Auburn, California. Lat: 38.92 Long: 121.00 Period of records: 1912-1960.
2. Method of Analysis. All the data used were previously standardized by the transformation:

x*(t) = [x(t) − x̄] / s(x)     (5-1)

where x(t) is the series of values of each station, x̄ is the mean of x(t), and s(x) is the standard deviation of the series. Because of the previous standardization, the spectral ordinates are all in [cycles per unit of time]⁻¹.
With the annual data, 15 lags were used, with a resolution of 0.067 cycles per year. In the monthly analysis there were 24 lags with a resolution