Estimating the Variance of the k-step Ahead Predictor for Time-series

Fredrik Tjärnström

Department of Electrical Engineering, Linköping University, S-581 83 Linköping, Sweden

WWW: http://www.control.isy.liu.se

Email: fredrikt@isy.liu.se

March 16, 1999

Report no.: LiTH-ISY-R-2130

Technical reports from the Automatic Control group in Linköping are available by anonymous ftp at the address ftp.control.isy.liu.se. This report is contained in a compressed PostScript file.


Estimating the Variance of the k-step Ahead Predictor for Time-series

Fredrik Tjärnström

March 16, 1999

Abstract

This paper considers the problem of estimating the variance of a linear k-step ahead predictor for time series. (The extension to systems including deterministic inputs is straightforward.) We compare the theoretical results with empirically calculated variance on real data, and discuss the quality of the achieved variance estimate.

Keywords: Prediction, Time-series

1 Introduction

We will discuss how to calculate the variance of a k-step ahead predictor for time series. This gives useful information on how long a prediction horizon we should use. If we take the horizon to be longer than that, the results become less valuable (less valuable in the sense that the variance of the prediction is so high that a simple guess would have given an equally good prediction).

We start by giving a short introduction to prediction error methods in Section 2. Section 3 gives the main results and in Section 4 we apply the results in an example. Finally Section 5 gives the conclusions.

2 Prediction error methods for time-series

In this section we will discuss how conventional linear black-box models are used as predictors for time-series. In connection with this, we will measure the goodness of such a predictor in the form of the estimated variance of the k-step ahead predictor.

To be able to evaluate the performance of a model we will assume that there exists a true model that describes how the output is generated

y(t) = H_0(q) e(t)    (1)


Here the sequence {e(t)} is white noise with variance λ_0. Our aim is to find a model H(q) that is close to H_0(q) in some sense. With A(q) = 1 + a_1 q^{-1} + ... + a_n q^{-n} and C(q) = 1 + c_1 q^{-1} + ... + c_m q^{-m} we can point out three main linear model classes for time-series:

1. The case when H(q) = C(q) is called an MA(m)-model (moving average).

2. When H(q) = 1/A(q) the model is called an AR(n)-model (autoregression).

3. Finally, the case H(q) = C(q)/A(q) is called an ARMA(n, m)-model.

Now let H(q) = H(q, θ), where the θ-vector represents our parameterization of the model structure; e.g., in the case of an ARMA-model we have θ = (a_1 ... a_n c_1 ... c_m)^T. Using prediction error methods means that we choose the model that is the best predictor of future outputs, see [1]. Since white noise is unpredictable, the best possible predictor of y(t) given the value of θ must be

ŷ(t|θ) = (1 - H^{-1}(q, θ)) y(t)    (2)

If we have collected data, y(1), ..., y(N), the natural estimate of θ will be

θ̂_N = arg min_θ V_N(θ)    (3)

V_N(θ) = Σ_{t=1}^{N} ε^2(t, θ)    (4)

ε(t, θ) = y(t) - ŷ(t|θ) = H^{-1}(q, θ) y(t)    (5)
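For an AR model the prediction error (5) is linear in θ, so minimizing (4) reduces to ordinary least squares. Below is a minimal Python sketch of this special case; the helper name fit_ar and the simulated AR(2) series are our own illustrative assumptions, not taken from the report.

```python
# Minimal sketch: estimate an AR(n) model by minimizing criterion (4).
# For an AR model, eps(t) = y(t) + a_1 y(t-1) + ... + a_n y(t-n) is
# linear in theta, so the minimization is ordinary least squares.
import numpy as np

def fit_ar(y, n):
    """Return theta = (a_1, ..., a_n) and the innovation variance lambda_0."""
    N = len(y)
    # Row t of the regressor matrix holds (-y(t-1), ..., -y(t-n))
    Phi = np.column_stack([-y[n - i:N - i] for i in range(1, n + 1)])
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    eps = y[n:] - Phi @ theta        # one-step prediction errors, eq. (5)
    return theta, np.mean(eps ** 2)

# Illustrative data: an AR(2) process y(t) = 1.5 y(t-1) - 0.7 y(t-2) + e(t)
rng = np.random.default_rng(0)
e = rng.standard_normal(600)
y = np.zeros(600)
for t in range(2, 600):
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + e[t]

theta_hat, lam0 = fit_ar(y, 2)       # theta_hat close to (-1.5, 0.7)
```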

3 The k-step ahead predictor and its variance

The k-step ahead predictor can be found from our estimated model Ĥ(q) = Ĉ(q)/Â(q) using the following. (See, e.g., [1] for similar expressions.) Let the polynomials F_{k-1}(q) = 1 + f_1 q^{-1} + ... + f_{k-1} q^{-k+1} and G_{n-1}(q) = g_0 + g_1 q^{-1} + ... + g_{n-1} q^{-n+1} be given by the Bezout identity

C(q) = A(q) F_{k-1}(q) + q^{-k} G_{n-1}(q)    (6)

Then the k-step ahead predictor, ŷ(t+k|t), and the corresponding prediction error will be given by

ŷ(t+k|t) = (G_{n-1}(q)/C(q)) y(t)    (7)

y(t+k) - ŷ(t+k|t) = F_{k-1}(q) e(t+k)    (8)


Note from (8) that the expected value of the prediction error is zero, since {e(t)} is a white noise sequence.
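Numerically, F_{k-1} consists of the first k impulse-response coefficients of C(q)/A(q), and G_{n-1} can then be read off from the remainder C(q) - A(q) F_{k-1}(q). The following Python sketch carries out this division; the helper name bezout_split is an assumption of ours.

```python
# Sketch of the Bezout identity (6) by polynomial long division.
import numpy as np
from scipy.signal import lfilter

def bezout_split(c, a, k):
    """Split C(q) = A(q) F_{k-1}(q) + q^{-k} G_{n-1}(q).

    c, a: coefficient arrays in powers of q^{-1}, with c[0] = a[0] = 1.
    Returns the coefficients of F_{k-1} and G_{n-1}.
    """
    # F_{k-1}: the first k impulse-response coefficients of C(q)/A(q)
    impulse = np.zeros(k)
    impulse[0] = 1.0
    f = lfilter(c, a, impulse)          # 1, f_1, ..., f_{k-1}
    # Remainder: q^{-k} G_{n-1}(q) = C(q) - A(q) F_{k-1}(q)
    af = np.convolve(a, f)
    L = max(len(af), len(c))
    diff = np.zeros(L)
    diff[:len(c)] += c
    diff[:len(af)] -= af
    return f, diff[k:]                  # lags 0..k-1 of diff are zero

# Example: A(q) = 1 - 0.5 q^{-1}, C(q) = 1, k = 2 gives F = (1, 0.5) and
# G = (0.25), i.e., the 2-step predictor (7) is y_hat(t+2|t) = 0.25 y(t).
f, g = bezout_split(np.array([1.0]), np.array([1.0, -0.5]), 2)
# y_hat = lfilter(g, c, y)              # k-step predictions via (7)
```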

Since we only have an estimate of the system from which the data was generated, we will get an increased uncertainty in the predictions. Therefore this lack of knowledge must be included in the expression for the variance of the k-step ahead prediction. How this is done can be seen from the following.

If we denote the "true" k-step ahead predictor by ŷ_0(t+k|t) and the one estimated from our N data points by ŷ_N(t+k|t), we get

ε(t+k|t) = y(t+k) - ŷ_N(t+k|t)
         = y(t+k) - ŷ_0(t+k|t) + ŷ_0(t+k|t) - ŷ_N(t+k|t)
         = F_{k-1}(q) e(t+k) + (G_{n-1}(q)/C(q) - Ĝ_{n-1}(q)/Ĉ(q)) y(t)    (9)

Since e(t+s), s ≥ 1, is independent of y(t), we get

Var ε(t+k|t) = (1 + f_1^2 + ... + f_{k-1}^2) λ_0 + Var[(G_{n-1}(q)/C(q) - Ĝ_{n-1}(q)/Ĉ(q)) y(t)]
             = (1 + f_1^2 + ... + f_{k-1}^2) λ_0 + Var f(θ̂)    (10)

where f(θ̂) is defined by the last equality. We can now apply Gauss' approximation formula, which states that if θ̂ is sufficiently close to the "true" parameter vector θ_0, we can approximate

Var f(θ̂) ≈ f'(θ_0) P_θ [f'(θ_0)]^T    (11)

where f'(θ_0) = df(θ)/dθ evaluated at θ = θ_0, and P_θ is the covariance matrix of θ̂. Now apply (11) to (10), which results in

Var ε(t+k|t) ≈ (1 + f_1^2 + ... + f_{k-1}^2) λ_0 + f'(θ_0) P_θ [f'(θ_0)]^T    (12)

Furthermore we have

f(θ) = (G_{n-1}(q, θ_0)/C(q, θ_0) - G_{n-1}(q, θ)/C(q, θ)) y(t)    (13)

which gives

f'(θ) = -(1/C(q)) (dG_{n-1}(q)/dθ) y(t) + (G_{n-1}(q)/C^2(q)) (dC(q)/dθ) y(t)    (14)

Now dC(q)/dθ = (0, ..., 0, q^{-1}, ..., q^{-m})^T, and dG_{n-1}(q)/dθ can be calculated from (6).
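If one prefers not to differentiate G_{n-1}(q) analytically via (6), the parameter-uncertainty term f'(θ_0) P_θ [f'(θ_0)]^T in (12) can also be approximated numerically. The sketch below replaces the analytic derivative (14) with a forward finite difference for the AR case (C(q) = 1); this shortcut and the helper names are our assumptions, not the report's method, and P_θ can be taken as the usual least-squares estimate λ_0 (Φ^T Φ)^{-1}.

```python
# Hedged sketch: approximate the parameter-uncertainty term of (12) for an
# AR model (C(q) = 1) by numerically differentiating the k-step predictor
# (7) with respect to theta, instead of the analytic derivative (14).
# Reuses bezout_split from the earlier sketch.
import numpy as np
from scipy.signal import lfilter

def param_variance_term(y, theta_hat, P, k, h=1e-6):
    """Average of f'(theta) P f'(theta)^T over the data record."""
    def k_step_pred(theta):
        a = np.concatenate(([1.0], theta))        # A(q) built from theta
        _, g = bezout_split(np.array([1.0]), a, k)
        return lfilter(g, np.array([1.0]), y)     # predictor (7), C(q) = 1

    base = k_step_pred(theta_hat)
    # Column i holds d y_hat / d theta_i, by a forward finite difference
    cols = []
    for i in range(len(theta_hat)):
        th = np.array(theta_hat, dtype=float)
        th[i] += h
        cols.append((k_step_pred(th) - base) / h)
    J = np.column_stack(cols)                     # row t is f'(theta)^T at t
    return np.mean(np.einsum('ti,ij,tj->t', J, P, J))
```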


When we have estimated the variance of the k-step ahead predictor, we can use it to decide how long the prediction horizon should be. Two simple comparisons could be made. Firstly, we could check whether Var ε(t+k|t) is lower than, e.g., (1/2) Var y(t). Secondly, we could compare Var ε(t+k|t) with the mean square error of a simple ad hoc predictor based on some physics or intuition. If we cannot improve on this for some k, there would probably be no use in trying to make longer predictions with this model.

An interesting note is that the covariance matrix P_θ in (12) decays like 1/N, which implies that if we have a sufficiently large number of data points (N should at least be larger than 100), we can neglect the second term and approximate the variance of the prediction by

Var ε(t+k|t) ≈ (1 + f_1^2 + ... + f_{k-1}^2) λ_0    (15)

This expression can be calculated at the time the model is estimated, since it does not depend on the data we want to predict.
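Once F_{k-1} is available, (15) is a one-line computation. A minimal sketch, reusing the assumed bezout_split helper from above:

```python
# Sketch of the large-N approximation (15), reusing bezout_split from above.
import numpy as np

def predictor_variance(c, a, lam0, k_max):
    """Approximate Var eps(t+k|t) by (1 + f_1^2 + ... + f_{k-1}^2) * lam0."""
    var = np.empty(k_max)
    for k in range(1, k_max + 1):
        f, _ = bezout_split(c, a, k)
        var[k - 1] = np.sum(f ** 2) * lam0   # f[0] = 1, so this sum is (15)
    return var

# e.g., for a fitted AR model with coefficients theta_hat:
# predictor_variance(np.array([1.0]), np.concatenate(([1.0], theta_hat)), lam0, 15)
```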

4 Experimental studies on the water demand for Barcelona

To evaluate these methods, we applied them to the water demand data from Barcelona. The data is collected daily and shows a strong weekly correlation.

An AR(15)-model was fitted to the first 570 data points (after the mean of the data was removed). This model was chosen due to its better performance on validation data than the other considered models. Furthermore, the variance of the prediction error was calculated for k = 1, ..., 15 from Equations (12) and (14) on the following 128 data points. As a comparison to these theoretical expressions, we computed the empirical prediction error variance, i.e., the variance estimated from the calculated k-step ahead predictions (for k = 1, ..., 15):

V̂ar ε(t|t-k) = (1/127) Σ_{t=571}^{698} ( ỹ(t|t-k) - (1/128) Σ_{s=571}^{698} ỹ(s|s-k) )^2    (16)

ỹ(t|t-k) = y(t) - ŷ(t|t-k)    (17)
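The empirical variance (16) can be computed directly on the validation segment. A sketch under the same assumptions as before (the 0-indexed default bounds correspond to the report's samples 571–698):

```python
# Sketch of the empirical prediction error variance (16)-(17); helper names
# are assumptions, and the data split (model data up to sample 570,
# validation samples 571..698) follows the report.
import numpy as np
from scipy.signal import lfilter   # bezout_split as defined earlier

def empirical_variance(y, c, a, k, t0=570, t1=698):
    """Sample variance of the k-step prediction errors on y[t0:t1]."""
    f, g = bezout_split(c, a, k)
    y_hat = lfilter(g, c, y)                  # y_hat[t] predicts y[t+k], eq. (7)
    err = y[t0:t1] - y_hat[t0 - k:t1 - k]     # y~(t|t-k), eq. (17)
    return np.sum((err - err.mean()) ** 2) / (len(err) - 1)
```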

These two curves are shown in Figure 1, together with the mean square error of a simple ad hoc predictor (the water demand seven days from now is the same as today). (We have used the mean square error for the ad hoc predictor since it could be biased.) We see that the estimated AR(15)-model has a much lower variance than the simple weekly predictor. Also note that the variance of the prediction error is lower than (1/2) Var y(t) ≈ 2.7 · 10^8 for k ≤ 14. By the discussion in the previous section, this means that the prediction horizon should probably not be chosen longer than two weeks.


[Figure 1: Estimated (thick line) and empirical (dashed line) prediction error variance, together with the mean square error of the ad hoc weekly k-step ahead predictor (thin line), for k = 1, ..., 15. Axes: prediction horizon (in days) versus variance of prediction. The variance of y was approximately 5.4 · 10^8.]

We also notice that the estimated variance is in close agreement with the empirical prediction error variance.

Finally, it would be interesting to see whether (15) is a good approximation of (12). In Figure 2 we have plotted both expressions.

[Figure 2: The estimated prediction error variance according to equations (12) (thick line) and (15) (thin line) for k = 1, ..., 15. Axes: prediction horizon (in days) versus variance of prediction.]

It is obvious that the last term in (12) can be neglected without much loss of accuracy. This was to be expected, since the covariance matrix P_θ decays like 1/N and we have N = 570.

5 Conclusions

We have derived an expression, (12), for the prediction error variance of a linear time-invariant model. Its validity has been tested on the water demand data from Barcelona, and it is shown to be in good agreement with the empirical variance (16) estimated from data. We have also studied an approximation of (12) and found that this (very simple) expression, (15), is a good estimate if the model has been estimated on a large data set.

References

[1] L. Ljung. System Identification: Theory for the User. Prentice-Hall, 1987.
