Maximum Likelihood Estimation of Models with Unstable Dynamics and Non-minimum Phase Noise Zeros

Håkan Hjalmarsson and Urban Forssell, Department of Electrical Engineering, Linköping University, S-581 83 Linköping, Sweden

WWW: http://www.control.isy.liu.se

E-mail: hakan.hjalmarsson@s3.kth.se, ufo@isy.liu.se

August 7, 1998

REGLERTEKNIK

AUTOMATIC CONTROL LINKÖPING

Report no.: LiTH-ISY-R-2043

Submitted to IFAC's World Congress Beijing'99

Technical reports from the Automatic Control group in Linköping are available by anonymous ftp at the address ftp.control.isy.liu.se. This report is contained in the compressed postscript file 2043.ps.Z.


Maximum Likelihood Estimation of Models with Unstable Dynamics and Non-minimum Phase Noise Zeros

Håkan Hjalmarsson* and Urban Forssell†

August 7, 1998

Abstract

Maximum likelihood estimation of single-input/single-output linear time-invariant (LTI) dynamic models requires that the model innovations (the non-measurable white noise source that is assumed to be the source of the randomness of the system) can be computed from the observed data. For many model structures, the prediction errors and the model innovations coincide and the prediction errors can be used in maximum likelihood estimation. However, this is not the case when the model dynamics and the noise model have unstable poles which are not shared, or when the noise dynamics have unstable zeros. One such example is an unstable output error model. In this contribution we show that in this situation the model innovations can be computed by anti-causal filtering. Different implementations of the model innovations filter are also studied.

Keywords:

Maximum likelihood estimation, prediction error methods, output error models.

* H. Hjalmarsson is with the Signal Processing Group, Department of Signals, Sensors and Systems, Royal Institute of Technology, S-100 44 Stockholm, Sweden. Email: hakan.hjalmarsson@s3.kth.se.

† U. Forssell is with the Division of Automatic Control, Department of Electrical Engineering, Linköping University, S-581 83 Linköping, Sweden. Email: ufo@isy.liu.se.


1 Introduction

Consider the following single-input/single-output output error (OE) model structure

    y(t) = G(q, θ)u(t) + e(t) = (B(q, θ)/A(q, θ)) u(t) + e(t)    (1)

where

u(t) is the input, y(t) is the output, and e(t) is assumed to be a zero mean white noise sequence, which will be called the model innovations sequence. Furthermore, θ is a parameter vector containing the unknown parameters of the polynomials A(q, θ) and B(q, θ) (q is the time shift operator in discrete time). Throughout this paper we will assume that the data really have been generated by a model in the model structure we are considering; e.g., for the structure (1) there is a parameter θ₀ such that the data are described exactly by (1):

    y(t) = G(q, θ₀)u(t) + e(t)    (2)

We will call the corresponding model the true model, and the corresponding sequence {e(t)} the true model innovations sequence.

In the prediction error approach [4] to system identification, the model parameters are determined by minimizing the prediction error, i.e. the error between the true output and the optimal (w.r.t. the model) linear causal predictor based on past inputs and outputs. When the transfer function G(q, θ) is stable, the predictor for the model (1) has the following simple form

    ŷ(t|θ) = G(q, θ)u(t)    (3)

and the prediction error is

    ε(t, θ) = y(t) - ŷ(t|θ) = y(t) - G(q, θ)u(t)    (4)

For the true model it then holds that the prediction errors are equal to the true model innovations:

    ε(t, θ₀) = e(t)    (5)

This property underlies the nice consistency properties of the prediction error method (PEM). It also implies that there is a close connection between the maximum likelihood (ML) method and the PEM: if the probability distribution function (PDF) of e(t) is known and the negative log-likelihood of this PDF is used as the criterion in the PEM, the ML estimate of the parameters is obtained.

When the system G(q, θ) is unstable, the predictor does not have the simple form (3), see [1] and below, and, furthermore, property (5) does not hold. The prediction errors evaluated for the correct model still form a white noise sequence, but this sequence is now an all-pass filtered version of the true model innovations sequence; the prediction errors are in this case the innovations in a second order sense. It can be shown, see [1], that the nice consistency properties of the PEM are retained also in this case. However, since the prediction errors differ from the sequence e(t), the PEM cannot be interpreted as an ML method. Another situation where the prediction errors evaluated for the true model are not equal to the assumed white noise sequence is when the noise model has non-minimum phase zeros. Thus, there is a wide range of applications where this problem occurs. In this contribution we show how to perform ML estimation in these situations.

To illustrate the ideas we will, for simplicity, concentrate on the problem of OE identification of unstable systems. Apart from discussing the theoretical issues alluded to above, we will also study different implementations of the model innovations filter and present a simulation study, in which we compare three different methods that are applicable to OE identification of unstable systems. The first one was originally derived in [1] and relies on causal filtering operations only; it circumvents the problem of unstable predictors by modifying the OE model structure. The other two methods are novel and utilize non-causal filtering. Since the "predictors" in these methods are computed using non-causal filtering and are based on future signals, the resulting errors should be interpreted not as prediction errors but as the model innovations. These methods should therefore be seen as ML methods rather than prediction error methods.

2 Prediction Error Identication

Consider the linear OE model structure (cf. (1))

    y(t) = G(q, θ)u(t) + e(t) = (B(q, θ)/A(q, θ)) u(t) + e(t)    (6)
    A(q, θ) = 1 + a_1 q^-1 + ... + a_{n_a} q^-n_a    (7)
    B(q, θ) = b_0 + b_1 q^-1 + ... + b_{n_b} q^-n_b    (8)
    θ = [a_1 ... a_{n_a}  b_0 ... b_{n_b}]^T    (9)


where {e(t)} is assumed to be the unpredictable part of the system, i.e. the innovations sequence. The parameter vector θ is assumed to range over some subset D_M of R^d (d = dim θ).

Given measured data Z^N = {y(1), u(1), ..., y(N), u(N)}, the prediction error estimate is computed as the straightforward fit [4]:

    θ̂_N = arg min_{θ ∈ D_M} V_N(θ)    (10)
    V_N(θ) = (1/N) Σ_{t=1}^{N} ℓ(ε_F(t, θ))    (11)
    ε_F(t, θ) = L(q)(y(t) - ŷ(t|θ))    (12)
    ŷ(t|θ) = (B(q, θ)/A(q, θ)) u(t)    (13)

Here ℓ(·) is a suitably chosen function. A standard choice is ℓ(x) = x²/2, which gives the least-squares method; for ML estimation ℓ(x) = -log(f_e(x)), where f_e(·) is the PDF of the true model innovations. ε_F(t, θ) is the filtered prediction error corresponding to the predictor (13), and L(q) is a stable (possibly parameter dependent) prefilter that, e.g., can be used to affect the frequency domain fit of the resulting estimate.

Typically one finds the estimate θ̂_N using a gradient search algorithm. This is a crucial step in the identification, but a detailed discussion of these issues goes beyond the scope of this paper; we refer to [4] for further details.
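As a concrete illustration, the criterion minimization (10)-(13) can be sketched in a few lines. The sketch below is ours, not the authors' Matlab implementation: it uses Python with NumPy/SciPy, a first-order OE model with stable A(q, θ) so that the predictor (13) can be run as a causal filter, and the quadratic choice ℓ(x) = x²/2 (the ML criterion for Gaussian innovations); all function names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import lfilter

def predict_oe(theta, u):
    # Causal OE predictor y_hat(t|theta) = B(q)/A(q) u(t), cf. (13),
    # for a first-order model A(q) = 1 + a1 q^-1, B(q) = b0.
    a1, b0 = theta
    return lfilter([b0], [1.0, a1], u)

def V_N(theta, u, y):
    # Criterion (10)-(12) with prefilter L(q) = 1 and ell(x) = x^2/2,
    # i.e. least squares (the ML criterion for Gaussian innovations).
    eps = y - predict_oe(theta, u)
    return np.mean(0.5 * eps**2)

# Simulated data from a stable OE system with theta_0 = [-0.8, 0.5]
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
y = lfilter([0.5], [1.0, -0.8], u) + 0.1 * rng.standard_normal(2000)

# Numerical search over theta; the paper uses gradient search, but a
# derivative-free simplex search is enough for a two-parameter sketch.
res = minimize(V_N, x0=[-0.5, 0.3], args=(u, y), method="Nelder-Mead")
print(res.x)  # close to the true parameters [-0.8, 0.5]
```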

Note that the prefilter L(q) can be interpreted as an inverse noise model, since the model structure

    y(t) = (B(q, θ)/A(q, θ)) u(t) + L^-1(q) e(t)    (14)

has the optimal predictor

    ŷ(t|θ) = L(q) (B(q, θ)/A(q, θ)) u(t) + (1 - L(q)) y(t)    (15)

provided L(q) and L(q)B(q, θ)/A(q, θ) have all their poles strictly inside the unit circle.

The corresponding prediction error is

    y(t) - ŷ(t|θ) = L(q)(y(t) - (B(q, θ)/A(q, θ)) u(t))    (16)

cf. (12). Hence, by suitably parameterizing L(q) we can modify the model (6) to obtain other model structures such as ARX, ARMAX, or Box-Jenkins (BJ). These model structures correspond to the choices L(q) = A(q), L(q) = A(q)/C(q), and L(q) = D(q)/C(q), respectively. (The notation is explained in [4].) Thus we may view L(q) either as a prefilter or as a noise model, depending on what is most natural in the application at hand. From this it should be clear that the method (10)-(13) with an OE model is rather general and basically covers prediction error identification using most interesting linear model structures.

We alert the reader to the fact that when (13) is the optimal predictor, it holds for the true model that the prediction error coincides with the true model innovations, i.e.

    y(t) - ŷ(t|θ₀) = e(t)    (17)

If the underlying system is unstable, the predictor (13) will be unstable and is not the optimal predictor. This is also the theme of this paper: to study what happens when (13) is not the optimal predictor.

3 Maximum Likelihood Estimation

To be able to apply the ML method one has to be able to compute the model innovations.

In some cases the prediction errors coincide with the model innovations, as we saw in the previous section. However, in other situations this does not hold, and in two separate subsections we will discuss two different cases where this occurs.

3.1 Unstable Dynamical Models

Suppose that the PDF f_e(·) of e(t) in the model (6) is known and that we would like to use ML estimation. Consider now the case that A(q, θ) has some zero(s) outside the unit circle. As we saw in the preceding section, the prediction errors and the model innovations are not identical in this case; rather, as will be shown in the next section, the prediction error is an all-pass filtered version of the model innovations. Hence, the prediction errors cannot be used for ML estimation (unless the PDF is Gaussian, since then the criterion is quadratic).

In this section we will show that by instead using non-causal filters, the model innovations e(t) can indeed be obtained. Since non-causal filtering is required, this method is limited to off-line identification.

Consider the model (6) and assume that A(q, θ) has some zero(s) outside the unit circle. Furthermore, suppose that through some feedback mechanism from y(t) to u(t), which need not be linear, {y(t)} and {u(t)} are stationary and bounded even though the open loop system (6) is unstable. From (6) it follows that the signals y(t), u(t), and e(t) in the system satisfy

    A(q, θ)y(t) - B(q, θ)u(t) = A(q, θ)e(t)    (18)

Now A(q, θ) is non-minimum phase, and hence it is not possible to obtain e(t) from A(q, θ)y(t) - B(q, θ)u(t) by stable causal filtering. However, unless A(q, θ) has some zero(s) on the unit circle, no constraint is imposed if non-causal filtering is applied! Hence, by applying the non-causal filter 1/A(q, θ) to A(q, θ)y(t) - B(q, θ)u(t), e(t) is obtained modulo transient effects. This gives

    e(t) = (1/A(q, θ)) (A(q, θ)y(t) - B(q, θ)u(t)) = y(t) - (B(q, θ)/A(q, θ)) u(t)    (19)

where 1/A(q, θ) is a non-causal filter.

To summarize, if the filter 1/A(q, θ), used in (12) to compute the prediction error, has no pole(s) on the unit circle, it can always be taken to be stable and the model innovations can be computed. When A(q, θ) is non-minimum phase, the filtering is non-causal, which means that (12) loses its interpretation as the prediction error

    y(t) - ŷ(t|θ)    (20)

where ŷ(t|θ) is based on past observations of y(t) and u(t). Instead, (20) should be interpreted not as the prediction error but as the model innovations.
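The anti-causal filtering step behind (19) can be sketched as follows. This is our own illustration, not code from the paper: for a monic polynomial with all zeros outside the unit circle, the equation A(q)v(t) = s(t) is solved in the backward time direction by reversing the data, running a stable causal recursion with the reversed coefficient vector, and reversing back, so the transient sits at the end of the record instead of the beginning.

```python
import numpy as np
from scipy.signal import lfilter

def filter_anticausal_inv(a, x):
    # Solve a(q) v(t) = x(t) anti-causally, for monic a(q) = 1 + a1 q^-1
    # + ... + an q^-n with all zeros OUTSIDE the unit circle: reverse
    # the data, filter with the reversed coefficient vector (a stable
    # causal recursion), and reverse back.
    n = len(a) - 1
    b_rev = np.zeros(n + 1)
    b_rev[n] = 1.0                         # q^-n delay in reversed time
    return lfilter(b_rev, a[::-1], x[::-1])[::-1]

# Anti-stable A(q) = 1 - 2 q^-1 (zero at q = 2). Build s(t) = A(q) e(t)
# causally, then recover e(t) = (1/A(q)) s(t) by anti-causal filtering.
rng = np.random.default_rng(1)
e = rng.standard_normal(300)
s = lfilter([1.0, -2.0], [1.0], e)         # s(t) = e(t) - 2 e(t-1)
e_rec = filter_anticausal_inv(np.array([1.0, -2.0]), s)

# The transient now sits at the END of the record and decays backwards;
# away from it the innovations are recovered essentially exactly.
print(np.max(np.abs(e_rec[:250] - e[:250])))
```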

3.2 Models with Non-Minimum Phase Noise Zeros

As we pointed out in Section 2, noise models can be incorporated by parameterizing the prefilter; e.g., a BJ model structure

    y(t) = (B(q, θ)/A(q, θ)) u(t) + (C(q, θ)/D(q, θ)) e(t)    (21)

is obtained by taking

    L(q) = D(q, θ)/C(q, θ)    (22)

It is customary to require that the zeros of the noise model are inside the unit circle. The reason is the same as for models with unstable dynamics: since the zeros of the noise model end up in the denominator of L(q), it is not possible to compute the innovations using a stable causal filter if the noise model has non-minimum phase zeros.

In this case, if the zeros of the noise model are constrained to be inside the unit circle while the true innovations are filtered through some filter with zero(s) outside the unit circle, the minimum-phase equivalent of this filter will be identified and a consistent estimate of the noise spectrum will be obtained. It follows from the spectral factorization theorem that this is no limitation for identification based on second order statistics.

However, there is no guarantee that the minimum-phase counterpart of the true noise dynamics is identified if criteria other than the quadratic one are used. Hence, e.g., ML estimation may not be consistent if the noise model is constrained to be minimum-phase.

The solution to this problem is the same as for the problem with unstable dynamics. Instead of restricting the computation of the model innovations to causal stable filtering, non-causal filtering is also allowed; i.e., L(q) should always be taken as a stable filter, which, if there are poles outside the unit circle, has to be non-causal. As in the case of unstable dynamical models, the non-causal filtering means that (12) loses its interpretation as the prediction error

    y(t) - ŷ(t|θ)    (23)

and ŷ(t|θ) loses its interpretation as a predictor. Instead, (23) should be interpreted not as the prediction error but as the model innovations.

Notice, however, as mentioned above, that when a quadratic criterion is used, i.e. when only the spectrum is of interest, it is neither necessary nor desirable to allow for non-minimum phase zeros.

4 Identication of Unstable OE Models

In this section we will discuss prediction error identification of unstable OE models. As we have mentioned previously, the prediction errors and the model innovations will not coincide in this case. This means that ML estimation cannot be used unless non-causal filtering is considered, the reason being that the predictor is otherwise unstable. We will study three different methods that handle this problem; two of them are ML methods and utilize non-causal filtering. The third method is a prediction error method which circumvents the problem of unstable predictors by calculating a second-order equivalent involving only causal filtering operations. We will start by presenting this method, which was originally derived in [1].

4.1 Modifying the Model Structure

In [1], modified versions of the OE and BJ model structures were derived that can handle also unstable systems. The idea was to use a certain all-pass filter L_a as prefilter L such that the prediction error filter becomes stable. This method can hence be interpreted as if the noise model is changed from e(t) to L_a^-1 e(t). In [1] this approach is also interpreted from a Kalman filter perspective (see also below). Here we will review this method by re-deriving it starting from a prediction perspective, and show that it fits into the prediction error framework.

Consider the OE model structure (6). As noted earlier, when A(q, θ) has all its zeros strictly inside the unit circle, the optimal linear time-invariant predictor given past inputs and outputs is

    ŷ(t|θ) = (B(q, θ)/A(q, θ)) u(t)    (24)

However, this is no longer true if A(q, θ) has some zero(s) outside or on the unit circle. When there are zeros on the unit circle, there exists no asymptotically stable stationary optimal predictor and a time-varying Kalman filter has to be used. However, if there are no zeros on the unit circle but some zero(s) outside the unit circle, the optimal stationary predictor is given by

    ŷ(t|θ) = (B(q, θ)/(A_a*(q, θ)A_s(q, θ))) u(t) + (1 - A_a(q, θ)/A_a*(q, θ)) y(t)    (25)

where A_s(q, θ) (A_a(q, θ)) is the stable (anti-stable), monic part of A(q, θ):

    A(q, θ) = A_s(q, θ) A_a(q, θ)    (26)

and A_a*(q, θ) denotes the monic, stabilized A_a-polynomial, i.e., A_a*(q, θ) is the monic polynomial whose zeros are equal to the zeros of A_a(q, θ) reflected into the unit disc.

To see that (25) is indeed optimal, notice that

    y(t) - ŷ(t|θ) = (A_a(q, θ)/A_a*(q, θ)) y(t) - (B(q, θ)/(A_a*(q, θ)A_s(q, θ))) u(t) = (A_a(q, θ)/A_a*(q, θ)) e(t)    (27)

Since A_a(q, θ)/A_a*(q, θ) is an all-pass filter, this means that the prediction error is white and hence uncorrelated with past inputs and outputs. The orthogonality principle now proves that (25) is optimal. Comparing (25) and (15), we see that the optimal predictor (25) results if L(q) is chosen as

    L(q) = A_a(q, θ)/A_a*(q, θ)    (28)

The predictor (25) can be interpreted rather nicely in terms of the Kalman filter. Suppose we realize (6) in state-space form:

    x(t + 1) = A x(t) + B u(t)
    y(t) = C x(t) + e(t)    (29)

The steady-state Kalman filter predictor is

    x̂(t + 1) = (A - KC) x̂(t) + B u(t) + K y(t)
    ŷ(t|θ) = C x̂(t)    (30)

where K is determined from an algebraic Riccati equation in the usual way: K = A P C^T (R + C P C^T)^-1, P = A P A^T - K(R + C P C^T)K^T. Here R is the variance of the innovations e(t): R = E e²(t) > 0. (E denotes mathematical expectation.)

Now, if A is stable the solution is K = 0 and (30) corresponds to the standard OE predictor (24). However, if A is not stable, the solution to the Kalman predictor problem is a nonzero K, which makes (30) exactly equal to (25). Note that this holds regardless of R.

Restricting to only causal filters, we thus have two alternative ways of computing the predictor ŷ(t|θ) in the OE case: (25) and (30). As we have seen, these two variants give identical ŷ(t|θ), although (25) is computationally less demanding. In contrast to the Kalman filter solution (30), it is also straightforward to generalize (25) to other model structures, such as the BJ model structure.
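The factorization (26) and the reflection defining A_a*(q, θ) are easy to carry out numerically. The sketch below is our own (Python/NumPy, not the paper's Matlab code): it splits A by the magnitude of its roots, reflects the anti-stable roots into the unit disc, and checks that the resulting prefilter (28) has constant magnitude on the unit circle.

```python
import numpy as np

def split_and_stabilize(a):
    # Given A(q) = 1 + a1 q^-1 + ... + an q^-n (coefficients a, a[0]=1),
    # return the monic stable part As, the anti-stable part Aa, and the
    # stabilized polynomial Aa* with the anti-stable zeros reflected
    # into the unit disc, cf. (26) and the prefilter (28).
    r = np.roots(a)
    As = np.real(np.poly(r[np.abs(r) < 1]))
    Aa = np.real(np.poly(r[np.abs(r) >= 1]))
    Aa_star = np.real(np.poly(1.0 / r[np.abs(r) >= 1]))
    return As, Aa, Aa_star

A = np.array([1.0, -2.5, 1.0])            # the example system (36)
As, Aa, Aa_star = split_and_stabilize(A)
print(As, Aa, Aa_star)                    # [1, -0.5], [1, -2], [1, -0.5]

# The prefilter L(q) = Aa(q)/Aa*(q) has constant magnitude on the unit
# circle: an all-pass filter, up to a constant gain (here 2).
w = np.linspace(0, np.pi, 64)
z = np.exp(1j * w)
mag = np.abs(np.polyval(Aa, z)) / np.abs(np.polyval(Aa_star, z))
print(mag.min(), mag.max())
```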

4.2 Using Non-causal Filtering

Let us now study how the predictor can be computed using non-causal filters. Consider the predictor (24):

    ŷ(t|θ) = (B(q, θ)/A(q, θ)) u(t)

Using the factorization (26), this can also be written

    ŷ(t|θ) = (1/A_a(q, θ)) (B(q, θ)/A_s(q, θ)) u(t) = (B(q, θ)/A_s(q, θ)) (1/A_a(q, θ)) u(t)    (31)

Thus we can generate the signal ŷ(t|θ) by first computing

    ŷ_causal(t|θ) = (B(q, θ)/A_s(q, θ)) u(t)    (32)

followed by backwards filtering of the reversed sequence {ŷ_causal^R(t|θ)}_{t=1}^{N} through the filter 1/A_a(q), giving

    ŷ^R(t|θ) = (1/A_a(q, θ)) ŷ_causal^R(t|θ)    (33)

The desired signal ŷ(t|θ) is then finally obtained by reversing the sequence {ŷ^R(t|θ)}_{t=1}^{N}. It is also clear from (31) that the filtering operations can change places, so that the non-causal filter is applied before the causal one. Algorithmically these two versions are equivalent; in practice the results will differ due to unknown initial conditions in both the causal and the non-causal filtering. This will be discussed further below. Here we will assume that the causal filtering is performed first, followed by the non-causal filtering.

An alternative to using (31) is to rewrite the predictor as

    ŷ(t|θ) = (C(q, θ)/A_a(q, θ) + D(q, θ)/A_s(q, θ)) u(t)    (34)

that is, to rewrite the predictor filter using a partial fraction expansion into one stable part and one anti-stable part. The anti-stable filter should of course be applied using backwards filtering, just as in the previous method. The polynomials C(q, θ) and D(q, θ) in (34) are found by solving the Diophantine equation

    C(q, θ)A_s(q, θ) + D(q, θ)A_a(q, θ) = B(q, θ)    (35)

This variant requires more computations compared to factoring the denominator into stable and anti-stable factors, but on the other hand both filters can be applied directly to the input signal and not to some intermediate signal as in the previous case. Also in this case, transients due to unknown initial conditions in both the causal and the anti-causal filter will deteriorate the results, unless some measures are taken to circumvent this.
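For polynomials of modest degree, the Diophantine equation (35) is a small linear system in the coefficients of C and D. A minimal sketch (our own, with an illustrative helper name), using A(q) = (1 - 0.5 q^-1)(1 - 2 q^-1) and B(q) = 1 as in the simulation example (36):

```python
import numpy as np

def solve_diophantine(As, Aa, B):
    # Solve C(q) As(q) + D(q) Aa(q) = B(q), cf. (35), with
    # deg C < deg Aa and deg D < deg As. Polynomials are coefficient
    # vectors in powers of q^-1, constant term first.
    nC = len(Aa) - 1                  # number of coefficients in C
    nD = len(As) - 1                  # number of coefficients in D
    n = nC + nD                       # length of both product terms
    M = np.zeros((n, n))
    for i in range(nC):               # columns: q^-i shifts of As
        M[i:i + len(As), i] = As
    for j in range(nD):               # columns: q^-j shifts of Aa
        M[j:j + len(Aa), nC + j] = Aa
    rhs = np.zeros(n)
    rhs[:len(B)] = B
    sol = np.linalg.solve(M, rhs)
    return sol[:nC], sol[nC:]

# A(q) = (1 - 0.5 q^-1)(1 - 2 q^-1) = 1 - 2.5 q^-1 + q^-2, B(q) = 1:
C, D = solve_diophantine(np.array([1.0, -0.5]),
                         np.array([1.0, -2.0]),
                         np.array([1.0]))
print(C, D)   # C = [4/3], D = [-1/3]
# Verify the identity C*As + D*Aa = B:
print(np.convolve(C, [1.0, -0.5]) + np.convolve(D, [1.0, -2.0]))
```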

We remind the reader that since (31) and (34) utilize non-causal filtering, the error signal y(t) - ŷ(t|θ) should be regarded as the model innovations. The two methods (10)-(13), where ŷ(t|θ) is found as in either (31) or (34), are therefore ML methods rather than prediction error methods.

Forward/backward filtering is common in off-line signal processing applications, like zero-phase filtering using IIR filters (cf. the function filtfilt in Matlab's Signal Processing Toolbox [6]) and implementation of non-causal Wiener filters (e.g., [3]), but it has not been widely used in identification applications, even though these typically are also off-line applications.

As we have mentioned, the effects of unknown initial states in the different filters will deteriorate the results of the identification, and it is desirable to have some means of circumventing this. In [2], methods for eliminating the transients in forward/backward filtering are derived. The idea is to match the output of a forward/backward implementation with that of a backward/forward implementation of the same filter. This leads to good compromise solutions in many signal processing problems. Unfortunately, these methods are not well suited for the identification problem, mainly because in prediction error identification it is more important that the criterion function is kept small than that all predicted outputs are kept small. However, with some changes, the ideas in [2] could perhaps be useful in this case too. This is a topic for future research.

4.3 Simulation Example

To illustrate the applicability of the proposed methods for OE identification of unstable systems, we present in this section a small simulation study. We will refer to the PEM using a modified OE model as Method 1; the two ML methods which employ non-causal filtering will be called Methods 2 and 3, respectively. The estimates are found using (10)-(13) with ℓ(x) = x²/2.

In the Matlab implementations of the methods we have tried to diminish the negative effect of transients by matching the initial conditions in the predictor filters to the measured outputs, as far as possible. For Method 1 this is done by ensuring that the first few samples of the predicted output equal the corresponding samples of the measured output.

Methods 2 and 3 involve both forward and backward filtering, and transient effects will be present both at the beginning and at the end of the resulting sequence of predicted outputs. It is relatively straightforward to remove the transients at one end, say the beginning, but it is not clear how to achieve transient-free results at both ends. The implementation we have used removes the transients at the end for Method 2 and at the beginning for Method 3.

The true model, to be identified, is given by

    y(t) = (b_0/(1 + a_1 q^-1 + a_2 q^-2)) u(t) + e(t)    (36)

with b_0 = 1, a_1 = -2.5, and a_2 = 1. This system is unstable, with poles in 2 and 0.5.


To generate identification data we simulated this system using the feedback law

    u(t) = r(t) - 1.5 y(t - 1) + 0.75 y(t - 2)    (37)

which places both closed-loop poles in 0.5. In the Monte Carlo simulation we used independent, zero mean, Gaussian white noise reference and noise signals {r(t)} and {e(t)} with variances 1 and 0.01, respectively. The Monte Carlo simulation comprised 100 different runs; in each run N = 200 data samples were used. The search was initialized at the point θ̂^(0) = [-2.48  0.98  1.02]^T in Methods 1, 2, and 3, for a fair comparison between the methods. Apart from the methods presented above, we also tried the standard PEM with an OE model structure and a second-order ARMAX model structure, using the standard routines for OE and ARMAX identification available in the System Identification Toolbox [5] in Matlab. It should be noted that all these methods, except the standard OE method, should under ideal conditions give consistent estimates of the true system, despite the feedback.

In Table 1 we have summarized the results of the identification; the numbers shown are the estimated parameter values together with their standard deviations.

Table 1: Summary of identification results.

    Parameter    a_1                a_2                b_0
    True value   -2.5               1                  1
    OE           -0.3499 ± 0.0490   -0.3123 ± 0.0344   0.0357 ± 0.0091
    Method 1     -2.4848 ± 0.0049   0.9831 ± 0.0067    1.0034 ± 0.0091
    Method 2     -2.5130 ± 0.0134   1.0043 ± 0.0089    1.0133 ± 0.0130
    Method 3     -2.5141 ± 0.0088   1.0074 ± 0.0134    1.0103 ± 0.0131
    ARMAX        -2.4974 ± 0.0132   0.9983 ± 0.0087    0.9989 ± 0.0124

From Table 1 we see that the standard OE method gives completely useless results, as could be expected since the system is unstable. Methods 1, 2, and 3 all give estimates of roughly the same quality. The ARMAX method gives the best results in this simulation, although the differences are not that big. A possible explanation is that transient effects are better taken care of in the ARMAX function than in our implementations of Methods 1, 2, and 3.

To further illustrate the performance of these methods, we calculated the average number of flops required for each method. The numbers are shown in Table 2.

Table 2: Average number of flops.

    OE         1.6537 × 10^5
    Method 1   1.7861 × 10^5
    Method 2   1.2161 × 10^5
    Method 3   1.4758 × 10^5
    ARMAX      4.1605 × 10^5

In this example, Method 2 required the least number of flops; Methods 1 and 3 were slightly worse. Note especially that Method 1 and the standard OE method required roughly the same number of flops, despite the more involved gradient calculations in Method 1. The relatively small difference between these methods can partly be ascribed to convergence problems for the standard OE method, since the true system is unstable. Estimating the ARMAX model required significantly more computations than the other methods. Partly this difference is due to the more advanced initialization procedure used in that routine, but this should not make as big a difference as we obtained.

5 Conclusions

We have discussed estimation of models where the dynamic model is unstable and/or the noise model has non-minimum phase zeros. In such situations the predictor has to be modified, and the prediction error no longer coincides with the model innovations. The main contribution was to show that by instead using non-causal stationary filters, the model innovations can be computed if the transfer function from the observed data to the model innovations does not have any poles on the unit circle. The advantage of using these non-causal filters is that, since the model innovations are obtained, maximum likelihood identification can be performed. Hence, maximum likelihood identification of systems with unstable dynamics or non-minimum phase noise zeros is possible.

To illustrate the ideas we concentrated on a rather simple and straightforward application, output error identification, and we have compared three methods that can handle also the case of output error identification of unstable systems. In the first method, the idea is to compute a suitable all-pass prefilter that cancels the unstable poles in the predictor and gradient filters; this can be interpreted as a Kalman filter solution. In the other two methods, the unstable parts of the filters are treated as stable non-causal filters. This can be done either by factorization of the filter into one stable and one anti-stable part, which are applied in a forward and backward manner, respectively, or by computing a partial fraction expansion of the filter into one stable part and one anti-stable part. The latter approach requires slightly more computations due to the partial fraction expansion that is needed. The feasibility of these methods has been shown in a simulation study, and the conclusion is that the proposed methods are interesting alternatives to using ARMAX models when the system is unstable.

We also mention that, with minor changes, the methods for parameter estimation presented in this paper can be used in other applications as well, such as tuning of controller parameters based on direct criterion minimization when the controller is non-minimum phase and/or unstable.

References

[1] U. Forssell and L. Ljung. Identification of unstable systems using output error and Box-Jenkins model structures. To appear in IEEE Transactions on Automatic Control, 1998. Preliminary version available as Linköping University report no. LiTH-ISY-R-1988.

[2] F. Gustafsson. Determining the initial states in forward-backward filtering. IEEE Transactions on Signal Processing, 44(4):988-992, 1996.

[3] T. Kailath. Lectures on Wiener and Kalman Filtering. Springer-Verlag, 1981.

[4] L. Ljung. System Identification: Theory for the User. Prentice-Hall, 1987.

[5] L. Ljung. System Identification Toolbox - User's Guide. The MathWorks, Inc., Natick, MA, USA, 1991.

[6] The MathWorks, Inc. Signal Processing Toolbox - User's Guide. Natick, MA, USA, 1993.
