Identification of Wiener Models

Anna Hagenblad

Department of Electrical Engineering, Linköping University, S-581 83 Linköping, Sweden

WWW: http://www.control.isy.liu.se
Email: annah@isy.liu.se

May 15, 1998

REGLERTEKNIK
AUTOMATIC CONTROL
LINKÖPING

Report no.: LiTH-ISY-R-2031
Submitted to CCSSE'98

Technical reports from the Automatic Control group in Linköping are available by anonymous ftp at the address ftp.control.isy.liu.se. This report is contained in the compressed postscript file 2031.ps.Z.


Abstract

The identification task consists of making a model of a system from measured input and output signals. Wiener models consist of a linear dynamic system followed by a static nonlinearity. We derive an algorithm to calculate the maximum likelihood estimate of the model for this class of systems. We describe an implementation in some detail and show simulation results where a test system is successfully identified from data.

Keywords: Identification, Wiener models, nonlinear systems, maximum likelihood estimate

1 Introduction

The identification task is to determine a mathematical model of a system from measured data. Using the parametric method, the model is chosen from a set of models, where the different models are described by a number of parameters. The identification consists of determining the parameters that "best" describe the data. A maximum likelihood estimate of the model is obtained if we select the model that minimizes the prediction error, i.e., the difference between the measured output and the output predicted by the model.

A Wiener model consists of a linear system followed by a static nonlinearity, see figure 1. The input u and the output y are measured. (We will consider only single input - single output systems for simplicity.) Note that the intermediate signal x cannot be measured. An example of a Wiener system is a DC motor followed by a saturation.

Figure 1: A Wiener model. The input u drives the linear system G, producing the intermediate signal x, which the static nonlinearity f maps to the output y. G is parameterized by a parameter vector θ, and f by a parameter vector η.

To identify the Wiener system, we want to determine G and f, or the composite function f ∘ G. There are several approaches to the identification of Wiener systems, see e.g. Kalafatis et al. [1] or Wigren [4], who also gives further references. One method might be to use a nonlinear structure which includes the Wiener model as a special case, and identify the global model, for example with some kind of neural network. Another method is to identify the linear and the nonlinear block separately. If the nonlinearity is known, we can in principle invert it to calculate the intermediate signal x and then identify the linear block. If we know the linear block we can also calculate x, and then identify the nonlinearity.

This paper describes how to find the maximum likelihood estimate of G and f when the blocks are parameterized separately. Section 2 derives the maximum likelihood estimate. The structures of the blocks are not fixed; we will only assume that the linear system G is parameterized by a parameter vector θ, and the nonlinearity f by a parameter vector η. Section 3 describes details of a particular implementation. Some simulation results are shown in section 4. Section 5 concludes with a discussion of open questions for further research.

2 The Maximum Likelihood Estimate

The maximum likelihood (ML) estimate of the parameters can be found by minimizing a prediction error criterion. We will write the (unmeasurable) intermediate signal x as x(t, θ) since it depends on both the time t and the parameters θ of the linear block. If the nonlinearity f is parameterized by η, we can use this to predict y:

    \hat{y}(t) = f(x(t, \theta), \eta)                                                            (1)

As a measure of the quality of the model we will use a prediction error criterion:

    V(\theta, \eta) = \frac{1}{N} \sum_{t=1}^{N} (y(t) - \hat{y}(t))^2
                    = \frac{1}{N} \sum_{t=1}^{N} (y(t) - f(x(t, \theta), \eta))^2                 (2)

The maximum likelihood estimates of the parameters θ and η are the minimizing arguments of the prediction error criterion:

    (\hat{\theta}, \hat{\eta}) = \arg\min_{\theta, \eta} V(\theta, \eta)                          (3)
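As a minimal sketch, assuming the intermediate signal has already been simulated and the nonlinearity is available as a Matlab function handle (the names xhat, fnl, eta and y below are hypothetical), the prediction (1) and the criterion (2) can be evaluated as:

    % Prediction (1) and criterion (2) for a given model (sketch)
    % xhat: simulated intermediate signal x(t,theta), column vector
    % fnl:  function handle for the static nonlinearity f(x,eta)
    % y:    measured output
    yhat = fnl(xhat, eta);          % predicted output, equation (1)
    V    = mean((y - yhat).^2);     % prediction error criterion (2)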

Normally the equation (2) cannot be minimized analytically. We will have to use numerical methods in the search for a minimum. An iterative solution is

    \begin{pmatrix} \theta \\ \eta \end{pmatrix}^{(i+1)} = \begin{pmatrix} \theta \\ \eta \end{pmatrix}^{(i)} + \mu^{(i)} g^{(i)}          (4)

where g^{(i)} is the search direction, and μ^{(i)} the step length, chosen to guarantee that the criterion is decreased in each step. A quasi-Newton algorithm bases the search direction on the gradient of the criterion, modified with an approximation of the Hessian.

We now compute the gradient of (2) in two steps. The first part of the gradient is with respect to θ:

    V'_\theta = -\frac{1}{N} \sum_{t=1}^{N} 2\,(y(t) - f(x(t,\theta), \eta))\, \psi(t)                                            (5)

where ψ is the derivative of f with respect to θ, or more explicitly

    V'_\theta = -\frac{1}{N} \sum_{t=1}^{N} 2\,(y(t) - f(x(t,\theta), \eta))\, f'_x(x(t,\theta), \eta)\, x'_\theta(t)              (6)

The derivative with respect to η is

    V'_\eta = -\frac{1}{N} \sum_{t=1}^{N} 2\,(y(t) - f(x(t,\theta), \eta))\, f'_\eta(x(t,\theta), \eta)                            (7)

We can now choose either to use the total gradient in each step, or to iterate between minimizing with respect to θ and with respect to η. We will choose the latter, to more easily be able to analyze the behavior. Thus, we fix η and minimize with respect to θ, then fix θ and minimize with respect to η. These two steps are iterated until no further improvement can be distinguished.

The solution that uses the total gradient is briefly discussed in section 5, and is subject to further research.
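A minimal self-contained sketch of this alternating scheme is given below. It only illustrates the structure: fminsearch stands in for the quasi-Newton steps derived in this section, a smooth tanh-type nonlinearity stands in for the hinging hyperplanes used later, and all numerical values are illustrative assumptions.

    % Sketch of the alternating minimization: fix eta, minimize over theta;
    % then fix theta, minimize over eta. fminsearch replaces the quasi-Newton
    % steps derived in this section, and a scaled tanh is used as an
    % illustrative nonlinearity; both are assumptions for this sketch.
    N  = 1000;
    u  = 10*rand(N,1) - 5;                              % input in [-5, 5]
    x0 = filter([0 0.3], [1 -0.5], u);                  % "true" first order linear block
    y  = tanh(x0);                                      % "true" smooth nonlinearity

    xsim = @(th) filter([0 th(1)], [1 th(2)], u);       % x(t, theta), first order OE structure
    fnl  = @(et, x) et(1)*tanh(et(2)*x);                % f(x, eta)
    V    = @(th, et) mean((y - fnl(et, xsim(th))).^2);  % criterion (2)

    theta = [0.1; -0.1];  eta = [1; 1];                 % initial guesses
    for k = 1:20
        theta = fminsearch(@(th) V(th, eta), theta);    % minimize over theta, eta fixed
        eta   = fminsearch(@(et) V(theta, et), eta);    % minimize over eta, theta fixed
    end

Since a gain can be moved between the two blocks (the over-parameterization discussed in section 3.3), the individual parameter values in such a sketch are not unique, but the criterion value should not increase in any outer iteration.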

Looking at equation (7), we see that the θ-parameters enter only through x(t, θ). For a fixed θ, we can calculate x(t, θ) and minimize the criterion with x as input. This is very useful, since it enables us to use established methods for identification of nonlinearities, e.g. neural networks. We can even use non-parametric methods, as long as they minimize some prediction error criterion.

If we turn to equation (6), we get the corresponding equation for the linear case (without the nonlinearity) if we let f be the identity. This gives us

    V'_\theta = -\frac{1}{N} \sum_{t=1}^{N} 2\,(y(t) - x(t,\theta))\, x'_\theta(t)                                                (8)

We see two main differences. In the nonlinear case, f(x(t,θ), η) appears instead of x(t,θ). This is natural, since we want to minimize the total prediction error, not some intermediate one. The other difference is the factor f'_x(x(t,θ), η). This corresponds to a weighting of the data, as can be illustrated by the following example.

Suppose the nonlinearity is a saturation:

    f(x) = \begin{cases} 1 & \text{if } x > 1 \\ x & \text{if } -1 < x \le 1 \\ -1 & \text{if } x \le -1 \end{cases}              (9)

Then it is sensible not to use the saturated data in the estimation of the linear system, since we don't know their "real" value. In equation (6) this is accomplished since f'_x is zero in these regions. So the gradient from equation (6) will put more weight on data that are amplified by the nonlinearity (f'_x large), and less weight on data that are saturated (f'_x small).
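As a small numerical illustration of this weighting (the grid of x-values below is an arbitrary choice), the weight f'_x for the saturation (9) can be evaluated directly:

    % Sketch: the factor f'_x in (6) acts as a data weight.
    % For the saturation (9), saturated samples get zero weight.
    fsat   = @(x) min(max(x, -1), 1);     % the saturation (9)
    x      = linspace(-3, 3, 13);         % some intermediate-signal values
    weight = double(abs(x) < 1);          % f'_x: 1 in the linear region, 0 when saturated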

It is possible to use only the gradient as search direction, but to get better convergence, the Newton method uses

    g^{(i)} = -\left[ V''(\theta^{(i)}) \right]^{-1} V'(\theta^{(i)})                                                             (10)

where V'' is the Hessian. We get the Hessian by differentiating equation (5):

    V''(\theta) = \frac{1}{N} \sum_{t=1}^{N} 2\, \psi(t) \psi^{T}(t) \;-\; \frac{1}{N} \sum_{t=1}^{N} 2\, \psi'(t)\, \varepsilon(t)    (11)

where ε(t) ≜ y(t) − f(x(t,θ), η) is the prediction error. The second term is close to zero and can be discarded if the minimum gives independent prediction errors. This will make the Hessian easier to calculate, and also guarantees that it is positive semidefinite (see Ljung [2]).

3 Implementation

We have implemented the method for the case where the linear block G is described by an output-error model, and the nonlinearity by hinging hyperplanes. OE models are described in Ljung [2] and hinging hyperplanes in Sjöberg et al. [3].

Figure 2: An output-error model. The input u is filtered through B and 1/F, and noise is added to form the output y.

3.1 The Output-Error Block

An output-error (OE) model is shown in figure 2. We have the following equations for the relation between input and output:

    w(t) + f_1 w(t-1) + \dots + f_{n_f} w(t - n_f) = b_1 u(t-1) + \dots + b_{n_b} u(t - n_b)
    y(t) = w(t) + e(t)

If we put B(q) = b_1 q^{-1} + \dots + b_{n_b} q^{-n_b} and F(q) = 1 + f_1 q^{-1} + \dots + f_{n_f} q^{-n_f}, where q^{-1} is the backwards time shift operator, we can also write the model as

    y(t) = \frac{B(q)}{F(q)} u(t) + e(t)                                                          (12)

The parameter vector θ will consist of the b_i:s and f_i:s. For a given θ, we can calculate an estimate of x as

    \hat{x}(t) = \frac{B(q)}{F(q)} u(t)                                                           (13)

To implement the minimization (4) we will use the gradient (5) and the approximation of the Hessian (11). Both of these use ψ, the derivative of f with respect to θ:

    \psi(t) = f'_x(x(t,\theta), \eta)\, x'_\theta(t)                                              (14)

For a fixed parameter η, we can write an expression for f and differentiate it. Often it is more convenient to differentiate it numerically. This has the advantage that we don't have to know anything about the structure of f when optimizing G. In fact, f doesn't even have to be parameterized; as long as we can evaluate f(x) for a given x, we can approximate f'(x) with a difference quotient:

    f'(x) \approx \frac{f(x + h) - f(x)}{h}                                                       (15)

for some sufficiently small h. This is a possible solution also to the case when f is not differentiable everywhere.
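A minimal sketch of (15) in Matlab, applied elementwise to a vector of points; the saturation, the step h and the grid are illustrative choices only:

    % Numerical approximation (15) of f'(x), applied elementwise
    f     = @(x) min(max(x, -1), 1);     % example nonlinearity: the saturation (9)
    h     = 1e-6;                        % a sufficiently small step
    x     = linspace(-2, 2, 9);
    fprim = (f(x + h) - f(x)) / h;       % approximately 1 inside (-1,1), 0 outside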

When we have decided on the structure of the linear block (in our case OE), we can also calculate x'_θ. With

    \hat{x}(t) = \frac{B(q)}{F(q)} u(t)                                                           (16)

we get

    \frac{\partial \hat{x}}{\partial b_k}(t) = \frac{q^{-k}}{F(q)} u(t)                           (17)

    \frac{\partial \hat{x}}{\partial f_k}(t) = -\frac{B(q)}{(F(q))^2} q^{-k} u(t) = -\frac{q^{-k}}{F(q)} \hat{x}(t)               (18)

This means we can obtain the gradient by filtering the input u and the estimate x̂, and selecting the proper time shifted elements.
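A sketch of how these filtered gradient signals could be formed with Matlab's built-in filter function; the polynomial values and the input below are illustrative assumptions:

    % Sketch: gradient signals (17)-(18) obtained by filtering
    B     = [0 0.3];  F = [1 -0.5];       % example OE polynomials (same values as section 4)
    u     = 10*rand(500,1) - 5;           % example input
    xhat  = filter(B, F, u);              % estimate of x, equations (13)/(16)
    dxdb1 = filter([0 1], F, u);          % dxhat/db_1 = q^{-1}/F(q) u(t), eq. (17) with k = 1
    dxdf1 = -filter([0 1], F, xhat);      % dxhat/df_1 = -q^{-1}/F(q) xhat(t), eq. (18) with k = 1
    % Further delays of the numerator polynomial (q^{-k}) give the remaining
    % columns; stacking them forms x'_theta(t), which enters psi(t) in (14).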

Remember that we want to choose the search direction g as

    g = -\left[ V''(\theta) \right]^{-1} V'(\theta)                                               (19)
      \approx \left[ \sum_{t=1}^{N} \psi(t) \psi^{T}(t) \right]^{-1} \sum_{t=1}^{N} \psi(t)\, \varepsilon(t)                      (20)

This is the solution to the equation

    \sum_{t=1}^{N} \psi(t) \psi^{T}(t)\, g = \sum_{t=1}^{N} \psi(t)\, \varepsilon(t)              (21)

which in turn is the least-squares solution to the overdetermined system of equations

    \begin{pmatrix} \psi^{T}(1, \theta) \\ \vdots \\ \psi^{T}(N, \theta) \end{pmatrix} g =
    \begin{pmatrix} \varepsilon(1, \theta) \\ \vdots \\ \varepsilon(N, \theta) \end{pmatrix}      (22)

Efficient methods exist to solve this in Matlab, by use of the backslash operator:

    g = psi\epsilon
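A small self-contained sketch of this solve; here psi and epsilon are filled with random numbers purely to make the snippet executable, whereas in the implementation row t of psi would be ψ^T(t) and epsilon would contain the prediction errors:

    % Sketch: the search direction (19)-(22) as a least-squares solve
    N = 500;  p = 4;                 % illustrative dimensions
    psi     = randn(N, p);           % row t = psi(t)' (placeholder values)
    epsilon = randn(N, 1);           % prediction errors (placeholder values)
    g = psi \ epsilon;               % least-squares solution of (22)
    % g satisfies (psi'*psi)*g = psi'*epsilon, i.e. the normal equations (21).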

3.2 The Hinging Hyperplanes Block

As mentioned above, the iterative method puts very few restrictions on the representation of the static nonlinearity. We have implemented the method for hinging hyperplanes.

Hinging hyperplanes (HH) is a way of describing piecewise linear functions. It can be regarded as a function expansion of the same family as neural nets and Taylor expansions.

The basis function is called a hinge function, and can be described as being a hyperplane on half the input space and zero on the other. The parameter vector η modifies the slope and the location of the hinge.

In the case of a Wiener model, the nonlinearity has only one input and one output. A one-dimensional hinge function has the equation

    h(x) = \begin{cases} \eta_0 + \eta_1 x & \text{if } x > -\eta_0 / \eta_1 \\ 0 & \text{otherwise} \end{cases}                  (23)

An example is shown in figure 3. A hinging hyperplanes model consists of a sum of hinge functions with different slopes and locations, plus a linear (or affine) part:

    y = \sum_{i=1}^{M} (\eta_{i0} + \eta_{i1} x)\, I_{S_i} + \eta_{00} + \eta_{01} x              (24)

We use the indicator function I_{S_i}, which is 1 if x > -\eta_{i0}/\eta_{i1} and zero otherwise, to express the hinging hyperplanes. If the input signal to the hinges has a larger dimension than one, we get the same expression, but x and η_{ij} will be vectors.
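A minimal sketch of evaluating a one-dimensional hinging hyperplanes model (24) in Matlab; the parameter layout (one row [η_{i0} η_{i1}] per hinge) and the numerical values are illustrative assumptions:

    % Sketch: evaluate a one-dimensional HH model (24) at the points in x
    eta0 = [0.0 0.2];              % affine part [eta_00 eta_01]
    etaH = [-1.0 1.0;              % hinge 1: -1 + x, active for x > 1
             1.0 1.0];             % hinge 2:  1 + x, active for x > -1
    x  = linspace(-3, 3, 61)';
    yh = eta0(1) + eta0(2)*x;                  % affine part
    for i = 1:size(etaH, 1)
        Ii = (x > -etaH(i,1)/etaH(i,2));       % indicator I_{S_i}
        yh = yh + (etaH(i,1) + etaH(i,2)*x) .* Ii;
    end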

To find the HH parameters, a prediction error criterion of the type (2) is minimized:

    V(\eta) = \sum_{t=1}^{N} (y(t) - \hat{y}(t))^2                                                (25)

Figure 3: A hinge function, y = -1 + x if x > 1 and 0 otherwise, i.e. η_0 = -1, η_1 = 1.

where

    \hat{y}(t) = \sum_{i=1}^{M} (\eta_{i0} + \eta_{i1} x(t))\, I_{S_i} + \eta_{00} + \eta_{01} x(t)                               (26)

Our parameter vector is η = (η_{00}\ \eta_{01}\ \dots\ \eta_{M0}\ \eta_{M1}). The derivative of the criterion with respect to η (cf. eq. (7)) is

    V'_\eta = -\sum_{t=1}^{N} 2\,(y(t) - \hat{y}(x))\, \hat{y}'_\eta(x)                           (27)

where

    \hat{y}'_\eta(x) = \begin{pmatrix} 1 & x & (1\ x) I_{S_1} & \dots & (1\ x) I_{S_M} \end{pmatrix}^{T}                          (28)

The hinge function is not smooth with respect to x, but even so, it is differentiable with respect to η. Note that the value of the I_{S_i} depends on x, and on η. This can still be fairly easily calculated in Matlab by constructing an index matrix for each x, where the index matrix tells us which hinges are active.

From equation (14) we know that we also need the derivative of f with respect to x. This might cause some trouble, as f is only piecewise linear in x and thus not differentiable everywhere. One solution to this is to define the derivative at a hinge as the right (or left) derivative. Using the numerical derivative (15) will give reasonable values also at (the few) points this applies to, which is the solution we have adopted.

3.3 Comments on the Implementation

The fact that we are working with nonlinear systems implies that the criterion (2) might have several local minima. The algorithm will converge (see Ljung [2]), but it might converge to a local minimum that is not the global minimum. This may or may not be a problem.

It is always useful to evaluate the obtained model by simulation, to see if it is good enough for its purpose. If so, the local minimum is not a problem; if the model is not good enough, a new try with another initialization will often work better. Also, the simulations described in section 4 suggest that for simple systems, it is possible to get a plot of the nonlinear block and easily judge if the optimal solution is found.

When using hinging hyperplanes, the modeled nonlinearity will always be piecewise linear. This is not a severe restriction. By using small enough pieces, any continuous function can be described to a given accuracy as piecewise linear. Also, a lot of physical systems are naturally described as a Wiener model with a piecewise linear nonlinearity, for example when the linear system is followed by a saturation or a dead-zone. By describing the nonlinearity as piecewise linear, the whole system will in fact be piecewise linear.

We can note that by using OE and HH models, the system will in fact be over-parameterized. A constant gain can be moved from the OE model to the HH model while the total system stays the same. To avoid numerical problems due to this, the parameter b_1 is kept fixed after the first iteration. This is implemented by setting the corresponding component in the search direction to zero.

Another implementation question is how many iterations to use in the optimization of the linear block (θ) before changing to the η optimization. If we iterate between the linear and nonlinear block, and the criterion is decreased in each iteration, we will eventually end up in a (possibly local) minimum, but the optimization time might depend on the number of iterations in each step. The simulations in section 4 have shed some light on this. Another implementation that might have a better convergence rate is suggested in section 5.

4 Simulations

We have tested the method, implemented as described in section 3, with OE and HH models. G is a first order system, and f is a saturation-like function. They are described by the following equations:

    G(q) = \frac{0.3\, q^{-1}}{1 - 0.5\, q^{-1}}                                                  (29)

    f(x) = \begin{cases} 0.1 x - 0.9 & \text{if } x \le -1 \\ x & \text{if } -1 < x \le 1 \\ 0.1 x + 0.9 & \text{if } x > 1 \end{cases}    (30)

As input, we have used a uniformly distributed random signal in the interval [-5, 5] from Matlab's random number generator function. No noise is added to the data.
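A minimal sketch of generating such a data set from the test system (29)-(30); the number of samples below is an arbitrary choice:

    % Sketch: generate noise-free data from the test system (29)-(30)
    N = 1000;                                % arbitrary number of samples
    u = 10*rand(N,1) - 5;                    % uniform input in [-5, 5]
    x = filter([0 0.3], [1 -0.5], u);        % linear block G, equation (29)
    y = x;                                   % nonlinearity f, equation (30):
    y(x <= -1) = 0.1*x(x <= -1) - 0.9;       %   lower saturation branch
    y(x >   1) = 0.1*x(x >   1) + 0.9;       %   upper saturation branch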

In general, the identification works fine. We have tried different numbers of data (from 500 to 5000 data points) and different numbers of parameters, both in the linear and the nonlinear block. The nonlinear parameters are initialized randomly. Depending on this initialization, the minimization might get stuck in a local minimum, but trying another identification with different initial values almost always finds a better solution if this is the case. Two typical cases are shown in figure 4, with a successful identification to the left, and a case of a local minimum to the right.

We can also plot the estimated values of x versus y and ŷ. This is done in figure 5. To the right, the optimization is stuck in a local minimum. The dashed line shows the estimated nonlinearity, which has only one hinge, and at a slightly skewed position. We can also see that the OE model is not completely correct: we have a clutter of output values (y:s) for the same estimated x. In spite of this failure, it is easy to see the shape of the saturation. One could either try to make a model "by hand" from these data, or retry the identification process, since this picture clearly shows that we are in a local minimum.

Some other impressions from the simulations:



• How many steps should be taken in the optimization of the linear model before switching to the nonlinear? The impression is that the criterion is reduced the most in the first iterations. More than five iterations give only small improvements.

Figure 4: Simulations of two estimated models. The true output is drawn with a solid line, the simulated output with a dashed line. To the left the identification is successful (mean square fit 1.6e-10); we cannot distinguish between the true and the estimated output. To the right (mean square fit 0.019) we have stopped in a local minimum.

Figure 5: The identified nonlinearity. Left: successful identification (mean square fit 1.6e-10). Right: convergence to a local minimum (mean square fit 0.019).



• The method works well also if we take only one step in each iteration. This of course requires more steps in the outer loop (changing between the optimization of the linear and the nonlinear model).



• It is useful to iterate between the linear and nonlinear part. An optimization that seems to be stuck in a local minimum might obtain a lower criterion value in the next iteration, after having updated the other parameters. This is true also when only one step is taken in each iteration.

When identifying a Wiener model, we need to select the number of parameters, i.e., the order and time delay of the linear system, and the number of basis functions (hinges) of the nonlinear block. This is a nontrivial issue in itself, but a good start could be a fourth order system and 4-5 hinges. If this behaves well, a simpler model could be tried; if not, a plot of the estimated nonlinearity might give a hint on how many hinges to use.

Although our simulations suggest that a plot of the nonlinearity can tell us if we are stuck in a local minimum, this might be harder for a more complex system. It is therefore of interest to get a good initialization. This is further discussed in section 5.

5 Discussion

We have demonstrated a method for identifying Wiener models by a maximum likelihood estimate, and shown simulation results from an implementation using OE models and hinging hyperplanes. Several interesting questions arise.



• We have implemented the method using separate optimization steps for the linear and nonlinear parameters. The alternative is to use the whole gradient and update both θ and η in each step. The separate optimization has the advantage that it is easy to change the code if we want to use another parameterization. A drawback, however, might be that the convergence is slower. If we only use the gradient in the search, this makes no difference. But with the modified search direction, we have restricted ourselves, and don't use any information from the nonlinear gradient in the linear optimization and vice versa. By optimizing θ and η simultaneously we might get better convergence properties. This will be implemented and studied.



• As an alternative to the separate parameterization of the linear and nonlinear system, we could parameterize the whole system as one. We would then discard the knowledge about the structure of the system, but instead get a more flexible model, which might be useful.



• We have seen that the optimization may stop in a local minimum, not finding the best parameter estimate. In Ljung [2] there is a discussion on the initialization of the OE model. The implemented initialization for the hinging hyperplanes uses random initial values. This way, we can hope for a better initialization the next time if one attempt fails. In further research, different ways to initialize this structure will be studied. We want the initial values to be easy to calculate, and close enough to the global minimum to guarantee convergence. Kalafatis et al. [1] have interesting ideas about this.



• A lot of different nonlinear structures can be used when identifying Wiener models. The maximum likelihood estimate is applicable to most of them. Are some structures better suited to this context than others? And which structures make the most use of this approach? A study of this is soon to be performed.

References

[1] A. D. Kalafatis, L. Wang, and W. R. Cluett. Identification of Wiener-type nonlinear systems in a noisy environment.

[2] Lennart Ljung. System Identification: Theory for the User. Prentice Hall, 1987.

[3] Jonas Sjöberg, Qinghua Zhang, Lennart Ljung, Albert Benveniste, Bernard Delyon, et al. Nonlinear black-box modeling in system identification: a unified overview. Automatica, 31(12):1691-1724, 1995.

[4] Torbjörn Wigren. Recursive prediction error identification using the nonlinear Wiener model. Automatica, 29(4):1011-1025, 1993.
