
Frequency Domain Identification, Subspace methods and Periodic Excitation

T. McKelvey

Dept. of Electrical Engineering
Linköping University
S-581 83 Linköping, Sweden
tomas@isy.liu.se

Presented at the 2nd Russian-Swedish Control Conference, Sankt Petersburg, Russia, August 1995.

Technical Report LiTH-ISY-R-1765

This work was supported by the Swedish Research Council for Engineering Sciences (TFR), which is gratefully acknowledged.

Abstract

Recent frequency domain identification algorithms based on subspace techniques are discussed. The algorithms construct a state-space model by extracting the signal subspace from a matrix built from frequency data. A singular value decomposition plays a key part in the subspace extraction. In contrast to classical iterative parametric optimization techniques, the subspace methods are non-iterative. The use of periodic excitation leads to a leakage-free discrete Fourier transform of the measured data as well as simple noise reduction by averaging.

1 Introduction

System identification is the technique of fitting a linear dynamic model to measured input and output data. Most often the fitted model is represented by the fraction of two polynomials (in the forward shift operator q or the Z-transform variable z) and the polynomial coefficients constitute the parameters. The values of the parameters are found by minimizing a criterion function, often the sum of the squares of the prediction errors [4]. This is known as prediction error minimization (PEM). For most model structures the criterion is a complicated function of the parameters and iterative optimization techniques have to be applied. Some drawbacks with this approach can be identified: the iterative optimization can run into convergence problems with local minima and has a high computational complexity. These problems become more pronounced as the model order increases and the system has lightly damped modes, e.g., a flexible mechanical structure. A polynomial representation of a high order system is also known to be numerically sensitive with respect to small perturbations in the polynomial coefficients.

During the last decade an alternative branch of identification methods for multi-input multi-output systems has emerged. These are commonly known as subspace methods, since the key step in all of them is the extraction of a signal subspace from a matrix constructed from measured data.

From this signal subspace a state-space model is constructed by utilizing the geometrical properties of the subspace. The subspace methods rely on robust numerical linear algebra tools, the singular value decomposition (SVD) and the QR-factorization. The identified state-space model is derived without any iterations, except within the SVD calculation, and it is delivered in a state-space basis which is less sensitive to perturbations in the matrix elements than the polynomial representation. The lack of iterations and the numerical robustness of both the calculations and the representation have opened up the possibility of identifying high order systems with lightly damped modes. The most recent time-domain identification methods can be found in Van Overschee and De Moor [9] and Verhaegen [11]. A comprehensive overview of time domain subspace identification is given by Viberg [12].

This paper will highlight recent subspace based identification algorithms for use with frequency data.

Preliminaries

A discrete time linear system of minimal order n can be represented by a state-space model

    x_{t+1} = A x_t + B u_t
    y_t     = C x_t + D u_t + v_t                                        (1)

where u_t ∈ R^m, x_t ∈ R^n, y_t ∈ R^p and v_t ∈ R^p are the input, state, output and noise signals, respectively. Since the system is of minimal dimension, the extended observability matrix

    O_q = [ C^T  (CA)^T  ...  (CA^{q-1})^T ]^T ∈ R^{qp×n},   q > n        (2)

and the extended controllability matrix

    C_r = [ B  AB  ...  A^{r-1}B ] ∈ R^{n×rm},   r ≥ n                    (3)

both have full rank n. The transfer function of (1) is given by

    G(z) = D + C (zI − A)^{-1} B.                                         (4)
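To make the notation concrete, the following NumPy sketch builds the extended observability matrix (2) and evaluates the transfer function (4). The function names and the use of NumPy are our own illustration, not part of the report.

```python
import numpy as np

def extended_observability(A, C, q):
    """Stack C, CA, ..., CA^(q-1) into O_q of size (q*p, n), as in (2)."""
    blocks = []
    CAk = C.copy()
    for _ in range(q):
        blocks.append(CAk)
        CAk = CAk @ A
    return np.vstack(blocks)

def transfer_function(A, B, C, D, z):
    """Evaluate G(z) = D + C (zI - A)^{-1} B at a scalar frequency point z, as in (4)."""
    n = A.shape[0]
    return D + C @ np.linalg.solve(z * np.eye(n) - A, B)
```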

2 Subspace Identification

Let us assume that a matrix H_qr ∈ R^{pq×mr} can be constructed from data such that the following relation holds

    H_qr = O_q X + V                                                      (5)

where O_q X represents the signal part and V is a matrix composed of the noise. Consider the SVD

    H_qr = [ U_s  U_o ] [ Σ_s  0   ] [ V_s^T ]
                        [ 0    Σ_o ] [ V_o^T ]                            (6)

where U_s ∈ R^{pq×n} and V_s ∈ R^{mr×n} are orthonormal matrices and Σ_s ∈ R^{n×n} is a diagonal matrix containing the n largest singular values. If ‖V‖_F is small compared to the smallest non-zero singular value of O_q X, we have approximately O_q X ≈ U_s Σ_s V_s^T. The range space of the observability matrix is then accurately given by U_s and we can use it as an estimate of the observability matrix. In the noise free case V = 0 we have U_s = O_q T for some non-singular matrix T, which shows that U_s is the extended observability matrix of some realization of the system (1). Let (Â, B̂, Ĉ, D̂) denote the state-space realization associated with U_s. Then the first p rows of U_s are equal to Ĉ. Denote by U_s1 the sub-matrix obtained by deleting the first p rows of U_s, and by U_s2 the corresponding matrix obtained by deleting the last p rows. Then in the noise free case U_s2 Â = U_s1, and a natural estimate of Â is

    Â = U_s2^† U_s1                                                       (7)

where † denotes the Moore-Penrose pseudo-inverse. Given Ĉ and Â, the determination of B̂ and D̂ is straightforward. The transfer function G(z) in (4), as well as the output y_t, is a linear function of B̂ and D̂. A simple choice is a minimization in the least-squares sense

    (B̂, D̂) = arg min_{B,D} Σ_t ‖y_t − ŷ_t‖²_F   or   (B̂, D̂) = arg min_{B,D} Σ_t ‖G(z_t) − Ĝ(z_t)‖²_F        (8)

where G(z_t) and y_t are the measured quantities and Ĝ(z_t) and ŷ_t are the corresponding model quantities. Notice that (8) has an analytical solution.
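As a concrete illustration of the steps above, the following NumPy sketch extracts the signal subspace from a given matrix H and forms the estimates Ĉ and Â via (7). The variable names are ours and the code is a sketch under the assumptions of this section, not the report's implementation; the estimates of B̂ and D̂ would then follow from the linear least-squares problem (8).

```python
import numpy as np

def subspace_realization(H, n, p):
    """Estimate (A, C) from H = O_q X + V, where H has q*p rows and q > n."""
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :n]                       # signal subspace: estimate of range(O_q)
    C_hat = Us[:p, :]                   # first p rows give C (in some state-space basis T)
    # Shift invariance, eq. (7): (U_s with last p rows deleted) * A = (U_s with first p rows deleted)
    A_hat = np.linalg.pinv(Us[:-p, :]) @ Us[p:, :]
    return A_hat, C_hat
```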

3 Frequency Domain Algorithms

3.1 Identification from Transfer Function Measurements

Assume that M samples G_k of the transfer function G(z) are given at equidistant frequencies covering the full unit circle, z_k = e^{j2πk/M}, k = 0, ..., M−1. Construct the coefficients h_k by use of the inverse discrete Fourier transform

    h_k = (1/M) Σ_{s=0}^{M−1} G_s e^{j2πsk/M}.

Let the Hankel matrix H_qr be defined by

    [H_qr]_{st} = h_{s+t−1},   s = 1, ..., q,   t = 1, ..., r.

Straightforward calculations [6] reveal that

    H_qr = O_q (I − A^M)^{−1} C_r + V

where V is the Hankel matrix of the noise contribution. A state-space model can now be estimated along the steps presented in the previous section.
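A minimal sketch of this construction for a scalar system is given below, assuming the M equidistant transfer-function samples are stored in a complex array G; for p, m > 1 the coefficients h_k are p×m blocks and the Hankel matrix is built blockwise. NumPy's ifft uses the same sign and 1/M scaling convention as the formula above; the function name is our own.

```python
import numpy as np

def hankel_from_frequency_samples(G, q, r):
    """Form [H_qr]_{s,t} = h_{s+t-1} (1-indexed) from samples G_k = G(e^{j*2*pi*k/M})."""
    M = len(G)
    h = np.fft.ifft(G)   # h_k = (1/M) * sum_s G_s * exp(j*2*pi*s*k/M)
    # h is M-periodic, so indices beyond M-1 wrap around.
    return np.array([[h[(s + t + 1) % M] for t in range(r)] for s in range(q)])
```

The resulting matrix can be passed directly to the subspace step sketched in Section 2.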

3.2 Identification from the Transform of the Input and Output Signals

If, at arbitrary frequencies, the discrete Fourier transforms of the input and output signals are given, a matrix H_qr with the properties (5) can be constructed by the use of some matrix projections [3, 6, 5].

Let us introduce the notation

    W_α(ω) = [ 1  e^{jω}  e^{j2ω}  ...  e^{j(α−1)ω} ]^T

and the lower triangular block Toeplitz matrix

    Γ_α = [ D            0            ...  0 ]
          [ CB           D            ...  0 ]
          [ ...          ...          ...  ...]
          [ CA^{α−2}B    CA^{α−3}B    ...  D ]                            (9)

and let Y(ω), U(ω) and V(ω) denote the discrete Fourier transforms of y_t, u_t and v_t, respectively. By recursive use of the state-space equations (1) we obtain

    W_α(ω) ⊗ Y(ω) = O_α X(ω) + Γ_α (W_α(ω) ⊗ U(ω)) + W_α(ω) ⊗ V(ω)       (10)

where ⊗ denotes the Kronecker product.

Assume we have samples of the Fourier transforms of the input and output signals at M frequencies ω_k, k = 1, ..., M. Collect these samples in the matrices

    Y = [ W_α(ω_1) ⊗ Y(ω_1)   ...   W_α(ω_M) ⊗ Y(ω_M) ] ∈ C^{αp×M}       (11)

    U = [ W_α(ω_1) ⊗ U(ω_1)   ...   W_α(ω_M) ⊗ U(ω_M) ] ∈ C^{αm×M}       (12)

    V = [ W_α(ω_1) ⊗ V(ω_1)   ...   W_α(ω_M) ⊗ V(ω_M) ] ∈ C^{αp×M}       (13)

    X = [ X(ω_1)   ...   X(ω_M) ] ∈ C^{n×M}.                              (14)

From (10) we arrive at the matrix equation

    Y = O_α X + Γ_α U + V.                                                (15)

The rank of the matrix U is a measure of the excitation of the input signal. In the time domain this represents the concept of persistence of excitation [4]. In the single input case U is of full rank whenever U(ω_k) is non-zero for at least α distinct frequencies. The characterization of the rank condition for the multi-input case is more involved, and from a user's point of view it suffices to check that U is of full rank for the given set of data.
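The following NumPy sketch shows one way the data matrices (11) and (12) could be assembled from DFT samples; the array names, the argument alpha and the helper function are our own illustration under the assumptions stated above.

```python
import numpy as np

def stack_kron(W, Z):
    """Columns W(:, k) kron Z(:, k), i.e. the column structure of (11)-(13)."""
    return np.column_stack([np.kron(W[:, k], Z[:, k]) for k in range(Z.shape[1])])

def build_data_matrices(Y_dft, U_dft, w, alpha):
    """Y_dft: p x M output DFT samples, U_dft: m x M input DFT samples, w: frequencies (length M)."""
    W = np.exp(1j * np.outer(np.arange(alpha), w))   # column k is W_alpha(w_k)
    return stack_kron(W, Y_dft), stack_kron(W, U_dft)
```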

By use of a matrix projection we will construct a matrix with the desired properties (5). A simple algorithm was proposed in [3]. The analysis in [7] shows that this simple algorithm is consistent only under restrictive noise assumptions. Here we will present an algorithm adopted from the time domain technique of instrumental variables [10]. This leads to an algorithm with improved consistency properties.

3.2.1 Matrix Projections

Partition Y, U and V as

    Y = [ Y_p ]      U = [ U_p ]      V = [ V_p ]
        [ Y_f ]          [ U_f ]          [ V_f ].

Following the nomenclature of the time domain case, we call U_p and Y_p the "past" inputs and outputs, respectively, and conformally U_f and Y_f the "future" inputs and outputs. The partition is made such that each sub-matrix has q block rows, so that in total α = 2q block rows are used. The size requirement of this partition is that q > n, where n is the system order. It is straightforward to show that

    Y_f = O_q X_f + Γ_q U_f + V_f                                         (16)

where X_f is given by

    X_f = [ e^{jqω_1} X(ω_1)   e^{jqω_2} X(ω_2)   ...   e^{jqω_M} X(ω_M) ].

We now have the possibility to remove the future inputs by a projection and then use the past inputs in order to remove the noise [10]. Using (16) we get

    Y_f Π⊥_{U_f^H} U_p^H = O_q X_f Π⊥_{U_f^H} U_p^H + V_f Π⊥_{U_f^H} U_p^H          (17)

where Π⊥_{U_f^H} denotes the orthogonal projection onto the null-space of U_f and is given by

    Π⊥_{U_f^H} = I − U_f^H (U_f U_f^H)^{−1} U_f.                          (18)

The inverse in (18) exists whenever U_f has full rank. If we assume the V(ω_k) to be zero mean independent random variables with uniformly bounded second moments,

    E V(ω_k) V(ω_k)^H = R(ω_k) ≤ R̄   for all ω_k,

and assume that U(ω_k) is uniformly bounded and that U_f has full rank, the following relation follows from a standard limit result [2, Theorem 5.1.2]:

    lim_{M→∞} (1/M) V_f Π⊥_{U_f^H} U_p^H = 0   w.p. 1.

Hence we can use

    H_qr = Y_f Π⊥_{U_f^H} U_p^H

as a consistent estimate for the derivation of the observability range space, as outlined in the previous section.
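A sketch of how (17)-(18) could be computed with NumPy is given below, assuming the stacked matrices Y and U of (11)-(12) with α = 2q block rows; the splitting indices and the function name are our own.

```python
import numpy as np

def projected_data_matrix(Y, U, q, p, m):
    """Return H = Y_f * P_perp(U_f^H) * U_p^H as in (17)-(18)."""
    Yf = Y[q * p:, :]                     # "future" outputs (last q block rows)
    Up, Uf = U[:q * m, :], U[q * m:, :]   # "past" and "future" inputs
    M = Y.shape[1]
    # Orthogonal projection onto the null-space of U_f, eq. (18)
    P_perp = np.eye(M) - Uf.conj().T @ np.linalg.solve(Uf @ Uf.conj().T, Uf)
    return Yf @ P_perp @ Up.conj().T

# The resulting matrix plays the role of H_qr in (5) and can be fed to the
# subspace step sketched in Section 2.
```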


3.2.2 Periodic Excitation

If we assume that the input signal is periodic with period M, u(t) = u(t + kM), t, k = 0, 1, 2, ..., the deterministic part of the output of the system will be stationary. It is well known [1] that the ratio between the M-point discrete Fourier transforms of the output and the input then exactly equals the transfer function G(z) at M equidistant points around the unit circle. Hence, by use of periodic excitation we can recover the transfer function without leakage effects. The periodic nature of the output signal also offers an easy way of noise reduction: take the average over several periods. The combination of periodic excitation and the subspace identification step can be shown to be consistent under mild assumptions on the time domain noise signal v_t; see [8, 5] for details.
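As an illustration, assuming a scalar input, that the system has reached its periodic steady state, and that R whole periods of the output have been recorded, the period averaging and the DFT ratio could look as follows; the names and the scalar setting are our own simplification.

```python
import numpy as np

def transfer_function_samples(u_period, y, R):
    """Leakage-free samples G_k = G(e^{j*2*pi*k/M}) from R recorded periods of output data."""
    M = len(u_period)
    # Average the output over the R periods to reduce the noise.
    y_avg = y[:R * M].reshape(R, M).mean(axis=0)
    # Assumes the periodic input excites all M frequency lines (e.g. a full multisine).
    return np.fft.fft(y_avg) / np.fft.fft(u_period)
```

These samples G_k can then be used directly in the Hankel construction of Section 3.1.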

4 Conclusions

This contribution has highlighted how the deterministic part of a linear system can be estimated by the use of periodic excitation, a frequency domain formulation, and a subspace based identification step.

The advantage of subspace methods is their numerical robustness, which facilitates the identification of multivariable high order systems that are lightly damped.

References

[1] E. O. Brigham. The Fast Fourier Transform. Prentice Hall, New Jersey, 1974.

[2] K. L. Chung. A Course in Probability Theory. Academic Press, San Diego, CA, 1974.

[3] K. Liu, R. N. Jacques, and D. W. Miller. Frequency domain structural system identification by observability range space extraction. In Proc. of the American Control Conference, Baltimore, Maryland, volume 1, pages 107–111, June 1994.

[4] L. Ljung. System Identification: Theory for the User. Prentice-Hall, Englewood Cliffs, New Jersey, 1987.

[5] T. McKelvey. Identification of State-Space Models from Time and Frequency Data. PhD thesis, Linköping University, May 1995.

[6] T. McKelvey and H. Akçay. An efficient frequency domain state-space identification algorithm. In Proc. 33rd IEEE Conference on Decision and Control, Lake Buena Vista, Florida, pages 3359–3364, December 1994.

[7] T. McKelvey and H. Akçay. An efficient frequency domain state-space identification algorithm: Robustness and stochastic analysis. In Proc. 33rd IEEE Conference on Decision and Control, Lake Buena Vista, Florida, pages 3348–3353, December 1994.

[8] T. McKelvey and H. Akçay. Subspace based system identification with periodic excitation signals. In Proc. Third European Control Conference, Rome, Italy, 1995. To appear.

[9] P. Van Overschee and B. De Moor. N4SID: Subspace algorithms for the identification of combined deterministic-stochastic systems. Automatica, 30(1):75–93, 1994.

[10] M. Verhaegen. Subspace model identification, Part III: Analysis of the ordinary output-error state space model identification algorithm. Int. J. Control, 58:555–586, 1993.

[11] M. Verhaegen. Identification of the deterministic part of MIMO state space models given in innovations form from input-output data. Automatica, 30(1):61–74, 1994.

[12] M. Viberg. Subspace methods in system identification. In 10th IFAC Symposium on System Identification, volume 1, pages 1–12. IFAC, 1994.
