Model Reduction from an H∞/LMI perspective

A. Helmersson
Department of Electrical Engineering, Linköping University
S-581 83 Linköping, Sweden
tel: +46 13 281622, fax: +46 13 282622
email: andersh@isy.liu.se

September 26, 1994. 1995 ECC.

Abstract

This paper treats the problem of approximating an nth order continuous system by a system of order k < n. Optimal solutions minimizing the H∞ norm exist for k = 0 and k = n − 1. This paper presents an iterative two-step LMI method for improving the H∞ model error compared to Hankel norm reduction. As an intermediate step, the algorithm finds a generalized balanced realization such that the n − k smallest Hankel singular values coincide.

Keywords: LMIs, model reduction, H∞.

1 Introduction

Truncated balanced realization and Hankel norm reduction [6] are well-behaved algorithms for finding reduced order models. In this paper the selection of model is performed using an unweighted H∞ norm as criterion. When reducing a model of order n to order k, the Hankel norm reduced model is optimal if k = n − 1. In other cases the H∞ model error is bounded by the sum of the n − k smallest Hankel singular values. If these values are close to each other, one can expect that the Hankel norm reduction can be improved in some cases.

During recent years linear matrix inequalities (LMIs) have attracted a lot of interest for solving H∞ problems. Since the model reduction problem can be considered as a special case of H∞ design with confined degree of the controller, it seems natural to adopt LMIs for model reduction. The LMI formulation does not yield a convex problem when the degree of the controller is confined, and it is not guaranteed that the solution will converge to a (global) minimum. However, we can expect improvement compared to the Hankel model reduction in most cases (if k < n − 1).


The model reduction algorithm proposed is based on an iterative two-step LMI scheme. The convergence of the algorithm is not analyzed but examples show that both linear (geometric) and sub-linear convergence can occur.

The paper is outlined as follows. In section 2 the results on the optimal Hankel norm reduction are summarized. In section 3 the LMI formulation for H∞ problems is given together with the proposed algorithms, which are then applied to a number of examples in section 4.

1.1 Notations

X^T denotes the transpose of X; X > (≥) 0 a symmetric, positive definite (semidefinite) matrix; diag[X_1, X_2] a block-diagonal matrix composed of X_1 and X_2; rank X the rank of the matrix X; σ̄(X) the maximal singular value of X; and ‖·‖∞ the H∞ norm.

2 Optimal Hankel Norm Reduction

One well-behaved method for model reduction is the optimal Hankel-norm approximation (see [6] for a comprehensive treatment). It is based on balanced realizations and Hankel singular values. The Hankel-norm model reduction starts by finding the balanced realization of the system.

2.1 Balanced Realizations

The Hankel norm reduction and the truncated balanced realization are both based on the balanced realization of a system G(s) = D + C(sI − A)^{-1}B. A balanced realization is characterized by having observability and controllability gramians equal and diagonal, that is

    A^T Σ + Σ A + C^T C = 0
    Σ A^T + A Σ + B B^T = 0

with Σ = diag[σ_1, ..., σ_k, σ_n I_{n−k}]. The diagonal elements of Σ are called Hankel singular values.
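The gramian equations above translate directly into code. The following Python sketch (an illustration using SciPy, not part of the paper) computes the two gramians for an arbitrary small example system and a square-root balancing transformation T under which both gramians become equal and diagonal:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary stable example system (not from the paper): G(s) = C (sI - A)^{-1} B
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Controllability gramian P:  A P + P A^T + B B^T = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
# Observability gramian Q:    A^T Q + Q A + C^T C = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: with P = L L^T and L^T Q L = U diag(hsv^2) U^T,
# the transformation T = L U diag(hsv)^{-1/2} makes both gramians diag(hsv).
L = np.linalg.cholesky(P)
lam, U = np.linalg.eigh(L.T @ Q @ L)
lam, U = lam[::-1], U[:, ::-1]             # sort descending
hsv = np.sqrt(lam)                         # Hankel singular values
T = L @ U / np.sqrt(hsv)
Ti = np.linalg.inv(T)

Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T     # balanced realization
Pb = Ti @ P @ Ti.T                         # transformed gramians:
Qb = T.T @ Q @ T                           # both equal diag(hsv)
print(hsv)
```

The diagonal entries of the balanced gramians are exactly the Hankel singular values used throughout the paper.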

2.2 Upper and lower bounds

According to the theory for Hankel-norm model reduction [6], the H∞ model error, here denoted δ, is bounded by

    δ ≤ σ_{k+1} + ∑_{i=k+r+1}^{n} σ_i,    (1)

where we have assumed that the Hankel singular values σ_i are ordered

    σ_1 ≥ σ_2 ≥ ... ≥ σ_k ≥ σ_{k+1} = ... = σ_{k+r} ≥ σ_{k+r+1} ≥ ... ≥ σ_n ≥ 0.    (2)

For any reduced model of order k we have that

    σ_{k+1} ≤ δ.    (3)

Thus, the Hankel-norm model reduction is optimal in the H∞ sense if k = n − 1, but not necessarily so in the general case.


2.3 Optimal Hankel Norm Reduction

The optimal Hankel norm reduction (see [6] for a complete treatment) is based on the balanced realization, as is truncated model reduction. The H∞ model error in this case improves to σ_n when n − k states are removed, assuming these correspond to coinciding Hankel singular values.

The Hankel model reduction can be performed by the following manipulations. First introduce Σ_1 ∈ R^{k×k} defined by

    diag[Σ_1, σ_n I_{n−k}] = Σ    (4)

and a diagonal matrix Γ_1 defined by

    Γ_1^2 = Σ_1^2 − σ_n^2 I_k.    (5)

We can now write the reduced model Ĝ(s) = D̂ + Ĉ(sI − Â)^{-1}B̂, where

\begin{bmatrix} \hat A & \hat B \\ \hat C & \hat D \end{bmatrix} =
\begin{bmatrix} \Gamma_1^{-1} & 0 \\ 0 & I \end{bmatrix}
\left(
\begin{bmatrix} \Sigma_1 A_{11} \Sigma_1 + \sigma_n^2 A_{11}^T & \Sigma_1 B_1 \\ C_1 \Sigma_1 & D \end{bmatrix}
-
\begin{bmatrix} \Sigma_1 A_{12} + \sigma_n A_{21}^T \\ C_2 \end{bmatrix}
(A_{22} + A_{22}^T)^{-1}
\begin{bmatrix} A_{21} \Sigma_1 + \sigma_n A_{12}^T & B_2 \end{bmatrix}
\right)
\begin{bmatrix} \Gamma_1^{-1} & 0 \\ 0 & I \end{bmatrix}    (6)

where A, B, C and D have been partitioned into a structure corresponding to k (A_{11} ∈ R^{k×k}),

\begin{bmatrix} A_{11} & A_{12} & B_1 \\ A_{21} & A_{22} & B_2 \\ C_1 & C_2 & D \end{bmatrix}
= \begin{bmatrix} A & B \\ C & D \end{bmatrix}.    (7)

It is now straightforward to show that (9) is satisfied with γ = σ_n and

P = \begin{bmatrix} \Sigma_1 & 0 & -\Gamma_1 \\ 0 & \sigma_n I_{n-k} & 0 \\ -\Gamma_1 & 0 & \Sigma_1 \end{bmatrix} > 0.    (8)

Note that it is not necessary to compute the balanced realization explicitly. A numerically better method is given in [10].

3 Theory And Algorithms

3.1 The Bounded Real Lemma

Model reduction can be seen as a special case of an H∞ problem with confined degree. The original system G(s) = D + C(sI − A)^{-1}B of order n is to be approximated by a lower-order system Ĝ(s) = D̂ + Ĉ(sI − Â)^{-1}B̂ of degree k < n such that ‖G − Ĝ‖∞ ≤ γ. Using the Bounded Real Lemma (see e.g. [11]) this is equivalent to the existence of a P > 0 such that

\begin{bmatrix} \tilde A^T P + P \tilde A & P \tilde B & \tilde C^T \\ \tilde B^T P & -\gamma I & \tilde D^T \\ \tilde C & \tilde D & -\gamma I \end{bmatrix} \le 0    (9)


with

\begin{bmatrix} \tilde A & \tilde B \\ \tilde C & \tilde D \end{bmatrix}
= \begin{bmatrix} A & 0 & B \\ 0 & \hat A & \hat B \\ C & -\hat C & D - \hat D \end{bmatrix}.    (10)
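The augmented realization (10) is just a block assembly, and its transfer function is the model error G(s) − Ĝ(s). The Python sketch below (with arbitrary illustrative systems, not taken from the paper) verifies this identity on a small frequency grid:

```python
import numpy as np

def freq_resp(A, B, C, D, w):
    """SISO transfer function D + C (jwI - A)^{-1} B evaluated at s = jw."""
    n = A.shape[0]
    return (C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D).item()

# Original system G (arbitrary 2nd-order example) ...
A = np.diag([-1.0, -5.0]); B = np.array([[1.0], [1.0]])
C = np.array([[0.25, -0.25]]); D = np.array([[0.0]])
# ... and a 1st-order candidate approximation G_hat (also arbitrary)
Ah = np.array([[-1.0]]); Bh = np.array([[1.0]])
Ch = np.array([[0.2]]); Dh = np.array([[0.02]])

# Augmented system (10): At = diag(A, Ah), Bt = [B; Bh],
# Ct = [C, -Ch], Dt = D - Dh
At = np.block([[A, np.zeros((2, 1))], [np.zeros((1, 2)), Ah]])
Bt = np.vstack([B, Bh])
Ct = np.hstack([C, -Ch])
Dt = D - Dh

for w in [0.0, 0.3, 1.0, 7.0, 50.0]:
    err = freq_resp(A, B, C, D, w) - freq_resp(Ah, Bh, Ch, Dh, w)
    aug = freq_resp(At, Bt, Ct, Dt, w)
    assert abs(err - aug) < 1e-12   # augmented response equals G - G_hat
```

The Bounded Real Lemma then turns the norm bound on this augmented system into the matrix inequality (9).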

3.2 A First Algorithm

As can be seen, this matrix inequality is bilinear in P and (Â, B̂, Ĉ, D̂) and cannot be solved directly using an LMI solver. The following iterative two-step algorithm can be used even if it does not guarantee (global) convergence.

Algorithm 3.1 Assume that

    P = \begin{bmatrix} S & N \\ N^T & L \end{bmatrix}.

(i) Start with a Ĝ obtained from, for example, Hankel model reduction or truncated balanced realization.

(ii) Keeping (Â, B̂) constant, minimize γ subject to (9) with respect to (P, Ĉ, D̂).

(iii) Keeping (N, L) constant, minimize γ subject to (9) with respect to (S, Â, B̂, Ĉ, D̂).

(iv) Repeat (ii) and (iii) until the solution converges given some criterion.

This algorithm has been tried on a number of examples of low order (n ≤ 5), see [7].

3.3 Solvability Conditions

The following lemma plays an important role for both LMI-based H∞ synthesis and model reduction.

Lemma 3.1 Given a symmetric matrix Ψ ∈ R^{m×m} and two matrices U and V with m rows and full column rank, consider the problem of finding some matrix Θ of compatible dimensions and of full column rank, such that

    Ψ + U Θ V^T + V Θ^T U^T < 0.    (11)

Denote by U⊥ and V⊥ any matrices such that U⊥^T U = 0, V⊥^T V = 0 and [U U⊥], [V V⊥] are square and invertible. Then (11) is solvable for Θ if and only if

    U⊥^T Ψ U⊥ < 0,    V⊥^T Ψ V⊥ < 0.    (12)

Proof. Necessity of (12) is clear: for instance, pre- and post-multiplying (11) by U⊥^T and U⊥ respectively and using U⊥^T U = 0 yields U⊥^T Ψ U⊥ < 0. For details on the sufficiency part, see [3]. ∎
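The necessity part of the proof can also be illustrated numerically: if Ψ is built so that (11) holds for some Θ, both projections in (12) must come out negative definite. A sketch (illustration only, not from the paper; `null_space` provides orthonormal bases for the annihilators):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
m = 6
U = rng.standard_normal((m, 2))      # full column rank (with probability 1)
V = rng.standard_normal((m, 3))
Theta = rng.standard_normal((2, 3))

# Construct Psi so that (11) holds for this Theta by design:
# Psi + U Theta V^T + V Theta^T U^T = -I < 0
Psi = -np.eye(m) - U @ Theta @ V.T - V @ Theta.T @ U.T

Uperp = null_space(U.T)              # columns span {x : U^T x = 0}
Vperp = null_space(V.T)

# Solvability conditions (12): both projections must be negative definite
lU = np.linalg.eigvalsh(Uperp.T @ Psi @ Uperp)
lV = np.linalg.eigvalsh(Vperp.T @ Psi @ Vperp)
print(lU.max(), lV.max())            # both strictly negative
assert lU.max() < 0 and lV.max() < 0
```

In the projections the cross terms involving Θ vanish because U⊥^T U = 0 and V⊥^T V = 0, which is exactly the mechanism used in the proof above.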


Using this lemma, step (ii) in Algorithm 3.1 can be simplified by removing Ĉ and D̂. To achieve this we start by rewriting (9) as

\begin{bmatrix}
\tilde A^T P + P \tilde A & P \tilde B & \begin{bmatrix} C^T \\ 0 \end{bmatrix} \\
\tilde B^T P & -\gamma I & 0 \\
\begin{bmatrix} C & 0 \end{bmatrix} & 0 & -\gamma I
\end{bmatrix}
+ U \begin{bmatrix} \hat C & \hat D \end{bmatrix} V^T
+ V \begin{bmatrix} \hat C & \hat D \end{bmatrix}^T U^T \le 0    (13)

where, with respect to the block partitioning (x_1, x_2, w, z) of augmented state, input and output,

U = \begin{bmatrix} 0 & 0 & 0 & -I \end{bmatrix}^T,
V = \begin{bmatrix} 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \end{bmatrix}^T.

Using

U_\perp = \begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \\ 0 & 0 & 0 \end{bmatrix},
V_\perp = \begin{bmatrix} I & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & I \end{bmatrix},

we obtain the following lemma.

Lemma 3.2 Let G̃ = G − Ĝ with given Â and B̂. Then there exists an approximation Ĝ such that ‖G̃‖∞ ≤ γ if and only if there exists a P > 0 satisfying

\begin{bmatrix} \tilde A^T P + P \tilde A & P \tilde B \\ \tilde B^T P & -\gamma I \end{bmatrix} \le 0.    (14)

3.4 Model Reduction from an H∞ Control Perspective

The problem of finding a reduced-order model minimizing the H∞ error can be considered as finding a controller of order k for the corresponding H∞ control problem, based on the (augmented) plant defined by

\begin{bmatrix} A & B & 0 \\ C & D & -I \\ 0 & I & 0 \end{bmatrix}.    (15)

Using the LMI-based parametrization of all H∞ controllers given in [4], we arrive at the following equivalence to the existence of a reduced model with an H∞ error less than or equal to γ:

\begin{bmatrix} A R + R A^T & B \\ B^T & -\gamma I \end{bmatrix} \le 0,    (16)

\begin{bmatrix} A^T S + S A & C^T \\ C & -\gamma I \end{bmatrix} \le 0,    (17)

\begin{bmatrix} R & I \\ I & S \end{bmatrix} \ge 0,    (18)

and

    rank(I − RS) ≤ k.    (19)


The inequalities (16) and (17) are equivalent to the Lyapunov inequalities

    A R + R A^T + γ^{-1} B B^T ≤ 0    (20)

and

    A^T S + S A + γ^{-1} C^T C ≤ 0,    (21)

respectively. We interpret R and S as generalized controllability and observability gramians. With the exception of the rank condition these conditions are linear and, thus, the solution set is convex.
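The equivalence between (16) and (20) is a Schur complement argument: since −γI < 0, the block matrix in (16) is negative semidefinite exactly when AR + RA^T + γ^{-1}BB^T ≤ 0. A small numerical illustration (arbitrary example data, not from the paper), where R is chosen as the ordinary controllability gramian so that feasibility switches at γ = 1:

```python
import numpy as np

A = np.diag([-1.0, -2.0])
B = np.array([[1.0], [1.0]])
# R is the ordinary controllability gramian: A R + R A^T + B B^T = 0, so
# AR + RA^T + (1/gamma) BB^T = (1/gamma - 1) BB^T <= 0  iff  gamma >= 1.
R = np.array([[0.5, 1.0 / 3.0], [1.0 / 3.0, 0.25]])
assert np.allclose(A @ R + R @ A.T + B @ B.T, 0)

def lmi16(gamma):
    # Largest eigenvalue of the block matrix in (16)
    M = np.block([[A @ R + R @ A.T, B], [B.T, -gamma * np.eye(1)]])
    return np.linalg.eigvalsh(M).max()

def lyap20(gamma):
    # Largest eigenvalue of the Lyapunov form (20)
    M = A @ R + R @ A.T + B @ B.T / gamma
    return np.linalg.eigvalsh(M).max()

for gamma in [0.5, 0.9, 1.0, 2.0, 10.0]:
    # (16) and (20) are feasible or infeasible together
    assert (lmi16(gamma) <= 1e-10) == (lyap20(gamma) <= 1e-10)
```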

If we can find solutions R and S, we can also find a generalized balanced realization using a similarity transformation such that R = S = γ^{-1}Σ. The rank condition (19) now implies that (the diagonal) γ^{-1}Σ has n − k elements equal to one. We can now use (6) to find the reduced order model of order k; see also [9, 8]. Thus, we have

Theorem 3.1 Assume G is of order n. Then the existence of a Ĝ of order k such that ‖G − Ĝ‖∞ ≤ γ is equivalent to the existence of a generalized balanced realization of G,

    A^T Σ + Σ A + C^T C ≤ 0,
    A Σ + Σ A^T + B B^T ≤ 0,

with Σ = diag[σ_1, ..., σ_k, γ I_{n−k}] > 0.

We can now make the following observations. Except for the trivial case k = n (Ĝ = G), we can discern two special cases of the model reduction problem: k = 0 and k = n − 1. These two can be solved as non-iterative LMI problems since the explicit rank condition can be eliminated. The case k = n − 1 has already been discussed.

For the case k = 0 the rank condition (19) implies that R^{-1} = S, and (16) can be rewritten as

\begin{bmatrix} A^T R^{-1} + R^{-1} A & R^{-1} B \\ B^T R^{-1} & -\gamma I \end{bmatrix} \le 0,    (22)

which together with (17) is an LMI problem with respect to S.

The general problem is harder to solve since no non-iterative method using LMIs has been found. In the algorithm proposed previously, steps (ii) and (iii) are modified.

Algorithm 3.2 Assume that

    P = \begin{bmatrix} S & N \\ N^T & L \end{bmatrix}.

(i) Start with a Ĝ obtained from, for instance, Hankel model reduction or truncated balanced realization.

(ii) Let

    \tilde A = \begin{bmatrix} A & 0 \\ 0 & \hat A \end{bmatrix}    (23)

and

    \tilde B = \begin{bmatrix} B \\ \hat B \end{bmatrix}.    (24)

With (Ã, B̃) constant, minimize γ subject to (17) and

    \begin{bmatrix} \tilde A^T P + P \tilde A & P \tilde B \\ \tilde B^T P & -\gamma I \end{bmatrix} \le 0    (25)

with respect to

    P = \begin{bmatrix} S & N \\ N^T & L \end{bmatrix} > 0.

This step finds an appropriate N, which spans the subspace of the original state-space to be kept in the reduced controller.

(iii) Keeping N constant, minimize γ subject to (17) and (22) with respect to R^{-1} > 0 and L^{-1} > 0 with S = R^{-1} + N L^{-1} N^T. This step finds the best reduced approximation of the original system given the subspace to be kept (defined by N). The reduced order model Ĝ(s) = D̂ + Ĉ(sI − Â)^{-1}B̂ is found via the (generalized) balanced realization and the Hankel model reduction (6).

(iv) Repeat (ii) and (iii) until the solution converges given some criterion.

Note that the exact choice of N is immaterial as long as it spans the same subspace. The degree of freedom in the subspace defined by N is k(n − k). This also explains why we can solve the case k = 0 (and the trivial k = n) more easily, since the degree of freedom in N disappears. In the case k = n − 1 the subspace to be truncated is uniquely defined by the balanced realization.

3.5 Computational Complexity

The computational complexity per iteration is mainly determined by the number of variables involved in the two LMI problems. The first LMI finds a P for the augmented system with n + k states and thus has N = (n + k)(n + k + 1)/2 free variables to be solved for in addition to γ. According to [2] the computational complexity for solving an LMI is typically O(N^{2.1}) for a fixed number of LMI inequalities. The second LMI has R^{-1} and L^{-1} as free variables, that is n(n + 1)/2 + k(k + 1)/2 variables. Both LMIs have roughly between n^2/2 and n^2 variables; the second LMI always has fewer variables than the first one. Thus the complexity per iteration is O(n^{4.2}). This has to be compared with other typical matrix operations such as solving a Lyapunov equation, matrix multiplication and inversion, which are all of order O(n^3).
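The two variable counts can be compared directly; in fact the first LMI always has exactly nk more free variables than the second, since (n + k)(n + k + 1)/2 − n(n + 1)/2 − k(k + 1)/2 = nk. A quick check (illustration, not from the paper):

```python
# Free variables in the two LMI steps (excluding gamma)
def vars_first(n, k):
    # Symmetric P for the augmented system with n + k states
    return (n + k) * (n + k + 1) // 2

def vars_second(n, k):
    # Symmetric R^{-1} (n x n) and L^{-1} (k x k)
    return n * (n + 1) // 2 + k * (k + 1) // 2

print(vars_first(10, 5), vars_second(10, 5))   # -> 120 70

for n in range(2, 21):
    for k in range(1, n):
        # The gap between the two counts is exactly n*k
        assert vars_first(n, k) - vars_second(n, k) == n * k
```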

4 Examples

We will illustrate the algorithm on a number of examples and compare it with the Hankel norm reduction. Four examples are given. The first one shows how to approximate a system with a constant; this only requires one step and is done without iterations. The second example reduces a third order system to first order. For this problem the convergence is quite fast. The third example shows a case where the convergence is slow, which indicates that the algorithm may fail to converge in some applications. The fourth example shows a case where the improvement over the standard Hankel norm reduction is more significant.

4.1 Implementation

In these examples a Matlab implementation employing the LMI-lab by Gahinet and Nemirovskii [5] has been made. This package has a user-friendly and flexible interface for Matlab.

The LMI-based algorithm has been compared to Hankel norm reduction. The following commands in the μ-Analysis and Synthesis Toolbox [1] have been used for obtaining the Hankel norm reduced model:

>> [bal, sv] = sysbal(sys)

where bal is the balanced realization of the system sys and sv contains the Hankel singular values. Then the Hankel-norm reduced model is obtained using the command

>> hank = hankmr(bal, sv, k, 'd')

where k is the order of the reduced system hank.

4.2 Example 1: A Non-Iterative Solution

We start by giving a very simple example of model reduction. We want to replace a dynamical system with a constant (D-matrix). This can be obtained without iterations since R^{-1} = S. We consider the system

    G(s) = 1 / ((s + 1)(s + 5))

with Hankel singular values 0.11236 and 0.012361. Hankel-norm model reduction yields Ĝ_H(s) = 0.1, while the LMI method gives Ĝ_0(s) = 0.08513. The model errors are 0.124722 and 0.115684 respectively.
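The numbers in this example can be reproduced from the gramians of G. The Python sketch below (using SciPy; the diagonal realization from the partial-fraction expansion is one arbitrary choice) computes the Hankel singular values and the k = 0 upper bound σ_1 + σ_2:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Partial fractions: 1/((s+1)(s+5)) = (1/4)/(s+1) - (1/4)/(s+5)
A = np.diag([-1.0, -5.0])
B = np.array([[1.0], [1.0]])
C = np.array([[0.25, -0.25]])

P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability gramian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability gramian
hsv = np.sort(np.sqrt(np.linalg.eigvals(P @ Q).real))[::-1]

print(hsv)        # Hankel singular values: approx [0.11236, 0.012361]
print(hsv.sum())  # upper bound (1) for k = 0: approx 0.124722
```

The Hankel-norm error 0.124722 above is exactly σ_1 + σ_2, i.e. the bound (1) is attained for k = 0.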

4.3 Example 2: A Third-order System

We first consider the third-order system

    G(s) = 1/(s + 1) + 1/(s^2 + s + 4).

We will now try to reduce this to a first order system. The Hankel singular values are 0.71443, 0.19114 and 0.10171. Thus, by reducing the model order to one, the model error cannot be lower than σ_2 = 0.19114. The Hankel-norm model reduction guarantees a maximum H∞ error of ∑ σ_i, where the sum is taken over the removed states; in this case we get 0.2928. The Hankel-norm reduced model is

    Ĝ_H(s) = 1.43514 (1 − 0.106451 s) / (1 + 0.825176 s).

Figure 1: γ-convergence of Example 2 (γ − γ* versus iteration, logarithmic vertical scale).

The LMI reduced model is

    Ĝ_1(s) = 1.41322 (1 − 0.110249 s) / (1 + 0.817918 s).

The model errors are 0.197142 and 0.192139 respectively. The LMI reduced model is only marginally better than the Hankel-norm reduced one (about 2.6%). The Hankel-norm reduced model is in this case quite close to its theoretical bound, and much improvement using other techniques cannot be obtained.
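Both error norms can be reproduced, approximately, by evaluating |G(jω) − Ĝ(jω)| on a dense frequency grid (a sketch, not from the paper; a finite grid only approximates the H∞ norm from below):

```python
import numpy as np

w = np.logspace(-3, 3, 200001)
s = 1j * w

G = 1 / (s + 1) + 1 / (s**2 + s + 4)
G_hankel = 1.43514 * (1 - 0.106451 * s) / (1 + 0.825176 * s)
G_lmi = 1.41322 * (1 - 0.110249 * s) / (1 + 0.817918 * s)

err_hankel = np.abs(G - G_hankel).max()   # approx 0.1971
err_lmi = np.abs(G - G_lmi).max()         # approx 0.1921
print(err_hankel, err_lmi)
assert err_lmi < err_hankel               # LMI model is (marginally) better
```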

In Figure 1 the error convergence versus iteration is shown. The plot gives γ − γ*, where γ* is the minimum value obtained during the iterative search. Usually this is also the value obtained in the last iteration, but in this example we have run the algorithm to the limits of the numerical precision of the computer. The convergence is linear (or geometric) in this case, that is, the remaining error is scaled by an approximately constant factor less than one in each iteration; here the factor is about 0.25. The convergence rate depends on how accurately the LMIs are solved in each iterative step. If the LMI problems, which are themselves solved iteratively, are run to higher accuracy by modifying the stopping criterion, then the convergence of the model reduction iterations is faster. However, it is not known which trade-off offers the best overall computational performance.

In Figure 2 the model errors are shown for the Hankel-norm reduced (dashed line) and the LMI reduced (solid line) models.

4.4 Example 3: Slow Convergence

The next example is somewhat more difficult. The system to be reduced is

    G(s) = 1 / ((s + 1)(s^2 + s + 10)).

Figure 2: Model errors of Example 2 versus frequency. The Hankel model error is given as a dashed line and the LMI reduced model error as a solid line.

The Hankel singular values are 0.07608, 0.04925 and 0.02317. The Hankel model reduction yields

    Ĝ_H(s) = 0.14017 (1 − 0.63286 s) / (1 + 1.801126 s)

with a model error of 0.057330. The LMI method reduces the error close to 0.05, which apparently is the optimal value. The LMI model converges to

    Ĝ_1(s) = (1/20) (1 − s) / (1 + s).

The convergence rate is in this case very slow, probably due to the fact that the original system and the optimally reduced system have a common pole. The convergence rate seems to be sublinear, at least during the first 100 iterations (see Figure 3). After about 350 iterations the error norm is not reduced any longer due to numerical errors. The accuracy in the LMI solver was set to 10^{-12}. The reason for the difficulty is not understood, but one reason could be that the model error converges towards an all-pass second order system

    G(s) − Ĝ_1(s) = (1/20) (s^2 − s + 10) / (s^2 + s + 10),

which is of order 2 only, compared to 5 for the augmented system G̃.
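The all-pass property is easy to confirm: on the imaginary axis the numerator and denominator of (s^2 − s + 10)/(s^2 + s + 10) are complex conjugates, so the error magnitude is exactly 1/20 at every frequency. A numerical check (illustration, not part of the paper's algorithm):

```python
import numpy as np

w = np.logspace(-3, 3, 10001)
s = 1j * w

G = 1 / ((s + 1) * (s**2 + s + 10))
G_lmi = (1.0 / 20.0) * (1 - s) / (1 + s)     # limit model from Example 3
G_hankel = 0.14017 * (1 - 0.63286 * s) / (1 + 1.801126 * s)

err_lmi = np.abs(G - G_lmi)
print(err_lmi.min(), err_lmi.max())          # both approx 0.05 (all-pass)
assert np.allclose(err_lmi, 0.05, atol=1e-9)

err_hankel = np.abs(G - G_hankel).max()      # approx 0.0573
assert err_lmi.max() < err_hankel
```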

4.5 Example 4: A Higher-order Example

Next we will look at an 11th order system:

    G(s) = ((1 − 0.05 s) / (1 + 0.05 s))^{10} · 1 / (1 + 0.1 s).

Figure 3: Model error convergence versus iteration of Example 3 (γ − γ* versus iteration, logarithmic scales).

The Hankel singular values are 0.9716, 0.8946, 0.7885, 0.6723, 0.5586, 0.4530, 0.3567, 0.2687, 0.1874, 0.1106 and 0.03657. Since the singular values are of the same order, we can expect problems using the Hankel norm reduction due to the fact that the upper and lower bounds are far apart.

Reducing the system to fifth order with Hankel model reduction yields an H∞ error of 0.66875. LMI reduction yields 0.51637 after the first iteration and 0.49400 after five iterations.

4.6 Computational Load

The computational load was determined by running a number of LMI reductions of different orders. Each system was reduced to half that order, first by Hankel reduction and then by one iteration of the LMI reduction. The CPU time for the two LMI steps was clocked. The computations were performed on a Sparcstation 5.

4.7 Summary

The model errors obtained for the different examples are summarized in the table below. As can be seen, the improvement in error norm by using LMI reduction is usually not significant. However, the LMI model comes very close to the theoretical bound set by the Hankel singular values. Even though the proposed model reduction algorithm is iterative, most of the improvement over Hankel norm reduction is achieved already after the first iteration.

Figure 4: CPU time per iteration versus model order n (logarithmic scales). The reduced model order is k = n/2. The test was performed on a Sparcstation 5.

    Example   n → k    Hankel     LMI        σ_{k+1}
    1         2 → 0    0.1247     0.1157     0.1124
    2         3 → 1    0.1971     0.1921     0.1911
    3         3 → 1    0.05733    0.05000    0.04925
    4         11 → 5   0.6688     0.4940     0.4530
    4         11 → 7   0.3829     0.2956     0.2687
    4         11 → 8   0.2571     0.2065     0.1874
    4         11 → 9   0.1380     0.1201     0.1106

5 Conclusions

A model reduction algorithm using linear matrix inequalities (LMIs) has been implemented and studied on a number of examples. The algorithm starts from the Hankel-norm reduced model and performs improvements by applying an iterative two-step LMI scheme. The special case of reducing the model to a constant (matrix) with no states (k = 0) can be performed optimally in one step without iterations. The well-known Hankel norm reduction is non-iterative and optimal for k = n − 1. Generally, for k ∈ [1, n − 2], convergence to the global optimum cannot be guaranteed, but upper and lower bounds are provided by the Hankel singular values.

In all examples studied a better approximation in the H∞ sense has been found, provided that the reduced model is of order n − 2 or less, where n is the order of the original model. Often the improvement is not significant (in the order of ten percent), but some cases with more improvement have been found.

The computational load of the LMI algorithm is higher than that of Hankel norm reduction and truncated balanced realization: about O(n^{4.2}) per iteration compared to O(n^3). It is usually not worthwhile to employ the LMI algorithm unless optimal performance is a necessity. In the examples studied the LMI reduced model has been close to the theoretical limit defined by the largest Hankel singular value that was removed (σ_{k+1}).

The convergence of the iterative algorithm has not been analyzed. In the examples studied both linear (geometric) and sub-linear convergence occur. However, already in the first step most of the improvement compared to the Hankel norm reduction is achieved.

The convergence of the iterative algorithm has not been analyzed. In the ex- amples studied both linear (geometric) and sub-linear convergence occur. How- ever, already in the rst step most of the improvement compared to the Hankel norm reduction is achieved.

References

[1] G. Balas, J. Doyle, K. Glover, A. Packard, and R. Smith. μ-Analysis and Synthesis Toolbox for Use with Matlab, User's Guide. The MathWorks, Inc., 1993.

[2] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM Studies in Applied Mathematics. SIAM, 1994.

[3] P. Gahinet and P. Apkarian. A linear matrix inequality approach to H∞ control. To appear in International Journal of Robust and Nonlinear Control.

[4] P. Gahinet and P. Apkarian. An LMI-based parametrization of all H∞ controllers with applications. In Proceedings of the 32nd IEEE Conference on Decision and Control, volume 1, pages 656-661, San Antonio, Texas, December 1993.

[5] P. Gahinet and A. Nemirovskii. LMI-lab: A Package for Manipulating and Solving LMIs, Version 2.0, 1993.

[6] K. Glover. All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds. International Journal of Control, 39(6):1115-1193, 1984.

[7] A. Helmersson. Model reduction using LMIs. In Proceedings of the Conference on Decision and Control, 1994 (accepted paper).

[8] D. Kavranoğlu. A computational scheme for H∞ model reduction. IEEE Transactions on Automatic Control, 39(7):1447-1451, July 1994.

[9] D. Kavranoğlu and M. Bettayeb. Characterization of the solution to the optimal H∞ model reduction problem. Systems & Control Letters, 20(2):99-107, February 1993.

[10] M. Safonov, R. Chiang, and D. Limebeer. Optimal Hankel model reduction for nonminimal systems. IEEE Transactions on Automatic Control, 35(4):496-502, April 1990.

[11] C. Scherer. The Riccati Inequality and State-Space H∞-Optimal Control. Ph.D. dissertation, Universität Würzburg, Germany, 1990.
