Model Reduction from an $H_\infty$/LMI Perspective

A. Helmersson
Department of Electrical Engineering, Linköping University
S-581 83 Linköping, Sweden
tel: +46 13 281622, fax: +46 13 282622, email: andersh@isy.liu.se
September 26, 1994
1995 ECC
Abstract
This paper treats the problem of approximating an $n$th order continuous system by a system of order $k < n$. Optimal solutions minimizing the $H_\infty$ norm exist for $k = 0$ and $k = n-1$. This paper presents an iterative two-step LMI method for improving the $H_\infty$ model error compared to Hankel norm reduction. As an intermediate step, the algorithm finds a generalized balanced realization such that $n-k$ of the Hankel singular values coincide.
Keywords: LMIs, model reduction, $H_\infty$.
1 Introduction
Truncated balanced realization and Hankel norm reduction [6] are well-behaved algorithms for finding reduced order models. In this paper the selection of the model is performed using an unweighted $H_\infty$ norm as criterion. When reducing a model of order $n$ to order $k$, the Hankel norm reduced model is optimal if $k = n-1$. In other cases the $H_\infty$ model error is bounded by the sum of the $n-k$ smallest Hankel singular values. If these values are close to each other, one can expect that the Hankel norm reduction can be improved in some cases.

During recent years linear matrix inequalities (LMIs) have attracted a lot of interest for solving $H_\infty$ problems. Since the model reduction problem can be considered as a special case of $H_\infty$ design with confined degree of the controller, it seems natural to adopt LMIs for model reduction. The LMI formulation does not yield a convex problem when the degree of the controller is confined, and it is not guaranteed that the solution will converge to a (global) minimum. However, we can expect improvement compared to the Hankel model reduction in most cases (if $k < n-1$).
The model reduction algorithm proposed is based on an iterative two-step LMI scheme. The convergence of the algorithm is not analyzed but examples show that both linear (geometric) and sub-linear convergence can occur.
The paper is outlined as follows. In Section 2 the results on optimal Hankel norm reduction are summarized, and in Section 3 the LMI formulation for $H_\infty$ problems is given together with the proposed algorithms. The algorithms are applied to a number of examples in Section 4.
1.1 Notation

$X^T$ denotes the transpose of $X$; $X > (\geq)\ 0$ denotes a symmetric, positive definite (semidefinite) matrix; $\mathrm{diag}[X_1, X_2]$ denotes a block-diagonal matrix composed of $X_1$ and $X_2$; $\mathrm{rank}\,X$ denotes the rank of the matrix $X$; $\bar{\sigma}(X)$ denotes the maximal singular value of $X$; and $\|\cdot\|_\infty$ denotes the $H_\infty$ norm.
2 Optimal Hankel Norm Reduction
One well-behaved method for model reduction is the optimal Hankel-norm approximation (see [6] for a comprehensive treatment). It is based on balanced realizations and Hankel singular values. The Hankel-norm model reduction starts by finding the balanced realization of the system.
2.1 Balanced Realizations
The Hankel norm reduction and the truncated balanced realization are both based on the balanced realization of a system $G(s) = D + C(sI - A)^{-1}B$. A balanced realization is characterized by having observability and controllability gramians equal and diagonal, that is
\[
A^T\Sigma + \Sigma A + C^T C = 0,
\]
\[
A\Sigma + \Sigma A^T + B B^T = 0,
\]
with $\Sigma = \mathrm{diag}[\sigma_1, \ldots, \sigma_k, \sigma_n I_{n-k}]$. The diagonal elements of $\Sigma$ are called Hankel singular values.
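For concreteness, a balanced realization can be computed directly from the two gramians. The following Matlab sketch uses the square-root method; it assumes a stable, minimal realization (A, B, C, D) and that the Control System Toolbox function lyap is available, and it is only an illustration (a numerically better route is referenced at the end of Section 2.3).

% Balancing sketch: assumes (A,B,C,D) stable and minimal, lyap available.
Wc = lyap(A, B*B');               % controllability gramian: A*Wc + Wc*A' + B*B' = 0
Wo = lyap(A', C'*C);              % observability gramian:   A'*Wo + Wo*A  + C'*C = 0
Rc = chol(Wc);  Ro = chol(Wo);    % Wc = Rc'*Rc,  Wo = Ro'*Ro
[~, S, V] = svd(Rc*Ro');          % singular values = Hankel singular values
sig = diag(S);
T   = diag(sig.^(-1/2)) * V' * Ro;     % balancing transformation, x_bal = T*x
Ab  = T*A/T;  Bb = T*B;  Cb = C/T;     % balanced realization with gramians diag(sig)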
2.2 Upper and lower bounds
According to the theory for Hankel-norm model reduction [6], the $H_\infty$ model error, here denoted $\gamma$, is bounded by
\[
\gamma \le \sigma_{k+1} + \sum_{i=k+r+1}^{n} \sigma_i \qquad (1)
\]
where we have assumed that the Hankel singular values $\sigma_i$ are ordered
\[
\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_k \ge \sigma_{k+1} = \cdots = \sigma_{k+r} \ge \sigma_{k+r+1} \ge \cdots \ge \sigma_n \ge 0. \qquad (2)
\]
For any reduced model of order $k$ we have that
\[
\gamma \ge \sigma_{k+1}. \qquad (3)
\]
Thus, the Hankel-norm model reduction is optimal in the $H_\infty$ sense if $k = n-1$, but not necessarily so in the general case.
2.3 Optimal Hankel Norm Reduction
The optimal Hankel norm reduction (see [6] for a complete treatment) is based on the balanced realization, as is truncated model reduction. The $H_\infty$ model error in this case improves to $\sigma_n$ when $n-k$ states are removed, assuming these correspond to coinciding Hankel singular values.

The Hankel model reduction can be performed by the following manipulations. First introduce $\Sigma_1 \in \mathbb{R}^{k\times k}$ and $\sigma$ defined by
\[
\mathrm{diag}[\Sigma_1, \sigma I_{n-k}] = \Sigma \qquad (4)
\]
and a diagonal matrix $\Gamma_1$ defined by
\[
\Gamma_1^2 = \Sigma_1^2 - \sigma^2 I_k. \qquad (5)
\]
We can now write the reduced model $\hat G(s) = \hat D + \hat C(sI - \hat A)^{-1}\hat B$, where
\[
\begin{bmatrix} \hat A & \hat B \\ \hat C & \hat D \end{bmatrix}
=
\begin{bmatrix} \Gamma_1^{-1} & 0 \\ 0 & I \end{bmatrix}
\left(
\begin{bmatrix} \Sigma_1 A_{11}\Sigma_1 + \sigma^2 A_{11}^T & \Sigma_1 B_1 \\ C_1\Sigma_1 & D \end{bmatrix}
-
\begin{bmatrix} \Sigma_1 A_{12} + \sigma A_{21}^T \\ C_2 \end{bmatrix}
(A_{22} + A_{22}^T)^{-1}
\begin{bmatrix} A_{21}\Sigma_1 + \sigma A_{12}^T & B_2 \end{bmatrix}
\right)
\begin{bmatrix} \Gamma_1^{-1} & 0 \\ 0 & I \end{bmatrix}
\qquad (6)
\]
where $A$, $B$, $C$ and $D$ have been partitioned conformably with $k$ ($A_{11} \in \mathbb{R}^{k\times k}$),
\[
\begin{bmatrix} A_{11} & A_{12} & B_1 \\ A_{21} & A_{22} & B_2 \\ C_1 & C_2 & D \end{bmatrix}
=
\begin{bmatrix} A & B \\ C & D \end{bmatrix}. \qquad (7)
\]
It is now straightforward to show that (9) is satisfied with $\gamma = \sigma_n$ and
\[
P = \begin{bmatrix} \Sigma_1 & 0 & -\Gamma_1 \\ 0 & \sigma I_{n-k} & 0 \\ -\Gamma_1 & 0 & \Sigma_1 \end{bmatrix} > 0. \qquad (8)
\]
Note that it is not necessary to compute the balanced realization explicitly. A numerically better method is given in [10].
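To make the construction concrete, the Matlab sketch below applies the reduction formula (6) to a balanced realization whose last $n-k$ Hankel singular values coincide, as assumed in this section. The function name hankred is a hypothetical helper of ours, and the sketch additionally assumes that A22 + A22' is invertible.

function [Ah, Bh, Ch, Dh] = hankred(A, B, C, D, sig, k)
% Sketch of the reduction formula (6): (A,B,C,D) balanced with Hankel
% singular values sig and sig(k+1) = ... = sig(n) = sigma.
n      = length(sig);
sigma  = sig(n);
Sigma1 = diag(sig(1:k));
Gamma1 = sqrtm(Sigma1^2 - sigma^2*eye(k));        % Gamma_1 from (5)
i1 = 1:k;  i2 = k+1:n;
A11 = A(i1,i1);  A12 = A(i1,i2);  A21 = A(i2,i1);  A22 = A(i2,i2);
B1  = B(i1,:);   B2  = B(i2,:);   C1  = C(:,i1);   C2  = C(:,i2);
M   = [Sigma1*A11*Sigma1 + sigma^2*A11', Sigma1*B1; C1*Sigma1, D] ...
    - [Sigma1*A12 + sigma*A21'; C2] / (A22 + A22') * [A21*Sigma1 + sigma*A12', B2];
G   = blkdiag(inv(Gamma1), eye(size(C,1))) * M * blkdiag(inv(Gamma1), eye(size(B,2)));
Ah  = G(1:k, 1:k);      Bh = G(1:k, k+1:end);
Ch  = G(k+1:end, 1:k);  Dh = G(k+1:end, k+1:end);
end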
3 Theory and Algorithms
3.1 The Bounded Real Lemma
Model reduction can be seen as a special case of an $H_\infty$ problem with confined degree. The original system $G(s) = D + C(sI - A)^{-1}B$ of order $n$ is to be approximated by a lower-order system $\hat G(s) = \hat D + \hat C(sI - \hat A)^{-1}\hat B$ of degree $k < n$ such that $\|G - \hat G\|_\infty \le \gamma$. Using the Bounded Real Lemma (see e.g. [11]) this is equivalent to the existence of a $P > 0$ such that
\[
\begin{bmatrix}
\tilde A^T P + P\tilde A & P\tilde B & \tilde C^T \\
\tilde B^T P & -\gamma I & \tilde D^T \\
\tilde C & \tilde D & -\gamma I
\end{bmatrix} \le 0 \qquad (9)
\]
with
\[
\begin{bmatrix} \tilde A & \tilde B \\ \tilde C & \tilde D \end{bmatrix}
=
\begin{bmatrix} A & 0 & B \\ 0 & \hat A & \hat B \\ C & -\hat C & D - \hat D \end{bmatrix}. \qquad (10)
\]
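The error realization (10) is straightforward to form and check numerically. The sketch below estimates $\|G - \hat G\|_\infty$ on a frequency grid; this is only a crude (lower) estimate of the error for a given reduced model, not a substitute for the LMI test (9). It assumes matrices A, B, C, D and Ah, Bh, Ch, Dh for $\hat G$ are available in the workspace.

% Sketch: form (10) and estimate the H-infinity norm of G - Ghat on a grid.
At = blkdiag(A, Ah);
Bt = [B; Bh];
Ct = [C, -Ch];
Dt = D - Dh;
w  = logspace(-3, 3, 500);
g  = zeros(size(w));
for i = 1:numel(w)
    g(i) = norm(Dt + Ct*((1j*w(i)*eye(size(At,1)) - At)\Bt));  % largest singular value
end
gamma_est = max(g);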
3.2 A First Algorithm
As can be seen, this matrix inequality is bilinear in $P$ and $(\hat A, \hat B, \hat C, \hat D)$ and cannot be solved directly using an LMI solver. The following iterative two-step algorithm can be used, even if it does not guarantee (global) convergence.

Algorithm 3.1 Assume that $P = \begin{bmatrix} S & N \\ N^T & L \end{bmatrix}$.

(i) Start with a $\hat G$ obtained from, for example, Hankel model reduction or truncated balanced realization.

(ii) Keeping $(\hat A, \hat B)$ constant, minimize $\gamma$ subject to (9) with respect to $(P, \hat C, \hat D)$.

(iii) Keeping $(N, L)$ constant, minimize $\gamma$ subject to (9) with respect to $(S, \hat A, \hat B, \hat C, \hat D)$.

(iv) Repeat (ii) and (iii) until the solution converges given some criterion.

This algorithm has been tried on a number of examples of low order ($n \le 5$), see [7].
3.3 Solvability Conditions
The following lemma plays an important role for both LMI-based $H_\infty$ synthesis and model reduction.

Lemma 3.1 Given a symmetric matrix $\Psi \in \mathbb{R}^{m\times m}$ and two matrices $U$ and $V$ with $m$ rows and full column rank, consider the problem of finding some matrix $X$ of compatible dimensions such that
\[
\Psi + U X V^T + V X^T U^T < 0. \qquad (11)
\]
Denote by $U_\perp$ and $V_\perp$ any matrices such that $U_\perp^T U = 0$, $V_\perp^T V = 0$, and $[U\ U_\perp]$, $[V\ V_\perp]$ are square and invertible. Then (11) is solvable for $X$ if and only if
\[
U_\perp^T \Psi U_\perp < 0, \qquad V_\perp^T \Psi V_\perp < 0. \qquad (12)
\]

Proof. Necessity of (12) is clear: for instance, pre- and post-multiplying (11) by $U_\perp^T$ and $U_\perp$, respectively, and using $U_\perp^T U = 0$ yields $U_\perp^T \Psi U_\perp < 0$. For details on the sufficiency part, see [3]. $\Box$
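Numerically, suitable matrices $U_\perp$ and $V_\perp$ can be obtained as orthonormal bases of the null spaces of $U^T$ and $V^T$, e.g. in Matlab:

Uperp = null(U');   % columns span the null space of U', so Uperp'*U = 0
Vperp = null(V');   % [U Uperp] and [V Vperp] are then square and invertible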
Using this lemma, step (ii) in Algorithm 3.1 can be simplified by removing $\hat C$ and $\hat D$. To achieve this we start by rewriting (9) as
\[
\begin{bmatrix}
\tilde A^T P + P\tilde A & P\tilde B & \begin{bmatrix} C^T \\ 0 \end{bmatrix} \\
\tilde B^T P & -\gamma I & D^T \\
\begin{bmatrix} C & 0 \end{bmatrix} & D & -\gamma I
\end{bmatrix}
+ U \begin{bmatrix} \hat C & \hat D \end{bmatrix} V^T
+ V \begin{bmatrix} \hat C^T \\ \hat D^T \end{bmatrix} U^T \le 0
\qquad (13)
\]
where, with the block rows ordered as the $n$ states of $G$, the $k$ states of $\hat G$, the inputs and the outputs,
\[
U = \begin{bmatrix} 0 & 0 & 0 & -I \end{bmatrix}^T
\quad\text{and}\quad
V = \begin{bmatrix} 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \end{bmatrix}^T .
\]
Using
\[
U_\perp = \begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \\ 0 & 0 & 0 \end{bmatrix}
\quad\text{and}\quad
V_\perp = \begin{bmatrix} I & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & I \end{bmatrix},
\]
we obtain the following lemma.
Lemma 3.2 Let $\tilde G = G - \hat G$ with $\hat A$ and $\hat B$ given. Then there exists an approximation $\hat G$ (i.e., matrices $\hat C$ and $\hat D$) such that $\|\tilde G\|_\infty \le \gamma$ if and only if there exists a $P > 0$ satisfying
\[
\begin{bmatrix}
\tilde A^T P + P\tilde A & P\tilde B \\
\tilde B^T P & -\gamma I
\end{bmatrix} \le 0
\qquad (14)
\]
together with the condition obtained from $V_\perp$, which involves only the upper left $n\times n$ block of $P$ and coincides with (17) below.
3.4 Model Reduction from an $H_\infty$ Control Perspective
The problem of finding a reduced-order model minimizing the $H_\infty$ error can be considered as finding a controller of order $k$ for the corresponding $H_\infty$ control problem, with the (augmented) plant defined by
\[
\begin{bmatrix}
A & B & 0 \\
C & D & -I \\
0 & I & 0
\end{bmatrix}. \qquad (15)
\]
Using the LMI-based parametrization of all $H_\infty$ controllers given in [4], we arrive at the following equivalence to the existence of a reduced model with an $H_\infty$ error less than or equal to $\gamma$:
\[
\begin{bmatrix} AR + RA^T & B \\ B^T & -\gamma I \end{bmatrix} \le 0, \qquad (16)
\]
\[
\begin{bmatrix} A^T S + SA & C^T \\ C & -\gamma I \end{bmatrix} \le 0, \qquad (17)
\]
\[
\begin{bmatrix} R & I \\ I & S \end{bmatrix} \ge 0, \qquad (18)
\]
and
\[
\mathrm{rank}(I - RS) \le k. \qquad (19)
\]
The inequalities (16) and (17) are equivalent to the Lyapunov inequalities
\[
AR + RA^T + \gamma^{-1}BB^T \le 0 \qquad (20)
\]
and
\[
A^T S + SA + \gamma^{-1}C^T C \le 0, \qquad (21)
\]
respectively. We interpret $R$ and $S$ as generalized controllability and observability gramians. With the exception of the rank condition, these conditions are linear and, thus, the solution set is convex.
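To make the conditions concrete, the Matlab sketch below checks (16)-(19) numerically for given candidates R and S, a level $\gamma$ and an order $k$; finding feasible R and S in the first place requires an LMI solver such as LMI-lab [5]. The tolerance tol is an arbitrary choice.

% Sketch: numerical check of (16)-(19) for given R, S, gamma and k.
tol = 1e-9;
L16 = [A*R + R*A', B;  B', -gamma*eye(size(B,2))];
L17 = [A'*S + S*A, C'; C,  -gamma*eye(size(C,1))];
L18 = [R, eye(size(R)); eye(size(S)), S];
feasible = max(eig(L16)) <= tol && max(eig(L17)) <= tol ...
        && min(eig(L18)) >= -tol ...
        && sum(svd(eye(size(R)) - R*S) > tol) <= k;      % rank condition (19)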
If we can find solutions $R$ and $S$, we can also find a generalized balanced realization using a similarity transformation such that $R = S = \Sigma$. The rank condition (19) now implies that (the diagonal) $\Sigma$ has $n-k$ elements equal to one. We can now use (6) to find the reduced order model of order $k$; see also [9, 8]. Thus, we have
Theorem 3.1 Assume $G$ is of order $n$. Then the existence of a $\hat G$ of order $k$ such that $\|G - \hat G\|_\infty \le \gamma$ is equivalent to the existence of a generalized balanced realization of $G$ satisfying
\[
A^T\Sigma + \Sigma A + \gamma^{-1}C^T C \le 0,
\]
\[
A\Sigma + \Sigma A^T + \gamma^{-1}BB^T \le 0,
\]
with $\Sigma = \mathrm{diag}[\sigma_1, \ldots, \sigma_k, I_{n-k}] > 0$.
We can now make the following observations. Except for the trivial case $k = n$ ($\hat G = G$), we can discern two special cases of the model reduction problem: $k = 0$ and $k = n-1$. These two can be solved as non-iterative LMI problems since the explicit rank condition can be eliminated. The case $k = n-1$ has already been discussed.
For the case $k = 0$ the rank condition (19) implies that $R^{-1} = S$, and (16) can be rewritten as
\[
\begin{bmatrix} A^T R^{-1} + R^{-1} A & R^{-1} B \\ B^T R^{-1} & -\gamma I \end{bmatrix} \le 0, \qquad (22)
\]
which together with (17) is an LMI problem with respect to $S$.
The general problem is harder to solve since no non-iterative method using LMIs has been found. In the algorithm proposed previously, steps (ii) and (iii) are modied.
Algorithm 3.2 Assume that $P = \begin{bmatrix} S & N \\ N^T & L \end{bmatrix}$.

(i) Start with a $\hat G$ obtained from, for instance, Hankel model reduction or truncated balanced realization.

(ii) Let
\[
\tilde A = \begin{bmatrix} A & 0 \\ 0 & \hat A \end{bmatrix} \qquad (23)
\]
and
\[
\tilde B = \begin{bmatrix} B \\ \hat B \end{bmatrix}. \qquad (24)
\]
With $(\tilde A, \tilde B)$ constant, minimize $\gamma$ subject to (17) and
\[
\begin{bmatrix} \tilde A^T P + P\tilde A & P\tilde B \\ \tilde B^T P & -\gamma I \end{bmatrix} \le 0 \qquad (25)
\]
with respect to
\[
P = \begin{bmatrix} S & N \\ N^T & L \end{bmatrix} > 0.
\]
This step finds an appropriate $N$, which spans the subspace of the original state space to be kept in the reduced-order model.

(iii) Keeping $N$ constant, minimize $\gamma$ subject to (17) and (22) with respect to $R^{-1} > 0$ and $L^{-1} > 0$ with $S = R^{-1} + N L^{-1} N^T$. This step finds the best reduced approximation of the original system given the subspace to be kept (defined by $N$). The reduced order model $\hat G(s) = \hat D + \hat C(sI - \hat A)^{-1}\hat B$ is found via the (generalized) balanced realization and the Hankel model reduction (6).

(iv) Repeat (ii) and (iii) until the solution converges given some criterion.
Note that the exact choice of $N$ is immaterial as long as it spans the same subspace. The degree of freedom in the subspace defined by $N$ is $k(n-k)$. This also explains why we can solve the case $k = 0$ (and the trivial $k = n$) more easily, since the degree of freedom in $N$ disappears. In the case $k = n-1$ the subspace to be truncated is uniquely defined by the balanced realization.
3.5 Computational Complexity
The computational complexity per iteration is mainly determined by the number of variables involved in the two LMI problems. The first LMI finds a $P$ for the augmented system with $n + k$ states and thus has $N = (n+k)(n+k+1)/2$ free variables to be solved for, in addition to $\gamma$. According to [2] the computational complexity for solving an LMI is typically $O(N^{2.1})$ for a fixed number of LMI inequalities. The second LMI has $R^{-1}$ and $L^{-1}$ as free variables, that is $n(n+1)/2 + k(k+1)/2$ variables. Both LMIs have roughly between $n^2/2$ and $n^2$ variables; the second LMI always has fewer variables than the first one. Thus the complexity per iteration is $O(n^{4.2})$. This has to be compared with other typical matrix operations such as solving a Lyapunov equation, matrix multiplication and inversion, which are all of the order $O(n^3)$.
4 Examples
We will illustrate the algorithm on a number of examples and compare it with the Hankel norm reduction. Four examples are given. The first one shows how to approximate a system with a constant. This only requires one step and is done without iterations. The second example reduces a third order system to first order. For this problem the convergence is quite fast. The third example shows a case where the convergence is slow, which indicates that the algorithm may fail to converge in some applications. The fourth example shows a case where the improvement over the standard Hankel norm reduction is more significant.
4.1 Implementation
In these examples a Matlab implementation employing LMI-lab by Gahinet and Nemirovskii [5] has been made. This package has a user-friendly and flexible interface for Matlab.

The LMI-based algorithm has been compared to Hankel norm reduction. The following commands in the $\mu$-Analysis and Synthesis Toolbox [1] have been used for obtaining the Hankel norm reduced model:

>> [bal, sv] = sysbal(sys)

where bal is the balanced realization of the system sys and sv contains the Hankel singular values. Then the Hankel-norm reduced model is obtained using the command

>> hank = hankmr(bal, sv, k, 'd')

where k is the order of the reduced system hank.
4.2 Example 1: A Non-Iterative Solution
We start by giving a very simple example of model reduction. We want to replace a dynamical system with a constant (a $D$-matrix). This can be obtained without iterations since $R^{-1} = S$. We consider the system
\[
G(s) = \frac{1}{(s+1)(s+5)}
\]
with Hankel singular values 0.111236 and 0.012361. Hankel-norm model reduction yields $\hat G_H(s) = 0.1$, while the LMI method gives $\hat G_0(s) = 0.08513$. The model errors are 0.124722 and 0.115684, respectively.
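The reported model errors are easy to reproduce approximately on a frequency grid (a crude check only, not the LMI computation itself):

% Sketch: grid-based estimates of the Example 1 model errors.
w  = logspace(-2, 3, 2000);
G  = 1 ./ ((1j*w + 1) .* (1j*w + 5));
eH = max(abs(G - 0.1));        % error of the Hankel-norm constant
e0 = max(abs(G - 0.08513));    % error of the LMI (k = 0) constant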
4.3 Example 2: A Third-order System
We first consider the third-order system
\[
G(s) = \frac{1}{s+1} + \frac{1}{s^2 + s + 4}.
\]
We will now try to reduce this to a first order system. The Hankel singular values are 0.71443, 0.19114 and 0.10171. Thus, by reducing the model order to one, the model error cannot be lower than $\sigma_2 = 0.19114$. The Hankel-norm model reduction guarantees a maximum $H_\infty$ error of $\sum \sigma_i$, where the sum is taken over the removed states. In this case we get 0.2928. The Hankel-norm reduced model is
\[
\hat G_H(s) = 1.43514\,\frac{1 - 0.106451\,s}{1 + 0.825176\,s}.
\]
Figure 1: $\gamma$-convergence of Example 2 ($\gamma - \gamma^*$ versus iteration).
The LMI reduced model is
\[
\hat G_1(s) = 1.41322\,\frac{1 - 0.110249\,s}{1 + 0.817918\,s}.
\]
The model errors are 0.197142 and 0.192139, respectively. The LMI reduced model is only marginally better than the Hankel-norm reduced one (about 2.6%). The Hankel-norm reduced model is in this case quite close to its theoretical bound, and much improvement using other techniques cannot be obtained.

In Figure 1 the error convergence versus iteration is shown. The plot gives $\gamma - \gamma^*$, where $\gamma^*$ is the minimum value obtained during the iterative search. Usually it is also the value obtained in the last iteration, but in this example we have run the algorithm to the extremes of the numerical precision of the computer. The convergence is linear (or geometric) in this case, that is, the remaining error is scaled by an approximately constant factor less than one in each iteration. In this case the factor is about 0.25. The convergence rate depends on how accurately the LMIs are solved in each iterative step. If the LMIs, which are themselves solved iteratively, are solved to higher accuracy by modifying the stopping criterion, then the convergence of the model reduction iterations is faster. However, it is not known which trade-off offers the best overall computational performance.

In Figure 2 the model errors are shown for the Hankel-norm reduced (dashed line) and the LMI reduced (solid line) models.
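For reference, one state-space realization of the Example 2 system, from which the quoted Hankel singular values can be reproduced with the balancing sketch of Section 2.1, is:

% One realization of G(s) = 1/(s+1) + 1/(s^2+s+4).
A = [-1  0  0;
      0 -1 -4;
      0  1  0];
B = [1; 1; 0];
C = [1  0  1];
D = 0;
% The balancing sketch of Section 2.1 applied to (A,B,C,D) gives Hankel
% singular values to be compared with 0.71443, 0.19114 and 0.10171.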
4.4 Example 3: Slow Convergence

The next example is somewhat more difficult. The system to be reduced is
\[
G(s) = \frac{1}{(s+1)(s^2 + s + 10)}.
\]
Figure 2: Model errors of Example 2 (error magnitude versus frequency). The Hankel model error is given as a dashed line and the LMI reduced model error as a solid line.
The Hankel singular values are 0.07608, 0.04925 and 0.02317. The Hankel model reduction yields
\[
\hat G_H(s) = 0.14017\,\frac{1 - 0.63286\,s}{1 + 1.801126\,s}
\]
with a model error of 0.057330. The LMI method reduces the error to close to 0.05, which apparently is the optimal value. The LMI model converges to
\[
\hat G_1(s) = \frac{1}{20}\,\frac{1 - s}{1 + s}.
\]
The convergence rate is in this case very slow, probably due to the fact that the original system and the optimally reduced system have a common pole. The convergence rate seems to be sublinear, at least during the first 100 iterations (see Figure 3). After about 350 iterations the error norm is not reduced any longer due to numerical errors. The accuracy in the LMI solver was set to $10^{-12}$. The reason for the difficulty is not understood, but one reason could be that the model error converges towards the all-pass second order system
\[
G(s) - \hat G_1(s) = \frac{1}{20}\,\frac{s^2 - s + 10}{s^2 + s + 10},
\]
which is of order 2 only, compared to 5 for the augmented system $\tilde G$.
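The claimed all-pass structure of the limiting error is easy to confirm numerically; the following sketch checks that $|G(j\omega) - \hat G_1(j\omega)|$ stays at 1/20 over a frequency grid:

% Sketch: check that the Example 3 limit error is all-pass with gain 1/20.
w   = logspace(-2, 2, 1000);
G   = 1 ./ ((1j*w + 1) .* ((1j*w).^2 + 1j*w + 10));
Gh1 = (1/20) * (1 - 1j*w) ./ (1 + 1j*w);
dev = max(abs(abs(G - Gh1) - 1/20));   % should be close to zero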
4.5 Example 4: A Higher-Order Example

Next we will look at an 11th order system:
\[
G(s) = \left(\frac{1 - 0.05\,s}{1 + 0.05\,s}\right)^{10}
\]