μ Synthesis and LFT Gain Scheduling with Time-Varying Mixed Uncertainties
Anders Helmersson
Department of Electrical Engineering, Linköping University
S-581 83 Linköping, Sweden. tel: +46 13 281622, fax: +46 13 282622, email: andersh@isy.liu.se
December 21, 1994
Abstract
This paper presents the solution to the mixed-μ synthesis problem and shows how to design gain scheduling controllers with linear fractional transformations (LFTs). The system is assumed to have parameter dependencies described by LFTs. The goal of the controller is, with real-time knowledge of the parameters, to provide disturbance and error attenuation. The paper treats both complex and real parametric dependencies. Time-varying uncertainties with bounds on the rate of variation can also be included in the same framework by left and right non-square multipliers that correspond to an augmented uncertainty structure.
Keywords:
Gain scheduling, linear fractional transformations, structured singular values, parameter dependent systems, time-varying systems, μ synthesis.
1 Introduction
During the last couple of years, synthesis methods for gain scheduling using linear fractional transformations (LFTs) and structured singular values have been developed, see e.g. [12, 11, 9]. The idea behind this approach is to let the controller have access to some of the uncertainties or perturbations of the system to be controlled. The design process involves two linear matrix inequalities (LMIs), which are coupled by a third LMI and non-convex rank conditions. This is analogous to the two-Riccati algorithm for solving the H∞ synthesis problem.
In the cited papers [12, 11, 9], the shared uncertainties have been assumed to be complex, possibly repeated, scalar blocks. In this paper the mixed-μ synthesis problem (that is, with both real and complex uncertainties) is elaborated.
The synthesis algorithm has the same structure as in the complex case, with two LMIs coupled by a third LMI and rank conditions. The difference from the complex case is that the blocks corresponding to real uncertainties in the connecting LMI disappear; only the rank conditions remain.

This work was supported by the Swedish National Board for Industrial and Technical Development (NUTEK), which is gratefully acknowledged.
In this paper we will treat how to design controllers for time-invariant and time-varying uncertainties. Uncertainties with bounded ℓ2 gain, that is, time-varying uncertainties without bounds on the rate of variation, are handled by using constant scaling matrices. Time-invariant uncertainties are treated by frequency-dependent scaling matrices. This leads to the so-called D-K iteration, which involves two alternating steps: first finding a scaling D that minimizes σ̄(DMD^{-1}), and then finding a controller that minimizes the H∞ norm of the scaled system.
Recently, methods for bridging the gap between constant and fast-changing uncertainties have been presented [1, 7, 14, 15, 16]. If the rate of change of the uncertainties is bounded, we can reduce the conservativeness in the design by scaling the original system by dynamic left and right multipliers. These multipliers correspond to an augmented uncertainty structure that can include the derivative of the original uncertainty blocks. In this way we can allow for time-varying uncertainties in a unified framework that fits nicely within the standard μ formulation.
The paper is organized as follows. In Section 2 we review the μ analysis concepts and linear fractional transformations (LFTs). Section 3 presents an approach for handling time-varying uncertainties when these have a bounded rate of variation. The synthesis problem is treated in Section 4, both for complex and mixed uncertainties. In Section 5 the synthesis problem is generalized to shared uncertainties by augmenting the uncertainty structure. Both complex and real uncertainties are included. The main results are summarized in Section 6 and some short conclusions are given in Section 7.
1.1 Notations
The notation used is fairly standard. We use I_n to denote a unit matrix of dimension n × n. X* denotes the complex conjugate transpose of X. X > 0 (X ≥ 0) denotes a hermitian (X = X*) positive definite (semidefinite) matrix. X^{-*} = (X*)^{-1}. ker X denotes the null space of X and range X its range. X⊥ denotes a matrix such that ker X⊥ = range X and X⊥X⊥* > 0; note that X⊥ only exists if X has linearly dependent rows and that X⊥ is not unique, but in this paper any choice is acceptable. X† is the Moore-Penrose pseudo-inverse of X. diag[X_1, X_2] is a block-diagonal matrix composed of X_1 and X_2. rank X denotes the rank of the matrix X. herm X = ½(X + X*). 𝒮(·, ·) denotes the Redheffer star product. A ⊗ B denotes the Kronecker product of the matrices A and B. σ̄(X) is the maximal singular value of X.
2 μ-Analysis and LFTs

This section gives a short review of structured singular values and linear fractional transformations (LFTs); see also e.g. [3].

Figure 1: System with uncertainties.

2.1 Definitions

The definition of μ depends upon the underlying block structure of the uncertainties Δ, which could be either real or complex, see Figure 1. See also [18, 19]. For notational convenience we assume that all uncertainty blocks are square. This can be done without loss of generality by adding dummy inputs or outputs.
Given a matrix M ∈ C^{n×n} and two non-negative integers f_R and f_C, with f = f_R + f_C ≤ n, the block structure is an f-tuple of pairs of positive integers

    𝒦_N = [ (n_1, k_1), ..., (n_{f_R}, k_{f_R}), (n_{f_R+1}, k_{f_R+1}), ..., (n_{f_R+f_C}, k_{f_R+f_C}) ]    (1)

where Σ_{i=1}^{f} n_i k_i = n for dimensional compatibility. The block repetition structure is defined by 𝒩 and the basis blocks by 𝒦. The set of allowable perturbations is defined by a set of block-diagonal matrices 𝒳_Δ ⊂ C^{n×n}:

    𝒳_Δ = { Δ = diag[ I_{n_1} ⊗ Δ^R_1, ..., I_{n_{f_R}} ⊗ Δ^R_{f_R}, I_{n_{f_R+1}} ⊗ Δ^C_{f_R+1}, ..., I_{n_{f_R+f_C}} ⊗ Δ^C_{f_R+f_C} ] :
             Δ^R_i ∈ R^{k_i × k_i},  Δ^C_{f_R+i} ∈ C^{k_{f_R+i} × k_{f_R+i}} }    (2)

where ⊗ denotes the Kronecker product:
    A ⊗ B = [ a_11 B   a_12 B   ...   a_1n B
              ...       ...            ...
              a_m1 B   a_m2 B   ...   a_mn B ].
This structure is slightly more general than in e.g. [18, 19], since repeated full blocks are allowed for both real and complex uncertainties.
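The block-diagonal form of (2) can be assembled numerically. A minimal sketch in Python/NumPy, with an assumed illustrative structure (one real 2×2 block repeated twice, one complex scalar repeated three times); the sizes are not from the paper:

```python
import numpy as np

# Illustrative perturbation from the set X_Delta of Eq. (2):
# a real 2x2 block Delta_R repeated n1 = 2 times and a complex
# scalar Delta_C repeated n2 = 3 times, so n = 2*2 + 3*1 = 7.
rng = np.random.default_rng(0)
Delta_R = rng.standard_normal((2, 2))     # Delta^R_1 in R^{k1 x k1}
Delta_C = np.array([[0.3 + 0.4j]])        # Delta^C_2 in C^{1 x 1}

def block_diag(*blocks):
    """diag[B1, B2, ...] assembled by hand (no SciPy needed)."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols), dtype=complex)
    i = j = 0
    for b in blocks:
        out[i:i + b.shape[0], j:j + b.shape[1]] = b
        i += b.shape[0]
        j += b.shape[1]
    return out

# I_{n_i} (x) Delta_i places n_i copies of Delta_i along the diagonal
Delta = block_diag(np.kron(np.eye(2), Delta_R),
                   np.kron(np.eye(3), Delta_C))
print(Delta.shape)   # (7, 7)
```

The Kronecker factor I_{n_i} ⊗ Δ_i is what makes the copies of each basis block identical, which is exactly the repetition structure that the scaling sets below exploit.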
Assuming the uncertainty structure 𝒦_N, the structured singular value of a matrix M ∈ C^{n×n} is defined by

    μ(M) = [ min_{Δ ∈ 𝒳_Δ} { σ̄(Δ) : det(I − ΔM) = 0 } ]^{-1}    (3)

and if no Δ ∈ 𝒳_Δ satisfies det(I − ΔM) = 0, then μ(M) = 0.
2.2 Upper and Lower Bounds
Generally the structured singular value cannot be exactly computed, and instead we have to resort to upper and lower bounds, which are usually sucient for most practical applications. A tutorial review of the complex structured singular value is given in 10].
An upper bound can be determined using convex methods, either involving minimization of singular values with respect to a scaling matrix or by solving a linear matrix inequality (LMI) problem. The upper bound is conservative in the general case, but can be improved by branch and bound schemes.
A lower bound can be found by maximizing the real eigenvalue of a scaled matrix. This bound is nonconservative in the sense that if the true global maximum is found it is equal to . However, since the problem is not convex, we cannot guarantee that we nd the global maximum.
We will here focus on the computation of the upper bound, which we denote μ̄ in order to distinguish it from the true μ function. The upper bound can be computed as a convex optimization problem. For complex uncertainties it is defined by

    μ̄(M) = inf_{D ∈ 𝒟} σ̄(DMD^{-1}) ≥ μ(M)    (4)

where 𝒟 is the set of block-diagonal hermitian matrices that commute with 𝒳_Δ, that is,

    𝒟 = { 0 < D = D* ∈ C^{n×n} : DΔ = ΔD, ∀Δ ∈ 𝒳_Δ }.    (5)

This problem is equivalent to an LMI problem:

    μ̄(M) = inf_{β>0, P ∈ 𝒟} { β : M*PM < β²P }.    (6)

Real uncertainties can be included in the LMI problem for computing the upper bound (see e.g. [4, 18, 19]). We define

    μ̄(M) = inf_{β>0, P ∈ 𝒟, G ∈ 𝒢} { β : M*PM + j(GM − M*G) < β²P }    (7)

where

    𝒢 = { G = G* ∈ C^{n×n} : GΔ = Δ*G, ∀Δ ∈ 𝒳_Δ }.    (8)

Every G ∈ 𝒢 is block diagonal, with zero blocks for complex uncertainties. If we let 𝒢 = {0} in (7), we recover the complex upper bound (6).
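For the simplest structure of two distinct 1×1 complex blocks, the scaling set in (4) reduces to D = diag(1, d) with a single scalar d > 0, so the infimum can be approximated by a crude grid search. The matrix M below is an assumed toy example (a real implementation would solve the LMI (6) instead):

```python
import numpy as np

# Upper bound (4) for a 2x2 matrix with two 1x1 complex blocks:
# D = diag(1, d) commutes with Delta = diag(d1, d2), so we can
# minimize sigma_max(D M D^{-1}) over the single scalar d > 0.
M = np.array([[0.0, 8.0],
              [2.0, 0.0]])

def scaled_sv(d):
    D = np.diag([1.0, d])
    Dinv = np.diag([1.0, 1.0 / d])
    return np.linalg.norm(D @ M @ Dinv, 2)   # largest singular value

# crude log-spaced search; an LMI solver would be used in practice
ds = np.logspace(-3, 3, 2001)
mu_bar = min(scaled_sv(d) for d in ds)
print(round(mu_bar, 2))   # close to 4 = sqrt(8 * 2) for this M
```

For this anti-diagonal M the optimal d balances the two entries, 8/d = 2d, giving μ̄ = √(8·2) = 4.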
We can reformulate (7) as a positive real property:

    μ̄(M) = inf_{β>0, W ∈ 𝒲} { β : herm[ (I + β^{-1}M*) W (I − β^{-1}M) ] > 0 }    (9)

where herm X = ½(X + X*) and 𝒲 = { W = P + jG : P ∈ 𝒟, G ∈ 𝒢 }. Note that herm W > 0 always, and that W = W* for complex uncertainties. Another equivalent reformulation of (7) is

    μ̄(M) = inf_{β>0, D ∈ 𝒟, G ∈ 𝒢} { β : σ̄[ (β^{-1}DMD^{-1} − jG)(I + G²)^{-1/2} ] < 1 }.    (10)
2.3 Linear Fractional Transformations (LFTs)

Suppose M is a complex matrix partitioned as

    M = [ M_11  M_12
          M_21  M_22 ] ∈ C^{(p_1+p_2)×(m_1+m_2)}    (11)

and let Δ_u ∈ C^{m_1×p_1} and Δ_l ∈ C^{m_2×p_2}. The upper and lower linear fractional transformations (LFTs) are defined by

    F_u(M, Δ_u) = M_22 + M_21 Δ_u (I − M_11 Δ_u)^{-1} M_12    (12)

and

    F_l(M, Δ_l) = M_11 + M_12 Δ_l (I − M_22 Δ_l)^{-1} M_21    (13)

respectively. Clearly, the existence of the LFTs depends on the invertibility of I − M_11 Δ_u and I − M_22 Δ_l respectively.
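A minimal sketch of (12)-(13) in Python/NumPy; the block sizes are inferred from the arguments, and the scalar values in the sanity check are assumptions:

```python
import numpy as np

# Upper and lower LFTs, Eqs. (12)-(13); M given by its four blocks.
def lft_upper(M11, M12, M21, M22, Du):
    n = M11.shape[0]
    return M22 + M21 @ Du @ np.linalg.solve(np.eye(n) - M11 @ Du, M12)

def lft_lower(M11, M12, M21, M22, Dl):
    n = M22.shape[0]
    return M11 + M12 @ Dl @ np.linalg.solve(np.eye(n) - M22 @ Dl, M21)

# scalar sanity check: F_u = M22 + M21*Du*M12 / (1 - M11*Du)
M11 = np.array([[0.5]]); M12 = np.array([[1.0]])
M21 = np.array([[1.0]]); M22 = np.array([[0.0]])
Du = np.array([[0.5]])
print(lft_upper(M11, M12, M21, M22, Du)[0, 0])   # 0.5/(1 - 0.25) = 2/3
```

Using `np.linalg.solve` instead of forming the inverse explicitly is the standard numerically preferable way to evaluate the (I − M_11 Δ_u)^{-1} factor.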
The Redheffer star product [17] is a generalization of the LFTs. Assume that Q is partitioned similarly to M. Then the star product is defined by

    𝒮(Q, M) = [ F_l(Q, M_11)                     Q_12 (I − M_11 Q_22)^{-1} M_12
                M_21 (I − Q_22 M_11)^{-1} Q_21   F_u(M, Q_22)                  ].    (14)

Note that the definition above depends on the partitioning of the matrices Q and M. The LFTs can be expressed in the 𝒮 notation as F_u(M, Δ_u) = 𝒮(Δ_u, M) and F_l(M, Δ_l) = 𝒮(M, Δ_l). The star product is associative, that is,

    𝒮(A, 𝒮(B, C)) = 𝒮(𝒮(A, B), C).
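The star product (14) and its associativity can be checked numerically. The sketch below represents a partitioned matrix as a 2×2 tuple of blocks; the numerical values are assumptions chosen so that all inverses are well defined:

```python
import numpy as np

# Redheffer star product, Eq. (14); Q and M are given as 2x2 tuples
# of square blocks of equal size.
def star(Q, M):
    (Q11, Q12), (Q21, Q22) = Q
    (M11, M12), (M21, M22) = M
    n = Q22.shape[0]
    iQM = np.linalg.inv(np.eye(n) - Q22 @ M11)
    iMQ = np.linalg.inv(np.eye(n) - M11 @ Q22)
    return ((Q11 + Q12 @ M11 @ iQM @ Q21,    # F_l(Q, M11)
             Q12 @ iMQ @ M12),
            (M21 @ iQM @ Q21,
             M22 + M21 @ Q22 @ iMQ @ M12))   # F_u(M, Q22)

def blocks(W):   # split a 2x2 matrix into four 1x1 blocks
    return ((W[:1, :1], W[:1, 1:]), (W[1:, :1], W[1:, 1:]))

A = blocks(np.array([[0.1, 0.2], [0.3, 0.4]]))
B = blocks(np.array([[0.5, 0.6], [0.7, 0.8]]))
C = blocks(np.array([[0.2, 0.3], [0.4, 0.5]]))

lhs = star(A, star(B, C))            # S(A, S(B, C))
rhs = star(star(A, B), C)            # S(S(A, B), C)
ok = all(np.allclose(lhs[i][j], rhs[i][j]) for i in range(2) for j in range(2))
print(ok)   # True: the star product is associative
```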
2.4 Frequency transformation

The bilinear transformation between the z-domain and the s-domain is s = (z − 1)/(z + 1), which is given by F_u(N, z^{-1}I) where

    N = [ −I      √2 I
          −√2 I   I    ].

Using this transformation we can map the continuous-time problem to a static problem [3].
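The frequency substitution can be checked in the scalar case. The sign pattern of N below is a reconstruction (the extraction garbled the matrix), chosen so that F_u(N, z^{-1}) reproduces the bilinear map:

```python
import numpy as np

# Frequency substitution s = F_u(N, z^{-1} I), scalar case; the sign
# pattern of N is an assumed reconstruction consistent with the map.
N = np.array([[-1.0, np.sqrt(2)],
              [-np.sqrt(2), 1.0]])

def s_of_z(z):
    w = 1.0 / z
    return N[1, 1] + N[1, 0] * w * (1.0 / (1.0 - N[0, 0] * w)) * N[0, 1]

for z in (2.0, 3.0, 1.5):
    print(abs(s_of_z(z) - (z - 1) / (z + 1)) < 1e-12)   # bilinear map recovered
```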
3 Time-Varying Uncertainties

3.1 A unified approach

We will here adopt a unified approach for including time-varying uncertainties in the μ formalism. We have previously based the computation of the upper bound on the two commuting sets 𝒟 and 𝒢. In the case of time-invariant (constant) uncertainties, either parametric (real) or complex (dynamic), we may have multipliers D and G that are frequency dependent (dynamic). In the case of time-varying uncertainties, including nonlinear elements with bounded ℓ2 gain and time-varying parameters, D and G are generally restricted to constants (with respect to time or frequency).

Both these structures, constant and nonlinear, are extremes. A "constant" parameter in practice is usually not constant but slowly varying, while treating it as a dynamic nonlinear uncertainty is normally too conservative. Thus, it would be of great importance to include other structures in between these two extremes. We will here look at uncertainties that vary with bounds on the rate of change, and present an approach that allows us to include these uncertainties in the standard framework.

We will say that uncertainties Δ that have bounds on Δ̇, ..., Δ^{(m)}, but not on Δ^{(m+1)} or higher derivatives, belong to the class 𝒱_m. Any time-varying uncertainty (without bounds on Δ̇) belongs to 𝒱_0. We can include these time-varying structures into the framework presented here.
3.2 Uncertainty block augmentation

The main idea in this approach to handling slowly time-varying uncertainties is the concept of uncertainty block augmentation. This idea can be used also for constant uncertainties. An uncertainty block can be augmented by adding copies of either the original uncertainty or its derivative; other choices may also be possible.

We have previously used the commutation property as an important tool for improving the upper bound on μ. The commutation property DΔ = ΔD can also be stated as DΔD^{-1} = Δ, since D by definition is nonsingular.

Generally, we need no commuting set; it is enough to find an augmented uncertainty block Δ̃ together with left and right multipliers, Y and Z respectively, such that

    Y Δ̃ Z = Δ.

If this is possible, then the system M can be shown to be stable if σ̄(Δ̃) < 1/μ̄(ZMY).

For a constant uncertainty δ we can augment the uncertainty block by copies of itself: Δ̃ = δI_n. Then we can choose any dynamic multipliers Y and Z such that YZ = 1; thus Y Δ̃ Z = δ.
Example 3.1 Let Δ̃ = diag[δ, δ],

    Y = (1/2) [ (1+2s)/(1+s)   (1+s)/(1+2s) ]    and    Z = [ (1+s)/(1+2s)
                                                              (1+2s)/(1+s) ],

then YZ = 1 and consequently Y Δ̃ Z = δ. □
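Example 3.1 can be verified pointwise; the entries of the multipliers below are a reconstruction from the garbled source, and any pair with Y(s)Z(s) = 1 works:

```python
import numpy as np

# Example 3.1 (reconstructed entries): Y(s) Z(s) = 1 for every s,
# so Y diag[delta, delta] Z = delta.
def Y(s):
    return 0.5 * np.array([(1 + 2*s) / (1 + s), (1 + s) / (1 + 2*s)])

def Z(s):
    return np.array([(1 + s) / (1 + 2*s), (1 + 2*s) / (1 + s)])

for s in (0.0, 1.0j, 5.0 + 2.0j):
    print(abs(Y(s) @ Z(s) - 1.0) < 1e-12)
```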
For slowly time-varying uncertainties we include δ̇ = dδ/dt in the augmented uncertainty block Δ̃. We use s to denote the differential operator d/dt interchangeably with the Laplace argument.
Example 3.2 Let

    Δ̃ = [ δ
          δ̇ ],    Y = [ 1+as   −a ]    and    Z = 1/(1+as).

Let y = x/(1+as). Then

    Y Δ̃ Z x = (1+as)(δy) − a δ̇ y
            = δy + a δ̇ y + a δ ẏ − a δ̇ y
            = δ(y + a ẏ) = δ(1+as)y = δx.

Thus Y Δ̃ Z = δ. □
We can generalize this to the multivariable case, for which we provide the following lemma.

Lemma 3.1 Let Δ = δI where δ is a real or complex scalar. Assume that A has all eigenvalues with negative real part and let the system be initialized to zero at t = −∞. Then Δ = Y Δ̃ Z if either

    (i)  Δ̃ = [ Δ
               Δ̇ ],    Y = [ sI−A   −I ]    and    Z = (sI−A)^{-1},    or

    (ii) Δ̃ = [ Δ   Δ̇ ],    Y = (sI−A)^{-1}    and    Z = [ sI−A
                                                            I    ].

Proof: (i) Let y = (sI−A)^{-1} x. Then

    Y Δ̃ Z x = (sI−A)(Δy) − Δ̇y
            = Δ̇y + Δẏ − AΔy − Δ̇y
            = Δ(ẏ − Ay) = Δ(sI−A)y = Δx.

(ii)

    Y Δ̃ Z x = (sI−A)^{-1} ( Δ(sI−A)x + Δ̇x )
            = (sI−A)^{-1} ( Δẋ − ΔAx + Δ̇x )
            = (sI−A)^{-1} (sI−A)(Δx) = Δx.  □
We can now combine the two results in Lemma 3.1 into the following more general lemma.

Lemma 3.2 Let Δ = δI where δ is a real or complex scalar. Assume that A_1 and A_2 have all eigenvalues with negative real part and let the system be initialized to zero at t = −∞. Then with Δ̃ = diag[Δ, Δ̇] we have Δ = Y Δ̃ Z if

    Y(s) = [ Y_0(s)   Y_1(s) ] = [ (sI−A_1)^{-1}(sI−A_2)   (sI−A_1)^{-1} ]    (15)

and

    Z(s) = [ Z_0(s)
             Z_1(s) ] = [ (sI−A_2)^{-1}(sI−A_1)
                          (sI−A_2)^{-1}(A_1−A_2) ].    (16)
Proof: We expand the Δ-block by using Lemma 3.1 (ii) and (i) in two consecutive steps. First, by (ii) with A = A_1,

    Δ = (sI−A_1)^{-1} [ Δ   Δ̇ ] [ sI−A_1
                                   I      ].

Substituting for the first Δ on the right-hand side using (i) with A = A_2, that is Δ = (sI−A_2)Δ(sI−A_2)^{-1} − Δ̇(sI−A_2)^{-1}, gives

    Δ = (sI−A_1)^{-1}(sI−A_2) Δ (sI−A_2)^{-1}(sI−A_1)
        + (sI−A_1)^{-1} ( Δ̇ − Δ̇ (sI−A_2)^{-1}(sI−A_1) )
      = (sI−A_1)^{-1}(sI−A_2) Δ (sI−A_2)^{-1}(sI−A_1)
        + (sI−A_1)^{-1} Δ̇ (sI−A_2)^{-1}(A_1−A_2)
      = Y [ Δ   0
            0   Δ̇ ] Z.  □
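A partial numerical check of Lemma 3.2: for a constant δ the derivative term vanishes and the identity reduces to Y_0(s) Δ Z_0(s) = Δ at every s. The matrices A_1 and A_2 below are assumed stable test data:

```python
import numpy as np

# Lemma 3.2 sanity check for constant delta (dot-Delta term vanishes):
# Y0(s) Delta Z0(s) = Delta for every s, with stable A1, A2.
rng = np.random.default_rng(2)
n = 3
A1 = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
A2 = -2 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
delta = 0.7
Delta = delta * np.eye(n)

def check(s):
    Y0 = np.linalg.inv(s * np.eye(n) - A1) @ (s * np.eye(n) - A2)
    Z0 = np.linalg.inv(s * np.eye(n) - A2) @ (s * np.eye(n) - A1)
    return np.allclose(Y0 @ Delta @ Z0, Delta)

print(all(check(s) for s in (1.0j, 2.0, 0.5 + 3.0j)))   # True
```

The identity holds because Δ = δI commutes with everything; it is only the time-varying part of δ that the Y_1, Z_1 correction terms have to absorb.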
Remark 3.1 If Δ = δI_k we can generalize the set of scaling matrices by letting Ỹ = CY and Z̃ = ZB, where Y and Z are defined by (15) and (16) respectively and CB = I_k.

Remark 3.2 We can also include Δ̈ and higher-order derivatives in the uncertainty structure. The bounds on these higher-order derivatives of Δ can be included easily in the same formalism, since the augmented block has the same properties as the original uncertainties. Hence, we can augment the Δ̇-block with Δ̈ similarly to the augmentation of Δ with Δ̇.

Remark 3.3 We have assumed that both Δ and Δ̇ are weighted equally in each block. Different weightings can be introduced by scaling. Let Δ̃ = diag[Δ, Δ̇/d] and scale the last block in Y or Z by d. Another approach is to let δI be structured as a diagonal block matrix where each block is of the type δ_i I.

Remark 3.4 The approach taken here is related to the "swapping lemma" [8] used in adaptive control for stability analysis. The swapping lemma states in principle that the difference D(s)Δ − ΔD(s) is bounded. In our approach we solve for the difference explicitly and compensate for it exactly:

    Z_0 Δ − Δ Z_0 = Z_0 Y_1 Δ̇ Z_1,

where we have used the fact that Z_0 = Y_0^{-1}.
Example 3.3 To illustrate the technique we give a simple example. The problem is described by

    M(s) = [ 0               (4+s)/(1+4s)
             (1+4s)/(4+s)    0            ],    Δ = [ δ_1   0
                                                      0     δ_2 ],    δ_1, δ_2 ∈ C,    (17)

for which max_ω μ(M(jω)) = 1 with time-invariant uncertainties. To obtain this we can use the dynamic multipliers

    D(s) = diag[ 1,  (4+s)/(1+4s) ]

since

    σ̄(DMD^{-1}) = σ̄( [ 0  1
                        1  0 ] ) = 1.

Assuming dynamic uncertainties, μ̄(M) = 4 with a constant D = I. In order to make the system unstable using a nonlinear uncertainty at a gain as low as γ > 1/4, we need uncertainty elements that are able to transfer energy from a low frequency to a higher one and vice versa. If the uncertainty elements are slowly varying we may expect a lower value of μ̄, that is, we can tolerate larger uncertainties.

We now assume that the uncertainty δ_2 ∈ 𝒱_1 is slowly time-varying (δ_1 ∈ 𝒱_0 may change without bound on its rate). Use the scalings derived from Lemma 3.2:
    Y = [ 1   0               0
          0   (1+4s)/(4+s)    √(15d)/(2(4+s)) ]    and    Z = [ 1   0
                                                                0   (4+s)/(1+4s)
                                                                0   −√(60d)/(1+4s) ],

then

    Δ = Y Δ̃ Z = Y [ δ_1   0     0
                    0     δ_2   0
                    0     0     δ̇_2/d ] Z

and μ̄(M) ≤ μ̄(ZMY) ≤ √(1 + 15d/4). We can now choose d in order to cope with varying uncertainties. If we choose d to be small, the value will be approximately 1. For instance, let d = 0.1; then μ̄ ≤ 1.1726. Thus the system is stable if |δ_1| < 0.8528, |δ_2| < 0.8528 and |δ̇_2| < 0.0853.

If we choose d to be large, the value increases. For instance, with d = 1 we obtain μ̄ ≤ 2.1794 and thus the system is stable if |δ_1| < 0.4588, |δ_2| < 0.4588 and |δ̇_2| < 0.4588. The bound can be improved further, especially for large values of d, by modifying Y and Z. □
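Example 3.3 can be reproduced numerically with one consistent choice of multipliers from Lemma 3.2 (the exact entries below are a reconstruction and should be read as an assumption; the supremum is attained at ω = 0):

```python
import numpy as np

# Example 3.3 with d = 0.1, reconstructed multiplier entries:
# sup_w sigma_max((Z M Y)(jw)) = sqrt(1 + 15 d / 4).
d = 0.1

def m(s):
    return (4 + s) / (1 + 4 * s)

def ZMY(s):
    Y = np.array([[1, 0, 0],
                  [0, 1 / m(s), np.sqrt(15 * d) / (2 * (4 + s))]])
    Z = np.array([[1, 0],
                  [0, m(s)],
                  [0, -np.sqrt(60 * d) / (1 + 4 * s)]])
    M = np.array([[0, m(s)], [1 / m(s), 0]])
    return Z @ M @ Y

ws = np.concatenate(([0.0], np.logspace(-3, 3, 400)))
mu_bar = max(np.linalg.norm(ZMY(1j * w), 2) for w in ws)
print(round(mu_bar, 4))   # sqrt(1 + 15*0.1/4) = sqrt(1.375) ~ 1.1726
```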
4 μ Synthesis

In this section we treat the mixed-μ synthesis problem, see Figure 2. This is equivalent to minimizing μ̄(𝒮(M, K)) with respect to K, that is, minimizing the structured singular value of M controlled by K. Typically M is an augmented or scaled system.
4.1 An affine problem

We assume that M is partitioned as

    M = [ M_11   M_12
          M_21   M_22 ] ∈ C^{(n+m)×(n+p)}.    (18)

Without restriction we may assume that M_22 = 0 (M is strictly proper). If M_22 ≠ 0, we replace M with

    M̄ = 𝒮(M, N_M) = [ M_11   M_12
                       M_21   0    ]    (19)
Figure 2: The synthesis problem consists of the design task of finding a controller K for the system M such that the structured singular value μ̄(𝒮(M, K)) of the closed-loop system is minimized or falls below a specified value.
with

    N_M = [ 0   I
            I   −M_22 ].    (20)

Thus, if we can find a controller K̄ for the modified system M̄, then K = 𝒮(N_M, K̄) will solve the original problem. Since the star-product inverse of N_M exists for all M_22, the modified problem is equivalent to the original one as long as I − M_22 K is nonsingular.
Due to the simpler structure of M̄, the star product can be rewritten as the following matrix expression, which is affine in K̄:

    𝒮(M̄, K̄) = M_11 + M_12 K̄ M_21 = Q + U K̄ V*,    (21)

provided that det(I − M_22 K) ≠ 0. In order to formally prove the equivalence between the affine problem and the general LFT parametrization we provide the following lemma, which is essentially from [13].
Lemma 4.1

    inf_{K ∈ C^{p×m}, det(I−M_22 K)≠0} σ̄[ (γ^{-1}D(Q + UKV*)D^{-1} − jG)(I + G²)^{-1/2} ]
        = inf_{K ∈ C^{p×m}} σ̄[ (γ^{-1}D(Q + UKV*)D^{-1} − jG)(I + G²)^{-1/2} ].

Proof: For any M_22, the closure of the set { K ∈ C^{p×m} : det(I − M_22 K) ≠ 0 } is C^{p×m}, which shows that the infima are the same. □
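The equivalence behind (19)-(21) can be spot-checked. With N_M of (20), the substitution works out to K = K̄(I + M_22 K̄)^{-1} (this closed form is a derivation, not stated explicitly in the paper); the general LFT with K then equals the affine expression in K̄:

```python
import numpy as np

# Eqs. (19)-(21): absorbing a nonzero M22 via N_M of (20) makes the
# closed loop affine in the new controller Kbar.
rng = np.random.default_rng(3)
n, m, p = 4, 3, 2
M11 = rng.standard_normal((n, n))
M12 = rng.standard_normal((n, p))
M21 = rng.standard_normal((m, n))
M22 = 0.1 * rng.standard_normal((m, p))
Kbar = rng.standard_normal((p, m))

# derived substitution K = Kbar (I + M22 Kbar)^{-1}
K = Kbar @ np.linalg.inv(np.eye(m) + M22 @ Kbar)
lft = M11 + M12 @ K @ np.linalg.inv(np.eye(m) - M22 @ K) @ M21
affine = M11 + M12 @ Kbar @ M21            # S(Mbar, Kbar), Eq. (21)
print(np.allclose(lft, affine))            # True
```

The key identity is (I − M_22 K)^{-1} = I + M_22 K̄, so K(I − M_22 K)^{-1} = K̄ and the LFT collapses to the affine form.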
4.2 Mathematical preliminaries

In this section we give two important and related lemmas to be used for solving the synthesis problems. The lemmas, theorems and proofs are made under the assumption that the matrices are complex. The real case can be obtained by replacing C with R without affecting the validity of the theory.

We first consider an affine optimization problem [2].

Lemma 4.2 Assume that Q ∈ C^{n×n} and that U ∈ C^{n×p} and V ∈ C^{n×m} have full column rank. Then

    inf_{K ∈ C^{p×m}} σ̄(Q + UKV*) < γ    (22)

if and only if

    V⊥(Q*Q − γ²I)V⊥* < 0    (23)

and

    U⊥(QQ* − γ²I)U⊥* < 0.    (24)

Proof: See [2]. □
A related lemma [5, 6] goes as follows.

Lemma 4.3 Given a hermitian matrix Q ∈ C^{n×n} and U and V as above. Then

    Q + UKV* + (UKV*)* < 0    (25)

is solvable for K ∈ C^{p×m} if and only if

    U⊥QU⊥* < 0    and    V⊥QV⊥* < 0.    (26)

Proof: Necessity of (26) is clear: for instance, U⊥U = 0 implies U⊥QU⊥* < 0 when pre- and post-multiplying (25) by U⊥ and U⊥* respectively. For details on the sufficiency part, see [5, 6]. □

In [6] the set of K solving (25) is parametrized.
4.3 Complex μ synthesis

If we restrict the problem to only have complex uncertainties, the following lemma from [12] applies.

Lemma 4.4 Let Q, U and V be given as above. Then

    inf_{K ∈ C^{p×m}, D ∈ 𝒟} σ̄( D(Q + UKV*)D^{-1} ) < γ    (27)

if and only if there exists a P ∈ 𝒟 such that

    V⊥(Q*PQ − γ²P)V⊥* < 0    (28)

and

    U⊥(QP^{-1}Q* − γ²P^{-1})U⊥* < 0.    (29)

Proof: See [12], or use Lemma 4.5 with W = W* = P ∈ 𝒟. □
4.4 Mixed μ synthesis

If both real and complex uncertainties are involved, we have a mixed-μ problem. The following lemma then applies.

Lemma 4.5 Let Q, U and V be given as above. Then

    inf_{K ∈ C^{p×m}, D ∈ 𝒟, G ∈ 𝒢} σ̄[ (γ^{-1}D(Q + UKV*)D^{-1} − jG)(I + G²)^{-1/2} ] < 1    (30)

if and only if there exists a W ∈ 𝒲, where 𝒲 = { W = P + jG : P ∈ 𝒟, G ∈ 𝒢 }, such that

    herm[ V⊥(I + γ^{-1}Q*) W (I − γ^{-1}Q) V⊥* ] > 0    (31)

and

    herm[ U⊥(I + γ^{-1}Q) W^{-1} (I − γ^{-1}Q*) U⊥* ] > 0,    (32)

where herm X = ½(X + X*).
Proof: Let Q̂ = γ^{-1}Q, Q̃ = (DQ̂D^{-1} − jG)(I + G²)^{-1/2}, Ũ = DU, Ṽ* = V*D^{-1}(I + G²)^{-1/2} and K̂ = γ^{-1}K. Then using Lemma 4.2 we obtain

    σ̄(Q̃ + ŨK̂Ṽ*) < 1    (33)

if and only if

    Ũ⊥(Q̃Q̃* − I)Ũ⊥* = U⊥D^{-1}[ (DQ̂D^{-1} − jG)(I + G²)^{-1}(D^{-*}Q̂*D* + jG) − I ]D^{-*}U⊥*
                    = U⊥[ Q̂ P_U Q̂* + j(Q̂ G_U − G_U Q̂*) − P_U ]U⊥*
                    = herm[ U⊥(Q̂ + I)(P_U − jG_U)(Q̂* − I)U⊥* ] < 0

with P_U = D^{-1}(I + G²)^{-1}D^{-*} and G_U = D^{-1}(I + G²)^{-1}GD^{-*}, and

    Ṽ⊥(Q̃*Q̃ − I)Ṽ⊥* = V⊥D*[ (D^{-*}Q̂*D* + jG)(DQ̂D^{-1} − jG) − (I + G²) ]DV⊥*
                    = V⊥[ Q̂* P_V Q̂ + j(G_V Q̂ − Q̂* G_V) − P_V ]V⊥*
                    = herm[ V⊥(Q̂* + I)(P_V + jG_V)(Q̂ − I)V⊥* ] < 0

with P_V = D*D and G_V = D*GD.

Next, by introducing W = P_V + jG_V ∈ 𝒲 and observing that W^{-1} = P_U − jG_U, we can conclude the proof. □
The solution of the mixed-μ synthesis problem is similar to the complex case. Both are solved by finding a common solution to two LMIs: one in W and the other in W^{-1}. In the complex case W = W* > 0, while in the mixed case herm W > 0. The mixed-μ synthesis problem can be viewed as a positive real condition, while the complex case corresponds to the small gain theorem.
5 Shared Uncertainties

In this section we consider the synthesis problem when shared uncertainties are involved. Dynamic controllers can be included in this formalism by treating the frequency as a shared uncertainty.

5.1 Structure of shared uncertainties

Formally we treat this by letting the controller K have access to a copy of the system's uncertainties. Not all blocks in the uncertainty matrix Δ are accessible or shared, and we denote the accessible uncertainties by

    I_ℛ(Δ) = diag[ I_{r_1} ⊗ Δ_1, I_{r_2} ⊗ Δ_2, ..., I_{r_f} ⊗ Δ_f ].    (34)

The structure of the controller-accessible uncertainties is given by

    ℛ = [ r_1, r_2, ..., r_f ].    (35)

Blocks that are not shared have r_i = 0. We also define r = Σ_{i}^{f} r_i k_i.
5.2 The general synthesis problem

The general synthesis problem can be depicted as in Figure 3: find a controller K for the system M (or the augmented system M̃) such that μ̄(𝒮(M, K)) = μ̄(𝒮(M̃, K)) is minimized or falls below a specified value. Figure 3 shows three equivalent representations of the μ-synthesis problem. In the left diagram M and K have separate uncertainty blocks, Δ and I_ℛ(Δ) respectively. In the middle diagram the uncertainties are explicitly shared; here Δ̃ = diag[Δ, I_ℛ(Δ)]. In the right diagram the through-connections between K and Δ have been included in M̃.

Here we use a somewhat more general structure of Δ̃ than used so far. With this formulation it is possible to capture most of the relevant synthesis problems in the μ setting. The generalized structure of Δ̃ is nothing more than a reordering of rows and columns, so that both M and K may share the same uncertainties. For instance, if Δ includes the frequency, either s or z, we can handle continuous-time and discrete-time problems respectively.

The system M̃ is augmented with through-connections between the uncertainty block Δ̃ and K. We assume that M_22 = 0.
    M̃ = [ M̃_11   M̃_12
          M̃_21   0     ]
       = [ M_11   0     M_12   0
           0      0     0      I_r
           M_21   0     0      0
           0      I_r   0      0   ]
       = [ Q     0     U    0
           0     0     0    I_r
           V*    0     0    0
           0     I_r   0    0   ].    (36)
Figure 3: Equivalent representations of a system M controlled by K that has access to a reduced copy I_ℛ(Δ) of the system's uncertainties Δ. The augmented uncertainty block Δ̃ = diag[Δ, I_ℛ(Δ)].
Thus, Ũ = diag[M_12, I_r] and Ṽ* = diag[M_21, I_r]. Then Ũ⊥ = diag[U⊥, 0] = diag[M_12⊥, 0] and Ṽ⊥ = diag[V⊥, 0] = diag[M_21⊥, 0]. Applying Lemma 4.5 we obtain

    herm[ V⊥(I + γ^{-1}Q*) S (I − γ^{-1}Q) V⊥* ] > 0    (37)

and

    herm[ U⊥(I + γ^{-1}Q) R (I − γ^{-1}Q*) U⊥* ] > 0,    (38)

where S, R ∈ C^{n×n} are the upper left blocks of W and W^{-1} respectively:

    W = [ S   ∗
          ∗   ∗ ] ∈ C^{(n+r)×(n+r)}    and    W^{-1} = [ R   ∗
                                                         ∗   ∗ ].

The matrices R and S are mutually constrained by these two equations. Next, we will show that this constraint can be reformulated as a rank condition on I − RS and, in the case of complex uncertainties, an additional LMI.
Before going much further we need to specify the structure of Δ̃, W and W^{-1} more explicitly. The uncertainty matrix Δ̃ is assumed to be a block-diagonal matrix such that

    Δ̃ = diag[ Δ, I_ℛ(Δ) ]
       = diag[ I_{n_1} ⊗ Δ_1, ..., I_{n_f} ⊗ Δ_f, I_{r_1} ⊗ Δ_1, ..., I_{r_f} ⊗ Δ_f ].    (39)

We assume that W has the corresponding block structure:

    W = [ diag[W_11,1, ..., W_11,f]   diag[W_12,1, ..., W_12,f]
          diag[W_21,1, ..., W_21,f]   diag[W_22,1, ..., W_22,f] ].    (40)

For notational convenience, all W_12,i, W_21,i and W_22,i are included even if the corresponding blocks are not shared, in which case they are empty matrices. The structure of W^{-1} is identical to the structure of W.

We now denote

    W_i = [ W_11,i   W_12,i
            W_21,i   W_22,i ].

Specifically, for uncertainties that are not shared we have W_i = W_11,i.

The structure of W_i is given by the type of uncertainty it refers to. We will now look specifically at the two kinds of uncertainties that may be shared: complex and real. They are of the type I_{n_i+r_i} ⊗ Δ_i, where Δ_i ∈ R^{k_i×k_i} or Δ_i ∈ C^{k_i×k_i} respectively. The corresponding structure of W_i is W_i = W̄_i ⊗ I_{k_i}, where W̄_i ∈ C^{(n_i+r_i)×(n_i+r_i)}. For complex uncertainties we have W̄_i = W̄_i* > 0; for real uncertainties W̄_i can be any square matrix such that herm W̄_i > 0.

We will now assume that R and S are block-diagonal matrices with a structure identical to W_11. Depending on the structure of Δ, and consequently of W, the matrices R and S are related to each other. The next two sections treat the two cases of shared uncertainties: complex and real. For notational convenience we drop the suffix i and consider one block at a time. Also, since W_i^{-1} = (W̄_i ⊗ I_{k_i})^{-1} = W̄_i^{-1} ⊗ I_{k_i}, it is no restriction to treat only repeated scalar blocks; for repeated full blocks we substitute W̄ for W.
5.3 Complex uncertainties

If we are dealing with complex uncertainties we restrict W ∈ C^{(n+r)×(n+r)} to be hermitian and positive definite, W = W* = P > 0. Then we have the following lemma from [9].

Lemma 5.1 Suppose that R = R* ∈ C^{n×n} and S = S* ∈ C^{n×n}, with R > 0 and S > 0. Let r be a positive integer. Then there exist matrices N ∈ C^{n×r} and L ∈ C^{r×r} such that L = L* and

    P = [ S    N
          N*   L ] > 0    and    P^{-1} = [ R   ∗
                                            ∗   ∗ ]

if and only if

    [ R     I_n
      I_n   S   ] ≥ 0    and    rank(S − R^{-1}) ≤ r.

Proof: (⇒) Using Schur complements, R = S^{-1} + S^{-1}N(L − N*S^{-1}N)^{-1}N*S^{-1}. Inverting, using the matrix inversion lemma, gives R^{-1} = S − NL^{-1}N*. Hence S − R^{-1} = NL^{-1}N* ≥ 0 and, indeed, rank(S − R^{-1}) = rank(NL^{-1}N*) ≤ r. (⇐) By assumption there is a matrix N ∈ C^{n×r} such that S − R^{-1} = NN*. Defining L = I_r completes the construction. □
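The (⇐) direction of Lemma 5.1 can be checked numerically: construct S and R so that S − R^{-1} = NN*, complete with L = I_r, and verify the inverse block (the matrices below are assumed toy data):

```python
import numpy as np

# Lemma 5.1, direction (<=): with S - R^{-1} = N N* and L = I_r,
# the completion P = [[S, N], [N*, L]] satisfies (P^{-1})_{11} = R.
rng = np.random.default_rng(4)
n, r = 4, 2
N = rng.standard_normal((n, r))
B = rng.standard_normal((n, n))
S = B @ B.T + N @ N.T + np.eye(n)       # ensures S - N N* = B B* + I > 0
R = np.linalg.inv(S - N @ N.T)          # so S - R^{-1} = N N*, rank <= r

P = np.block([[S, N], [N.T, np.eye(r)]])
print(np.linalg.eigvalsh(P).min() > 0,              # P > 0
      np.allclose(np.linalg.inv(P)[:n, :n], R))     # True True
```

The matrix-inversion-lemma identity (P^{-1})_{11} = (S − NL^{-1}N*)^{-1} is what makes the rank-r difference S − R^{-1} the exact obstruction measured by the lemma.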
5.4 Real uncertainties

For real uncertainties we have a more general set of matrices, requiring only herm W > 0. We first state the following lemma.

Lemma 5.2 If herm W > 0 then W is nonsingular and herm(W^{-1}) > 0.

Proof: We first show the nonsingularity of W. Since herm W > 0 it follows that x*(W + W*)x > 0 for all x ≠ 0. Then

    x*(W + W*)x = x*Wx + (x*Wx)* = 2 Re(x*Wx) > 0,    ∀x ≠ 0.

Thus W is nonsingular. Next,

    W^{-*}(herm W)W^{-1} = ½ W^{-*}(W + W*)W^{-1} = herm(W^{-1}) > 0.  □
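Lemma 5.2 can be spot-checked with a random W = P + jG drawn from the set 𝒲 (toy data):

```python
import numpy as np

# Lemma 5.2: herm W > 0 implies W nonsingular and herm(W^{-1}) > 0.
rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))
P = A @ A.T + np.eye(n)                  # P = P* > 0
G = rng.standard_normal((n, n))
G = 0.5 * (G + G.T)                      # hermitian G, as in the set W
W = P + 1j * G                           # herm W = P > 0

herm = lambda X: 0.5 * (X + X.conj().T)
Winv = np.linalg.inv(W)
print(np.linalg.eigvalsh(herm(W)).min() > 0,
      np.linalg.eigvalsh(herm(Winv)).min() > 0)    # True True
```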
We are now ready to state the following lemma connected with real shared uncertainties.

Lemma 5.3 Suppose that R ∈ C^{n×n} and S ∈ C^{n×n}, with herm R > 0 and herm S > 0. Let r be a positive integer. Then there exist matrices N ∈ C^{n×r}, M ∈ C^{r×n} and L ∈ C^{r×r} such that L is nonsingular and

    W = [ S   N
          M   L ],    herm W > 0    and    W^{-1} = [ R   ∗
                                                      ∗   ∗ ]

if and only if

    rank(S − R^{-1}) ≤ r.

Proof: (⇒) Using Schur complements, R = S^{-1} + S^{-1}N(L − MS^{-1}N)^{-1}MS^{-1}. Inverting, using the matrix inversion lemma, gives R^{-1} = S − NL^{-1}M. Hence S − R^{-1} = NL^{-1}M and, indeed, rank(S − R^{-1}) = rank(NL^{-1}M) ≤ r.

(⇐) Define t = rank(S − R^{-1}) ≤ r. Find any M_t ∈ C^{t×n} of full row rank such that S − R^{-1} = KM_t, with K ∈ C^{n×t} having full column rank. Choose N_t as a linear function of L_t by letting N_t = (S − R^{-1})M_t† L_t, where M_t† = M_t*(M_t M_t*)^{-1} ∈ C^{n×t} is the Moore-Penrose inverse of M_t. Then S − R^{-1} = N_t L_t^{-1} M_t. By defining

    M = [ M_t
          0   ],    N = [ N_t   0 ]    and    L = diag[ L_t, I_{r−t} ],

we obtain

    W = [ S     N_t   0
          M_t   L_t   0
          0     0     I_{r−t} ]
      = [ S     0     0
          M_t   0     0
          0     0     I_{r−t} ] + [ (S − R^{-1})M_t†
                                    I_t
                                    0                ] [ 0   L_t   0 ]