Methods for Robust Gain Scheduling

Anders Helmersson

Department of Electrical Engineering

Linköping University

S-581 83 Linköping, Sweden

email: andersh@isy.liu.se


Abstract

This thesis considers the analysis of systems with uncertainties and the design of controllers for such systems. Uncertainties are treated in a relatively broad sense, covering gain-bounded elements that are not known a priori but may be available to the controller in real time.

The uncertainties are in the most general case norm-bounded operators with a given block-diagonal structure. The structure includes parameters, linear time-invariant and time-varying systems as well as nonlinearities. In some applications the controller may have access to the uncertainty, e.g. a parameter that depends on some known condition.

There exist well-known methods for determining stability of systems subject to uncertainties. This thesis is within the framework of structured singular values, also denoted by μ. Given a certain class of uncertainties, μ is the inverse of the size of the smallest uncertainty that causes the system to become unstable. Thus, μ is a measure of the system's "structured gain". In general it is not possible to compute μ exactly, but an upper bound can be determined using efficient numerical methods based on linear matrix inequalities.

An essential contribution in this thesis is a new synthesis algorithm for finding controllers when parametric (real) uncertainties are present. This extends previous results on μ synthesis involving dynamic (complex) uncertainties. Specifically, we can design gain scheduling controllers using the new μ-synthesis theorem, with less conservativeness than previous methods. Also, algorithms for model reduction of uncertainty systems are given.

A gain scheduling controller is a linear regulator whose parameters are changed as a function of the varying operating conditions. By treating nonlinearities as uncertainties, μ methods can be used in gain scheduling design. In the discussion, emphasis is put on how to take into consideration different characteristics of the time-varying properties of the system to be controlled. Also robustness and its relation with gain scheduling are treated.

In order to handle systems with time-invariant uncertainties, both linear systems and constant parameters, a set of scalings and multipliers are introduced. These are matched to the properties of the uncertainties. Also, multipliers for treating uncertainties that are slowly varying, such that the rate of change is bounded, are introduced. Using these multipliers the applicability of the analysis and synthesis results is greatly extended.


Preface

This thesis is the combined result of a personal ten-year experience as a control engineer and a couple of years of scientific research. After my graduation at Lund Institute of Technology in 1981, I moved to Linköping where I worked with missile guidance during my first three professional years.

In 1984 I joined Saab Ericsson Space where I quite soon got involved in trajectory guidance and attitude control systems for sounding rockets, small satellites and shuttles [48, 49, 50, 47]. I also had the opportunity and privilege to participate in the launches of two record-breaking sounding rockets from Esrange, Sweden: Maxus Test reaching an apogee of 532 km and Maxus 1B reaching 717 km, both of which were guided by our control systems.

During this time I used what I believe is the most common approach in the aerospace industry to this kind of problem: to regard gain scheduling as a quasi-stationary problem. This means that the design in each flight condition, or operating point, is treated as a linear static problem, which is solved using traditional tools. I felt somewhat unsatisfied with this, since no analytical tools or methods were available at that time to treat the time-varying aspects of the problems. Simulations were used for showing that the system behaved well when subjected to these changes. We must however bear in mind that this approach has been successful, with very few failures, if any, due to this negligence. However, my hope with the formal approach is to be able to reduce the turn-around time and cost for new control designs by reducing the simulation and test efforts.

During my time as a control engineer I took some graduate courses at the Department of Electrical Engineering at Linköping University. In 1993 I was convinced by Professor Lennart Ljung to join the group at Automatic Control, where I got a part-time position as a Ph.D. student.

Some of the applications in this thesis are inspired by my work at Saab Ericsson Space with guidance and control systems for sounding rockets and small launchers. The problems are, however, not fully realistic, since parameters and models have been modified to protect proprietary rights and the models have been simplified to better show the principles of the applied methods and algorithms.


Acknowledgments

I would like to express my sincere gratitude to the people who made this work possible by their inspiration and contributions.

First of all I would like to thank my supervisor Professor Lennart Ljung for his excellent guidance and inspiring discussions throughout the elaboration of this thesis. He and Professor Torkel Glad have succeeded in creating a stimulating atmosphere in the group. Also, much of the nice spirit is due to our secretary Ulla Salaneck.

I would also like to thank Professor Karl-Johan Åström for drafting me for the automatic control team during my time as an undergraduate student at Lund Institute of Technology.

Ever since the manuscript of this thesis started to grow as an embryo, it has been largely influenced and improved by the many proof-readers: Karin Ståhl Gunnarsson, Dr. Ke Wang Chen, Dr. Inger Klein, Dr. Tomas McKelvey, Dr. Fredrik Gustafsson, Dr. Henrik Jonson and Mats Jirstrand. Their many suggestions and constructive criticisms have greatly contributed to the final touch of the thesis. I also would like to thank Peter Lindskog and Dr. Roger Germundsson for their valuable help and hints on LaTeX, and Magnus Sundstedt for keeping the computers running.

The numerical solutions of the linear matrix inequalities have been performed using LMItool version 2.0, which was provided for free by Dr. Pascal Gahinet, INRIA, Le Chesnay, France.

I am also grateful to Saab Ericsson Space for letting me be on leave for completing the thesis. I would also like to thank the staff at the Linköping office for creating a nice and friendly place to work at.

This work was supported by the Swedish National Board for Industrial and Technical Development (NUTEK), which is gratefully acknowledged.

Finally, I would like to thank my parents Anna and Torsten, as well as my brothers Lars, Karl Gustav and Bengt and their families for their love and support.


Contents

Abstract
Preface
Acknowledgments

1 Introduction
1.1 Background
1.2 Robustness and Structured Singular Values
1.3 Gain Scheduling
1.4 Parameter Variations
1.5 Contributions
1.6 Outline

2 Preliminaries
2.1 Matrices
2.2 Linear Systems
2.2.1 Continuous-Time Linear Systems
2.2.2 Discrete-Time Linear Systems
2.2.3 Similarity Transformations
2.3 Norms
2.3.1 Vector Norms
2.3.2 Singular Value Decomposition
2.3.3 Induced Matrix Norms
2.3.4 Rank and Pseudo-inverse
2.4 Signal Spaces
2.4.1 Lebesgue Spaces
2.4.2 Operators
2.4.3 Induced Norms
2.4.4 Hardy Spaces
2.5 Factorization of Transfer Matrices


2.5.1 Inverse of Transfer Matrices
2.5.2 Factorization of Matrices
2.5.3 The Riccati Equation and Its Solution
2.5.4 Spectral Factorization
2.5.5 Canonical Factorization
2.5.6 The Kalman-Yakubovich-Popov Lemma
2.6 Stability
2.6.1 Small Gain Theorem
2.6.2 Structured Uncertainties
2.6.3 Passivity
2.6.4 Positive Real Transfer Functions
2.7 Performance Bounds
2.8 Matrix Inequalities
2.8.1 Continuous Time
2.8.2 The Riccati Inequality
2.8.3 Linear Matrix Inequalities (LMIs)
2.9 Structured Dynamic Uncertainties
2.9.1 Structured Nonlinear Dynamic Uncertainties
2.9.2 Structured Parametric Uncertainties
2.10 Strictness of Quadratic Lyapunov Functions

3 Linear Matrix Inequalities
3.1 Some Standard LMI Problems
3.2 Interior Point Methods
3.2.1 Analytic Center of an Affine Matrix Inequality
3.2.2 The Path of Centers
3.2.3 Methods of Centers
3.2.4 Primal and Dual Methods
3.2.5 Complexity
3.3 Software Packages
3.4 Nonstrict LMIs
3.5 Some Matrix Problems
3.5.1 Minimizing Matrix Norms
3.5.2 Minimizing Condition Number
3.5.3 Treating Complex-Valued LMIs
3.6 Rank Conditions and Nonlinear Constraints
3.6.1 Convexity and Complexity

4 Model Parametrization
4.1 Linearly Dependent Parametrization
4.1.1 Scalar Uncertainties
4.1.2 Repeated Scalar Blocks
4.2 Linear Fractional Transformations (LFTs)
4.2.1 Upper and Lower LFTs
4.2.2 The Star Product

4.2.3 Conventions and Notations
4.3 Nonlinear Parametrization
4.3.1 Rational Parametrization
4.3.2 Reducing Nonlinear LFT Models
4.4 Block Uncertainties
4.5 Dynamic Uncertainties
4.6 Performance Requirements
4.7 Some Common Uncertainty Structures
4.7.1 Multiplicative and Additive Uncertainties
4.7.2 Coprime Factorization
4.8 Treating the Frequency as an Uncertainty
4.8.1 Continuous-Time Systems
4.8.2 Discrete-Time Systems
4.9 Uncertainty Systems
4.10 Summary

5 Structured Singular Values
5.1 Rationale
5.2 Structured Singular Values
5.2.1 Definitions
5.2.2 Upper and Lower Bounds
5.2.3 Scaling and Multiplier Structures
5.2.4 The Main Loop Theorem
5.2.5 Strictness of Bounds
5.2.6 Connection with Bounded Real Lemma
5.3 Contraction
5.4 Commuting and Convex Sets
5.4.1 Commuting Matrices
5.4.2 Convexity
5.4.3 LMIs with Rank Constraints
5.5 Summary

6 Synthesis
6.1 Problem Formulation
6.2 Solvability of LMIs
6.2.1 Solvability of Strict LMIs
6.2.2 Solvability of Nonstrict LMIs
6.3 μ Synthesis
6.3.1 An Affine Problem
6.3.2 Complex μ Synthesis
6.3.3 Mixed μ Synthesis
6.4 Shared Uncertainties
6.4.1 Structure of Shared Uncertainties
6.4.2 Synthesis with Shared Uncertainties
6.4.3 Complex Uncertainties


6.4.4 Real Uncertainties
6.5 The LFT Gain Scheduling Theorem
6.6 Finding the Controller
6.7 Discussion
6.7.1 Comparing Real and Complex Uncertainties
6.7.2 LPV Synthesis
6.7.3 Conservativeness
6.7.4 Rank Conditions and Convexity
6.7.5 Nonconvex Problems
6.8 Summary

7 Model Reduction
7.1 Model Reduction using LMIs
7.1.1 Gramians and Internal Balancing
7.1.2 Upper and Lower Bounds of Unweighted Approximations
7.1.3 Optimal Hankel Norm Reduction
7.2 LFT Model Reduction
7.2.1 Solving for Ĉ and D̂
7.2.2 Solving for Â and B̂
7.2.3 The Pure Complex Case
7.2.4 Some Algorithms
7.3 Minimality
7.3.1 Reducibility
7.3.2 Verifying Minimality
7.3.3 Comparison with Classical Tests
7.3.4 Discussion
7.4 Summary

8 Scalings and Multipliers
8.1 Uncertainties and Multipliers
8.1.1 Complex and Real Uncertainties
8.1.2 Uncertainty Systems
8.2 Constant Scalings and Multipliers
8.3 Frequency Dependent Scalings
8.3.1 Scalings for Dynamic Uncertainties
8.3.2 Multipliers for Mixed Uncertainties
8.4 State-Space Methods
8.4.1 Dynamic Uncertainties
8.4.2 Mixed Uncertainties
8.4.3 State-Space Conditions
8.5 The D-K Iterations
8.5.1 Generalization of the D-K Algorithm
8.5.2 The Y-Z-K Iterations
8.5.3 D-K Iterations for Gain Scheduling
8.6 Summary

9 Multipliers for Slowly Time-Varying Systems
9.1 Parametrization of the Lyapunov Function
9.1.1 Background
9.1.2 Approach
9.1.3 Connection with Time-Varying Lyapunov Functions
9.1.4 Structure of Uncertainty Augmentation
9.1.5 An Example
9.1.6 Discussion
9.2 Frequency Dependent Scalings
9.2.1 Uncertainty Augmentation
9.2.2 Treating the Uncertainty as an Operator
9.2.3 An Example
9.2.4 Structure of Uncertainty Augmentation
9.3 Summary

10 Gain Scheduling and Robustness
10.1 Gain Scheduling
10.1.1 Aerospace Applications
10.1.2 Parameters
10.1.3 Model Representations
10.2 Linearizing Nonlinear Models
10.2.1 Differential Inclusions
10.2.2 Local Linearization
10.2.3 Global Linearization with Fixed Equilibrium
10.2.4 Moving Reference Point
10.3 Robustness and Gain Scheduling
10.3.1 Robustness Aspects of Gain Scheduling

11 Examples of Applications
11.1 A Rocket Example
11.1.1 Requirements
11.1.2 Complex μ-Design
11.1.3 Mixed μ-Design
11.2 Uncertain Resonant Modes
11.2.1 Discussion
11.3 LFT Gain Scheduling
11.3.1 Discussion
11.4 A Missile Example
11.4.1 The Model
11.4.2 The LPV Model
11.4.3 Discussion

12 Conclusions

Bibliography


Glossary
Notations
Acronyms


1 Introduction

This thesis considers the problem of analysis and design of robust gain scheduling controllers. A robust controller maintains its stability and performance even if the plant to be controlled is uncertain and time-varying. A gain scheduling controller is parametrized by a function of the operating conditions.

Uncertainties are treated in a relatively broad sense, covering parametric and dynamic uncertainties, constant or time-varying, which may or may not be available to the controller in real time. The approach to analysis and synthesis is based on linear fractional transformations and linear matrix inequalities.

This chapter introduces the concepts and gives an outline of the thesis.


1.1 Background

The analysis and design of robust controllers have attracted a large number of researchers for more than a decade. One result that spurred this development was the observation that linear quadratic Gaussian (LQG) controllers can have arbitrarily bad robustness [25].

An important step towards a robust control theory was taken in 1981 when Zames introduced the H∞ control theory [97]. It was soon generalized to also include spatial structure. In 1981 Doyle and Safonov independently introduced similar concepts for this: Doyle called it structured singular values or μ [26], while Safonov called it stability margins Km of diagonally perturbed systems [81].

The H∞ synthesis, which is a fundamental tool for robust design, was a rather difficult problem until the advent of the two-Riccati-equation method [28] in 1988. Robust design tools then became easier to use and found their way into many new applications.

Relatively soon thereafter, linear matrix inequalities (LMIs) were found to be very well suited for formulating and solving control problems, including H∞ analysis and synthesis. Generalizations to more general control problems, such as gain scheduling synthesis, are possible [30]. In parallel with the theoretical results, numerical methods for solving LMIs efficiently were developed and made available.

1.2 Robustness and Structured Singular Values

The approach adopted in this thesis is based on structured singular values, also denoted by μ. Consider a time-invariant stable system G that is disturbed by an uncertainty element Δ, which is block diagonal. The blocks in Δ may correspond to disturbances, performance requirements, nonlinearities and parameter variations.

[Figure: block diagram of the plant G in a feedback loop with the uncertainty block Δ, with input u and output y.]

In the μ analysis we pose the question: which is the "smallest" Δ that can make the system G unstable? The size of Δ is taken as the maximum singular value for parametric uncertainties, or as the L2- or l2-induced norm for dynamic uncertainties.

If constant uncertainties are considered we can employ the Nyquist criterion for determining stability. For single-input-single-output systems it states that the closed-loop system is stable if the Nyquist contour, i.e. the graph of 1 − ΔG(s), does not encircle the origin as the argument s traverses the imaginary axis. This can be generalized to multivariable systems, in which case the Nyquist contour is given by det(I − ΔG(s)).
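As a small numerical illustration of this Nyquist test (the first-order plant G(s) = 1/(s + 1) and the uncertainty value Δ = 0.5 are invented for the example, not taken from the thesis), one can sweep the imaginary axis and count encirclements of the origin by 1 − ΔG(jω):

```python
import numpy as np

# Illustrative SISO plant G(s) = 1/(s + 1) with constant uncertainty Delta = 0.5.
# Since ||G||_inf = 1, the small gain condition |Delta| < 1 holds, so the
# contour of 1 - Delta*G(jw) should not encircle the origin.
delta = 0.5
omega = np.linspace(-100.0, 100.0, 20001)   # sweep of the imaginary axis
f = 1.0 - delta / (1j * omega + 1.0)        # scalar case of det(I - Delta*G(jw))

phase = np.unwrap(np.angle(f))
winding = (phase[-1] - phase[0]) / (2.0 * np.pi)  # encirclements of the origin

print(abs(f).min())    # distance of the contour to the origin; zero would mean instability
print(round(winding))  # number of encirclements; 0 here, so the closed loop is stable
```

The minimum distance to the origin (0.5 at ω = 0 for these numbers) is a crude stability margin: the larger it is, the bigger a Δ the loop tolerates.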


The μ analysis involves the problem of finding the smallest Δ such that det(I − MΔ) = 0. The structured singular value, denoted by μ(M), is defined as the inverse of the norm of such a Δ. By letting M = G(jω) and making a sweep through all frequencies ω we can provide an upper bound of μ that guarantees robust stability. Unfortunately the μ problem is sometimes too hard to compute and instead we must be content with upper and lower bounds.

An upper bound can be determined using the small gain theorem on the scaled system, DMD⁻¹. The scalings D are chosen from a set of matrices that commute with the uncertainties, i.e. DΔ = ΔD. By finding the minimum of ‖DMD⁻¹‖ we can find an upper bound of μ(M), and thus if ‖Δ‖ is less than the inverse of this bound we have proved stability.
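A minimal sketch of this scaling idea (the 2×2 matrix M and the crude grid search are invented for illustration; the thesis computes the optimal scalings via LMIs): with two 1×1 uncertainty blocks the commuting scalings are diagonal, and only the ratio of the diagonal entries matters, so D = diag(d, 1):

```python
import numpy as np

# Illustrative 2x2 matrix M with two scalar uncertainty blocks, so the
# scalings commuting with Delta = diag(d1, d2) are D = diag(d, 1).
M = np.array([[0.5,   2.0],
              [0.125, 0.5]])

def scaled_norm(d):
    D = np.diag([d, 1.0])
    return np.linalg.norm(D @ M @ np.linalg.inv(D), 2)  # max singular value

# Crude grid search over the scaling ratio d; an LMI solver does this properly.
grid = np.logspace(-2, 2, 801)
upper_bound = min(scaled_norm(d) for d in grid)

print(np.linalg.norm(M, 2))  # unscaled small gain bound (about 2.125 here)
print(upper_bound)           # scaled bound (about 1.0) -- much less conservative
```

The scaled bound can never exceed the unscaled one (d = 1 is in the feasible set), which is exactly why the scalings reduce conservativeness.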

The set of admissible scaling matrices depends on the class of uncertainties. If the uncertainties are time-invariant, frequency dependent scalings are allowed; if time-varying or nonlinear uncertainties are present, only constant scalings are admissible.

The analysis can be further refined to also take into account that the uncertainties are parametric. These are described as real uncertainties since they do not cause phase shift, in contrast to the dynamic or complex uncertainties discussed so far. Real uncertainties are included in the set of complex uncertainties, but treating them as complex is sometimes too conservative. Real uncertainties appear naturally in models, where they may enter as physical parameters.

Dynamic or complex uncertainties are also relevant and can be used for taking unmodeled dynamics into account. Another important application of dynamic uncertainties is for specifying performance requirements. This can be performed by augmenting the model with disturbance inputs and performance outputs together with weighting functions. Thus, the robust performance problem, i.e. maintaining performance specifications for an uncertain model, can be included in the robust stability problem.

The upper bounds can be computed efficiently using linear matrix inequalities (LMIs). An LMI is an affine matrix function with positive or negative definite constraints. Many common control problems, such as H2 and H∞ synthesis, can be stated as LMIs. Also, the LMI method offers possibilities for analysis and design of the gain scheduling problem.

1.3 Gain Scheduling

Gain scheduling is a nonlinear feedback of a special type. We will use the definition: Gain scheduling: a linear parameter varying (LPV) feedback regulator whose parameters are changed as a function of operating conditions. This thesis is devoted to how to analyze LPV systems and how to synthesize robust gain scheduling controllers.


In order to be able to analyze and synthesize gain scheduling systems we need a description of the plant. We assume that an explicit model is available that is linear in the states x but may have a known nonlinear dependence on a parameter vector θ that affects the dynamics of the system:

ẋ = A(θ)x + B(θ)u
y = C(θ)x + D(θ)u,    (1.1)

where u is the plant's input, y its output and A, B, C, and D are parameter-dependent matrices.
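As a toy illustration of such an LPV model (the matrices, the scheduling signal and the Euler discretization below are invented for the example, not taken from the thesis), the parameter-dependent matrices are simply re-evaluated at the current parameter value at each time step:

```python
import numpy as np

# Toy LPV model: a damped oscillator whose stiffness depends on a
# scheduling parameter theta in [0, 1] (illustrative numbers only).
def A(theta):
    return np.array([[0.0,          1.0],
                     [-1.0 - theta, -1.0]])

def B(theta):
    return np.array([[0.0], [1.0]])

# Simple forward-Euler simulation of the unforced response with a
# slowly varying parameter trajectory.
dt, x = 0.01, np.array([1.0, 0.0])
for k in range(2000):
    theta = 0.5 * (1.0 + np.sin(0.01 * k * dt))  # slowly varying schedule
    u = np.array([0.0])
    x = x + dt * (A(theta) @ x + B(theta) @ u)

print(np.linalg.norm(x))  # the state has decayed towards the origin
```

Note that frozen-parameter stability at every θ does not in general imply stability of the time-varying system; that gap is precisely what the quadratic stability machinery below addresses.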

Using linear matrix inequalities (LMIs) it is now possible to find a controller that makes the controlled system quadratically stable for all combinations of parameters, if such a controller exists. A system is quadratically stable if there exists a quadratic Lyapunov function, V(x) = xᵀPx where P is a symmetric matrix, such that V̇ < 0 for all x ≠ 0. We can extend the problem to also include measures for robustness and performance.
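A small sketch of the quadratic stability test (the parameter-dependent matrices are invented; a real design would solve an LMI for P jointly over all parameter vertices rather than, as here, reusing a P computed for one vertex and checking the others):

```python
import numpy as np

def lyap(A, Q):
    """Solve A^T P + P A = -Q by Kronecker vectorization (column-major vec)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -Q.flatten(order='F')).reshape((n, n), order='F')
    return 0.5 * (P + P.T)

def A(theta):  # illustrative parameter-dependent dynamics, theta in [0, 1]
    return np.array([[0.0,          1.0],
                     [-2.0 - theta, -3.0]])

# Compute P from the vertex theta = 0, then check dV/dt < 0 at both vertices.
P = lyap(A(0.0), np.eye(2))
assert np.all(np.linalg.eigvalsh(P) > 0)  # V(x) = x^T P x is positive definite

for theta in (0.0, 1.0):
    S = A(theta).T @ P + P @ A(theta)
    print(np.linalg.eigvalsh(S).max())  # negative => dV/dt < 0 at this vertex
```

Since A(θ) is affine in θ, negativity at the two vertices extends to the whole interval, which is the vertex-reduction idea mentioned below.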

In [12, 92] this problem is solved by finding a common solution to a set of LMIs. Three LMIs are in principle needed for every parameter combination. This is done by gridding the parameter space in a dense enough grid and including the grid points in the set of LMIs. It is sometimes possible to reduce the set of points to the vertices of the parameter space.

In this thesis we solve a similar problem by first finding a parametrization of the system (1.1) and then performing the analysis and synthesis on the parametrized system. The structure of the system is of a special type called a linear fractional transformation (LFT). The parameter dependency has been extracted from the original plant and has been placed in a feedback loop as an uncertainty block. The uncertainty block is parametrized by the original parameters and may also include model uncertainties and performance requirements.

[Figure: block diagram of the LFT model, the augmented plant G̃ in a feedback loop with the parameter block Δ(θ), with input u and output y.]

The advantage with this parametrization approach is that only three coupled LMIs need to be solved in the synthesis step. Thus there is no need to grid the parameter space or even to check the vertices. The controller, which has the same structure as the LFT model, is more easily parametrized compared to the LPV description.
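To make the LFT structure concrete, here is a minimal evaluation of an upper LFT, F_u(M, Δ) = M22 + M21 Δ (I − M11 Δ)⁻¹ M12 (the partitioning convention is standard; the rank-one numerical example is invented for illustration). It recovers a parameter-dependent matrix from a constant matrix M and the parameter block Δ:

```python
import numpy as np

def upper_lft(M, Delta, k):
    """Close the first k channels of M with Delta:
    F_u(M, Delta) = M22 + M21 Delta (I - M11 Delta)^-1 M12."""
    M11, M12 = M[:k, :k], M[:k, k:]
    M21, M22 = M[k:, :k], M[k:, k:]
    I = np.eye(k)
    return M22 + M21 @ Delta @ np.linalg.solve(I - M11 @ Delta, M12)

# Rank-one parameter dependence A(delta) = A0 + delta * b c^T written as an LFT.
A0 = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
M = np.block([[np.zeros((1, 1)), c],
              [b,                A0]])

delta = 0.3
lft_val = upper_lft(M, delta * np.eye(1), 1)
direct = A0 + delta * (b @ c)
print(np.allclose(lft_val, direct))  # True: the LFT reproduces A(delta)
```

Rational (not just affine) parameter dependence is absorbed the same way, by a nonzero M11 block; that is what makes the LFT form expressive enough for the smooth parametrizations discussed next.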


On the other hand, a parametrized model on LFT form must be built from (1.1). During this stage the original model is augmented with inputs and outputs corresponding to the parameter uncertainties Δ(θ). This process assumes that the model is sufficiently smooth with respect to the parameters and that it can be approximated with rational functions, not necessarily of the parameters themselves but of bounded functions δ(θ) thereof. Also, in the parametrization step we might introduce conservativeness in the design, since Δ is an upper bound of δ(θ).

1.4 Parameter Variations

In the parametrization above we did not assume anything about how fast the parameters may change. By using this extra piece of information we can in some problems refine the performance of the system. We will here use three classes to describe how fast a parameter is allowed to change:

• a constant parameter,
• a slowly varying parameter (bounded θ̇),
• a fast varying parameter.

A constant parameter is virtually constant. For instance the mass of a vehicle can be considered constant during normal operation. The fuel consumption will reduce the mass at a rate that can be neglected compared to the dynamics of the vehicle. Typical examples of slowly varying parameters are velocity and altitude of an aircraft. These parameters have bounds on the rate of change set by e.g. engine performance and operational constraints.

Fast varying parameters are not assumed to have any bounds on the rate of change. They are typically used for including nonlinear effects in the linear model (1.1).

For showing robust stability we search for scalings that commute with the set of uncertainties. Depending on the characteristics of the uncertainty, different sets of scalings are used:

uncertainty      scaling                commuting property
constant         frequency dependent    D(s)Δ = ΔD(s)
slowly varying   frequency dependent    D(s)Δ = ΔD(s) + U(s)Δ̇V(s)
fast varying     constant               DΔ = ΔD

There exist various methods for analysis and design of systems with slowly varying parameters, see e.g. [92]. The approach proposed in this thesis is to augment the original uncertainty block Δ with the time derivative of some blocks of Δ. This yields a model of the same structure as before but with an augmented uncertainty structure.


1.5 Contributions

Chapters 2-5 give introductions to mathematical concepts and to structured singular values, and contain essentially nothing new. The main results are presented in the remaining chapters.

The main contributions are listed below:

- Real-μ synthesis (chapter 6) and in particular theorem 6.1;
- Real-μ model reduction (chapter 7);
- D-K-like iterations for synthesis with real, constant uncertainties and gain scheduling (chapter 8);
- Uncertainty augmentation for analyzing systems with slowly varying uncertainties (chapter 9).

The real-μ synthesis forms an important backbone for this thesis. It provides the main tool for gain scheduling synthesis in which the controller is parametrized using LFTs. This allows us to quite accurately approximate smooth functions over a relatively wide parameter range. Using real-μ synthesis we can not only design controllers but also apply model reduction on the parametrization for reducing the complexity of the model.

The conservativeness in the analysis and design is reduced by introducing appropriate scaling or multiplier matrices complying with the temporal characteristics of the uncertainties. These matrices, together with either the small gain or the passivity theorem, are the main tools for the stability analysis and consequently for the synthesis methods. There is an equivalence between the scaling matrices and quadratic Lyapunov functions, which is shown and discussed. Thus, the μ approach is nothing but a Lyapunov method using quadratic Lyapunov functions.

1.6 Outline

The thesis is outlined as follows.

In chapter 2, singular values, signal spaces and signal norms are discussed briefly. It also covers factorization, stability and performance bounds. Chapter 3 gives a short introduction to linear matrix inequalities and how they can be solved numerically. The uncertainty description of a parametrized model needed for the analysis and synthesis is discussed in chapter 4.

The structured singular values are treated in three chapters. In chapter 5, the definition is given together with a computational method for the upper bound of μ(M). In chapter 6, new results on real- and mixed-μ synthesis are presented and compared with the well-known complex-μ synthesis. Results on model reduction based on the synthesis results are presented in chapter 7.


Scalings and multipliers are important tools for both analysis and synthesis. The well-known D-K iterations, together with extensions for real-μ synthesis and gain scheduling, are presented and discussed in chapter 8. In chapter 9 the concepts are extended to handle slowly time-varying systems, by using parametrized Lyapunov functions and frequency dependent scalings.

In chapter 10, we discuss how to use the proposed methods in some applications, mainly from the aerospace area. Also, linearization of nonlinear models is discussed. This is exemplified in chapter 11, where a few applications are given.

Finally the conclusions are given in chapter 12.

A Short Tour of the Thesis

Readers already familiar with uncertainty models, linear matrix inequalities and structured singular values can probably skip chapters 2-5. The main theoretical results are presented in chapters 6 and 7. The first part of chapter 8 is probably also well-known, while the last part, concerning D-K iterations with mixed uncertainties and applications to gain scheduling problems, is new. Also the uncertainty augmentation concept for coping with time-varying uncertainty given in chapter 9 is novel. The examples of applications in chapter 11 also illustrate some of the potential of, and motivations for, the techniques presented in this thesis.


Preliminaries

This chapter gives an introduction to some of the basic concepts for describing systems and their performance. The approach taken here is based on linear dynamic systems that are perturbed by some bounded elements. Three kinds of linear systems are defined: linear time-invariant (LTI), linear time-varying (LTV) and linear parameter-varying (LPV) systems.

In order to quantify performance of systems a number of norms are used, such as the L2 and ℓ2 norms for signals in the continuous-time and discrete-time cases, respectively. Using these, induced norms for systems can be defined. For LTI systems the L2- and ℓ2-induced norms coincide with the H∞-norm, which is the maximum gain of the system with respect to frequency.

An upper bound of the L2-induced gain of a system can be determined via a matrix inequality. Lyapunov and Riccati equations can also be modified into matrix inequalities, which provide criteria for stability and performance bounds. Using Schur complements, the quadratic Riccati inequality can be rewritten into a linear matrix inequality (LMI).


2.1 Matrices

This thesis contains almost 600 matrices. The set of real-valued matrices with p rows and m columns is denoted by R^{p×m} and complex-valued matrices of the same size by C^{p×m}. The unit matrix of size n×n is denoted by I_n.

Vectors in R^n and C^n are assumed to be column vectors, i.e. elements of R^{n×1} and C^{n×1}.

A square matrix Y is symmetric if Y = Y^T, where Y^T denotes the transpose of Y. A matrix is Hermitian if Y = Y*, where Y* denotes the complex conjugate transpose. A symmetric real matrix is also Hermitian. A unitary matrix U is square and satisfies U*U = UU* = I.

A matrix Y is called positive definite, denoted by Y > 0, if it is Hermitian and if x*Yx > 0 for all nonzero x; it is called positive semidefinite, denoted by Y ≥ 0, if x*Yx ≥ 0 for all x. The definitions are analogous for negative definite and semidefinite matrices.

We will also use Y > Z and Y ≥ Z to denote that Y − Z > 0 and Y − Z ≥ 0, respectively.

There exist several methods to determine whether a Hermitian matrix is positive definite. One possibility is to check that all its eigenvalues are positive. If they are all non-negative, the matrix is positive semidefinite.
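As a small illustration (not part of the thesis), the eigenvalue test for definiteness can be sketched in Python with NumPy:

```python
import numpy as np

def is_positive_definite(Y, tol=0.0):
    """Check Y > 0 by testing that Y is Hermitian and all eigenvalues are positive."""
    if not np.allclose(Y, Y.conj().T):
        return False
    # eigvalsh exploits the Hermitian structure and returns real eigenvalues
    return bool(np.all(np.linalg.eigvalsh(Y) > tol))

Y = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # eigenvalues 1 and 3 -> positive definite
Z = np.array([[1.0, 2.0],
              [2.0, 1.0]])      # eigenvalues -1 and 3 -> indefinite

print(is_positive_definite(Y))  # True
print(is_positive_definite(Z))  # False
```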

2.2 Linear Systems

This thesis treats finite-dimensional linear dynamic systems with uncertainties. The main focus is on continuous-time systems.

2.2.1 Continuous-Time Linear Systems

Time-Invariant Systems

A linear time-invariant (LTI) system is defined by its state-space representation

  ẋ = Ax + Bu
  y = Cx + Du,     (2.1)

where x ∈ R^n is the state vector, u ∈ R^m is the input vector and y ∈ R^p is the output vector; ẋ = (d/dt)x denotes the time derivative of x. The system matrices A, B, C and D, of compatible sizes, are fixed in time and describe the behavior of the system. Sometimes we will use the more compact notation

  [ẋ]   [A  B] [x]
  [y] = [C  D] [u].     (2.2)

The notation G(s) = D + C(sI − A)^{-1}B is also used for describing an LTI system. The symbol s can be interpreted both as the time-derivative operator d/dt and as the argument of the Laplace transform of the system G.

(25)

A continuous-time LTI system is stable if A has all its eigenvalues, λ_i, in the open left half plane, i.e. Re λ_i < 0.

A transfer function is proper if G(∞) is bounded. A system is inversely stable if G^{-1}(s) is proper and stable.
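The eigenvalue test for continuous-time stability is immediate to check numerically; a minimal sketch (not from the thesis):

```python
import numpy as np

def is_stable_continuous(A):
    """A continuous-time LTI system is stable iff all eigenvalues of A
    lie in the open left half plane (Re lambda_i < 0)."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable   = np.array([[-1.0,  2.0],
                       [ 0.0, -3.0]])   # triangular: eigenvalues -1, -3
A_unstable = np.array([[ 0.5,  0.0],
                       [ 1.0, -2.0]])   # triangular: eigenvalues 0.5, -2

print(is_stable_continuous(A_stable))    # True
print(is_stable_continuous(A_unstable))  # False
```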

Time-Varying Systems

A linear time-varying (LTV) system has time-varying system matrices and is defined by

  ẋ = A(t)x + B(t)u
  y = C(t)x + D(t)u.     (2.3)

Parameter-Varying Systems

A more general description is obtained by letting the system matrices depend on (time-varying) parameters θ(t). This model is called a linear parameter-varying (LPV) system:

  ẋ = A(θ(t))x + B(θ(t))u
  y = C(θ(t))x + D(θ(t))u.     (2.4)

The parameters θ(t) ∈ S ⊂ R^s can either be an external input to the system or they can depend on the states of the system. In the latter case we can describe nonlinear effects by an LPV model.

2.2.2 Discrete-Time Linear Systems

Discrete-time systems are represented similarly by

  x(t+1) = Ax(t) + Bu(t)
  y(t) = Cx(t) + Du(t).     (2.5)

Using the forward-step operator q, defined by qx(t) = x(t+1), this can be rewritten as

  qx = Ax + Bu
  y = Cx + Du.     (2.6)

A discrete-time LTI system is stable if A has all its eigenvalues strictly within the unit disc, i.e. |λ_i| < 1.

Time-varying and parameter-varying systems are defined analogously to the continuous-time case.


2.2.3 Similarity Transformations

A linear system, either LTI, LTV or LPV, can be represented differently in terms of the system matrices. Two representations, (A, B, C, D) and (Â, B̂, Ĉ, D̂), are similar if there exists a (constant) nonsingular transformation matrix T ∈ R^{n×n} such that

  [Â  B̂]   [TAT^{-1}  TB]
  [Ĉ  D̂] = [CT^{-1}   D ].     (2.7)

The similarity transformation can be interpreted as a mapping from one basis in the state space to another: x̂ = Tx. The input-output behaviors of two similar systems are identical.
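That similar realizations have identical input-output behavior can be verified numerically by evaluating the transfer function of both realizations at an arbitrary test point; a sketch (not from the thesis):

```python
import numpy as np

def transfer(A, B, C, D, s):
    """Evaluate G(s) = D + C (sI - A)^{-1} B at a complex point s."""
    n = A.shape[0]
    return D + C @ np.linalg.solve(s * np.eye(n) - A, B)

A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

T = np.array([[2.0, 1.0], [0.0, 1.0]])         # any nonsingular T
Ti = np.linalg.inv(T)
Ah, Bh, Ch, Dh = T @ A @ Ti, T @ B, C @ Ti, D  # similar realization (2.7)

s0 = 1.0 + 2.0j                                # arbitrary test point
assert np.allclose(transfer(A, B, C, D, s0), transfer(Ah, Bh, Ch, Dh, s0))
```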

2.3 Norms

In the scalar case the gain of a system G at a given frequency ω is given by the absolute value of G(jω). In this thesis we will treat systems with multiple inputs and outputs, represented by a matrix of transfer functions. The gain of such a system G is not a single value but should rather be viewed as a range of gains. We will use matrix norms that are induced by Euclidean vector norms.

2.3.1 Vector Norms

If u ∈ C^m denotes a vector, the Euclidean vector norm is defined by

  ‖u‖ = sqrt( Σ_{i=1}^m |u_i|² ) = sqrt(u*u),     (2.8)

where u* denotes the complex conjugate transpose of u.

2.3.2 Singular Value Decomposition

If M ∈ C^{p×m}, it can always be factored [58] into

  M = UΣV*,

where Σ = diag[σ₁, σ₂, …, σ_k] contains the singular values of M, k = min{p, m}, U ∈ C^{p×k} is such that U*U = I_k and V ∈ C^{m×k} is such that V*V = I_k.

The maximum singular value σ₁ is denoted by σ̄ and the smallest one, σ_k, by σ̲. If M does not have full rank then σ_k = σ̲ = 0. The singular values of M are related to the eigenvalues of M*M and MM*:


If p ≥ m then V is unitary and V^{-1} = V*. Thus the eigenvalues of M*M are given by σ_i². When p ≤ m, analogous results can be derived, i.e. the eigenvalues of MM* are given by σ_i². Specifically, the maximum eigenvalues of M*M and MM* are both equal to the square of the maximum singular value of M.

The condition number of a nonsingular matrix is defined as the ratio between the maximum and minimum singular values:

  cond(M) = σ̄(M)/σ̲(M).     (2.9)
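A small numerical check (not from the thesis) of these relations with NumPy, which returns the singular values in decreasing order:

```python
import numpy as np

M = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, sigma, Vh = np.linalg.svd(M)

# singular values come in decreasing order: sigma[0] = sigma_max, sigma[-1] = sigma_min
sigma_max, sigma_min = sigma[0], sigma[-1]

# sigma_i^2 are the eigenvalues of M M* (and of M* M)
eigs = np.sort(np.linalg.eigvalsh(M @ M.T))[::-1]
assert np.allclose(sigma**2, eigs)

# condition number (2.9) of this nonsingular M
print(sigma_max / sigma_min)    # 3.0 for this M
```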

2.3.3 Induced Matrix Norms

Using the Euclidean vector norm, we can define the induced norm of a matrix M ∈ C^{p×m} by

  ‖M‖ = sup_{u≠0} ‖Mu‖/‖u‖ = sup_{‖u‖=1} ‖Mu‖.     (2.10)

The Euclidean-induced norm of M is equal to the maximum singular value of M:

  ‖M‖ = σ̄(M).     (2.11)

We will use both notations in the sequel.

If we apply this to a frequency function G, then the singular values of G(jω) are called the principal gains of G at ω.

2.3.4 Rank and Pseudo-inverse

Due to numerical errors, the rank of a matrix cannot in general be determined by counting the number of independent rows or columns. A numerically more reliable method is to compute the singular value decomposition of the matrix M and count the number of singular values that are significantly larger than the numerical precision of the floating point operations. For instance in Matlab, which uses 64-bit floating point representation, the numerical precision is about ε = 2·10^{-16}. If σ_k is of the same order as εσ₁, the rank of the matrix is taken to be less than k.

The pseudo-inverse M† of a matrix M that does not have full rank can be determined using the singular value decomposition. If

  M = U [Σ_r  0] V*     (2.12)
        [0    0]

then

  M† = V [Σ_r^{-1}  0] U*.     (2.13)
         [0         0]


2.4 Signal Spaces

For a more complete treatment of functional spaces related to control systems, see [33].

2.4.1 Lebesgue Spaces

Consider a continuous-time signal x: R → R^n defined on the interval [0, ∞). Restrict x to be square-Lebesgue integrable:

  ∫₀^∞ ‖x(t)‖² dt < ∞.     (2.14)

The set of all such signals is the Lebesgue space denoted by L₂ⁿ[0, ∞), or just by L₂[0, ∞). This space is a Hilbert space under the inner product

  ⟨x, y⟩ = ∫₀^∞ x(t)* y(t) dt.

The norm of x, denoted ‖x‖₂, is defined as the square root of ⟨x, x⟩.

Similarly to the continuous-time Lebesgue space L₂, we can define the counterpart for discrete-time signals x: Z → R^n on the interval [0, ∞) by the inner product

  ⟨x, y⟩ = Σ_{k=0}^∞ x(k)* y(k).

The set of signals such that ⟨x, x⟩ is bounded, that is,

  ⟨x, x⟩ = Σ_{k=0}^∞ x(k)* x(k) < ∞,     (2.15)

is the Lebesgue space denoted by ℓ₂ⁿ[0, ∞), or just by ℓ₂[0, ∞).

The Extended Lebesgue Space

The Lebesgue space L₂ only includes signals with bounded energy. To also include unbounded signals, e.g. in order to discuss unstable systems, we need the extended space. Let P_T denote the projection operator

  (P_T x)(t) = x_T(t) = { x(t),  t ≤ T
                        { 0,     t > T.     (2.16)

The extended Lebesgue space L₂ₑ is defined as the space of continuous-time signals x: R → R^n such that x_T ∈ L₂ for all T. The scalar product in L₂ₑ is

  ⟨x, y⟩_T = ⟨x_T, y_T⟩ = ∫₀^T x(t)* y(t) dt.


2.4.2 Operators

An operator G is a function from one signal space to another. The operator is linear if

  G(u₁ + u₂) = (Gu₁) + (Gu₂)
  G(αu) = α(Gu),

where α ∈ R. For instance, linear systems are linear operators.

An operator is causal if (Gu)(t) only depends on past values of u. Using the projection operator P_T this can be written as P_T G = P_T G P_T.

An operator is parametric if (Gu)(t) only depends on u(t). Thus G is a time-varying function of u(t), i.e. (Gu)(t) = g(u(t), t). Linear parametric operators can be represented as linear systems with no states, described only by a D-matrix, which can be constant or time-varying.

2.4.3 Induced Norms

Based on the definition of the L₂ and ℓ₂ norms for signals, we can define the induced norms, or gains, for operators, called the L₂-induced and ℓ₂-induced norms. A continuous-time operator G is a function from one signal space, u ∈ L₂^m, to another, y ∈ L₂^p:

  y = Gu.

The L₂-induced norm is defined as

  ‖G‖ = sup_{u ∈ L₂, u≠0} ‖Gu‖₂ / ‖u‖₂.

The discrete-time case is defined analogously. A discrete-time operator G is a function from one signal space, u ∈ ℓ₂^m, to another, y ∈ ℓ₂^p, and the ℓ₂-induced norm is defined as

  ‖G‖ = sup_{u ∈ ℓ₂, u≠0} ‖Gu‖₂ / ‖u‖₂.

2.4.4 Hardy Spaces

The Hardy space H∞ [33] consists of all complex-valued scalar functions G: C → C of a complex variable s that are analytic and bounded in the open right half plane, Re s > 0. This means that there exists a real number b such that

  |G(s)| ≤ b,  Re s > 0.

The smallest such bound b is the H∞-norm of G, denoted ‖G‖∞. The H∞-norm is defined by

  ‖G‖∞ = sup{|G(s)| : Re s > 0}.     (2.17)


By the maximum modulus theorem we can replace the open right half plane in (2.17) by the imaginary axis:

  ‖G‖∞ = sup{|G(jω)| : ω ∈ R}.     (2.18)

In the more general case of matrix transfer functions we have

  ‖G‖∞ = sup{‖G(jω)‖ : ω ∈ R}.     (2.19)

It can be shown that for LTI systems the H∞-norm and the L₂-induced norm are equivalent, i.e. ‖G‖∞ = ‖G(s)‖.

In this thesis we will focus on (finite-dimensional) real-rational functions, which are rational functions with real coefficients. The subset of H∞ consisting of real-rational functions is denoted by RH∞. If G is real-rational, then G ∈ RH∞ if and only if G is proper (|G(∞)| exists and is finite) and stable (G has no poles in the closed right half plane, Re s ≥ 0). We denote by RH∞^{p×m} multivariable functions in RH∞ with m inputs and p outputs.

2.5 Factorization of Transfer Matrices

In this section we will review some factorization methods for transfer matrices in state-space form. These factorization problems emerge in connection with state-space μ-analysis.

2.5.1 Inverse of Transfer Matrices

We start by stating the well-known matrix inversion lemma , see e.g. [58]:

(D+CAB) 1=D 1 D 1C(BD 1C+A 1) 1BD 1 (2.20)

assuming thatAand Dare nonsingular.

Using (2.20) the inverse of a square transfer matrix de ned by the realization

G(s) =D+C(sI A) 1B exists ifD is nonsingular and is given by G

1(s) =D 1 D 1C(sI A+BD 1C) 1BD 1:

The eigenvalues of the closed-loop system is given byA BD 1C.
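A quick numerical check of (2.20) with random matrices (not from the thesis; the shifts by 3I are only there to make the random A and D safely nonsingular):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.normal(size=(m, m)) + 3 * np.eye(m)   # nonsingular m x m
D = rng.normal(size=(n, n)) + 3 * np.eye(n)   # nonsingular n x n
B = rng.normal(size=(m, n))
C = rng.normal(size=(n, m))

inv = np.linalg.inv
lhs = inv(D + C @ A @ B)
rhs = inv(D) - inv(D) @ C @ inv(B @ inv(D) @ C + inv(A)) @ B @ inv(D)
assert np.allclose(lhs, rhs)   # matrix inversion lemma (2.20)
```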

2.5.2 Factorization of Matrices

A Hermitian or symmetric matrix P, positive definite or semidefinite, can be decomposed into two factors, P = D*D. Such a factorization is unique except for a unitary left factor on D. There exist a number of possibilities for doing this.

Cholesky factorization finds an upper triangular factor T such that P = T*T.

Singular value decomposition gives P = UΣU* (since P is Hermitian positive semidefinite we may take V = U), and then D = D* = UΣ^{1/2}U*. Note that this factor is Hermitian.

2.5.3 The Riccati Equation and Its Solution

Many factorization algorithms use the Riccati equation [33, 28, 94]. The Riccati equation is a quadratic matrix equation

  XA + A^T X − XRX + Q = 0,     (2.21)

where Q and R are symmetric matrices, and R is positive or negative semidefinite. The equation has, in general, more than one solution, but we will restrict our attention to the case when (A, R) is stabilizable, namely when there exists an X that makes A − RX stable, i.e. all its eigenvalues have negative real parts. Such a solution of the Riccati equation does not always exist, but we can find a sufficient criterion for its existence by studying the Hamiltonian H associated with (2.21):

  H = [ A    −R   ]
      [ −Q   −A^T ].     (2.22)

The Hamiltonian, if real, has eigenvalues that are symmetric about both the real and the imaginary axes. If H has no eigenvalues on the imaginary axis and if (A, R) is stabilizable, then there exists a unique stabilizing, symmetric, semidefinite solution to (2.21), which is denoted by X = Ric H, see [33, 28]. The solution is obtained by first computing the modal subspace spanned by the generalized (real) eigenvectors of H corresponding to stable eigenvalues. It can be shown that this eigenspace can be written as the range space of [I; X].

2.5.4 Spectral Factorization

We will here consider the problem of factoring a square real-rational Hermitian transfer matrix W = W*, where W*(s) = W^T(−s). We assume that W and W^{-1} are both proper and have no poles on the imaginary axis. This problem is known as the spectral factorization problem [33], and the standard procedure is to write W = D₀ + G₁ + G₁*, where D₀ is constant and G₁ is stable, inversely stable and strictly proper.

We here confine ourselves to a particular structure of W and assume that W = G*G > 0, where G is stable but not necessarily inversely stable or even square. This particular structure emerges in state-space μ-analysis with complex uncertainties.

Lemma 2.1 (Spectral Factorization [33])
Assume that G(s) = D + C(sI − A)^{-1}B is stable and W = G*G > 0. Then there exists a stable and inversely stable Ĝ such that W = Ĝ*Ĝ. One such Ĝ is obtained by Ĝ(s) = D̂ + Ĉ(sI − A)^{-1}B with D̂^TD̂ = D^TD and Ĉ = D̂^{-T}(D^TC + B^TX), where X = Ric H ≥ 0,

  H = [ A − BD†C                            −B(D^TD)^{-1}B^T  ]
      [ −C^TC + C^TD(D^TD)^{-1}D^TC         −(A − BD†C)^T     ],

  D† = (D^TD)^{-1}D^T.


Proof
A state-space realization of W = G*G is

  W = [ A       0      B     ]
      [ −C^TC   −A^T   −C^TD ]  =  [ Ã  B̃ ]
      [ D^TC    B^T    D^TD  ]     [ C̃  D̃ ].

A realization of the inverse of W has an A-matrix given by

  Ã − B̃D̃^{-1}C̃ = H.

Thus, H has no eigenvalues on the imaginary axis, since W has no zeros there (W > 0). Also, (Ã, B̃) is stabilizable, since A is stable. Thus, X = Ric H ≥ 0 exists and satisfies

  X(A − BD†C) + (A − BD†C)^TX − XB(D^TD)^{-1}B^TX
    + C^TC − C^TD(D^TD)^{-1}D^TC = 0,     (2.23)

such that A − BD†C − B(D^TD)^{-1}B^TX is stable. Applying a similarity transformation to W with

  T = [ I   0 ]        T^{-1} = [ I  0 ]
      [ −X  I ],                [ X  I ],

yields

  W ∼ W̃ = [ A                      0      B           ]
           [ −C^TC − XA − A^TX      −A^T   −C^TD − XB  ]
           [ D^TC + B^TX            B^T    D^TD        ].

Using (2.23) it is straightforward to show that

  Ĉ^TĈ = C^TC + XA + A^TX,

and thus

  W̃ = Ĝ*Ĝ = [ A       0      B     ]
              [ −Ĉ^TĈ   −A^T   −Ĉ^TD̂ ]
              [ D̂^TĈ    B^T    D̂^TD̂  ].

We can find a (square) nonsingular D̂, e.g. by Cholesky or QR factorization or by singular value decomposition, such that D̂^TD̂ = D^TD. It is evident that Ĝ is stable since G is. Next, since D̂ is nonsingular, Ĝ^{-1} exists and

  Ĝ^{-1}(s) = D̂^{-1} − D̂^{-1}Ĉ(sI − Â)^{-1}BD̂^{-1},

where

  Â = A − BD̂^{-1}Ĉ = A − BD̂^{-1}D̂^{-T}(B^TX + D^TC) = A − BD†C − B(D^TD)^{-1}B^TX.

Thus, Â is stable and, hence, Ĝ is inversely stable. □


2.5.5 Canonical Factorization

The canonical factorization problem is to find stable and inversely stable factors such that Y*Z = W. We will here study the problem of factoring a particular kind of square transfer functions W that are proper and have no poles on the imaginary axis. In addition it is assumed that W(jω) + W*(jω) > 0 for all ω ∈ R∞ = R ∪ {−∞, ∞}.

In the lemma we use the concept of unimodular polynomial matrices, see e.g. [58]. A unimodular polynomial matrix U is a polynomial matrix such that U^{-1} is also polynomial. A polynomial matrix is unimodular if and only if det U is constant. The following lemma is essentially from [24].

Lemma 2.2
Let W be a square rational matrix that is proper and has no poles on the imaginary axis. Also assume that W(jω) + W*(jω) > 0 for all ω ∈ R∞. Then there is a factorization Y*Z = W such that Y and Z are both stable and inversely stable.

Proof
We prove the lemma by construction. First observe that W has no poles or zeros on the imaginary axis. Compute the Smith-McMillan form [58]: UWV = D, where D is diagonal with scalar rational functions on the diagonal, and U and V are unimodular polynomial matrices. Factor D = D₊*D₋, where D₊ and D₋ have all poles and zeros in the open left half plane. This is always possible, since D is diagonal and each element of D has no poles or zeros on the imaginary axis. Let Y* = U^{-1}D₊* and Z = D₋V^{-1}. Then Y and Z both have their poles and zeros in the left half plane. The properness follows from the fact that the argument variation (cf. the generalized Nyquist stability criterion [65]) of det W(s) along the imaginary axis s = jω is zero, since W(jω) + W*(jω) > 0 implies that all eigenvalues of W(jω) have positive real parts, and consequently the number of poles and the number of zeros in the right half plane are equal. Thus, Y and Z are stable and inversely stable. □

The following factorization algorithm is essentially from [33]. Here Ric H is not restricted to be positive semidefinite or even symmetric.

Lemma 2.3 (Canonical Factorization)
Let W = Y*Z with Y(s) = D_Y + C_Y(sI − A)^{-1}B and Z(s) = D_Z + C_Z(sI − A)^{-1}B such that A is stable. Also assume that W(jω) + W*(jω) > 0 for all ω ∈ R∞. Then there exist stable and inversely stable factors Ŷ and Ẑ such that W = Ŷ*Ẑ, with Ŷ(s) = D̂_Y + Ĉ_Y(sI − A)^{-1}B and Ẑ(s) = D̂_Z + Ĉ_Z(sI − A)^{-1}B, where D̂_Y^TD̂_Z = D_Y^TD_Z, Ĉ_Y = D̂_Z^{-T}(D_Z^TC_Y + B^TX^T), Ĉ_Z = D̂_Y^{-T}(D_Y^TC_Z + B^TX), X = Ric H and

  H = [ A − B(D_Y^TD_Z)^{-1}D_Y^TC_Z                    −B(D_Y^TD_Z)^{-1}B^T              ]
      [ −C_Y^TC_Z + C_Y^TD_Z(D_Y^TD_Z)^{-1}D_Y^TC_Z     −A^T + C_Y^TD_Z(D_Y^TD_Z)^{-1}B^T ].


2.5.6 The Kalman-Yakubovich-Popov Lemma

The Kalman-Yakubovich-Popov lemma [90, 91, 76] states the equivalence between a frequency criterion and an LMI.

Lemma 2.4 (Kalman-Yakubovich-Popov Lemma)
Given A ∈ R^{n×n}, B ∈ R^{n×m} and M = M^T ∈ R^{(n+m)×(n+m)}, with det(jωI − A) ≠ 0 for ω ∈ R∞ and (A, B) controllable, the following statements are equivalent.

(i) For ω ∈ R∞,

  [ (jωI − A)^{-1}B ]*     [ (jωI − A)^{-1}B ]
  [ I               ]  M   [ I               ]  ≤ 0.

(ii) There exists a matrix P = P^T ∈ R^{n×n} such that

  M + [ A^TP + PA   PB ]
      [ B^TP        0  ]  ≤ 0.

The corresponding equivalence for strict inequalities (<) holds even if (A, B) is not controllable. □

Note that P does not necessarily need to be positive definite. However, if A is stable, i.e. has all its eigenvalues in the open left half plane, then P > 0.

2.6 Stability

We have already defined stability for linear time-invariant systems in terms of the eigenvalues of the A-matrix. We will now extend the definition to uncertainty systems. Consider the feedback configuration shown in figure 2.1. The LTI system G(s) ∈ RH∞^{n×n}, which is assumed to be stable, is interconnected with the causal operator H: L₂ⁿ → L₂ⁿ with a bounded L₂-induced norm.

Figure 2.1   Illustration of a dynamic system with uncertainty feedback: G(s) in feedback with H, with external input u and internal signals w and z.

The closed-loop system response from u to w is given by

  w = (I − HG(s))^{-1}u.     (2.24)


We say that the closed-loop system (H, G(s)) is (input-output) stable if w ∈ L₂ⁿ for all u ∈ L₂ⁿ. A related definition is the following.

The feedback system defined by the pair (H, G(s)) is well posed if the operator (I − HG(s)) is causally invertible, i.e. there exists a causal F: L₂ⁿ → L₂ⁿ such that F(I − HG(s)) = (I − HG(s))F = I. Assume that 𝚫 is a set of operators {Δ : L₂ⁿ → L₂ⁿ}. The system (𝚫, G(s)) is robustly stable if (Δ, G(s)) is well-posed for every Δ ∈ 𝚫.

The discrete-time case has analogous definitions.

2.6.1 Small Gain Theorem

The well-known small gain theorem [24] states that, assuming Δ and G(s) are stable operators, the closed-loop system (Δ, G(s)) is stable if γ = ‖G(s)‖ < 1 and ‖Δ‖ ≤ 1. This follows for instance from the definition of well-posedness, since

  ‖(I − ΔG(s))^{-1}‖ = ‖ Σ_{k=0}^∞ (ΔG(s))^k ‖ ≤ Σ_{k=0}^∞ ‖ΔG(s)‖^k ≤ Σ_{k=0}^∞ γ^k = 1/(1 − γ) < ∞.

Thus the gain of the closed-loop system is always bounded.

2.6.2 Structured Uncertainties

Very often the small gain theorem is too conservative for showing stability of closed-loop systems. To improve the criterion we need to exploit more of the structure of the problem. Assume that Δ has a block diagonal structure, Δ = diag[Δ₁, Δ₂, …, Δ_f], where each block Δᵢ: L₂^{kᵢ} → L₂^{kᵢ} has a bounded L₂-induced norm less than or equal to one.

A less conservative bound can be determined by scaling the LTI system G(s) with nonsingular scalings T = diag[t₁I_{k₁}, t₂I_{k₂}, …, t_f I_{k_f}]. The system is stable if and only if [84]

  γ = inf_T ‖TGT^{-1}‖∞ < 1.     (2.25)

Note that the scalings are such that TΔ = ΔT, i.e. they commute with Δ. The stability condition follows by applying the small gain theorem to the scaled system TG(s)T^{-1}.
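A small numerical illustration (not from the thesis) of how commuting scalings reduce conservativeness. For the constant matrix M = [[0, 10], [0.01, 0]] interconnected with Δ = diag[δ₁, δ₂], the unscaled small gain test fails since σ̄(M) = 10, but a diagonal scaling T = diag[t, 1], which commutes with Δ, brings the scaled gain below one:

```python
import numpy as np

def max_gain(M):
    """Maximum singular value (the Euclidean-induced norm)."""
    return np.linalg.svd(M, compute_uv=False)[0]

M = np.array([[0.0, 10.0],
              [0.01, 0.0]])

print(max_gain(M))           # sigma_max(M) = 10: unscaled small gain test fails

# scale with T = diag(t, 1), which commutes with diag(d1, d2)
t = np.sqrt(0.01 / 10.0)     # balances the two off-diagonal gains
T = np.diag([t, 1.0])
Ms = T @ M @ np.linalg.inv(T)
print(max_gain(Ms))          # about 0.316 < 1: scaled small gain test succeeds
```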

2.6.3 Passivity

A system with input u and output y is passive [24, 5] if and only if there exists some constant β such that

  ⟨u, y⟩_T ≥ β

for all u ∈ L₂ₑ and all T. Note that β can be chosen to be zero if the system is linear.


The system is input strictly passive if and only if there exist ε > 0 and β such that

  ⟨u, y⟩_T ≥ ε‖u_T‖² + β,

and output strictly passive if and only if

  ⟨u, y⟩_T ≥ ε‖y_T‖² + β,

for all u ∈ L₂ₑ and all T. The following result can now be established [24].

Theorem 2.1 (Passivity Theorem [24, 5])
Let G(s) be both input and output strictly passive and H passive. Then the feedback system (H, G(s)) in figure 2.1 is input-output stable, i.e. z, w ∈ L₂ for all u ∈ L₂. □

2.6.4 Positive Real Transfer Functions

A transfer function G(s) is positive real [24] if G(s) + G^T(−s) ≥ 0 for all Re s > 0, and strictly positive real if G(s − ε) is positive real for some ε > 0. This applies to the scalar as well as the multivariable case. If G(s) is a positive real transfer function, then the operator G(s) is passive. If G(s) is strictly positive real, then G(s) is output strictly passive [5].

The passivity theorem achieves stability by restricting the phase, while the small gain theorem gives stability by restricting the loop gain.

2.7 Performance Bounds

Consider a dynamic system described by a differential equation

  ẋ = f(x, w)     (2.26)

and a performance criterion J_w: R^n → R,

  J_w(x(t)) = ∫_t^T g(x(τ), w(τ)) dτ,     (2.27)

where x: R → R^n is the state vector as a function of time, w: R → R^m is the input (or disturbance) vector and g: R^n × R^m → R is the cost function.

The time variable t can be included in the state vector x by having a state x_i such that ẋ_i = 1 and x_i(t) = t.

Theorem 2.2
A strict upper bound V for the performance criterion J_w, such that J_w(x) < V(x) for all w, can be established if there exists a continuously differentiable, positive definite Lyapunov or storage function (see [90]), V, that makes the Hamiltonian H negative for all x and w:

  H = g(x, w) + V_x(x)f(x, w) < 0,  ∀x, w.     (2.28)


Proof

  J_w(x(t)) = ∫_t^T g(x, w) dτ
            = ∫_t^T g(x, w) dτ + V(x(T)) − V(x(t)) + V(x(t)) − V(x(T))
            ≤ ∫_t^T ( g(x, w) + V̇(x) ) dτ + V(x(t))
            = ∫_t^T ( g(x, w) + V_x(x)f(x, w) ) dτ + V(x(t)) < V(x(t)). □

This theorem can be modified to a nonstrict version by replacing < with ≤.

In the thesis we will use this inequality to provide conditions for stability and performance bounds on linear systems subject to nonlinear disturbances. We will use the L₂[t, T] norm as a performance criterion. We assume that x(t) = 0 and that t = 0 and T = ∞ if nothing else is stated. However, the analysis is general and the L₂ norm can easily be extended to any interval [t, T].

2.8 Matrix Inequalities

2.8.1 Continuous Time

We will here study linear, stable systems subject to nonlinear uncertainties:

  ẋ = Ax + Bw
  z = Cx + Dw,     (2.29)

where w is the disturbance input and z is the performance output.

The aim of this section is to give criteria for assuring upper bounds of the L₂-induced norm from w to z for an LTI system, i.e. to show that

  ‖z‖₂ < γ‖w‖₂.

By scaling either w or z, it is no restriction to assume that γ = 1:

  ‖z‖₂ < ‖w‖₂,     (2.30)

or equivalently

  ‖z‖₂² − ‖w‖₂² = ∫ ( z^T(t)z(t) − w^T(t)w(t) ) dt < 0.

For this problem the following cost function can be used:

  g(x, w) = z^Tz − w^Tw,     (2.31)


and a quadratic Lyapunov function is chosen:

  V(x) = x^TPx.     (2.32)

To assure internal stability, it is assumed that the Lyapunov matrix P is symmetric and positive definite (P > 0), that is, x^TPx > 0 for all x ≠ 0. If x(0) = 0, the L₂-induced norm from w to z is less than one if the Hamiltonian for (2.29) and (2.31) is negative for all x and w:

  H = V̇ + g(x, w)
    = ẋ^TPx + x^TPẋ + z^Tz − w^Tw
    = x^TP(Ax + Bw) + (Ax + Bw)^TPx + (Cx + Dw)^T(Cx + Dw) − w^Tw.     (2.33)

In order to assure that ‖z‖₂ < ‖w‖₂, H < 0 must hold for all x and w.

2.8.2 The Riccati Inequality

One way of arriving at the related Riccati inequality is by completing the squares in (2.33). First observe that by letting x = 0 it can be inferred that D^TD < I, and thus R = I − D^TD is invertible. Then

  H = x^T ( A^TP + PA + (B^TP + D^TC)^TR^{-1}(B^TP + D^TC) + C^TC ) x
      − ( w − R^{-1}(B^TP + D^TC)x )^T R ( w − R^{-1}(B^TP + D^TC)x )
    ≤ x^T ( A^TP + PA + (B^TP + D^TC)^TR^{-1}(B^TP + D^TC) + C^TC ) x.

Equality is obtained for

  w = R^{-1}(B^TP + D^TC)x,     (2.34)

which can be interpreted as the worst-case disturbance.

2.8.3 Linear Matrix Inequalities (LMIs)

Instead of completing the squares, the Hamiltonian (2.33) can be rewritten as

  H = [x]^T [ PA + A^TP + C^TC   PB + C^TD ] [x]
      [w]   [ B^TP + D^TC        D^TD − I  ] [w]  < 0,     (2.35)

which shall hold for all nonzero x, w. This implies that

  [ PA + A^TP + C^TC   PB + C^TD ]
  [ B^TP + D^TC        D^TD − I  ]  < 0,

which is a linear matrix inequality (LMI) in P, for given (A, B, C, D). This implies that the set of P satisfying the LMI is convex, which substantially simplifies the search for P, see chapter 3.


Schur Complements

The equivalence between the Riccati inequality and the LMI can be seen by the following well-known fact:

Lemma 2.5 (Schur Complement)
Suppose R and S are Hermitian, i.e. R = R* and S = S*. Then the following conditions are equivalent:

  R > 0,   S + G^TR^{-1}G < 0,     (2.36)

and

  [ S   G^T ]
  [ G   −R  ]  < 0.     (2.37)

Proof
Post-multiply (2.37) by the nonsingular matrix [I 0; R^{-1}G I] and pre-multiply by its transpose:

  [ I   G^TR^{-1} ] [ S   G^T ] [ I        0 ]   [ S + G^TR^{-1}G   0  ]
  [ 0   I         ] [ G   −R  ] [ R^{-1}G  I ] = [ 0                −R ]  < 0,

which is equivalent to the conditions in (2.36). □

The Schur complement result can be generalized to nonstrict inequalities.

Lemma 2.6 (Nonstrict Schur Complement [15])
Suppose R and S are Hermitian. Then the following conditions are equivalent:

  R ≥ 0,   S + G^TR†G ≤ 0,   (I − RR†)G = 0,     (2.38)

and

  [ S   G^T ]
  [ G   −R  ]  ≤ 0,     (2.39)

where R† denotes the pseudo-inverse of R, see (2.13).

Proof
[15] Let U be a nonsingular matrix that diagonalizes R, so that

  U^TRU = [ Λ   0 ]
          [ 0   0 ],

where Λ > 0. Now, the inequality (2.39) holds if and only if

  [ I   0   ] [ S   G^T ] [ I   0 ]   [ S    G₁^T   G₂^T ]
  [ 0   U^T ] [ G   −R  ] [ 0   U ] = [ G₁   −Λ     0    ]  ≤ 0,
                                      [ G₂   0      0    ]

where [G₁^T  G₂^T] = G^TU. We must then have G₂ = 0, which holds if and only if (I − RR†)G = 0, and

  [ S    G₁^T ]
  [ G₁   −Λ   ]  ≤ 0,

which holds if and only if S + G^TR†G ≤ 0. □

Using Schur complements we can infer that if a matrix is positive definite, then any diagonal square sub-block is also positive definite. For instance, if any diagonal element p_ii of a matrix P is negative or zero, the matrix P is not positive definite.
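A quick numerical check (not from the thesis) of lemma 2.5: for a Hermitian S and R > 0, the block matrix [S, G^T; G, −R] is negative definite exactly when the Schur complement S + G^TR^{-1}G is. The particular matrices below are arbitrary illustration data:

```python
import numpy as np

def is_negdef(M):
    """Negative definiteness test for a Hermitian matrix via eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

rng = np.random.default_rng(1)
n, m = 3, 2
G = rng.normal(size=(m, n))
R = 2.0 * np.eye(m)                         # R > 0
S = -5.0 * np.eye(n) + 0.1 * rng.normal(size=(n, n))
S = 0.5 * (S + S.T)                         # make S Hermitian

block = np.block([[S, G.T], [G, -R]])
schur = S + G.T @ np.linalg.inv(R) @ G

# the two tests agree, as lemma 2.5 predicts
assert is_negdef(block) == is_negdef(schur)
```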

Congruence Transformations

When proving the Schur complements, a so-called congruence transformation is employed. Let U be a nonsingular matrix; then

  F > 0   and   U^TFU > 0     (2.40)

are equivalent statements. The inequality can be replaced by equality (=) or nonstrict inequality (≥).

Equivalent Matrix Inequalities

We have shown the equivalence between the Riccati inequality and the corresponding LMI. They have different virtues, which will be exploited when convenient. One of the reasons for choosing the LMI is that it in many cases provides a simple tool for showing that the set of solutions is convex.

By repeating the Schur complement we arrive at the following equivalent conditions:

  (i)  σ̄(D) < 1 and
       A^TP + PA + (B^TP + D^TC)^T(I − D^TD)^{-1}(B^TP + D^TC) + C^TC < 0;

  (ii) [ PA + A^TP + C^TC   PB + C^TD ]
       [ B^TP + D^TC        D^TD − I  ]  < 0;

  (iii) [ PA + A^TP   PB   C^T ]
        [ B^TP        −I   D^T ]
        [ C           D    −I  ]  < 0.

All but the first of these inequalities are linear in P if (A, B, C, D) are kept fixed. The last inequality, (iii), is linear in (A, B, C, D) for a given P, from which we conclude that the set of system matrices satisfying the Riccati inequality, or equivalently the LMI, is convex. The bounded real lemma states an extension of these results.



Lemma 2.7 (Bounded Real Lemma [82, 35])
The following statements are equivalent:

(i) $\|G\|_\infty < 1$ and $A$ stable, with
\[
G(s) = D + C (sI - A)^{-1} B;
\]

(ii) there exists a solution $P > 0$ to the LMI
\[
\begin{bmatrix} P A + A^T P & P B & C^T \\ B^T P & -I & D^T \\ C & D & -I \end{bmatrix} < 0. \tag{2.41}
\]

Proof

We have already shown that (ii) $\Rightarrow$ (i). By using the Kalman-Yakubovich-Popov lemma, see Section 2.5.6, the opposite direction is proved. $\Box$
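The direction (ii) $\Rightarrow$ (i) can be illustrated on a small example: if some $P > 0$ satisfies (2.41), the frequency response must stay below one. The sketch below uses a hypothetical scalar system, $A = -1$, $B = 1$, $C = 0.5$, $D = 0$ with $P = 1$ (values chosen for illustration only), and checks the gain on a frequency grid.

```python
import numpy as np

A, B, C, D = -1.0, 1.0, 0.5, 0.0
P = 1.0

# The LMI (2.41) specialized to scalars
lmi = np.array([[2 * P * A, P * B, C],
                [P * B,    -1.0,   D],
                [C,         D,    -1.0]])
feasible = bool(np.all(np.linalg.eigvalsh(lmi) < 0))

# Frequency sweep of G(jw) = D + C (jw - A)^(-1) B
w = np.logspace(-2, 2, 400)
G = D + C * B / (1j * w - A)
hinf = np.max(np.abs(G))          # grid estimate of ||G||_inf, about 0.5 here

print(feasible, hinf < 1.0)       # True True
```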

2.9 Structured Dynamic Uncertainties

The block diagram of a system subject to structured uncertainties, linear or nonlinear, according to the $\mu$-formalism is depicted in Figure 2.2.

Figure 2.2: Illustration of a dynamic system with uncertainty feedback. [Block diagram: $G$ with inputs $w$, $\hat w$ and outputs $z$, $\hat z$; the uncertainty $\Delta$ closes the loop from $\hat z$ to $\hat w$.]

The system $G$ subject to uncertainties is assumed to be linear and described by
\[
\begin{aligned}
\dot x &= A x + B_1 w + B_2 \hat w \\
z &= C_1 x + D_{11} w + D_{12} \hat w \\
\hat z &= C_2 x + D_{21} w + D_{22} \hat w.
\end{aligned} \tag{2.42}
\]
The structured uncertainties are described by $\Delta$, which is block diagonal, $\Delta = \mathrm{diag}[\Delta_1, \ldots, \Delta_f]$. Each block $\Delta_i$ is either a static parametric or a causal dynamic operator.
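For a constant $\Delta$, the loop $\hat w = \Delta \hat z$ in (2.42) can be eliminated explicitly, giving a state-space realization of the map from $w$ to $z$; the interconnection is well posed when $I - \Delta D_{22}$ is invertible. A minimal sketch, with all matrices below being hypothetical placeholders:

```python
import numpy as np

# Hypothetical first-order data for the system (2.42)
A   = np.array([[-1.0]])
B1  = np.array([[1.0]]);  B2  = np.array([[0.5]])
C1  = np.array([[1.0]]);  C2  = np.array([[1.0]])
D11 = np.array([[0.0]]);  D12 = np.array([[0.0]])
D21 = np.array([[0.0]]);  D22 = np.array([[0.0]])
Delta = np.array([[0.4]])          # a fixed parametric uncertainty block

# Eliminate the loop: w_hat = (I - Delta D22)^(-1) Delta (C2 x + D21 w)
X = np.linalg.solve(np.eye(1) - Delta @ D22, Delta)
Acl = A   + B2  @ X @ C2
Bcl = B1  + B2  @ X @ D21
Ccl = C1  + D12 @ X @ C2
Dcl = D11 + D12 @ X @ D21

print(Acl[0, 0])   # -1 + 0.5*0.4 = -0.8, still stable for this Delta
```

Sweeping `Delta` over its admissible range and checking the eigenvalues of `Acl` is the brute-force counterpart of the structured stability tests developed in this chapter.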

Example 2.1

Consider the system depicted in Figure 2.3 and imagine that it models a vehicle of some kind. Here two uncertainty blocks are present, $\Delta_1$ and $\Delta_2$. We can assume that $\Delta_1$ contains dynamic uncertainties for describing the unmodeled dynamics. For instance, it can include higher-order dynamics describing flexible modes in the structure of an aircraft (see Example 4.8). The second uncertainty block $\Delta_2$ includes a parametric uncertainty for describing variations in the velocity of the vehicle. It is set to zero for the mean speed of the vehicle and $\pm 1$ for the minimum and maximum speed
