
Institutionen för systemteknik
Department of Electrical Engineering

Master's thesis (Examensarbete)

Model Predictive Control for Active Magnetic Bearings

Master's thesis carried out in Automatic Control at the Institute of Technology, Linköping University

by Joachim Lundh

LiTH-ISY-EX--12/4608--SE

Linköping 2012

Department of Electrical Engineering, Linköpings tekniska högskola, Linköpings universitet


Model Predictive Control for Active Magnetic Bearings

Master's thesis carried out in Automatic Control at the Institute of Technology, Linköping University

by

Joachim Lundh

LiTH-ISY-EX--12/4608--SE

Supervisors: Niklas Wahlström, ISY, Linköping University
             Johan Sjöberg, ABB CRC, Västerås

Examiner: Johan Löfberg, ISY, Linköping University


Division, Department: Department of Electrical Engineering (Institutionen för systemteknik), SE-581 83 Linköping
Date: 2012-04-09
Language: English
Report category: Examensarbete (Master's thesis)
URL for electronic version: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81325
ISBN: —
ISRN: LiTH-ISY-EX--12/4608--SE
Title of series, numbering / ISSN: —

Title: Modellbaserad prediktionsreglering för aktiva magnetlager (Model Predictive Control for Active Magnetic Bearings)

Author: Joachim Lundh

Abstract: This thesis discusses the possibility of position controlling a rotor levitated on active magnetic bearings. The control strategy used is model predictive control, which is an online method where an optimization problem is solved at every sample. This makes the controller very computationally demanding. The sampling time of the system is very short in order to capture the dynamics of the rotor, which means that the controller is not given much time to solve the optimization problem. Different methods for solving QP-problems are considered to see whether it is possible to run the controller in real time. In addition, it is discussed how the choices of prediction horizon, control horizon and penalty on the terminal state affect the control. Simulations showing the characteristics of these choices have been performed.

Keywords: active magnetic bearing (AMB), model predictive control (MPC), quadratic programming (QP), real time, parameter scheduling


Sammanfattning

This thesis discusses the possibility of position controlling a rotor levitated on active magnetic bearings. The control strategy used is model predictive control, which is an online method where an optimization problem is solved at every sample. This makes the controller very computationally demanding. The sampling time of the system is very short in order to capture the dynamics of the rotor, which means that the controller is not given much time to solve the optimization problem. Different methods for solving QP-problems are considered to see whether it is possible to run the controller in real time.

In addition, it is discussed how the choices of prediction horizon, control horizon and penalty on the terminal state affect the control. Simulations showing the characteristics of these choices have been performed.


Abstract

This thesis discusses the possibility to position control a rotor levitated with active magnetic bearings. The controller type considered is model predictive control, which is an online strategy that solves an optimization problem in every sample, making the model predictive controller computationally intensive. Since the sampling time must be short to capture the dynamics of the rotor, very little time is left for the controller to perform the optimization. Different quadratic programming strategies are investigated to see if the problem can be solved in real time. Additionally, the impact of the choices of prediction horizon, control horizon and terminal cost is discussed. Simulations showing the characteristics of these choices are made and the results are shown.


Contents

Notation

1 Introduction
1.1 Active Magnetic Bearings
1.1.1 Multibearing
1.1.2 Advantages and Applications
1.2 Model Predictive Control
1.3 Purpose and Goal
1.4 Demarcation
1.5 Thesis Outline

2 Model
2.1 Rotor and amb
2.2 Amplifier
2.3 Sensor
2.4 Complete Model
2.5 Model Parametrization for Parameter Scheduling
2.6 Properties

3 Model Predictive Control
3.1 mpc Framework
3.2 Dense Problem Form
3.2.1 Shorter Control Horizon
3.3 Sparse Problem Form
3.3.1 Shorter Control Horizon
3.4 Explicit mpc
3.5 Parameter Scheduling
3.6 Horizons
3.6.1 Prediction Horizon
3.6.2 Control Horizon
3.7 Stability
3.8 Optimization
3.8.1 Methods for Solving QP-problems
3.8.2 Quadratic Programming Software

4 Controller
4.1 Observer
4.1.1 Observer Evaluation
4.2 Model Predictive Controller
4.2.1 Parameter Setting
4.3 Simulation Environment
4.4 External Implementations
4.4.1 quadprog
4.4.2 IPopt
4.4.3 qpOASES

5 Results
5.1 Control Policy
5.2 Tuning
5.2.1 Weight Matrices
5.2.2 Horizons
5.3 Performance
5.4 Stability
5.5 External Implementation Results

6 Summary
6.1 Conclusion
6.2 Future Work


Notation

Abbreviations

Abbreviation Description

mpc Model Predictive Control

amb Active Magnetic Bearing

LQG Linear Quadratic Gaussian (control)

Model predictive control

Notation Description

ω Axle rotational speed

A0 Continuous time invariant system matrix

A(ω) Continuous parameter varying system matrix

B Continuous system input matrix

C Continuous system output matrix

F0 Discrete time invariant system matrix

F(ω) Discrete parameter varying system matrix

∆F Slope for linear approximation of the discrete system

G Discrete system input matrix

H Discrete system output matrix

X State vector for entire prediction

U Input vector for entire prediction

Y Output vector for entire prediction

Np Prediction horizon

Nc Control horizon

Q1 Weight matrix for mpc model states

Q2 Weight matrix for mpc control signal

α, β, γ Trim parameters for weight matrices


Active magnetic bearings

Notation Description

µ0 Electromagnetic permeability of air/vacuum

Nr Number of actuator winding turns

Ibias The actuator bias current

Agr Pole face area

γ Geometry factor

g0 Nominal airgap

Observer

Notation Description

ω Axle rotational speed.

x̂ Estimated state

K Kalman gain

P Error covariance

R1 Noise covariance for process

1 Introduction

This thesis discusses the use of model predictive control to position control a rotor levitated with active magnetic bearings. The work has been done at and for ABB Corporate Research in Västerås, Sweden.

1.1 Active Magnetic Bearings

The idea of an active magnetic bearing (amb) is to use magnetic fields to levitate the rotor of a machine. It can, however, be shown through electromagnetic laws that no charged particle can be stationary in a constant magnetic field. This means that in order to keep the rotor in a stationary position, a dynamic field is required. The amb consists of sensors, actuators, amplifiers, a controller and backup bearings. The backup bearings keep the rotor in the machine in case of failure and also provide a rest for the rotor when not in operation. When running, the air gap between the rotor and the backup bearings is 0.5 mm, only allowing small displacements. A block diagram view of an active magnetic bearing can be seen in Figure 1.1. The actuators and the backup bearings are mounted on the motor housing.

Figure 1.1: Block diagram view of an active magnetic bearing (sensor, actuator, rotor, controller, amplifier). Actuator system denoted in blue, sensor system in red and control system in black.

To date there are some amb implementations made with mpc. In Huang [2007], a small amb process with low mass, circa 300 g, is considered. Since the rotor is used in a non-rotating state, no resonance frequencies caused by bending modes are considered. The smaller system also causes the sensor and actuator dynamics to be very fast, making them practically negligible. This makes it possible to identify a black-box model of the amb, rotor, sensors and amplifiers, with only eight states for the complete 3D model. The four states per dimension then roughly represent the stiff modes of the rotor. This is a fairly accurate model since the

operating point, ω = 0, is lower than the resonance frequencies of the rotor. This thesis only considers an mpc-amb implementation. However, several other regulator types for controlling magnetic bearings have been investigated in other theses and articles. In Nonami and Ito [1996], a 3D rotor with magnetic axial control is modeled and stabilized with µ-synthesis. In Jastrzebski et al. [2010], a centralized H∞ controller is implemented. LQ implementations have also been investigated. In Grega and Piłat [2005] and Zhuravlyov [2000], amb processes have been stabilized with LQ control.

The rotors from the experimental setups above are all substantially smaller and lighter than the rotor considered in this thesis. When a large rotor is studied, the flex of the rotor is much more apparent. This is accounted for by using a more sophisticated model that includes more of the rotor's dynamics. More details need to be added to the model, including the sensor and actuator dynamics as well as the bending modes causing the resonance frequencies in the vicinity of the operation frequencies.

1.1.1 Multibearing

An interesting concept to study is the correlation between the bearings in each end of the machine. Controlling each machine end separately is a waste of knowledge. In Figure 1.2 a multibearing setup can be seen. If the model includes the coupling between ends and axes, some decoupling can be made. Ideally a multivariable process would map one input to one output, making it easy to design controllers for. This is hardly ever the case; inputs usually affect many outputs through the process dynamics. Decoupling alters the problem to map one input to one output. Complete decoupling is hard to achieve, but even partial decoupling can be useful.

Figure 1.2: Multibearing setup with actuators and sensors.

Figure 1.3: Multimachine setup with actuators and sensors.

The extension of this is when two machines are merged, using three or more bearings with the rotor. For conventional bearings this is commonly done with a soft coupling that is designed to only transfer the torque between the machines and leaves out vibrations and other unwanted effects. Soft couplings also correct for slight mismatches in the line-up between the machines. With active magnetic bearings it is however possible to make one stiff axle without any couplings, letting the control system regulate all the bearings, and the two machines become one big unit. If more bearings than two are used, some may be redundant and could either be removed to make the machine smaller or be kept as a backup. With a stiff axle and more control points, the process properties can be improved to pass more easily through the rotor resonance frequencies. Figure 1.3 shows a setup with two machines on a stiff axle.

Making a centralized control design is usually a good idea. Including more of the process in the same loop grants better control performance. In Grega and Piłat [2005], linear control is implemented on two different designs of the same process and compared. The decentralized design divides the process and utilizes one controller per actuator, as opposed to the centralized design where the entire process is controlled with one controller. The centralized design is shown to have positive effects on the control of the processes. In Zhuravlyov [2000], a comparison between centralized and decentralized amb control with LQ design is also made.

1.1.2 Advantages and Applications

The main advantage of amb compared to conventional bearing technology is the contact free operation, which makes it very suitable for remote locations where maintenance is difficult, e.g., deep sea or space. The lack of lubricants makes sure that sensitive surroundings are not contaminated. With no friction and wear, the lifetime of the machine will be longer with magnetic bearings.

It is possible to do an unbalance control, letting the rotor spin around its center of mass instead of its geometric center. If small vibrations can be removed through feedback control, the production of the rotors could be less precise and money would be saved. The machine would also cause less vibration to itself and adjacent equipment, making it possible to place machines closer together and to other sensitive equipment. The unbalance vibrations will ultimately crash the rotor into the backup bearings if the rotational speed is high enough. In Shi et al. [2002], unbalance control that suppresses these vibrations is discussed.

The power consumption of the bearing is in the range of the friction loss from a conventional bearing, but both are negligible in comparison to the rotor drag. However, if the rotor were surrounded by vacuum, a large amount of energy could be saved.

The active magnetic bearing makes it possible to obtain incredibly high rotor speeds, over 100 000 rad/s (≈ 1 000 000 rpm).

1.2 Model Predictive Control

Model predictive control is based around the ability of models to predict system behavior. mpc originates from the dynamic matrix control theories developed in the late '70s, see Cutler and Ramaker [1979]. At each sample the optimal controls are calculated with respect to minimization of a cost criterion. Commonly a weighted quadratic norm of the inputs and the states of the process is used as the cost criterion. This is expressed as a quadratic programming problem which is solved online, producing the future control signals. The first control signal is applied to the process and the rest is discarded. At the next sample a new optimization problem is solved with a new initial state. In Figure 1.4, the basic terms are visualized for time k. The state is measured or estimated at time k and the appropriate control sequence, u(k), . . . , u(k + Nc), is calculated with respect to the system dynamics and choice of weights. The prediction can then be calculated by feeding the control sequence to the model.
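To make the receding-horizon idea concrete, the following minimal sketch runs such a loop on a toy double-integrator model. Everything below (matrices, weights, horizon) is an illustrative placeholder rather than the amb model of this thesis, and the finite-horizon problem is solved unconstrained by a generic numerical optimizer.

import numpy as np
from scipy.optimize import minimize

# Toy double integrator, not the amb model; weights and horizon are placeholders.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
G = np.array([[0.005], [0.1]])
Q1, Q2, Np = np.eye(2), 0.1, 20

def cost(U, x0):
    # Simulate the prediction over Np samples and sum the quadratic cost.
    x, J = x0.copy(), 0.0
    for u in U:
        x = F @ x + G.flatten() * u
        J += x @ Q1 @ x + Q2 * u**2
    return J

x = np.array([1.0, 0.0])
for k in range(30):                      # receding-horizon loop
    res = minimize(cost, np.zeros(Np), args=(x,))
    u0 = res.x[0]                        # apply only the first optimized input
    x = F @ x + G.flatten() * u0         # the plant moves one sample; re-optimize

In the constrained case the inner problem is instead posed as the QP of Chapter 3 and handed to a dedicated QP solver.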

Figure 1.4: Graph of mpc signals: predicted control sequence and predicted output in solid lines, previous control signals and outputs in dashed lines. Time k represents the present, k < 0 the past and k > 0 the future. The reference can be arbitrary, but is omitted here, representing r = 0 for all k.

Quadratic problems are time consuming to solve, and traditionally mpc has been suited for applications with slow dynamics, in the order of minutes or hours. However, mpc can be used for processes with small time constants, but in these cases the system cannot have many states, for execution-time reasons. A small number of states opens up for explicit mpc, where the problem is solved offline. The controller then simply looks up the control policy from a table. See Bemporad et al. [2002a,b] for further reading about explicit mpc.

Much research is done to decrease the execution time of mpc, and solution times are progressively being lowered. In Zeilinger et al. [2009], a 12-state problem with a prediction horizon of 10 is solved in 2 ms. Techniques using application-specific exploits, such as the active set method described in Ferrau et al. [2008], also allow for short sampling times. Software is not the only aspect that can be changed to increase performance. Altering the architecture of the processor where the optimization problem is solved is also a possibility. In Vouzis et al. [2009], an mpc control strategy is implemented on two processors. The host processor uses a coprocessor to handle specific matrix operations. The shared workload and specialization of the coprocessor provide faster solving times.

Theoretical amb-mpc implementations have also been studied. In Zhang et al. [2004], an energy storage flywheel is modeled and controlled with both mpc and a PID controller. The results show that the mpc gives better performance and the possibility to stabilize the rotor at higher rotational speeds. An example where mpc has been successfully implemented on a small experimental magnetic bearing is the setup in Huang [2007] discussed above.


mpc is one of few control strategies that explicitly handles limitations and conditions related to the controlled process. Commonly, constraints related to the control signals and outputs are added, but the mpc framework is general and many different constraints can be formed.

Currently mpc is usually found controlling refineries and chemical plants. Here, time constants are long enough to ensure that the optimization is completed before the next sample.

1.3 Purpose and Goal

The goal of the thesis changed due to problems that were not foreseeable. The initial tasks were to:

• Model the active magnetic bearing process and expand the model to a multi-bearing and multimachine scenario.

• Implement an mpc solution with Maple/IPopt.

• Look into automatic differentiation and see what benefits it could have compared to symbolic differentiation. Maple/IPopt supports both and it would consequently be tested on that platform.

• Evaluate different mpc strategies to achieve fast enough optimization to ensure the short sampling intervals needed for the magnetic bearing. Optimization time must be in the vicinity of 0.1 ms.

• Perform extensive robustness tests of the obtained controller.

Due to difficulties with finding a stable proof-of-concept implementation for the mpc-amb system, the multimachine track was dropped.

The thesis considers:

• Model the active magnetic bearing process and expand the model to a multi-bearing scenario.

• Implement an mpc solution with Maple/IPopt and qpOASES.

• Evaluate different mpc strategies to achieve fast enough optimization to ensure the short sampling intervals needed for the magnetic bearing. Optimization time must be in the vicinity of 0.1 ms.

• Design a model predictive controller to control the amb.

• Expand the model with parameter scheduling. Investigate what the advantages and disadvantages are.

• Analyze properties of unstable processes controlled with mpc. Compare with conventional design methods and investigate what new aspects must be considered.


The aim is to test if it is possible to speed up the optimization and use mpc on systems with approximately 30 states and a sampling rate of 10 kHz.

1.4 Demarcation

There are a lot of interesting aspects of the magnetic bearing setup and the model predictive controller. This thesis will be limited to the position dynamics of the rotor; no housing dynamics will be included. Only radial movement of the rotor will be considered; the axial movement is assumed to be zero.

The rotor is assumed to be unaffected by gravity. Therefore, it is also assumed to be in a levitating position when the controller is started, i.e., no initial lift up or let down sequence will be designed.

Multi machine setup is not treated in this thesis.

The rotor is considered to be unloaded, i.e., there will be no disturbances from motor, generator or turbine forces.

1.5 Thesis Outline

Chapters 2 and 3 introduce the model and the model predictive control framework. In Chapter 3, the choice of horizons related to the mpc is investigated. These chapters provide a base for the controller design in Chapter 4. Chapter 4 also describes the external implementations. The two last chapters, 5 and 6, describe the results from simulations and the conclusions drawn, respectively. Chapter 6 also includes some suggestions of interesting future topics.

2 Model

A main component of the model predictive controller is a model of the plant. For the active magnetic bearing case the model consists of three major parts: the rotor, the amplifier and the sensor. This chapter shows the general principle of how the model is derived, based on the theories in Schweitzer et al. [2009] and Lantto [1999]. The model is derived one part at a time in Sections 2.1 to 2.3. The outputs of the model refer to the Cartesian position of the rotor ends, denoted 1 and 2. Figure 2.1 shows the coordinates.

2.1 Rotor and amb

The rotor is studied in one axis (x or y) at a time, creating a 2D model of the rotor movement. By merging two 2D models, a 3D model is created. At this stage the model only covers the movement of each axis separately. The process has gyroscopic dependencies between the axes that are added to improve the model. In addition to that, the amb dynamics is built into the rotor model directly. To create the 2D model the rotor is divided into a number of segments, each with its own specific mechanical properties, e.g., mass, inertia, density and stiffness. The movement of the segments, as an effect of the rotation and external forces, is studied with finite element method calculations. It is described in terms of modes that represent bending properties of the rotor. Each mode corresponds to a certain resonance frequency. Any rotor position can be described by adding the appropriate amount of each mode. This can roughly be compared to a Fourier series, where the modes represent the basis functions that build up the signal, or as in this case, the position of the rotor segments.

Figure 2.1: Visualisation of the rotor coordinate setup (y1, x1, y2, x2). The inputs and outputs will be indexed with these coordinates.

The first mode represents the zeroth order component of the movement, an oscillation perpendicular to the axis of the rotor. The second mode represents the first order component, a rotational oscillation around the rotor's center of mass, i.e., a conical mode. These two modes are referred to as the rigid body modes and can be seen in Figure 2.2. All following modes will have bending properties and be referred to as bending modes, with the third mode being the first bending mode and so on. Figure 2.3 gives an idea of what the bending modes look like; the third bending mode will have another bend to it and so on. The exact look of the bending modes will vary with the properties of the rotor, i.e., mechanical properties such as stiffness and mass placement.

In the same manner as a Fourier signal, the modes representing movement can be truncated to obtain an approximation of reasonable size. Keeping more modes increases the resemblance with the real movement. A model with six modes is appropriate to describe the movement of this rotor. The oscillating motion of each mode is described by two states.

Figure 2.2: (a) First rigid body mode. (b) Second rigid body mode, also referred to as a conical mode.

Figure 2.3: Principal sketches of the two first bending modes. (a) First bending mode. (b) Second bending mode.

The 2D dynamics of the rotor is represented by a continuous state-space system consisting of the matrices a, b and c. Here qx is the modal coordinates of the model in the x-axis, ũx is the input forces and ỹx is the displacement at the sensor positions.

\dot{q}_x(t) = a q_x(t) + b \tilde{u}_x(t)
\tilde{y}_x(t) = c q_x(t)    (2.1)

with

\tilde{u}_x(t) = \begin{bmatrix} \tilde{u}_{x1}(t) \\ \tilde{u}_{x2}(t) \end{bmatrix}, \quad \tilde{y}_x(t) = \begin{bmatrix} \tilde{y}_{x1}(t) \\ \tilde{y}_{x2}(t) \end{bmatrix}    (2.2)

The axes have equal properties; by doubling up the system, a 3D model is created.

\dot{q}(t) = A_r q(t) + B_r \tilde{u}(t)
\tilde{y}(t) = C_r q(t)    (2.3)

where

A_r = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix}, \quad B_r = \begin{bmatrix} b & 0 \\ 0 & b \end{bmatrix}, \quad C_r = \begin{bmatrix} c & 0 \\ 0 & c \end{bmatrix}, \quad D_r = 0    (2.4)

and

\tilde{u}(t) = \begin{bmatrix} \tilde{u}_x(t) \\ \tilde{u}_y(t) \end{bmatrix}, \quad \tilde{y}(t) = \begin{bmatrix} \tilde{y}_x(t) \\ \tilde{y}_y(t) \end{bmatrix}, \quad q(t) = \begin{bmatrix} q_x(t) \\ q_y(t) \end{bmatrix}.    (2.5)

The model in (2.3) only accounts for the internal rotor movement and includes neither the amb dynamics nor the coupling between the axes; these need to be added. The amb dynamics includes an internal feedback from the displacement. The displacement feedback is a positive feedback, since the force of a magnet is inversely proportional to the distance from the source. That is, a rotor displacement will amplify itself, causing the rotor to be drawn further away from the set point in an unstable fashion. According to Schweitzer et al. [2009] and Lantto [1999], the amb dynamics is formed with the coefficients Kir and Kxr, which are products of electromagnetic properties.

K_{ir} = \frac{\mu_0 N_r I_{bias} A_{gr} \gamma}{g_0^2}    (2.6)

K_{xr} = \frac{\mu_0 N_r I_{bias}^2 A_{gr} \gamma}{g_0^3} = K_{ir}\,\frac{I_{bias}}{g_0},    (2.7)

where µ0 is the permeability of vacuum, Nr is the number of actuator winding turns, Ibias is the bias current, Agr is the pole face area, γ is a geometry factor and g0 is the nominal air gap. Kir and Kxr are included directly into the input and state matrices.

The gyroscopic coupling between the axes is proportional to the rotational speed ω; the relation is a linear function, ωg. The gyroscopic matrix g is a product from the same finite element method calculation where a, b and c were derived. These terms are added to the off-diagonal of the system matrix.

The complete rotor model Gr will then be

A_r = \begin{bmatrix} a + K_{xr} b c & \omega g \\ \omega g & a + K_{xr} b c \end{bmatrix}, \quad B_r = K_{ir} \begin{bmatrix} b & 0 \\ 0 & b \end{bmatrix}, \quad C_r = \begin{bmatrix} c & 0 \\ 0 & c \end{bmatrix}, \quad D_r = 0.    (2.8)

The coupling between the axes is ω dependent, changing the model as ω changes. This knowledge will be used later to parametrize the model.
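As a rough sketch of how (2.8) can be assembled numerically, the snippet below builds Ar, Br and Cr from per-axis matrices. Every matrix and constant here is a made-up placeholder: the actual modal matrices a, b, c and the gyroscopic matrix g come from the finite element calculations and are not given in this thesis.

import numpy as np

# Placeholder per-axis modal matrices; in the thesis they come from FEM calculations.
nq = 12                                   # e.g. 6 modes, 2 states each
rng = np.random.default_rng(0)
a = np.diag(-np.linspace(1.0, 6.0, nq))   # modal dynamics (placeholder)
b = rng.standard_normal((nq, 2))          # two force inputs per axis
c = rng.standard_normal((2, nq))          # two displacement outputs per axis
g = rng.standard_normal((nq, nq))         # gyroscopic coupling matrix (placeholder)

# Placeholder electromagnetic constants for (2.6)-(2.7).
mu0, Nr, Ibias, Agr, gamma, g0 = 4e-7 * np.pi, 100, 1.0, 1e-3, 1.0, 5e-4
Kir = mu0 * Nr * Ibias * Agr * gamma / g0**2
Kxr = Kir * Ibias / g0
omega = 2 * np.pi * 50                    # rotational speed [rad/s]

Ar = np.block([[a + Kxr * b @ c, omega * g],
               [omega * g,       a + Kxr * b @ c]])
Br = Kir * np.block([[b, np.zeros_like(b)],
                     [np.zeros_like(b), b]])
Cr = np.block([[c, np.zeros_like(c)],
               [np.zeros_like(c), c]])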

2.2 Amplifier

Ideally the amplifier would amplify the current for the actuators instantly and in a linear fashion. This is however rarely the case, and the dynamics of the amplifier has to be taken into account when building the model. The amplifier can be described with a first order transfer function per signal put through.

G_a = \frac{1}{\tau_a s + 1} \cdot I_{4 \times 4}    (2.9)

where τa describes the time constant of the amplifier dynamics.

2.3 Sensor

The sensor also has some dynamics. It too can be modeled as one first order transfer function per signal put through. The sensor dynamics is fast, i.e., the time constant τs is small, but not negligible.

G_s = \frac{1}{\tau_s s + 1} \cdot I_{4 \times 4}    (2.10)

where τs describes the time constant of the sensor dynamics.

2.4 Complete Model

The resulting system is obtained by merging the three models described in Sections 2.1, 2.2 and 2.3.

G_{tot} = G_s \cdot G_r \cdot G_a    (2.11)

The complete model state matrix will have a block diagonal structure with link terms on the top off-diagonal.

A = \begin{bmatrix} A_s & B_s C_r & 0 \\ 0 & A_r & B_r C_a \\ 0 & 0 & A_a \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ B_a \end{bmatrix}, \quad C = \begin{bmatrix} C_s & 0 & 0 \end{bmatrix}, \quad D = 0    (2.12)

The complete model will then be

\dot{x}(t) = A x(t) + B u(t)
y(t) = C x(t).    (2.13)

The model has four inputs, four outputs and 32 states. The inputs represent control current and outputs represent sensor voltage.

u(t) = \begin{bmatrix} u_{x1}(t) \\ u_{x2}(t) \\ u_{y1}(t) \\ u_{y2}(t) \end{bmatrix}, \quad y(t) = \begin{bmatrix} y_{x1}(t) \\ y_{x2}(t) \\ y_{y1}(t) \\ y_{y2}(t) \end{bmatrix}, \quad x(t) = \begin{bmatrix} x_1(t) \\ \vdots \\ x_{32}(t) \end{bmatrix}    (2.14)

The model is then discretized with sampling time T, so that x(t) is sampled at t = kT; introduce the short-hand notation x[k]. The equivalent discrete matrices F and G are created; C is unaffected by the discretization and is therefore equal to H. The discrete version of the model is the one used in the model predictive controller in this thesis.

x(k + 1) = F x(k) + G u(k)
y(k) = H x(k)    (2.15)
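A sketch of how such a zero-order-hold discretization can be carried out with standard tools is shown below. The small continuous-time matrices are placeholders standing in for the 32-state model; the only value taken from the thesis is the 10 kHz sampling rate.

import numpy as np
from scipy.signal import cont2discrete

# Placeholder continuous model standing in for the 32-state amb model.
A = np.array([[0.0, 1.0], [-4.0, -0.2]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

T = 1e-4                                   # 10 kHz sampling rate
F, G, H, _, _ = cont2discrete((A, B, C, D), T, method='zoh')
# x(k+1) = F x(k) + G u(k),  y(k) = H x(k), as in (2.15); C is unchanged, so H = C.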

2.5 Model Parametrization for Parameter Scheduling

As described in Section 2.1, the cross term linking the axes together has a linear dependency on ω. This can be taken into account by creating a model that is parametric with ω as the parameter. It is done by designing two models with different rotational speeds, ω1 and ω2, and creating an affine function in ω between them. To ensure numerical accuracy, the difference between ω1 and ω2 is chosen large. F is the discrete version of the system matrix. The slope for the affine function is defined as

\Delta F = \frac{F(\omega_1) - F(\omega_2)}{\omega_1 - \omega_2}.    (2.16)

This can then be used to create the affine function describing the relation.

F(\omega) = \underbrace{F(0)}_{F_0} + \Delta F\,\omega = F_0 + \Delta F\,\omega    (2.17)

The only part of the model affected is the state matrix. The observant reader will note that the ω dependency should propagate both to the state matrix F and the input matrix G of the discrete system, since the discretization is defined as

F(\omega) = e^{A(\omega)T}, \quad G(\omega) = \int_0^T e^{A(\omega)t} B\, dt.    (2.18)

Looking at the first order approximation of the discretization, it can be noted that

F(\omega) \approx I + A(\omega)T, \quad G(\omega) \approx \int_0^T \big(I + A(\omega)t\big) B\, dt = BT + A(\omega)B\frac{T^2}{2}.    (2.19)

With a short sample time, the effect of the rotational speed in the discrete input matrix is negligible, since the only contribution comes from the continuous system matrix, which is scaled with the sample time squared. Therefore G(ω) ≈ G(0) = G0, and it is appropriate to use G0 instead of G(ω).

The model on state-space form is then represented as

x(k + 1) = F(\omega)x(k) + G(\omega)u(k) = (F_0 + \Delta F\,\omega)x(k) + G_0 u(k)
y(k) = H x(k).    (2.20)

This model provides a better resemblance with the process and is used for parameter scheduling of the controller and observer design.
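The parametrization can be sketched as below; the ω-dependent continuous model is a small placeholder with the same structure as (2.8), and only the construction of ∆F and F(ω) follows (2.16)-(2.17).

import numpy as np
from scipy.linalg import expm

def A_of_omega(omega):
    # Placeholder ω-dependent continuous state matrix with the structure of (2.8).
    a = np.array([[0.0, 1.0], [-4.0, -0.2]])
    g = np.array([[0.0, 0.1], [-0.1, 0.0]])
    return np.block([[a, omega * g], [omega * g, a]])

T = 1e-4
w1, w2 = 0.0, 1000.0                       # two widely separated rotational speeds
F1, F2 = expm(A_of_omega(w1) * T), expm(A_of_omega(w2) * T)

dF = (F1 - F2) / (w1 - w2)                 # slope of the affine function, (2.16)
F0 = expm(A_of_omega(0.0) * T)

def F_of_omega(omega):
    return F0 + dF * omega                 # scheduled state matrix, (2.17)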

2.6 Properties

The process modeled is unstable. In the continuous model, this property is represented by poles in the right half plane. These poles are transferred to the outside of the unit disc during the discretization. This being a mechanical system, the properties are characteristic, with peaks and notches in the frequency response. Each pair, peak and notch, corresponds to the resonance frequency of a bending mode. In Figure 2.4, the singular values of the system are shown, clearly displaying the peaks and the notches. Operation in the vicinity of the peaks is difficult; the gain is very high there and makes the process hard to control.

To make a good approximation of the process, dynamics from below the maximum operation frequency must be included. This means that the bending modes that have their resonance frequency below the maximum operation frequency must be included. Also, the sample rate must be high enough to be able to identify the movement.

Figure 2.4: Singular values of the system, shown as Singular Values (dB) versus Frequency (rad/s).

3 Model Predictive Control

This chapter introduces the model predictive controller. The flexibility of the mpc framework makes it easy to add other features such as reference tracking, integral action and more. Maciejowski [2002] is a good source for further reading. The choice and impact of the prediction and control horizons, which are central to the mpc application, are discussed in Section 3.6.

3.1 mpc Framework

Consider the time discrete process

x(k + 1) = F x(k) + G u(k)
y(k) = H x(k)    (3.1)

Assume that the process has m inputs, p outputs and n states. The input, output and state vectors can then be formed.

u(k) = \begin{bmatrix} u_1(k) \\ \vdots \\ u_m(k) \end{bmatrix}, \quad y(k) = \begin{bmatrix} y_1(k) \\ \vdots \\ y_p(k) \end{bmatrix}, \quad x(k) = \begin{bmatrix} x_1(k) \\ \vdots \\ x_n(k) \end{bmatrix}    (3.2)

The general idea of the control strategy is to minimize a cost function that takes both the control signal and the system states into account. A conventional choice of cost function is

J(x(k), u(k)) = \sum_{j=0}^{\infty} \|x(k + j)\|^2_{Q_1} + \|u(k + j)\|^2_{Q_2}.    (3.3)


The notation \|x\|^2_Q is defined as x^T Q x. This penalizes the inputs' and states' deviation from the origin quadratically, weighted with the penalty matrices Q1 and Q2. System dynamics and constraints are added to form the optimization problem. The main design parameter of the controller is the choice of the penalty matrices, which trims the optimization into focusing on decreasing the cost for either the states or the inputs. Thus, the design balances driving the states to zero fast against using small inputs.

In theory, if the dynamics of the process is perfectly modeled and the system is completely noise free, it would be possible to measure the state at time k and calculate all future control signals. This would result in an infinitely large problem and is not of any practical use, since it is unsolvable with the methods used in this thesis. Instead, the optimization is done for a reasonably long time period, which scales the problem down to proportions possible to handle. This time period is called the prediction horizon and is here denoted Np. The cost function from (3.3) is truncated to Np terms and results in

J_{N_p}(x(k), u(k)) = \sum_{j=1}^{N_p} \|x(k + j)\|^2_{Q_1} + \|u(k + j - 1)\|^2_{Q_2}.

The framework allows for easy inclusion of constraints of (almost) any kind. This is here illustrated with time varying input and output constraints.

\min_u \; J_{N_p}(x(k), u(k))
\text{s.t.} \quad x(k + 1) = F x(k) + G u(k)
\qquad\;\; \vdots
\qquad\;\; x(k + N_p) = F x(k + N_p - 1) + G u(k + N_p - 1)
\qquad\;\; u_{lb}(k) \le u(k) \le u_{ub}(k)
\qquad\;\; \vdots
\qquad\;\; u_{lb}(k + N_p - 1) \le u(k + N_p - 1) \le u_{ub}(k + N_p - 1)
\qquad\;\; y_{lb}(k + 1) \le C x(k + 1) \le y_{ub}(k + 1)
\qquad\;\; \vdots
\qquad\;\; y_{lb}(k + N_p) \le C x(k + N_p) \le y_{ub}(k + N_p)    (3.4)

To account for disturbances and model errors, the optimization problem (3.4) is solved in every sample. The first part of the control signal, u(k), is applied to the process and the rest is discarded. From an optimization point of view it is convenient to use the previous optimum as an initial guess for the next iteration. From one sample to another the problem does not change drastically unless a large disturbance affects the system.

The control signal does not need to be unique in every sample; allowing the control signal to alternate in the early samples is usually enough. The time over which the control signal is allowed to vary is called the control horizon and is denoted Nc; after this point the control signal is forced to be fixed. The size and complexity of the optimization problem is proportional to N_p m + N_c n. Choosing smaller Np and Nc reduces the problem size and decreases the computation time needed to solve the resulting QP-problem. From experience it is known that a control horizon set to one fourth of the prediction horizon is appropriate. In Section 3.6, the choice of horizons is further discussed.

Commonly the controller is designed to be integrating, to remove stationary errors. For mpc this is achieved by introducing the change in control signal, ∆u, and using this term in the cost function instead of the control signal itself. If gravity were to be included in the simulation environment, it would perhaps be necessary to include an integrator. Since the process disturbance alternates fast and does not have a constant term, the model has been designed without an integrator.

There are two main ways of forming the problem which are used in the thesis; they are discussed in the following sections. The dense form is better suited for qpOASES and the sparse form is better suited for IPopt. quadprog can be set up to use either the sparse or the dense form, but has here been used with the dense form only.

3.2 Dense Problem Form

From (3.1) we know that the process can be expressed as x(k + 1) = F x(k) + G u(k). Recursively, the following states can be obtained.

x(k + 2) = F x(k + 1) + G u(k + 1) = F(F x(k) + G u(k)) + G u(k + 1) = F^2 x(k) + F G u(k) + G u(k + 1)
x(k + 3) = F^3 x(k) + \ldots
\vdots
x(k + N_p) = F^{N_p} x(k) + \cdots + G u(k + N_p - 1)    (3.5)

To present the predictions in a compact way, the vectors containing the control signal and the state vector for all sample times within the prediction horizon are formed as

X = \begin{bmatrix} x(k + 1) \\ \vdots \\ x(k + N_p) \end{bmatrix}, \quad U = \begin{bmatrix} u(k) \\ u(k + 1) \\ \vdots \\ u(k + N_p - 1) \end{bmatrix}.    (3.6)

The prediction can now be written as

X = \mathcal{F} x(k) + \mathcal{G} U    (3.7)

with

\mathcal{F} = \begin{bmatrix} F \\ F^2 \\ \vdots \\ F^{N_p} \end{bmatrix}, \quad \mathcal{G} = \begin{bmatrix} G & & & \\ FG & G & & \\ \vdots & & \ddots & \\ F^{N_p - 1}G & \cdots & FG & G \end{bmatrix}.    (3.8)

Introduce

\tilde{Q}_1 = \begin{bmatrix} Q_1 & & \\ & \ddots & \\ & & Q_1 \end{bmatrix}, \quad \tilde{Q}_2 = \begin{bmatrix} Q_2 & & \\ & \ddots & \\ & & Q_2 \end{bmatrix}.    (3.9)

The problem can now be expressed as a quadratic programming problem, a QP-problem.

J_{N_p}(x(k)) = \sum_{j=0}^{N_p - 1} \|x(k + j)\|^2_{Q_1} + \|u(k + j)\|^2_{Q_2}
            = X^T \tilde{Q}_1 X + U^T \tilde{Q}_2 U
            = (\mathcal{F} x(k) + \mathcal{G} U)^T \tilde{Q}_1 (\mathcal{F} x(k) + \mathcal{G} U) + U^T \tilde{Q}_2 U
            = x^T(k)\mathcal{F}^T \tilde{Q}_1 \mathcal{F} x(k) + 2\mathcal{G}^T \tilde{Q}_1 \mathcal{F} x(k) U + U^T \mathcal{G}^T \tilde{Q}_1 \mathcal{G} U + U^T \tilde{Q}_2 U    (3.10)

Removing the term that is independent of U results in

J_{N_p}(x(k)) = 2\mathcal{G}^T \tilde{Q}_1 \mathcal{F} x(k) U + U^T (\mathcal{G}^T \tilde{Q}_1 \mathcal{G} + \tilde{Q}_2) U.    (3.11)

It is now possible to identify the terms of a QP-problem

H_{qp} = \mathcal{G}^T \tilde{Q}_1 \mathcal{G} + \tilde{Q}_2, \quad f_{qp} = 2\mathcal{G}^T \tilde{Q}_1 \mathcal{F} x(k)    (3.12)

and set up an optimization problem that can be solved.

\min_U \; U^T H_{qp} U + f_{qp} U
\text{s.t.} \quad u_{lb}(k) \le u(k) \le u_{ub}(k)
\qquad\;\; \vdots
\qquad\;\; u_{lb}(k + N_p - 1) \le u(k + N_p - 1) \le u_{ub}(k + N_p - 1)
\qquad\;\; y_{lb}(k + 1) \le C x(k + 1) \le y_{ub}(k + 1)
\qquad\;\; \vdots
\qquad\;\; y_{lb}(k + N_p) \le C x(k + N_p) \le y_{ub}(k + N_p)    (3.13)

In this formulation the predicted states have been substituted and expressed in terms of u. The name dense refers to the high number of non-zero elements in the quadratic problem Hessian, H_{qp}. To fit the QP-solver, the constraints are expressed in the following way

\begin{bmatrix} A_u \\ A_y \end{bmatrix} U \le \begin{bmatrix} b_u \\ b_y \end{bmatrix} \quad \text{with} \quad A_u = \begin{bmatrix} I \\ -I \end{bmatrix}, \; b_u = \begin{bmatrix} U_{max} \\ -U_{min} \end{bmatrix}, \quad A_y = \begin{bmatrix} I \\ -I \end{bmatrix}, \; b_y = \begin{bmatrix} Y_{max} \\ -Y_{min} \end{bmatrix}    (3.14)
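As a sketch, the dense matrices of (3.8), (3.12) and the input-bound part of (3.14) can be stacked as follows. The system, weights, horizon and bound are placeholders; the resulting Hqp, fqp, Au and bu are what would be handed to a QP solver.

import numpy as np

# Placeholder system, weights, horizon and input bound.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
G = np.array([[0.005], [0.1]])
n, m, Np = 2, 1, 10
Q1, Q2 = np.eye(n), 0.1 * np.eye(m)

Fbar = np.vstack([np.linalg.matrix_power(F, j + 1) for j in range(Np)])
Gbar = np.zeros((Np * n, Np * m))
for i in range(Np):
    for j in range(i + 1):
        Gbar[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(F, i - j) @ G

Q1t, Q2t = np.kron(np.eye(Np), Q1), np.kron(np.eye(Np), Q2)
Hqp = Gbar.T @ Q1t @ Gbar + Q2t            # (3.12)
x0 = np.array([1.0, 0.0])
fqp = 2 * Gbar.T @ Q1t @ Fbar @ x0         # (3.12), linear term for the current state

umax = 3.0
Au = np.vstack([np.eye(Np * m), -np.eye(Np * m)])   # input part of (3.14)
bu = np.hstack([umax * np.ones(Np * m), umax * np.ones(Np * m)])
# The QP is then: min_U  U.T @ Hqp @ U + fqp @ U   s.t.  Au @ U <= bu.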

3.2.1 Shorter Control Horizon

We can see that the length of the control horizon is the same as the number of block columns in \mathcal{G} in (3.8). Each column represents how each input will affect the prediction. If u(k + j) = u(k + N_c - 1) for j = N_c, N_c + 1, . . . , N_p - 1, it is possible to rewrite \mathcal{G} by removing the last N_p - N_c block columns and replacing them with a new N_c:th column.

\mathcal{G} = \begin{bmatrix}
G & & & \\
FG & G & & \\
\vdots & & \ddots & \\
F^{N_p - 1}G & \cdots & F^{N_p - N_c + 1}G & \Big(\sum_{j=0}^{N_p - N_c} F^j\Big)G
\end{bmatrix}    (3.15)

This means that the last control signal is weighted with the combined weight of the remaining samples up to N_p. The penalty matrix \tilde{Q}_2 must also be modified to match the smaller dimension of U.

Example 3.1
Expressing this for the general case proved to be difficult; it is instead illustrated with a simple example. N_p = 6 and N_c = 3 would yield the following \mathcal{G}.

\mathcal{G} = \begin{bmatrix}
G & 0 & 0 \\
FG & G & 0 \\
F^2G & FG & G \\
F^3G & F^2G & (F + I)G \\
F^4G & F^3G & (F^2 + F + I)G \\
F^5G & F^4G & (F^3 + F^2 + F + I)G
\end{bmatrix}

The three columns represent the number of control signals and the six rows represent the prediction horizon, thus matching the initial description.
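The column merging in Example 3.1 can equivalently be written as the full prediction matrix of (3.8) multiplied by a blocking matrix that repeats the last free input. The sketch below constructs both for Np = 6 and Nc = 3 with a placeholder system.

import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])     # placeholder system
G = np.array([[0.005], [0.1]])
n, m, Np, Nc = 2, 1, 6, 3

Gbar = np.zeros((Np * n, Np * m))          # full prediction matrix, (3.8)
for i in range(Np):
    for j in range(i + 1):
        Gbar[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(F, i - j) @ G

# Blocking matrix: u(k+j) = u(k+Nc-1) for j >= Nc.
T = np.zeros((Np * m, Nc * m))
for j in range(Np):
    jj = min(j, Nc - 1)
    T[j*m:(j+1)*m, jj*m:(jj+1)*m] = np.eye(m)

Gbar_short = Gbar @ T                      # Np*n x Nc*m, the merged form of Example 3.1

The last block column of Gbar_short then contains the summed terms (F + I)G, (F^2 + F + I)G and so on, exactly as in the example.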

3.3 Sparse Problem Form

From equation (3.1) we know that the system can be expressed as

x(k + 1) = F x(k) + G u(k)    (3.16)

To obtain the sparse formulation, the process dynamics is expressed as N_p equality constraints. Define

X = \begin{bmatrix} x(k + 1) \\ \vdots \\ x(k + N_p) \end{bmatrix}, \quad U = \begin{bmatrix} u(k) \\ u(k + 1) \\ \vdots \\ u(k + N_p - 1) \end{bmatrix}, \quad Z = \begin{bmatrix} X \\ U \end{bmatrix}.    (3.17)

The equality constraints are written on the form A_{eq} Z = b_{eq}. This form can easily be obtained by moving all the terms in (3.16) to one side for each sample in the prediction.

-F x(k) = -x(k + 1) + G u(k)
0 = F x(k + 1) - x(k + 2) + G u(k + 1)
\vdots
0 = F x(k + N_p - 1) - x(k + N_p) + G u(k + N_p - 1)    (3.18)

The matrices are then separated from the variables and written on the form

A_{eq} = \begin{bmatrix}
-I & & & & G & & \\
F & -I & & & & G & \\
& \ddots & \ddots & & & & \ddots \\
& & F & -I & & & & G
\end{bmatrix}, \quad
b_{eq} = \begin{bmatrix} -F x[k] \\ 0 \\ \vdots \\ 0 \end{bmatrix}    (3.19)

The inequality constraints (3.20) are built similarly to the inequality constraints of the dense form (3.14). The major difference is that in the sparse form the predicted states are optimization variables. Therefore it is, compared to the dense form, slightly more intuitive to add constraints to them.

A_x = \begin{bmatrix} I \\ -I \end{bmatrix}, \; b_x = \begin{bmatrix} X_{max} \\ -X_{min} \end{bmatrix}, \quad A_u = \begin{bmatrix} I \\ -I \end{bmatrix}, \; b_u = \begin{bmatrix} U_{max} \\ -U_{min} \end{bmatrix}.    (3.20)


The full optimization problem will then be

\min_U \; Z^T \begin{bmatrix} \tilde{Q}_1 & 0 \\ 0 & \tilde{Q}_2 \end{bmatrix} Z
\text{s.t.} \quad A_{eq} Z = b_{eq}    (3.21)
\qquad\;\; \begin{bmatrix} A_x & 0 \\ 0 & A_u \end{bmatrix} Z \le \begin{bmatrix} b_x \\ b_u \end{bmatrix}.

The optimization problem Hessian will, due to the block diagonal structure of the weight matrices Q1 and Q2, have few nonzero elements; thus the form is sparse. The sparse form will have N_p n + N_c m optimization variables, as opposed to N_c m for the dense form.
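A sketch of how the sparse data of (3.17)-(3.19) can be stacked is given below. The system, horizon and initial state are placeholders, and Z is ordered as [x(k+1), ..., x(k+Np), u(k), ..., u(k+Np-1)].

import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])     # placeholder system
G = np.array([[0.005], [0.1]])
n, m, Np = 2, 1, 4
x0 = np.array([1.0, 0.0])

Aeq = np.zeros((Np * n, Np * (n + m)))
beq = np.zeros(Np * n)
for i in range(Np):
    Aeq[i*n:(i+1)*n, i*n:(i+1)*n] = -np.eye(n)              # -x(k+i+1)
    if i > 0:
        Aeq[i*n:(i+1)*n, (i-1)*n:i*n] = F                   # F x(k+i)
    Aeq[i*n:(i+1)*n, Np*n + i*m:Np*n + (i+1)*m] = G         # G u(k+i)
beq[:n] = -F @ x0                                            # known term, first row of (3.19)
# Aeq @ Z = beq now encodes x(k+i+1) = F x(k+i) + G u(k+i) for i = 0, ..., Np-1.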

3.3.1 Shorter Control Horizon

Modifying the problem to reduce the control horizon is very easy for the sparse problem; it is achieved by altering the equality constraints from (3.19) to

A_{eq} = \begin{bmatrix}
-I & & & & & G & & \\
F & -I & & & & & \ddots & \\
& \ddots & \ddots & & & & & G \\
& & F & -I & & & & G \\
& & & \ddots & \ddots & & & \vdots \\
& & & & F & -I & & G
\end{bmatrix}

The number of columns is now N_p n_x + N_c n_u instead of N_p(n_u + n_x). The weight matrices \tilde{Q}_2 and Q_2 must also be modified to match the dimensions of U and u(k), respectively.

Example 3.2
Using the same setup as in Example 3.1, N_p = 6 and N_c = 3, results in the following A_{eq}.

A_{eq} = \begin{bmatrix}
-I & & & & & & G & & \\
F & -I & & & & & & G & \\
& F & -I & & & & & & G \\
& & F & -I & & & & & G \\
& & & F & -I & & & & G \\
& & & & F & -I & & & G
\end{bmatrix}

The input u(k + 2) now clearly affects the four last state predictions and not just the third.

3.4 Explicit mpc

The mpc problem can also be solved offline to create a number of regions with different control laws. Each region represents a different active set. The appropriate control signal is obtained by determining what region the process is in and, with simple calculations, deriving the control signal from the control law in that region. Since no optimization is done online, this is usually a fast mpc method. The limiting factor is the number of states in the process, as each state will represent a dimension in the region space, e.g., a process with two states will create a two dimensional space with regions, a process with three states will create a three dimensional space with regions and so on. According to Ferrau et al. [2008], the number of regions grows as 3^n with the number of states, n. This method is thereby only suitable for processes with few states. For further reading, see Bemporad et al. [2002a] and Bemporad et al. [2002b].

3.5 Parameter Scheduling

In some cases the process properties are not constant. Properties may vary for many reasons. For instance, there can be a variation over time, e.g., a part of the hardware heats up after a certain time, or, as for the magnetic bearing, the rotational speed changes the cross coupling between the axes. If this variation can be modeled and adjusted for in each sample, a more precise model can be made and better control can be obtained.

As described in Section 2.5, the model has a dependency on ω. With the dense problem form the predictor matrices must be restated each sample, through either a polynomial function f(ω) of order Np or a time consuming for-loop. The sparse problem form has the advantage that it is linear in F and thus also linear in ω.

3.6 Horizons

Choosing the horizons for the mpc is central to the function and performance desired. Some aspects and problems considered when choosing horizons are discussed in this section.

3.6.1 Prediction Horizon

As mentioned above, mpc predicts the outcome of the system for some future samples. The length of the prediction horizon is one of the important design parameters; choosing a too long horizon makes the problem large and complex, whilst a too short horizon might not capture the characteristics of the process. For stable processes the rule of thumb states that the prediction horizon should include the step response settling time of the process to capture its characteristics. For unstable systems this is of little or no use. Any perturbation from a stationary point will cause the states to grow exponentially towards infinity or minus infinity. That means it is impossible to make a step response for the process to determine the time constants.

The settling time is therefore analyzed for the closed loop system instead. The time for the system to settle is observed and used to choose the prediction horizon. The signals settled after 0.004 s. With a sampling rate of 10 kHz, a prediction horizon of approximately 40 is appropriate for the controller in this case.

3.6.2 Control Horizon

The control horizon is another important design parameter. The size of the dense problem form is directly proportional to it, and shortening the control horizon can vastly improve optimization speed. For a stable system this is of much use, and the techniques described in Sections 3.2.1 and 3.3.1 are intuitive and effective. For an unstable system this becomes a bit more problematic. If the control signals are set to be constant during the samples k + Nc, . . . , k + Np, the prediction of the unstable modes will not converge to the reference. Instead they will be centered around the reference to minimize the quadratic cost. The predictions in the sample range k, . . . , k + Nc will, as expected, converge nicely.

For a stable process a longer control horizon often leads to a more aggressive control policy, since the controller is given the opportunity to compensate for large initial control signals.

Example 3.3
Let us look at a very simple process with only one state. The pole is placed so that the process is unstable, p = 1.1.

G_c = \frac{1}{s - 1.1}

The process is then sampled with the sample time 0.1 s to create a discrete version. A simple mpc loop is formed and the system is simulated with different choices of control horizon. The system is started in x_0 \neq 0 and driven towards zero. The inputs and the states are penalized equally, Q_1 = Q_2. The input is limited to [−3, 3].
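For reference, the discretization of this scalar example can be reproduced as in the sketch below; only the pole p = 1.1 and the sample time 0.1 s are taken from the example, and the state-space realization is the obvious one.

import numpy as np
from scipy.signal import cont2discrete

# G_c(s) = 1/(s - 1.1) realized as  xdot = 1.1 x + u,  y = x.
A, B, C, D = np.array([[1.1]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]])
F, G, _, _, _ = cont2discrete((A, B, C, D), 0.1, method='zoh')
# F = exp(0.11) ≈ 1.116 > 1: the discrete pole lies outside the unit circle,
# so any prediction made in open loop diverges, as seen in Figures 3.1 and 3.2.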

The plots in Figures 3.1 and 3.2 show the behavior of the predictions with different control horizons. Note that all choices stabilize the system. In the cases where the control horizon is shorter than the prediction horizon, it can clearly be seen that the predictions run off towards infinity once the system runs in open loop. The only limitation that prevents the predictions from reaching infinity is the length of the prediction horizon versus the speed of divergence. In Figure 3.1a, the prediction and control horizons are equal and naturally all the predicted states converge. Also, thanks to the terminal cost, described in Section 3.7, the states converge into the same trajectory.

The behavior observed in Example 3.3 is caused by the balance between the cost of using too much control and the cost of deviating from the origin. Looking at the extreme case in Figure 3.2b, where Nc = 1, it can be seen that the system is still stabilized but converges much more slowly.

The stylized sketch in Figure 3.3 shows the same properties for one prediction. It is simply the optimal way to control the system according to the weights chosen.

Figure 3.1: (a) Outputs and inputs for the system with Np = 16 and Nc = 16. (b) Outputs and inputs for the system with Np = 16 and Nc = 8. The black dots mark the states and applied control signals, respectively. The colored lines mark the predictions; colors match input and output between the plots.

Figure 3.2: (a) Outputs and inputs for the system with Np = 16 and Nc = 4. (b) Outputs and inputs for the system with Np = 16 and Nc = 1. The black dots mark the states and applied control signals, respectively. The colored lines mark the predictions; colors match input and output between the plots.

Figure 3.3: Sketch of predictions and input for one sample with the control horizon set to one: (a) large input cost, (b) medium input cost, (c) small input cost. The predictions are displayed in the top axis and the inputs in the bottom axis. Both (b) and (c) are stabilizing, but the end of the prediction can drift away when large controls are allowed.

The gray areas in the plot represent the cost before squaring and weighting, but give an idea of the balance problem solved. Perhaps a constraint that forces 0 ≤ x(k + Np) ≤ x(k) could solve the problem with diverging predictions. Adding ad hoc constraints should, however, be done with extra care. Seemingly harmless constraints can alter the system in other ways than intended and have a destructive effect on the control.

3.7 Stability

The mpc does not guarantee stability in its basic form. However, there are several methods to guarantee stability. One of them is to add a terminal cost to the optimization problem. The cost is balanced to match the cost of the truncated part of the problem and substitutes the last Q1 of Q̃1. This means that the problem is virtually an infinite problem with u = 0 for the samples Np + 1, . . . , ∞. Naturally, this only guarantees stability if the open loop system is stable, since the system is running in open loop after Np.

Consider an unconstrained stable case and study the part of (3.3) that was originally truncated. A suboptimal solution, with the remaining controls equal to zero, allows the final cost to be expressed as

J(x(k + N_p)) = \sum_{j=0}^{\infty} \|x(k + N_p + j)\|^2_{Q_1} + \|u(k + N_p + j)\|^2_{Q_2}
             = \sum_{j=0}^{\infty} \|x(k + N_p + j)\|^2_{Q_1}
             = x^T(k + N_p)\,\underbrace{\left( Q_1 + F^T Q_1 F + (F^2)^T Q_1 F^2 + \ldots \right)}_{P}\, x(k + N_p).    (3.22)

The infinite matrix sum P can be simplified. Multiply P with F^T from the left and F from the right.

P = Q_1 + F^T Q_1 F + (F^2)^T Q_1 F^2 + \ldots    (3.23a)
F^T P F = F^T Q_1 F + (F^2)^T Q_1 F^2 + (F^3)^T Q_1 F^3 + \ldots    (3.23b)

Subtract (3.23a) from (3.23b).

F^T P F - P = -Q_1    (3.24)

Equation (3.24) is known as the Lyapunov equation and its solution P represents the cost of reaching the origin from x(N_p) with all controls equal to zero. Note that this is not a formal proof but a hint of what can be done to ensure stability. Further reading can be done in Maciejowski [2002].

Now consider instead an unstable process that can be stabilized with an LQG-controller. The control signals cannot be set to zero unless the system is in a noise free stationary state. Instead, let an LQG-controller stabilize the process after the prediction and calculate the loss of that control policy.

J(x(k + N_p)) = \sum_{j=0}^{\infty} \|x(k + N_p + j)\|^2_{Q_1} + \|u(k + N_p + j)\|^2_{Q_2}
             = \sum_{j=0}^{\infty} \|x(k + N_p + j)\|^2_{Q_1} + \|{-L}x(k + N_p + j)\|^2_{Q_2}
             = x^T(k + N_p)\,\underbrace{\sum_{j=0}^{\infty} \left( (F - GL)^j \right)^T \left( Q_1 + L^T Q_2 L \right) (F - GL)^j}_{P}\, x(k + N_p)    (3.25)

In the same manner as before, P is multiplied both from the left and the right.

P = \sum_{j=0}^{\infty} \left( (F - GL)^j \right)^T \left( Q_1 + L^T Q_2 L \right) (F - GL)^j    (3.26a)
(F - GL)^T P (F - GL) = \sum_{j=0}^{\infty} \left( (F - GL)^{j+1} \right)^T \left( Q_1 + L^T Q_2 L \right) (F - GL)^{j+1}    (3.26b)

Subtract (3.26a) from (3.26b).

(F - GL)^T P (F - GL) - P = -Q_1 - L^T Q_2 L    (3.27)

This is also a Lyapunov equation; as with (3.24), the solution P describes the cost of reaching the origin from x(N_p). However, here it only holds under the assumption that the LQG-controller does not exceed the allowed input limits. There are other strategies for guaranteeing stability of model predictive controllers; see Maciejowski [2002] for further reading.

The terminal cost P does not only guarantee stability, but also pushes the tail of the predictions closer to the reference.
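A sketch of how the terminal cost P can be computed numerically from (3.24) and (3.27) is given below. The system and weights are placeholders, and the stabilizing gain L is here taken from a discrete-time LQ design, which is one possible choice consistent with the LQG argument above.

import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

F = np.array([[1.1, 0.1], [0.0, 0.9]])     # placeholder unstable system
G = np.array([[0.0], [0.1]])
Q1, Q2 = np.eye(2), np.array([[0.1]])

# Stabilizing state feedback u = -L x from a discrete LQ design.
X = solve_discrete_are(F, G, Q1, Q2)
L = np.linalg.solve(Q2 + G.T @ X @ G, G.T @ X @ F)
Fcl = F - G @ L

# (3.27): (F - GL)^T P (F - GL) - P = -(Q1 + L^T Q2 L)
P = solve_discrete_lyapunov(Fcl.T, Q1 + L.T @ Q2 @ L)
# For a stable F, (3.24) applies directly: P = solve_discrete_lyapunov(F.T, Q1).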

3.8 Optimization

The actual solving of the optimization problem is not covered in this thesis, but here is a short note about some of the techniques that can be used.

In the field of optimization it is a common problem to find the global minimum and not get stuck in a local minimum. Convex problems have the property that there is only one minimum; a local minimum is also the global one. Optimization problems have to fulfill two criteria to be convex: they must have convex constraints and a convex loss function; for further reading see Maciejowski [2002]. This is the case here: all the constraints are linear and the Hessian is positive definite, meaning the problem is convex.

Many strategies use what is called a warm start. A warm start is done by initializing the problem in an active set that is known to be close to the true active set. Warm start is not to be confused with initial guessing. The initial guess provides information on where the next solution is likely to be found, whereas the warm start provides information on what set of constraints is likely to be active at the optimum, together with an initial guess.

3.8.1 Methods for Solving QP-problems

The QP solving strategies central to the thesis are briefly described here. The two forms of the mpc are implemented with two different solver structures. The dense form is suited for qpOASES and the sparse form for IPopt.

Interior Point

The idea with interior point methods is to make it costly to be close to a constraint. Interior point methods are also referred to as barrier methods. When the minimum approaches a constraint, the cost is gradually decreased, by scaling the problem and the set, so that the solution can end up with an active constraint. The interior point method stays inside the allowed area until the solution is found, in contrast to active set methods such as the simplex method, discussed below, which quickly find a boundary to follow to the minimum.

Active Set

Some solvers use knowledge about what set of constraints is active. Once the active set is defined, the optimization problem is reduced to a system of linear equations which is easy and fast to solve. These solvers are good at figuring out what the active set is, but can also be warm started, i.e., a qualified guess of the active set is provided at the start of the optimization. The solver can then figure out the correct active set faster. In the mpc case QPs are solved sequentially, one at each sample. Most of the time the problem does not change very much from one sample to another. Providing the active set from the last sample is a good guess of what the next active set will be. This results in fewer iterations for the solver to figure out the correct active set.

3.8.2 Quadratic Programming Software

This section lists the solvers that have been investigated in this thesis.

qpOASES

qpOASES uses an active set method. By evaluating the derivatives of the KKT variables, the dual problem, it figures out whether there has been a change of region and what constraints should be included in or excluded from the current active set. qpOASES is suited for the dense problem form. This is common for active set solvers.

What differentiates qpOASES from other active set solvers is the way qpOASES is designed to solve a sequence of problems. mpc in its dense form is parametric in the current state x0, i.e., the only change between iterations is the current state. Since the state usually does not change a lot between samples, two adjacent samples will yield almost the same QP-problem, i.e., almost the same solution, dual solution and active set. qpOASES saves information between the samples to start the search for a new solution in an already feasible solution. For further reading, see Ferrau et al. [2008].

ACADO

acado is a framework for optimization. It provides an easy way of defining complete problems, without the user having to know much about optimization. acado will interpret the problem and express it in a way interpretable to the appropriate solver. For a QP-problem, acado will solve it with qpOASES. However, the interpretation of the initial problem creates an overhead that will cause the optimization to take extra time. Since the allowed optimization time is very low for this application, it is more suitable to express the problem directly for qpOASES. Doing so cuts the overhead and gains time, making it possible to get closer to the optimization times needed.

IPopt

IPopt is an interior point solver distributed by coin-or. IPopt is mainly a sparse solver and is good at handling large problems. On the downside, IPopt is not a dedicated QP-solver but a general non-linear solver, and performance might be unsatisfactory. IPopt is suited for the sparse problem form. For further reading, see Wächter and Biegler [2006].

quadprog

quadprog is a QP-solver included in Matlab®; it can be set up to use most common optimization solving methods. The main advantage is that it is already there; it is easily accessed and does not take any additional time to set up when using an experimental setup. On the other hand, it is rumored to be slow, a rumor that could not be corroborated during this thesis. quadprog can be set up to use several different methods. Here it has been used with the dense form and an active set method.

4 Controller

Using the model from Chapter 2 and the control strategy from Chapter 3, a closed loop design is created and tuned. Experimental simulations were made in the Matlab® Simulink® environment. The Simulink® overview can be seen in Figure 4.5.

4.1 Observer

The measured signals consist of the displacement of the rotor. For model predictive control it is necessary to know, or at least have estimates of, all states. The output of the system model does not provide all states; only the sensor states affect the output. The states are therefore estimated with a state observer. The system is observable if the observability matrix (4.1) has full rank.

\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{N-1} \end{bmatrix}    (4.1)

This is the case for the process. Since the process varies with ω, which is time variant, it is suitable to implement the observer as a time varying Kalman filter that includes the ω-variation as well. See Kalman [1960] for more information about Kalman filters. Remember the affine function that alters F from (2.17).

This is done as follows.

Set F:
F(\omega) = F_0 + \Delta F\,\omega    (4.2a)

Predict:
\hat{x}(k|k-1) = F(\omega)\hat{x}(k-1|k-1) + G u(k)    (4.2b)
P(k|k-1) = F(\omega) P(k-1|k-1) F^T(\omega) + R_1    (4.2c)

Update:
K(k) = P(k|k-1) H^T \left( H P(k|k-1) H^T + R_2 \right)^{-1}    (4.2d)
\hat{x}(k|k) = \hat{x}(k|k-1) + K(k)\left( y(k) - H\hat{x}(k|k-1) \right)    (4.2e)
P(k|k) = \left( I - K(k)H \right) P(k|k-1).    (4.2f)

The noise covariances are difficult to estimate and were instead tuned during simulations. The covariances R1 were set proportional to the magnitudes of the states and the covariances for the output were set proportional to the magnitude of the measurements. R1 and R2 are diagonal matrices with dimensions matching the number of states and the number of outputs, respectively.
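One iteration of (4.2) can be sketched as follows; every matrix and dimension below is a placeholder, with F0 and ∆F taken to follow the parametrization (2.17).

import numpy as np

# Placeholder dimensions and matrices.
n, m, p = 4, 2, 2
F0 = np.eye(n)
dF = 1e-3 * np.ones((n, n))
G = 1e-2 * np.ones((n, m))
H = np.hstack([np.eye(p), np.zeros((p, n - p))])
R1, R2 = 1e-6 * np.eye(n), 1e-4 * np.eye(p)

def kalman_step(x_hat, P, u, y, omega):
    F = F0 + dF * omega                                        # (4.2a)
    x_pred = F @ x_hat + G @ u                                 # (4.2b)
    P_pred = F @ P @ F.T + R1                                  # (4.2c)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R2)    # (4.2d)
    x_new = x_pred + K @ (y - H @ x_pred)                      # (4.2e)
    P_new = (np.eye(n) - K @ H) @ P_pred                       # (4.2f)
    return x_new, P_new

x_hat, P = np.zeros(n), np.eye(n)
x_hat, P = kalman_step(x_hat, P, np.zeros(m), np.zeros(p), omega=100.0)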

The observer could also be equipped with a disturbance model that would estimate the unbalance forces Funb. These could then be used to update the states and improve the estimates. Possibly the disturbances from load variations could also be estimated for further improvements.

As an alternative, a feed forward approach can be used. Suppose that the disturbance is measured or estimated outside of the observer. Then modify the observer to be able to include this input, i.e., expand the input matrix G. For the iterative loop (4.2) this would mean that the prediction of x in (4.2b) is changed according to (4.3).

\hat{x}(k|k-1) = F\hat{x}(k-1|k-1) + \begin{bmatrix} G & G_{dist} \end{bmatrix} \begin{bmatrix} u(k) \\ F_{unb} \end{bmatrix}    (4.3)

This modification leads to more accurate estimation, since the observer does not have to wait to observe disturbances only from the output. In the simulations made in Section 4.1.1, the real disturbance signal is used. This of course gives a better result than if the disturbance were measured, but it shows that the performance gain of handling the process disturbances in the observer is not negligible.

4.1.1 Observer Evaluation

The estimates calculated by the observer were validated by simulating the system. Four configurations were considered, with or without parameter scheduling and with or without a disturbance model. The rotational speed was ramped up during five seconds. Four states are displayed; the remaining states display similar behavior and are excluded from the figures for better overview. The estimated

Figure 4.1: Observer validation for states x5 to x8. Parameter scheduling not active and no disturbance model used. True state in blue, estimated state in red and error in black.

Figure 4.2: Observer validation for states x5 to x8. Parameter scheduling active but no disturbance model used. True state in blue, estimated state in red and error in black.
