(1)

Introduction to Computer Control Systems

Lecture 4: Linearization + Basis of PID controller

Dave Zachariah

Div. Systems and Control, Dept. Information Technology, Uppsala University

December 9, 2014

(2)

Today’s lecture: What and why?

Linearization

Why: Approximating real nonlinear systems with linear models.

PID controller

Why: Widely used, simple yet sufficiently powerful in many systems.


(4)

Linearization: Steady-state and deviation variables

[Block diagram: input u(t) → system G → output y(t)]

Most real systems are nonlinear and are often modeled as

    ẋ(t) = f(x(t), u(t))
    y(t) = h(x(t), u(t)).

Linearization: Pick a point (x0, u0) of f(x, u). Use a linear model to describe the system behaviour around (x0, u0):

    ẋ(t) = f(x(t), u(t)),  y(t) = h(x(t), u(t))
    ≈
    dx̃(t)/dt = A x̃(t) + B ũ(t),  ỹ(t) = C x̃(t) + D ũ(t)

When the system is at rest, ẋ = f(x0, u0) = 0, and we call (x0, u0) a stationary point. Note: there could be more than one!


(7)

Linearization: Steady-state and deviation variables

Define the deviation variables

    x̃(t) = x(t) − x0
    ũ(t) = u(t) − u0

Then linearize by a first-order Taylor expansion around (x0, u0):

    dx̃/dt = ẋ − 0
           = f(x0 + x̃, u0 + ũ)
           ≈ f(x0, u0) + ∇x f(x0, u0) x̃ + ∇u f(x0, u0) ũ     (with f(x0, u0) = 0)
           = A x̃ + B ũ,

where A and B are given by simple derivatives,

    aij = ∂fi(x, u)/∂xj,   bi = ∂fi(x, u)/∂u,

evaluated at x = x0 and u = u0.

(8)

Linearization: Steady-state and deviation variables

Similarly, y0 = h(x0, u0) for the stationary point. Define

    ỹ(t) = y(t) − y0

Then linearize by a first-order Taylor expansion around (x0, u0):

    ỹ = h(x0 + x̃, u0 + ũ) − y0
      ≈ h(x0, u0) + ∇x h(x0, u0) x̃ + ∇u h(x0, u0) ũ − y0
      = ∇x h(x0, u0) x̃ + ∇u h(x0, u0) ũ + 0
      = C x̃ + D ũ,

where C and D are given by the derivatives

    cj = ∂h(x, u)/∂xj,   D = ∂h(x, u)/∂u,

evaluated at x = x0 and u = u0.
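In practice the Jacobians can also be formed numerically. Below is a minimal Python/NumPy sketch (not part of the lecture) that approximates A, B, C and D by forward finite differences around a stationary point; the helper names jacobian and linearize are purely illustrative.

    import numpy as np

    def jacobian(fun, x0, u0, wrt="x", eps=1e-6):
        # Forward-difference Jacobian of fun(x, u) at (x0, u0),
        # with respect to x ("x") or u ("u").
        f0 = np.atleast_1d(fun(x0, u0)).astype(float)
        z0 = np.atleast_1d(np.asarray(x0 if wrt == "x" else u0, dtype=float))
        J = np.zeros((f0.size, z0.size))
        for j in range(z0.size):
            z = z0.copy()
            z[j] += eps
            fz = fun(z, u0) if wrt == "x" else fun(x0, z)
            J[:, j] = (np.atleast_1d(fz).astype(float) - f0) / eps
        return J

    def linearize(f, h, x0, u0, eps=1e-6):
        # State-space matrices of the linearized model around (x0, u0).
        A = jacobian(f, x0, u0, "x", eps)
        B = jacobian(f, x0, u0, "u", eps)
        C = jacobian(h, x0, u0, "x", eps)
        D = jacobian(h, x0, u0, "u", eps)
        return A, B, C, D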

(9)

Linearization: Example

Nonlinear system:

    ẋ1 = −K√x1 + (1/c)·u
    ẋ2 = x1 − (1/3)·x2

Find the equilibrium point: set ẋ = 0,

    0 = −K√x1 + (1/c)·u
    0 = x1 − (1/3)·x2

With a constant input u ≡ u0 we can solve for x1 and x2:

    x1,0 = (u0/(Kc))²,   x2,0 = 3·(u0/(Kc))²   ⇒   x0 = ( (u0/(Kc))², 3·(u0/(Kc))² )
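As a quick check, the stationary point can also be found numerically. A sketch assuming arbitrary placeholder values K = 2, c = 0.5 and u0 = 1 (chosen so that u0/(Kc) = 1):

    import numpy as np
    from scipy.optimize import fsolve

    K, c, u0 = 2.0, 0.5, 1.0            # arbitrary example values

    def f(x, u):
        # f(x, u) for the example system above; u may be a scalar
        # or a length-1 array (convenient for the jacobian sketch).
        u = float(np.atleast_1d(u)[0])
        x1, x2 = x
        return np.array([-K * np.sqrt(x1) + u / c,
                         x1 - x2 / 3.0])

    x_eq = fsolve(lambda x: f(x, u0), [1.0, 1.0])   # numerical equilibrium
    x1_0 = (u0 / (K * c)) ** 2                      # closed-form x1,0
    print(x_eq, [x1_0, 3 * x1_0])                   # both ≈ [1.0, 3.0]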




(12)

Linearization: Example, cont’d

Nonlinear system (written compactly as ẋ = f(x, u)):

    ẋ1 = −K√x1 + (1/c)·u
    ẋ2 = x1 − (1/3)·x2

To linearize around (u0, x0): define

    ũ = u − u0,   x̃ = x − x0

and evaluate the derivatives

    a11 = ∂f1/∂x1   a12 = ∂f1/∂x2   b1 = ∂f1/∂u
    a21 = ∂f2/∂x1   a22 = ∂f2/∂x2   b2 = ∂f2/∂u

at (u0, x0), so that

    dx̃1/dt = −K/(2√x1,0) · x̃1 + (1/c)·ũ
    dx̃2/dt = x̃1 − (1/3)·x̃2

i.e. dx̃/dt = A x̃ + B ũ, with

    A = [ −K/(2√x1,0)     0   ]        B = [ 1/c ]
        [      1        −1/3  ]            [  0  ]
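Continuing the numerical sketch (same placeholder values, and the jacobian helper and f from the earlier snippets), the analytic A and B above can be compared against finite differences at the stationary point:

    # Analytic Jacobians evaluated at the stationary point x_eq
    A = np.array([[-K / (2.0 * np.sqrt(x_eq[0])), 0.0],
                  [1.0, -1.0 / 3.0]])
    B = np.array([[1.0 / c],
                  [0.0]])

    # Finite-difference check with the jacobian() helper sketched earlier
    A_num = jacobian(f, x_eq, [u0], "x")
    B_num = jacobian(f, x_eq, [u0], "u")
    print(np.allclose(A, A_num, atol=1e-4), np.allclose(B, B_num, atol=1e-4))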


(15)

Feedback approach: Use control error e(t)

[Block diagram: r → (+) → e → Controller → u → G → y, with y fed back to the summing junction]

Control error:

    e(t) = r(t) − y(t)

Q: How should the controller determine the input u(t) based on the error e(t)?


(17)

Feedback approach: PID controller

[Block diagram: r → (+) → e → Controller → u → G → y, with y fed back to the summing junction]

A popular approach: set the input u(t) based on

    the present value of e(t):  u(t) = K e(t)              (Proportional)
    the history of e(t):        u(t) = K ∫₀ᵗ e(τ) dτ       (Integral)
    the change of e(t):         u(t) = K ė(t)              (Derivative)

where the K are tuning constants.


(20)

PID controller: time-domain

[Block diagram: r → (+) → e → H → u → G → y, with y fed back to the summing junction]

A popular approach: the PID controller

    u(t) = Kp e(t) + Ki ∫₀ᵗ e(τ) dτ + Kd ė(t)

where the three terms are the proportional (P), integral (I) and derivative (D) parts.
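As an illustration (not from the slides), a minimal discrete-time version of this control law with sampling period Ts, using a rectangular approximation of the integral and a backward difference for the derivative; the class name and gains are placeholders:

    class PID:
        # u = Kp*e + Ki*(integral of e) + Kd*(derivative of e), sampled with period Ts.
        def __init__(self, Kp, Ki, Kd, Ts):
            self.Kp, self.Ki, self.Kd, self.Ts = Kp, Ki, Kd, Ts
            self.e_int = 0.0       # running approximation of the integral term
            self.e_prev = 0.0      # previous error, for the difference quotient

        def update(self, r, y):
            e = r - y                                  # control error e = r - y
            self.e_int += self.Ts * e                  # integral: rectangle rule
            e_dot = (e - self.e_prev) / self.Ts        # derivative: backward difference
            self.e_prev = e
            return self.Kp * e + self.Ki * self.e_int + self.Kd * e_dot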

(21)

PID controller: Laplace-domain

[Block diagram: r → (+) → e → H → u → G → y, with y fed back to the summing junction]

A popular approach: the PID controller

    U(s) = Kp E(s) + Ki (1/s) E(s) + Kd s E(s)
         = ( Kp + Ki/s + Kd s ) E(s),

where the factor in parentheses is the controller transfer function H(s) and E(s) = R(s) − Y(s).

(22)

PID controller: Laplace-domain, cont’d

[Block diagram: r → (+) → e → H → u → G → y, with y fed back to the summing junction]

Final value theorem: If e(t) has a final value, then it can be computed by

    lim (t→∞) e(t) = lim (s→0) s E(s)

Static error analysis: Study the limit of e(t) when r(t) is a constant.

On the board: Let r(t) = C for t ≥ 0. Then e(t) goes to 0 when H(s)G(s) contains an integrator 1/s.
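The board calculation can be reproduced symbolically. A sketch under assumed choices: an arbitrary first-order plant G(s) = 1/(s + 1) and an integral controller H(s) = Ki/s, so that the loop H(s)G(s) contains 1/s; with r(t) = C one gets E(s) = R(s)/(1 + H(s)G(s)), and the final value theorem gives zero static error.

    import sympy as sp

    s, C, Ki = sp.symbols('s C K_i', positive=True)
    G = 1 / (s + 1)            # arbitrary example plant
    H = Ki / s                 # controller containing an integrator 1/s
    R = C / s                  # Laplace transform of the step reference r(t) = C

    E = R / (1 + H * G)        # from E = R - Y and Y = G*H*E
    e_inf = sp.limit(s * E, s, 0)   # final value theorem (assuming a stable loop)
    print(sp.simplify(e_inf))       # -> 0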


(25)

PID controller: Laplace-domain, cont’d

[Block diagram: r → (+) → e → H → u → G → y, with y fed back; the closed loop from r to y is Gtot]

Total system from reference r(t) to output y(t):

    Y(s) = Gtotal(s) R(s)

On the board: Derive Gtotal(s) = H(s)G(s) / (1 + H(s)G(s))
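A small symbolic sketch of the board derivation, treating H and G as plain symbols: eliminating E = R − Y from Y = G·H·E gives Gtotal = HG/(1 + HG).

    import sympy as sp

    G, H, R, Y = sp.symbols('G H R Y')

    # Closed loop: e = r - y, u = H*e, y = G*u  =>  Y = G*H*(R - Y)
    Y_sol = sp.solve(sp.Eq(Y, G * H * (R - Y)), Y)[0]
    print(sp.simplify(Y_sol / R))    # -> G*H/(G*H + 1)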


(27)

PID controller: practical concerns

PID controllers

1. typically have poor performance when there is a significant time delay in the loop; they work nicely for relatively small time delays.

2. with a derivative term amplify noise. Add a low-pass filter to the term: Kd s ≈ Kd s / (1 + ε Kd s), for a small ε (a discrete-time sketch follows this list).

3. can differ depending on the manufacturer. It is important to know the configuration of the PID algorithm before tuning.

4. are often implemented in discrete time but tuned using a continuous-time formulation.
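For concern 2, a sketch (not from the slides) of the filtered derivative term in discrete time, obtained by a backward-Euler discretization of Kd·s/(1 + Tf·s) with Tf = ε·Kd; the class name, ε and Ts are placeholder choices:

    class FilteredDerivative:
        # Discrete approximation of Kd*s / (1 + Tf*s), Tf = eps*Kd (backward Euler).
        def __init__(self, Kd, eps, Ts):
            self.Kd, self.Tf, self.Ts = Kd, eps * Kd, Ts
            self.d_prev = 0.0
            self.e_prev = 0.0

        def update(self, e):
            # Tf*(d_k - d_{k-1})/Ts + d_k = Kd*(e_k - e_{k-1})/Ts
            d = (self.Tf * self.d_prev + self.Kd * (e - self.e_prev)) / (self.Tf + self.Ts)
            self.d_prev, self.e_prev = d, e
            return d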

(28)

Today’s lecture: What and why?

Linearization

Why: Approximating real nonlinear systems with linear models.

PID controller

Why: Widely used, simple yet sufficiently powerful in many systems.
