
Kappa Control with Online Analyzer

Using Samples from

the Digester’s Mid-phase

Master's thesis in Automatic Control performed at Linköping Institute of Technology

by Peter Gäärd

Reg nr: LiTH-ISY-EX-3469-2004, Linköping 2004


Kappa Control with Online Analyzer

Using Samples from

the Digester’s Mid-phase

Master's Thesis in Automatic Control at Linköping Institute of Technology

by Peter Gäärd

Reg nr: LiTH-ISY-EX-3469-2004, Linköping 2004

Supervisors: Martin Enqvist, Claes Lysén

Examiner: Svante Gunnarsson

Linköping, 18th February 2004


Division, Department: Institutionen för systemteknik, 581 83 Linköping
Date: 2003-02-13
Language: English
Report category: Examensarbete (Master's thesis)
ISRN: LITH-ISY-EX-3469-2004
URL for electronic version: http://www.ep.liu.se/exjobb/isy/2004/3469/

Title: Kappa Control with Online Analyzer Using Samples from the Digester's Mid-phase

Author: Peter Gäärd

Abstract

In the pulp industry, digesters are used to dissolve lignin in wood chips. The concentration of lignin is measured and is called the Kappa number. In this thesis, the question of whether an online Kappa sensor, taking samples from the mid-phase of the digester, is useful or not is analyzed. For the samples to be useful, there has to be a relationship between the measured Kappa at the mid-phase and the measured Kappa in the blowpipe at the bottom of the digester. An ARX model of the lower part of the digester has been estimated. Despite a lot of noise, it seems that it might be possible to use the mid-phase samples and this model to predict the blowpipe flow Kappa signal. It is concluded that the mid-phase samples should be further improved to be more useful. The mid-phase samples have also been used in another ARX model, this time to LP-filter these values without time loss.

Another important issue has been to examine whether the existing controller is good or not. In order to be able to compare it with other controllers, a simulator has been created in MATLAB/Simulink. Test results from this simulator show that the existing controller's use of the mid-phase Kappa samples improves its performance. For a simplified digester model, the existing controller has also been compared with an MPC controller. This test shows that the MPC controller is significantly better. Hence, the conclusion of this thesis is that it might be interesting to study MPC further using a more advanced model.

Keywords: Kappa Control, Digester Control, Mid-phase Measurement, Online Kappa Analyzer, System Identification, Model Predictive Control


Abstract

In the pulp industry, digesters are used to dissolve lignin in wood chips. The concentration of lignin is measured and is called the Kappa number. In this thesis, the question of whether an online Kappa sensor, taking samples from the mid-phase of the digester, is useful or not is analyzed. For the samples to be useful, there has to be a relationship between the measured Kappa at the mid-phase and the measured Kappa in the blowpipe at the bottom of the digester. An ARX model of the lower part of the digester has been estimated. Despite a lot of noise, it seems that it might be possible to use the mid-phase samples and this model to predict the blowpipe flow Kappa signal. It is concluded that the mid-phase samples should be further improved to be more useful. The mid-phase samples have also been used in another ARX model, this time to LP-filter these values without time loss.

Another important issue has been to examine whether the existing controller is good or not. In order to be able to compare it with other controllers, a simulator has been created in MATLAB/Simulink. Test results from this simulator show that the existing controller's use of the mid-phase Kappa samples improves its performance. For a simplified digester model, the existing controller has also been compared with an MPC controller. This test shows that the MPC controller is significantly better. Hence, the conclusion of this thesis is that it might be interesting to study MPC further using a more advanced model.

Keywords: Kappa Control, Digester Control, Mid-phase Measurement, Online Kappa Analyzer, System Identification, Model Predictive Control


Acknowledgments

The work for this thesis has been carried out at Kvaerner Pulping AB in Karlstad. This has been an inspiring environment, both the facility and the people at the ELI department. My supervisor at Kvaerner, Claes Lysén, has been very helpful in helping me understand the pulp process in general, and the digester process in particular. By allowing me to pursue many different approaches, he has contributed to making this thesis very interesting and fun to work on. Throughout this thesis, it has been very helpful to discuss (and ask questions about) different problems and approaches with my supervisor Martin Enqvist at the Control and Communication group at Linköping Institute of Technology. My examiner, Professor Svante Gunnarsson, also at the Control and Communication group, has likewise been very helpful, especially during the first phase of the thesis. Finally, the ground support in Karlstad from my parents has made the 20 weeks of commuting feasible. Thank you all very much!


Notation

The most commonly used symbols, abbreviations and terminology in this thesis are introduced here.

Symbols

ŷ(t|θ) Predictor
Ttop Steam temperature in the top of the digester
TC8 Temperature in the C8 area (lower part of the digester)
p Production

Fblow Blowpipe flow

ar Residual alkali concentration

ac Alkali concentration

κmid Measured Kappa value at the mid-section of the digester

κblow Measured Kappa value at the bottom of the digester

x̄ Mean of x

Abbreviations

ARX Autoregressive with external input

ARMAX Autoregressive moving average with external input

BJ Box-Jenkins

LP-filter Low pass filter

LQ Linear quadratic control

MPC Model predictive control

OE Output error

QP Quadratic program

SITB System identification toolbox

Terminology

Lignin : A wood ingredient, binding the cellulose together

Kappa : A number that is proportional to the percentage of lignin in a pulp sample

White Liquor : [Na2S, NaOH, H2O]

Black Liquor : White liquor, lignin and other by-products


Contents

1 Introduction
  1.1 Background
  1.2 Kvaerner Pulping
  1.3 Objectives
  1.4 Limitations
  1.5 Thesis Outline

2 Preliminaries
  2.1 Correlation Measurements
  2.2 System Identification
    2.2.1 ARMAX
    2.2.2 ARX
    2.2.3 Output Error
  2.3 Prediction
  2.4 Feedforward Control
  2.5 Smith Control
  2.6 Model Predictive Control
  2.7 Observer

3 Pulp Preliminaries
  3.1 The Pulp Process
  3.2 The Continuous Digester
  3.3 Kappa
    3.3.1 Kappa Control
    3.3.2 Kappa Measurements
    3.3.3 Kappa Disturbance

4 System Identification of the Digester's Lower Part
  4.1 Selection and Preprocessing of Data
  4.2 Correlation Comparisons
  4.3 ARX Modeling
    4.3.1 ARX Prediction
    4.3.2 Multiple Inputs
    4.3.3 ARX Simulation
    4.3.4 Resampling

5 Signal Processing of the Mid-phase Signal
  5.1 Filtering
  5.2 Prediction of κmid
    5.2.1 Choice of Inputs
    5.2.2 Estimating Parameter Values
    5.2.3 Prediction Horizon
    5.2.4 Implementation Issues

6 Simulation Models
  6.1 Model 1
  6.2 Model 2
  6.3 Adding Noise

7 Evaluation of Different Control Strategies
  7.1 The Existing Controller
    7.1.1 Control Strategy
    7.1.2 Step Response
  7.2 The Smith Controller
  7.3 Model Predictive Control
    7.3.1 Observability
    7.3.2 Creating an Observer
    7.3.3 Control Structure
    7.3.4 Step Response

8 Conclusions
  8.1 Results
    8.1.1 Using Kappa Values from the Digester's Mid-phase
    8.1.2 Different Control Strategies' Effect on the Kappa Value
  8.2 Future Work
    8.2.1 κmid Measurement Improvements
    8.2.2 Further Evaluation of MPC

Bibliography

A Kappa Series from the Plant

B m-files
  B.1 correlation.m
  B.2 corrTester.m

C Polynomials and Parameters
  C.1 ARX Polynomials
    C.1.1 Chapter 4 Polynomials
    C.1.2 Chapter 5 Polynomials
  C.2 Model Parameters
    C.2.1 Model 1
    C.2.2 Model 2

D m-files for Used Controllers
  D.1 m-files for the Existing Controller
    D.1.1 nonlinreg.m
    D.1.2 changeT
  D.2 m-files for Model Predictive Control
    D.2.1 makemodel.m
    D.2.2 makedigestermodel.m
    D.2.3 initMPCdigester.m
    D.2.4 create_HS.m
    D.2.5 blockrepeate.m
    D.2.6 observer.m
    D.2.7 MPC_Controller.m
    D.2.8 solveQPdigeser.m


Chapter 1

Introduction

A very important step when producing pulp is the removal of lignin. There are several ways of doing this; however, they all include the use of chemicals. Since chemicals are expensive and polluting, it is natural to control the process carefully. The amount of lignin in pulp is measured with the Kappa number.

1.1 Background

A few years ago, new online analyzers were developed, making it possible to evaluate the Kappa number within five minutes. This must be viewed as a very small time delay compared with the other time constants of the pulp process. For example, the average cook time in a continuous digester is about five to six hours. Recently, a Finnish plant (Metsäbotnia Oy, Joutseno Mill) bought a new digester from Kvaerner Pulping and later installed the analyzer, which takes samples from the middle of the digester, as well as from the blowpipe at the bottom, approximately every twenty minutes. The big difference between these samples is that the last measurement is made on homogeneous pulp, whereas the earlier samples are taken on chips in a small region of the vessel.

If the mid-phase samples carry significant information about the bottom measurements, they will enable the implementation of a considerably faster control loop. Clearly, it is in the interest of Kvaerner Pulping to evaluate whether this gives benefits in terms of better Kappa control for the customer.

1.2 Kvaerner Pulping

Kvaerner Pulping is one of the big competitors in the pulp industry worldwide. The headquarters is located in Karlstad, with approximately 350 employees. In total, Kvaerner Pulping has about 600 employees and a turnover of SEK 1.4 billion. It delivers solutions all the way from the chips being fed onto the conveyor to the


point where the pulp is ready to be processed into paper. Among other things, they have developed a patented continuous digester.

1.3 Objectives

The main objective of this thesis is to determine whether the new Kappa measurements are reliable enough to be used for faster feedback control at the specific plant. If it turns out that the system can be made significantly faster, the question is what the new feedback control structure should look like. Since there already is a control loop that uses the new data, its effect needs to be analyzed and possible improvements suggested.

1.4 Limitations

The data from the plant have been stored as ten-minute samples. This is not the actual sample time for the Kappa measurement but, since the exact values are unretrievable, the data are used as if they were the true samples.

In simulations, all variables except the control signal have been kept constant for the results shown in this thesis. However, unpresented tests have been done to verify that the simulators work for some changes on the other variables.

Only controllable digester dynamics, for a potential two-to-three-hour control loop and slower, are analyzed; faster behavior is not considered. Long transients that take up to 24 hours to fade out are disregarded in the simulation models. For all controllers, temperature is used as the control signal instead of the H-factor. The parametric models and the model used for Model Predictive Control use only linear relationships.

1.5 Thesis Outline

Several notions from control theory and system identification will be used throughout this thesis. These notions, and some relevant background theory, are described in Chapter 2. This chapter can be viewed as extra reading and is not necessary for a general understanding of the thesis. In Chapter 3, an overview of the pulp process is given and some terminology is introduced. An analysis of the new set of data is conducted in Chapters 4 and 5. This includes a discussion of whether the data are useful or not and how they should be processed if they are to be used. In Chapter 6, a simulation model of the continuous digester is introduced and explained. This simulation model is used to test and compare the different control strategies described in Chapter 7. Finally, the results and future work are discussed in Chapter 8.


Chapter 2

Preliminaries

The results and ideas of this thesis require an understanding of some background theory in different areas such as signal theory, modeling, prediction and control theory. These areas are briefly introduced in this chapter.

2.1 Correlation Measurements

One way to show that there is a relationship between two stochastic processes u and y is to calculate the correlation function between them. Correlation is the normalized covariance and will therefore always range between −1 and 1. The definition of the covariance between y(t1) and u(t2), for fixed times t1 and t2, is (see, for example, [7])

  Cov(y(t1), u(t2)) = E[(y(t1) − ȳ)(u(t2) − ū)]    (2.1)

where E denotes the expected value. For a simple scenario where there is a linear relationship between, say, y(1) and u(10), and where all other elements are uncorrelated, the correlation is at its maximum when these are compared. More specifically, this means that

  Corr(y(1), u(10)) = Cov(y(1), u(10)) / (σ_y σ_u) = 1,    Corr(y(1), u(x)) = 0    (2.2)

when x ≠ 10.

The output of a time-invariant linear system can always be written as a sum of the different input contributions with corresponding coefficients

  y(t) = Σ_{k=Td}^{∞} g_k u(t − k)    (2.3)

where Td is the deadtime of the process and the g_k are scalars. This is useful for the next definition.


The cross-covariance function R_yu(τ) of two processes u and y with zero means is defined as

  R_yu(τ) = E[y(t) u(t − τ)]    (2.4)

In signal theory, it is common to use R_yu(τ), defined in (2.4), as a way to determine how signals are related (see, for example, [7]). If the linear relationship (2.3) holds, we get

  R_yu(τ) = E[y(t) u(t − τ)] = Σ_{k=Td}^{∞} g_k E[u(t − k) u(t − τ)]    (2.5)

We see that R_yu(τ), for any given τ, is obtained by comparing u(t − τ) with all u(t − k), where k goes from Td to ∞. Each contribution is weighted by the coefficient g_k, which thus decides how much impact each u(t − k) has.

When used in practice, the parameters g_k have to be estimated. In particular, identification of the true Td may not be trivial, especially if the effect of the input u(t) is spread over many outputs (y(t + Td), ..., y(t + Td + m), where m is the number of outputs it is spread over).
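As a concrete illustration of how the cross-covariance reveals the deadtime, the following Python sketch (a stand-in for the MATLAB m-files used in the thesis; the impulse response g_k and all numbers are invented for the example) simulates (2.3) and estimates R_yu(τ) from data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear system (2.3) with deadtime Td = 3:
# y(t) = sum_k g_k u(t - k), with g_k = 0 for k < Td
g = np.array([0.0, 0.0, 0.0, 1.0, 0.6, 0.3])
u = rng.standard_normal(5000)
y = np.convolve(u, g)[: len(u)] + 0.1 * rng.standard_normal(len(u))

# Sample cross-covariance R_yu(tau) = E[y(t) u(t - tau)], tau = 0..10
taus = np.arange(11)
R = np.array([np.mean(y[t:] * u[: len(u) - t]) for t in taus])

# For a white-noise input, R_yu(tau) is proportional to g_tau, so the
# first large value marks the deadtime
Td_hat = int(np.argmax(np.abs(R)))  # expected: 3
```

Normalizing R by σ_y σ_u gives the correlation function used in (2.2); the argmax heuristic is only reliable in this idealized white-input setting.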

2.2 System Identification

Since all systems are in some sense built up from differential equations, one approach to identifying an unknown system is to assume that it has certain characteristics, e.g., the order of the differential equation. The discrete-time equivalent of such an equation is called a difference equation, and it will be used for the model structures described in the following subsections. For more theory, [4] is recommended. The equation for a system with one input and one output can be written as

  y(t) + a_1 y(t−1) + ... + a_na y(t−na) = b_0 u(t) + ... + b_nb u(t−nb)    (2.6)

or as

  A(q) y(t) = B(q) u(t)    (2.7)

where

  A(q) = 1 + a_1 q^{−1} + ... + a_na q^{−na}
  B(q) = b_0 + b_1 q^{−1} + ... + b_nb q^{−nb}

and q^{−k} is a shift operator k steps backwards in time. Usually, noise needs to be added to this to reflect reality. This gives

  y(t) = η(t) + w(t)    (2.8)

where w(t) represents the noise and the noise-free part can be written as

  η(t) = G(q, θ) u(t)    (2.9)


If G(q, θ) is a rational function of q, it can be written as

  G(q, θ) = B(q) / F(q)    (2.10)

If we use (2.9) and (2.10), we get

  η(t) + f_1 η(t−1) + ... + f_nf η(t−nf) = b_1 u(t−nk) + ... + b_nb u(t−(nb+nk−1))    (2.11)

For the sake of simplicity, we assume the sample time T to be one time unit in all equations. The parameter nk is the time delay of the system, while nf and nb decide how many delayed elements of η(t) and u(t) are included. Hence, they define the degrees of the denominator and numerator polynomials of the system, respectively.

By describing the noise w(t) as

  w(t) = H(q, θ) e(t)    (2.12)

with

  H(q, θ) = C(q) / D(q) = (1 + c_1 q^{−1} + ... + c_nc q^{−nc}) / (1 + d_1 q^{−1} + ... + d_nd q^{−nd})    (2.13)

where e(t) is white noise, we can view w(t) as filtered white noise. The parameters nc and nd work in the same way as nb and nf, i.e., they define the degrees of the numerator and denominator polynomials, respectively. Altogether, this results in the following model

  y(t) = G(q, θ) u(t) + H(q, θ) e(t)    (2.14)

which is often called a Box-Jenkins model if G(q, θ) and H(q, θ) are chosen as in (2.10) and (2.13). The model is illustrated in Figure 2.1.

Figure 2.1. A visual illustration of the Box-Jenkins model. u(t) goes through B(q)/F(q) and e(t) through C(q)/D(q); the outputs are added to form y(t).


2.2.1 ARMAX

Under the assumption that the noise enters early in the process, it goes through the same system, i.e., the same poles, as the input. This assumption means that we get

  F(q) = D(q) = A(q) = 1 + a_1 q^{−1} + ... + a_na q^{−na}

which gives

  A(q) y(t) = B(q) u(t) + C(q) e(t)    (2.15)

This is a special case of (2.14) and is called an ARMAX model (AutoRegressive Moving Average with eXternal signal). It is visualized in Figure 2.2.

Figure 2.2. A visual illustration of the ARMAX model. After the white noise e(t) is filtered through C(q), it goes through the same poles of the system as the input.

2.2.2 ARX

A natural reduction of complexity in the ARMAX model is to let the noise term C(q)e(t) in (2.15) be white. This means that we have C(q) ≡ 1 and

  A(q) y(t) = B(q) u(t) + e(t)    (2.16)

This model is called an ARX model (AutoRegressive with eXternal signal). An ARX model is useful when high complexity is undesirable, and it is in many circumstances a reasonable noise model. As in the ARMAX model, the noise goes through the same poles as the input. It is visualized in Figure 2.3.
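In practice, the ARX parameters in (2.16) are estimated from data; the thesis uses the System Identification Toolbox in MATLAB, but the underlying computation is an ordinary linear least-squares fit. A Python sketch for an invented first-order example (na = nb = nk = 1):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a known ARX system: y(t) = 0.7 y(t-1) + 0.5 u(t-1) + e(t),
# i.e. A(q) = 1 - 0.7 q^-1 and B(q) = 0.5 q^-1 in the notation of (2.16)
N = 2000
u = rng.standard_normal(N)
e = 0.05 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 1] + e[t]

# Stack the regression y(t) = phi(t)^T theta with phi(t) = [y(t-1), u(t-1)]^T
# and solve for theta by least squares
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
# theta should recover approximately [0.7, 0.5]
```

This least-squares solvability is exactly why ARX is attractive when many model orders have to be tried, as in Chapter 4.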

2.2.3 Output Error

By assuming that the noise is white and added directly to the output, not going through the dynamics of the system, we get H(q) ≡ 1 and

  y(t) = G(q, θ) u(t) + e(t)    (2.17)


Figure 2.3. A visual illustration of the ARX model. White noise is added together with the input and goes through the same poles of the system.

2.3 Prediction

Using previous values of u(t) and y(t), it is often interesting to predict values of y(t). We will here describe how such a prediction can be made in the ARX case.

In the time domain, the ARX model has the following appearance

  y(t) = −a_1 y(t−1) − ... − a_na y(t−na) + b_1 u(t−nk) + ... + b_nb u(t−nk−nb+1) + e(t)    (2.18)

Since e(t) represents white noise, it cannot be predicted, and it is thus not included in the predictor. The one-step predictor can be written as

  ŷ(t|θ) = −a_1 y(t−1) − ... − a_na y(t−na) + b_1 u(t−nk) + ... + b_nb u(t−nk−nb+1)    (2.19)

Note that only past output values and past and present (if nk = 0) values of the input are used here.

It is now clear that the values of θ = [a_1, ..., a_na, b_1, ..., b_nb]^T in the linear predictor (2.19), which can be rewritten as ŷ(t|θ) = θ^T φ(t), determine the result, given that na, nb and nk are set. Here

  φ(t) = [−y(t−1), ..., −y(t−na), u(t−nk), ..., u(t−nk−nb+1)]^T
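Once the regression vector is formed, the one-step predictor ŷ(t|θ) = θ^T φ(t) is a plain dot product. A minimal Python sketch, with illustrative orders na = 2, nb = 1, nk = 1 and invented parameter values:

```python
import numpy as np

# theta = [a1, a2, b1] for an ARX model with na = 2, nb = 1, nk = 1
theta = np.array([0.5, -0.2, 0.8])

def predict_one_step(y_past, u_past):
    """One-step ARX prediction as in (2.19): y_past = [y(t-1), y(t-2)],
    u_past = [u(t-1)]; phi carries the sign convention of the predictor."""
    phi = np.array([-y_past[0], -y_past[1], u_past[0]])
    return float(theta @ phi)

y_hat = predict_one_step([1.0, 0.4], [2.0])
# = -0.5*1.0 + 0.2*0.4 + 0.8*2.0 = 1.18
```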

2.4 Feedforward Control

The most basic way of controlling systems with uncertainties is feedback control: the output is used to see how large the error is, and the input is changed accordingly. Now, consider a system where it is possible to measure some kind of disturbance before it affects the output signal. A frequently used approach to reduce the effect of this disturbance on the output is to compensate for it by adding a term to the control signal. This control strategy is called feedforward control, and it requires a good model of the system. It can be implemented as in Figure 2.4.


Figure 2.4. A system with a measured disturbance. The feedforward implementation is used to reduce the effect of the disturbance by choosing a good Ff.

In Figure 2.4, a disturbance goes through the block H and affects the output y. With some knowledge about G1 and H, the measured disturbance v can be connected to the feedforward controller Ff. The feedback control loop is not shown in the figure, but it is easily added by connecting the output y(t) to the input u(t) through a controller.

A similar approach can be used for feedforward of reference signals; this is further explained in [2]. Feedforward control is sensitive to parameter variations and might be difficult to implement for some systems (see, for example, [3]).

2.5 Smith Control

A time delay (denoted e^{−sT}) in a system decreases the phase margin. Hence, to avoid instability, the control gain often has to be reduced, which implies a slower closed-loop system. An alternative is to use the Smith controller shown in Figure 2.5, which is designed to avoid this problem.

[Figure 2.5. Block diagram of the Smith controller, with the controller F, the model block (e^{−sL} − 1)G in an inner feedback loop, and the delayed system e^{−sL}G.]


The closed-loop system will have the same dynamics as the inner system in Figure 2.5. However, the sensitivity and robustness of the resulting system are affected, as described in [3]. Stability of the open-loop system is a fundamental requirement for the Smith controller to work.

2.6 Model Predictive Control

By creating a state-space model of a system (see [5] for more on this), it is possible to use Model Predictive Control (MPC) to control it. The approach is to optimize the control signal by solving a quadratic program (QP). Taken from [2], MPC can be implemented using the following algorithm:

1. Measure the states x(k) (or use an observer to get x̂(k)).

2. Calculate u(k + j), j = 0, 1, ..., N − 1, by solving the quadratic program (2.20).

3. Apply u(k) as a control signal.

4. Update the time, k := k + 1.

5. Repeat from step 1.

The problem solved by MPC is formulated as a QP with linear constraints:

  min_u  Σ_{j=0}^{N−1} ||x(k+j+1)||²_{Q1} + ||u(k+j)||²_{Q2}    (2.20)
  subject to  Au ≤ b

The variable x denotes the states and u the control signals, and the two matrices Q1 and Q2 work as design parameters.

First, some matrices are defined in order to write the problem in a form MATLAB can solve. By introducing matrices in which all u(·) and x(·) of the control sequences are conveniently stacked, a more compact expression is achieved:

  U = [u(k), u(k+1), ..., u(k+N−1)]^T,    X = [x(k+1), x(k+2), ..., x(k+N)]^T    (2.21)


It can be shown that X = Hx(k) + SU, where

  H = [A, A², ..., A^N]^T (blocks stacked vertically)

  S =
  [ B          0          ...  0 ]
  [ AB         B          ...  0 ]
  [ ...        ...        ...  ... ]
  [ A^{N−1}B   A^{N−2}B   ...  B ]    (2.22)

After defining Q̄1 and Q̄2 as block-diagonal matrices containing Q1 and Q2 repeated N times, the cost function of the QP problem can be rewritten as

  Σ_{j=0}^{N−1} ||x(k+j+1)||²_{Q1} + ||u(k+j)||²_{Q2} = X^T Q̄1 X + U^T Q̄2 U
  = (Hx(k) + SU)^T Q̄1 (Hx(k) + SU) + U^T Q̄2 U    (2.23)

Dealing with constraints, adding reference signals, using integral action and feedforward are not explained here; for more on these topics, see [12] and [2].

When solving the QP problem using the MATLAB Optimization Toolbox [14], it should be written in the following form:

  min_x  (1/2) x^T H x + f^T x    (2.24)
  subject to  Ax ≤ b

Rewriting (2.23) to this structure results in the following expression:

  min_U  (1/2) U^T (S^T Q̄1 S + Q̄2) U + (S^T Q̄1 H x(k))^T U    (2.25)
  subject to  AU ≤ b

Since constant terms, i.e., all terms that do not contain the variable U, do not affect the optimum, they have been excluded in (2.25).
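The construction of H and S in (2.22), and the solution of (2.25) in the unconstrained case (where the QP minimizer has a closed form), can be sketched as follows. The system matrices, horizon and weights below are invented for the example; the thesis solves the constrained problem with MATLAB's Optimization Toolbox instead:

```python
import numpy as np

# Illustrative double-integrator-like system and MPC data
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
N = 10
n, m = A.shape[0], B.shape[1]

# H stacks A, A^2, ..., A^N; S is block lower triangular as in (2.22)
H = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
S = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        S[i * n:(i + 1) * n, j * m:(j + 1) * m] = (
            np.linalg.matrix_power(A, i - j) @ B)

# Block-diagonal weights (Q1 = I and Q2 = 0.1*I, each repeated N times)
Q1 = np.kron(np.eye(N), np.eye(n))
Q2 = np.kron(np.eye(N), 0.1 * np.eye(m))

x0 = np.array([1.0, 0.0])
# Without constraints, setting the gradient of (2.25) to zero gives
# U* = -(S^T Q1 S + Q2)^{-1} S^T Q1 H x(k)
U = -np.linalg.solve(S.T @ Q1 @ S + Q2, S.T @ Q1 @ H @ x0)

# Predicted state trajectory from X = H x(k) + S U
X = H @ x0 + S @ U
```

With the inequality constraints Au ≤ b included, the same H, S and weight matrices are simply handed to a QP solver.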


2.7 Observer

Using state-space models often provides a powerful tool for feedback control. However, it is often impossible to measure the states, and only inputs and outputs are known. By introducing an observer, it may be possible to estimate the states. Observability is desirable, since it allows the poles of the observer to be placed freely. Observability can easily be tested in MATLAB by forming the observability matrix with the obsv(A,C) command. If this matrix has full rank, the states are observable.

The following result is taken from [3]. If the state of the system

  x(k+1) = Ax(k) + Bu(k)
  y(k) = Cx(k)    (2.26)

is estimated with the observer

  x̂(k+1) = Ax̂(k) + Bu(k) + K(y(k) − Cx̂(k))    (2.27)

then the estimation error x̃ = x − x̂ satisfies

  x̃(k+1) = (A − KC) x̃(k)    (2.28)

If the system is observable, the eigenvalues of A − KC can be placed arbitrarily. On the other hand, if the system is not observable, the observer may still work as long as there is a K such that A − KC is stable. The system is then called detectable. This is further explained in [5].

A very common observer is the Kalman filter, which is the best possible observer under certain conditions. If the characteristics of the noise are known, the estimation error can be formulated as a function of K. By minimizing that function, an optimal estimate is found.
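The observability test and the error dynamics (2.28) can be illustrated with a few lines of Python (the system matrices and the gain K are invented for the example; in MATLAB one would use obsv and place instead):

```python
import numpy as np

# Hypothetical second-order system with a position measurement
A = np.array([[1.0, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])

# Observability matrix [C; CA], as built by MATLAB's obsv(A, C)
O = np.vstack([C, C @ A])
observable = np.linalg.matrix_rank(O) == A.shape[0]  # True here

# A gain K that puts the eigenvalues of A - KC inside the unit circle
K = np.array([[0.8], [0.5]])
Acl = A - K @ C
stable = max(abs(np.linalg.eigvals(Acl))) < 1  # error dynamics stable

# Simulate the estimation error x~(k+1) = (A - KC) x~(k): it decays to zero
err = np.array([1.0, -1.0])
for _ in range(50):
    err = Acl @ err
```

Since observability holds here, K could also have been chosen to place the eigenvalues of A − KC at any desired locations.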


Chapter 3

Pulp Preliminaries

In order to manufacture paper, the main ingredient, wood, needs to be processed in several steps. Wood consists of cellulose (43%), hemi-cellulose (28%) and lignin (29%). This distribution holds for softwood, for example pine and spruce, but it is a good approximation for most types of wood. Lignin works as a kind of glue binding the cellulose fibers together, and it must be removed to achieve good paper quality.

First, the logs need to be cut into small chips (approximately 2.5 × 1 × 0.4 cm). The state between chips and paper is called pulp, and its processing includes many advanced steps. Most of the information described in this chapter has been taken from discussions with [8] and from [6]. Two articles that are specifically interesting for this thesis are [11] and [16].

3.1 The Pulp Process

A key task in the pulp process is to remove most of the lignin and some hemi-cellulose while preserving as much as possible of the cellulose. Since broken-down cellulose essentially is sugar, it also needs to be removed.

However, the most obvious task of the process, treating the chips and later the pulp, is not the only key task. Equally important is the semi-closed loop of chemicals. Without it, most of steps 1-8 described below would not work. The steps can also be seen in Figure 3.1.

1. Steaming; replacing the air in the chips with water. This will make the chips heavier than water.

2. Impregnation; letting the white liquor [Na2S, NaOH, H2O] penetrate into the chips so that it can reach the lignin (with which it should react).

3. Increasing the temperature (by adding steam) and adding white liquor if needed.

(30)

4. Cooking; allowing the chemicals and chips to fully react.

5. Screening; removing sand, knots, shives (uncooked wood fragments) and other undesired materials.

6. Washing; removing black liquor (which is remaining white liquor and all dissolved fragments of lignin and hemi-cellulose) from the pulp. Washing is done in several steps throughout the process.

7. Bleaching; removing remaining lignin or changing its structure.

8a. Drying; evaporating water from the pulp. It is then stored in a suitable way.

Or:

8b. Pumping the pulp to a paper machine without drying.

Figure 3.1. Schematics of the pulp process. The first four steps are called Compact Cooking™; they are followed by different steps of washing and bleaching until the pulp has achieved sufficient quality.

3.2 The Continuous Digester

A continuous digester is a chemical reaction vessel where wood chips are treated with caustic cooking chemicals at about 150-160 °C. The vessel is typically about 50-70 meters high with a bottom diameter of about 6-10 meters. The chemical reactions occur in different steps over a couple of hours, producing paper pulp. Cook times vary depending on the size of the vessel and the production rate, but they always end up in the span of four to six hours.


The cooking starts at the top of the digester. The absolute top is filled with continuously added hot steam. This is also where the chips are fed into the digester. When the chips have entered the digester, they fall down through the steam onto the top of a chip pile that extends all the way to the bottom of the digester. Changing steam temperature is a way of controlling the temperature in the digester. The liquor (black liquor) level in the vessel is almost as high as the chip level.

Wash liquor (used wash water from succeeding wash stages) is added at the bottom and is removed at the mid-phase of the digester. Hence, a net flow from the bottom to the mid-phase is created. Because large amounts of liquid are located within the chips, there will be a flow of liquid downwards as well. The upward-flowing liquid is heated in a circulation, displayed in Figure 3.2. A very small amount of white liquor is also added here.

This lower section of the vessel is a washing zone with the purpose of removing chemicals. When the chips have passed this zone, the temperature is quickly reduced and the chips are blown out through a pipe at the bottom. Finally, the pressure in the pipe is reduced to one bar, and the resulting mechanical force mashes the now very soft chips into pulp. (The change in flow velocity causes friction between the chips, which is enough to mash them.)

Figure 3.2. Principal appearance of a continuous digester. The most interesting variables for this thesis are displayed where they are measured.

Figure 3.2 shows the principal appearance of the digester and where the most important variables are measured. These variables will be used in this thesis.


The purpose of using a digester is to dissolve the lignin that binds the cellulose fibres together, to a point that is economically viable. If the chips are overtreated, some of the cellulose will be damaged, producing a lower yield and fibres with lower mechanical strength. On the other hand, if the pulp contains too much lignin after the cooking process, more chemicals must be used in the bleaching process. Hence, the goal of digester lignin control is to keep the level of undissolved lignin at a constant level, chosen with respect to the overall optimum. This lignin level has previously been measured by analyzing samples in a lab. Today, online analyzers dominate the industry. The resulting number has historically been called the Kappa number (after the Greek letter κ). There is a linear relationship between the lignin concentration L and the Kappa number (L ∝ κ). We will call the digester lignin control Kappa control in this thesis. Also, the notations κmid and κblow will be used for the measured signals (mid refers to the mid-phase of the digester and blow refers to the blow flow that goes through a pipe at the bottom of the digester).

3.3 Kappa

There are physical models of how the cooking process works, however, they are still subject to improvements. Here are some important characteristics for the process worth mentioning.

The process is nonlinear and time-varying. Though it is preferable to have constant production, this is often not the case. It is the variable production rate that makes the system time-variant.

There are multiple inputs, such as temperature, alkali concentration and production rate. If two analyzers for Kappa are used, we also have multiple outputs (κmid and κblow). Hence, the system examined in this thesis is a MIMO system (Multiple Input Multiple Output).

Since the samples for κmid are taken from a specific region in the digester, they may not necessarily reflect the average values over the cross section. Unwanted circulation of chips and liquid in smaller regions is not uncommon.

The process has slow dynamics, which vary from half an hour to almost a day.

3.3.1 Kappa Control

Feedback control is used in the digester to maintain a constant value of the Kappa number. However, it is also possible for the technician monitoring the process to manually set the control signals. A common feedback control strategy is a PI-controller with feedforward features (with respect to production and blowpipe flow).

It is common to use the temperature as a control signal, but there are other important inputs such as the alkali concentration and the residual alkali concentration after the impregnation vessel. Naturally, the production rate is not an accessible control signal since it is used for the entire plant. However, it is an important disturbance factor.

A common way to implement the Kappa control is to use the H-factor, H(t). It is derived from the following equation

dL/dt = −L f(OH−, HS−) k e^{(E/R)(1/373.15 − 1/T(t))}   (3.1)

where L is the amount of lignin in % of the original amount of wood, E is the activation energy, R is the molar gas constant, T(t) is the temperature, and OH− and HS− are the concentrations of hydroxide ions and hydrosulphide ions. An approximation that says that the concentrations of OH− and HS− are a function of L makes it possible to write (3.1) as

dL / (L f(g(L))) = k e^{(E/R)(1/373.15 − 1/T(t))} dt   (3.2)

By assuming there is a function F(L), such that we can integrate the left side of (3.2), we have

F(L) = k H(t)   (3.3)

where

H(t) = ∫₀^τ e^{(E/R)(1/373.15 − 1/T(t))} dt   (3.4)

The Kappa control goal translated into H-factor control is to keep H(t) constant over time. The relationship between the Kappa value and H(t) is not linear; however, this nonlinearity can be disregarded without loss of accuracy for control purposes. By using the H-factor as a control signal (instead of the temperature), problems with varying production are better taken care of. This is because we have L(T) and T(t), where t depends on the production rate. Still, the H-factor needs to be translated into a temperature for the actual output of the control signal.
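The integral in (3.4) is straightforward to evaluate numerically for a sampled temperature profile. The sketch below (in Python rather than the MATLAB tools used in this thesis) applies the trapezoidal rule; the default E/R = 16113 K is a commonly quoted kraft-cooking value and is an assumption here, not a figure taken from the text.

```python
import numpy as np

def h_factor(T_kelvin, dt_hours, E_over_R=16113.0):
    """Evaluate H = integral of exp((E/R)(1/373.15 - 1/T(t))) dt by the
    trapezoidal rule. T_kelvin is an evenly sampled temperature profile
    [K] and dt_hours the sampling interval [h]. E/R = 16113 K is an
    assumed, commonly quoted value, not taken from the thesis."""
    rate = np.exp(E_over_R * (1.0 / 373.15 - 1.0 / np.asarray(T_kelvin)))
    return float(np.sum((rate[:-1] + rate[1:]) / 2.0) * dt_hours)

# One hour at exactly 100 C (373.15 K) gives H = 1 by construction,
# since the relative reaction rate is then identically 1.
H = h_factor(np.full(61, 373.15), dt_hours=1.0 / 60.0)
```

Because the exponent vanishes at 373.15 K, this normalization makes one hour of cooking at 100 °C contribute exactly one H-factor unit, which is the usual convention.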

3.3.2 Kappa Measurements

The first Kappa measurement is made on samples taken from the mid-phase, where the chips have been processed for approximately two hours (at the average production rate). The second is made on samples from the pulp in the blowpipe immediately after the digester.

Samples from the mid-phase are taken in the following way. A pipe that is connected to the digester sucks out the appropriate amount of chips passing this section. This is then transported to a tank in the analyzer. It is unclear to the author if the pipe is cleared of all chips at all times or if there sometimes are chips left.

Once in the analyzer, the sample is washed and prepared; then a special technique using a laser and evaluating reflections is applied to determine the Kappa value. The sample is then flushed away from the tank. Only one test at a time can be conducted, and it takes approximately five minutes.


Since the analyzer is a very expensive device, only one is used for the measurement of Kappa in the plant. This includes the κmid and κblow measurements as well as measurements before and after the O2-delignification (step 7 in Figure 3.1). Therefore, the time between two samples will be approximately 20-25 minutes.

3.3.3 Kappa Disturbances

There are a few factors that have a clear impact on the Kappa value and that are relevant, in the sense that they are slow enough to control with feedback control.

Firstly, the size of the chips varies with the sharpness of the chipper knives. It is also important whether the chips are purchased or manufactured at the mill. Usually, chips bought from saw mills have a higher density, which is explained in [6]. Since it is volumes that are controlled (as opposed to weights), a different density changes the needed amounts of chemicals.

Secondly, wood characteristics such as the type of wood and the location of origin (climate) are important, as they have a strong impact on the density.

Thirdly, wind causing small chips to concentrate by piling up is a known disturbance factor. The chips are stored in big piles before usage, and it is when these piles are built that the problem starts. Later, when the chips are used, the higher concentration of smaller chips will change the Kappa value. This happens because the chemicals react faster with smaller chips.

Finally, variations in the degree of compaction of the chips may occur. More chemical reactions result in softer chips, which implies an increased compaction. As a result, the chips occupy less space (the density increases). This space will be filled by new chips consuming more chemicals. This has a clear effect on the alkali concentration, which is calculated to be enough for the volume of chips at another density level. Thus the alkali is not enough to maintain the desired Kappa value, and the Kappa value increases. As the chemical reactions decrease, so will the level of compaction, leaving us with the opposite problem. It can take up to 24 hours before this oscillation fades out.

Besides these disturbance factors, there is also noise on the measured samples, which will be further described in Section 4.1.


Chapter 4

System Identification of the Digester's Lower Part

In order to identify the system that defines the digester, we will use data from the Finnish plant mentioned in Chapter 1. A brief glance at these data series, which were collected during 2003, shows that there seems to be a correlation between κmid and κblow. This can be seen in Appendix A. However, in order to actually be able to use the data, a closer study has to be made.

It seems natural to start evaluating the correlation between κmid and κblow with a straightforward approach, namely to use the definition of the covariance function described in Section 2.1.

The next step is to consider the dynamics of the process. Different model structures, such as Output Error, ARX, ARMAX and Box-Jenkins models, can be used. These model structures were described in Section 2.2. ARMAX and Box-Jenkins models add complexity without improving much compared with ARX, and are not used for that reason. Output Error models give, not surprisingly, bad results, as they simulate the output from just the input. All these models have been estimated using MATLAB - System Identification Toolbox (SITB) (see [10]). Since the ARX model seems to be most suitable, only its results will be presented here.

4.1 Selection and Preprocessing of Data

As mentioned before, there are a lot of factors that influence the resulting κblow. In order to interpret the data we need to make sure there are no outliers among the selected samples. There are three major issues to take into consideration.

Firstly, there are a lot of samples that actually are old data. This seems to derive from problems with the analyzer. Clearly this is a problem that needs to be addressed. However, in this thesis, series with this behavior will not be used at all. We will simply split the long series of samples into shorter ones.


Secondly, big changes in κmid and κblow are often direct results of the production slowing down or stopping. These situations cause changes in the system that are not significant for the problem we want to examine, namely the results during a typical production rate. When the rate changes, so does the time delay in the system, which makes it harder to compare the right κmid and κblow values. Hence, we will not use these data either.

Thirdly, the noise on κmid is too large for us to be able to see short-time trends. The standard deviation for κmid is approximately three times higher than that of κblow. Even considering the factors mentioned above, the correlation between the Kappa signals is poor. This can be seen in Figure 4.1. The decline for κmid after t ≈ 1685 can be seen as a resulting decline for κblow after t ≈ 1700. As κmid increases again, at t ≈ 1740, so does κblow, but at t ≈ 1760. However, we cannot at all see any effects of the big changes in κmid at t = 1740-1760 on the output. As a result, we need to filter κmid before evaluating the correlation between the two Kappa signals.

Figure 4.1. κmid is seen above κblow, both are unfiltered.

We will assume that there is little noise on the κblow values. Hence, by filtering the values of κmid until they show the same kind of behavior (but with a higher standard deviation), we have two sets of data that can be compared. The filtering is done in SITB using a fifth order Butterworth filter (for more on this filter, see [17]). The information lost due to filtering is not relevant for our purposes, as it has too high frequencies for feedback control. This will be further explained in Section 5.1, but with another deadtime.
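As an illustration of this preprocessing step (in Python with SciPy rather than SITB, and with an illustrative normalized cutoff instead of the exact frequencies used in the thesis), a causal fifth order Butterworth filter can be applied as follows. Note that a causal low-pass filter necessarily delays the signal, which is exactly the time loss discussed in later chapters.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Fifth order low-pass Butterworth, applied causally. The normalized
# cutoff (0.1 of the Nyquist frequency) is illustrative only; the
# thesis states its cutoffs in Hz and does the filtering inside SITB.
b, a = butter(5, 0.1)

rng = np.random.default_rng(0)
t = np.arange(2000)
slow = np.sin(2 * np.pi * t / 400)              # slow trend worth keeping
noisy = slow + 0.8 * rng.standard_normal(t.size)

# A causal filter suppresses the noise but delays the signal -- the
# time loss that the predictor in Chapter 5 tries to regain.
filtered = lfilter(b, a, noisy)
```

After filtering, the error relative to the underlying slow trend is dominated by the phase delay rather than by the measurement noise.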

4.2 Correlation Comparisons

As mentioned in Section 2.1, the correlation function between two signals describes how they depend on each other as a function of the delay τ. The idea of comparing correlations is that if we can establish a relationship between, say, u(t − τ) and y(t), we know that the new variable (κmid) is useful for later use. Hence, trying this for a few reasonable delays seems to be an intuitive first test.
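A minimal sketch of such a lagged-correlation test (Python, with synthetic toy signals in place of the real Kappa series) could look like this:

```python
import numpy as np

def lag_correlation(u, y, lags):
    """Sample correlation between u(t - lag) and y(t) for each lag."""
    n = len(u)
    return {lag: float(np.corrcoef(u[:n - lag], y[lag:])[0, 1])
            for lag in lags}

# Toy data: u has short memory (5-sample moving average of white noise)
# and y is u delayed by 10 samples plus measurement noise.
rng = np.random.default_rng(1)
w = rng.standard_normal(3000)
u = np.convolve(w, np.ones(5) / 5, mode="same")
delay = 10
y = np.empty_like(u)
y[:delay] = u[0]
y[delay:] = u[:-delay]
y += 0.2 * rng.standard_normal(u.size)

corr = lag_correlation(u, y, lags=range(15))
best_lag = max(corr, key=corr.get)      # peaks at the true delay of 10
```

With a slowly varying u, neighboring lags give almost the same correlation and the peak flattens out, which is precisely the behavior seen in Table 4.1.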

By selecting six series, divided up by the approach described in the previous section, we should have enough statistics to see accurate results. The series in Table 4.1 are samples from the Finnish plant collected during the first half of 2003. Most of them contain samples from approximately ten days of nonstop production. All have been detrended (using the SITB command detrend) and then filtered with a fifth order Butterworth filter, trying different cutoff frequencies, shown in Appendix B.2.

Table 4.1. The different peaks in correlation between κmid and κblow shown for six different series, all with filtered κblow. Correlation is shown as a function of the time delay τ (in samples, T = 10).

  Data       τ=0    7     8     9     10    11    12    13    14
  Series 1   0.60  0.63  0.63  0.64  0.64  0.64  0.63  0.62  0.61
  Series 2   0.21  0.21  0.23  0.25  0.26  0.27  0.26  0.26  0.25
  Series 3   0.34  0.33  0.33  0.34  0.35  0.36  0.36  0.35  0.34
  Series 4   0.56  0.56  0.56  0.56  0.55  0.54  0.53  0.51  0.48
  Series 5   0.05  0.28  0.30  0.32  0.33  0.33  0.32  0.30  0.27
  Series 6   0.16  0.21  0.21  0.22  0.21  0.21  0.20  0.19  0.19

The table shows that the difference in correlation between zero delay and the delay which gives a maximum is small, sometimes nonexistent.

Hence, using this type of test does not give much information. We have filtered and chosen those parts of the series where disturbances from production, alkali and other factors are minor and still it does not get any better than in Table 4.1. There are a few good reasons why the results look the way they do.

Most samples u(t) are well correlated with u(t − 1) and a few more of the surrounding samples. Although there is a minority of samples u(t) where a quick change in value compared with u(t − 1) has occurred, resulting in bad correlation, these will blend with the majority. Furthermore, the dynamics of the true system ruin the results.


As described in Section 2.1, each signal coming out of a linear time-invariant system can be written as a function of the input. Let us show (2.5) once again:

R_yu(τ) = Σ_{l=Td}^∞ g_l E[u(t − l) u(t − τ)]   (4.1)

From this expression it is obvious that we will not only see the effect of correlation between different samples of the input, but also that the coefficients g_l complicate the intuition of what should be expected. For example, although the correlation between two samples is high, a small coefficient g_l will make the resulting contribution small.

No matter how we look at it, all the problems described above come from the fact that the correlation test does not take into account dynamics. Clearly we need to perform a more detailed analysis.

4.3 ARX Modeling

The ARX prediction, previously described in Section 2.2.2, uses only a given structure to find the difference equation that best describes the system. As can be seen in Figure 4.2, in an ARX model the noise goes through the same poles as the input. This is not obvious but seems to be the case here. One can argue that it is not the measuring of κblow that adds noise; instead, noise is already added when the chips enter the system, and hence the noise goes through the poles of the system. Figure 4.2 shows how the noise and input generate the output.

Figure 4.2. ARX model of the relationship between the measured Kappa values: κblow(t) = (B(q)/A(q)) κmid(t) + (1/A(q)) e(t). Both the noise and κmid go through the same system dynamics.

All that is required to estimate a model, when using SITB, is y(·), u(·) and the design parameters na, nb and nk. Setting these parameters is not trivial in our case and several models need to be tested and compared. In the ARX prediction previously described in Section 2.3 we translate y(·) to κblow and u(·) to κmid.
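For readers without SITB, the least-squares estimation behind its arx command can be sketched as follows; the synthetic first-order system used as a sanity check is purely illustrative.

```python
import numpy as np

def fit_arx(y, u, na, nb, nk):
    """Least-squares fit of A(q)y(t) = B(q)u(t) + e(t) with deadtime nk,
    mirroring what SITB's arx command does. Returns the coefficient
    vectors (a1..a_na) and (b1..b_nb)."""
    start = max(na, nb + nk - 1)
    Phi = np.array([[-y[t - i] for i in range(1, na + 1)]
                    + [u[t - nk - j] for j in range(nb)]
                    for t in range(start, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
    return theta[:na], theta[na:]

# Sanity check on data from a known system:
# y(t) = 0.7 y(t-1) + 0.5 u(t-2) + small noise, i.e. na=1, nb=1, nk=2.
rng = np.random.default_rng(2)
u = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 2] + 0.01 * rng.standard_normal()

a, b = fit_arx(y, u, na=1, nb=1, nk=2)   # a ≈ [-0.7], b ≈ [0.5]
```

The regressor rows hold negated past outputs followed by delayed inputs, so the solved parameter vector stacks the A coefficients (without the leading 1) on top of the B coefficients.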


4.3.1 ARX Prediction

One way of proving correlation between κmid and κblow is to use the former to predict the latter. This means the prediction horizon should be at least as long as the deadtime of the system, i.e., the time it takes for a chip to descend from where κmid is measured to where κblow is measured. Since the production rate is so important, it needs to be taken into consideration. There are three alternatives to choose between. Either we resample the system after the production rate (which will be described in Section 4.3.4), or we use it as an input. A third choice is to use samples where the production is kept at a constant level.

We start by filtering the data, using the cutoff frequency 1·10⁻³ Hz and a fifth order Butterworth filter. The choice of frequency is made by filtering the signals until the remaining data look like what is believed to be the true data. This filtering makes typical κmid signals look like they have been delayed 20 samples in time. It is not the same filter as the one used in Section 4.2. Filtering and preprocessing of the data is done in SITB. By tuning the design parameters na, nb and nk, we get the fit for a zero-step net prediction (+20 prediction horizon − 20 filter delay = 0 net) shown in Table 4.2. The fit measures how well a model can predict the output. Another series of κblow is used for this validation. The numerical values of the A and B polynomials can be seen in Appendix C.1.1.
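The fit figure reported here can be sketched as below; this assumes SITB's standard normalized root-mean-square measure, 100(1 − ‖y − ŷ‖ / ‖y − ȳ‖).

```python
import numpy as np

def sitb_fit(y, y_hat):
    """Percentage fit, assumed to follow SITB's normalized measure:
    100 * (1 - ||y - y_hat|| / ||y - mean(y)||). 100 % is a perfect
    model; 0 % is no better than predicting the mean."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    return 100.0 * (1.0 - np.linalg.norm(y - y_hat)
                    / np.linalg.norm(y - y.mean()))

y = np.array([1.0, 2.0, 3.0, 4.0])
perfect = sitb_fit(y, y)                     # 100.0
naive = sitb_fit(y, np.full(4, y.mean()))    # 0.0: no better than the mean
```

A fit of 70 % thus means the prediction error norm is 30 % of the error norm of simply predicting the mean.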

Table 4.2. The different parameters for the ARX predictor and the resulting fit for a 20-step prediction.

  Output: κblow
  na              10    10    11    9
  Input: κmid
  nb              3     1     3     3
  nk              23    25    23    23
  Resulting fit   70%   70%   69%   68%

We can see the parameter nk as the deadtime of the process. The fact that nb is such a low number is acceptable, but the polynomial coefficients are disturbingly small compared with their standard deviations. This tells us that the prediction is made using more contributions from earlier outputs than from κmid. However, results from some test series indicate that the κmid signal actually improves the estimates of κblow. It is obvious that the extra signal contains a lot of uncorrelated behavior as well, which probably derives from where and how the samples are measured. How do we know if our predictor is good or not? In our case, a meaningless predictor is one that only uses previous outputs to guess future ones. In Figure 4.3, the predicted output can be seen around a peak in the Kappa value. As in all tests, a new series has been used as validation data.


Figure 4.3. Prediction beyond a peak; at t = 1140, using the value of the current output and previous ones (grey), the value at t = 1160 is predicted (black). Measured and 20-step predicted output, shown as Kappa deviation from the Kappa setpoint.

4.3.2 Multiple Inputs

Assuming we actually want to use the predictor for κblow, it is interesting to see how the model can be improved. By using more inputs, and setting the corresponding parameters (nb and nk) individually, we get the results displayed in Table 4.3. A zero-step net prediction is still used, and the filter frequency is still 1·10⁻³ Hz. Using κmid, production rate, extraction temperature and extraction alkali concentration as inputs has given the best results when tested in SITB. These variables are all displayed in Figure 3.2. The use of more inputs implies that the predictor can work in a more general production scenario with good results.

Table 4.3. The different parameters for an ARX predictor and the resulting fit for a 20-step prediction.

  Output: κblow, na = 10
  Input           κmid   p     ac    ar
  nb              1      9     5     4
  nk              23     20    17    20
  Resulting fit: 73%


In Table 4.3, where the same data series as in Table 4.2 have been used, it can be seen that the chosen nk:s are very reasonable. The parameter na has a higher order than expected, but it is not extremely high. The low order of nb for κmid raises the question: can we do without it? However, tests show that it actually is needed in order to get good predictions. Hence, κmid contains useful information. The resulting polynomials for this model can be seen in Appendix C.1.1.

A possible way of using the prediction is to get a filtered version of κblow without the time delay an ordinary filter causes. This whole idea is based on the fact that there is a deadtime in the system. During this time, no changes of the inputs can affect the output to come.

Figure 4.4 shows how the time delay due to filtering is regained by the prediction. The prediction is compared with the actual filtered output, which is why it might not seem that good. However, if we compare the predicted output with the actual unfiltered output, which can be seen in Figure 4.5, it gets interesting. It is now clear that the prediction actually is a filtered version of the output. We will not pursue this direction further, since the samples of κblow are not that noisy. The predictor idea will therefore be used in a similar approach to filter the values of κmid without time loss.

Figure 4.4. Measured and 20-step predicted output, shown as deviation from the Kappa mean.


Figure 4.5. Predicted κblow (black) and actual unfiltered κblow (grey), shown as deviation from the Kappa mean.

4.3.3 ARX Simulation

If we can use only the input (as opposed to both input and output) to predict the output, we have a simulator. This can be tested in SITB. The coefficients of the A and B polynomials have been estimated and then tested on a new set of data. The input has been used to simulate the output, and the result has been compared with the actual output. Figure 4.6 shows the poor simulation performance of the estimated model. It seems that there is more to the system than the linear ARX model can describe.
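The distinction can be illustrated as follows: simulation passes the input alone through B(q)/A(q), while in the ARX structure the noise e(t) passes through the same poles, so everything driven by the noise is missed. The polynomials below are illustrative, not the estimated ones from Appendix C.

```python
import numpy as np
from scipy.signal import lfilter

# Illustrative polynomials (not the estimated ones from the thesis):
A = [1.0, -0.7]        # A(q) = 1 - 0.7 q^-1
B = [0.0, 0.0, 0.5]    # B(q) = 0.5 q^-2

rng = np.random.default_rng(3)
u = rng.standard_normal(500)
e = 0.3 * rng.standard_normal(500)

# ARX structure: y = (B/A) u + (1/A) e -- noise through the same poles.
y = lfilter(B, A, u) + lfilter([1.0], A, e)

# Simulation uses the input only; the noise-driven part of y is missed,
# which is why a simulator performs much worse than a predictor here.
y_sim = lfilter(B, A, u)
```

The residual y − y_sim is exactly the filtered noise (1/A(q)) e(t), so even a perfectly estimated model cannot simulate that part away.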

4.3.4 Resampling

As mentioned before, the system is not time-invariant because of the varying production rate. By resampling the data with respect to the production, the samples will occur irregularly in time. An understanding of this can be obtained from the following example.


Assume the following scenario:

• It takes 6 hours for a given chip to pass through the digester. This corresponds to 36 samples when the sampling time is 10 minutes. This is true when the average production rate is 1700 [tons/24 h].

• The production rate is halved for 2 hours, leaving the chip processed for 7 hours, which corresponds to 42 samples.

By sampling after production rate we get:

• The length between two samples is set based on the rate 1700 [tons/24 h]. When the production rate is at this level, there will be 10 minutes between the samples in the time domain.

• When the production rate is halved, it will take 20 minutes to reach this sample length. Thus we will have 36 samples of process time for the chip in the new domain.

Since the production is not actually a part of the system dynamics, resampling should theoretically give us a better model. However, the results are not significantly better compared with using the production rate as an input. Considering the extra calculations that need to be performed (see Appendix B.3), we will not use resampling in our model. Resampling and the MATLAB implementation of it are explained in [9].
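The resampling idea from the example above can be sketched as follows (Python; the signal is interpolated onto an equidistant cumulative-production grid, with illustrative numbers matching the example):

```python
import numpy as np

def resample_by_production(t_min, rate, signal, step_tons):
    """Re-grid a signal from equidistant time (minutes) to an equidistant
    cumulative-production axis (tons). `rate` is in [tons/24 h] and is
    treated as piecewise constant between time stamps."""
    tons_per_min = np.asarray(rate, dtype=float) / (24.0 * 60.0)
    # Cumulative production [tons] at each original time stamp.
    cum = np.concatenate(([0.0],
                          np.cumsum(np.diff(t_min) * tons_per_min[:-1])))
    grid = np.arange(0.0, cum[-1], step_tons)
    return np.interp(grid, cum, signal)

# With a constant rate of 1700 [tons/24 h], 10-minute sampling and a
# grid step equal to 10 minutes' worth of production, the new grid
# coincides with the old one; at half rate, two time samples would
# collapse into one production sample.
t = np.arange(0.0, 600.0, 10.0)
rate = np.full(t.size, 1700.0)
sig = np.sin(t / 50.0)
res = resample_by_production(t, rate, sig, step_tons=1700.0 * 10 / (24 * 60))
```

When the production rate varies, the same code stretches and compresses the time axis so that each new sample corresponds to the same amount of processed material.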


Figure 4.6. Simulated κblow (grey) and the actual κblow (black). Both are seen as deviations from the mean.


Chapter 5

Signal Processing of the Mid-phase Signal

In the previous chapter, we have shown that κmid contains usable information. However, we need to find a way to process the signal before we actually use it. Naturally, a first step is to filter the data. Since we lose time when doing so, an idea of how to regain this time will be examined. This includes identifying which inputs affect κmid. Hence, this chapter will focus on the upper part of the digester.

Figure 5.1. The digester seen as two subsystems (G1: part 1, from the inputs Ttop, ar, ac and p to κmid; G2: part 2, from κmid to κblow). In Chapter 4, part 2 was studied, while in this chapter part 1 will be studied.


5.1 Filtering

For a system as slow as the digester, it is not in our interest to use information at high frequencies. So what can be considered a high frequency?

Assume that κmid changes like a sinusoid with a cycle time of t = 12 samples, which means 2 hours. This corresponds to the delay identified in Table 5.1. The value of κmid at t = 0, 12, 24, ... is 0. This corresponds to the frequency

f = 1 / (2 · 3600) = 1.39·10⁻⁴ [Hz]

where the denominator is the cycle time in seconds. Anything that changes faster than this is too fast for us to react against, i.e., control. Hence, these frequencies should be filtered out, leaving us with a spectrum of controllable data.

In a more practical approach, it can be shown that in order to get a predictor without any time loss the time delay caused by filtering should not be longer than the deadtime in the process. The predictor is explained in the following section.

5.2 Prediction of κmid

As mentioned in the previous discussion about the ARX modeling, the predictor does provide useful information when used as a kind of filter (see Section 4.3.2). It is actually an observer giving us information about a signal that we otherwise could not see. Put simply: we predict ahead in time by the length of the deadtime in the system, but because of the time lost in filtering, the net gain is zero. Here, we are interested in the prediction of κmid instead of κblow. This means that κmid is the output, and we need to choose the inputs.

5.2.1 Choice of Inputs

The model used for the predictor has been tested with several inputs, namely

• alkali concentration
• residual alkali concentration after the impregnation
• production rate
• temperature

Typically, production and temperature affect κmid the most, but when the others do change, the accuracy is improved if they are included.


We conclude that the ARX model should have the following structure:

A(q) κmid(t) = B1(q) Ttop(t) + B2(q) ar(t) + B3(q) ac(t) + B4(q) p(t)   (5.1)

where Ttop(t) is the top temperature, ar(t) is the residual alkali concentration after impregnation, ac(t) is the alkali concentration, p(t) is the production rate and

A(q) = 1 + a1 q^−1 + · · · + a_na q^−na
B1(q) = b1,1 q^−nk1 + · · · + b1,nb1 q^(−nk1−nb1+1)
B2(q) = b2,1 q^−nk2 + · · · + b2,nb2 q^(−nk2−nb2+1)
B3(q) = b3,1 q^−nk3 + · · · + b3,nb3 q^(−nk3−nb3+1)
B4(q) = b4,1 q^−nk4 + · · · + b4,nb4 q^(−nk4−nb4+1)

This gives us the design parameters na, [nb1, nb2, nb3, nb4] and [nk1, nk2, nk3, nk4].

5.2.2 Estimating Parameter Values

In our preprocessing of the signals, a fifth order Butterworth filter with the cutoff frequency 1·10⁻⁴ Hz is used. This causes a 10 sample delay, which we try to regain through the prediction. Thus we have a zero-step net prediction.

Many models have been estimated, using knowledge about reasonable values, but with so many inputs it is impossible to state that the best model has been found. However, most reasonable models seem to have a very similar fit (the deviation is less than 1%). Several different data series have been used in the identifications. The chosen model has the parameters displayed in Table 5.1. The polynomials A, B1, B2, B3 and B4 can be seen in Appendix C.1.2.

Table 5.1. The different parameters for the ARX predictor and the resulting fit for a 10-step prediction.

  Output: κmid, na = 11
  Input           Ttop   ar(t)   ac(t)   p(t)
  nb              4      3       6       2
  nk              11     22      14      11
  Resulting fit: 64%

The chosen nk:s seem very reasonable, since it should take about two hours to pass through the upper part of the digester. Changes in production will be seen earlier, whereas the temperature and alkali concentration will roughly follow the chips. The high value of nk for the residual alkali concentration is harder to explain.


Figure 5.2 shows the results of simulation using the new ARX model. It is still not good enough to use for simulation purposes.

Figure 5.2. Simulation of κmid using temperature, alkali concentration, residual alkali concentration and production rate. The black signal is the true output, which the grey should follow. Both are shown as deviations from the Kappa mean.

5.2.3 Prediction Horizon

With the parameters and coefficients described above, we only have one more parameter to set, namely the prediction horizon. Since κmid has been filtered, we try to regain this time with the predictor. In Figure 5.3, we see the signal filtered using a fifth order Butterworth filter with the cutoff frequency 1·10⁻⁴ Hz, together with the unfiltered signal. A 10-step predictor is also shown, with its jerky behavior. This behavior increases with the length of the prediction horizon. However, if we allow some time loss, the resulting predictor is significantly better. In Figure 5.4, we allow three more samples of lost time using a 7-step predictor, and as a result the predicted output looks better.


Figure 5.3. Actual κmid (grey), actual κmid filtered (dot-dashed) and a 10-step predicted κmid (black).

Figure 5.4. A 7-step predicted κmid (black) regaining some of the lost time.

5.2.4 Implementation Issues

The suggested predictors are very easy to implement, since they actually can be seen as FIR filters (Finite Impulse Response), see [17]. The mathematical operations required are simply adding and subtracting previous samples of the different variables to get a resulting predicted output.
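A sketch of such a difference-equation predictor (Python; the coefficients, orders and sample values below are made up for illustration, not the estimated ones from Appendix C.1.2):

```python
def arx_predict(y_past, u_past, a, b, nk):
    """One-step ARX prediction as a pure difference equation:
    y_hat(t) = -a1*y(t-1) - ... - a_na*y(t-na)
               + b1*u(t-nk) + ... + b_nb*u(t-nk-nb+1).
    y_past[0] is y(t-1), u_past[0] is u(t-1). Only multiplications and
    additions on stored samples are needed, so it runs cheaply online."""
    acc = 0.0
    for i, ai in enumerate(a):            # output history
        acc -= ai * y_past[i]
    for j, bj in enumerate(b):            # delayed input history
        acc += bj * u_past[nk - 1 + j]
    return acc

# Made-up example with na=2, nb=2, nk=3:
y_hat = arx_predict(y_past=[1.0, 0.8], u_past=[0.0, 0.0, 0.5, 0.3],
                    a=[-1.2, 0.35], b=[0.4, 0.1], nk=3)   # ≈ 1.15
```

For a multi-input predictor like (5.1), the same loop is simply repeated once per input with that input's own b coefficients and deadtime nk.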


Chapter 6

Simulation Models

Since it is not possible to run tests on the actual system (at least not in the design phase), we need a simulator. It does not need to function outside a realistic operating region. Therefore, it is possible to linearize some of the equations describing the digester. In order to achieve a good model, noise needs to be added in a realistic way. As we will see later, this turns out to be very important. The models that will be described in this chapter have all been created in MATLAB - Simulink, and the noise has also been generated there. There will be some nonlinear relationships in the Simulink models.

The simulator construction problem has been approached by dividing the model design into two steps: first the entire digester (Model 1) and then the digester split into two subsystems (Model 2), which can be seen in Figure 6.1. This has been done since the parameters of the first model are better known than those of the second. By comparing the two models, the parameters of the second model can be derived.

Figure 6.1. Simulink model of the digester, here viewed as two connected systems. This is an implementation of Model 2.
