
Institutionen för systemteknik

Department of Electrical Engineering

Examensarbete (Master's thesis)

Evaluation and Configuration of a Control Loop Asset Monitoring Tool

Master's thesis performed in Automatic Control at the Institute of Technology, Linköping University

by

Calle Skillsäter

LiTH-ISY-EX--11/4461--SE
Linköping 2011

Department of Electrical Engineering
Linköpings tekniska högskola, Linköpings universitet


Evaluation and Configuration of a Control Loop Asset Monitoring Tool

Master's thesis performed in Automatic Control
at the Institute of Technology in Linköping

by

Calle Skillsäter

LiTH-ISY-EX--11/4461--SE

Supervisors: André Carvalho Bittencourt

isy, Linköpings universitet

Alf Isaksson

ABB AB

Examiner: Martin Enqvist

isy, Linköpings universitet

Linköping, 29 April 2011


Avdelning, Institution / Division, Department:
Division of Automatic Control
Department of Electrical Engineering
Linköpings universitet
SE-581 83 Linköping, Sweden

Datum / Date: 2011-04-29
Språk / Language: Engelska / English
Rapporttyp / Report category: Examensarbete
ISRN: LiTH-ISY-EX--11/4461--SE
URL för elektronisk version: http://www.control.isy.liu.se, http://www.ep.liu.se
Titel / Title: Evaluation and Configuration of a Control Loop Asset Monitoring Tool
Författare / Author: Calle Skillsäter



Abstract

In this thesis, an automatic control performance monitoring tool is analyzed and evaluated. The tool is called Control Loop Asset Monitor (CLAM) and is a part of the Asset Optimization extension to the ABB platform System 800xA. CLAM calculates and combines a number of performance indices into diagnoses. The functionality, the choice of configuration parameters and the quality of the results from CLAM have been analyzed using data from the pulp mill Södra Cell Mörrum.

In order to get reliable diagnoses from CLAM, it is important that it is correctly configured. It was found that some of the default parameters should be modified and that the recommendations in the user guidelines should be updated. With the current default parameters, there are some combinations of indices that can never exceed the defined alarm severity thresholds.

The conclusions in this thesis have been documented in an online help that also includes simple user instructions for how the results from CLAM should be interpreted. The results have been analyzed together with the staff at Södra Cell Mörrum in order to validate that they are correct and relevant from a user perspective. It was found that the results are correct, but there are some things that can be improved in order to make CLAM more user friendly.


Acknowledgments

This thesis has been performed at ABB Corporate Research and Södra Cell Mörrum. It feels like a privilege to have had the opportunity to see the inside of both the developer of this tool and a user of it. This approach is really something that I can recommend.

My work here has been very educational and fun. The main reason for this is my supervisor Alf Isaksson. Thank you for your patience, enthusiasm and the freedom you gave me. I also want to thank Christian Johansson for your support and interest in my work.

Thank you, Martin Enqvist and André Carvalho Bittencourt at Linköping University, for your support and all the help with my report.

I would also like to thank Södra Cell Mörrum for giving me the opportunity to see how things work in reality. Thank you, Gert Svensson and Magnus Andersson, for your patience and support.

Finally, I want to thank my family and friends. Thank you, Elise, Maths, Anne, Jonna and Sunny. Without you, I would not have been where I am today.

Calle Skillsäter
Västerås, April 2011


Contents

1 Introduction
  1.1 Background
  1.2 Goal
  1.3 Limitations
  1.4 Thesis Outline

2 Pulp Production Process at Södra Cell Mörrum
  2.1 Process Overview
  2.2 Wood
  2.3 Barking and Chipping
  2.4 Digesting
  2.5 Washing and Filtering
  2.6 Bleaching
  2.7 Drying
  2.8 Chemical Recovery

3 Control System at Södra Cell Mörrum
  3.1 System 800xA Platform
  3.2 Control Equipment
  3.3 RegDoc
  3.4 History Logging

4 Causes of Malfunctioning Control Loops
  4.1 Static Friction (Stiction)
  4.2 Dead-band
  4.3 Backlash
  4.4 Saturation
  4.5 Quantization
  4.6 External Causes
  4.7 Other Reasons

5 Control Loop Performance Monitoring
  5.1 Tool Requirements
  5.2 Performance Indices
  5.3 Control Loop Asset Monitoring Tool within System 800xA
    5.3.1 Configuration

6 Analyses of Configuration Parameters
  6.1 Data Set Size
  6.2 Data Interval
  6.3 CO low/CO high
  6.4 Cascade
  6.5 PV low/PV high
  6.6 Weight Parameters
    6.6.1 Final Control Element Summary Weights
    6.6.2 Loop Performance Summary Weights
  6.7 Thresholds
    6.7.1 Severity Thresholds
    6.7.2 Alarm Thresholds
  6.8 Filter Parameters
  6.9 Dead Time
  6.10 Loop Category
  6.11 Resample Interval
  6.12 Aggregate Function
  6.13 Inhibit Value

7 History Logging
  7.1 OPC Direct Logging with 3 Seconds Cyclic Rate
  7.2 Effects on CLAM Results
  7.3 Size Analyses of History Log

8 Validation of Results from CLAM
  8.1 PC451029
  8.2 LC471640
  8.3 LC482122
  8.4 FC542024
  8.5 FC442064

9 Online Help
  9.1 Outline of the Online Help

10 Conclusions
  10.1 Configuration Parameters
  10.2 History Logging
  10.3 Validation of Results from CLAM

11 Future Work

A Performance Indices
C Indices vs. Logging Interval
D Dead Time Histograms

Chapter 1

Introduction

In this chapter, the background of this thesis is described. More detailed information about the production process is given in Chapter 2, and control loop performance monitoring is described in more detail in Chapter 5.

This thesis has been a combination of two worlds. The major part of the work has been done at ABB Corporate Research (SECRC) in Västerås and the rest at Södra Cell Mörrum. ABB is a worldwide engineering company with products in many different areas. One such area is automation systems, where the platform System 800xA is widely used in industries all over the world. One software extension for the 800xA base system is Asset Optimization (AO), which enables real-time asset monitoring, notification, etc. The software Control Loop Asset Monitor (CLAM) is one tool included in AO. It enables monitoring of control loop performance based on advanced combinations of a large number of performance indices (ABB, 2003a).

Today, industrial plants are becoming more complex and automated, in combination with a growing interest in energy savings. A modern process industry may have hundreds or even thousands of control loops, and no human can keep track of the maintenance needs of all of them.

1.1 Background

One user of System 800xA with the AO extension is Södra Cell Mörrum. Their plant in Mörrum is one of five pulp mills in a division of the main organization Södra called Södra Cell. The other mills are located in Värö and Mönsterås in Sweden and Tofte and Folla in Norway. Södra is owned by 52,000 landowners in the southern parts of Sweden and also consists of the divisions Södra Timber, Södra Interiör and Södra Skog.


Södra Cell is one of the world's leading producers of market pulp, with a total production capacity of 2.1 million tons per year. Today, the market for bioenergy is becoming more and more important. Södra delivers 3.7 TWh of green energy each year and is one of the biggest bioenergy companies in Sweden. At Södra Cell Mörrum this is delivered as both electricity and heat. Today, roughly half of every log of wood is turned into pulp and half is used to produce energy (Södra, 2008).

The pulp from Södra Cell is sold to customers producing almost any paper product you can think of, from paperboard and magazines to tea bags and tissues. About 90 percent of the products are sold to customers in Europe and 10 percent to Asia (mainly China and Thailand). The plant in Mörrum contains a large number of control loops, approximately 1200-1400 (Södra Cell Mörrum, 2010b). It would require an enormous amount of personnel to continuously monitor all loops manually. The need for a tool for automatic monitoring of control loops was something that Södra Cell Mörrum realized already in the late 1990s. Since then, they have tried a range of software tools. Some of them did not work as expected and some were hard to use for people with different levels of competence. For the past few years, they have had the System 800xA extension CLAM installed in combination with another ABB tool called Sigma Monitor (Södra Cell Mörrum, 2010c). The Sigma Monitor performs a simple monitoring of the standard deviation of the control error. CLAM provides information about the loop performance and the control actuator (called final control element (FCE) in CLAM). The loop performance analysis includes detection of oscillations and high variance, evaluation of cascade tracking, etc. The FCE analysis includes detection of high static friction, valve leakage, indications of non-optimal actuator size and FCE nonlinearities. CLAM is intended to be a complement to the Sigma Monitor for loops of extra importance, for example loops with a large and direct effect on the product quality, the production economy or the environment.

Besides continuous monitoring of important control loops, Södra Cell Mörrum has another need. When a process segment is about to be investigated for possible improvements, Södra Cell Mörrum wants to have the possibility to use CLAM on all loops in that segment. This would result in a pre-study of the segment, making the work more efficient, as CLAM would give indications on where to put the most resources.

In order to get a reliable diagnosis from CLAM, it is necessary that the software is configured correctly. There are a number of parameters that can be modified to get a monitoring system that is suitable for different types of loops and areas of use.

Before this thesis was performed, the staff at Södra Cell Mörrum regarded CLAM as complicated and hard to use. There were no simple guidelines on how to monitor a new loop or on how the results from CLAM should be interpreted.


1.2 Goal

The goal of this thesis is to clarify how CLAM should be configured for different types of control loops in order to get reliable diagnoses. The results should be formulated into guidelines and organized in an online help feature. The reliability of the CLAM diagnoses should thereby be increased and the configuration simplified for the customers.

Södra Cell Mörrum should finally be supplied with a reliable tool for control loop performance monitoring that is easy for the users to configure and interpret. The vision of this project is to make CLAM a better product with increased value for the customers.

1.3 Limitations

There are some limitations regarding the data logging. At Södra Cell Mörrum, the history log is divided into three parts with different logging density, and only three signals per loop are logged:

• The data are logged for 8 hours with a sampling interval of 5 seconds. After 8 hours, the data are logged for 7 days with an interval of 15 seconds, where every stored value is an average of three 5-second samples. Finally, the data are logged for 30 days with an interval of 1 minute, where every stored value is an average of four 15-second samples (a downsampling sketch follows this list).

• The logged information consists only of controller output (CO), process value (PV) and setpoint (SP).
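The tiered averaging above is easy to express in code. The following is a minimal sketch of such hierarchical downsampling; the function name and the synthetic signal are assumptions for illustration, not ABB code:

```python
import numpy as np

def downsample_mean(values, factor):
    """Average consecutive groups of `factor` samples into one stored value."""
    n = len(values) // factor * factor            # drop any incomplete tail group
    return values[:n].reshape(-1, factor).mean(axis=1)

# Hypothetical raw PV signal, logged every 5 seconds for 12 hours.
rng = np.random.default_rng(0)
raw_5s = 50.0 + rng.normal(0.0, 2.0, size=12 * 60 * 60 // 5)

tier_15s = downsample_mean(raw_5s, 3)     # 15 s log: mean of three 5 s samples
tier_1min = downsample_mean(tier_15s, 4)  # 1 min log: mean of four 15 s samples
```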

1.4 Thesis Outline

In the next chapter, the process of pulp production at Södra Cell Mörrum is described in more detail. In Chapter 3, System 800xA and the control equipment at Södra Cell Mörrum are described. Possible causes of malfunctioning control loops and control loop performance monitoring are described in Chapters 4 and 5. In Chapter 6, results from analyses of how CLAM should be configured are described. An analysis of the history logging is given in Chapter 7, and in Chapter 8 results from CLAM are validated. In the final chapters, conclusions and suggestions for future work are presented.


Chapter 2

Pulp Production Process at Södra Cell Mörrum

In this chapter, a brief description of the pulp production is given. First, an overview is presented and all major steps are then described in more detail.

2.1 Process Overview

The production process from wood to pulp is outlined in Figure 2.1. Wood goes through the steps of barking, chipping, digesting (cooking), washing and filtering, bleaching and drying. The chemicals used in the process are recycled in a separate part of the plant that also produces electricity and heat. The energy surplus is sold on the regular market and yields money from green certificates on the energy market (Södra Cell Mörrum, 2010a).

The plant at Södra Cell Mörrum consists of two separate production lines, due to the fact that the mill has been upgraded in several phases since the opening in 1962. This means that the need for controllers, staff, etc. is increased compared to a plant with the same capacity and a single production line.


Figure 2.1. Pulp production at Södra Cell Mörrum. Wood goes through steps of barking, chipping, digesting, filtering, bleaching and drying. Finally, it leaves the plant as cellulose pulp.

2.2 Wood

At Södra Cell Mörrum, both hardwood and softwood are used as raw material. The main difference between hardwood, such as aspen and birch, and softwood, such as fir or pine, is that softwood has longer cellulose fibers than hardwood. Most of the wood is Swedish (83 % in 2001), and the imported wood is mainly hardwood from Russia and the Baltic countries (Mörrums Bruk, 2002). The raw material arrives as both logs and chips from saw mills. The saw mill chips originate from the outer parts of sawed logs and represent about 30 % of all wood used in the process.


2.3 Barking and Chipping

The logs need to be barked before they can be used in the process. Since the bark is not used as raw material in the pulp, it is removed in large barking drums. These drums consist of large rotating cylinders where the logs are rubbed against each other and the cylinder walls. The removed bark is a very important source of energy that has almost totally eliminated the need for oil for heating at Södra Cell Mörrum (Mörrums Bruk, 2002).

If the barking process does not work properly, the pulp will be impure and more chemicals are needed in the digesting and bleaching steps (Södra, 2004). The barked wood is chipped into small pieces and stored in huge stacks. The cellulose fibers are located inside the wood chips and will later form the pulp, but the fibers are embedded in lignin that has to be removed in the digester.

2.4 Digesting

In the digesting step, the chips are boiled together with cooking chemicals and water. At Södra Cell Mörrum there are ten such cooking vessels, four for line one and six for line two (Södra Cell Mörrum, 2010c). The cooking is done batchwise, i.e. the digester is filled with chips and chemicals and left cooking for about five hours, and the pulp is then pumped to a clearance vessel. This procedure differs from continuous digesters, where chips and chemicals are added and cooked pulp is removed continuously. The cooking chemicals are called white liquor. The main ingredient in white liquor is caustic soda, often used for pipe cleaning in our homes. During the digesting process, bad-smelling gases are formed. They are transported to the soda recovery boiler where they are burned (Södra, 2004). In this way the nearby environment is less affected, and the characteristic "paper mill smell" is today just a memory from the past. When the cooking is finished, the white liquor is polluted by lignin and is then called black liquor.

2.5 Washing and Filtering

In the washing and filtering step, the black liquor is separated from the cellulose pulp and proceeds to the recycling step. Besides liquor and lignin, the pulp also contains impurities like residues of bark and twigs. The pulp passes several steps of filtering and draining before it is pumped to the bleaching step. The separated liquor can be used again after chemical recovery.


2.6 Bleaching

The cleaned pulp is very brown in color. To make the pulp white, it has to be bleached. When the mill was new in the 1960s, this was done with chlorine gas, but today the pulp is bleached with oxygen gas followed by chlorine dioxide (Mörrums Bruk, 2002). The bleaching also further reduces the percentage of lignin.

2.7 Drying

When the pulp reaches the drying step, it has a consistency similar to porridge. First, the pulp is accumulated and a lot of water percolates away. Then more water is removed by vacuum suction and by running the pulp through several press cylinders. The pulp is now solid and is transported to a drying cabinet heated by steam. This step is very energy consuming. After drying, the pulp consists of only approximately 10 % water and is ready for delivery. Finally, the pulp is cut into sheets and bundled together.

2.8 Chemical Recovery

Black liquor from the digester consists of released wood substances, cooking chemicals and about 85 % water. To be able to burn the black liquor in the soda recovery boiler, the water percentage has to be reduced to 25-30 % through several evaporation steps. In this step sulfate soap is removed. After reaction with acid, the soap turns into pine oil that is used for production of biodiesel.

When the black liquor is burned in the recovery boiler, the chemicals react in the heat, forming a melt at the bottom. The impurities burn and generate heat, usable in other parts of the production process. The flue gases are cleaned in several steps before they are released through the top of the chimney. The melt at the bottom is pumped to a causticizing step, where the white liquor is recovered to be used in the process again.


Chapter 3

Control System at Södra Cell Mörrum

3.1 System 800xA Platform

Södra Cell Mörrum uses an ABB automation system called System 800xA. It is a Distributed Control System (DCS), creating a flexible and collaborative platform with large integration possibilities (ABB, 2008). As can be seen in Figure 3.1 below, control equipment from different suppliers and product generations, as well as controllers for different purposes, can easily be attached to the same network. The platform can be accessed through different interfaces, customized for operators, engineers, maintenance personnel, etc. This enables the users to concentrate on their own tasks and thereby work more efficiently.

A common issue in the industry today is the cost of modernization. Motivating the cost of upgrading all old equipment is often unthinkable (ABB, 2003b). Therefore, the solution is to build on what you have. As a consequence, most of the controllers at Södra Cell Mörrum are from the product family Advant Master, where they have the latest controller generation, called AC450. These controllers are connected to System 800xA through Connectivity Servers, providing data from the controllers to operator workplaces, history collection, etc. The installation outline can be seen in Figure 3.2. Modern controllers like the AC 800M may be installed side by side with older equipment like the AC450. This makes it possible for customers to efficiently upgrade their systems and still keep major parts of their older systems.

System 800xA is used in a wide variety of applications that, besides Pulp and Paper, also include Oil and Gas, Petrochemical, Biotech/Pharmaceutical, Power, Water, Utilities, Chemicals/Fine Chemicals, Metals, Mining, etc. (ABB, 2011).


Figure 3.1. System 800xA Extended Automation Platform version 5.1 (ABB, 2011). The platform includes possibilities to attach a wide variety of control products to the same system, including products for power automation, process electrification, safety, PLC, etc.

3.2 Control Equipment

The control module Advant Controller 450 (AC450) is commonly used at Södra Cell Mörrum (Södra Cell Mörrum, 2010b). It can be compared to its descendant, the AC 800M, which is the latest ABB controller on the market. At Södra Cell Mörrum the controllers are attached to a communication network called Master Bus 300 (MB300), a system originating from the 1980s. The network capacity is quite limited compared to today's standards. A common setup is that a unit is not allowed to use the network more often than once every 3 or 9 seconds.

3.3 RegDoc

Södra Cell Mörrum uses a documentation system called RegDoc. This is a database storing parameters for each control loop, such as limits for measured values, units, filter time, scan time, dead band, PID parameters, dead time, cascade mode, etc. When a change is made, a new item is added to the RegDoc history log. This allows the staff to backtrack what has already been done to optimize their controllers.


Figure 3.2. Advant Master Controllers within the System 800xA Platform (ABB, 2011). The Master Bus network, including Advant Master Controllers, is attached to the 800xA platform via connectivity servers.

3.4 History Logging

In System 800xA, there are three potential setups for the history log:

• Time tagged data (TTD) logging.

• OPC Direct Logging with controller generic cyclic subscription (1, 3 and 9 seconds).

• OPC Direct Logging without controller generic cyclic subscription.

The AC 400 series controllers have a historical logging feature called TTD that performs logging directly in the controller (Södra Cell Mörrum, 2010b). This setup is used at Södra Cell Mörrum, where the capacity is 2 hours using a logging interval of 5 seconds. As a complement to the TTD log in the controller, data is transferred in packages over the network to a history log placed on a server. This setup is recommended since it gives the best availability, accuracy and performance (ABB, 2008).

The alternative is to use OPC Direct Logging, with or without controller generic cyclic subscription. In both cases, data is continuously transferred from the controller to the history server sample by sample, without any storage in the controller.

Note: OPC is an industrial consortium that maintains and creates standards for open connectivity within industrial automation. See www.opcfoundation.org for more information.


These two alternatives are the least costly regarding CPU and memory usage in the controller, but due to network limitations, the choice of logging interval might be quite limited.

A cyclic rate of 9 seconds is recommended for the cyclic subscription alternative. In practice, this means that a logging interval smaller than 9 seconds is impossible. A logging interval that is greater than 9 seconds and not evenly divisible by the cyclic rate means that the logging interval is not constant. A discussion about the effects of different logging setups on CLAM is presented in Section 6.2.


Chapter 4

Causes of Malfunctioning Control Loops

In the process industry, there can be several causes of malfunctioning control loops besides bad tuning. A study from 1993 on pulp and paper processes showed that as many as 30 % of all control loops in a mill have equipment problems (Ender, 1993). A common consequence of such equipment problems is loop oscillations. In this chapter, a brief description of some of the problems is given.

4.1 Static Friction (Stiction)

A common cause of oscillations is high static friction (stiction) in control valves (Horch, 2000). This means that a valve is stuck in a certain position and that a higher CO is required in order to start moving the valve. Since the dynamic friction is smaller, the valve can easily be moved to a new position once it is loose, but when it stops it will be stuck again. Usually, these stop positions are on opposite sides of the desired SP, which means that the controller will then try to move the valve back in the opposite direction. This behavior can often be detected easily in the PV and CO signals, where the PV signal is shaped like a square wave and the CO signal like saw teeth. The behavior is usually tuning-independent as long as there is integral action (Horch, 2000), but until the next maintenance stop, an ad hoc strategy can be to use a dead-band in the controller.
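To make the square-wave/saw-tooth pattern concrete, the following is a minimal simulation sketch assuming a one-parameter stiction model (the valve only slips once the CO differs enough from the valve position), a static unit-gain process and pure integral control. It illustrates the mechanism; it is not the model used in CLAM, and all parameter values are assumed:

```python
import numpy as np

def simulate_stiction(n=600, sp=50.0, ki=0.05, stick_band=4.0):
    """Simulate a sticky valve under pure integral control."""
    co = np.zeros(n)            # controller output
    pv = np.zeros(n)            # process value (here simply the valve position)
    valve = 45.0                # valve initially stuck below the setpoint
    u = valve
    for t in range(n):
        e = sp - (pv[t - 1] if t else valve)
        u += ki * e                        # integral action ramps the CO
        if abs(u - valve) > stick_band:    # static friction overcome: valve slips
            valve = u
        co[t] = u
        pv[t] = valve
    return co, pv  # CO looks like saw teeth, PV like a square wave around the SP
```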


4.2 Dead-band

A dead-band (also called dead-zone) simply represents the amount an input signal needs to change before the output changes (Horch, 2000). This approach is often used to decrease the load on the hardware due to fewer A/D conversions (Södra Cell Mörrum, 2010c). On the other hand, the use of a dead-band might lead to sluggish behavior, followed by increased variance and oscillations.

Nowadays, computer performance is huge compared to when the system was installed. Therefore, the need for a dead-band is much smaller today, and a common action is to decrease or remove the dead-band if a loop does not perform well (Södra Cell Mörrum, 2010c).

4.3 Backlash

In a mechanical system where elements are not directly connected to each other, there will be backlash (Horch, 2000). This is a dynamic nonlinearity, meaning that its state depends on the current input and past states. A simple example is two gear wheels, see Figure 4.1. As long as the driving gear wheel rotates in the same direction, the backlash has no effect, but when the direction changes, a clear backlash is present. This is due to the fact that it takes some time for the gear teeth to grab hold of each other again.

Figure 4.1. Two gear wheels where the large distance between the gear teeth will cause backlash when the driving gear wheel changes direction.

4.4 Saturation

Due to physical constraints, control signals are always limited (Horch, 2000). For example, a valve has a limited capacity and engines can only be fed with a limited amount of power. This means that even if the control output is calculated to be at a level outside the allowed limits, the actual output is set to a value as high as the constraints allow, see Figure 4.2. If this problem appears often in a control loop, a capacity upgrade or process rebuild might be necessary.


Figure 4.2. Illustration of the saturation principle. All values equal to or higher than the limit will be set to the limit value.

4.5 Quantization

In a computer-based control system, measurements are converted from analog to digital values. In older systems the resolution might be relatively low, which may cause oscillations (Horch, 2000). When an analog value is converted to a digital value, it is assigned to a discrete value using a certain number of bits, see Figure 4.3. As a consequence, a small change in the analog signal might result in another discrete value, which may lead to a larger control move than necessary. If this situation appears, an A/D converter upgrade is reasonable.

Figure 4.3. Illustration of the Analog/Digital conversion principle. The nearest digital value (dotted line) is assigned to the analog value (dashed line).
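As a minimal illustration of this rounding effect (the resolution and signal range below are assumed values, not taken from the thesis):

```python
import numpy as np

def quantize(signal, n_bits, lo=0.0, hi=100.0):
    """Round an analog signal to the nearest of 2**n_bits levels in [lo, hi]."""
    step = (hi - lo) / (2 ** n_bits - 1)
    return lo + np.round((np.clip(signal, lo, hi) - lo) / step) * step

t = np.linspace(0.0, 1.0, 500)
analog = 50.0 + 0.4 * np.sin(2 * np.pi * 3 * t)  # small variation around 50 %
coarse = quantize(analog, n_bits=7)   # step is about 0.79 %, so the digital
                                      # value toggles between adjacent levels
```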


4.6 External Causes

Since many loops in the process industry are coupled, an oscillating loop will probably affect other loops (Horch, 2000). These external oscillations often appear outside the frequency range where the controller is configured to work, meaning that the oscillation cannot be removed by the controller. Diagnosing this kind of oscillation is very challenging, since it has to be determined whether the oscillations are caused internally in the loop or by another loop.

4.7 Other Reasons

Even two well-tuned single loops might start oscillating when they are coupled. Of course, there might be other process-specific reasons for malfunctioning control loops. One example from Södra Cell Mörrum is flow loops where the flow suddenly might decrease due to the consistency of the cellulose pulp and the concentration of water.


Chapter 5

Control Loop Performance Monitoring

Today, more and more people realize that malfunctioning control loops with high variance or oscillations may lead to unnecessary costs. Oscillations can wear out valves faster than necessary, and high variance may consume more energy than necessary to fulfill the requested quality requirements. A typical example from pulp production where high variance consumes energy is the drying of pulp. If the variance is high, it is necessary to dry the pulp more to make sure that the percentage of water never exceeds a certain limit. If the variance is low, the setpoint can be moved closer to the limit. This means that the pulp is not drier than necessary, and a lot of energy can be saved.

As industrial plants become more complex and the number of control loops increases, manual performance monitoring is no longer an alternative. Therefore, an automatic tool for control loop performance monitoring is necessary.

5.1 Tool Requirements

There are some restrictions when applying an automatic performance monitoring tool. The most important is that it should be non-invasive, i.e. not disturb the production in any way (Horch, 2000). As a consequence, the tool cannot perform any dedicated experiments to gather information, since that would directly influence the production. Of course, the more information you have, the easier it gets to make good performance evaluations, but an investment in new sensors or similar is often unthinkable. Only available signals may be used, i.e. logged values of PV, CO and SP. The major request is of course to have a tool that can find malfunctioning loops and, additionally, diagnose the reason for the abnormal behavior and suggest how to eliminate it.


5.2 Performance Indices

In CLAM, performance indices are used to estimate the performance of the control loops. Malfunctioning loops are usually revealed through high variance or oscillations (Horch, 2000). CLAM uses a number of indices for these purposes, but also includes indices for detection of loop mode, data validity, etc. All indices in CLAM are described in Appendix A.

A popular performance index is the Harris index (Harris, 1989), comparing the actual variance with the minimum achievable variance. It has become very popular in the process industry since it is easy to implement, easy to interpret, non-invasive and requires only limited knowledge about the actual process (Horch, 2000). Only a sequence of PV measurements and knowledge about the process dead time are needed.
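One common formulation of the index (the exact definition used in CLAM may differ) is the ratio between the minimum achievable variance and the actual output variance:

$$\eta = \frac{\sigma^2_{\mathrm{mv}}}{\sigma^2_{y}}, \qquad 0 < \eta \le 1,$$

where $\sigma^2_{y}$ is the measured variance of the process value and $\sigma^2_{\mathrm{mv}}$ is the variance that an ideal minimum-variance controller could achieve given the process dead time. A value close to 1 indicates little improvement potential.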

5.3 Control Loop Asset Monitoring Tool within System 800xA

An extension to System 800xA is Asset Optimization. One of its features is the Control Loop Asset Monitor (CLAM), outlined in Figure 5.1. First, a number of performance indices are calculated (1). A brief description of the indices is given in Appendix A. These indices are thresholded (2) and combined into preconditions (3). The preconditions are multiplied by weights (4) and finally summed into a diagnosis (5).

If the diagnosis exceeds an alarm severity threshold, the user is alerted by a new item in the CLAM alarm list. Thereby, a problem can be detected and rectified before it becomes severe enough to affect the plant operation (ABB, 2010b). If there already is an alarm in the alarm list for the current CLAM object at the same severity level, no new item is added to the list until the next severity level is exceeded.
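The five steps in Figure 5.1 can be summarized in a short sketch. All names and numbers below are hypothetical, and the real precondition logic in CLAM combines indices in more involved ways:

```python
def clam_style_diagnosis(indices, index_thresholds, weights, severity_levels):
    """(1) indices are given, (2) thresholded to 0/1, (3) used as preconditions,
    (4) multiplied by weights, (5) summed into one normalized diagnosis value,
    which is then compared against the alarm severity thresholds."""
    preconditions = {k: float(v > index_thresholds[k]) for k, v in indices.items()}
    diagnosis = (sum(weights[k] * preconditions[k] for k in weights)
                 / sum(weights.values()))
    exceeded = [lvl for lvl in sorted(severity_levels) if diagnosis > lvl]
    return diagnosis, exceeded

diag, alarms = clam_style_diagnosis(
    indices={"oscillation": 0.7, "harris": 0.3},
    index_thresholds={"oscillation": 0.5, "harris": 0.5},
    weights={"oscillation": 0.25, "harris": 0.25},
    severity_levels=(0.1, 0.2, 0.3, 0.4),
)
print(diag, alarms)  # 0.5, and all four severity levels are exceeded
```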


Figure 5.1. Internal structure of CLAM. A number of performance indices (1) are thresholded (2) and combined into preconditions (3). These preconditions are multiplied by weights (4) and summed into a diagnosis (5).


By double-clicking on an item in the CLAM alarm list, the main window (called the faceplate) opens, see Figure 5.2. Users are directly informed of the severity of possible errors by the status icons, inspired by traffic lights. A red light means that severe problems are present, yellow that problems are present but do not require immediate attention, and green that no problems are severe enough to exceed any thresholds.

Figure 5.2. CLAM Faceplate. This is the main window where a summary of the results from CLAM is presented. To the left a summary of the FCE performance is presented and to the right there is a summary of the loop performance.


To get more information, there are several sources providing detailed information. In the tabs Final Control Element Details and Loop Performance Details, a list of diagnoses is presented, see Figure 5.3 and Figure 5.4. If any signal exceeds its threshold, the description text is changed to red. It is possible that some index has not been calculated, for example as a consequence of the chosen loop category. This is shown as a message saying "Current Data Not Analyzable". The outline is similar for both tabs. In both cases, there might be red messages indicating errors while the status bars on the main faceplate are green. This is due to the fact that not all hypotheses are taken into account for the main summaries. It can also be the case that an alarming index is weighted such that it cannot give rise to any status change by itself, or that the error has not yet become visible due to filtering.

Figure 5.3. Example from the Final Control Element Details tab where information about FCE Size, Leakage, etc. is presented. Here there is an indication of nonlinearity in the FCE.


Figure 5.4. Example from the Loop Performance Details tab where information about oscillations, variance, cascade tracking, etc. is shown. Here there are indications of data quantization and few setpoint crossings.

In order to make decisions about needed actions, there is information about possible causes and suggested actions to be found in the Asset Reporter, see Figure 5.5. It is opened by clicking one of the buttons in the upper right corner of the CLAM window. There, detailed information about the latest execution of CLAM is presented.

Figure 5.5. Condition view of the Asset Reporter. Here, both the Loop Performance summary and the FCE summary indicate the quality status good.

By right-clicking on a preferred row and choosing Condition Details..., detailed information is provided, see Figure 5.6. There, descriptions of possible causes and a list of suggested actions are given. The suggested actions originate from the diagnoses presented in the tabs Final Control Element Details and Loop Performance Details.


Figure 5.6. Asset Reporter Condition Details. Here, information about the last CLAM execution is presented, including a list of possible causes and suggested actions.

Finally, there is a possibility to see the original data (SP, PV and CO) under the Trend tab, see Figure 5.7. The displayed plots allow the user to zoom in and investigate the signals in great detail.

Figure 5.7. Trend display showing SP (straight blue line) and PV (varying green signal).

5.3.1 Configuration

In order for CLAM to generate reliable results, the software must be configured correctly. There are a number of different parameters to be set by the user, see Table 5.1. Since a major part of this thesis was to analyze the configuration parameters, they are described in detail in Chapter 6.


Table 5.1. CLAM configuration parameters.

Name/Description | Where to configure
Data Interval | CLAM Faceplate / Loop Configuration
CO low | CLAM Faceplate / Loop Configuration
CO high | CLAM Faceplate / Loop Configuration
PV low | CLAM Faceplate / Loop Configuration
PV high | CLAM Faceplate / Loop Configuration
Loop Category | CLAM Faceplate / Loop Configuration
Cascade | CLAM Faceplate / Loop Configuration
Alarm severity levels | Config view / Conditions
Data Set Size | CLAM Config view / Asset Parameters
ResampelIntervalSec | CLAM Config view / Asset Parameters
AggregateFunction | CLAM Config view / Asset Parameters
Filter Loop Performance Summary | CLAM Config view / Asset Parameters
Filter Final Control Element Summary | CLAM Config view / Asset Parameters
Weight H FCE Stiction Backlash | CLAM Config view / Asset Parameters
Weight H FCE Leakage | CLAM Config view / Asset Parameters
Weight P Harris Index | CLAM Config view / Asset Parameters
Weight P Setpoint Crossing Index | CLAM Config view / Asset Parameters
Weight P Oscillation Index | CLAM Config view / Asset Parameters
Weight P Controller Output Saturation | CLAM Config view / Asset Parameters
Weight P Manual Mode | CLAM Config view / Asset Parameters
Weight P Cascade Tracking | CLAM Config view / Asset Parameters
Weight P Response Speed | CLAM Config view / Asset Parameters
Threshold FCE Critical | CLAM Config view / Asset Parameters
Threshold FCE Severe | CLAM Config view / Asset Parameters
Threshold FCE Warning | CLAM Config view / Asset Parameters
Threshold FCE Moderate | CLAM Config view / Asset Parameters
Threshold LPS Critical | CLAM Config view / Asset Parameters
Threshold LPS Severe | CLAM Config view / Asset Parameters
Threshold LPS Warning | CLAM Config view / Asset Parameters
Threshold LPS Moderate | CLAM Config view / Asset Parameters
Dead Time | CLAM Config view / Asset Parameters
Inhibit Value | CLAM Config view / Asset Parameters


Chapter 6

Analyses of Configuration Parameters

In this chapter, results from this project regarding the configuration parameters are described and analyzed. The results are based on simulations, source code analyses and discussions with the staff at Södra Cell Mörrum.

In CLAM, there are a number of configuration parameters that can be changed by the user. Some parameters affect the index calculations, others how the data should be collected or how the fusion of indices into diagnoses should be performed. Previously, it was not documented which parameters affect which indices. To determine this, the parameters were changed one by one and the source code was read. This resulted in the table presented in Appendix B.

6.1 Data Set Size

In the loop configuration tab, there is a possibility to adjust the Data Set Size. The lower limit is 400 samples and the upper limit is 5000 samples (ABB, 2010a). If the number of samples were less than 400, the auditing algorithms would not be able to make reliable diagnoses. The more data used, the slower the oscillations that can be detected. By default, 1000 samples are used, but this means that slow oscillations, like the example in Figure 6.1 below, will not be detected. Here, the oscillation period is about 1500 samples, and the auditing algorithms require at least a few periods within a data set to be able to detect an oscillation.


Figure 6.1. Example of a slow oscillation that is impossible to detect if not enough samples are used.

In order to see how the CPU load was affected when the data set size was changed to 5000 samples, the CPU load of the Asset Optimization server at Södra Cell Mörrum was monitored during an execution of 42 CLAM objects. The test showed that the CPU usage was maximized for 2 minutes and 12 seconds, see Figure 6.2. This gives an average of 3.1 seconds per CLAM execution.

By definition, the maximum number of CLAM objects in a system is 500. Using the time average, this would result in a CPU usage peak lasting about 26 minutes. With the setup at Södra Cell Mörrum, with 5000 samples and an execution interval of 25000 seconds, there are no problems with the CPU load that would motivate using fewer than 5000 samples.

It must be mentioned that the amount of time needed for one CLAM execution depends on how severe the loop problems are. As an example, calculations of stiction indices will only be performed if the loop oscillates. Detailed studies of the current loops showed that at least five of them were oscillating and at least one was diagnosed as affected by stiction. Therefore, it can be concluded that even if the time needed for execution is just a rough estimate, there are no signs of performance problems when using 5000 samples.


Figure 6.2. CPU usage during an execution of 42 CLAM objects. The usage peak lasted for 2 minutes and 12 seconds. By looking at the plot in detail it can be seen that there are actually 42 peaks present, each corresponding to one CLAM object.

6.2 Data Interval

The parameter Data Interval (defined in seconds) adjusts how often the CLAM software should be executed. By default, this value is set to 28800 seconds (8 hours). According to the configuration guide, this should always work and should usually not be altered. If the Data Interval is set to a low value, CLAM runs more often and thereby puts a larger load on the system. On the other hand, if the Data Interval is set to a high value together with a small Data Set Size, there is a risk of leaving gaps between the executions where the performance is not monitored, see Figure 6.3.

Figure 6.3. Data Set Size vs. Data Interval. This figure shows how monitoring gaps may appear when using a high data interval together with a small data set size.

To make sure that no information or events are lost between two execution points, the Data Interval should not be larger than the sample interval of the log multiplied by the Data Set Size. For example, at Södra Cell Mörrum the log sample interval is set to 5 seconds, and if the Data Set Size is set to 5000 samples, this means that the Data Interval cannot be larger than 25000 seconds, as expressed below.
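With $T_{\mathrm{interval}}$ denoting the Data Interval, $T_{\mathrm{log}}$ the log sample interval and $N$ the Data Set Size (notation introduced here for clarity), the rule can be written as:

$$T_{\mathrm{interval}} \;\le\; N \cdot T_{\mathrm{log}} \;=\; 5000 \cdot 5\ \mathrm{s} \;=\; 25000\ \mathrm{s}.$$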


Hence, the default value of 28800 seconds would leave a gap of about one hour where the process is not monitored. This is of course a balance between data gaps and increased load on the system. Since changes in the system dynamics are often slow and probably long lasting, small gaps in the data sequence would not be a problem, since these events would probably appear again. According to the staff at Södra Cell Mörrum, the CLAM alarm list is normally checked once a day. This does not necessarily have to affect how often the software should be executed. Therefore, this will instead be taken into account during the configuration of the filter parameters.

6.3 CO low/CO high

CO low and CO high are used to normalize the controller output in order to obtain a signal in the interval 0-100 %. Normally, these parameters do not need to be configured, since the logged CO values are often already defined in the correct interval. Otherwise, they should be set according to the signal range of the controller output. The parameters affect a large number of performance indices, and it is of great importance that they are configured correctly.

6.4 Cascade

This parameter is used by the cascade mode index and for choosing the correct dead time from the loop category table. There are two alternatives: master or slave. It is recommended to choose the master alternative if the loop provides its output as a setpoint to other underlying loops, and slave otherwise (ABB, 2010a). This means that the slave alternative should be chosen for single loops. According to the staff at Södra Cell Mörrum, this configuration approach is confusing. Therefore, it should be considered to divide slave and single into two separate alternatives.


6.5 PV low/PV high

These parameters are used to normalize the process value, setpoint and control error to an interval of 0-100 %. Most index algorithms use these normalized variants, and the parameters are therefore of great importance for the end result. For example, the normalization interval is decisive for the calculation of the standard deviation: a deviation of one engineering unit (l/s, kPa, etc.) from the setpoint in a small interval results in a larger normalized standard deviation than an equal deviation in a larger interval.

The configuration guide for System 800xA states that data outside the normalization interval will be excluded during analysis (ABB, 2010a). Whether this is really the case is rather doubtful after source code investigations. For example, the algorithm for outlier detection only uses the unnormalized control error as input.

The configuration guide also suggests that PV high ideally should be set to a value twice as big as the upper limit of the actual signal range of the process value. This is not correct (ABB, 2011-01-24). It is clear that a high value results in lower sensitivity for the auditing algorithms, with the disadvantage that the ability to trace simple errors may be limited. A low value would, on the other hand, result in higher sensitivity, with the disadvantage that the algorithms may find errors that do not exist.

Since the CLAM users do not have the opportunity to adjust thresholds for specific indices, adjusting the PV interval is their only way to tune the algorithm sensitivity. The user recommendation must therefore be to set PV high to a lower value if increased sensitivity is wanted and to a higher value if less sensitivity is desired.

It can be discussed whether there should be some other way to adjust the sensitivity, by letting the user specify some other scaling/sensitivity parameter, since adjusting the PV interval may be confusing. Since this would require major changes in the source code, it cannot be done until there is a new release.

At Södra Cell Mörrum, the operating intervals for each process are well documented. The users at this site can therefore easily configure PV low/PV high in line with reality, as long as the documentation is up to date.


6.6 Weight Parameters

There are two different types of weight parameters, defining how the FCE Summary and the Loop Performance Summary should be weighted together. They are described in the two following sections.

6.6.1 Final Control Element Summary Weights

For the FCE summary, there are only three configurable parameters: Weight H FCE Stiction Backlash, Weight H FCE Leakage and Weight H Loop Nonlinearity. They adjust how much the results from the thresholded indices should be taken into account. As can be seen in Figure 6.4, these signals are multiplied by individual weight parameters before they are added. The result of the summation is compared against the alarm thresholds mentioned earlier. By default, the stiction/backlash weight is set to 0.9, the leakage weight to 0.1 and the loop nonlinearity weight to 0.2. In the default case they do not sum to one, but this does not matter, since the parameters are normalized by the software with the sum of the weights.

Figure 6.4. Final Control Element Summary. Indices calculating FCE Stiction/Backlash, FCE Leakage and FCE Loop Nonlinearity are thresholded, multiplied by weights and summed together into an FCE summary parameter.

FCE Leakage is only calculated when there is no oscillation, while oscillations are a requirement for calculating FCE Stiction/Backlash. Since stiction and backlash are both nonlinearities, the FCE Stiction/Backlash signal can only be one when FCE Loop Nonlinearity is one.


Consider the case when the leakage signal is one (indicating leakage) and the other signals are zero. With the default weights, this would result in

$$\mathrm{FCE_{SUM}} = \frac{0 \cdot 0.9 + 0 \cdot 0.2 + 1 \cdot 0.1}{0.9 + 0.2 + 0.1} = 0.083.$$

Even if no filtering (see Section 6.8 for details) is used, the FCE summary would never be able to exceed any alarm severity threshold (0.2/0.4/0.6/0.8). Thus, this kind of error would not even result in an alarm on the lowest severity level.

For the stiction case, the result would be

$$\mathrm{FCE_{SUM}} = \frac{1 \cdot 0.9 + 1 \cdot 0.2 + 0 \cdot 0.1}{0.9 + 0.2 + 0.1} = 0.92.$$

This means that in case of stiction it would be possible to reach all severity levels. If there is some nonlinearity but no stiction in the process, the FCE summary would be 0.17, which would not exceed any severity level.
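The calculations above are easy to verify numerically. A small sketch (the dictionary keys are shorthand, not the actual CLAM parameter names):

```python
weights = {"stiction_backlash": 0.9, "nonlinearity": 0.2, "leakage": 0.1}

def fce_sum(preconditions):
    """Normalized weighted sum of the thresholded FCE preconditions (0 or 1)."""
    s = sum(weights[k] * preconditions[k] for k in weights)
    return s / sum(weights.values())

print(fce_sum({"stiction_backlash": 0, "nonlinearity": 0, "leakage": 1}))  # 0.083
print(fce_sum({"stiction_backlash": 1, "nonlinearity": 1, "leakage": 0}))  # 0.917
print(fce_sum({"stiction_backlash": 0, "nonlinearity": 1, "leakage": 0}))  # 0.167
```

With the default alarm thresholds 0.2/0.4/0.6/0.8, only the stiction case can trigger an alarm.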

As seen above, the default FCE weight parameters are not satisfactory, since there is no possibility to detect leakage or nonlinearities unless they are combined. Therefore, a change of the default thresholds or the alarm severity levels should be considered. It should also be mentioned that the index naming is not consistent today: Weight H Loop Nonlinearity should be renamed Weight H FCE Loop Nonlinearity in order to follow the naming convention.

If the software for some reason is unable to calculate one of the preconditions, it will still affect the result, since it is taken into account in the normalization. This problem appears in the Loop Performance Summary as well and will therefore be discussed in further detail in Chapter 11.

6.6.2 Loop Performance Summary Weights

There are seven different weights for the Loop Performance Summary, one for every precondition. The fusion of thresholded indices is more complicated than in the Final Control Element Summary, see Figure 6.5.

In this case, the default alarm severity levels are 0.1, 0.2, 0.3 and 0.4. Also in this setup, there are some preconditions that can never give rise to any alarm by themselves. For example, independently of filtering, the preconditions bad cascade tracking and sluggish control are both unable to give rise to any alarms without help from other preconditions, since not even the lowest threshold can be exceeded. If filtering is used, the 0.1-weighted preconditions in Figure 6.5 will all converge to 0.1, but they will never exceed the threshold and can thereby never give rise to any alarm. Thus, with the default configuration, only two of seven preconditions can separately result in an alarm. Therefore, it might be a good idea to reconsider the default Loop Performance Summary weights.

As mentioned in the FCE part above, preconditions that for some reason have not been calculated will still affect the result as a consequence of the normalization.


Figure 6.5. Loop Performance Summary. Several performance indices are calculated, combined into preconditions and multiplied by weights. Finally, they are summed together into a loop performance summary parameter.

6.7 Thresholds

In the CLAM software, there are three major types of thresholds: thresholds on the index level, alarm thresholds and thresholds adjusting the alarm severity levels. The users cannot adjust the thresholds on the index level, and they are therefore excluded from this thesis.

6.7.1 Severity Thresholds

The alarm severity levels are adjusted by changing the thresholds in the Conditions tab. The severity levels should be set between 1 and 1000. They adjust how the alarms are presented in System 800xA. These parameters usually do not need to be changed and can therefore be left at their default values.


Table 6.1. Default alarm thresholds for the Final Control Element Summary and the Loop Performance Summary.

Threshold FCE Critical | 0.8
Threshold FCE Severe | 0.6
Threshold FCE Warning | 0.4
Threshold FCE Moderate | 0.2
Threshold LPS Critical | 0.4
Threshold LPS Severe | 0.3
Threshold LPS Warning | 0.2
Threshold LPS Moderate | 0.1

6.7.2 Alarm Thresholds

In CLAM there are four alarm thresholds for the final control element summary and four thresholds for the loop performance summary. They are divided into moderate, warning, severe and critical.

As mentioned before, the different indices are thresholded and then weighted together. The weighting results in a value between zero and one, where one means that a threshold is exceeded and zero that it is not. The default thresholds are presented in Table 6.1 below. As an alternative to changing these thresholds, the weight parameters can be modified.

6.8 Filter Parameters

There are two configuration parameters with the prefix "Filter" (not "W" as it says in the System 800xA configuration guide): one filter parameter for the Final Control Element Summary and one for the Loop Performance Summary. They both work in the same manner, performing low-pass filtering to adjust how much the result from the last CLAM execution should be taken into account compared to the prior ones. The filtering is performed using an exponential moving average. The filtered value $D_{\mathrm{out}}$ represents the diagnostic parameter that is shown in the CLAM main faceplate and is calculated as

$$D_{\mathrm{out}}(t) = f \cdot D(t) + (1-f) \cdot D_{\mathrm{out}}(t-1),$$

where $f$ is the filter parameter and $D$ represents the latest calculated diagnostic summary. By default, the filter parameters are set to 0.5. According to the staff at Södra Cell Mörrum, the CLAM alarm list is checked once a day, usually in the afternoon. To avoid facing a huge amount of alarms, they do not want alarm upgrades with a higher frequency than once a day. Since the aim of their use of CLAM is to discover changes in the dynamics, which often happen slowly over a long period of time, it does not matter if it takes a few days before the alarms exceed the thresholds and become visible in the alarm list.
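The effect of the filter on alarm latency can be reproduced with a few lines of code. The sketch below assumes a constant diagnosis value and one CLAM execution every 25000 seconds, as in the setup at Södra Cell Mörrum; exact hour counts may differ slightly from the figures in this section:

```python
def alarm_latency(d, f=0.2, interval_h=25000 / 3600,
                  thresholds=(0.1, 0.2, 0.3, 0.4)):
    """Iterate D_out(t) = f*D + (1-f)*D_out(t-1) for a constant diagnosis d and
    report, in hours, when each severity threshold is first exceeded."""
    reachable = [th for th in thresholds if th < d]   # D_out converges to d
    d_out, t, crossed = 0.0, 0, {}
    while len(crossed) < len(reachable):
        d_out = f * d + (1 - f) * d_out
        t += 1
        for th in reachable:
            if th not in crossed and d_out > th:
                crossed[th] = round(t * interval_h, 1)
    return crossed

print(alarm_latency(0.11))  # small error: lowest level after roughly 3 days
print(alarm_latency(0.5))   # large error: climbs through all four levels,
                            # upgrading more than once per day at first
```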


Of course, the configuration of the filter parameters depends on the choice of the Data Set Size and the Data Interval. A short data sequence and a large filter value would make the diagnosis sensitive to temporary disturbances. In the same way, with a low value of the Data Interval, CLAM would calculate new diagnoses more often, thereby making changes visible in the alarm list very fast. Consequently, a larger execution interval and a rather long data sequence allow a larger filter value without being sensitive to temporary disturbances.

Compared to the alarm thresholds, in the Loop Performance Summary case, an alarm on the lowest level (moderate) requires a value larger than 0.1 to be registered. The other severity levels are 0.2, 0.3 and 0.4.

In order to determine a suitable filter value, diagnosis values of 0.11 and 0.5 were compared. In Figure 6.6, Figure 6.7 and Figure 6.8 the alarm curves are plotted for the filter values 0.1, 0.2 and 0.5 (default).

Figure 6.6. Loop performance summary alarm curve with filter parameter f=0.1. Here new alarms for the severe error are not generated more than once in 24 hours (until the highest level is reached), while it takes 176 hours for a small error to generate an alarm.


Figure 6.7. Loop performance summary alarm curve with filter parameter f=0.2. Here new alarms for the severe error are generated more than once in 24 hours (until the highest level is reached), while it takes only 78 hours for a small error to generate an alarm.


Figure 6.8. Loop performance summary alarm curve with filter parameter f=0.5. Here new alarms for the severe error are generated several times in a period of 24 hours (until the highest level is reached) and it takes less than 24 hours for the small error to generate an alarm.

As mentioned before, the staff at Södra Cell Mörrum wished that the alarm severity levels would not be upgraded more than once a day. This is rather hard to achieve, since large diagnosis values exceed the thresholds faster than small values, and it is usually not desirable to wait too long for small diagnosis values to appear in the alarm list. This can be seen in Figure 6.6, where the large error is upgraded about once every 24 hours, while the small error does not show up in the alarm list until about 176 hours have passed (more than one week). Hence, there is a trade-off between receiving a lot of alarms and discovering small diagnosis values within a reasonable amount of time. For the needs at Södra Cell Mörrum, a filter value around 0.2, shown in Figure 6.7, seems reasonable. Then a small error would appear in the alarm list after about 3 days, and a large error would not result in new alarms too often.
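The time it takes for a constant diagnosis value D to cross a threshold T can also be estimated in closed form. Starting from D_out(0) = 0, the filter recursion gives D_out(n) = D(1 − (1 − f)^n), so the threshold is crossed after n = ln(1 − T/D)/ln(1 − f) executions. The sketch below evaluates this; the 8-hour execution interval is an assumed value chosen only for illustration, and the alarm curves in Figures 6.6-6.8 were generated from actual CLAM runs, so the numbers agree only roughly.

```python
import math

def executions_to_alarm(d, threshold, f):
    """Number of CLAM executions before a constant diagnosis d pushes
    the filtered value D_out past the threshold (D_out starts at 0)."""
    if d <= threshold:
        return math.inf  # D_out converges to d and never crosses
    return math.ceil(math.log(1 - threshold / d) / math.log(1 - f))

EXECUTION_INTERVAL_H = 8  # assumed interval between CLAM executions

for f in (0.1, 0.2, 0.5):
    n = executions_to_alarm(d=0.11, threshold=0.1, f=f)
    print(f"f={f}: small error alarms after ~{n * EXECUTION_INTERVAL_H} h")
# f=0.1: ~184 h, f=0.2: ~88 h, f=0.5: ~32 h -- the same order of
# magnitude as the curves in Figures 6.6-6.8.
```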

For the FCE case with a filter value of 0.2, a small error of 0.21 and a large error of 0.9, the result is shown in Figure 6.9. Here the severity levels are 0.2, 0.4, 0.6 and 0.8. By default, the LPS and FCE filter parameters are set to equal values and there is usually no need to configure them differently, but the possibility is available if the user for some reason would like to tune them separately.


The description of these parameters in the System 800xA configuration guide is outdated and needs to be rewritten.

Figure 6.9. Final Control Element summary alarm curve with filter parameter f=0.2. Here new alarms for the severe error are not generated more than once in 24 hours (until the highest level is reached), while it takes 80 hours for a small error to generate an alarm.

6.9 Dead Time

In CLAM, there is a configuration parameter called dead time that represents the time it takes until a change in the controller output becomes visible in the process value. It affects the Harris index and the ACF ratio index and is by default set to the values shown in Table 6.4. At Södra Cell Mörrum, the dead times are often well documented in their documentation system, and it is therefore easy for the staff to look up the correct values.


In the data collected at Södra Cell Mörrum, the dead time parameter is defined in 18 out of 24 cases. To investigate how the indices are affected when CLAM is configured with the correct or the default dead time, the algorithm for calculating the Harris index was run three times: once with the RegDoc value (see Section 3.3), once with the RegDoc value + 20 % and once with the default value. The result is shown in Table 6.3. In most cases, there is a major difference between using the estimated dead time and the default one. It is clear that the Harris index is affected by the choice of dead time: a longer dead time gives a higher Harris index, i.e., less improvement potential.

In Table 6.3, some of the loops exceed the index thresholds. In 5 of 12 cases, the correct dead time results in a value below the threshold (alarm), while the default value gives a value above the threshold (no alarm), resulting in missed detections. It can therefore be stated that the choice of the dead time parameter is of great importance. In these particular examples, the default dead times were quite far from the correct ones.

To investigate how a smaller deviation would affect the result, the algorithm was fed with values 20 % larger than the estimated ones. For most loops, this resulted in a small deviation or sometimes no deviation at all. For some loops, like LC471640, the resulting deviation was quite large and led to a missed detection. Thus, the choice of the dead time parameter is of great importance for the result: a dead time larger than the real one gives an index value that accepts worse control, while a smaller dead time could result in false alarms. Since even a small deviation from the correct value may give a misleading result, it is also important that the documented dead time parameters are correct; otherwise the result may be just as misleading as in the case with the default parameters.
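To see why the dead time enters the calculation, the sketch below shows one common way of estimating the Harris index: an AR model is fitted to the mean-removed process output, and the minimum achievable variance is estimated from the first d impulse-response coefficients of the fitted model, where d is the dead time in samples. This is a generic minimum-variance estimator under these assumptions, not CLAM's internal implementation; the function name and the AR order are chosen for the example.

```python
import numpy as np

def harris_index(y, dead_time_samples, ar_order=20):
    """Estimate the Harris index sigma_mv^2 / sigma_y^2.

    Values near 1 mean little improvement potential; a longer dead time
    includes more impulse-response terms in sigma_mv^2 and raises the index.
    """
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    p, d = ar_order, dead_time_samples
    # Fit AR(p): y(t) = phi_1 y(t-1) + ... + phi_p y(t-p) + e(t)
    Y = y[p:]
    X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma_e2 = (Y - X @ phi).var()
    # Impulse response psi_k of the fitted model driven by white noise
    psi = np.zeros(d)
    psi[0] = 1.0
    for k in range(1, d):
        m = min(k, p)
        psi[k] = np.dot(phi[:m], psi[k - 1::-1][:m])
    sigma_mv2 = sigma_e2 * np.sum(psi ** 2)
    return sigma_mv2 / y.var()

# Synthetic illustration (not plant data): an AR(1) disturbance, where a
# larger assumed dead time yields a higher index, i.e. "less" potential.
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.9 * y[t - 1] + e[t]
print(harris_index(y, dead_time_samples=5))   # clearly below 1
print(harris_index(y, dead_time_samples=50))  # close to 1
```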

Since the dead times at Södra Cell Mörrum are often documented in RegDoc, some statistics can be compiled. Unfortunately, there is no simple way to get information about the cascade mode, or more details about the loop type than which major category it belongs to. Therefore, these statistics are not directly comparable with the default values in Table 6.4. Some statistics are presented in Table 6.2, and histograms are provided in Appendix D.


The dead times at Södra Cell Mörrum differ quite a lot from the default values. Most values are smaller than the defaults, so the result from the Harris index may thereby accept worse control, but as seen in the Max column of Table 6.2, there is at least one flow loop and one temperature loop that may cause false alarms. This motivates the use of estimated dead times instead of default values. It is also an indication that the default values should be reconsidered when CLAM is used in a process industry like Södra Cell Mörrum, where the dead times differ. On the other hand, it would be even worse to use these dead times from Södra Cell Mörrum in an industry with larger dead times, since this would result in false alarms.

Table 6.2. Dead time statistics (in seconds) from Södra Cell Mörrum.

Loop type     Average  Min  Max   Median  # Documented (# Total)
Flow          2.45     0    45    1.73    224 (584)
Level         36.74    0    1000  1.6     79 (275)
Pressure      2.88     0    26    1.6     47 (239)
Composition   44.11    0.4  193   29.5    38 (70)

Table 6.3. Harris index for different dead times.

Loop      Default    Estimated  Harris index  Harris index   Harris index  Difference
          dead time  dead time  (default)     (est. + 20 %)  (estimated)   (default − estimated)
FC471614  10         1.3        0.68          0.34           0.34          0.34
FC482215  10         3.8        0.71          0.61           0.61          0.1
FC542024  10         1.5        0.37          0.27           0.27          0.1
FC452003  10         4.4        0.33          0.33           0.19          0.14
FC490106  10         1.1        0.4           0.28           0.28          0.12
FC471609  10         2          0.73          0.64           0.64          0.09
FC471010  10         3.6        0.84          0.56           0.56          0.28
LC542015  100        43         0.72          0.52           0.48          0.24
LC482201  100        50         0.09          0.03           0.03          0.06
LC482122  100        10         0.02          0              0             0.02
LC471640  200        5          0.99          0.54           0.29          0.7
TC482017  200        13         0.95          0.7            0.64          0.31
QC542030  100        151        1             1              1             0
QC542018  50         9.5        0.99          0.96           0.8           0.19
QC542002  50         17         0.98          0.94           0.83          0.15
QC482213  100        21         0.94          0.89           0.83          0.11
QC472404  50         1.2        0.86          0.11           0.11          0.75
QC471619  100        36         0.94          0.78           0.72          0.22


6.10 Loop Category

The parameter Loop Category decides which indices should be calculated and which default dead time is used for the Harris index. The default dead times for the different loop types and cascade modes can be seen in Table 6.4; a small lookup sketch is given after the list below.

Table 6.4. Default dead time (in seconds) for different loop categories used in CLAM.

                Master  Slave/single
Flow liquid     30      10
Flow other      30      30
Temperature     200     100
Composition     30      10
Pressure        100     50
Level tight     100     50
Level average   200     100
otherSRP        100     50
otherINT        100     50

There are several different loop categories:

• Flow liquid: loop with a flow of liquid.

• Flow other: loop with a flow that is not liquid, typically gas or steam. In this case there is no difference between the dead times for the master and slave/single alternatives.

• Temperature: loop controlling temperature, often resulting in slower dynamics.

• Composition: loop controlling a composition, for example chemical mixing.

• Pressure: loop controlling pressure.

• Level tight: level control loop where it is important to keep a steady level.

• Level average: loop with more loose level control, like buffers, etc.

• otherSRP: other self-regulating loops.

• otherINT: other integrating loops.
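The default dead time selection in Table 6.4 amounts to a simple lookup on loop category and cascade mode. A minimal sketch of that lookup using the table values; the dictionary layout and names are this example's, not CLAM's internal representation.

```python
# Default dead times in seconds as (master, slave/single), from Table 6.4
DEFAULT_DEAD_TIME = {
    "flow_liquid":   (30, 10),
    "flow_other":    (30, 30),   # same for both cascade modes
    "temperature":   (200, 100),
    "composition":   (30, 10),
    "pressure":      (100, 50),
    "level_tight":   (100, 50),
    "level_average": (200, 100),
    "otherSRP":      (100, 50),
    "otherINT":      (100, 50),
}

def default_dead_time(category: str, master: bool) -> int:
    """Return the default dead time for a loop category and cascade mode."""
    master_dt, slave_dt = DEFAULT_DEAD_TIME[category]
    return master_dt if master else slave_dt

print(default_dead_time("temperature", master=False))  # 100
```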


6.11 Resample Interval

The parameter Resample Interval decides the sample interval of the data retrieved from the history log. If this parameter is set to the same value as the logging interval, CLAM will collect the values exactly as they are logged. If it is set to a higher value, some points in the log will be excluded, while a lower value results in a sequence with a tighter interval than the log. In the latter case, the extra points come from interpolation between the logged points.

Since the data is used for making diagnoses about the control loop performance, it should be as close to the real data as possible. Therefore, this parameter should be set to the same value as the logging interval.
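A small sketch of what happens when the resample interval does not match the logging interval: resampling a 5-second log at a 2-second interval forces linear interpolation, producing values that were never measured, while a 10-second interval discards logged points. The numbers are made up for illustration.

```python
import numpy as np

# Hypothetical history log: one value every 5 seconds
log_times = np.arange(0, 30, 5)            # [0, 5, 10, 15, 20, 25]
log_values = np.array([1.0, 1.4, 0.9, 1.1, 1.3, 1.0])

# Resample interval lower than the logging interval: most points fall
# between logged samples and are linearly interpolated, i.e. not real data
resample_times = np.arange(0, 26, 2)
print(np.interp(resample_times, log_times, log_values))
# e.g. the value at t=2 is 1.16, which was never actually measured

# Resample interval higher than the logging interval: log points are skipped
print(np.interp(np.arange(0, 26, 10), log_times, log_values))  # [1.0, 0.9, 1.3]
```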

6.12 Aggregate Function

This function is used as an input in the function call that collects data from the history log. By default, this parameter is set to 1, meaning that interpolation will be used if some logged data is flagged as bad. The value can be changed to a large number of different aggregates that perform operations on the data before it is used in CLAM. One such aggregate is the time average, which returns a single value instead of a sequence of samples. If this aggregate is used, there is no possibility for CLAM to calculate any diagnoses. Therefore, this parameter should definitely not be modifiable by an ordinary user.

6.13 Inhibit Value

To save resources, there is a Boolean parameter called Inhibit Value that decides whether a CLAM analysis should be executed during a production stop. By default, this parameter is set to TRUE, meaning that CLAM will not calculate new diagnoses when the inhibit input signal is true.


Chapter 7

History Logging

In this chapter, analyses and results regarding the effects on CLAM due to the history log setup are presented. First, the risks of using the OPC direct logging approach are analyzed. This is followed by an analysis of how CLAM is affected by the choice of logging setup and logging interval. Finally, a discussion about the size of the history log is presented.

7.1 OPC Direct Logging with 3 Seconds Cyclic Rate

In order to store a new value in the history log, each value has to be transferred over the network (MB300). Since this network originates from the 1980s, its capacity is quite limited. The cyclic rate can be 1, 3 or 9 seconds, where 9 seconds is recommended for performance reasons. Since this is a quite slow sample rate with a larger risk of aliasing, a cyclic rate of 3 seconds is assumed in this analysis. Using a logging interval of 5 seconds (like Södra Cell Mörrum) and assuming that the starting time is the same, the actual logging procedure would be as in Figure 7.1 (neglecting the time for data transfer). Since 5 is not a multiple of 3, the logging interval cannot be kept constant. This means that even if the log is configured to save a value at the timestamps 0/5/10/15/20/25, the samples actually saved at these points correspond to the timestamps 0/3/9/15/18/24. The saved samples might be up to two seconds old; for example, the sample saved at timestamp 5 in the log sequence (C) corresponds to sample 3 in the controller sequence (A), see Figure 7.1.
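The mapping from requested log timestamps to the cyclic samples actually stored can be reproduced with a few lines: each log timestamp picks the most recent cyclic sample at or before it. A minimal sketch, assuming zero transfer time and a common start at t = 0, as in Figure 7.1.

```python
CYCLIC_RATE = 3    # seconds between samples arriving over MB300
LOG_INTERVAL = 5   # seconds between entries stored in the history log

# Each log entry stores the most recent cyclic sample at or before it
for t_log in range(0, 30, LOG_INTERVAL):
    t_sample = (t_log // CYCLIC_RATE) * CYCLIC_RATE
    print(f"log timestamp {t_log:2d} -> cyclic sample from t={t_sample:2d} "
          f"({t_log - t_sample} s old)")
# log timestamp  0 -> cyclic sample from t= 0 (0 s old)
# log timestamp  5 -> cyclic sample from t= 3 (2 s old)
# log timestamp 10 -> cyclic sample from t= 9 (1 s old)
# log timestamp 15 -> cyclic sample from t=15 (0 s old)
# log timestamp 20 -> cyclic sample from t=18 (2 s old)
# log timestamp 25 -> cyclic sample from t=24 (1 s old)
```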
