
On the On-line Tools for Treatment of Deterioration in Industrial Processes

Christer Karlsson

2008


Everything is vague to a degree you do not realize till you have tried to make it precise.


ABSTRACT

For industrial processes, high availability and efficiency are important goals in plant operation. This thesis presents studies and the development of tools for on-line treatment of process deterioration and of model and sensor errors in order to achieve these goals. Deterioration of measurement devices, process components and process models has caused economic losses, plant failures and loss of life. The development of on-line methods to prevent such losses is therefore of special interest and has been conducted at the Department of Energy Technology, Mälardalen University. Important technological obstacles to implementing automatic on-line methods have been identified, such as the selection of data for adaptation and the adaptation of data-driven models to new states.

A new method has been developed for decision support by combining artificial intelligence methods and heat and mass balance models, and concepts are proposed for decision support in order to detect developing faults and to conduct appropriate maintenance actions. The methods have been implemented in a simulation environment and evaluated on real process data when available. The results can be summarised as the successful development of a decision support method for a steam turbine, combining artificial neural networks and Bayesian networks, and the identification of important obstacles to automating the adaptation of heat and mass balance process models and data-driven models when they are subject to deterioration.


SVENSK SAMMANFATTNING (SWEDISH SUMMARY)

The aim of the thesis is to investigate and combine methods to increase the reliability of measurements and process models. Such models are used in the process industry, for example in combined heat and power plants where a fuel is burned to produce both electricity and district heating. Process models imitate the real process in a computer environment. They need to be updated and adapted to changes in reality in order to remain reliable.

The overall purpose of the thesis is to analyse and develop new methods for higher safety and reduced environmental impact from industrial processes. For several companies in the process industry, measurement errors have meant breakdowns with catastrophic consequences, environmental impact, and millions in costs and lost revenue. Several researchers working on the correction of measurement data and process models have proposed combining different methods; knowledge about combined methods is, however, limited. A contribution to this knowledge is given in the thesis "On the On-line Tools for Treatment of Deterioration in Industrial Processes", written by Christer Karlsson at Mälardalen University, Department of Energy Technology. The thesis describes how a number of methods have been combined in order to automate fault detection and correct errors in measurement data for industrial processes. Important technical obstacles have been highlighted with regard to how the methods can be introduced into the process industry. Proposals for new combined methods have been developed in projects carried out at, among other sites, the combined heat and power plants in Enköping and Västerås.

The results can be summarised as showing that the correction of erroneous measurement data and the updating of process models can to a large extent be automated by combining methods. Tasks suitable for automation include recognizing faults that follow a certain pattern, or weighing together several uncertain measurements of the same quantity with the aim of presenting the most probable value. Some subtasks require manual work; for example, engineers are needed to judge which measurement data should be used for updating process models. The benefit of combining methods and of automation is primarily to increase safety by providing decision-makers with reliable data. This can be done by monitoring and correcting errors in sensors and process models, detecting faults early, and giving support for decisions. Methods for this are proposed in the thesis.


ACKNOWLEDGEMENTS

I would like to express my gratitude to all those who gave me support during the voyage of doctoral studies and research projects that made it possible to complete this thesis. My appreciation goes to my main supervisor, Professor Erik Dahlquist, for being a well of energy and ideas, for the opportunity to take part in many research projects, and for being co-author of papers. I would like to thank my co-supervisor, Adjunct Professor Erik Dotzauer, for invaluable guidance as co-author of papers and for support in general, especially during critical moments, and my co-supervisor, Professor Finn V. Jensen, for guidance in the world of Bayesian networks and for arranging workshops at Aalborg University.

I would like to recognise the contribution of the bodies that funded the research projects: Swedish Energy Agency (Energimyndigheten), Thermal Engineering Research Institute (Värmeforsk), Knowledge Foundation (KK-stiftelsen), Swedish Foundation for Strategic Research (SSF), Mälarenergi AB, ENA Energi AB, ABB, Siemens AB, VTT and Korsnäs.

My gratitude also goes to my colleagues at the Energy Department, Mälardalen University. I would especially like to thank Anders Avelin and Björn Widarsson for the fun years of sharing an office and collaborating on projects, and for patiently listening to ideas on practical and philosophical levels; Lic. Eng. Jan Sandberg for discussions on superheaters and fouling; Professor Lars Wester for sharing knowledge on boilers; secretaries Inger Björklund and Inger Dahlby for always helping out with administrative issues; and all my colleagues for sharing laughs, knowledge and discussions during my time at the department.

Special thanks go to my collaborating partners at the Faculty of Engineering at Lund University: Dr. Jaimie Arriagada for teaching me about artificial neural networks, Dr. Magnus Genrup for sharing steam turbine knowledge and part of his thesis, and Professor Mohsen Assadi and Klas Jonshagen for good cooperation in projects. At Mälardalen University my gratitude also goes to Peter Funk at the Department of Computer Science for the opportunity to participate in the artificial intelligence group, and to Kimmo Eriksson at the Department of Mathematics and Physics for helping me with the mathematics during his time as co-supervisor.

In the research projects, a lot of ideas, data and knowledge have also been shared by the companies involved. Appreciation is expressed to ENA Energi and their president Urban Eklund and former president Eddie Johansson for being very supportive, and to the service-minded staff at ENA Energi AB for all their help. Thanks also go to Peter Karlsson and Sven-Olof Kindstedt for data on boilers at Mälarenergi AB and to their staff who helped me to find data both on computer and on paper; to Bengt Svensson at Siemens in Finspång for collaboration on an interesting project on gas turbines; to Carl-Fredrik Lindberg at ABB Corporate Research for being co-supervisor during a project on fault detection systems; and to Dr. Anders Holst and Dr. Björn Levin at the Swedish Institute of Computer Science for cooperation and discussions on statistical models. Thanks to the companies for answering questions and supporting with data on heat and power plant components and parts. Special thanks go to Dr. David Ribé, the language expert who corrected this thesis and many of the papers for style and grammar.

Conducting doctoral studies has involved everybody around me, especially my beloved fiancée Angélica, who has supported me wholeheartedly. I owe gratitude to my brother Mats and close friends Patrik and Staffan for diverting activities and many laughs. Last but not least, my deepest appreciation goes to my mother Eivor and her comrade in life Anders for always being there with love and care.


LIST OF APPENDED PAPERS

Publications included in the thesis

This thesis is based on the following papers and reports, referred to in the text by Roman numerals. Parts of this thesis (Papers III and IV) were previously published in the licentiate thesis "Tools for Reconciliation of Measurement Data for Processes at Steady-State", Mälardalen University Press, 2004.

Journal papers

Paper I

Karlsson C., Avelin A., Dahlquist E., “New Methods for On-line Adaptation to Deterioration in Process Models for Process Industry”, Submitted to Journal of Chemical Product and Process Modeling, June 2008, revised July 2008.

Paper II

Karlsson C., Arriagada J., and Genrup M., "Concept for Detection and Isolation of Faults in a Steam Turbine and Components". Submitted Oct 2007 to Journal of Simulation Modeling Practice and Theory, revised April and June 2008.

Conference papers

Paper III

Karlsson C., Dahlquist E., Dotzauer E. “Data Reconciliation and Gross Error Detection for the Flue Gas Train in a Heat & Power Plant”. Proceedings of Probabilistic Methods Applied on Power Systems, USA, 2004.

Paper IV

Widarsson B., Karlsson C., Dahlquist E., “Bayesian Network for Decision Support on Soot Blowing in a Biomass Fuelled Boiler”. Proceedings of Probabilistic Methods Applied on Power Systems, USA, 2004.

Paper V

Karlsson C., Avelin A., Dahlquist E., “Improving the usage of process data collected in process industry and power plants”. Proceedings of EUROSIM 2007, Slovenia, 2007.

Publications where I have been the main author, editor and driving force are Papers I, II, III, and V. Björn Widarsson did most of the work described in Paper IV, and I contributed with written sections on fouling and in the discussions of Bayesian networks. Ideas in the papers have been moulded through discussions during supervision by Erik Dahlquist and the co-supervisors, and through many day-to-day discussions with members of the Process Diagnostics Group and colleagues both at Mälardalen University and at the Faculty of Engineering at Lund University. Co-supervisor Erik Dotzauer is recognized for much of the supervision for Paper III, and co-supervisor Finn Jensen for the supervision regarding Bayesian networks in Papers II and IV and for the workshops at Aalborg University. For Paper III, former co-supervisor Kimmo Eriksson provided advice on the mathematics behind data reconciliation. The experiments presented in the papers were mostly planned, performed and analyzed by the author; the supervisors contributed to the analysis of the results.

Publications not included in the thesis

Journal papers

Karlsson C., Khirnaia L., Dahlquist E., “Increasing the power to heat ratio in heat and power plants - study of pressure losses in the feedwater system”. Submitted to Journal of Applied Energy in June 2008.

Starfelt F., Karlsson C., “Feasibility Study of a Bio-Fuelled Gas Turbine Cogeneration Plant”. Submitted to Journal of Energy in June 2008.

Conference papers

Dahlquist E., Lindberg T., Karlsson C., Weidl G., Bigaran C., and Davey A., "Integrated process control, fault diagnostics, process optimization and production planning – Industrial IT". Proceedings of the 4th IFAC Workshop on On-line Fault Detection and Supervision in the Chemical Process Industries, Korea, 2001.

Karlsson C., Kvarnström A., Dotzauer E. and Dahlquist E., "Estimation of process model parameters and process measurements – a heat exchanger example". Published in Conference on New Trends in Automation, Västerås, Sweden, September 2006.

Book Chapter

Chapter on data reconciliation, 8 pages, in the book “Use of modeling and simulation in pulp and paper industry”. Editor: Erik Dahlquist for COST E36 - European Union project for modeling and simulation in pulp and paper industry, May 2008, ISBN 978-91-977493-0-5.


Reports

Karlsson C., and Dahlquist E., "Process and sensor diagnostics – Data reconciliation for a flue gas channel". Published in the Värmeforsk project publication series, 2003, no. 834. In Swedish.

Widarsson B., Karlsson C., Nielsen T.D., Jensen F.V. and Dahlquist E., "Bayesian networks applied to process diagnostics". Published in the Värmeforsk project publication series, 2004, no. 884. In English.


NOMENCLATURE

A Model matrix
B Predictor matrix
E Noise term matrix
M Mass balance model matrix
O Occurrence matrix
Q Regression coefficients matrix
T Time window
W Weight matrix
w Individual weight
X Variables predictor matrix
x Estimated state
y Measured state
Y Output vector
θ Model parameter

Abbreviations

ANN Artificial neural network
AVTI Average type I error
BN Bayesian network
CHP Combined heat and power (plant)
CPT Conditional probability table
DAG Directed acyclic graph
DCS Distributed control system
DR Data reconciliation
GED Gross error detection
HMBP Heat and mass balance programs
NLP Non-linear programming (problem)
OP Overall power
PCR Principal component regression (model)
PLS Partial least squares (models)


LIST OF FIGURES

Figure 2-1. Development and causes of recovery boiler explosions [BLR05].
Figure 2-2. Development of critical incidents in recovery boiler [BLR05].
Figure 2-3. Classification of diagnostic algorithms [Ven03a]. The algorithms used in this thesis are in bold text.
Figure 2-4. Clean superheater 2 in Boiler 5. Courtesy of Mälarenergi and Jan Sandberg 2005.
Figure 2-5. Deposits on superheater 2 in Boiler 5 at the end of the operational season. Courtesy of Mälarenergi and Jan Sandberg 2005.
Figure 2-6. Example of deterioration in steam turbine [Wal88].
Figure 2-7. Typical fault types for a pressure sensor [Sza03].
Figure 2-8. Venturi meter fouling [Gri01].
Figure 2-9. A typical multilayer perceptron network.
Figure 2-10. Static and feedforward ANN (left), dynamic and recurrent ANN (right), adopted from [Arr03].
Figure 2-11. Artificial neuron [Lug02].
Figure 2-12. Example of a directed acyclic graph.
Figure 2-13. Example of a BN with DAG and CPT.
Figure 2-14. Connected pipes and measured mass flows.
Figure 3-1. Different sets of parameters for hardwood (Case 1-3) and softwood (Case 4-6). Fast lignin (FL), slow lignin (SL), stoichiometry for hydroxyl ions (OH-), and hydrogensulphide ions (HS-).
Figure 3-2. Measured and estimated Kappa number of pulp from the continuous digester.
Figure 3-3. The model fit power R2 (left bar) and the predictive power Q2 of the model (right bar).
Figure 3-4. Fault diagnostic system.
Figure 3-5. Structure of the neural networks for the fault detection module.
Figure 3-6. Bayesian network of root causes for fouling and gassing in the condenser.
Figure 3-7. Result in BN for F7 when no fault detected (F7_ANN set to not detected).
Figure 3-8. Result in BN for F7 when fault detected (F7_ANN set to detected).
Figure 3-9. Result in BN for F7 after input of collected evidence (F7_ANN set to detected, and operator is interacting).
Figure 3-10. Data treatment from raw data to application.
Figure 3-11. Fouling on HPSH2 (superheater) after one year of operation. Photo: Jan Sandberg, 8th Aug. 2003.
Figure 3-12. Top of HPSH2 (superheater) after annual cleaning. Photo: Jan Sandberg, 29th Aug. 2003.
Figure 3-13. BN structure for fouling build-up.
Figure 3-14. Test result from prediction of soft fouling build-up.
Figure 3-15. Test result from prediction of hard fouling build-up.
Figure 3-16. The proposed system for production planning.

LIST OF TABLES

Table 1-1. Areas discussed and tools used in papers.
Table 2-1. Knowledge requirements, applicability, and limitations of selected fault diagnosis approaches. Table based on [Car01] with my extension in bold text.
Table 2-2. Comparison of diagnostic methods. Key: √ - favorable; × - not favorable; ? - situation dependent [Das00]. Columns for BN and Qualitative physics in bold are extensions of the original table.
Table 2-3. Sources of errors in a Pt100 sensor. Adapted from [Pen08].
Table 3-1. PLS-model prediction error for 100% developed fault. In brackets are the model test set RMSE and R-square.
Table 3-2. Target ratio for ANN#1 and the combination of ANN#1 & ANN#2.
Table 3-3. Test setups.
Table 3-4. Summary of performance measures.


TABLE OF CONTENTS

Abstract
Svensk sammanfattning
Acknowledgements
List of appended papers
Publications included in the thesis
Publications not included in the thesis
Nomenclature
List of figures
List of tables
1 Introduction
1.1 Background
1.2 Problem formulation and hypothesis
1.3 Limitations
1.4 Methodology
1.5 Thesis outline
2 Theoretical background
2.1 Definitions and expressions
2.2 Tools and concepts
2.3 Problem definition
2.3.1 Incidents in process industries
2.3.2 Enhancement of performance of industrial processes
2.3.3 Fault diagnosis system
2.3.4 Deterioration in heat and power plants
2.4 Selected methods for on-line tools
2.4.1 Mass and heat balance programs
2.4.2 Artificial neural network
2.4.3 Bayesian networks
2.4.4 Partial least squares regression model
2.4.5 Data reconciliation
3 Results
Summary of Paper I
Summary of Paper II
Summary of Paper III
Summary of Paper IV
Summary of Paper V
4 Discussion
4.1 Process deterioration
4.1.2 Deterioration in a steam turbine
4.2 Adaptation to deterioration
4.3 Data reconciliation
4.4 Decision support
5 Concluding remarks
5.1 Conclusions
5.2 Future work
Bibliography
Appendix A
Appendix B


1 INTRODUCTION

The aim of this research has been to develop tools and methods to handle deterioration in sensor readings, process models, and the processes themselves, in order to improve the performance and safety of process industries. The research has been conducted at the Department of Energy Technology, Mälardalen University, since 2001. The work in this thesis has been funded by grants from the Swedish Energy Agency, the Swedish Foundation for Strategic Research, the Thermal Engineering Research Institute, and the Knowledge Foundation, and by funding from and in co-operation with the companies ABB, ENA Energi AB, Mälarenergi AB, Siemens, VTT and Korsnäs. The projects have been run in co-operation with Lund Institute of Technology in Sweden and Aalborg University in Denmark.

1.1 Background

Process industries such as pulp and paper plants and heat and power plants provide us with paper products, heating and electricity. Their tall buildings and the steam plumes from their stacks are often visible from far away. Cities heated by district heating demand safe, efficient operation and high availability. The demands are even higher on electricity generating plants, because electricity cannot be stored efficiently and is therefore used as it is generated. Disturbances affecting production in pulp and paper plants have a large impact on the economy, and down-time in these capital-intensive industries is avoided by any possible means.

Monitoring of the process plays a central role in meeting requirements for reliability, emission reporting, optimal control and efficient maintenance. Large numbers of sensors are installed to monitor the process; the number of sensors in a modern process industry can range from a few hundred to several thousand. The introduction of computers and databases has made it possible to collect and manage vast amounts of data. Parts of the process where sensors are not installed, for economic or practical reasons, may be monitored using values calculated from models. Mathematical process models are constructed from knowledge of the process gained through experiments and experience of operation. Computers execute process models for on-line use, and the output data is used for multiple purposes: to increase safety in production through alarm generation, process control, and function-based maintenance, and to improve economic performance through efficient monitoring and plant optimization. Outputs to be monitored and computed are sometimes regulated by law, for example when generating emission reports.

The calculations apply information from sensor readings and process models. Both these sources of information are subject to deterioration, commonly described as ageing, fouling, drift and wear; these words summarize the effects of deterioration caused by temperature, mechanical stress, corrosion, fouling, etc. These mechanisms can affect safety when they cause slow drifts in sensor readings and in the constants of process models. Changes in sensor readings and models can also have a real economic impact when the values are used for optimization or for economic transactions, such as billing for finished product from a refinery. This thesis came about from the discovery of a large number of unreliable sensors on an oil platform and in a pulp and paper plant, and the obstacle this created for implementing an efficient platform for production control. During the course of these studies it became apparent that the use of process models in industry declines with time as a result of issues with reliability. There is often a discrepancy between a calculated model output and the sensor readings at the same point in the process. There are existing tools for treating measurement data and estimating process model parameters, but there is plenty of room for improvement. Combining methods from the artificial intelligence field with heat and mass balance models opens up new possibilities for solving more complex tasks automatically. The development of combined methods for heat and power plants, and the understanding of their particular limitations, is at an early stage.

1.2 Problem formulation and hypothesis

This thesis investigates methods and tools appropriate for the implementation of on-line methods for the treatment of developing process model errors, the treatment of sensor errors for efficient monitoring, and decision support for early detection of errors. The methods are applied to heat and power plants and use heat and mass balance models, ANNs and BNs to deal with faults. The information from sensors and models deteriorates over time, and this is at the heart of the problem. The hypothesis for this research is thus:

On-line quantitative and qualitative methods can deliver a balanced state estimation dataset in spite of deterioration in the process and sensors. Here, quantitative methods refer to methods based on process history, and qualitative methods refer to methods based on mass and heat balance process models.


The hypothesis has been tested in various ways:

A literature study was performed to create a timeline and to identify pros and cons of the many methods available for gross error detection, fault isolation and data reconciliation in heat and power plants (Appendix A).

In paper III (Data Reconciliation and Gross Error Detection for the Flue Gas Train in a Heat and Power Plant) a system for fault detection and identification was constructed in Matlab code and simulated on plant data in order to determine the possibilities and limitations of sensor fault detection, fault isolation and data reconciliation in heat and power plants, and to study applicability of the method.

Paper IV (Bayesian Network for Decision Support on Soot Blowing in a Biomass Fuelled Boiler) discusses the causes of fouling of a superheater and ways to manage uncertainty in measurements and in the relationships between fuel properties and fouling build-up. A BN was constructed based on plant data and used in simulations to study ways of predicting fouling build-up. Applicability and pros and cons of the method were also considered.

Paper II (Concept for Detection and Isolation of Faults in a Steam Turbine and Components) used the knowledge gained in Paper III and Paper IV to formulate a new concept for fault detection and isolation, simulated this time for application on a steam turbine. The aim here was to study method performance and to identify the boundaries between methods applied to different tasks.

Paper V (Improving the usage of process data collected in process industry and power plants) discussed the important features of a production planning system for the pulp and paper industry and proposed a concept for such a system.

Paper I (New Methods for On-line Adaptation to Deterioration in Process Models for Process Industry) was a continuation of the study on managing process model deterioration from paper V. The study considered in depth the main technical obstacles for implementation of automatic adaptation of process models to deterioration in three research projects. The aim was to paint an overall picture of the system, from sensor readings to the presentation of reliable sensor and model data, and to identify problems and propose new methods for on-line adaptation to deterioration in process models.

Results and conclusions of the individual papers are presented in the Results section.

1.3 Limitations

The area where methods have been proposed is process industries in general, but the methods have mainly been applied to heat and power plants, which is the area of greatest experience and expertise of the research group. The work on decision support and fault diagnosis has been limited to data-driven methods in combination with heat and mass balance models. Studies have been conducted at both system and component levels. The use of heat and mass balance models was decided on early in the process because of the transparency that physical process models offer when considering the relationships between parameters in the model, and because the same parameters can be measured in the real process or reference values for them found in the literature.

In the deterioration studies the methods are restricted to errors that affect the process or sensors in ways that are detectable with mass and heat balance calculations, such as fouling on turbine blades, but not vibration, as the latter is handled by frequency analysis methods. Deterioration is a relatively slow process and seldom affects the dynamics. Dynamic models and methods are therefore less suitable for this purpose than steady-state process models and methods. This was the reason for performing these studies on processes in steady state and not using the large group of methods that apply dynamics and control theory. However, those methods are interesting for control loop investigations and for processes where dynamics are important. Data-driven methods are computationally fast and can perform many tasks on-line. They can also solve tasks that cannot be solved with heat and mass balance models: data-driven methods are suitable for tasks such as pattern recognition, reasoning under uncertainty, and modeling relationships whose physical basis is not fully understood. The combination of heat and mass balance methods and data-driven methods is therefore attractive for solving complex tasks on-line.

1.4 Methodology

The research began with the observation that sensors on oil platforms and paper machines suffered from a lack of reliability and accuracy. The question of how to solve this was raised and a hypothesis was formed. The problem has a potentially large impact on the process industry, and there was no existing solution. However, these sites were not available to us for study, and so the methods were instead applied to heat and power plants, which share the same problem. The steps and projects needed to gather the necessary knowledge and to produce a proposal to solve the problem were decided on. Literature searches were conducted within each project to collect and understand previous research in the field. Theoretical frameworks were developed in consultation with experts and colleagues. Ideas generated were first applied to a simulated process and then, whenever possible, to the real process. Implementation in the process environment took place in close collaboration with plant engineers and colleagues. Evaluation of the results led to new questions and new ideas. The overall objective was used as a means of filtering out ideas that were not within the main scope of the research.

Where there were no existing tools available for a particular task, new tools were programmed in Matlab. Doing this improved understanding of the theories and created a basis for developing more tools. Improved understanding of the methodology of reconciliation of sensor data allowed programming of an algorithm and connected subtasks (variable classification, global test, sequential isolation, sensor value estimation) for use on heat balance in a flue gas channel. The knowledge gained in this process was later used when proposing new methods.
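To make these subtasks concrete, the sketch below illustrates, for a single linear mass balance, the textbook form of the two core steps described in [Rom00] and [Nar00]: a chi-square global test on the balance residuals for gross error detection, followed by a weighted least-squares adjustment of the measurements. It is a minimal illustration in Python rather than the Matlab implementation used in the projects, and the three-stream splitter, flow values and standard deviations are invented for the example.

```python
import numpy as np
from scipy.stats import chi2

# Linear mass balance A @ x = 0 for a splitter: flow 1 = flow 2 + flow 3
A = np.array([[1.0, -1.0, -1.0]])

y = np.array([100.0, 61.0, 35.0])         # measured mass flows (kg/s), hypothetical
sigma = np.array([2.0, 1.5, 1.5])         # measurement standard deviations
Sigma = np.diag(sigma ** 2)               # measurement covariance matrix

# Global test: the balance residuals follow a chi-square distribution if no gross error is present
r = A @ y                                 # balance residuals
V = A @ Sigma @ A.T                       # covariance of the residuals
gamma = float(r @ np.linalg.solve(V, r))  # global test statistic
threshold = chi2.ppf(0.95, df=A.shape[0])
print(f"global test statistic {gamma:.2f}, threshold {threshold:.2f}")

# Data reconciliation: weighted least-squares adjustment so that A @ x_hat = 0
x_hat = y - Sigma @ A.T @ np.linalg.solve(V, r)
print("reconciled flows:", np.round(x_hat, 2))
print("balance residual after reconciliation:", (A @ x_hat).item())
```

In the sequential isolation step mentioned above, the measurement whose removal (or bias compensation) reduces the test statistic the most would be flagged as the gross error candidate and the test repeated; that loop is omitted from the sketch.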

At a system level, heat and mass balance software is used to simulate heat and power plants and to develop new process components. Ready-made tools were used to generate data-driven models after detailed study of the theory through reading the literature and attending courses. Participation in workshops at Aalborg University and Lund Institute of Technology also played an important part in the development of methods and tools.

The methodology used to develop the process models and the data-driven models initially involved gathering information about the system to be modelled. This was usually done in close co-operation with experts at the site who often had knowledge of the specific process that was not in the literature and was important for successful modeling. Expert or operator knowledge of process changes and specific events that affected the collected data was also crucial for modeling.

Selection of the sensors used as inputs to the model and of the output variables of the model was made using both statistical screening and process knowledge; these approaches are complementary. Statistical analysis tools were used to screen large numbers of sensors for correlations between inputs and outputs. The set of inputs and outputs was reduced or expanded, using correlation analysis and process knowledge, until a set was found that could represent the process in line with the objectives. This ensured that the inputs and outputs known to be important from process knowledge were part of the selected set, while the screening ensured that no sensors were overlooked. The selected dataset was used to estimate parameters in the heat and mass balance models. Data-driven models were generated by training on the dataset containing the selected inputs and outputs. The prediction power of the models and the magnitude of the model error were evaluated using a second dataset. When possible, sensitivity analysis, for example to bias, was performed on the model. The final model was evaluated using knowledge about the modeling method, the pre-treatment of the data, the dataset, the process and sensors, and calculated performance measures.
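As a small illustration of the screening and two-dataset evaluation described above (not the project code, which used commercial tools such as Unscrambler for the PLS models), the following Python sketch screens candidate sensors by their correlation with the output, fits a PLS model on a training set and reports the prediction power on a second dataset. The sensor data here is synthetic and the threshold of 0.2 is an arbitrary example value.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)

# Synthetic historian data: 8 candidate sensors and one output to be predicted
n = 500
X_all = rng.normal(size=(n, 8))
y = 2.0 * X_all[:, 0] - 1.5 * X_all[:, 3] + 0.8 * X_all[:, 5] + rng.normal(scale=0.3, size=n)

# Statistical screening: keep sensors whose absolute correlation with the output
# exceeds a threshold; in practice the list is then adjusted using process knowledge.
corr = np.array([abs(np.corrcoef(X_all[:, j], y)[0, 1]) for j in range(X_all.shape[1])])
selected = np.flatnonzero(corr > 0.2)
print("selected sensor columns:", selected)

# Train on one dataset and evaluate the prediction power on a second dataset
X = X_all[:, selected]
X_train, X_test, y_train, y_test = X[:350], X[350:], y[:350], y[350:]

pls = PLSRegression(n_components=min(3, len(selected)))
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()
print(f"Q2 on the test set:   {r2_score(y_test, y_pred):.3f}")
print(f"RMSE on the test set: {mean_squared_error(y_test, y_pred) ** 0.5:.3f}")
```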

When the work began on deterioration and decision support, the comprehensive method described above was necessary in order to understand the relationships between the sensors, the dataset and the model, and to develop methods for decision support and management of deterioration. The work presented in Papers I to V and the areas covered are related to each other as indicated in Table 1-1. The projects and their results are documented in the papers of this thesis.

Table 1-1. Areas discussed and tools used in papers.

Paper | Areas discussed | Tools used
I | Identification of factors; adaptation to deterioration | Parameter estimation; PLS
II | Decision support; deterioration | HMBP; ANN; BN
III | Data reconciliation and estimation | HMBP; GED and isolation; DR and estimation
IV | Decision support; deterioration | HMBP; BN
V | Decision support; identification of factors |

1.5 Thesis outline

This thesis is an aggregated thesis where the main part is made up of the scientific contributions presented in a number of previously published research papers. The papers are put into their context in the Introduction which also serves as background for the scientific discussion and conclusions. The first part is organized as follows:

Chapter 1 Introduction to the background, objectives and limitations of the studies and the formulation of the problem. Presentation of the methodology used to tackle the formulation of the problem and outline of the thesis.

Chapter 2 Theoretical background including definitions and expressions used in the thesis, and presentation of theories in preparation for the scientific discussion.

Chapter 3 Results presented as summary of papers.

Chapter 4 Discussion of the contribution of this thesis in relation to other results published in the literature.

Chapter 5 Concluding remarks with conclusions and discussion of aspects of this thesis that need further research.


2 THEORETICAL BACKGROUND

This section presents the background to the discussion of scientific contributions in the papers. Terms used in the thesis are defined and the causes and effects of deterioration in heat and power plant models and components such as boiler and steam turbine are discussed. This discussion is followed by a presentation of the tools and methods used to manage deterioration.

2.1 Definitions and expressions

This thesis spans a number of fields in the application area of heat and power plants, such as modeling using mass and heat balance models, data-driven and knowledge-based models, and decision support using artificial intelligence techniques.

Data coaptation is the sub-task of estimating unmeasured sensor readings from the solution of data reconciliation [Nar00].

Data-driven model is a model of a process that describes the relationship between inputs and outputs using collected data. This type of model is also called black-box, statistical, or process history based.

Data reconciliation is a method of noise reduction that uses the solution of an optimization problem to minimize the effect of errors in sensor readings on the process model [Rom00].
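In the common weighted least-squares formulation found in the data reconciliation literature (e.g. [Rom00], [Nar00]), and using the symbols of the Nomenclature, the reconciled estimate is obtained from the measured state y as

\[
\hat{x} = \arg\min_{x} \; (y - x)^{\mathsf{T}} W (y - x) \quad \text{subject to} \quad Mx = 0,
\]

where W is typically chosen as the inverse of the measurement covariance matrix and Mx = 0 collects the (here linear) mass balance constraints; heat balances and other non-linear constraints turn the problem into an NLP.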

Decision support tool is a tool that supports the engineer or operator in making a decision on the best action to take to solve a task. The tool provides the user with immediate access to on-line expert knowledge in a restricted field of application.

Deterioration is a general expression for a gradual decrease in quality. Here the expression is used primarily for the decrease in precision of sensor readings and in the prediction power of process models. Deterioration is often caused by the development of fouling, wear and ageing of process components. Deterioration may be divided into model, process and sensor deterioration.

Model deterioration occurs when a mathematical model fails to follow changes in the process it is intended to model, causing the model error to increase over time.


Process deterioration refers to deterioration of the actual process and components such as wear and tear and ageing, which cause the distance between optimal process performance and actual process performance to increase over time.

Sensor deterioration occurs when the sensor readings drift from the true values or when the variance in sensor readings increases.

Estimation is used to mean the real-time estimation of a parameter, for example the estimation of the current value of the parameter describing fouling of a heat exchanger surface.

Failure is the complete break-down of a system or function [Ise97].

Fault is an unexpected change in system function which hampers or disturbs normal operation, causing unacceptable deterioration in performance, [Ise97].

Fault detection is a binary decision process confirming whether or not a fault has occurred in a system, [Ise97].

Fault diagnosis is the determination of the type and location of faults.

Heat and mass balance model is the expression of the process in the form of heat and mass balances that applies physical laws.

Knowledge-based model is the framing of the process in a model based on expert knowledge of the process.

On-line tool is a method developed and wrapped accordingly for use during plant operation. The tool is meant for operators or engineers for solving tasks on a running process.

Prediction is used to describe estimation of future process model output.

Process refers to the actual plant (physical process) containing several components, e.g. a turbine or heat exchangers.

Process model is the mathematical abstraction of the physical process expressed as mathematical equations or expressions.

Process parameter is mainly used for constants that appear in mass and heat balance equations, but may also be used interchangeably to describe measured properties such as temperature, pressure, flow, etc.


Sensor is a measurement device that provides information about a certain property such as temperature, pressure or mass flow.

Sensor reading is the value of the property reported by the device.

2.2 Tools and concepts

The original work described in this thesis is partly based on tools and work previously described by others. These include the steam turbine simulator developed by Markus Truedsson and Magnus Genrup at Lund Institute of Technology in the software IPSEpro [IPS08]. The bridge between the steam turbine simulator and Matlab [Mat08] was written by Andreas Kvarnström at Mälardalen University. The data reconciliation algorithm programmed in Matlab code by the author was mainly adapted from Romagnoli and Sánchez [Rom00] and Narasimhan et al [Nar00]. Other tools used were Neurosolutions for ANNs [Neu08], Hugin for BNs [Hug08], and Unscrambler for PLS-models [Cam08]. Matlab code for filtering and treating data and conducting simulation of cases was written by the author. Code and programs are assumed bug-free for the purposes they are used for.

2.3 Problem definition

This section introduces examples of incidents in the process industries, here mostly concerning pulp and paper recovery boilers, together with the concepts of fault diagnosis and treatment of deterioration in the process industry, and defines the parts of the problem domain of process safety and process enhancement that are considered in this thesis. Some of the important types of deterioration that occur in heat and power plants are described, and the requirements for an efficient fault diagnosis and deterioration treatment method are set out. The available technologies for meeting the requirements are studied in order to propose new methods and concepts for efficient fault diagnosis and management of deterioration for process enhancement. The prevention of component malfunction or deterioration and the enhancement of process performance and availability are also addressed.

2.3.1 Incidents in process industries

A survey of current and future trends for severe incidents in heat and power plants in Sweden concluded that the processes in heat and power plants and the pulp and paper industry are relatively low risk in terms of reported accidents involving personal injury and death. The main reason the authors of the survey gave for this low risk, relative to oil, gas and coal mining, was that these other processes are carried out abroad, most often in countries with lower standards of safety [SaB07]. For the pulp and paper industries, recent figures on incidents are available.

There have been several reported cases of severe damage from explosions in Tomlinson-type recovery boilers in the pulp and paper industry. Disasters such as those at Vallvik, Sweden, and Klabin, Brazil, in 1998, and more than 100 previous cases of steam explosions, have caused loss of life and substantial economic losses [Gra82]. A report and summary of accidents in the North American market were presented at the Black Liquor Recovery Boiler Advisory Committee meeting in Atlanta, USA, in 2005 (Figure 2-1).

Figure 2-1. Development and causes of recovery boiler explosions [BLR05].

The incidence of recovery boiler explosions has decreased, and only two have occurred in the last five years, but the total number of critical incidents is still increasing (Figure 2-2).

Figure 2-2. Development of critical incidents in recovery boiler [BLR05].

The systems installed to detect the leaks that can cause steam explosions detected only 3 of the roughly 40 cases of leaks in recovery boilers reported in 2005 in North America. There is therefore still a great deal of potential for the development of automatic fault detection and fault diagnosis in this area. The importance of maintenance and training for the systems is emphasized in the quote: "Proper maintenance and support of leak detection systems is critical to their reliability so that operators will trust the indications they are receiving from the systems. The required resources should be dedicated to the leak detection system maintenance and calibration" [BLR05]. Further examples and analyses of incidents in plants are described in [Kle98].

As well as systems that monitor deterioration in the process, such as leakage, it is recognised that management is very important for incident prevention [Car01]. Databases of accident reports such as MARS (200 severe disaster cases world-wide) and FACTS (about 18,000 cases world-wide) are a source of knowledge about accidents and can be used to improve safety in chemical plants. Kirchsteiger and Uther have studied accident trends and concluded that there was no great change in the frequency of incidents over the period considered, which they interpreted as meaning that lessons were not being learned and that new safety routines were not being implemented in the process industries [Kir98], [Uth99].

Workshops and conferences are held for heat and power plant managers to exchange experiences and they can refer to small databases that contain fewer but probably more relevant cases than MARS and FACTS. For example there is a Swedish database with 100 cases (as of 2003) maintained by an association of heat and power companies (Värme och Kraftföreningen). Reports on data from accident databases are not definitive and include many sources of uncertainty [Car01].

2.3.2 Enhancement of performance of industrial processes

In the present competitive market, even resolving an error of small magnitude can lead to substantial improvements in plant performance and thus to economic benefits. It has been shown, in the case of a chemical plant, that economic losses are proportional to the standard deviation of the sensor readings during steady-state operation [Bag05]. Biases and reconciled sensor readings are not considered in this simple proportionality. The cost of the standard deviation, and of the sum of the bias and the standard deviation, has been calculated by Bagajewicz [Bag06]. The example used in the calculation of how accuracy in instrumentation affects cost is a crude distillation unit with a capacity of 420,000 kg crude/h. The economic impact of the results from [Bag05] may be summarized as:

• $7.36 million financial loss for instrumentation due to standard deviation.
• $7.12 million loss after applying data reconciliation.
• $236,817 net present value for data reconciliation.

The analysis of costs related to biases considered the detectable limit of biases and the smearing effect (induced biases) after data reconciliation was performed on the same crude distillation unit. The results can be summarized as:

• $23.82 million expected financial loss when no data reconciliation is made.
• $7.38 million expected financial loss for residual precision after a single fault has been detected, removed and data reconciliation performed.
• $16.44 million net present value of data reconciliation for this case.

If these illustrative examples of the difference in magnitude between the expected costs of standard deviation errors ($240,000) and of bias in measurements ($16,400,000) also hold for the process model errors used in data reconciliation algorithms, then there is considerable scope for reducing losses related to model errors. Although this has not been specifically studied, process model deterioration is believed to induce bias in the data reconciliation, especially in heat and mass balance models where there are uncertainties, for example in heat transfer coefficients. Correct information decreases losses and increases the availability and the ability of process control to operate the process closer to its limits, increasing production and efficiency. Deterioration in sensors and process models reduces this ability by causing the sensors and process models to drift or become noisier.

Fault diagnosis enhances performance by detecting and providing information about possible errors, which can then be removed. Data processing and reconciliation improve process knowledge and thereby enhance plant operations and plant management [Rom00], [Nar00]. Data reconciliation uses redundancy among sensors to reduce sensor noise and improve accuracy, in order to obtain a consistent set of data that complies with the mass and heat balance equations. The unmeasured data in the model are estimated from this dataset. Leach et al. [Lea07] present a model used to calculate the total cost of implementing a data treatment system.

2.3.3 Fault diagnosis system

The importance of fault diagnosis in the process industry lies both in economic benefits and in reducing risk to personnel. On-line fault diagnosis is important for early detection of incipient faults that may affect availability or decrease plant performance for long periods while undetected.

There are standard components inside each plant, such as heat exchangers, pumps, valves and fans, that may use standardized fault diagnosis. Consumer and industrial products such as printers, pumps and cars are examples of products already available with built-in diagnostic features. For mass-produced products the cost of standardized diagnostic systems is divided between many individual products. In the process industry, fault diagnostic systems are usually made to measure because each plant is unique. According to Venkatasubramanian et al. [Ven03a], the desirable features of fault diagnostic systems in the chemical process industry are:

• Rapid detection and diagnosis.

• Isolability to distinguish between faults.
• Robustness to noise and uncertainties.

• Novelty identifiability: To decide whether the process is in a normal or abnormal state, and if in an abnormal state whether the fault is known or novel.

• The ability to calculate a probability of classification error - to increase operator confidence.

• Adaptability to changes in production and to retrofitting, and to consider new cases when information becomes available.

• The ability to explain how a fault originated and propagated to the current situation.

• Modeling that requires minimum effort on the part of the operator.
• Reasonably balanced storage and computational requirements.
• Ability to identify multiple faults.

In addition, to meet requirements the system should be stable in the long term. This means that the system needs to be easy to maintain and to keep up to date. The issue of long-term stability seems to be considered an operational problem rather than a system problem.

The applicability of a fault diagnosis system is limited to when the plant is in operation. Fault diagnosis systems may be used for early detection and may be part of maintenance on demand, but the systems are limited to, and developed for, a selection of fault types. These faults are selected on the basis of the balance between the impact of the fault on personal safety and economic losses on the one hand, and the cost of development and the anticipated benefit of the system on the other.

Other limitations on fault diagnosis lie in the measurement system, for example the ratio between the number of sensors and the number of plant phenomena to be diagnosed, and the reliability and sampling frequency of the sensors. Table 2-1 below summarizes knowledge requirements, applicability, and limitations for some fault diagnosis approaches. The table, adapted from [Car01], has been extended with BN and mass and heat balance models.


Table 2-1. Knowledge requirements, applicability, and limitations of selected fault diagnosis approaches. Table based on [Car01] with my extension in bold text.

Approach | Knowledge requirements | Applicability | Limitations
Quantitative observer-based approach | Mathematical process models. System identification. | Sensor and actuator faults | Non-trivial residual evaluation, high knowledge requirements
Qualitative multiple-model approach | Mathematical process models. System identification. | Modelled fault scenarios | High knowledge requirements of the fault scenarios
Qualitative SDG-approach | Qualitative description of process interactions | Any fault, where the direct effect in the system is known | Moderate knowledge requirements, time-consuming to model, ambiguous diagnoses
PCA-based signal validation | Process data from normal operation | Sensor validation | Does not apply to non-linear processes, non-trivial residual evaluation
PCA-based fault scenario classification | Process data from fault scenarios | Previous fault scenarios with sufficient data | Does not handle non-linear processes, limited to old fault scenarios
Neural network-based signal validation | Process data from normal operation | Sensor validation | Does not handle actuator faults well
Neural network-based fault scenario classification | Process data from fault scenarios or accurate fault model | Previous fault scenarios with sufficient data | Limited to old fault scenarios, non-transparent
Bayesian network-based signal validation | Process data from normal operation | Sensor validation | May be used on old sensor faults and incorporate statistical data on faults for validation
Bayesian network-based fault scenario classification | Process data from fault scenarios | Previous fault scenarios with sufficient data and expert knowledge | May be used on old fault scenarios and faults for which expert knowledge is available
Heat and mass balance models | Mathematical process models. | Sensor faults or model faults | Very high knowledge requirements, difficult to distinguish sensor and model fault

Venkatasubramanian et al. have classified the methods available for fault diagnosis. The basis of this classification is the a priori information used. In the hypothesis of this thesis, quantitative and qualitative methods refer respectively to methods based on process history and methods based on physical principles (Figure 2-3). The methods used for different tasks in the papers are indicated by bold text in Figure 2-3.

Figure 2-3. Classification of diagnostic algorithms [Ven03a]. The algorithms used in this thesis are in bold text.

The bold text in Figure 2-3 refers to model types and methods used in the research projects presented as papers in this thesis. No single method is capable of handling all the requirements of a diagnostic system. The desirable features of a fault diagnosis system listed above, together with the classification tree in Figure 2-3, can be represented as a table listing the desired features against selected methods. The table appearing in [Ven03c] is here extended by the author with columns for BNs used in dialogue with the operator and for qualitative physics. Table 2-2 shows that using ANN, BN and qualitative physics together covers a larger set of the desirable features than any of these methods covers on its own. This knowledge has been applied in the combined method presented in Paper II. The table also reveals other combinations that could be exploited.

Many fault diagnosis methods have been developed based on dynamic state-space models that belong to the class of qualitative model-based methods. These methods are less useful for faults that develop slowly over long time periods. Slowly developing faults are best modelled with steady-state models and detected by non-dynamic methods [ThL91]. For example, boiler fouling and steam turbine deterioration affect the steady-state values rather than the dynamics of the process. Sensor readings are often collected at a low sampling rate, which is not optimal for dynamic models. The data may be logged once an hour and/or filtered through a dead-band filter, which is inappropriate for the development of dynamic models.

[Figure 2-3 (tree diagram): diagnostic methods are divided into quantitative model-based methods (observers, extended Kalman filters, parity space), qualitative model-based methods (causal models such as digraphs and fault trees, abstraction hierarchy with structural and functional models, and qualitative physics), and process history based methods (qualitative: expert systems and qualitative trend analysis; quantitative: statistical methods such as principal component analysis and statistical classifiers, and neural networks).]


Table 2-2. Comparison of diagnostic methods. Key: √ - favorable; × - not favorable; ? - situation dependent [Das00]. Columns for BN and Qualitative physics in bold are extensions of the original table.

Feature | Observer | Digraphs | Abstraction hierarchy | Expert systems | QTA | PCA | NN | BN | Qualitative physics
Quick detection and diagnosis | √ | ? | ? | √ | √ | √ | √ | × | ×
Isolability | √ | × | × | √ | √ | √ | √ | √ | ×
Robustness | √ | √ | √ | √ | √ | √ | √ | √ | √
Novelty identifiability | ? | √ | √ | × | ? | √ | √ | √ | ×
Classification error estimate | × | × | × | × | × | × | × | × | ×
Adaptability | × | √ | √ | × | ? | × | × | √ | √
Explanation facility | × | √ | √ | √ | √ | × | × | √ | ×
Modeling requirements | ? | √ | √ | √ | √ | √ | √ | ? | ×
Storage and computational requirements | √ | ? | ? | √ | √ | √ | √ | × | √
Multiple fault identifiability | √ | √ | √ | × | × | × | × | × | ×

Bearing in mind the desirable features and the coverage of these features by the different methods (Table 2-1 and Table 2-2), it is clear that no single method can provide all the desirable features. A combination of methods is therefore necessary.

The combination of ANN, BN and qualitative physics provides most of the desirable features, but it is only one of many suitable combinations of methods. Many more combinations are available if all of the methods listed in Figure 2-3 are considered. Each combination needs to be chosen for the particular task to be carried out and the features desired.

2.3.4 Deterioration in heat and power plants

Deterioration in heat and power plants is divided here into three categories: process deterioration, sensor deterioration and process model deterioration. Process deterioration is exemplified here by a boiler and a steam turbine. In the boiler, the biofuel composition and combustion conditions cause fouling on heat transfer surfaces. In the steam turbine, the feedwater chemistry causes deposits and corrosion in the steam path.

The sensors are exposed to the environment in the steam system or flue gas channel. The effect is distorted sensor readings because of bias, drift and changes in variance. The process models deteriorate because of their inability to model changes in the actual plant. This deterioration is described in greater detail below.

Fouling of superheater

The theory of boiler fouling and deposits presented here is derived from the literature studies presented in Paper II and Paper III. Experience was also gathered in 2005 from data on fouling build-up over a period of several years on Boiler 5 at Mälarenergi AB in Västerås. The high-pressure superheater 2 that was studied is situated directly after the sand separator and was inspected visually during the revision after the operating season.

Biofuels include, for example, wood chips, bark, branches, treetops, recycled wood and food industry waste. There are large variations in fuel composition and particle size distribution. Most biofuels are known to have a high alkali content, which is closely correlated with the build-up of deposits on boiler walls and superheater surfaces. Deposits on superheaters were found to be due to unburnt organic matter in the vapour state, melts and solid particles that may deposit on heat transfer surfaces when transported in the flue gas [San07]. Deposits on superheater tubes sinter if not removed, leaving a ceramic-like layer growing outwards from the tube. Clean tubes (Figure 2-4) build up substantial deposits during the operational season (Figure 2-5). The hard deposits are covered by not yet sintered material that is loose and can be removed during the soot blowing cycle. In the short term soot-blowing keeps the tubes clean, but over several months the deposits grow and decrease the heat transfer coefficient between the flue gases and the tubes.

Figure 2-4. Clean superheater 2 in Boiler 5. Courtesy of Mälarenergi and Jan Sandberg 2005.

Figure 2-5. Deposits on superheater 2 in Boiler 5 at the end of the operational season. Courtesy of Mälarenergi and Jan Sandberg 2005.


It has been found that if the fuel mixture and its properties are chosen without regard to fouling, the rate of build-up may be considerably higher than normal. The deterioration of process performance in this case is the shift of heat transfer between the heat exchanging parts due to fouling. In practice this means that the heat transferred in the superheaters decreases as the deposit layer grows; the heat not taken up in the superheaters is instead taken up later in the flue gas channel, in the economizers, but at a lower temperature and at a cost to plant efficiency.
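As an illustration of how such fouling can be followed from routine measurements, the sketch below estimates the overall heat transfer coefficient (UA) of a superheater from steam-side and flue gas-side temperatures and derives a fouling resistance by comparison with the clean condition. The counter-flow LMTD formulation, the constant specific heat and all numerical values are simplifying assumptions made for illustration, not data or models from Boiler 5.

```python
import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for an assumed counter-flow arrangement."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    return (dt1 - dt2) / math.log(dt1 / dt2)

def ua_from_measurements(m_dot_steam, cp_steam, t_steam_in, t_steam_out,
                         t_gas_in, t_gas_out):
    """Estimate the overall heat transfer coefficient UA [kW/K] from the
    steam-side heat duty and the log-mean temperature difference."""
    q = m_dot_steam * cp_steam * (t_steam_out - t_steam_in)   # kW, assumes constant cp
    return q / lmtd(t_gas_in, t_gas_out, t_steam_in, t_steam_out)

# Illustrative values only (not plant data): clean vs. fouled condition.
ua_clean = ua_from_measurements(20.0, 2.3, 350.0, 420.0, 780.0, 620.0)
ua_now = ua_from_measurements(20.0, 2.3, 350.0, 405.0, 780.0, 640.0)
fouling_resistance = 1.0 / ua_now - 1.0 / ua_clean   # K/kW, grows as deposits build up
print(ua_clean, ua_now, fouling_resistance)
```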

Steam turbine deterioration

The steam turbine is a high performance rotating machine and is one of the major investment costs in a heat and power plant. It is therefore important to monitor its performance in order to prevent and detect damage which may lead to costly repairs or breakdown of the turbine.

Studies on deterioration of steam turbines have included solid particle erosion (SPE) [Maz85] and erosion on stop valve by-pass discs [Bel88]. It has long been agreed that by-passing the turbine during boiler start-up is one of the best methods to avoid SPE. Strong arguments have recently been presented by a turbine manufacturer that the turbine first stage is prone to particle erosion due to its design, and that the best remedy is a design that allows the particles to pass without damaging the stage [Hol03]. Similar reasoning about SPE and other detailed information about deterioration and its effects on performance are presented in [Cot98].

Common causes of deterioration in the steam turbine path are [Cot98], [Gen05]:

• Deposits
• Surface roughness
• Sealing leakage
• Internal leakage
• Solid particle erosion

The effects of these causes, and of countermeasures against them, on a German steam turbine are shown in Figure 2-6 below.


Figure 2-6. Example of deterioration in steam turbine [Wal88].

Literature studies have been conducted in order to understand the common causes and effects of deterioration in steam turbines [Cot98], [Ley97], [Gen05], [McC89]. Studies of selected deterioration scenarios were performed using the detailed steam turbine simulation model, the basics of which are described in Appendix B. The model was revalidated during these studies.

In addition to the causes listed above, foreign object damage is a factor in steam turbine deterioration. It is caused by large objects hitting the first stage in the turbine, particularly during start-up after maintenance work in the steam system or turbine. Strainers may be installed to stop foreign objects from reaching the turbine, but these can cause unwanted pressure losses during operation.

To date, none of these causes of deterioration have been reported for the modelled ENA Energi steam turbine. Knowledge of the deterioration cases in the list is derived from the literature. With the gathered knowledge it was possible to simulate the cases by manipulating parameters in the model and analyzing the output.
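The principle of such scenario studies can be illustrated with the minimal sketch below, in which a single stage-efficiency parameter is degraded and the effect on shaft power is observed. The single-step expansion and the numerical values are simplifying assumptions made for illustration and do not represent the detailed ENA Energi turbine model.

```python
def stage_power(m_dot, h_in, h_out_isentropic, eta_is):
    """Mechanical power [kW] of one expansion step, from the isentropic
    enthalpy drop [kJ/kg] and an isentropic efficiency."""
    return m_dot * eta_is * (h_in - h_out_isentropic)

# Illustrative values only: 30 kg/s steam and a 150 kJ/kg isentropic drop.
baseline = stage_power(30.0, 3300.0, 3150.0, eta_is=0.88)

# Emulate deterioration (e.g. deposits or erosion) as a loss of stage efficiency.
for loss in (0.00, 0.01, 0.02, 0.05):
    degraded = stage_power(30.0, 3300.0, 3150.0, eta_is=0.88 - loss)
    print(f"efficiency loss {loss:.2f} -> power {degraded:.0f} kW "
          f"({degraded - baseline:+.0f} kW vs. baseline)")
```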

Model deterioration

Process and model deterioration are closely related. Process deterioration such as turbine blade erosion is an unwanted change in the turbine and shows up as reduced turbine efficiency. Deterioration of the process model is the increase in model error due to its inability to follow changes in the physical process it is supposed to mimic. The definition of model deterioration is natural to use if it is recognized that the actual process is the one to mimic even if it deviates from the initial state due to fouling, erosion, wear and tear. Process model deterioration is a model error that increases over time.

The initial model error originates from the approximation of the physical world through assumptions and simplifications, and from the quality of the data used for tuning or training and evaluation. In operation the model error may also increase temporarily. This may be due to factors such as:

• The use of a steady state model when the plant is operating under non steady state conditions.

• Differences in topology between the model and the plant (e.g. opening and closing of valves during start-up procedure).

The deterioration of the process model is managed by tuning, i.e. by solving a parameter estimation problem to decrease the model error. Problems to be solved before adaptation to deterioration in process models are described in Appendix A: detection of gross errors, isolation of gross errors and data reconciliation. The adaptation itself is performed by solving the parameter estimation problem. There are few examples of implementations of model-based process model adaptation in real processes. The error-in-variables problem is one formulation used for this purpose, and methods for solving it have been developed by Kim et al. [Kim90]; Chen [Che98] proposed and developed a method based on solving a sequence of non-linear programming problems for the tasks of data reconciliation, parameter estimation and plant economic optimisation. More examples are found in Paper I. Important obstacles identified for adaptation to model deterioration are listed below (a simple data-screening sketch follows the list):

• Pre-treatment of data and selection of datasets to reject unreliable data for adaptation.

• Deterioration of sensor readings.

• Varying process conditions, such as a change from one fuel quality to another or switching equipment on and off.

• Solving the parameter estimation problem on-line, which is mathematically challenging for complex process models.
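As a simple illustration of the first obstacle, the sketch below screens a measurement series and keeps only windows that are close to steady state as candidates for adaptation. The window length and the relative-variability threshold are assumptions that would need plant-specific tuning.

```python
from statistics import mean, stdev

def steady_state_windows(values, window=30, rel_threshold=0.005):
    """Yield (start_index, window_mean) for windows whose relative standard
    deviation is below the threshold, i.e. candidates for model adaptation."""
    for start in range(0, len(values) - window + 1, window):
        chunk = values[start:start + window]
        m = mean(chunk)
        if m != 0 and stdev(chunk) / abs(m) < rel_threshold:
            yield start, m

# Illustrative signal: a load change (ramp) followed by a steady period.
signal = [50 + 0.5 * i for i in range(60)] + [80.0 + 0.05 * (i % 3) for i in range(60)]
print(list(steady_state_windows(signal)))   # only the steady windows are returned
```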

To handle deterioration in model-based approaches the methods of extended modelling, extended instrumentation or solving parameter estimation are used.
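A minimal sketch of on-line parameter tracking is given below: a scalar model parameter (for example a heat transfer coefficient) is updated recursively with exponential forgetting, so that old data gradually lose influence as the process deteriorates. The forgetting factor and the synthetic data are assumptions made for illustration only.

```python
import random

def track_parameter(observations, forgetting=0.9):
    """Exponentially weighted recursive estimate of a scalar parameter,
    so that old data are gradually forgotten as the process changes."""
    estimate = None
    history = []
    for obs in observations:
        estimate = obs if estimate is None else forgetting * estimate + (1 - forgetting) * obs
        history.append(estimate)
    return history

# Illustrative data: a UA value drifting slowly downwards with measurement noise.
random.seed(1)
ua_obs = [10.0 - 0.01 * k + random.gauss(0, 0.1) for k in range(200)]
print(track_parameter(ua_obs)[-1])   # follows the drift, ending near the true value of about 8.0
```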

Data-driven models that deteriorate may generally be replaced by models trained on data representing the current process behaviour to track the changes in the relationships between variables. Adaptive data-driven models can add new data and update the model without retraining the entire model. BNs have this capability [Jen01].


Heat and mass balance models can follow deterioration in the physical process if there are parameters in the model that account for the deterioration by changing their values. An example of such a parameter is the heat transfer coefficient in a heat exchanger. If the parameter estimation problem is solved on-line the model is continuously updated and the process model deterioration can be reduced or even eliminated. An alternative to solving the parameter estimation problem is to extend the model to include the deterioration mechanism, introducing functions that describe the development of process deterioration over time. Extending the instrumentation to measure the process deterioration (e.g. fouling thickness, fouling heat transfer coefficient) is also an efficient way to reduce model deterioration. If measurements of deterioration cannot be conducted on-line they can be made during pauses in operation; this has been proposed for turbine clearances that are used to estimate internal leakage [Kub02].

Sensor deterioration

Sensors are commonly used for control, monitoring and alarm generation, and are increasingly used for diagnostics. They work in hostile environments and are subject to high temperatures and pressures. In a heat and power plant, mass flows, temperatures and pressures are typically measured throughout the steam, flue gas and feedwater systems. The number of sensors is in the range of hundreds to tens of thousands depending on the complexity of the plant. The sensor is the first link in the chain of information from measured property through to the value shown on the operator screen.

Failures may occur in the sensor itself, device electronics, the collection of data in the distributed control system (DCS) or in algorithms applied before the data is finally presented on the operator screen or stored in the plant measurement database. The latter parts of the system have not been studied in this thesis, but appear as a component of the bias and noise for sensors in the general model for sensor output presented in Equation 2-1, where y is the sensor reading, x the true value, and b is the bias in the measurement due to bad location, installation, deterioration, and calibration error. The random measurement error component e is in many cases assumed to be normally distributed around zero, and may therefore be treated using well-known statistical methods.

y = x + b + e     Equation 2-1
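As an illustration of the statistical treatment of the random error term, the sketch below fuses redundant readings of the same property into a variance-weighted estimate of the true value, assuming the bias terms are negligible or have already been corrected. The sensor values and variances are illustrative assumptions, not plant data.

```python
def fused_estimate(readings, variances):
    """Maximum-likelihood estimate of the true value x from redundant readings
    y_i = x + e_i, weighting each sensor by the inverse of its noise variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * y for w, y in zip(weights, readings)) / sum(weights)

# Three redundant temperature sensors (illustrative values, degrees C).
readings = [452.1, 453.0, 449.8]
variances = [0.2, 0.5, 4.0]      # the last sensor is noisy, so it gets little weight
print(fused_estimate(readings, variances))
```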

The calibration error and measurement noise specified by the manufacturer are generally very small in comparison to the noise and bias that arise once the device is installed in the plant. Noise induced by electrical equipment and upstream process disturbances is often not considered. Appendix A contains more detailed information on the development of detection of gross errors, isolation of errors and data reconciliation of sensor measurements. Here follows a short description of deterioration in common sensors for measuring temperature, pressure and flow in the process industries.

Temperature is the most commonly measured property in process industries and the common standard sensors apply either of two major principles:

• Proportional change of electrical resistance with temperature in metals such as platinum (e.g. the Pt100 sensor), applied in the low to medium temperature ranges.

• Proportional change with temperature of the galvanic potential between two different materials (thermocouples), applied almost throughout the temperature range -200°C to +1700°C.

Other temperature measuring techniques are for example the acoustic pyrometer, which measures density changes in the gas, and the use of heat cameras sensitive to IR-radiation. Both can be used to measure temperatures in a plane when point measurements are not sufficient. These technologies have so far been too expensive for permanent use in monitoring.

Large biases can occur in measurements even when no component is malfunctioning; these are typically related to installation. During operation, deterioration affects the measurements through instability over time, hysteresis, changed response time and oxidation (Table 2-3). A typical drift value for a Pt100 sensor is 0.05°C per year according to the manufacturer Pentronic.

Table 2-3. Sources of errors in a Pt100 sensor. Adapted from [Pen08].

Source of error                     Error contribution interval (°C)
Sensor construction/installation    0.1 – 3
Pt100 sensor tolerance              0.03 – 0.3 (at 0 °C)
Lead configuration, 2-wire          0 – 5
Instrumentation                     0.02 – 3

Thermocouples are sensitive to green rot, a type of corrosion that occurs around the wires at temperatures of 800-1100°C and at low oxygen levels. The resulting drift is relatively slow but, if not acted upon, may increase measurement errors to tens of degrees, and to 50°C or more in extreme cases. In addition to the above sources of error, fouling of the measurement pocket affects response time and absolute values because it insulates the sensor from the fluid temperature.
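A simple way to flag such slow drift on-line is sketched below: the deviation between the monitored thermocouple and a trusted reference measurement is accumulated in a one-sided CUSUM, and an alarm is raised when the sum exceeds a threshold. The slack and threshold values, and the synthetic data, are assumptions made for illustration only.

```python
import random

def cusum_drift_alarm(deviations, slack=0.5, threshold=5.0):
    """Return the index at which the one-sided cumulative sum of deviations
    (sensor minus reference, degrees C) exceeds the alarm threshold, else None."""
    cusum = 0.0
    for i, d in enumerate(deviations):
        cusum = max(0.0, cusum + d - slack)
        if cusum > threshold:
            return i
    return None

# Illustrative data: the sensor starts drifting upwards after sample 100.
random.seed(0)
devs = [random.gauss(0, 0.3) for _ in range(100)] + \
       [0.05 * k + random.gauss(0, 0.3) for k in range(100)]
print(cusum_drift_alarm(devs))   # alarm index shortly after the drift begins
```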
