
2. Autonomous Process Control

2.5 Loop Monitoring and Fault Diagnosis

Filtering is also used to avoid aliasing effects in sampled-data systems. The cut-off frequency of the anti-aliasing filter is coupled to the sampling interval, which implies that the filter should be altered when the sampling interval of the controller is changed. This is normally not possible, however, since the anti-aliasing filter is an analog filter sitting just outside the IO board of the computer. The problem can be solved by sampling all signals fast behind a fixed anti-aliasing filter, and then using decimation to obtain sampling intervals that match each control loop.
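The fixed-filter-plus-decimation scheme can be sketched as follows. The moving-average filter and the sampling rates are illustrative choices only; a real implementation would use a properly designed digital low-pass filter matched to the new Nyquist frequency:

```python
import numpy as np

def decimate(signal, factor):
    """Crude decimation sketch: a moving average of length `factor`
    acts as a digital anti-aliasing filter before every `factor`-th
    sample is kept."""
    kernel = np.ones(factor) / factor
    filtered = np.convolve(signal, kernel, mode="same")
    return filtered[::factor]

# All signals are sampled fast (here 100 Hz) behind a fixed analog
# filter; a loop running at 10 Hz gets a decimated copy:
t = np.arange(0.0, 1.0, 0.01)
fast = np.sin(2 * np.pi * t)       # 1 Hz test signal, 100 Hz sampling
slow = decimate(fast, 10)          # 10 Hz samples for the slow loop
```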

• Noise level monitoring. If the noise level increases dramatically, or if it becomes very small, the sensor or some wiring is probably broken.
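A minimal sketch of such a noise-level check; the thresholds are purely illustrative and would in practice be set relative to the normal noise level of the particular sensor:

```python
import statistics

def noise_alarm(samples, low=1e-6, high=1.0):
    """Flag a suspicious noise level in a window of measurements.
    Returns an alarm string, or None if the variance looks normal.
    `low` and `high` are illustrative variance thresholds."""
    var = statistics.pvariance(samples)
    if var < low:
        return "noise too small: sensor or wiring probably broken"
    if var > high:
        return "noise too large: sensor or wiring probably broken"
    return None
```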

These alarms typically use very little computing power, and operate on a very basic level. It is up to the higher levels in the control system to decide which alarms are actually useful, and only implement those. There should thus be some supervisory function that uses the alarms in some way. If, for example, the noise level has become very small, the Loop Manager should do at least one of the following:

• Warn the operator that the sensor may be broken.

• Perform some simple experiment, for example a set point change, to see if the sensor value changes. Before such an experiment is performed, it might have to be accepted on the unit operation level.

• Pass the alarm to the unit operation level, which may use the alarm to explain errors in neighboring control loops and confirm the fault to the Loop Manager.

If an experiment or some higher level reasoning confirms that the sensor or wiring is broken, the instrument engineer should be notified, and the hardware should be repaired.
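The supervisory handling of a low-noise alarm could be organized along these lines. The callback interface here is entirely hypothetical, standing in for whatever the Loop Manager and the unit operation level actually expose:

```python
def react_to_low_noise(warn, run_experiment, escalate):
    """Hypothetical Loop Manager policy for a 'noise level very small'
    alarm, following the three options above: warn the operator, try a
    simple experiment (e.g. a set point change, if the unit operation
    level permits it), and otherwise escalate.

    `run_experiment` returns True if the sensor value responded,
    False if it did not, and None if the experiment was not allowed."""
    warn("sensor may be broken")
    responded = run_experiment()
    if responded is False:
        return "fault confirmed: notify the instrument engineer"
    if responded is None:
        escalate("unexplained low-noise alarm")
    return "no sensor fault confirmed"
```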

Performance assessment

The alarms discussed in the previous section provide low-level information about the status of the control loop. They may, for example, cover the most severe errors, when the control loop has more or less stopped functioning. It is, however, more difficult to devise simple alarms that give a more detailed picture of the quality of the control. This is the motivation for performance assessment methods. The normal use of these methods is to continuously update the performance measure and compare it with some defined acceptable level. If bad control performance is detected, an alarm is sent to the Loop Manager. In this respect, performance assessment algorithms do not differ from the low-level alarms discussed above; there is thus no clear distinction between alarm generation and performance assessment.

There are different classes of methods within the performance assessment category, for example:

• Variance-based methods according to Harris (1989) and numerous followers.

• Detection of oscillations, for example Hägglund (1995).

• Methods for detecting overdamped control, see Hägglund (1997a).

The variance-based methods originate with Harris (1989), who suggested that control performance should be measured by comparing the current variance of the output with the one obtained by a minimum variance control law, Åström (1967). Harris also showed that this minimum variance can be estimated irrespective of the control law currently in use, as long as the dead time of the process is known. Several authors have suggested improvements and modifications to the original algorithm. Lynch and Dumont (1996) use a Laguerre network for estimating the coefficients in the noise description, and an on-line estimation of the dead time. Tyler and Morari (1995) take the effect of unstable poles and non-minimum phase zeros into account. Horch and Isaksson (1999) replace the implicit dead-beat assumption in the minimum variance control law by a more realistic pole placement. Harris et al. (1996) extend the measure to multivariable plants. Some of the methods have been implemented in large-scale plants, with reported success.
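A rough sketch of the variance-based idea: estimate the minimum achievable variance from an AR model of the output and the known dead time, then form the ratio against the actual variance (close to 1 under minimum variance control, smaller for worse control). The plain least-squares AR fit and the fixed model order are simplifications of the published estimators:

```python
import numpy as np

def harris_index(y, dead_time, ar_order=10):
    """Estimate eta = sigma_mv^2 / sigma_y^2 from closed-loop output data.
    Fits an AR model y[t] = a1*y[t-1] + ... + ap*y[t-p] + e[t] by least
    squares; the first `dead_time` impulse-response coefficients of
    1/A(q) give the minimum variance any controller could achieve."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    n, p = len(y), ar_order
    # Regression matrix of lagged outputs, one column per lag.
    X = np.column_stack([y[p - k - 1 : n - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    e = y[p:] - X @ a                     # innovation estimate
    # Impulse response of 1/A(q): psi[0] = 1, psi[j] = sum_i a[i]*psi[j-1-i].
    psi = [1.0]
    for j in range(1, dead_time):
        psi.append(sum(a[i] * psi[j - 1 - i] for i in range(min(j, p))))
    sigma_mv2 = e.var() * sum(c * c for c in psi)
    return sigma_mv2 / y.var()
```

For white-noise output, which is already minimum variance when the dead time is one sample, the index comes out close to 1; for a strongly correlated output it is much smaller, signaling room for improvement.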

The other methods presented above are not based on stochastic control theory, but take a more pragmatic view. The oscillation detection algorithm in Hägglund (1995) repeatedly calculates the integrated absolute error (IAE) between two consecutive zero crossings of the control error.

If this sequence contains large values of the IAE during a limited time, this is interpreted as an oscillation of the control loop. The method is implemented in commercial controllers from ABB Automation Products.
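The IAE-between-zero-crossings idea can be sketched as follows. The limits are tuning choices for illustration, not the values used in the published method or the commercial implementation:

```python
import numpy as np

def detect_oscillation(error, dt, iae_limit, count_limit=5):
    """Oscillation detection sketch in the spirit of Hägglund (1995):
    accumulate the IAE of the control error between consecutive zero
    crossings; many large IAE values in a row indicate oscillation."""
    iae, count = 0.0, 0
    prev = error[0]
    for e in error[1:]:
        iae += abs(e) * dt
        if e * prev < 0:           # zero crossing of the control error
            count = count + 1 if iae > iae_limit else 0
            if count >= count_limit:
                return True
            iae = 0.0
        prev = e
    return False

t = np.arange(0.0, 20.0, 0.01)
oscillating = np.sin(np.pi * t)    # sustained oscillation, period 2 s
```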

The performance assessment methods typically have most of their calculations executing in hard real-time. The variance-based methods use recursive estimation of the noise model in order to estimate the minimum achievable variance. The oscillation detection algorithm calculates the IAE sample by sample. However, it is mostly not critical that the bad performance is actually detected exactly when it occurs for the first time.

This is especially true since performance typically deteriorates gradually, and there is probably a long time when the methods “almost” signal for bad performance. It should thus mostly be sufficient to send batches of on-line data to the Loop Manager on some regular basis and then perform the calculations without timing constraints.

Control loop diagnosis

The performance assessment algorithms discussed above are supposed to detect unsatisfactory control. However, none of them try to find the causes of the bad control. This is instead a task for fault detection and isolation (FDI) methods. When a control loop is performing badly, without being totally out of order, this is normally caused by one of the following reasons:

• The controller parameters are not set properly.

• External disturbances cause large variations which cannot be taken care of by the controller.

• Non-linearities such as valve friction induce oscillations in the control loop.

• The current controller structure is not able to control the process with acceptable performance.

Many different approaches to FDI exist. Traditional model-based methods are mostly based on residual generation and analytical redundancy; see Frank (1990) for a survey. Methods of this kind require a fairly accurate quantitative model of the nominal plant behavior, as well as of the behavior when faults are present. This is mostly not available for a typical control loop in the process industry. Neural networks and knowledge-based methods are other approaches that are often used; see for example Frank and Koeppen-Seliger (1997).
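For concreteness, a residual in the analytical-redundancy sense is simply the mismatch between measured and model-predicted output. The first-order model and its coefficients below are purely illustrative:

```python
def residuals(y, u, a=0.9, b=0.1):
    """One-step-ahead residuals against a nominal first-order model
    y[t+1] = a*y[t] + b*u[t] (coefficients are illustrative).
    Fault-free data gives residuals near zero; a systematic deviation
    points to a fault (or a model error)."""
    return [y[t + 1] - (a * y[t] + b * u[t]) for t in range(len(y) - 1)]

# Fault-free data generated by the nominal model itself:
u = [1.0] * 10
y = [0.0]
for t in range(9):
    y.append(0.9 * y[t] + 0.1 * u[t])
```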

In most cases, the traditional FDI methods use multiple sensor readings to distinguish between different faults. Here, we would instead like to perform diagnosis on the local control loop level, using only the control signal and the process value. The rather specific nature of this FDI problem has inspired some tailored methods. For example, Thornhill and Hägglund (1997) use harmonics analysis to find a characteristic signature of an oscillating control loop. Horch (1999) shows that correlation analysis can be used to distinguish valve-induced oscillations from other ones. These methods use only on-line data for the diagnosis. In Wallén (1997), a sequence of off-line experiments, including renewed loop assessment and controller tuning, is suggested in order to distinguish between different causes of bad control. It should not be considered a drawback to temporarily stop the PI(D) control, as long as the experiments are monitored properly.