
Robust Fault Isolation With Statistical Uncertainty in Identified Parameters

Jianfei Dong, Michel Verhaegen and Fredrik Gustafsson

Linköping University Post Print

N.B.: When citing this work, cite the original article.

©2012 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

Jianfei Dong, Michel Verhaegen and Fredrik Gustafsson, Robust Fault Isolation With Statistical Uncertainty in Identified Parameters, 2012, IEEE Transactions on Signal Processing, (60), 10, 5556-5561.

http://dx.doi.org/10.1109/TSP.2012.2208639

Postprint available at: Linköping University Electronic Press


Robust Fault Isolation with Statistical Uncertainty in Identified

Parameters

Jianfei Dong, Michel Verhaegen, and Fredrik Gustafsson

Abstract

This paper is a companion paper to [1] and extends [1] to fault isolation. As in [1], use is made of a model representation of the input-output behavior of the nominal (i.e. fault-free) system that is linear in the parameters. The projection of the residual onto directions sensitive only to individual faults is robustified against the stochastic errors of the estimated model parameters. The paper considers additive error sequences on the input and output quantities, which represent failures such as drifting, biased, stuck, or saturated sensors/actuators.

Index Terms

Fault isolation; Parameter uncertainty; Statistical analysis; Additive faults; Closed-form solution.

I. INTRODUCTION

In the classical FDI literature, fault isolation is usually enabled by projecting a residual vector onto the left null space of all but one of the fault input directions in the matrix that maps faults to outputs (referred to as the fault transfer matrix in what follows), e.g. [2], [3]. But if these projection vectors are identified from data, as in [3], it is difficult to quantify the statistical distribution of this solution against identification uncertainty. In this paper, we develop a new optimization-based solution, which searches for the projection directions in the subspace spanned by the non-principal components of the error covariance matrix of the identified fault transfer matrix. In other words, the residual vectors are projected onto the least variant subspace of the error covariance matrix, where the components of the identified parameters are most likely to be close to their true values.

Corresponding author J. Dong, email: jfeidong@hotmail.com. J. Dong was with Delft University of Technology when this work was carried out, and was with Philips Research, 5656 AE, Eindhoven, The Netherlands. M. Verhaegen is with Delft University of Technology, 2628 CD, Delft, The Netherlands. F. Gustafsson is with the Department of Electrical Engineering, Linköpings Universitet, SE-581 83 Linköping, Sweden.


The rest of the paper is organized as follows. We start in Sec. II with the preliminaries and problem formulation. Sec. III then derives a closed-form optimal isolation solution against the parameter identification errors. Sec. IV shows the improvements in fault isolation performance achieved by our robustified method on aircraft dynamics. The notation in this paper is the same as that defined in [1, Sec. II.A].

II. PRELIMINARIES AND PROBLEM FORMULATION

A. Fault isolation connected to the VARX description

We consider the following discrete-time state-space model with additive faults:

x(k + 1) = Ax(k) + B[u(k) + f_a(k)] + Fw(k), (1)

y(k) = Cx(k) + f_s(k) + v(k). (2)

Here, x(k) ∈ R^n, y(k) ∈ R^ℓ, u(k) ∈ R^m; f_a(k) ∈ R^m and f_s(k) ∈ R^ℓ respectively stand for additive actuator and sensor faults. For brevity, we collect all the faults into f(k) ≜ [f_a^T(k), f_s^T(k)]^T ∈ R^{m+ℓ}, and denote n_f ≜ m + ℓ. Compared with the more general model in [1, Eqs. (1,2)], the fault model here corresponds to the case where E = [B, 0] and G = [0, I]; i.e. actuator faults share the same input channels with the control signals, and sensor faults add directly to the output measurements. This model can describe many commonly encountered additive faults, e.g. drifting, biased, stuck, or saturated actuators and sensors. The advantage of this model is that the Markov parameters from f(k) to y(k) are equal to those from u(k) and y(k) to y(k), and can hence be estimated from I/O data.
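As a quick numerical illustration of the additive-fault model (1),(2), the sketch below simulates a small discrete-time system with a stuck-type actuator fault. The system matrices, noise levels, and fault interval are arbitrary toy values, not the VTOL model of Sec. IV.

```python
import numpy as np

# Toy simulation of the additive-fault model (1),(2):
#   x(k+1) = A x(k) + B [u(k) + fa(k)] + F w(k)
#   y(k)   = C x(k) + fs(k) + v(k)
# All matrices below are assumed toy values, not the paper's VTOL model.
rng = np.random.default_rng(0)
n, m, ell = 2, 1, 1                              # states, inputs, outputs
A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
F = 0.1 * np.eye(n)

N = 400
x = np.zeros(n)
y_log = np.zeros((N, ell))
for k in range(N):
    u = np.array([np.sin(0.05 * k)])             # excitation input
    fa = np.array([-3.0]) if 200 <= k < 300 else np.zeros(m)  # stuck actuator
    fs = np.zeros(ell)                           # no sensor fault in this run
    w, v = rng.normal(0, 1, n), rng.normal(0, 0.01, ell)
    y_log[k] = C @ x + fs + v                    # output equation (2)
    x = A @ x + B @ (u + fa) + F @ w             # state update (1)
```

The fault enters through the same channel as u(k), so its effect on y(k) is governed by the same Markov parameters; this is exactly what makes the fault transfer matrix constructible from I/O-identified parameters.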

Under the existence conditions of the stabilizing Kalman gain K, as specified in [1, Assumption 1], a closed-loop observer form of (1),(2) is (with Φ ≜ A − KC)

x̂(k + 1) = Φx̂(k) + Bu(k) + [B  −K] f(k) + Ky(k),
y(k) = Cx̂(k) + [0  I] f(k) + e(k).

Here, e(k) is the innovation signal defined in [1, Sec. II.D], and has a covariance matrix Σ_e. As detailed in [1], a residual generator for fault detection along the detection horizon [k − L + 1, k] takes the form

r_{k,L} = (I − T_L^y) y_{k,L} − H_{L,p}^z z_{k−L,p} − T_L^u u_{k,L} (3)
        = ϕ_f + b_{k,L} + e_{k,L}. (4)

To avoid repetition, we refer to [1] for the definitions of the signal vectors r_{k,L}, u_{k,L}, y_{k,L}, b_{k,L}, e_{k,L}, ϕ_f and the parametric matrices H_{L,p}^z, T_L^u, T_L^y. We also denote these matrices with identified parameters by a bar on top, e.g. H̄_{L,p}^z. It is useful to recall that

ϕ_f = [H_{L,p}^f  T_L^f] · [f^T(k − L − p + 1), · · · , f^T(k)]^T.

Due to its role in mapping the fault signals to the outputs, we shall call [H_{L,p}^f  T_L^f] the fault transfer matrix, which will be explicitly specified later.

The fault detection method in [1] aims at detecting the change in the mean of r_{k,L} due to a nonzero ϕ_f. But to tell which of the various sensor and actuator components in the system are faulty, one needs to separate the contributions of these components to r_{k,L}. Let F denote the index set of the fault channels, i.e. F = {1, · · · , n_f}, and F\i = {1, · · · , i−1, i+1, · · · , n_f}. Denote f_{k,p+L}^i = [f_i(k − L − p + 1), · · · , f_i(k)]^T, where f_i(k) denotes the i-th fault component at instant k. Let f_{k,p+L}^{F\i} be the leftover of f_{k,p+L} after all its elements appearing in f_{k,p+L}^i are removed. Similarly, let H_{L,p}^{f,i} and T_L^{f,i} contain the columns of respectively H_{L,p}^f and T_L^f that correspond only to the i-th fault; and let H_{L,p}^{f,F\i} and T_L^{f,F\i} contain all the other columns of H_{L,p}^f and T_L^f. Then, ϕ_f can be written as

ϕ_f = [H_{L,p}^{f,i}  T_L^{f,i}] · f_{k,p+L}^i + [H_{L,p}^{f,F\i}  T_L^{f,F\i}] · f_{k,p+L}^{F\i}.

Similar to classical parity space methods [2], [3], isolating f_{k,p+L}^i from f_{k,p+L}^{F\i} can be achieved by designing a projection vector p_i ∈ R^{1×ℓL} such that

p_i · [H_{L,p}^{f,F\i}  T_L^{f,F\i}] = 0,  and  p_i · [H_{L,p}^{f,i}  T_L^{f,i}] ≠ 0. (5)

Projecting r_{k,L} onto such a p_i hence results in

r_{k,L}^{(i)} ≜ p_i · { (I − T_L^y) y_{k,L} − H_{L,p}^z z_{k−L,p} − T_L^u u_{k,L} } = p_i · { [H_{L,p}^{f,i}  T_L^{f,i}] · f_{k,p+L}^i + b_{k,L} + e_{k,L} },

which is only sensitive to the i-th fault component. The existence condition for a p_i satisfying (5) is that

rank[H_{L,p}^f  T_L^f] > rank[H_{L,p}^{f,F\i}  T_L^{f,F\i}]. (6)
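The nominal isolation conditions (5),(6) can be sketched numerically: take a direction in the left null space of the columns belonging to all other faults, and check that it stays sensitive to fault i. The matrix below is random toy data standing in for the structured fault transfer matrix; sizes and the choice of fault-i columns are arbitrary assumptions.

```python
import numpy as np

# Toy check of the nominal isolation conditions (5),(6): p_i annihilates the
# columns of all other faults but not those of fault i. A random matrix stands
# in for [H_{L,p}^f  T_L^f]; its first 3 columns play the role of fault i.
rng = np.random.default_rng(1)
rows, i_cols = 6, 3
M = rng.normal(size=(rows, 7))                 # toy fault transfer matrix
Mi, Mrest = M[:, :i_cols], M[:, i_cols:]       # fault-i columns vs. the rest

U, s, _ = np.linalg.svd(Mrest)                 # left null space of Mrest
rank = int(np.sum(s > 1e-10))                  # here rank = 4 < rows = 6
p_i = U[:, rank]                               # one left-null-space direction

assert np.allclose(p_i @ Mrest, 0, atol=1e-8)  # insensitive to other faults, (5)
assert np.linalg.norm(p_i @ Mi) > 1e-6         # still sensitive to fault i, (6)
```

A left null space of Mrest exists only because its rank is smaller than the number of rows, which mirrors the rank condition (6).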

B. Fault isolation design using uncertain identified parameters

The residual generator (3) is realized with identified parameters in the companion paper [1]. Projecting such a residual vector onto a direction p_i to isolate the i-th fault, i = 1, · · · , n_f, can then be computed by

r_{k,L}^{(i)} = p_i · { (I − T̄_L^y) y_{k,L} − H̄_{L,p}^z z_{k−L,p} − T̄_L^u u_{k,L} }. (7)

To design p_i, we first need to build and analyze the fault transfer matrix [H_{L,p}^f  T_L^f]. However, the true parameters H_{L,p}^f and T_L^f are unknown; the matrix is instead constructed from the identified Markov parameters of the nominal plant, i.e. Ξ̂ in [1, Eq. (16)]. Based on the model (1),(2) and its closed-loop observer form, [H_{L,p}^f  T_L^f] is derived and explicitly reads as follows:

[H_{L,p}^f  T_L^f] =
⎡ CΦ^{p−1}B  −CΦ^{p−1}K  ···  CΦB  −CΦK  CB  −CK  0  I                                ⎤
⎢            CΦ^{p−1}B  −CΦ^{p−1}K  ···  CΦB  −CΦK  CB  −CK  0  I                     ⎥
⎢                        ⋱                                        ⋱                    ⎥
⎣                        CΦ^{p−1}B  −CΦ^{p−1}K  ···  CΦB  −CΦK  CB  −CK  0  I         ⎦, (8)

where each of the L block rows repeats the same sequence of blocks, shifted one block column (i.e. m + ℓ columns) to the right with respect to the row above.

Then, based on the structure defined in (8), [H_{L,p}^f  T_L^f] can be constructed from the following sequence of parameters:

Ξ̂_f^id = [CΦ^{p−1}B  −CΦ^{p−1}K  · · ·  CB  −CK] = Ξ̂ · blockdiag(I_m, −I_ℓ, · · · , I_m, −I_ℓ). (9)

Here, blockdiag(M_1, M_2) = [M_1 0; 0 M_2] for square matrices M_1, M_2. For brevity, denote

I_sgn ≜ blockdiag(I_m, −I_ℓ, · · · , I_m, −I_ℓ) ∈ R^{p(m+ℓ)×p(m+ℓ)}.

Note that due to the parameterization of [H_{L,p}^f  T_L^f] by the identified Markov parameters Ξ̂, the identified fault transfer matrix [H̄_{L,p}^f  T̄_L^f] contains the errors ∆Ξ̂, as specified in [1, Eq. (17)]. We shall denote the error matrix [∆H̄_{L,p}^f  ∆T̄_L^f] ≜ [H_{L,p}^f  T_L^f] − [H̄_{L,p}^f  T̄_L^f], with a dimension of ℓL × (p+L)(m+ℓ). Due to the shifting structure expressed in (8), the errors in each block row of [∆H̄_{L,p}^f  ∆T̄_L^f] are simply a repetition of the error source ∆Ξ̂. We shall describe [∆H̄_{L,p}^f  ∆T̄_L^f] in a compact form of ∆Ξ̂ with the shifting structure information. To this end, first note that due to the definition of Ξ̂_f^id in (9), the errors in Ξ̂_f^id can be written as ∆Ξ̂_f^id = (Ξ − Ξ̂) · I_sgn = ∆Ξ̂ · I_sgn. On the other hand, G = [0, I] is constant. [∆H̄_{L,p}^f  ∆T̄_L^f] can hence be written as the block matrix whose i-th block row is

[0_{ℓ×(i−1)(m+ℓ)}  ∆Ξ̂_f^id  0_{ℓ×(L−i+1)(m+ℓ)}],  i = 1, · · · , L.

With some tedious but straightforward derivations, we can write this matrix in the compact form

(I_L ⊗ ∆Ξ̂_f^id) · (S_{p,L} ⊗ I_{m+ℓ}). (10)

Here, S_{p,L} ∈ R^{pL×(p+L)} represents the shifting matrix whose j-th block row is

[0_{p×(j−1)}  I_p  0_{p×(L−j+1)}],  j = 1, · · · , L. (11)
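The compact form (10) can be verified directly: building S_{p,L} from (11) and multiplying out (I_L ⊗ ∆Ξ̂_f^id)·(S_{p,L} ⊗ I_{m+ℓ}) reproduces the block layout in which every block row repeats ∆Ξ̂_f^id, shifted one (m+ℓ)-column block to the right. The sizes and the error source below are arbitrary toy values.

```python
import numpy as np

# Check the compact form (10) against the direct block construction of the
# error matrix [ΔH̄, ΔT̄]. Toy sizes; dXi stands in for ΔΞ_f^id.
def shifting_matrix(p, L):
    S = np.zeros((p * L, p + L))
    for j in range(L):                     # block row j: [0_{p×j}  I_p  0_{p×(L-j)}]
        S[j * p:(j + 1) * p, j:j + p] = np.eye(p)
    return S

p, L, m, ell = 3, 4, 2, 1
nf = m + ell
rng = np.random.default_rng(2)
dXi = rng.normal(size=(ell, p * nf))       # toy error source ΔΞ_f^id

compact = np.kron(np.eye(L), dXi) @ np.kron(shifting_matrix(p, L), np.eye(nf))

# Direct construction: block row j carries dXi shifted j blocks to the right.
direct = np.zeros((ell * L, (p + L) * nf))
for j in range(L):
    direct[j * ell:(j + 1) * ell, j * nf:(j + p) * nf] = dXi

assert np.allclose(compact, direct)
```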

Now, the problem considered in this paper is as follows.

Problem 1: Determine a projection vector p_i (i = 1, · · · , n_f) that is only sensitive to the i-th fault component, and robust to the errors in the identified fault transfer matrix [H̄_{L,p}^f  T̄_L^f].


III. OPTIMAL ISOLATION ROBUST TO IDENTIFICATION UNCERTAINTY

A. Optimization-based solution to Problem 1

A possible way to solve Problem 1 is via analyzing the statistical distribution of the residual in (7). But this is difficult. First, it is difficult to quantify in closed form the statistics of the left null space of [H̄_{L,p}^{f,F\i}  T̄_L^{f,F\i}] obtained via SVD, without any approximation. Second, even with an exact distribution function of this left null space, quantifying the distribution of r_{k,L}^{(i)} after the projection is still not easy, because the parameters in T̄_L^y, H̄_{L,p}^z, T̄_L^u are also stochastic. We hence formulate a new optimization-based isolation problem against parameter errors.

Problem 2: Determine a projection vector p_i (i = 1, · · · , n_f) with ‖p_i‖_2 = 1 in the residual generator (7), which is optimal in the following sense:

O1. ‖p_i · [H̄_{L,p}^{f,i}  T̄_L^{f,i}]‖_2^2 is maximized;

O2. ‖p_i · [H̄_{L,p}^{f,F\i}  T̄_L^{f,F\i}]‖_2^2 is minimized;

O3. and the sensitivity of p_i to the following error covariance of the fault transfer matrix,

Σ_∆FT ≜ E{ [∆H̄_{L,p}^f  ∆T̄_L^f] · [∆H̄_{L,p}^f  ∆T̄_L^f]^T }, (12)

is also minimized; i.e. p_i · Σ_∆FT · p_i^T is minimized.

A solution p_i satisfying the first two objectives, i.e. O1 and O2, is actually a nominal design, which does not take the parameter errors into account. A similar nominal data-driven fault isolation design has been reported in [3]. The third objective, i.e. O3, aims at robustifying p_i against the parameter errors.

Note that we choose to minimize p_i · Σ_∆FT · p_i^T, instead of directly minimizing p_i · [∆H̄_{L,p}^f  ∆T̄_L^f], because the error matrix herein is unknown. What we can indeed extract from data is its covariance information. On the other hand, p_i · Σ_∆FT · p_i^T should ideally be zero. But this is unfortunately not possible, because by definition the covariance matrix (12) is positive definite. Consequently, we can only find a p_i that is least sensitive to the covariance (12), instead of strictly insensitive to it. Such a p_i can be found in the subspace spanned by the vectors corresponding to the smallest singular values of Σ_∆FT; i.e. by the SVD

Σ_∆FT = [U_pc^∆  U_npc^∆] · blockdiag(S_pc^∆, S_npc^∆) · [V_pc^∆  V_npc^∆]^T. (13)

Here, the subscripts "(n)pc" mean (non-)principal components. The diagonal matrix S_npc^∆ contains the n^∆ smallest singular values of Σ_∆FT, with the integer n^∆ chosen by observing a gap among all the ℓL singular values of Σ_∆FT. In subspace identification methods, e.g. [4], it is a standard practice to select model orders by observing singular values.
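A minimal sketch of the gap-based order selection: n^∆ is read off from the largest ratio drop between consecutive singular values. The spectrum below is synthetic, with an obvious gap after four principal values; real spectra require judgment.

```python
import numpy as np

# Choose n_Delta by the largest gap in a descending singular-value sequence,
# as is standard for order selection in subspace identification.
def order_by_gap(singular_values):
    s = np.asarray(singular_values, dtype=float)
    gaps = s[:-1] / s[1:]                # ratio between consecutive values
    k = int(np.argmax(gaps)) + 1         # number of principal components
    return len(s) - k                    # n_Delta: the non-principal ones

s = np.array([9.1, 7.4, 5.2, 4.8, 0.03, 0.02, 0.01, 0.005])  # synthetic spectrum
n_delta = order_by_gap(s)
assert n_delta == 4                      # the gap sits after the 4th value
```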


Similarly, the projection direction satisfying O2 shall lie in the subspace spanned by the vectors corresponding to the smallest singular values of

Q_ins^{f_i} ≜ [H̄_{L,p}^{f,F\i}  T̄_L^{f,F\i}] · [H̄_{L,p}^{f,F\i}  T̄_L^{f,F\i}]^T. (14)

Here, the subscript "ins" abbreviates insensitive.

The projection direction that satisfies both O2 and O3 is characterized in the following lemma.

Lemma 1: If range(Unpc) is not orthogonal to range(Qfi

ins), then the nonzero vector pi∈ R1×ℓL that simultaneously minimizes pi· Qinsfi · pTi and pi·Σ∆FT · pTi is pTi ∈ range(Unpc· U

, fi npc), where Unpc, fi is computed by the SVD (Unpc∆ )T· Qfi ins·Unpc∆ = h U, fi pc Unpc, fi i   S, fi pc S, fi npc     (V, fi pc )T (V, fi npc)T  , (15)

and has n, fi columns, corresponding to the smallest n, fi singular values of(U

npc)T· Q

fi

ins·Unpc∆ . 

Proof: If p_i^T ∈ range(U_npc^∆), then ∃α ∈ R^{n^∆} such that p_i^T = U_npc^∆ · α. Now, p_i · Q_ins^{f_i} · p_i^T reduces to

α^T · (U_npc^∆)^T · Q_ins^{f_i} · U_npc^∆ · α. (16)

If range(U_npc^∆) is not orthogonal to range(Q_ins^{f_i}), then (U_npc^∆)^T · Q_ins^{f_i} · U_npc^∆ ≠ 0, and (16) is minimized if α ∈ range(U_npc^{∆,f_i}); i.e. ∃β ∈ R^{n^{∆,f_i}} such that α = U_npc^{∆,f_i} · β; and hence p_i^T = U_npc^∆ · U_npc^{∆,f_i} · β simultaneously minimizes p_i · Q_ins^{f_i} · p_i^T and p_i · Σ_∆FT · p_i^T. This is equivalent to p_i^T ∈ range(U_npc^∆ · U_npc^{∆,f_i}). ∎

For brevity, denote

Q_sen^{f_i} ≜ [H̄_{L,p}^{f,i}  T̄_L^{f,i}] · [H̄_{L,p}^{f,i}  T̄_L^{f,i}]^T. (17)

Here, the subscript "sen" abbreviates sensitive. Now, Problem 2 can be mathematically described by the following optimization problem:

max_{p_i}  p_i · Q_sen^{f_i} · p_i^T,  s.t.  p_i^T ∈ range(U_npc^∆ · U_npc^{∆,f_i}),  and  ‖p_i‖_2 = 1. (18)

The key challenge in solving the optimization problem (18) is to express the covariance matrix Σ_∆FT in an explicit form of the covariance matrix of the identified Ξ̂.

As analyzed in the companion paper [1], the bias effects of the initial states on both the parameter errors and the residual distribution can be neglected with a reasonably large past horizon p. Similarly, we assume that p is large enough to ignore both b_{k,L} in (4) and the bias in the identified parameters. Then, following the discussions in [1, Sec. III.B], and as is standard practice in least squares, the covariance of the LS estimates of the Markov parameters in Ξ̂ can be approximated by (where the strict equality holds as N → ∞)

Cov(vec(∆Ξ̂)) = (Z_id Z_id^T)^{−1} ⊗ Σ_e, (19)

which can be computed from the identification data matrix Z_id as defined in [1, Eq. (11)].

Theorem 1: Let the fault transfer matrix [H̄_{L,p}^f  T̄_L^f] be constructed from the identified Markov parameters Ξ̂. Then,

E{ [∆H̄_{L,p}^f  ∆T̄_L^f] · [∆H̄_{L,p}^f  ∆T̄_L^f]^T } = Π_1 − Π_2,  where (20)

Π_1 = ∑_{j=1}^L { tr[ (P_j ⊗ I_{n_f}) · I_sgn^T (Z_id Z_id^T)^{−1} I_sgn ] · [ (W_j + W_j^T) ⊗ Σ_e ] },
Π_2 = tr[ I_sgn^T (Z_id Z_id^T)^{−1} I_sgn ] · (I_L ⊗ Σ_e),

with the structure matrices P_j and W_j defined as

P_j = [ 0_{(j−1)×(p−j+1)}  0_{(j−1)×(j−1)} ; I_{p−j+1}  0_{(p−j+1)×(j−1)} ] ∈ R^{p×p},
W_j = [ 0_{(L−j+1)×(j−1)}  I_{L−j+1} ; 0_{(j−1)×(j−1)}  0_{(j−1)×(L−j+1)} ] ∈ R^{L×L}.

Proof: See Appendix A.
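Theorem 1 lends itself to a direct numerical check. With Cov(vec(∆)) = Ω ⊗ Σ_e, the standard matrix-Gaussian second-moment identity gives E[∆·M·∆^T] = tr(M·Ω)·Σ_e for any conformable M; applying it block by block to (I_L ⊗ ∆)·(S_{p,L}S_{p,L}^T ⊗ I_{n_f})·(I_L ⊗ ∆^T) must reproduce Π_1 − Π_2. The Ω below is a toy positive definite matrix standing in for I_sgn^T(Z_id Z_id^T)^{−1}I_sgn.

```python
import numpy as np

def structure_matrices(p, L):
    # P_j in R^{p x p}, W_j in R^{L x L}, as defined in Theorem 1 (j = 1..L)
    Ps, Ws = [], []
    for j in range(1, L + 1):
        P = np.zeros((p, p)); P[j - 1:, :p - j + 1] = np.eye(p - j + 1)
        W = np.zeros((L, L)); W[:L - j + 1, j - 1:] = np.eye(L - j + 1)
        Ps.append(P); Ws.append(W)
    return Ps, Ws

p, L, m, ell = 4, 3, 2, 2
nf = m + ell
rng = np.random.default_rng(3)
X = rng.normal(size=(p * nf, 3 * p * nf))
Omega = X @ X.T / X.shape[1]                  # toy stand-in for Isgn^T (Z Z^T)^-1 Isgn
Sigma_e = np.array([[1.0, 0.3], [0.3, 0.5]])  # toy innovation covariance

# Closed form (20): Pi1 - Pi2
Ps, Ws = structure_matrices(p, L)
Pi1 = sum(np.trace(np.kron(P, np.eye(nf)) @ Omega) * np.kron(W + W.T, Sigma_e)
          for P, W in zip(Ps, Ws))
Pi2 = np.trace(Omega) * np.kron(np.eye(L), Sigma_e)

# Direct evaluation: block (a,b) equals tr[(G_ab x I_nf) Omega] * Sigma_e,
# where G_ab is the (a,b) p-by-p block of S_{p,L} S_{p,L}^T.
S = np.zeros((p * L, p + L))
for j in range(L):
    S[j * p:(j + 1) * p, j:j + p] = np.eye(p)
G = S @ S.T
direct = np.zeros((ell * L, ell * L))
for a in range(L):
    for b in range(L):
        Gab = G[a * p:(a + 1) * p, b * p:(b + 1) * p]
        direct[a * ell:(a + 1) * ell, b * ell:(b + 1) * ell] = \
            np.trace(np.kron(Gab, np.eye(nf)) @ Omega) * Sigma_e

assert np.allclose(Pi1 - Pi2, direct)
```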

With Lemma 1 and Theorem 1, solving Problem 2, or equivalently (18), becomes straightforward, as given in the following proposition.

Proposition 1: Let the fault transfer matrix [H̄_{L,p}^f  T̄_L^f] consist of the identified Markov parameters Ξ̂. Then the optimal solution to (18) is p_i = ω_i · (U_npc^∆ · U_npc^{∆,f_i})^T, where ω_i^T is the eigenvector that corresponds to the largest eigenvalue of the matrix (U_npc^∆ · U_npc^{∆,f_i})^T · Q_sen^{f_i} · (U_npc^∆ · U_npc^{∆,f_i}). ∎

Remark 1: An alternative way of finding a p_i that satisfies both O2 and O3 is to seek it in the subspace spanned by the non-principal components of Q_ins^{f_i} + Σ_∆FT. This does not restrict range(Q_ins^{f_i}) from being orthogonal to range(Σ_∆FT), and is suitable for the case where the norms of these two matrices have comparable magnitudes, or the case ‖Σ_∆FT‖ ≪ ‖Q_ins^{f_i}‖, i.e. with negligible parameter errors. ∎

As a summary, given Ξ̂ and Σ̂_e, five steps are needed to design the projection vectors p_i, i = 1, · · · , n_f:

1) Compute Σ_∆FT by (20) and its SVD (13), and choose U_npc^∆ by selecting its n^∆ smallest singular values.
2) Construct [H̄_{L,p}^{f,i}  T̄_L^{f,i}] and [H̄_{L,p}^{f,F\i}  T̄_L^{f,F\i}], and compute Q_ins^{f_i} and Q_sen^{f_i} by Eqs. (14) and (17), respectively.
3) Compute the SVD of (U_npc^∆)^T · Q_ins^{f_i} · U_npc^∆, and choose U_npc^{∆,f_i} by selecting its n^{∆,f_i} smallest singular values.
4) Compute the eigenvector ω_i^T corresponding to the largest eigenvalue of (U_npc^∆ · U_npc^{∆,f_i})^T · Q_sen^{f_i} · (U_npc^∆ · U_npc^{∆,f_i}).
5) Compute p_i = ω_i · (U_npc^∆ · U_npc^{∆,f_i})^T.
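The five steps can be sketched as follows. All inputs are toy stand-ins: FT plays the identified [H̄_{L,p}^f  T̄_L^f], Sigma_dFT plays Σ_∆FT from (20), cols_i indexes the columns of fault i, and the orders n_delta, n_delta_fi are fixed by hand here instead of being read off singular-value gaps.

```python
import numpy as np

def design_projection(FT, Sigma_dFT, cols_i, n_delta, n_delta_fi):
    # Step 1: non-principal subspace of the error covariance, SVD (13)
    U, _, _ = np.linalg.svd(Sigma_dFT)
    U_npc = U[:, -n_delta:]
    # Step 2: sensitive/insensitive Gramians, Eqs. (17) and (14)
    Mi = FT[:, cols_i]
    Mrest = np.delete(FT, cols_i, axis=1)
    Q_sen, Q_ins = Mi @ Mi.T, Mrest @ Mrest.T
    # Step 3: non-principal subspace of the projected Q_ins, SVD (15)
    Uf, _, _ = np.linalg.svd(U_npc.T @ Q_ins @ U_npc)
    U_npc_fi = Uf[:, -n_delta_fi:]
    # Steps 4-5: top eigenvector of the projected sensitive Gramian
    basis = U_npc @ U_npc_fi
    w, V = np.linalg.eigh(basis.T @ Q_sen @ basis)
    omega = V[:, -1]                        # eigenvector of the largest eigenvalue
    return omega @ basis.T                  # p_i

rng = np.random.default_rng(4)
FT = rng.normal(size=(8, 12))               # toy identified fault transfer matrix
Sigma_dFT = 1e-3 * np.eye(8)                # toy error covariance (tiny, isotropic)
p_i = design_projection(FT, Sigma_dFT, cols_i=[0, 1], n_delta=6, n_delta_fi=3)
assert np.isclose(np.linalg.norm(p_i), 1.0)
```

Note that ‖p_i‖_2 = 1 holds by construction: ω_i is a unit eigenvector and U_npc^∆ · U_npc^{∆,f_i} has orthonormal columns.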


B. Hypothesis-based fault isolation test

With p_i solved by Proposition 1, r_{k,L}^{(i)} has maximum sensitivity to the i-th fault f_i, and minimum sensitivity to the other faults and to the parameter errors. Fault isolation can therefore already be applied by choosing a threshold large enough to upper bound the maximum amplitudes that the residual r_{k,L}^{(i)} can take in the fault-free case; see e.g. [5]. In this paper, we are interested in deriving a hypothesis test, where the threshold can be associated with a statistical confidence level.

The covariance of the residual vector r_{k,L} is derived and analyzed in detail in the companion paper [1]. On the other hand, since Q_ins^{f_i} and Q_sen^{f_i} contain stochastic parameter errors, p_i as a solution of (18) is also stochastic. However, accounting for the contribution of p_i to the variance of r_{k,L}^{(i)} requires deriving the probability distribution of the eigenvector of (U_npc^∆ · U_npc^{∆,f_i})^T · Q_sen^{f_i} · (U_npc^∆ · U_npc^{∆,f_i}) that corresponds to its maximal eigenvalue, and then considering the correlation between p_i and the parameter errors. That analysis may not even be tractable. Fortunately, by the formulation of Problem 2, the projection vector p_i lies in the least variant subspace of the parameter errors. A reasonable approximation is therefore to treat p_i, i = 1, · · · , n_f, as deterministic vectors in determining the variance of r_{k,L}^{(i)}.

As analyzed in the companion paper [1], with the past horizon p chosen sufficiently large, Cov(r_{k,L}) can be approximated by [1, Eq. (38)]; and in the fault-free case, r_{k,L}^T · Cov^{−1}(r_{k,L}) · r_{k,L} follows a central χ² distribution. Now, since p_i, i = 1, · · · , n_f, are optimized to be least sensitive to the stochastic parameter errors, the variance of r_{k,L}^{(i)} can be approximated by p_i · Cov(r_{k,L}) · p_i^T; and fault isolation can be tested by a central χ² test with 1 DoF, i.e.

τ_i(k) = (r_{k,L}^{(i)})² / (p_i · Cov(r_{k,L}) · p_i^T)  ≷_{no fault}^{faulty}  γ_α, (21)

where γ_α is the threshold, associated with a false alarm rate (FAR) of α.
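A sketch of the test (21) on synthetic data. For 1 DoF, the χ² threshold follows from the standard normal quantile, γ_α = z_{1−α/2}², so the stdlib NormalDist suffices; var_r below is a made-up scalar standing in for p_i · Cov(r_{k,L}) · p_i^T.

```python
import numpy as np
from statistics import NormalDist

# chi^2(1) threshold: P(Z^2 > gamma) = alpha  <=>  gamma = z_{1-alpha/2}^2
def chi2_1dof_threshold(alpha):
    return NormalDist().inv_cdf(1.0 - alpha / 2.0) ** 2

alpha = 0.005                                   # 0.5% design FAR, as in Sec. IV
gamma = chi2_1dof_threshold(alpha)

rng = np.random.default_rng(5)
var_r = 2.5                                     # stands in for p_i Cov(r_{k,L}) p_i^T
r_nominal = rng.normal(0.0, np.sqrt(var_r), 10_000)   # fault-free residuals
tau = r_nominal ** 2 / var_r                    # test statistic (21)
far = np.mean(tau > gamma)                      # empirical false alarm rate

assert abs(far - alpha) < 0.005                 # close to the design FAR
assert (-8.0) ** 2 / var_r > gamma              # a large fault residual alarms
```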

IV. SIMULATION STUDIES

Consider the same VTOL model as studied in the companion paper [1]. The parameters in the simulation are all the same as those given in [1, Sec. IV.B.1], and are not repeated.

Since there are two actuator channels and four sensor channels, n_f = 6. We hence designed six projection vectors for isolating each of these faults, using the data from one single identification experiment, as described in [1, Sec. IV.B.1]. These filters are denoted F1, F2, F3, F4, F5, and F6, and are respectively designed for isolating a fault signal affecting one of the six I/O channels, i.e. u1, u2, y1, y2, y3, y4.


We considered similar additive actuator faults, as defined in [1, Sec. IV.B.2], but with an overlap in their effective intervals to test the fault isolation performance. The first actuator was stuck at −3 in the interval 301 ≤ k ≤ 600; and the second actuator had a bias of −5 in the interval 501 ≤ k ≤ 800. We compared the results achieved by the robust solution in Proposition 1 to the following nominal design, which does not consider the parameter errors and is similar to the isolation solution in [3]:

max_{p_i}  ‖p_i · [H̄_{L,p}^{f,i}  T̄_L^{f,i}]‖_2^2,  i = 1, · · · , n_f,  s.t.  ‖p_i · [H̄_{L,p}^{f,F\i}  T̄_L^{f,F\i}]‖_2^2 = 0,  and  ‖p_i‖_2 = 1.

Here, the fault transfer matrices were the same as those used in the robust design. But in this nominal solution, only the innovation signals were considered in the residual variance. In the nominal filters, the orders of the non-principal subspace of [H̄_{L,p}^{f,F\i}  T̄_L^{f,F\i}], i = 1, · · · , 6, were respectively chosen, according to their singular values (40 in total), as 30 for F1 and F2, and 20 for F3∼F6.

In the robust filters, the orders were chosen as n^∆ = 30 according to the singular values (40 in total) of Σ_∆FT, and as n^{∆,f_1} = n^{∆,f_2} = 20 for F1 and F2 and n^{∆,f_i} = 10, i = 3, · · · , 6, for F3∼F6, according to the singular values (30 in total) of (U_npc^∆)^T · Q_ins^{f_i} · U_npc^∆, i = 1, · · · , 6, respectively. The results are illustrated in Fig. 1. Clearly, when no fault occurred in an I/O channel, the robust isolation test reacted "almost" correctly, since the stochastic parameter errors were accounted for in the residual variance. The slightly larger false alarm rate of the robust filters, compared to 0.5%, can be attributed to the fact that the hypothesis test (21) is an approximation that treats p_i, i = 1, · · · , 6, as deterministic. But since p_i lies in the least variant subspace of the parameter errors, the approximation error is reasonably small. Moreover, the two faults were correctly isolated by the robust filters F1 and F2, which were designed to be sensitive only to the first and to the second actuator, respectively.

In comparison, these results clearly outperformed those achieved by the nominal design, especially when the second actuator failed. The nominal filter F2 did not react at all to this fault. The nominal filters F3, F4, F5 all gave alarms in the interval 501 ≤ k ≤ 600. Besides, F1 still gave alarms in the interval 601 ≤ k ≤ 800, after the first actuator had recovered from the stuck failure. This prevented the correct isolation of the two actuator faults.

V. CONCLUSIONS

In this paper, we have developed a new data-driven fault isolation method, which is robust to parameter identification errors. The main contributions are the closed-form error covariance matrix of the identified fault transfer matrix, and the robustified fault isolation vectors that belong to the subspace spanned by the non-principal components of this covariance matrix. Our analytical results were tested in simulation studies, which validated that the data-driven fault isolation method developed in this paper clearly improves on the nominal data-driven solution that does not take the identification uncertainty into account. Possible future directions are to extend the robust isolation method to multiplicative faults and to linear parameter-varying systems.

Fig. 1. Robust (left) vs. nominal (right) fault isolation.

REFERENCES

[1] J. Dong, M. Verhaegen, and F. Gustafsson, “Robust fault detection with statistical uncertainty in identified parameters,” IEEE Transactions on Signal Processing, vol. 60, pp. 5064–5076, 2012.

[2] J. Gertler and D. Singer, “A new structural framework for parity equation based failure detection and isolation,” Automatica, vol. 26, pp. 381–388, 1990.

[3] S. Qin and W. Li, “Detection and identification of faulty sensors in dynamic processes,” AIChE Journal, vol. 47, pp. 1581–1593, 2001.

[4] A. Chiuso, “On the relation between CCA and predictor-based subspace identification,” IEEE Transactions on Automatic Control, vol. 52, pp. 1795–1812, 2007.

[5] Y. Wang, S. Ding, H. Ye, and G. Wang, “A new fault detection scheme for networked control systems subject to uncertain time-varying delay,” IEEE Transactions on Signal Processing, vol. 56, pp. 5258–5268, 2008.

APPENDIX A

PROOF OF THEOREM 1

Lemma 2: Let S_{p,L} be defined as in (11). Then,

S_{p,L} · S_{p,L}^T = ∑_{j=1}^L (W_j ⊗ P_j + W_j^T ⊗ P_j^T) − I_{pL},

where the matrices P_j and W_j, for j = 1, · · · , L, are defined in Theorem 1. ∎

The derivation of this expression is lengthy and purely algebraic, and shall be omitted for brevity. It can easily be verified by a numerical simulation.
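As the text suggests, Lemma 2 is straightforward to verify numerically; the sketch below checks the identity for a few small (p, L) pairs.

```python
import numpy as np

# Numerical check of Lemma 2:
#   S_{p,L} S_{p,L}^T = sum_j (W_j x P_j + W_j^T x P_j^T) - I_{pL}
def S_mat(p, L):
    S = np.zeros((p * L, p + L))
    for j in range(L):
        S[j * p:(j + 1) * p, j:j + p] = np.eye(p)
    return S

def lemma2_rhs(p, L):
    total = -np.eye(p * L)
    for j in range(1, L + 1):            # P_j, W_j as defined in Theorem 1
        P = np.zeros((p, p)); P[j - 1:, :p - j + 1] = np.eye(p - j + 1)
        W = np.zeros((L, L)); W[:L - j + 1, j - 1:] = np.eye(L - j + 1)
        total += np.kron(W, P) + np.kron(W.T, P.T)
    return total

for p, L in [(2, 2), (4, 3), (5, 5)]:
    S = S_mat(p, L)
    assert np.allclose(S @ S.T, lemma2_rhs(p, L))
```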

Lemma 3: Let M ∈ R^{n×n}. Then, vec^T(I_n) · vec(M) = tr(M).

Proof: Let η_i be the i-th column of I_n. Let M_i denote the i-th column of M, and M_ii its i-th diagonal element. Then, vec^T(I_n) · vec(M) = ∑_{i=1}^n η_i^T · M_i = ∑_{i=1}^n M_ii, which equals tr(M) by definition. ∎

Lemma 4: Let M, M̃ ∈ R^{n×n}. Then, vec^T(I_n) · (M ⊗ M̃) · vec(I_n) = tr(M̃ · M^T).

Proof: First, by [1, Property (43)], (M ⊗ M̃) · vec(I_n) = vec(M̃ · I_n · M^T) = vec(M̃ · M^T). Now, by Lemma 3, vec^T(I_n) · vec(M̃ · M^T) = tr(M̃ · M^T). ∎
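Both vec/trace identities are easy to confirm numerically; note that vec(·) is column-major stacking, i.e. numpy's order='F'.

```python
import numpy as np

# Numerical check of Lemmas 3 and 4; vec is column-major stacking.
rng = np.random.default_rng(6)
n = 4
M, Mt = rng.normal(size=(n, n)), rng.normal(size=(n, n))
vec = lambda A: A.reshape(-1, 1, order="F")
vI = vec(np.eye(n))

# Lemma 3: vec^T(I_n) vec(M) = tr(M)
assert np.isclose((vI.T @ vec(M)).item(), np.trace(M))
# Lemma 4: vec^T(I_n) (M x Mt) vec(I_n) = tr(Mt M^T)
assert np.isclose((vI.T @ np.kron(M, Mt) @ vI).item(), np.trace(Mt @ M.T))
```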

First, recall that [∆H̄_{L,p}^f  ∆T̄_L^f] can be written in terms of the error source ∆Ξ̂_f^id and its shifting structure as (I_L ⊗ ∆Ξ̂_f^id) · (S_{p,L} ⊗ I_{m+ℓ}), where ∆Ξ̂_f^id = ∆Ξ̂ · I_sgn. Since we consider the general case of multiple outputs, i.e. ℓ > 1, ∆Ξ̂ is an ℓ × p(m+ℓ) matrix, instead of a vector. The covariance of this matrix-valued random variable, denoted Cov(vec(∆Ξ̂)), is given in (19). It is straightforward to show that

Cov(vec(∆Ξ̂_f^id)) = [I_sgn^T · (Z_id Z_id^T)^{−1} · I_sgn] ⊗ Σ_e. (22)

To simplify notations in this proof, denote Θ_f ≜ vec(∆Ξ̂_f^id) ∈ R^{ℓ p n_f} and Ω ≜ I_sgn^T · (Z_id Z_id^T)^{−1} · I_sgn ∈ R^{p n_f × p n_f}; i.e. (22) can be rewritten as Cov(Θ_f) = Ω ⊗ Σ_e.

Since Θ_f = vec(∆Ξ̂_f^id), ∆Ξ̂_f^id can be retrieved from Θ_f. With I⃗_{p n_f} ≜ vec(I_{p n_f}), and η_i ∈ R^{p n_f × 1} denoting the i-th column of I_{p n_f},

∆Ξ̂_f^id = [(η_1^T ⊗ I_ℓ) · Θ_f, (η_2^T ⊗ I_ℓ) · Θ_f, · · · , (η_{p n_f}^T ⊗ I_ℓ) · Θ_f] = (I⃗_{p n_f}^T ⊗ I_ℓ) · (I_{p n_f} ⊗ Θ_f).

Thus, [∆H̄_{L,p}^f  ∆T̄_L^f] can be explicitly written as

{ I_L ⊗ [(I⃗_{p n_f}^T ⊗ I_ℓ) · (I_{p n_f} ⊗ Θ_f)] } · (S_{p,L} ⊗ I_{m+ℓ}). (23)

By [1, Properties (45,46,47)], the error covariance of the fault transfer matrix can be written as (recall n_f = m + ℓ)

E{ [∆H̄_{L,p}^f  ∆T̄_L^f] · [∆H̄_{L,p}^f  ∆T̄_L^f]^T }
= E{ { I_L ⊗ [(I⃗_{p n_f}^T ⊗ I_ℓ) · (I_{p n_f} ⊗ Θ_f)] } · (S_{p,L} ⊗ I_{n_f}) · (S_{p,L}^T ⊗ I_{n_f}) · { I_L ⊗ [(I_{p n_f} ⊗ Θ_f^T) · (I⃗_{p n_f} ⊗ I_ℓ)] } }
= E{ { I_L ⊗ [(I⃗_{p n_f}^T ⊗ I_ℓ) · (I_{p n_f} ⊗ Θ_f)] } · [(S_{p,L} S_{p,L}^T) ⊗ I_{n_f}] · { I_L ⊗ [(I_{p n_f} ⊗ Θ_f^T) · (I⃗_{p n_f} ⊗ I_ℓ)] } }.

Now, the key idea is to explicitly obtain E(Θ_f · Θ_f^T) in this equation and use (22). First note that S_{p,L} S_{p,L}^T ∈ R^{pL×pL}, while (I⃗_{p n_f}^T ⊗ I_ℓ) · (I_{p n_f} ⊗ Θ_f) ∈ R^{ℓ × p n_f}. By Lemma 2, the expectation above then boils down to

E{ { I_L ⊗ [(I⃗_{p n_f}^T ⊗ I_ℓ) · (I_{p n_f} ⊗ Θ_f)] } · [ (∑_{j=1}^L (W_j ⊗ P_j + W_j^T ⊗ P_j^T) − I_{pL}) ⊗ I_{n_f} ] · { I_L ⊗ [(I_{p n_f} ⊗ Θ_f^T) · (I⃗_{p n_f} ⊗ I_ℓ)] } }. (24)

Let us first derive

E{ { I_L ⊗ [(I⃗_{p n_f}^T ⊗ I_ℓ) · (I_{p n_f} ⊗ Θ_f)] } · [ (∑_{j=1}^L W_j ⊗ P_j) ⊗ I_{n_f} ] · { I_L ⊗ [(I_{p n_f} ⊗ Θ_f^T) · (I⃗_{p n_f} ⊗ I_ℓ)] } }. (25)

Recall that P_j ∈ R^{p×p} and W_j ∈ R^{L×L} for j = 1, · · · , L. By [1, Properties (44,45)], Eq. (25) equals

∑_{j=1}^L E{ { I_L ⊗ [(I⃗_{p n_f}^T ⊗ I_ℓ) · (I_{p n_f} ⊗ Θ_f)] } · (W_j ⊗ P_j ⊗ I_{n_f}) · { I_L ⊗ [(I_{p n_f} ⊗ Θ_f^T) · (I⃗_{p n_f} ⊗ I_ℓ)] } }
= ∑_{j=1}^L E{ { W_j ⊗ [(I⃗_{p n_f}^T ⊗ I_ℓ) · (I_{p n_f} ⊗ Θ_f) · (P_j ⊗ I_{n_f})] } · { I_L ⊗ [(I_{p n_f} ⊗ Θ_f^T) · (I⃗_{p n_f} ⊗ I_ℓ)] } }.

Note that P_j ⊗ I_{n_f} ∈ R^{p n_f × p n_f} equals (P_j ⊗ I_{n_f}) ⊗ 1, and Θ_f ∈ R^{ℓ p n_f × 1}. Therefore,

(I_{p n_f} ⊗ Θ_f) · (P_j ⊗ I_{n_f}) = (I_{p n_f} ⊗ Θ_f) · [(P_j ⊗ I_{n_f}) ⊗ 1] = (P_j ⊗ I_{n_f}) ⊗ Θ_f.

Now by [1, Property (45)], (25) further reduces to

∑_{j=1}^L E{ { W_j ⊗ [(I⃗_{p n_f}^T ⊗ I_ℓ) · ((P_j ⊗ I_{n_f}) ⊗ Θ_f)] } · { I_L ⊗ [(I_{p n_f} ⊗ Θ_f^T) · (I⃗_{p n_f} ⊗ I_ℓ)] } }
= ∑_{j=1}^L E{ (W_j · I_L) ⊗ [ (I⃗_{p n_f}^T ⊗ I_ℓ) · ((P_j ⊗ I_{n_f}) ⊗ Θ_f) · (I_{p n_f} ⊗ Θ_f^T) · (I⃗_{p n_f} ⊗ I_ℓ) ] }
= ∑_{j=1}^L W_j ⊗ { (I⃗_{p n_f}^T ⊗ I_ℓ) · [ (P_j ⊗ I_{n_f}) ⊗ E(Θ_f · Θ_f^T) ] · (I⃗_{p n_f} ⊗ I_ℓ) }
= ∑_{j=1}^L W_j ⊗ { (I⃗_{p n_f}^T ⊗ I_ℓ) · [ (P_j ⊗ I_{n_f}) ⊗ (Ω ⊗ Σ_e) ] · (I⃗_{p n_f} ⊗ I_ℓ) }
= ∑_{j=1}^L W_j ⊗ { (I⃗_{p n_f}^T ⊗ I_ℓ) · [ (P_j ⊗ I_{n_f} ⊗ Ω) ⊗ Σ_e ] · (I⃗_{p n_f} ⊗ I_ℓ) }
= ∑_{j=1}^L W_j ⊗ { [ I⃗_{p n_f}^T · (P_j ⊗ I_{n_f} ⊗ Ω) · I⃗_{p n_f} ] ⊗ (I_ℓ · Σ_e · I_ℓ) }
= ∑_{j=1}^L W_j ⊗ { [ I⃗_{p n_f}^T · (P_j ⊗ I_{n_f} ⊗ Ω) · I⃗_{p n_f} ] ⊗ Σ_e }.

Here, the first and the second equalities are due to [1, Property (45)]; the fourth is due to [1, Property (44)]; the fifth again due to [1, Property (45)].

Now, by Lemma 4, I⃗_{p n_f}^T · (P_j ⊗ I_{n_f} ⊗ Ω) · I⃗_{p n_f} = tr[Ω · (P_j^T ⊗ I_{n_f})] = tr[(P_j ⊗ I_{n_f}) · Ω]. The second equality is due to tr(M^T) = tr(M). Thus, (25) finally boils down to

∑_{j=1}^L tr[(P_j ⊗ I_{n_f}) · Ω] · (W_j ⊗ Σ_e). (26)

On the other hand,

E{ { I_L ⊗ [(I⃗_{p n_f}^T ⊗ I_ℓ) · (I_{p n_f} ⊗ Θ_f)] } · [ (∑_{j=1}^L W_j^T ⊗ P_j^T) ⊗ I_{n_f} ] · { I_L ⊗ [(I_{p n_f} ⊗ Θ_f^T) · (I⃗_{p n_f} ⊗ I_ℓ)] } }

is the transpose of the matrix in (25), and hence equals

∑_{j=1}^L tr[(P_j ⊗ I_{n_f}) · Ω] · (W_j^T ⊗ Σ_e). (27)

Similarly,

E{ { I_L ⊗ [(I⃗_{p n_f}^T ⊗ I_ℓ) · (I_{p n_f} ⊗ Θ_f)] } · (−I_{p n_f L}) · { I_L ⊗ [(I_{p n_f} ⊗ Θ_f^T) · (I⃗_{p n_f} ⊗ I_ℓ)] } } = −tr(Ω) · (I_L ⊗ Σ_e). (28)

Now, substituting (26), (27), (28) into (24), Eq. (20) follows. ∎
