Supplementary Material for “On parametric lower bounds

for discrete-time filtering”

Carsten Fritsche, Umut Orguner

Division of Automatic Control

E-mail: carsten@isy.liu.se, umut@metu.edu.tr

22nd January 2016

Report no.: LiTH-ISY-R-3090

Address:

Department of Electrical Engineering Linköpings universitet

SE-581 83 Linköping, Sweden

WWW: http://www.control.isy.liu.se


Technical reports from the Automatic Control group in Linköping are available from http://www.control.isy.liu.se/publications.

Abstract: This report contains supplementary material for the paper [1], and gives detailed proofs of all theorems and lemmas that could not be included into the paper due to space limitations.

Supplementary Material for “On parametric lower bounds for discrete-time filtering”

Carsten Fritsche and Umut Orguner

Department of Electrical & Electronics Engineering, Middle East Technical University, 06800, Ankara, Turkey

1 Proof of Theorem 1

The proof of Theorem 1 makes use of Lemma 2. Since we want to derive a parametric CRLB for unbiased estimators, the result of Lemma 2 simplifies to

M(\hat{x}_{0:k}(y_{0:k}) \,|\, x_{0:k}) \geq [J_{0:k}(x_{0:k})]^{-1},   (1)

where \hat{x}^T_{0:k}(y_{0:k}) = [\hat{x}^T_0(y_{0:k}), \ldots, \hat{x}^T_k(y_{0:k})] is any unbiased estimator of the state sequence x_{0:k}, and J_{0:k}(x_{0:k}) is the auxiliary Fisher information matrix of the state sequence x_{0:k}, defined as

J_{0:k}(x_{0:k}) = E_{y_{0:k}, z_{1:k}}\{ -\Delta^{x_{0:k}}_{x_{0:k}} \log p(y_{0:k}, z_{1:k} \,|\, x_{0:k}) \,|\, x_{0:k} \}.   (2)

The above lemma is required to incorporate information from the deterministic state dynamics into the bound calculations. In particular, since the state sequence x_{0:k} is deterministic, it can be rewritten as a set of equality constraints: 0 = x_{i+1} - f_i(x_i) - G_i w_i, i = 0, \ldots, k-1. These in turn can be interpreted as “perfect” measurements z_{i+1} = x_{i+1} - f_i(x_i) - G_i w_i with z_{i+1} = 0 for all i. In order to stay in a probabilistic framework, we add to these equations zero-mean Gaussian noise with covariance M, which is later on set to zero to recover the equality constraints. Thus, it is possible to establish the following recursion:

p(y_{0:k}, z_{1:k} \,|\, x_{0:k}) = p(y_k \,|\, x_k)\, p(z_k \,|\, x_k, x_{k-1})\, p(y_{0:k-1}, z_{1:k-1} \,|\, x_{0:k-1}).   (3)

The auxiliary Fisher information matrix J_{0:k-1}(x_{0:k-1}) can then be partitioned as follows:

J_{0:k-1}(x_{0:k-1}) = -E_{y_{0:k-1}, z_{1:k-1}}\left\{ \begin{bmatrix} \Delta^{x_{0:k-2}}_{x_{0:k-2}} & \Delta^{x_{k-1}}_{x_{0:k-2}} \\ \Delta^{x_{0:k-2}}_{x_{k-1}} & \Delta^{x_{k-1}}_{x_{k-1}} \end{bmatrix} \log p(y_{0:k-1}, z_{1:k-1} \,|\, x_{0:k-1}) \,\Big|\, x_{0:k-1} \right\} = \begin{bmatrix} A^{11}_{k-1} & A^{12}_{k-1} \\ (A^{12}_{k-1})^T & A^{22}_{k-1} \end{bmatrix}.   (4)

The auxiliary Fisher information submatrix J_{k-1}(x_{0:k-1}) is computed as the inverse of the n \times n lower-right partition of [J_{0:k-1}(x_{0:k-1})]^{-1}, given by

J_{k-1}(x_{0:k-1}) = A^{22}_{k-1} - (A^{12}_{k-1})^T (A^{11}_{k-1})^{-1} A^{12}_{k-1}.   (5)

Similarly, by making use of the recursion (3) it can be easily verified that the matrix J_{0:k}(x_{0:k}) can be simplified as follows:

J_{0:k}(x_{0:k}) = -E_{y_{0:k}, z_{1:k}}\left\{ \begin{bmatrix} \Delta^{x_{0:k-2}}_{x_{0:k-2}} & \Delta^{x_{k-1}}_{x_{0:k-2}} & \Delta^{x_k}_{x_{0:k-2}} \\ \Delta^{x_{0:k-2}}_{x_{k-1}} & \Delta^{x_{k-1}}_{x_{k-1}} & \Delta^{x_k}_{x_{k-1}} \\ \Delta^{x_{0:k-2}}_{x_k} & \Delta^{x_{k-1}}_{x_k} & \Delta^{x_k}_{x_k} \end{bmatrix} \log p(y_{0:k}, z_{1:k} \,|\, x_{0:k}) \,\Big|\, x_{0:k} \right\} = \begin{bmatrix} A^{11}_{k-1} & A^{12}_{k-1} & 0 \\ (A^{12}_{k-1})^T & A^{22}_{k-1} + L^{11}_k & L^{12}_k \\ 0 & (L^{12}_k)^T & L^{22}_k \end{bmatrix},   (6)

with elements

L^{11}_k = E_{y_{0:k}, z_{1:k}}\{ -\Delta^{x_{k-1}}_{x_{k-1}} \log p(z_k \,|\, x_k, x_{k-1}) \,|\, x_{0:k} \},   (7a)
L^{12}_k = E_{y_{0:k}, z_{1:k}}\{ -\Delta^{x_k}_{x_{k-1}} \log p(z_k \,|\, x_k, x_{k-1}) \,|\, x_{0:k} \},   (7b)
L^{22}_k = E_{y_{0:k}, z_{1:k}}\{ -\Delta^{x_k}_{x_k} \log p(z_k \,|\, x_k, x_{k-1}) \,|\, x_{0:k} \} + E_{y_{0:k}, z_{1:k}}\{ -\Delta^{x_k}_{x_k} \log p(y_k \,|\, x_k) \,|\, x_{0:k} \}.   (7c)


Since J_{0:k}(x_{0:k}) has a block tri-diagonal structure, a recursive computation of the (auxiliary) Fisher information submatrix J_k(x_{0:k}) is possible. By noting that this submatrix is computed as the inverse of the n \times n lower-right partition of [J_{0:k}(x_{0:k})]^{-1}, block matrix inversion yields

J_k(x_{0:k}) = L^{22}_k - \begin{bmatrix} 0 & (L^{12}_k)^T \end{bmatrix} \begin{bmatrix} A^{11}_{k-1} & A^{12}_{k-1} \\ (A^{12}_{k-1})^T & A^{22}_{k-1} + L^{11}_k \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ L^{12}_k \end{bmatrix} = L^{22}_k - (L^{12}_k)^T \left( A^{22}_{k-1} + L^{11}_k - (A^{12}_{k-1})^T (A^{11}_{k-1})^{-1} A^{12}_{k-1} \right)^{-1} L^{12}_k.   (8)

Inserting the definition of J_{k-1}(x_{0:k-1}) given in (5) into (8) yields the desired recursive formula for computing

J_k(x_{0:k}) = L^{22}_k - (L^{12}_k)^T \left( L^{11}_k + J_{k-1}(x_{0:k-1}) \right)^{-1} L^{12}_k.   (9)

For nonlinear additive Gaussian systems, the pdfs required to evaluate the L-terms in (7) are given by p(z_k \,|\, x_k, x_{k-1}) = N(z_k; x_k - f_{k-1}(x_{k-1}) - G_{k-1} w_{k-1}, M) and p(y_k \,|\, x_k) = N(y_k; h_k(x_k), R_k), respectively. The corresponding L-terms then simplify to

L^{11}_k = F^T_{k-1}(x_{k-1}) M^{-1} F_{k-1}(x_{k-1}),   (10a)
L^{12}_k = F^T_{k-1}(x_{k-1}) M^{-1},   (10b)
L^{22}_k = H^T_k(x_k) R^{-1}_k H_k(x_k) + M^{-1},   (10c)

where we have introduced the Jacobian matrices F_{k-1}(x_{k-1}) = [\nabla_{x_{k-1}} f^T_{k-1}(x_{k-1})]^T and H_k(x_k) = [\nabla_{x_k} h^T_k(x_k)]^T. The auxiliary Fisher information matrix recursion then simplifies to

J_k(x_{0:k}) = H^T_k(x_k) R^{-1}_k H_k(x_k) + M^{-1} - M^{-1} F_{k-1}(x_{k-1}) \left( F^T_{k-1}(x_{k-1}) M^{-1} F_{k-1}(x_{k-1}) + J_{k-1}(x_{0:k-1}) \right)^{-1} F^T_{k-1}(x_{k-1}) M^{-1}.   (11)

Applying the matrix inversion lemma to the above expression yields

J_k(x_{0:k}) = \left( M + F_{k-1}(x_{k-1}) J^{-1}_{k-1}(x_{0:k-1}) F^T_{k-1}(x_{k-1}) \right)^{-1} + H^T_k(x_k) R^{-1}_k H_k(x_k).   (12)

Finally, setting M = 0 concludes the proof.
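The resulting recursion (12) with M = 0 is easy to evaluate numerically. Below is a minimal NumPy sketch of one step of J_k = (F J_{k-1}^{-1} F^T)^{-1} + H^T R^{-1} H; the model matrices in the example are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

def crlb_step(J_prev, F, H, R):
    """One step of recursion (12) with M = 0:
    J_k = (F J_{k-1}^{-1} F^T)^{-1} + H^T R^{-1} H,
    where F and H are the Jacobians of f_{k-1} and h_k evaluated
    along the deterministic state trajectory."""
    J_pred = np.linalg.inv(F @ np.linalg.inv(J_prev) @ F.T)
    return J_pred + H.T @ np.linalg.inv(R) @ H

# hypothetical 2-state example (constant-velocity-style dynamics, position measured)
J0 = np.eye(2)                          # initial Fisher information
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
J1 = crlb_step(J0, F, H, R)             # the parametric CRLB at time 1 is inv(J1)
```

The bound itself is obtained by inverting J_k after the last step.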

2 Proof of Lemma 1

Recall that the Kalman filter recursions are given by

\hat{x}_{k+1|k} = F \hat{x}_{k|k},   (13a)
P_{k+1|k} = F P_{k|k} F^T + Q,   (13b)
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (y_k - H \hat{x}_{k|k-1}),   (13c)
P_{k|k} = P_{k|k-1} - K_k S_k K^T_k,   (13d)

where

S_k = H P_{k|k-1} H^T + R,   (14a)
K_k = P_{k|k-1} H^T S^{-1}_k.   (14b)

Hence, the conditional bias b_k(x_{0:k}) can be rewritten as

b_{k+1}(x_{0:k+1}) \triangleq E_{y_{0:k+1}}\{ \hat{x}_{k+1|k+1} \,|\, x_{0:k+1} \} - x_{k+1}
= E_{y_{0:k+1}}\{ (I_n - K_{k+1} H) F \hat{x}_{k|k} + K_{k+1} y_{k+1} \,|\, x_{0:k+1} \} - x_{k+1}
= (I_n - K_{k+1} H) F\, E_{y_{0:k+1}}\{ \hat{x}_{k|k} \,|\, x_{0:k+1} \} + K_{k+1} E_{y_{0:k+1}}\{ y_{k+1} \,|\, x_{0:k+1} \} - x_{k+1}
= (I_n - K_{k+1} H) F\, E_{y_{0:k}}\{ \hat{x}_{k|k} \,|\, x_{0:k} \} + K_{k+1} H x_{k+1} - x_{k+1}
= (I_n - K_{k+1} H) F \left( E_{y_{0:k}}\{ \hat{x}_{k|k} \,|\, x_{0:k} \} - x_k + x_k \right) - (I_n - K_{k+1} H) x_{k+1}
= (I_n - K_{k+1} H) F b_k(x_{0:k}) + (I_n - K_{k+1} H) F x_k - (I_n - K_{k+1} H) x_{k+1}
= (I_n - K_{k+1} H) F b_k(x_{0:k}) - (I_n - K_{k+1} H) (x_{k+1} - F x_k),   (15)

where I_n denotes the identity matrix of size n \times n, and where we used the fact that the Kalman filter gain K_k is independent of y_k for all times. Hence, the recursive equation for the conditional bias is given as

b_{k+1}(x_{0:k+1}) = (I_n - K_{k+1} H) F b_k(x_{0:k}) - (I_n - K_{k+1} H) (x_{k+1} - F x_k),   (16)


which can equivalently be written as

b_{k+1}(x_{0:k+1}) = (I_n - K_{k+1} H) F b_k(x_{0:k}) - (I_n - K_{k+1} H) w_k,   (17)

where w_k denotes the specific process noise realization associated with the given state sequence x_{0:k}. This concludes the proof.
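The bias recursion (17) can be checked with a few lines of NumPy. The sketch below (all model matrices are hypothetical placeholders) first computes the measurement-independent Kalman gains via (13)-(14) and then propagates the conditional bias for a given process-noise realization.

```python
import numpy as np

def kalman_gains(P0, F, H, Q, R, steps):
    """Gains K_1, ..., K_steps from (13)-(14); they do not depend on y."""
    P, gains = P0, []
    for _ in range(steps):
        P_pred = F @ P @ F.T + Q                 # (13b)
        S = H @ P_pred @ H.T + R                 # (14a)
        K = P_pred @ H.T @ np.linalg.inv(S)      # (14b)
        gains.append(K)
        P = P_pred - K @ S @ K.T                 # (13d)
    return gains

def propagate_bias(b0, w_seq, gains, F, H):
    """Conditional bias recursion (17):
    b_{k+1} = (I - K_{k+1} H)(F b_k - w_k)."""
    n = F.shape[0]
    b = b0
    for K, w in zip(gains, w_seq):
        IKH = np.eye(n) - K @ H
        b = IKH @ (F @ b - w)
    return b
```

For an initially unbiased filter (b_0 = 0) and the zero process-noise realization w_k = 0 the recursion keeps b_k = 0, as expected.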

3 Proof of Lemma 2

The proof of Lemma 2 is slightly non-standard. We omit the time dependency of the variables to enhance readability. In case the estimator \hat{x}(y) has a bias b(x) we can write

\int (\hat{x}(y) - x)^T p(y|x)\, dy = b^T(x).   (18)

The left-hand side of the above equation can be further expanded to include the measurement z:

\int\!\!\int (\hat{x}(y) - x)^T p(y, z|x)\, dy\, dz = b^T(x).   (19)

Taking the gradient \nabla_x = [\partial/\partial x_1, \ldots, \partial/\partial x_n]^T on both sides yields

\int\!\!\int \nabla_x p(y, z|x)\, (\hat{x}(y) - x)^T dy\, dz = I + \nabla_x b^T(x),   (20)

or equivalently

\int\!\!\int \nabla_x \log(p(y, z|x))\, (\hat{x}(y) - x)^T p(y, z|x)\, dy\, dz = I + \nabla_x b^T(x).   (21)

Consider now the random vector

\begin{bmatrix} \hat{x}(y) - x \\ \nabla_x \log(p(y, z|x)) \end{bmatrix}.   (22)

The conditional mean of this vector is given by

E_{y,z}\left\{ \begin{bmatrix} \hat{x}(y) - x \\ \nabla_x \log(p(y, z|x)) \end{bmatrix} \Big|\, x \right\} = \begin{bmatrix} b(x) \\ 0 \end{bmatrix}.   (23)

The conditional covariance matrix of this vector is positive semi-definite by construction and, using (21) and E_{y,z}\{\nabla_x \log(p(y,z|x)) \,|\, x\} = 0, is given by

E_{y,z}\left\{ \begin{bmatrix} \hat{x}(y) - x - b(x) \\ \nabla_x \log(p(y, z|x)) \end{bmatrix} \begin{bmatrix} \hat{x}(y) - x - b(x) \\ \nabla_x \log(p(y, z|x)) \end{bmatrix}^T \Big|\, x \right\} = \begin{bmatrix} P & (I + \nabla_x b^T(x))^T \\ I + \nabla_x b^T(x) & J(x) \end{bmatrix},   (24)

where we have defined the conditional covariance

P = E_y\{ (\hat{x}(y) - x - b(x))(\hat{x}(y) - x - b(x))^T \,|\, x \}   (25)

and the auxiliary Fisher information matrix

J(x) = E_{y,z}\{ \nabla_x \log(p(y, z|x)) \nabla^T_x \log(p(y, z|x)) \,|\, x \} = E_{y,z}\{ -\Delta^x_x \log(p(y, z|x)) \,|\, x \},   (26)

where \Delta^t_u = \nabla_u [\nabla_t]^T. Since we are interested in a bound on P, the Schur complement of the lower-right block of (24) gives the desired result

P \geq (I + \nabla_x b^T(x))^T [J(x)]^{-1} (I + \nabla_x b^T(x)).   (27)

Since the estimator \hat{x}(y) is biased, the conditional MSE matrix can be further bounded from below as follows:

M(\hat{x}(y)|x) = P + b(x) b^T(x) \geq (I + \nabla_x b^T(x))^T [J(x)]^{-1} (I + \nabla_x b^T(x)) + b(x) b^T(x),   (28)

which concludes the proof.
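The inequality (28) can be sanity-checked numerically for a simple model without perfect measurements. The sketch below (all matrices are hypothetical placeholders) uses a linear measurement y = Hx + v, v ~ N(0, R), and a biased linear estimator \hat{x}(y) = Ay, for which b(x) = (AH - I)x, (I + \nabla_x b^T(x))^T = AH, J(x) = H^T R^{-1} H and M(\hat{x}|x) = A R A^T + b b^T are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 3
H = rng.standard_normal((m, n))
R = np.diag([0.5, 1.0, 2.0])
A = rng.standard_normal((n, m))      # arbitrary biased linear estimator x_hat = A y
x = rng.standard_normal(n)           # fixed deterministic state

b = (A @ H - np.eye(n)) @ x          # bias b(x)
J = H.T @ np.linalg.inv(R) @ H       # Fisher information J(x)
mse = A @ R @ A.T + np.outer(b, b)   # conditional MSE of x_hat
T = A @ H                            # (I + grad_x b^T(x))^T for this estimator
bound = T @ np.linalg.inv(J) @ T.T + np.outer(b, b)   # right-hand side of (28)

# (28) states that mse - bound is positive semi-definite
gap_eigvals = np.linalg.eigvalsh(mse - bound)
```

Here mse - bound = A (R - H J^{-1} H^T) A^T, which is positive semi-definite for any A, so the smallest eigenvalue of the gap is non-negative up to rounding.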


4 Proof of Theorem 2

The joint covariance for conditionally biased estimates \hat{x}_{0:k}(y_{0:k}) of x_{0:k} is bounded by the parametric CRLB \tilde{C}_{0:k} as given below:

Cov(\hat{x}_{0:k}(y_{0:k}) \,|\, x_{0:k}) \geq \tilde{C}_{0:k} \triangleq \tilde{B}_k C_{0:k} \tilde{B}^T_k,   (29)

where

\tilde{B}_k \triangleq I_{(k+1)n} + B_k   (30)

and C_{0:k} denotes the parametric CRLB for unbiased estimates given as

C_{0:k} \triangleq J^{-1}_{0:k},   (31)

with

J_{0:k} \triangleq E_{y_{0:k}, z_{1:k}}\{ \nabla_{x_{0:k}} \log p(y_{0:k}, z_{1:k} \,|\, x_{0:k})\, \nabla^T_{x_{0:k}} \log p(y_{0:k}, z_{1:k} \,|\, x_{0:k}) \,|\, x_{0:k} \}   (32)

denoting the auxiliary Fisher information matrix. The parametric CRLB \tilde{C}_k bounding the covariance of the biased estimate \hat{x}_k(y_{0:k}) can then be written as

\tilde{C}_k \triangleq [\tilde{B}_k]_{k,:} J^{-1}_{0:k} [\tilde{B}_k]^T_{k,:},   (33)

where the notation [\,\cdot\,]_{k,:} denotes the kth block row of the argument matrix. The parametric CRLB \tilde{C}_{k+1} for the covariance of the biased estimate \hat{x}_{k+1}(y_{0:k+1}) is given as

\tilde{C}_{k+1} = [\tilde{B}_{k+1}]_{k+1,:} J^{-1}_{0:k+1} [\tilde{B}_{k+1}]^T_{k+1,:} = [\tilde{B}_{k+1}]_{k+1,:} C_{0:k+1} [\tilde{B}_{k+1}]^T_{k+1,:}.   (34)

In order to be able to find a recursive expression for \tilde{C}_{k+1}, we have to express C_{0:k+1} in terms of C_{0:k}, and [\tilde{B}_{k+1}]_{k+1,:} in terms of [\tilde{B}_k]_{k,:}. We can do the latter easily as follows. Suppose we decompose the matrix \tilde{B}_k into blocks \tilde{B}^{\ell m}_k \in R^{n \times n}, 0 \leq \ell, m \leq k. The blocks \tilde{B}^{\ell m}_k would be given as

\tilde{B}^{\ell m}_k = \begin{cases} I_n + B^{\ell m}_k, & m = \ell, \\ B^{\ell m}_k, & \text{otherwise}. \end{cases}   (35)

For the computation of \tilde{B}^{\ell m}_k, we can make use of the following proposition.

Proposition 1. Suppose the matrix \tilde{B}_k is decomposed into blocks \tilde{B}^{\ell m}_k \in R^{n \times n}, 0 \leq \ell, m \leq k. Then, given the bias recursion (17), \tilde{B}^{\ell m}_k can be determined as follows:

\tilde{B}^{\ell m}_k = \begin{cases} \left( \prod_{j=1}^{\ell} (I_n - K_j H) F \right) \left( \nabla^T_{x_0} b_0(x_0) + I_n \right), & m = 0, \\ \left( \prod_{j=m+1}^{\ell} (I_n - K_j H) F \right) K_m H, & 1 \leq m \leq \ell, \\ 0, & \ell < m \leq k. \end{cases}   (36)

Proof: See Section 6.

Using the expression in (36), it is easy to show that

[\tilde{B}_{k+1}]_{k+1,:} = \begin{bmatrix} (I_n - K_{k+1} H) F [\tilde{B}_k]_{k,:} & K_{k+1} H \end{bmatrix}   (37)

holds. In a next step we express C_{0:k+1} in terms of C_{0:k}. Suppose the state vector is decomposed as x_{0:k} = [x^T_{0:k-1}, x^T_k]^T. Then the corresponding auxiliary Fisher information matrix J_{0:k} can be decomposed accordingly:

J_{0:k} = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix}.   (38)


Similarly, the decomposition of the state vector according to x_{0:k+1} = [x^T_{0:k-1}, x^T_k, x^T_{k+1}]^T yields the auxiliary Fisher information matrix

J_{0:k+1} = \begin{bmatrix} A & B & 0 \\ B^T & C + D_{11} & D_{12} \\ 0 & D_{21} & D_{22} \end{bmatrix},   (39)

where D_{21} = D^T_{12}. The inverse of J_{0:k}, i.e., C_{0:k}, is given as follows:

C_{0:k} = \begin{bmatrix} M_k & N_k \\ N^T_k & C_k \end{bmatrix},   (40)

where C_k denotes the parametric CRLB for unbiased estimates \hat{x}_k(y_{0:k}). Note that using the matrix inversion formula in block form, the inverse of J_{0:k+1}, i.e., C_{0:k+1}, can be written as

C_{0:k+1} = \begin{bmatrix} M_{k+1} & N_{k+1} \\ N^T_{k+1} & C_{k+1} \end{bmatrix},   (41)

where

M_{k+1} \triangleq \begin{bmatrix} A & B \\ B^T & C + D_{11} - D_{12} D^{-1}_{22} D_{21} \end{bmatrix}^{-1},   (42)

N_{k+1} \triangleq -M_{k+1} \begin{bmatrix} 0 \\ D_{12} \end{bmatrix} D^{-1}_{22},   (43)

C_{k+1} \triangleq \left( D_{22} - \begin{bmatrix} 0 & D_{21} \end{bmatrix} \begin{bmatrix} A & B \\ B^T & C + D_{11} \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ D_{12} \end{bmatrix} \right)^{-1} = \left( D_{22} - D_{21} (C + D_{11} - B^T A^{-1} B)^{-1} D_{12} \right)^{-1} = \left( D_{22} - D_{21} (J_k + D_{11})^{-1} D_{12} \right)^{-1},   (44)

where J_k = C^{-1}_k. We now consider the matrix M_{k+1} below:

M_{k+1} = \begin{bmatrix} A & B \\ B^T & C + D_{11} - D_{12} D^{-1}_{22} D_{21} \end{bmatrix}^{-1} = \left( J_{0:k} + \begin{bmatrix} 0 & 0 \\ 0 & D_{11} - D_{12} D^{-1}_{22} D_{21} \end{bmatrix} \right)^{-1} = \begin{bmatrix} M_k & N_k \\ N^T_k & C_k \end{bmatrix} - \begin{bmatrix} N_k \\ C_k \end{bmatrix} \left( C_k + \left( D_{11} - D_{12} D^{-1}_{22} D_{21} \right)^{-1} \right)^{-1} \begin{bmatrix} N^T_k & C_k \end{bmatrix},   (45)

where we used the following proposition to obtain the last equality.

Proposition 2. Consider the matrix defined as

I \triangleq \begin{bmatrix} A & B \\ C & D \end{bmatrix},   (46)

where A \in R^{n \times n}, B \in R^{n \times m}, C \in R^{m \times n} and D \in R^{m \times m}. Consider the modification \tilde{I} of I defined as

\tilde{I} \triangleq \begin{bmatrix} A & B \\ C & D + E \end{bmatrix},   (47)

where E \in R^{m \times m}. We assume that both I and \tilde{I} are invertible. When the inverse of the matrix E exists, the inverse of \tilde{I} can be written in terms of the inverse of I as follows:

\tilde{I}^{-1} = I^{-1} - \begin{bmatrix} [I^{-1}]_{12} \\ [I^{-1}]_{22} \end{bmatrix} \left( [I^{-1}]_{22} + E^{-1} \right)^{-1} \begin{bmatrix} [I^{-1}]_{21} & [I^{-1}]_{22} \end{bmatrix},   (48)

where

- 0_{n \times m} denotes the zero matrix of size n \times m;
- I_n denotes the identity matrix of size n \times n;
- the notation [I^{-1}]_{ij}, 1 \leq i, j \leq 2, denotes the ijth block of I^{-1}, defined as

[I^{-1}]_{11} \triangleq \begin{bmatrix} I_n & 0_{n \times m} \end{bmatrix} I^{-1} \begin{bmatrix} I_n \\ 0_{m \times n} \end{bmatrix},   (49)
[I^{-1}]_{12} \triangleq \begin{bmatrix} I_n & 0_{n \times m} \end{bmatrix} I^{-1} \begin{bmatrix} 0_{n \times m} \\ I_m \end{bmatrix},   (50)
[I^{-1}]_{21} \triangleq \begin{bmatrix} 0_{m \times n} & I_m \end{bmatrix} I^{-1} \begin{bmatrix} I_n \\ 0_{m \times n} \end{bmatrix},   (51)
[I^{-1}]_{22} \triangleq \begin{bmatrix} 0_{m \times n} & I_m \end{bmatrix} I^{-1} \begin{bmatrix} 0_{n \times m} \\ I_m \end{bmatrix},   (52)

which lead to the following partitioning of I^{-1}:

I^{-1} = \begin{bmatrix} [I^{-1}]_{11} & [I^{-1}]_{12} \\ [I^{-1}]_{21} & [I^{-1}]_{22} \end{bmatrix}.   (53)

Proof: See Section 7.

As a result, we have

M_{k+1} = \begin{bmatrix} M_k & N_k \\ N^T_k & C_k \end{bmatrix} - \begin{bmatrix} N_k \\ C_k \end{bmatrix} \left( C_k + \left( D_{11} - D_{12} D^{-1}_{22} D_{21} \right)^{-1} \right)^{-1} \begin{bmatrix} N^T_k & C_k \end{bmatrix}   (54a)
= \begin{bmatrix} M_k & N_k \\ N^T_k & C_k \end{bmatrix} - \begin{bmatrix} N_k \\ C_k \end{bmatrix} J_k \left( C_k - \left( J_k + D_{11} - D_{12} D^{-1}_{22} D_{21} \right)^{-1} \right) J_k \begin{bmatrix} N^T_k & C_k \end{bmatrix},   (54b)

where the second form is to be used in case the matrix D_{11} - D_{12} D^{-1}_{22} D_{21} is not invertible. Note that the lower-right block of M_{k+1} corresponds to \tilde{J}^{-1}_k and is given as

\tilde{J}^{-1}_k = C_k - C_k \left( C_k + \left( D_{11} - D_{12} D^{-1}_{22} D_{21} \right)^{-1} \right)^{-1} C_k = \left( C^{-1}_k + D_{11} - D_{12} D^{-1}_{22} D_{21} \right)^{-1} = \left( J_k + D_{11} - D_{12} D^{-1}_{22} D_{21} \right)^{-1}.   (55)

Hence we have

\tilde{J}_k = J_k + D_{11} - D_{12} D^{-1}_{22} D_{21}.   (56)

We are going to shorten the expression for M_{k+1} as follows:

M_{k+1} = C_{0:k} - \begin{bmatrix} N_k \\ C_k \end{bmatrix} J_k \left( C_k - \tilde{J}^{-1}_k \right) J_k \begin{bmatrix} N^T_k & C_k \end{bmatrix}.   (57)

We can now rewrite N_{k+1} according to

N_{k+1} \triangleq -M_{k+1} \begin{bmatrix} 0 \\ D_{12} \end{bmatrix} D^{-1}_{22} = -\begin{bmatrix} N_k \\ C_k \end{bmatrix} D_{12} D^{-1}_{22} + \begin{bmatrix} N_k \\ C_k \end{bmatrix} \left( C_k + \left( D_{11} - D_{12} D^{-1}_{22} D_{21} \right)^{-1} \right)^{-1} C_k D_{12} D^{-1}_{22} = -\begin{bmatrix} N_k \\ C_k \end{bmatrix} \left( I_n - \left( C_k + \left( D_{11} - D_{12} D^{-1}_{22} D_{21} \right)^{-1} \right)^{-1} C_k \right) D_{12} D^{-1}_{22}.   (58)

Hence we have

N_{k+1} = -\begin{bmatrix} N_k \\ C_k \end{bmatrix} \left( I_n - \left( C_k + \left( D_{11} - D_{12} D^{-1}_{22} D_{21} \right)^{-1} \right)^{-1} C_k \right) D_{12} D^{-1}_{22}   (59a)
= -\begin{bmatrix} N_k \\ C_k \end{bmatrix} J_k \left( J_k + D_{11} - D_{12} D^{-1}_{22} D_{21} \right)^{-1} D_{12} D^{-1}_{22},   (59b)

where the second form is to be used in case the matrix D_{11} - D_{12} D^{-1}_{22} D_{21} is not invertible. We are going to shorten the expression for N_{k+1} as follows:

N_{k+1} = -\begin{bmatrix} N_k \\ C_k \end{bmatrix} J_k \tilde{J}^{-1}_k D_{12} D^{-1}_{22}.   (60)


The expressions in (44), (54) and (59) give recursions so that the inverted matrix C_{0:k+1} = J^{-1}_{0:k+1} can be obtained directly from C_{0:k} = J^{-1}_{0:k} without making a direct inversion.

We are now in the position to derive a recursion for the parametric CRLB for biased estimates \tilde{C}_k as follows:

\tilde{C}_{k+1} = [\tilde{B}_{k+1}]_{k+1,:} J^{-1}_{0:k+1} [\tilde{B}_{k+1}]^T_{k+1,:}   (61)
= \begin{bmatrix} (I_n - K_{k+1} H) F [\tilde{B}_k]_{k,:} & K_{k+1} H \end{bmatrix} \begin{bmatrix} M_{k+1} & N_{k+1} \\ N^T_{k+1} & C_{k+1} \end{bmatrix} \begin{bmatrix} (I_n - K_{k+1} H) F [\tilde{B}_k]_{k,:} & K_{k+1} H \end{bmatrix}^T   (62)
= (I_n - K_{k+1} H) F [\tilde{B}_k]_{k,:} M_{k+1} [\tilde{B}_k]^T_{k,:} F^T (I_n - K_{k+1} H)^T + (I_n - K_{k+1} H) F [\tilde{B}_k]_{k,:} N_{k+1} H^T K^T_{k+1} + K_{k+1} H N^T_{k+1} [\tilde{B}_k]^T_{k,:} F^T (I_n - K_{k+1} H)^T + K_{k+1} H C_{k+1} H^T K^T_{k+1}   (63)
= (I_n - K_{k+1} H) F [\tilde{B}_k]_{k,:} C_{0:k} [\tilde{B}_k]^T_{k,:} F^T (I_n - K_{k+1} H)^T - (I_n - K_{k+1} H) F [\tilde{B}_k]_{k,:} \begin{bmatrix} N_k \\ C_k \end{bmatrix} J_k \left( C_k - \tilde{J}^{-1}_k \right) J_k \begin{bmatrix} N^T_k & C_k \end{bmatrix} [\tilde{B}_k]^T_{k,:} F^T (I_n - K_{k+1} H)^T - (I_n - K_{k+1} H) F [\tilde{B}_k]_{k,:} \begin{bmatrix} N_k \\ C_k \end{bmatrix} J_k \tilde{J}^{-1}_k D_{12} D^{-1}_{22} H^T K^T_{k+1} - K_{k+1} H D^{-1}_{22} D_{21} \tilde{J}^{-1}_k J_k \begin{bmatrix} N^T_k & C_k \end{bmatrix} [\tilde{B}_k]^T_{k,:} F^T (I_n - K_{k+1} H)^T + K_{k+1} H C_{k+1} H^T K^T_{k+1}   (64)
= (I_n - K_{k+1} H) F \tilde{C}_k F^T (I_n - K_{k+1} H)^T - (I_n - K_{k+1} H) F \Psi_k J_k \left( C_k - \tilde{J}^{-1}_k \right) J_k \Psi^T_k F^T (I_n - K_{k+1} H)^T - (I_n - K_{k+1} H) F \Psi_k J_k \tilde{J}^{-1}_k D_{12} D^{-1}_{22} H^T K^T_{k+1} - K_{k+1} H D^{-1}_{22} D_{21} \tilde{J}^{-1}_k J_k \Psi^T_k F^T (I_n - K_{k+1} H)^T + K_{k+1} H C_{k+1} H^T K^T_{k+1}   (65)
= \begin{bmatrix} (I_n - K_{k+1} H) F & K_{k+1} H \end{bmatrix} \begin{bmatrix} \tilde{C}_k - \Psi_k J_k \left( C_k - \tilde{J}^{-1}_k \right) J_k \Psi^T_k & -\Psi_k J_k \tilde{J}^{-1}_k D_{12} D^{-1}_{22} \\ -D^{-1}_{22} D_{21} \tilde{J}^{-1}_k J_k \Psi^T_k & C_{k+1} \end{bmatrix} \begin{bmatrix} (I_n - K_{k+1} H) F & K_{k+1} H \end{bmatrix}^T,   (66)

where

\Psi_k \triangleq [\tilde{B}_k]_{k,:} \begin{bmatrix} N_k \\ C_k \end{bmatrix}   (67)

and where we used [\tilde{B}_k]_{k,:} C_{0:k} [\tilde{B}_k]^T_{k,:} = \tilde{C}_k.


We now find a recursion for \Psi_k as follows:

\Psi_{k+1} \triangleq [\tilde{B}_{k+1}]_{k+1,:} \begin{bmatrix} N_{k+1} \\ C_{k+1} \end{bmatrix}   (68)
= \begin{bmatrix} (I_n - K_{k+1} H) F [\tilde{B}_k]_{k,:} & K_{k+1} H \end{bmatrix} \begin{bmatrix} N_{k+1} \\ C_{k+1} \end{bmatrix}   (69)
= (I_n - K_{k+1} H) F [\tilde{B}_k]_{k,:} N_{k+1} + K_{k+1} H C_{k+1}   (70)
= -(I_n - K_{k+1} H) F [\tilde{B}_k]_{k,:} \begin{bmatrix} N_k \\ C_k \end{bmatrix} J_k \tilde{J}^{-1}_k D_{12} D^{-1}_{22} + K_{k+1} H C_{k+1}   (71)
= -(I_n - K_{k+1} H) F \Psi_k J_k \tilde{J}^{-1}_k D_{12} D^{-1}_{22} + K_{k+1} H C_{k+1}.   (72)

Hence the recursion for \Psi_k is given as

\Psi_{k+1} = -(I_n - K_{k+1} H) F \Psi_k J_k \tilde{J}^{-1}_k D_{12} D^{-1}_{22} + K_{k+1} H C_{k+1}.   (73)

Before we continue to evaluate the information submatrices D_{11}, D_{12}, D_{21} and D_{22} for linear systems, we first summarize our findings. It is possible to calculate \tilde{C}_k recursively without growing memory and computation requirements. The recursion for \tilde{C}_k is given as

\tilde{C}_{k+1} = \begin{bmatrix} (I_n - K_{k+1} H) F & K_{k+1} H \end{bmatrix} \begin{bmatrix} \tilde{C}_k - \Psi_k J_k \left( C_k - \tilde{J}^{-1}_k \right) J_k \Psi^T_k & -\Psi_k J_k \tilde{J}^{-1}_k D_{12} D^{-1}_{22} \\ -D^{-1}_{22} D_{21} \tilde{J}^{-1}_k J_k \Psi^T_k & C_{k+1} \end{bmatrix} \begin{bmatrix} (I_n - K_{k+1} H) F & K_{k+1} H \end{bmatrix}^T.   (74)

We initialize \tilde{C}_0 using the parametric CRLB for unbiased estimates as follows:

\tilde{C}_0 = \left( I_n + \nabla^T_{x_0} b_0(x_0) \right) C_0 \left( I_n + \nabla^T_{x_0} b_0(x_0) \right)^T.   (75)

The intermediate matrix \Psi_k has the recursion

\Psi_{k+1} = -(I_n - K_{k+1} H) F \Psi_k J_k \tilde{J}^{-1}_k D_{12} D^{-1}_{22} + K_{k+1} H C_{k+1},   (76)

which is initialized as

\Psi_0 = \left( I_n + \nabla^T_{x_0} b_0(x_0) \right) C_0.   (77)

The parametric CRLB C_k for unbiased estimates has the following recursion:

C_{k+1} = \left( D_{22} - D_{21} (J_k + D_{11})^{-1} D_{12} \right)^{-1},   (78)

where J_k = C^{-1}_k is the auxiliary FIM. The recursion (78) is initialized with the true initial covariance of the initial estimate \hat{x}_0(y_0) (not with the true MSE matrix!). The quantity \tilde{J}^{-1}_k is evaluated from

\tilde{J}^{-1}_k = \left( J_k + D_{11} - D_{12} D^{-1}_{22} D_{21} \right)^{-1}.   (79)

For the calculation of the parametric CRLB for a linear system, we have the information submatrices D_{11}, D_{12}, D_{21} and D_{22} given as

D_{11} = F^T Q^{-1} F,   (80)
D_{12} = F^T Q^{-1},   (81)
D_{21} = Q^{-1} F,   (82)
D_{22} = Q^{-1} + H^T R^{-1} H,   (83)

with Q \to 0. In this case, first calculating D_{11}, D_{12}, D_{21} and D_{22} and then obtaining the relations D_{12} D^{-1}_{22}, D_{11} - D_{12} D^{-1}_{22} D_{21} and D_{22} - D_{21} (J_k + D_{11})^{-1} D_{12} is not possible, since Q^{-1} does not exist in the limit.


Equivalent expressions for D_{12} D^{-1}_{22}, D_{11} - D_{12} D^{-1}_{22} D_{21} and D_{22} - D_{21} (J_k + D_{11})^{-1} D_{12} which do not require Q^{-1} are given below:

D_{12} D^{-1}_{22} = F^T Q^{-1} \left( Q^{-1} + H^T R^{-1} H \right)^{-1} = F^T Q^{-1} \left( Q^{-1} + H^T R^{-1} H \right)^{-1} Q^{-1} Q = F^T \left( Q^{-1} - \left( Q^{-1} - Q^{-1} \left( Q^{-1} + H^T R^{-1} H \right)^{-1} Q^{-1} \right) \right) Q = F^T \left( Q^{-1} - \left( Q + (H^T R^{-1} H)^{-1} \right)^{-1} \right) Q = F^T \left( I_n - \left( Q + (H^T R^{-1} H)^{-1} \right)^{-1} Q \right),   (84)

D_{11} - D_{12} D^{-1}_{22} D_{21} = F^T Q^{-1} F - F^T Q^{-1} \left( Q^{-1} + H^T R^{-1} H \right)^{-1} Q^{-1} F = F^T \left( Q^{-1} - Q^{-1} \left( Q^{-1} + H^T R^{-1} H \right)^{-1} Q^{-1} \right) F = F^T \left( Q + (H^T R^{-1} H)^{-1} \right)^{-1} F,   (85)

D_{22} - D_{21} (J_k + D_{11})^{-1} D_{12} = Q^{-1} + H^T R^{-1} H - Q^{-1} F (J_k + D_{11})^{-1} F^T Q^{-1} = H^T R^{-1} H + Q^{-1} - Q^{-1} F \left( J_k + F^T Q^{-1} F \right)^{-1} F^T Q^{-1} = \left( F C_k F^T + Q \right)^{-1} + H^T R^{-1} H.   (86)

Therefore, when Q \to 0, the expressions D_{12} D^{-1}_{22}, D_{11} - D_{12} D^{-1}_{22} D_{21} and D_{22} - D_{21} (J_k + D_{11})^{-1} D_{12} are given as

D_{12} D^{-1}_{22} = F^T,   (87)
D_{11} - D_{12} D^{-1}_{22} D_{21} = F^T H^T R^{-1} H F,   (88)
D_{22} - D_{21} (J_k + D_{11})^{-1} D_{12} = \left( F C_k F^T \right)^{-1} + H^T R^{-1} H.   (89)

Substituting these results into (74) and noting that C_k = J^{-1}_k finally gives

\tilde{C}_{k+1} = \begin{bmatrix} (I_n - K_{k+1} H) F & K_{k+1} H \end{bmatrix} \begin{bmatrix} \tilde{C}_k - \Psi_k J_k \left( J^{-1}_k - \tilde{J}^{-1}_k \right) J_k \Psi^T_k & -\Psi_k J_k \tilde{J}^{-1}_k F^T \\ -F \tilde{J}^{-1}_k J_k \Psi^T_k & J^{-1}_{k+1} \end{bmatrix} \begin{bmatrix} (I_n - K_{k+1} H) F & K_{k+1} H \end{bmatrix}^T   (90)

with

\Psi_{k+1} = -(I_n - K_{k+1} H) F \Psi_k J_k \tilde{J}^{-1}_k F^T + K_{k+1} H J^{-1}_{k+1},   (91)
\tilde{J}^{-1}_k = \left( J_k + F^T H^T R^{-1} H F \right)^{-1},   (92)
J_{k+1} = \left( F J^{-1}_k F^T \right)^{-1} + H^T R^{-1} H,   (93)

which concludes the proof.
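For reference, the final recursions (90)-(93) can be written down directly in NumPy. The sketch below (the matrices F, H, R and the gain K are hypothetical placeholders, and the gain would in practice come from the Kalman recursions (13)-(14)) performs one combined update of J_k, \tilde{J}_k, \Psi_k and \tilde{C}_k.

```python
import numpy as np

def biased_crlb_step(Ct, Psi, J, F, H, R, K):
    """One step of the recursions (90)-(93); K is the Kalman gain K_{k+1}."""
    n = F.shape[0]
    S = H.T @ np.linalg.inv(R) @ H
    Jt_inv = np.linalg.inv(J + F.T @ S @ F)                    # (92)
    J_next = np.linalg.inv(F @ np.linalg.inv(J) @ F.T) + S     # (93)
    G = np.hstack([(np.eye(n) - K @ H) @ F, K @ H])            # [(I-KH)F  KH]
    tl = Ct - Psi @ J @ (np.linalg.inv(J) - Jt_inv) @ J @ Psi.T
    tr = -Psi @ J @ Jt_inv @ F.T
    middle = np.block([[tl, tr], [tr.T, np.linalg.inv(J_next)]])
    Ct_next = G @ middle @ G.T                                 # (90)
    Psi_next = -(np.eye(n) - K @ H) @ F @ Psi @ J @ Jt_inv @ F.T \
               + K @ H @ np.linalg.inv(J_next)                 # (91)
    return Ct_next, Psi_next, J_next
```

With b_0 = 0 the initializations (75) and (77) reduce to \tilde{C}_0 = C_0 and \Psi_0 = C_0. Note that the middle matrix is symmetric, so \tilde{C}_{k+1} stays symmetric by construction.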

5 Proof of Lemma 3

The conditional MSE matrix for a Kalman filter given a specific state sequence is defined as follows:

M(\hat{x}_{k|k}(y_{0:k}) \,|\, x_{0:k}) = E_{y_{0:k}}\left\{ \left( \hat{x}_{k|k}(y_{0:k}) - x_k \right) \left( \hat{x}_{k|k}(y_{0:k}) - x_k \right)^T \,\Big|\, x_{0:k} \right\}.   (94)

Similar to the bias case, we call the quantity M(\hat{x}_{k|k}(y_{0:k}) \,|\, x_{0:k}) the conditional MSE of the Kalman filter. In the following, we omit the dependency of the estimator \hat{x}_{k|k} on the measurement sequence y_{0:k} to enhance readability. Notice that the conditional MSE matrix can be decomposed into two terms as follows:

M(\hat{x}_{k|k} \,|\, x_{0:k}) = Cov(\hat{x}_{k|k} \,|\, x_{0:k}) + b_k(x_{0:k}) b^T_k(x_{0:k}),   (95)

where the quantity Cov(\hat{x}_{k|k} \,|\, x_{0:k}), defined as

Cov(\hat{x}_{k|k} \,|\, x_{0:k}) \triangleq E_{y_{0:k}}\left\{ \left( \hat{x}_{k|k} - E_{y_{0:k}}\{ \hat{x}_{k|k} \,|\, x_{0:k} \} \right) \left( \hat{x}_{k|k} - E_{y_{0:k}}\{ \hat{x}_{k|k} \,|\, x_{0:k} \} \right)^T \,\Big|\, x_{0:k} \right\},   (96)


is called the conditional covariance matrix. In the following we find a recursive expression for the conditional MSE matrix:

\hat{x}_{k+1|k+1} - x_{k+1} = (I_n - K_{k+1} H) F \hat{x}_{k|k} + K_{k+1} y_{k+1} - x_{k+1}
= (I_n - K_{k+1} H) F \hat{x}_{k|k} + K_{k+1} H x_{k+1} + K_{k+1} v_{k+1} - x_{k+1}
= (I_n - K_{k+1} H) F \hat{x}_{k|k} - (I_n - K_{k+1} H) x_{k+1} + K_{k+1} v_{k+1}
= (I_n - K_{k+1} H) F \hat{x}_{k|k} - (I_n - K_{k+1} H) F x_k - (I_n - K_{k+1} H)(x_{k+1} - F x_k) + K_{k+1} v_{k+1}
= (I_n - K_{k+1} H) F (\hat{x}_{k|k} - x_k) - (I_n - K_{k+1} H)(x_{k+1} - F x_k) + K_{k+1} v_{k+1}.   (97)

Now taking the expectation of both sides with respect to y_{0:k+1} (i.e., with respect to v_{0:k+1}) given x_{0:k+1} would give the bias recursion we have found in Lemma 1. In order to find the conditional MSE, we can instead take the dyadic product of both sides with their transpose and then take the expected value of both sides with respect to y_{0:k+1} (i.e., with respect to v_{0:k+1}) given x_{0:k+1}, which gives

M(\hat{x}_{k+1|k+1} \,|\, x_{0:k+1}) \triangleq E_{y_{0:k+1}}\left\{ (\hat{x}_{k+1|k+1} - x_{k+1})(\hat{x}_{k+1|k+1} - x_{k+1})^T \,|\, x_{0:k+1} \right\}
= (I_n - K_{k+1} H)\, E_{y_{0:k+1}}\left\{ \left( F (\hat{x}_{k|k} - x_k) - (x_{k+1} - F x_k) \right) \left( F (\hat{x}_{k|k} - x_k) - (x_{k+1} - F x_k) \right)^T \,|\, x_{0:k+1} \right\} (I_n - K_{k+1} H)^T + K_{k+1} R K^T_{k+1}
= (I_n - K_{k+1} H) \left[ F M(\hat{x}_{k|k} \,|\, x_{0:k}) F^T - F b_k(x_{0:k})(x_{k+1} - F x_k)^T - (x_{k+1} - F x_k) b^T_k(x_{0:k}) F^T + (x_{k+1} - F x_k)(x_{k+1} - F x_k)^T \right] (I_n - K_{k+1} H)^T + K_{k+1} R K^T_{k+1},   (98)

which is the recursion for the conditional MSE matrix. Note that by using the system dynamics, we can equivalently write (98) as

M(\hat{x}_{k+1|k+1} \,|\, x_{0:k+1}) = (I_n - K_{k+1} H) \left[ F M(\hat{x}_{k|k} \,|\, x_{0:k}) F^T - F b_k(x_{0:k}) w^T_k - w_k b^T_k(x_{0:k}) F^T + w_k w^T_k \right] (I_n - K_{k+1} H)^T + K_{k+1} R K^T_{k+1},   (99)

where wk denotes the specific process noise realization associated with the given state sequence x0:k. This concludes

the proof.
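The conditional MSE recursion (99), together with the bias recursion from Lemma 1, is again straightforward to implement. The sketch below (model matrices are hypothetical placeholders) propagates both quantities one step.

```python
import numpy as np

def conditional_mse_step(M, b, w, F, H, R, K):
    """One step of the conditional MSE recursion (99) and the bias
    recursion (17); w is the process-noise realization w_k associated
    with the given state sequence, K the Kalman gain K_{k+1}."""
    n = F.shape[0]
    IKH = np.eye(n) - K @ H
    M_next = IKH @ (F @ M @ F.T
                    - F @ np.outer(b, w)
                    - np.outer(w, b) @ F.T
                    + np.outer(w, w)) @ IKH.T + K @ R @ K.T
    b_next = IKH @ (F @ b - w)
    return M_next, b_next
```

For b_k = 0 and w_k = 0 the update reduces to (I - K H) F M F^T (I - K H)^T + K R K^T.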

6 Proof of Proposition 1

Considering the recursion (17), we can find an explicit formula for the conditional bias:

b_k(x_{0:k}) = \left( \prod_{i=1}^{k} (I_n - K_i H) F \right) b_0(x_0) - \sum_{i=1}^{k} \left( \prod_{j=i+1}^{k} (I_n - K_j H) F \right) (I_n - K_i H)(x_i - F x_{i-1}).   (100)

We now define the conditional bias Jacobian matrix B_k as follows:

B_k \triangleq \nabla^T_{x_{0:k}} \left[ b^T_0(x_0)\;\; b^T_1(x_{0:1})\;\; \ldots\;\; b^T_k(x_{0:k}) \right]^T = \left[ B^{\ell m}_k \right] \in R^{(k+1)n \times (k+1)n},   (101)

where B^{\ell m}_k \in R^{n \times n}, 0 \leq \ell, m \leq k, is defined as

B^{\ell m}_k \triangleq \nabla^T_{x_m} b_\ell(x_{0:\ell}).   (102)

Hence, B^{\ell m}_k is the block of the Jacobian matrix B_k corresponding to the Jacobian of the conditional bias vector b_\ell(x_{0:\ell}) with respect to x_m. The blocks B^{\ell m}_k, 0 \leq \ell, m \leq k, can be calculated as follows. Since the conditional bias b_\ell(x_{0:\ell}) is only dependent on x_{0:\ell} and independent of x_i, i > \ell, the matrix B_k is lower block triangular, i.e., its blocks above the main block diagonal are all zero. Hence we have

B^{\ell m}_k = 0   (103)


for m > \ell. The remaining blocks can be calculated as follows:

B^{\ell m}_k = \begin{cases} \left( \prod_{j=1}^{\ell} (I_n - K_j H) F \right) \left( \nabla^T_{x_0} b_0(x_0) + 1(\{\ell \neq 0\}) I_n \right), & m = 0, \\ \left( \prod_{j=m+1}^{\ell} (I_n - K_j H) F \right) K_m H, & 1 \leq m \leq \ell - 1, \\ -(I_n - K_\ell H), & m = \ell, \\ 0, & \ell < m \leq k, \end{cases}   (104)

where the notation 1(\cdot) denotes the indicator function of the argument event defined as

1(E) \triangleq \begin{cases} 1, & E \text{ is true}, \\ 0, & E \text{ is false}. \end{cases}   (105)

This concludes the proof.
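The closed form (100) can be cross-checked against the recursion (17) numerically. The sketch below (random model matrices as hypothetical placeholders) evaluates both; the products in (100) are ordered with the largest index on the left, matching the order in which the recursion multiplies factors.

```python
import numpy as np

def bias_explicit(b0, x_seq, gains, F, H):
    """Explicit conditional bias (100); gains[i-1] = K_i, x_seq[i] = x_i."""
    n, k = F.shape[0], len(gains)
    A = [np.eye(n) - K @ H for K in gains]     # A[i-1] = I - K_i H

    def prod(lo, hi):                          # prod_{j=lo}^{hi} (I - K_j H) F
        P = np.eye(n)
        for j in range(hi, lo - 1, -1):        # largest index on the left
            P = P @ A[j - 1] @ F
        return P

    b = prod(1, k) @ b0
    for i in range(1, k + 1):
        b = b - prod(i + 1, k) @ A[i - 1] @ (x_seq[i] - F @ x_seq[i - 1])
    return b

def bias_recursive(b0, x_seq, gains, F, H):
    """Bias recursion (17), for cross-checking (100)."""
    n = F.shape[0]
    b = b0
    for i, K in enumerate(gains, start=1):
        IKH = np.eye(n) - K @ H
        b = IKH @ F @ b - IKH @ (x_seq[i] - F @ x_seq[i - 1])
    return b
```

Both functions agree for arbitrary gain sequences and state sequences, since (100) is simply the unrolled form of (17).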

7 Proof of Proposition 2

By definition, we have

\tilde{I} = I + \begin{bmatrix} 0_{n \times n} & 0_{n \times m} \\ 0_{m \times n} & E \end{bmatrix}.   (106)

Hence, we can write the following:

\tilde{I}^{-1} = \lim_{\epsilon \to 0} \left( I + \begin{bmatrix} \epsilon I_n & 0_{n \times m} \\ 0_{m \times n} & E \end{bmatrix} \right)^{-1}.   (107)

Using the formula for the inversion of the sum of matrices given as

(A + B)^{-1} = A^{-1} - A^{-1} \left( A^{-1} + B^{-1} \right)^{-1} A^{-1},   (108)

we get

\tilde{I}^{-1} = \lim_{\epsilon \to 0} I^{-1} - I^{-1} \left( I^{-1} + \begin{bmatrix} \frac{1}{\epsilon} I_n & 0_{n \times m} \\ 0_{m \times n} & E^{-1} \end{bmatrix} \right)^{-1} I^{-1}.   (109)

Substituting the partitioned form of I^{-1} given in (53) into (109), we get

\tilde{I}^{-1} = \lim_{\epsilon \to 0} I^{-1} - I^{-1} \left( \begin{bmatrix} [I^{-1}]_{11} & [I^{-1}]_{12} \\ [I^{-1}]_{21} & [I^{-1}]_{22} \end{bmatrix} + \begin{bmatrix} \frac{1}{\epsilon} I_n & 0_{n \times m} \\ 0_{m \times n} & E^{-1} \end{bmatrix} \right)^{-1} I^{-1}   (110)
= \lim_{\epsilon \to 0} I^{-1} - I^{-1} \begin{bmatrix} [I^{-1}]_{11} + \frac{1}{\epsilon} I_n & [I^{-1}]_{12} \\ [I^{-1}]_{21} & [I^{-1}]_{22} + E^{-1} \end{bmatrix}^{-1} I^{-1}.   (111)

We now consider the matrix inversion in block form given below:

\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} \Delta^{-1}_A & -\Delta^{-1}_A B D^{-1} \\ -D^{-1} C \Delta^{-1}_A & \Delta^{-1}_D \end{bmatrix},   (112)

where the matrices \Delta_A and \Delta_D, defined as

\Delta_A = A - B D^{-1} C,   (113)
\Delta_D = D - C A^{-1} B,   (114)

denote the Schur complements associated with A and D, respectively. Applying the matrix inversion in block form given above to the block matrix inverse on the right-hand side of (111), we get

\tilde{I}^{-1} = \lim_{\epsilon \to 0} I^{-1} - I^{-1} \begin{bmatrix} \Delta^{-1}_{11} & -\Delta^{-1}_{11} [I^{-1}]_{12} \left( [I^{-1}]_{22} + E^{-1} \right)^{-1} \\ -\left( [I^{-1}]_{22} + E^{-1} \right)^{-1} [I^{-1}]_{21} \Delta^{-1}_{11} & \Delta^{-1}_{22} \end{bmatrix} I^{-1},   (115)


where

\Delta_{11} \triangleq [I^{-1}]_{11} + \frac{1}{\epsilon} I_n - [I^{-1}]_{12} \left( [I^{-1}]_{22} + E^{-1} \right)^{-1} [I^{-1}]_{21},   (116)
\Delta_{22} \triangleq [I^{-1}]_{22} + E^{-1} - [I^{-1}]_{21} \left( [I^{-1}]_{11} + \frac{1}{\epsilon} I_n \right)^{-1} [I^{-1}]_{12}.   (117)

Noting that we have

\lim_{\epsilon \to 0} \Delta^{-1}_{11} = 0_{n \times n},   (118)
\lim_{\epsilon \to 0} \Delta^{-1}_{22} = \left( [I^{-1}]_{22} + E^{-1} \right)^{-1},   (119)

we obtain

\tilde{I}^{-1} = I^{-1} - I^{-1} \begin{bmatrix} 0_{n \times n} & 0_{n \times m} \\ 0_{m \times n} & \left( [I^{-1}]_{22} + E^{-1} \right)^{-1} \end{bmatrix} I^{-1}   (120)
= I^{-1} - \begin{bmatrix} [I^{-1}]_{11} & [I^{-1}]_{12} \\ [I^{-1}]_{21} & [I^{-1}]_{22} \end{bmatrix} \begin{bmatrix} 0_{n \times n} & 0_{n \times m} \\ 0_{m \times n} & \left( [I^{-1}]_{22} + E^{-1} \right)^{-1} \end{bmatrix} \begin{bmatrix} [I^{-1}]_{11} & [I^{-1}]_{12} \\ [I^{-1}]_{21} & [I^{-1}]_{22} \end{bmatrix}   (121)
= I^{-1} - \begin{bmatrix} [I^{-1}]_{12} \\ [I^{-1}]_{22} \end{bmatrix} \left( [I^{-1}]_{22} + E^{-1} \right)^{-1} \begin{bmatrix} [I^{-1}]_{21} & [I^{-1}]_{22} \end{bmatrix}.   (122)

This concludes the proof.
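Proposition 2 is easy to verify numerically. The sketch below (randomly generated, well-conditioned test matrices) modifies the lower-right block of a matrix and compares the direct inverse with the update formula (48).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
I_mat = rng.standard_normal((n + m, n + m))
I_mat = I_mat @ I_mat.T + (n + m) * np.eye(n + m)   # invertible, well-conditioned
E = rng.standard_normal((m, m))
E = E @ E.T + m * np.eye(m)                          # invertible modification

I_tilde = I_mat.copy()
I_tilde[n:, n:] += E                                 # lower-right block gets + E

Iinv = np.linalg.inv(I_mat)
I12, I21, I22 = Iinv[:n, n:], Iinv[n:, :n], Iinv[n:, n:]
update = np.vstack([I12, I22]) @ np.linalg.inv(I22 + np.linalg.inv(E)) \
         @ np.hstack([I21, I22])
rhs = Iinv - update                                  # right-hand side of (48)
```

The direct inverse of the modified matrix and the right-hand side of (48) coincide up to rounding.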

References

[1] C. Fritsche, U. Orguner, and F. Gustafsson, “On parametric lower bounds for discrete-time filtering,” in IEEE

