
Recent Results on Bayesian Cramér-Rao Bounds for Jump Markov Systems

Carsten Fritsche, Umut Orguner, Lennart Svensson and Fredrik Gustafsson

Linköping University Post Print

N.B.: When citing this work, cite the original article.

Original Publication:

Carsten Fritsche, Umut Orguner, Lennart Svensson and Fredrik Gustafsson, Recent Results on Bayesian Cramér-Rao Bounds for Jump Markov Systems, 2016, Proc. 19th International Conference on Information Fusion (FUSION), 512-520.

Copyright:

http://www.ieee.org

Postprint available at: Linköping University Electronic Press


Recent Results on Bayesian Cramér-Rao Bounds for Jump Markov Systems

Carsten Fritsche, Umut Orguner, Lennart Svensson, and Fredrik Gustafsson

Department of Electrical Engineering, Linköping University, Linköping, Sweden
Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey
Department of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden

Abstract—In this paper, recent results on the evaluation of the Bayesian Cramér-Rao bound (BCRB) for jump Markov systems are presented. In particular, previous work is extended to jump Markov systems where the discrete mode variable enters both the process and measurement equations, as well as to systems where it enters exclusively into the measurement equation. Recursive approximations with finite memory requirements are derived, and algorithms for checking the validity of these approximations are established. The tightness of the bound and the validity of its approximations are investigated on two examples.

I. INTRODUCTION

Jump Markov systems (JMSs) are nowadays widely used to model systems in various disciplines, such as target tracking [1], [2], econometrics [3] and control [4], [5], to name only a few. Compared to the nonlinear filtering framework, estimators for JMSs additionally have to estimate the discrete state (or mode) of a Markov chain that allows switching between different state-space models; various estimation algorithms have been proposed for this purpose, e.g. [2], [6]–[9].

The computation of performance bounds for JMSs has also evolved over the past few years. To date, various Bayesian Cramér-Rao bounds (BCRBs) for JMSs have been proposed that generally differ from each other in terms of tightness and computational complexity. Perhaps the least computationally complex bound for JMSs is the enumeration BCRB (EBCRB). It is derived from a bound on the mean square error (MSE) conditioned on the entire mode sequence, and an unconditional bound is generally obtained by averaging over all possible mode sequences [1], [10]. A bound that is at least as tight as the EBCRB, but significantly more computationally complex to evaluate since it relies on running particle filters, is the marginal EBCRB (M-EBCRB). The M-EBCRB is derived from the same principles as the EBCRB, but evaluates a different information matrix; see [11] for the details. A third bound that directly bounds the unconditional MSE has been presented in [12] and is hereinafter termed BCRB. This bound cannot be related in terms of tightness to the EBCRB or M-EBCRB via an inequality, as explained in [12], [13]. However, its computation is in many cases (e.g. nonlinear models or time-varying models) only slightly more complex than the computation of the EBCRB. Another bound that has been proposed in the literature is the so-called marginal BCRB (M-BCRB) [14]. It also directly bounds the unconditional MSE but, similar to the M-EBCRB, evaluates a different information matrix. It has been shown that the M-BCRB is at least as tight as the BCRB, but it is much more complex to evaluate.

In this paper, we focus on the evaluation of the BCRB proposed in [12]. In particular, this bound is useful in situations when it is tighter than the EBCRB and when computational resources are not available for evaluating the M-BCRB or M-EBCRB. The main contributions are as follows. We generalize the approach presented in [12] to the important cases where the discrete mode variable enters either exclusively into the measurement model or into both the process and measurement models; see e.g. [13], [15]–[17] and [18], [19] for application examples. For both cases, we additionally derive recursions for approximately computing the BCRB. These recursions depend on a conditional independence assumption between temporal random variables within a certain time interval, which was chosen empirically in [12]. We present recursive algorithms with complexity linear in time that can be used to specify this time interval.

The rest of the paper is organized as follows. In Section II, the system model is presented together with some definitions used in the paper. Section III gives a brief background on the BCRB, and Section IV provides the main results for computing the BCRB. The algorithms for verifying the conditional independence assumption are presented in Section V, and the simulation results are summarized in Section VI. Section VII finally concludes this work.

II. SYSTEM MODEL

Consider the discrete-time JMS described by the following process and measurement equations:

x_k = f_{k−1}(x_{k−1}, r_k, v_{k−1}),   (1a)
z_k = h_k(x_k, r_k, w_k),   (1b)

where z_k ∈ R^{n_z} is the measurement vector at discrete time instant k, x_k ∈ R^{n_x} is the state vector, and f_{k−1} and h_k are arbitrary nonlinear functions. The process and measurement noise vectors v_{k−1} ∈ R^{n_v} and w_k ∈ R^{n_w} are assumed mutually independent white processes, with noise densities p_{v,r_k}(v_{k−1}) and p_{w,r_k}(w_k) that are assumed known. The mode variable r_k denotes a discrete-time Markov chain with s states and transition probability matrix Π with elements π_{ij} = Pr{r_k = j | r_{k−1} = i}. At time instants k = 0 and k = 1, prior information is available in terms of the probability density function (pdf) p_{x_0}(x_0) and the probability mass function (pmf) π_1^i = Pr{r_1 = i}. The initial state x_0 and mode r_1 are mutually independent and also independent of w_k and v_{k−1}.
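To make the model concrete, the following minimal Python sketch simulates one trajectory from a JMS of the form (1) with a scalar state and two modes; the coefficients a, c and the noise levels are illustrative assumptions and not parameters from the paper, while Pi and pi1 mirror the Markov chain notation above:

import numpy as np

rng = np.random.default_rng(1)
s, K = 2, 25                                      # number of modes, time horizon
Pi = np.array([[0.95, 0.05], [0.05, 0.95]])       # Pi[i, j] = Pr{r_k = j | r_{k-1} = i}
pi1 = np.array([0.5, 0.5])                        # initial mode pmf Pr{r_1 = i}
a, c = (0.9, 0.5), (1.0, 0.8)                     # assumed mode-dependent f_{k-1} and h_k
x = np.zeros(K + 1); z = np.zeros(K + 1)
r = np.zeros(K + 1, dtype=int)
x[0] = rng.normal(0.0, 1.0)                       # x_0 ~ p_{x_0}(x_0)
for k in range(1, K + 1):
    r[k] = rng.choice(s, p=pi1 if k == 1 else Pi[r[k - 1]])   # Markov mode switching
    x[k] = a[r[k]] * x[k - 1] + rng.normal(0.0, 0.5)          # process equation (1a)
    z[k] = c[r[k]] * x[k] + rng.normal(0.0, 1.0)              # measurement equation (1b)

Repeating this simulation N times produces the i.i.d. trajectories that the Monte Carlo approximations in Sections III-V average over.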

In the following, let x_{0:k} = [x_0^T, . . . , x_k^T]^T and z_{1:k} = [z_1^T, . . . , z_k^T]^T denote the collection of states and measurement vectors up to time k. Furthermore, let x̂_{0:k}(z_{1:k}) = [x̂_0^T(z_{1:k}), . . . , x̂_k^T(z_{1:k})]^T denote the estimator of the state sequence, and let the sequence of mode variables at time k be given by r^i_{1:k} = (r^i_1, r^i_2, . . . , r^i_k), where i = 1, . . . , s^k. Whenever possible, and when there is no risk of ambiguity, the estimator's dependency on the measurements z_{1:k} is omitted in the following. Let us further introduce the gradient operator ∇_s = [∂/∂s_1, . . . , ∂/∂s_n]^T and the Laplace operator Δ^t_s = ∇_s ∇_t^T for any vectors s and t, and let E_{p(x)}{·} denote expectation with respect to the pdf (or pmf) p(x).

III. BACKGROUND ON BCRB

The BCRB provides a lower bound on the MSE matrix M(x̂_{0:k}(z_{1:k})) of any estimator x̂_{0:k}(z_{1:k}). Assuming that suitable regularity conditions hold [20], the BCRB for estimating the state sequence x_{0:k} is defined as the inverse of the Bayesian information matrix (BIM) J_{0:k}, bounding

M(x̂_{0:k}) ≜ E_{p(x_{0:k}, z_{1:k})}{ [x̂_{0:k}(z_{1:k}) − x_{0:k}][·]^T } ≥ J^{−1}_{0:k},   (2)

where [A][·]^T stands for [A][A]^T and where the matrix inequality A ≥ B means that the difference A − B is a positive semi-definite matrix [21]. The BIM is defined as

J_{0:k} = E_{p(x_{0:k}, z_{1:k})}{ −Δ^{x_{0:k}}_{x_{0:k}} log p(x_{0:k}, z_{1:k}) },   (3)

with dimension given by (k + 1)n_x × (k + 1)n_x.

In the following, we are interested in computing the BCRB of the MSE matrix for estimating x_k. Generally, this can be achieved by taking the (n_x × n_x) lower-right submatrix of [J_{0:k}]^{−1}, which can be expressed mathematically for k ≥ 1 as

M(x̂_k) = E_{p(x_k, z_{1:k})}{ [x̂_k(z_{1:k}) − x_k][·]^T } = U M(x̂_{0:k}) U^T ≥ U J^{−1}_{0:k} U^T ≜ J^{−1}_k,   (4)

with mapping matrix

U = [0, I_{n_x}],   (5)

where I_{n_x} is the (n_x × n_x) identity matrix and 0 is a matrix of zeros of appropriate size. The matrix J_k is denoted the filtering information matrix, whose inverse gives the BCRB for estimating x_k that we seek to derive.
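In code, relation (4) amounts to extracting the lower-right block of the inverse joint BIM. The following minimal sketch (plain numpy; J0k stands for any consistent estimate of (3), e.g. produced by Algorithm 1 in Section IV) returns the BCRB for x_k:

import numpy as np

def filtering_bcrb(J0k, nx):
    # Mapping matrix U = [0, I_nx] from (5); J0k is ((k+1)nx x (k+1)nx).
    U = np.hstack([np.zeros((nx, J0k.shape[0] - nx)), np.eye(nx)])
    # U J_{0:k}^{-1} U^T, which (4) defines as J_k^{-1}, the BCRB for x_k.
    return U @ np.linalg.inv(J0k) @ U.T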

IV. COMPUTING THE BCRB

A. Jump Markov System Models

Depending on how the mode variable r_k enters into the system equations, different JMSs result. In total, three different system models can be identified, which are summarized in Table I. When both the process and measurement model are independent of r_k, we arrive at systems without Markovian switching structure; the BCRB for this case was presented in [22]. In this paper, only algorithms for computing the BCRB for Models 1 and 2 are presented. The reader interested in the BCRB for Model 3 is referred to [12].

The approach followed in this paper for computing the BCRB is to numerically evaluate the BIM J_{0:k} of the complete state trajectory x_{0:k} using Monte Carlo methods. In many cases, the expression inside the expectation of (3) is difficult to evaluate directly, and it may then be easier to evaluate the equivalent expression

J_{0:k} = E_{p(x_{0:k}, z_{1:k})}{ [∇_{x_{0:k}} p(x_{0:k}, z_{1:k})][·]^T / p(x_{0:k}, z_{1:k})² }.   (6)

If the mode variable enters only into one of the system equations, structure inherent in the BIM can be exploited. In these cases, it is convenient to decompose the BIM as follows:

J_{0:k} = J_{x_{0:k}} + J_{z_{1:k}},   (7)

where J_{x_{0:k}} denotes the BIM of the prior and process model:

J_{x_{0:k}} = E_{p(x_{0:k})}{ [∇_{x_{0:k}} p(x_{0:k})][·]^T / p(x_{0:k})² },   (8)

and J_{z_{1:k}} denotes the BIM of the data:

J_{z_{1:k}} = E_{p(x_{0:k}, z_{1:k})}{ [∇_{x_{0:k}} p(z_{1:k}|x_{0:k})][·]^T / p(z_{1:k}|x_{0:k})² }.   (9)

In particular, different algorithms will be provided on how J_{0:k}, or J_{x_{0:k}} and J_{z_{1:k}}, can be evaluated for the different models presented in Table I. The results can then be used to compute the BCRB for the current state x_k according to (4). The general algorithmic structure for computing the BIM J_{0:k} for the different models is presented in Algorithm 1.

Algorithm 1 Computation of the BIM J_{0:k} for different JMS models

(1) At time k = 0, generate x_0^{(i)} ∼ p(x_0) for i = 1, . . . , N, and define x_{0:0}^{(i)} = x_0^{(i)}.
  – For each i, evaluate ∇_{x_{0:0}} p(x_{0:0}^{(i)}) = ∇_{x_0} p(x_0^{(i)}) and p(x_{0:0}^{(i)}) = p(x_0^{(i)}).
  – Compute the initial BIM J_0 = J_{x_0} and store the result.
(2) For k = 1, 2, . . . , do:
  – If k = 1, generate r_1^{(i)} ∼ Pr{r_1}; otherwise generate r_k^{(i)} ∼ Pr{r_k | r_{k−1}^{(i)}} for i = 1, . . . , N.
  – Compute the BIM J_{0:k}:
    ∗ Model 1: from (6) using Algorithm 2.
    ∗ Model 2: from (7) by determining J_{x_{0:k}} using Algorithm 3 and J_{z_{1:k}} using Algorithm 4.
    ∗ Model 3: from (7) by determining J_{x_{0:k}}, see (23)–(29) in [12], and J_{z_{1:k}}, see (9) in [12].

TABLE I
JUMP MARKOV SYSTEM MODELS

                                 x_k = f_{k−1}(x_{k−1}, r_k, v_{k−1})    x_k = f_{k−1}(x_{k−1}, v_{k−1})
z_k = h_k(x_k, r_k, w_k)         Model 1                                  Model 2
z_k = h_k(x_k, w_k)              Model 3, see [12]                        [22]

B. BIM Computation for Model 1

For nonlinear JMSs as given by (1), closed-form solutions for computing J_{0:k} generally do not exist. In the following, Monte Carlo integration is used to approximate (6):

J_{0:k} ≈ (1/N) Σ_{i=1}^{N} [∇_{x_{0:k}} p(x_{0:k}^{(i)}, z_{1:k}^{(i)})][·]^T / [p(x_{0:k}^{(i)}, z_{1:k}^{(i)})]²,   (10)

where x_{0:k}^{(i)} and z_{1:k}^{(i)}, i = 1, . . . , N, are independent and identically distributed (i.i.d.) vectors such that (x_{0:k}^{(i)}, z_{1:k}^{(i)}) ∼ p(x_{0:k}, z_{1:k}). We introduce the intermediate quantities p(x_{0:k}, z_{1:k}|r_k) and ∇_{x_{0:k}} p(x_{0:k}, z_{1:k}|r_k), which can be computed recursively as shown in the following lemma.

Lemma 1. For JMSs as given by Model 1 in Table I, the pdf p(x_{0:k}, z_{1:k}|r_k) and the gradient ∇_{x_{0:k}} p(x_{0:k}, z_{1:k}|r_k) can be updated recursively as follows.

If k = 1:

p(x_{0:1}, z_1|r_1) = p(z_1|x_1, r_1) p(x_1|x_0, r_1) p(x_0),   (11a)

∇_{x_{0:1}} p(x_{0:1}, z_1|r_1) = [ [∇_{x_{0:1}} p(z_1|x_1, r_1)] p(x_1|x_0, r_1) + p(z_1|x_1, r_1) [∇_{x_{0:1}} p(x_1|x_0, r_1)] ] p(x_0) + p(z_1|x_1, r_1) p(x_1|x_0, r_1) [∇_{x_{0:1}} p(x_0)].   (11b)

If k ≠ 1:

p(x_{0:k}, z_{1:k}|r_k) = p(z_k|x_k, r_k) p(x_k|x_{k−1}, r_k) Σ_{r_{k−1}} Pr{r_{k−1}|r_k} p(x_{0:k−1}, z_{1:k−1}|r_{k−1}),   (12a)

∇_{x_{0:k}} p(x_{0:k}, z_{1:k}|r_k) = [ [∇_{x_{0:k}} p(z_k|x_k, r_k)] p(x_k|x_{k−1}, r_k) + p(z_k|x_k, r_k) [∇_{x_{0:k}} p(x_k|x_{k−1}, r_k)] ] Σ_{r_{k−1}} Pr{r_{k−1}|r_k} p(x_{0:k−1}, z_{1:k−1}|r_{k−1}) + p(z_k|x_k, r_k) p(x_k|x_{k−1}, r_k) Σ_{r_{k−1}} Pr{r_{k−1}|r_k} [∇_{x_{0:k}} p(x_{0:k−1}, z_{1:k−1}|r_{k−1})],   (12b)

where Pr{r_{k−1}|r_k} = Pr{r_k|r_{k−1}} Pr{r_{k−1}} / Pr{r_k}.

Proof: Due to space limitations, the proof is provided in an accompanying technical report [23].

Consequently, the BIM J_{0:k} can be computed recursively, which is summarized in Algorithm 2.

Algorithm 2 Computation of the BIM J_{0:k} for Model 1 and k ≥ 1

(1) For i = 1, . . . , N do:
  • Generate x_k^{(i)} ∼ p(x_k | x_{k−1}^{(i)}, r_k^{(i)}) and set x_{0:k}^{(i)} = [x_{0:k−1}^{(i)}, x_k^{(i)}]. Generate z_k^{(i)} ∼ p(z_k | x_k^{(i)}, r_k^{(i)}) and set z_{1:k}^{(i)} = [z_{1:k−1}^{(i)}, z_k^{(i)}].
  • If k = 1, then evaluate the quantities p(x_{0:1}^{(i)}, z_1^{(i)}|r_1) and ∇_{x_{0:1}} p(x_{0:1}^{(i)}, z_1^{(i)}|r_1) using (11).
  • If k ≠ 1, then update the stored quantities Pr{r_{k−1}}, ∇_{x_{0:k−1}} p(x_{0:k−1}^{(i)}, z_{1:k−1}^{(i)}|r_{k−1}) and p(x_{0:k−1}^{(i)}, z_{1:k−1}^{(i)}|r_{k−1}) using (12) and

    Pr{r_k} = Σ_{r_{k−1}} Pr{r_k|r_{k−1}} Pr{r_{k−1}}.

  • Evaluate p(x_{0:k}^{(i)}, z_{1:k}^{(i)}) and ∇_{x_{0:k}} p(x_{0:k}^{(i)}, z_{1:k}^{(i)}) as follows:

    p(x_{0:k}^{(i)}, z_{1:k}^{(i)}) = Σ_{r_k} p(x_{0:k}^{(i)}, z_{1:k}^{(i)}|r_k) Pr{r_k},
    ∇_{x_{0:k}} p(x_{0:k}^{(i)}, z_{1:k}^{(i)}) = Σ_{r_k} [∇_{x_{0:k}} p(x_{0:k}^{(i)}, z_{1:k}^{(i)}|r_k)] Pr{r_k}.

(2) Evaluate the BIM J_{0:k} according to (10).
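As an illustration of Lemma 1, the following sketch runs the density recursion (11a)/(12a) along one sampled trajectory of the scalar two-mode example introduced in Section II (reusing x, z, s, K, Pi and pi1 from that sketch; the parameters below are the same illustrative assumptions). The gradient recursion (11b)/(12b) mirrors it term by term via the product rule and is omitted for brevity:

import numpy as np

def gauss(v, mu, var):
    # Scalar Gaussian density N(v; mu, var).
    return np.exp(-0.5 * (v - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

a, c, Q, R, P0 = (0.9, 0.5), (1.0, 0.8), 0.25, 1.0, 1.0
pr = pi1.copy()                                    # Pr{r_1}
joint = np.array([gauss(z[1], c[r] * x[1], R) *    # (11a): p(x_{0:1}, z_1 | r_1)
                  gauss(x[1], a[r] * x[0], Q) *
                  gauss(x[0], 0.0, P0) for r in range(s)])
for k in range(2, K + 1):
    pr_new = Pi.T @ pr                             # Pr{r_k} = sum_{r_{k-1}} Pr{r_k|r_{k-1}} Pr{r_{k-1}}
    back = Pi * pr[:, None] / pr_new[None, :]      # Pr{r_{k-1}|r_k} by Bayes' rule
    joint = np.array([gauss(z[k], c[r] * x[k], R) *
                      gauss(x[k], a[r] * x[k - 1], Q) *
                      np.dot(back[:, r], joint) for r in range(s)])   # (12a)
    pr = pr_new
p_joint = np.dot(joint, pr)                        # p(x_{0:k}, z_{1:k}), mixing over r_k

Averaging [grad p][grad p]^T / p^2 over N such trajectories then gives the Monte Carlo estimate (10).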

C. BIM Computation for Model 2

For JMSs where r_k enters exclusively into the measurement equation, structure in the BIM can be exploited by making use of (7). In fact, it is easy to verify that when the process equation satisfies

x_k = f_{k−1}(x_{k−1}, v_{k−1}),   (13)

then the BIM of the prior J_{x_{0:k}} will be independent of r_k. More specifically, for Model 2 the state vector x_k is a Markov process, i.e. p(x_k|x_{0:k−1}) = p(x_k|x_{k−1}) holds, and the BIM of the prior J_{x_{0:k}} can be computed according to the following lemma.

Lemma 2. For a mode-independent process equation as given by (13), the BIM of the prior can be computed according to

J_{x_{0:k}} = δ_{k+1}(1, 1) ⊗ J_{x_0} + Σ_{n=1}^{k} [ δ_{k+1}(n, n) ⊗ D_n^{11} + δ_{k+1}(n, n+1) ⊗ D_n^{12} + δ_{k+1}(n+1, n) ⊗ D_n^{21} + δ_{k+1}(n+1, n+1) ⊗ D_n^{22,a} ],   (14)

with

D_n^{11} = E_{p(x_{0:n})}{ −Δ^{x_{n−1}}_{x_{n−1}} log p(x_n|x_{n−1}) },   (15a)
D_n^{12} = E_{p(x_{0:n})}{ −Δ^{x_{n−1}}_{x_n} log p(x_n|x_{n−1}) } = [D_n^{21}]^T,   (15b)
D_n^{22,a} = E_{p(x_{0:n})}{ −Δ^{x_n}_{x_n} log p(x_n|x_{n−1}) },   (15c)

where ⊗ denotes the Kronecker product and δ_k(i, j) denotes a (k × k) dimensional matrix whose elements are all zero except at the i-th row and j-th column, which is one.

Proof: See technical report [23].

The expectations in (15) generally cannot be solved analytically for nonlinear models as given by (13). In this case, the expectations can be converted into a different standard form, as in (6), and then approximated using Monte Carlo integration, yielding

D_n^{11} ≈ (1/N) Σ_{i=1}^{N} [∇_{x_{n−1}} p(x_n^{(i)}|x_{n−1}^{(i)})][·]^T / [p(x_n^{(i)}|x_{n−1}^{(i)})]²,   (16a)
D_n^{12} ≈ (1/N) Σ_{i=1}^{N} [∇_{x_n} p(x_n^{(i)}|x_{n−1}^{(i)})][∇_{x_{n−1}} p(x_n^{(i)}|x_{n−1}^{(i)})]^T / [p(x_n^{(i)}|x_{n−1}^{(i)})]² = [D_n^{21}]^T,   (16b)
D_n^{22,a} ≈ (1/N) Σ_{i=1}^{N} [∇_{x_n} p(x_n^{(i)}|x_{n−1}^{(i)})][·]^T / [p(x_n^{(i)}|x_{n−1}^{(i)})]²,   (16c)

where x_{0:n}^{(i)}, i = 1, . . . , N, are i.i.d. vectors such that x_{0:n}^{(i)} ∼ p(x_{0:n}). A method for numerically approximating J_{x_{0:k}} is given in Algorithm 3.

Algorithm 3 Computation of the BIM of the prior J_{x_{0:k}} for Model 2 and k ≥ 1

(1) For i = 1, . . . , N do:
  – Generate x_k^{(i)} ∼ p(x_k | x_{k−1}^{(i)}).
  – Evaluate ∇_{x_{k−1}} p(x_k^{(i)}|x_{k−1}^{(i)}), ∇_{x_k} p(x_k^{(i)}|x_{k−1}^{(i)}) and p(x_k^{(i)}|x_{k−1}^{(i)}).
(2) Evaluate D_k^{11}, D_k^{12}, D_k^{21} and D_k^{22,a} according to (16) and store the results.
(3) Evaluate the BIM of the prior J_{x_{0:k}} according to (14).
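For a linear-Gaussian process model, the ratios inside (16) are available in closed form, which makes the Monte Carlo estimates easy to sanity-check. The sketch below uses the F and Q of the Model 2 example in Section VI (everything else is an assumption for illustration); for this model the ratios depend on the samples only through e = x_n − F x_{n−1}, and the estimates converge to D_n^{11} = F^T Q^{−1} F, D_n^{12} = −Q^{−1} F and D_n^{22,a} = Q^{−1}:

import numpy as np

rng = np.random.default_rng(0)
F = np.array([[1.0, 0.632], [0.0, 0.368]])
Q = np.diag([0.4, 0.4]); Qi = np.linalg.inv(Q)
N = 200_000
e = rng.multivariate_normal(np.zeros(2), Q, size=N)   # e = x_n - F x_{n-1} ~ N(0, Q)
# For p(x_n|x_{n-1}) = N(x_n; F x_{n-1}, Q):
#   [grad_{x_{n-1}} p]/p = F^T Q^{-1} e   and   [grad_{x_n} p]/p = -Q^{-1} e.
g_prev = e @ Qi @ F                                   # rows hold (F^T Q^{-1} e_i)^T
g_curr = -e @ Qi                                      # rows hold (-Q^{-1} e_i)^T
D11 = g_prev.T @ g_prev / N                           # (16a), tends to F^T Q^{-1} F
D12 = g_curr.T @ g_prev / N                           # (16b), tends to -Q^{-1} F
D22a = g_curr.T @ g_curr / N                          # (16c), tends to Q^{-1}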

It is easy to verify that the BIM of the prior (14) is the same as in [22]. However, we still cannot develop an equally simple recursive algorithm, because of the correlation caused by the mode r_k affecting the BIM of the data J_{z_{1:k}}. For nonlinear models as given by (1b), J_{z_{1:k}} is generally not analytically tractable. In the following, Monte Carlo integration is used to numerically approximate (9) according to

J_{z_{1:k}} ≈ (1/N) Σ_{i=1}^{N} [∇_{x_{0:k}} p(z_{1:k}^{(i)}|x_{0:k}^{(i)})][·]^T / [p(z_{1:k}^{(i)}|x_{0:k}^{(i)})]²,   (17)

where x_{0:k}^{(i)} and z_{1:k}^{(i)}, i = 1, . . . , N, are i.i.d. vectors such that (x_{0:k}^{(i)}, z_{1:k}^{(i)}) ∼ p(x_{0:k}, z_{1:k}). We introduce the intermediate quantities p(z_{1:k}, r_k|x_{0:k}) and ∇_{x_{0:k}} p(z_{1:k}, r_k|x_{0:k}), which can be computed recursively as stated in the following lemma.

Lemma 3. For JMSs as given by Model 2 in Table I, the pdf p(z_{1:k}, r_k|x_{0:k}) and the gradient ∇_{x_{0:k}} p(z_{1:k}, r_k|x_{0:k}) can be updated recursively as follows.

If k = 1:

p(z_1, r_1|x_{0:1}) = p(z_1|x_1, r_1) Pr{r_1},   (18a)
∇_{x_{0:1}} p(z_1, r_1|x_{0:1}) = [∇_{x_{0:1}} p(z_1|x_1, r_1)] Pr{r_1}.   (18b)

If k ≠ 1:

p(z_{1:k}, r_k|x_{0:k}) = p(z_k|x_k, r_k) Σ_{r_{k−1}} Pr{r_k|r_{k−1}} p(z_{1:k−1}, r_{k−1}|x_{0:k−1}),   (19a)
∇_{x_{0:k}} p(z_{1:k}, r_k|x_{0:k}) = Σ_{r_{k−1}} Pr{r_k|r_{k−1}} [ [∇_{x_{0:k}} p(z_k|x_k, r_k)] p(z_{1:k−1}, r_{k−1}|x_{0:k−1}) + p(z_k|x_k, r_k) [∇_{x_{0:k}} p(z_{1:k−1}, r_{k−1}|x_{0:k−1})] ].   (19b)

Proof: See technical report [23].

Then, the BIM of the data J_{z_{1:k}} can be computed recursively, as summarized in Algorithm 4. Note that a computationally more complex algorithm, which recursively updates p(z_{1:k}|x_{0:k}, r_k) and ∇_{x_{0:k}} p(z_{1:k}|x_{0:k}, r_k), has appeared in our previous work [24].

Algorithm 4 Computation of the BIM of the data J_{z_{1:k}} for Model 2 and k ≥ 1

(1) For i = 1, . . . , N do:
  • Generate z_k^{(i)} ∼ p(z_k | x_k^{(i)}, r_k^{(i)}) and set z_{1:k}^{(i)} = [z_{1:k−1}^{(i)}, z_k^{(i)}].
  • If k = 1, then evaluate the quantities p(z_1^{(i)}, r_1|x_{0:1}^{(i)}) and ∇_{x_{0:1}} p(z_1^{(i)}, r_1|x_{0:1}^{(i)}) using (18).
  • If k ≠ 1, update the stored quantities p(z_{1:k−1}^{(i)}, r_{k−1}|x_{0:k−1}^{(i)}) and ∇_{x_{0:k−1}} p(z_{1:k−1}^{(i)}, r_{k−1}|x_{0:k−1}^{(i)}) using (19).
  • Evaluate p(z_{1:k}^{(i)}|x_{0:k}^{(i)}) and ∇_{x_{0:k}} p(z_{1:k}^{(i)}|x_{0:k}^{(i)}) as follows:

    p(z_{1:k}^{(i)}|x_{0:k}^{(i)}) = Σ_{r_k} p(z_{1:k}^{(i)}, r_k|x_{0:k}^{(i)}),
    ∇_{x_{0:k}} p(z_{1:k}^{(i)}|x_{0:k}^{(i)}) = Σ_{r_k} ∇_{x_{0:k}} p(z_{1:k}^{(i)}, r_k|x_{0:k}^{(i)}).

(2) Evaluate the BIM of the data J_{z_{1:k}} according to (17).
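The Lemma 3 recursion is compact enough to state directly in code. The sketch below propagates (18a)/(19a) along one sampled trajectory of the scalar example from Section II; lik plays the role of the mode-dependent measurement density p(z|x, r), with the same illustrative parameters as before, and the gradient recursion (19b) again follows by the product rule:

import numpy as np

def lik(zv, xv, r):
    # Assumed measurement density p(z|x, r) = N(z; c[r] x, R).
    c, R = (1.0, 0.8), 1.0
    return np.exp(-0.5 * (zv - c[r] * xv) ** 2 / R) / np.sqrt(2.0 * np.pi * R)

q = np.array([lik(z[1], x[1], r) * pi1[r] for r in range(s)])   # (18a)
for k in range(2, K + 1):
    q = np.array([lik(z[k], x[k], r) * np.dot(Pi[:, r], q)      # (19a)
                  for r in range(s)])
p_z_given_x = q.sum()   # p(z_{1:k} | x_{0:k}) by marginalizing over r_k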

D. Recursive Computation of the BCRB

The algorithms presented so far require the computation of the matrix inverse [J_{0:k}]^{−1}, see (4). This approach eventually becomes impractical when k is large, due to its computational complexity, which is of order O(((k + 1)n_x)³). In these situations, recursive algorithms are sought that avoid inverting J_{0:k}.

1) Model 1: The recursive algorithm presented in [12] can be generalized to Model 1 as described in the following. For nonlinear JMSs, a recursive calculation of the filtering information matrix J_k is generally not possible without introducing approximations. This is because the state vector x_k is not a Markov process¹, i.e. conditionally it depends on the entire state sequence x_{0:k−1}, or equivalently

p(x_k, z_k|x_{0:k−1}, z_{1:k−1}) ≠ p(x_k, z_k|x_{k−1}, z_{1:k−1}).   (20)

¹In order to obtain a Markov process, we have to augment the state vector x_k with the discrete mode variable r_k.

Nevertheless, it can be assumed that given the measurement sequence z_{1:k−1}, the dependence between (x_k, z_k) and x_{k−l} decreases rather quickly, especially when conditioned on the state vectors of all intermediate times, x_{k−l+1:k−1}. Thus, it is reasonable to assume that there exists an integer d such that

p(x_k, z_k|x_{0:k−1}, z_{1:k−1}) ≈ p(x_k, z_k|x_{k−d:k−1}, z_{1:k−1}),   (21)

i.e. (x_k, z_k) and x_{k−l} given z_{1:k−1} are approximately independent for all l > d, when we condition on the state vectors x_{k−d:k−1}. The above assumption results in two important properties of J_{0:k}, as stated in the following lemma.

Lemma 4. Suppose that given the measurements z_{1:k−1}, the joint vector (x_k, z_k) and x_{0:k−d−1} are conditionally independent in the sense that p(x_k, z_k|x_{0:k−1}, z_{1:k−1}) = p(x_k, z_k|x_{k−d:k−1}, z_{1:k−1}). It then follows that

[J_{0:k}]_{0:k_1−d−1 × k_1:k_1} = ([J_{0:k}]_{k_1:k_1 × 0:k_1−d−1})^T = 0,   (22a)
[J_{0:k+1}]_{0:k−d × 0:k−d} = [J_{0:k}]_{0:k−d × 0:k−d},   (22b)

for any k and k_1 such that k ≥ k_1 > d.

Proof: See technical report [23].

Here we have used the following notation:

[J_{0:k}]_{t_1:t_2 × t_3:t_4} = E_{p(x_{0:k}, z_{1:k})}{ −Δ^{t_3:t_4}_{t_1:t_2} log p(x_{0:k}, z_{1:k}) },   (23)

where [J_{0:k}]_{t_1:t_2 × t_3:t_4} denotes the submatrix of J_{0:k} that contains the rows corresponding to times t_1 to t_2 and the columns corresponding to times t_3 to t_4. Note that the dimension of [J_{0:k}]_{t_1:t_2 × t_3:t_4} is n_x(t_2 − t_1 + 1) × n_x(t_4 − t_3 + 1), whereas that of J_{0:k} is n_x(k + 1) × n_x(k + 1).

The above lemma basically states that the matrix J_{0:k} becomes block tri-diagonal, a property required for developing a recursive algorithm for J_k.

Proposition 1. Suppose that the conditional independence assumption of Lemma 4 holds. Then, the (n_x × n_x) filtering information matrix J_k can be computed from the following relation:

J_k = E_k − D_k^T H_k^{−1} D_k,   (24)

with

E_k = [J_{0:k}]_{k:k × k:k},   (25a)
D_k = [J_{0:k}]_{k−d:k−1 × k:k},   (25b)

where E_k and D_k have size (n_x × n_x) and (n_x d × n_x), respectively. The (n_x d × n_x d) matrix H_k can be updated recursively according to the following relations:

H_k = [ H̃_{22}, D̃_k²; (D̃_k²)^T, Ẽ_k ] − [ H̃_{12}^T; (D̃_k¹)^T ] H̃_{11}^{−1} [ H̃_{12}, D̃_k¹ ],   (26a)

with

H̃_k = [ H̃_{11}, H̃_{12}; (H̃_{12})^T, H̃_{22} ] = { C̃_k − B_{k−1}^T A_{k−1}^{−1} B_{k−1},  k = d + 1;  H_{k−1} + C̃_k − C_{k−1},  k > d + 1 },   (26b)

and

D̃_k = [ D̃_k¹; D̃_k² ],   (26c)

where the different matrices are defined as follows:

A_k = [J_{0:k}]_{0:k−d−1 × 0:k−d−1},   (27a)
B_k = [J_{0:k}]_{0:k−d−1 × k−d:k−1},   (27b)
C_k = [J_{0:k}]_{k−d:k−1 × k−d:k−1},   (27c)
C̃_k = [J_{0:k}]_{k−d−1:k−2 × k−d−1:k−2},   (27d)
D̃_k = [J_{0:k}]_{k−d−1:k−2 × k−1:k−1},   (27e)
Ẽ_k = [J_{0:k}]_{k−1:k−1 × k−1:k−1},   (27f)

and where D̃_k¹ and H̃_{11} are of dimension (n_x × n_x), D̃_k² and H̃_{12}^T are of dimension (n_x(d − 1) × n_x), whereas H̃_{22} is an (n_x(d − 1) × n_x(d − 1)) dimensional matrix.

Proof: See technical report [23].

For the computation of J_k, it is required to compute [J_{0:k}]_{k−d−1:k × k−d−1:k} at each recursion. As this is a submatrix of J_{0:k}, it can be easily computed with the techniques introduced to compute the full matrix J_{0:k}. Thus, we only have to compute an approximation of

[J_{0:k}]_{k−d−1:k × k−d−1:k} ≈ (1/N) Σ_{i=1}^{N} [∇_{x_{k−d−1:k}} p(x_{0:k}^{(i)}, z_{1:k}^{(i)})][·]^T / [p(x_{0:k}^{(i)}, z_{1:k}^{(i)})]²,   (28)

which, compared to (10), only requires storing and updating the much shorter vector ∇_{x_{k−d−1:k}} p(x_{0:k}^{(i)}, z_{1:k}^{(i)}). Further, instead of having to invert the (n_x(k + 1) × n_x(k + 1)) matrix J_{0:k}, whose dimension grows at each time step, it is only required to invert matrices of constant size that do not exceed (n_x(d + 1) × n_x(d + 1)). The method to recursively compute the BCRB for Model 1 is summarized in Algorithm 5.

2) Model 2: For Model 2, finding a recursion for J_k also requires introducing approximations. Even though the state vector x_k for this model is a Markov process, i.e. p(x_k|x_{0:k−1}) = p(x_k|x_{k−1}) holds, this property cannot be exploited in the pdf of the current measurement given all previous states and measurements, i.e.

p(x_k, z_k|x_{0:k−1}, z_{1:k−1}) = p(z_k|x_{0:k}, z_{1:k−1}) p(x_k|x_{k−1}) ≠ p(z_k|x_k, z_{1:k−1}) p(x_k|x_{k−1}).   (29)

Algorithm 5 Recursive computation of the BCRB

(1) At time k = 0 do:
  • Compute the initial filtering information matrix J_0, see Algorithm 1, and its inverse J_0^{−1}, which gives the BCRB for estimating x_0.
(2) For k = 1, 2, . . . , d, do:
  • Compute the full BIM J_{0:k} using Algorithm 1.
  • Compute U [J_{0:k}]^{−1} U^T, which gives the BCRB for estimating x_k.
  • If k = d, then extract from J_{0:k} the submatrices A_k = [J_{0:k}]_{0:k−1 × 0:k−1} and B_k = [J_{0:k}]_{0:k−1 × k:k} and store them.
(3) For k = d + 1, d + 2, . . . , do:
  • Compute the Bayesian information submatrix [J_{0:k}]_{k−d−1:k × k−d−1:k} using Algorithm 1, but replace ∇_{x_{0:k}} by ∇_{x_{k−d−1:k}}.
  • Extract the matrices C_k, C̃_k, D_k, D̃_k, E_k and Ẽ_k, which are defined in (25) and (27), and store them.
  • Evaluate H̃_k from (26b).
  • Evaluate H_k from (26a) and store the result.
  • Evaluate J_k from (24), and compute the inverse J_k^{−1}, which gives the BCRB for estimating x_k.
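A compact way to see how Proposition 1 avoids the growing inverse is to code one k > d + 1 step of the recursion. The sketch below (plain numpy, assuming d ≥ 2 so that none of the blocks is empty) consumes the ((d+2)n_x × (d+2)n_x) submatrix from (28) together with the stored H_{k−1} and C_{k−1}, and returns J_k:

import numpy as np

def prop1_step(J_sub, nx, d, H_prev, C_prev):
    # One step (k > d+1) of (24)-(27); J_sub = [J_{0:k}]_{k-d-1:k x k-d-1:k}.
    m = d * nx
    Ctil = J_sub[:m, :m]                       # (27d)
    Dtil = J_sub[:m, m:m + nx]                 # (27e)
    Etil = J_sub[m:m + nx, m:m + nx]           # (27f)
    C = J_sub[nx:nx + m, nx:nx + m]            # (27c)
    D = J_sub[nx:nx + m, nx + m:]              # (25b)
    E = J_sub[nx + m:, nx + m:]                # (25a)
    Htil = H_prev + Ctil - C_prev              # (26b), case k > d+1
    H11, H12, H22 = Htil[:nx, :nx], Htil[:nx, nx:], Htil[nx:, nx:]
    D1, D2 = Dtil[:nx, :], Dtil[nx:, :]
    top = np.block([[H22, D2], [D2.T, Etil]])
    left = np.vstack([H12.T, D1.T])
    H = top - left @ np.linalg.solve(H11, np.hstack([H12, D1]))   # (26a)
    J_k = E - D.T @ np.linalg.solve(H, D)                         # (24)
    return J_k, H, C                           # C is stored as C_{k-1} for the next step

Only fixed-size matrices (at most (n_x(d+1) × n_x(d+1))) are handled here, in line with the complexity discussion above.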

However, we can assume that given z_{1:k−1}, the dependence between z_k and x_{k−l} decreases rather quickly, especially when conditioned on x_{k−l+1:k−1}. Thus, we can assume that there exists an integer d such that

p(z_k|x_{0:k}, z_{1:k−1}) ≈ p(z_k|x_{k−d:k}, z_{1:k−1}),   (30)

i.e. z_k and x_{k−l} given z_{1:k−1} are approximately independent for all l > d, when we condition on x_{k−d:k}.

The above assumption again results in favorable properties of J_{0:k}, as stated in the following lemma.

Lemma 5. Suppose that given the measurements z_{1:k−1}, the current measurement z_k and x_{0:k−d−1} are conditionally independent in the sense that p(z_k|x_{0:k}, z_{1:k−1}) = p(z_k|x_{k−d:k}, z_{1:k−1}). It then follows that

[J_{0:k}]_{0:k_1−d−1 × k_1:k_1} = ([J_{0:k}]_{k_1:k_1 × 0:k_1−d−1})^T = 0,   (31a)
[J_{0:k+1}]_{0:k−d × 0:k−d} = [J_{0:k}]_{0:k−d × 0:k−d},   (31b)

for any k and k_1 such that k ≥ k_1 > d.

Proof: See technical report [23].

The conditional independence assumption of Lemma 5 implies that J_{0:k} has the block tri-diagonal structure that is needed for a recursive evaluation of the filtering information matrix J_k.

Proposition 2. Suppose that the conditional independence assumption of Lemma 5 holds. Then, the (n_x × n_x) filtering information matrix J_k can be computed from the recursion presented in Proposition 1.

Proof: Since Proposition 1 requires Lemma 4 to hold, and Lemma 5 imposes the same conditions on the structure of J_{0:k} as Lemma 4, it follows that both lemmas yield the same recursion as given in Proposition 1.

Hence, the algorithm to recursively compute the BCRB for Model 2 is essentially the same as for Model 1, which is summarized in Algorithm 5.

V. ALGORITHMS FOR CONDITIONAL INDEPENDENCE ASSUMPTION VERIFICATION

In Section IV-D, approximations have been introduced that allow a recursive computation of the filtering information matrix J_k for different depths d. In the following, algorithms are presented to quantify d such that the conditional independence approximations (21) and (30) hold.

A. Model 1

In order to find a metric to quantify d, we decompose the conditional density as follows:

p(x_k, z_k|x_{0:k−1}, z_{1:k−1}) = Σ_{r_k} p(z_k|x_k, r_k) p(x_k|x_{k−1}, r_k) Pr{r_k|x_{0:k−1}, z_{1:k−1}}.   (32)

Of particular importance is the probability Pr{r_k|x_{0:k−1}, z_{1:k−1}}, which tells us how well r_k can be predicted based on the information contained in the past states x_{0:k−1} and measurements z_{1:k−1}. For the approximation introduced in (21), a similar expression can be derived, which is given by

p(x_k, z_k|x_{k−d:k−1}, z_{1:k−1}) = Σ_{r_k} p(z_k|x_k, r_k) p(x_k|x_{k−1}, r_k) Pr{r_k|x_{k−d:k−1}, z_{1:k−1}},   (33)

i.e. the two expressions differ only in their prediction probabilities. We introduce the abbreviations P(ℓ) ≜ Pr{r_k = ℓ|x_{0:k−1}, z_{1:k−1}} and Q(ℓ) ≜ Pr{r_k = ℓ|x_{k−d:k−1}, z_{1:k−1}}, and define an average Kullback-Leibler type divergence (AKLD)

D_AKL(P||Q) ≜ ∫ D_KL(P||Q) p(x_{0:k−1}, z_{1:k−1}) dx_{0:k−1} dz_{1:k−1},   (34)

with

D_KL(P||Q) = Σ_ℓ P(ℓ) log( P(ℓ)/Q(ℓ) ),   (35)

which is equal to zero when the probabilities are equal. Note that we have introduced an average divergence in order to remove the conditional dependency on (x_{0:k−1}, z_{1:k−1}).

We further introduce the average Jensen-Shannon divergence (AJSD), which is defined as

D_AJS(P||Q) = 0.5 · D_AKL(P||(P + Q)/2) + 0.5 · D_AKL(Q||(P + Q)/2).   (36)

In contrast to the AKLD, the AJSD is symmetric and bounded as 0 ≤ D_AJS(P||Q) ≤ 1, but requires that (35) is defined with the base-2 logarithm. In the following, the AJSD is used to quantify the depth d of the BCRB recursions.
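For discrete pmfs, (35) and the per-sample term inside (36) are a few lines of code. A minimal sketch with the base-2 logarithm, so that the JSD stays within [0, 1]; the helpers kld and jsd are hypothetical names introduced here for illustration:

import numpy as np

def kld(p, q):
    # Discrete KL divergence (35) in bits; assumes q > 0 wherever p > 0.
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def jsd(p, q):
    # Per-sample Jensen-Shannon divergence, the term averaged in (36)-(37).
    m = 0.5 * (p + q)
    return 0.5 * kld(p, m) + 0.5 * kld(q, m)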

The AJSD generally cannot be computed in closed form, due to the integral in the expression for the AKLD. We therefore resort to Monte Carlo integration techniques to approximate

D_AKL(P||Q) ≈ (1/N) Σ_{i=1}^{N} D_KL(P^{(i)}||Q^{(i)}),   (37)

with P^{(i)} ≜ Pr{r_k = ℓ|x_{0:k−1}^{(i)}, z_{1:k−1}^{(i)}} and Q^{(i)} defined accordingly, and where (x_{0:k−1}^{(i)}, z_{1:k−1}^{(i)}), i = 1, . . . , N, are i.i.d. vectors such that (x_{0:k−1}^{(i)}, z_{1:k−1}^{(i)}) ∼ p(x_{0:k−1}, z_{1:k−1}). For the evaluation of the AKLD, closed-form expressions for the prediction pmfs Pr{r_k|x_{0:k−1}, z_{1:k−1}} and Pr{r_k|x_{k−d:k−1}, z_{1:k−1}} are required. These probabilities can be computed recursively using the following two lemmas.

Lemma 6. The prediction pmf Pr{r_k|x_{0:k−1}, z_{1:k−1}} can be computed for n = 1, . . . , k − 1 from the following recursion:

Pr{r_{n+1}|x_{0:n}, z_{1:n}} = Σ_{r_n} [ Pr{r_{n+1}|r_n} p(z_n|x_n, r_n) p(x_n|x_{n−1}, r_n) Pr{r_n|x_{0:n−1}, z_{1:n−1}} ] / Σ_{r_{n+1}} Σ_{r_n} [ Pr{r_{n+1}|r_n} p(z_n|x_n, r_n) p(x_n|x_{n−1}, r_n) Pr{r_n|x_{0:n−1}, z_{1:n−1}} ],   (38)

which is initialized with Pr{r_1|x_0, z_{1:0}} = Pr{r_1}.

Proof: See technical report [23].

Lemma 7. The prediction pmf Pr{r_k|x_{k−d:k−1}, z_{1:k−1}} can be computed for n = k − d + 1, . . . , k − 1 from the following recursion:

Pr{r_{n+1}|x_{k−d:n}, z_{1:n}} = Σ_{r_n} [ Pr{r_{n+1}|r_n} p(z_n|x_n, r_n) p(x_n|x_{n−1}, r_n) Pr{r_n|x_{k−d:n−1}, z_{1:n−1}} ] / Σ_{r_{n+1}} Σ_{r_n} [ Pr{r_{n+1}|r_n} p(z_n|x_n, r_n) p(x_n|x_{n−1}, r_n) Pr{r_n|x_{k−d:n−1}, z_{1:n−1}} ],   (39)

which is initialized with Pr{r_{k−d+1}|x_{k−d}, z_{1:k−d}}.

Proof: See technical report [23].

The only unknown in the latter recursion is the initial pmf Pr{r_{k−d+1}|x_{k−d}, z_{1:k−d}}, which is obtained by integrating out the past states x_{0:k−d−1} from Pr{r_{k−d+1}|x_{0:k−d}, z_{1:k−d}}. For nonlinear JMSs, closed-form expressions for Pr{r_{k−d+1}|x_{k−d}, z_{1:k−d}} generally do not exist. However, we can rewrite

Pr{r_{k−d+1}|x_{k−d}, z_{1:k−d}} ∝ Σ_{r_{k−d}} Σ_{r_{k−d−1}} Pr{r_{k−d+1}|r_{k−d}} Pr{r_{k−d}|r_{k−d−1}} p(z_{k−d}|x_{k−d}, r_{k−d}) ∫ p(x_{k−d}|x_{k−d−1}, r_{k−d}) p(x_{k−d−1}, r_{k−d−1}|z_{1:k−d−1}) dx_{k−d−1}   (40)

and approximate p(x_{k−d−1}, r_{k−d−1}|z_{1:k−d−1}) using Rao-Blackwellized particle filters (RBPFs); for details, see [23].
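A sketch of the Lemma 6 recursion along one sampled trajectory (reusing x, z, s, K, Pi, pi1 and the lik helper from the earlier sketches, plus an assumed mode-dependent transition density trans) reads:

import numpy as np

def trans(xv, xprev, r):
    # Assumed transition density p(x|x_prev, r) = N(x; a[r] x_prev, Q).
    a, Q = (0.9, 0.5), 0.25
    return np.exp(-0.5 * (xv - a[r] * xprev) ** 2 / Q) / np.sqrt(2.0 * np.pi * Q)

k = K                                 # evaluate the prediction pmf at the final time
pred = pi1.copy()                     # Pr{r_1 | x_0, z_{1:0}} = Pr{r_1}
for n in range(1, k):                 # recursion (38)
    w = np.array([lik(z[n], x[n], r) * trans(x[n], x[n - 1], r) * pred[r]
                  for r in range(s)])
    pred = Pi.T @ w                   # numerator of (38) for each value of r_{n+1}
    pred /= pred.sum()                # the denominator normalizes to a pmf

The result is P = Pr{r_k = ℓ|x_{0:k−1}, z_{1:k−1}}; running the same loop over n = k − d + 1, . . . , k − 1 with the Lemma 7 initialization gives Q, and (37) averages D_KL(P||Q) over N independent trajectories.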

B. Model 2

Similarly to Model 1, we can decompose the conditional densities p(z_k|x_{0:k}, z_{1:k−1}) and p(z_k|x_{k−d:k}, z_{1:k−1}) to obtain expressions depending on Pr{r_k|x_{0:k−1}, z_{1:k−1}} and Pr{r_k|x_{k−d:k−1}, z_{1:k−1}}. Hence, we can use the AJSD to quantify the depth d. The prediction probabilities can be computed from Lemma 6 and Lemma 7, with the exception that we have to replace p(x_n|x_{n−1}, r_n) with p(x_n|x_{n−1}). Since p(x_n|x_{n−1}) appears in both numerator and denominator and is independent of r_n, the density p(x_n|x_{n−1}) cancels out, and the expressions simplify accordingly; see [23] for further details.

VI. SIMULATION RESULTS

In order to be able to compare the bounds to the best achievable performance (i.e. the optimal filter), we consider jump Markov linear Gaussian systems given by

x_k = F x_{k−1} + v_k(r_k),   (41a)
z_k = H(r_k) x_k + w_k(r_k),   (41b)

with mapping matrices F and H(r_k), process noise distributed according to v_k(r_k) ∼ N(µ_v(r_k), Q(r_k)) and measurement noise distributed according to w_k(r_k) ∼ N(µ_w(r_k), R). For each model (i.e. Model 1 and Model 2) we investigate one example, and assume that for both examples the discrete mode r_k evolves according to a 2-state time-homogeneous Markov chain with initial mode probabilities π_1¹ = π_1² = 0.5 and transition probability matrix Π with elements π_{11} = π_{22} = 0.95. We further assume that the initial state for both examples is zero-mean Gaussian distributed, x_0 ∼ N(0, P_{0|0}), with covariance matrix P_{0|0} = diag([0.5, 0.5]).

We compare the following bounds and filter performances: 1) the optimal filter (in the MSE sense) [6], [25]; 2) the interacting multiple model Kalman filter (IMM-KF) [2], [7]; 3) the M-BCRB using an RBPF with optimal importance density and N_p = 50 particles [14]; 4) the enumeration BCRB (EBCRB) [1], [10]; and 5) the BCRB computed from Algorithm 1 (BCRB (non-recursive)) and Algorithm 5 (BCRB (recursive)). We perform in total N = 50,000 Monte Carlo runs (100,000 for Model 2) and compute the root mean square error (RMSE) according to

RMSE_k = sqrt( (1/N) Σ_{i=1}^{N} (x_{1,k}^{(i)} − x̂_{1,k}^{(i)})² + (x_{2,k}^{(i)} − x̂_{2,k}^{(i)})² ),   (42)

with true state x_k = [x_{1,k}, x_{2,k}]^T and estimated state x̂_k = [x̂_{1,k}, x̂_{2,k}]^T. Accordingly, every bound is computed by taking the square root of the trace of the corresponding (2 × 2) BCRB matrix. We further compute the AJSD using an RBPF with optimal importance density [26] and N_p = 50 particles from N = 10,000 Monte Carlo runs (even though 1,000 runs already yielded acceptable results).

Model 1) We assume the following mapping matrix

F = [1 0.632; 0 0.368],   (43)

and process noise with mean vectors µ_v(1) = [0, −0.1]^T and µ_v(2) = [0, 0.1]^T and covariance matrices Q(1) = diag([0.5, 1]) and Q(2) = diag([0.35, 0.35]). The measurement model is defined by the mode-dependent mapping matrices H(1) = diag([1, 1]) and H(2) = diag([0.8, 0.5]), and measurement noise with mean vectors µ_w(1) = [0, 0]^T and µ_w(2) = [0.2, 0.5]^T and covariance matrix R = diag([1, 1]).

Fig. 1. Simulation results for the two examples. RMSE vs. time step k for the different bounds and algorithms is shown in (a) for Model 1 and in (d) for Model 2. A comparison of BCRB approximations with zoomed-in RMSE scale is shown in (b) for Model 1 and in (e) for Model 2. The average Jensen-Shannon divergence (logarithmic scale) vs. time step k for different recursion depths d is shown in (c) for Model 1 and in (f) for Model 2.

Model 2) We assume a mode-independent process model (41a), with mapping matrix F defined as in (43), process noise with mean vector µ_v = 0 and covariance matrix Q = diag([0.4, 0.4]). The measurement model is mode-dependent with mapping matrices

H(1) = [1 −0.2; 0 0.5],   H(2) = [0.8 0; 0 0.5],   (44)

and measurement noise with mean vectors µ_w(1) = [0, 0]^T and µ_w(2) = [0, 0.25]^T, and covariance matrix given by R = diag([1, 1]).

The simulation results for both examples are summarized in Fig. 1. For Model 1, we can observe that the different bounds fail to predict the performance of the optimal filter, see Fig. 1(a), i.e. all bounds are rather loose. Here, the M-BCRB is the tightest bound, followed by the BCRB and the EBCRB. It is worth noting that tightness relations between the M-BCRB and BCRB have been established [13], [14], i.e. the M-BCRB is at least as tight as the BCRB. Such tightness relations cannot be established in general between the EBCRB and the M-BCRB or BCRB, i.e. there are certain problem instances where the EBCRB is tighter than the M-BCRB and/or the BCRB, whereas for other problem instances the reverse is true. This depends on the informativeness of the model, as was explained in [12], [13]. Even though the M-BCRB is the tightest bound in this example, its computation requires running an RBPF for each Monte Carlo trial, which is much more expensive than the computation of the BCRB and EBCRB using Monte Carlo integration. In Fig. 1(b), the recursive BCRB approximations for different depths d using Algorithm 5 are compared to the BCRB obtained from Algorithm 1 (note the different RMSE scale compared to Fig. 1(a)). It can be seen that even choosing d = 2 yields a fairly good approximation of the BCRB. However, differences to the BCRB are clearly visible. Increasing the depth d yields better approximations of the BCRB, with smaller differences, which is also reflected in the AJSD shown in Fig. 1(c). Note that the AJSD curves for k < d + 1 are not shown, since in this case the prediction pmf and its approximation are equivalent, i.e. AJSD = 0. AJSD values of 10^{−2} seem to be insufficient to obtain an excellent approximation of the original BCRB. However, the more we increase d, the better this approximation becomes, and an AJSD value smaller than 3 · 10^{−4} (or d = 10) seems to be an appropriate choice for this example. In order to obtain a better understanding of the AJSD, consider a true probability p and an approximation q which is 1% smaller in probability than p (e.g. p = 0.45 and q = 0.44); then the JSD averaged over all possible p is JSD_avg ≈ 1.94 · 10^{−4}. Similarly, if we assume that q is 0.1% smaller in probability than p (e.g. p = 0.45 and q = 0.449), then JSD_avg ≈ 1.7 · 10^{−6}.
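These orders of magnitude are easy to reproduce numerically. Using the jsd helper sketched in Section V, the following lines average the binary JSD over a uniform grid of p, with Q obtained by shifting probability mass by delta (the exact averaging used in the paper is not stated, so the grid below is an assumption); they return values of order 10^{−4} for delta = 0.01 and of order 10^{−6} for delta = 0.001:

import numpy as np

ps = np.linspace(0.05, 0.95, 901)
for delta in (0.01, 0.001):
    vals = [jsd(np.array([p, 1.0 - p]), np.array([p - delta, 1.0 - p + delta]))
            for p in ps]
    print(delta, np.mean(vals))   # average binary JSD for each probability shift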

Model 2) Also in this example, the different bounds turn out to be rather loose, since they are relatively far away from the optimal filter performance as time increases, see Fig. 1(d). As in the previous example, the M-BCRB is the tightest bound in this setting, followed by the BCRB and EBCRB. For the recursive BCRB approximations with different depths d, as shown in Fig. 1(e), conclusions similar to those for Model 1 can be drawn. The AJSD values for this example are different from those of Model 1, and one should rather choose a larger depth d to obtain an excellent approximation of the BCRB, as depicted in Fig. 1(f). It is worth noting that, compared to Model 1, the AJSD curves start at k = d + 2, which is a result of the fact that in Model 2 the transition pdf p(x_k|x_{k−1}) is independent of the mode variable r_k, so that the corresponding pmf and its approximation for k = d + 1 are equivalent. Note that the AJSD is one indicator to assess the quality of the bound approximations. By simulating many other examples, we found that an AJSD value smaller than 10^{−6} generally yields very good approximations with almost no differences to the original BCRB. In many other examples, values smaller than 10^{−4} were already sufficient to obtain excellent approximations, but then other factors, such as the shape of the mixture density (32), i.e. whether the mixture components overlap or not, play an important role. As a final remark, we want to stress that the true benefit of the proposed approach is to adaptively change the depth d depending on the AJSD value as we run the algorithm for the bound computations.

VII. CONCLUSION

In this paper, we have developed algorithms to compute the BCRB for a wide class of jump Markov systems. Our work extends previous algorithms to models where the discrete mode enters the measurement model. We have presented recursive algorithms to compute the desired bound for both the general case where the discrete mode also enters the motion model and the special case where it does not. The calculation of the BCRB involves a design parameter that determines an independence approximation affecting the depth of the recursion, and we also provide a strategy for how to select this parameter. Simulations indicate that the recursive approximation is generally very accurate even for small depths and that the BCRB may provide a suitable trade-off between tractability and tightness compared to other bounds that have appeared in the literature.

ACKNOWLEDGMENT

This work was supported in part by the Excellence Center at Linköping and Lund in Information Technology (ELLIIT).

REFERENCES

[1] B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications. Boston, MA, USA: Artech House, 2004.
[2] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation. New York, NY, USA: Wiley-Interscience, 2001.
[3] S. Chib and M. Dueker, "Non-Markovian regime switching with endogenous states and time-varying state strengths," Econometric Society, Econometric Society 2004 North American Summer Meetings 600, Aug. 2004.
[4] F. Gustafsson, Adaptive Filtering and Change Detection. New York, NY, USA: John Wiley & Sons, 2000.
[5] O. L. V. Costa, M. D. Fragoso, and R. P. Marques, Discrete-Time Markov Jump Linear Systems, ser. Probability and Its Applications, J. Gani, C. C. Heyde, P. Jagers, and T. G. Kurtz, Eds. London, UK: Springer-Verlag, 2005.
[6] G. A. Ackerson and K. S. Fu, "On state estimation in switching environments," IEEE Trans. Autom. Control, vol. 15, no. 1, pp. 10-17, 1970.
[7] H. A. P. Blom and Y. Bar-Shalom, "The interacting multiple model algorithm for systems with Markovian switching coefficients," IEEE Trans. Autom. Control, vol. 33, no. 8, pp. 780-783, 1988.
[8] S. McGinnity and G. W. Irwin, "Multiple model bootstrap filter for maneuvering target tracking," IEEE Trans. Aerosp. Electron. Syst., vol. 36, no. 3, pp. 1006-1012, 2000.
[9] E. Özkan, F. Lindsten, C. Fritsche, and F. Gustafsson, "Recursive maximum likelihood identification of jump Markov nonlinear systems," IEEE Trans. Signal Process., vol. 63, no. 3, pp. 754-765, Mar. 2015.
[10] A. Bessell, B. Ristic, A. Farina, X. Wang, and M. S. Arulampalam, "Error performance bounds for tracking a manoeuvring target," in Proc. of the International Conference on Information Fusion, vol. 1, Cairns, Queensland, Australia, Jul. 2003, pp. 903-910.
[11] C. Fritsche, U. Orguner, L. Svensson, and F. Gustafsson, "The marginal enumeration Bayesian Cramér-Rao bound for jump Markov systems," IEEE Signal Process. Lett., vol. 21, no. 4, pp. 464-468, Apr. 2014.
[12] L. Svensson, "On the Bayesian Cramér-Rao bound for Markovian switching systems," IEEE Trans. Signal Process., vol. 58, no. 9, pp. 4507-4516, Sept. 2010.
[13] C. Fritsche and F. Gustafsson, "Bounds on the optimal performance for jump Markov linear Gaussian systems," IEEE Trans. Signal Process., vol. 61, no. 1, pp. 92-98, Jan. 2013.
[14] ——, "The marginal Bayesian Cramér-Rao bound for jump Markov systems," IEEE Signal Process. Lett., vol. 23, no. 5, pp. 575-579, May 2016.
[15] F. Caron, M. Davy, E. Duflos, and P. Vanheeghe, "Particle filtering for multisensor data fusion with switching observation models: Application to land vehicle positioning," IEEE Trans. Signal Process., vol. 55, no. 6, pp. 2703-2719, Jun. 2007.
[16] M. Nicoli, C. Morelli, and V. Rampa, "A jump Markov particle filter for localization of moving terminals in multipath indoor scenarios," IEEE Trans. Signal Process., vol. 56, no. 8, pp. 3801-3809, Aug. 2008.
[17] N. Viandier, D. F. Nahimana, J. Marais, and E. Duflos, "GNSS performance enhancement in urban environment based on pseudo-range error model," in Position, Location and Navigation Symposium, 2008 IEEE/ION, Monterey, CA, USA, May 2008, pp. 377-382.
[18] R. J. Elliott, F. Dufour, and W. P. Malcolm, "State and mode estimation for discrete-time jump Markov systems," SIAM Journal on Control and Optimization, vol. 44, no. 3, pp. 1081-1104, Mar. 2005.
[19] L. Blackmore, S. Rajamanoharan, and B. C. Williams, "Active estimation for jump Markov linear systems," IEEE Trans. Autom. Control, vol. 53, no. 10, pp. 2223-2236, Nov. 2008.
[20] R. D. Gill and B. Y. Levit, "Applications of the van Trees inequality: a Bayesian Cramér-Rao bound," Bernoulli, vol. 1, no. 1/2, pp. 59-79, 1995.
[21] H. L. van Trees, Detection, Estimation and Modulation Theory, Part I. New York, NY, USA: John Wiley & Sons, 1968.
[22] P. Tichavský, C. H. Muravchik, and A. Nehorai, "Posterior Cramér-Rao bounds for discrete-time nonlinear filtering," IEEE Trans. Signal Process., vol. 46, no. 5, pp. 1386-1396, May 1998.
[23] C. Fritsche and U. Orguner, "Supplementary material for 'Recent results on Bayesian Cramér-Rao bounds for jump Markov systems'," Linköping University, Linköping, Sweden, Tech. Rep. LiTH-ISY-R-3089, Feb. 2016. [Online]. Available: http://www.control.isy.liu.se/publications/
[24] C. Fritsche, F. Gustafsson, and A. Klein, "Bayesian Cramér-Rao bound for mobile terminal tracking in mixed LOS/NLOS environments," IEEE Wireless Comm. Lett., vol. 2, no. 3, pp. 335-338, Mar. 2013.
[25] F. Gustafsson, Statistical Sensor Fusion. Lund, Sweden: Studentlitteratur AB, 2010.
[26] A. Doucet, N. J. Gordon, and V. Krishnamurthy, "Particle filters for state estimation of jump Markov linear systems," IEEE Trans. Signal Process., vol. 49, no. 3, pp. 613-624, 2001.
