

Recursive state estimation of nonlinear systems

with applications to integrated navigation

Per-Johan Nordlund

Department of Electrical Engineering

Linköping University, SE-581 83 Linköping, Sweden

WWW:

http://www.control.isy.liu.se

Email:

perno@isy.liu.se

November 29, 2000

REGLERTEKNIK

AUTOMATIC CONTROL

LINKÖPING

Report no.: LiTH-ISY-R-2321

Technical reports from the Automatic Control group in Linköping are available by anonymous ftp at the address ftp.control.isy.liu.se. This report is contained in the file 2321.pdf.


Abstract

Today, accurate navigation systems often consist of several separate navigation systems that are integrated to provide a more accurate and reliable navigation solution. Integrated navigation is a nonlinear and non-Gaussian system integration problem. Sequential Monte Carlo methods (particle filters) handle recursive state estimation of arbitrary systems. A direct application of the particle filter shows that a vast number of particles is needed for the filter to work well.

Another method is developed in this report using the typical structure of an integrated navigation system. This method takes advantage of the parts of the state equations that are linear and Gaussian and estimates these parts using the Kalman filter. The nonlinear and non-Gaussian parts are estimated using the particle filter. Based on Bayes' rule the solutions from the two filters are fused together. This two-filter approach is applied to the same system, and the results show that far fewer particles are needed to achieve the same performance as the stand-alone particle filter.


Contents

1 Introduction

2 Problem formulation

3 Particle filtering

4 Combined particle and Kalman filtering

5 Simulations

6 Conclusions

List of Figures

1 An example of an integrated navigation system.

2 The number of particles found in the marked area divided by the total number of particles represents the discrete approximation $\hat{P}(dx)$ of $P(x \in dx) \approx p(x)\,dx$.

3 The terrain profile used by TAN during the simulations.

4 RMSE of $\hat{x}^{\text{MMS}}_{t|t}$ and $P^{\text{MMS}}_{t|t}$ provided by the particle filter based on 100 Monte Carlo simulations (2 runs removed because of divergence). Number of particles = 30000.

5 RMSE of $\hat{x}^{\text{MMS}}_{t|t}$ and $P^{\text{MMS}}_{t|t}$ provided by the combined particle and Kalman filtering method based on 100 Monte Carlo simulations. Number of particles = 1000.

List of Tables

1 The particle filter algorithm.

2 The combined Kalman and particle filter algorithm.

3 Percentage of diverged runs and mean RMSE of the estimated states for the particle filter.

4 Percentage of diverged runs and mean RMSE of the estimated states for the combined Kalman and particle filter.

1

Introduction

So far, system integration has been based on the extended Kalman filter (EKF) [1]. Unfortunately, due to nonlinearities and non-Gaussian noise, the EKF often provides a solution that is far from optimal. Another possible technique for integration is to use sequential Monte Carlo methods, or particle filters [9], [7]. Particle filters provide a numerical solution to the general nonlinear and/or non-Gaussian filtering problem, where linearisations and/or Gaussian approximations are intractable or would yield too low performance.

An example of nonlinear and non-Gaussian system integration is integrated navigation. The primary source of navigational data is often the Inertial Navigation System (INS). The quantities calculated by the INS are the vehicle's attitude, heading, velocity, position and altitude. In integrated navigation, one usually tries to estimate the errors in the quantities calculated by the INS. The main supporting component for estimating the INS errors is terrain navigation, which is highly nonlinear and non-Gaussian. In case of a failure in the supporting component one still wants a solution which is as accurate as possible (graceful degradation). This means that one needs to estimate the errors of the whole chain of quantities calculated by the INS, for the INS to be able to provide accurate stand-alone data. Until now, this problem has been tackled in a suboptimal manner using the EKF.

Unfortunately, the dimension of the state vector representing the INS errors is quite high (≥ 9). Applying the particle filter directly on this high dimensional problem would require too many particles. One way of dealing with this is to solve as much as possible analytically, taking advantage of the parts of the system that are linear and Gaussian. The remaining part of reduced dimension can then be solved using particle filtering.

The main purpose of this report is to propose a modified particle filter that requires far fewer particles while retaining the accuracy. This filter is then applied to a simplified integrated navigation system.

2

Problem formulation

The primary source of navigational data is often an INS (Inertial Navigation System), due to its good dynamical properties and high reliability. Based on measurements of acceleration and angular velocity in three dimensions, the INS calculates the vehicle’s attitude, heading, velocity, position and altitude. This is a dead-reckoning system which utilizes an accurate nonlinear motion model. However, due to sensor errors the errors in the calculated quantities grow with time.

In integrated navigation one tries to obtain better accuracy, usually by estimating the INS errors and compensating for them. The estimation is performed using different supporting systems like the Global Positioning System (GPS) and Terrain Aided Navigation (TAN).

GPS is not further mentioned in this report, but for an example of integration of INS and GPS see [12]. Terrain Aided Navigation is used during the simulations described in Section 5, and a short description of it is therefore justified. Based on measurements of height over the terrain and altitude over mean sea-level, TAN obtains a measured terrain elevation. The height over the


terrain is provided by the Radar Altimeter (RA) and the altitude over mean sea-level comes from the Air Data Computer (ADC). A Geographical Information System (GIS) stores the terrain elevation over a certain area as a function of the position, i.e. TAN obtains a corresponding terrain elevation from the GIS, where the position comes from the INS. If no errors were present the two terrain elevations would be exactly the same. There are of course errors present, particularly an INS position error, but given the terrain elevation information it is possible to estimate a probability density function (pdf) for the position. For more information on terrain navigation see e.g. [4].
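As an illustration of the GIS lookup and the resulting position likelihood, the sketch below uses a made-up one-dimensional terrain profile and Gaussian measurement noise; the names (`terrain_elevation`, `tan_likelihood`) and the profile itself are assumptions of this sketch, not part of the report.

```python
import numpy as np

# Hypothetical terrain elevation map, stored as samples on a uniform grid
# (stand-in for the GIS database; profile shape and spacing are made up).
grid = np.linspace(0.0, 15000.0, 1501)           # position (m)
terrain = 100.0 + 40.0 * np.sin(grid / 800.0)    # elevation (m)

def terrain_elevation(pos):
    """Look up terrain elevation at `pos` by linear interpolation (the GIS role)."""
    return np.interp(pos, grid, terrain)

def tan_likelihood(y, pos_candidates, sigma_e=6.0):
    """p(y | position): the measured terrain elevation y (altitude from the ADC
    minus height over terrain from the RA) is compared with the map, assuming
    Gaussian measurement noise with standard deviation sigma_e."""
    residual = y - terrain_elevation(pos_candidates)
    return np.exp(-0.5 * (residual / sigma_e) ** 2) / (np.sqrt(2 * np.pi) * sigma_e)

# The likelihood peaks where the map matches the measurement:
true_pos = 5000.0
y = terrain_elevation(true_pos)            # noise-free measurement, for illustration
candidates = np.array([3000.0, 5000.0, 7000.0])
lik = tan_likelihood(y, candidates)
```

Note that over rough terrain the likelihood is typically multimodal in position, which is exactly why TAN calls for a nonlinear, non-Gaussian filter.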

A graphical representation of what an integrated navigation system might look like is shown in Figure 1.

[Figure 1 block diagram: INS, TAN (supported by RA, ADC and GIS), GPS, an INS error model and other sensor error models, all feeding a fusion filter.]

Figure 1: An example of an integrated navigation system.

The aim in this report is not to provide a solution to the entire integrated navigation problem, but to propose a method on how to integrate a more or less linear and Gaussian motion model with a nonlinear and/or non-Gaussian component such as TAN. The proposed method is compared with a standard particle filter.

Consider the triple integrator

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = x_3, \qquad \dot{x}_3 = v, \tag{1}$$

where $x_1$, $x_2$ and $x_3$ represent the position, velocity and acceleration error in one dimension and $v$ is a time-continuous white noise. The discrete-time version of this is (see e.g. [10] for details on discretization)

$$x_{t+1} = \begin{pmatrix} 1 & T & \tfrac{T^2}{2} \\ 0 & 1 & T \\ 0 & 0 & 1 \end{pmatrix} x_t + \begin{pmatrix} \tfrac{T^3}{6} \\ \tfrac{T^2}{2} \\ T \end{pmatrix} w_t, \tag{2}$$


where T is the sampling period. The system (2) can be looked upon as a very simple INS error model in one dimension. For more information on inertial navigation theory see e.g. [13]. The question here is how to estimate xt in an optimal manner using nonlinear and/or non-Gaussian measurements according to

$$y_t = h(x^1_t) + e_t, \tag{3}$$

where $h$ is a known nonlinear function and $e_t$ a white noise sequence with a known but arbitrary pdf.

The system according to (2) and (3) is used during the simulations described in Section 5. The purpose is to compare the particle filter, described in Section 3, with the filter developed in Section 4.
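To make the setup concrete, here is a minimal sketch that simulates the discrete-time model (2) together with a measurement of the form (3); the nonlinearity `h` below is a terrain-like stand-in and the noise levels are assumptions of this sketch.

```python
import numpy as np

T = 1.0  # sampling period

# Discrete-time triple integrator (2): exact discretization of (1).
F = np.array([[1.0, T, T**2 / 2],
              [0.0, 1.0, T],
              [0.0, 0.0, 1.0]])
G = np.array([T**3 / 6, T**2 / 2, T])

def simulate(n_steps, sigma_w=1e-4, sigma_e=6.0, seed=0):
    """Simulate (2) and (3); h is an illustrative stand-in for the
    nonlinear measurement function, not the report's terrain map."""
    rng = np.random.default_rng(seed)
    h = lambda x1: 100.0 + 40.0 * np.sin(x1 / 800.0)   # assumed nonlinearity
    x = np.zeros(3)
    xs, ys = [], []
    for _ in range(n_steps):
        xs.append(x)
        ys.append(h(x[0]) + sigma_e * rng.standard_normal())
        x = F @ x + G * sigma_w * rng.standard_normal()
    return np.array(xs), np.array(ys)

states, meas = simulate(100)
```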

3

Particle filtering

Consider the state space model

$$x_{t+1} = f(x_t) + w_t,$$
$$y_t = h(x_t) + e_t, \tag{4}$$

where the process and measurement noises, $w_t \in \mathbb{R}^{n_w}$ and $e_t \in \mathbb{R}^{n_e}$ respectively, are assumed independent, and their probability density functions, $p(w_t) = p(x_t|x_{t-1})$ and $p(e_t) = p(y_t|x_t)$, are assumed known for every time $t$. Furthermore, the state of the system, $x_t \in \mathbb{R}^{n_x}$, is considered to be a Markov process,

$$p(x_{0:t}) = p(x_0) \prod_{k=1}^{t} p(x_k|x_{k-1}), \tag{5}$$

where the prior distribution for the state at $t = 0$ is given by $p(x_0)$. The observations, $y_t \in \mathbb{R}^{n_y}$, are conditionally independent given the states,

where the prior distribution for the state at t = 0 is given by p(x0). The observations, yt∈ Rny, are conditionally independent given the states,

$$p(y_{0:t}|x_{0:t}) = \prod_{k=0}^{t} p(y_k|x_k). \tag{6}$$

In the linear and Gaussian case the posterior filtering density p(xt|y0:t) is Gaussian and therefore totally described by its expected value and covariance. In the general case this is not true, which means that one has to calculate the whole pdf. The aim here is to calculate the filtering pdf p(xt|y0:t), one of the marginals to the total posterior density p(x0:t|y0:t), recursively as more and more observations become available. Using Bayes’ rule, p(x0:t|y0:t) can be computed recursively according to

$$p(x_{0:t}|y_{0:t}) = \frac{p(y_t|x_t)\, p(x_t|x_{t-1})}{p(y_t|y_{0:t-1})}\, p(x_{0:t-1}|y_{0:t-1}). \tag{7}$$

Unfortunately, it is very seldom that a closed-form expression exists for $p(x_t|y_{0:t})$, not to mention $p(x_{0:t}|y_{0:t})$. One of these rare cases is the linear and Gaussian case. This implies that some kind of approximation is necessary. The principle


for Monte Carlo methods is to approximate the posterior pdf by a discrete version utilizing a large number of samples. The expression for this approximation is

$$\hat{P}(dx_{0:t}|y_{0:t}) = \frac{1}{N} \sum_{i=1}^{N} \delta_{x^i_{0:t}}(dx_{0:t}), \tag{8}$$

where $\delta_x$ is the Dirac delta function located at $x$. The set of random samples, $\{x^i_{0:t}\}_{i=1}^N$, are drawn randomly from the posterior pdf $p(x_{0:t}|y_{0:t})$. A large concentration of samples in a particular area corresponds to a pdf having a large support in that area. The one-dimensional case of (8) is depicted in Figure 2. From this

[Figure 2: histogram-style illustration of a pdf $p(x)$ approximated by particles over a marked interval $dx$.]

Figure 2: The number of particles found in the marked area divided by the total number of particles represents the discrete approximation $\hat{P}(dx)$ of $P(x \in dx) \approx p(x)\,dx$.

set of samples one can easily estimate any moment according to

$$\hat{E}[g(x_{0:t})] = \int g(x_{0:t})\, \hat{P}(dx_{0:t}|y_{0:t}) = \frac{1}{N} \sum_{i=1}^{N} g(x^i_{0:t}). \tag{9}$$

Based on the law of large numbers one can show that

$$\hat{E}[g(x_{0:t})] \xrightarrow{\text{a.s.}} E[g(x_{0:t})] = \int g(x_{0:t})\, p(x_{0:t}|y_{0:t})\, dx_{0:t} \tag{10}$$

as $N \to \infty$ (a.s. stands for almost sure convergence). For information on almost sure convergence and probability in general see e.g. [11].
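The sample-average approximation (9) and its convergence (10) can be illustrated with a pdf we can actually sample from; here a Gaussian stands in for the posterior, and $g(x) = x^2$ is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw N samples from a known pdf (a Gaussian stands in for the posterior)
# and approximate E[g(x)] by the sample average, as in (9).
N = 200_000
samples = rng.normal(loc=2.0, scale=1.0, size=N)

g = lambda x: x**2
estimate = g(samples).mean()        # (1/N) * sum_i g(x^i)
exact = 2.0**2 + 1.0**2             # E[x^2] = mean^2 + variance for a Gaussian
```

With $N = 200{,}000$ samples the Monte Carlo error is of the order $1/\sqrt{N}$, i.e. well below $0.05$ here.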

Here, what we are trying to estimate is the posterior pdf $p(x_{0:t}|y_{0:t})$. In other words, we do not have access to an analytical expression of this pdf from which we can sample. To overcome this problem one uses a known pdf, $q(x_{0:t}|y_{0:t})$, written recursively using Bayes' rule as

$$q(x_{0:t}|y_{0:t}) = q(x_t|x_{0:t-1}, y_{0:t})\, q(x_{0:t-1}|y_{0:t-1}). \tag{11}$$

Note that on the right hand side of (11) we write $q(x_{0:t-1}|y_{0:t-1})$ instead of $q(x_{0:t-1}|y_{0:t})$. This is a restriction on $q(x_{0:t}|y_{0:t})$ saying that the state at time $t-1$ and backwards is independent of measurements at time $t$ and onwards. This restriction allows us to draw $\{x^i_t\}_{i=1}^N$ from $q(x_t|x_{0:t-1}, y_{0:t})$ and then just add this new set of samples to the previous set, $\{x^i_{0:t}\}_{i=1}^N = \{x^i_{0:t-1}, x^i_t\}_{i=1}^N$, without adjusting $\{x^i_{0:t-1}\}_{i=1}^N$.


Together with the so-called importance weights,

$$w_t = \frac{p(x_{0:t}|y_{0:t})}{q(x_{0:t}|y_{0:t})} = \frac{p(y_t|x_t)\, p(x_t|x_{t-1})}{q(x_t|x_{0:t-1}, y_{0:t})}\, w_{t-1}, \tag{12}$$

we obtain the posterior pdf approximation

$$\hat{P}(dx_{0:t}|y_{0:t}) = \frac{\sum_{i=1}^{N} w^i_t\, \delta_{x^i_{0:t}}(dx_{0:t})}{\sum_{i=1}^{N} w^i_t}. \tag{13}$$

This gives, using (9), that any moment can be approximated by

$$\hat{E}[g(x_{0:t})] = \frac{\sum_{i=1}^{N} w^i_t\, g(x^i_{0:t})}{\sum_{i=1}^{N} w^i_t}. \tag{14}$$

Again, the law of large numbers states that

$$\hat{E}[g(x_{0:t})] \xrightarrow{\text{a.s.}} E[g(x_{0:t})] = \frac{E_{q(x_{0:t}|y_{0:t})}[w_t\, g(x_{0:t})]}{E_{q(x_{0:t}|y_{0:t})}[w_t]} \tag{15}$$

as $N \to \infty$.

A simple choice of pdf to draw from is the prior, i.e.

$$q(x_t|x_{0:t-1}, y_{0:t}) = p(x_t|x_{t-1}). \tag{16}$$

The corresponding weights are then updated, using (12), according to

$$w_t = p(y_t|x_t)\, w_{t-1}. \tag{17}$$

To avoid degeneracy of the filter, i.e. most of the weights tending to zero and thereby making the approximation of the posterior pdf worse, one has to resample among the particles at regular intervals. See also [14] for more aspects on resampling. On the other hand, to make the algorithm less sensitive to outliers, resampling should not be performed too often.
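The prior-proposal weight update (17) and multinomial resampling with replacement can be sketched as below; the toy particles and the Gaussian likelihood in the usage lines are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def update_weights(w_prev, likelihoods):
    """Weight update (17) for the prior proposal (16): w_t ∝ p(y_t|x_t) w_{t-1},
    followed by normalization."""
    w = likelihoods * w_prev
    return w / w.sum()

def resample(particles, w):
    """Multinomial resampling with replacement; weights are reset to 1/N."""
    N = len(w)
    idx = rng.choice(N, size=N, p=w)
    return particles[idx], np.full(N, 1.0 / N)

# Toy usage: an observation near 1 favours the particles close to 1.
particles = np.array([-1.0, -0.5, 0.5, 1.0])
w0 = np.full(4, 0.25)
lik = np.exp(-0.5 * (1.0 - particles) ** 2)   # Gaussian likelihood around y = 1
w1 = update_weights(w0, lik)
particles2, w2 = resample(particles, w1)
```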

The minimum mean square (MMS) estimate of $x_t$ and its covariance are estimated, using (14), according to

$$\hat{x}^{\text{MMS}}_{t|t} = E_{p(x_t|y_{0:t})}[x_t] \approx \sum_{i=1}^{N} \tilde{w}^i_t\, x^i_t, \tag{18}$$

and

$$P^{\text{MMS}}_{t|t} = E_{p(x_t|y_{0:t})}\big[(x_t - E_{p(x_t|y_{0:t})}[x_t])(x_t - E_{p(x_t|y_{0:t})}[x_t])^T\big] \approx \sum_{i=1}^{N} \tilde{w}^i_t\, (x^i_t - \hat{x}^{\text{MMS}}_{t|t})(x^i_t - \hat{x}^{\text{MMS}}_{t|t})^T. \tag{19}$$

In (18) and (19),

$$\tilde{w}^i_t = \frac{w^i_t}{\sum_{j=1}^{N} w^j_t}, \tag{20}$$

where $w^i_t$ is calculated according to (12).

The algorithm is summarized in Table 1. For more information on sequential Monte Carlo methods see e.g. [3] and [7].


1. Generate $\{x^i_0\}_{i=1}^N$ from $p(x_0)$, and set $w^i_0 = \frac{1}{N}$, $i = 1, \ldots, N$.

2. Update the weights and normalize:
$$w^i_t = \frac{p(y_t|x^i_t)\, w^i_{t-1}}{\sum_{j=1}^{N} p(y_t|x^j_t)\, w^j_{t-1}}, \quad i = 1, \ldots, N.$$

3. At every $k$th iteration, resample with replacement: draw $\{x^j_t\}_{j=1}^N$ from $\{x^i_t\}_{i=1}^N$ with $P(x^j_t = x^i_t) = w^i_t$, and set $w^i_t = \frac{1}{N}$, $i = 1, \ldots, N$.

4. Predict the particles: $x^i_{t+1} \sim p(x_{t+1}|x^i_t)$, $i = 1, \ldots, N$.

Table 1: The particle filter algorithm.
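The steps of Table 1 can be sketched in Python as follows. The model functions are supplied by the caller; the scalar random walk in the usage example is an assumption of this sketch, not the report's navigation model.

```python
import numpy as np

def particle_filter(ys, f, sample_w, loglik, x0_sampler, N=1000, k=5, seed=0):
    """Bootstrap particle filter following Table 1 (a sketch).
    f: deterministic part of the state update, sample_w: draws N process-noise
    samples, loglik(y, x): log p(y|x) per particle, x0_sampler: initial particles."""
    rng = np.random.default_rng(seed)
    x = x0_sampler(rng, N)                  # step 1: initial particles
    w = np.full(N, 1.0 / N)                 #         with uniform weights
    estimates = []
    for t, y in enumerate(ys):
        w = w * np.exp(loglik(y, x))        # step 2: weight update ...
        w = w / w.sum()                     #         ... and normalization
        estimates.append((w[:, None] * x).sum(axis=0))   # MMS estimate, cf. (18)
        if (t + 1) % k == 0:                # step 3: resample every k-th iteration
            idx = rng.choice(N, size=N, p=w)
            x, w = x[idx], np.full(N, 1.0 / N)
        x = f(x) + sample_w(rng, N)         # step 4: predict through the dynamics
    return np.array(estimates)

# Toy usage: scalar random walk observed in Gaussian noise (illustrative only).
rng0 = np.random.default_rng(42)
truth = np.cumsum(0.1 * rng0.standard_normal(50))
ys = truth + 0.5 * rng0.standard_normal(50)
est = particle_filter(
    ys,
    f=lambda x: x,
    sample_w=lambda rng, N: 0.1 * rng.standard_normal((N, 1)),
    loglik=lambda y, x: -0.5 * ((y - x[:, 0]) / 0.5) ** 2,
    x0_sampler=lambda rng, N: rng.standard_normal((N, 1)),
)
```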

4

Combined particle and Kalman filtering

A special case of (4) is when the system can be partitioned into a linear and a nonlinear part according to

$$x^1_{t+1} = f(x^1_t) + A^1_t x^2_t + B^1_t w^1_t$$
$$x^2_{t+1} = A^2_t x^2_t + B^2_t w^2_t \tag{21}$$
$$y_t = h(x^1_t) + e_t.$$

(Note that the system defined by (1) and (2) is a special case of (21) where f is a linear function.)

Assume that the properties which apply to (4) also apply to (21). Furthermore, the pdf for the process noise is Gaussian with zero mean and known covariance, i.e.

$$w_t = \begin{pmatrix} w^1_t \\ w^2_t \end{pmatrix} \in N\left(0, \begin{pmatrix} Q^1_t & S_t \\ S_t^T & Q^2_t \end{pmatrix}\right). \tag{22}$$

Also, the pdf for $x^2_0$ is Gaussian, $N(\hat{x}^2_{0|-1}, P^2_{0|-1})$. The pdfs for $e_t$ and $x^1_0$ are known but arbitrary.

The aim here is, as for the general case in (4), to compute the filtering posterior pdf

$$p(x_t|y_{0:t}) = p(x^1_t, x^2_t|y_{0:t}). \tag{23}$$

To do this, consider instead the pdf $p(x^1_{0:t}, x^2_t|y_{0:t})$. Using Bayes' rule this pdf can be factorized into two parts according to

$$p(x^1_{0:t}, x^2_t|y_{0:t}) = p(x^2_t|x^1_{0:t}, y_{0:t})\, p(x^1_{0:t}|y_{0:t}). \tag{24}$$

The first pdf on the right hand side of (24) can be rewritten as

$$p(x^2_t|x^1_{0:t}, y_{0:t}) = p(x^2_t|x^1_{0:t}), \tag{25}$$

due to the fact that the measurement, $y_t$, is conditionally independent of $x^2_t$ given $x^1_t$. Consider now the system

$$z_t = A^1_t x^2_t + B^1_t w^1_t$$
$$x^2_{t+1} = A^2_t x^2_t + B^2_t w^2_t, \tag{26}$$

where $z_t = x^1_{t+1} - f(x^1_t)$. The system (26) is linear and Gaussian, i.e. the optimal solution is provided by the Kalman filter. Since $z_{0:t}$ provides the same information as $x^1_{0:t+1}$ as far as $x^2_t$ is concerned, the pdf in (25) is Gaussian,

$$x^2_t|x^1_{0:t} \in N(\hat{x}^2_{t|t-1}, P^2_{t|t-1}), \tag{27}$$

where the estimates $\hat{x}^2_{t|t-1}$ and $P^2_{t|t-1}$ are given by the Kalman filter equations (assuming uncorrelated $w^1_t$ and $w^2_t$)

$$K_t = P^2_{t|t-1}(A^1_t)^T \big(A^1_t P^2_{t|t-1}(A^1_t)^T + B^1_t Q^1_t (B^1_t)^T\big)^{-1}$$
$$\hat{x}^2_{t+1|t} = A^2_t\big(\hat{x}^2_{t|t-1} + K_t(z_t - A^1_t \hat{x}^2_{t|t-1})\big) \tag{28}$$
$$P^2_{t+1|t} = A^2_t\big(P^2_{t|t-1} - K_t A^1_t P^2_{t|t-1}\big)(A^2_t)^T + B^2_t Q^2_t (B^2_t)^T.$$

The second pdf on the right hand side of (24) can be rewritten recursively (using Bayes' rule iteratively) according to

$$p(x^1_{0:t}|y_{0:t}) = \frac{p(y_t, x^1_{0:t}|y_{0:t-1})}{p(y_t|y_{0:t-1})} = \frac{p(y_t|x^1_{0:t}, y_{0:t-1})\, p(x^1_{0:t}|y_{0:t-1})}{p(y_t|y_{0:t-1})} = \frac{p(y_t|x^1_t)\, p(x^1_t|x^1_{0:t-1})}{p(y_t|y_{0:t-1})}\, p(x^1_{0:t-1}|y_{0:t-1}), \tag{29}$$

where the last equality uses that $y_t$ depends on $x^1_{0:t}$ and $y_{0:t-1}$ only through $x^1_t$, and that $x^1_t$ depends on $x^1_{0:t-1}$ and $y_{0:t-1}$ only through $x^1_{0:t-1}$.

One can apply particle filtering to this part. Let the importance weights be represented by the likelihood, $p(y_t|x^1_t)$. The particles will then be sampled according to $p(x^1_t|x^1_{0:t-1})$. Using the state equation for $x^1_t$ according to (21) we get that

$$x^1_{t+1}|x^1_{0:t} = A^1_t\, x^2_t|x^1_{0:t} + f(x^1_t) + B^1_t w^1_t, \tag{30}$$

due to the fact that $x^1_t$ is given and $w^1_t$ is independent of $x^1_{0:t}$. This gives that $p(x^1_t|x^1_{0:t-1})$ is also Gaussian,

$$x^1_t|x^1_{0:t-1} \in N\big(f(x^1_{t-1}) + A^1_{t-1}\hat{x}^2_{t-1|t-2},\; A^1_{t-1} P^2_{t-1|t-2}(A^1_{t-1})^T + B^1_{t-1} Q^1_{t-1}(B^1_{t-1})^T\big). \tag{31}$$

To deal with correlated process noise, $S_t \neq 0$ in (22), the state equations according to (21) have to be adjusted slightly. Replace $w^2_t$ with

$$\bar{w}^2_t = w^2_t - S_t^T (Q^1_t)^{-1} w^1_t. \tag{32}$$


This new process noise is uncorrelated with $w^1_t$,

$$E\left[\begin{pmatrix} w^1_t \\ \bar{w}^2_t \end{pmatrix}\begin{pmatrix} w^1_t \\ \bar{w}^2_t \end{pmatrix}^T\right] = \begin{pmatrix} Q^1_t & 0 \\ 0 & Q^2_t - S_t^T (Q^1_t)^{-1} S_t \end{pmatrix}. \tag{33}$$

In (21), the new state equation for $x^2_t$ will be

$$x^2_{t+1} = (A^2_t - C_t A^1_t)\, x^2_t + B^2_t \bar{w}^2_t + C_t\big(x^1_{t+1} - f(x^1_t)\big), \tag{34}$$

where

$$C_t = B^2_t S_t^T (Q^1_t)^{-1} \big((B^1_t)^T B^1_t\big)^{-1} (B^1_t)^T. \tag{35}$$

Note that in our case, i.e. with a system defined by (2), the two process noises are the same ($w^1_t = w^2_t$). This gives the expressions for $\bar{w}^2_t$ and $C_t$ using (32) and (35):

$$\bar{w}^2_t = 0 \tag{36}$$
$$C_t = B^2_t \big((B^1_t)^T B^1_t\big)^{-1} (B^1_t)^T. \tag{37}$$

For a more detailed description of how to deal with correlated noise see e.g. [10]. The MMS estimate of $x^1_t$ and its covariance are calculated according to (18) and (19) respectively. The MMS estimate of $x^2_t$ is given by

$$\hat{x}^{2,\text{MMS}}_{t|t} = E_{p(x^2_t|y_{0:t})}[x^2_t] = \int x^2_t\, p(x^2_t|y_{0:t})\, dx^2_t = \int x^2_t \left(\int p(x^1_{0:t}, x^2_t|y_{0:t})\, dx^1_{0:t}\right) dx^2_t$$
$$= \int p(x^1_{0:t}|y_{0:t}) \left(\int x^2_t\, p(x^2_t|x^1_{0:t})\, dx^2_t\right) dx^1_{0:t} = \int E_{p(x^2_t|x^1_{0:t})}[x^2_t]\, p(x^1_{0:t}|y_{0:t})\, dx^1_{0:t} \approx \sum_{i=1}^{N} \tilde{w}^i_t\, E_{p(x^2_t|x^{1,i}_{0:t})}[x^2_t]. \tag{38}$$

The corresponding estimate of the covariance is given by

$$P^{2,\text{MMS}}_{t|t} = E_{p(x^2_t|y_{0:t})}\big[(x^2_t - E_{p(x^2_t|y_{0:t})}[x^2_t])^2\big] \approx \sum_{i=1}^{N} \tilde{w}^i_t\, E_{p(x^2_t|x^{1,i}_{0:t})}\Big[\Big(x^2_t - \sum_{i=1}^{N} \tilde{w}^i_t\, E_{p(x^2_t|x^{1,i}_{0:t})}[x^2_t]\Big)^2\Big]. \tag{39}$$

Using the quantities given by the Kalman filters,

$$\hat{x}^{2,i}_{t|t-1} = E_{p(x^2_t|x^{1,i}_{0:t})}[x^2_t], \tag{40}$$
$$P^{2,i}_{t|t-1} = E_{p(x^2_t|x^{1,i}_{0:t})}\big[(x^2_t - E_{p(x^2_t|x^{1,i}_{0:t})}[x^2_t])^2\big], \tag{41}$$


(38) and (39) can be rewritten according to

$$\hat{x}^{2,\text{MMS}}_{t|t} \approx \sum_{i=1}^{N} \tilde{w}^i_t\, \hat{x}^{2,i}_{t|t-1}$$
$$P^{2,\text{MMS}}_{t|t} \approx \sum_{i=1}^{N} \tilde{w}^i_t \big(P^{2,i}_{t|t-1} + (\hat{x}^{2,i}_{t|t-1} - \hat{x}^{2,\text{MMS}}_{t|t-1})^2\big) = P^2_{t|t-1} + \sum_{i=1}^{N} \tilde{w}^i_t\, (\hat{x}^{2,i}_{t|t-1} - \hat{x}^{2,\text{MMS}}_{t|t-1})^2, \tag{42}$$

where $\tilde{w}^i_t$ is calculated according to (20).

The algorithm is summarized in Table 2. Note that the covariances $P^{2,i}_{t|t-1}$ provided by the Kalman filters are all the same (which is used in the last step of (42) as well). It is therefore sufficient to update $P^{2,i}_{t|t-1} = P^2_{t|t-1}$ once per iteration.

The technique of partitioning the state vector as in (24) is sometimes referred to as Rao-Blackwellisation; for other applications see e.g. [8], [5], [2] and [6].
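The noise decorrelation in (34)-(37) can be checked numerically for the model (2): with $w^1_t = w^2_t$ the transformed dynamics (34) must reproduce the original $x^2$ equation exactly. The partition below ($x^1$ = position error, $x^2$ = velocity and acceleration errors) is the one used later in Section 5.

```python
import numpy as np

T = 1.0
# Partition of (2): x1 = position error, x2 = (velocity, acceleration) errors.
A1 = np.array([[T, T**2 / 2]])            # couples x2 into the x1 equation
A2 = np.array([[1.0, T], [0.0, 1.0]])
B1 = np.array([[T**3 / 6]])
B2 = np.array([[T**2 / 2], [T]])

# With w1 = w2 we have S_t = Q1 = Q2, so (35) reduces to (37):
Ct = B2 @ np.linalg.inv(B1.T @ B1) @ B1.T

# Check (34): (A2 - Ct A1) x2 + Ct (x1_{t+1} - f(x1_t)) must equal A2 x2 + B2 w,
# i.e. the substitution leaves the x2 dynamics unchanged (f is linear for (2)).
x2 = np.array([[0.3], [-0.1]])
w = 0.05
dx1 = A1 @ x2 + B1 * w                    # x1_{t+1} - f(x1_t) for model (2)
lhs = (A2 - Ct @ A1) @ x2 + Ct @ dx1
rhs = A2 @ x2 + B2 * w
```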

5

Simulations

The system according to (2) has been simulated with $w_t \in N(0, 0.0001^2)$. The model of the system used during the simulations was slightly different from the true system. The reason for this was to be able to deal with divergence problems otherwise encountered with the particle filter. More specifically, the model was set to

$$x_{t+1} = \begin{pmatrix} 1 & T & \tfrac{T^2}{2} \\ 0 & 1 & T \\ 0 & 0 & 1 \end{pmatrix} x_t + \begin{pmatrix} T & \tfrac{T^3}{6} \\ 0 & \tfrac{T^2}{2} \\ 0 & T \end{pmatrix} w_t \tag{43}$$

with $T = 1$ and

$$w_t \in N\left(0, \begin{pmatrix} 2^2 & 0 \\ 0 & 0.0001^2 \end{pmatrix}\right), \tag{44}$$

i.e. an additional noise component enters the position error state. This is necessary for the particle filter not to diverge without using too many particles. A more detailed explanation of the problems and how to handle them is found below.

The initial value and initial covariance of the estimated state vector were set to

$$\hat{x}_{0|-1} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \qquad P_{0|-1} = \begin{pmatrix} 500^2 & 0 & 0 \\ 0 & 2^2 & 0 \\ 0 & 0 & 0.02^2 \end{pmatrix}. \tag{45}$$

The true initial value of the state vector was drawn randomly from $N(0, P_{0|-1})$. Terrain Aided Navigation (TAN) was used as the supporting system, with a terrain profile depicted in Figure 3. Adding the simulated errors to a constant


1. Generate $\{x^{1,i}_0\}_{i=1}^N$ from $p(x^1_0)$. Initialize the Kalman filters with $\{\hat{x}^{2,i}_{0|-1}\}_{i=1}^N$ and $P^2_{0|-1}$.

2. Compute the weights and normalize:
$$w^i_t = \frac{p(y_t|x^{1,i}_t)\, w^i_{t-1}}{\sum_{j=1}^{N} p(y_t|x^{1,j}_t)\, w^j_{t-1}}, \quad i = 1, \ldots, N.$$

3. At every $k$th iteration, resample with replacement: draw $\{x^{1,j}_t\}_{j=1}^N$ from $\{x^{1,i}_t\}_{i=1}^N$ with $P(x^{1,j}_t = x^{1,i}_t) = w^i_t$.

4. Predict the particles:
$$x^{1,i}_{t+1} \sim N\big(f(x^{1,i}_t) + A^1_t \hat{x}^{2,i}_{t|t-1},\; A^1_t P^2_{t|t-1}(A^1_t)^T + B^1_t Q^1_t (B^1_t)^T\big), \quad i = 1, \ldots, N.$$

5. Update the Kalman filters:
$$K_t = P^2_{t|t-1}(A^1_t)^T\big(A^1_t P^2_{t|t-1}(A^1_t)^T + B^1_t Q^1_t (B^1_t)^T\big)^{-1}$$
$$\hat{x}^{2,i}_{t|t} = \hat{x}^{2,i}_{t|t-1} + K_t\big(x^{1,i}_{t+1} - f(x^{1,i}_t) - A^1_t \hat{x}^{2,i}_{t|t-1}\big), \quad i = 1, \ldots, N$$
$$P^2_{t|t} = P^2_{t|t-1} - K_t A^1_t P^2_{t|t-1}.$$

6. Predict the Kalman filters:
$$\hat{x}^{2,i}_{t+1|t} = (A^2_t - C_t A^1_t)\, \hat{x}^{2,i}_{t|t} + C_t\big(x^{1,i}_{t+1} - f(x^{1,i}_t)\big), \quad i = 1, \ldots, N$$
$$P^2_{t+1|t} = (A^2_t - C_t A^1_t)\, P^2_{t|t}\, (A^2_t - C_t A^1_t)^T + B^2_t \bar{Q}^2_t (B^2_t)^T,$$
where
$$\bar{Q}^2_t = Q^2_t - S_t^T (Q^1_t)^{-1} S_t$$
$$C_t = B^2_t S_t^T (Q^1_t)^{-1}\big((B^1_t)^T B^1_t\big)^{-1} (B^1_t)^T.$$

Table 2: The combined Kalman and particle filter algorithm.

reference speed of 50 m/s provides the absolute position needed for TAN to, in its turn, provide a likelihood. Furthermore, to compute the weights of the different particles the estimated errors are fed back to the TAN. This gives the measurement equation

$$y_t = h(50t + x^1_t - \hat{x}^1_{t|t}) + e_t, \tag{46}$$

where the measurement noise was set to

$$e_t \in N(0, 6^2). \tag{47}$$

First the particle filter is applied to the entire state vector, using the algorithm according to Table 1 with $k = 5$. Using 30000 particles, the RMSE, based on 100 Monte Carlo runs, of the estimated position, velocity and acceleration errors together with their estimated covariances is shown in Figure 4. Note that 2 of the runs diverged and were therefore removed before the RMSE was calculated. To demonstrate the filtering method developed in Section 4 we partitioned


[Figure 3: plot of terrain elevation (m) versus position (km) for the profile used during the simulations.]

Figure 3: The terrain profile used by TAN during the simulations.

the state vector into two parts,

$$x^1_t = x_{1,t}, \qquad x^2_t = \begin{pmatrix} x_{2,t} \\ x_{3,t} \end{pmatrix}, \tag{48}$$

and applied the particle filter to $x^1_t$ and the Kalman filter to $x^2_t$. See Table 2 for details on the algorithm. Resampling was performed every 5th iteration. The number of particles used was 1000. The RMSE, again based on 100 Monte Carlo runs, of the estimated position, velocity and acceleration errors together with their estimated covariances is shown in Figure 5.
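One iteration of the Table 2 recursion, specialized to this partitioned model, might be sketched as follows. It assumes $w^1_t = w^2_t$ as in the report's model (so $\bar{Q}^2_t = 0$ and $C_t$ is given by (37)); the terrain function and all numbers in the usage lines are illustrative assumptions of this sketch.

```python
import numpy as np

def rbpf_step(x1, xhat2, P2, w, y, params, rng, resample=False):
    """One iteration of Table 2 (sketch). x1: (N,1) particles for x1,
    xhat2: (N,2) per-particle Kalman means, P2: shared covariance,
    w: normalized weights. params bundles f, h, A1, A2, B1, B2, Q1, sigma_e."""
    f, h, A1, A2, B1, B2, Q1, sig_e = params
    Ct = B2 @ np.linalg.inv(B1.T @ B1) @ B1.T            # (37)

    # Step 2: weight update with a TAN-style Gaussian likelihood p(y|x1).
    w = w * np.exp(-0.5 * ((y - h(x1[:, 0])) / sig_e) ** 2)
    w = w / w.sum()

    # Step 3: optional resampling with replacement.
    if resample:
        idx = rng.choice(len(w), size=len(w), p=w)
        x1, xhat2 = x1[idx], xhat2[idx]
        w = np.full(len(w), 1.0 / len(w))

    # Step 4: predict particles from N(f(x1) + A1 x2hat, A1 P2 A1' + B1 Q1 B1').
    mean = f(x1) + xhat2 @ A1.T
    cov = A1 @ P2 @ A1.T + B1 @ Q1 @ B1.T
    x1_new = mean + rng.standard_normal(mean.shape) @ np.linalg.cholesky(cov).T

    # Step 5: Kalman measurement update with z = x1_{t+1} - f(x1_t).
    K = P2 @ A1.T @ np.linalg.inv(cov)
    z = x1_new - f(x1)
    xhat2 = xhat2 + (z - xhat2 @ A1.T) @ K.T
    P2 = P2 - K @ A1 @ P2

    # Step 6: Kalman prediction (Q2bar = 0 here since w1 = w2).
    xhat2 = xhat2 @ (A2 - Ct @ A1).T + z @ Ct.T
    P2 = (A2 - Ct @ A1) @ P2 @ (A2 - Ct @ A1).T
    return x1_new, xhat2, P2, w

# Illustrative usage with the partition (48) of model (2).
rng = np.random.default_rng(0)
T = 1.0
params = (lambda x1: x1,                                  # f(x1) = x1 for (2)
          lambda p: 100.0 + 40.0 * np.sin(p / 800.0),     # assumed terrain map
          np.array([[T, T**2 / 2]]), np.array([[1.0, T], [0.0, 1.0]]),
          np.array([[T**3 / 6]]), np.array([[T**2 / 2], [T]]),
          np.array([[0.0001**2]]), 6.0)
N = 500
x1 = rng.normal(0.0, 500.0, size=(N, 1))
xhat2 = np.zeros((N, 2))
P2 = np.diag([2.0**2, 0.02**2])
w = np.full(N, 1.0 / N)
x1, xhat2, P2, w = rbpf_step(x1, xhat2, P2, w, y=100.0, params=params, rng=rng)
```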

Tables 3 and 4 give the performance of the two filters as a function of the number of particles. More specifically, the tables show how many runs out of a total of 100 simulations diverged. Furthermore, they give the mean RMSE of the three estimated states after a burn-in time set to 20, 60 and 120 s for the position, velocity and acceleration respectively.

Number of particles                          5000     10000    20000    30000
Diverged (%)                                    8         6        2        2
Mean RMSE of x^1_{t|t} (t_burn = 20 s)         13        12       11       11
Mean RMSE of x^2_{t|t} (t_burn = 60 s)       0.48      0.46     0.45     0.42
Mean RMSE of x^3_{t|t} (t_burn = 120 s)    0.0042    0.0040   0.0038   0.0035

Table 3: Percentage of diverged runs and mean RMSE of the estimated states for the particle filter.

One clearly sees that it takes a lot of particles for the particle filter to work well. There are two main reasons for this. First, if the process noise has a


Number of particles                           125       250      500     1000
Diverged (%)                                    6         1        1        0
Mean RMSE of x^1_{t|t} (t_burn = 20 s)         17        12       11       11
Mean RMSE of x^2_{t|t} (t_burn = 60 s)       0.63      0.45     0.40     0.39
Mean RMSE of x^3_{t|t} (t_burn = 120 s)    0.0064    0.0045   0.0035   0.0035

Table 4: Percentage of diverged runs and mean RMSE of the estimated states for the combined Kalman and particle filter.

small variance. Second, if the dimension of the state vector is high, especially when the dimension of the observation vector is low relative to the dimension of the state vector.

If for some reason the cloud of particles ends up slightly beside the true state, the only way for the cloud to move back is via the process noise spreading the particles. Together with resampling, the cloud hopefully finds its way back to again cover the true state. This is of course under the condition that the cloud has not ended up so far off that the likelihood $p(y_t|x_t)$ is almost flat (or, even worse, equal to zero, as could be the case when $p(y_t|x_t)$ is a uniform distribution). If this is the case the risk of divergence is high. Of course one can always increase the process noise to make the filter more robust, but the price for this is worse accuracy.

The reason for the cloud to move away from the true state in the first place could for example be outliers. Note that the effect of outliers can be reduced by resampling only every $k$th iteration. Another reason is when the states not directly visible in the measurements are off track. This causes the particles to be spread unnecessarily much, leaving only a few samples near the true state. A peaked likelihood together with a resampling step then leaves a cloud concentrated in small areas, all more or less off track. This phenomenon is clearly more pronounced during the first iterations of the filter, and a large process noise initially is therefore important. Note that the combined Kalman and particle filter deals with this nicely. The particles are spread according to (see Table 2)

$$\{x^{1,i}_{t+1}\}_{i=1}^N \in N(m, C),$$

where $C$ depends on the uncertainty in the estimate of $x^2_t$, which is large initially but then decreases gradually.

6

Conclusions

In this report, we have proposed a modified particle filter based on a combination of a standard particle filter and the Kalman filter. This filter has been compared to a standard stand-alone particle filter.

Simulations show that, when the dimension of the state vector is high, one has to use a large number of particles for the particle filter to work well. One can argue about what is meant by a large number of particles, which of course depends on what computational load is acceptable for the problem at hand. Note that the numbers of particles used during the simulations could surely be decreased by optimising the filter parameters, and this applies


to both filters. What is important is that it is possible to reduce the number of particles significantly by using the combined Kalman and particle filter.

The great advantage of the particle filter is its enormous flexibility. One can apply it to practically anything within a statistical framework. On the other hand, the performance as a function of the number of particles can be quite bad. If possible, it is probably better to linearise as much as possible, solve the linearised parts analytically, and save the particle filter for the really severe nonlinear and/or non-Gaussian parts.

References

[1] B.D.O. Anderson and J.B. Moore. Optimal Filtering. Prentice Hall, En-glewood Cliffs, N.J. USA, 1979.

[2] C. Andrieu and A. Doucet. Optimal estimation of amplitude and phase modulated signals. Technical Report CUED/F-INFENG/TR395, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, 2000.

[3] C. Andrieu, A. Doucet, and W.J. Fitzgerald. An introduction to Monte Carlo methods for Bayesian data analysis.

[4] N. Bergman. Recursive Bayesian Estimation, Navigation and Tracking Applications. PhD thesis 579, Department of Electrical Engineering, Linköping University, Linköping, Sweden, 1999.

[5] A. Doucet and C. Andrieu. Particle filtering for partially observed Gaussian state space models. Technical Report CUED/F-INFENG/TR393, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, September 2000.

[6] A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Proceedings of the 35th IEEE Conference on Decision and Control, volume 3, pages 3145–3150, 1996.

[7] A. Doucet, S.J. Godsill, and C. Andrieu. On sequential simulation-based methods for Bayesian filtering. Statistics and Computing, 10(3):197–208, 2000.

[8] A. Doucet, N.J. Gordon, and V. Krishnamurthy. Particle filters for state estimation of jump Markov linear systems. Technical Report CUED/F-INFENG/TR359, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, 1999.

[9] N.J. Gordon, D.J. Salmond, and A.F.M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In IEE Proceedings on Radar and Signal Processing, volume 140, pages 107–113, 1993.

[10] F. Gustafsson. Adaptive Filtering and Change Detection. John Wiley & Sons Inc., 1st edition, 2000.


[12] B.W. Leach. A Kalman filter integrated navigation design for the IAR twin otter atmospheric research aircraft. Technical Report NRC No. 32148, Institute for Aerospace Research, National Research Council Canada, April 1991.

[13] P.G. Savage. Strapdown Inertial Navigation - Lecture Notes. Strapdown Associates Inc., Plymouth, MN, USA, February 1990.

[14] A.F.M. Smith and A.E. Gelfand. Bayesian statistics without tears: A sampling-resampling perspective. The American Statistician, 46(2), 1992.


[Figure 4: three panels over 0-300 s showing the RMSE of $x_{1,t}$ and $P_{1,t}$ (m), $x_{2,t}$ and $P_{2,t}$ (m/s), and $x_{3,t}$ and $P_{3,t}$ (m/s²), solid and dashed respectively.]

Figure 4: RMSE of $\hat{x}^{\text{MMS}}_{t|t}$ and $P^{\text{MMS}}_{t|t}$ provided by the particle filter based on 100 Monte Carlo simulations (2 runs removed because of divergence). Number of particles = 30000.


[Figure 5: three panels over 0-300 s showing the RMSE of $x_{1,t}$ and $P_{1,t}$ (m), $x_{2,t}$ and $P_{2,t}$ (m/s), and $x_{3,t}$ and $P_{3,t}$ (m/s²), solid and dashed respectively.]

Figure 5: RMSE of $\hat{x}^{\text{MMS}}_{t|t}$ and $P^{\text{MMS}}_{t|t}$ provided by the combined particle and Kalman filtering method based on 100 Monte Carlo simulations. Number of particles = 1000.
