# Inference in nonlinear state-space models using Particle Gibbs with Ancestor Sampling


[Figure: particle filter illustration, alternating weighting and resampling steps; particle state trajectories plotted over time.]

### 2. Sample x*_{1:T} with P(x*_{1:T} = x^i_{1:T} | F_T^N) ∝ w^i_T.
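Assuming the particle trajectories `X` (one row per particle) and the final-time weights `w` are available from the conditional particle filter, this selection step is a single categorical draw (a minimal sketch; the names are illustrative):

```python
import numpy as np

def sample_reference_trajectory(X, w, rng):
    """Draw x*_{1:T} = X[i] with probability proportional to w[i]."""
    p = w / w.sum()                  # normalize the final-time weights
    i = rng.choice(len(w), p=p)      # categorical draw over the N particles
    return X[i]

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 10))         # 4 particle trajectories, T = 10
w = np.array([0.1, 0.2, 0.3, 0.4])
x_star = sample_reference_trajectory(X, w, rng)
```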

[Figure: sampled state trajectory plotted over time, T = 50.]

### 1. Run CPF-AS(N, x'_{1:T}) targeting p_θ(x_{1:T} | y_{1:T}). 2. Sample x*_{1:T} with P(x*_{1:T} = x^i_{1:T} | F_T^N) ∝ w^i_T.
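Putting the two steps together, one PGAS iteration can be sketched for a toy scalar model x_t = a·x_{t-1} + v_t, y_t = x_t + e_t. This is an illustrative sketch, not the authors' code: bootstrap proposals and multinomial resampling are assumed, and full trajectories are kept by reindexing.

```python
import numpy as np

def cpf_as(y, x_ref, N, rng, a=0.9, sv=1.0, se=1.0):
    """One PGAS sweep: run CPF-AS conditioned on x_ref, then draw the
    new reference trajectory with probability proportional to w_T^i."""
    T = len(y)
    X = np.zeros((N, T))                       # full particle trajectories

    # t = 0: initialize from the prior, condition particle N on x_ref
    X[:, 0] = rng.normal(0.0, sv, size=N)
    X[-1, 0] = x_ref[0]
    logw = -0.5 * (y[0] - X[:, 0]) ** 2 / se**2

    for t in range(1, T):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        anc = rng.choice(N, size=N, p=w)       # multinomial resampling

        # ancestor sampling for the conditioned particle:
        # P(anc = j) proportional to w_j * p(x_ref[t] | X[j, t-1])
        log_as = logw - 0.5 * (x_ref[t] - a * X[:, t - 1]) ** 2 / sv**2
        p_as = np.exp(log_as - log_as.max())
        anc[-1] = rng.choice(N, p=p_as / p_as.sum())

        X = X[anc]                             # reassign whole trajectories
        X[:, t] = a * X[:, t - 1] + rng.normal(0.0, sv, size=N)
        X[-1, t] = x_ref[t]                    # keep the reference states
        logw = -0.5 * (y[t] - X[:, t]) ** 2 / se**2

    w = np.exp(logw - logw.max())
    return X[rng.choice(N, p=w / w.sum())]

# a few PGAS iterations on simulated data
rng = np.random.default_rng(0)
T = 25
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal()
y = x + rng.normal(size=T)

x_ref = np.zeros(T)                            # arbitrary initialization
for _ in range(5):
    x_ref = cpf_as(y, x_ref, N=20, rng=rng)
```

Reindexing `X = X[anc]` is the simplest way to track ancestral paths; it suffers from path degeneracy for large T, but for a sketch of the kernel it keeps the bookkeeping transparent.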

[Figure: sampled state trajectory plotted over time, T = 50.]

### The CPF-AS kernel leaves the posterior invariant: ∫ P^N_θ(x'_{1:T}, dx*_{1:T}) p_θ(dx'_{1:T} | y_{1:T}) = p_θ(dx*_{1:T} | y_{1:T}).

F. Lindsten, M. I. Jordan and T. B. Schön, Ancestor sampling for particle Gibbs, in P. Bartlett, F. C. N. Pereira, C. J. C. Burges, L. Bottou and K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems (NIPS) 25, pp. 2600-2608, 2012.


### … x_t), e_t ∼ N(0, 1). Consider the ACF of θ[k] − E[θ | y_{1:T}].
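The ACF here is the sample autocorrelation of the chain θ[1], …, θ[K]; a standard estimator (not code from the talk) can be sketched as follows, with an AR(1) chain standing in for a slowly mixing sampler:

```python
import numpy as np

def acf(chain, max_lag):
    """Sample autocorrelation of a scalar MCMC chain up to max_lag."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    c0 = (x @ x) / len(x)                     # lag-0 autocovariance
    return np.array([(x[:len(x) - k] @ x[k:]) / (len(x) * c0)
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(1)
# AR(1) chain mimicking a slowly mixing sampler: high ACF at short lags
theta = np.zeros(5000)
for k in range(1, 5000):
    theta[k] = 0.95 * theta[k - 1] + rng.normal()
r = acf(theta, 50)
```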

[Figure: ACF of θ[k] − E[θ | y_{1:T}] up to lag 200 for PG-AS (top) and PG (bottom), T = 1000, with N = 5, 20, 100, 1000.]

### Semi-parametric model: State-space model for G , Gaussian process model for h ( ·) .

[Figure: Bode plot (magnitude in dB, phase in deg) of the linear block G and the static nonlinearity h(z): true system, posterior mean, and 99% credibility region.]

F. Lindsten, T. B. Schön and M. I. Jordan, Bayesian semiparametric Wiener system identification, Automatica, 49(7):2053-2063, July 2013.

### Idea: Marginalize out f ( ·) and do inference directly on x 1:T (and θ ).
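One way to see the marginalization: for x_{t+1} = f(x_t) + v_t with f ∼ GP(0, k), integrating out f makes x_{2:T} jointly Gaussian given the inputs x_{1:T-1}, so the trajectory prior is evaluable in closed form but no longer Markovian. A minimal sketch under a squared-exponential kernel (parameter names and values are illustrative, not from the paper):

```python
import numpy as np

def log_prior_x(x, ell=1.0, sf=1.0, sv=0.5):
    """log p(x_{2:T} | x_1) for x_{t+1} = f(x_t) + v_t, f ~ GP(0, k_SE),
    with f marginalized out: x_{2:T} ~ N(0, K + sv^2 I), where
    K[i, j] = k_SE(x_i, x_j) over the inputs x_1, ..., x_{T-1}."""
    inp = x[:-1][:, None]                  # GP inputs x_1 .. x_{T-1}
    out = x[1:]                            # GP outputs x_2 .. x_T
    K = sf**2 * np.exp(-0.5 * (inp - inp.T) ** 2 / ell**2)
    C = K + sv**2 * np.eye(len(out))
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, out)        # L^{-1} out
    return (-0.5 * alpha @ alpha           # -0.5 * out^T C^{-1} out
            - np.log(np.diag(L)).sum()     # -0.5 * log|C|
            - 0.5 * len(out) * np.log(2 * np.pi))

x = np.array([0.0, 0.3, -0.1, 0.4, 0.2])
lp = log_prior_x(x)
```

Because every entry of K depends on all earlier states, p(x_{t+1} | x_{1:t}) depends on the whole history, which is the non-Markovian structure PGAS has to handle.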

R. Frigola, F. Lindsten, T. B. Schön and C. E. Rasmussen, Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC, Conference on Neural Information Processing Systems (NIPS), accepted for publication, Lake Tahoe, NV, USA, 2013.

### PGAS is well suited for tackling such non-Markovian problems.

[Figure: learned function f(x, u) and the corresponding state trajectory x_t over time.]


### • Computationally much more efficient than MCEM when the simulation step is complicated.

B. Delyon, M. Lavielle and E. Moulines, Convergence of a stochastic approximation version of the EM algorithm, The Annals of Statistics, 27:94-128, 1999.


### Q̂_k(θ) = Q̂_{k−1}(θ) + γ_k [log p_θ(x_{1:T}[k], y_{1:T}) − Q̂_{k−1}(θ)].
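The update has the standard Robbins-Monro form; with step sizes γ_k = 1/k it reduces to a running average of the complete-data log-likelihood evaluations. An illustrative scalar sketch (in PSAEM the log-likelihood term is evaluated at a trajectory x_{1:T}[k] produced by a CPF-AS sweep; here a noisy stand-in with mean -3 is used):

```python
import numpy as np

def sa_update(Q_prev, loglik_k, gamma_k):
    """One stochastic-approximation step:
    Q_k = Q_{k-1} + gamma_k * (loglik_k - Q_{k-1})."""
    return Q_prev + gamma_k * (loglik_k - Q_prev)

rng = np.random.default_rng(2)
Q = 0.0
for k in range(1, 2001):
    # stand-in for log p_theta(x_{1:T}[k], y_{1:T}); true mean is -3
    Q = sa_update(Q, rng.normal(loc=-3.0, scale=1.0), 1.0 / k)
```

With γ_k = 1/k the recursion is exactly the sample mean of the draws, so Q converges to the expectation of the simulated log-likelihood; more general step sizes trade tracking speed against variance.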

F. Lindsten, An efficient stochastic approximation EM algorithm using conditional particle filters, Proceedings of the 38th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013.

[Figure: PSAEM parameter traces over iteration k: 10×a_k, σ²_{v,k}, σ²_{e,k}.]

### Example: nonlinear time series

[Figure: average relative error vs. computational time (seconds) for PSAEM (N = 15) and PSEM (N = 15, 50, 100, 500).]
