Clock models for Kalman filtering

Per Jarlemark, Kenneth Jaldehag, and Carsten Rieck

SP Report 2016:48

SP Technical Research Institute of Sweden

© SP Sveriges Tekniska Forskningsinstitut AB

Abstract

Clock models for Kalman filtering

Time and frequency error models for atomic frequency standards are presented in this report, together with derivations of model parameters. The models are suited for use in Kalman filtering, e.g., for combining data from several frequency standards to form a "group clock".

Key words: time metrology, frequency standards, Kalman filter

SP Sveriges Tekniska Forskningsinstitut

SP Technical Research Institute of Sweden

SP Report 2016:48

ISSN 0284-5172

Borås 2016


Contents

Abstract
1 Introduction
2 Noise process models of clocks
2.1 Noise sequence ν3
2.2 Noise sequences ν2 and ν1
3 State description of clocks
3.1 State equations
3.2 Noise covariance
4 State description of a first order Gauss-Markov process
5 Triple difference variance
Appendix A: Integration of the process noise
Appendix B: Derivation of process noise covariances
Appendix C: Full clock state noise covariance matrix
Appendix D: Derivation of triple difference variance

1 Introduction

The purpose of this report is to document the equations in the Kalman filters used for clock data combinations ("group clocks"). The treatment of the stochastic processes is not mathematically rigorous; the derivations are intended as engineering tools for finding appropriate model parameters for the filtering.

2 Noise process models of clocks

We model the time offset, φ, of an atomic frequency standard on time intervals from, say, minutes to months as the sum of three noise components

\[
\phi = \phi_1 + \phi_2 + \phi_3 \qquad (1)
\]

where each term originates from integration of a white noise sequence, σiνi,

\[
\frac{d\phi_1}{dt} = \sigma_1\nu_1, \qquad
\frac{d^2\phi_2}{dt^2} = \sigma_2\nu_2, \qquad
\frac{d^3\phi_3}{dt^3} = \sigma_3\nu_3. \qquad (2)
\]

By definition, the sequences νi are normalized (see Appendix B) and scaled by the standard deviations, σi, to give the clock offsets φi their appropriate magnitudes.

Each term has a time interval regime where its relative influence on the total time offset is largest. The first term, φ1, will dominate on sufficiently small time intervals, while φ3 will dominate on the largest time intervals. On these intervals the definition of φ3 from equation (2) gives rise to a parabolic time offset (see below). This is observed for, e.g., H-masers.

2.1 Noise sequence ν3

Starting with the defining function

\[
\frac{d^3\phi_3}{dt^3} = \sigma_3\nu_3
\]

and defining a frequency offset, f3, and a frequency drift, a3,

\[
f_3 \equiv \frac{d\phi_3}{dt}, \qquad
a_3 \equiv \frac{df_3}{dt} = \frac{d^2\phi_3}{dt^2} \qquad (3)
\]

we get:

\[
\frac{da_3}{dt} = \sigma_3\nu_3 \qquad (4)
\]

Frequency drift, frequency offset, and clock time offset at time t can now be expressed as one, two, and three integrations of the noise from a starting point t0:

\[
a_3(t) = a_3(t_0) + \int_{t_0}^{t} \sigma_3\nu_3(t^1)\, dt^1
\]
\[
f_3(t) = f_3(t_0) + (t - t_0)\, a_3(t_0) + \int_{t_0}^{t} \left( \int_{t_0}^{t^2} \sigma_3\nu_3(t^1)\, dt^1 \right) dt^2 \qquad (5)
\]
\[
\phi_3(t) = \phi_3(t_0) + (t - t_0)\, f_3(t_0) + \frac{(t - t_0)^2}{2}\, a_3(t_0) + \int_{t_0}^{t} \left( \int_{t_0}^{t^3} \left( \int_{t_0}^{t^2} \sigma_3\nu_3(t^1)\, dt^1 \right) dt^2 \right) dt^3
\]

Note that superscripts on t, e.g., t^3, are just indices; they do not denote exponents.

The data occur as samples at a set of discrete time instants, tp. Inserting the present and previous sampling instants, tp and tp−1, as t and t0, and defining τp ≡ tp − tp−1, we get:

\[
a_3(t_p) = a_3(t_{p-1}) + \int_{t_{p-1}}^{t_p} \sigma_3\nu_3(t^1)\, dt^1
\]
\[
f_3(t_p) = f_3(t_{p-1}) + \tau_p\, a_3(t_{p-1}) + \int_{t_{p-1}}^{t_p} \left( \int_{t_{p-1}}^{t^2} \sigma_3\nu_3(t^1)\, dt^1 \right) dt^2 \qquad (6)
\]
\[
\phi_3(t_p) = \phi_3(t_{p-1}) + \tau_p\, f_3(t_{p-1}) + \frac{\tau_p^2}{2}\, a_3(t_{p-1}) + \int_{t_{p-1}}^{t_p} \left( \int_{t_{p-1}}^{t^3} \left( \int_{t_{p-1}}^{t^2} \sigma_3\nu_3(t^1)\, dt^1 \right) dt^2 \right) dt^3
\]

Let us define the discrete function w_{i,n}(p) as the n-fold integration of the noise process σiνi over the interval between tp−1 and tp. For n = 1 to 3 we get:

\[
w_{i,1}(p) \equiv \int_{t_{p-1}}^{t_p} \sigma_i\nu_i(t^1)\, dt^1
\]
\[
w_{i,2}(p) \equiv \int_{t_{p-1}}^{t_p} \left( \int_{t_{p-1}}^{t^2} \sigma_i\nu_i(t^1)\, dt^1 \right) dt^2 \qquad (7)
\]
\[
w_{i,3}(p) \equiv \int_{t_{p-1}}^{t_p} \left( \int_{t_{p-1}}^{t^3} \left( \int_{t_{p-1}}^{t^2} \sigma_i\nu_i(t^1)\, dt^1 \right) dt^2 \right) dt^3
\]

It is shown in Appendix A that the multidimensional integrals in equation (7) can be reduced to integrals in one dimension:

\[
w_{i,n}(p) \equiv \int_{t^n=t_{p-1}}^{t_p} \left( \int_{t^{n-1}=t_{p-1}}^{t^n} \cdots \left( \int_{t^1=t_{p-1}}^{t^2} \sigma_i\nu_i(t^1)\, dt^1 \right) \cdots\, dt^{n-1} \right) dt^n
= \int_{t^1=t_{p-1}}^{t_p} \frac{(t_p - t^1)^{n-1}}{(n-1)!}\, \sigma_i\nu_i(t^1)\, dt^1 \qquad (8)
\]

Viewing a3(p), f3(p), φ3(p) as discrete functions we can rewrite equation (6):

\[
a_3(p) = a_3(p-1) + w_{3,1}(p)
\]
\[
f_3(p) = f_3(p-1) + \tau_p\, a_3(p-1) + w_{3,2}(p) \qquad (9)
\]
\[
\phi_3(p) = \phi_3(p-1) + \tau_p\, f_3(p-1) + \frac{\tau_p^2}{2}\, a_3(p-1) + w_{3,3}(p)
\]

In a stricter sense the functions in equation (9) are not identical to those with the same names in equation (5). They are merely compositions of the functions in equation (5) with the function ts(p), e.g.,

\[
a_3'(p) = a_3(t_s(p)) \qquad (10)
\]

where ts(p) gives the sampling time for each (integer) sample number p. However, in the following we have chosen to ignore the formal difference between the continuous and discrete versions of the functions, e.g., a3′ and a3.

2.2 Noise sequences ν2 and ν1

We start with the defining function for the second process

\[
\frac{d^2\phi_2}{dt^2} = \sigma_2\nu_2
\]

and define a frequency offset, f2, as:

\[
f_2 \equiv \frac{d\phi_2}{dt} \qquad (11)
\]

In analogy with the procedure for the third process above, the frequency offset and clock offset of the second process at two consecutive sampling instants can now be derived from one and two integrations of the noise process, followed by discretization:

\[
f_2(p) = f_2(p-1) + w_{2,1}(p)
\]
\[
\phi_2(p) = \phi_2(p-1) + \tau_p\, f_2(p-1) + w_{2,2}(p) \qquad (12)
\]

Finally, for the first process, one integration of the defining function

\[
\frac{d\phi_1}{dt} = \sigma_1\nu_1
\]

gives the relation between the clock offsets at two consecutive sampling instants:

\[
\phi_1(p) = \phi_1(p-1) + w_{1,1}(p) \qquad (13)
\]

3 State description of clocks

3.1 State equations

As a complement to the total clock offset, φ = φ1 + φ2 + φ3, we can define a (total) frequency offset and frequency drift as:

\[
f \equiv f_2 + f_3
\]
\[
a \equiv a_3 \qquad (14)
\]

Note that f ≠ dφ/dt and a ≠ df/dt for this combination of processes; only the "smooth" components (i.e., those involving integration of the white sequences) contribute to our definitions of f and a. The parameters φ, f, and a constitute a state description of the clock error that can be used to compile equations (9), (12), and (13) into the combination:

\[
\phi(p) = \phi(p-1) + \tau_p\, f(p-1) + \frac{\tau_p^2}{2}\, a(p-1) + w_{1,1}(p) + w_{2,2}(p) + w_{3,3}(p)
\]
\[
f(p) = f(p-1) + \tau_p\, a(p-1) + w_{2,1}(p) + w_{3,2}(p) \qquad (15)
\]
\[
a(p) = a(p-1) + w_{3,1}(p)
\]

By defining a state vector, x, and an accompanying noise vector, w, as

\[
x \equiv \begin{pmatrix} \phi & f & a \end{pmatrix}^T \qquad (16)
\]
\[
w \equiv \begin{pmatrix} w_{1,1}+w_{2,2}+w_{3,3} & w_{2,1}+w_{3,2} & w_{3,1} \end{pmatrix}^T \qquad (17)
\]

we can now rewrite equations (15) for a clock k as:

\[
x_k(p) = \Phi(p)\, x_k(p-1) + w_k(p) \qquad (18)
\]

where

\[
\Phi(p) \equiv \begin{pmatrix} 1 & \tau_p & \frac{\tau_p^2}{2} \\ 0 & 1 & \tau_p \\ 0 & 0 & 1 \end{pmatrix} \qquad (19)
\]

3.2 Noise covariance

In Kalman filters where data from a set of clocks are compared, the covariance of the components of the process noise, w, is essential for calculating the optimal filter parameters. The covariance for two clocks k and l can be written as:

\[
Q_{kl}(p) \equiv E\{w_k(p)\, w_l^T(p)\} \qquad (20)
\]

where E{·} denotes the expectation value. This covariance matrix can be calculated using an expression for the covariance between noise processes w_{i,ni} and w_{j,nj}, integrated ni and nj times (see equations (7) and (8)):

\[
E\{w_{i,n_i} w_{j,n_j}\} = \frac{c_{ij}\,\sigma_i\sigma_j\,\tau_p^{\,n_i+n_j-1}}{(n_i-1)!\,(n_j-1)!\,(n_i+n_j-1)} \qquad (21)
\]

where cij is the correlation coefficient between the noise sequences i and j. This relation is derived in Appendix B.

A general expression for the elements of Qkl in equation (20) is given in Appendix C. For the case where all six noise processes of clocks k and l are uncorrelated we get:

\[
Q_{kk}(p) = \begin{pmatrix}
\sigma_{1k}^2\tau_p + \sigma_{2k}^2\frac{\tau_p^3}{3} + \sigma_{3k}^2\frac{\tau_p^5}{20} &
\sigma_{2k}^2\frac{\tau_p^2}{2} + \sigma_{3k}^2\frac{\tau_p^4}{8} &
\sigma_{3k}^2\frac{\tau_p^3}{6} \\
\sigma_{2k}^2\frac{\tau_p^2}{2} + \sigma_{3k}^2\frac{\tau_p^4}{8} &
\sigma_{2k}^2\tau_p + \sigma_{3k}^2\frac{\tau_p^3}{3} &
\sigma_{3k}^2\frac{\tau_p^2}{2} \\
\sigma_{3k}^2\frac{\tau_p^3}{6} &
\sigma_{3k}^2\frac{\tau_p^2}{2} &
\sigma_{3k}^2\tau_p
\end{pmatrix} \qquad (22)
\]
\[
Q_{kl}(p) = 0, \qquad k \neq l
\]
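As an illustration of equations (18), (19), and (22), the following minimal NumPy sketch (not part of the original report; the function names and parameter values are hypothetical, chosen only to keep the numerics well behaved) builds Φ(τp) and the uncorrelated single-clock covariance Qkk(τp) and simulates the state recursion x(p) = Φ x(p−1) + w(p) with w ~ N(0, Q).

```python
import numpy as np

def phi_matrix(tau):
    """State transition matrix, equation (19)."""
    return np.array([[1.0, tau, tau**2 / 2.0],
                     [0.0, 1.0, tau],
                     [0.0, 0.0, 1.0]])

def q_matrix(tau, s1, s2, s3):
    """Single-clock process noise covariance for uncorrelated noises, equation (22)."""
    return np.array([
        [s1**2*tau + s2**2*tau**3/3 + s3**2*tau**5/20,
         s2**2*tau**2/2 + s3**2*tau**4/8,
         s3**2*tau**3/6],
        [s2**2*tau**2/2 + s3**2*tau**4/8,
         s2**2*tau + s3**2*tau**3/3,
         s3**2*tau**2/2],
        [s3**2*tau**3/6,
         s3**2*tau**2/2,
         s3**2*tau]])

# Arbitrary illustrative values; real sigma values depend on the clock type.
tau, s1, s2, s3 = 1.0, 1.0, 0.3, 0.1
Phi, Q = phi_matrix(tau), q_matrix(tau, s1, s2, s3)

rng = np.random.default_rng(0)
x = np.zeros(3)                     # state [phi, f, a]
history = []
for p in range(1000):               # simulate 1000 sampling intervals
    w = rng.multivariate_normal(np.zeros(3), Q)
    x = Phi @ x + w                 # equation (18) for a single clock
    history.append(x.copy())
history = np.array(history)         # columns: time offset, frequency offset, frequency drift
```

In a group-clock filter, Φ and Q of each clock enter the Kalman prediction step; the values of σ1, σ2, σ3 would in practice be estimated, e.g., from the triple difference variance of section 5.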

4 State description of a first order Gauss-Markov process

In a first order Gauss-Markov process a negative feedback limits the amplitude. It could be used, e.g., as part of describing the measurement link between clocks. It is defined by the equation:

\[
\frac{d\phi_g}{dt} = -\beta\,\phi_g + \sigma_g\nu_g \qquad (23)
\]

where the feedback factor β is positive, generating a negative feedback. Using ψ(t) = e^{βt} φg(t) this can be written

\[
\frac{d\psi}{dt} = e^{\beta t}\,\sigma_g\nu_g
\]

and after integration and multiplication with e^{−βt} we get

\[
\phi_g(t) = e^{-\beta(t-t_0)}\,\phi_g(t_0) + \int_{t_0}^{t} e^{-\beta(t-t^1)}\,\sigma_g\nu_g(t^1)\, dt^1 \qquad (24)
\]

In analogy with the clock processes we discretize this process and get:

\[
\phi_g(p) = e^{-\beta\tau_p}\,\phi_g(p-1) + w_g(p) \qquad (25)
\]

where

\[
w_g(p) = \int_{t_{p-1}}^{t_p} e^{-\beta(t_p-t^1)}\,\sigma_g\nu_g(t^1)\, dt^1 \qquad (26)
\]

For Kalman filters using φg as a state variable, the covariance between the noise wg and the other noise components is needed (in Q). In analogy with Appendix B we derive, for two first order Gauss-Markov processes k and l:

\[
E\{w_{gk} w_{gl}\} = \sigma_{gk}\sigma_{gl} \int_{t^1=t_{p-1}}^{t_p} \int_{t^{1*}=t_{p-1}}^{t_p} e^{-\beta_k(t_p-t^1)}\, e^{-\beta_l(t_p-t^{1*})}\, E\{\nu_{gk}(t^1)\,\nu_{gl}(t^{1*})\}\, dt^1\, dt^{1*}
\]
\[
= \sigma_{gk}\sigma_{gl} \int_{t^1=t_{p-1}}^{t_p} e^{-\beta_k(t_p-t^1)}\, e^{-\beta_l(t_p-t^1)}\, c_{gk,gl}\, dt^1
= \frac{c_{gk,gl}\,\sigma_{gk}\sigma_{gl}}{\beta_k+\beta_l}\left(1 - e^{-(\beta_k+\beta_l)\tau_p}\right) \qquad (27)
\]

where c_{gk,gl} is the correlation coefficient between the noise sequences νgk and νgl.

For the covariance between the Gauss-Markov noise w_{gk} and the ni times integrated noise generating sequence of a clock process, w_{i,ni}, we get:

\[
E\{w_{gk} w_{i,n_i}\} = \frac{\sigma_{gk}\sigma_i}{(n_i-1)!} \int_{t^1=t_{p-1}}^{t_p} \int_{t^{1*}=t_{p-1}}^{t_p} e^{-\beta_k(t_p-t^1)}\, (t_p-t^{1*})^{n_i-1}\, E\{\nu_{gk}(t^1)\,\nu_i(t^{1*})\}\, dt^1\, dt^{1*}
\]
\[
= \frac{\sigma_{gk}\sigma_i}{(n_i-1)!} \int_{t^1=t_{p-1}}^{t_p} e^{-\beta_k(t_p-t^1)}\, (t_p-t^1)^{n_i-1}\, c_{gk,i}\, dt^1
\]
\[
= [\text{ repeated integration by parts }]
\]
\[
= \frac{c_{gk,i}\,\sigma_{gk}\sigma_i}{(n_i-1)!\,(-\beta_k)^{n_i}} \left\{ e^{-\beta_k\tau_p}\left[(-\beta_k\tau_p)^{n_i-1} - (n_i-1)(-\beta_k\tau_p)^{n_i-2} + (n_i-1)(n_i-2)(-\beta_k\tau_p)^{n_i-3} - \cdots + (-1)^{n_i-1}(n_i-1)!\right] + (-1)^{n_i}(n_i-1)! \right\} \qquad (28)
\]
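A small sketch of how the discretized Gauss-Markov process of equations (25)–(27) might be propagated in practice (not from the report; names and parameter values are hypothetical). From equation (27) with k = l and correlation 1, the variance of the discrete noise wg is σg²(1 − e^{−2βτp})/(2β).

```python
import numpy as np

def gauss_markov_step(phi_g, tau, beta, sigma_g, rng):
    """One step of the discretized first order Gauss-Markov process, equation (25).

    The discrete noise w_g has variance sigma_g^2 (1 - exp(-2 beta tau)) / (2 beta),
    which follows from equation (27) with k = l and correlation 1.
    """
    a = np.exp(-beta * tau)
    var_w = sigma_g**2 * (1.0 - np.exp(-2.0 * beta * tau)) / (2.0 * beta)
    return a * phi_g + rng.normal(0.0, np.sqrt(var_w))

# Arbitrary illustrative parameters: time step, feedback factor, noise level.
tau, beta, sigma_g = 1.0, 0.05, 1.0
rng = np.random.default_rng(1)
phi_g, trace = 0.0, []
for p in range(5000):
    phi_g = gauss_markov_step(phi_g, tau, beta, sigma_g, rng)
    trace.append(phi_g)
# The sample variance approaches the stationary value sigma_g^2 / (2 beta).
print(np.var(trace), sigma_g**2 / (2 * beta))
```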



5 Triple difference variance

In order to find values for the parameters σi from measured clock offsets, φ(p), a chain of differences can be used. These remove the dependence on the actual state of the clock (as defined by φ, f, and a), leaving a function of w terms from which conclusions on the standard deviations (σ) can be drawn.

We assume that the sampling points are equally spaced with separation τ, and define the difference of a discrete function by ∆ψ(p) ≡ ψ(p) − ψ(p − 1). By differencing the measured phase offset three times, and estimating the variance of the differenced data, we get a polynomial in τ with coefficients based on the sought constants σi.

We start by differencing the third phase offset, φ3. From equation (9) it follows that:

\[
\Delta\phi_3(p) = \tau\, f_3(p-1) + \frac{\tau^2}{2}\, a_3(p-1) + w_{3,3}(p) \qquad (29)
\]

By using

\[
f_3(p-1) - f_3(p-2) = \tau\, a_3(p-2) + w_{3,2}(p-1)
\qquad \text{and} \qquad
a_3(p-1) - a_3(p-2) = w_{3,1}(p-1)
\]

derived from equation (9), we can write the result after a second difference as:

\[
\Delta\Delta\phi_3(p) \equiv \Delta(\Delta\phi_3(p))
= \tau^2 a_3(p-2) + \frac{\tau^2}{2}\, w_{3,1}(p-1) + \tau\, w_{3,2}(p-1) + w_{3,3}(p) - w_{3,3}(p-1) \qquad (30)
\]

and finally, a third difference removes the remaining dependence on the state variables, and we get:

\[
\Delta\Delta\Delta\phi_3(p) = \frac{\tau^2}{2}\left[w_{3,1}(p-1) + w_{3,1}(p-2)\right] + \tau\left[w_{3,2}(p-1) - w_{3,2}(p-2)\right] + w_{3,3}(p) - 2w_{3,3}(p-1) + w_{3,3}(p-2) \qquad (31)
\]

Using equations (12) and (13) we can get the triple differences also for φ2 and φ1:

\[
\Delta\Delta\Delta\phi_2(p) = \tau\left[w_{2,1}(p-1) - w_{2,1}(p-2)\right] + w_{2,2}(p) - 2w_{2,2}(p-1) + w_{2,2}(p-2) \qquad (32)
\]
\[
\Delta\Delta\Delta\phi_1(p) = w_{1,1}(p) - 2w_{1,1}(p-1) + w_{1,1}(p-2) \qquad (33)
\]

We can now combine the three terms

\[
T_1 = \Delta\Delta\Delta\phi_1(p), \qquad T_2 = \Delta\Delta\Delta\phi_2(p), \qquad T_3 = \Delta\Delta\Delta\phi_3(p)
\]

to form the triple difference of φ and calculate its variance:

\[
E\{(\Delta\Delta\Delta\phi)^2\} = E\{(T_1+T_2+T_3)^2\}
= E\{T_1^2\} + E\{T_2^2\} + E\{T_3^2\} + 2E\{T_1T_2\} + 2E\{T_1T_3\} + 2E\{T_2T_3\}
\]
\[
= [\text{ see derivations in Appendix D }]
\]
\[
= 6\sigma_1^2\tau + \sigma_2^2\tau^3 - 2c_{13}\sigma_1\sigma_3\tau^3 + \frac{11}{20}\sigma_3^2\tau^5 \qquad (34)
\]
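As an illustration of how equation (34) might be used in practice, the sketch below (not part of the report; function names and the toy data are hypothetical) estimates the empirical triple difference variance of a phase series at several sampling intervals and evaluates the model polynomial, e.g., as the starting point for a least squares fit of σ1, σ2, σ3.

```python
import numpy as np

def triple_diff_variance(phase, m):
    """Empirical E{(ddd phi)^2} for data decimated to a spacing of m samples."""
    x = phase[::m]                     # keep every m-th sample
    ddd = np.diff(x, n=3)              # triple difference
    return np.mean(ddd**2)

def model_variance(tau, s1, s2, s3, c13=0.0):
    """Model triple difference variance, equation (34)."""
    return 6*s1**2*tau + s2**2*tau**3 - 2*c13*s1*s3*tau**3 + 11/20*s3**2*tau**5

# Toy data: a pure phi_1-type random walk sampled every tau0 seconds.
tau0 = 1.0
rng = np.random.default_rng(2)
phase = np.cumsum(rng.normal(0.0, 1e-12, 100000))

for m in (1, 10, 100):
    tau = m * tau0
    emp = triple_diff_variance(phase, m)
    # With sigma_1 = 1e-12/sqrt(tau0) and sigma_2 = sigma_3 = 0 the model gives 6*sigma_1^2*tau.
    print(tau, emp, model_variance(tau, 1e-12 / np.sqrt(tau0), 0.0, 0.0))
```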

Appendix A: Integration of the process noise

Below we derive the reduction of the multidimensional integrals of the process noise into the one-dimensional integral found in equation (8). Note that superscripts on t, e.g., t^n, are just indices (in the multidimensional space of integration); they do not denote exponents.

\[
w_{i,n} \equiv \int_{t^n=t_{p-1}}^{t_p} \left( \int_{t^{n-1}=t_{p-1}}^{t^n} \cdots \left( \int_{t^1=t_{p-1}}^{t^2} \sigma_i\nu_i(t^1)\, dt^1 \right) \cdots\, dt^{n-1} \right) dt^n
\]
\[
= \left[\text{ using the Heaviside function: } \theta(x)=1 \text{ if } x>0,\ 0 \text{ otherwise }\right]
\]
\[
= \int_{t^n=t_{p-1}}^{t_p} \int_{t^{n-1}=t_{p-1}}^{t_p} \cdots \int_{t^1=t_{p-1}}^{t_p} \theta(t^n-t^{n-1}) \cdots \theta(t^2-t^1)\, \sigma_i\nu_i(t^1)\, dt^1 \cdots dt^{n-1}\, dt^n
\]
\[
= \left[\text{ change the order of integration, start with } t^n \right]
\]
\[
= \int_{t^{n-1}=t_{p-1}}^{t_p} \cdots \int_{t^1=t_{p-1}}^{t_p} \left[ \int_{t^n=t_{p-1}}^{t_p} \theta(t^n-t^{n-1})\, dt^n \right] \cdots \theta(t^2-t^1)\, \sigma_i\nu_i(t^1)\, dt^1 \cdots dt^{n-1}
\]
\[
= \int_{t^{n-1}=t_{p-1}}^{t_p} \cdots \int_{t^1=t_{p-1}}^{t_p} \left[ \int_{t^n=t^{n-1}}^{t_p} 1\, dt^n \right] \cdots \theta(t^2-t^1)\, \sigma_i\nu_i(t^1)\, dt^1 \cdots dt^{n-1}
\]
\[
= \int_{t^{n-1}=t_{p-1}}^{t_p} \cdots \int_{t^1=t_{p-1}}^{t_p} \left(t_p-t^{n-1}\right) \theta(t^{n-1}-t^{n-2}) \cdots \theta(t^2-t^1)\, \sigma_i\nu_i(t^1)\, dt^1 \cdots dt^{n-1}
\]
\[
= \int_{t^{n-2}=t_{p-1}}^{t_p} \cdots \int_{t^1=t_{p-1}}^{t_p} \left[ \int_{t^{n-1}=t_{p-1}}^{t_p} (t_p-t^{n-1})\, \theta(t^{n-1}-t^{n-2})\, dt^{n-1} \right] \cdots \theta(t^2-t^1)\, \sigma_i\nu_i(t^1)\, dt^1 \cdots dt^{n-2}
\]
\[
= \int_{t^{n-2}=t_{p-1}}^{t_p} \cdots \int_{t^1=t_{p-1}}^{t_p} \left[ \int_{t^{n-1}=t^{n-2}}^{t_p} (t_p-t^{n-1})\, dt^{n-1} \right] \cdots \theta(t^2-t^1)\, \sigma_i\nu_i(t^1)\, dt^1 \cdots dt^{n-2}
\]
\[
= \int_{t^{n-2}=t_{p-1}}^{t_p} \cdots \int_{t^1=t_{p-1}}^{t_p} \frac{(t_p-t^{n-2})^2}{2}\, \theta(t^{n-2}-t^{n-3}) \cdots \theta(t^2-t^1)\, \sigma_i\nu_i(t^1)\, dt^1 \cdots dt^{n-2}
\]
\[
= \left[\text{ after a total of } n-1 \text{ integrations }\right]
\]
\[
= \int_{t^1=t_{p-1}}^{t_p} \frac{(t_p-t^1)^{n-1}}{(n-1)!}\, \sigma_i\nu_i(t^1)\, dt^1
\]
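To make the reduction in equation (8) concrete, the short numeric sketch below (not part of the original report; the test function and grid are arbitrary choices) applies both sides of the identity to a deterministic integrand in place of σiνi and checks that the n-fold nested integral agrees with the single weighted integral.

```python
import numpy as np
from math import factorial

# Interval [t_{p-1}, t_p] on a fine grid; f stands in for sigma_i * nu_i.
t_prev, t_p, M = 0.0, 5.0, 20001
t = np.linspace(t_prev, t_p, M)
f = np.cos(t)                                  # arbitrary test integrand

def cumtrapz(y, x):
    """Cumulative trapezoidal integral of y over x, starting at 0."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

n = 3
# Left-hand side: integrate n times and keep the end value (the nested integral).
g = f.copy()
for _ in range(n):
    g = cumtrapz(g, t)
lhs = g[-1]

# Right-hand side: one integral with the kernel (t_p - t)^(n-1)/(n-1)!.
rhs = cumtrapz((t_p - t) ** (n - 1) / factorial(n - 1) * f, t)[-1]

print(lhs, rhs)    # the two values agree to discretization accuracy
```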

Appendix B: Derivation of process noise covariances

The sequences denoted ν in this report are white and normalized. This means that

\[
\int_D f(t^1)\, E\{\nu_i(t^1)\,\nu_j(t)\}\, dt^1 = f(t)\, c_{ij}, \quad t \in D; \qquad = 0, \quad t \notin D
\]

where f is (almost) any function, and cij is the correlation coefficient between the two sequences. In particular, the normalization gives

\[
\int_D E\{\nu_i(t^1)\,\nu_i(t)\}\, dt^1 = 1, \quad t \in D
\]

We also assume that the distribution functions of the stochastic processes π(t) are such that the order of forming an expectation value and integrating in time is interchangeable, i.e.,

\[
E\left\{\int \pi(t)\, dt\right\} = \int E\{\pi(t)\}\, dt
\]

With this background we can calculate the covariance between two noise processes w_{i,ni}(p) and w_{j,nj}(p), derived from integration of white noise sequences ni and nj times as defined in equation (8):

\[
E\{w_{i,n_i}(p)\, w_{j,n_j}(p)\}
= E\left\{ \int_{t^1=t_{p-1}}^{t_p} \frac{(t_p-t^1)^{n_i-1}}{(n_i-1)!}\, \sigma_i\nu_i(t^1)\, dt^1 \cdot \int_{t^{1*}=t_{p-1}}^{t_p} \frac{(t_p-t^{1*})^{n_j-1}}{(n_j-1)!}\, \sigma_j\nu_j(t^{1*})\, dt^{1*} \right\}
\]
\[
= \frac{\sigma_i\sigma_j}{(n_i-1)!\,(n_j-1)!} \int_{t^1=t_{p-1}}^{t_p} \int_{t^{1*}=t_{p-1}}^{t_p} (t_p-t^1)^{n_i-1} (t_p-t^{1*})^{n_j-1}\, E\{\nu_i(t^1)\,\nu_j(t^{1*})\}\, dt^1\, dt^{1*}
\]
\[
= \frac{\sigma_i\sigma_j}{(n_i-1)!\,(n_j-1)!} \int_{t^1=t_{p-1}}^{t_p} (t_p-t^1)^{n_i-1} (t_p-t^1)^{n_j-1}\, c_{ij}\, dt^1
= \frac{c_{ij}\,\sigma_i\sigma_j}{(n_i-1)!\,(n_j-1)!} \int_{t^1=t_{p-1}}^{t_p} (t_p-t^1)^{n_i+n_j-2}\, dt^1
\]
\[
= \frac{c_{ij}\,\sigma_i\sigma_j\,(t_p-t_{p-1})^{n_i+n_j-1}}{(n_i-1)!\,(n_j-1)!\,(n_i+n_j-1)}
= \frac{c_{ij}\,\sigma_i\sigma_j\,\tau_p^{\,n_i+n_j-1}}{(n_i-1)!\,(n_j-1)!\,(n_i+n_j-1)}
\]
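The covariance formula can also be checked numerically. The sketch below (not from the report; the discretization and names are my own assumptions) approximates the continuous white noise on a fine grid, forms w_{i,n} via the one-dimensional integral of equation (8) for a single underlying noise sequence (so c_ij = 1), and compares the sample covariances over many trials with equation (21).

```python
import numpy as np
from math import factorial

tau, M, trials = 2.0, 400, 5000
dt = tau / M
t = (np.arange(M) + 0.5) * dt          # midpoints of the sub-intervals
rng = np.random.default_rng(3)

sigma = 1.0                            # one underlying noise sequence, so c_ij = 1
kern = np.array([(tau - t) ** (n - 1) / factorial(n - 1) for n in (1, 2, 3)])

# Each trial: white-noise increments dW ~ N(0, dt), then w_n = sum(kern_n * sigma * dW).
dW = rng.normal(0.0, np.sqrt(dt), size=(trials, M))
w = dW @ (sigma * kern).T              # shape (trials, 3): columns are w_{i,1}, w_{i,2}, w_{i,3}

sample_cov = (w.T @ w) / trials        # estimated E{w_{i,ni} w_{i,nj}}
model_cov = np.array([[sigma**2 * tau**(ni + nj - 1)
                       / (factorial(ni - 1) * factorial(nj - 1) * (ni + nj - 1))
                       for nj in (1, 2, 3)] for ni in (1, 2, 3)])
print(sample_cov)
print(model_cov)                       # the two matrices agree to Monte Carlo accuracy
```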

Appendix C: Full clock state noise covariance matrix

The noise in clock offset, frequency offset, and frequency drift is:

\[
w = \begin{pmatrix} w_{1,1}+w_{2,2}+w_{3,3} & w_{2,1}+w_{3,2} & w_{3,1} \end{pmatrix}^T
\]

For clocks k and l the noise covariance can be written as:

\[
Q_{kl}(p) \equiv E\{w_k(p)\, w_l^T(p)\} = \begin{pmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & q_{33} \end{pmatrix}
\]

where, e.g., q12 = E{(w_{1k,1}+w_{2k,2}+w_{3k,3}) · (w_{2l,1}+w_{3l,2})}. Using the general expression for the process noise covariance (derived in Appendix B)

\[
E\{w_{i,n_i} w_{j,n_j}\} = \frac{c_{ij}\,\sigma_i\sigma_j\,\tau_p^{\,n_i+n_j-1}}{(n_i-1)!\,(n_j-1)!\,(n_i+n_j-1)}
\]

we can express each element of Q in terms of the process standard deviations σ, the correlations c between the processes, and the time step τp as follows:

\[
q_{11} = c_{1k,1l}\sigma_{1k}\sigma_{1l}\tau_p + c_{1k,2l}\sigma_{1k}\sigma_{2l}\frac{\tau_p^2}{2} + c_{1k,3l}\sigma_{1k}\sigma_{3l}\frac{\tau_p^3}{6}
+ c_{2k,1l}\sigma_{2k}\sigma_{1l}\frac{\tau_p^2}{2} + c_{2k,2l}\sigma_{2k}\sigma_{2l}\frac{\tau_p^3}{3} + c_{2k,3l}\sigma_{2k}\sigma_{3l}\frac{\tau_p^4}{8}
+ c_{3k,1l}\sigma_{3k}\sigma_{1l}\frac{\tau_p^3}{6} + c_{3k,2l}\sigma_{3k}\sigma_{2l}\frac{\tau_p^4}{8} + c_{3k,3l}\sigma_{3k}\sigma_{3l}\frac{\tau_p^5}{20}
\]
\[
q_{12} = c_{1k,2l}\sigma_{1k}\sigma_{2l}\tau_p + c_{1k,3l}\sigma_{1k}\sigma_{3l}\frac{\tau_p^2}{2} + c_{2k,2l}\sigma_{2k}\sigma_{2l}\frac{\tau_p^2}{2} + c_{2k,3l}\sigma_{2k}\sigma_{3l}\frac{\tau_p^3}{3} + c_{3k,2l}\sigma_{3k}\sigma_{2l}\frac{\tau_p^3}{6} + c_{3k,3l}\sigma_{3k}\sigma_{3l}\frac{\tau_p^4}{8}
\]
\[
q_{13} = c_{1k,3l}\sigma_{1k}\sigma_{3l}\tau_p + c_{2k,3l}\sigma_{2k}\sigma_{3l}\frac{\tau_p^2}{2} + c_{3k,3l}\sigma_{3k}\sigma_{3l}\frac{\tau_p^3}{6}
\]
\[
q_{21} = c_{2k,1l}\sigma_{2k}\sigma_{1l}\tau_p + c_{2k,2l}\sigma_{2k}\sigma_{2l}\frac{\tau_p^2}{2} + c_{2k,3l}\sigma_{2k}\sigma_{3l}\frac{\tau_p^3}{6} + c_{3k,1l}\sigma_{3k}\sigma_{1l}\frac{\tau_p^2}{2} + c_{3k,2l}\sigma_{3k}\sigma_{2l}\frac{\tau_p^3}{3} + c_{3k,3l}\sigma_{3k}\sigma_{3l}\frac{\tau_p^4}{8}
\]
\[
q_{22} = c_{2k,2l}\sigma_{2k}\sigma_{2l}\tau_p + c_{2k,3l}\sigma_{2k}\sigma_{3l}\frac{\tau_p^2}{2} + c_{3k,2l}\sigma_{3k}\sigma_{2l}\frac{\tau_p^2}{2} + c_{3k,3l}\sigma_{3k}\sigma_{3l}\frac{\tau_p^3}{3}
\]
\[
q_{23} = c_{2k,3l}\sigma_{2k}\sigma_{3l}\tau_p + c_{3k,3l}\sigma_{3k}\sigma_{3l}\frac{\tau_p^2}{2}
\]
\[
q_{31} = c_{3k,1l}\sigma_{3k}\sigma_{1l}\tau_p + c_{3k,2l}\sigma_{3k}\sigma_{2l}\frac{\tau_p^2}{2} + c_{3k,3l}\sigma_{3k}\sigma_{3l}\frac{\tau_p^3}{6}
\]
\[
q_{32} = c_{3k,2l}\sigma_{3k}\sigma_{2l}\tau_p + c_{3k,3l}\sigma_{3k}\sigma_{3l}\frac{\tau_p^2}{2}
\]
\[
q_{33} = c_{3k,3l}\sigma_{3k}\sigma_{3l}\tau_p
\]
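Because each element of Qkl is a sum of the same covariance expression over the processes contributing to each noise component, the matrix is convenient to assemble programmatically. The sketch below (my own illustration; names and the correlation-lookup convention are assumptions) builds Qkl from equation (21) and the component structure of w.

```python
import numpy as np
from math import factorial

# Component structure of w: each state noise component is a sum of (process i, order n) terms.
COMPONENTS = [[(1, 1), (2, 2), (3, 3)],   # noise in clock offset
              [(2, 1), (3, 2)],           # noise in frequency offset
              [(3, 1)]]                   # noise in frequency drift

def cov_w(ni, nj, si, sj, c, tau):
    """Equation (21): covariance of two integrated noise terms over one step tau."""
    return c * si * sj * tau**(ni + nj - 1) / (
        factorial(ni - 1) * factorial(nj - 1) * (ni + nj - 1))

def q_kl(tau, sigma_k, sigma_l, corr):
    """Q_kl with sigma_k[i], sigma_l[j] for processes i, j = 1..3 and corr[(i, j)] = c_{ik,jl}."""
    Q = np.zeros((3, 3))
    for m, comp_k in enumerate(COMPONENTS):
        for n, comp_l in enumerate(COMPONENTS):
            Q[m, n] = sum(cov_w(ni, nj, sigma_k[i], sigma_l[j], corr.get((i, j), 0.0), tau)
                          for (i, ni) in comp_k for (j, nj) in comp_l)
    return Q

# Single clock (k = l) with its three processes mutually uncorrelated reproduces equation (22).
sigma = {1: 1.0, 2: 0.3, 3: 0.1}                   # arbitrary illustrative values
corr_same_clock = {(i, i): 1.0 for i in (1, 2, 3)}
print(q_kl(1.0, sigma, sigma, corr_same_clock))
```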

Appendix D: Derivation of triple difference variance

With φ = φ1 + φ2 + φ3 and each triple difference term denoted

\[
T_1 = \Delta\Delta\Delta\phi_1(p), \qquad T_2 = \Delta\Delta\Delta\phi_2(p), \qquad T_3 = \Delta\Delta\Delta\phi_3(p)
\]

we get the variance of the triple difference of φ as:

\[
E\{(\Delta\Delta\Delta\phi)^2\} = E\{(T_1+T_2+T_3)^2\}
= E\{T_1^2\} + E\{T_2^2\} + E\{T_3^2\} + 2E\{T_1T_2\} + 2E\{T_1T_3\} + 2E\{T_2T_3\}
\]

Below we derive an expression for each term using the process noise covariance equation

\[
E\{w_{i,n_i} w_{j,n_j}\} = \frac{c_{ij}\,\sigma_i\sigma_j\,\tau_p^{\,n_i+n_j-1}}{(n_i-1)!\,(n_j-1)!\,(n_i+n_j-1)}
\]

derived in Appendix B. We have also used the fact that the processes are white,

\[
E\{w_{i,n_i}(p)\, w_{j,n_j}(q)\} = 0, \qquad p \neq q
\]

i.e., we get covariance contributions only from identical time intervals.

\[
E\{T_1^2\} = E\{\left[w_{1,1}(p) - 2w_{1,1}(p-1) + w_{1,1}(p-2)\right]^2\}
\]
\[
= [\text{ covariance contributions only from identical time intervals }]
\]
\[
= E\{w_{1,1}^2(p)\} + 4E\{w_{1,1}^2(p-1)\} + E\{w_{1,1}^2(p-2)\}
\]
\[
= [\text{ statistics independent of which time interval is used }]
\]
\[
= 6E\{w_{1,1}^2\} = 6\sigma_1^2\tau
\]

\[
E\{T_2^2\} = E\{\left[\tau\left(w_{2,1}(p-1) - w_{2,1}(p-2)\right) + w_{2,2}(p) - 2w_{2,2}(p-1) + w_{2,2}(p-2)\right]^2\}
\]
\[
= [\text{ covariance contributions only from identical time intervals }]
\]
\[
= E\{w_{2,2}^2(p)\} + E\{\left[\tau w_{2,1}(p-1) - 2w_{2,2}(p-1)\right]^2\} + E\{\left[-\tau w_{2,1}(p-2) + w_{2,2}(p-2)\right]^2\}
\]
\[
= [\text{ statistics independent of which time interval is used }]
\]
\[
= E\{w_{2,2}^2\} + \tau^2 E\{w_{2,1}^2\} + 4E\{w_{2,2}^2\} - 4\tau E\{w_{2,1}w_{2,2}\} + \tau^2 E\{w_{2,1}^2\} + E\{w_{2,2}^2\} - 2\tau E\{w_{2,1}w_{2,2}\}
\]
\[
= 2\tau^2 E\{w_{2,1}^2\} + 6E\{w_{2,2}^2\} - 6\tau E\{w_{2,1}w_{2,2}\}
= \sigma_2^2\left(2\tau^2\cdot\tau + 6\cdot\frac{\tau^3}{3} - 6\tau\cdot\frac{\tau^2}{2}\right) = \sigma_2^2\tau^3
\]

\[
E\{T_3^2\} = E\left\{\left[\frac{\tau^2}{2}\left(w_{3,1}(p-1) + w_{3,1}(p-2)\right) + \tau\left(w_{3,2}(p-1) - w_{3,2}(p-2)\right) + w_{3,3}(p) - 2w_{3,3}(p-1) + w_{3,3}(p-2)\right]^2\right\}
\]
\[
= [\text{ covariance contributions only from identical time intervals }]
\]
\[
= E\{w_{3,3}^2(p)\} + E\left\{\left[\frac{\tau^2}{2} w_{3,1}(p-1) + \tau w_{3,2}(p-1) - 2w_{3,3}(p-1)\right]^2\right\} + E\left\{\left[\frac{\tau^2}{2} w_{3,1}(p-2) - \tau w_{3,2}(p-2) + w_{3,3}(p-2)\right]^2\right\}
\]
\[
= [\text{ statistics independent of which time interval is used }]
\]
\[
= E\{w_{3,3}^2\} + \frac{\tau^4}{4}E\{w_{3,1}^2\} + \tau^2 E\{w_{3,2}^2\} + 4E\{w_{3,3}^2\} + \tau^3 E\{w_{3,1}w_{3,2}\} - 2\tau^2 E\{w_{3,1}w_{3,3}\} - 4\tau E\{w_{3,2}w_{3,3}\}
\]
\[
\;\;+ \frac{\tau^4}{4}E\{w_{3,1}^2\} + \tau^2 E\{w_{3,2}^2\} + E\{w_{3,3}^2\} - \tau^3 E\{w_{3,1}w_{3,2}\} + \tau^2 E\{w_{3,1}w_{3,3}\} - 2\tau E\{w_{3,2}w_{3,3}\}
\]
\[
= \frac{\tau^4}{2}E\{w_{3,1}^2\} + 2\tau^2 E\{w_{3,2}^2\} + 6E\{w_{3,3}^2\} - \tau^2 E\{w_{3,1}w_{3,3}\} - 6\tau E\{w_{3,2}w_{3,3}\}
\]
\[
= \sigma_3^2\left(\frac{\tau^4}{2}\cdot\tau + 2\tau^2\cdot\frac{\tau^3}{3} + 6\cdot\frac{\tau^5}{20} - \tau^2\cdot\frac{\tau^3}{6} - 6\tau\cdot\frac{\tau^4}{8}\right) = \frac{11}{20}\sigma_3^2\tau^5
\]

\[
E\{T_1T_2\} = E\{\left[w_{1,1}(p) - 2w_{1,1}(p-1) + w_{1,1}(p-2)\right]\cdot\left[\tau\left(w_{2,1}(p-1) - w_{2,1}(p-2)\right) + w_{2,2}(p) - 2w_{2,2}(p-1) + w_{2,2}(p-2)\right]\}
\]
\[
= [\text{ covariance contributions only from identical time intervals }]
\]
\[
= E\{w_{1,1}(p)\, w_{2,2}(p)\} + E\{-2w_{1,1}(p-1)\left[\tau w_{2,1}(p-1) - 2w_{2,2}(p-1)\right]\} + E\{w_{1,1}(p-2)\left[-\tau w_{2,1}(p-2) + w_{2,2}(p-2)\right]\}
\]
\[
= [\text{ statistics independent of which time interval is used }]
\]
\[
= E\{w_{1,1}w_{2,2}\} - 2\tau E\{w_{1,1}w_{2,1}\} + 4E\{w_{1,1}w_{2,2}\} - \tau E\{w_{1,1}w_{2,1}\} + E\{w_{1,1}w_{2,2}\}
= -3\tau E\{w_{1,1}w_{2,1}\} + 6E\{w_{1,1}w_{2,2}\}
\]
\[
= c_{12}\sigma_1\sigma_2\left(-3\tau\cdot\tau + 6\cdot\frac{\tau^2}{2}\right) = 0
\]

\[
E\{T_1T_3\} = E\left\{\left[w_{1,1}(p) - 2w_{1,1}(p-1) + w_{1,1}(p-2)\right]\cdot\left[\frac{\tau^2}{2}\left(w_{3,1}(p-1) + w_{3,1}(p-2)\right) + \tau\left(w_{3,2}(p-1) - w_{3,2}(p-2)\right) + w_{3,3}(p) - 2w_{3,3}(p-1) + w_{3,3}(p-2)\right]\right\}
\]
\[
= [\text{ covariance contributions only from identical time intervals }]
\]
\[
= E\{w_{1,1}(p)\, w_{3,3}(p)\} + E\left\{-2w_{1,1}(p-1)\left[\frac{\tau^2}{2} w_{3,1}(p-1) + \tau w_{3,2}(p-1) - 2w_{3,3}(p-1)\right]\right\} + E\left\{w_{1,1}(p-2)\left[\frac{\tau^2}{2} w_{3,1}(p-2) - \tau w_{3,2}(p-2) + w_{3,3}(p-2)\right]\right\}
\]
\[
= [\text{ statistics independent of which time interval is used }]
\]
\[
= E\{w_{1,1}w_{3,3}\} - \tau^2 E\{w_{1,1}w_{3,1}\} - 2\tau E\{w_{1,1}w_{3,2}\} + 4E\{w_{1,1}w_{3,3}\} + \frac{\tau^2}{2}E\{w_{1,1}w_{3,1}\} - \tau E\{w_{1,1}w_{3,2}\} + E\{w_{1,1}w_{3,3}\}
\]
\[
= -\frac{\tau^2}{2}E\{w_{1,1}w_{3,1}\} - 3\tau E\{w_{1,1}w_{3,2}\} + 6E\{w_{1,1}w_{3,3}\}
= c_{13}\sigma_1\sigma_3\left(-\frac{\tau^2}{2}\cdot\tau - 3\tau\cdot\frac{\tau^2}{2} + 6\cdot\frac{\tau^3}{6}\right) = -c_{13}\sigma_1\sigma_3\tau^3
\]

\[
E\{T_2T_3\} = E\left\{\left[\tau\left(w_{2,1}(p-1) - w_{2,1}(p-2)\right) + w_{2,2}(p) - 2w_{2,2}(p-1) + w_{2,2}(p-2)\right]\cdot\left[\frac{\tau^2}{2}\left(w_{3,1}(p-1) + w_{3,1}(p-2)\right) + \tau\left(w_{3,2}(p-1) - w_{3,2}(p-2)\right) + w_{3,3}(p) - 2w_{3,3}(p-1) + w_{3,3}(p-2)\right]\right\}
\]
\[
= [\text{ covariance contributions only from identical time intervals }]
\]
\[
= E\{w_{2,2}(p)\, w_{3,3}(p)\} + E\left\{\left[\tau w_{2,1}(p-1) - 2w_{2,2}(p-1)\right]\left[\frac{\tau^2}{2} w_{3,1}(p-1) + \tau w_{3,2}(p-1) - 2w_{3,3}(p-1)\right]\right\} + E\left\{\left[-\tau w_{2,1}(p-2) + w_{2,2}(p-2)\right]\left[\frac{\tau^2}{2} w_{3,1}(p-2) - \tau w_{3,2}(p-2) + w_{3,3}(p-2)\right]\right\}
\]
\[
= [\text{ statistics independent of which time interval is used }]
\]
\[
= E\{w_{2,2}w_{3,3}\} + \frac{\tau^3}{2}E\{w_{2,1}w_{3,1}\} + \tau^2 E\{w_{2,1}w_{3,2}\} - 2\tau E\{w_{2,1}w_{3,3}\} - \tau^2 E\{w_{2,2}w_{3,1}\} - 2\tau E\{w_{2,2}w_{3,2}\} + 4E\{w_{2,2}w_{3,3}\}
\]
\[
\;\;- \frac{\tau^3}{2}E\{w_{2,1}w_{3,1}\} + \tau^2 E\{w_{2,1}w_{3,2}\} - \tau E\{w_{2,1}w_{3,3}\} + \frac{\tau^2}{2}E\{w_{2,2}w_{3,1}\} - \tau E\{w_{2,2}w_{3,2}\} + E\{w_{2,2}w_{3,3}\}
\]
\[
= 2\tau^2 E\{w_{2,1}w_{3,2}\} - 3\tau E\{w_{2,1}w_{3,3}\} - \frac{\tau^2}{2}E\{w_{2,2}w_{3,1}\} - 3\tau E\{w_{2,2}w_{3,2}\} + 6E\{w_{2,2}w_{3,3}\}
\]
\[
= c_{23}\sigma_2\sigma_3\left(2\tau^2\cdot\frac{\tau^2}{2} - 3\tau\cdot\frac{\tau^3}{6} - \frac{\tau^2}{2}\cdot\frac{\tau^2}{2} - 3\tau\cdot\frac{\tau^3}{3} + 6\cdot\frac{\tau^4}{8}\right) = 0
\]

SP Technical Research Institute of Sweden
Box 857, SE-501 15 BORÅS, SWEDEN
Telephone: +46 10 516 50 00, Telefax: +46 33 13 55 02
E-mail: info@sp.se, Internet: www.sp.se

SP Report 2016:48
ISSN 0284-5172

More information about publications published by SP: www.sp.se/publ

SP Sveriges Tekniska Forskningsinstitut

The SP Group's vision is to be an internationally leading innovation partner. Our 1,400 employees, of whom more than half hold university degrees and around 380 have postgraduate research training, constitute a significant knowledge resource. Each year we carry out assignments for more than 10,000 customers, strengthening their competitiveness and contributing to sustainable development. The assignments range from cross-disciplinary research and innovation projects to market-oriented work in testing and certification. Our six business areas (ICT, Risk and Safety, Energy, Transport, Built Environment, and Life Science) respond to the needs of society and industry and link together the Group's technical units and subsidiaries. The SP Group has an annual turnover of about SEK 1.5 billion and is owned by the Swedish state through RISE Research Institutes of Sweden AB.

SP Technical Research Institute of Sweden

Our work is concentrated on innovation and the development of value-adding technology. Using Sweden's most extensive and advanced resources for technical evaluation, measurement technology, research and development, we make an important contribution to the competitiveness and sustainable development of industry. Research is carried out in close conjunction with universities and institutes of technology, to the benefit of a customer base of about 10000 organisations, ranging from start-up companies developing new technologies or new ideas to international groups.
