
Deterministic dynamical bounds on moments of nonstationary stochastic processes

P. Carrette

Department of Electrical Engineering, Linköping University, S-581 83 Linköping, Sweden

WWW: http://www.control.isy.liu.se
Email: carrette@isy.liu.se

April 8, 1998

REGLERTEKNIK
AUTOMATIC CONTROL LINKÖPING

Report no.: LiTH-ISY-R-2022
Submitted to Systems and Control Letters

Technical reports from the Automatic Control group in Linköping are available by anonymous ftp at the address ftp.control.isy.liu.se. This report is contained in the compressed postscript file 2022.ps.Z.


Abstract

In this contribution, we deal with the deterministic dominance of the probability moments of stochastic processes. More precisely, given a positive stochastic process, we propose to dominate its probability moment sequence by the trajectories of appropriate lower and upper dominating deterministic processes. The analysis of the behavior of the original stochastic process is then transferred to the stability analysis of the deterministic dominating processes.

The result is applied to a nonstationary auto-regressive process that appears in the system identification literature.

Keywords: nonstationary stochastic process, probability theory, nonlinear dynamic system, stability analysis, trajectory bounding.

1 Introduction

In general, the evolution of nonstationary stochastic processes is hard to obtain from the statement of the underlying stochastic equation (see [2, 7, 9, 3] and [6, chap. 13]). As a motivating example, let us consider the positive scalar stochastic process r_k defined by

    r_k = (1 - \lambda) r_{k-1} + \varphi_k^2,   k > 0   (1)

where r_0 \geq 0, \lambda \in (0, 1) and \varphi_k denotes a random variable whose distribution is subject to the following "excitation" condition

    \varphi_k^2 \geq \delta r_{k-1}   (2)

with \delta \in (0, 1]. Note that this process arises in time-varying system identification by use of a constrained forgetting factor recursive least squares algorithm (as introduced in [4], see also [8]).

Obviously, the process r_k cannot be considered anymore as an auto-regressive (AR) process [1, chap. 5]. Roughly speaking, it can be viewed as a nonstationary AR process.


As valuable characteristics of stochastic processes lie in their probability moments, it is natural to ask for the trajectory of the probability moments of r_k along k. But this is a hard problem to solve. The reason for this is that the excitation condition (2) is such that the probability distribution of \varphi_k depends on all its past samples (due to r_{k-1}). Thus, the distribution of the sample r_k depends in a very intricate way on its past values, and so does the evaluation of its moments.

Then, instead of asking for the exact value of these moments, one may be less ambitious and desire to only characterize the evolution of their trajectory (along k). This is the purpose of the paper.

Here, we propose to lower and upper dominate the trajectory of the probability moments of nonstationary stochastic processes by the solutions of deterministic dynamical equations. Our contribution is as follows.

Given a positive scalar stochastic process x_k, we show that, under functional assumptions on its conditional (onto past samples) probability moments, it is possible to trace the evolution of its probability moments on that of the output of appropriately defined lower and upper bounding deterministic dynamic systems, i.e.

    w_k \leq E(x_k) \leq z_k

with w_k = g(w_{k-1}) and z_k = f(z_{k-1}). Hence, valuable properties of these moments can be obtained from the stability analysis of these bounding dynamic systems, e.g. equilibrium points and convergence rates.

For illustration purposes, our results will be applied to the stochastic process r_k in order to derive bounds upon its probability moments, i.e. E(r_k^n), and on those of its inverse process p_k = 1/r_k, i.e. E(p_k^n).

The structure of the paper is as follows. Our main result is stated in Section 2. It deals with the convex and concave functional boundedness of the trajectory of the conditional expectation of a positive scalar stochastic process. Consequences of this functional property on the evolution of the process expectation are provided. In Section 3, we develop a simple algorithm for practically evaluating convex (lower) and concave (upper) functional bounds on a given function representing conditional expectation dynamics. In Section 4, we apply our results to the stochastic equation (1) under the excitation condition (2). More precisely, we derive deterministic dynamics dominating those of the moments of the stochastic processes r_k and p_k. Finally, simulations are provided for a particular distribution of the sequence \varphi_k.

2 Deterministic dominance of stochastic processes

In this section, we are interested in evaluating convergence bounds on the expectation of a positive scalar stochastic process x_k. Therefore, we propose to dominate this expectation by the trajectories of appropriate lower and upper bounding deterministic processes, i.e.

    w_k \leq E(x_k) \leq z_k   for k \geq 0.

The convergence analysis of the original expectation is then transferred to that of the deterministic dominating dynamics.


The following theorem presents our main result.

Theorem 1  Let x_k (with k > 0) be a positive stochastic process such that

    g(x_{k-1}) \leq E(x_k | F_{k-1}) \leq f(x_{k-1})   a.e.   (3)

where F_{k-1} = \sigma\{x_j : 0 \leq j < k\} is the \sigma-algebra generated by the past events of the process (so that F_{k-1} \subseteq F_k), and where the functions g(x) and f(x) are continuous nonnegative convex and concave functions in R^+, respectively. Then,

    w_i \leq E(x_k | F_{k-i}) \leq z_i   (4)

where w_i, z_i > 0 are the samples of particular trajectories of the following deterministic scalar processes: w_i = g(w_{i-1}) and z_i = f(z_{i-1}) with w_0 = z_0 = x_{k-i}.

Before going into the proof, let us note that the stochastic inequality (3) holds uniformly in k. For example, in the case of a stochastic process x_k = h(e_k) with a random sequence e_k possibly dependent on the past of x_k (i.e. x_{k-1}, ..., x_0), we can write

    E(x_k | F_{k-1}) = h_k(x_{k-1})

where h_k(x) is possibly non-uniform in k. Then, by defining h^-(x) := \min_k h_k(x) and h^+(x) := \max_k h_k(x) over x > 0, we obtain g(x) \leq h^-(x) as well as h^+(x) \leq f(x) with the desired properties for g(x) and f(x), if possible. If not, the associated deterministic process bound does not hold.

Proof: By use of Jensen's inequality [5, page 47], the concavity (resp. convexity) property of f(x) (resp. g(x)) leads to

    E(f(x)) \leq f(E(x))   (resp. g(E(x)) \leq E(g(x)))

for any positive random variable x. Now, the quantity E(x_k | F_{k-i}) is recursively defined by E(x_k | F_{k-i}) = E( E(x_k | F_{k-i+1}) | F_{k-i} ) for i < k. So that, applying (3) together with Jensen's inequality at each stage,

    E(x_k | F_{k-i}) = E( E( \cdots E(x_k | F_{k-1}) | F_{k-2} ) \cdots | F_{k-i} )
                     \leq E( E( \cdots f(x_{k-1}) | F_{k-2} ) \cdots | F_{k-i} )
                     \leq E( \cdots f(f(x_{k-2})) \cdots | F_{k-i} )
                     \leq f(f( \cdots x_{k-i} \cdots ))

with i compositions of the concave function f(x). For the lower bound (i.e. in terms of the convex function g(x)), we similarly have

    E(x_k | F_{k-i}) \geq g(g( \cdots x_{k-i} \cdots )).

Finally, the definition of the w_i and z_i processes leads to w_i = g(g( \cdots w_0 \cdots )) and z_i = f(f( \cdots z_0 \cdots )) with i compositions of g(x) and f(x), respectively. Hence, the proof is completed by taking w_0 = z_0 = x_{k-i}.
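Theorem 1 can be illustrated numerically on assumed toy dynamics (this is a hedged sketch, not the paper's example): take x_k = a x_{k-1} + e_k where the conditional mean of the positive noise e_k lies in [b1, b2], so that condition (3) holds with the linear (hence both convex and concave) functions g(x) = a x + b1 and f(x) = a x + b2. Iterating g and f from x_0 then sandwiches a Monte Carlo estimate of E(x_k):

```python
import math
import random

# assumed toy parameters (not from the paper)
a, b1, b2 = 0.7, 0.5, 1.0
x0, k = 2.0, 10

def cond_mean(x):
    # state-dependent conditional mean of e_k, always inside [b1, b2]
    return b1 + (b2 - b1) * math.exp(-x)

rng = random.Random(0)

def simulate():
    x = x0
    for _ in range(k):
        # e_k is exponential with mean cond_mean(x_{k-1})
        x = a * x + rng.expovariate(1.0 / cond_mean(x))
    return x

mc_mean = sum(simulate() for _ in range(20000)) / 20000

# deterministic dominating trajectories of Theorem 1: w_i = g(w_{i-1}), z_i = f(z_{i-1})
w = z = x0
for _ in range(k):
    w, z = a * w + b1, a * z + b2

assert w <= mc_mean <= z   # the sandwich (4) with i = k, F_0-conditioning
```

The linear choice makes the dominating trajectories available in closed form, w_k = a^k x_0 + b1 (1 - a^k)/(1 - a), which is convenient for checking the convergence-rate claims of Section 2.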


It follows from this result that the convergence properties of the expectation of the stochastic process x_k can be estimated by the analysis of particular deterministic positive processes. The two following lemmas exhibit properties of their underlying dynamics, i.e. f(x) and g(x) respectively.

Lemma 2  Let z_i (with i > 0) be the following positive scalar process

    z_i = f(z_{i-1})   with z_0 > 0

where f(z) is a nonnegative nondecreasing concave function in R^+. If there exists z^* > 0 such that f(z^*) = z^* and f(z) < z for z > z^*, then z^* is an attractive equilibrium point for z > z^*, i.e.

    z_i \leq z^* + \gamma_z^{-1} (z_{i-1} - z^*)   (5)

with \gamma_z > 1. Globally, we have that \lim_{i \to \infty} z_i \leq z^*.

Proof: First, we derive some properties of the function f(z). It is nonnegative: f(z) \geq 0 for z \geq 0. It is nondecreasing and concave: 0 \leq f'_+(z_2) \leq f'_+(z_1) for 0 \leq z_1 \leq z_2, with f'_+(z) the right derivative of f(z). By assumption, 0 \leq f'_+(z^*) < 1 and f(z^*) > 0, so that f(z) > 0 for all z > 0 and either z < f(z) \leq z^* or f(z) = z for 0 < z < z^*. We also have z^* \leq f(z) < z for z > z^*, by the nondecreasing property.

Now, we show that [z^*, z_M] (with z_M < \infty) is a positively invariant compact set and we derive the result in (5). From above, if z_{i-1} \in [z^*, z_M] then z^* \leq z_i = f(z_{i-1}) \leq z_{i-1}, so that z_i \in [z^*, z_M]. Moreover, as f(z) < z for z > z^*, we have that z_i < z_{i-1} for z_{i-1} > z^*. This means that the equilibrium point z^* (i.e. f(z^*) = z^*) is attractive from above. And simple calculations give:

    (f(z) - z^*)/(z - z^*) \leq (f(z^*) + f'_+(z^*)(z - z^*) - z^*)/(z - z^*) = f'_+(z^*)

for z > z^*. Hence, \gamma_z^{-1} = f'_+(z^*) < 1 in (5).

Finally, the positive invariance of [0, z^*] (i.e. if z_{i-1} \in [0, z^*] then 0 \leq z_i = f(z_{i-1}) \leq z^*, so z_i \in [0, z^*]) completes the proof of the lemma.
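The contraction (5) is easy to verify on a simple concave map (a hedged illustration with f(z) = sqrt(z), which is not taken from the paper): the fixed point is z^* = 1 and \gamma_z^{-1} = f'_+(1) = 1/2, so every step started above the equilibrium must satisfy the per-step bound:

```python
import math

f = math.sqrt            # nonnegative, nondecreasing, concave on R+
z_star, rate = 1.0, 0.5  # fixed point and gamma_z^{-1} = f'(z_star)

z = 4.0                  # start above the equilibrium
traj = [z]
for _ in range(30):
    z_next = f(z)
    # per-step contraction from (5): z_i <= z* + gamma^{-1} (z_{i-1} - z*)
    assert z_next <= z_star + rate * (z - z_star) + 1e-12
    z = z_next
    traj.append(z)

assert abs(traj[-1] - z_star) < 1e-6   # convergence to the equilibrium
```

The per-step inequality here is exactly the tangent-line bound used in the proof: sqrt(z) lies below its tangent 1 + (z - 1)/2 at z^* = 1.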

Similarly, we have the following for the process w_i in Theorem 1.

Lemma 3  Let w_i (with i > 0) be the following positive scalar process

    w_i = g(w_{i-1})   with w_0 > 0

where g(w) is a positive nondecreasing convex function in R^+. If there exists w^* > 0 such that g(w^*) = w^* and g(w) > w for w < w^*, then w^* is an attractive equilibrium point for w < w^*, i.e.

    w_i \geq w^* - \gamma_w^{-1} (w^* - w_{i-1})

with \gamma_w > 1. Globally, we have that \lim_{i \to \infty} w_i \geq w^*.


Proof: It is similar to the one of Lemma 2. In this case, \gamma_w can be linked with the left derivative of g(w) evaluated at w = w^*: \gamma_w^{-1} = g'_-(w^*) < 1, with g'_-(w) the left derivative of g(w).
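As with Lemma 2, the convex case can be sanity-checked numerically (a sketch with an assumed toy map, not taken from the paper): g(w) = 0.3 + 0.6 w + 0.1 w^2 is positive, nondecreasing and convex, fixes w^* = 1 with g(w) > w below it, and has g'_-(1) = 0.8, i.e. \gamma_w^{-1} = 0.8:

```python
def g(w):
    # assumed convex, positive, nondecreasing toy map (not from the paper)
    return 0.3 + 0.6 * w + 0.1 * w * w

w_star, rate = 1.0, 0.8    # g(1) = 1 and gamma_w^{-1} = g'(1) = 0.8

w = 0.2                    # start below the equilibrium
for _ in range(60):
    w_next = g(w)
    # per-step bound from Lemma 3: w_i >= w* - gamma^{-1} (w* - w_{i-1})
    assert w_next >= w_star - rate * (w_star - w) - 1e-12
    assert w <= w_next <= w_star + 1e-12   # monotone approach from below
    w = w_next

assert abs(w - w_star) < 1e-5
```

Here the factorization 1 - g(w) = (1 - w)(0.7 + 0.1 w) shows the per-step ratio never exceeds g'(1) = 0.8 on [0, 1], which is the content of the lemma.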

Hence, provided that the stochastic process x_k satisfies the condition (3) in Theorem 1 and that the corresponding functions f(x) and g(x) exhibit the characteristics described in Lemma 2 and Lemma 3, respectively, we have that

    w^* - \gamma_w^{-k} (w^* - x_0) I_{x_0 < w^*} \leq E(x_k) \leq z^* + \gamma_z^{-k} (x_0 - z^*) I_{x_0 > z^*}   (6)

with E(x_k) = E(x_k | F_0), and where I_X denotes the indicator function of the condition X. Of course, if either g(x) or f(x) cannot be found, then the corresponding convergence property does not hold anymore. Actually, either w^* = 0 or z^* = \infty in that case.

3 Algorithm for practical dominating functions

For a constructive use of the result presented in Section 2, we now propose to evaluate, at least numerically, dominating convex and concave functions (i.e. g(x) and f(x), respectively) of a particular function h(x) for x \geq 0. The two corresponding procedures read as:

For the convex functional lower bound on h(x) with a positive and nondecreasing function g(x), we first set g(0) = \min_{x \geq 0} h(x). Then, we estimate g(x) for x > 0 by successive Euler integration steps (with an increment \Delta x for x), i.e.

    g(x_i) = g(x_{i-1}) + g'_-(x_i) \Delta x   for x_i = x_{i-1} + \Delta x

where g'_-(x_i) = \max(g'_-(x_{i-1}), \alpha_i) and \alpha_i is such that the line g(x_{i-1}) + \alpha_i (x - x_{i-1}) is tangent to h(x), from below, for x > x_{i-1}, while g'_-(0) = 0.

For the concave functional upper bound on h(x) with a nonnegative and nondecreasing function f(x), we first set f(0) = h(0). Then, we successively perform

    f(x_i) = f(x_{i-1}) + f'_-(x_i) \Delta x   for x_i = x_{i-1} + \Delta x

where f'_-(x_i) = \min(f'_-(x_{i-1}), \beta_i) and \beta_i is such that the line f(x_{i-1}) + \beta_i (x - x_{i-1}) is tangent to h(x), from above, for x > x_{i-1}, while f'_-(0) is taken as large as possible in order to have f'_-(x_1) = \beta_1.
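The tangent-line recursions above can be approximated on a grid. The sketch below is an assumed discrete variant (piecewise-linear hull envelopes of the sampled graph rather than the paper's Euler integration) that produces a concave upper bound and a convex lower bound of a sampled test function h and checks the domination properties:

```python
def cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points, keep_upper):
    # monotone-chain pass over points sorted by x:
    # keep_upper=True builds the concave (upper) envelope, else the convex (lower) one
    out = []
    for p in points:
        while len(out) >= 2 and (cross(out[-2], out[-1], p) >= 0) == keep_upper:
            out.pop()
        out.append(p)
    return out

def interp(env, x):
    # piecewise-linear evaluation of the envelope at abscissa x
    for (x0, y0), (x1, y1) in zip(env, env[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return env[-1][1]

h = lambda x: x * x / (0.1 + x * x)   # assumed S-shaped test function
xs = [i * 0.01 for i in range(201)]   # grid on [0, 2]
pts = [(x, h(x)) for x in xs]

f_env = hull(pts, True)    # concave upper bound: f(x) >= h(x)
g_env = hull(pts, False)   # convex lower bound:  g(x) <= h(x)

assert all(interp(f_env, x) >= h(x) - 1e-9 for x in xs)
assert all(interp(g_env, x) <= h(x) + 1e-9 for x in xs)
```

Like the procedures in the text, the envelopes agree with h(x) where h is already concave (resp. convex) and replace it by straight segments elsewhere.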

In Figure 1, we present the result obtained by these two procedures for a given function h(x). We have also displayed the attractive equilibrium point z^* (resp. w^*) associated to the z (resp. w) process in Lemma 2 (resp. 3).

It appears that the growth rates of these two dominating functions are asymptotically identical, i.e. h'(x) for x \gg 1. Note also that the equilibrium point interval, i.e. [w^*, z^*], is rather extended over the abscissas where the original function h(x) varies.


Figure 1: Practical concave ('--') and convex ('-') dominating functions of the original h(x) (solid). Corresponding equilibrium points: z^* ('*') and w^* ('o').

4 Application to the stochastic process example

In this section, we analyze in some detail the stochastic process r_k introduced in (1). More precisely, we provide lower and upper dynamical bounds on the trajectory of its probability moments, i.e. E(r_k^n), and those of its inverse, i.e. E(p_k^n).

In view of Theorem 1, such dynamical bounding is achieved by deriving lower convex and upper concave functional bounds on the conditional (onto the past events) expectation of the corresponding power of these processes, i.e. E(x_k | F_{k-1}) with x_k = r_k^n for the n-th probability moments of r_k. Let us then evaluate bounding functionals for the conditional expectation of the n-th power of the two processes successively.

The n-th power of the stochastic equation governing the process r_k is written as

    r_k^n = [(1 - \lambda) r_{k-1} + \varphi_k^2]^n = r_{k-1}^n [(1 - \lambda)^n + P_n(\beta_{k|k-1})]

with \beta_{k|k-1} = \varphi_k^2 / r_{k-1} and P_n(x) = ((1 - \lambda) + x)^n - (1 - \lambda)^n > 0 for x > 0. Note that P_n(x) is monotonically increasing in x with P_n(0) = 0.
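The algebraic identity behind this rewriting is straightforward to verify numerically (a sketch with assumed sample values for \lambda, n, r_{k-1} and \varphi_k^2, chosen only for illustration):

```python
lam, n = 0.4, 3          # assumed forgetting factor and moment order
r_prev, phi2 = 2.0, 0.9  # assumed previous sample and excitation energy

def P(x, n=n, lam=lam):
    # P_n(x) = ((1 - lam) + x)^n - (1 - lam)^n, increasing with P_n(0) = 0
    return ((1 - lam) + x) ** n - (1 - lam) ** n

beta = phi2 / r_prev                           # beta_{k|k-1}
lhs = ((1 - lam) * r_prev + phi2) ** n         # r_k^n from the recursion (1)
rhs = r_prev ** n * ((1 - lam) ** n + P(beta)) # the factored form above

assert abs(lhs - rhs) < 1e-12
assert P(0.0) == 0.0 and P(0.2) < P(0.5)       # monotone increasing from 0
```

This is the factorization that lets the conditional expectation of r_k^n be expressed through the conditional expectation of P_n(\beta_{k|k-1}) alone.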

The evaluation of the conditional expectation of r_k^n with respect to the past events gives

    E(r_k^n | F_{k-1}) = r_{k-1}^n [(1 - \lambda)^n + E(P_n(\beta_{k|k-1}) | F_{k-1})].

Then, we derive the following functional bounds that are uniform in k:

    r_{k-1}^n [(1 - \lambda)^n + Q_n^-(r_{k-1}^n)] \leq E(r_k^n | F_{k-1}) \leq r_{k-1}^n [(1 - \lambda)^n + Q_n^+(r_{k-1}^n)]   (7)


for appropriate functions Q_n^-(x) (resp. Q_n^+(x)) defined similarly to g(x) (resp. f(x)) in Theorem 1 for P_n(\beta_{k|k-1}).

Moreover, when appropriate (lower convex and upper concave, respectively) dominating functions are estimated for E(r_k^n | F_{k-1}) (as presented in Section 3), we can define \tilde{Q}_n^-(x) and \tilde{Q}_n^+(x) as the functions that make the bounding expressions in (7) identical to the corresponding dominating function estimates. The attractive equilibrium points of these dominating processes, i.e. w_r^* and z_r^*, are found by solving

    \tilde{Q}_n^-(w_r^*) = \tilde{Q}_n^+(z_r^*) = 1 - (1 - \lambda)^n

while the lower bounds for the convergence rates to these solutions (see expression (6)) are given by

    \gamma_{w_r}^{-1} = 1 + w_r^* (\tilde{Q}_n^-(w_r^*))'_-   and   \gamma_{z_r}^{-1} = 1 + z_r^* (\tilde{Q}_n^+(z_r^*))'_+

where (h(x^*))'_- (resp. (h(x^*))'_+) stands for the first left (resp. right) derivative of h(x) evaluated at x = x^*.

The n-th power of the process p_k, i.e. p_k^n, is treated similarly. We first write

    p_k^n = p_{k-1}^n [(1 - \lambda)^n + P_n(\beta_{k|k-1})]^{-1}

where P_n(x) is the same polynomial as before and \beta_{k|k-1} can be written as \beta_{k|k-1} = \varphi_k^2 p_{k-1}. Then, the uniform (in k) functional bounds on E(p_k^n | F_{k-1}) take the following form

    p_{k-1}^n [(1 - \lambda)^n + T_n^-(p_{k-1}^n)]^{-1} \leq E(p_k^n | F_{k-1}) \leq p_{k-1}^n [(1 - \lambda)^n + T_n^+(p_{k-1}^n)]^{-1}   (8)

for appropriate functions T_n^-(x) and T_n^+(x). In fact, by Jensen's inequality, it can be shown that T_n^+(x) \leq Q_n^+(1/x) \leq T_n^-(x) with Q_n^+(x) from above.

Finally, when convex and concave dominating functions are estimated for the lower and upper bounds of E(p_k^n | F_{k-1}), we similarly obtain the functions \tilde{T}_n^-(x) and \tilde{T}_n^+(x). The attractive equilibrium points of these dominating processes, i.e. w_p^* and z_p^*, are found by solving

    \tilde{T}_n^-(w_p^*) = \tilde{T}_n^+(z_p^*) = 1 - (1 - \lambda)^n

while the lower bounds for the convergence rates to these solutions are given by

    \gamma_{w_p}^{-1} = 1 - w_p^* (\tilde{T}_n^-(w_p^*))'_-   and   \gamma_{z_p}^{-1} = 1 - z_p^* (\tilde{T}_n^+(z_p^*))'_+.

In the next section, we give simulations of the dynamical (lower convex and upper concave) bounds we have derived for the probability moments of the processes r_k and p_k in the case of a particular distribution of their "independent" random variable \varphi_k. The role of the corresponding equilibrium points, i.e. w^* and z^*, will also be demonstrated.


5 Simulation results

Here, we illustrate the theoretical results presented in the preceding sections. More precisely, we evaluate asymptotic bounds on the trajectories of the second probability moments of the stochastic processes r_k and p_k = 1/r_k.

As seen above, these bounds are made of the equilibrium points of the dominating deterministic dynamics associated to the corresponding conditional moment trajectories, i.e. E(r_k^2 | F_{k-1}) and E(p_k^2 | F_{k-1}). By use of Jensen's inequality, we further have that

    1/z_p^* \leq [E(p_k^2)]^{-1} \leq E(r_k^2) \leq z_r^*   for large k   (9)

where z_r^* (resp. z_p^*) is related to the estimated function \tilde{Q}_2^+(r^2) (resp. \tilde{T}_2^+(p^2)) introduced in Section 4.

First, let us introduce the density function of the "input" random variable. Each sample \varphi_k is taken independently of the others. Its energy density function is based upon a reference density function, i.e. d(\varphi^2), whose distribution function is denoted D(\varphi^2). We then consider a modification of this reference density function in order to generate energy sequences that satisfy the excitation condition \varphi_k^2 \geq \delta r_{k-1} along k \geq 0 for a chosen value of \delta \in (0, 1]. By defining the conditional (on \sigma^2) reference density function as

    d(\varphi^2 | \sigma^2) = \frac{1}{1 - D(\sigma^2)} d(\varphi^2) I_{\varphi^2 \geq \sigma^2}

the sample \varphi_k^2 can be seen as a random variable following a density function identical to d(\cdot | \delta r_{k-1}).
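Sampling from this conditional density is easy when the reference density is tractable. The sketch below assumes an exponential reference density d(\varphi^2) = e^{-\varphi^2} (not necessarily the one used in the paper) so that, by memorylessness, the truncated sample is simply the threshold plus a fresh exponential draw; the simulated r_k then satisfies the excitation condition (2) by construction:

```python
import random

# assumed parameters; delta < lam is chosen so that r_k stays stable
# for this particular exponential reference density
lam, delta = 0.4, 0.2
rng = random.Random(1)

def sample_phi2(threshold):
    # draw phi^2 from d(. | threshold): for an exponential reference density,
    # the sample truncated to [threshold, inf) is threshold + Exp(1)
    return threshold + rng.expovariate(1.0)

r, total = 1.0, 0.0
for _ in range(2000):
    phi2 = sample_phi2(delta * r)
    assert phi2 >= delta * r        # excitation condition (2) holds by construction
    r = (1 - lam) * r + phi2        # recursion (1)
    total += r

avg = total / 2000
# with these assumed parameters, E(r_k) tends to 1 / (lam - delta) = 5
assert 4.0 < avg < 6.0
```

Note that with this reference density the drift of r_k is (1 - \lambda + \delta) per step, so a stationary regime requires \delta < \lambda; the paper's mixture density behaves differently.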

For the simulations, we consider an energy sequence \varphi^2 that has a small probability, say \epsilon, of being large, with a density function centered at \varphi_0^2, and a complementary probability, 1 - \epsilon, of being small. In Figure 2, we present the density function of |\varphi| corresponding to a particular example of such a reference density function d(\varphi^2) for \epsilon = 0.1 and \varphi_0^2 = 1. We also show the constitutive density functions.

In Figure 3, we present two realizations of the process r_k for this reference density function d(\varphi^2), with \delta identical either to zero or 0.3, for \epsilon = 0.05. Obviously, the two realizations behave very differently: for zero \delta, the process exhibits small r_k values (leading to large p_k) due to irrelevant samples \varphi_k^2 while, for \delta = 0.3, it focuses on the significant samples out of that density function. Note that the memory of the process (or of the initial condition r_0) is similar to the inverse of the forgetting factor \lambda, i.e. 1/\lambda.

Figure 2: Example of the density function of |\varphi| with \epsilon = 0.1 and \varphi_0^2 = 1.

Figure 3: Two different realizations of the nonstationary process r_k (\epsilon = 0.05; \delta = 0.0 and \delta = 0.3).

Now, let us turn to the trajectory of the second probability moments of the processes r_k and p_k. Although these two processes are not auto-regressive (AR) per se, they can, roughly speaking, be seen (from their realizations in Figure 3) as almost-stationary AR processes. The independence of the samples \varphi_k over k and the fact that their density function is at most r_{k-1}-dependent imply that the conditional (onto the past) probability moments of these processes are uniform in k, i.e. E(x_k | F_{k-1}) = h(x_{k-1}). Therefore, the results derived in Section 2 are easily applied.

Let us then consider the deterministic dynamics that upper dominate the second probability moments of these processes, i.e. E(r_k^2) and E(p_k^2). These dynamics are related to the bounding functions \tilde{Q}_2^+(r^2) and \tilde{T}_2^+(p^2) that are represented in Figure 4 (normalized to 1 - (1 - \lambda)^2). As mentioned in Section 4, the equilibrium points of the dominating trajectories are found by making these bounding functions identical to 1 - (1 - \lambda)^2, i.e. leading to z_r^* and z_p^*, respectively.

Figure 4: Bounding functions for the r_k^2 and 1/p_k^2 processes, i.e. \tilde{Q}_2^+(r^2) and \tilde{T}_2^+(p^2) normalized to 1 - (1 - \lambda)^2 (for \lambda = \delta = 0.4). Equilibrium points: 1/(z_p^*)^{1/2} and (z_r^*)^{1/2} ('*'). Estimated values of ||r_k||_2 ('*') and 1/||p_k||_2 ('o').

Figure 5: Asymptotic bounds on ||r_k||_2 and ||p_k||_2^{-1} as a function of the dynamic relative threshold \delta, for \lambda = 0.01, 0.05, 0.10, 0.18.

Furthermore, from the expression (9), the interval made of these two equilibria, i.e. [1/z_p^*, z_r^*], will asymptotically (in k) contain the expectation of the square processes. This can be seen in the figure, where we have displayed the estimated values of these process 2-norms, i.e. ||r_k||_2 and 1/||p_k||_2 with ||x||_2 = [E(x^2)]^{1/2}. These estimates have been obtained by averaging a particular realization of the associated processes (for \lambda = \delta = 0.4). In fact, the equilibria interval appears to provide bounds that tightly surround the estimated moments.

Finally, in Figure 5, we present the estimated (lower and upper) bounds on the asymptotic value of these process 2-norms, i.e. 1/(z_p^*)^{1/2} and (z_r^*)^{1/2}, as functions of the value \delta for several values of \lambda.

It can be emphasized that these bounding intervals are not linear in the \delta value. Indeed, for small \delta (i.e. \delta < 0.02), the r_k process exhibits small sample values that characterize the global distribution of \varphi^2. For increasing \delta's (i.e. 0.02 < \delta < 0.2), the process r_k tends to exhibit the distribution of more energetic regressor samples \varphi_k. For larger \delta values (i.e. \delta > 0.2), the samples r_k already concentrate on the significant part of this distribution, so that more "selective" \delta's have only small effects on the realization of r_k. Note also that the bounds on the second probability moments of the process r_k are rather independent of the forgetting factor \lambda for \delta > 0.2.

6 Conclusions

The main result of this paper is the constructive possibility of transferring the analysis of the trajectory of the probability moments of particular stochastic processes into the stability analysis of dominating deterministic dynamics.

This result has been applied to a nonstationary auto-regressive process obtained by conditioning its "input" random variable on past values of its output.

References

[1] T. Anderson. The statistical analysis of time series. John Wiley and Sons, New York, 1971.

[2] L. Arnold. Stochastic differential equations: theory and applications. John Wiley and Sons, New York, 1973.

[3] A. Balakrishnan. Stochastic differential systems I: filtering and control, a functional space approach. Springer-Verlag, Berlin, 1973.

[4] P. Carrette. Analysis of a constrained forgetting factor recursive least squares algorithm in system identification. In Proceedings of the 35th CDC, Kobe, Japan, pages 1079-1080, 1996.

[5] K. Chung. A course in probability theory. Academic Press, Boston, 1974.

[6] H. Cramer and M. Leadbetter. Stationary and related stochastic processes. John Wiley and Sons, New York, 1967.

[7] J. Galambos. Advanced probability theory. Marcel Dekker, Inc., 1988.

[8] L. Guo, L. Ljung, and P. Priouret. Performance analysis of the forgetting factor RLS algorithm. Int. J. Adaptive Control and Signal Processing, 7:525-537, 1993.

[9] R. Liptser and A. Shiryayev. Statistics of random processes I: general theory. Springer-Verlag, New York, 1977.
