
High Resolution Filter for Stable Spacecraft Attitude Estimation

KRISTIN RYNNINGER ERISKSSON

Master of Science Thesis Stockholm, Sweden


High Resolution Filter for Stable Spacecraft Attitude Estimation

KRISTIN RYNNINGER ERISKSSON

Master’s Thesis in Optimization and Systems Theory (30 ECTS credits)
Master Programme in Mathematics (120 credits)
Royal Institute of Technology, year 2014

Supervisor at KTH was Xiaoming Hu
Examiner was Xiaoming Hu

TRITA-MAT-E 2014:14
ISRN-KTH/MAT/E--14/14--SE

Royal Institute of Technology
School of Engineering Sciences
KTH SCI
SE-100 44 Stockholm, Sweden
URL: www.kth.se/sci


Abstract

In this thesis a filter for attitude estimation, using sensor fusion of a star tracker and a gyroscope, is designed and implemented for integration on the European Data Relay Satellite, EDRS. The performance of three filters, the Multiplicative Extended Kalman Filter, the Unscented Kalman Filter and the Particle Filter, is evaluated in terms of attitude error, covariance convergence and change of attitude knowledge uncertainty. The filters show similar performance on all three measures, and the comparison becomes a question of implementation and computational load. The Multiplicative Extended Kalman Filter is better in both of these criteria and is chosen for integration on board.


Abstract

In this thesis a filter for attitude estimation on board the European Data Relay Satellite is formulated and implemented. The filter is to estimate the attitude and angular velocity of the satellite by fusing the signals from a star camera and a gyroscope. The performance of three different extended Kalman filters is examined in terms of estimation error, convergence of the state covariance and change of attitude knowledge uncertainty (CAKU). All three filters show similar performance, and the choice of filter for on-board implementation therefore falls on the filter with the simplest implementation and the smallest computational load, the Multiplicative Extended Kalman Filter (MEKF).


Contents

1 Introduction
1.1 Background
1.2 Prerequisites for Implementation and Available Hardware

2 Preliminaries
2.1 Coloured Noise
2.1.1 White Noise
2.1.2 Brownian Noise
2.1.3 Pink Noise
2.2 Allan variance
2.2.1 Definition
2.2.2 Application
2.3 State Estimation Theory and Filtering in Vector Spaces
2.3.1 The State Estimation Problem
2.3.2 Linear Systems and The Kalman Filter, KF
2.3.3 Non-Linear Systems and the Kalman Filter
2.3.4 Particle Filter
2.3.5 Unscented Kalman Filter, UKF

3 Problem Statement and Modeling
3.1 Problem Statement
3.2 Attitude
3.2.1 Definition
3.2.2 Attitude Matrix
3.2.3 Attitude Quaternion
3.3 Attitude Dynamics
3.4 Gyroscope Measurement
3.5 Star Camera Measurements

4 Filter Design and Implementation
4.1 Unit Quaternions is a Non-Linear Manifold
4.2 Parametrized Error Quaternions
4.3 (Multiplicative) Extended Kalman Filter
4.3.1 Time Update
4.3.2 Measurement Update
4.4 Unscented Kalman Filter
4.4.1 Time Update
4.4.2 Measurement Update
4.5 Particle Filter/Bayesian Bootstrap
4.5.1 Initialization
4.5.2 Prediction
4.5.3 Update
4.5.4 Resample

5 Simulations, results and analysis
5.1 Simulation Set-Up
5.2 Attitude Estimates
5.2.1 Attitude Measurements at 10 Hz
5.2.2 Attitude Measurements at 2 Hz
5.3 Bias Estimates
5.4 Convergence of MEKF and UKF
5.5 Change of Attitude Knowledge Uncertainty

6 Conclusions
6.1 Evaluation of Results
6.1.1 Static or Adaptive Filters
6.2 Suggestions for Improvements and Further Studies

A Appendix A
A.1 MEKF
A.2 Unscented Kalman Filter
A.3 Particle Filter/Bayesian Bootstrap
A.3.1 Initial Guess
A.3.2 Prediction
A.3.3 Update
A.3.4 Reset
A.3.5 Resample

B Appendix B
B.1 Adaptive MEKF
B.1.1 Zero Input
B.1.2 Non-Zero Input
B.2 UKF
B.2.1 Zero Input
B.2.2 Non-Zero Input
B.3 PF
B.3.1 Zero Input
B.3.2 Non-Zero Input


Chapter 1

Introduction

1.1 Background

The background of this thesis is the EDRS-C mission, a geostationary satellite that will serve as a data relay for lower-orbit satellites. Because of the high speed of these lower-orbit satellites, the time slot for communication to ground is limited. By relaying the data via a geostationary satellite, this time slot can be extended and more data can be sent to ground.

The geostationary satellite, in turn, sends the data to the ground with a laser communication terminal. This terminal demands a high-accuracy attitude estimate, and moreover an estimate without rapid changes, i.e. a stable estimate. Existing attitude estimators are not designed with this demand on stability, and the task at hand is to design an estimator that can provide the laser terminal with both a stable and an accurate attitude estimate.

1.2 Prerequisites for Implementation and Available Hardware

The goal in this thesis is mainly to develop an estimate of the attitude, or orientation, of a spacecraft in space. Stated this way, the task is very wide. Questions to be answered could be: what kind of hardware should be used? What should the sample rate be? Should the filter be adaptive or static? What are the bounds on the errors? And so on. Luckily, all these questions are already answered to some extent, and the context of this thesis can be described.

The satellite in mind is a geostationary satellite. When information is to be sent over the laser link, the satellite has to point at the same point on Earth; this is called earth pointing mode. When the satellite is in earth pointing mode the angular velocity of the satellite is small and approximately constant at 2π rad per 24 h, and the requirements on accuracy and stability are very high. When the satellite is not in earth pointing mode the angular velocity


can be of greater magnitude and can vary, but in this case the requirements on accuracy and stability are not equally high. Therefore, only the earth pointing mode is considered here.

The satellite in mind is equipped with a number of sensors, actuators and other hardware. For the attitude estimation problem there are two sensors to be used, a star camera and a gyroscope, in a so-called sensor fusion. This set-up is quite common in this kind of problem. The gyroscope measures the angular velocity of the satellite, and thus determines the propagation of the attitude, while the star camera measures the angles to some specific bright stars and translates these into an attitude. The output of the gyroscope is very accurate, which makes the gyroscope the reliable and primary source of data. The measurement model of the gyroscope should be made in a way that utilizes the accuracy and the known noise characteristics of the gyroscope.


Chapter 2

Preliminaries

2.1 Coloured Noise

In all measurements, noises and errors are present. Systematic errors can always be avoided or compensated for, but random noises can never be completely eliminated, and filtering is essentially the task of filtering away these noises.

In an electric measurement device there will always be three types of random noise present: white noise, Brownian/brown noise and pink noise. These are called coloured noises because their power spectral densities are likened to the frequency spectra of colours.

2.1.1 White Noise

White noise is perhaps the easiest noise to understand and it is used to model noise in many signal processing applications. It is defined to be a zero-mean stochastic process w(t) which is uncorrelated in time, i.e. the mean E[w(t)] = 0 and the autocorrelation E[w(t_1)w(t_2)^T] = Q δ(t_2 − t_1), where Q is some diagonal matrix and δ(·) is the Dirac delta function, giving zero weight for all t_1 ≠ t_2 and infinite weight at t_1 = t_2. In this application, as in many others, the white noise is assumed to be Gaussian, meaning that w(t) ~ N(0, Q). White noise has a constant power spectral density equal to Q, which gives nice properties when filtering a signal perturbed by white noise.

In gyroscope specifications the white noise is referred to as angular random walk.

2.1.2 Brownian Noise

Brownian noise can be modelled as a random walk process in discrete time. One can think of it as beginning at some point on a line, and then taking a step of some length in some direction, forward or backward. Standing at the new point, take a new step of some length and direction. The new step has nothing to do with the last step, so it is not more likely to move forward if the last step was forward. The step size is determined by some random process with zero mean, uncorrelated in time, i.e. white noise, so the position on the line at time t will be the sum of samples from the white noise process. This means that the mean will be zero but the variance of the position will increase in time.

b(t + ∆t) = b(t) + w(t)∆t (2.1)

The mean value of b(t) is zero, but the autocorrelation is given by E[b(t_1)b(t_2)] = Q min(t_1, t_2), which depends on the absolute times and not only on the time lag, and the power spectral density of b(t) is given by S_b(ω) = (1/ω²) S_w(ω) = Q/ω². In a generalized sense, white noise is the derivative of Brownian noise.

˙b(t) = w(t) (2.2)

The Brownian noise is referred to as rate random walk in the gyroscope specications.
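The two noise types above can be illustrated with a short simulation: generate white noise and integrate it via Eqn. (2.1) into Brownian noise, and observe the linearly growing variance. The parameter values below are illustrative only, not taken from any gyroscope specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the thesis): white-noise PSD Q, step dt.
Q, dt, n_steps, n_runs = 0.5, 0.1, 1000, 2000

# Discrete-time white noise w(t): zero mean, variance Q/dt per sample.
w = rng.normal(0.0, np.sqrt(Q / dt), size=(n_runs, n_steps))

# Brownian noise via Eqn. (2.1): b(t + dt) = b(t) + w(t) * dt.
b = np.cumsum(w * dt, axis=1)

# The ensemble mean stays near zero while Var[b(t)] = Q * t grows linearly.
t = dt * np.arange(1, n_steps + 1)
empirical_var = b.var(axis=0)
print(empirical_var[-1] / t[-1])  # close to Q
```

With this discretisation each step adds variance Q·dt, so after n steps the variance is Q·n·dt = Q·t, matching the autocorrelation stated above.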

2.1.3 Pink Noise

Pink noise, or flicker noise, is present in all electric devices and originates from DC currents. Flicker noise is hard to model in the time domain. The characteristic of flicker noise is that it has a 1/ω spectrum.

2.2 Allan variance

The concept of the Allan variance was introduced by Allan in 1966 [1] as a method to determine noise characteristics in atomic clocks, but it is now also used to determine the noises present in other measurement devices, e.g. gyroscopes.

2.2.1 Definition

Assume a signal y_k is measured for k = 1, 2, ..., D. Divide the signal into N time slots of length τ and let ȳ_i(τ) be the mean of the signal in time slot i. The Allan Variance (AV) is defined by

a(τ) = 1/(2(N − 1)) · Σ_{i=1}^{N−1} (ȳ_{i+1}(τ) − ȳ_i(τ))².    (2.3)

If y_k is regarded as a stochastic process then, for a given τ, a(τ) will be a random variable with a theoretical mean and covariance. If the AV is calculated for a set of sampled data points, the theoretical and empirical means will not be the same, but for large N the sample mean will be a good approximation of the theoretical mean.


Noise      PSD    AV            log a(τ)                   Slope   Coefficient
White      Q      Q/τ           −log(τ) + log(Q)           −1      Q = a(1)
Pink       P/ω    2P ln(2)/π    log(2P ln(2)/π)            0       P = (π/(2 ln 2)) a(τ_0)
Brownian   R/ω²   Rτ/3          log(R) − log(3) + log(τ)   +1      R = a(3)

Table 2.1: Power spectral density and Allan variance for white, Brownian and pink/flicker noise. The value for flicker noise is given for τ = τ_0, which is the minimizing time lag.

2.2.2 Application

Assume that the measured signal is interfered with by the three types of noise described in Section 2.1, and assume, for simplicity, that the true signal is zero. The following noise model is obtained:

y_k = w_k + p_k + b_k,    (2.4)

where b_k is Brownian noise with PSD R/ω², w_k is white noise with PSD Q and p_k is pink noise with PSD P/ω.

The AV and the PSD are equivalent ways to visualize a signal, but in some cases the AV is more useful. Assume that the PSD is given by

S(ω) ∝ ω^α,    (2.5)

which is equivalent to

S(1/τ) ∝ τ^{−α}.    (2.6)

The corresponding AV is given by

a(τ) ∝ τ^{−(α+1)}.    (2.7)

The theoretical AVs of the three different noise terms are summarized in Table 2.1. By taking the logarithm of the AV given in Table 2.1, it is seen that the different error terms appear in different regions of τ. Hence, the total AV of y_k can be assumed to be a sum of the AVs of the different terms.
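The slope relations above can be checked numerically. The sketch below, with assumed sampling parameters, computes the Allan variance of Eqn. (2.3) for a pure white-noise signal and recovers the slope −1 predicted by Table 2.1.

```python
import numpy as np

def allan_variance(y, m, dt):
    # Allan variance for cluster size m (tau = m * dt), Eqn. (2.3).
    N = len(y) // m                                  # number of complete clusters
    means = y[: N * m].reshape(N, m).mean(axis=1)    # cluster means  ybar_i(tau)
    return 0.5 * np.mean(np.diff(means) ** 2)        # 1/(2(N-1)) * sum of squared diffs

# Hypothetical white-noise signal with PSD Q, sampled at 1/dt Hz.
rng = np.random.default_rng(1)
Q, dt = 2.0, 0.01
y = rng.normal(0.0, np.sqrt(Q / dt), size=200_000)

# For white noise a(tau) = Q/tau, so log a(tau) against log tau has slope -1.
ms = (10, 100, 1000)
taus = np.array([m * dt for m in ms])
avs = np.array([allan_variance(y, m, dt) for m in ms])
slope = np.polyfit(np.log(taus), np.log(avs), 1)[0]
print(slope)  # close to -1
```

In practice the same computation, repeated over a range of τ, produces the log-log curve whose slopes identify which noise dominates in each region.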

2.3 State Estimation Theory and Filtering in Vector Spaces

The attitude estimation problem is a problem of signal processing and filtering and, as seen in the following chapters, it is a non-linear state estimation problem. Over the years, several filters for non-linear problems have been proposed. The goal of this thesis is to investigate and compare


the performance of three different types of filters applied to the attitude estimation problem: Extended Kalman Filters, Unscented Kalman Filters and Particle Filters. In this section, a brief description of the general state estimation problem is given, to provide a foundation for the mathematical modelling in Chapter 3. The general forms of the filters to be investigated are also given.

2.3.1 The State Estimation Problem

Let x_k ∈ R^n be some state to be determined, for k = 0, 1, .... Assume that y_k is a function of x_k, and that y_k is measured with some error, that is

y_k = h(x_k, v_k, k)    (Measurement Model)

where v_k is zero-mean white noise, referred to as the measurement noise. Assume further that the dynamics of x_k is known, up to some disturbance w_k, that is

x_{k+1} = f(x_k, u_k, w_k, k)    (Dynamic Model)

where w_k is zero-mean white noise, referred to as the process noise, and u_k is a known input signal. This gives the general system

x_{k+1} = f_k(x_k, u_k, w_k)    (2.8a)
y_k = h_k(x_k, v_k)    (2.8b)

The problem is to estimate x_k in the best possible way using observations of y_k. The best possible way here means minimizing the mean squared error (MSE), namely min E[|x_k − x̂_{k|k}|²], where x̂_{k|k} = E[x_k | y_k, y_{k−1}, ...].

2.3.2 Linear Systems and The Kalman Filter, KF

If f(·) and h(·) in Eqns. (2.8) above are linear functions and if it is assumed, without loss of generality, that the input signal is u_k = 0, the Measurement Model and Dynamic Model can be rewritten as the linear system

x_{k+1} = F_k x_k + G_k w_k    (2.9a)
y_k = H_k x_k + v_k    (2.9b)

where w_k and v_k are assumed to be uncorrelated, zero-mean, white noises, i.e.

E[w_k] = 0,  E[v_k] = 0,  E[w_k w_l^T] = Q δ_{kl},  E[v_k v_l^T] = R δ_{kl},  E[w_k v_k^T] = 0.

For linear systems, such as the one described above, with the assumption that the noise terms are Gaussian, there exists an optimal filter, first proposed by Kalman in 1960, called the Kalman Filter [9].

The Kalman filter recursion can be formulated in two steps. The first step is the time update, where the a posteriori estimate of x_k is used to


calculate an a priori estimate of x_{k+1}, namely x̂_{k+1|k}, with the corresponding a priori estimate of the covariance matrix of x_{k+1}, P_{k+1|k}. The second step is the measurement update, where x̂_{k|k−1}, P_{k|k−1} and the value of y_k are used to calculate an a posteriori estimate of x_k, x̂_{k|k}, and the corresponding a posteriori covariance matrix P_{k|k}.

Time Update:

x̂_{k+1|k} = E[x_{k+1} | y_k, y_{k−1}, ...] = F_k x̂_{k|k}    (2.10a)
P_{k+1|k} = E[(x_{k+1} − x̂_{k+1|k})(x_{k+1} − x̂_{k+1|k})^T]    (2.10b)
          = F_k P_{k|k} F_k^T + G_k Q G_k^T    (2.10c)

Measurement Update:

ỹ_k = y_k − H_k x̂_{k|k−1}    (2.11a)
x̂_{k|k} = E[x_k | y_k, y_{k−1}, ...] = x̂_{k|k−1} + K_k ỹ_k    (2.11b)
P_{k|k} = E[(x_k − x̂_{k|k})(x_k − x̂_{k|k})^T]
        = (I − K_k H_k) P_{k|k−1} (I − K_k H_k)^T + K_k R_k K_k^T,    (2.11c)

where ỹ_k is referred to as the innovation sequence and K_k the Kalman gain. The matrices P_{k|k−1} and P_{k|k} are the a priori and a posteriori estimated covariance matrices of x_k, respectively. The Kalman filter stated in Equations (2.10) and (2.11) is the so-called Joseph form of the Kalman filter. The update holds for all gain matrices K_k.

The aim in a state estimation problem is to find an estimator that minimizes the mean squared error (MSE), i.e. min E[|x_k − x̂_{k|k}|²]. This is equivalent to minimizing the trace of the covariance matrix. By taking the derivative of P_{k|k} with respect to K_k and setting it to zero, the optimal Kalman gain, which minimizes the MSE in the Kalman recursion, is obtained:

K_k = P_{k|k−1} H_k^T (H_k P_{k|k−1} H_k^T + R_k)^{−1}.    (2.12)

Using the optimal Kalman gain, the Kalman filter can be rewritten as:

Time update

x̂_{k+1|k} = F_k x̂_{k|k}    (2.13a)
P_{k+1|k} = F_k P_{k|k} F_k^T + G_k Q G_k^T    (2.13b)


Measurement Update

ỹ_k = y_k − H_k x̂_{k|k−1}    (2.14a)
K_k = P_{k|k−1} H_k^T (H_k P_{k|k−1} H_k^T + R_k)^{−1}    (2.14b)
x̂_{k|k} = x̂_{k|k−1} + K_k ỹ_k    (2.14c)
P_{k|k} = (I − K_k H_k) P_{k|k−1}.    (2.14d)

The Kalman filter can be written in closed form and mimics a dynamical system driven by the measurement y_k.
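The recursion of Eqns. (2.13)-(2.14) can be sketched in a few lines. The scalar random-walk system below is purely illustrative, not the attitude model of this thesis; the point is the alternation between time and measurement updates and the convergence of P_{k|k} to a steady-state value.

```python
import numpy as np

# Toy scalar random-walk system (assumed for illustration):
# x_{k+1} = x_k + w_k,  y_k = x_k + v_k.
F, G, H = np.eye(1), np.eye(1), np.eye(1)
Q, R = np.array([[0.01]]), np.array([[1.0]])

def kf_step(x_hat, P, y):
    # Time update, Eqns. (2.13): propagate estimate and covariance.
    x_pred = F @ x_hat
    P_pred = F @ P @ F.T + G @ Q @ G.T
    # Measurement update, Eqns. (2.14): innovation, gain, correction.
    innov = y - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ innov, (np.eye(1) - K @ H) @ P_pred

rng = np.random.default_rng(2)
x_true, x_hat, P = np.zeros(1), np.zeros(1), np.eye(1)
errors = []
for _ in range(500):
    x_true = x_true + rng.normal(0.0, np.sqrt(Q[0, 0]), 1)
    y = x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1)
    x_hat, P = kf_step(x_hat, P, y)
    errors.append((x_hat - x_true)[0])
print(P[0, 0])  # has converged to the steady-state a posteriori variance
```

Note that the covariance recursion does not depend on the measurements; P_{k|k} could equally well be computed offline, which is a known property of the linear Kalman filter.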

2.3.3 Non-Linear Systems and the Kalman Filter

The general Kalman filter described in Eqns. (2.13) and (2.14) is the minimum-MSE estimator for systems where x_k and y_k are Gaussian random processes [9]. This is the case when the system is linear and the process and measurement noises are Gaussian. The generic physical system or noise model given in Eqn. (2.8) is not linear, and many attempts have been made to extend the Kalman filter to non-linear problems. Below, three such attempts are described. The filters described in this section are suboptimal filters and there is little general theory about their convergence and stability, but it should be noted that the goal in this thesis is not to find an optimal filter in a mathematical sense, but to investigate the behaviour of different filters.

Extended Kalman Filter, EKF

Take the general non-linear system on the form given in Eqn. (2.8). The first assumption in the EKF is that the expected value of a function is approximately the function of the expected value of the variables, i.e. given a function f(·), E[f(x)] ≈ f(E[x]). If f(·) is a linear function this holds with equality. If f(·) is non-linear, the error will depend on the magnitude of the higher order terms in the Taylor series expansion of the function. What we will assume is that

x̂_{k|k−1} = E[f(x_{k−1}, u_{k−1}, w_{k−1}) | y_0, y_1, ..., y_{k−1}]    (2.15a)
          ≈ f(E[x_{k−1} | y_0, y_1, ..., y_{k−1}], u_{k−1}, E[w_{k−1}])    (2.15b)
          = f(x̂_{k−1|k−1}, u_{k−1}, 0).    (2.15c)

The same assumption also leads to the approximation of the covariance matrix P_{k|k−1}.

The essence of the EKF is to linearize the dynamics, at each point in time, about the last estimate x̂_{k|k}, and the measurement model about x̂_{k|k−1}.


Assume that f_k, g_k and h_k are sufficiently smooth and let

F_k = ∂f_k(x, u_k, w)/∂x evaluated at x = x̂_{k|k}, w = 0,
H_k = ∂h_k(x, v)/∂x evaluated at x = x̂_{k|k−1}, v = 0,
G_k = ∂f_k(x, u_k, w)/∂w evaluated at x = x̂_{k|k}, w = 0.

The Taylor expansions are given by

f_k(x_k) = f_k(x̂_{k|k}, u_k, 0) + F_k(x_k − x̂_{k|k}) + G_k w_k + ...    (2.16a)
h_k(x_k) = h_k(x̂_{k|k−1}) + H_k(x_k − x̂_{k|k−1}) + ...    (2.16b)

Assuming that the remainder terms in Eqn. (2.16) are small enough, the general non-linear system in Eqn. (2.8) can be rewritten on the following form:

x_{k+1} = F_k x_k + G_k w_k + ξ_k    (2.17a)
y_k = H_k x_k + v_k + ζ_k,    (2.17b)

where ξ_k = f_k(x̂_{k|k}) − F_k x̂_{k|k} and ζ_k = h_k(x̂_{k|k−1}) − H_k x̂_{k|k−1}. If the KF described in Eqns. (2.10) and (2.11) is applied to Eqn. (2.17) with the assumptions given above, the following filter recursion is obtained:

Time Update

x̂_{k+1|k} = f_k(x̂_{k|k}, u_k, 0)    (2.18a)
P_{k+1|k} = F_k P_{k|k} F_k^T + G_k Q_k G_k^T    (2.18b)

Measurement Update

ỹ_k = y_k − h_k(x̂_{k|k−1}, 0)    (2.19a)
K_k = P_{k|k−1} H_k^T (H_k P_{k|k−1} H_k^T + R_k)^{−1}    (2.19b)
x̂_{k|k} = x̂_{k|k−1} + K_k ỹ_k    (2.19c)
P_{k|k} = (I − K_k H_k) P_{k|k−1} (I − K_k H_k)^T + K_k R_k K_k^T.    (2.19d)

If the functions are sufficiently smooth and the process is close to Gaussian, the EKF will be a fairly good estimator, and this is why the EKF has been the standard routine in non-linear state estimation since its introduction in the 1960s [8]. One way of checking the performance of the EKF is to look at the innovation sequence ỹ_k [2]. If the filter is optimal, then the innovation sequence is zero-mean white noise. Hence, the deviation from optimality is reflected by the deviation from white noise in the innovation sequence.
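A minimal sketch of the recursion in Eqns. (2.18)-(2.19), on an assumed scalar toy system (not the attitude problem): the state is propagated through the full non-linear f, while the covariance is propagated with the Jacobians F_k and H_k.

```python
import numpy as np

# Hypothetical scalar system: x_{k+1} = 0.9 sin(x_k) + w_k,  y_k = x_k + 0.1 x_k^3 + v_k.
Q, R = 0.01, 0.1
f = lambda x: 0.9 * np.sin(x)
h = lambda x: x + 0.1 * x ** 3
df = lambda x: 0.9 * np.cos(x)     # F_k: Jacobian of f at the last estimate
dh = lambda x: 1 + 0.3 * x ** 2    # H_k: Jacobian of h at the predicted estimate

def ekf_step(x_hat, P, y):
    # Time update, Eqns. (2.18): non-linear propagation, linearised covariance.
    x_pred, Fk = f(x_hat), df(x_hat)
    P_pred = Fk * P * Fk + Q
    # Measurement update, Eqns. (2.19), with the Joseph-form covariance update.
    Hk = dh(x_pred)
    K = P_pred * Hk / (Hk * P_pred * Hk + R)
    x_new = x_pred + K * (y - h(x_pred))
    P_new = (1 - K * Hk) * P_pred * (1 - K * Hk) + K * R * K
    return x_new, P_new

rng = np.random.default_rng(3)
x_true, x_hat, P = 0.8, 0.0, 1.0
sq_err = []
for _ in range(400):
    x_true = f(x_true) + rng.normal(0.0, np.sqrt(Q))
    y = h(x_true) + rng.normal(0.0, np.sqrt(R))
    x_hat, P = ekf_step(x_hat, P, y)
    sq_err.append((x_hat - x_true) ** 2)
print(np.mean(sq_err[50:]))  # small steady-state mean squared error
```

The innovation check described above can be applied here as well: collecting y − h(x_pred) over the run and testing it for whiteness indicates how close this suboptimal filter is to optimality.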


2.3.4 Particle Filter

The Particle Filters are based on the idea of a Hidden Markov Model and aim to estimate the conditional PDF of the hidden states using sampled particles and Bayesian recursive estimation. The first working filter was introduced by N.J. Gordon et al. in 1993 [11]. The original formulation is the Sequential Importance Resample, or Bayesian Bootstrap. Since its release, many variations of the particle filter have been developed [6].

Hidden Markov Models and Bayesian Recursive Estimation

A Hidden Markov Model can be an intricate construction, but it is based on the idea that there are states x_k, for k = 0, 1, ..., that cannot be observed directly, hence called hidden states. The sequence of hidden states is assumed to be a Markov process, i.e.

p(x_k | x_{k−1}, x_{k−2}, ..., x_0) = p(x_k | x_{k−1}),    (2.20)

where the transition probability p(x_k | x_{k−1}) is known. The hidden states can be observed via the observed states y_k for k = 1, 2, .... The conditional probability of y_k given the sequence of hidden states is assumed to be independent of all states but the current one, i.e.

p(y_k | x_k, x_{k−1}, ..., x_0) = p(y_k | x_k),    (2.21)

where p(y_k | x_k) is known. Hence, the unconditioned probability distribution of all states can be written

p(x_0, x_1, ..., x_k, y_1, y_2, ..., y_k) = p(x_0) ∏_{i=1}^{k} p(y_i | x_i) p(x_i | x_{i−1}).    (2.22)

In the filtering context, the goal is to find an estimate of x_k given all measurements y_i for i = 1, 2, ..., k. Let Y_k = {y_i}_{i=1}^{k} be the sequence of all observed states. Bayesian recursion does not only give the desired conditional expectation E[x_k | Y_k] but the whole conditional probability distribution p(x_k | Y_k). Using Bayes' theorem, the conditional distribution can be written

p(x_k | Y_k) = p(y_k | x_k) p(x_k | Y_{k−1}) / p(y_k | Y_{k−1}),    (2.23)

hence the name Bayesian estimation.

The a priori distributions are given by

p(x_k | Y_{k−1}) = ∫ p(x_k | x_{k−1}) p(x_{k−1} | Y_{k−1}) dx_{k−1}    (2.24a)
p(y_k | Y_{k−1}) = ∫ p(y_k | x_k) p(x_k | Y_{k−1}) dx_k.    (2.24b)


The last equation, i.e. the denominator in Eqn. (2.23), is constant with respect to x_k, and since p(x_k | Y_k) is a probability distribution, we can substitute p(y_k | Y_{k−1}) by a normalizing constant.

To put the recursive Bayesian filter in the same outline as the Kalman filter, it is separated into two steps, Prediction (Time Update) and Update (Measurement Update):

Prediction:

p(x_k | Y_{k−1}) = ∫ p(x_k | x_{k−1}) p(x_{k−1} | Y_{k−1}) dx_{k−1}    (2.25)

Update:

p(y_k | Y_{k−1}) = ∫ p(y_k | x_k) p(x_k | Y_{k−1}) dx_k    (2.26a)
p(x_k | Y_k) = p(y_k | x_k) p(x_k | Y_{k−1}) / p(y_k | Y_{k−1}).    (2.26b)

The integrals on the right hand side are solvable analytically for certain types of systems, including the linear/Gaussian case. In fact, if the Bayesian recursion is applied to a linear system with Gaussian signals, it becomes the ordinary Kalman Filter.

For the Bayesian recursion to be applicable to the same type of problems as the Extended Kalman Filter, assume the system given in Eqn. (2.8) and assume, without loss of generality, that the input is u_k = 0. The transition distribution is given by

p(x_k | x_{k−1}) = ∫ p(x_k | x_{k−1}, w_{k−1}) p(w_{k−1} | x_{k−1}) dw_{k−1}.    (2.27)

The noise is, by assumption, independent of the current state, hence p(w_{k−1} | x_{k−1}) = p(w_{k−1}), which is known. If x_{k−1} and w_{k−1} are known then x_k is determined deterministically, and the probability can be written

p(x_k | x_{k−1}) = ∫ δ(x_k − f(x_{k−1}, w_{k−1}, k − 1)) p(w_{k−1}) dw_{k−1}.    (2.28)

By the same argument, the conditional probability of the observation is given by

p(y_k | x_k) = ∫ δ(y_k − h_k(x_k, v_k)) p(v_k) dv_k,    (2.29)

where p(v_k) is known.


Particle Filter

When Eqns. (2.25) and (2.26) are not solvable, a sequential Monte Carlo type of method called the Particle Filter can be used. The particle filter utilizes a number of points, or particles, to approximate the probability distributions. The first working particle filter is called the Bayesian Bootstrap, or Sequential Importance Resample, and was introduced by Gordon et al. in 1993 [11]. Sequential Importance Resample was developed from Sequential Importance Sample, which in its pure form does not work in practice, but which illustrates the idea of the particle filter in a simple way.

Sequential Importance Sample

The Sequential Importance Sample consists of two steps, prediction and update. Assume that the initial distribution of the state is known, i.e. p(x_0) is known. Draw random samples x_0(i), for i = 1, 2, ..., N, from this distribution and define the discrete distribution P(X = x_0(i)) = W_0(i), where W_0(i) = 1/N is the weight of particle i. For k = 1, 2, ... the iterations are described below.

Prediction:

Propagate the particles through the dynamics of the system,

x_{k+1}(i) = f(x_k(i), w_k(i), k),    (2.30)

where the w_k(i) are samples from p(w_k). The probability distribution P(X = x_{k+1}(i)) = W_k(i) is assumed to be an approximate discrete version of the a priori distribution p(x_{k+1} | Y_k), and the a priori estimate of x_{k+1} is approximated by the weighted mean of the particles, i.e.

x̂_{k+1|k} ≈ Σ_{i=1}^{N} W_k(i) x_{k+1}(i).    (2.31)

The computation of the mean is not necessary for the filter iteration, but in the implementation an approximate value of the state, both a priori and a posteriori, is desired, hence the computation is stated here.

Update

In this step the weights are updated using the measurement y_k. Let the likelihood of the state x_k(i) given the observation y_k be L(x_k(i)) = p(y_k | x_k(i)). The importance, or weight, of particle x_k(i) is then given by

W_k(i) = W_{k−1}(i) L(x_k(i)) / Σ_{j=1}^{N} W_{k−1}(j) L(x_k(j)).    (2.32)


The probability distribution P(X = x_{k+1}(i)) = W_{k+1}(i) is assumed to be an approximate discrete version of the a posteriori distribution p(x_{k+1} | Y_{k+1}), and the a posteriori estimate is given by

x̂_{k+1|k+1} ≈ Σ_{i=1}^{N} W_{k+1}(i) x_{k+1}(i).    (2.33)

Using this approach, all weights but a few will eventually become zero, and there is no guarantee that the particles with non-zero weights reflect the true distribution, so the Sequential Importance Sample does not work in practice. To circumvent this problem, re-sampling is introduced.

Sequential Importance Resample or Bayesian Bootstrap

The Bayesian Bootstrap introduces a re-sampling step, where equally weighted particles x_{k+1}(i) are drawn from the distribution P(X = x_{k+1}(i)) = W_{k+1}(i). If this is done in every iteration, the number of particles with non-zero weights will always be N. If the re-sampling step is performed only once in a while, the number of particles with non-zero weights can become less than N, but the re-sampling step ensures the diversity of the particles.

If re-sampling is done in every iteration, the original Bayesian Bootstrap [11] is obtained. Assume the distribution p(x_k | Y_k) is known. Let x_k(i) be samples, i.e. particles, from this distribution. Propagate the particles through the dynamics,

x_{k+1}(i) = f(x_k(i), w_k(i), k),    (2.34)

where the w_k(i) are samples from p(w_k). The set of propagated particles, {x_{k+1}(i)}, is assumed to be approximately distributed according to p(x_{k+1} | Y_k). The weight of particle i is given by the likelihood of the particle, normalized by the sum of all likelihoods,

W_k(i) = p(y_k | x_k(i)) / Σ_{j=1}^{N} p(y_k | x_k(j)).    (2.35)

Let P(x_k = x_k(i)) = W_k(i) define a discrete distribution, which is assumed to be the discrete version of p(x_k | Y_k). Draw N samples x_k(i) from this distribution and start over.
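The propagate-weight-resample loop above can be sketched directly. The scalar linear/Gaussian system below is an assumption made for illustration, chosen because the optimal answer is then known from the Kalman filter; the steps follow Eqns. (2.34)-(2.35).

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical scalar linear/Gaussian system (not the thesis model):
# x_{k+1} = 0.9 x_k + w_k,  y_k = x_k + v_k.
Q, R, N = 0.1, 0.5, 1000

def bootstrap_step(particles, y):
    # Prediction, Eqn. (2.34): propagate every particle through the dynamics.
    particles = 0.9 * particles + rng.normal(0.0, np.sqrt(Q), N)
    # Update, Eqn. (2.35): weight by the Gaussian measurement likelihood,
    # normalized by the sum of all likelihoods (computed in log space).
    log_like = -0.5 * (y - particles) ** 2 / R
    w = np.exp(log_like - log_like.max())
    w /= w.sum()
    # Posterior estimate as the weighted mean of the particles, cf. Eqn. (2.33).
    x_hat = np.sum(w * particles)
    # Resample: draw N equally weighted particles from the discrete posterior.
    return rng.choice(particles, size=N, p=w), x_hat

x_true, particles = 0.0, rng.normal(0.0, 1.0, N)
sq_err = []
for _ in range(200):
    x_true = 0.9 * x_true + rng.normal(0.0, np.sqrt(Q))
    y = x_true + rng.normal(0.0, np.sqrt(R))
    particles, x_hat = bootstrap_step(particles, y)
    sq_err.append((x_hat - x_true) ** 2)
print(np.mean(sq_err))  # near the optimal (Kalman) mean squared error
```

Because re-sampling is done in every iteration, the weights entering each prediction step are uniform, which is exactly the original Bayesian Bootstrap formulation.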

2.3.5 Unscented Kalman Filter, UKF

The UKF is based on the idea that it is easier to approximate a probability distribution than it is to approximate an arbitrary non-linear function or transformation, and was proposed by Julier and Uhlmann [8]. The Unscented


Filter is a simplification of the Particle Filter in the sense that it utilizes weighted points to approximate a probability distribution. The difference is that the points, σ-points, in the UKF are chosen deterministically and are re-chosen in each iteration. The UKF also resembles the EKF in outline, the only difference being that more than one point is propagated through the dynamics and measurement models. Hence, the UKF is simpler and computationally cheaper than the PF and can handle more complex systems than the EKF. The best of both worlds, put simply.

The Unscented Transformation

Let x ∈ R^{N_x} be a random variable, such that

E[x] = x̄    (2.36a)
E[(x − x̄)(x − x̄)^T] = Σ_x.    (2.36b)

Assume further that y = h(x), where h(·) is some function. The goal is now to calculate an efficient approximation of the mean and covariance of y. If h(·) is linear this is simple; otherwise it can be more or less hard. The idea is to choose p + 1 σ-points x(i) in R^{N_x}, which have the mean x̄ and covariance Σ_x, with associated weights W(i) such that Σ_{i=0}^{p} W(i) = 1. Transform the points to get y(i) = h(x(i)). The approximated mean of y is given by ȳ = Σ_{i=0}^{p} W(i) y(i), and the approximated covariance is given by Σ_y = Σ_{i=0}^{p} W(i)(y(i) − ȳ)(y(i) − ȳ)^T. A set of points that satisfies these conditions can be constructed in the following way:

x(i) = x̄ + (√(N_x Σ_x))_i    (2.37a)
W(i) = 1/(2N_x)    (2.37b)
x(i + N_x) = x̄ − (√(N_x Σ_x))_i    (2.37c)
W(i + N_x) = 1/(2N_x)    (2.37d)
i = 1, 2, ..., N_x,    (2.37e)

where (√(N_x Σ_x))_i is the i:th column of √(N_x Σ_x), and √(N_x Σ_x) is any matrix square root satisfying N_x Σ_x = √(N_x Σ_x) (√(N_x Σ_x))^T. If another σ-point, x(0) = x̄, is added with the weight W(0), the mean will be unaffected but the covariance will change. Hence, rescale the points using


W(0) in the following way:

x(0) = x̄    (2.38a)
W(0) = W(0)    (2.38b)
x(i) = x̄ + (√(N_x/(1 − W(0)) · Σ_x))_i    (2.38c)
W(i) = (1 − W(0))/(2N_x)    (2.38d)
x(i + N_x) = x̄ − (√(N_x/(1 − W(0)) · Σ_x))_i    (2.38e)
W(i + N_x) = (1 − W(0))/(2N_x).    (2.38f)

The value of W(0) can be used to control the higher order moment approximations. If the higher order moments are known, this choice of points can improve the estimation, but if the higher order moments are unknown, it can introduce larger errors in the estimation. Using the unscented transformation, i.e. transforming the points through the true non-linear function f(·) instead of the linearised function, the transformed points will capture the mean and covariance of f(x_k) correctly to second order [8], if x_k has mean x̄ and covariance Σ_x.

The Unscented Kalman Filter

Assume we have the general system given in Eqn. (2.8), where x ∈ R^{N_x}. It is not even required that f and h are differentiable. The Unscented Transformation is used to calculate the a priori estimate and covariance, and the prediction has to include some information about the process noise; therefore the state is extended by the vector w_k. Further, the Unscented Transformation is used to calculate the a posteriori estimate and covariance, and the measurement update has to include some information about the measurement noise; therefore the state is also extended by the vector v_k, to get the augmented state vector

x_k^a = [x_k^T, w_k^T, v_k^T]^T.    (2.39)

The augmented state will have the dimension N_a = N_x + N_v + N_w, and the augmented system takes the form

x_{k+1}^a = f^a(x_k^a)    (2.40a)
y_k = h^a(x_k^a).    (2.40b)


If we let

x̂_{k|k}^a = [x̂_{k|k}^T, 0, 0]^T    (2.41)

and

P_{k|k}^a = [ P_{k|k}  0    0
              0        Q_k  0
              0        0    R_k ],    (2.42)

the Unscented Kalman filter recursion becomes:

Time update

Generate σ-points x_k^a(i) using Eqn. (2.37), and Eqn. (2.38) if desired.

x_{k+1|k}^a(i) = f_k^a(x_k^a(i))    (2.43a)
x̂_{k+1|k}^a = Σ_{i=0}^{p} W(i) x_{k+1|k}^a(i)    (2.43b)
P_{k+1|k} = Σ_{i=0}^{p} W(i) (x_{k+1|k}^a(i) − x̂_{k+1|k}^a)(x_{k+1|k}^a(i) − x̂_{k+1|k}^a)^T    (2.43c)

Measurement update

y_{k|k−1}(i) = h_k^a(x_{k|k−1}^a(i))    (2.44a)
ŷ_{k|k−1} = Σ_{i=0}^{p} W(i) y_{k|k−1}(i)    (2.44b)
ỹ_k = y_k − ŷ_{k|k−1}    (2.44c)
S_k = Σ_{i=0}^{p} W(i) (y_{k|k−1}(i) − ŷ_{k|k−1})(y_{k|k−1}(i) − ŷ_{k|k−1})^T    (2.44d)
K_k = (Σ_{i=0}^{p} W(i) (x_{k|k−1}^a(i) − x̂_{k|k−1}^a)(y_{k|k−1}(i) − ŷ_{k|k−1})^T) S_k^{−1}    (2.44e)
x̂_{k|k} = x̂_{k|k−1} + K_k ỹ_k    (2.44f)
P_{k|k} = P_{k|k−1} − K_k S_k K_k^T    (2.44g)
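A compact sketch of the augmented-state recursion of Eqns. (2.39)-(2.44), on an assumed scalar toy system (not the attitude model). The σ-points follow the basic symmetric set of Eqn. (2.37) with equal weights 1/(2N_a).

```python
import numpy as np

# Hypothetical scalar system: x_{k+1} = 0.9 sin(x_k) + w_k,  y_k = x_k + v_k.
Q, R = 0.01, 0.1
f = lambda x, w: 0.9 * np.sin(x) + w
h = lambda x, v: x + v

def sigma_points(mean, cov):
    # Symmetric sigma-point set of Eqn. (2.37): 2n points, weights 1/(2n).
    n = len(mean)
    S = np.linalg.cholesky(n * cov)   # a matrix square root of n * cov
    pts = np.vstack([mean + S[:, i] for i in range(n)] +
                    [mean - S[:, i] for i in range(n)])
    return pts, np.full(2 * n, 1.0 / (2 * n))

def ukf_step(x_hat, P, y):
    # Augmented state [x, w, v] with block-diagonal covariance, Eqns. (2.39)-(2.42).
    xa = np.array([x_hat, 0.0, 0.0])
    Pa = np.diag([P, Q, R])
    pts, W = sigma_points(xa, Pa)
    # Time update, Eqns. (2.43): propagate sigma points through the dynamics.
    xs = f(pts[:, 0], pts[:, 1])
    x_pred = np.sum(W * xs)
    P_pred = np.sum(W * (xs - x_pred) ** 2)
    # Measurement update, Eqns. (2.44): predicted measurement, gain, correction.
    ys = h(xs, pts[:, 2])
    y_pred = np.sum(W * ys)
    S = np.sum(W * (ys - y_pred) ** 2)
    C = np.sum(W * (xs - x_pred) * (ys - y_pred))
    K = C / S
    return x_pred + K * (y - y_pred), P_pred - K * S * K

rng = np.random.default_rng(5)
x_true, x_hat, P = 0.8, 0.0, 1.0
sq_err = []
for _ in range(400):
    x_true = f(x_true, rng.normal(0.0, np.sqrt(Q)))
    y = h(x_true, rng.normal(0.0, np.sqrt(R)))
    x_hat, P = ukf_step(x_hat, P, y)
    sq_err.append((x_hat - x_true) ** 2)
print(np.mean(sq_err[50:]))  # small steady-state mean squared error
```

Note that no Jacobians are computed anywhere; the non-linear f and h are evaluated at the σ-points only, which is exactly the property that lets the UKF handle non-differentiable models.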


Chapter 3

Problem Statement and Modeling

3.1 Problem Statement

The problem addressed in this thesis is which type of filter to implement for attitude estimation in order to obtain the best accuracy and the best stability. The thesis does not aim to find an optimal filter from a mathematical point of view, but to investigate the performance of different filters and determine how well they meet certain well-defined requirements.

The first requirement concerns the accuracy of the estimate, i.e. the error has to be bounded below some greatest permissible value.

The second requirement is that the estimate is stable. The definition of stability of the estimate is referred to as Change of Attitude Knowledge Uncertainty, and was defined specifically for the EDRS project.

Moreover, two signals are to be used in the lter, attitude data from the star camera and angular rate data from the gyroscope.

In order to design lters utilizing these measurements, the noise charac- teristics of the gyroscope and star camera has to be determined and a noise model has to be dened. Special care is to be taken regarding the model of the gyroscope. Since the angular rate determines the propagation of the attitude, the model of the gyroscope, or errors in the model of the gyro- scope, will have great impact on the quality of the estimate in the time slots when measurements from the star camera are absent. Hence, a great part of the modelling will handle the gyroscope, including profound analysis of the noise. The star camera on the other hand will be treated with a little less respect and will be modelled in a more primitive and ad hoc way.


3.2 Attitude

3.2.1 Definition

Imagine a spacecraft in inertial space. Let û, v̂ and ŵ be a right-handed, orthonormal coordinate system fixed to some reference point in inertial space, in this case determined by the directions of the stars in the star catalogue of the star camera. Henceforth this frame will be referred to as the reference frame. Let x̂, ŷ and ẑ be an orthonormal coordinate system fixed in the spacecraft body, called the body frame. The attitude is defined to be the orientation of the spacecraft relative to the reference frame, i.e. a proper rotation.

3.2.2 Attitude Matrix

A rotation can be described by a proper rotation matrix, that is, an orthogonal 3 × 3 matrix with determinant 1. Hence the body frame expressed in the reference frame is defined by

(x̂ ŷ ẑ) = A (û v̂ ŵ), (3.1)

where A is a proper rotation matrix, i.e. det(A) = 1. By this construction the attitude is completely described by the rotation matrix A, referred to as the Attitude Matrix.

The set of orthogonal real 3 × 3 matrices with determinant 1 forms the Special Orthogonal Group of degree 3, SO3, with matrix multiplication being the group operation. This means that an attitude matrix can be described as a product of rotation matrices, e.g.

A'' = A'A. (3.2)

Thus, the elements of SO3 are rotated into each other by an element in SO3. An important interpretation of Eqn. (3.2) is that if A represents an attitude in the reference frame and A'' represents another attitude in the reference frame, then A' represents the attitude given by A'' relative to the body frame when the attitude is given by A.

One other thing that is special about SO3 is that every element A has an eigenvalue equal to 1. Hence, there exists a direction in R3 which is invariant under the operator A. If A is not the identity, then A has one and only one eigenvalue equal to 1 and hence there is a unique direction (up to a sign) which is invariant under multiplication by A. This is stated in Euler's theorem and the interpretation is that a proper rotation always has an instantaneous axis of rotation.
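Euler's theorem is easy to check numerically. The sketch below (plain NumPy, using the standard axis-angle rotation formula rather than anything specific to this thesis) builds a rotation about an arbitrary unit axis and verifies that the axis is invariant under the matrix:

```python
import numpy as np

def rotation(e, phi):
    """Proper rotation by angle phi about unit axis e (Rodrigues' formula)."""
    ex = np.array([[0, -e[2], e[1]],
                   [e[2], 0, -e[0]],
                   [-e[1], e[0], 0]])
    return np.cos(phi) * np.eye(3) + (1 - np.cos(phi)) * np.outer(e, e) + np.sin(phi) * ex

e = np.array([1.0, 2.0, 2.0]) / 3.0           # unit axis
A = rotation(e, 0.7)
assert np.isclose(np.linalg.det(A), 1.0)      # proper rotation: det(A) = 1
assert np.allclose(A @ e, e)                  # e is invariant: eigenvalue 1
```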

Although the attitude matrix seems like a pretty good way to represent the attitude, it is not efficient to use in applications. It will turn out that the attitude has only 3 degrees of freedom. The property of one unit eigenvalue is utilized to parametrize the Attitude Matrix.


3.2.3 Attitude Quaternion

One way to parametrize the attitude matrix is by the unit norm quaternions.

There are other ways to parametrize the attitude, such as Gibbs vectors, Rodrigues Parameters etc., but the problem with those is that they become singular for a certain angle of rotation and can only be used in local problems [13]. Unit quaternions, on the other hand, provide a global representation which is nowhere singular.¹ This is why the unit quaternions have been utilized to represent the attitude in attitude estimation problems since the 1960's [12].

The quaternions were invented by Hamilton in 1843 as a type of complex number system with 3 complex units, i, j and k, and one real unit, i.e. q = a + bi + cj + dk, such that i² = j² = k² = ijk = −1, but he did not know the connection to rotation matrices, and nothing about spacecraft or aircraft either. A convenient way to represent quaternions is as vectors in R⁴, i.e. q = [qᵀ q₄]ᵀ, where q₄ is the scalar part corresponding to a above and q is the vector part corresponding to the vector [b, c, d]ᵀ. Following the multiplication rule we can write the multiplication of two quaternions in vector representation as

q' ∗ q = \begin{bmatrix} q_4 I_{3×3} - [q×] & q \\ -q^T & q_4 \end{bmatrix} q', (3.3)

where [q×] is the skew symmetric cross product operator given by

[q×] = \begin{bmatrix} 0 & -q_3 & q_2 \\ q_3 & 0 & -q_1 \\ -q_2 & q_1 & 0 \end{bmatrix}. (3.4)

Let A be a proper rotation matrix, let e be the eigenvector of length 1 corresponding to the unit eigenvalue, and let Φ be the rotation angle around e. The corresponding unit quaternion is given by

q = e \sin(Φ/2) (3.5a)
q_4 = \cos(Φ/2). (3.5b)

If we instead have the unit quaternion q, the rotation matrix A is given by

A(q) = (q_4^2 - \|q\|^2)I + 2qq^T - 2q_4[q×]. (3.6)

Note that the unit quaternions are a double cover of the proper rotation matrices, since q and −q represent the same rotation.

¹In fact it is also nowhere zero. The unit quaternions form a Lie group (the 3-sphere) under quaternion multiplication, which implies that it can be combed.


What is important to note here is that the rotation matrices and quaternions satisfy the identity

A(q' ∗ q) = A(q)A(q'). (3.7)

This identity follows from the definition by Hamilton, but is somewhat counter-intuitive, and this thesis, like many others on the subject of attitude estimation, will adopt the following definition of the quaternion multiplication,

q ∗ q' = \begin{bmatrix} q_4 I_{3×3} - [q×] & q \\ -q^T & q_4 \end{bmatrix} q' = Ω(q)q', (3.8)

i.e. in reverse order, giving the more intuitive identity A(q ∗ q') = A(q)A(q').
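The reversed-order convention and the identity A(q ∗ q') = A(q)A(q') can be verified numerically. The sketch below implements Ω(q) from Eqn. (3.8) and the attitude matrix from Eqn. (3.6) directly; it is an illustration of the conventions, not code from the thesis implementation.

```python
import numpy as np

def skew(q):
    """Cross product operator [q x] of Eqn. (3.4)."""
    return np.array([[0, -q[2], q[1]], [q[2], 0, -q[0]], [-q[1], q[0], 0]])

def qmult(q, qp):
    """q * q' in the reversed-order convention of Eqn. (3.8): Omega(q) q'."""
    v, s = q[:3], q[3]
    Omega = np.zeros((4, 4))
    Omega[:3, :3] = s * np.eye(3) - skew(v)
    Omega[:3, 3] = v
    Omega[3, :3] = -v
    Omega[3, 3] = s
    return Omega @ qp

def attitude(q):
    """Attitude matrix of Eqn. (3.6)."""
    v, s = q[:3], q[3]
    return (s**2 - v @ v) * np.eye(3) + 2 * np.outer(v, v) - 2 * s * skew(v)

q = np.array([0.5, 0.5, 0.5, 0.5])            # two arbitrary unit quaternions
qp = np.array([0.0, 0.6, 0.0, 0.8])
# The reversed-order product makes quaternion and matrix composition agree:
assert np.allclose(attitude(qmult(q, qp)), attitude(q) @ attitude(qp))
```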

3.3 Attitude Dynamics

The aim of this section is to construct the dynamics of the dynamic system used in the implementations of the filter. We want to find a model for how the attitude evolves in time. To find this equation we simply differentiate the attitude quaternion given by Eqn. (3.5) with respect to time,

\dot{q} = \frac{d}{dt} \begin{bmatrix} e \sin(Φ/2) \\ \cos(Φ/2) \end{bmatrix} = \frac{1}{2} ω ∗ q = \frac{1}{2} Ω(ω)q, (3.9)

where ω is the angular velocity in body coordinates. Note that the vector ω ∈ R³ is identified with the quaternion (ωᵀ, 0)ᵀ.

The measurements arrive at discrete points in time, hence we want a discrete difference equation. Formally this is given by

q_{k+1} = Φ(t_k, t_{k+1}) q_k, (3.10)

where Φ(t_k, t_{k+1}) is the state transition matrix. Although the system is non-linear and evolves in a non-linear space, the state transition matrix will exist, since there exists a proper rotation matrix in R^{4×4} taking an arbitrary unit vector onto another. By assuming that the angular acceleration \dot{ω} and the time step Δt = t_{k+1} − t_k are sufficiently small, we can make the approximation that the angular velocity ω(t) is constant for t ∈ [t_k, t_{k+1}]. If ω(t) = ω_k is constant, then the axis of rotation is given by ω_k/\|ω_k\| and the angle of rotation is given by \|ω_k\|Δt. By Eqn. (3.5) this can be rewritten as a quaternion and the difference equation is given by

q_{k+1} = \begin{bmatrix} \sin\left(\frac{\|ω_k\|Δt}{2}\right) \frac{ω_k}{\|ω_k\|} \\ \cos\left(\frac{\|ω_k\|Δt}{2}\right) \end{bmatrix} ∗ q_k = f_q(ω_k) ∗ q_k, (3.11)

where the state transition matrix is approximated by f_q(ω_k).
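Under the constant-rate assumption this gives a simple propagation routine. The sketch below combines the ∗ product of Eqn. (3.8) with Eqn. (3.11); the guard against a near-zero rate is an implementation detail added here, not part of the thesis model.

```python
import numpy as np

def qmult(q, qp):
    """Reversed-order quaternion product of Eqn. (3.8)."""
    v, s = q[:3], q[3]
    vec = s * qp[:3] - np.cross(v, qp[:3]) + qp[3] * v
    return np.concatenate([vec, [s * qp[3] - v @ qp[:3]]])

def propagate(q, omega, dt):
    """One step of Eqn. (3.11): q_{k+1} = f_q(omega_k) * q_k."""
    n = np.linalg.norm(omega)
    if n * dt < 1e-12:                 # guard against division by zero
        return q
    half = 0.5 * n * dt
    dq = np.concatenate([np.sin(half) * omega / n, [np.cos(half)]])
    return qmult(dq, q)

q0 = np.array([0.0, 0.0, 0.0, 1.0])    # identity attitude
q1 = propagate(q0, np.array([0.0, 0.0, np.pi / 2]), 1.0)   # 90 deg about z in one step
```

Propagating the identity attitude through a 90° rotation about z yields the quaternion (0, 0, sin 45°, cos 45°)ᵀ, as Eqn. (3.5) predicts.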


3.4 Gyroscope Measurement

The gyroscope measures the angular rate with some error. The noises present in a gyroscope, excluding those due to outer circumstances, are a drifting bias, some kind of random high frequency measurement noise, and flicker noise. A common way to model the noise was first introduced by Farrenkopf [7] and has been deployed in many applications [5] [4] [10] and in the SGEO satellite. The basic idea is that the angular rate measurement is disturbed by some bias term, d, which is varying like a random walk, and some white noise term, w_θ. The bias term causes a random walk in the measured rate, hence referred to as rate random walk, RRW. When the measured rate is integrated to get an angle, the white noise in the measured rate causes a random walk in the angle, hence referred to as angular/angle random walk, ARW. The flicker noise is left out due to its inability to be expressed as filtered white noise.

Let ω be the true angular velocity and let ω_M be the measured angular velocity. The model for the measured rate becomes

ω_M(t) = ω(t) + d(t) + w_θ(t) (3.12a)
\dot{d}(t) = w_d(t), (3.12b)

where the statistical properties of the noise terms are described by

E[w_θ] = 0 (3.13a)
E[w_d] = 0 (3.13b)
E[w_θ(t)w_θ(s)^T] = Q_θ δ(t − s) (3.13c)
E[w_d(t)w_d(s)^T] = Q_d δ(t − s) (3.13d)
E[w_d(t)w_θ(s)^T] = 0. (3.13e)
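A discrete-time simulation of this noise model is useful when testing a filter against known noise levels. The sketch below uses a plain Euler discretization of Eqns. (3.12); the per-axis noise densities sigma_arw and sigma_rrw and the sampling setup are invented example values, not the parameters of the actual gyroscope.

```python
import numpy as np

def simulate_gyro(omega_true, dt, sigma_arw, sigma_rrw, seed=0):
    """Simulate measured rates: omega_M = omega + d + w_theta, d-dot = w_d (Eqn. 3.12).

    sigma_arw: angle random walk density (white rate noise), rad/sqrt(s)
    sigma_rrw: rate random walk density (bias driving noise), rad/s/sqrt(s)
    """
    rng = np.random.default_rng(seed)
    n, axes = omega_true.shape
    d = np.zeros(axes)
    omega_m = np.empty_like(omega_true)
    for k in range(n):
        # Sampled white rate noise has standard deviation sigma_arw / sqrt(dt).
        w_theta = sigma_arw / np.sqrt(dt) * rng.standard_normal(axes)
        omega_m[k] = omega_true[k] + d + w_theta
        # The bias performs a random walk; each increment has std sigma_rrw * sqrt(dt).
        d = d + sigma_rrw * np.sqrt(dt) * rng.standard_normal(axes)
    return omega_m

# Example: a constant true rate about z, sampled at 10 Hz for 100 s.
omega_true = np.tile([0.0, 0.0, 0.01], (1000, 1))
omega_m = simulate_gyro(omega_true, 0.1, sigma_arw=1e-4, sigma_rrw=1e-6)
```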

3.5 Star Camera Measurements

A typical star camera looks like a tube and is fixed in the body frame of the spacecraft. This gives the camera a portion of the sky as field of view. We will assume for simplicity that the boresight of the star camera coincides with the z-axis of the body frame. In the bottom of the camera there is a flat sensor parallel to the xy-plane of the body frame. The star camera has a star catalogue that provides the directions of a number of stars in the inertial frame. Let r_j be the unit vector giving the direction of star j. The stars in the field of view trigger the sensor and the centroid of a star in the xy-plane of the body frame can be determined up to some disturbance. We will not go into how, but the star camera can identify the measured stars.

If we let p_j = (x_j, y_j)^T be the true position of star j in the xy-plane of the body frame, let \tilde{p}_j be the measured ditto and assume that the noise, ε_j, is zero mean and white,

\tilde{p}_j = p_j + ε_j. (3.14)

Assume further that b_j is the unit vector in the direction of star j in the body frame. Then

b_j = A r_j, (3.15)

where A is the attitude matrix. The relationship between b_j and p_j is given by

p_j = \begin{pmatrix} b_{j1}/b_{j3} \\ b_{j2}/b_{j3} \end{pmatrix} (3.16)

and

b_j = \frac{1}{\sqrt{p_{j1}^2 + p_{j2}^2 + 1}} \begin{pmatrix} p_{j1} \\ p_{j2} \\ 1 \end{pmatrix}, (3.17)

where b_{j3} ≥ 0 is assumed, since the field of view will most certainly be less than half the sky (180°).
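Equations (3.16) and (3.17) are inverse to each other under the b_{j3} ≥ 0 assumption, which a short numerical round trip makes clear. The sketch below is a direct transcription of the two formulas; the numeric star direction is an arbitrary example.

```python
import numpy as np

def to_plane(b):
    """Eqn. (3.16): unit vector in the body frame -> sensor-plane position."""
    return np.array([b[0] / b[2], b[1] / b[2]])

def to_unit(p):
    """Eqn. (3.17): sensor-plane position -> unit vector, assuming b_3 >= 0."""
    return np.array([p[0], p[1], 1.0]) / np.sqrt(p[0]**2 + p[1]**2 + 1.0)

b = np.array([0.1, -0.2, 0.97])
b = b / np.linalg.norm(b)                      # an arbitrary star direction, b_3 > 0
assert np.allclose(to_unit(to_plane(b)), b)    # the round trip recovers the direction
```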

Assume the positions of N stars are measured, i.e. j = 1 . . . N, to obtain the measurements

\tilde{p}_1 = p_1 + ε_1 (3.18)
⋮
\tilde{p}_N = p_N + ε_N, (3.19)

where ε_1 . . . ε_N are independent, identically distributed random variables.

What is left to find now is the attitude matrix. Ideally, if the measurements were noise free, Eqn. (3.16) could be used to rewrite p_1 . . . p_N and obtain the system

(b_1 . . . b_N) = A (r_1 . . . r_N). (3.20)

This system has one unique solution, which is the true attitude matrix. With noise present in the measurements, the overdetermined system of equations

(\tilde{b}_1 . . . \tilde{b}_N) = A (r_1 . . . r_N) (3.21)

is obtained. Since the vectors b and r always have unit norm and A is constrained to be a proper rotation matrix, the equation \tilde{b}_j = A r_j will have a unique solution ∀j, but Eqn. (3.21) is not guaranteed to have a unique solution; in fact it will most certainly not have a solution at all. The aim here is to find an approximate solution to Eqn. (3.21). Methods to solve this problem were developed in the 60's, e.g. at NASA, and one famous and widely used algorithm, called QUEST, was first proposed by Shuster [12].
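The details of QUEST are beyond the scope here, but the least-squares problem behind Eqn. (3.21), known as Wahba's problem, has a compact solution via the singular value decomposition. The sketch below uses that SVD method, a common alternative to QUEST, not the algorithm of the star camera at hand; the three star directions in the check are arbitrary example data.

```python
import numpy as np

def wahba_svd(b_vecs, r_vecs):
    """Find the proper rotation A minimizing sum ||b_j - A r_j||^2 (Wahba's problem)."""
    B = sum(np.outer(b, r) for b, r in zip(b_vecs, r_vecs))   # attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    # Force det(A) = +1 so the result is a proper rotation, not a reflection.
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt

# Noise-free check: reconstruct a known attitude from three star directions.
A_true = np.array([[0.0, 1.0, 0.0],
                   [-1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])           # rotation by 90 deg about z
r = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([0.577, 0.577, 0.577])]
b = [A_true @ rj for rj in r]
assert np.allclose(wahba_svd(b, r), A_true)
```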

The exact algorithm used in the star camera at hand is unknown, and exactly which model is used is unknown. What is known is the output quaternion y, which will be the quaternion corresponding to the approximate solution of Eqn. (3.21), and some error characteristics. For this reason, no more effort will be put into the modelling of the star camera based on the unit vector measurements. Instead it will be assumed that there exists a quaternion δv reflecting the measurement noise in some way, i.e.

y_k = δv_k ∗ q_k. (3.22)

This model is naive in all senses and may impose problems to deal with later.


Chapter 4

Filter Design and Implementation

The necessary theory is now in place in order to continue on to filter design and implementation. The following types of filters for attitude estimation are implemented.

• Extended Kalman Filter

• Unscented Kalman Filter

• Particle Filter

The foundations for these three types of filters are described in Section 2.3, but there is no such thing as The Extended Kalman Filter. Every system has to be handled in a unique way. The same holds for UKF and PF type filters.

4.1 The Unit Quaternions Form a Non-Linear Manifold

Perhaps the most difficult problem to deal with is the fact that the set of unit quaternions is not a vector space, since it is not closed under addition. The filters described in Section 2.3 are all designed under the assumption that the state evolves in some vector space. If the quaternions are used directly as state variables, problems will arise. In the EKF this fact imposes two problems: the update from the a priori estimate to the a posteriori estimate includes an addition of unit quaternions, and addition of attitude quaternions does not make any sense. Also, since one of the components of the quaternion is completely determined by the other three, the covariance of the quaternion part of the state will only have rank 3 and not 4, i.e. the covariance will be singular. Working with singular matrices is not preferable since it introduces large numerical errors.


One way to circumvent the problem of the non-linear state space is to use a parametrized error quaternion as state variable [10]. The error quaternion of the estimated attitude quaternion \hat{q} is the small quaternion δq which is the distance between the true quaternion q and the estimate, i.e.

δq = q ∗ \hat{q}^{-1}. (4.1)

The parametrization of the error quaternion into a vector in R³ has one big flaw: it can become singular. This is why the three-parameter parametrization is not used as a global attitude representation, but if we assume that the error is always small, i.e. the error quaternion is close to the identity, we can get a local representation of the error in R³.

4.2 Parametrized Error Quaternions

Under the assumption that the error is small, a three-parameter parametrization of the attitude error will not become singular. Let δp be the parametrized error quaternion. There are several ways to parametrize the error quaternion, each with advantages and disadvantages [10]. The two used in the following sections are the Generalized Rodrigues Parameters (GRP) and the Gibbs vector.

The Gibbs vector is defined by

a_{Gibbs} = 2e \tan\left(\frac{Φ}{2}\right) (4.2)

or via the quaternion q

a_{Gibbs} = \frac{2q}{q_4} (4.3)

and the Generalized Rodrigues Parameters are defined by

a_{GRP} = \frac{f q}{a + q_4}, (4.4)

where a ∈ [0, 1] and f is some scale factor. In the case when a = 0 and f = 2, a_{Gibbs} = a_{GRP}.
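The two parametrizations are easy to compare numerically. The sketch below computes both from a small error quaternion and checks that the choice a = 0, f = 2 reproduces the Gibbs vector, and that for small errors the Gibbs vector approximates the rotation vector eΦ:

```python
import numpy as np

def gibbs(dq):
    """Gibbs vector of Eqn. (4.3): 2 q / q_4."""
    return 2.0 * dq[:3] / dq[3]

def grp(dq, a, f):
    """Generalized Rodrigues Parameters of Eqn. (4.4): f q / (a + q_4)."""
    return f * dq[:3] / (a + dq[3])

# Small error: a 0.01 rad rotation about the x-axis, via Eqn. (3.5).
phi = 0.01
dq = np.array([np.sin(phi / 2), 0.0, 0.0, np.cos(phi / 2)])
assert np.allclose(grp(dq, a=0.0, f=2.0), gibbs(dq))        # a=0, f=2 gives the Gibbs vector
assert np.allclose(gibbs(dq), [phi, 0.0, 0.0], atol=1e-6)   # ~ e*phi for small angles
```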

4.3 (Multiplicative) Extended Kalman Filter

In this section the EKF applied to the problem of attitude estimation is described. The general form of the Extended Kalman Filter is given in Eqns. (2.18) and (2.19) on page 12. The formulation in this thesis is based on the Multiplicative Extended Kalman Filter described by Markley [10].

The algorithm is stated in detail in Appendix A.


4.3.1 Time Update

As described in Section 2.3.3 the EKF is based on the assumption that the expected value of a function is the same as the function of the expected value of the variable. Therefore the a priori update of the attitude is straightforward,

\hat{q}_{k+1|k} = \begin{bmatrix} \sin\left(\frac{\|\hat{ω}_{k|k}\|Δt}{2}\right) \frac{\hat{ω}_{k|k}}{\|\hat{ω}_{k|k}\|} \\ \cos\left(\frac{\|\hat{ω}_{k|k}\|Δt}{2}\right) \end{bmatrix} ∗ \hat{q}_{k|k} (4.5a)
= f_q(\hat{ω}_{k|k}) ∗ \hat{q}_{k|k}, (4.5b)

where \hat{ω}_{k|k} = ω_M − \hat{d}_{k|k}. The update of the bias is straightforward since the function is linear,

\hat{d}_{k+1|k} = \hat{d}_{k|k}. (4.6)

Regarded as stochastic processes, the attitude quaternion components are dependent, since the fourth is determined by the other three up to a sign; hence the covariance matrix will be singular, which can impose numerical errors. Since the attitude quaternions are not closed under addition, the additive state measurement update does not apply. For these reasons the error quaternion, δq, is introduced as state variable. The error quaternion is defined as the quaternion rotating the attitude quaternion estimate \hat{q} into the real attitude quaternion q, i.e. δq = q ∗ \hat{q}^{-1}. Assume that the a posteriori estimate of the attitude at time t_{k-1} with all available information is known; then the estimated error of that estimate is zero. Let \hat{q}_{k|k-1} be the reference quaternion used to define the error quaternion

δq_k = q_k ∗ \hat{q}_{k|k-1}^{-1}. (4.7)

Under the EKF assumption the a priori estimate of the error is given by

δ\hat{q}_{k|k-1} = E\left[ q_k ∗ \hat{q}_{k|k-1}^{-1} \,\middle|\, y_0, . . . , y_{k-1} \right] (4.8)
= E\left[ q_k \,\middle|\, y_0, . . . , y_{k-1} \right] ∗ \hat{q}_{k|k-1}^{-1} (4.9)
= \hat{q}_{k|k-1} ∗ \hat{q}_{k|k-1}^{-1} = (0, 0, 0, 1)^T. (4.10)

This implies that δ\hat{p}_{k|k-1} = [0, 0, 0]^T, independent of the choice of parametrization. Hence the parametrization of the error quaternion can be chosen such that it has a convenient covariance expression. The Gibbs vector parametrization is useful since it has a closed expression for the dynamics that is easy to linearise [10]. The Gibbs vector will be singular for q_4 = 0, or equivalently Φ = 180°. This represents a large rotation, and by the assumption that the error is small this case will not occur. Hence, let δp(t) = \frac{2 δq(t)}{δq_4(t)}.
