
Institutionen för systemteknik
Department of Electrical Engineering

Master's thesis (Examensarbete)

Sensor Fusion Navigation for Sounding Rocket Applications

Master's thesis carried out in Automatic Control at Linköping Institute of Technology
by
Mattias Nilsson and Rikard Vinkvist

LiTH-ISY-EX--08/4009--SE

Linköping 2008

Department of Electrical Engineering, Linköpings tekniska högskola, Linköpings universitet


Sensor Fusion Navigation for Sounding Rocket Applications

Master's thesis carried out in Automatic Control at Linköping Institute of Technology
by
Mattias Nilsson and Rikard Vinkvist

LiTH-ISY-EX--08/4009--SE

Supervisors: Johanna Wallén, ISY, Linköpings universitet
             Anders Helmersson, Saab Space
Examiner: Fredrik Gustafsson, ISY, Linköpings universitet


Avdelning, Institution / Division, Department:
Division of Automatic Control, Department of Electrical Engineering,
Linköpings universitet, SE-581 83 Linköping, Sweden

Datum / Date: 2008-10-05
Språk / Language: Engelska / English
Rapporttyp / Report category: Examensarbete
ISRN: LiTH-ISY-EX--08/4009--SE
URL för elektronisk version: http://www.control.isy.liu.se, http://www.ep.liu.se

Titel / Title:
Navigering med sensorfusion i en sondraket
Sensor Fusion Navigation for Sounding Rocket Applications

Författare / Author: Mattias Nilsson and Rikard Vinkvist

Abstract

One of Saab Space's products is the S19 guidance system for sounding rockets. Today this system is based on an inertial navigation system that blindly calculates the position of the rocket by integrating sensor readings with unknown bias. The purpose of this thesis is to integrate a Global Positioning System (GPS) receiver into the guidance system to increase precision and robustness. There are mainly two problems involved in this integration. One is to integrate the GPS with sensor fusion into the existing guidance system. The second is to get the GPS satellite tracking to work under extremely high dynamics. The first of the two problems is solved by using an Extended Kalman filter (EKF) with two different linearizations. One uses Euler angles and the other quaternions. The integration technique implemented in this thesis is a loose integration between the GPS receiver and the inertial navigation system. The main task of the EKF is to estimate the bias of the inertial navigation system sensors and correct it to eliminate drift in the position. The solution is verified by computing the position of a car using a GPS and an inertial measurement unit. Different solutions to the GPS tracking problem are proposed in a pre-study.

Sammanfattning

One of Saab Space's products is the S19 navigation system, which guides sounding rockets. Until today the system has been based on an inertial navigation system that blindly calculates the position by integrating the inertial sensors, which have unknown biases. The purpose of this thesis is to integrate a GPS with the inertial navigation system to increase robustness and precision. The work can essentially be divided into two problems: integrating a GPS receiver with the existing navigation system using sensor fusion, and getting the satellite tracking to work under extremely high dynamic conditions. The first of the two problems is solved with an Extended Kalman filter (EKF) with two different linearizations. The first linearization uses Euler angles and is well proven; the second uses quaternions. The integration technique implemented in this thesis is a loose integration between the GPS receiver and the inertial navigation system. The main purpose of the EKF is to estimate the biases in the inertial navigation system sensors and correct for them in order to eliminate drift in the position. The solution is verified by computing the position of a car with a GPS and an inertial measurement unit. Different solutions to the satellite tracking problem are proposed in a pre-study.


Acknowledgments

First we would like to thank Anders Helmersson at Saab Space for his help and support during our work. We would also like to thank our supervisor Johanna Wallén and our examiner Fredrik Gustafsson at the Department of Electrical Engineering. Last but not least we would like to thank FOI and Fredrik Eklöf for lending us some valuable equipment critical to our work.


Contents

1 Introduction
  1.1 Objective
  1.2 Problems and Methods
  1.3 Previous Research

2 Inertial Navigation
  2.1 Coordinate Frames
  2.2 Inertial Measurement
  2.3 Choosing Reference Frame
  2.4 Strap Down Navigation

3 Global Positioning System
  3.1 Brief History
  3.2 Calculating the Position
  3.3 The GPS Signal

4 GPS Receiver
  4.1 The Design
  4.2 Carrier Tracking
  4.3 Code Tracking
  4.4 Loop Filters
  4.5 GPS Receiver Hardware

5 GPS INS Integration
  5.1 Loose Integration
  5.2 Classic Tight Integration
  5.3 Tight Alternative Integration
  5.4 Deep Integration

6 Navigation Equations
  6.1 Notation
  6.2 Deriving the Navigation Equations
  6.3 Linearization with Quaternion Angle Representation
  6.4 Linearization with Euler Angles
  6.5 Measurement Equation

7 The Extended Kalman Filter
  7.1 Discretization of a Linear System
  7.2 Discretization of a Nonlinear System
  7.3 Extended Kalman Equations
  7.4 The Measurement Update
  7.5 Implementation

8 Hardware Used for Testing and Verification of the System
  8.1 Proposed Hardware
  8.2 A Complete Integration Test of the System

9 Results
  9.1 Test Procedure
  9.2 Filter Tuning
  9.3 Estimating Position
  9.4 Attitude Estimation Performance
  9.5 Velocity Estimation Performance
  9.6 Bias Estimation Performance
  9.7 GPS Interruption

10 Conclusions and Future Work
  10.1 Conclusions
  10.2 Future Work

Bibliography

A Quaternions
  A.1 Quaternion Operations
  A.2 Quaternion Rotation
  A.3 Quaternion to Rotation Matrix

B Euler Angles
  B.1 Euler Angle Rotation
  B.2 Euler Angle to Quaternion Conversion


Chapter 1

Introduction

Saab Space has been developing and producing systems for several differentiated space programs since its start in 1983. One of these systems is the S19 guidance system, whose purpose is to guide sounding rockets during the acceleration phase. A sounding rocket is an instrument-carrying research rocket designed to take measurements and perform scientific experiments during its sub-orbital flight [26]. This guidance system has been used with success in more than 200 NASA launches and is also used in the launching of the MAXUS1 rockets at Esrange, Kiruna, Sweden. The S19 family of guidance systems has proved to be reliable and keeps evolving with new capabilities.

An important part of these systems is to track the sounding rocket and calculate the flight course, thus making sure that the rocket will land in an isolated and restricted area. To track these rockets an Inertial Navigation System (INS), made up of several Inertial Measurement Units (IMU), has been used to measure the acceleration and calculate the velocity and position. This would be a perfectly good system if it were not for the fact that measurement errors introduce faults in the calculations. These faults make it hard to pinpoint the exact position of the rocket. The faults can be kept to a minimum with the help of Global Positioning System (GPS) measurement data and a sensor fusion solution that corrects the INS calculations if drifts occur.

1.1 Objective

The objective of this Master's thesis is to investigate and propose a design for an integrated GPS INS solution to be used in the positioning of a rocket. The thesis proposes two different Extended Kalman filters using so-called loose integration. One is based on Euler angle linearization, while the other is based on quaternion linearization. A study on how to develop a GPS receiver capable of withstanding the high dynamics of a sounding rocket is also presented.

1. A sounding rocket developed by Swedish Space Corporation (SSC) and Astrium Space Transportation [7].


1.2 Problems and Methods

This section presents the problems that exist when integrating GPS and INS to navigate in a high dynamic application such as a rocket flight, and what methods could be used to manage these problems.

Integration of GPS and INS GPS and INS are two different systems used in navigation. The GPS is quite slow, two to three updates per second, but produces a positioning error that does not grow with time. The INS is very fast, about 200 measurements per second, but when integrating acceleration to form velocity, small errors are introduced into the system. The errors propagate with each calculation, thus making the system more and more unreliable over time. The different strengths and weaknesses of the GPS and INS can be used to create a more robust and precise navigation system. The integration of the two systems is performed by a Kalman filter using INS and GPS data. Further information about the integration and an introduction to various integration techniques can be found in Chapters 5 and 7. The integration technique implemented in this thesis is the loose integration, which is explained in Chapter 5.

Using GPS in a High Dynamics Environment During a sounding rocket flight, high acceleration and deceleration often appear. For example, in the launch and flight of a MAXUS rocket the acceleration reaches about 130 m/s² (13 g), while in the reentry phase a deceleration of about 40 g affects the rocket. With these quick changes in velocity a regular GPS receiver will not be able to adapt to the satellite signals, and the receiver will lose its position fix. The reason a GPS receiver loses track is that when the velocity of the rocket suddenly changes, so does the Doppler shift of the satellites' GPS signals. A GPS receiver always tries to adapt to Doppler shifts in the signal and normally copes with the changes, but in the case of a rocket flight these changes are simply too large for a regular GPS receiver. More details on GPS receivers and their design can be found in Chapter 4.

1.3 Previous Research

In this section, earlier research on tracking loops for high dynamics and on different techniques for GPS INS integration is introduced.

Loop Filter based Carrier Tracking Loops Recent related research was done in 2006 [23], where a Frequency-Lock Loop (FLL) assisted Phase-Lock Loop (PLL), see Section 4.2, was constructed and tested. The designed loops were implemented in a GPS receiver and tested with frequency modulated radio signals. Simulations and experimental results confirmed that with a careful design, the digital loops can expand their tracking capacity up to acceleration steps of 40 g.



The work was based on [24], where the performance of FLL assisted PLLs was investigated.

The Space Flight Technology department of Deutsches Zentrum für Luft- und Raumfahrt (DLR), Germany, also constructed FLL assisted PLL algorithms to be used in their high dynamics GPS receiver Phoenix HD [20]. They use a wide-band third order PLL with FLL assist. Their algorithms were also previously tested in [21] with an older GPS receiver (GPS Orion) used in a MAXUS rocket, with good results. A comparison between that receiver and other high dynamics GPS receivers was done in [4], where the Orion GPS excelled.

GPS INS Integration Techniques There are many ways to integrate the GPS and INS systems, but most of them can be grouped into three levels of integration: loose, tight and deep integration. These levels can be further differentiated; for example, tight integration can be either classical or alternative. There has been quite extensive research on integration methods combining INS and GPS since the launch of GNSS. Current research aims at deep (often called ultra-tight) integration, where the INS is deeply integrated with the carrier and code loops of the GPS to gain even more robust signal tracking than tight integration. The different methods of integration are further described in Chapter 5.


Chapter 2

Inertial Navigation

The basic idea of inertial navigation is to measure the motion relative to a given point in space in order to compute the actual position. It can also be described as a formula: let R(t) be a vector that points to the actual position and v(τ) be the actual velocity, then the position can be described as

R(t) = \int_0^t v(\tau)\, d\tau + R(0)    (2.1)

The relation is based on Newton's laws of motion, which are stated below. Isaac Newton's first and second laws of motion, as found in [16], state:

1. A body in motion tends to maintain its motion unless acted on by a force.
2. a = F/m

Newton's laws of motion are valid in a reference frame that is neither rotating nor accelerating. The origin of the inertial coordinate system is arbitrary, and the axes may point in any three perpendicular directions. It is however convenient to consider an inertial frame of reference with its origin in the center of the Earth, since we are only moving in close proximity to the Earth. By placing the frame in the center of the Earth, it is no longer a true inertial frame, mainly due to the gravity field of the Earth. This new frame is called the Earth centered inertial frame (ECI) and is an approximate inertial frame. In ECI a body is considered to be at rest if it is in free fall, or equivalently accelerating with g toward the center of the Earth.

2.1 Coordinate Frames

Position and movements can be presented in a number of ways each with its own coordinate system. In this section the different coordinate systems used in this master’s thesis are presented. These descriptions are a summary of the frames explained in [9].


ECI (Earth Centered Inertial) The ECI is a frame with its origin in the center of the Earth. The z-axis is parallel with the rotation axis of the Earth. The y- and x-axis span the equator plane of the Earth and the x-axis points at the vernal equinox. Figure 2.1 shows the ECI system.

Figure 2.1. The Earth Centered Inertial frame. The z-axis is parallel to the rotation axis of the Earth, the x- and y-axes span the equatorial plane and the x-axis points at the vernal equinox.

ECEF (Earth Centered Earth Fixed) The Earth Centered Earth Fixed frame (ECEF) is a frame with its origin in the center of the Earth. The z-axis is parallel with the rotation axis of the Earth. The y- and x-axis span the equator plane of the Earth and the x-axis points at the Greenwich meridian. This frame, unlike the ECI, rotates with the Earth which means that for example, Linköping will have a fixed position (x, y, z) in the ECEF regardless of the movement of the Earth. Figure 2.2 shows the ECEF frame.

B-frame (The Body Fixed Frame) The Body Fixed Frame (B-frame) is a frame that is fixed to the host vehicle. Figure 2.3 shows the Body fixed frame.

2.2 Inertial Measurement

In an inertial navigation unit there are typically two sensors, a gyro and an accelerometer. The accelerometer measures the accelerations applied to its acceleration-sensitive axes relative to the inertial frame. The gyro measures the angular rates around its three axes relative to the inertial frame.

Figure 2.2. The Earth Centered Earth Fixed frame. The z-axis is parallel to the rotation axis of the Earth, the x- and y-axes span the equatorial plane and the x-axis points at the Greenwich meridian. This frame also rotates with the Earth.

Figure 2.3. The Body fixed frame. This frame is fixed to the vehicle.

When the accelerometer is in close proximity to the Earth, it measures the accelerations relative to the ECI. This means that if the accelerometer is placed on the surface of the Earth, lying completely still, it will measure an acceleration of 1 g radially out from the Earth. If the accelerometer falls to the ground it will measure zero force on all axes before it hits the ground (in an ideal case). To compensate for the gravity effects on the accelerometer a gravity model is needed. The gravity model subtracts 1 g of acceleration toward the Earth to compensate for the gravity field effects on the accelerometer. The gyro, however, does not need to be compensated for gravity effects.


2.3 Choosing Reference Frame

It is often desired to navigate relative to the Earth. Choosing the ECEF frame as the reference frame results in Earth-relative navigation. A few more compensations must then be made due to the rotation of the Earth. The centripetal force and the Coriolis force must be compensated for in the accelerometer, and the gyro must be compensated for the rotation of the Earth. The equations for the ECEF compensations as well as the corresponding gravity model are presented in Chapter 6.

2.4 Strap Down Navigation

By measuring the accelerations acting on the accelerometer, then compensating for the gravity field and the forces due to the rotation of the Earth, we can integrate the accelerations twice to determine the position in the ECEF frame. If the accelerometer is perfectly aligned with the ECEF frame at all times, the double integration of the accelerations is all the computation needed to determine the position. This method of keeping the accelerometer aligned with the reference frame, and thus completely isolated from the attitude of the host vehicle, is called a gimbaled implementation of inertial navigation. Bolting the accelerometer to the chassis of the host vehicle and adding a gyro is called strap down navigation. This means that the accelerometer is now aligned with the time-dependent B-frame of the host vehicle. In order to determine the position in the reference frame, the attitude of the B-frame relative to the reference frame must be known at all times. The attitude can be computed by integrating the angular rates provided by the gyro. Knowing the attitude of the B-frame, the accelerations measured by the B-frame aligned accelerometer can be transformed to the reference frame and used to compute the position.
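As a concrete illustration of the strap-down principle just described, the sketch below integrates gyro rates into a quaternion attitude, rotates the measured specific force into the reference frame, compensates for gravity and dead-reckons velocity and position. It is a minimal sketch only: it assumes an ideal IMU, a simple constant gravity vector instead of the gravity model and Earth-rotation compensations of Chapter 6, and first-order Euler integration; all names and step sizes are illustrative.

```python
import numpy as np

def quat_mult(p, q):
    # Hamilton product of two quaternions [w, x, y, z]
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_to_dcm(q):
    # Rotation matrix transforming B-frame vectors into the reference frame
    w, x, y, z = q
    return np.array([
        [w*w + x*x - y*y - z*z, 2*(x*y - w*z),         2*(x*z + w*y)],
        [2*(x*y + w*z),         w*w - x*x + y*y - z*z, 2*(y*z - w*x)],
        [2*(x*z - w*y),         2*(y*z + w*x),         w*w - x*x - y*y + z*z],
    ])

def strapdown_step(pos, vel, q, f_b, omega_b, dt, g=np.array([0.0, 0.0, -9.81])):
    """One strap-down update: integrate the gyro into the attitude, rotate the
    specific force to the reference frame, add gravity, then integrate twice."""
    # Attitude update from the gyro (first-order quaternion integration)
    dq = 0.5 * quat_mult(q, np.concatenate(([0.0], omega_b)))
    q = q + dq * dt
    q = q / np.linalg.norm(q)              # keep the quaternion normalized
    # Transform the measured specific force and compensate for gravity
    a_ref = quat_to_dcm(q) @ f_b + g
    # Dead-reckon velocity and position
    vel = vel + a_ref * dt
    pos = pos + vel * dt
    return pos, vel, q
```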


Chapter 3

Global Positioning System

This chapter introduces the Global Positioning System (GPS), with a short historical description followed by a description of the technical properties of the system.

3.1 Brief History

The concept of NAVSTAR GPS was formed in the wake of the Defense Navigation Satellite System (DNSS) program, which was established in 1969 by the Office of the Secretary of Defense (OSD). This was in response to the need in the United States (US) military for a navigation system that could be used by high dynamics vehicles such as aircraft. The former US president Ronald Reagan announced in 1983 that the system would be declassified, which meant that the system would move from purely military applications to include civilian ones. This was done after a Korean civilian airplane carrying 269 people accidentally crossed the Soviet border twice and was shot down by Soviet jets. The GPS signal that the civilian GPS receivers got access to was intentionally degraded with an error called Selective Availability (SA). SA introduced a random fault in the GPS signal that lowers the resolution of the position if you do not possess a certain encryption key. This key was only available to the US military and its allies, and it changed daily. In 2000, after an executive order by US president Clinton, SA was deactivated and has remained so to the present day.

The GPS is a navigation instrument that provides users the ability to calculate their position and course worldwide. To be able to cover the whole world, the system originally consisted of 24 operational satellites (as shown in Figure 3.1) orbiting the Earth at an altitude of about 20200 km. Today the system contains an additional eight satellites, increasing the number of operational satellites to 32. The increase provides the system with redundant hardware, thus making it more precise and more robust against degradation of the signals caused by malfunctioning satellites.

Figure 3.1. The Global Navigation Satellite System (GNSS) constellation of satellites [10].

The satellites are continuously sending out information that can be obtained if you are within range and in possession of a GPS receiver. The GPS receiver needs to obtain signals from at least four different satellites to be able to determine the position. If more than four signals from different satellites can be obtained, the positioning error decreases as the receiver can pick and choose the best signals.

3.2 Calculating the Position

To calculate the receiver's position, it must find four unknown values: the position (x, y, z) and the clock error. The receiver calculates its distance from every satellite that it has obtained a signal from. This is done by looking at when the satellite sent the signal and when it was obtained by the receiver. If one has the precise time, one can easily calculate the distance by multiplying the time it took for the signal to travel from the satellite to the receiver with the speed of light. The calculations needed to determine the position form a 3-D trilateration algorithm. It is essentially a triangulation, but instead of circles, spheres are used. At least three different satellites are needed for the algorithm, and thus the distances to three satellites. If the receiver knows the distance to one satellite, it knows that its position is within a sphere with a radius equal to that distance and with the satellite at its center (see Figure 3.2, sphere 1). If a second satellite is added, creating two of these spheres, the receiver knows that its position is somewhere on the resulting ring (the resulting line in Figure 3.2, sphere 2). By introducing the distance to a third satellite, the receiver can narrow down its search area to two points (see Figure 3.2, sphere 3). One of the points will be located in space while the other is on the surface, meaning that the receiver now knows its position.



Calculating the time it took for the signal to reach the position of the receiver would require the receiver to have a very precise clock. Such precise clocks are very expensive; for example, an atomic clock costs from $50,000 to $100,000. This problem is solved in quite a clever way. Every satellite has a precise atomic clock, while the receivers have inexpensive quartz clocks. The quartz clock is reset constantly during the receiver's operation. What the receiver does is calculate the unwanted time offset between the atomic clocks and the quartz clock. This is done by calculating the time value that makes the four satellite spheres align in a single point in space. The calculations are done in the same way as those used in determining the position of the receiver, but this time a fourth satellite is needed to determine which of the two points is the right one. The receiver now knows the exact time offset and has, in essence, transformed the quartz clock into a cheap atomic one.
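The clock-offset reasoning above can be made concrete with a small iterated least-squares solver: the receiver position and the clock bias (expressed in meters) are solved for jointly from four or more pseudoranges. This is a sketch of the general idea, not the algorithm of any particular receiver; satellite positions and measurements are assumed given in ECEF coordinates.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def solve_position(sat_pos, pseudoranges, x0=np.zeros(3), b0=0.0, iters=8):
    """Iterated least squares for receiver position x and clock bias b (in meters).
    sat_pos: (N, 3) ECEF satellite positions, pseudoranges: (N,) measured ranges."""
    x, b = x0.astype(float), float(b0)
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x, axis=1)
        rho_hat = ranges + b                       # predicted pseudoranges
        # Jacobian: negated unit line-of-sight vectors, and 1 for the clock state
        H = np.hstack([(x - sat_pos) / ranges[:, None], np.ones((len(ranges), 1))])
        dz = pseudoranges - rho_hat
        dx, *_ = np.linalg.lstsq(H, dz, rcond=None)
        x, b = x + dx[:3], b + dx[3]
    return x, b / C                                # position [m] and clock error [s]
```

With exactly four satellites the system is determined; with more, the least-squares step exploits the redundancy, which matches the earlier remark that additional satellites reduce the positioning error.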

3.3 The GPS Signal

The signal that the GPS satellites are continuously sending is created by combining three different signals: the Pseudo Random Noise (PRN) code, the navigation message and the carrier signal. In civilian applications, that is outside the US military, the PRN code is the Coarse Acquisition (C/A) code. Every satellite has its own predetermined C/A code, which makes it possible for all satellites to use the same frequency without destroying each other's messages. First the C/A code is combined with the navigation message, and the resulting signal is then combined with the carrier signal, creating the final GPS signal.

One could, using an analogy, see the carrier signal as a large box where the message (navigation data) should be put. Every sender (satellite) sends its message in the same large box, with the only difference being where they put the message. The purpose of the C/A code is to inform where every sender should place its message. To find a message the receiver searches through all the potential places in the box where the message might be. If the receiver then finds the message, it also knows the identity of the sender.

The navigation data is transmitted with a bit rate of 50 bps. The data is 1500 bits long, which means that 30 seconds are needed to receive one complete frame of data. Five different sub-frames form the complete navigation message, each one 300 bits long. The first sub-frame contains information about the sending satellite's GPS week number, its health and accuracy, and its clock correction terms. The second and third sub-frames contain all the ephemeris data. Sub-frame number four contains almanac and health data for satellites number 25-32, special messages, satellite configuration flags, ionospheric and UTC data. The fifth sub-frame contains almanac and health data for satellites number 1-24, almanac reference time and week number. The last two sub-frames are composed of 25 different pages. These pages are cycled with each navigation data transmission, so in order to get the full data set from the satellites you will need to intercept 25 complete frames. It takes 12.5 minutes to attain a whole set of frames.

The GPS satellites broadcast signals on two different carrier frequencies, L1 and L2. The L1 frequency is for everyone, while L2 is strictly for the military. The L2 frequency is only transmitted with the P-code, while the L1 frequency transmits both the C/A and P-code. For the task of decrypting the military transmissions, specialized hardware is needed. There are two variants of the P-code: one is simply called P-code, while the other is a special form of the P-code, called Y-code, which is used to protect against false transmissions. In this application the L1 frequency is used. A good overview of how the navigation data, the different codes and the carrier signals form the GPS signal is given by Figure 3.3.



Figure 3.2. The figure pictures the three steps needed for the GPS receiver to calculate its position. If one knows the distance to one satellite, one knows that the possible position solutions of the receiver lie on a sphere with the satellite in its center. Knowing the distance to two satellites and then calculating the intersection narrows the position solutions to a circle. If a third satellite is used, one can calculate the intersection between its sphere and the earlier calculated circle to narrow the possible position solutions down to two points. One of the points is located in space while the other is near the surface.


Figure 3.3. The figure explains how the navigation data, the different codes and the carrier signals are mixed to form the GPS signal.


Chapter 4

GPS Receiver

To be able to obtain the signals that the GPS satellites broadcast, a GPS receiver is needed. This chapter gives an introduction to the GPS receiver design and a more in-depth study of the parts that are needed when designing a GPS receiver for use in high dynamics applications.

4.1 The Design

The functions of the GPS receiver can be divided into three groups, each constituted of several functions and components. Working together, the sub-groups make it possible to extract the important navigation data from the GPS signal. First there is the front-end of the receiver, where the GPS signal is down-converted to a digital intermediate frequency (IF) signal. Basically this means that the GPS signal is converted from a 1575.42 MHz analog signal to a digital signal with a much lower frequency, often in the range of about 4 MHz. After the conversion the data extraction begins, and this is where the second group makes its entry. This group contains the functions that help the receiver track the signal and extract data. The mission of each channel is to lock onto the unique signal of one satellite with the help of the carrier tracking and then extract the navigation data with the code tracking.

The data attained from the tracking functions is then processed by the functions that constitute the third group. These functions process the data to calculate the correct position of the receiver. Figure 4.1 shows a simplified overview of the complete process.


Figure 4.1. Overview of a simplified GPS receiver. The antenna receives the 1575.42 MHz GPS signal and sends it to the group of functions that constitute the front-end (preamp, down converter, A/D converter and AGC). The front-end samples the analog signal and converts it to a 4 MHz digital signal (the digital IF signal in the picture above). The digital signal is then processed by the receiver channels and receiver processing to extract the data (clock error, pseudo range) needed by the navigation processing functions to calculate a position estimate.

4.2 Carrier Tracking

Before any extraction of navigation data can take place, the receiver must wipe off the carrier frequency from the signal. This is done by creating a replica of the incoming carrier signal, see Figure 4.2. In order to do this, a good tracking is necessary. The tracking of the carrier signal is where the problems of using the GPS receiver in a high dynamic environment will be the most prominent. The carrier tracking is managed by a carrier tracking loop, which is illustrated in Figure 4.2. A carrier tracking loop is made up of several different parts but the most important ones are the carrier predetection integrators, the carrier loop discriminator and the carrier loop filters.

The discriminator determines what kind of tracking loop is used. The tracking loop can be either a phase-lock loop (PLL) or a frequency-lock loop (FLL). The two different tracking loop discriminators can also be used in conjunction to solve their inherent flaws. Information about these flaws and the characteristics of the PLL and FLL can be found later in this section. The discriminator uses the I and Q signals, see Figure 4.2, together with the chosen discriminator algorithm to create an output error. The I and Q signals are created by the carrier wipe-off functions by multiplying the digital IF signal with a cosine and a sine replica signal, respectively. For an FLL to create a frequency error it needs two pairs of I and Q samples taken at two adjacent moments in time. In Table 4.2 these signals are called I1, I2, Q1 and Q2. In the high dynamics present in the launch and flight of a sounding rocket, a short predetection integration time and a wide bandwidth are needed to handle the stress. With a short integration time the measurements will be faster, which can be crucial when the rocket is moving at several kilometers per second. A wide bandwidth will provide a better signal.



On the other hand a narrow bandwidth and a longer integration time would mean better Doppler phase measurements as the noise in the signal is lower.

Figure 4.2. The different parts and layout that make up a tracking loop (carrier wipe-off, predetection integrators, discriminator, loop filter and numerically controlled oscillator).

Phase-Lock Loop The Phase-Lock Loop (PLL) is one of the most commonly used carrier tracking techniques. The PLLs used in GPS carrier tracking are usually Costas-type PLLs [11]. The Costas PLL can use several different discriminator algorithms, as shown in Table 4.1. The phase error adjusts the numerically controlled oscillator (NCO) period after being filtered in the carrier loop filter. This decreases the phase error, giving a better replica wave to represent the incoming sine wave.

A PLL produces accurate phase measurements, which are important in the aiding of the code loop, but it is sensitive to dynamic stress such as acceleration and jerk. A PLL can be made more robust against dynamic stress with a good loop filter, although this is not enough to handle the amount of stress present in the launch and flight of a sounding rocket. PLLs are sensitive to dynamic stress, but produce the most accurate velocity measurements thanks to their good phase measurements. The inherent sensitivity of PLLs to high dynamic stress, caused by the narrow bandwidth, disqualifies them for tracking a launched rocket.

Frequency-Lock Loop The ultimate goal of the Frequency-Lock Loops (FLLs) used in GPS receivers is the same as for the PLLs. FLL discriminators have a short predetection integration time, and the FLL is therefore well adapted to handling dynamic stress. Even though it is faster in tracking the carrier frequency, it has more difficulty measuring the carrier Doppler phase than the PLL because of the inherent trouble in detecting a change of sign of the phase. This difficulty means that the FLL produces noisier error outputs than the PLL, which translates to degraded accuracy in comparison. The inherent accuracy problem of FLL discriminators could create quite large position discrepancies when used on a very fast vehicle, and the FLL is therefore a poor choice if used alone.


Table 4.1. A table presenting the most commonly used PLL discriminator algorithms

Q · I (output phase error sin 2φ): Classic Costas analog discriminator. Near optimal at low signal-to-noise ratio (SNR). Slope proportional to signal amplitude squared. Moderate computational burden.

Q · sign(I) (output phase error sin φ): Decision-directed Costas. Near optimal at high SNR. Slope proportional to signal amplitude. Least computational burden.

Q/I (output phase error tan φ): Suboptimal but good at high and low SNR. Slope not signal amplitude dependent. Higher computational burden. Divide-by-zero error at ±90°.

arctan(Q/I) (output phase error φ): Two-quadrant arctangent. Optimal (maximum likelihood estimator) at high and low SNR. Slope not signal amplitude dependent. Highest computational burden.

Examples of common FLL discriminators can be found in Table 4.2.

Table 4.2. A table presenting the most commonly used FLL discriminator algorithms, where A = I1·Q2 − I2·Q1 and B = I1·I2 + Q1·Q2

A/(t2 − t1) (output frequency error sin(φ2 − φ1)): Near optimal at low SNR. Slope proportional to signal amplitude squared. Least computational burden.

A·sign(B)/(t2 − t1) (output frequency error sin 2(φ2 − φ1)): Decision directed. Near optimal at high SNR. Slope proportional to signal amplitude. Moderate computational burden.

arctan2(B, A)/(t2 − t1) (output frequency error (φ2 − φ1)/(t2 − t1)): Four-quadrant arctangent. Maximum likelihood estimator. Optimal at high and low SNR. Slope not signal amplitude dependent. Highest computational burden. Usually a table lookup implementation.

Combining FLL and PLL When comparing the FLL and PLL characteristics one can see that by combining their strengths, one would have a lock loop that can lock onto a signal with good phase accuracy and still work under high dynamic stress. This is the goal of the FLL assisted PLL, see Figure 4.3. The combined FLL and PLL works in a coupled mode where the FLL performs the signal acquisition in high dynamics situations and then helps the PLL to achieve a phase lock. The acquisition could also be improved by using a carefully designed FLL alone and then reverting to the combined FLL and PLL when a good lock of the signal is obtained. Once phase lock has occurred, several options exist. The tracking design could either convert the FLL assisted PLL into a pure PLL until the phase lock is lost, and subsequently revert back into a pure FLL, or one could proceed with the coupled FLL and PLL as suggested in [24]. If the two are used in a coupled mode with the addition of a good loop design, it could very well be used in situations of high dynamics without the need for assistance from an INS.
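A minimal sketch of the coupled mode described above is given below: the FLL discriminator output enters the loop one integrator earlier than the PLL output, as Figure 4.3 indicates, so the frequency error steers the same integrator that the PLL's integral path uses. The discriminators are in the spirit of Tables 4.1 and 4.2; the gain values and the update interval are illustrative assumptions, not values used in any of the receivers discussed here.

```python
import numpy as np

def costas_phase_error(I, Q):
    # Two-quadrant arctangent Costas discriminator (compare Table 4.1)
    return np.arctan(Q / I) if I != 0.0 else 0.0

def fll_freq_error(I1, Q1, I2, Q2, dt):
    # Frequency error from two adjacent I/Q pairs: the four-quadrant arctangent
    # of cross and dot gives the phase change over dt seconds
    cross = I1 * Q2 - I2 * Q1
    dot = I1 * I2 + Q1 * Q2
    return np.arctan2(cross, dot) / (2.0 * np.pi * dt)   # [Hz]

class FllAssistedPll:
    """Carrier loop where the FLL error enters one integrator before the PLL error."""
    def __init__(self, dt=1e-3, kp=25.0, ki=300.0, kf=10.0):
        self.dt = dt                               # loop update interval [s]
        self.kp, self.ki, self.kf = kp, ki, kf     # illustrative gains
        self.freq_integrator = 0.0                 # shared by FLL path and PLL integral path

    def update(self, phase_err_rad, freq_err_hz):
        # FLL error (converted to rad/s) and the PLL integral path drive the integrator
        self.freq_integrator += (self.kf * 2.0 * np.pi * freq_err_hz
                                 + self.ki * phase_err_rad) * self.dt
        # The PLL proportional path is added after the integrator; the sum is the
        # rate command sent to the carrier NCO [rad/s]
        return self.freq_integrator + self.kp * phase_err_rad
```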

Externally Aided Carrier Tracking Loops One way to solve the problem of using GPS receivers in a high dynamics environment is to estimate the change of the Doppler effect. If a good estimate of the Doppler effect can be had, the change of frequency of the GPS signal can be calculated. One can then feed this information into the carrier tracking to get a better lock on the signal. Aiding the carrier tracking in this way can reduce the acquisition time, help keep lock on the signal and also help in the reacquisition of a lost signal. The Doppler estimate can be calculated in various ways. The easiest would be to take the acceleration data from the INS, integrate it to a velocity and then calculate the Doppler effect using

\Delta f = f \frac{v}{c}

where f is the frequency of the GPS signal (without Doppler), Δf is the change of frequency, v is the relative velocity between the satellite and the rocket and c is the propagation speed of the wave, which for electromagnetic signals is the speed of light. What one would prefer is to use the filtered velocity from the Kalman filter, thus using both the GPS and INS data to form a better estimate of the Doppler effect. The relative velocity is calculated from the knowledge of the movement of the rocket and the ephemeris data of the satellite. Such aiding could be included in, for example, the FLL assisted PLL to make the solution more robust and perhaps also enable it to handle even more stress, without making large changes to the design of the discriminators and the loop filters.

Figure 4.3. An FLL assisted PLL loop filter. Note that the insertion point of the FLL is one integrator back from the PLL. This is because the FLL error is in units of Hertz, whereas the PLL errors are in units of phase.

4.3 Code Tracking

With the help of a shift register the receiver uses the I and Q signals to create new signals called early, prompt and late, where the only difference is the time offset. The signals are then correlated with the incoming digital IF signal to measure whether the replica code is properly aligned. There are several different algorithms to calculate this correlation. The method of code tracking is called Delay-Lock Loop (DLL) and the algorithms used in code tracking are called delay lock loop discriminators. The DLL is often aided by the carrier tracking, as in the simple GPS receiver shown in Figure 4.1.
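One common DLL discriminator built from the early and late correlators described above is the normalized early-minus-late power discriminator, sketched below. The half-chip correlator spacing and the assumption that the carrier has already been wiped off are illustrative choices, not a statement about the receivers discussed in this thesis.

```python
import numpy as np

def dll_discriminator(iq, prompt_code, spacing_samples=2):
    """Normalized early-minus-late power discriminator (one common DLL choice).
    iq: carrier-wiped complex samples over one code period,
    prompt_code: replica C/A code resampled to the same rate and current phase."""
    early = np.roll(prompt_code, -spacing_samples)   # replica advanced by ~half a chip
    late  = np.roll(prompt_code,  spacing_samples)   # replica delayed by ~half a chip
    E = np.abs(np.sum(iq * early))                   # early correlator magnitude
    L = np.abs(np.sum(iq * late))                    # late correlator magnitude
    # The sign of the output tells the code NCO which way to shift the replica;
    # zero means early and late powers balance, i.e. the prompt replica is aligned.
    return 0.5 * (E - L) / (E + L) if (E + L) > 0 else 0.0
```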

4.4 Loop Filters

The purpose of the loop filters is to filter the error signal produced by the discriminator in order to provide a better signal to the numerically controlled oscillator (NCO). The order of the loop filter and its noise bandwidth also determine the response of the loop filter to signal dynamics. There are two loop filters present in GPS receivers: one is used for the carrier tracking loop and the other is used for the code tracking loop. Most common GPS receivers use a second order loop filter for the carrier tracking, whereas the code tracking loop normally uses a filter of order one.

An analog loop filter of first or second order is unconditionally stable at all noise bandwidths, whereas a loop filter of order three remains stable only if the noise bandwidth is within 18 Hz [24]. The reason why one often would like to use a filter of order higher than two, despite it being less stable, is its useful characteristic of being insensitive to both velocity and acceleration stress [24]. Both of these are present in the launch and flight of a sounding rocket.
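For a second-order loop the noise bandwidth, the damping ratio and the natural frequency are tied together by a standard relation, which is how a chosen bandwidth turns into loop filter gains. The sketch below shows that computation; the bandwidth values in the example are illustrative only.

```python
def second_order_loop_gains(bn_hz, damping=0.707):
    """Proportional/integral gains of a second-order loop filter from its noise
    bandwidth Bn, using the standard relation Bn = wn * (1 + 4*zeta^2) / (8*zeta)."""
    wn = bn_hz * 8.0 * damping / (1.0 + 4.0 * damping**2)   # natural frequency [rad/s]
    kp = 2.0 * damping * wn      # proportional gain
    ki = wn * wn                 # integral gain
    return kp, ki

# Example (numbers purely illustrative): a wide carrier loop versus a narrow code loop
print(second_order_loop_gains(15.0))
print(second_order_loop_gains(1.0))
```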

4.5 GPS Receiver Hardware

This section introduces and discusses the different hardware options when creating a low-cost GPS solution capable of handling high dynamics. The chosen hardware must be flexible enough to be able to run custom firmware.



Previous Development There have been several projects building GPS receivers and implementing custom software. Several of these projects, like DLR's Phoenix HD receiver [20], can be traced back to Mitel's (now Zarlink) GPS receiver. Mitel created a low-cost receiver prototype aimed at mass market applications. This prototype never entered production but, thanks to the open design documentation and the fact that the components used in the prototype could be purchased independently, the design idea was further developed later on in various projects. There is also another reason why Mitel's creation inspired further development. In order to sell more hardware, Mitel developed a complete software development kit (SDK) called Mitel Architect code (sometimes referred to as Orion). This SDK could be licensed from Mitel and allowed users to modify and expand an already working code base. Unfortunately the SDK was discontinued and one can no longer obtain a license from Zarlink.

The Orion receiver is comprised of a GP2015 front-end, a DW9255 SAW filter, a GP2021 and an ARM60B microprocessor [19]. Zarlink nowadays offers the GP4020, which is a unified platform including correlators and an ARM7TDMI microprocessor with built-in functions such as a real-time clock and watchdog. The GP4020 has been used to create the Novatel Superstar II receiver and Signav's MG5001 receiver. The Novatel Superstar II offers a development kit but does not provide the source code. This could make it hard to expand the functions and still be able to use existing functions of the firmware. The MG5001 would be quite attractive if it were not for the fact that it has been discontinued. Signav has moved on to making several solutions in the navigation field and consequently stopped the development of the MG5001 receiver. This is unfortunate, as Signav created an MG5001-compatible software development kit (SDK) which allowed developers to modify parts of the software.

Signav's SDK was called the MG5021 development kit and was based on the GP4020 platform. Unfortunately this SDK has also been discontinued. The earlier mentioned Architect SDK has been modified to work on the MG5001 [22]. The reason a port of the Architect code was done was that while the MG5021 is a good SDK, it did not offer full access to the source code [18]. The German Space Operations Center (GSOC) and Deutsches Zentrum für Luft- und Raumfahrt (DLR) used the MG5001 receiver with their own software to create the Phoenix Miniature GPS receiver, which has been proven to work in high dynamics situations. The software used was most likely based on their previous work on an Orion receiver, which was a modified version of the Architect code.

There have been some recent developments on open source receivers, like the Namuru platform from the University of New South Wales, Sydney, Australia [22], and the GPS1001 from GPS Creations [8]. The Namuru platform uses field-programmable gate arrays (FPGAs) in conjunction with a ported version of the Architect code, while the GPS1001 is implemented on a computer ISA-card. The GPS1001 comes with an open source GPS code.


The question about what software to use is difficult, as the earlier mentioned SDKs can no longer be purchased or licensed. There might be a possibility to reach a deal with third parties who already own a license, but it could prove difficult. There have been efforts made in developing open source software such as GPL-GPS [1] and OSGPS [5]. The GPL-GPS code is compatible with the GP4020 platform and could potentially be used as a starting point. This option however requires quite some effort, as it does not support, for example, functions for atmospheric corrections, among other things. But it might be the best choice if you really want to be able to set up your own GPS receiver development system. The OSGPS code has a lot of functions that the GPL-GPS code lacks, but as it is designed to run on GPS1001 hardware it is not suited for use in a rocket. There is a possibility that one could port some of the functions to the GPL-GPS code in order to create an attractive custom open source development kit. This approach will however require more effort than just replacing the tracking loops with the help of an existing SDK.

Hardware summary In order to choose the best hardware, several factors have to be taken into consideration: hardware price, software options and availability. The Orion design is quite dated and some of the components used have been discontinued or replaced by newer versions like Zarlink's own GP4020 integrated chipset. The GP4020 platform together with the front-end GP2015 is an attractive low-cost solution for a GPS receiver and has been successfully used in several different products like the MG5001 receiver. The GP2015 chipset and the GP4020 platform are still being produced and should not be difficult to obtain. There is still a large question mark about what code to use in the development of new software for the receiver. The MG5001 receiver, which uses the GP4020 platform, is not really an option as the product has been discontinued for some time. There is also the issue that if the MG5021 SDK cannot be obtained, one is faced with the problem of developing the firmware code.

The Phoenix HD receiver is an already competent GPS receiver and could be used as it is for future MAXUS missions. If one only wants a working product and not the know-how and option of developing a specialized GPS receiver, then this might be the way to go. The disadvantages with this approach are that it can be expensive and that problems will probably arise later on if one would like to expand its features. The fact that it belongs to GSOC and DLR, and thus the German state, could also pose some problems regarding, for example, exportation of the product. Both the Namuru receiver and the GPS1001 receiver would be quite good if one wanted to learn and expand the functions in a GPS receiver, but they are ill suited for practical use with rockets. First of all they are more complex than needed. The Namuru receiver uses FPGAs, which are convenient when developing and testing hardware but are not as efficient as specialized hardware. Moving from a GPS correlator chipset to a general purpose processor or FPGA currently incurs unacceptable power and size penalties for the embedded applications. The GPS1001 receiver is an ISA-card and suffers from the same problems as Namuru regarding power efficiency and size of the system.


Chapter 5

GPS INS Integration

The GPS and the INS have complementary characteristics. In an INS, small errors propagate with each position estimate, that is, the system has an unbounded error. The GPS solution, however, has a bounded error, typically in the range of ±10 m for a civilian receiver. The GPS output rate is lower than 20 Hz, while the INS has an output rate of more than 100 Hz. The GPS user may also experience short term losses of the signal because of signal jamming, blockage or interference. The INS, however, does not rely on external signals, hence its insensitivity to jamming, blocking and interference. Combining these sensors gives the advantages of both and removes most of their individual weaknesses. If the GPS has lost the GPS satellite signal, the INS can still compute a position and help the GPS to re-acquire the lost signal. An integration of the two systems with the help of a Kalman filter (see Chapter 7) makes it possible to estimate the error of the INS. The estimated error can then be fed back to the INS, thus making the error bounded. The result is a navigation system which enjoys the measurement speed of an INS but with a bounded error thanks to the GPS. There are many possible options and techniques when integrating the two different systems, and one can essentially divide them into three groups: loose integration, tight integration and deep integration. The names and definitions of the different techniques can differ much from one article to another. In this thesis the definitions according to [13] are used.

5.1 Loose Integration

Loose integration is the most commonly used integration technique when integrating GPS and INS, due to its simplicity and its low computational burden relative to other integration techniques. The idea of loose integration is to see the GPS as a black box whose output is a calculated position with a certain bounded error. The position and velocity data from the GPS are then blended with the output from the INS, typically in a Kalman filter or an extended Kalman filter (EKF). The unbounded error of the INS can then be estimated and fed back to create a bounded INS solution.


Figure 5.1. An example of loose integration. The EKF blends the GPS and INS solutions and estimates a position. Estimated acceleration and angular bias are fed back from the EKF to the INS.
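The structure in Figure 5.1 can be summarized in a small skeleton: the INS propagates the navigation solution at high rate, the EKF tracks a 15-element error state (position, velocity, attitude, accelerometer bias, gyro bias), and each GPS fix updates the filter, after which the bias estimates are fed back to the INS. The state ordering, the noise values and the placeholder F and Q matrices are assumptions for illustration; the actual models are derived in Chapters 6 and 7.

```python
import numpy as np

class LooseIntegrationEKF:
    """Skeleton of the loose GPS/INS integration: the INS runs at high rate and the
    EKF blends its solution with GPS position/velocity fixes, estimating the
    sensor biases that are fed back to the INS."""
    def __init__(self):
        self.x = np.zeros(15)            # error state: dpos, dvel, datt, ba, bg
        self.P = np.eye(15) * 1e2        # state covariance (placeholder)
        self.R = np.eye(6) * 5.0**2      # GPS position/velocity noise (placeholder)

    def time_update(self, F, Q):
        # F and Q come from linearizing the navigation equations (Chapters 6 and 7)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def gps_update(self, gps_pos_vel, ins_pos_vel):
        # Measurement: difference between the INS solution and the GPS fix
        H = np.hstack([np.eye(6), np.zeros((6, 9))])
        z = ins_pos_vel - gps_pos_vel
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(15) - K @ H) @ self.P
        # Accelerometer and gyro bias estimates to feed back to the INS
        return self.x[9:12], self.x[12:15]
```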

5.2 Classic Tight Integration

Classic tight integration can essentially be explained as performing the GPS navigational computations in an EKF. This integration technique sees the GPS receiver as a measurement unit that measures the distance to visible satellites, that is, pseudo ranges. The measured pseudo ranges are taken in as observations in the EKF. The EKF in this integration differs from the loose integration in two aspects: first, it has an additional state for the receiver's clock error, and second, it uses pseudo ranges as measurements of distance. The advantage of this integration is that when fewer than four satellites are visible to the receiver, the EKF will still be able to use satellite data to estimate a position. The disadvantage of this approach is the higher computational burden and an increase in complexity.
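What distinguishes the classic tight integration is its measurement model: every visible satellite contributes one pseudorange, and the receiver clock error is an extra state. A sketch of the predicted measurements and the corresponding Jacobian rows is given below; atmospheric corrections and satellite clock terms are omitted, and the clock bias is expressed in meters for convenience.

```python
import numpy as np

def pseudorange_measurement(x_pos, clock_bias_m, sat_positions):
    """Predicted pseudoranges and the Jacobian rows for an EKF whose state
    contains the receiver position and clock bias (in meters).
    Works with any number of visible satellites, even fewer than four."""
    diff = sat_positions - x_pos                 # (N, 3) vectors to the satellites
    ranges = np.linalg.norm(diff, axis=1)
    z_pred = ranges + clock_bias_m               # predicted pseudoranges
    # d(rho)/d(pos) = -(unit line of sight), d(rho)/d(clock bias) = 1
    H = np.hstack([-diff / ranges[:, None], np.ones((len(ranges), 1))])
    return z_pred, H
```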

5.3 Tight Alternative Integration

The Alternative Tight Integration is similar to the classic tight, with the difference that it aids the GPS carrier tracking loop with Doppler estimates computed from the EKF. More about Doppler aiding frameworks can be found in Section 4.2.

5.4 Deep Integration

The basic idea of deep integration is that the code and carrier tracking loops are integrated into the EKF. This technique has a very high computational burden, since the EKF must be updated very often. The update rate must at least be in the range of 100 to 200 Hz, because that is the typical rate of the carrier and code tracking loops. The EKF computes the pseudo ranges and blends them with IMU data as in the classic tight integration.

Figure 5.2. An example of classic tight integration. The EKF uses the INS solution, and the measured pseudo ranges and clock error from the GPS receiver, to calculate a position. The estimated accelerometer bias and angular bias are fed back from the EKF to the INS.


Figure 5.3. An example of alternative tight integration. The EKF uses the INS solution and the measured pseudo ranges and clock error from the GPS receiver to calculate a position. The estimated accelerometer bias and angular bias are fed back from the EKF to the INS. In this integration, unlike the classic tight integration, the Doppler effect is also estimated by the EKF and fed back to the receiver to aid the carrier tracking.


Chapter 6

Navigation Equations

This chapter introduces the navigation equations used in calculating the position, velocity and attitude of the rocket. Their respective linearizations are also described.

6.1 Notation

Before the navigation equations are explained, some notation must first be considered, which is explained in Tables 6.1 and 6.2.

Table 6.1. Notation of the navigation equations; a and b can be any frame

x^a: a position in the coordinate frame a
C^b_a: the transformation matrix from frame a to frame b
q^b_a: the quaternion rotation from frame a to frame b
[v ×]: the cross product of v in matrix form (v × w = [v×]w)
f^a: a specific force with unit m/s² in the a-frame
ω^a_ab: the angular velocity between frame a and frame b, with unit rad/s, in the a-frame

Table 6.2. Notation of the different frames; each frame is described in Section 2.1

i: the I-frame (Earth Centered Inertial frame)
e: the E-frame (Earth Centered Earth Fixed frame)
b: the B-frame (Body Fixed frame)


6.2 Deriving the Navigation Equations

The purpose of the navigation equations is to describe the movements of the rocket in the E-frame. These navigation equations are based upon [3]. The starting point for the derivation is in the I-frame, where accelerations are integrated in order to obtain position and velocity. The equation describing this is

a^i = \frac{d}{dt}\dot{x}^i = \ddot{x}^i = f^i + g^i(x^i)    (6.1)

where x^i is the position of the rocket, f^i is the measured acceleration from the IMU in the I-frame and g^i(x^i) is the gravitational force at position x^i. If the application had been ground based, g^i could have been approximated as a constant. But as the rocket accelerates away from the Earth, the gravitational force affecting the rocket will change drastically.

While the navigation equations are now fully described in the I-frame, the E-frame equations remain. The transformation of a position from one coordinate system to another is performed with the help of a transformation matrix,

x^i = C^i_e x^e    (6.2)

The relation describes the transformation of a position from the E-frame to the I-frame. Differentiating (6.2) gives

\dot{x}^i = \dot{C}^i_e x^e + C^i_e \dot{x}^e    (6.3)

Differentiating (6.3) then results in

\ddot{x}^i = \ddot{C}^i_e x^e + \dot{C}^i_e \dot{x}^e + \dot{C}^i_e \dot{x}^e + C^i_e \ddot{x}^e = \ddot{C}^i_e x^e + 2\dot{C}^i_e \dot{x}^e + C^i_e \ddot{x}^e    (6.4)

which will be used in relation (6.1) when solving for x^e. But first, expressions for the derivatives of the transformation matrices are needed. The derivative of C^i_e is in [9] described as

\dot{C}^i_e = C^i_e \Omega^e_{ie}    (6.5)

where \Omega^e_{ie} is a skew-symmetric matrix. \Omega^e_{ie} and \omega^e_{ie} describe the rotation of the Earth and are given by

\omega^e_{ie} = \begin{pmatrix} 0 & 0 & \omega_e \end{pmatrix}^T    (6.6)

[\omega^e_{ie} \times] = \Omega^e_{ie} = \begin{pmatrix} 0 & -\omega_e & 0 \\ \omega_e & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}    (6.7)

\ddot{C}^i_e is then calculated by taking the derivative of \dot{C}^i_e as follows

\ddot{C}^i_e = \dot{C}^i_e \Omega^e_{ie} + C^i_e \dot{\Omega}^e_{ie} = C^i_e \Omega^e_{ie}\Omega^e_{ie} + C^i_e \dot{\Omega}^e_{ie}    (6.8)

Combining (6.3), (6.4), (6.5) and (6.8) results in

\ddot{x}^i = C^i_e\left(\Omega^e_{ie}\Omega^e_{ie} + \dot{\Omega}^e_{ie}\right) x^e + 2 C^i_e \Omega^e_{ie} \dot{x}^e + C^i_e \ddot{x}^e    (6.9)



An expression for \ddot{x}^e can now be derived. Setting (6.1) equal to (6.9) gives

g^i(x^i) + f^i = C^i_e\left(\Omega^e_{ie}\Omega^e_{ie} + \dot{\Omega}^e_{ie}\right) x^e + 2 C^i_e \Omega^e_{ie} \dot{x}^e + C^i_e \ddot{x}^e    (6.10)

Solving (6.10) for \ddot{x}^e is done in the following way:

\ddot{x}^e = (C^i_e)^{-1}\left(g^i(x^i) + f^i\right) - \left(\Omega^e_{ie}\Omega^e_{ie} + \dot{\Omega}^e_{ie}\right) x^e - 2\Omega^e_{ie}\dot{x}^e
           = C^e_i\left(g^i(x^i) + f^i\right) - \left(\Omega^e_{ie}\Omega^e_{ie} + \dot{\Omega}^e_{ie}\right) x^e - 2\Omega^e_{ie}\dot{x}^e
           = g^e(x^e) + f^e - \left(\Omega^e_{ie}\Omega^e_{ie} + \dot{\Omega}^e_{ie}\right) x^e - 2\Omega^e_{ie}\dot{x}^e    (6.11)

Assume that the angular velocity of the Earth relative to inertial space is constant. This assumption gives \dot{\Omega}^e_{ie} = 0 and, together with equation (6.11), the resulting equation

\ddot{x}^e = g^e(x^e) + f^e - \Omega^e_{ie}\Omega^e_{ie} x^e - 2\Omega^e_{ie}\dot{x}^e    (6.12)

where the gravitational force g^e, with a good approximation, can be written as in [9]

g^e \approx -\frac{kM}{|x^e|^3} x^e    (6.13)

kM is the standard gravitational parameter \mu, which is the product of the gravitational constant and the mass of the Earth,

\mu = kM = 398600.4418 \;\mathrm{km^3/s^2}

The specific force f^b measured by the IMU can be transformed to the E-frame using the relation

f^e = C_b^e f^b   (6.14)

C_b^e is obtained by first calculating C_e^b, the transformation matrix expressed in terms of a quaternion as described in [9], given by

C_e^b(q_e^b) = [ q0² + q1² − q2² − q3²    2(q1q2 + q0q3)           2(q1q3 − q0q2)
                 2(q1q2 − q0q3)           q0² + q2² − q1² − q3²    2(q2q3 + q0q1)
                 2(q1q3 + q0q2)           2(q2q3 − q0q1)           q0² + q3² − q1² − q2² ]   (6.15)

where the quaternion is denoted

q_e^b = (q0  q1  q2  q3)^T   (6.16)

C_b^e is then obtained by taking the transpose of C_e^b, owing to the orthogonality of transformation matrices.
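As a concrete illustration of (6.15) and (6.16), the Python/NumPy sketch below builds C_e^b from a quaternion and obtains C_b^e as its transpose; the function names and the test quaternion are assumptions of this example.

    import numpy as np

    def dcm_e_to_b(q):
        """C_e^b from the quaternion q = (q0, q1, q2, q3), equation (6.15)."""
        q0, q1, q2, q3 = q
        return np.array([
            [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 + q0*q3),             2*(q1*q3 - q0*q2)],
            [2*(q1*q2 - q0*q3),             q0**2 + q2**2 - q1**2 - q3**2, 2*(q2*q3 + q0*q1)],
            [2*(q1*q3 + q0*q2),             2*(q2*q3 - q0*q1),             q0**2 + q3**2 - q1**2 - q2**2]])

    def dcm_b_to_e(q):
        """C_b^e as the transpose of C_e^b (orthonormal transformation matrix)."""
        return dcm_e_to_b(q).T

    # Example (hypothetical): a 90 degree rotation about the third axis
    q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
    print(dcm_e_to_b(q).round(6))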


Last but not least, the time derivative of the quaternion can be expressed as described in [9]

(q̇0  q̇1  q̇2  q̇3)^T = ½ [ −q1  −q2  −q3
                            q0  −q3   q2
                            q3   q0  −q1
                           −q2   q1   q0 ] (r  s  t)^T   (6.18)

where (r  s  t)^T = ω_eb^b and the ½-scaled matrix is denoted Ξ(q).
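The quaternion kinematics (6.18) translate directly into code. In the following sketch the function name and the test values are illustrative assumptions only.

    import numpy as np

    def quat_derivative(q, w_eb_b):
        """Quaternion time derivative per equation (6.18): q_dot = Xi(q) w_eb^b,
        with q = (q0, q1, q2, q3) and w_eb_b = (r, s, t)."""
        q0, q1, q2, q3 = q
        Xi = 0.5 * np.array([[-q1, -q2, -q3],
                             [ q0, -q3,  q2],
                             [ q3,  q0, -q1],
                             [-q2,  q1,  q0]])
        return Xi @ w_eb_b

    # Example (hypothetical): small rotation rate about the body x-axis
    print(quat_derivative(np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.01, 0.0, 0.0])))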

6.3 Linearization with Quaternion Angle Representation

In this section the navigation equations are linearized with quaternions as the angle representation. Consider the nonlinear differential navigation equations with quaternion angle representation

q̇_e^b = Υ(ω_ib^b − C_e^b(q_e^b) ω_ie^e) q_e^b
ẍ^e  = −2 Ω_ie^e ẋ^e − Ω_ie^e Ω_ie^e x^e + C_b^e(q_e^b) f^b + g^e(x^e)   (6.19)

where the time derivative of the quaternion is written as in [16]. The matrix function Υ(ω) is

Υ(ω) = ½ [  0    −ω_x  −ω_y  −ω_z
           ω_x     0    ω_z  −ω_y
           ω_y   −ω_z    0    ω_x
           ω_z    ω_y  −ω_x    0  ]   (6.20)

This can also be written according to equation (6.18), giving

q̇ = Ξ(q) ω   (6.21)

Ξ(q) = ½ [ −q1  −q2  −q3
            q0  −q3   q2
            q3   q0  −q1
           −q2   q1   q0 ]   (6.22)

Since the matrix function Υ(ω) is linear, the first row of equation (6.19) can be written as

d/dt q_e^b = Υ(ω_ib^b) q_e^b − Υ(C_e^b(q_e^b) ω_ie^e) q_e^b   (6.23)

The derivative with respect to the state vector gives

∂/∂z (d/dt q_e^b) = [ ∂/∂q_e^b (Υ(ω_ib^b) q_e^b)    ∂/∂ω_ib^b (Ξ(q_e^b) ω_ib^b)    ∂/∂q_e^b (Υ(C_e^b(q_e^b) ω_ie^e) q_e^b) ]   (6.24)


Consider the transformation matrix C_e^b, which consists of nine elements

C_e^b = [ c11  c12  c13
          c21  c22  c23
          c31  c32  c33 ]   (6.25)

Expanding the last term in equation (6.24), using (6.20), gives

Υ(C_e^b(q_e^b) ω_ie^e) q_e^b = Υ( C_e^b(q_e^b) (0  0  ω_e)^T ) q_e^b = Υ( (c13  c23  c33)^T ω_e ) q_e^b

  = [  0    −c13  −c23  −c33
      c13     0    c33  −c23
      c23  −c33     0    c13
      c33   c23  −c13     0  ] (q0  q1  q2  q3)^T ω_e   (6.26)

Using (6.26), (6.15) and the relation q0² + q1² + q2² + q3² = 1 gives

∂/∂q_e^b (Υ(C_e^b(q_e^b) ω_ie^e) q_e^b) = [ −2q0q3      −2q1q3      −2q2q3      −1 − 2q3²
                                            −2q0q2      −2q1q2      −1 − 2q2²   −2q3q2
                                             2q0q1       1 + 2q1²    2q2q1       2q3q1
                                             1 + 2q0²    2q1q0       2q2q0       2q3q0 ] ω_e = Θ(q_e^b) ω_e   (6.27)

The matrix in (6.27) is replaced by Θ for a more compact notation. The linearized quaternion equation can now be derived as

Δq̇_e^b = (Υ(ω_ib^b) − Θ(q_e^b) ω_e) Δq_e^b + Ξ(q_e^b) Δω_ib^b   (6.28)
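The matrices Υ, Θ and Ξ of (6.20), (6.27) and (6.22), together with the linearized quaternion error dynamics (6.28), can be written out as below; a minimal Python/NumPy sketch whose function names are assumptions of this illustration.

    import numpy as np

    def Upsilon(w):
        """Matrix function Upsilon(omega), equation (6.20)."""
        wx, wy, wz = w
        return 0.5 * np.array([[0.0, -wx, -wy, -wz],
                               [ wx, 0.0,  wz, -wy],
                               [ wy, -wz, 0.0,  wx],
                               [ wz,  wy, -wx, 0.0]])

    def Theta(q):
        """Matrix Theta(q), equation (6.27)."""
        q0, q1, q2, q3 = q
        return np.array([[-2*q0*q3,     -2*q1*q3,     -2*q2*q3,     -1 - 2*q3**2],
                         [-2*q0*q2,     -2*q1*q2,     -1 - 2*q2**2, -2*q3*q2],
                         [ 2*q0*q1,      1 + 2*q1**2,  2*q2*q1,      2*q3*q1],
                         [ 1 + 2*q0**2,  2*q1*q0,      2*q2*q0,      2*q3*q0]])

    def Xi(q):
        """Matrix Xi(q), equation (6.22)."""
        q0, q1, q2, q3 = q
        return 0.5 * np.array([[-q1, -q2, -q3],
                               [ q0, -q3,  q2],
                               [ q3,  q0, -q1],
                               [-q2,  q1,  q0]])

    def dq_error_dot(q, w_ib_b, dq, dw_ib_b, omega_e=7.292115e-5):
        """Linearized quaternion error dynamics, equation (6.28)."""
        return (Upsilon(w_ib_b) - Theta(q) * omega_e) @ dq + Xi(q) @ dw_ib_b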

Linearization of the Velocity Equation

The velocity equation can be expressed as

ẍ^e = −2 Ω_ie^e ẋ^e − Ω_ie^e Ω_ie^e x^e + C_b^e(q_e^b) f^b + g^e(x^e)   (6.29)

The derivative with respect to the state vector gives

∂/∂z (ẍ^e) = [ ∂/∂ẋ^e (−2 Ω_ie^e ẋ^e)    ∂/∂x^e (Ω_ie^e Ω_ie^e x^e)    ∂/∂x^e (g^e(x^e))
               ∂/∂q_e^b (C_b^e(q_e^b) f^b)    ∂/∂f^b (C_b^e(q_e^b) f^b) ]   (6.30)

It is assumed that Ω̇_ie^e = 0 because the rotation of the Earth is nearly constant. Remember that C_b^e is the transpose of equation (6.15), which gives

C_b^e(q_e^b) f^b = [ q0² + q1² − q2² − q3²    2(q1q2 − q0q3)           2(q1q3 + q0q2)
                     2(q1q2 + q0q3)           q0² − q1² + q2² − q3²    2(q2q3 − q0q1)
                     2(q1q3 − q0q2)           2(q2q3 + q0q1)           q0² − q1² − q2² + q3² ] f^b   (6.31)


Equation (6.31) is differentiated with respect to the quaternion components, giving

∂/∂q_e^b (C_b^e(q_e^b) f^b) =

  2 [ [  q0  −q3   q2 ]        [ q1   q2   q3 ]        [ −q2   q1   q0 ]        [ −q3  −q0   q1 ]
      [  q3   q0  −q1 ] f^b    [ q2  −q1  −q0 ] f^b    [  q1   q2   q3 ] f^b    [  q0  −q3   q2 ] f^b ] = Λ(q_e^b, f^b)   (6.32)
      [ −q2   q1   q0 ]        [ q3   q0  −q1 ]        [ −q0   q3  −q2 ]        [  q1   q2   q3 ]

The matrix in (6.32) is replaced by Λ(q_e^b, f^b) for a more compact notation. The gravity model (6.13) is then differentiated with respect to the position x^e, resulting in

∂g^e(x^e)/∂x^e = −kM ( I₃/r³ − 3 x^e (x^e)^T / r⁵ )

               = −kM ( [ 1  0  0               [ x1²    x1x2   x1x3
                         0  1  0 ] r⁻³  −  3     x2x1   x2²    x2x3  ] r⁻⁵ )   (6.33)
                         0  0  1                 x3x1   x3x2   x3²

The derivative of the gravity function is replaced by Γ for a more compact notation. With (6.33), (6.32) and (6.30) the linearized velocity equation can be written as

Δẍ^e = −2 Ω_ie^e Δẋ^e − (Ω_ie^e Ω_ie^e − Γ(x^e)) Δx^e + Λ(q_e^b, f^b) Δq_e^b + C_b^e(q_e^b) Δf^b   (6.34)
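Written out in code, the Jacobian blocks Γ(x^e) of (6.33) and Λ(q_e^b, f^b) of (6.32) look as follows; a minimal Python/NumPy sketch where the function names are assumptions of this illustration.

    import numpy as np

    MU = 3.986004418e14  # kM [m^3/s^2]

    def Gamma(x_e):
        """Gravity-gradient matrix Gamma, equation (6.33)."""
        r = np.linalg.norm(x_e)
        return -MU * (np.eye(3) / r**3 - 3.0 * np.outer(x_e, x_e) / r**5)

    def Lambda(q, f_b):
        """Jacobian of C_b^e(q) f^b with respect to the quaternion, equation (6.32).
        Returns a 3 x 4 matrix; column k is the derivative with respect to q_k."""
        q0, q1, q2, q3 = q
        col0 = 2 * np.array([[ q0, -q3,  q2],
                             [ q3,  q0, -q1],
                             [-q2,  q1,  q0]]) @ f_b
        col1 = 2 * np.array([[ q1,  q2,  q3],
                             [ q2, -q1, -q0],
                             [ q3,  q0, -q1]]) @ f_b
        col2 = 2 * np.array([[-q2,  q1,  q0],
                             [ q1,  q2,  q3],
                             [-q0,  q3, -q2]]) @ f_b
        col3 = 2 * np.array([[-q3, -q0,  q1],
                             [ q0, -q3,  q2],
                             [ q1,  q2,  q3]]) @ f_b
        return np.column_stack([col0, col1, col2, col3])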

The Complete Linear Model

The complete linear model can now be expressed using (6.28) and (6.34) as

Δẍ^e  = −2 Ω_ie^e Δẋ^e − (Ω_ie^e Ω_ie^e − Γ(x^e)) Δx^e + Λ(q_e^b, f^b) Δq_e^b + C_b^e(q_e^b) Δf^b
Δq̇_e^b = (Υ(ω_ib^b) − Θ(q_e^b) ω_e) Δq_e^b + Ξ(q_e^b) Δω_ib^b   (6.35)

where x^e, q_e^b and ω_ib^b denote the linearization point.
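To make the structure of (6.35) explicit as a state-space error model, the sketch below assembles the continuous-time system matrices for the error state (Δx^e, Δẋ^e, Δq_e^b), with the IMU errors (Δf^b, Δω_ib^b) treated as inputs, from precomputed blocks such as those sketched above. The function name, the state ordering and this input convention are assumptions of the example, not something fixed by the thesis.

    import numpy as np

    def assemble_error_model(Gamma_x, Lambda_qf, Theta_q, Xi_q, Upsilon_w,
                             C_b_to_e, Omega_ie, omega_e):
        """Assemble the continuous-time error dynamics implied by (6.35),
        d/dt [dx, dv, dq] = F [dx, dv, dq] + G [df^b, dw_ib^b],
        from precomputed blocks (e.g. the helper functions sketched earlier)."""
        F = np.zeros((10, 10))
        G = np.zeros((10, 6))
        F[0:3, 3:6] = np.eye(3)                          # d(dx)/dt = dv
        F[3:6, 0:3] = -(Omega_ie @ Omega_ie - Gamma_x)   # position coupling, (6.34)
        F[3:6, 3:6] = -2.0 * Omega_ie                    # Coriolis coupling, (6.34)
        F[3:6, 6:10] = Lambda_qf                         # attitude coupling, (6.34)
        F[6:10, 6:10] = Upsilon_w - Theta_q * omega_e    # quaternion error, (6.28)
        G[3:6, 0:3] = C_b_to_e                           # accelerometer errors enter the velocity rows
        G[6:10, 3:6] = Xi_q                              # gyro errors enter the quaternion rows
        return F, G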

6.4 Linearization with Euler Angles

A different approach is to represent the small deviation angles with Euler angles, which gives a more intuitive angle representation. If the angles are small, the difference between the Euler angle representation and the quaternion angle representation is minimal, as described in Appendix B.2. This linearization is based on the work in [3]. Consider equation (6.36), derived from (6.12) and (6.33),

ẍ^e = −2 Ω_ie^e ẋ^e − (Ω_ie^e Ω_ie^e − Γ^e) x^e + C_b^e f^b + g^e   (6.36)


The transformation matrix C_b^e can be written as

C_b^e = C_b0^e + ΔC_b^e   (6.37)

where C_b0^e is the linearization point and ΔC_b^e is the deviation from the linearization point. Assume that the deviation from the linearization point can be described by small Euler angles Ψ = (ψ, θ, φ)^T. According to [9] the direction cosine matrix A can be written as

A = (I + [Ψ×])   (6.38)

where I is a 3 × 3 identity matrix and [Ψ×] is the skew-symmetric matrix given by

[Ψ×] = [  0   −ψ    θ
          ψ    0   −φ
         −θ    φ    0 ]   (6.39)

The transformation matrix C_b^e can now be expressed as

C_b^e = A C_b0^e   (6.40)

Combining (6.40) and (6.37) gives an expression for ΔC_b^e

(A − I) C_b0^e = ΔC_b^e   (6.41)

Replacing A with (6.38) gives

[Ψ×] C_b0^e = ΔC_b^e   (6.42)
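The small-angle relations (6.38)-(6.42) can be verified numerically; in the Python/NumPy sketch below the chosen linearization point and the angle values are assumptions of this illustration.

    import numpy as np

    def skew_psi(psi, theta, phi):
        """Skew-symmetric matrix [Psi x], equation (6.39)."""
        return np.array([[0.0,   -psi,  theta],
                         [psi,    0.0,  -phi],
                         [-theta, phi,   0.0]])

    # Linearization point C_b0^e (hypothetical: the identity) and small angles [rad]
    C_b0_e = np.eye(3)
    Psi = (1e-3, -2e-3, 0.5e-3)

    A = np.eye(3) + skew_psi(*Psi)     # equation (6.38)
    C_b_e = A @ C_b0_e                 # equation (6.40)
    dC = skew_psi(*Psi) @ C_b0_e       # equation (6.42)

    print(np.allclose(C_b_e - C_b0_e, dC))   # True: the deviation is [Psi x] C_b0^e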

The acceleration now becomes

C_b^e f^b = ΔC_b^e f_0^b + C_b0^e Δf^b
          = [Ψ×] C_b0^e f_0^b + C_b0^e Δf^b
          = Ψ × (C_b0^e f_0^b) + C_b0^e Δf^b
          = −[C_b0^e f_0^b ×] Ψ + C_b0^e Δf^b   (6.43)

where Δf^b is the deviation from the linearization point f_0^b. In order to have Ψ as a state in the state-space model, its time derivative must be derived. Differentiating the relation Ċ_b^e = C_b^e Ω_eb^b gives

ΔĊ_b^e = ΔC_b^e Ω_eb^b + C_b0^e ΔΩ_eb^b   (6.44)

Differentiating (6.42) gives

ΔĊ_b^e = d/dt ([Ψ×] C_b0^e) = [Ψ̇×] C_b0^e + [Ψ×] Ċ_b0^e   (6.45)

Setting (6.45) equal to (6.44) and using ΔC_b^e = [Ψ×] C_b0^e and Ċ_b0^e = C_b0^e Ω_eb^b leads to
