
Master of Science Thesis in Electrical Engineering
Department of Electrical Engineering, Linköping University, 2019

Ground Based Attitude Determination Using a SWIR Star Tracker


Karl Gudmundson
LiTH-ISY-EX--19/5236--SE

Supervisor: Robin Forsling, isy, Linköpings universitet
            Jouni Rantakokko, foi, Linköping
Examiner:   Gustaf Hendeby, isy, Linköpings universitet

Division of Automatic Control
Department of Electrical Engineering
Linköping University
SE-581 83 Linköping, Sweden

Copyright © 2019 Karl Gudmundson


Sammanfattning

This work investigates the possibility of estimating attitudes by capturing images of stars with a swir camera. Today, many autonomous systems rely on measurements from a gps to determine their position and attitude. gps signals can, however, be both jammed and spoofed, which makes any system relying solely on gps signals vulnerable. To make navigation systems more robust, other sensors can be added to create a multisensor system. One of these sensors could be a ground based swir camera that can provide accurate attitude estimates. To investigate whether this is possible, a swir camera was placed on foi's building in Linköping, where the camera captured images of the sky from a fixed position.

The swir camera has several advantages over a camera operating in the visual spectrum. For example, the background radiation is weaker and the transmission through the atmosphere is higher in certain wavelength bands.

The images captured with the swir camera are sent to a computer, where software for obtaining attitude estimates from star images has been developed. The software contains algorithms to detect stars, position them in the image with subpixel accuracy, match the stars to a star database and finally estimate an attitude from the stars in the image and the identified stars in the database. To further improve the attitude estimate, a Kalman filter has also been implemented.

The results show that attitude estimates could be obtained continuously from late evening to early morning, when the sky was dark. However, this required good weather conditions, meaning a limited presence of clouds. When clouds were present, no attitude estimates could be made during large parts of the night. The swir camera was also compared with a camera operating in the visual spectrum, to see whether the results were better for the swir camera when clouds were present. With the camera settings used in this work, however, the two cameras seemed to perform on par.

The accuracy of the attitude estimates is hard to assess, since no true attitude is available. The variance of the estimates was small, however, and the largest differences in attitude over a night's measurements came from a drift visible in all angles. The maximum error in declination during a night's measurements varied from 40 to 60 arc seconds, depending on the data set. The maximum error in right ascension varied between 200 and 2000 arc seconds, and the same metric for the roll estimate varied between roughly 100 and 2500 arc seconds. The reason for the drift in the estimates is assumed to be atmospheric effects that have not been compensated for, together with astronomical effects that change the orientation of the earth's rotation axis, which introduces errors in the star positions given by the database.


Abstract

This work investigates the possibility of obtaining attitude estimates by capturing images of stars using a swir camera. Today, many autonomous systems rely on the measurements from a gps to obtain accurate position and attitude estimates. However, the gps signals are vulnerable to both jamming and spoofing, making any system reliant on only gps signals insecure. To make the navigation systems more robust, other sensors can be added to form a multisensor system. One of these sensors might be a ground based swir star camera that is able to provide accurate attitude estimates. To investigate if this is possible, an experimental setup with a swir camera was placed at the office of foi in Linköping, where the camera, in a fixed position, has captured images of the sky.

The swir camera possesses several advantages over a camera operating in the visual spectrum. For example, the background radiation is weaker and the transmission through the atmosphere is higher in certain wavelength bands.

The images captured by the swir camera were provided to a star tracker software that has been developed. The star tracker software contains algorithms to detect stars, position them in the image at subpixel accuracy, match the stars to a star database and finally output an attitude based on the stars from the image and the identified stars in the database. To further improve the attitude estimates, an mekf was applied.

The results show that attitude estimates could be obtained consistently from late evenings to early mornings, when the sky was dark. However, this required that the weather conditions were good, i.e., a limited amount of clouds. When more clouds were present, no attitude estimates could be provided for a majority of the night. The swir camera was also compared to a camera operating in the visual spectrum when clouds were present, to see if the results were any different. With the camera settings applied in this work, the two cameras seemed to perform equally.

The accuracy of the estimated attitudes is hard to validate, since no true attitude is available. However, the variance of the estimates was low, and the major differences in the attitude estimates over a night's measurements seemed to be a drift present in all angles. The maximum estimated error in declination during a night's measurements varied from about 40 to 60 arc seconds, depending on the data set. The maximum estimated error in right ascension varied between 200 and 2000 arc seconds, and the same metric for the roll estimate was about 100 to 2500 arc seconds. The reason for the drifts is assumed to be atmospheric effects not being accounted for, and astronomical effects moving the direction of the rotation axis of the earth, creating errors in the star positions given in the database.


Acknowledgments

I would like to begin by thanking all the people at foi who have helped me during this thesis. My supervisor Jouni Rantakokko has provided helpful comments, always shown a great interest in the results of my work and put me in contact with experts in certain subjects when needed; thank you for that. I would also like to thank Magnus Pettersson for helping me collect the data, and Fredrik Kullander, Lars Pettersson and Jonas Nygårds for valuable insights.

At Linköping University, I would like to thank my supervisor Robin Forsling, for always quickly answering my questions, and my examiner Gustaf Hendeby for giving new ideas and angles to all my problems.

I would also like to take this opportunity to thank all the friends I have made during my time at Linköping University. Without you, the last five years would have been much harder to get through.

Lastly, I would like to thank my family, for always being there for me, and for all the support you have given me. Thank you.

Linköping, June 2019 Karl Gudmundson


Contents

Notation

1 Introduction
  1.1 Background
  1.2 Related Work
  1.3 Problem Formulation
  1.4 Limitations
  1.5 Outline

2 Theory
  2.1 Hardware Considerations
    2.1.1 Short Wave Infrared Spectrum
    2.1.2 Optical Wavelength Filters
    2.1.3 Field of View
    2.1.4 Additional Configuration Parameters
    2.1.5 Sensor Types
  2.2 Reference Frames
    2.2.1 Compensating for the Precession and Nutation
  2.3 Attitude Representations
    2.3.1 Rotation Matrix
    2.3.2 Euler Angles
    2.3.3 Unit Quaternion
    2.3.4 Transformation between Representations
  2.4 Software Overview
  2.5 Image Processing
  2.6 Star Tracking
    2.6.1 Detection
    2.6.2 Centroiding
    2.6.3 Star Identification
    2.6.4 Attitude Determination
  2.7 Filtering

3 Star Tracking Methods
  3.1 OpenStarTracker
  3.2 Preliminary Processing
  3.3 Calibration
  3.4 Star Database
    3.4.1 Filtering
    3.4.2 Database Preprocessing and Compensation
  3.5 Image Preprocessing
  3.6 Thresholding
    3.6.1 itwmf
    3.6.2 Adaptive Thresholding
  3.7 Centroiding
    3.7.1 Centroid Uncertainty
    3.7.2 Transformation to Unit Vectors
  3.8 Star Identification
    3.8.1 Misidentifications
  3.9 Attitude Determination
  3.10 Filtering

4 Experimental Evaluation
  4.1 Experimental Setup
    4.1.1 The swir and vis Cameras
    4.1.2 Camera Software Settings
  4.2 Data Description
  4.3 Camera Performance
    4.3.1 Comparison of swir Datasets
    4.3.2 Comparison of Images from the swir Camera and the vis Camera
  4.4 Software Methods
    4.4.1 Generating a Ground Truth
    4.4.2 Uniformly Distributed Star Database
    4.4.3 Detection of Stars
    4.4.4 Centroiding
    4.4.5 Attitude Determination Techniques
    4.4.6 mekf Performance
    4.4.7 swir Data Sets
    4.4.8 Comparison with the vis Camera
  4.5 Discussion

5 Conclusion and Future Work
  5.1 Conclusions
  5.2 Future Work


Notation

Abbreviations

Abbreviation   Meaning

ccd            Charged Coupled Device
cmos           Complementary Metal-Oxide Semiconductor
cog            Center of Gravity
dec            Declination
ecef           Earth Centered Earth Fixed
eci            Earth Centered Inertial
foi            Swedish Defence Research Agency
fov            Field of View
gnss           Global Navigation Satellite System
gps            Global Positioning System
imu            Inertial Measurement Unit
ins            Inertial Navigation System
itwmf          Information Theoretic Weighted Median Filter
mekf           Multiplicative Extended Kalman Filter
ost            Open Star Tracker
ra             Right Ascension
snr            Signal to Noise Ratio
svd            Singular Value Decomposition
swir           Short Wave InfraRed
vis            Visual spectrum


Mathematical Notation

Notation       Meaning

x              A scalar
x              The vector x = [x_1, ..., x_n]^T
A              A matrix
[·]^T          Vector/matrix transpose
[·]^-1         Matrix inverse
x̂              The estimate of the variable x
‖·‖            The Euclidean norm
I_{m×n}        Identity matrix of size m × n


1 Introduction

As an introduction, this chapter provides a background as to why this thesis work has been conducted. The problem formulation will also be presented, as well as limitations and a review of related work.

1.1 Background

Determining position and orientation is important for both civilian and military applications. While the deployment of Global Navigation Satellite Systems (gnss) has paved the way for small, lightweight, low-cost receivers that often can provide accurate position, velocity and time measurements, the signals are vulnerable to both jamming and spoofing [32, 33]. In a jamming attack, radio frequency energy is transmitted on the frequencies used by e.g. the Global Positioning System (gps), thus jamming any gps receiver. Spoofing refers to the case where a false gps signal with an erroneous location and time is transmitted to deceive the receiver. New alternative navigation methods independent of gnss are therefore of interest for the development of robust navigation systems. One part of a multi-sensor navigation system for ground based vehicles might in the future be a star tracker, which can be used to find both the orientation and the position of a vehicle. This thesis aims to investigate the accuracy and performance of the orientation obtained from a ground based star tracker that uses Short Wave InfraRed (swir) sensors.

The idea of tracking stars for navigation purposes has been around for thousands of years, and automatic star tracking, typically in combination with an Inertial Navigation System (ins), is an old technology. It has been used for half a century in high-altitude surveillance aircraft, missiles and lately also on satellites.

The basic idea behind modern star tracking systems is to take a photo of a set of stars, detect the stars in the image, determine their position and match them to a preprocessed star database by comparing the position of the stars in the image with the position of the stars in the database. An orientation can then be obtained by using the stars from the image and the matching stars from the database. An overview of the components in a star tracker is seen in Figure 1.1.

Figure 1.1: Star tracking overview.
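The steps just described can be sketched as a toy pipeline. The helper functions below (simple thresholding plus center-of-gravity centroiding on a synthetic image) are illustrative placeholders, not the software developed in this thesis:

```python
import numpy as np

def detect_stars(image, threshold):
    """Return the pixel coordinates brighter than the threshold."""
    ys, xs = np.nonzero(image > threshold)
    return list(zip(ys, xs))

def centroid(image, pixels):
    """Intensity-weighted center of gravity of the detected pixels."""
    weights = np.array([image[y, x] for y, x in pixels], dtype=float)
    points = np.array(pixels, dtype=float)
    return (weights[:, None] * points).sum(axis=0) / weights.sum()

# A 5x5 "image" with one star smeared over two pixels
img = np.zeros((5, 5))
img[2, 2] = 10.0
img[2, 3] = 5.0

stars = detect_stars(img, threshold=1.0)
print(centroid(img, stars))  # weighted position between (2, 2) and (2, 3)
```

The remaining stages (matching the centroids to a database and estimating the attitude) are covered in the chapters that follow.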

Attitude determination by using star trackers has been around for quite some time when it comes to spacecraft applications [17], since the stars are easily visible in space. Therefore, a lot of progress has already been made in several aspects of the orientation determination process. The ability to detect stars in an image has been developed and refined, as well as the robustness and speed of matching the image stars to a star database. Research has also been conducted to determine the attitude given the processed images and the matched database entry in a computationally efficient manner. Suitable hardware properties of a star tracker, such as the Field Of View (fov) or the number of star trackers operating simultaneously, have also been investigated. However, almost all of this development assumes space or high-altitude applications, where:

1. The apparent magnitude of a star is not suppressed by the earth's atmosphere.

2. The visibility is not dependent on the weather or the time of day.

Because of these difficulties, the development of similar techniques for ground-based applications has not been as prominent. There are some exceptions, but these often assume a vehicle operating at a high altitude to minimize the weather dependencies and the absorption in the atmosphere. Another way to limit the effect of the problems might be to use a camera operating in the swir spectrum, which is what this thesis aims to evaluate.

1.2 Related Work

A summary of the whole star tracking process, from hardware to an attitude estimate, is given in [25], although for satellite applications. While being an older publication, [20] also gives a thorough tutorial on how star trackers work.


One of a few exceptions that considers ground based star tracking is the patent described in [2], where the intended applications include both air and ground based vehicles. To overcome the problems listed earlier, the authors propose to use the previously mentioned swir cameras to track the stars. Normally, star trackers operate in the visible light spectrum, but Belenkii et al. [2] present several advantages of using swir sensors instead, operating from 0.9 to 2.5 µm. The benefits include, but are not limited to:

1. Background radiation, i.e. radiation from everything that is not stars, is weaker.

2. The transmission through the atmosphere is higher at certain wavelengths within the swir spectrum.

3. The number of stars at similar intensity levels is an order of magnitude greater in the swir spectrum compared to the visible waveband.

This means that more stars can potentially be tracked from ground based cameras, both during night and day. Hence, the objective to determine position and attitude from earth bound vehicles might be easier to achieve using swir cameras. The patent is however not clear on the weather dependency of the invention, but mentions that the star tracker should be complemented with an ins to cover for bad weather, e.g. cloudy conditions, when the star tracker might not be able to provide position and/or attitude. To further prevent the star tracker from not being able to provide sufficient measurements, three star trackers are proposed to be used simultaneously. By using three cameras, at least two will always be pointed more than 30 degrees away from the sun. While only one camera is sufficient for the attitude determination, multiple cameras make the system more robust.

A method for the detection and centroiding of stars in a swir image for daytime applications is given in [41]. The proposed algorithm detects defect pixels and removes the “stripes” that often are present in swir images. Other noise removal and centroiding algorithms are not specifically focused on swir images, but are still interesting, e.g. [11].

The next step is to match the image to a star database. A commonly used algorithm is the Pyramid Star Identification technique [26]. This algorithm uses the distances between the stars in the image and matches them to the same measurement in the star database. First, the distances between three stars are used to search the database. If a match is found, a fourth star is added to confirm the match.

Given a star image and a matched database entry, the orientation can be calculated. The easiest and most straightforward way of doing this is by using so-called “single point” methods, i.e. methods that only take one measurement (picture) into account for every orientation update. A few of these algorithms are described in [24]. A more accurate and robust method is to use filter solutions, where previous measurements as well as system dynamics are taken into account. A multitude of different filtering algorithms exist [7]; however, the Multiplicative Extended Kalman Filter (mekf) appears to be the most used. The filter solutions also often incorporate angular rate and acceleration measurements, i.e. the measurements from an Inertial Measurement Unit (imu).
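A classic single point method of the kind referenced above is to solve Wahba's problem (find the rotation that best aligns measured body-frame star vectors with their database counterparts) using an svd. The sketch below is a generic textbook formulation, not necessarily the exact algorithm of [24] or the thesis software:

```python
import numpy as np

def wahba_svd(body_vecs, inertial_vecs, weights=None):
    """Rotation R minimizing sum_i w_i * ||b_i - R r_i||^2 (Wahba's problem)."""
    b = np.asarray(body_vecs, dtype=float)
    r = np.asarray(inertial_vecs, dtype=float)
    w = np.ones(len(b)) if weights is None else np.asarray(weights, dtype=float)
    # Attitude profile matrix B = sum_i w_i * b_i r_i^T
    B = (w[:, None, None] * b[:, :, None] * r[:, None, :]).sum(axis=0)
    U, _, Vt = np.linalg.svd(B)
    # Force det(R) = +1 so the result is a proper rotation
    d = np.linalg.det(U) * np.linalg.det(Vt)
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

With noise-free measurements of two or more non-parallel stars, the true rotation is recovered exactly; with noise, the solution is optimal in the weighted least-squares sense.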

Regarding hardware considerations, information on the aspects of using daytime swir images from earth to look at the sky is presented in [27]. Therein, the focus is on detecting and tracking satellites instead of stars, but much of the work is still relevant. More properties of swir cameras are investigated in [3], [14] and [42].

1.3 Problem Formulation

The main question the thesis aims to answer is:

Is it possible to automatically determine the attitude of a ground based system by tracking stars using short wave infrared cameras?

Follow up questions include:

• Under what weather and sky brightness conditions is this possible?
• How accurate are the attitude estimates?
• Can the estimates be improved by filtering?

To investigate these questions, swir images have been collected under different weather and light conditions.

1.4 Limitations

This thesis only uses data collected with a free line of sight, i.e., no trees or other environmental factors disturbing the star images, except for weather phenomena such as clouds. The hardware itself also entails certain limitations. Only two swir cameras have been tested, and the optical possibilities to change the focal length and fov of the cameras are restricted.

1.5 Outline

The contents of this thesis are outlined as follows. Chapter 2 gives a theoretical background to the hardware considerations used in the configuration of the cameras, as well as an explanation of all processing steps used in star tracking. The implementation of these steps is described in Chapter 3, where the methods used in the evaluation are outlined. Chapter 4 then evaluates these methods and presents results that are used to answer the problem formulation. Chapter 5 finishes the thesis with conclusions based on the results of Chapter 4.


2 Theory

In this chapter, the theoretical basis of this thesis will be presented, regarding both the hardware and software of the star tracker, as well as other critical aspects necessary to understand the implementation.

2.1 Hardware Considerations

As stated in the introduction, a swir camera will be used to detect stars. This section will provide reasons for why a swir camera is preferred over a camera operating in the visual spectrum. Other configuration aspects of the camera system will also be considered, such as focal length, aperture, filters, fov and exposure time.

2.1.1 Short Wave Infrared Spectrum

The swir spectrum is composed of the wavelength band from 1 to 3 µm [3]. Astronomers have defined bands within the swir spectrum that are suitable for astronomy because of the high transmittance through the atmosphere. These are specified in Table 2.1. It should be noted however that the size and location of these windows are not set in stone, and authors often define them in different ways. The notation below follows that of [3] and [27].

The wavelength has multiple implications for how certain objects and phenomena are perceived by the sensor. For wavelengths in the swir spectrum, both [3] and [27] provide good overviews of the impacts. The next section will present the effects of these impacts in the star tracking context.


Table 2.1: Windows of the swir spectrum

  Wavelength      Band
  1.1 – 1.4 µm    J
  1.5 – 1.8 µm    H
  2.0 – 2.4 µm    K

Figure 2.1: Overview of impacts on a star tracker operating inside the atmosphere.

Advantages and Disadvantages of the swir Spectrum

As stated earlier, most star tracking systems operate in space, either on spacecraft or satellites. As long as the star tracker is not pointing directly at the sun, the background light is limited. Therefore, in most space-based systems a Charged Coupled Device (ccd) sensor is used, operating in the visual spectrum at about 0.4 – 0.8 µm [20]. ccd sensors are also often the selected sensor in lunar rover applications, see e.g. [28]. When investigating the use of star trackers on earth however, the background light from the sun and the moon is significant. The atmospheric transmittance must also be taken into account, which affects the choice of sensor and wavelength filters. Figure 2.1 gives a brief overview of the radiation sources affecting a ground based star tracker.

In [2], several advantages of using a star tracker in the swir spectrum are presented. First of all, the transmittance through the atmosphere is higher in the swir spectrum than in the visual spectrum. This can be seen in Figure 2.2, where the transmission through the atmosphere has been simulated using the MODTRAN® simulation software [4]. The simulation was made with the atmospheric model Sub-Arctic Summer and a visibility of 23 km. It is clear that the transmittance is greater in certain swir wavelength bands compared to the visual spectrum, although in some parts the transmittance is close to zero. This is because of the radiance being absorbed by either water, carbon dioxide or both [3]. So, to detect as many stars as possible, one of the wavelength windows with high transmittance should be chosen.

Figure 2.2: The transmittance through the atmosphere for wavelengths from 0.4 to 3 µm. The data is calculated using the MODTRAN® simulation software [4] with the atmospheric model Sub-Arctic Summer and a visibility of 23 km. A high transmittance means that more light can pass through the atmosphere.

A second advantage with swir cameras is that the daylight sky background radiation is lower than in the visible waveband, according to [2]. The radiation depends on the angular distance from the sun. Up to about 80° away from the sun, the radiation decreases exponentially. At larger angles it is approximately constant. However, the radiation is lower in the higher wavebands for all angles. According to [2], it is about 6 and 18 times lower for the H- and K-band, respectively, compared to the I-band (0.7 – 0.9 µm).

A third advantage is that the number of stars at similar intensity levels is an order of magnitude greater in the swir spectrum compared to the visible waveband, making it possible to have a smaller fov and still detect the necessary amount of stars. A smaller fov gives a higher pixel accuracy and limits the amount of background light hitting the sensor. This is further discussed in Section 2.1.3.

Belenkii et al. [2] present a few more advantages of star tracking in the swir spectrum, such as the lesser impact of smoke and clouds that can attenuate the star light, the higher camera frame rate, allowing an increase in snr by averaging images, and the lower impact of turbulence caused by scintillation. Analyzing all of these advantages, the authors come to the conclusion that the star detection limit for a sensor operating in the I-band is apparent magnitude¹ 3.3, while the same statistic for the H- and K-band is 6.8 and 5.8, respectively. This means that more stars are expected to be detected in the H- and K-band compared to the I-band.

Figure 2.3: The radiance of different light sources at night for wavelengths from 0 to 2.5 µm. The brown line represents the light from a full moon, the red integrated star light, the blue zodiac light, and the two rightmost black and gray lines the light emitted from two blackbodies of different temperatures. [3]

Since [2] focuses on daytime applications, nighttime aspects of the swir spectrum are not considered. Even though the transmittance through the atmosphere is still higher at night, the advantage of the lower sun radiance is no longer of importance. At night, other light sources dominate. Figure 2.3 gives a rough overview of the expected light sources at different wavelengths. Notable is that the airglow, a luminescent phenomenon in the upper parts of the atmosphere, becomes much larger in the swir spectrum compared to the visible spectrum. Other light sources such as the zodiac light (the reflection of sun light in cosmic dust) and the integrated star light are more constant. At higher wavelengths the thermal radiance from the ground and atmosphere dominates completely. Taking this into account, it is a reasonable assumption that even though the swir camera might be more suitable at daytime, the performance may be worse than for a camera operating in the visible spectrum at nighttime.

¹ The apparent magnitude is a measure of the brightness of an astronomical object as seen by an observer on earth. The scale has an inverse relation between magnitude and brightness, meaning that the brighter an object appears the lower its magnitude. The scale is also logarithmic, and a difference of 1 in magnitude equals a change in brightness by a factor of ⁵√100 ≈ 2.512 [21].
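The logarithmic magnitude scale in the footnote translates into a brightness ratio as follows (a standard relation, shown here for illustration only):

```python
def brightness_ratio(m1, m2):
    """How many times brighter an object of magnitude m1 is than one of magnitude m2."""
    return 100.0 ** ((m2 - m1) / 5.0)

# One magnitude of difference corresponds to a factor of 100^(1/5) ~ 2.512
print(round(brightness_ratio(3.3, 4.3), 3))  # -> 2.512
```

So a magnitude-6.8 star (the H-band detection limit above) is roughly 25 times fainter than a magnitude-3.3 star (the I-band limit).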


2.1.2 Optical Wavelength Filters

In the previous section, important properties of the swir spectrum in general were considered. However, a smaller waveband is usually preferred, and a band-pass filter needs to be applied in the optical setup. Due to the highly variable transmittance, as seen in Figure 2.2, only a few wavelength bands are suitable, coinciding with the J-, H- and K-bands specified in Table 2.1. From these bands, the H-band seems to be the most popular choice. It is used in [2] and [14], while a part of it (1.52 – 1.7 µm) is suggested in [42]. While the performance might be a bit better in this waveband, another reason for choosing the H-band is claimed to be the reduced cost of sensors operating up to 1.7 µm [2], [14].

2.1.3 Field of View

The Field of View (fov) defines how large a part of the sky the camera will be observing. This is determined by the focal length f and the vertical and horizontal length of the sensor, h. Given f and h, the field of view is given by [29]

    FOV = 2 arctan( h / (2f) ).    (2.1)

In most cases, the camera sensor has different vertical and horizontal lengths, and the fov is then represented as fov_vertical × fov_horizontal.
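Equation (2.1) can be evaluated per sensor axis. The sensor and lens values below are illustrative only, not the hardware used in this thesis:

```python
import math

def fov_deg(sensor_length_mm, focal_length_mm):
    """Field of view in degrees from sensor length h and focal length f, Eq. (2.1)."""
    return math.degrees(2.0 * math.atan(sensor_length_mm / (2.0 * focal_length_mm)))

# Hypothetical 12.8 mm x 10.2 mm sensor behind a 50 mm lens
print(f"{fov_deg(12.8, 50.0):.1f} x {fov_deg(10.2, 50.0):.1f} degrees")  # -> 14.6 x 11.6 degrees
```

Note how a longer focal length shrinks the fov, which is the lever used to trade sky coverage against pixel accuracy in the discussion below.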

Which fov should be chosen depends on the application. With a large field of view, more stars are present in the picture. However, the background light will also be stronger, and less bright stars will no longer be visible. A small fov has the opposite effect. In daylight applications, the background light is significant. A small fov will limit the effects of the background light, but also requires that a sufficient number of stars is present within the smaller fov. Since the number of stars at a given apparent magnitude is higher in the swir spectrum, the sensors operating in the swir spectrum are more suitable for a smaller fov than sensors in the visual spectrum. Another aspect of the fov is that the smaller the fov, the higher the pixel accuracy. Intuitively, stars should also be spread out over more pixels with a small fov, but this is not necessarily true since less bright stars also appear smaller.

In space applications, the fov is often chosen to be quite large. In e.g. [15], it is said that most star trackers have a fov between 25° and 45°. When daytime earth applications are considered, the fov is often chosen to be significantly lower. In [2], the optimal fov in the I-band is said to be 7° × 7°, while only being 0.86° × 0.86° and 1.3° × 1.3° in the H- and K-band, respectively. In [14] a fov of 0.86° × 0.86° is suggested, while a fov of 3° × 3° is chosen in [42].

2.1.4 Additional Configuration Parameters

In addition to the properties already mentioned, the exposure time and aperture also impact the camera performance. The exposure time is the amount of time the camera sensor is exposed to light. A longer exposure time results in more light being captured, which in the case of star images results in stars appearing brighter and an increased snr. However, any platform motion during the exposure time will result in a blurred image.

The aperture is the size of the opening that lets light in to the camera sensor, thus also controlling the amount of light reaching the sensor. It is common to specify the aperture in relation to the focal length f. An aperture of 1.4 (dimensionless) is short for f/1.4 and describes the ratio between focal length and aperture. In this case, the focal length is 1.4 times greater than the aperture. The focal length is usually given in mm.
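The f-number relation can be made concrete; the lens values here are illustrative, not the thesis hardware:

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    """Diameter of the aperture opening: f-number = focal length / diameter."""
    return focal_length_mm / f_number

# An f/1.4 lens with a 50 mm focal length has a 50 / 1.4 ~ 35.7 mm opening
print(round(aperture_diameter_mm(50.0, 1.4), 1))  # -> 35.7
```

Since the light-gathering area grows with the square of the diameter, halving the f-number lets in roughly four times as much light.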

2.1.5 Sensor Types

The type of material that is used in image sensors varies. For swir cameras, the detector materials Indium Gallium Arsenide (InGaAs) and Mercury Cadmium Telluride dominate [3]. As stated in the beginning of this section, Charged Coupled Devices (ccd) have often been used in earlier star tracking hardware in the visual spectrum. Another sensor type that can be used in the visual spectrum is the Complementary Metal-Oxide Semiconductor (cmos).

2.2 Reference Frames

To be able to use a star tracker, a number of coordinate systems need to be defined. The number of coordinate systems needed depends on the setup and applications. In most cases, at least three are necessary: the inertial (or reference) frame, the body frame and the camera frame. The inertial frame is an Earth Centered Inertial (eci) coordinate system, where the origin is in the center of the earth. All eci frames are fixed in space, meaning the frame does not rotate with the earth. This is a nice property when dealing with astronomical objects. In an eci coordinate system, the z-axis points to the mean celestial north pole, while the x-axis points to the mean vernal equinox (the intersection of the equatorial plane and the plane of earth's orbit around the sun). The y-axis completes the right-handed frame. The coordinate system is shown in Figure 2.4. The pointing of the celestial north pole and the vernal equinox changes over time, due to the precession and nutation of the earth [12], further explained in Section 2.2.1. Therefore, an eci coordinate system needs to be tied to a specific time. A commonly used eci frame is J2000, where the x-axis is specified by the vernal equinox in the Julian epoch 2000, which is noon at January 1, 2000, UTC.

All star coordinates are specified in an eci frame. They are represented as Declination (dec) and Right Ascension (ra). These angles can be seen in Figure 2.4, as coordinates of an example star.

Another world frame is the Earth Centered Earth Fixed (ecef) coordinate system, which is closely related to the eci frame. As the name suggests, the coordinate system is fixed in the earth, meaning its z-axis is the rotation axis of the earth. The x- and y-axes can be specified in any direction, but a common approach is to let the x-axis point out of 0° latitude, 0° longitude, and let the y-axis complete


Figure 2.4: The eci coordinate system. The angles dec and ra used to define the position of a celestial object are also marked.

the right-handed frame. As the coordinate system is earth fixed, it rotates along with the earth with an angular velocity of 7.2921 × 10⁻⁵ rad/s.

The body frame is a coordinate system fixed to the body of the platform, while the camera frame is fixed with respect to the camera with the z-axis pointing out of the lens, and the x- and y-axes in the focal plane.

2.2.1 Compensating for the Precession and Nutation

When describing the eci coordinate system, the two terms precession and nutation were mentioned. These are two astronomical effects that rotate the axis of the earth in a cone around the ecliptic pole [12]. The angle between the ecliptic pole and the earth rotation axis (the inclination of the ecliptic) is however constant, ε = 23°26′21″.448, where ′ and ″ denote arc minutes (1/60°) and arc seconds (1/3600°), respectively. The ecliptic is the plane of earth's orbit projected in all directions, and the ecliptic poles are then defined along the axis perpendicular to this plane. The precession causes a slow linear change in the direction of the rotation axis of the earth, with a period of 25,772 years, while the nutation represents a combination of all higher-order terms in the directional change. The two effects are shown in Figure 2.5.

The changes caused by nutation are small and often hard to compensate for. The precession, however, is larger and easier to calculate. The precession causes the movement of the vernal equinox along the ecliptic by ψ = 50″.290666 per year. This affects both the eci coordinates dec and ra. Comparing the J2000 coordinates with today's, the stars will have moved approximately 0.27°, a not insignificant amount. The movement can however be compensated for, as explained in [12], by adding the following to the original coordinates


Figure 2.5: The precession and nutation effects on the direction of earth’s rotation axis.

∆dec = ∆λ sin(ε) cos(ra0), (2.2a)

∆ra = ∆λ(cos(ε) + sin(ε) sin(ra0) tan(dec0)), (2.2b)

where ∆λ = T ψ, T is the time in years since the eci frame was defined, and dec0 and ra0 are the original coordinates. To further improve the compensation, (2.2) can be used two times. First, the corrections are calculated and divided by two before being added to the original coordinates. Then, the updated coordinates are used to calculate the corrections once again, which then are added to the original coordinates. This should give an accuracy better than 1 arc second over a time frame of 50 years, according to [12].
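As an illustration, the two-pass compensation scheme can be sketched in Python. The constants for ψ and ε are taken from this section; the function names are my own:

```python
import math

PSI_ARCSEC_PER_YEAR = 50.290666  # precession of the vernal equinox per year
EPSILON = math.radians(23 + 26/60 + 21.448/3600)  # inclination of the ecliptic

def precession_correction(dec0, ra0, years):
    """One evaluation of eq. (2.2); all angles in radians."""
    dlam = math.radians(years * PSI_ARCSEC_PER_YEAR / 3600.0)
    ddec = dlam * math.sin(EPSILON) * math.cos(ra0)
    dra = dlam * (math.cos(EPSILON)
                  + math.sin(EPSILON) * math.sin(ra0) * math.tan(dec0))
    return ddec, dra

def compensate_precession(dec0, ra0, years):
    """Two-pass scheme: add half the correction, re-evaluate the correction
    at the intermediate coordinates, then add it to the originals."""
    ddec, dra = precession_correction(dec0, ra0, years)
    dec_mid, ra_mid = dec0 + ddec / 2.0, ra0 + dra / 2.0
    ddec, dra = precession_correction(dec_mid, ra_mid, years)
    return dec0 + ddec, ra0 + dra
```

For a star at dec0 = ra0 = 0, nineteen years of precession moves the right ascension by roughly 19ψ cos(ε) ≈ 0.24°, which matches the order of magnitude quoted above.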

2.3 Attitude Representations

There are several ways to represent an attitude, as shown in e.g. [35]. The most common representations when it comes to star tracking are the rotation matrix, also known as the direction cosine matrix, and the quaternion, since one of these is often the output of the algorithms for attitude determination. These are described in Section 2.6.4. For the user, the orientation is usually presented using Euler angles.


2.3.1 Rotation Matrix

The rotation matrix is a 3 × 3 matrix, which describes the rotation between two coordinate systems, in this case between the reference frame and the body frame

W = AV , (2.3)

where W is a set of vectors in the body frame, A is the rotation matrix and V a set of vectors in the reference frame. The rotation matrix is orthogonal and proper, meaning AAᵀ = I and det(A) = 1. By multiplying multiple rotation matrices, subsequent rotations can be computed. The notation often takes the form A(3,2)A(2,1), where a vector in coordinate system 1 is first rotated to coordinate system 2, and then rotated once again to coordinate system 3. Together, the two rotation matrices form a new rotation matrix A(3,1), which rotates a vector from coordinate system 1 to coordinate system 3 directly.

2.3.2 Euler Angles

Euler angles are tightly coupled to the rotation matrix. One way to construct a rotation matrix is by three consecutive rotations around the axes of a coordinate system, where each rotation is defined by the Euler angle of that axis. The Euler angles are often defined as roll φ, pitch θ and yaw ψ. In an eci coordinate system, the notation of dec as pitch and ra as yaw is more common, as mentioned earlier. Note that dec is defined as the negative pitch angle.

Depending on the rotation convention, the rotation matrix is constructed differently from the Euler angles. As an example, the xyz rotation is used, creating the rotation matrix

A = Ax(φ)Ay(θ)Az(ψ)

  = [1   0      0    ][cos θ  0  −sin θ][ cos ψ  sin ψ  0]
    [0   cos φ  sin φ][0      1  0     ][−sin ψ  cos ψ  0]
    [0  −sin φ  cos φ][sin θ  0  cos θ ][ 0      0      1]

  = [cos θ cos ψ                       cos θ sin ψ                       −sin θ     ]
    [sin φ sin θ cos ψ − cos φ sin ψ   sin φ sin θ sin ψ + cos φ cos ψ   sin φ cos θ]
    [cos φ sin θ cos ψ + sin φ sin ψ   cos φ sin θ sin ψ − sin φ cos ψ   cos φ cos θ]. (2.4)

2.3.3 Unit Quaternion

A more efficient way of representing the attitude is the unit quaternion, which has been the most common attitude representation since the early 1980s [7]. The unit quaternion is more compact and does not suffer from singularities (as the Euler angles do), but requires somewhat more complex calculations. The notation of the unit quaternion differs a bit between sources; the one used here follows that of [7] and [23].


q ≡ [ϱᵀ q4]ᵀ, (2.5)

with

q4 = cos(α/2), (2.6a)
ϱ ≡ [q1 q2 q3]ᵀ = v sin(α/2), (2.6b)

where v is a unit axis, and α is the rotation around said axis. There also exists a norm constraint on the unit quaternion: ||q||² = qᵀq = 1.

The unit quaternion has some special operations that will be covered shortly here, given in [7]. These are later used in the filter. The composition of two unit quaternions is defined as

q ⊗ q′ = [q4 I3×3 − [ϱ×]   ϱ ] q′, (2.7)
         [      −ϱᵀ        q4]

where [ϱ×] is given by

[ϱ×] ≡ [ 0   −q3   q2]
       [ q3   0   −q1]
       [−q2   q1   0 ]. (2.8)

The inverse of the unit quaternion is defined as

q⁻¹ ≡ [−ϱᵀ q4]ᵀ, (2.9)

which gives q ⊗ q⁻¹ = [0 0 0 1]ᵀ, the identity quaternion.

2.3.4 Transformation between Representations

The different attitude representations can be transformed into each other. Here, the transformation from unit quaternion to rotation matrix and the inverse operation are presented, as well as equations on how to obtain Euler angles from both representations.

The rotation matrix is obtained from the quaternion by

A = [2(q4² + q1²) − 1   2(q1q2 − q3q4)     2(q2q4 + q1q3) ]
    [2(q3q4 + q1q2)     2(q4² + q2²) − 1   2(q2q3 − q1q4) ]
    [2(q1q3 − q2q4)     2(q2q3 + q1q4)     2(q4² + q3²) − 1]. (2.10)

The transformation from rotation matrix to unit quaternion is not as straightforward as the other way around, as it suffers from singularities. However, in general, the following equations hold [39]

q4 = ½ √(1 + trace(A)), (2.11a)
q1 = (A3,2 − A2,3)/(4q4), (2.11b)
q2 = (A1,3 − A3,1)/(4q4), (2.11c)
q3 = (A2,1 − A1,2)/(4q4). (2.11d)

The problem of this transformation occurs when trace(A) ≤ −1, since q4 then becomes either 0 or complex.

The Euler angles are obtained from the quaternion and the rotation matrix by

ψ = tan⁻¹(A1,2 / A1,1) = tan⁻¹( 2(q1q2 − q3q4) / (2(q4² + q1²) − 1) ), (2.12a)
θ = −sin⁻¹(A1,3) = −sin⁻¹( 2(q1q3 + q2q4) ), (2.12b)
φ = tan⁻¹(A2,3 / A3,3) = tan⁻¹( 2(q2q3 − q1q4) / (2(q4² + q3²) − 1) ). (2.12c)

2.4 Software Overview

The coming sections in this chapter will give a theoretical background to the different steps of the process to turn a star image into an attitude estimate. This section aims to give an overview of these steps, before further details of each step are given in the coming sections. The whole process is visualised in Figure 2.6.

Starting with a star image being loaded to the software process, initial image processing is conducted where noise is removed, before the star detection process starts. This process detects potential stars and results in a binary image, where every non-zero pixel is considered a potential star. The next step is centroiding. A subpixel position of every potential star is determined and in the next step converted to a unit vector in the camera frame. The unit vectors are then sent to the star identification part. The star identification process uses a star database to match the stars in the image with database entries. When a match is found, the unit vectors from both the database and the image of every identified star are saved. This is then used as input to the attitude determination, which outputs an orientation as a quaternion in the reference frame. The quaternion is sent to a Kalman filter which outputs an orientation represented by the eci Euler angles, as described in Section 2.2.

2.5 Image Processing

All images, regardless of camera type, contain noise of different kinds, such as light fluctuations, sensor noise and quantization noise [6]. To reduce the noise, images can be smoothed. Smoothing an image is the process of replacing each pixel value with some kind of local average of surrounding data points. The simplest smoothing filter is the box filter, which computes the pixel value as the


[Figure 2.6: Block diagram of the star tracking process: star image → noise removal (processed image) → detection of bright spots (binary image) → centroiding (star image positions) → image position to unit vector (unit star vectors in the camera frame) → star identification against the star database (unit star vectors in the camera and reference frame) → attitude determination (quaternion) → filtering the orientation → Euler angles.]


mean of its neighboring pixels. Another more sophisticated filter is the Gaussian blur, in which the weights of the surrounding data points follow a 2D Gaussian distribution [6]

G(x, y) = 1/(2πσ²) · e^(−(x² + y²)/(2σ²)), (2.13)

where x and y are the distances from the origin along the horizontal and vertical axis, respectively, and σ is the standard deviation. A third type of smoothing filter is the median filter, which replaces the pixel value with the median of the surrounding pixels.
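The Gaussian case can be illustrated with a small, dependency-free Python sketch that samples (2.13) on a grid and normalises the kernel so the weights sum to one. This is a sketch, not the implementation used in the thesis:

```python
import math

def gaussian_kernel(size, sigma):
    """Sample the 2D Gaussian of eq. (2.13) on a size x size grid and
    normalise so the weights sum to one."""
    half = size // 2
    k = [[math.exp(-(x*x + y*y) / (2.0 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

def smooth(img, kernel):
    """Convolve the image with the kernel; border pixels where the kernel
    does not fit are simply copied unchanged."""
    half = len(kernel) // 2
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(half, h - half):
        for x in range(half, w - half):
            out[y][x] = sum(kernel[j][i] * img[y + j - half][x + i - half]
                            for j in range(len(kernel))
                            for i in range(len(kernel)))
    return out
```

Smoothing an isolated bright pixel spreads its intensity over the neighbourhood, which is exactly the defocusing effect exploited later for sub-pixel centroiding.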

2.6 Star Tracking

This section will provide some basic knowledge about how star tracking is used to obtain an orientation estimate. This process can be divided into four main parts: detection, centroiding, identification and solving for the orientation. The theory used in these parts is described below.

2.6.1 Detection

The first problem of star tracking is to detect and determine the position of the stars in the image. The images can be seen as grayscale, where the pixel value represents the brightness. The objective is then to detect the potential stars in the image and determine their position. Since the images are mostly dark, objects brighter than the background light can be assumed to be stars. To solve this, the detection process often uses a pre-determined threshold, where everything above the threshold is considered to be a star. The threshold is often chosen by the following equation [11]

T(I) = µ(I) + ασ(I), (2.14)

where T is the calculated threshold of the image I, µ is the mean of all pixels in the image, σ is the standard deviation and α is a constant.

When (2.14) is used, the constant α is often chosen by manual tuning, using some example images. This approach can be effective, but as the light conditions vary between images, the algorithm struggles to consistently provide good thresholds. Although the background light can vary to some extent for images taken from sensors on space vehicles, because of the presence of the Milky Way, the light conditions for images taken from earth may shift heavily, due to a variety of reasons. In applications where both night and daytime images are used, this becomes an even more serious challenge, restricting the usage of this equation even more. According to [11], a short and varying integration time, as well as a low snr, further limits the applicability of the algorithm.

To remove any tuning parameters and make the threshold selection adaptive to the light conditions, other algorithms can be applied. One is the A-Contrario Decision Framework presented in [11], where no manual tuning is required. This


method might help when the light condition varies between images. However, if the average light intensity varies within the image, a global threshold will still have some problems. To account for this, the threshold can be made adaptive. Instead of the threshold being constant over the whole image, it is determined locally by considering a window around each pixel. This solves the problem of the varying light conditions, but it might also cause new problems. Pixels close to the edges of the image will be affected, since the window around the pixel is limited by the image boundaries. Larger stars will also be affected. Since the window will be filled with more bright pixels, the threshold will be higher, and less bright pixels belonging to the stars will not exceed the threshold.

2.6.2 Centroiding

Centroiding is the process of calculating the position of every detected star. To make the star tracking process accurate, sub-pixel accuracy is required. Because of this, the star images are often intentionally defocused, making the stars spread out over more pixels in the image, thus making it possible to achieve a higher accuracy [20].

The most common and easiest way of calculating the centroid is the Center of Gravity (cog) method, also known as the Moment method [37]. First, an initial estimate of the centroid is made by choosing an appropriate pixel in the star area, e.g. the brightest pixel. A window of pixels is then created around the chosen pixel. In this window the centroid is then calculated by [34]

(x_c, y_c) = ( Σᵢⱼ Iᵢⱼ xᵢⱼ / Σᵢⱼ Iᵢⱼ , Σᵢⱼ Iᵢⱼ yᵢⱼ / Σᵢⱼ Iᵢⱼ ). (2.15)

Although the cog method is fast, the algorithm's performance is degraded when the background noise is high. A modified version of cog might also be used [37], which potentially is less sensitive to noise, where the local background light is subtracted and the only pixels being accounted for are the ones greater than a threshold. The intensity Iᵢⱼ in (2.15) is then determined by

Iᵢⱼ = { Iᵢⱼ − b,  if Iᵢⱼ > T,
      { 0,        if Iᵢⱼ ≤ T, (2.16)

where b is the local background light, often computed by averaging the outer bounds of the centroiding window, and T is a threshold, which may be equal to the one determined earlier.
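A minimal sketch of the modified cog, combining (2.15) and (2.16):

```python
def centroid(window, threshold, background):
    """Modified Center of Gravity over a pixel window, eqs (2.15)-(2.16).
    Returns the sub-pixel centroid (xc, yc) in window coordinates."""
    num_x = num_y = den = 0.0
    for j, row in enumerate(window):
        for i, value in enumerate(row):
            # eq. (2.16): background-subtracted, thresholded intensity
            weight = value - background if value > threshold else 0.0
            num_x += weight * i
            num_y += weight * j
            den += weight
    return num_x / den, num_y / den
```

For a symmetric, defocused star the centroid lands exactly on the peak pixel, while an asymmetric intensity profile shifts it to a sub-pixel position between pixels.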

To make use of the star positions in the image, for both identification and attitude determination purposes, the centroids are generally converted to unit vectors. Using the focal length f and the sensor pixel sizes µx and µy in the x- and y-direction, respectively, the unit vector u, representing the direction to a star in the camera frame, is given by


Figure 2.7: The unit vector to a star in the image.

u = [µx xc  µy yc  f]ᵀ / ||[µx xc  µy yc  f]ᵀ||. (2.17)

This is also illustrated in Figure 2.7.
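Equation (2.17) in code; a sketch where xc and yc are measured from the optical axis as in Figure 2.7, and f and the pixel pitches are given in the same length unit:

```python
import math

def centroid_to_unit_vector(xc, yc, f, mu_x, mu_y):
    """Unit vector to a star in the camera frame, eq. (2.17)."""
    v = (mu_x * xc, mu_y * yc, f)
    norm = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0] / norm, v[1] / norm, v[2] / norm)
```

A star on the optical axis (xc = yc = 0) maps to (0, 0, 1), i.e. straight out of the lens along the camera z-axis.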

2.6.3 Star Identification

The next step in the star tracking process is to match the detected stars to a star database. The database contains information such as the dec, ra and apparent magnitude of the stars. Most identification algorithms make use of the unit vectors from the image to calculate certain geometries that can be compared to the star database. Since the positions of the stars in the database are given by their dec and ra, they also need to be converted to unit vectors before the same geometries can be computed. This is performed by [25]

vᵢ = [cos decᵢ cos raᵢ
      cos decᵢ sin raᵢ
      sin decᵢ], (2.18)

where decᵢ and raᵢ are the declination and right ascension of star i.
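Equation (2.18) in code, together with the interstar angle that the identification geometries below are built from (a sketch; the angle function is my own helper):

```python
import math

def radec_to_unit_vector(dec, ra):
    """Unit vector of a catalog star, eq. (2.18); angles in radians."""
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

def interstar_angle(u, v):
    """Angle between two star directions, the quantity matched against
    the star-pair database."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp for rounding safety
```

The same angle can be computed from image-star unit vectors (2.17), which is what makes image and catalog geometries directly comparable.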

One technique where the geometries mentioned earlier are used is the Pyramid Star Identification Technique [26]. This algorithm is briefly explained below.


Pyramid Star Identification Technique

Given a star catalog and the fov of the camera, a k-vector database is generated from the star catalog. The k-vector database contains all star pairs that fit within the camera fov and have an apparent magnitude brighter than a set threshold, ordered with increasing interstar angles. By ordering the database in this way, the search becomes significantly faster.

Assuming the number of stars in the image is more than three, three stars i, j and k are chosen in a "smart" way. This means that the combinations of three stars are not searched linearly starting with the first star, i.e., 1-2-3, 1-2-4, 1-2-5, 1-3-4, etc., since this means that if the first star is a spike (a star not found in the database), all subsequent loops starting with star 1 have been evaluated without ever having the possibility to find a match. Instead the search cycles through all stars more rapidly, i.e., 1-2-3, 2-3-4, 3-4-5, etc. The interstar angle of each star pair in the current triangle of stars is then matched to the database. If a highly confident match has been found, the triangle is matched with an additional star r, thus creating the pyramid which gives the technique its name. If the three new star pairs r−i, r−j and r−k also match the database, the match is confirmed. By using the orientation information of the identified stars, the remaining stars in the image can be identified as well.

The technique of using three stars is quite common. By including an extra star to confirm the orientation, the algorithm becomes more robust. According to Mortari et al. [26], the probability that a random match could match all of the angles in a pyramid of stars, i.e., six angles, within the error margins is less than 10⁻⁷ for a modern star tracker. Hence, a match found using this technique should provide a near-certain star identification.
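The "smart" triple ordering can be sketched as a generator that cycles through the stars by increasing index offsets, so a single spike cannot stall the search. This is my own minimal reading of the ordering, not Mortari's exact implementation:

```python
def smart_triples(n):
    """Yield index triples (i, j, k) that cycle through all n stars rapidly:
    1-2-3, 2-3-4, 3-4-5, ... instead of 1-2-3, 1-2-4, 1-2-5, ...
    Every 3-combination is produced exactly once."""
    for dj in range(1, n - 1):        # offset between first and second star
        for dk in range(1, n - dj):   # offset between second and third star
            for i in range(n - dj - dk):
                yield i, i + dj, i + dj + dk
```

The innermost loop slides the whole triangle across the star list before the offsets grow, so a spike at index 0 only poisons a small fraction of the early candidates.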

2.6.4 Attitude Determination

After the identification is made, a set of star vectors exists in the body frame, i.e., vectors from the image, and in the reference, or inertial, frame. The problem of determining the attitude then comes down to solving for A in (2.3). This equation can be solved by using either single frame methods, only utilizing the vector measurements at a single time, or filtering methods, which employ information about the system dynamics to propagate the solution. Most single frame methods are based on Wahba's problem [40], in which the proper orthogonal matrix A that minimizes the loss function J(A) is to be found

J(A) = ½ Σᵢ₌₁ᵐ aᵢ ||wᵢ − Avᵢ||², (2.19)

where aᵢ are non-negative weights and wᵢ and vᵢ are unit vectors in the body and reference frame of star i, respectively. The weights aᵢ are usually set to the inverse variance, σᵢ⁻². The loss function can be rewritten as

J(A) = λ₀ − trace(ABᵀ), (2.20)

where

λ₀ = Σᵢ₌₁ᵐ aᵢ, (2.21)

and

B = Σᵢ₌₁ᵐ aᵢ wᵢ vᵢᵀ. (2.22)

It is clear that the loss function J(A) is minimized when trace(ABᵀ) is maximized.

Singular Value Decomposition

The singular value decomposition (svd) method is a numerically robust method that provides the optimal attitude matrix [22]. It is computationally heavy compared to similar methods, but since real-time aspects are not considered in this thesis, it is chosen for its simplicity and the fact that the solution is optimal given the measurements.

The matrix B has the singular value decomposition

B = U Σ Vᵀ = U diag[Σ11 Σ22 Σ33] Vᵀ, (2.23)

where the matrices U and Vᵀ are orthogonal and Σii marks the singular values, with Σ11 ≥ Σ22 ≥ Σ33 ≥ 0. This means that

trace(ABᵀ) = trace(A V diag[Σ11 Σ22 Σ33] Uᵀ) = trace(Uᵀ A V diag[Σ11 Σ22 Σ33]), (2.24)

which is maximized, according to [22], for

Uᵀ A V = diag[1 1 (det U)(det V)], (2.25)

and gives the solution

A = U diag[1 1 (det U)(det V)] Vᵀ. (2.26)
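A compact NumPy sketch of the svd solution (2.22)-(2.26); the function name and defaults are my own:

```python
import numpy as np

def svd_attitude(w, v, a=None):
    """Optimal rotation matrix for Wahba's problem via SVD.
    w, v: (m, 3) arrays of unit vectors in the body and reference frame;
    a: optional weights (defaults to equal weighting)."""
    w, v = np.asarray(w, float), np.asarray(v, float)
    a = np.ones(len(w)) if a is None else np.asarray(a, float)
    # B = sum_i a_i * w_i v_i^T, eq. (2.22)
    B = np.einsum('i,ij,ik->jk', a, w, v)
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt.T)
    # eq. (2.26): force det(A) = +1 so A is a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

With noise-free measurements wᵢ = R vᵢ the solver returns R exactly; with noisy vectors it returns the weighted least-squares optimum.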

Weighted Triad

The weighted Triad method uses only the two brightest identified stars to determine the attitude matrix, so W and V from (2.3) only contain two vectors each. This method is faster than the svd, but also less accurate, since only two vectors are used. The reason this method uses just two measurements is that it is often used with other sensors, where only two measurements are available, e.g. from a sun sensor and a magnetometer.

The standard Triad method is determined by the equations below [5]. Following the notation in (2.3), define four new column vectors

s_w = w1 / ||w1||, (2.27a)
s_v = v1 / ||v1||, (2.27b)
m_w = (w1 × w2) / ||w1 × w2||, (2.27c)
m_v = (v1 × v2) / ||v1 × v2||. (2.27d)

Using these vectors, the attitude matrix A is obtained as

A = [s_w  m_w  s_w × m_w] [s_v  m_v  s_v × m_v]ᵀ. (2.28)

The solution still holds when the order of the vectors in (2.27) is reversed, i.e. s_w and s_v are defined by the second measurement and database vector instead. However, the solutions may differ when the order is reversed, since the solution A is not the optimal solution for minimizing Wahba's problem, (2.19). By weighting the two solutions together, with weights dependent on the variances of the measurement vectors w1 and w2, the optimal solution is found, since the covariance matrix is minimized. This is the optimized, or weighted, Triad method, and it is solved by [1]

A0 = a1 A_TRIAD-1 + a2 A_TRIAD-2, (2.29)

where the weights are given by

a1 = σ2² / (σ1² + σ2²) and a2 = σ1² / (σ1² + σ2²), (2.30)

and A_TRIAD-1 and A_TRIAD-2 represent the two different Triad solutions. The variances are, as stated earlier, those of the first and second measurement vector. However, the addition of two rotation matrices does not necessarily result in a new rotation matrix, since it may not be orthogonal. In [1], this is solved by one orthogonalization cycle

A = 0.5(A0 + (A0⁻¹)ᵀ), (2.31)

which gives the final solution.
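The steps (2.27)-(2.31) can be sketched in NumPy as follows (a sketch under the same notation; function names are my own):

```python
import numpy as np

def triad(w1, w2, v1, v2):
    """Standard Triad solution, eqs (2.27)-(2.28)."""
    sw, sv = w1 / np.linalg.norm(w1), v1 / np.linalg.norm(v1)
    mw = np.cross(w1, w2); mw = mw / np.linalg.norm(mw)
    mv = np.cross(v1, v2); mv = mv / np.linalg.norm(mv)
    Mw = np.column_stack([sw, mw, np.cross(sw, mw)])
    Mv = np.column_stack([sv, mv, np.cross(sv, mv)])
    return Mw @ Mv.T

def weighted_triad(w1, w2, v1, v2, var1, var2):
    """Weighted Triad, eqs (2.29)-(2.31): blend both orderings and apply
    one orthogonalization cycle."""
    a1 = var2 / (var1 + var2)
    a2 = var1 / (var1 + var2)
    A0 = a1 * triad(w1, w2, v1, v2) + a2 * triad(w2, w1, v2, v1)
    return 0.5 * (A0 + np.linalg.inv(A0).T)
```

With exact measurements both orderings give the same rotation and the orthogonalization cycle is a no-op; with noisy vectors the blend weights favour the ordering anchored in the less noisy measurement.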

2.7 Filtering

To improve the orientation estimates, a Kalman filter can be used, which is the best possible estimator given that the process noise and measurement noise are Gaussian. The Kalman filter operates in two steps: the prediction update and the measurement update. The prediction update propagates the states according to a chosen model, while the measurement update fuses the predicted states with a new measurement.


A common type of Kalman filter used in attitude estimation methods is the Multiplicative Extended Kalman Filter (mekf) [19], which represents the attitude as a product of an attitude estimate and the error of that estimate. Using the quaternion representation, the product is

q = δq(φ) ⊗ q̂, (2.32)

where q̂ is the estimated unit quaternion and δq(φ) is the rotation from q̂ to the true attitude q, represented by a unit quaternion and parameterized by the three-component vector φ. The parameterization can take multiple forms, but a common approximation gives

δq(φ) = [φ/2
         1 ], (2.33)

which, it should be noted, is not a unit quaternion.

Given an estimate of φ, φ̂ = E{φ}, this gives δq(φ̂) ⊗ q̂ as the estimate of the true attitude quaternion q. By choosing q̂ so that φ̂ = 0, δq(φ̂) = δq(0) gives the identity quaternion, which means that q̂ is the best estimate of the true quaternion. This also means that φ is a three-component representation of the attitude error. The main objective of the mekf is to estimate this error vector. Once the error vector has been updated in the measurement update, the new information in φ is transferred to the estimated unit quaternion by [23]

δq(φ̂_k|k) ⊗ q̂_k|k−1 = δq(0) ⊗ q̂_k|k = q̂_k|k, (2.34)

where a subscript m|n indicates an estimate at time m, using measurements up to time n. Before updating φ in the next iteration, it must be reset, φ = 0.

The prediction step of the quaternion is given by

q̇ = ½ Ω(ω) q, (2.35)

with

Ω(ω) = [−[ω×]   ω
        −ωᵀ     0], (2.36)

where ω is the angular velocity in the body frame. Discretizing and making a series expansion gives

q(t + T) = [ cos(||ω(t)||T/2) I₄ + (sin(||ω(t)||T/2) / ||ω(t)||) Ω(ω(t)) ] q(t), (2.37)

where T is the sample time. The angular velocity is often measured by an imu, but it can also be estimated in the filter without any additional sensors.
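The closed-form propagation (2.37) can be sketched in NumPy, using the vector-first quaternion ordering of (2.5); the function names are my own:

```python
import numpy as np

def omega_matrix(w):
    """Eq. (2.36) written out for the [q1 q2 q3 q4] ordering."""
    wx, wy, wz = w
    return np.array([[0.0,   wz, -wy,  wx],
                     [-wz,  0.0,  wx,  wy],
                     [ wy,  -wx, 0.0,  wz],
                     [-wx,  -wy, -wz, 0.0]])

def propagate(q, w, T):
    """Closed-form discrete quaternion prediction, eq. (2.37)."""
    q, w = np.asarray(q, float), np.asarray(w, float)
    n = np.linalg.norm(w)
    if n < 1e-12:          # no rotation: the attitude is unchanged
        return q
    half = 0.5 * n * T
    Phi = np.cos(half) * np.eye(4) + (np.sin(half) / n) * omega_matrix(w)
    return Phi @ q
```

Because the transition matrix is built from cos/sin of the same half angle, the propagation preserves the unit norm of the quaternion to machine precision.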

The state error covariance matrix is defined as

P = E{δx δxᵀ}, (2.38)

and the prediction of this is

P_k|k−1 = Φ_k−1 P_k−1|k−1 Φᵀ_k−1 + Q(t), (2.39)

where Φ is the state error transition matrix and Q(t) is the process noise covariance. The error state vector δx(t) is composed of the error vector φ(t) and another state vector which, depending on the implementation, may be an estimate of the angular velocity error or the bias of an imu.

The full measurement update then becomes

B_k = H_k P_k|k−1 H_kᵀ + R_k, (2.40a)
K_k = P_k|k−1 H_kᵀ B_k⁻¹, (2.40b)
ν_k = z_k − H_k δx_k|k−1 = z_k, (2.40c)
δx_k|k = δx_k|k−1 + K_k ν_k = K_k z_k, (2.40d)
P_k|k = (I6×6 − K_k H_k) P_k|k−1 (I6×6 − K_k H_k)ᵀ + K_k R_k K_kᵀ, (2.40e)

where ν_k is the innovation, B_k the innovation covariance and K_k the Kalman gain. The measurement model defining H_k, R_k and z_k depends on the implementation. The prediction of the error state, δx_k|k−1, is equal to zero, leading to the simplifications of (2.40c) and (2.40d). The filter is completed by updating the quaternion according to (2.34).
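The update (2.40) translates almost line by line into NumPy. A sketch with an illustrative function name; the state dimension is left generic rather than fixed to 6:

```python
import numpy as np

def mekf_measurement_update(P, H, R, z):
    """Measurement update, eqs (2.40a)-(2.40e). Because the predicted
    error state is zero, the innovation equals the measurement z.
    Returns (delta_x, P_updated)."""
    B = H @ P @ H.T + R                    # innovation covariance (2.40a)
    K = P @ H.T @ np.linalg.inv(B)         # Kalman gain (2.40b)
    dx = K @ z                             # (2.40c)-(2.40d)
    I = np.eye(P.shape[0])
    IKH = I - K @ H
    P_upd = IKH @ P @ IKH.T + K @ R @ K.T  # Joseph form (2.40e)
    return dx, P_upd
```

The symmetric "Joseph" form of (2.40e) is used precisely because it keeps P symmetric positive semidefinite in floating point, unlike the shorter (I − KH)P form.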


3 Star Tracking Methods

This chapter will thoroughly describe the software aspects of the star tracking problem, essentially going through every step of the star tracking process outlined in Figure 2.6.

3.1 OpenStarTracker

The work in this thesis has been based on an open source software called OpenStarTracker (ost) developed by Andrew Tennenbaum at the Nanosatellite Lab at University of Buffalo. More information about the software can be found on the website¹ and in [38]. ost is a software that is able to provide an orientation based on star images. It has a back-end developed in C++ and a Python front-end. However, the software is optimized for small satellites operating in space, which means clear-sky star images in visible light are assumed to be available at all times. Hence, changes have to be made to adapt the software to swir camera images taken from earth.

Throughout this chapter, ost will be viewed as a reference point. While the structure of the software has been of great use, many processing steps have been changed either partly or completely.

3.2 Preliminary Processing

Before the actual star tracking process starts, some initial processing is performed to set up the software. A calibration is performed using a set of images of the sky taken by the camera to be used in the experiment. This identifies properties that will later be used in the star tracking process, such as the camera fov and

¹ http://openstartracker.org/


[Figure 3.1 block diagram: the star camera and 3–10 example images feed the calibration; the resulting calibration data and the star database feed the database filtering and database processing steps, producing the filtered and processed database.]

Figure 3.1: Overview of the calibration process.

the width and height of the images. The information determined by the camera calibration process is also used to process the star database. Figure 3.1 gives an overview of the process, and a detailed explanation follows in the two upcoming sections.

3.3 Calibration

The calibration process is still mostly in line with the one implemented in ost, with some minor modifications. The calibration process needs 3 to 10 example images of the sky as input, taken with the camera that will be used. As output comes a text file with properties of the calibration, which will be further explained below.

The example images are preferably captured with sufficiently many stars and of different parts of the sky. The calibration process then uses another software called astrometry.net [18] to solve astrometric properties of the images. Astrometry.net is a very robust system that takes an astronomical image as input and outputs the pointing, scale and orientation of that image by identifying the stars, without false positives [18]. A false positive in this case refers to the case when an image is solved, but the wrong stars have been identified, meaning that the properties of the image are wrong. The software needs no additional information about the image and works only with the information in the pixels.

Astrometry.net provides the image height and width as well as the pixel scale, i.e., the fov of each pixel. In addition to the calibration data given by astrometry.net, a few more properties are calculated by ost from the example images. All data obtained from the calibration process are summarized in Table 3.1.

The position error variance is defined as the variance of the pixel distance between the positions of all stars in the example images and the matched database stars projected into the image. The positions of the stars are obtained using astrometry.net.

In the original implementation of ost, additional properties are derived in the calibration process. A median image is computed, where each pixel value is


Table 3.1: Data generated in the calibration process.

Variable name    Description
IMG_X            Vertical image size
IMG_Y            Horizontal image size
PIXSCALE         The fov of every pixel in arcseconds
POS_VARIANCE     The average position error variance for matched stars

the median of the pixel values of the example images. This gives an approximation of the background light, as well as identifying pixels that keep a high pixel value throughout all images, therefore presumably being broken pixels. An image variance value is also calculated using the example images, as well as a base flux. However, none of these parameters are used in the implementation of this work.

3.4 Star Database

To identify the stars in the image, a star database is needed. ost provides a database called Hipparcos, containing approximately 118,000 astronomical objects [8]. However, this database is based on images taken with cameras operating in the visual spectrum. Depending on the type of star and its temperature, the amount of light it emits will vary at different wavelengths, some stars having a greater apparent magnitude in the visual spectrum while others have a greater apparent magnitude at longer wavelengths. This means that a camera operating in the swir spectrum may detect starlight from stars that are not present in the Hipparcos database. In the same manner, stars found in the database may not be detected by the swir camera. The main problem with this is that the probability of matching the image to the database is significantly reduced. Therefore, a star catalog optimized for the swir spectrum is beneficial. The 2MASS database [36] covers stars in the J-, H- and K-bands, as specified in Table 2.1, and it is therefore suitable for swir cameras.

3.4.1 Filtering

The 2MASS catalog contains 470,992,970 astronomical objects. This many objects are not necessary, since most will not be seen by the star camera. Too many stars will also make the star identification search slower and increase the risk of erroneously matched images. To limit the number of stars, the database can be filtered based on the stars' magnitude. In Section 2.1.1 the magnitude limit for a swir camera was said to be 6.8 in the H-band. Since the 2MASS database contains specific magnitude data for every spectral band J, H and K, a first filter can remove all stars with a magnitude higher than 7 in the H-band. Considering that the swir camera detects stars in the J-band as well, the same limit seems reasonable to set for the magnitudes in this band. The computation of


(a) The database stars with celestial coordinates within the fov 7.46° × 5.97° of dec 68.5, ra 182.0°.
(b) The database stars with celestial coordinates within the fov 7.46° × 5.97° of dec 353.6, ra 281.1°.
Figure 3.2: The fov of two different celestial coordinates in the same star database, highlighting the non-uniformity of the database. Each star is represented by a red circle. The axes represent pixels. (a) contains 69 stars, while (b) contains 848.

the reduced database is made using the catalog access tool VizieR2, making the database shrink to 162,007 stars, which is more in line with the Hipparcos cata-log.
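The magnitude cut described above can be sketched as follows. The field names `jmag` and `hmag` are hypothetical, and it is read here as keeping any star that is bright enough (magnitude at or below the limit) in at least one of the two bands; the thesis may instead apply the two cuts sequentially.

```python
MAG_LIMIT = 7.0  # limit applied in both the J- and H-bands

def filter_catalog(stars, mag_limit=MAG_LIMIT):
    """Keep only stars bright enough in the H- or J-band.

    A lower magnitude means a brighter star, so a star passes the filter
    if its magnitude is at or below the limit in at least one band.
    """
    return [s for s in stars
            if s["hmag"] <= mag_limit or s["jmag"] <= mag_limit]
```

In practice this filtering was done server-side through VizieR rather than locally, but the selection criterion is the same.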

The star database is not uniformly distributed. Instead, the number of stars in the fov varies significantly depending on what part of the sky the camera is pointing to. Figure 3.2 shows two parts of the sky and the stars from the filtered database found in each fov. The left image contains 69 database stars, whereas the right image contains 848 database stars. Having this many stars in the fov potentially increases the risk of erroneous matches. ost solves this by making the database uniformly distributed. This reduces the number of stars in the database, thus increasing the risk of not identifying the detected stars at all, but at the same time it reduces the risk of a false identification. Another advantage of a uniformly distributed database is that the database can initially be enlarged, thereby including fainter stars. In directions with few stars, these stars will fill out the void, while being sorted out in more densely populated areas. A comparison of the results obtained with the original database and the uniformly distributed database will be performed in the evaluation.
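One simple way to make the database roughly uniform, sketched below, is to group the stars into sky cells and keep only the brightest stars of each cell. This is an illustration of the idea rather than ost's actual method; the cell size and the per-cell limit are assumed values, and a plain ra/dec grid is not equal-area near the celestial poles, so a real implementation would use an equal-area tessellation.

```python
import math
from collections import defaultdict

def uniform_catalog(stars, cell_deg=5.0, keep_per_cell=10):
    """Thin a catalog so the star density is roughly even across the sky.

    Stars are grouped into cells of an ra/dec grid, and only the
    keep_per_cell brightest (lowest H-band magnitude) stars of each
    cell are kept.
    """
    cells = defaultdict(list)
    for s in stars:
        key = (math.floor(s["ra"] / cell_deg), math.floor(s["dec"] / cell_deg))
        cells[key].append(s)
    thinned = []
    for members in cells.values():
        members.sort(key=lambda s: s["hmag"])  # brightest first
        thinned.extend(members[:keep_per_cell])
    return thinned
```

Because each cell is capped, fainter stars included by an enlarged initial magnitude limit survive only in sparse regions, exactly the fill-out-the-void behaviour described above.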

3.4.2 Database Preprocessing and Compensation

Before the star coordinates are transformed to unit vectors, the precession since the epoch J2000 needs to be compensated for, as described in Section 2.2.1. No regard is taken to the nutation, which might cause slight errors in the final orientation.

²The VizieR catalogue access tool, CDS, Strasbourg, France (DOI: 10.26093/cds/vizier).
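The unit-vector conversion mentioned above can be sketched as follows. The precession rotation itself (the full IAU model that brings a J2000 direction to the current epoch) is omitted here; this shows only the standard mapping from celestial coordinates to a direction vector.

```python
import math

def radec_to_unit_vector(ra_deg, dec_deg):
    """Convert celestial coordinates (degrees) to a unit vector.

    The x-axis points toward the vernal equinox and the z-axis toward
    the celestial north pole.  Precession compensation would rotate
    this vector (or the input coordinates) from the J2000 frame to the
    current epoch.
    """
    ra = math.radians(ra_deg)
    dec = math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))
```

In practice a coordinate library such as astropy can perform the epoch transformation, so only the resulting coordinates need to pass through this conversion.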
