A Software Model for MATS Satellite Payload


Tejaswi Seth

Space Engineering, master's level (120 credits) 2018

Luleå University of Technology

Department of Computer Science, Electrical and Space Engineering


Supervisors: Dr. Ole Martin Christensen (Stockholm University) and Arvid Hammar (Omnisys Instruments AB)

Examiner: Dr. Mathias Milz, Luleå University of Technology

Department of Computer Science, Electrical and Space Engineering (Kiruna)

Academic year 2017-18


Abstract


This thesis presents the development of a software model that simulates a payload instrument onboard the MATS satellite. The goal of this model is to provide an understanding of how the instrument impacts the measured data. The model is important for error analysis and may help in correcting the measured data for systematic flaws in the instrument.

The software will consist of five main modules: Scene Generator, Optics Module, Stray Light Module, Charge-Coupled Device (CCD) Module and Electronics Module. This thesis lays a basic foundation for the software by designing the CCD module and part of the Optics module, and evaluates the effects of both on the output of the system. It takes into account important mission-defined procedures that ultimately aim to improve image quality, resolve vertical structures in different bandwidths and analyze noise effects on the measured data.


Acknowledgements

Foremost, I would like to express my gratitude towards Dr. Ole Martin Christensen at Stockholm University and Arvid Hammar at Omnisys Instruments AB for giving me the opportunity to perform my master's thesis on the MATS satellite project. Under their excellent supervision, I was able to experience working on a real space mission from both industrial and academic points of view. I am grateful to these institutions, which helped me improve my knowledge and earn valuable experience.

I would also like to thank my examiner at LTU, Dr. Mathias Milz for his helpful support throughout the course of my thesis.

Last but not least, I would like to thank my parents and my sister for believing in me and inspiring me to never give up on my dreams. Without their support and encouraging words, this thesis would not have been possible.


Contents

Abstract
Acknowledgements

I Introduction

1 Introduction to MATS
  1.1 Mission Objectives
  1.2 Noctilucent Clouds
  1.3 Payload
  1.4 Instrument Model

II Theory

2 Charge-coupled Devices
  2.1 Introduction
  2.2 Charge-coupled Device Operation
    2.2.1 Architecture
    2.2.2 Readout Process
    2.2.3 Binning
  2.3 Charge-coupled Device Noise
  2.4 Signal-to-Noise Ratio

3 Optical Filters and Beamsplitters
  3.1 Overview
  3.2 Ghosting in Beamsplitters

4 Point Spread Function
  4.1 Overview
  4.2 Airy Pattern
    4.2.1 MATS Point Spread Function Simulated Experiment

III Implementation of MATS Software Model

5 Model Summary
  5.1 Previous Software Model
  5.2 New Software Model

6 CCD Module
  6.1 Introduction
  6.2 Organizational Structure
  6.3 Interpolation
  6.4 Binning and Smearing
  6.5 Noise

7 Optics Module
  7.1 Introduction
  7.2 Point Spread Function
    7.2.1 Software Simulations

8 Results and Discussion
  8.1 Results for CCD Module
    8.1.1 Smearing
    8.1.2 Binning
    8.1.3 Noise
    8.1.4 Final Results
  8.2 Results for Optics Module

IV Conclusion

V Appendices

A Source Code for Old Model
B Source Code for New Model
  B.1 Scene Generator
  B.2 Optics Module
    B.2.1 Point Spread Function Code
  B.3 CCD Module
C Comparison of Image Row Values
  C.1 MATS Original Simulated Image (Figure 8.1)
  C.2 MATS Actual Smeared Image (Figure 8.2)
  C.3 MATS Actual Binned Image without Smear (Figure 8.5)
  C.4 MATS Actual Binned Image with Smear (Figure 8.6)
  C.5 MATS Actual Noisy Image (Figure 8.7)
  C.6 MATS Simulated Image with Actual CCD Module Parameters (Figure 8.10)

Bibliography


List of Symbols

λ        Wavelength of light
Ω        Solid angle for incoming photons on the CCD
σ        Standard deviation for a Gaussian distribution
σs       Standard deviation of shot noise
θ        Angle at which light spreads when passing through a camera aperture
A        Area of instrument aperture
BW       Bandwidth of incoming wavelength channel
CFL      Cut-on filter loss
CWL      Center wavelength
D        Aperture diameter
DC       Total dark current
DCpixel  Dark current on each CCD pixel
f        Focal length of optics system
FL       Filter loss from MATS filters
L        Radiance
MR       Mirror reflectance from MATS telescope mirrors
n        Refractive index of medium in which system is placed
ng       Refractive index of lens
Nr       CCD readout noise
NBP      Product of horizontal and vertical binned pixels in the CCD
NA       Numerical aperture of the MATS camera
P        Measured signal by the CCD
PS       Power split loss from MATS electronics
QE       Quantum efficiency of the CCD
r        Rayleigh criterion: radius of the first concentric ring surrounding the Airy disk after PSF implementation
r80      Radius of the Airy disk in which 80% of the image signal is conserved
S        Measured electron signal by the CCD
SB       Background radiance from other sources falling on the CCD
SL       Splitter loss from MATS beamsplitters
SNR      Signal-to-noise ratio
SR       Source radiance falling on the CCD
t        Integration time
TR       Transmission ratio of the CCD

Introduction


Introduction to MATS

MATS (Mesospheric Airglow/Aerosol Tomography and Spectroscopy) is a Swedish micro-satellite mission that aims to study atmospheric waves in the Earth's mesosphere. It is funded by the Swedish National Space Board (SNSB) and is currently under development by Omnisys Instruments, OHB Sweden, ÅAC Microtec, the Department of Meteorology (MISU) at Stockholm University, the Department of Earth and Space Sciences at Chalmers University, and the Space and Plasma Physics Group at KTH.

MATS is a 50 kg micro-satellite that will be launched into a 585 km dawn/dusk circular sun-synchronous orbit (SSO). The projected lifetime of the mission is two years, and its launch is scheduled for 2019.

1.1 Mission Objectives

The mesosphere ranges from roughly 50 km to 100 km above sea level [1]. This layer of the Earth's atmosphere has been affected by human activities over the years, but there have been limitations in predicting the atmospheric changes caused by these activities. One of these limitations is the uncertainty in the distribution of atmospheric waves around the globe [2].

These atmospheric waves, also known as gravity waves, are important for understanding the dynamics of the atmosphere. For instance, when winds pass over a mountain peak, they create waves that move upwards from the troposphere through the stratosphere to the mesosphere. At first, the waves travel through these layers without losing much energy; however, when they reach higher altitudes where the air is thin, their amplitude increases and they tend to break apart in a nonlinear way. These waves can be seen as different cloud formations in the sky (a phenomenon known as an undular bore).

A new atmospheric model has been proposed that aims to predict such changes in atmosphere dynamics. To help construct this model, MATS will conduct optical studies of the variation in light reflected off clouds in the mesosphere, called noctilucent clouds (NLC). These images will then be subject to a tomographic analysis resulting in a 3D view of gravity waves. This will allow MATS to map atmospheric waves on a global level [2].

1.2 Noctilucent Clouds

Noctilucent clouds (NLC), also known as night clouds, are the highest clouds in the Earth's atmosphere, occurring in the mesopause region during summer months. They are composed of ice particles with radii of several tens of nanometers, which scatter sunlight and thus can only be seen in twilight. When viewed from space, these clouds are known as Polar Mesospheric Clouds (PMC) [3].

NLCs are observed only in the polar regions, at latitudes above 50 degrees north and below 50 degrees south. They are formed when water vapour freezes in the extremely low temperatures (down to about 140 K) of the mesopause region.

As mentioned in section 1.1, gravity waves travel from the lower altitudes of the Earth's atmosphere to higher altitudes such as the mesosphere, until they lose all their energy. These waves are not only responsible for giving patterns to NLCs (as shown in figure 1.1), but also play a vital role in transferring energy, momentum and chemical radicals to the mesosphere.

Figure 1.1: Noctilucent Clouds over Gothenburg, Sweden (June 2016) (Image courtesy: Spaceweather.com)

Naturally, the mesosphere is extremely dry and has low temperatures in the summer months. The presence of water ice in this region therefore raises the question of how it got there. The water that forms these ice particles travels to the mesosphere from the lower atmosphere, partly as a result of human activities. First, human activities can change the distribution of winds in the atmosphere: the removal of mountains or the construction of buildings can affect the wind pattern in a specific area, thus altering the motion of gravity waves. Second, day-to-day activities such as agriculture and fossil fuel consumption have increased the amount of methane in the atmosphere. When methane reacts with hydroxyl radicals (OH) in the air, water molecules are produced. These molecules can then be transferred to the mesosphere by winds.

Another source of water vapour in the mesosphere is vigorous aerospace activity. For example, exhaust gases from NASA's space shuttle and other launch vehicles have been known to create noctilucent clouds in the arctic region. These exhaust plumes consist of 97% water vapour, which freezes in the low polar temperatures, creating water ice particles that form thin clouds [4]. Hence, it is important to predict the changes in weather patterns resulting from human pursuits in order to improve our understanding of global climate.

1.3 Payload

The MATS satellite is built on the InnoSat platform (developed by OHB Sweden and ÅAC Microtec). It consists of two instruments: the Limb Imager and the Nadir Imager (as shown in figure 1.2). In this figure, the limb baffle and optical bench on the outside and the CCD sensors on the inside of the Limb Imager are the main parts on which this thesis is focused. The limb baffle is the part through which light enters the instrument, while the optical bench is a rigid platform on which the mirrors, filters and beamsplitters are situated.

Figure 1.2: Nadir Imager and Limb Imager instruments on MATS (Image courtesy: Stockholm University)


The main task of the Limb Imager is to capture light from the limb of the atmosphere in six different wavelength channels, four IR and two UV: the UV channels (260-310 nm) capture the light scattered by NLC, while the IR channels (758-768 nm) capture atmospheric airglow emitted by the oxygen A-band in the mesosphere. The UV channels provide information on smaller-scale structures from light scattered by NLCs, while the IR channels provide information on larger structures and on atmospheric temperatures by measuring excitation energies of oxygen molecules [5]. In other words, spectral analysis of NLC will retrieve information on water ice particle sizes and other microphysical cloud properties, while analysis of the oxygen A-band will help determine atmospheric temperatures [6].

Channel   CWL (nm)   Spectral width (nm)
UV1       270        3.0
UV2       304.5      3.0
IR1       763        8.0
IR2       762        3.5
IR3       754        3.0
IR4       772        3.0

Table 1.1: Operating channel bandwidths for MATS

Table 1.1 lists the six channels in which the MATS limb instrument will conduct optical studies. Each channel has a center wavelength (CWL) with a wavelength shift accuracy of ±0.5 nm for the UV1, UV2, IR1 and IR2 channels and ±1 nm for the IR3 and IR4 channels.

Channels    VFOV, HFOV (km)   VRes, HRes (km)   Sensitivity (ph cm⁻² nm⁻¹ s⁻¹ sr⁻¹)   SNR
UV1 and 2   70-90, ±125       0.2, 5            1·10⁹                                  100
IR1 and 2   75-110, ±125      0.4, 10           1·10⁹                                  500
IR3 and 4   75-110, ±125      0.8, 50           1·10⁹                                  500

Table 1.2: Mission requirement specifications for MATS

Table 1.2 shows the mission requirements of the six channels for the limb instrument. The horizontal field-of-view (HFOV) for each channel is constant at ±125 km, but the vertical field-of-view (VFOV) changes with the altitudes of the NLC and oxygen A-band layers in the mesosphere. Each channel also has its own vertical and horizontal resolutions (VRes and HRes), chosen so that the imaging quality is sufficient: the point spread function of a single atmospheric object on the image plane must have 80% encircled energy within the resolution values defined in table 1.2. As the MATS Limb Imager will capture light in UV and IR channels, there is an uncertainty in the amount of light retrieved due to noise arising in the instrument and the number of measurements conducted. Therefore, a signal variation (or sensitivity) of 1·10⁹ ph cm⁻² nm⁻¹ s⁻¹ sr⁻¹ must be detectable for a binned pixel, and to meet the noise requirements for limb imaging in each channel, the signal-to-noise ratio is used as a metric [6].


Figure 1.3 shows the Limb Imager to be used on MATS. To perform limb imaging, the instrument consists of a baffle with three mirrors, followed by the splitting of light into six channels via dedicated beamsplitters and filters. The image in each channel is then captured by a CCD sensor with readout electronics that implement pixel binning and image processing techniques.

Figure 1.3: Inside view of Limb Imager on MATS (Image courtesy: Omnisys Instruments)

The Nadir Imager (shown in figure 1.4) captures atmospheric band emissions in the nadir direction. It provides information in a single wavelength channel on molecular gases with smaller scale heights in the atmosphere. The instrument consists of a wide-angle camera with simple lens optics and a horizontal coverage of 200 km.

Figure 1.4: Nadir Imager on MATS (Image courtesy: Omnisys Instruments)


1.4 Instrument Model

To analyze and perform design trade-offs in the working of the MATS instruments, a software model is required. This software will simulate the imaging capability of the two instruments onboard MATS while taking into account defined instrument requirements such as pixel binning techniques. The entire imaging process, from the capture of light to the final digital readout, is divided into different modules in the software. Each module has a predefined, dedicated task and works independently of the other modules. The software model is needed to analyze the results of MATS experimental simulations with proper care and to look out for situations that may threaten the success of the mission, such as excessively noisy output images or too much stray light entering the instrument. Moreover, the results of this model will help improve our understanding of the performance of each payload component in the real MATS space mission.

Hence, the goals of this master's thesis are: to develop a CCD module that simulates basic MATS readout and pixel binning techniques for one input image; to simulate the working of a point spread function using a Gaussian filter on the image; and to develop an empty shell for the optics module to simulate its organizational structure in the software model.


Theory


Charge-coupled Devices

2.1 Introduction

A CCD is a metal oxide semiconductor device that stores information in the form of electric charge and is capable of transferring this charge to digital systems for further processing. The stored information denotes the number of photons captured by the CCD as light falls on it. The CCD takes these incoming photons and converts them into measured electrons. This electron signal is then converted into a digital signal by an analog-to-digital converter (ADC) and finally processed by software to obtain the final image output [7].

Invented in 1969 by Willard S. Boyle and George E. Smith of Bell Telephone Laboratories, the CCD has since become a very popular imaging sensor for the visible and UV spectrum. Its better performance in low-light environments gives it an advantage over other imaging sensors such as CMOS, and the CCD is therefore widely preferred in astronomical applications [8].

2.2 Charge-coupled Device Operation

2.2.1 Architecture

There are various forms of CCD architectures, the most common being full-frame, frame-transfer and interline. These architectures are designed to fulfill the utility requirements of the CCD.

The CCD used in MATS has a full-frame architecture [9]. It is the simplest type of CCD architecture and is widely adopted due to its reliability and ease of fabrication. It usually has a 100 percent fill factor, which means that the full pixel array is capable of detecting photons during exposure to the object being imaged [10]. However, the CCD used in MATS has several areas that have been purposely blacked out. These blacked-out areas come directly from the CCD manufacturer and mainly help in checking the status of the CCD, i.e. reading out the CCD signal without any illumination.

Figure 2.1: CCD Full Frame Architecture (Image courtesy: AVC Emporium)

The full-frame CCD, as illustrated above, consists of two main sections: the imaging section (or exposure section) and the readout section. The imaging section consists of pixel arrays, or arrays of photo-diodes, which detect photons during exposure and convert them into electrical signals. The next step is to read out these electrical signals following the standard CCD readout process, which is explained in section 2.2.2.

2.2.2 Readout Process

The amount of charge stored in a pixel during the exposure time is used to measure the photon flux of that pixel in the CCD array. Exposure is a process in which light is incident on an image sensor (the CCD pixels), as determined by the aperture and the luminous intensity of the captured scene. Exposure time is therefore the length of time during which the image sensor is exposed to light.

Before the readout process, the charge stored in a pixel through exposure must first be transferred to a separate sampling node (as shown in figure 2.1). This transfer of charge takes place through a series of continuous processes, divided into serial shifting and parallel shifting. To visualize these two processes, a widely used analogy called the 'bucket brigade' is helpful. It compares the measurement of rainfall collected by an array of buckets with the way light (photons) falls on an array of pixels [11].

Figure 2.2 represents a group of empty buckets, arranged in rows (together called the parallel bucket array), with more than one bucket in each row. As the buckets fill up with water during rainfall, each bucket, starting from the right side of the conveyor belt, is poured into another set of buckets that form the serial bucket array.


Figure 2.2: Bucket Brigade Analogy that explains the working of a CCD readout process (Image courtesy: Nikon MicroscopyU)

Finally these serial buckets are collectively poured into a calibrated measuring container, thus completing the readout process for one CCD row.

Similarly, during exposure of the CCD to light, photons enter the CCD and are absorbed by its silicon layer. This absorption excites an electron from the silicon's valence band to its conduction band. The number of excited electrons varies from pixel to pixel for two main reasons:

• The angle and intensity of light falling on the CCD may not be the same for every pixel.

• Quantum efficiency (QE) of the CCD: the ratio of the number of electrons excited in a pixel to the number of photons striking it. This property rates the photon absorption capability of a sensor and varies between sensor materials. For example, the human eye has a QE of 20%, while that of a CCD can exceed 80% [12].

Quantum efficiency also depends on the wavelength of the incoming light. This can be seen in figure 2.3, a spectral response graph that shows the QE of two different CCDs over a range of wavelengths. The two CCDs have different operational capabilities at different wavelengths: the silicon CCD array works up to a wavelength of 1200 nm, while the InGaAs one goes up to 1800 nm.

The lower boundaries of the two detectors in figure 2.3 hardly overlap and are therefore important to take into consideration. The silicon CCD used in MATS has a spectral range of 200 to 1060 nm [9], which covers the entire bandwidth of all channels imaged by MATS (see table 1.1). The InGaAs detector, on the other hand, works only at wavelengths from 800 to 1800 nm and cannot measure anything in the UV spectrum. Therefore, it was not chosen for use in MATS.

Figure 2.3: Spectral Response Chart of QE of CCD vs Wavelength of incoming light (Image courtesy: Photovoltaic Education Network)

The next step in the readout process is parallel and serial row shifting, in which the image data from each row of the CCD are transferred into the calibrated container. This container is simply a storage area that collects these electrons and is also capable of transmitting them for further processing on a computing device. Once the pixels are devoid of any excited electrons, the readout process has ended, until the CCD is exposed to light again.

2.2.3 Binning

Binning is a clocking technique that combines the stored charge (the excited electrons) of adjacent pixels during the readout process. It is often more advantageous than the standard readout process because it offers reduced noise, faster readout speeds and an improved signal-to-noise ratio. The disadvantage is reduced spatial resolution; binning is thus a trade-off in which signal is gained at the cost of resolution.


Figure 2.4: 2x2 Pixel Binning Readout Stages (Image courtesy: Hamamatsu Photonics)

Figure 2.4 reviews the implementation of the binning process during readout. It shows 2x2 pixel binning on a 4x4 CCD array.

Before the standard readout process begins, a specific value (called the binning factor) is chosen, which defines the number of parallel rows to be combined in the serial register. This means that during parallel shifting, instead of each row shifting out separately, the rows are combined and integrated in the serial shift register. For example, in figure 2.4 the binning factor is two. Therefore, during the parallel shift, the first two rows are combined as one and added to the serial register; similarly, the third and fourth rows combine, and so on. Once a pair of rows has been transferred to the serial register, the next part of binning takes place: two adjacent serial pixels are combined once more. This final combination is called the summed pixel (as in figure 2.4) or the calibrated container (as in figure 2.2).

This summed pixel is also known as a super pixel. It shows an improved signal-to-noise ratio but lower image resolution. The average noise level for this pixel is reduced, but individual noise sources may or may not improve. The various types of noise sources are explained in section 2.3.
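To make the arithmetic concrete, the following is a minimal NumPy sketch of binning performed as a post-readout operation (in the actual hardware the charge is summed before the output amplifier; the sketch reproduces only the summation). The array contents and bin sizes are illustrative, and remainder rows or columns are simply cropped here, whereas section 6.4 describes binning them together:

```python
import numpy as np

def bin_pixels(image, bin_rows, bin_cols):
    """Sum adjacent pixels into super pixels (charge is added, not averaged)."""
    rows, cols = image.shape
    # Crop any remainder that does not fill a whole bin (illustrative choice).
    image = image[:rows - rows % bin_rows, :cols - cols % bin_cols]
    return image.reshape(rows // bin_rows, bin_rows,
                         cols // bin_cols, bin_cols).sum(axis=(1, 3))

# 2x2 binning of a 4x4 array, as in figure 2.4.
ccd = np.arange(16).reshape(4, 4)
print(bin_pixels(ccd, 2, 2))   # each entry is one summed super pixel
```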

2.3 Charge-coupled Device Noise

Noise in a CCD is any unwanted signal that degrades the quality of the image. Whenever an image is captured on a CCD, some noise will always be present. The majority of CCD noise is randomly generated, hence it is not possible to remove it completely. However, there are certain methods that reduce the impact of noise, thus improving the signal-to-noise ratio of the CCD.

The main contributions to overall CCD noise are as follows:

• Shot Noise

If the CCD is subject to consecutive equal exposure times, the number of photons measured (the intensity) in each image will vary slightly. If enough images are captured, it can be seen that this deviation in intensity follows the Poisson distribution. It is this deviation in intensity that gives rise to the shot noise associated with the image [13], given by

σs = √P    (2.1)

where σs is the standard deviation of the shot noise and P is the signal measured by the CCD in photons sr⁻¹ cm⁻². As the deviation follows a Poisson distribution, its standard deviation is equal to the square root of the measured photon intensity (equation 2.1). A numerical check of this relation is sketched after this list.

• Dark Current

Even in the absence of light, thermally generated electrons are present within the CCD. These electrons are generated either in the depletion bulk silicon or at silicon-silicon dioxide interface states. If the temperature of the CCD is high, electrons at these states may be excited out of the valence band, which adds to the signal measured in the pixel.

The generation of dark current is a thermal process in which electrons use thermal energy to reach an intermediate state, from which they enter the conduction band. It is therefore advisable to keep the CCD operating temperature as low as possible to prevent the excitation of such electrons [14].

When thermally generated electrons are produced in the CCD, they add to the main signal (the image), generating white dots in the image. Figure 2.5 shows an example where thermally generated electrons are imprinted on the image as white dots in the absence of light. These dots are individual pixels with high dark currents and are often referred to as hot pixels or dark current spikes [15].


Figure 2.5: Hot Pixels shown as bright white coloured spots in a dark image (Image courtesy: R Widenhorn et al [15])

• Readout Noise

Readout noise is the combination of noise from the system components involved in converting the excited electrons to a voltage signal, quantifying that signal and finally performing the analog-to-digital conversion. Its main contribution comes from the on-chip pre-amplifier, where the noise is randomly added to each image pixel [13].

Figure 2.6: CCD RMS Noise vs Sampling Frequency graph (Image courtesy: University College London)


Figure 2.6 shows the readout noise versus the sampling frequency for a typical CCD. As this frequency increases, the root mean square value of the read noise also increases.
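As a quick numerical check of the shot noise relation in equation 2.1 (first item in the list above), one can draw repeated Poisson "exposures" of a single pixel and compare the spread with √P. This is a minimal sketch, and the mean signal level is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
P = 10_000                              # mean measured signal per exposure
frames = rng.poisson(P, size=100_000)   # repeated equal exposures of one pixel
print(frames.std())                     # ~100, i.e. close to sqrt(P) = 100
```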

2.4 Signal-to-Noise Ratio

As explained earlier, noise is inherent to all image sensors. It needs to be controlled to ensure that the signal relative to the noise is adequate to capture image information on the CCD. This measure of a signal with respect to the combined noise is called the signal-to-noise ratio (SNR), which also determines system performance and efficiency [16].

The equation used to calculate the CCD SNR is given by:

SNR = S / √(SR · TR · AΩ · BW · t + DC + Nr²)    (2.2)

where S is the measured signal on the CCD in e⁻, SR is the source radiance falling on the CCD in ph sr⁻¹ cm⁻² s⁻¹, TR is the transmission ratio of the CCD, AΩ is the product of the solid angle at which photons are incident on the CCD and the area of the instrument aperture in cm², BW is the bandwidth of the light channel in nm, t is the integration time in s, DC is the total dark current in the CCD in e⁻, and Nr is the CCD readout noise in e⁻ rms.

Equation 2.2 assumes that the incoming signal is the only source of light reaching the CCD. In a real space mission, however, the camera will be subject to multiple light sources, such as the sun or light reflected off the instrument baffle. In that case, the equation changes slightly, such that:

SNR = S / √((SR + SRB) · TR · AΩ · BW · t + DC + Nr²)    (2.3)

where SRB is the background radiance coming from sources other than the main source of radiance, measured in ph sr⁻¹ cm⁻² s⁻¹. This light is also termed stray light and is a definite concern in the MATS payload system, as it decreases the SNR.

Source radiance is the amount of light coming into the MATS instrument from the main source (the Earth) as captured by the camera optics. The source radiance and stray light are both measured values, used directly as inputs to equations 2.3 and 2.4. As radiation from the main source falls on the CCD, the photons in individual pixels are detected and converted into electrons. This detected signal is the measured signal S in equation 2.3. It is free from background radiation because it corresponds to an ideal SNR without any stray light. The signal S is given by

S = SR · TR · AΩ · BW · t    (2.4)

In equation 2.4, AΩ is the product of the aperture area and the mean solid angle for each CCD pixel. The solid angle is the angle at which radiance is incident on each CCD pixel, measured in steradians (sr). BW is the bandwidth of any of the six wavelength channels in which the MATS instrument will operate; it is the mean (central) value of the bandwidth range and is measured in nanometers. The integration time t (also called exposure time) is the amount of time during which the CCD is exposed to incoming light, measured in seconds.

Transmission ratio (TR) is a percentage of the amount of light transmitted from the instrument after being subject to multiple losses in the optics and the CCD. It is a product of various types of signal losses throughout the instrument and is given as:

TR = FL · QE · (MR)³ · CFL · PS · SL    (2.5)

where FL is the filter loss that occurs when light passes through the various MATS filters and beamsplitters, QE is the quantum efficiency of the CCD sensor (see section 2.2.2), MR is the mirror reflectance, i.e. the fraction of light reflected off each of the three telescope mirrors in the MATS optics, CFL is an additional transmission loss that occurs when incoming light passes through the UV and IR filters, PS is the power split loss, a loss of transmittance from the UV beamsplitter (see section 3.1) in the MATS optics, and SL is the splitter loss from the beamsplitters.

Finally, in equation 2.3, DC is the total dark current generated in the CCD by the end of the readout process. It depends firstly on the number of binned pixels in the CCD rows and columns, which in turn depends on the user-defined binning factor (BF) input. The total dark current also depends on the dark current of each pixel and on the integration time, because this time period determines the residual noise signal generated when the CCD finishes its readout process for one image. The total dark current is given by

DC = NBP · DCpixel · t    (2.6)

where NBP is the product of the number of horizontally and vertically binned pixels in the CCD, DCpixel is the dark current on each CCD pixel measured in e⁻ s⁻¹ pixel⁻¹, and t is the integration time in s.

Lastly, in equation 2.3, Nr is the readout noise generated by the CCD during the readout process. It is measured in e⁻ rms and is predefined by the CCD manufacturer. The MATS CCD readout noise depends on the readout frequency: its value is 9 e⁻ rms at 3 MHz, while at 2 MHz the readout noise is 7.5 e⁻ rms. These values are independent of the channel, as they originate in the CCD hardware.
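Equations 2.2-2.6 can be collected into a small calculator. The sketch below uses the dark current rate and readout noise quoted in this thesis (5 e⁻ s⁻¹ pixel⁻¹ and 9 e⁻ rms) and the 2 x 40 binning from chapter 6; all other numbers are placeholders for illustration, not MATS budget values:

```python
import math

def transmission_ratio(FL, QE, MR, CFL, PS, SL):
    """Equation 2.5: product of the instrument losses."""
    return FL * QE * MR**3 * CFL * PS * SL

def snr(SR, SRB, TR, A_omega, BW, t, n_binned, dc_pixel, Nr):
    """Equation 2.3, with S from equation 2.4 and DC from equation 2.6."""
    S = SR * TR * A_omega * BW * t                    # eq. 2.4: detected signal
    DC = n_binned * dc_pixel * t                      # eq. 2.6: total dark current
    noise = math.sqrt((SR + SRB) * TR * A_omega * BW * t + DC + Nr**2)
    return S / noise

# Placeholder radiances and geometry; dark current, readout noise and the
# 2 x 40 binning are the values quoted in this thesis.
TR = transmission_ratio(FL=0.9, QE=0.8, MR=0.95, CFL=0.9, PS=0.5, SL=0.9)
print(snr(SR=1e9, SRB=1e7, TR=TR, A_omega=1e-6, BW=3.0, t=3.0,
          n_binned=2 * 40, dc_pixel=5.0, Nr=9.0))
```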


Optical Filters and Beamsplitters

3.1 Overview

Optical filters are devices that allow only certain wavelengths of light to pass through them while blocking the remaining wavelengths. They can be made from various materials, such as coated glass or plastic.

In the MATS limb instrument, a system of beamsplitters and filters is situated on the optical bench. These work together to separate the signal into six light channels and to form a separate image on the CCD for each channel. For these six channels, there are in total six narrowband interference filters, three broadband filters, five beamsplitters and two folding mirrors, as shown in figure 3.1.

Figure 3.1: MATS Limb Instrument Optics System (Image courtesy: Omnisys Instruments)

The MATS optical system for the limb instrument also consists of three reflective mirrors that are situated before the filters in an off-axis telescopic design (as shown in figure 3.2).


These mirrors are confocal and use a combination of convex and concave surfaces in order to remove linear astigmatism. Mirror M2 is convex, whereas the other two are concave [17].

Figure 3.2: MATS Limb Instrument 3 mirror off-axis design for a single channel (Image courtesy: Omnisys Instruments)

Table 3.1 lists the channel bandwidths and their respective spectral requirements (transmittance T and reflectance R) for the relevant filters and beamsplitters in the MATS limb optics.

Optical component   Operating bandwidth (nm)   Spectral requirements (%)
BS UV IR            265-320 and 365-1000       R > 90 and T < 90
filterB IR          750-780                    T > 90
BS UV               250-320                    T = R = 50

Optical component   Center wavelength (nm)
filterN UV1         270 ± 0.5
filterN UV2         304.5 ± 0.5
filterN IR1         762 ± 0.5
filterN IR2         763 ± 0.5
filterN IR3         754 ± 1.0
filterN IR4         772 ± 1.0

Table 3.1: Channel requirements of filters in the MATS limb instrument

Here BS UV IR is the beamsplitter separating UV and IR, filterB IR is a broadband IR filter, BS UV is the beamsplitter for the UV1 and UV2 channels, and filterN denotes a narrowband filter with the center wavelength (CWL) of an individual channel.

3.2 Ghosting in Beamsplitters

Ghosting is a phenomenon that occurs when light is repeatedly reflected off the surfaces of a lens (or, in this case, a beamsplitter). The most common device in which ghosting occurs is the partially reflective mirror (beamsplitter), which transmits only certain wavelengths of light while reflecting the remaining wavelengths.


Figure 3.3: Ghosting in a partially reflective mirror (Image courtesy: Laser and Electro-optics Components [18])

In figure 3.3, when light is incident on a partially reflective surface (with refractive index ng = 1.5), part of it is reflected back while the remaining part is refracted into the device. This refracted light undergoes multiple internal reflections owing to the thickness of the mirror, and a part of each reflection is transmitted out of the mirror surface. This means that the photo-capturing device (such as the camera CCD) sees multiple photon signals, and thus the same image is created multiple times.

Figure 3.4 shows an image captured in bright sunlight. It can be seen that the sunlight casts a number of bright coloured orbs. These orbs can display various colours and shapes, and originate from multiple reflections of light in the camera lens. They usually appear in a straight line from the light source and span the entire image [19]. These orbs are called 'ghosts', and the phenomenon is known as ghosting.

Figure 3.4: Sample image showing presence of ghosting due to multiple reflections of sunlight within the camera lens (Image courtesy: Canon)

Until recently, it had been decided that pellicle beamsplitters would be used in MATS for the infrared channels. They were chosen because they prevent ghosting even if multiple reflections occur: pellicle beamsplitters have very thin coated membranes that absorb reflected light, so multiple internal reflections are no longer a problem. However, in the second half of 2017, experimental tests concluded that pellicle beamsplitters added design complexity to the MATS optics. It was therefore decided to use glass beamsplitters instead, which may allow some ghosting but reduce the design complexity. Even then, the amount of ghosting expected in MATS is very small and, if it occurs, it is only due to internal reflections in the coatings of the optical components.


Point Spread Function

4.1 Overview

The point spread function (PSF) is a two-dimensional diffraction pattern of light emitted from a point source and projected onto the image plane. In other words, when light is emitted from a point source, only a fraction of it is focused at a specific point in the image plane. The remaining light distributes around it and produces a diffraction pattern in the form of concentric rings around a central bright disk of light [20], called the Airy disk (named after George Biddell Airy). The rays may spread in random directions, sometimes overlapping each other. This interference results in blurring of the image, since each image pixel is now lit by more than one ray of light, which in turn decreases the overall sharpness of that pixel. An example is given in figure 4.1.

Figure 4.1: Image showing the blurring effect when subject to a point spread function (Photo courtesy: Wikipedia)

4.2 Airy Pattern

The point spread in MATS is described by an Airy pattern that depends on the wavelength of the light and the numerical aperture of the camera. It is diffraction limited and distributes the light all around the Airy disk, thus giving a blurred effect to the image.

The numerical aperture (NA) is a dimensionless number that describes the angle at which light spreads from its mean position. It depends on the refractive index of the camera lens and the angle of spread, and is expressed as

NA = n sin θ    (4.1)

where n is the refractive index of the glass lens in the MATS camera and θ is the angle at which light spreads when passing through the aperture of the camera.

The PSF is the impulse response of an optical system; that is, it is the image captured when the system images a point source.

Figure 4.2: Simulated PSF from Nominal Design for MATS Optics (Image courtesy: Omnisys Instruments)

Figure 4.2 shows a simulated PSF from the nominal MATS design. The left panel shows an infrared image of the point spread, while the right panel shows the point spread as a graph. The center point in the left panel corresponds to the central axis of the first peak in the right panel.

The simulated PSF is symmetric and may therefore be approximated with a Gaussian distribution in the software model for easy implementation.

The radius of the first concentric ring surrounding the central disk in a PSF can be expressed as:

r = 0.61 λ / NA    (4.2)

where r, known as the Rayleigh criterion, is the radius of the first concentric ring surrounding the central disk in the Airy pattern, λ is the wavelength of light and NA is the numerical aperture of the MATS limb instrument.


The numerical aperture (NA) can be expressed as:

NA = nD / (2f)    (4.3)

where n is the refractive index of the medium in which the lens operates, D is the aperture diameter (the diameter of mirror M2, see section 3.1, equal to 35 mm in the MATS limb instrument) and f is the effective focal length of the optical system (equal to 261 mm [17]). Substituting these values gives a numerical aperture of 0.067 for the MATS limb instrument.
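As a quick check of equations 4.2 and 4.3, the sketch below reproduces the NA value from the stated D = 35 mm and f = 261 mm and evaluates the Rayleigh criterion for two of the channel wavelengths from table 1.1; the resulting radii are approximate:

```python
n = 1.0                      # refractive index of the surrounding medium
D, f_len = 0.035, 0.261      # aperture diameter and focal length in metres

NA = n * D / (2 * f_len)     # equation 4.3 -> about 0.067
print(f"NA = {NA:.3f}")
for label, lam in [("UV1", 270e-9), ("IR1", 763e-9)]:
    r = 0.61 * lam / NA      # equation 4.2, Rayleigh criterion in metres
    print(f"{label}: r = {r * 1e6:.2f} um")
```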

4.2.1 MATS Point Spread Function Simulated Experiment

Figure 4.3 shows an image from the MATS PSF experiment, which utilizes a point-source pinhole with a diameter of 10 µm. The raw image was taken with an 8.8 mm x 6.6 mm Sony CCD sensor with 11.6 µm x 11.2 µm pixels.

Figure 4.3: Raw image of point source target in a MATS like simulated experiment (Image courtesy: Omnisys Instruments)

A 10 µm pinhole mounted in the focal plane of the collimator was used for this experiment. Figure 4.3 is an image of the projected pinhole target showing the spread. The Airy disk radii for UV and IR are 2.68 and 7.56 µm respectively, so the projected target can be regarded as a point source. The resultant spot size appears fairly symmetric in this image [17].


Lastly, it should be noted that the PSF pattern for MATS as stated in this chapter is not the ideal case. As described in chapter 3, the light entering the optics module passes through various filters (depending on the channel). These filters may give rise to irregular reflections, decreasing the intensity of the focused light that exits a particular channel filter.


Implementation of MATS Software Model


Model Summary

5.1 Previous Software Model

In the preliminary design phases of the MATS project, an older software model was used. This model calculated the signal-to-noise ratio of the CCD using user-defined parameters and generalized functions, and comprised a Microsoft Excel worksheet that held all variables and functions together. It was designed before the Preliminary Design Review (PDR) of the MATS payload and therefore treated many mission-related possibilities in a very generic way. For example, the orbital altitude was initially assumed to be between 600 and 650 km, whereas it has since been fixed at 585 km.

The main purpose of this model was to calculate the signal-to-noise ratio as its primary output. However, since it did not consider specific information obtained in the detailed phases of the MATS project, such as the final orbital altitude or the CCD operating temperature, an updated model was needed. The updates would incorporate the latest project developments, helping users generate more realistic simulations.


Figure 5.1: Snapshot of the old MATS software model in Microsoft Excel

Figure 5.1 shows a snapshot of the Microsoft Excel file containing the old MATS model. The model calculated various parameters, such as the total dark current and the detected signal for summed CCD pixels, and derived the total signal-to-noise ratio from these values. At the top of the image is a list of mission-defined parameters, each of which affected these values in some way, and thus the measured CCD signal.

Since this model was also hard to read and comprehend due to its complex Microsoft Excel worksheet design, it was discarded to make way for a more organized software design: the new MATS software model.

5.2 New Software Model

Developed in Python 3.6.0 with the Anaconda distribution, the new software model uses separate functions for separate modules to reduce complexity and promote code re-usability.

Before the new software could be written, it was necessary to decode the previous model and understand its basic functioning. In other words, the Excel file had to be implemented as a Python program. The first code developed for the new model therefore consisted of the formulas and concepts from the previous model, coded in Python (see Appendix A). This code performed the same functions and produced nearly the same value of the signal-to-noise ratio. It thus formed a basic foundation for the new model, from which the different software modules would be developed.


Figure 5.2: Main Flowchart for MATS model

Figure 5.2 represents the overall organizational structure of the MATS software model. It consists of five main modules. The flowchart shows the flow of program logic, starting from the capture of an image by the payload cameras up to the final electronic output in digital form. Each module modifies the image in its own way, simulating the effects of different parts of the instrument. These modifications can be seen in sections 6.3 and 6.5 of this report, which discuss the interpolation function and the addition of noise, respectively, in the CCD module.


CCD Module

This chapter explains the implementation of the CCD module, which simulates the functioning of the CCD in the satellite instrument. The module comprises three subroutines, namely interpolation, binning and noise. Each is covered in detail in a separate section.

6.1 Introduction

As the core part of the MATS payload system, the CCD is responsible for reading out the measured electrons in each pixel. These electrons result from photons coming through the optics module. In the CCD module, the image goes through a series of readout procedures that improve the accuracy of the output image and reduce the noise on the signal, thus improving the overall signal-to-noise ratio.

The CCD is the most crucial part of the MATS payload because this is where the incoming image undergoes the readout process, including the specialized binning and noise-related procedures. Hence, an inside-out approach was followed, in which this module was designed first in order to build a solid foundation for the software.

6.2 Organizational Structure

The subsections of the main CCD module are presented together in figure 6.1. This flowchart represents the flow of program logic for the CCD simulation, starting from the arrival of an image into this module up to the final output. As seen in figure 5.2, the CCD module receives its input from the stray light module. This input consists of image data in photons/second for every pixel of the CCD array.


Figure 6.1: CCD Module Flowchart

In the MATS system, the CCD is exposed to the incoming scene for 3 seconds, i.e. the exposure time for the actual mission is 3 seconds. The incoming image is divided into frames, and each image carries a time stamp that distinguishes individual images coming into the MATS system.

Another important input for the CCD module is the readout time: the time taken by the CCD to read one frame of the image, from the first row shift until the storage of information in the summed pixel.

In the MATS software model, however, the readout time has been simplified to the individual row readout time (the parallel readout time per row), which is approximately 2 milliseconds per row. This was done to organize and build the code for each CCD row and work upwards from there, which makes it possible to identify problems in the code by inspecting each CCD row.


6.3 Interpolation

The resolution of the input image is set by the PSF from the MATS optics. This resolution does not match the CCD resolution and therefore needs to be modified before the readout process.

The MATS CCD has a total of 2048x512 square pixels, each 13.5 µm wide [9]. The incoming image needs to fit properly onto the pixels of the CCD in order to preserve the entire image data; otherwise, some of the pixel data may be lost for lack of pixels.

To prevent this, interpolation is carried out, so that the image is mapped onto the CCD grid regardless of its original size.

Interpolation has been implemented for MATS by taking into account the following two conditions, tested on an example image as shown in figure 6.2.

Figure 6.2: Sample alphabet image for testing out interpolation functions for MATS CCD

• Smaller CCD resolution

Firstly, the incoming image might have a higher resolution than the CCD. This means that the total number of pixels in the image is greater than 1,048,576 (= 2048x512), the resolution of the CCD.

This condition was tested in the simulation software with an interpolation function from the SciPy scientific Python library. The process starts by creating a rectangular mesh for the CCD in order to align its x and y axes with the incoming image. Next, the image is fitted with a bivariate spline function, which chooses image values that fit smoothly onto the CCD mesh. Thus the majority of the image content is retained and implemented on the CCD, and only a small amount of data is lost. The interpolation is therefore smooth and easy to evaluate, as can be seen in figure 6.3.

Figure 6.3: Resultant alphabet image after Bivariate Spline Interpolation

• Larger CCD resolution

The other condition involves the image being smaller than the CCD, i.e. the number of pixels in the image is less than the CCD resolution of 1,048,576. This condition was tested using another SciPy function that resizes the image. This function averages pixels with their neighbouring pixels, merging several rows and columns into one, using the CCD resolution as a reference.

This is an easier process than the bivariate spline interpolation because there is no need to create a mesh and align the x and y axes. Although this condition will not occur in MATS, it is better to support it in the software in order to increase the flexibility of the model towards unexpected situations or for testing different cases; therefore it was implemented in the model. Assuming a CCD resolution of 300x350, which is lower than the image resolution, figure 6.4 shows the resultant alphabet image after resizing, fitting all image details into the new resolution. A minimal code sketch of this resampling step is given after the figure.

Figure 6.4: Resultant alphabet image that is resized by interpolation
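Below is a minimal sketch of resampling an incoming image onto the CCD grid with SciPy's rectangular bivariate spline. The thesis code uses separate SciPy routines for the two cases above; this sketch, which covers both directions with a single spline evaluation, is an illustration under that simplification, and the input array is a stand-in:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def resample_to_ccd(image, ccd_rows=512, ccd_cols=2048):
    """Fit a smooth spline through the image and evaluate it on the CCD mesh.

    Works in both directions: it downsamples an image larger than the CCD
    and upsamples one that is smaller.
    """
    rows, cols = image.shape
    spline = RectBivariateSpline(np.arange(rows), np.arange(cols), image)
    # Rectangular CCD mesh aligned with the image axes.
    y = np.linspace(0, rows - 1, ccd_rows)
    x = np.linspace(0, cols - 1, ccd_cols)
    return spline(y, x)                    # evaluated on the full CCD grid

scene = np.random.rand(3000, 4000)         # stand-in for the incoming image
print(resample_to_ccd(scene).shape)        # (512, 2048)
```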


6.4 Binning and Smearing

In the MATS simulation, the binning process starts with the selection of a binning factor, followed by a direct transfer of the first CCD row in the parallel shift registers to the serial register of the CCD. After the first row, the remaining rows are combined in groups of the binning factor until fewer rows are left in the CCD parallel array than the binning factor. The remaining rows (if any) are then binned together and transferred to the serial array.

An important effect has to be considered when modelling the readout process. As there is no shutter in the MATS cameras, light falls on the CCD at all times. This means that during readout, each row is still lit by photons while it is being shifted, giving rise to an extra signal in that time instant, until that particular row has finished shifting and is free to collect incoming light again. This extra signal merges with the original row data during readout and is known as smear; the process producing it is called vertical smearing.

MATS imaging requires that smearing be minimized [6]. Smearing degrades image clarity, so the maximum amount of smear allowed must not make the image lose its important details. There are two solutions for reducing smear.

• The first is to put a shutter in the camera. MATS has none, in order to reduce the number of movable parts in the payload and thus the risk of equipment failure.

• The second solution is to make the row readout time very short compared to the CCD exposure time, ensuring that the smear value (the error in the image signal) is very small. This is the approach taken in MATS: the individual row readout time is 0.002 seconds, while the exposure time is 3 seconds. The results of this solution can be seen in the smearing results of section 8.1.1 for MATS simulated images.

The limb image quality has a strict requirement on the image pixels obtained from pixel binning. Since the captured scene has a much smaller vertical than horizontal extent (45 by 250 km respectively), the vertical image quality should be better than the horizontal [6]. This is because the structures that MATS will investigate vary much more in the vertical dimension than in the horizontal: a noctilucent cloud might be 100 km wide but only 500 metres thick. So when a number of rows are combined in the binning process, the resultant image has decreased resolution, with a risk of losing important details in the vertical dimension. This is where the binning factor comes into play: it sets the number of rows binned together so that the overall image quality is not compromised.


The vertical binning factor for MATS is 2, while the horizontal binning factor is 40. This means that every two rows in the image will be combined and read out as one row, while every 40 columns in each row will be binned together.

This master's thesis does not include column binning or horizontal smearing; all simulations were conducted to test only row binning and vertical smearing. A minimal sketch of this row readout is given below.
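The sketch below implements row binning together with a simple vertical smear model, assuming that every charge packet sweeps the full column once and picks up one row readout time's worth of the column's flux. This uniform-column approximation, and all parameter names, are illustrative simplifications rather than the exact arithmetic of the thesis module:

```python
import numpy as np

T_EXP, T_ROW = 3.0, 0.002            # exposure and per-row readout time (s)

def readout_rows(flux, bin_factor=2, t_exp=T_EXP, t_row=T_ROW):
    """Row binning with a simplified vertical smear model.

    flux: photon flux per pixel (photons/second). During readout the CCD is
    still illuminated, so each shifted row picks up extra charge; here every
    packet is assumed to sweep the full column once, collecting t_row worth
    of the column's total flux.
    """
    signal = flux * t_exp                    # charge collected during exposure
    smear = flux.sum(axis=0) * t_row         # extra charge per packet, per column
    n_rows = (flux.shape[0] // bin_factor) * bin_factor
    binned = [signal[r:r + bin_factor].sum(axis=0) + bin_factor * smear
              for r in range(0, n_rows, bin_factor)]
    return np.array(binned)

frame = np.random.rand(512, 2048) * 1000.0   # stand-in scene, photons/s
print(readout_rows(frame).shape)             # (256, 2048) after 2-row binning
```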

To validate the binning code for MATS, it was initially tested on the sample alphabet image (see figure 6.2). Figure 6.5 shows the alphabet image as read out from the CCD, and figure 6.6 shows the same CCD output image with smear present. The images are upside down because the CCD readout process starts from the bottom-most row of the image. Figure 6.6 shows an exaggerated amount of smear, not the real value for MATS, in order to make the effects of vertical smearing visible.

Figure 6.5: Resultant alphabet image after CCD readout

Figure 6.6: Resultant alphabet image showing exaggerated smear due to vertical smearing


6.5 Noise

The addition of noise to the image is the last part of the CCD module. The noise comprises the three major noise sources for the MATS CCD: readout noise, dark current and shot noise. All are implemented in a similar way, being randomly generated on each CCD pixel during readout. The readout noise is drawn from a Gaussian distribution, while shot noise and dark current follow a Poisson distribution. In MATS, however, the dark current value comes directly from the CCD manufacturer, so it does not have to be derived from a Poisson counting process. Combined, the total noise approximately follows a Gaussian process. This random contribution is added to each image pixel during the shifting of the serial registers into the summed pixel array (see section 2.2.2).

In this way, each image pixel has a randomly generated signal added to it as its noise value. The readout noise for the MATS CCD is 9 rms electrons, while the dark current is 5 electrons/(second pixel). Shot noise is the randomness in consecutive measurements of the photons incident on a CCD; it equals the square root of the number of photons detected and is measured in photons. A sketch of this noise implementation is given below.
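The following is a minimal sketch of applying the three noise sources to a (binned) image, using the values quoted above. Shot noise and dark current are drawn from Poisson distributions and readout noise from a Gaussian, per section 2.3; the input image and the exact point of application in the readout chain are illustrative:

```python
import numpy as np

rng = np.random.default_rng()

def add_ccd_noise(electrons, t=3.0, dark_rate=5.0, read_noise=9.0):
    """Apply shot noise, dark current and readout noise to a mean-signal image.

    electrons: mean signal per (binned) pixel in electrons.
    dark_rate: e-/(s * pixel); read_noise: rms electrons.
    """
    shot = rng.poisson(electrons).astype(float)          # Poisson shot noise
    dark = rng.poisson(dark_rate * t, electrons.shape)   # dark current electrons
    read = rng.normal(0.0, read_noise, electrons.shape)  # Gaussian readout noise
    return shot + dark + read

clean = np.full((256, 2048), 5000.0)       # stand-in binned image, in electrons
noisy = add_ccd_noise(clean)
print(noisy.mean(), noisy.std())           # std ~ sqrt(5000 + 15 + 81) ~ 71
```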

A few tests were performed on sample images to verify the implementation of noise on an incoming image. Figure 6.7 shows a sample image without any added noise, while figure 6.8 has an exaggerated amount of noise added after the readout process, so that it is clearly visible to the user, thus visually verifying the developed software code.

Figure 6.7: Sample image of an airplane without any visible noise (Image courtesy: Smartech)


Figure 6.8: Resultant image of an airplane after implementation of exaggerated values of readout noise, dark current and shot noise

In figure 6.8, it can be seen that the image is not as sharp as the original. This decrease in image clarity is due to noise from the CCD as well as the randomness of the incident photons. These noise components have been added to every pixel, which is why the entire image appears noisy and loses detail.


Optics Module

This chapter describes the layout and structure of the optics in the MATS payload, followed by a final implementation of the point spread function.

7.1 Introduction

The optics module is the second part of the main flowchart (figure 5.2). It consists of three main sub-modules: the filters, the point spread function and the cropping module. The basic objective of the optics module is to simulate how the image is blurred as it passes through a series of optical disturbances. Figure 7.1 represents the flowchart of the optics module; the empty shell code for the module can be seen in appendix B.2.

The input data needed for the optics module mainly comprises the following: the incoming channel, the center wavelength of the incoming light, the temperature of the filters, and polar coordinates to map the image onto the instrument FOV.

The first components encountered by the image are the filters and beamsplitters, which divide the rays into their specific channels (4 IR and 2 UV). As light strikes these filters, some of it may be lost due to irregular reflections (called filter loss), or, conversely, extra signal may be picked up due to the high temperature of the filters. In either case, the original image is modified during filtering.


Figure 7.1: Optics Module Flowchart

The next component encountered is the PSF module, in which the image undergoes a point spread function that blurs it through the irregular spreading of each point-source signal. In the MATS optics, the three mirrors in the baffle act as reflectors directing light to the CCD sensor. This light is observed by each pixel as coming from a point source, and the amount of degradation (blurring) in the image produced on the CCD is a direct measure of the PSF of the system.

Once the image is blurred, it is passed through a cropping module that crops the image to match the circular aperture of the second telescope mirror M2 (see section 3.1) in the limb instrument. M2 is the only mirror with a circular shape; the other two mirrors, M1 and M3, are oval. This has been done to ensure uniform illumination of the optics, with collimated light from all angles producing a symmetric cone (solid angle) when incident on the mirror.
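To summarise the chain described above, a short skeleton of the optics module is sketched below. All three helper functions are simple placeholders written for illustration (a uniform transmission loss, a single Gaussian blur and a circular mask), not the actual MATS implementation; the real shell code is given in appendix B.2.

import numpy as np
from scipy.ndimage import gaussian_filter

def apply_filters(image, transmission=0.9):
    # Placeholder filter/beamsplitter loss: uniform transmission factor
    return image * transmission

def apply_psf(image, sigma_px=1.2):
    # Placeholder PSF: a single Gaussian blur (see section 7.2)
    return gaussian_filter(image, sigma=sigma_px)

def crop_to_aperture(image):
    # Placeholder crop: mask out everything outside an inscribed circle,
    # mimicking the circular aperture of mirror M2
    rows, cols = np.indices(image.shape)
    cy, cx = (image.shape[0] - 1) / 2, (image.shape[1] - 1) / 2
    radius = min(image.shape) / 2
    inside = (rows - cy) ** 2 + (cols - cx) ** 2 <= radius ** 2
    return np.where(inside, image, 0.0)

def optics_module(image):
    # Filters -> PSF -> cropping, in the order of figure 7.1
    return crop_to_aperture(apply_psf(apply_filters(image)))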


7.2 Point Spread Function

In the MATS limb instrument, the vertical FOV is greater than the horizontal FOV. This means that the number of vertical pixels in the image is higher than the number of horizontal pixels. Based on this fact, the PSF on the CCD can be resolved by taking into account the resolution requirements in the vertical direction [17], since the vertical resolution requirement is the most demanding one for the entire instrument.

7.2.1 Software Simulations

The PSF simulation for this project consists of the development of a module that approximates the PSF pattern with a Gaussian distribution. This distribution consists of a single PSF impulse response applied over the whole image; in other words, the deviation from the mean value of the Gaussian distribution is approximated only once. This forms a basic foundation for the implementation of the actual MATS PSF pattern.

The deviation from the mean value for a MATS simulated image depends on the radius of the Airy disk within which 80% of the image radiance is preserved. This means that the imaging quality is deemed sufficient if the object being imaged is projected onto points in the image plane (CCD) such that up to 80% of their energy is confined to an area not exceeding the image pixel to be read out (i.e. 80% of the energy is preserved) [6]. This can be explained with the help of table 7.1.

Channel          r80 [µm]    sigma [µm]
UV1 and UV2      10.345       8.1544
IR1 and IR2      20.695      16.313
IR3 and IR4      41.385      32.622

Table 7.1: Required PSF for all channels of the MATS limb instrument [17]

It has been calculated by Omnisys Instruments that the spread in signal (denoted sigma) depends on the incoming light channel and on the radius of the Airy disk within which 80% of the energy is conserved (denoted r80). Two more parameters affect this spread: the spatial frequency ν for resolving vertical wave structures of 0.5, 1 and 2 km height, and the Modulation Transfer Function (MTF), which specifies how spatial frequency affects the MATS system.
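As a minimal sketch of how the values in table 7.1 could be applied in software, the snippet below converts sigma from micrometres to pixels and blurs the image with a single Gaussian kernel. The pixel pitch used here is an assumed placeholder, not the actual MATS CCD value, and the function name is illustrative.

from scipy.ndimage import gaussian_filter

# Required PSF widths (sigma, in micrometres) from table 7.1
SIGMA_UM = {"UV1": 8.1544, "UV2": 8.1544,
            "IR1": 16.313, "IR2": 16.313,
            "IR3": 32.622, "IR4": 32.622}

PIXEL_PITCH_UM = 13.5  # assumed pixel pitch; replace with the real value

def apply_channel_psf(image, channel):
    # Convert the required spread from micrometres to CCD pixels and
    # apply it as a single Gaussian impulse response over the image
    sigma_px = SIGMA_UM[channel] / PIXEL_PITCH_UM
    return gaussian_filter(image, sigma=sigma_px)

For instance, apply_channel_psf(image, "IR3") would blur the image with a sigma of about 2.4 pixels under the assumed pitch.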

The values in table 7.1 have been used on the MATS simulated image of Earth's limb, and the results are shown in section 8.2. Meanwhile, to visually test the PSF code in the software, the sample image in figure 7.2 has been processed with increasing deviation values to see how they affect the resolution and clarity of the image.


Figure 7.2: Sample image to be used for PSF simulation

Figure 7.2 is an 800x600 resolution image. It shows a global map in which country borders and outlines can be viewed with good clarity. For the simulation, it is passed through a PSF with a deviation value (sigma) of 2 arbitrary units to see how the image is affected; this deviation corresponds to the width of the Airy disk for each image pixel. The resulting image is shown in figure 7.3.

Figure 7.3: Resultant image after PSF simulation with sigma = 2

In figure 7.3, each pixel is subject to a spread that deviates its value from its mean intensity by 2 arbitrary units. This causes photons to overlap with neighbouring pixels, blurring the image. The borders and lines, however, are still visible enough to be differentiated.


Next, the radius of the Airy disk is increased further to see whether the blurring of the image increases.

Figure 7.4: Resultant image after PSF simulation with sigma = 6

Figure 7.4 has a deviation of 6 arbitrary units. The image is blurred further, but its details can still be differentiated in a global view.

When the radius of the Airy disk is increased even more, it is found that a value of approximately 20 arbitrary units results in total blurring of the image, such that even the larger countries on the map can no longer be distinguished and the image is smudged altogether. This can be seen in figure 7.5.

Figure 7.5: Resultant image after PSF simulation with sigma = 20


8 Results and Discussion

8.1 Results for CCD Module

8.1.1 Smearing

Figure 8.1: Simulated image of Earth's limb captured by the MATS limb instrument

The software has been tested on a MATS simulated image provided by Dr. Ole Martin Christensen to test and validate the programme modules developed in this master's thesis (see figure 8.1). It is a simulated image showing the limb of the Earth as it will be captured by the MATS limb camera, and it clearly shows the different vertical and horizontal details present. The darker region in this image represents space, while the brighter region is the Earth's albedo. The thin greenish layer that separates these dark and bright regions is a layer of noctilucent clouds (see chapter 1) in the mesosphere. These clouds appear bright due to sunlight reflected off them, which is captured by the camera.

According to the MATS requirements (see section 6.4), smearing should be minimized in order to preserve the details in an image. Since smearing occurs because every CCD row is subject to a readout time, pixel values smudge into each other in the vertical direction, and important details in the image are lost. In the actual MATS CCD, the individual row readout time is 0.002 seconds, which gives rise to an extra smear accumulated over that time during readout. Figure 8.2 shows the realistic simulated output from the MATS instrument with a row readout time of 0.002 seconds.
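A minimal sketch of how this smear can be modelled is given below, assuming the scene stays constant during readout, that the signal scales linearly with integration time, and that row 0 is the row closest to the readout register; the names and layout are illustrative, not the MATS module code.

import numpy as np

def add_readout_smear(image, exposure_time, row_readout_time=0.002):
    # While being shifted out, each charge packet spends
    # row_readout_time in every scene row between its starting
    # position and the readout register, collecting extra signal
    # at that row's illumination rate.
    rate = image / exposure_time  # photo-electron rate per pixel
    # Exclusive cumulative sum along the columns: the rows a packet
    # has already passed on its way out.
    passed = np.cumsum(rate, axis=0) - rate
    return image + passed * row_readout_time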

Figure 8.2: Resultant MATS simulated image of Earth’s limb with presence of smear

In figure 8.2, no smearing is visible in comparison with figure 8.1. This is because the individual row readout time (0.002 seconds) is very small compared to the exposure time (3 seconds) of the image. In other words, the extra signal (smear) generated in 0.002 seconds is much smaller than the original pixel value in the image.

In order to view the differences between the smeared and non-smeared images, a differential scatter plot has been charted (see figure 8.3). It plots the smeared image signal against the non-smeared image signal. Because the smear is very small compared to the pixel values, the graph appears as a straight line (marked by a thick blue line). However, when zoomed in on the center portion of the graph, a distribution of image pixels can be seen, as in figure 8.4. A semi-transparent blue diagonal line goes through the center of both image arrays and thereby conveys the one-to-one relationship between adjacent pixels (shown in red) on either side of the blue line in both arrays.

Figure 8.3: Scatter plot representing smeared signal against non-smeared signal for MATS simulated image

Figure 8.4: Zoomed-in scatter plot showing one-one relationship between smeared signal and non-smeared signal for MATS simulated image

Moreover, to see the changes in pixels, the values of the first columns in both images have been compared. Because each image has 2048 pixels in each column (that is, 2048 rows), the resulting array is quite large. Therefore, at most 60 pixel values from the first column of each image are given in appendix C at the end of this report. In the two arrays, the first pixel values of both images are the same, because the first row is read out directly from the CCD and therefore carries the actual pixel value. From the second row onward, each row is subject to smearing, and the signal therefore changes along each row.

8.1.2 Binning

During the implementation of binning, two options were considered for handling the rows left over when the number of CCD rows is not divisible by the binning factor. The first option discards the remaining rows and keeps only the fully binned rows; this makes the binning process in the real CCD less complex and the results easy to analyze on ground, at the cost of losing the data in the omitted rows. The second option combines the remaining rows into one final row that then follows the usual readout process, so that no data in the incoming image is wasted. As the mission focuses on obtaining accurate, high-quality atmospheric data, the second option was chosen.
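A sketch of the chosen behaviour is shown below, with illustrative names; leftover rows that do not fill a complete bin are summed into one final row instead of being discarded.

import numpy as np

def bin_rows(image, nrbin=2):
    # Sum groups of nrbin rows into single binned rows
    nrows, ncols = image.shape
    nfull = (nrows // nrbin) * nrbin
    binned = image[:nfull].reshape(-1, nrbin, ncols).sum(axis=1)
    if nfull < nrows:
        # Remaining rows are combined into one extra row (option two)
        leftover = image[nfull:].sum(axis=0, keepdims=True)
        binned = np.vstack([binned, leftover])
    return binned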

Figure 8.5 is a simulated image from MATS that shows the number of rows (i.e. the vertical resolution of the image) lost due to binning. It takes as input the actual row binning factor for MATS, which is equal to 2, but does not include any smear. Due to this binning factor, the number of rows is halved in the output image.

Figure 8.5: MATS simulated image of Earth's limb with row binning but no smearing

Figure 8.6, on the other hand, is another MATS simulated image that shows binning, but this time includes the presence of smear by taking into account the individual row readout time of 0.002 seconds. The resultant image array values are given in appendix C to compare the image radiance between figures 8.5 and 8.6.


Figure 8.6: MATS simulated image of Earth’s limb with row binning and smearing

8.1.3 Noise

As stated in section 6.5, the noise added to a MATS image comprises three parts: readout noise, dark current and shot noise. These are, however, combined into a single equation to calculate the total uncertainty in the individual pixel values of the image.

variance (photons) = Readout Noise + Dark Current + Shot Noise    (8.1)

Both shot noise and dark current follow a Poisson distribution, which is why their values differ between consecutive measurements; their noise contributions are therefore calculated as the square roots of their respective photon signals. Equation 8.1 can now be expressed as:

variance (photons) = Readout Noise + \sqrt{Dark Current} + \sqrt{Shot Noise}    (8.2)

In order to find the total noise, which follows a Gaussian process, the standard deviation of the measured signal value becomes:

stdev = \sqrt{variance}    (8.3)

      = \sqrt{(Readout Noise)^2 + (\sqrt{Dark Current})^2 + (\sqrt{Shot Noise})^2}    (8.4)

      = \sqrt{(Readout Noise)^2 + Dark Current + Shot Noise}    (8.5)
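Equation 8.5 translates directly into a per-pixel noise estimate. The sketch below assumes all quantities are expressed in electrons and that the shot-noise term equals the detected signal; the function name is illustrative.

import numpy as np

def pixel_noise_stdev(signal, exposure_time, readout_noise=9.0, dark_current=5.0):
    # Quadrature sum of readout noise, dark-current noise and shot
    # noise, following equation 8.5
    dark = dark_current * exposure_time  # accumulated dark electrons
    return np.sqrt(readout_noise**2 + dark + signal)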
