
Institutionen för systemteknik
Department of Electrical Engineering

Master's thesis

Calibration of multispectral sensors

Master's thesis carried out in Image Processing
at Linköping Institute of Technology

by

Wilhelm Isoz

LITH-ISY-EX-3651-2005

Linköping 2005

Department of Electrical Engineering
Linköpings universitet


Calibration of multispectral sensors

Master's thesis carried out in Image Processing
at Linköping Institute of Technology

by

Wilhelm Isoz

LITH-ISY-EX-3651-2005

Supervisors: Thomas Svensson
             Division of Sensor Technology, FOI
             Ingmar Renhorn
             Division of Sensor Technology, FOI
Examiner:    Per-Erik Forssén
             ISY, Linköpings universitet


Division, Department: Bildbehandling, Department of Electrical Engineering,
Linköpings universitet, S-581 83 Linköping, Sweden

Date: 2005-12-13
Language: English
Report category: Master's thesis (Examensarbete)
URL for electronic version: http://www.cvl.isy.liu.se/
http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5202
ISRN: LITH-ISY-EX-3651-2005

Title: Kalibrering av multispektrala sensorer / Calibration of multispectral sensors
Author: Wilhelm Isoz

Keywords: nonuniform correction, infrared sensor, reference based, radiation,
graphical user-interface


Abstract

This thesis describes and evaluates a number of approaches and algorithms for nonuniform correction (NUC) and suppression of fixed pattern noise in an image sequence. The main task of this thesis work was to create a general NUC for infrared focal plane arrays. To create a radiometrically correct NUC, reference based methods using polynomial approximation are used instead of the more common scene based methods, which create a cosmetic NUC.

The pixels that cannot be adjusted to give a correct value for the incoming radiation are defined as dead. Four separate methods of identifying dead pixels are used to find these pixels. Both the scene sequence and calibration data are used in these identifying methods.

The algorithms and methods have all been tested on real image sequences. A graphical user interface using the presented algorithms has been created in Matlab to simplify the correction of image sequences. An implementation to convert the corrected values from the images to radiance and temperature has also been performed.


Acknowledgements

This master's thesis was conducted at the Department of IR Systems, FOI, Linköping. I would like to thank the following people:

- The initiators of the thesis and my supervisors at the Department of IR Systems, FOI, Thomas Svensson and Ingmar Renhorn.
- The staff at the Department of IR Systems, FOI.
- Thomas Svensson for the daily discussions and ideas about the thesis work.
- Martin Mileros, who introduced principal component analysis (PCA) to me.
- My examiner Per-Erik Forssén, at the Department of Electrical Engineering, Linköpings universitet.
- My opponent Mikael Löfqvist.


Notation

Symbols

L      Spectral radiance [W/(m^2·sr·µm)]
M      Emittance [W/m^2]
M_e    Spectral exitance [W/(m^2·µm)]
y_ij   DN of pixel j from frame i
x_ij   Corrected DN of pixel j from frame i

Abbreviations

DN     Digital number
FPA    Focal plane array
FPN    Fixed pattern noise
MWIR   Mid wave infrared
NUC    Nonuniform correction
PCA    Principal component analysis
SWIR   Short wave infrared
TIR    Thermal infrared
TN     Temporal noise


Contents

1 Introduction
  1.1 Background
  1.2 Previous work
  1.3 Goal of the thesis
  1.4 Implementation
  1.5 Thesis overview

2 Basic concepts
  2.1 Basic radiometry
    2.1.1 Properties of radiators
    2.1.2 Lambertian radiator
  2.2 Infrared imaging
    2.2.1 Detectors
    2.2.2 From detector to an imaging system
    2.2.3 Responsivity
    2.2.4 Noise

3 Equipment used in thesis
  3.1 Cameras
    3.1.1 Multimir
    3.1.2 Emerald
    3.1.3 Information from files
  3.2 Radiation sources

4 Nonuniform correction of IR-images
  4.1 Scene based correction
    4.1.1 Statistic methods
    4.1.2 Registration-based
  4.2 Reference based correction
    4.2.1 Correction function
  4.3 Quality measurement

5 Identifying and replacing dead pixels
  5.1 Temporal noise
  5.2 Extreme values
  5.3 Value on polynomial coefficients
  5.4 Principal component analysis
  5.5 Replacement of bad pixels

6 Results
  6.1 NUC and bad pixel filtering
  6.2 Linear responsivity
  6.3 Correctability
  6.4 Measurement considerations
  6.5 The user-interface of Matlab program
  6.6 DN → physical unit

7 Conclusions
  7.1 Future work

Bibliography

A NIPALS algorithm

B Guide to the user interface
  B.1 Startup
    B.1.1 Select calibration file
  B.2 View image and set dead pixel limits
  B.3 View file without any correction
  B.4 Save file
  B.5 IrEval

C Radiometric calibration
  C.1 Atmospheric absorption


Chapter 1

Introduction

1.1 Background

The image data obtained from an infrared sensor are matrices of digital pixel values, equivalent to the image data obtained from digital cameras in the visual region. The goal of all imaging is that the information in the image should represent the scene as closely as possible; this is degraded by noise sources such as the nonuniform response of each pixel in the detector. This type of noise is a much more serious concern in the infrared region than in the visual region. A nonuniform correction (NUC) of the elements is therefore needed, leading to improved image quality. The need for a NUC becomes clear when viewing an uncorrected image where the nonuniformity appears, see figure 1.1. The image shows two types of nonuniformities: first, pixels with no or almost no response, which appear as black and white pixels; second, a Gaussian blur over the image, due to the nonuniform response.

1.2 Previous work

Nonuniformity of focal plane arrays (FPA) is a well known problem and many articles on the subject are found in the literature, each describing a specific method to reduce the nonuniformity. All these methods have their own advantages and disadvantages. A number of them are selected and reviewed in chapter 4. A nonuniform correction method that had earlier been used at FOI, Linköping, was implemented as a set of Matlab scripts and worked reasonably well on single images, but the algorithms were not fully developed. The computation time was too long to correct series of images and the control of the different parameters in the nonuniform correction was poor.


Figure 1.1: Uncorrected image from an infrared sensor

1.3 Goal of the thesis

The goal of this thesis was to design and evaluate nonuniform correction methods. Some of the wishes for this thesis were:

• An enhanced image quality.
• The method must be general; it has to work on different kinds of image data (e.g. static scenes, motion, high and low contrast).
• Efficiency concerning the computation time.
• The nonuniform correction should be user friendly.

No article in the literature fulfils all these requirements. The last two requirements were not the least important, since the amount of image data collected during field trials might be large, up to 200 GB, divided into a large number of data files.

1.4 Implementation

The focus of the thesis work was on the nonuniform correction. The algorithms were implemented in Matlab v7.0. The algorithms and utilities were tested on real infrared sequences registered with two high performance infrared cameras, denoted Multimir and Emerald. Both are available at the Department of IR Systems and are routinely used in signature measurements. The computer used was a Pentium III 3.0 GHz with 1024 MB RAM.


1.5 Thesis overview

Chapter 2 - Basic concepts
Gives an overview of some basic concepts and definitions used in the thesis and an introduction to radiometry.

Chapter 3 - Equipment used in thesis
Describes the IR-cameras and other tools used in the thesis.

Chapter 4 - Nonuniform correction of IR-images
Describes different techniques for nonuniformity correction of images. Both scene based and reference based correction methods are described.

Chapter 5 - Identifying and replacing dead pixels
Different methods for identifying dead pixels are described and implemented.

Chapter 6 - Results
The proposed algorithms presented in previous chapters are merged together. The result of this is evaluated and a presentation of the created GUI is given.

Chapter 7 - Conclusions
A conclusion and discussion of the results from this thesis is presented and suggestions for further work are given.

Appendix A - NIPALS algorithm
The algorithm used to calculate the PCA.

Appendix B - Guide to the user interface
An overview of the created graphical user interface.

Appendix C - Radiometric calibration
Describes the transformation of the pixels' digital values into a radiometric unit.


Chapter 2

Basic concepts

In this chapter the basic concepts used in this thesis work are described and defined. Radiometry, which describes the energy transfer from a source to a detector, is treated in section 2.1. Concepts connected to infrared imaging are treated in section 2.2. The information presented here is mainly gathered from references [4], [5], [9], [12] and [16], if nothing else is specified.

2.1 Basic radiometry

Radiometry describes the energy or power transfer from a source to a detector. Passive remote sensing in the optical regime (visible through thermal) depends on two sources of radiation. In the visible to near infrared band, the radiation collected by a remote sensing system originates from the sun. In the thermal infrared band, thermal radiation is emitted directly by materials on the earth. Part of the radiation received by a sensor has been reflected at the earth's surface and part has been scattered by the atmosphere, without ever reaching the earth.

All objects with a temperature above zero kelvin emit thermal radiation, according to Planck's law, eq. 2.1. The spectral emittance [W/(m^2·µm)] of an ideal blackbody is a function of the absolute temperature and the wavelength and is described by the Planck distribution law

    M_λ(λ, T) = c_1 / (λ^5 (e^(c_2/(λT)) − 1))   [W/(m^2·µm)]    (2.1)

where
    T   = Blackbody temperature [K]
    λ   = Wavelength [µm]
    c_1 = 3.7418·10^8 [W·µm^4/m^2]
    c_2 = 1.43388·10^4 [µm·K]

In figure 2.1 the Planck distribution is plotted for five different temperatures.
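The Planck distribution is straightforward to evaluate numerically. The thesis implementations are in Matlab; the sketch below is an illustrative Python/NumPy version of eq. 2.1, using the constants c_1 and c_2 given above, with all function and variable names hypothetical.

```python
import numpy as np

# Planck's distribution law (eq. 2.1): spectral exitance of an ideal blackbody.
# Constants as given in the text: c1 in W·µm^4/m^2, c2 in µm·K.
C1 = 3.7418e8   # first radiation constant
C2 = 1.43388e4  # second radiation constant

def spectral_exitance(wavelength_um, temperature_k):
    """Blackbody spectral exitance M_lambda in W/(m^2·µm), wavelength in µm."""
    lam = np.asarray(wavelength_um, dtype=float)
    return C1 / (lam**5 * (np.exp(C2 / (lam * temperature_k)) - 1.0))

# Example: a 300 K blackbody radiates far more at 10 µm than at 2 µm,
# consistent with figure 2.1.
m10 = spectral_exitance(10.0, 300.0)
m2 = spectral_exitance(2.0, 300.0)
print(m10 > m2)  # True
```

With λ in µm the result is in W/(m^2·µm), which is also the form needed later when converting a reference radiator's temperature to radiance (chapter 3).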


Figure 2.1: The spectral emittance at five different temperatures (200 K, 300 K, 600 K, 1000 K and 2000 K): (a) spectral emittance; (b) spectral emittance within the infrared region.

Two main features may be observed in the figure.

• The peak shifts to shorter wavelengths as the temperature is raised. The peak position is given by Wien's displacement law

      λ_max = 2898/T    (2.2)

  where
      λ_max = Wavelength at which the radiation is a maximum [µm]
      T     = Temperature [K]

• The total energy emitted is strongly dependent on the temperature. In fact it is proportional to the fourth power of the temperature, as stated by the Stefan-Boltzmann law

      M = σ·T^4   [W·m^-2]    (2.3)

  where
      σ = 2π^5 k^4 / (15 c^2 h^3) = 5.669·10^-8 [W·m^-2·K^-4]
      k = Boltzmann constant = 1.380662·10^-23 [J·K^-1]
      h = Planck's constant = 6.626176·10^-34 [J·s]
      T = Temperature [K]
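Both observations are easy to check numerically. A small illustrative Python sketch of eqs. 2.2 and 2.3 (not thesis code; names are hypothetical):

```python
SIGMA = 5.669e-8  # Stefan-Boltzmann constant [W·m^-2·K^-4]

def wien_peak_um(temperature_k):
    """Wavelength of maximum emission (eq. 2.2), in µm."""
    return 2898.0 / temperature_k

def total_emittance(temperature_k):
    """Total emitted power per unit area (eq. 2.3), in W/m^2."""
    return SIGMA * temperature_k**4

# A 300 K surface peaks near 9.7 µm, i.e. in the thermal infrared ...
print(round(wien_peak_um(300.0), 1))  # 9.7
# ... and doubling the temperature raises the emitted power 16-fold.
print(round(total_emittance(600.0) / total_emittance(300.0), 6))  # 16.0
```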

The most interesting spectral regions are shown in table 2.1. They contain relatively transparent atmospheric windows, see appendix C, and there are effective radiation detectors in these regions.

The thermal infrared region (TIR) is the part that may be used by infrared sensors to detect emitted radiation. Due to strong absorption bands in the atmosphere (mainly H2O and CO2) the transmission is very low between 5−8 µm and above 14 µm.


Name                          Wavelength range   Main radiation source
Visible                       0.4−0.7 µm         solar
Near Infrared (NIR)           0.7−1.1 µm         reflected solar
Short Wave Infrared (SWIR)    1.1−3 µm           reflected solar
Thermal Infrared (TIR)        3−14 µm
  Mid Wave Infrared (MWIR)    3−5 µm             solar, thermal
  Long Wave Infrared (LWIR)   8−14 µm            thermal

Table 2.1: Spectral regions and their radiation sources

2.1.1 Properties of radiators

The maximum emittance of a sample is given by Planck's law, eq. 2.1. The ratio between the emittance of a sample and the emittance from a blackbody at the same temperature is called the spectral emissivity and is given by

    ε_s(λ) = M_λ^s(λ) / M_λ^bb(λ)    (2.4)

where
    M_λ^s(λ)  = Spectral emittance of the sample
    M_λ^bb(λ) = Spectral emittance of a blackbody

Kirchhoff's law states that in equilibrium the emission and absorption are equal, and therefore

    α(λ) = ε(λ)    (2.5)

The specular spectral reflectivity relates the reflected radiation of a surface to the incident one,

    ρ(λ) = M_λ^r(λ) / E_λ^inc(λ)    (2.6)

where
    M_λ^r(λ)   = The reflected radiation
    E_λ^inc(λ) = The incoming irradiance

The spectral transmissivity relates the transmitted radiation to the incident radiation, is defined in the same way as the reflectivity, and is given by

    σ(λ) = M_λ^t(λ) / E_λ^inc(λ)    (2.7)

Kirchhoff's law also states that the sum of the absorbed, reflected and transmitted power is equal to the incident power,

    α(λ) + ρ(λ) + σ(λ) = 1    (2.8)


If α(λ), ρ(λ) and σ(λ) are independent of wavelength then the emitter is called a graybody. If α = 1 the emitter is a perfect blackbody.

2.1.2 Lambertian radiator

The radiance unit [W/(m^2·sr·µm)] is used to describe the exitance from objects that are resolved by the imaging sensor. If an object has a radiance that is independent of the angle it is called a Lambertian radiator, for which the relation between the exitance and the radiance is given by

    L(λ) = ε_s(λ) · M_λ(λ) / π   [W/(m^2·sr·µm)]    (2.9)

where
    ε_s(λ)  = Spectral emissivity
    M_λ(λ)  = The spectral exitance given by eq. 2.1 and 2.3

The power detected by a detector may be due to radiance emitted by the object or radiance reflected by the object. At temperatures around 25 °C the radiance detected below 3 µm is mainly due to reflected radiation, and the radiance above 3 µm is mainly due to emitted radiation, see figure 2.1.

The digital number (DN) given by a sensor is the radiance seen by the detector, converted to an electric signal and quantized to a digital number.
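Combining eq. 2.1 with eq. 2.9 gives the radiance of a resolved graybody surface. A minimal Python sketch, illustrative only (the thesis code is in Matlab, and the names here are hypothetical):

```python
import math

C1 = 3.7418e8   # [W·µm^4/m^2], as in eq. 2.1
C2 = 1.43388e4  # [µm·K]

def lambertian_radiance(wavelength_um, temperature_k, emissivity=1.0):
    """Spectral radiance of a Lambertian radiator (eq. 2.9), W/(m^2·sr·µm)."""
    # Blackbody spectral exitance (eq. 2.1), then scale by emissivity / pi.
    m = C1 / (wavelength_um**5
              * (math.exp(C2 / (wavelength_um * temperature_k)) - 1.0))
    return emissivity * m / math.pi

# A graybody (emissivity 0.9) at 25 °C = 298.15 K, observed at 4 µm
# (i.e. in the MWIR band, where emitted radiation dominates):
L = lambertian_radiance(4.0, 298.15, 0.9)  # roughly 0.6 W/(m^2·sr·µm)
```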

2.2 Infrared imaging

An electro-optical imaging system consists of at least five fundamental parts: optics, optic band pass filter, detector, signal processing and monitor. The term "sensor" here denotes the optics, optic band pass filter, detector and signal processing.

2.2.1 Detectors

The detector is the heart of the sensor. There are two main types of detectors:

• Thermal detectors
• Photon detectors

In thermal detectors the temperature of a response element, sensitive to the action of infrared radiation, is raised. This in turn changes a temperature dependent parameter like the electrical conductivity. These detectors can operate at room temperature but the sensitivity is lower and the response time longer than for photon detectors.

High performance IR detectors are cryogenically cooled photon detectors, which means that the working temperature is below 80 K; "cryogenically" here means a temperature down to that of liquid nitrogen, ≈ 77 K. Photon detectors are based on the interaction between photons and electrons in semiconductor


materials. At longer wavelengths in the infrared region they require cooling to get rid of excessive noise. The sensitivity is high and the response time is short.

Systems based on uncooled thermal detectors have not been studied in this thesis, so in the following only photon detectors are discussed.

Detector material

Common materials in photon detectors are HgCdTe (known as MCT), InSb and QWIP (Quantum Well Infrared Photodetector) structures. Advantages of MCT are a high sensitivity and quantum efficiency, a flexible wavelength tunability (1.5−26 µm) and the potential to operate above cryogenic temperatures. A drawback of MCT is that it is difficult to manufacture, leading to high costs and poor uniformity of the arrays. InSb is an equally sensitive alternative to MCT and is easier to reproduce. Its tunability is however poorer (2−5.5 µm). QWIPs employ silicon and GaAs manufacturing procedures; they are much more producible than MCT, especially for long wavelength and large format arrays. QWIPs also have lower sensitivity to nonuniformities. Drawbacks of QWIP are a lower operating temperature and a lower responsivity than MCT, ref [8].

Sensitivity

NETD, the noise equivalent temperature difference [mK], is a common measure of the sensitivity. It is the smallest temperature difference that an IR camera can detect, and is defined as the temperature difference between a target and a uniform background that produces a signal to noise ratio equal to one. It is calculated as

    NETD = σ_N / (∆S/∆T)   [mK]    (2.10)

where
    σ_N = standard deviation of the noise
    ∆S  = signal difference
    ∆T  = temperature difference between target and background

NETD for systems based on cryogenically cooled photon detectors and uncooled thermal detectors is typically 20 mK and 100 mK respectively.
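Eq. 2.10 in code form; a minimal Python sketch, assuming noise and signal are measured in DN and the quantities are hypothetical example values:

```python
def netd_mk(noise_std_dn, signal_diff_dn, temp_diff_k):
    """NETD (eq. 2.10): the temperature difference giving S/N = 1, in mK."""
    responsivity = signal_diff_dn / temp_diff_k  # DN per kelvin
    return 1000.0 * noise_std_dn / responsivity

# E.g. a noise std of 4 DN and a 1 K target/background difference
# producing a 200 DN signal difference:
print(netd_mk(4.0, 200.0, 1.0))  # 20.0 mK, typical for a cooled photon detector
```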

2.2.2 From detector to an imaging system

Basically there are two types of infrared imaging systems: scanning systems and staring systems. A general drawback of scanning systems is the need for an expensive scanning arrangement. The first IR cameras consisted of one detector element that scanned the scene in two directions, figure 2.2a. The demand on the scanning velocity was high in these systems, and with many pixels the frame rate was low. With the development of linear arrays of detector elements, figure 2.2b, which performed parallel scanning of the scene in only one direction, the frame rate was increased. Improved manufacturing led to the development of focal plane arrays, FPA, where the number of detector elements is equal to the number of pixels, figure 2.2c. Due to the elimination of the scanning system these sensors are called "staring". The technique allows very high frame rates (≥ 1000 Hz) and longer integration times, leading to a higher sensitivity. The longer integration time has also made the use of uncooled thermal detectors possible.

Figure 2.2: Types of imaging systems: (a) single detector, scanning; (b) linear array, scanning; (c) staring focal plane array.

The array sizes for MCT, InSb and QWIP are comparable and typically 640x512. A general trend in infrared imaging is that the arrays are growing. The largest MCT arrays available today consist of 4000x4000 detector elements.

Dynamic range and contrast

The dynamic range is defined as the relation between the maximum measurable signal and the minimum measurable signal. For digital systems it is usually defined as the relation between the largest digital number and the least significant bit (= 1). The (radiometric) contrast is defined as the difference between the maximum and minimum level, radiance in the scene or DN value in a single frame. The output is displayed with a visual contrast.

The dynamic range is a constant value for the camera; in this thesis 2^14 is used. The contrast changes depending on the integration time and the radiation from the scene. The contrast may vary from a DN value of a few hundred to the full dynamic range. If the contrast is small, objects may be hard to detect visually, i.e. small visual contrast. However, increasing the visual contrast to enhance the visual result also increases the nonuniformity between the pixels in the detector.
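The distinction can be sketched in a few lines of Python (illustrative only; the frame here is synthetic, sized like the Multimir array):

```python
import numpy as np

BITS = 14                 # 14-bit digitization, as used in this thesis
DYNAMIC_RANGE = 2**BITS   # ratio of the largest DN to the least significant bit

def radiometric_contrast(frame):
    """Contrast of a single frame: maximum DN minus minimum DN."""
    return int(frame.max()) - int(frame.min())

# A low-contrast frame uses only a small part of the dynamic range:
rng = np.random.default_rng(0)
frame = rng.integers(5000, 5300, size=(288, 384))  # synthetic DN values
print(DYNAMIC_RANGE)                                # 16384
print(radiometric_contrast(frame) < DYNAMIC_RANGE)  # True
```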


Figure 2.3: Illustration of dynamic range (from DN_min = 0 to DN_max = 2^14 − 1) and the (radiometric) image contrast (from the minimum to the maximum DN in the image).

2.2.3 Responsivity

Responsivity, R, is defined as the system response to a known controlled signal input. The responsivity unit depends on the unit of the system's response:

    R = U/φ    [V/W]
    R = I/φ    [A/W]
    R = DN/φ   [DN/W] or [DN/radiance]

If the sensor registers homogeneous blackbody surfaces the incident radiation is approximately the same on all pixels; the mean value of the FPA is therefore proportional to the incident radiance level.

2.2.4 Noise

A raw image from an infrared camera contains quite some distortion, some of which can be corrected and some of which cannot. The categories of noise commonly used in the literature, and the types of noise discussed in this thesis, are given here.

Temporal and spatial noise

Temporal noise is defined as the standard deviation of the DNs of one pixel on the IR sensor through time, with a constant incoming radiation. Temporal noise is an uncorrectable noise, but if the noise is considered Gaussian the images can be averaged, and the S/N ratio is then raised √n times, where n is the number of frames averaged.

Spatial noise is defined as the standard deviation of the DNs in one image. Spatial noise can to a large extent be reduced by the NUC.

Usually the standard deviation is used to calculate the two types of noise, but they can also be described by some other statistical measure; in this thesis the standard deviation is used.
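The two definitions, and the √n gain from frame averaging, can be illustrated with a synthetic sequence (a Python/NumPy sketch, not thesis code; names are hypothetical):

```python
import numpy as np

def temporal_noise(stack):
    """Std over time for each pixel: (frames, rows, cols) -> (rows, cols)."""
    return stack.std(axis=0)

def spatial_noise(frame):
    """Std of the DNs within one image."""
    return float(frame.std())

# Averaging n frames of Gaussian temporal noise shrinks its std ~ 1/sqrt(n),
# i.e. the S/N ratio is raised sqrt(n) times.
rng = np.random.default_rng(1)
stack = 5000.0 + rng.normal(0.0, 10.0, size=(100, 64, 64))  # constant scene
single = temporal_noise(stack).mean()  # close to the injected 10 DN
averaged = stack.mean(axis=0)          # one frame averaged over n = 100
print(spatial_noise(averaged) < single / 2)  # True: noise drops roughly 10x
```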

Figure 2.4: Temporal (a) and spatial (b) noise in the FPA.

Background noise

Background noise is one fundamental temporal noise source. It is due to the statistical fluctuations of the radiation inside the camera. This noise is independent of the performance of the detector and is not correctable.

Nonuniformity and fixed pattern noise

A drawback of focal plane arrays is that the responsivities of the detectors as a function of the incident radiation, especially for MCT, are nonuniform. This leads to an extra noise source, spatial noise or pixel noise, often denoted fixed pattern noise (FPN). Figure 2.5 shows the response curves for nine randomly selected Multimir pixels.

Due to the manufacturing process each pixel has a unique response curve; it has its own gain and offset that differ from the other pixels in the FPA. The differences in linearity, i.e. the gain and offset levels, may be significant between different detectors. A NUC is therefore needed, where a unique correction function corrects each pixel's value. The nonuniformity will be further discussed in chapter 4.

The most common fixed pattern noise sources include:

Fabrication errors  Inaccuracies in the fabrication process give rise to variations in geometry and substrate doping quality of the detector elements.

Cooling system  Small deviations in the regulated temperature are hard to avoid but may have a large impact on the detector responsivity.

Electronics  For detector arrays, variations in the read-out electronics are a common source of fixed pattern noise.

Optics  The sensor optics may decrease the signal intensity at the edges of the images and create different kinds of circular image artefacts.

Response curve over the full dynamic range

A detector's response curve is not linear over the full dynamic range; instead it tends to have an S-shaped form, figure 2.6. The minimum level depends on the background noise and the surrounding temperature. At a high incident radiance the camera becomes saturated, which decreases the responsivity. A response curve therefore has a theoretical shape like the one given in figure 2.6.

Figure 2.5: Response curves showing the pixel deviation (pixel DN − mean DN) for nine randomly selected pixels in the Multimir camera. For perfectly uniform detectors a constant difference equal to zero would have been obtained.

Figure 2.6: A theoretical S-shaped response curve for an infrared sensor.

Drifting

When a camera is calibrated there is still some remaining noise. The responsivity of each pixel is not constant but changes through time. This is due to several factors, but one main reason is the fact that the infrared cameras operate at a temperature of 80 K and are therefore very sensitive to temperature changes. The median value for any given radiation might be constant, but the unique responsivity of each pixel in the sensor is changing. Figure 2.7 shows six different corrections for one pixel, over a time period of seven hours.

Figure 2.7: The change of responsivity for one pixel at six timepoints during 7 hours, using the Multimir camera, plotted as the difference to the mean value of the FPA. The curves are created using a second degree polynomial approximation.


Chapter 3

Equipment used in thesis

This chapter gives an overview of the equipment that is used in this thesis work. To calibrate the infrared cameras, uniform radiance sources have been used.

3.1 Cameras

This thesis is mainly based on two infrared cameras at FOI, Linköping, denoted the Multimir and the Emerald camera.

3.1.1 Multimir

Multimir (MULti-spectral Midwave IR) is a multi-spectral infrared sensor. Multimir is based on a spinning filter wheel, figure 3.1, with four optical band pass filters. The rotation frequency of the wheel is 25 Hz, which gives a full frame rate of 4·25 = 100 Hz. The camera can also be operated with the filter wheel in a non-rotating mode, where only one of the spectral bands is used.

The cut-on and cut-off wavelengths of the four optical filters are shown in table 3.1.

    Band 1   1.55−1.75 µm
    Band 2   2.05−2.45 µm
    Band 3   3.45−4.15 µm
    Band 4   4.55−5.2 µm

    Table 3.1: Transmission bands for the optical filters in Multimir

The transmission curves for the four filters, given by the manufacturer, are shown in figure 3.1b.

The detector material is based on MCT, see section 2.2.1, and the operating temperature is below 85 K.


Figure 3.1: The Multimir camera: (a) the Multimir camera, note the filter wheel; (b) transmission curves of the four filters; (c) image from the Multimir camera; (d) in the rotating mode the images are presented as four sub-images (Band 1, Band 2, Band 3, Band 4).

3.1.2 Emerald

Emerald is a multi-band sensor. Multi-band denotes a sensor that registers in several spectral bands, but not in real time.

Emerald is equipped with a filter wheel holding four different filters; at present only three of the positions are used. The transmission curves for the three filters are shown in figure 3.2b.

The detector is based on InSb and the operating temperature is below 80 K.

    Band 1   −4 µm
    Band 2   3.5−5 µm
    Band 3   4.6−5.5 µm
    Band 4   Not in use

    Table 3.2: Transmission bands for the optical filters in Emerald


Figure 3.2: The Emerald camera: (a) the Emerald camera; (b) transmission curves for the three filters.

3.1.3 Information from files

When saving the registered scenes, the data is saved to different types of files depending on the camera. The file format from each of the two cameras contains information that is useful for the nonuniform correction. A comparison of the information in the two file types is shown in table 3.3.

3.2

Radiation sources

When using the reference based correction some radiation sources with uniform radiation, are needed as a reference. At FOI, there are two types of radiation references that are used. For short wavelength radiation, < 3µm, a small spotlight is used, see figure 3.3a. This radiation is scattered through three pieces of opal glass, to create a homogeneous radiance. Using opal glass a near Lambertian source can be achieved. For thermal infrared bands > 3µm a peltier-radiator with a black coating is used, see figure 3.3b. This black coating is to create an emissivity, , close to one. The range of temperature is from −10◦C to 60◦C.

When calibrating, the voltage (and thus the radiation) of the spotlight is kept constant; instead, the integration time of the detector is changed to create unique radiation values. The temperature of the Peltier element is measured by a handheld IR thermometer, which gives the temperature with an accuracy of one decimal. The emitted radiance from the radiator at different temperatures is calculated by the Planck distribution law, eq. 2.1.
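As an illustration (not code from the thesis), the radiance emitted by the Peltier radiator in a given band can be computed from Planck's distribution law; the constants are standard physical values and the band limits below are those quoted for band 3 of the Emerald:

```python
import numpy as np

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance [W / (m^2 sr m)] from Planck's law."""
    lam = np.asarray(wavelength_m, dtype=float)
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp_k))

def band_radiance(lam_lo, lam_hi, temp_k, n=500):
    """Radiance integrated over a spectral band (trapezoidal rule)."""
    lam = np.linspace(lam_lo, lam_hi, n)
    rad = planck_radiance(lam, temp_k)
    return float(np.sum(0.5 * (rad[1:] + rad[:-1]) * np.diff(lam)))

# Band 3 (4.6-5.5 um) at the two extremes of the Peltier temperature range
radiance_cold = band_radiance(4.6e-6, 5.5e-6, 263.15)  # -10 C
radiance_hot = band_radiance(4.6e-6, 5.5e-6, 333.15)   #  60 C
```

With an emissivity ε close to one, the source radiance is ε times the blackbody value, which is why the black coating matters.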

Pre-calibration

When using the cameras a pre-calibration is usually performed, which is a one- or two-point calibration using the Peltier source. The two-point calibration is a


                     Multimir                     Emerald
Detector             MCT (HgCdTe)                 InSb
Optics               100 mm                       50 mm
Dynamic range        2^14                         2^14
Array size           288x384                      512x640
Frame size           576x768 (rotating wheel)
Frame size           288x384 (non-rotating)       512x640
Subframes            16, 32, ..., 288 rows        16, 32, ... rows
                     32, 64, ..., 384 columns     16, 32, ... columns
Integration time     50 µs - 2.6 ms               3 µs - 10 ms,
                     in 256 steps                 by steps of 1 µs
Frames per second    100 Hz (rotating wheel)      100 Hz (full frame)
                     < 1000 Hz (subframes)        < 1000 Hz (subframes)

Information from file header   Multimir   Emerald
Number of frames                  X          X
Size of image                     X          X
Used filter                                  X
Time of registration              X          X
Integration time                             X
Frames per second                            X
Used lens                                    X
Temperature of camera                        X

Table 3.3: Information contained in the scene files from the infrared cameras

(a) Spotlight and three pieces of opal glass

(b) Peltier radiator



linear gain and offset calibration. Since the pre-calibration is a linear function, it does not corrupt any of the information in the image.

Some of the reasons for this procedure are:

Real-time correction. The nonuniform responsivity of the sensors creates a highly disturbed image. The pre-calibration enables a real-time correction that enhances the visual output.

Scene in focus. Due to the low contrast and nonuniform responsivity, it might be hard to register objects in the image. The enhanced visual output from the real-time correction simplifies the work of putting the camera in focus.

Verifying the contrast. When registering a scene, it is important that the DNs are within the valid region. In the pre-calibration the two used and known radiation sources should at least correspond to the warmest and coldest objects in the scene. This verifies whether the DNs from the scene will be valid or not.

Function control of camera. A system check is performed at the pre-calibration to verify that the mean DNs are a function of the incoming radiation.

When referring to raw data or uncorrected images, pre-calibration is assumed to have been performed.
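The linear gain and offset pre-calibration can be sketched as follows; array sizes, radiance levels and the simulated responsivities are made-up values, not the thesis's data:

```python
import numpy as np

# Two-point (gain and offset) calibration: two registrations of a uniform
# source at known radiance levels give, per pixel, a linear map from raw
# DN back to radiance.
rng = np.random.default_rng(0)
shape = (8, 8)
true_gain = rng.uniform(0.8, 1.2, shape)       # per-pixel responsivity
true_offset = rng.uniform(-50.0, 50.0, shape)  # per-pixel offset

def register(radiance):
    """Simulated raw DN for a uniform source (no temporal noise)."""
    return true_gain * radiance + true_offset

L_cold, L_hot = 100.0, 400.0
y_cold, y_hot = register(L_cold), register(L_hot)

# Per-pixel correction coefficients from the two reference frames
gain = (L_hot - L_cold) / (y_hot - y_cold)
offset = L_cold - gain * y_cold

def correct(raw):
    return gain * raw + offset

corrected = correct(register(250.0))  # uniform scene between the references
```

Because the simulated pixels are exactly linear, the corrected frame is perfectly uniform; with a real S-shaped response this only holds within the linear region.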


Chapter 4

Nonuniform correction of IR-images

Figure 4.1 shows two uncorrected images (raw data), registered by the Multimir and the Emerald cameras.

The images look noisy, especially the Multimir image, which is due to the nonuniformity of the FPA. All detector elements in an array have a unique responsivity, which appears as an extra noise source. There are two kinds of pixel noise: random and non-random. Non-random noise is created by a fixed source and may produce image artefacts such as striping, see figure 4.1b. Pixels that are very different from the other pixels appear as "salt and pepper". The salt and pepper may be due both to dead pixels, see chapter 5, and to pixels with a very different responsivity. It should be noted that two images, registered by the same camera, may appear quite different in a visual evaluation concerning the pixel noise, depending on the following:

• the contrast in the registered scenes may be different
• the visual contrast operations may not be the same

• the physical sizes of the images, on a monitor or on a paper, may not be the same

Small differences in the radiance levels in a scene will result in a low contrast. Increasing the visual contrast will enhance the visibility of the nonuniformity and cause pixels to appear as salt and pepper.

The goal of a nonuniform correction (NUC) is that all pixels should give the same digital signal, DN, for a certain incident radiance level. A correction function is needed for each pixel and each spectral band, since the incident radiance level (through the transmission of filters and optics) and the detector sensitivity are wavelength dependent.

NUC methods are basically divided into scene based and reference based correction methods.


(a) Uncorrected image from the Multimir camera

(b) Uncorrected image from the Emerald camera band 3

Figure 4.1: Uncorrected infrared images; upper: Multimir, bands 1-4, note the salt and pepper in the image; lower: Emerald, band 3, note the striping in the image; the pixel noise is not randomly distributed.



4.1 Scene based correction

As the name implies, in scene based correction the pixels are calibrated by the scene itself. A correction function is created based on the DN given in earlier frames. Scene based correction does not use any homogeneous reference radiance sources, making it time- and cost-efficient, which explains why much of the research performed on NUC methods is in this field. However, to obtain a good correction, image motion is needed in the scene. Another drawback is that the correction functions created for the pixels are only relative to each other; the pixels are not calibrated to any radiometric value.

More advanced scene based correction methods also involve motion estimation. Since the main focus of this thesis work is on the reference based calibration, only a brief overview of scene based calibration is given. It is based on the references [3], [11] and [14].

4.1.1 Statistical methods

By using statistics of previous images in a sequence, parameters to perform NUC can be created. Depending on the type of statistics used, several different methods are available for creating the NUC parameters.

Temporal highpass

The basis for the method is that the DN vary between subsequent images in an image sequence, while the fixed pattern noise is approximately constant. Thus the scene information lies in the high (temporal) frequencies, while the fixed pattern noise lies in the low frequencies. Using a high pass filter, usually a recursive IIR filter to save memory, the temporal average value is subtracted from the DN, creating an offset correction. Without motion, the high pass filter operates on DN from the same scene point, which are constant apart from some temporal noise; this creates a correction function that only passes the temporal noise.
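The recursive highpass scheme can be sketched as follows; the frame size, noise levels and the IIR constant alpha are made-up values:

```python
import numpy as np

# Temporal highpass offset correction: a recursive IIR lowpass tracks the
# per-pixel temporal mean; subtracting it removes the (approximately
# constant) fixed pattern offset.
rng = np.random.default_rng(1)
shape = (16, 16)
fpn_offset = rng.normal(0.0, 30.0, shape)  # fixed pattern (offset) noise

alpha = 0.05                # IIR update rate
mean_est = np.zeros(shape)  # recursive estimate of the temporal mean

for t in range(500):
    scene = 1000.0 + 50.0 * np.sin(0.1 * t)  # time-varying scene content
    frame = scene + fpn_offset               # raw DN
    mean_est = (1 - alpha) * mean_est + alpha * frame  # recursive lowpass
    corrected = frame - mean_est                       # highpass output
```

After convergence the fixed spatial pattern has been absorbed into the mean estimate, so the corrected frame is spatially uniform; only the temporal scene variation (and, in a real camera, the temporal noise) passes the filter.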

Constant statistics

The constant statistics method estimates both the offset and the gain for each pixel. The method assumes that the temporal mean and variance of the incoming radiation are the same for all pixels, which requires that all possible scene radiance levels are observed by all pixels in the image sequence. To achieve this, image motion must exist.

Let a linear model constitute the correction function

x = g · y + o (4.1)

where y is the incoming DN, g is the gain and o is the offset value of the correction function, and x is the corrected DN. The temporal mean value, m_x, and standard deviation, s_x, are

m_x = E[x] = E[g · y + o] = g · E[y] + o = g · m_y + o    (4.2)

s_x = √(E[(x − m_x)²]) = √(E[g² · (y − m_y)²]) = g · s_y    (4.3)

In the constant statistics method it is assumed that the temporal statistics of x are the same for all pixels; normalising them to m_x = 0 and s_x = 1 and solving equations 4.2 and 4.3 for the correction parameters gives

g = 1 / s_y    (4.4)

o = −m_y / s_y    (4.5)

A gain and offset correction has thus been performed.
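A minimal sketch of the constant statistics idea (all values made up); instead of real image motion, the same radiance sequence is drawn for every pixel, which enforces the method's assumption directly:

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, shape = 2000, (8, 8)
gain = rng.uniform(0.8, 1.2, shape)       # per-pixel responsivity
offset = rng.uniform(-20.0, 20.0, shape)  # per-pixel offset

radiance = rng.uniform(100.0, 500.0, n_frames)    # common scene sequence
frames = gain * radiance[:, None, None] + offset  # raw DN

m_y = frames.mean(axis=0)  # per-pixel temporal mean
s_y = frames.std(axis=0)   # per-pixel temporal standard deviation

# Normalising each pixel to zero mean / unit variance cancels the per-pixel
# gain and offset; only the common scene signal remains.
corrected = (frames - m_y) / s_y
```

Note that the result is only relative, as stated above: every pixel ends up on the same arbitrary zero-mean scale, not on a radiometric one.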

Kalman filtering

A key limitation of all scene-based NUC techniques published to date is that they do not exploit any temporal statistics of the drift in the nonuniformity. As a result, each time a drift occurs, a full-scale NUC is performed, a process that may be greatly simplified and improved if statistical knowledge of the nature of the drift is exploited, especially in cases where the drift is small. In the articles [3] and [11], Kalman filtering and an inverse covariance method are used to simplify the process.

4.1.2 Registration-based

In registration based methods, motion estimation is used in an image sequence, where each image in the sequence can be spatially related to previous images. The idea behind the registration based method is that all pixels that represent the same position in the scene throughout the image sequence should give the same DN value.

The main task in this method is the motion estimation which has to be able to estimate sub-pixel motion to create the correct coefficients for the NUC.

4.2 Reference based correction

Reference based correction is based on registrations of homogeneous radiance sources at one (or more) levels. The method allows the radiance and the temperature of the objects in the scene to be calculated, if the radiance sources are



well defined. Contrary to the scene based correction, the reference based correction needs no image motion, motion estimation or statistics for a good result. The method is also more efficient concerning the need for computational power. A drawback is the time elapsed between the registrations of the radiance sources and the scene images: during this time the responsivity may change due to drift, and the correction function may therefore be valid only during a limited time period. Because there is no way to update the coefficients, as in the scene based method, the time difference between the registrations of the radiance sources and the scene should be as small as possible. In practice, especially in field trials, there is a lower limit for how frequently the calibrations can be performed. The need for registrations of the radiance sources also depends on factors like the stability of the camera, the temperature stability of the air surrounding the camera and the quality of the calibration files. This is further discussed in section 6.3.

4.2.1 Correction function

The pixels' responsivities need to be described by a correction function, which is used in the NUC. The correction function may simply be a constant (offset correction) or a more complex function; compare the two equations below.

x = y + o

x = g · ( φ_0 − (1/k_φ) · ln( k_x / (y − x_0) − 1 ) ) − o

To find out which correction function should be used, the valid contrast has to be known. If the sensor's whole dynamic range is to be used, an S-shaped correction is optimal. An S-shaped correction function is however complex and there is no simple equation that can be used, making it computationally demanding. In addition, five (or more) reference points are needed to determine an S-shape. If the contrast is restricted to a linear range the equation is simple; it may be as simple as an offset correction of each pixel. The demand for computational power is then reduced and the correction function is easier to implement in hardware.

Polynomial approximation

Today the polynomial approximation is the most common correction function. It approximates the pixels' responsivities well, is easy to implement and is computationally efficient, enabling the NUC to be performed in real time.

In the section explaining the scene based correction, the only correction function used was the polynomial approximation. The temporal highpass used only an offset correction, which is a polynomial correction of degree zero, while most of the other correction methods use a first degree polynomial approximation. The degree of the reference based correction depends on the number of references, but usually a second degree approximation is quite sufficient. The polynomial approximation was first presented in [13] and later improved by [10].


Assume an output value y_ij from the FPA, where the subscript j = 1 ... m refers to the m individual detector elements on the FPA and the subscript i = 1 ... n refers to the n individual reference sources. T_i is the temperature of reference source i. The mean signal output value, ⟨y_i⟩, of the FPA is defined by

⟨y_i⟩ = (1/m) Σ_{j=1}^{m} y_ij    (4.6)

Since m is a large number (the sensors use more than 1000 pixels per frame), a good approximation of the mean response characteristics is obtained by averaging. The overall detector response characteristic R(T_i) is sampled at the n irradiation temperatures,

R(T_i) = ⟨y_i(T_i)⟩    (4.7)

The nonuniformity and the temporal noise are described by the signal output deviation, i.e. the amplitude deviation Δy_ij of the individual data points,

Δy_ij = y_ij − ⟨y_i⟩    (4.8)

A standard least squares fit of a polynomial, of the chosen correction order, to the amplitude deviation of the individual pixels gives the fitted amplitude deviation values Δy^lsq_ij,

Δy^lsq_ij = c_0j + c_1j·⟨y_i⟩ + c_2j·⟨y_i⟩² + · · ·    (4.9)

For an offset correction only one parameter, a constant c_0j, is determined for each pixel; for a linear correction two parameters, a constant c_0j and a gain c_1j, are determined. The degree of the correction may be arbitrarily chosen.

This type of calibration is sensitive to the temporal noise, i.e. the variations of the DN for constant radiation. To reduce this sensitivity, the calibration points are averaged over a number of frames from the same radiation source.

The least squares curve fit is an adequate method to approximate the individual pixel characteristics because it minimizes the error and takes the errors due to the temporal noise into account. To correct the nonuniformity, the values given by the curve fit are subtracted from the amplitude deviation,

Δy^c_ij = Δy_ij − Δy^lsq_ij    (4.10)

The amplitude values x^c_ij after correction are obtained by adding the unified response function again,

x^c_ij = ⟨y_i⟩ + Δy^c_ij    (4.11)

The corrected pixel amplitude is determined by eliminating the linearized irradiation parameter ⟨y⟩ from equations 4.8 to 4.11 to obtain the relations for the



offset, the linear and the quadratic approximation:

x^c_j = y_j − c_0j    (offset correction)    (4.12)

x^c_j = (y_j − c_0j) / (1 + c_1j)    (linear correction)    (4.13)

x^c_j = −(1 + c_1j)/(2·c_2j) + √( (1 + c_1j)²/(4·c_2j²) + (y_j − c_0j)/c_2j )    (quadratic correction)    (4.14)

For corrections higher than second order the mathematical relations are more complex.

This procedure is, due to its complexity, time consuming. R. Wang improved it [10], which eliminated the square root in the calculation of the corrected DN. While Schultz fitted the data using the coefficients and the mean value, ⟨y⟩, Wang fits the data based on the actual DN, y_j.

Wang's approach starts with calculating the mean DN, ⟨y_i⟩. The deviation Δy_ij is calculated as

Δy_ij = y_ij − ⟨y_i⟩    (4.15)

By a least squares polynomial approximation, Δy_ij can be expressed as

Δy_ij = y_ij − ⟨y_i⟩ ≈ Σ_{k=0}^{l} c_kj · y_ij^k,    j = 1, 2, ..., m    (4.16)

where l is the degree of the polynomial approximation, which has to be < n. Correcting the DN with the approach suggested by Wang is then performed by

x^c_j = y_j − Σ_{k=0}^{l} c_kj · y_j^k,    j = 1, 2, ..., m    (4.17)

The obvious advantages of using Wang's approach instead of the Schultz approach are:

1. It has no division operations or root extractions.

2. With the Schultz approach, polynomial orders higher than third cannot give an analytical correction function.

3. For polynomial orders higher than first, the Schultz approach gives multiple roots, where the correct root has to be selected.

The correction algorithm implemented in Matlab and used in the main correction uses the polynomial approximation. Results of this correction function can be seen in figure 4.2 and figure 4.3.
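A sketch of the reference based polynomial NUC in Wang's formulation (eqs. 4.15-4.17), using first degree polynomials fitted per pixel over three simulated reference levels; all numbers here are made up, not the thesis's Matlab implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_refs, n_pix = 3, 100
gain = rng.uniform(0.9, 1.1, n_pix)     # simulated per-pixel responsivity
offset = rng.uniform(-30.0, 30.0, n_pix)

levels = np.array([1000.0, 3000.0, 6000.0])            # reference radiances
y = gain[None, :] * levels[:, None] + offset[None, :]  # (n_refs, n_pix) DN
y_mean = y.mean(axis=1)                                # <y_i>, eq. 4.6
delta = y - y_mean[:, None]                            # eq. 4.15

degree = 1                                             # l, must be < n
coeffs = np.empty((degree + 1, n_pix))
for j in range(n_pix):
    # Fit delta_ij as a polynomial in y_ij (Wang), not in <y_i> (Schultz)
    coeffs[:, j] = np.polyfit(y[:, j], delta[:, j], degree)

def correct(raw):
    """Eq. 4.17: x_j = y_j - sum_k c_kj * y_j^k."""
    fitted = np.array([np.polyval(coeffs[:, j], raw[j]) for j in range(n_pix)])
    return raw - fitted

scene = gain * 4000.0 + offset   # uniform scene at a new radiance level
corrected = correct(scene)
```

No division or root extraction appears in `correct`, which is the point of Wang's formulation.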

Analytical approximation

While the polynomial approximation gives a good result when the DN are within the linear region of the S-shaped response curve (see section 2.2.4), it does not give an acceptable correction for the response curve when using the full dynamic


(a) Uncorrected image, Multimir

(b) Same image through a second order polynomial nonuniformity correction function, Multimir

Figure 4.2: A comparison between the raw image from the Multimir camera and the polynomial nonuniformity corrected image. Note the differences in band 3.



(a) Uncorrected image, Emerald

(b) Same image through a second order polynomial nonuniformity correction function, Emerald

Figure 4.3: A comparison between the raw image from the Emerald camera and the polynomial nonuniformity corrected image. Note the reduction of stripes and optical artefacts in the image.


range of the sensor, which for both the Emerald and the Multimir is DN ∈ [0, 2^14 − 1]. Finding an approximation for the correction that gives the proper S-shape may be hard. In the article "A feasible approach for nonuniformity correction in IRFPA with nonlinear response" [15], an analytical approximation of the curve is presented. The assumptions are

1. The response function is a monotonically increasing function of the incident photon flux. This means that there always exists an inverse of the response function.

2. The typical form of the sensor's response curve is S-shaped.

The analytical S-shaped curve that they present is

y = h(x) = x_0 + k_x / (1 + e^(−k_φ·(x − φ_0)))    (4.18)

where h(x) is the response function with the incoming photon flux x as the variable, x_0 and φ_0 are the shift coefficients for the output signal and the incident flux respectively, and k_x and k_φ are the scale coefficients.

To correct a DN, the inverse of h(x) needs to be taken. The correction function for pixel j is

x^c_j = G_j(y_j) ≈ g · ( φ_0,j − (1/k_φ,j) · ln( k_x,j / (y_j − x_0,j) − 1 ) ) − o    (4.19)

where g and o are assigned the values of the spatial averages of the gain and offset coefficients over the sensor, respectively.
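The S-shaped response (eq. 4.18) and its inverse (the core of eq. 4.19, here without the per-pixel gain/offset scaling) can be sketched as follows; the parameter values are arbitrary assumptions chosen only to span a 14-bit DN range:

```python
import numpy as np

x0, k_x = 500.0, 15000.0       # output shift and scale (assumed values)
phi0, k_phi = 2000.0, 1.5e-3   # flux shift and scale (assumed values)

def response(flux):
    """y = h(x): DN as a function of incident photon flux, eq. 4.18."""
    return x0 + k_x / (1.0 + np.exp(-k_phi * (flux - phi0)))

def inverse(dn):
    """x = h^{-1}(y): recover flux from DN (valid for x0 < dn < x0 + k_x)."""
    return phi0 - np.log(k_x / (dn - x0) - 1.0) / k_phi

flux = np.linspace(500.0, 3500.0, 50)
recovered = inverse(response(flux))
```

The logarithm and division make this per-pixel correction clearly more expensive than the polynomial forms above, which is the trade-off noted in section 4.2.1.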

4.3 Quality measurement

As has been mentioned in this chapter, the NUC is never perfect; the errors are due to the temporal noise and to nonlinearities which are not accurately approximated by the regression analysis. The minimum resolved signal is defined by the temporal noise; the fixed pattern noise further degrades the signal quality.

One way to estimate the quality of the correction is to relate the magnitude of the residual fluctuations in the array after correction to the temporal noise pattern [13]. The goodness of the curve fit for an individual pixel is described by the χ² value of the deviation, normalized to the mean temporal noise TN averaged over the FPA, for the various radiation levels,

χ²_j = Σ_{i=1}^{n} (x^c_ij − ⟨y_i⟩)² / TN    (4.20)



The χ²-distribution of a perfectly uniform FPA having only temporal (gaussian) noise is well known in statistics.

Deviations of the χ²-data histogram from the ideal χ²-distribution are due to the residual spatial nonuniformities in the array. By subtracting the ideal χ²-distribution from the measured data for the array after correction, we obtain a single value to estimate the goodness of the correction,

c = √( Σ_{j=1}^{m} χ²_j / (m − 1) ) − 1 = √( Σ_{j=1}^{m} Σ_{i=1}^{n} (x^c_ij − ⟨y_i⟩)² / TN / (m − 1) ) − 1    (4.21)

The normalization is by m − 1 because one degree of freedom is consumed by the averaging over the whole FPA to obtain the unified photo response characteristics. For a perfect nonuniformity correction the goodness value, c, equals zero, i.e. only temporal noise remains in the array.

The magnitude of the temporal noise strongly affects the correctability value: a small temporal noise level increases the c-value, since the threshold for the fixed pattern noise is then also small.
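A sketch of the correctability computation on simulated data; the χ² normalisation below uses TN² (a plain χ² convention) rather than the thesis's exact scaling, so only relative comparisons between datasets are meaningful here:

```python
import numpy as np

rng = np.random.default_rng(4)
n_refs, n_frames, n_pix = 3, 50, 500
tn_sigma = 2.0                          # true temporal noise level
levels = np.array([1000.0, 2000.0, 3000.0])

def correctability(x):
    """x: (n_refs, n_frames, n_pix) corrected DN. Returns a c-style value."""
    y_ref_mean = x.mean(axis=(1, 2))           # <y_i> per reference level
    tn = x.std(axis=1, ddof=1).mean()          # mean temporal noise over FPA
    pix_mean = x.mean(axis=1)                  # per-pixel mean, (n_refs, n_pix)
    chi2 = ((pix_mean - y_ref_mean[:, None]) ** 2).sum(axis=0) / tn**2
    return float(np.sqrt(chi2.sum() / (n_pix - 1)) - 1.0)

noise = rng.normal(0.0, tn_sigma, (n_refs, n_frames, n_pix))
clean = levels[:, None, None] + noise            # perfect NUC: temporal noise only
fpn = rng.normal(0.0, 10.0, n_pix)               # residual fixed pattern
bad = clean + fpn[None, None, :]                 # imperfect NUC

c_clean = correctability(clean)
c_bad = correctability(bad)
```

The residual fixed pattern inflates the per-pixel deviations relative to the temporal noise, so the c-value of the badly corrected data is clearly larger.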


Chapter 5

Identifying and replacing dead pixels

This chapter studies pixels that cannot be corrected by a nonuniformity correction. These pixels are generally called dead pixels in this thesis. They give an erroneous response to the incident radiation and have to be identified and mapped. In many applications their digital numbers (DN) also have to be replaced, for example with the value of a nearest neighbour. The number of pixels defined as dead is usually less than 1% of the total number of pixels in the focal plane array (FPA). The two kinds of dead pixels are described below.

1. Pixels with no responsivity belong to the first kind. The DNs are constant and independent of the incident radiant power on the detector. Most often the pixel value is one of the two extremes: 0 or the maximum value the sensor can give, which is 2^14 − 1 for both the Multimir and the Emerald. Pixels of this kind are sometimes called truly dead pixels and are easy to identify.

2. Pixels with responsivities that are harder to describe belong to the second kind. A correction function, see chapter 4, does not work for them and the pixel noise will therefore remain after a nonuniform correction (NUC). Pixels of this kind are sometimes called weak pixels and are harder to identify.

The focus of this chapter is on dead pixels of the second kind. Some of them do respond to the incident radiation, but the response may be very weak or may differ between time points. For some of the pixels, the DN and the incident radiance level are only weakly correlated; one example is pixels whose DN vary even when the incident radiance level is constant. The response of the second kind of pixels is hard to describe by a correction function, and the pixel noise will therefore remain after a NUC, or may even be increased. The pixels may not be definitely dead but may be usable after some time, and it is therefore not possible to make a constant dead pixel map of these pixels. To some extent this depends on the scene characteristics: pixels with a weak response may be usable if the contrast of the scene is very high.


The number of pixels defined as dead also depends on the application. If the goal is only to present an image that looks visually perfect, the number may be raised until all salt and pepper is gone. In other applications the goal may be to keep the total number of dead pixels as low as possible since dead pixels mean lost information. The definition and identification of dead pixels therefore has to be performed for each application.

The result of a NUC depends on the drift of the camera and the time elapsed between the three registrations of the radiance sources and the scene registration. Defining a dead pixel is therefore more difficult in the reference based calibration, since the limit between a dead pixel and a pixel with a high drift might be difficult to draw. There are a number of methods for identifying the weak pixels; four of these are used in this thesis and are described in sections 5.1-5.4. However, the final decision whether a pixel should be defined as usable or not has to be made by the user.

5.1 Temporal noise

A pixel's temporal noise is the variability of its DN in a series of frames where the incident radiance on the detector is kept constant. The standard deviation is used as the measure of the temporal noise:

TN_j² = (1/(m − 1)) Σ_{i=1}^{m} (y_ji − ⟨y_j⟩)²,    i = 1 . . . m,  j = 1 . . . n    (5.1)

where

TN_j is the temporal noise for pixel j in the sensor,
m is the number of frames,
y_ji is the DN for pixel j in frame number i,
⟨y_j⟩ is the mean value for pixel j in the file.

Since the NUC is based on reference based calibration, where surfaces at three constant radiance levels are routinely registered, suitable data for calculating the temporal noise is already available. Since there are three calibration files, three temporal noise values per pixel may be calculated. Figures 5.1a and 5.1b show the variability of the DN for nine randomly selected pixels in the Multimir and Emerald cameras, respectively. The differences in temporal noise, or variability, between the pixels are quite small. The temporal noise for dead pixels, however, is significantly higher than for the other pixels. The temporal noise can therefore be used to define and identify dead pixels.

The following criterion has shown to be a good starting point for most image sequences:

dead pixel:  TN_j ≥ 3 · ⟨TN⟩

where ⟨TN⟩ is the mean temporal noise of all pixels in the FPA. Results with the Multimir and the Emerald are shown in figures 5.3 and 5.4.
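The temporal noise criterion can be sketched as follows; the sequence length matches the 100-frame files used in this work, while the noise levels and the injected weak pixel positions are made up:

```python
import numpy as np

rng = np.random.default_rng(5)
n_frames, shape = 100, (32, 32)
frames = 4500.0 + rng.normal(0.0, 2.0, (n_frames,) + shape)  # calibration file

# Inject two "weak" pixels with strongly fluctuating response
frames[:, 5, 7] += rng.normal(0.0, 40.0, n_frames)
frames[:, 20, 11] += rng.normal(0.0, 25.0, n_frames)

tn = frames.std(axis=0, ddof=1)        # per-pixel temporal noise, eq. 5.1
dead_mask = tn >= 3.0 * tn.mean()      # the 3x mean-TN criterion
```

In a real pipeline the three calibration files give three such masks per pixel, which can be combined before the DN replacement step.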



(a) DN for nine different pixels in one calibration file using the Multimir band 3


(b) DN for nine different pixels in one calibration file using the Emerald

Figure 5.1: The variability of the pixel values of nine randomly selected pixels in the Multimir and Emerald camera. Also note the differences of mean value for each pixel.


(a) Multimir

(b) Band three, Emerald

Figure 5.2: Histogram of the temporal noise for one calibration file. The y-axis has a logarithmic scale

It should be noted that the dead pixels will be identified only if the temporal noise is observed in the calibration files. Because there is both low frequency and high frequency temporal noise, the recorded sequence needs to be long enough; in this work the length of the sequences has been 100 images.



(a) Multimir, without masking the temporal noise

(b) The mask of dead pixels in the Multimir image

(c) Multimir, with masking the temporal noise

Figure 5.3: The corrected image, with and without applying the 5·mean temporal noise threshold

5.2 Extreme values

Extreme values are identified as DNs outside a valid region. Since the response curve is a non-linear S-shaped curve, which is hard to approximate using three calibration points, a working region should be set where the approximation is well defined. This region lies somewhere between the maximum and minimum DN that the camera can produce, where the responsivity is fairly linear. In addition there are the dead pixels that give a constant value. These pixels that


(a) Emerald, without masking the temporal noise

(b) The mask of dead pixels in the Emerald image

(c) Emerald, with masking the temporal noise

Figure 5.4: The corrected image, with and without applying the 5·mean temporal noise threshold.

have no responsivity usually have a DN at one of the extreme points, either zero or the maximum value that the sensor gives, which for both the Multimir and the Emerald is DN_max = 2^14 − 1.

By looking at the histograms of the DN for both cameras, figure 5.5, one can see that most of the DN are well within the minimum and maximum values. The pixels that differ tend to lie at the minimum and maximum values of the dynamic range.



(a) Multimir, histogram of the DN

(b) Band three, Emerald, histogram of the DN

Figure 5.5: Histogram of the DN viewing a scene. The y-axis has a logarithmic scale

With a dead pixel mask created from the extreme values, these pixels can be excluded. For a regular scene with no specifically heated or cooled objects, the valid range can be set to DN ∈ [500, 16000]. Results from both cameras are shown in figures 5.6 and 5.7.
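A sketch of the extreme value mask with the valid region [500, 16000] quoted above; the frame and the stuck pixel positions are simulated:

```python
import numpy as np

rng = np.random.default_rng(6)
frame = rng.uniform(4000.0, 9000.0, (32, 32))  # well inside the valid region
frame[3, 3] = 0.0             # stuck-low pixel
frame[10, 30] = 2**14 - 1     # stuck-high pixel (14-bit maximum, 16383)

valid_lo, valid_hi = 500.0, 16000.0
dead_mask = (frame < valid_lo) | (frame > valid_hi)
```

Note that the 14-bit maximum 16383 falls above the 16000 limit, so truly dead stuck-high pixels are caught by the same test.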

5.3 Value on polynomial coefficients

The polynomial coefficients obtained in the NUC using a polynomial approximation, section 4.2.1, may be used to identify the dead pixels. The polynomial coefficients describe the response curve of a single pixel, i.e. the single pixel values plotted against the median values of the pixels in the FPA. The polynomial coefficients therefore contain information about the single pixel response and can be used to identify the dead pixels. Consequently the dead pixels should have


(a) Multimir, without masking the extreme values

(b) Multimir, the extreme value mask

(c) Multimir, with masking the extreme values

Figure 5.6: The corrected image, with and without replacing the extreme values

coefficients that differ from the other pixels' coefficients. Figure 5.8 shows histograms of the coefficients c_0, c_1 and c_2 for images registered with the Multimir and Emerald cameras.

In the histograms, the main part of the pixels have coefficient values collected in a main peak at zero. Dead pixels may therefore be identified by their deviation from the mean pixel located at zero. The deviation may be expressed in units of standard deviation, and the pixels that fall outside



(a) Band three, Emerald, without masking the extreme values

(b) Band three, Emerald, the extreme value mask. Note that no pixels are identified as dead.

(c) Band three, Emerald, with masking the extreme values

Figure 5.7: The corrected image, with and without replacing the extreme values

this interval are defined as dead pixels.


(a) Multimir, histogram of the three polynomial coefficients

(b) Band three, Emerald, histogram of the three polynomial coefficients

Figure 5.8: Histogram of the coefficients. The y-axis has a logarithmic scale.

The criterion can be written as |Coefficient_jl| ≥ N · std(Coefficient_l), where

N is an arbitrary value,
Coefficient_jl is the polynomial coefficient for pixel j and degree l,
std(Coefficient_l) is the standard deviation of the coefficients of degree l in the FPA.

Figures 5.9 and 5.10 show results with the Multimir and the Emerald camera.
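A sketch of the coefficient criterion: pixels whose NUC polynomial coefficients deviate by more than N standard deviations from the FPA mean are flagged. The coefficients are simulated, and N = 3 is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(7)
n_pix = 1000
coeffs = rng.normal(0.0, 1.0, (3, n_pix))  # simulated c0, c1, c2 per pixel
coeffs[:, 42] = [9.0, -8.0, 7.0]           # one clearly deviating pixel

N = 3.0
mean = coeffs.mean(axis=1, keepdims=True)  # per-degree mean over the FPA
std = coeffs.std(axis=1, keepdims=True)    # per-degree standard deviation

# A pixel is flagged if any of its coefficients falls outside the interval
dead_mask = (np.abs(coeffs - mean) > N * std).any(axis=0)
```

With gaussian-like coefficient histograms, a small number of ordinary pixels will also exceed a 3-sigma threshold, which is why the final decision is left to the user.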

5.4 Principal component analysis

Principal component analysis (PCA) is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of multiple dimensions. PCA involves



(a) Multimir, without masking the coefficients

(b) Multimir, the coefficients mask

(c) Multimir, with masking the coefficients

Figure 5.9: The corrected image on the Multimir camera, with and without applying the coefficients mask

a mathematical procedure that transforms a number of (possibly) correlated variables into a (smaller) number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible.

Procedure to calculate the PCA:

1. Collect the data.


(a) Band three, Emerald, without masking the coefficients

(b) Band three, Emerald, the coefficients mask

(c) Band three, Emerald, with masking the coefficients

Figure 5.10: The corrected image on the Emerald camera, with and without applying the coefficients mask

2. Subtract the mean value from the data.
3. Calculate the covariance matrix.
4. Calculate the eigenvectors and eigenvalues of the covariance matrix.
5. Sort the eigenvectors by the eigenvalues.


6. Create a new dataset by multiplying the data with the eigenvectors.

The calibration files contain more than 30 frames, and most of them use 100 frames. This can easily create a need for large amounts of memory if the PCA is implemented directly in Matlab. Instead, to save time and memory at the price of possibly a little quality, the PCA function only looks at the mean value of each pixel in a calibration file. With three calibration files there are three such values per pixel. Calculating the first two eigenimages of these three mean images retains more than 99% of the information from the three images. The first eigenimage gives information about the temporal noise, while the rest is the spatial noise [6], [7]. There are several ways of implementing the PCA; the implementation used in this thesis is described in appendix A.
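The six steps above can be sketched as follows. This is a minimal numpy sketch, not the thesis's Matlab implementation; the function name and the toy data are illustrative.

```python
import numpy as np

def pca(data):
    """PCA following the six steps in the text.

    data: array of shape (num_samples, num_variables), e.g. one row per
          pixel and one column per calibration-file mean image.
    Returns the projected data and the sorted eigenvectors.
    """
    # Step 2: subtract the mean value from the data.
    centered = data - data.mean(axis=0)
    # Step 3: calculate the covariance matrix.
    cov = np.cov(centered, rowvar=False)
    # Step 4: calculate the eigenvectors and eigenvalues of the
    # covariance matrix (eigh, since the matrix is symmetric).
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Step 5: sort the eigenvectors by their eigenvalues, largest first.
    order = np.argsort(eigvals)[::-1]
    eigvecs = eigvecs[:, order]
    # Step 6: create the new dataset by multiplying with the eigenvectors.
    return centered @ eigvecs, eigvecs

# Toy example: 100 "pixels" observed in 3 strongly correlated
# "calibration files", so the first component dominates.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
data = np.hstack([base, 0.5 * base, 0.1 * base])
data += rng.normal(scale=0.01, size=(100, 3))
projected, vecs = pca(data)
```

Because the components are sorted by eigenvalue, the variance of the projected columns decreases from the first column to the last, matching the description of the principal components above.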

Figure 5.11: Histogram of the PCA values: (a) Multimir; (b) band three, Emerald.

As in the earlier sections of this chapter, if the dead pixels are identified by the standard deviation of the merged eigenimages, the resulting mask reduces the bad pixels quite successfully. If the dead pixel mask is set at 2 times the standard deviation, many pixels are identified as dead. Comparing the image that is only NUC with the image that is both NUC and filtered by the dead pixel mask, in figures 5.12 and 5.13, the pixels that deviate from the scene are successfully removed. This is especially clear in band three of the Multimir, where the noise in the lower right part of the image is corrected. This part has not been identified by any of the other dead pixel functions, at least not as completely.
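The eigenimage-based mask can be sketched as follows. This is a minimal sketch assuming the merged eigenimages are thresholded at a multiple of their standard deviation; how the eigenimages are merged (summed here) and the function name are assumptions, not taken from the thesis.

```python
import numpy as np

def eigenimage_dead_mask(eigenimages, k=2.0):
    """Dead-pixel mask from merged eigenimages.

    eigenimages: array of shape (num_eigenimages, height, width).
    k:           threshold factor (the text uses 2 times the
                 standard deviation).

    The eigenimages are merged (summed here, an assumption), and a
    pixel is flagged as dead when its merged value deviates more than
    k standard deviations from the mean of the merged image.
    """
    merged = eigenimages.sum(axis=0)        # merge the eigenimages
    dev = np.abs(merged - merged.mean())
    return dev > k * merged.std()           # True = dead pixel

# Toy example: two flat eigenimages with one strongly deviating pixel.
imgs = np.zeros((2, 4, 4))
imgs[0, 1, 1] = 5.0
mask = eigenimage_dead_mask(imgs, k=2.0)
```

Only the one deviating pixel exceeds the 2-sigma threshold in this toy case, which is the behaviour described for the PCA mask above.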

Figure 5.12: The corrected image on the Multimir camera, with and without applying the PCA mask: (a) without masking the PCA, (b) the PCA mask, (c) with masking the PCA.


Figure 5.13: The corrected image on the Emerald camera (band three), with and without applying the PCA mask: (a) without masking the PCA, (b) the PCA mask, (c) with masking the PCA.

