
Thermographic Measurements of Hot Materials Using a Low- to High-speed RGB-camera

Prospect of RGB-cameras Within the Field of Thermographic Measurements

Therese Berndtsson

Space Engineering, master's level (120 credits) 2019


ACKNOWLEDGEMENTS

The past five months have been a challenging, frustrating, yet exciting journey towards the completion of my master's degree in space and atmospheric physics at Luleå University of Technology. I would like to take the time to sincerely thank all the people who helped me complete this master thesis.

First I would like to thank my supervisor, Jan Frostevarg, for his guidance and genuine interest in my work throughout the project. Furthermore, I would like to thank all of the Ph.D. students at the Division of Product and Production Development, for the endless amount of advice. A special thanks to Tore Silver, research engineer at the same division, for helping me with equipment and experimental set-up, but mainly for always taking the time to bounce ideas.

My gratitude also goes to Kentaro Umeki at the Division of Energy Science and Lars Frisk at the Division of Materials Science for letting me borrow important experimental equipment. Furthermore, I would like to thank Albert Bach Oller for always being available and easy to contact whenever I wanted to use the lab-equipment at the Division of Energy Science and Gustav Häggström for helping me to find validation experiment equipment on very short notice. For the advice concerning the general optical properties of the imaging system, I would furthermore like to thank Johan Öhman at the Division of Fluid and Experimental Mechanics. A big thanks also go to Anita Enmark, at the division of Space Technology, for the advice concerning the post-processing of the images.

Finally, I would like to thank my friends for their moral and uplifting support during this period of my life. A special thanks to my cousin, Martina Berndtsson, for her comforting words whenever the project work felt overwhelming. Last but not least, I would like to thank my partner, Jesper Finnsson, for always taking the time to listen to my endless chatter about encountered project problems and possible solutions; I envy his patience.

I hope you will enjoy reading this master thesis. If you have any questions or comments, please do not hesitate to contact me.

Sincerely,

Therese Berndtsson


ABSTRACT

Monitoring the thermal behavior of a material while it is heated or cooled is of great importance in order to understand the structural behavior of materials. This thesis aims to investigate the prospects for imaging hot materials using an RGB-camera. The main motivation for using an RGB-camera is the very simple set-up and, in comparison to thermal IR-cameras, the low price.

A method and code enabling thermographic measurements in the temperature range of 800 °C up to 1500 °C have been produced. Calibration of the RGB-camera was performed, and the accuracy was predicted to be poor within the temperature range of 1000 °C up to about 1200 °C. The poor accuracy of the calibration within this range has its source in the non-linear (and irregular) response of the CMOS-sensor, which prevents a valid exposure time function from being accurately determined. The calibration is thus performed with different settings (i.e. different exposure times and aperture settings) but without any correction for the setting change. The validation experiments were performed in (or very close to) the temperatures where the temperature error was predicted to be the largest. An under-estimation of approximately 30-50 °C in the temperature range between 950 °C and 1015 °C could be seen, corresponding to an absolute error of about 3-5% in this temperature range. The accuracy is, however, thought to increase with temperature above approximately 1250 °C. It is possible to perform a temperature transform of target images with temperatures above 1500 °C since the look-up table produced for the temperature transform extends to about 3000 °C. However, the accuracy is unknown since no calibration or validation experiments within these ranges were performed.

The results of the calibration and experiments, along with the theoretical assessment within the thesis, gave background to the discussion of the optimal imaging system for thermographic measurements. In order to obtain more accurate temperature measurements, a CCD-sensor is preferable since it produces more uniform images with a more linear and predictable response. This would most likely enable the implementation of the camera setting influence. To obtain better color accuracy, but mainly to prevent channel overlap, a system using three sensors instead of one (as in the current imaging system) is preferable. This would result in larger freedom of channel choice, and thus the temperature transform can to some extent be customized for the specific measured temperature ranges. A brief discussion concerning the overall choice of camera was also brought up. Since the temperature range is large and the red, green and blue channels are sensitive to temperature changes, the demands on the sensor dynamic range will be high in order to obtain a linear response, or even a fully predictable response, across the whole temperature range. A suggested alternative to the RGB-camera, still offering a very simple set-up, is a dual-wavelength camera in the near (or medium wavelength) infrared range.


SAMMANFATTNING

Mapping material changes during heating or cooling is of great importance for understanding the structural behavior of a material. This master's thesis aims to explore the prospects for thermal imaging of hot materials using an RGB-camera. This includes calibration of the camera, validation experiments and theoretical investigations. The main motivation for using an RGB-camera is the simple set-up and the, in comparison to many IR-cameras, low price.

A method and a code enabling thermographic measurements (for both image sequences and still images) in the temperature range 800 °C to 1500 °C have been developed, including calibration of the camera. After the calibration, the accuracy of the measurements was predicted to most likely be deficient in the temperature range 1000 °C up to about 1200 °C. This stems from the non-linear (and in some cases irregular) sensor response, which mainly complicated determining the influence of exposure time on the signal. For this reason, the calibration was performed with fixed settings, i.e. fixed exposure times and aperture settings, but without correction for the change of camera settings between temperature ranges. The validation experiments were performed in (or very close to) the temperature range where the largest temperature deviations were predicted. Temperatures measured with the RGB-camera underestimated the temperature by approximately 30-50 °C in the range 975-1015 °C, corresponding to an absolute error of about 3-5% within this range. The accuracy of the measurements is however believed to increase when the temperature of the imaged object exceeds 1250 °C, since smaller deviations were seen in this region during the calibration process. With the produced look-up table for the transformation between measured emission and temperature, it is possible to perform temperature transforms of images of objects with temperatures above 1500 °C. This, however, with unknown uncertainties, since neither calibration nor validation experiments have been performed at such high temperatures.

The results of the calibration and the experiments, together with a theoretical investigation of limitations and possible improvements for thermographic measurements, formed the basis for a discussion of the optimal imaging system. Recommendations for performing more accurate thermographic measurements with an RGB-camera in the future were produced, where a triple-sensor CCD camera was suggested to improve the measurement results. A CCD-sensor is preferable over a CMOS-sensor since the non-real pixel deviations are not as critical for a CCD-sensor as for a CMOS-sensor. The CCD-sensor is moreover, in many aspects, more reliable for scientific measurements and usually has a more linear and predictable response, which would most likely enable the inclusion of the exposure-time influence on the signal. By having three separate sensors (one for each color) instead of one sensor (as in the existing system), the reliability of the pixel values and the freedom of channel choice should also improve, since the signal overlap between the colors is reduced. A brief discussion regarding the choice of wavelength ranges that can be used for temperature measurements is also included in the thesis. Since the measurement range is large and the red, green and blue channels are sensitive to temperature changes (a small change in temperature gives a large change in measured emission), the demands on the dynamic range of the sensor will be high. An alternative to the RGB-camera, still offering a simple set-up, is a dual-sensor camera with a dual band-pass filter in the near-infrared (NIR) range. This, however, requires an investigation of how sensors in this range affect the measured signal.


Contents

NOMENCLATURE

1 INTRODUCTION
1.1 Aim of Master Thesis
1.2 Problem Description
1.3 Delimitation of Project
1.4 Reader's Guide

2 BACKGROUND THEORY
2.1 Black- and Greybody Emission
2.2 Electromagnetic Radiation
2.3 Principle of Non-contact Temperature Measurements
2.4 Emissivity
2.5 Reflectivity

3 IMAGING SYSTEM PROPERTIES
3.1 IR-camera
3.2 RGB camera
3.2.1 Single sensor RGB-separation
3.2.2 Triple sensor RGB-separation
3.3 General Sensor Concepts and Properties
3.3.1 Noise
3.3.2 Windowing
3.3.3 Potential well
3.3.4 Dynamic range
3.3.5 Well depth
3.3.6 Sensitivity
3.3.7 Shuttering
3.3.8 Blooming
3.3.9 SNR
3.3.10 Resolution
3.4 Principle of CCD-sensors
3.5 Principle of CMOS-sensors
3.6 CMOS-sensor vs. CCD-sensor
3.7 Point Spread Function of an Imaging System

4 EQUATION FOR TEMPERATURE EXTRACTION
4.1 Emitted Spectral Radiance to Digital Number
4.1.1 CDN - correction function
4.1.2 F - offset
4.1.3 Final equation ∆DN
4.2.1 Final equation DN-ratio

5 MOTIVATION FOR CHOICE OF TEMPERATURE TRANSFORM

6 METHODOLOGY OF PRACTICAL WORK
6.1 Equipment Used
6.2 Software Used
6.3 Sensor Response
6.4 Black Current Frame (Noise Deduction)
6.5 Preheating
6.6 Blackbody Calibration
6.6.1 Calibration 1 - exposure time and gain function
6.6.2 Calibration 2 - fixed settings
6.7 Validation Experiments
6.7.1 Validation experiment 1
6.7.2 Validation experiment 2
6.7.3 Validation experiment 3
6.8 Other Experiments

7 RESULTS OF PRACTICAL WORK
7.1 Developed Code
7.2 Black Current Frame
7.3 Calibration Results
7.3.1 Calibration 1
7.3.2 Calibration 2
7.4 Validation Results
7.4.1 Validation experiment 1
7.4.2 Validation experiment 2
7.4.3 Validation experiment 3
7.5 Other Experiment Results

8 DISCUSSION OF PRACTICAL WORK
8.1 Discussion of Calibration Results
8.1.1 Calibration 1
8.1.2 Calibration 2
8.2 Discussion of Validation Results
8.2.1 Validation experiment 1
8.2.2 Validation experiment 2
8.2.3 Validation experiment 3
8.3 Summarizing Discussion of Validation Results

9 DISCUSSION OF SENSOR AND CAMERA CHOICE
9.1 Channel Behavior and Optimal Channel Choice
9.2 Camera and Sensor Choice

10 FINAL RECOMMENDATIONS

11 CONCLUSIONS

12 SUMMARY

13 REFERENCES

APPENDICES


NOMENCLATURE

Quantities

∆DN — Differential digital number (DN − DN_BG)
DN — Digital number (the digital value for a measured total radiance in an imaging system)
DN_BG — Digital number of the black current, i.e. the background (internal camera noise)
E — Total radiance over all wavelengths, ε ≡ ε(λ, T) [W·sr⁻¹·m⁻²]
E_gb — Total radiance over all wavelengths (greybody approximation), ε ≡ ε [W·sr⁻¹·m⁻²]
E_syst — Total radiance within a specific color band and for a specific system, ε ≡ ε(T) [W·sr⁻¹·m⁻²]
L_ε — Hemispherical spectral radiance emitted per unit surface of a body with ε < 1 [W·sr⁻¹·m⁻³]
L_bb — Hemispherical spectral radiance emitted per unit surface of a blackbody (ε = 1) [W·sr⁻¹·m⁻³]
L_target — Hemispherical spectral radiance emitted per unit surface of a specific target [W·sr⁻¹·m⁻³]
R_∆DN — Ratio between the differential digital numbers of two spectral bands

Physics constants

c — Speed of light in vacuum, 2.99792458 × 10⁸ [m·s⁻¹]
h — Planck constant, 6.626070040 × 10⁻³⁴ [J·s]
k — Boltzmann constant, 1.38064852 × 10⁻²³ [J·K⁻¹]

Conversion constants and functions

α — Color-band conversion function (conversion from E_syst,color to ∆DN)
Cα — Color-band ratio conversion function (conversion of E_syst,ratio to R_∆DN)¹
Cβ — Color-band ratio conversion function (conversion of E_syst,ratio to R_∆DN)²
C_s — Spectral conversion constant (spectral radiance at a specific wavelength to DN)
f(g) — Gain function for a specific spectral range
h(∆t) — Exposure time function for a specific spectral range

¹ influence of f(g) and h(∆t) included
² influence of f(g) and h(∆t) neglected

Optical properties

τ_filter — Spectral transmittance of the filter
τ_lens — Spectral transmittance of the lens
τ_atm — Spectral transmittance of the atmosphere
ε — Emissivity
S — Spectral response of the imaging sensor

General properties

∆t — Exposure time [s]
λ — Wavelength [m]
A_image — Area of image [m²]
a_pixel — Area of pixel [m²]
A_target — Area of emitting target [m²]
g — Gain [dB]
T — Absolute temperature [K]
T_b — Brightness temperature [K]

Note that the nomenclature stated above can be altered by a denotation indicating which spectral band the property represents, and whether it is the theoretical representation (denoted "approx") or the measured value (denoted "real") of the property.

Added denotation:

• n, n+1, where n is the index of a specific wavelength e.g. λn and λn+1 (n=1,2,3,4).

• color, for the general equation for a color-band (red, green and blue).

• red, green and blue for the specific equations for a specific spectral color-band (red, green and blue respectively).

• ratio, for the general equations of a ratio between two color-bands.

• R/B, R/G and G/B for the specific equations of the specific spectral band ratios Red/Blue, Red/Green and Green/Blue respectively.

• approx, for the theoretical approximation of a property.

• real, for the measured value of a property.


1 INTRODUCTION

Temperature changes of materials, either introduced deliberately or as a side effect of a high-temperature process, can introduce imperfections and structural changes within the material. Monitoring the thermal behavior of a material while it is heated or cooled is hence of great importance in order to understand the capacity of the materials used. There are many methods to measure temperature; however, most of them only offer a point temperature measurement and/or require direct contact with the surface to be measured. To obtain 2D surface measurements and to avoid direct contact measurement at high temperatures (where materials become liquid), imaging is the preferred temperature measuring method. Thermal imaging in the infrared (IR) range is commonly used since it allows for measurements at lower temperatures (temperatures encountered in everyday life) where objects mainly emit IR-radiation. Due to the low signal-to-noise ratio when measuring in the IR-range, the design of IR imaging systems is complex and thus often expensive. At higher temperatures, materials will emit further into the visible spectral range. This provides opportunities for the use of a commercial RGB-camera (active in the visible Red, Green, and Blue spectra) for thermal imaging applications.

A method and code for monitoring the temperature behavior of materials exposed to high temperatures (above 800 °C) using an RGB-camera are developed within this thesis. Calibration and experiments using the existing imaging system, a single CMOS-sensor RGB-camera, are executed. Furthermore, an investigation of the sources of error introduced by the current imaging system will be presented, along with an assessment of optimal imaging systems, sensors and channel choices for different applications. This forms the basis for a possible purchase of a new imaging system for the Division of Product and Production Development at Luleå University of Technology.

1.1 Aim of Master Thesis

The aim of this Master Thesis is to develop a method and code for thermographic measurements in the temperature range of 800 °C up to approximately 1500 °C using an RGB-camera. Calibration of the camera is an important intermediate step in order to achieve the thermographic measurements.

Furthermore, current restrictions and future possibilities of the used method and imaging system are to be investigated. The project strives towards setting a foundation for future improved thermographic measurements.

The application area of the temperature measurements will be focused on laser material processing, e.g. laser cladding or hardening. Imaging of laser treatment of metal surfaces is highly complex (high temperature ranges and fast events), and the developed method should hence be applicable to other similar but less complex areas.

1.2 Problem Description

First and foremost, an algorithm for the radiance-to-temperature transform has to be found, along with the uncertainties of the temperature transform. Secondly, code has to be developed for the calibration and the actual radiance-to-temperature transform. Solutions to the uncertainties should be discussed and future improvements suggested. In order to estimate the temperature, the current imaging system and its limitations concerning temperature range, linearity and channel choice for the different settings have to be scrutinized.

• How do the imaging system (RGB-camera) and electronics transfer radiance into pixel intensity?

• What method for radiance-to-temperature transform is preferred when using an RGB camera?

• What limitations does the selected method, together with the imaging system, introduce?

• Can these limitations be fixed?

• Is complementary equipment needed in order to receive better results?

• What are the uncertainties of the numerical method?

1.3 Delimitation of Project

An early decision was made not to include experiments with different imaging systems, mostly due to the time-consuming calibration. Furthermore, the non-linear region of the camera is not used; using this region would require a more carefully executed and detailed calibration. The camera system used had limitations which complicated the calibration, and a choice was made to somewhat simplify the calibration process in order to produce reasonable results when doing the temperature transform.

The validation experiments will furthermore not reach the very high temperatures of laser processes.

The method developed should however work, but with unknown uncertainties, for higher temperatures than validated if the imaging system allows imaging at these high temperatures without saturation.

1.4 Readers Guide

The thesis work included theoretical studies, coding, calibration, experiments and method optimization, along with an assessment of the optimal imaging system. The latter will be used as a foundation for future experiment improvements and underlie possible future purchases by the Division of Product and Production Development (where the thesis was performed). A quick guide to the thesis structure is presented below.

Sections 1, 2 and 3 offer a basic understanding of the thesis work performed.

Section 4, Equation for Temperature Extraction, presents the equation used for the temperature extraction and is the main section forming the basis of the code developed. In this section, the algorithm for the intensity-to-temperature conversion is found.

Section 5, Motivation for Choice of Temperature Transform, further explains why the imaging system, an RGB-camera, and the method for the image transform are a good choice for these temperature measurements.


Since the thesis work includes many different areas, the following sections (covering methodology, results and discussion) are not necessarily divided into these standardized categories.

Section 6, Methodology of Practical Work, includes the methodology of the practical work: calibration of the camera and experiments (imaging hot materials), along with the settings for the calibration and experiments.

Section 7, Results of Practical Work, includes the results of the code development (how gathered data was processed in order to obtain a temperature map). Furthermore, the results of the calibration and validation experiments are presented.

Section 8, Discussion of practical work, will discuss only the results seen in Section 7 (Results of practical work).

A lot of the project work consisted of gathering information in order to suggest improvements and optimization solutions for the used method and imaging system. This includes presenting trade-offs due to the used imaging system and other information that could be of use in order to motivate the purchase of a more suitable imaging system for the specific application (measuring surface temperatures of hot materials using an RGB-camera).

Section 9, Discussion of Sensor and Camera Choice, can be seen as part of the thesis results and discussion, where, based on the full content of the thesis, the optimal imaging system and camera choices for thermographic measurements are discussed.

Section 10, Final Recommendations, is the final recommendation, given in a more concise form, for future improvements of the thermographic measurements (concerning the optimal choice of imaging system, channel choice and sensor choice).

Sections 11 and 12, Conclusions and Summary, are self-explanatory.

The thesis is structured approximately in the same order as the project work was performed (except for the code developed during the whole project process).


2 BACKGROUND THEORY

2.1 Black- and Greybody Emission

A blackbody is defined as a body having a surface that absorbs all incoming electromagnetic radiation (no reflection) and emits radiation over all wavelengths (DeWitt and Richmond, 1985). The concept of a blackbody is an idealization but can be seen as an approximation of the behavior of all absorbing and emitting objects (hence all objects with a temperature > 0 K). The radiation is a result of the motion of particles, atoms and molecules within the object. The intensity of the emitted radiation from a blackbody at each wavelength depends only on the temperature of the body. The blackbody emission, i.e. the spectral radiance, is described by Planck's law, stated in Eq. 1.

$$L_{bb}(\lambda, T) = \frac{C_1}{\lambda^{5}\left[e^{C_2/(\lambda T)} - 1\right]} \qquad (1)$$

L_bb(λ, T) is the hemispherical spectral radiance emitted per unit surface area of an ideal blackbody at an absolute temperature T. Equation 1 is presented using the notation of Michels (1968), where C_1 = 2hc² and C_2 = hc/k (see the nomenclature list at the beginning of the thesis for the constants within C_1 and C_2).

Planck's law states that the spectral radiance is given by the temperature of the emitting body; an illustration of the Planck curves for different temperatures can be seen below:


Figure 1: Planck curves for different temperatures (C denotes °C).

The accuracy of the blackbody approximation of a real body will, however, depend on the emissivity of the object. If the emissivity ε = 1, the object emits as a perfect blackbody. The emissivity will, however, vary depending on the object material and surface, and is furthermore wavelength and temperature dependent. The spectral radiance of a real material is hence more accurately described by the equation seen below.

$$L_{\varepsilon}(\lambda, T) = \varepsilon(\lambda, T)\, L_{bb} = \varepsilon(\lambda, T)\, \frac{C_1}{\lambda^{5}\left[e^{C_2/(\lambda T)} - 1\right]} \qquad (2)$$

Here ε(λ, T) is the emissivity of a certain material having certain surface properties (for a specific viewing angle). Furthermore, the total radiance of a body is obtained by integration over all wavelengths:


$$E(T) = \int_{0}^{\infty} \varepsilon(\lambda, T)\, \frac{C_1}{\lambda^{5}\left[e^{C_2/(\lambda T)} - 1\right]}\, d\lambda \qquad (3)$$

If the emissivity is assumed not to vary with wavelength or temperature, i.e. ε(λ, T) = ε, the total radiance can be described by the greybody approximation seen below.

$$E_{gb}(T) = \varepsilon \int_{0}^{\infty} \frac{C_1}{\lambda^{5}\left[e^{C_2/(\lambda T)} - 1\right]}\, d\lambda \qquad (4)$$
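As a brief, hedged illustration of Eqs. 1-4 (a sketch for this section, not code from the thesis), the Python lines below evaluate Planck's law for a single wavelength and temperature and numerically integrate the greybody approximation over a finite wavelength band; the constants follow the nomenclature list (C1 = 2hc², C2 = hc/k), while the emissivity value and integration limits are assumed example choices.

```python
import numpy as np

# Physical constants (SI units), as in the nomenclature list
h = 6.626070040e-34   # Planck constant [J s]
c = 2.99792458e8      # speed of light in vacuum [m/s]
k = 1.38064852e-23    # Boltzmann constant [J/K]
C1 = 2.0 * h * c**2   # first radiation constant used in Eq. (1)
C2 = h * c / k        # second radiation constant used in Eq. (1)

def planck_radiance(wavelength, T):
    """Blackbody spectral radiance L_bb(lambda, T), Eq. (1) [W sr^-1 m^-3]."""
    return C1 / (wavelength**5 * (np.exp(C2 / (wavelength * T)) - 1.0))

def greybody_radiance(T, emissivity=0.8, lam_min=100e-9, lam_max=100e-6, n=200_000):
    """Greybody total radiance, Eq. (4), integrated numerically over a finite
    wavelength band (an assumed stand-in for the 0-to-infinity integral)."""
    lam = np.linspace(lam_min, lam_max, n)
    L = planck_radiance(lam, T)
    # trapezoidal rule over the sampled wavelengths
    return emissivity * np.sum(0.5 * (L[:-1] + L[1:]) * np.diff(lam))

# Example: spectral radiance at 650 nm (red) and total greybody radiance at 1500 K
print(planck_radiance(650e-9, 1500.0))
print(greybody_radiance(1500.0, emissivity=0.8))
```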

2.2 Electromagnetic Radiation

The spectral ranges of electromagnetic radiation are presented below, for future understanding of the notation used for the spectral ranges.

Figure 2: The electromagnetic radiation spectra. Image credit Edmund optics (2019)

NIR is short for Near-infrared, MWIR short for Medium Wavelength Infrared and FIR short for Far infrared.


2.3 Principle of Non-contact Temperature Measurements

In order to obtain the temperature of a blackbody, one can measure the emission (radiance) from the object and extract the temperature using Planck's law. The total radiance within a spectral range is obtained by measuring the photon emission within that range (using a band-pass filter). The incoming photons are detected by a sensor and are transformed into an electrical signal corresponding to the photon count. The electrical signal is further processed, by a signal processing system, into a Digital Number (DN) corresponding to the electrical signal. If measuring within a very narrow range, and the optics, electronics and atmosphere are assumed not to interfere with the signal, the total emitted radiance can be approximated by the spectral radiance seen in Eq. 2. If the emissivity is known, Eq. 2 can be used directly to extract the temperature, provided a conversion constant is included to assure unit consistency when converting photon count to DN. Each measurement will hence result in a Digital Number (DN) corresponding to a spectral radiance emitted by the object being measured, i.e. DN = C_s L_bb, where C_s is the spectral conversion constant. The temperature can then be extracted from Eq. 2:

$$T_b = \frac{C_2}{\lambda \ln\!\left(\dfrac{C_1}{\varepsilon L_{bb} \lambda^{5}} + 1\right)} = \frac{C_2}{\lambda \ln\!\left(\dfrac{C_1}{\dfrac{\varepsilon\, DN}{C_s}\, \lambda^{5}} + 1\right)} \qquad (5)$$

The expression on the left-hand side is the brightness temperature of a greybody. The brightness temperature of an object surface is often described as the temperature that a blackbody (or greybody) would have if it were to emit the same amount of spectral radiance as the actual body. Thus, under the assumptions stated above, the absolute temperature equals the brightness temperature. In order to determine the spectral conversion constant, C_s, calibration is needed, and if the emissivity is unknown a reference temperature will furthermore be necessary in order to produce the temperature of the target.
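The following sketch (illustrative only; the DN value, the spectral conversion constant Cs and the emissivity below are assumed example numbers, not calibration results from this work) shows how Eq. 5 turns a background-corrected digital number measured in a narrow band into a brightness temperature.

```python
import math

# Constants as above (SI units)
h, c, k = 6.626070040e-34, 2.99792458e8, 1.38064852e-23
C1, C2 = 2.0 * h * c**2, h * c / k

def brightness_temperature(DN, wavelength, Cs, emissivity=1.0):
    """Brightness temperature T_b from a digital number, following Eq. (5).

    DN is assumed to be background-corrected; Cs is the spectral conversion
    constant relating spectral radiance to DN (DN = Cs * L), so L = DN / Cs.
    """
    L = DN / Cs  # recovered spectral radiance [W sr^-1 m^-3]
    return C2 / (wavelength * math.log(C1 / (emissivity * L * wavelength**5) + 1.0))

# Example with assumed numbers: DN of 1800 at 650 nm, an arbitrary Cs
print(brightness_temperature(DN=1800.0, wavelength=650e-9, Cs=2.0e-7, emissivity=0.9))
```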

Since the channel band-pass filters of most imaging systems are often wider, and the assumption that the optical and electrical systems do not interfere with the signal is unreasonable, the equation for the real temperature becomes more complex. In order to find the temperature transform, iterative solutions and/or look-up-table solutions will be necessary. A method for doing so is presented in Sec. 4 (Equation for Temperature Extraction), where a refined method for extraction of the temperature can be found.

2.4 Emissivity

The emissivity of a surface is both wavelength and temperature dependent and will furthermore vary depending on the angle of the emission (viewing point). The angle dependency is neglected here (reasonable for the project application, see Section 4.2 where the angle dependency cancels out). The spectral emissivity is defined as the spectral directional radiance from the target surface relative to the spectral directional radiance of a blackbody surface at the same temperature.


$$\varepsilon = \frac{L_{target}(\lambda, T)}{L_{bb}(\lambda, T)} \qquad (6)$$

The definition above assumes the temperature T to be the surface temperature defined by the radiometric temperature, T_sr, which is based on the radiance emitted by a surface (Becker and Li, 1995). The emissivity changes with wavelength and has been shown to increase with temperature (Wang et al., 2015; King et al., 2017). The emissivity of metals has furthermore been shown to increase with surface roughness (Wang et al., 2015; King et al., 2017), surface oxidation (constant when the oxidation is fully developed) (King et al., 2017; Zhang et al., 2017) and phase changes (Liu et al., 2013). The emissivity will hence change with each target and has a temperature and wavelength dependency.

2.5 Reflectivity

Reflectivity (i.e. the reflectance applied to a thick reflecting object having no transmission) is defined as the radiance reflected by a surface divided by the radiance received by the surface, i.e. the radiance absorbed. The radiance absorbed can be seen as the radiance to be emitted by the surface according to Kirchhoff's law of thermal radiation, and hence the reflectivity and emissivity are closely linked; if the reflectivity is high, the emissivity will be low. Since the emission of the target is used to extract the temperature, the reflective surface of the target material has to be considered. If the surface is highly reflective, as for metals, the absorptive properties of the surface will be low, resulting in a low emission. At lower temperatures, the emission is already low, and together with the low emissivity of the metal surface the measured signal might be difficult to detect. In order to increase the radiance, one can decrease the reflectivity by e.g. sandblasting the surface or applying a thin film or grease onto the surface of the material before doing thermographic measurements. Adding material to the surface might however change the surface properties, and even though the extra coating of the metal often changes the surface properties for the better (assuring surface protection), this might be unwanted in some applications (Meola et al., 2015).

The distribution of light reflection on rough surfaces can be described by the bidirectional reflection distribution function (BRDF). Including the polarization properties of the light (highly relevant for metal surfaces) the polarized BRDF (pBRDF) describes the light distribution of the polarized reflected light (Priest, 2000). Most models of the pBRDF of metals include the specular reflection (component based on microfacet theory combined with Fresnel’s law) and the diffuse reflection (Torrance and Sparrow, 1967).

If the imaging system used is sensitive to surrounding light or light used in the actual heating process (e.g. a laser process), the directions where the reflection is the strongest should be avoided and/or a band-stop filter for the significant light frequency should be implemented. Modeling of the strong directional reflectance can be seen in Torrance and Sparrow (1967), who use an improved model of the pBRDF. Within the scope of this project, the exact distribution of the reflection will however not be of importance.


3 IMAGING SYSTEM PROPERTIES

The below presentation of imaging system properties will assume applications of thermal (non-contact) surface measurements.

3.1 IR-camera

An IR-camera can work in different spectral ranges if using different detector technologies (Tarin, 2016). Low-temperature objects only emit in the longer wavelength spectra (since the Planck curves of lower temperatures are shifted to longer wavelengths, see Fig. 1), and temperature measurement is only possible using a detector sensitive to wavelengths corresponding to the object emission. As seen in the Planck curves in Fig. 1, the longer the wavelength, the lower the signal will be; thus the Signal-to-Noise Ratio (SNR) is low, especially in the FIR, and noise (arising from the camera electronics) can easily disturb the real measurement. In order to reduce the disturbance of noise, extensive cooling of the detectors has to be implemented. Even though cooling is necessary for most imaging applications, measuring lower surface temperatures (in the long wavelength range) will set higher demands on the cooling than if measuring higher temperatures at higher wavelengths. The IR-range is furthermore more sensitive to atmospheric attenuation than shorter wavelength ranges (visible and shorter) since many of the common air gases absorb significantly within this range.

Most IR-cameras are monochromatic (use one channel, i.e. one spectral range), and when doing thermographic measurements a point reference temperature input is always needed (Battalwar et al., ND).

When using monochromatic cameras for thermographic measurements, it is recommended to have the sensor parallel to the target surface, since changing the viewing angle will change the received portion of the light, i.e. the measurement is dependent on the viewing angle. Furthermore, one has to consider the influence of reflected light, since the total energy measured is no longer exclusively light emitted by the target but also light reflected from the surroundings, and the temperature measurement accuracy will decrease (Tarin, 2016). If instead using the ratio between two (or more) channels within the IR-range, these errors are reduced: the viewing angle is the same for both channels, and the reflection ratio is taken care of when accounting for the emissivity (or by assuming the reflection to be equal in both channels), so these dependencies cancel out.

In the IR-range the Planck curves flatten out and the curves of the different temperatures start to converge. The intensity difference between two closely separated channels in the MWIR to FIR-region will be relatively small. This is illustrated in Fig. 3.


Figure 3: Zoomed in section of Fig. 1 in the IR range. CH1=channel 1 and CH2=channel 2. If CH2 is to be further separated from CH1 it will result in a larger emission difference between CH1 and CH2.

One should note that the separation of channels can pose problems when assuming similar channel behavior. Two channels having a fixed separation in wavelength will have a larger difference in radiance if placed at a somewhat shorter wavelength than if the two channels are placed at very long wavelengths (in the IR-range). For lower temperatures (25-800 °C) the object emission (Planck curve) is shifted to longer wavelengths and is close to zero in any range other than the IR-range. In this temperature region, measurement within the IR-range will be the only option for non-contact measuring methods. Since the signal is low at these low temperatures, IR-cameras for lower temperatures require extensive cooling (to reduce the noise) and are, due to this, often expensive. If measuring at higher temperatures, the demands on sensor cooling will be reduced.

3.2 RGB camera

The RGB (Red, Green, Blue) camera has a color separation mechanism before the sensor detection. There are two main approaches for color separation of the incoming light in an RGB-camera: the use of one sensor and spectrally selective filters, or the use of three separate sensors, each sensitive to one spectral range.

For a single-sensor RGB-camera, no beam splitting mechanism is needed; the colors are instead separated only by using filters. The most common filter arrangement separates the colors in a Bayer pattern. For a triple-sensor RGB-camera, a beam splitting mechanism (and sometimes narrow band-pass filters) is implemented in the camera. The difference between the two methods and the different sensor types, along with the pros and cons, is more thoroughly presented within this section.

3.2.1 Single sensor RGB-separation

The most common RGB-cameras use "pixel filters" in a Bayer pattern to spectrally separate the colors red, green and blue. The incoming light will, without any beam splitting mechanism, hit the sensor, where different pixels of the sensor are coated with a "one-color filter", i.e. a narrow band-pass filter with the spectral range of the specific color. The filter coating is arranged in a so-called Bayer pattern, seen in Fig. 4.

Figure 4: Filtering of light in a Bayer pattern. Image credit; Prayagi et al. (ND).

The Bayer filter has twice as many green pixels as red and blue (as seen in Fig. 4). This is because the image should correspond to how the eye perceives colors (the human eye is more sensitive to green colors). Due to the specific color filter on every pixel, each pixel will have one known and two unknown intensity counts (e.g. if the intensity count for the red channel is known, the intensity counts of the blue and green channels are unknown). To solve for the unknown intensity count in each channel, a method called demosaicing or Debayering can be used. The approach is to approximate the unknown intensity values for each pixel by the knowledge of the surrounding pixels (Guarnera et al., 2010).

There are different ways, with different pros and cons, to estimate the missing pixel values: Nearest Neighbor Interpolation, Bilinear Interpolation, Smooth Hue Interpolation and VNG (Variable Number of Gradients). The most common method is Bilinear Interpolation, where the intensity count of each unknown pixel (in each channel) is an average of the surrounding pixels (Stark, ND). The bilinear interpolation for a red, blue and green pixel can be seen below:

RGB-values in a red pixel (R33):

R33 = R33
G33 = (G23 + G34 + G32 + G43) / 4
B33 = (B22 + B24 + B42 + B44) / 4

Interpolated RGB-values in a blue pixel (B44):

R44 = (R33 + R35 + R53 + R55) / 4
G44 = (G34 + G45 + G43 + G54) / 4
B44 = B44

Interpolated RGB-values in a green pixel (G43):

R43 = (R33 + R53) / 2
G43 = G43
B43 = (B42 + B44) / 2

(Guarnera et al., 2010)
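To make the interpolation scheme above concrete, the sketch below performs a plain bilinear Debayering of a raw frame, assuming an RGGB Bayer layout and mirrored borders (both assumptions for illustration; the actual camera layout and edge handling may differ). Each missing color sample is the average of the available neighboring samples of that color, exactly as in the expressions above.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear Debayering of a raw Bayer frame (assumed RGGB layout)."""
    H, W = raw.shape
    r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1.0   # R on even rows/cols
    b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1.0   # B on odd rows/cols
    g_mask = 1.0 - r_mask - b_mask                        # G on the remaining pixels

    # Green is averaged from its 4 direct neighbors; red/blue from their
    # 2 or 4 nearest samples, which is exactly the scheme written out above.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0

    raw = raw.astype(float)
    R = convolve(raw * r_mask, k_rb, mode='mirror')
    G = convolve(raw * g_mask, k_g, mode='mirror')
    B = convolve(raw * b_mask, k_rb, mode='mirror')
    return np.dstack([R, G, B])

# Example on a small synthetic raw frame (12-bit values)
rng = np.random.default_rng(0)
rgb = demosaic_bilinear(rng.integers(0, 4096, size=(8, 8)))
print(rgb.shape)  # (8, 8, 3)
```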

The bilinear method is computationally relatively fast and offers significantly reduced color error and increased resolution in comparison to the nearest neighbor method, but is far from excellent. Bilinear interpolation will cause some edge effects since the edges cannot be averaged by pixels outside of the image; zero-padding the image will give a decline in intensity around the edges. Near intensity edges in the image, the pixel value might be better approximated using only the surrounding pixels in one direction (e.g. vertical or horizontal). By implementing edge-sensing algorithms, the gradients in different directions can be found. When more than two gradients are calculated and the optimal gradient for the estimation is selected, the method is called VNG (Variable Number of Gradients). VNG is very accurate but computationally challenging, especially when Debayering real-time video images (Stark, ND).

In some cases the Debayering is already done by the camera settings and electronics; however, one should be aware of how the Debayering algorithm might affect the color error and resolution. Furthermore, if only Debayered data is received from the camera, the raw data cannot be retrieved since the weight of each pixel in the interpolated value cannot be found. If full control of the Debayering is desired, raw data can be extracted and Debayering algorithms implemented in the post-processing. However, if Debayering is performed by the camera electronics, it is most likely already optimized (and one is often offered the possibility to select the algorithm to be used).

Having filters in a Bayer pattern on one single sensor is a cost-effective method to obtain color images (no significant difference in price from a monochrome camera). The single-sensor Bayer filter camera is often good enough for household imaging or imaging where the color resolution and accuracy are not very important (as long as the human eye perceives a good image quality). Highly intelligent Debayering algorithms have been developed in order to improve the overall color image quality and repair the Bayer pattern artifacts; however, other factors, such as sharpness, color accuracy, image noise and speed of the camera, suffer from these improvements. Many single-sensor RGB-cameras will furthermore suffer from significant signal strength loss since a "frame" often surrounds each pixel. The pixel frame is implemented in order to restrict the light within each specific pixel (otherwise the colors can blend and blur into white). Some leakage will however still occur, and the sensor response for each color will have a significant overlap (Prayagi et al., ND). Single-sensor RGB-cameras are hence not customized for scientific applications.

3.2.2 Triple sensor RGB-separation

The triple-sensor system can use two forms of color separation. The prism used can either work as only a beam splitter, where each beam is directed to a separate sensor having its own individual single-color filter (red, green or blue), or a trichroic prism can be used, where the white light is separated directly into three beams of different wavelengths and further directed to the specific sensor for each color (no additional filters applied). In both cases the separate sensors will only register one color each, and full sensor color resolution can be achieved. Since the sensors are separated, the color overlap in the sensor response is not as pronounced as for the single-sensor camera.

Using a triple-sensor system will furthermore minimize the signal strength loss since "all" the existing light in each wavelength range is directed to and captured by individual sensors (Prayagi et al., ND).


3.3 General Sensor Concepts and Properties

The properties presented within this section are general but are more or less focused on Charge-Coupled Device sensors (CCD-sensors) and Complementary Metal-Oxide-Semiconductor sensors (CMOS-sensors), since these are the most common sensors in the RGB-range. Information on specific properties of sensors used in other spectral ranges (e.g. the IR-range) will hence not be presented.

3.3.1 Noise

The noise can be divided into four types:

1. Fixed pattern noise (Nf):

Fixed pattern noise is caused by the sensitivity difference between the pixels. The build-up of the fixed pattern noise differs between the CCD-sensor and the CMOS-sensor, which has to do with the difference in readout between the two sensors (more thoroughly explained in Sec. 3.4, 3.5 and 3.6). At larger signals, the fixed pattern noise will be proportional to the exposure time. The fixed pattern noise will (in most cases) determine the detection limit of a CMOS-sensor.

2. Shot noise (Ns):

Shot noise is a Poisson-distributed noise caused by the statistical variation in the number of photons hitting the sensor. The shot noise is proportional to the square root of the electron signal, and will hence be of importance at low signals (low amount of exposure).

3. Dark shot noise (Nd):

Dark shot noise is caused by the dark current. The dark shot noise is proportional to the square root of the number of electrons generated in the dark state (no external light). The dark shot noise is often larger than the fixed pattern noise, especially at low amounts of exposure.

4. Readout noise (Nr): Readout noise is caused by the amplifier in the readout process and the readout circuit. The readout noise will determine the lower detection limit of the CCD- and CMOS-sensors; however, the build-up and appearance of the readout noise will differ between the two sensors.

The total noise is proportional to √(Nf² + Ns² + Nd² + Nr²). If imaging when no external light is present, the dark shot noise, Nd, along with the readout noise, Nr, will be imaged, where the highest of the two will determine the lowest detection level. If the frame rate is increased, the readout noise should increase, and if the temperature of the sensor is increased, the black current noise will increase. For a CCD, the dark shot noise is often higher than the readout noise and hence determines the detection limit of the CCD. But if the dark shot noise (i.e. the dark current) is reduced below the readout noise, the detection limit of the CCD is lowered to the readout noise level. For the CMOS-sensor, the readout noise will most likely be determined by the fixed pattern noise since it is amplified for each pixel. If the fixed pattern noise of a CMOS-sensor increases (as it does with exposure time for large signals), so does the readout noise (Hamamatsu Photonics, 2014).
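As a small numerical illustration of the quadrature sum above (all component values are assumed examples, not measured figures for the camera used):

```python
import math

def total_noise(n_fixed, n_shot, n_dark, n_read):
    """Total noise as the quadrature sum sqrt(Nf^2 + Ns^2 + Nd^2 + Nr^2)."""
    return math.sqrt(n_fixed**2 + n_shot**2 + n_dark**2 + n_read**2)

# Assumed example values, in electrons
signal = 20_000.0                  # collected photo-electrons
n_shot = math.sqrt(signal)         # shot noise scales with the square root of the signal
noise = total_noise(n_fixed=60.0, n_shot=n_shot, n_dark=15.0, n_read=10.0)
print(f"total noise = {noise:.1f} e-, SNR = {signal / noise:.1f}")
```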


3.3.2 Windowing

The windowing is the choice of material acting as a protective coating (with respect to different applications). The window choice is of importance: a sapphire window will make the sensor mechanically stronger and reduce condensation onto the sensor, and together with an anti-reflection (AR) coating it also offers good transmittance in the visible region. Note that a sensor having no window will have the highest QE, but the sensor will be less durable and more sensitive to influence from external factors (Hamamatsu Photonics, 2014).

For CMOS-sensors, windowing instead refers to the ability to read out a portion of the image sensor (enabling very fast imaging of specific events) (Litwiller, 2001).

3.3.3 Potential well

When photons hit an electrode of the sensor they are transferred to an electrical signal; this electrical signal will differ between the electrodes, and hence a potential well is created. Each electrode will thus have its unique potential well with a specific voltage for each exposure. One often uses the "rain-bucket" analogy in order to explain the potential well, where the photons are the rain and the potential wells are the buckets collecting the rain (i.e. the more photons collected, the more electrical charge is gathered and the potential will increase). Transparent electrodes, made from a polysilicon substrate, are often used as the photosensitive area (the area sensitive to photons) (Hamamatsu Photonics, 2014).

3.3.4 Dynamic range

The dynamic range is defined as the ratio between the maximum level and the minimum detection level (Hamamatsu Photonics, 2014). Using the "rain-bucket" analogy, the minimum level can be described by imagining rocks on the bucket floor: if the collected amount of rain in the bucket is very small, the fluid level will not reach above the rocks and the level will be represented by the rock surface. However, if the fluid covers the rocks, the level is represented by the actual fluid level. The readout noise, always present, is the rocks within the bucket, and the level of fluid (i.e. the amount of charge) should always be above this level in order to make sure that the detected signal is the real signal. The dynamic range is hence defined as the ratio between the saturation charge and the readout noise (the readout noise is the minimum detection limit of a sensor). In order to be able to record a large range of intensities using the same camera settings, the dynamic range should be as high as possible.

3.3.5 Well depth

Returning to the rain-bucket analogy, the potential well depth can be seen as the depth of the buckets. Hence, an increased well depth will increase the number of photons that can be captured before saturation, i.e. increase the saturation charge (Stemmer Imaging, 2019). The dynamic range is the ratio between the saturation charge and the readout noise; thus, an increased well depth (increasing the saturation charge) does not necessarily increase the dynamic range if the readout noise is proportionally increased. However, the dynamic range is often increased with increased well depth. A large full well capacity (i.e. a "deep well") is of importance in itself when measuring signals of high intensities in order to avoid saturation.
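The interplay between well depth, readout noise and dynamic range can be illustrated with assumed example numbers (hypothetical values, not specifications of the camera used); the common convention of expressing the ratio in decibels is applied:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range as the ratio of saturation charge to readout noise, in dB."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

# A deeper well only helps if the readout noise does not grow with it.
print(dynamic_range_db(full_well_e=20_000, read_noise_e=10))   # ~66 dB
print(dynamic_range_db(full_well_e=40_000, read_noise_e=10))   # ~72 dB
print(dynamic_range_db(full_well_e=40_000, read_noise_e=20))   # back to ~66 dB
```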

3.3.6 Sensitivity

The sensitivity is a measure of how much of the photons hitting the sensor is transferred to electrical charge; it is the product of the quantum efficiency (QE) and the fill factor (FF) of the sensor. The quantum efficiency is the number of photons measured relative to how many photons initially hit the sensor and is correlated to the thickness of the absorption layer of the sensor. The fill factor is the ratio of the light-sensitive area to the pixel's total size (hence, if electrical circuits are integrated into the pixel, the fill factor is smaller since the electrical circuit takes up space) (Bigas et al., 2006).

3.3.7 Shuttering

Shuttering is the ability to arbitrarily start and stop the exposure time. This requires an electronic shutter (Litwiller, 2001).

3.3.8 Blooming

Blooming is an effect of leakage between pixels. It is more thoroughly explained in Section 3.4. This is mainly a problem for CCD-sensors (Litwiller, 2001).

3.3.9 SNR

Both shot noise and fixed pattern noise will increase with the amount of exposure (long exposure time), but since the signal increases as well, a better quantity for understanding the influence of the noise is the noise relative to the signal strength. The Signal-to-Noise Ratio (SNR) is the ratio between signal strength and noise. As a rule of thumb, the SNR is determined by the fixed pattern noise when the amount of exposure is high (long exposure time or high signal) and by the shot noise when the amount of exposure is low (short exposure time or low signal). Both noise types increase with the amount of exposure (Hamamatsu Photonics, 2014).

3.3.10 Resolution

The ability of a sensor to reproduce the contrast of a specific spatial frequency is called the spatial resolution. The spatial resolution can be quantified (in the frequency domain) using the modulation transfer function (MTF) (Hamamatsu Photonics, 2014). The MTF of an imaged bar pattern with decreasing size and distances is seen in Fig. 5.


Figure 5: The spatial frequency is shown by the red curve and the MTF is shown by the blue curve. Image credit: Imatest (2000)

As seen, each bar pattern has a spatial frequency representation and an MTF representation. At some spatial frequency, the pattern can no longer be resolved. If the MTF approaches zero, no spatial frequencies can be detected. Below these spatial frequencies no contrast (intensity difference) can be resolved, and the spatial resolution is always above the limit of no contrast. The limiting resolution is determined by the Nyquist limit; however, the real MTF of a CCD or CMOS is determined by the diffusion of the signal occurring when the charges are collected in the silicon substrate electrodes. If the signal has a longer wavelength, the photons can penetrate further into the silicon and cause more diffusion. Thus, the longer the wavelength, the worse the resolution (Hamamatsu Photonics, 2014).
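As a brief illustration of the Nyquist limit mentioned above (the pixel pitch is an assumed example value): a sensor sampling with pixel pitch p cannot resolve spatial frequencies above 1/(2p).

```python
def nyquist_limit_lp_per_mm(pixel_pitch_um):
    """Nyquist-limited spatial frequency, in line pairs per mm, for a given pixel pitch."""
    pitch_mm = pixel_pitch_um * 1e-3
    return 1.0 / (2.0 * pitch_mm)

# Assumed example: a 5.5 um pixel pitch gives roughly 91 lp/mm
print(nyquist_limit_lp_per_mm(5.5))
```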

3.4 Principle of CCD-sensors

Returning to the rain-bucket analogy, the readout process can more easily be described. After exposure, all the buckets are emptied row by row into a line of registration buckets (on the horizontal shift register). When a row of buckets is emptied into the registration buckets, the content of the registration buckets is noted and the buckets are further emptied, bucket by bucket. Since the buckets are emptied in the same order for each readout, the order of the fluid layers is known and the original placement can be back-tracked. With every exposure, all pixels are exposed and the charge is collected simultaneously; hence no "rolling shutter" artifacts will appear. An image to illustrate the bucket analogy is shown below:

Figure 6: The potential wells of a CCD illustrated as buckets filled with rain (photons). The readout process is also illustrated by the arrows. Image credit: Howell (ND)

There are many different types of CCDs, mainly differing in the readout process. The FT-type CCD (FT-CCD) is often used in video cameras. Since video cameras are of interest within the scope of this project, this specific CCD-sensor is more thoroughly described below. In order to collect data fast, there is one photon-sensitive area collecting the photons and one storage area where the last exposure can be stored and further read out (Hamamatsu Photonics, 2014). Using the bucket analogy again, the buckets will not be emptied but only vertically transferred from the photosensitive area to the storage area, and then emptied in the same manner as seen in Figure 6; this will enable approximately twice as high a frame rate. An illustration of the FT-CCD can be seen in Fig. 7.


Figure 7: FT-type CCD, having one photosensitive area and one storage area. Image credit: Hamamatsu Photonics (2014)

The IT-type CCD can also be used for video purposes since it is similar to the FT-CCD. They do however differ in the process going from photosensitive area to the storage area. Using the bucket analogy again one can imagine the storage area being on the ground floor, below the photosensitive area on the second floor. The transfer between photosensitive area and the storage area will occur directly from bucket to bucket, hence the bucket fluid is emptied down into the storage buckets on the first floor and is further read out in the same manner as for the FT-type. The IT-type readout can, however, cause a phenomenon called smear (occurring when transferring the signal from vertical register to horizontal register). Imagining the buckets being filled and then quickly moved, if moved before the buckets on the top-floor are completely emptied, some of the fluid will end up in the wrong bucket and cause a smear. However, if instead using a FIT-type CCD (not thoroughly explained here) this problem is solved.

Blooming is common for CCD-sensors and is caused by leakage from an over-filled bucket into another bucket. In order to solve the often occurring blooming issues of CCD-sensors, the one-dimensional-type CCD was created. It has anti-blooming gates/drains where, using the rain-bucket analogy, the buckets do not leak into the other buckets but instead leak into a "drain bucket" when overfilled. The readout process of the one-dimensional CCD is also made more effective since it has two horizontal shift registers and hence allows for electronic shutter structures.

The CCD-sensor can either be front-illuminated or back-thinned. In the front-illuminated sensor, a lot of light is reflected back and thus the quantum efficiency, QE (i.e. the number of photons measured relative to how many photons hit the sensor), is limited (only about 40% in the visible region). Using a back-thinned type CCD, the light will instead enter at the back of the silicon substrate, which hence offers a much higher QE while also being sensitive in the ultraviolet and visible regions (if high sensitivity in the NIR-region is wanted, a NIR-enhanced back-illuminated CCD is needed).

The linearity of CCD-sensors often deviates somewhat from the ideal line (where γ = 1). The linearity will deviate the most at very low signals; thus, if the linear region of the sensor is wanted, a threshold can be set. The dark current of the CCD is often seen to increase with temperature and should hence be tracked with increasing temperature (Hamamatsu Photonics, 2014).

3.5 Principle of CMOS-sensors

There are several CMOS image sensors having somewhat different working principles and readout methods but the common architectural structure is the pixel-integrated electronic circuit, i.e. each pixel has its own amplifier and transistor (if needed).

The signal is, after exposure, directly transferred to the pixel-integrated AD-converter (analog to digital converter). The now digital signal is further registered by the column decoder (can be seen as the digital horizontal shift register). In the bucket analogy, the buckets are emptied and the fluid measured and noted directly after each exposure. The measurement information is then transferred to the decoder.

A passive pixel sensor (PPS) has one transistor per pixel and often a large fill factor (a large part of the pixel is light sensitive). The PPS is on the other hand relatively slow (in comparison to other CMOS-sensor technologies) and has low SNR, the PPS is thus rarely used today. The active pixel sensor (APS) CMOS uses more transistors per pixel (≈ 2 − 4) and is often faster and has higher SNR than the PPS. This is the most common CMOS-sensor on the market. There is technology allowing for more transistor in each pixel (5 or more), these sensors are very fast and have no column noise.

However, the integrated circuits and the overall implementation are more complex. The CMOS-sensors are highly programmable since one has full access to every pixel of the array. Binning and similar operations are hence easy to implement (Chae, 2013).
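
As an example of such an operation, a minimal Python sketch of 2×2 software binning on an already read-out image is shown below (on-chip binning works differently; the array size and bin factor are arbitrary examples):

import numpy as np

def bin_image(image, bin_size=2):
    """Sum groups of bin_size x bin_size pixels into one super-pixel.

    The image is cropped so that both dimensions are divisible by bin_size.
    """
    h, w = image.shape
    h_crop, w_crop = h - h % bin_size, w - w % bin_size
    cropped = image[:h_crop, :w_crop]
    # Reshape into blocks and sum over the block axes
    return cropped.reshape(h_crop // bin_size, bin_size,
                           w_crop // bin_size, bin_size).sum(axis=(1, 3))

image = np.arange(36).reshape(6, 6)
print(bin_image(image, 2))  # 3 x 3 array of 2 x 2 pixel sums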

Many CMOS-sensors have a relatively small fill factor in comparison to CCD-sensors, since the pixel circuit takes up space in each pixel. However, most newer CMOS-sensors have integrated micro-lenses focusing the light onto the sensitive region of each pixel. This will significantly increase the SNR of the CMOS-sensor. The photon detection process has further been developed from front-side illumination to back-side illumination. In a front-side illuminated CMOS-sensor, the light will pass through the sensor in the following order:

1. microlens
2. color filter
3. metal wiring
4. photodiode substrate

The light must hence pass the metal wiring of the integrated pixel circuit in order to reach the photosensitive area. In back-side illuminated CMOS-sensors, the photodiode substrate is placed in front of the metal wiring, allowing a larger portion of the light to hit the photosensitive area. Back-side illuminated CMOS-sensors hence improve the low-light performance compared to front-side illuminated sensors (Chae, 2013).

A CMOS-sensor can, as mentioned, have different read-out methods, and the CMOS-sensor (with a specific readout method) should be chosen with respect to the application in order to obtain the best performance.

3.6 CMOS-sensor vs. CCD-sensor

The charge-coupled device (CCD) sensor and the complementary metal-oxide-semiconductor (CMOS) sensor are the most commonly used sensors for imaging applications. Both sensors are pixelated metal oxide semiconductors and both accumulate a signal charge in each pixel proportional to the locally captured intensity. To summarize, the CCD will, each time the sensor is exposed to light, transfer the charge of one pixel at a time to a common output structure, convert the charges into voltage and buffer the signal before sending it "off-chip". In a CMOS-sensor, the conversion from pixel charge to voltage instead occurs within each pixel directly after each exposure, and the digital readout is performed row by row (Litwiller, 2001; Hamamatsu Photonics, 2014).

In the article by Litwiller (2001), a comparison between the two sensors (CMOS and CCD) is made by characterizing the sensor performance by eight attributes for different applications. These attributes are presented below, with only a short note on which of the two sensors is considered superior within each attribute, and why.

1. Sensitivity

The CMOS-sensor often has marginally better sensitivity. This is because the gain elements are easier to place on the CMOS-sensor and allow for low-power, high-gain amplifiers. However, the noise will be amplified in the same process. The CCD will have higher power consumption for the same increase of gain. At zero gain, the CCD-sensor might have better sensitivity than the CMOS.

2. Dynamic range and well depth

The dynamic range is often better for CCD-sensors, since one can take advantage of the off-chip circuits having better cooling to reduce the noise. Furthermore, the full pixel size is used for the photon count, whereas in the CMOS-sensor much of the readout electronics is placed within each pixel, decreasing the fill factor and hence the full well capacity (FWC); a small numerical sketch of how FWC and noise relate to the dynamic range is given after this list. The fill factor is however increased in CMOS-sensors if micro-lenses are integrated to focus the light.

3. Uniformity

As mentioned, the CMOS-sensors have much of their electrical circuits in each pixel (amplifiers and transistors). Both the dark shot noise and the pixel pattern noise will hence be amplified, and the noise performance will be worse in both low-light and high-light conditions. Some newer CMOS-sensors have feed-back amplifiers, making the pixel pattern noise significantly lower and close to the CCD performance, but the uniformity is still not as reliable.

4. Shuttering

The CCD-sensors often have superior electronic shutters. If implementing uniform (i.e. global) electronic shutters in CMOS-sensors, a number of transistors are needed within each pixel. This will decrease the sensitive area of the pixel and furthermore decrease the FWC and the dynamic range of the sensor. Furthermore, the pixel pattern noise can be increased.

5. Speed

High speed was the primary motivation for developing the CMOS-sensor, and its speed is hence superior to that of the CCD-sensor. However, it is only at very high frame rates that this advantage becomes important.

6. Windowing (in the sense of CMOS-sensors)

The CMOS-sensor can read out a portion of the image sensor (i.e. one can specify the region of interest, ROI) which makes it possible to track very fast moving objects. These possibilities are limited using a CCD-sensor.

7. Anti-blooming

The CMOS-sensor often has metal frames surrounding each pixel, which prevent the pixel light from leaking into neighbouring pixels even when saturated; hence no blooming will occur (the frames will however influence the image quality in other areas, such as the PSF). Anti-blooming gates and drains can be implemented in CCDs, improving their anti-blooming behavior.

8. Biasing and Clocking

The CMOS operates with a single bias voltage and clock level and will hence be somewhat better than the CCD-sensor, which often requires multiple bias voltages.
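
As referenced under attribute 2 above, the dynamic range is set by the ratio between the full well capacity and the noise floor. A minimal Python sketch of this relation is given below; the FWC and read-noise values are arbitrary example numbers, not measured values for any particular sensor:

import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB from full well capacity and read noise (in electrons)."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Arbitrary example values: a large-pixel sensor vs. a small-pixel sensor
print(f"Large-pixel example: {dynamic_range_db(40000, 8):.1f} dB")
print(f"Small-pixel example: {dynamic_range_db(15000, 5):.1f} dB")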

General comparison CMOS vs. CCD

The primary motivation for the development of the CMOS-sensor was the relatively slow speed, complexity and cost of the CCD-sensor. Hence, it is within these areas that the CMOS is superior to the CCD-sensor. The CMOS-sensor does not, in comparison to the CCD-sensor, require complex external readout electronics, since these are already built into the sensor. The compact and relatively simple design of the CMOS-sensor, in comparison with the CCD-sensor, keeps the price down, and it is hence often less expensive than a CCD-sensor. However, if the CMOS-sensor is to be used for scientific purposes, external cooling and other external electronics would still be necessary, making the CMOS-sensor very similar to the CCD in price.

The readout noise will determine the lower detection limit of the CCD- and CMOS-sensor; however, the build-up and appearance of the readout noise will differ between the two. In a CCD-sensor, the AD-converter (analog to digital converter), including an amplifier, is not placed within the photosensitive region; thus, the pixel variation, i.e. the fixed pattern noise, is not amplified. For CMOS-sensors, ADs are placed at each pixel and the pixel pattern will be amplified. The noise pattern of the CMOS-sensor can, due to these local variations and further amplification, be irregular. The noise evaluation will hence require somewhat more work, as one would have to perform a row-by-row and column-by-column wise evaluation.

To summarize, depending on the application, both the CMOS- and the CCD-sensor can give a satisfactory result. The CMOS-sensors have smaller system sizes and are used in many commercial applications. CMOS-sensors furthermore offer low power consumption and somewhat higher processing speed than the CCD-sensor. However, the image quality of a CCD-sensor is in most cases superior to that of a CMOS-sensor (Litwiller, 2001). One should note that the performance of the CMOS-sensor has been significantly improved in recent years, and the CMOS-sensor in many cases has a lower readout noise than many CCD-sensors and can offer high-quality imaging (Mustafa, 2013). The CCD-sensor technology is however more mature than the CMOS technology, and properties such as sensitivity, noise and dark current are carefully taken into consideration in the CCD-sensor. The primary motivation for the CMOS-sensor development was to have a faster, cheaper and less complex sensor, which has also been achieved.

The local variations of the CMOS-sensor, both in signal strength and noise, can however be severe and cause problems when doing scientific measurements. CMOS-sensors are therefore seldom recommended for scientific purposes.

3.7 Point Spread Function of an Imaging System

If an optical system is focused on a point source of light the size of one pixel of the focal plane array (FPA), and there were no optical or electronic influence on the signal, the FPA would detect only one bright pixel and the rest of the pixels would be zero (if the pixel is not saturated). In reality, the light of a small point source will not be of the same size on the FPA but will instead spread out after going through the optical and electronic system of the camera. This is because the imaging aperture lens will not focus the light perfectly; instead, the light waves converge and interfere at the focal point, which produces diffraction patterns. The three-dimensional diffraction pattern will appear as a high-intensity center peak with low-intensity concentric ripple rings surrounding the main peak (Rottenfusser et al., ND).

Figure 8: Illustration of a diffraction limited PSF. Image credit: Rottenfusser et al. (ND)


If an optical system produces images with an angular resolution as good as the theoretical limit (the diffraction limit), the system is said to be diffraction limited, and the minimum resolvable angle of the system can be described by Eq. 7.

θ = λ / (2 n sin φ)    (7)

θ is the angular resolution, λ is the wavelength, n is the refractive index of the medium, and φ is the half-angle to the optical objective lens. n sin φ is often called the numerical aperture (NA) and is determined by the range of angles over which the system can accept light (constant for the aperture if the lens and mirrors are fixed). The PSF of a perfectly diffraction limited system can further be approximated by the Airy disk given by Eq. 8 (using the f-number of the imaging optics).

d = 1.22 λ N / 2    (8)

where N is the f-number of the imaging optics. Eq. 8 can be used to approximate the PSF if the f-number and the wavelength range are known. As seen in Eq. 8, the larger the wavelength, the worse the spatial resolution; hence, a thermal imaging device (operating at relatively long wavelengths) will tend to underestimate the temperature of very small hot spots. The PSF will affect the pixel intensity values (and furthermore the temperature) due to overlap of the pixel intensity values (i.e. crosstalk between the pixels) and should hence be accounted for (Lane et al., 2013).
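
As a small numerical sketch of Eq. 7 and Eq. 8, the Python snippet below evaluates the diffraction-limited angular resolution and spot size for two example wavelengths; the wavelengths, half-angle and f-number are assumed example values and not the settings used in this work:

import math

def angular_resolution(wavelength, n=1.0, half_angle_deg=10.0):
    """Angular resolution according to Eq. 7: theta = lambda / (2 * n * sin(phi))."""
    return wavelength / (2 * n * math.sin(math.radians(half_angle_deg)))

def airy_spot(wavelength, f_number):
    """Diffraction-limited spot size according to Eq. 8: d = 1.22 * lambda * N / 2."""
    return 1.22 * wavelength * f_number / 2

# Example: a blue (450 nm) and a red (650 nm) channel imaged at f/8
for wl in (450e-9, 650e-9):
    print(f"{wl * 1e9:.0f} nm: theta = {angular_resolution(wl):.2e} rad, "
          f"d = {airy_spot(wl, 8) * 1e6:.2f} um")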

In an imaging system the source signal will, however, not only be affected by diffraction effects but also by the pixel grid (more severe for CMOS-sensors), where the combined effect is described by the PSF. The best way to determine the point spread function of the camera is by experiments. This is most commonly done by using a point source of known size and measuring the spread, or by imaging calibration patterns.
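
A possible way to evaluate such a point-source measurement is sketched below, where a simple isotropic 2-D Gaussian is fitted to a (here synthetic) image of a small bright spot in order to estimate the PSF width; the Gaussian PSF model and all numbers are assumptions made only for illustration:

import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, amplitude, x0, y0, sigma, offset):
    """Isotropic 2-D Gaussian used as a simple PSF model."""
    x, y = coords
    g = amplitude * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset
    return g.ravel()

# Synthetic point-source image (in a real measurement this is the camera frame)
y, x = np.mgrid[0:21, 0:21]
image = gaussian_2d((x, y), 1000, 10.3, 9.7, 1.8, 20).reshape(21, 21)
image += np.random.default_rng(1).normal(0, 5, image.shape)

# Fit the Gaussian model and report the estimated PSF width
popt, _ = curve_fit(gaussian_2d, (x, y), image.ravel(),
                    p0=(image.max(), 10, 10, 2, 0))
print(f"Estimated PSF sigma: {popt[3]:.2f} px (FWHM = {2.355 * popt[3]:.2f} px)")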

The measurement will always include the influence of the PSF, and so should the theoretical approximation of the image (i.e. the digital number). The equation for ∆DN (see Eq. 16) does not include the PSF, which will result in measurement errors. By including the convolution between the PSF and the signal, the approximated ratio between the DN number of the red and blue channel will instead be:

R∆DN,psf = (PSF_red ∗ ∆DN_red) / (PSF_blue ∗ ∆DN_blue)    (9)

R∆DN,psf will thus be a better approximation of the measured DN-ratio. In order to retrieve a sharper image, and thus recover all real pixel variations, the image matrix must be deconvolved with the PSF. Since the PSF is wavelength dependent, it will most likely differ somewhat between the channels of an RGB-camera (red, green, blue). Thus, each color channel image should be deconvolved with the PSF of that channel. The PSF is furthermore most likely shift-variant (i.e. it can vary depending on where on the image plane the light is measured). The shift-variance is often difficult to take into account, both due to the difficulties of measuring the shift-variant PSF and due to the added complexity of the deconvolution (between the PSF and the image) (Pirinen and Toytziaridis, 2015). In Pirinen and Toytziaridis (2015), a well-described and easy-to-follow measurement of the PSF of an RGB-camera can be found. Measurement of the camera PSF will however not be done within the scope of this project, since it is not considered the primary source of error.
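
As a minimal sketch of how the PSF-weighted ratio in Eq. 9 could be evaluated, the Python snippet below convolves synthetic red and blue ∆DN images with simple Gaussian stand-in PSFs; in a real application the measured, channel-specific PSFs and the actual ∆DN images would be used instead:

import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(sigma, size=11):
    """Simple normalized Gaussian kernel used as a stand-in PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

# Synthetic modelled DN-differences for the red and blue channels
rng = np.random.default_rng(2)
dn_red = rng.uniform(100, 4000, size=(64, 64))
dn_blue = rng.uniform(100, 4000, size=(64, 64))

# Channel-specific PSFs (blue is here assumed slightly sharper than red)
psf_red, psf_blue = gaussian_kernel(2.0), gaussian_kernel(1.6)

# Eq. 9: ratio of the PSF-convolved DN images
r_dn_psf = (fftconvolve(dn_red, psf_red, mode='same') /
            fftconvolve(dn_blue, psf_blue, mode='same'))
print(r_dn_psf.shape, r_dn_psf.mean())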

References
