Stand-alone Dual Sensing Single Pixel Camera in SWIR


Department of Science and Technology
Linköping University

LiU-ITN-TEK-A--19/024--SE

Stand-alone Dual Sensing Single Pixel Camera in SWIR

Master's thesis in Electrical Engineering
at the Institute of Technology, Linköping University

Martin Oja
Sebastian Olsson

Supervisor: Qin-Zhong Ye
Examiner: Magnus Karlsson


Copyright

The publishers will keep this document online on the Internet - or its possible

replacement - for a considerable time from the date of publication barring

exceptional circumstances.

The online availability of the document implies a permanent permission for

anyone to read, to download, to print out single copies for your own use and to

use it unchanged for any non-commercial research and educational purpose.

Subsequent transfers of copyright cannot revoke this permission. All other uses

of the document are conditional on the consent of the copyright owner. The

publisher has taken technical and administrative measures to assure authenticity,

security and accessibility.

According to intellectual property law the author has the right to be

mentioned when his/her work is accessed as described above and to be protected

against infringement.

For additional information about the Linköping University Electronic Press

and its procedures for publication and for assurance of document integrity,

please refer to its WWW home page:

http://www.ep.liu.se/


Abstract

A single pixel camera (SPC) is just that: a camera that uses only a single pixel to take images. There is, however, a bit more to it than just a pixel. It requires several components, which will be explained in this thesis. For it to be viable it also needs the sampling technique Compressive Sensing, which compresses the data in the sampling stage, thus reducing the amount of data that must be sampled in order to reconstruct an image. This thesis presents the method of building an SPC with the required hardware and software. Different types of experiments, such as detection of small changes in a scene and imaging in different wavelength bands, have been conducted in order to test the performance and application areas of the SPC. The resulting system is able to produce images at resolutions up to 512 × 512 pixels. Disturbances such as movement in the scene, or the camera itself being shaken, became less of a problem with the addition of a second pixel. This thesis shows that an SPC is a viable technology with many different areas of application, and that it is a relatively cheap way of making a camera for the infrared spectrum.


Acknowledgments

Firstly, we are grateful for the opportunity to carry out our master thesis at Totalförsvarets Forskningsinstitut (FOI) in Linköping. Special thanks to our supervisors Carl Brännlund, David Gustafsson, David Bergström and Andreas Brorsson at FOI. We would also like to thank our examiner Magnus Karlsson and supervisor Qin-Zhong Ye at Linköping University. Without every one of them this project would not have been possible. The time at FOI, working on this thesis, has been a true learning experience with a great amount of fun and lots of exciting challenges.


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables
Nomenclature
Abbreviations

1 Introduction
1.1 Background
1.2 Motivation
1.3 Aim
1.4 Research questions
1.5 Delimitations
1.6 Outline

2 Related work
2.1 Single Pixel Camera and Compressive Sensing

3 Camera architecture and compressive sensing
3.1 Single Pixel Camera
3.2 Software
3.2.1 Patterns and frames
3.2.2 Compressive Sensing
3.2.3 Signal processing
3.2.4 Image reconstruction
3.3 Hardware
3.3.1 Analog filters and amplification
3.3.2 DMD and DLP
3.3.3 Data Acquisition system
3.3.4 Photodetector
3.4 Sources of noise
3.4.1 External noise
3.4.2 Electrical noise

4 Hardware and software design
4.1 SPC
4.2 Control of DMD
4.3 Filter and amplification design
4.3.1 PCB design
4.3.1.1 First PCB design
4.3.1.2 Second PCB design
4.4 PicoScope as a Data Acquisition system
4.5 Reconstructing images
4.5.1 Mean calculation and signal combination
4.5.2 Total variation
4.6 Noise reduction
4.6.1 Reduction of noise in the scene
4.6.2 Reduction of electrical noise

5 Conducted experiments
5.1 Differences when using two detectors versus one
5.2 Test of filter topologies and coefficients
5.3 Detection of pixel change in a scene
5.4 Different wavelengths of the same scene

6 Results
6.1 Comparison of one vs two detectors
6.1.1 Signal to noise ratio
6.1.2 Natural scenes
6.2 Signal truncation for different filter orders
6.3 Detection of a simulated muzzle flash
6.4 Visual and SWIR spectrums
6.5 General system results
6.5.1 Robustness
6.5.2 Speed and quality

7 Discussion
7.1 Differences when using two detectors versus one
7.2 Test of filter topology and coefficients
7.3 Detection of pixel change in the scene
7.4 Different wavelengths of the same scene
7.5 Dual Sensing Single Pixel Camera
7.5.1 Design process of the PCB
7.6 Problems, future work and applications

8 Conclusion

Bibliography

List of Figures

3.1 Illustration of an SPC-system.
3.2 Illustration of the decomposition of a 24-bit RGB frame into 1-bit planes.
3.3 Example of a signal with the spikes in between pattern changes.
3.4 Illustration of a DMD where the individual mirrors and their ability to tilt are shown.
4.1 Illustration of an SPC with two detectors.
4.2 Image of the Dual Sensing Single Pixel Camera system developed in this master thesis, with numbered components: 1 the DLP and DMD, 2 the photodetectors, 3 the lenses that focus the light onto the detectors, 4 the mirrors that direct the incoming light onto the DMD, 5 the USB-camera, 6 the mirror used with the USB-camera to see the DMD.
4.3 Parts of the schematic for the first design.
4.4 Layout of the first PCB design.
4.5 Schematic of the filter and amplification card.
4.6 The final PCB design.
4.7 Graph of a measurement with no filter or amplification, hence a raw signal including both AC and DC components.
4.8 Comparison of a signal filtered and amplified (left) versus unfiltered and unamplified (right).
4.9 Raw signal from the photodetector on the left, and a zoomed-in graph of the DC drop that occurs when the first pattern is displayed on the DMD.
4.10 In the upper graph both the raw signal and the calculated mean value signal are shown; in the lower graph the normalized mean value signal is shown.
4.11 Signals showing the effects of using Y = A − B to remove DC shifts.
4.12 The reconstructed image of the signal in Figure 4.11 with 5% subsampling.
5.1 The resolution board used to take the measurements for comparing one versus two photodetectors.
5.2 Reconstructed image with a resolution of 128 × 128; the blue and red quadrants represent the white and black uniform areas used for SNR calculations.
6.1 Comparison of reconstructed images using signals from one versus two detectors. In (a) and (b) the top row are images from one detector and the bottom row are images from two detectors.
6.2 SNR for a resolution of 64 × 64.
6.3 Comparison of reconstructed images using signals from one versus two detectors.
6.4 SNR for a resolution of 128 × 128.
6.5 Comparison of reconstructed images using signals from one versus two detectors.
6.6 SNR for a resolution of 256 × 256.
6.7 Comparison of reconstructed images using signals from one versus two detectors.
6.8 SNR for a resolution of 512 × 512.
6.9 Comparison of reconstructed images at different subsampling ratios using one detector versus two detectors. The two leftmost columns show a construction crane and the two rightmost columns show a sign that says P&L on some scaffolding.
6.10 Signal truncation for different orders of filters.
6.11 Detection of a laser being shone at a reflective object in a tree, and the two images being combined to detect the laser.
6.12 Chance of detecting the muzzle flash compared to the number of patterns used, with two different methods for detection.
6.13 Comparison between images reconstructed using one SWIR detector and one visual detector at the same time, resolution 128 × 128.
6.14 Comparison between images reconstructed using one SWIR detector and one visual detector at the same time, resolution 256 × 256.
6.15 Comparison between images reconstructed using one SWIR detector and one visual detector at the same time, resolution 512 × 512.
6.16 Reconstructed images of a construction crane moving during the measurement.
6.17 Speed versus quality comparison for resolutions from 32 × 32 up to 512 × 512; images from left to right for each row: roughly discernible objects, starting to be able to discern details, best possible reconstruction of the image, and on the far right the reference image. The compression percentage for each resolution is given in Table 6.1.

List of Tables


Nomenclature

ε    Noise in the scene
Φ    Measurement matrix
Ψ    A sparse basis
A    Signal from the first detector, positive amplitude
B    Signal from the second detector, negative amplitude
C    Intensity of light parameter
f    Frequency
M    Number of patterns on the DMD
n    Resolution parameter
x    Scene of the camera as an array
Y    Combined signal from A and B
y    Sampled signal


Abbreviations

ADC Analog-to-Digital Converter

CS Compressive sensing

DAQ Data Acquisition system

DLP DLP4500 Lightcrafter™

DMD Digital Micromirror Device

FOI Totalförsvarets Forskningsinstitut

FPS Frames Per Second

OP amp Operational amplifier

PGA Programmable gain amplifier

SPC Single pixel camera

SWIR Short-wave infrared


1 Introduction

This chapter gives a brief insight into what this master thesis is about: the background and motivation, and the goals it aimed to achieve.

1.1 Background

A normal single-lens reflex camera that can be bought in any store can have several million pixels, and you can get a good camera relatively cheaply compared to an infrared camera. A camera for the infrared spectrum is expensive and often not something for the everyday person, due to the high cost and limited area of use. Now, why is it interesting to have a camera that can see in infrared? And more importantly, why is it interesting to have a camera with only one pixel? The answer to the first question is that infrared cameras have the ability to see through smoke and fog to some extent; some materials or substances, such as gases and textiles, can also be more visible in the infrared spectrum than in the visual spectrum. An infrared camera can also see during the night, or with very low levels of light, due to the night glow coming from space and the atmosphere, which is stronger in infrared than in the visual spectrum [7]. The answer to the second question, why it is interesting to have only one pixel, is given by this thesis.

A Single pixel camera (SPC) works by measuring the light intensity of the scene as it is seen through random black and white patterns, and then pairing these measurements with the corresponding patterns to recreate an image. What makes an SPC actually usable is the sampling technique called Compressive sensing (CS). As the name suggests, it compresses the signal and image in the sampling step, thus eliminating a lot of unnecessary data. Totalförsvarets Forskningsinstitut (FOI), among others, has researched SPCs and CS, and has constructed an SPC system which is used and improved in this thesis. There are several areas of FOI's system to improve, such as the synchronization between system and measurements, the signal quality through noise reduction, the total time to take an image and the overall robustness of the SPC.


1.2 Motivation

While SPCs are not a new concept, they have become much more interesting in recent years due to CS, the signal processing technique that makes SPCs more viable. CS reduces the time it takes to reconstruct an image by reducing the amount of data needed. However, the focus of this thesis lies on the hardware and control of the camera system, not on the algorithms for image recovery. Although SPCs have become much faster and more useful with the application of CS, the time it takes to acquire the data is still limited by the hardware. Thus, it is of interest to optimize the hardware for increased speed and quality.

1.3 Aim

The purpose of this thesis is to increase the speed, quality, robustness and consistency of an existing SPC made for Short-wave infrared (SWIR) at FOI, by implementing new hardware and software. As to why a camera in SWIR is needed: some objects may be hard to see in the visual spectrum, such as camouflage clothing in a forest, as it blends in with the surroundings. The clothing might blend in well in the visual spectrum but can show up very clearly in the SWIR spectrum, which is why it is interesting to design a camera for that wavelength. In addition, new uses for an SPC will be examined in order to find new areas in which an SPC may be used. For example, can it be used as a low-powered surveillance camera which can see that which might not be visible in the visual spectrum? Can it be used to detect very brief changes in a scene, such as a muzzle flash from a fired weapon?

1.4 Research questions

1. When measuring and reconstructing an image, are two photodetectors better than one regarding speed, quality and noise?

2. Is it possible to improve the results by reducing the amount of noise by filtering the photodetector output?

3. How does an analog filter affect the signal and image quality?

4. Is it possible to detect movement in a scene without reconstructing an image?

1.5 Delimitations

In this thesis Compressive Sensing is used through an open source Matlab algorithm, but little focus is placed on the mathematics behind it. The theory behind it is explained only briefly, with a mathematical model presented for a better understanding. As there are many different experiments and tests of applications for an SPC in SWIR, only a select few are carried out. They were chosen with the intent of evaluating the performance of the camera. Other tests may be mentioned as future work in the discussion.

1.6 Outline

This section gives a brief outline of the contents of each chapter in this thesis.

• Chapter 1 gives an introduction to this master thesis with the background to the project and the motivation for making an SPC. It also details the aims and research questions that this thesis works towards answering. Lastly, the delimitations of this thesis are presented.


• Chapter 2 presents papers and articles by different authors whose work is similar or connected to the work presented in this thesis. They lay the foundation for the theories and methods for single pixel imaging using CS.

• Chapter 3 explains the theory behind SPCs and the software and hardware for the SPC in this thesis. It presents all the different parts, both positive and negative, that affect the SPC system.

• Chapter 4 gives an in-depth explanation of how the work was done. It explains all the steps taken with the software, the choice of components, how the hardware was built, how noise was handled and how the images are reconstructed.

• Chapter 5 describes the different experiments that were conducted to test and validate the system, the reasons for the experiments and how they were conducted.

• Chapter 6 presents all the achieved results from the experiments, in the form of images, diagrams and tables.

• Chapter 7 contains a discussion surrounding the results of the experiments and the approach of the thesis work. In this chapter the problems faced during the thesis are discussed, as well as how to move forward with improving the system and some applications for the SPC.

• Chapter 8 concludes the thesis, gives answers to the research questions and ends with some final words from the authors about the master thesis as a whole and the achieved results.


2 Related work

Articles, papers and reports related to this thesis are listed and explained in this chapter. They contain information about how SPCs are built and how they work, as well as how CS works and why it is needed.

2.1 Single Pixel Camera and Compressive Sensing

• [2] "Detection and localization of light flashes using a single pixel camera in SWIR" by Carl Brännlund et al is an article, in which the authors were a part of, using the same SPC as the one in this thesis. It shows a possible application for SPC:s where it could be used to detect changes in the scene.

• [3] "Performance evaluation of a two detector camera for real-time video" by B Lo-chocki, et al is an example of an SPC-system that uses two separate photodetectors. Though, their images were taken in a controlled laboratory environment and not of na-ture scenes. It still provides good information on how a dual detector system works and the improvements gained due to an extra detector.

• [4] "Compressive Sensing:Single Pixel SWIR Imaging of Natural Scenes" by Andreas Brorsson is a master thesis about the SPC-system which lie as a basis for this thesis. The aim of Brorsson’s thesis was more about Compressive sensing and not so much about the hardware of the system. His report contains much relevant information surround-ing the processes in ssurround-ingle pixel imagsurround-ing. The thesis covers how the camera is built and how it runs and also how an image is reconstructed.

• [8] "An introduction to Compressive Sensing" by Emmanuel J. Candès and Michael B. Wakin gives good and easy to understand explanations about what Compressive Sensing is and how it works. As Compressive Sensing is not a focus of this thesis it is still a big part of the SPC system and an overarching understanding of CS is good to have.

• [10, 11] "Real-time single-pixel video imaging with Fourier domain regularization" and "Single-pixel imaging with Morlet wavelet correlated random patterns" are two papers by Krzysztof M. Czajkowski, Anna Pastuszczak and Rafał Koty ´ns. They explain differ-ent methods for single-pixel imaging, both simulated and with a real SPC-system.


• [12] "Compressive Sensing for 3D Data Processing Tasks: Applications, Models and Algorithms" by Chengbo Li provides an in depth explanation of image reconstruction and particularly TVAL3. Chengbo Li is the creator of the reconstruction method TVAL3 which will be used in this thesis. This work provides guidelines to the parameter values used in the algorithm but also the meaning and effects of each parameter.

• [13] "Principles and prospects for single-pixel imaging" by Matthew P. Edgar, Graham M. Gibson, and Miles J. Padgett contains easy to understand explanations to how an SPC works and also the mathematics behind single-pixel imaging, i.e Compressive Sensing.


3 Camera architecture and compressive sensing

In this chapter the theories, software and hardware behind an SPC are explained.

3.1 Single Pixel Camera

An SPC works quite differently from a regular camera. A normal camera may have, for example, 10 megapixels: when an image is taken, the incoming light hits a light detector inside the camera which breaks the light up into 10 million pixels. An SPC, however, manages to take images using only one pixel, as the name suggests. For this to work a couple of things are needed: a spatial light modulator (in this thesis a Digital Micromirror Device (DMD) is used), lenses or mirrors, a photodetector, a Data Acquisition system (DAQ) and finally some device to reconstruct the image, for example a PC. Figure 3.1 shows an example of what an SPC setup may look like. Note that the mirrors, DMD, lens and photodetector are all mounted inside a case, thus limiting the effects of surrounding light. The red lines represent the light from the scene that hits the concave mirror, which directs it towards the mirror in the middle. The light is then focused onto the DMD, which redirects the light either towards the photodetector or away from it, determined by randomized binary patterns. The light then hits a lens which focuses it onto the photodetector, where it is measured as a voltage level and sampled using a DAQ. From there it is stored on a PC for further processing and image reconstruction.

3.2 Software

This section contains explanations of all the software needed for the SPC in this thesis.

3.2.1 Patterns and frames

A vital part of SPCs is the random black and white patterns, used either actively, where the pattern is projected onto a target and a photodetector measures the reflected light, or, as in this thesis, passively, where the individual mirrors on a DMD are tilted according to the 1-bit plane patterns. The incoming light is shone onto the DMD and the reflected light from the mirrors facing the detector is measured.


Figure 3.1: Illustration of an SPC-system.

An image is taken using a set of frames, each comprised of 24 patterns. These frames are in the form of pseudo-random patterns of black and white, where "white" and "black" indicate whether the individual micromirrors are tilted towards the photodetector (white) or away from it (black). The 24 patterns are decomposed from a 24-bit RGB frame, as shown in Figure 3.2. The patterns are made from randomized matrices, which can be generated in several different ways; the matrices used in this thesis, called Walsh-Hadamard (WH), are explained in section 3.2.4.

Figure 3.2: Illustration of the decomposing of an 24-bit RGB frame into 1-bit planes.
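As an illustration of this packing, the Matlab sketch below packs 24 pseudo-random 1-bit planes into one 24-bit RGB frame and splits it apart again, the way the DLP does on reception. The variable names and the exact bit order are our own illustrative assumptions, not taken from the thesis.

```matlab
% Sketch: packing 24 binary patterns into a 24-bit RGB frame (assumed
% bit order: 8 bits per channel, R first). Names are illustrative.
n = 64;                                   % pattern resolution, n x n
patterns = rand(n, n, 24) > 0.5;          % 24 pseudo-random 1-bit planes

frame = zeros(n, n, 3, 'uint8');          % 8 bits per RGB channel
for k = 1:24
    ch  = ceil(k/8);                      % channel 1..3 (R, G, B)
    bit = k - 8*(ch - 1);                 % bit 1..8 within that channel
    frame(:,:,ch) = bitset(frame(:,:,ch), bit, uint8(patterns(:,:,k)));
end

recovered = false(n, n, 24);              % the DLP's 1-bit plane split
for k = 1:24
    ch  = ceil(k/8);
    bit = k - 8*(ch - 1);
    recovered(:,:,k) = bitget(frame(:,:,ch), bit) > 0;
end
assert(isequal(patterns, recovered))      % round trip is lossless
```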


3.2.2 Compressive Sensing

CS is a technique used in signal processing that enables the practical use of SPCs, as it significantly reduces the amount of data required to reconstruct an image. CS can recover images and signals from far fewer samples than traditional methods require, and is thus not bound by the Nyquist-Shannon sampling criterion [16, 17].

For this to work, CS has two requirements: sparsity and incoherence [8]. Sparsity in a signal or image means that most of the signal is zero or close to zero in some basis, which means that the signal or image is compressible in that basis, e.g. a wavelet or gradient basis. The incoherence requirement is fulfilled by using randomized patterns, or measurement matrices, which are projected from the DMD [1]. The mathematical model for CS is defined as

    y = Φx + ε,    (3.1)

in which Φ (of size M × N) is the pattern, or measurement matrix, where M is the number of measurements and N the number of pixels; x (of size N × 1) is the scene the camera is pointed at, considered as an image array with N pixels; ε is the noise in the scene; and y (of size M × 1) is the sampled signal. CS makes it possible for the number of measurements M to be relatively small compared to the number of pixels N. What enables this is the fact that the image x can be expressed in another basis where it is sparse, as seen in (3.2):

    Ψθ = x,    (3.2)

where Ψ (of size N × N) represents a sparse basis, e.g. a gradient basis, and θ (of size N × 1) contains the coefficients in that basis. If x expressed in the basis Ψ contains K non-zero values, θ is said to be K-sparse. Given this, (3.1) can be combined with (3.2), giving the expression (3.3):

    y = Φx + ε = ΦΨθ + ε,    (3.3)

where ΦΨ is the reconstruction matrix A [1]. All of this is done in the open source algorithm TVAL3 used in this thesis (more on TVAL3 in section 3.2.4).
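To make the model concrete, here is a minimal Matlab sketch of (3.1)-(3.3) on a toy one-dimensional signal. The sizes and the identity sparsity basis are illustrative assumptions; the thesis works with 2-D scenes, a gradient basis and Walsh-Hadamard patterns.

```matlab
% Sketch of the CS measurement model (3.1)-(3.3) on a toy 1-D signal.
N = 256;                          % number of pixels
M = 64;                           % number of measurements, M << N
K = 8;                            % sparsity of the scene

theta = zeros(N, 1);              % K-sparse coefficient vector
theta(randperm(N, K)) = randn(K, 1);

Psi = eye(N);                     % sparse basis (identity, for simplicity)
x = Psi * theta;                  % scene, per (3.2)

Phi = double(rand(M, N) > 0.5);   % random binary measurement matrix
epsn = 1e-3 * randn(M, 1);        % noise in the scene
y = Phi * x + epsn;               % sampled signal, per (3.1)

A = Phi * Psi;                    % reconstruction matrix handed to the solver
```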

3.2.3 Signal processing

The photodetectors output an analog DC voltage signal depending on the light intensity they detect. This signal is affected by the shot noise of the detectors and the thermal noise of the electrical components, in addition to all the noise that might come from the surrounding environment. This noise can be filtered through analog and/or digital filters. Signal processing is in essence a way to extract the necessary parts of a signal. The signal used to reconstruct an image is produced when the patterns are streamed to the DMD via HDMI, which displays the patterns by turning its individual mirrors ±12°. The signal strength, or DC level, depends on how many mirrors are reflecting light onto the detector. Between every pattern all mirrors reset to a neutral position, i.e. 0°. This results in a spike in the signal, either downwards or upwards depending on which detector the signal is coming from; this can be seen in the raw AC signal from the detectors presented in Figure 3.3. The amount of noise in the signal and the strength of the signal depend on several things: weather, turbulence, movement in the scene and so on. For example, bad weather such as rain will increase the noise and lower the signal strength, while bright sunlight will increase the signal strength but not necessarily decrease the noise, since there may be much turbulence in the air when the weather is warm. To be able to take images in different conditions the signal has to be processed, both in hardware and in software. Analog filtering, DC blocking, amplification and a higher sampling frequency are handled by the hardware; averaging and a moving mean function are used in software to improve the signal.


Figure 3.3: Example of a signal with the spikes in between pattern changes.

3.2.4 Image reconstruction

Once the scene has been measured, and the signal has been sampled and processed in Matlab, the last step is the image reconstruction. This is done in Matlab with the open source CS algorithm TVAL3 [6]. TVAL3 is short for "Total Variation Minimization by Augmented Lagrangian and Alternating Direction Algorithms" and it has four different models for Total Variation minimization. The one used in this thesis is the TV+ model seen in (3.4):

    min_{x ∈ ℝⁿ} Σ_i ‖D_i x‖_p,  s.t.  Φx = y,  x ≥ 0,    (3.4)

where D_i is the discrete gradient of x at position i and the other variables are as in (3.1)-(3.3). In Matlab, TVAL3 takes five input arguments and can have either one or two output arguments; the Matlab interface is shown in (3.5):

    [U, out] = TVAL3(A, y, n1, n2, opts),    (3.5)

where U is the main output containing the reconstructed image and out is a secondary, optional output. As for the input arguments, A is the reconstruction matrix, y is the measured and processed signal, n1 and n2 give the size of the image, and opts contains a structure of control options. The different parameter options control the quality and characteristics of the reconstructed image; the parameters are listed below:

• opts.mu = 2^8 (primary penalty parameter)
• opts.beta = 2^5 (secondary penalty parameter)
• opts.mu0 = opts.mu (initial mu for continuation)
• opts.beta0 = opts.beta (initial beta for continuation)
• opts.tol = 1.e-6 (outer stopping tolerance)
• opts.tolinn = 1.e-3 (inner stopping tolerance)
• opts.maxit = 1025 (maximum total iterations)
• opts.maxcnt = 10 (maximum outer iterations)
• opts.TVnorm = 2 (isotropic or anisotropic TV)
• opts.nonneg = false (switch for non-negative models)
• opts.TVL2 = false (switch for TV/L2 models)
• opts.isreal = false (switch for real signals/images)

Note that these values are the default values taken directly from the user's guide to TVAL3; in this thesis they have been changed from image to image. See section 4.5.2 for a short explanation of the settings used in this thesis, and the user's guide for a description of all settings [6].
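For illustration, a sketch of a TVAL3 call following the interface in (3.5) might look as follows. It assumes TVAL3 is on the Matlab path and that the reconstruction matrix A and the processed signal y already exist; the option values are examples in the spirit of the defaults above, and in practice they were tuned per image.

```matlab
% Sketch of a TVAL3 call per (3.5); A and y are assumed to exist already.
n = 128;                      % reconstructing an n x n image

opts = struct();
opts.mu     = 2^8;            % primary penalty parameter (noise dependent)
opts.beta   = 2^5;            % secondary penalty parameter
opts.tol    = 1e-6;           % outer stopping tolerance
opts.tolinn = 1e-3;           % inner stopping tolerance
opts.maxit  = 1025;           % maximum total iterations
opts.TVnorm = 2;              % isotropic TV
opts.nonneg = false;          % model selection, see section 4.5.2
opts.isreal = false;

[U, out] = TVAL3(A, y, n, n, opts);
imagesc(U); colormap gray; axis image    % show the reconstruction
```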

Aside from TVAL3, another essential part of the image reconstruction is the randomized measurement matrices, also referred to as patterns in this thesis. They are created using WH matrices which have been generated in advance, randomized and stored on a PC. A WH matrix is a type of measurement matrix suited for CS: a Hadamard matrix combined with a matrix created using Walsh code [5, 4]. The reason for storing these premade matrices is to reduce the time it takes to reconstruct an image, as the matrices do not have to be generated each time.
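A minimal sketch of how such a premade matrix could be generated is shown below. Randomly permuting the rows and columns of a Hadamard matrix is one common construction; the thesis' exact randomization procedure may differ.

```matlab
% Sketch: premaking a randomized Walsh-Hadamard (WH) style pattern matrix.
n = 32;                             % small example resolution, image n x n
N = n^2;                            % N must admit a Hadamard matrix

H  = hadamard(N);                   % entries +1/-1
WH = H(randperm(N), randperm(N));   % randomize pattern order and pixel order

Phi = (WH + 1) / 2;                 % map +1/-1 to 1/0 mirror states
pattern1 = reshape(Phi(1, :), n, n);% one binary DMD pattern

save('wh_patterns.mat', 'Phi');     % store the premade matrix for reuse
```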

The image reconstruction can be explained as a comparison between the measurement matrix, i.e. the WH matrix, and the actual scene. If a part of the scene is white and a pattern happens to be white in the same area, TVAL3 concludes, with increasing certainty as more patterns agree with the scene, that this is what the scene must look like in that specific area. If there is noise, such as movement in the scene, the comparison between the pattern and the scene may change, making the algorithm unsure of what the scene looks like. This causes the reconstructed image to become blurry.

A small change, such as a bird passing by or a light being turned on or off, does not affect the image quality significantly, as there is much more of the signal without this movement than with it. Thus the algorithm can still be certain of what the scene looks like. Also, the disturbance might be in the later part of the measurement, and thanks to CS that data is not needed to recreate the image.

3.3 Hardware

All the hardware used for the SPC in this thesis is explained in this section.

3.3.1 Analog filters and amplification

An analog filter, commonly a low pass filter, is often placed before an Analog-to-Digital Converter (ADC) with the purpose of removing as much noise as possible, so that the noise is not sampled by the ADC. As an ADC has a reference level which is set according to the amplitude of the signal, the signal might have to be amplified in order for the sampled signal to have good resolution. Another reason for amplifying the signal is to separate it from the noise floor. During the amplification step the signal should contain as little noise as possible, because the noise will also be amplified. This puts great emphasis on a good filter before the amplification.

3.3.2 DMD and DLP

A DMD is a spatial light modulator which uses mirrors to direct incoming light. It is in essence a mirror matrix where each matrix element is an individual mirror which can tilt a set angle (e.g. ±10-12°, depending on the device), positive or negative relative to the normal of the mirror. Mirrors tilted positive can be seen as a '1', meaning the light is reflected onto the detector, and mirrors tilted negative as a '0', meaning no light is reflected onto the detector. Thus, these mirrors can be used to project the binary patterns that make up the measurement matrices used for taking images. Figure 3.4 shows how a DMD is made up of mirrors. The DMD is mounted on a DLP, which powers and controls the DMD. The DLP determines how the individual mirrors of the DMD should tilt, and it also has a built-in synchronization signal which sends out a pulse each time the DMD changes pattern. This synchronization signal is used in the signal processing, as explained in section 4.5.1.

Figure 3.4: Illustration of a DMD where the individual mirrors and their ability to tilt are shown [15].

3.3.3 Data Acquisition system

The purpose of a DAQ is to take an analog signal as input and create a digital representation of it. Data is acquired by sampling the signal at a chosen frequency and converting it into a digital signal. Depending on the specifications of the DAQ, this sampling can be done with different accuracy and resolution. If the signal is sampled at a higher frequency, with more data points, it can be recreated more precisely. Although the sampling frequency is important, an even more important specification is the resolution: the number of bits of the ADC in the DAQ determines how accurately the digital signal represents the analog one.

There are several parameters to consider with DAQs, such as the number of channels, resolution, sampling frequency and internal memory. The number of channels determines how many signals can be sampled simultaneously. The resolution and sampling frequency determine the precision of the sampled signal and the maximum frequency of the signal; if the frequency of the analog signal is higher than the maximum frequency of the DAQ, the signal cannot be sampled and recreated, as too much data is lost between samples. The last parameter, the internal memory, determines how much data can be stored in the DAQ, which limits the amount of data that can be sampled at a time. This can be worked around if the DAQ can stream the samples to external storage such as a PC.


In this project a PicoScope was used as the DAQ. PicoScope is a brand that produces, amongst other products, oscilloscopes for PCs. There are two ways to control the PicoScope: through the company-provided software interface, or through external programming using a software development kit (SDK); the latter is done in this thesis.

3.3.4 Photodetector

A photodetector is a device that converts incoming light into a DC voltage or current [9]. The output is proportional to the intensity of the incoming light, with higher light intensity meaning higher voltage. There are different types of detectors; the two most commonly used are photovoltaic/photoconductive and photoemissive detectors. These detectors can be made with different materials, such as Si, InSb and InGaAs. The different materials are suited for different wavelengths, and the one best suited for SWIR is InGaAs [18].

3.4 Sources of noise

In this section different sources of noise, and how they affect the system, are described.

3.4.1 External noise

Noise in the scene is a general way of looking at disturbances that might affect the overall quality of an image. Because an image is not captured in an instant, but during the several seconds that the frames are being streamed to the DMD, several things can be considered noise in an image, such as turbulence, shaking of the camera or movement in the scene. These three noise sources have the same effect: they all lead to an unclear or "noisy" image. This is because the pixels in the scene do not stay constant but change during the measurement, which affects the reconstruction algorithm.

3.4.2 Electrical noise

Another source of noise is electrical noise, which comes from all electrical components, from the surrounding equipment and potentially from other electrical devices in the surrounding area. There are many different forms of electrical noise, but the most common ones are thermal noise, shot noise and low-frequency (1/f) noise [14]. Thermal noise comes from the vibrations of electrons and thus cannot be avoided, as everything above 0 Kelvin vibrates. Shot noise comes from irregularities in the current flow, such as carriers passing the potential barrier of a silicon diode. Low-frequency noise, or 1/f noise, comes from the properties of the surface material of the component. All these types of noise are a factor to consider in this thesis, and the measures taken to reduce their effect on the results are explained in chapter 4.


4 Hardware and software design

In this chapter the method used to design and implement each part of the SPC system is described.

4.1 SPC

The way a second photodetector is added can differ depending on the system; Figure 4.1 shows an illustration of how a second detector could be implemented. The SPC system in this thesis was based on an existing system at FOI, with some changes made to the design. The existing system was built using a Thorlabs PDA20CS-EC photodetector placed at +12° from the normal of the DMD, a Texas Instruments DLP4500 Lightcrafter™ (DLP) with a Texas Instruments DLP4500NIR DMD, and mirrors, as can be seen in Figure 4.2. Along with these components, a visual USB-camera aimed at the DMD was placed inside the camera box to show what the SPC was aimed at. All of these components were also used in the redesign of the system made in this thesis. In addition to the existing components, the new system also contains a Thorlabs PDA20CS photodetector placed at +12° from the normal of the DMD. This is all the hardware placed inside the camera box. A PicoScope 2406B DAQ, a filter and amplification circuit and a PC are also needed for the system to work, but they are external to the SPC.

4.2 Control of DMD

The DMD is attached to the DLP, which is the controller unit that determines which patterns the DMD is to show. The DLP was controlled through Texas Instruments' DLP Lightcrafter 4500 Control Software™. Two parameters are controlled through this interface. The first is how the 24-bit RGB frame received by the DLP is divided into 24 1-bit planes; these 1-bit planes are the patterns used in the image reconstruction with TVAL3. The patterns are sent to the DMD, which tilts its mirrors to correspond to the pattern on the 1-bit plane. The second parameter is the exposure time for each pattern, which determines the frequency at which the mirrors tilt. This frequency has to match the specified frequencies given by Texas Instruments for the DLP; when they were not matched, delays between frames were noted. The frame rate of the DMD was 60 Frames Per Second (FPS)


Figure 4.1: Illustration of an SPC with two detectors.

Figure 4.2: Image of the Dual Sensing Single Pixel Camera system developed in this master thesis. The numbers in the image marks the components where 1 is the DLP and DMD. 2 are the photodetectors. 3 are the lenses that focus the light onto the detectors. 4 are the mirrors that direct the incoming light onto the DMD. 5 is the USB-camera. 6 is the mirror used with the USB-camera to see the DMD.

and with 24 patterns being streamed the highest frequency achieved was 60 × 24 = 1440 Hz. The patterns are created on a PC and streamed to the DLP via HDMI.

4.3 Filter and amplification design

The filter that was designed is a band pass filter intended to remove the DC level as well as low and high frequency noise. The pass band was set to 100-13000 Hz, with the lower limit removing the DC level and the low frequency noise coming from, for example, the electrical grid, and the upper limit set according to the bandwidth of the signal. The prototype filters were made with potentiometers, to be able to easily change the cut-off frequency if needed; the amplification was also done with potentiometers, to be able to control the gain. Since the SPC uses photodetectors to measure the light intensity of each image in the form of a voltage level, different amounts of gain were wanted depending on the reference voltage of the ADC. For example, in bright sunlight there was little need for amplification, since the measured signal had a high enough amplitude to be measured with good resolution in the DAQ, whereas in bad weather a higher gain was needed in order to differentiate the signal from the surrounding noise and to amplify it according to the reference voltage.

The filters were first tested on a breadboard, where it was easy to try different filters and components and to measure the filter characteristics using an oscilloscope. Compared to simulating the filters, this gives a better view of the surrounding noise that will affect the signals when measuring on an actual filter.

4.3.1 PCB design

In this thesis two designs for the filter and amplification PCB were manufactured, where the first design was used as a test and evaluation card. The second design was based on the results from using the first, with improvements on the faults and mistakes discovered during usage. The PCBs were designed in Altium Designer, a software made for designing circuits and PCBs. They were made as two-metal-layer PCBs, since there was no need for more and the laboratory at Linköping University does not have the capability to manufacture PCBs with more metal layers.

4.3.1.1 First PCB design

As mentioned above, the first design was meant to be a test and evaluation PCB where several different orders of filters could be tested and combined, with the aim of determining the best filter for the SPC. Figure 4.3 (a) shows the schematic for one of the two identical passive filters. At the top of the figure are the jumpers for selecting which filter order to use for one channel. Figure 4.3 (b) shows the active filter with the Operational amplifier (OP amp) that controls the gain with a potentiometer. The signal then goes from the OP amp to a Programmable gain amplifier (PGA) where it is amplified even further.

Figure 4.3: Parts of the schematic for the first design. (a) Passive filter. (b) Active filter.

In the middle of Figure 4.4 (a) is the amplification part of the PCB with the OP amp. At the bottom left are the PGAs, which are programmed using an Arduino Nano to control the gain of the signals. At the bottom right there is a dual JK flip-flop, which takes the synchronization signal from the DLP as a clock input to create a square clock signal that is then used for synchronization in the signal processing.


Figure 4.4: Layout of the first PCB design. (a) Top layer. (b) Bottom layer.

4.3.1.2 Second PCB design

The functionality of the second design is the same as the first, but the second design was based on the results and usage of the first, with the design flaws corrected. The schematic of the new PCB can be seen in Figure 4.5. The components were placed with a focus on making a space-efficient and low-noise PCB where the digital and analog signals are separated. The circuit was designed with two fuses to protect all components from current spikes which could damage them. Local decoupling capacitors were placed at every voltage input of the active components to remove voltage spikes. An Arduino Nano was used to control the gain of the PGAs; it is placed in the top left of Figure 4.6 (a). Above it are contacts for buttons to increase or decrease the gain or reset the Arduino externally, and a contact for LEDs that show the gain setting.

The inputs from the two photodetectors were placed on the right side, from where the signals go through a band pass filter into an OP amp with variable gain control using potentiometers. The two PGAs were placed in the middle, after the OP amp. After the PGAs the signals go to the outputs at the bottom of the PCB, where BNC cables are connected between the PCB and the PicoScope. The circuit is powered by +5 V and −3.3 V. The +5 V was used to power the Arduino and was then regulated down to +3.3 V, since the rest of the components are powered by ±3.3 V.


Figure 4.5: Schematic of the filter and amplification card.

Figure 4.6: The final PCB design. (a) Top layer. (b) Bottom layer.

4.4 PicoScope as a Data Acquisition system

The DAQ used in this thesis was a PicoScope, which is essentially an oscilloscope for a PC. When choosing a model, the following were considered: cost, number of channels, internal memory, number of ADC bits, the ability to stream data, the ability to program the device externally, and triggering functions. Two detectors and a sync signal require three channels, which narrowed the choice to models with three or more channels. Several models were considered, with the deciding factors being cost and ADC resolution. Since there was a budget to consider, the choice of DAQ was limited even further, as a higher number of ADC bits increased the cost significantly. The chosen model was the PicoScope 2406B; the specifications that mattered in this thesis are presented below:

1. Four channels
   a) One channel can be triggered through the SDK
   b) One channel has an arbitrary signal generator
2. 8-bit ADC
3. Power and data through USB
4. 32 MS buffer memory using the SDK
5. 1 GS/s sampling rate
6. ±250 mV, ±2.5 V or ±5 V reference voltage on its ADC

The buffer memory of the PicoScope determines the number of samples that can be collected. The required number of samples depends on the size of the image and the subsampling ratio between frames and pixels. The subsampling ratio determines how many samples are taken compared to the number of pixels in the image. With normal signal processing techniques the number of samples would have to equal the number of pixels to be able to restore the image, but with CS that is no longer required. The maximum size is 512 × 512 pixels at a subsampling ratio of '1'. A subsampling ratio of '1' means that the number of samples equals the number of pixels; one frame consisting of 24 patterns per pixel is then streamed to the DMD, resulting in a maximum of 512 × 512 × 24 = 6291456 samples. Since there are three relevant channels, the buffer needs to hold at least 6291456 × 3 ≈ 19 MS, which is within the maximum buffer memory of 32 MS.
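The arithmetic above can be expressed as a small sanity check (a sketch mirroring the numbers quoted in this section, not code from the thesis):

```matlab
% Sketch: checking that a capture fits in the PicoScope buffer, mirroring
% the arithmetic above for 512 x 512 at subsampling ratio 1.
res   = 512;                          % image resolution (res x res)
nchan = 3;                            % detector A, detector B, sync signal

samplesPerChan = res^2 * 24;          % 512*512*24 = 6291456
totalMS = nchan * samplesPerChan / 1e6;
fprintf('total: %.1f MS of the 32 MS buffer\n', totalMS);   % ~18.9 MS
```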

Furthermore, the resolution of the ADC is 8 bits and the AC part of the raw signal is between 0 and 20 mV; this, combined with the DC offset and no amplification, results in a signal with poor resolution. An example of this can be seen in Figure 4.7.

To enhance the resolution, the signal was oversampled and amplified. The signal was amplified to be as close to the chosen ADC reference voltage as possible, because of the 8-bit resolution of the ADC. The ADC resolution determines how many voltage levels the signal can be divided into, and an 8-bit resolution gives 256 levels. If a signal of 250 mV is sampled and the reference voltage is ±250 mV, the signal will have a resolution of about 1 mV, resulting in a high resolution signal. If the signal instead is only 25 mV with the same reference voltage, the resolution will be about 10 mV, which results in information being lost in the sampling. The oversampling was done to ensure that a single voltage spike does not affect the signal to such a degree that it affects the image reconstruction: with oversampling there are enough sampled values for each pattern to counteract the effects of voltage spikes.
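A quick sketch of this quantization argument, with example values (our own illustration, not the thesis' code):

```matlab
% Sketch: how much of the 8-bit ADC range a signal spans, illustrating
% why the signal is amplified toward the reference voltage.
bits = 8;
Vref = 0.250;                        % +/-250 mV reference
step = 2*Vref / 2^bits;              % one ADC level, ~2 mV here

for amp = [0.250, 0.025]             % amplified vs. unamplified amplitude
    levels = floor(2*amp / step);    % ADC levels spanned by the signal
    fprintf('%3.0f mV signal spans %3d of %d levels\n', 1e3*amp, levels, 2^bits);
end
```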


Figure 4.7: Graph of a measurement with no filter or amplification, hence a raw signal including both AC and DC components.

Amplifying the signal to be as close to the reference voltage as possible greatly improved the signal resolution. An example of an amplified signal compared to an unamplified one can be seen in Figure 4.8, where the reference voltage is ±250 mV; the left signal is amplified and the right one is not.

Figure 4.8: Comparison of a signal filtered and amplified (left) versus unfiltered and unamplified (right).

The PicoScope was programmed to start collecting samples on three channels (detector A, detector B and the synchronization signal) at a specific trigger level. When taking an image, a chessboard pattern is first displayed on the DMD, which generates a signal with a relatively stable amplitude. To trigger the PicoScope to start collecting samples, the patterns are streamed to the DMD: when the first pattern of an image is projected on the DMD, the signal shows a quick DC drop, as can be seen in Figure 4.9.

Figure 4.9: Raw signal from the photodetector on the left, and a zoomed-in graph of the DC drop that occurs when the first pattern is displayed on the DMD.

The trigger level is set so that this drop triggers the PicoScope to start collecting at a set sampling frequency. When the whole signal has been captured, i.e. when all patterns have been displayed by the DMD, the PicoScope sends all the data stored in its buffer to the PC. The buffers are then freed and cleared, ready for another collection.

4.5 Reconstructing images

In this section the signal processing and the usage of the TVAL3 algorithm are presented.

4.5.1 Mean calculation and signal combination

The next step after sampling the signal was to process the raw data before reconstructing the image. The first step in processing the signal was to find the last pattern and remove the rest of the signal, to ensure that only relevant data was processed. The starting index was 1, because the sampler triggers at the first pattern; the stop index was calculated using (4.1), where M is the number of patterns displayed on the DMD and f_s is the sampling frequency:

    Stop index = M f_s    (4.1)

Before the image could be reconstructed through TVAL3 the signal had to be processed. To get the most accurate value for the full time each pattern was displayed on the DMD, oversampling was performed; this made it possible to calculate a mean value for each pattern, thereby reducing noise. Since there are signal spikes occurring between two patterns, these affect the mean value calculation. The spikes can be seen in the top graph of Figure 4.10, and the samples containing them are therefore removed before calculating the mean value. The mean for every pattern was then calculated using the synchronization signal, as each pattern lies between a falling and a rising edge of the synchronization signal. This made it easier to differentiate the patterns, so that two patterns do not overlap in the mean calculation. Without the synchronization signal, every signal spike would have to be found and used in order to differentiate the separate patterns.

The signal was then normalized, since there is a difference in amplitude between the two detectors, but also to consistently have a signal between '0' and '1' no matter the signal strength. In Figure 4.10 the raw signal (raw here meaning the measured signal after filtering and amplification) and the mean value signal can be seen in the upper graph; the lower graph shows the normalized mean signal. Each value in the lower graph represents one of the pattern mean values in the upper graph (the graphs do not correlate exactly in the figure).

Figure 4.10: In the upper graph both the raw signal and the calculated mean value signal are shown. In the lower graph the normalized mean value signal is shown.

A final step was performed before combining the signals: a moving mean. It removes trends in the signal, such as shifts in the light intensity and general small movements in the scene (e.g. turbulence or wind), which give the signal a DC offset. This brings the signal to a steady level, which results in the mean values being on the same level. The effect of this function is more important when reconstructing the signal from only one detector, since one detector is more susceptible to turbulence, intensity shifts and so on. The moving mean function is not very effective when using two detectors, but it can increase the quality slightly, hence it was used in this thesis.

After the signal processing the two signals were combined. There are several ways of combining the signals; three options were considered in this thesis:

1. Y = A − B
2. Y = A + B
3. Y = (A − B) / (A + B + ε)

To see the effects of different ways of combining the signals, the mean value for the entire image was used. If every mirror on the DMD represents a pixel in the resulting image, the sum of the voltages generated by the mirrors can be used to calculate the voltage mean value for the whole image. If the intensity of light in the scene increases, this can be viewed as an increase of the mean value for the entire image, and thus an increase of every pixel's individual amplitude by a constant C. This is shown in (4.2), where p_ij represents an individual pixel and A_ij is the amplitude that the pixel reflects onto the detector:

    p_ij = A_ij + C    (4.2)

The signals are then combined; in the ideal case the signal from detector two is the inverted signal from detector one. This means that A_ij is negative for detector two and has the same magnitude as for detector one. With this in mind, consider the results of the different ways of combining the signals:

The first option of combining the signals is (A − B). Following (4.3), the constant C cancels out and the resulting amplitude is double the amplitude of one detector:

    p1_ij − p2_ij = (A1_ij + C) − (A2_ij + C) = A1_ij − A2_ij = 2 A1_ij    (4.3)

The second option that was looked into was to add the signals. In (4.4) it can be seen that in the ideal case only the intensity change C remains, as the amplitudes cancel each other out. When using this method it is important to keep in mind that C is '0' for every pixel except the individual pixels affected by the change in light intensity. In theory this should be a useful way of detecting very fast pixel changes such as a muzzle flash:

    p1_ij + p2_ij = (A1_ij + C) + (A2_ij + C) = 2C    (4.4)

The third option was to normalize the signal against the intensity changes; this can be deduced using (4.5). Again, if the ideal case is assumed, the amplitude is double that of one detector, normalized against the intensity shift C. In this case, however, there is another aspect to consider: if there is no intensity shift, i.e. C = 0, division by zero would occur. To prevent this, a small ε is added in the denominator:

    (p1_ij − p2_ij) / (p1_ij + p2_ij + ε) = 2 A1_ij / (2C + ε)    (4.5)

When comparing the reconstructed images from options one and three, the difference is next to none, which is why Y = A − B ended up as the simplest option for combining the signals. Why does this method of combining signals result in a good image? When taking Y = A − B, the effects of light intensity shifts during the collection of a signal are mitigated, because the difference between the signals stays the same whether there is a temporary positive or negative DC offset due to shifting light intensity or the camera shaking; an example of this is shown in Figure 4.11. The reconstructed image of the signal can be seen in Figure 4.12, where one and two detectors are compared side by side.


Figure 4.11: Signals showing the effects of using Y = A-B to remove DC shifts.

Figure 4.12: The reconstructed image of the signal in Figure 4.11 with 5% subsampling. (a) One detector. (b) Two detectors.
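The processing chain described in this section can be sketched as follows. This is a minimal illustration under stated assumptions: sigA, sigB and sync are taken to be the three sampled channels, the sync channel is assumed to toggle at each pattern change, and the guard and moving-mean window sizes are made-up illustrative values, not the thesis' settings.

```matlab
% Sketch of the per-pattern processing: means between sync edges,
% normalization, moving-mean detrending and the Y = A - B combination.
edges = find(diff(sync > mean(sync)) ~= 0);       % pattern boundaries
M = numel(edges) - 1;
[mA, mB] = deal(zeros(M, 1));
guard = 20;                                       % drop reset spikes at edges
for k = 1:M
    span = edges(k)+guard : edges(k+1)-guard;     % spike-free part of pattern k
    mA(k) = mean(sigA(span));
    mB(k) = mean(sigB(span));
end
norm01 = @(v) (v - min(v)) ./ (max(v) - min(v));  % normalize to [0, 1]
mA = norm01(mA) - movmean(norm01(mA), 201);       % remove slow trends
mB = norm01(mB) - movmean(norm01(mB), 201);
Y = mA - mB;                                      % combination option 1
```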


4.5.2 Total variation

The last step in reconstructing an image is the Total Variation algorithm TVAL3, where each mean value is paired with the corresponding randomized, premade Walsh-Hadamard matrix; if everything was done correctly, an image can then be reconstructed. The quality of the reconstruction can be affected by different parameters in the TVAL3 algorithm. These parameters are listed in section 3.2.4. Not all of the options have been changed, and they influence the reconstruction quality to different degrees. The options that have been changed are:

• opts.mu, which is the most important one as it has the biggest impact on the reconstructed image. It should be set according to the noise level in the measured signal; a smaller noise level means a higher value for opts.mu.

• opts.beta is less important than opts.mu, as it fine-tunes the image more than opts.mu does. It should be set somewhere between $2^4$ and $2^{13}$, and the best setting seems to come from trial and error.

• opts.tol and opts.tol_inn determine the accuracy of the reconstruction. Smaller values result in greater accuracy, but also in a longer reconstruction time.

• opts.maxit sets how many times the algorithm will iterate, which increases the certainty of the reconstruction. This setting has a big impact on image quality, but also on the time it takes to reconstruct the image.

• opts.nonneg and opts.isreal are set to false. These settings determine which TVAL3 model to use; by setting them to false, the TV+ model is chosen.

Apart from these options, the amount of data from the signal used to reconstruct the image was also chosen; this sets the subsampling ratio of the signal.
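For context on how opts.mu enters the reconstruction, the TVAL3 literature formulates the recovery as a TV-regularized least-squares problem of roughly the following form (a sketch of the standard model, not a quote from the thesis):

$$\min_{u} \sum_{i} \|D_i u\|_2 + \frac{\mu}{2}\,\|Au - b\|_2^2$$

where $u$ is the image, $D_i u$ its discrete gradient at pixel $i$, $A$ the measurement (Walsh-Hadamard pattern) matrix and $b$ the vector of measured mean values. A large $\mu$ places more trust in the measurements (low noise), which matches the guideline above, while $\beta$ weights the internal variable splitting used by the solver.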

4.6 Noise reduction

The measures taken to reduce noise, in order to achieve as good signals as possible, are presented in this section.

4.6.1 Reduction of noise in the scene

As the scenic noise could not be controlled, there was no real way of reducing it. However, its effects could be reduced by oversampling the signal and by implementing a second photodetector. As mentioned in section 4.5.1, oversampling was done to reduce voltage spikes. The second detector was implemented to reduce more noticeable noise, such as movement in the scene and turbulence. In the signal processing, a moving mean was used to remove trends in the signal, such as changing light intensities.
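As an illustration of the moving-mean detrending step, here is a minimal numpy sketch; the window length used in the thesis is not stated, so the value below is only an assumption.

    import numpy as np

    def remove_trend(sig, window=101):
        # Moving mean over `window` samples; subtracting it removes slow
        # trends such as drifting light intensity while keeping the
        # per-pattern variations that carry the image information.
        kernel = np.ones(window) / window
        trend = np.convolve(sig, kernel, mode="same")
        return sig - trend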

4.6.2 Reduction of electrical noise

To handle the electrical noise, a band-pass filter with a pass band of 100-13000 Hz was implemented; it also acts as a DC block to remove the DC offset. With the DC level removed, only the AC parts of the signal remained and could be amplified without the op amp over-ranging and cutting the signal due to the DC level. The lower limit was chosen to remove low-frequency noise, such as the 50 Hz from the electrical grid, and the upper limit was determined by the bandwidth of the signal. The upper limit was calculated using (4.6), where $SBW$ is the signal bandwidth and $t_r$ is the rise time of the signal. The resistor values were calculated using (4.7) and (4.8), where the capacitor values $C_l$ and $C_h$ were chosen as 1 nF and 10 µF respectively.

$$SBW = \frac{1}{t_r} \qquad (4.6)$$

$$R_{low} = \frac{1}{2\pi f_l C_l} \qquad (4.7)$$

$$R_{high} = \frac{1}{2\pi f_h C_h} \qquad (4.8)$$
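As a quick numerical check of (4.7)-(4.8) with the stated cutoffs and capacitor values, a small sketch follows; the pairing of each capacitor with each cutoff is taken exactly as the equations are printed.

    from math import pi

    def rc_resistor(f_c, c):
        # First-order RC cutoff: R = 1 / (2*pi*f_c*C), per (4.7)-(4.8).
        return 1.0 / (2.0 * pi * f_c * c)

    r_low = rc_resistor(100.0, 1e-9)    # f_l = 100 Hz with C_l = 1 nF
    r_high = rc_resistor(13e3, 10e-6)   # f_h = 13 kHz with C_h = 10 uF
    print(f"R_low  = {r_low:.3g} ohm")   # ~1.59e+06 ohm
    print(f"R_high = {r_high:.3g} ohm")  # ~1.22 ohm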

Another measure taken to reduce noise was the design of the PCB. It was designed to reduce noise with regard to the placement of components, inputs/outputs and traces; in addition, all copper on the top and bottom layers that is not traces is connected to ground, creating large ground planes. The inputs and outputs were placed as close to each other as possible to prevent the card from acting like an antenna. Local decoupling capacitors were placed on every power pin to reduce current spikes. Analog and digital signals were kept apart as far as possible, and the traces were routed so that high-voltage traces did not affect traces carrying signals.


5 Conducted experiments

All the experiments that were conducted are listed and explained in this chapter.

5.1 Differences when using two detectors versus one

In order to see the differences between using one or two detectors, a couple of tests were conducted. A resolution board with sharp edges and high contrasts, as shown in Figure 5.1, was set up outside at an approximate distance of 90 meters.

Figure 5.1: The resolution board used to take the measurements for comparing one versus two photodetectors.


Figure 5.2: Reconstructed image with a resolution of 128 × 128; the blue and red quadrants represent the white and black uniform areas used for the SNR calculations.

A series of measurements was then conducted for four different resolutions: 64 × 64, 128 × 128, 256 × 256 and finally 512 × 512 pixels. To compare the resulting image quality from the combined signals of two detectors versus one detector, the SNR was calculated for one and two detectors respectively. To calculate the SNR, two separate uniform areas on the resolution board, one white and one black as in Figure 5.2, were selected.

Since white represents a '1', i.e. max signal, and black represents a '0', i.e. no signal, the difference between the two areas can be considered to be the signal strength. The mean value, µ, and standard deviation, σ, for both areas were calculated. The SNR for the image could then be determined with (5.1),

$$SNR_{tot} = \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^2 + \sigma_2^2}} \qquad (5.1)$$

where $\mu_1$ and $\sigma_1$ correspond to the white area, and $\mu_2$ and $\sigma_2$ correspond to the black area. This was done for the same images with one and two detectors respectively to compare the difference. The result was compared to see if there was a difference in SNR when using one or two detectors; this was done on images for each resolution.
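A minimal sketch of this SNR calculation (the region coordinates are illustrative, and the definition follows (5.1) as reconstructed above):

    import numpy as np

    def snr_two_regions(img, white_region, black_region):
        # SNR per (5.1): contrast between the white and black uniform
        # areas, divided by their combined standard deviation.
        w = img[white_region]
        b = img[black_region]
        return (w.mean() - b.mean()) / np.sqrt(w.std() ** 2 + b.std() ** 2)

    # Example usage with hypothetical patch coordinates:
    # snr = snr_two_regions(img, (slice(10, 40), slice(10, 40)),
    #                            (slice(80, 110), slice(10, 40)))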

To further investigate the differences between one and two detectors, a series of images of nature scenes was taken. For nature scenes it is of interest to see how high a subsampling ratio is needed to get a clear enough image to tell what the scene depicts, and how high it needs to be to discern details. The percentage is the ratio between the number of patterns, M, and the number of pixels, N × N, and it is calculated using (5.2).

$$P[\%] = \frac{M}{N \cdot N} \qquad (5.2)$$

Another aspect of having two photodetectors is the possibility of a more robust system which is less affected by physical disturbances, such as the camera being shaken or dynamics in the scene. This was tested by taking images while there was movement in the scene, such as a construction crane moving.

5.2 Test of filter topologies and coefficients

As mentioned in Chapter 3, the signal can be filtered with both analog and digital filters to clean the signal from noise and improve signal integrity. In this thesis only analog filters were evaluated. To decide which type of analog filter is best suited for an SPC, theoretical studies of different filters were done and, from there, practical tests of filter topologies and types were performed. These tests were then the basis for which filters were implemented in the SPC system.

The conclusions from the tests were drawn by comparing the unfiltered signal with the filtered signal. A balance between how much noise was filtered out and how much of the signal was truncated by the filter had to be struck in order to decide on which filter order to use. As the SPC will be used in varying conditions of light intensity, turbulence and other disturbances, no single filter was likely to be best for all situations.
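As an illustration of the kind of theoretical comparison that can precede such tests, the sketch below compares analog band-pass prototypes over the 100 Hz - 13 kHz band using scipy; the thesis's actual topologies and orders are not specified, so the choices here are assumptions.

    import numpy as np
    from scipy import signal

    # Analog (s-domain) band-pass prototypes over 100 Hz - 13 kHz.
    band = [2 * np.pi * 100, 2 * np.pi * 13e3]  # rad/s
    designs = {
        "Butterworth": signal.butter(2, band, btype="bandpass", analog=True),
        "Chebyshev-I": signal.cheby1(2, 1, band, btype="bandpass", analog=True),
    }
    for name, (b, a) in designs.items():
        w, h = signal.freqs(b, a, worN=np.logspace(1, 6, 500))
        idx = np.argmin(np.abs(w - 2 * np.pi * 50))  # nearest point to 50 Hz
        print(name, "gain at 50 Hz:", 20 * np.log10(abs(h[idx])), "dB")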

5.3 Detection of pixel change in a scene

A possible application for an SPC is detecting changes in the scene. Although changes in the scene are most often regarded as unwanted noise when taking an image, they can also be the very target of the SPC. Is it possible to detect small, sudden changes in the scene without reconstructing the image? If the changes are detectable in the raw data, is it then possible to reconstruct them to see what caused the sudden change? Such a sudden change could, for example, be the muzzle flash of a rifle firing a round.

In order to test this, an IR laser was used, both to simulate a muzzle flash by firing the laser in short pulses and to see if it was detectable when fired with a constant light. This test was performed outside, with the laser aimed at a small reflective object at a distance of approximately 320 meters. If the laser is detectable in the signal and it is possible to reconstruct the image, what percentage of the signal is needed in the reconstruction to detect the laser?

Similar to the method in section 5.1, a series of measurements was taken at three different resolutions: 32 × 32, 64 × 64 and 128 × 128. Three different modes were used: no laser, pulsed laser and constant laser.

The no laser mode was used to reduce background noise in the reconstruction phase. This was done by using the image as a background, so that the only difference when using the constant or pulsating laser would be the pixel(s) changing due to the laser.

The pulsating laser mode was used to detect an amplitude jump in the signal and also to see if it was possible to reconstruct a single pulse. The laser was pulsed at 40 Hz, which gives a period time of T = 1/f = 0.025 s. The duration of a muzzle flash is around 1 ms. If it is possible to detect the laser at 4% of the pulse duration, corresponding to 3 patterns on the DMD, it would be possible to detect a real muzzle flash. If, for example, a DMD with a switching frequency of 22 kHz had been implemented, 4% of the pulse duration would instead correspond to 33 patterns on the DMD. This is more realistic to achieve and was set as the limit for detecting a muzzle flash in this thesis, since DMDs with switching frequencies up to 22 kHz are on the market.

The constant laser mode was used to consistently be able to detect the pixel(s) affected by the light from the laser. In order to detect the muzzle flash, two different methods were tested and compared. The first method was to use an image taken without the laser as a background image. An image with the laser is then combined with the background image by subtracting the background from the image with the laser. This should remove everything but the differences between the two images; since the background does not contain the laser, the white dot of the laser should be the only thing remaining. The other method does not include background removal but instead utilizes the characteristics of the signal processing option Y = A + B described in section 4.5.1. In theory, this removes everything but changes in light intensity in the pixels, thus removing everything but the laser, as that is the only change in the scene.
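A minimal sketch of the background-removal method (array names and the optional threshold are illustrative, not from the thesis):

    import numpy as np

    def detect_change(img_laser, img_background, threshold=None):
        # Subtract the no-laser background so that only scene changes
        # (ideally the laser spot) remain in the difference image.
        diff = img_laser - img_background
        if threshold is None:
            return diff
        return np.abs(diff) > threshold  # binary mask of changed pixels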

5.4 Different wavelengths of the same scene

As the SPC has two photodetectors, they can be combined to take one image, or they can be used to take two separate images at the same time. By taking two separate images it is possible to look at two different wavelengths of the same object at the same time. This was only a minor experiment to see what differences could be observed between different wavelengths. It could potentially be used to spot gas leaks, which might be invisible in the visual spectrum but can be seen in parts of the infrared spectrum, or to see how different clothes show up in different wavelengths. This was tested by using two different photodetectors, one in the visual spectrum (350-1100 nm) and one in the SWIR spectrum (800-1700 nm). Note that the visual detector overlaps the SWIR spectrum somewhat; ideally the detectors would have covered two completely separate wavelength bands, but no such detector for the visual spectrum was at hand. The test was conducted by taking images of the authors wearing hunter's camouflage, in order to see if the camouflage would be more visible in the SWIR spectrum than in the visual spectrum. For this experiment, images of resolution 128 × 128, 256 × 256 and 512 × 512 were taken, as 32 × 32 and 64 × 64 are too low resolutions to discern any details.


6 Results

In this chapter all the results from the system and all experiments will be presented.

6.1 Comparison of one vs two detectors

In this section the resulting SNR for the different resolutions will be presented as well as a comparison between subsampling ratios for one and two detectors with natural scenes.

6.1.1 Signal to noise ratio

The reconstructed images in this section were chosen with the intention of showing the differences between one and two detectors. For lower resolutions, a higher subsampling ratio is required to be able to discern details on the resolution board, compared to higher resolutions. That was the reason behind the different incremental steps in subsampling ratio for the different resolutions.

The SNR graphs in this section do not directly correlate to the images with the same resolution; the images are reconstructed from one signal, while the SNR graphs have been calculated using multiple signals.

In Figure 6.1 the resulting images, reconstructed using the signals from one and two detectors respectively, are presented.
