
Development of a Handheld Night Vision System

Robert Ek

Jonas Karp

School of Innovation, Design and Engineering

Mälardalen University

May 2009

Examiner

Prof. Lars Asplund, Mälardalen University

Supervisor

Bo Sjögren, Scandilumen AB


Abstract

The task of this master thesis was to produce a specification for a second prototype of Scandilumen's range gated night vision camera. The current prototype is analogue and is to be upgraded to a digital version with a display and a computer interface. The specification was to contain information about manufacturers and performance of critical components such as the image intensifier tube, image sensor and display. Scandilumen has previous experience with CCD cameras and wanted to know whether CMOS technology is sensitive enough to work properly in gated systems, where high sensitivity is critical. Different image processing techniques were analyzed to determine whether image quality can be enhanced with an FPGA built into the system. When the specification is implemented, Scandilumen will have an up-to-date prototype with a digital interface and real-time image enhancement.


Sammanfattning

The task of this master thesis was to produce a specification for a second prototype of Scandilumen's gated night vision camera. The current prototype is analogue and is to be upgraded to a digital version with a display and the possibility to connect to a computer. The specification shall state which components are to be used and what performance they shall have; examples of such components are the image intensifier tube, image sensor and displays. Great emphasis has been placed on deciding which type of image sensor shall be included in the system. Scandilumen has previous experience with CCD cameras but wanted to investigate whether CMOS technology is sensitive enough for this type of application. A comparison was made between the two technologies, focusing on the high demands placed on sensitivity. In addition, different types of image processing suitable for the system, and that can also be implemented in an FPGA in a suitable way, were analyzed. If the specification is followed, Scandilumen will have a prototype updated with a digital format and the latest technology.


Contents

1 Introduction
  1.1 Background
  1.2 Purpose
  1.3 Delimitations
  1.4 Image intensifiers
    1.4.1 Image intensifier tube
    1.4.2 Laser
    1.4.3 Range gating
  1.5 3D scanning
  1.6 Related work
    1.6.1 Competitors
    1.6.2 Research
2 Problem formulation
  2.1 Analysis
    2.1.1 Model of complete system
    2.1.2 Halo effect
    2.1.3 Sensor requirements
    2.1.4 Image processing
3 Method
  3.1 Literature
  3.2 Empiric collection
  3.3 Comparison
  3.4 Specification
4 Solution
  4.1 Image intensifier tube
  4.2 Image sensor
    4.2.1 Introduction to CCD and CMOS
    4.2.2 CCD details
    4.2.3 CMOS details
    4.2.4 Technology differences
    4.2.5 Sensor proposal
    4.2.6 Conclusion
  4.3 Image processing
    4.3.1 Different techniques
    4.3.2 Implementation
    4.3.3 Implementation on an FPGA
5 Result
  5.1 Image intensifier tube
  5.2 Sensor
  5.3 Image processing
  5.4 Future work
A Descriptive images


Chapter 1

Introduction

1.1 Background

The ability to see, move and act in the dark is something humans naturally have limited resources for. In most situations it is perfectly fine to use a torch or a flashlight to light up the dark, while for some applications it is preferable to be more discreet. It was for these kinds of applications that image intensifiers were first developed. As in many other fields it was military needs that led the development of new technologies and devices. Since image intensifiers first emerged in the 1930s, great advancements have been made in this field thanks to improved materials and technologies. Image intensifiers were for a long time expensive and difficult for the general public to obtain. As development continued and more effective technologies appeared, the early versions made their way to the open market and are today found at reasonable prices. Even though today's state-of-the-art night vision devices give superior possibilities to see in darkness, there is still a long way to go before they are comparable to vision during daytime. One area of improvement is the possibility to retrieve colors through image intensifiers, which normally display the scene in green and black. There are, however, some areas where night vision systems perform better than normal daytime systems, for example range gated systems. Range gated night vision systems give the user the ability to see through rough conditions such as heavy rainfall, snow or fog, because of their ability to avoid reflections from the interfering media. Gated systems use a complex technology that requires fast response times in the components used. The fact that the desired reflections travel at the speed of light sets high demands on components throughout the whole system.

1.2 Purpose

Scandilumen AB started the development of a laser range gated night vision system in 2006 and recently had their first prototype made. The prototype was developed by Turn LLC on behalf of Scandilumen. Turn is a Russian company with many years of experience in the field of night vision devices and has developed several such systems, ranging from large devices mounted on ships to underwater systems. Scandilumen is a one-man enterprise, which means that the workload of keeping up to date with components and technology is considerable. This thesis aims to build a platform from which Scandilumen can continue the development of the system and keep up with the demands set by the market. The result will be an updated specification for the next prototype of the system.

1.3 Delimitations

The process of bringing a product to market normally involves several steps of experiments, comparisons and prototypes. For each prototype there are factors such as material, components, manufacturer, design and cost to consider. The purpose of this thesis is to leave proposals about electronic components and software algorithms for further development of the system. The main task is to give suggestions about image sensors and techniques for image processing. Factors such as casing design and user interface will therefore not be covered.

1.4 Image intensifiers

The technology behind image intensifiers is often categorized into four generations (Gen 0 - Gen III), all with different characteristics. The main enhancements concern the image intensifier tube (IIT). The Gen 0 devices were developed during the 1950s and used an external light source to illuminate the field of view, because the intensification was not efficient enough to produce a detectable image under low light conditions. During the 1960s the intensification became high enough for the devices to operate under moonlight conditions without the use of an external light source; such devices are categorized as Gen I. Gen II was developed during the 1970s with the introduction of the microchannel plate (MCP). The MCP is an electron multiplier which gave IITs the ability to intensify the received light by a factor of tens of thousands. Gen III devices use a photocathode coated with gallium arsenide, which, compared to the multialkali coating of lower generation tubes, allows more photons to be converted to electrons inside the IIT. [1]

1.4.1 Image intensifier tube

The key component of an image intensifier is the image intensifier tube. Incoming photons pass through the photocathode and are converted into electrons. Due to a high voltage the electrons are then accelerated through an MCP where they are multiplied by several thousands. The final step is to accelerate the multiplied electrons onto a phosphor screen where the energy of the moving electrons is converted into visible light, see Figure 1.1 for the principle of an IIT.

Figure 1.1: The principle of an image intensifier tube.

Since the IIT is a key component of the system it is very important to get the best possible unit from the very few producers on the market. There are a few standards that most manufacturers use when it comes to size and classification, but many of the parameters are individual for each manufacturer and model. The handheld system is specified to have as low weight as possible, which means that the size and weight of every component is critical. The standard sizes of IITs are diameters of 18 mm or 25 mm, where the 18 mm version, with its weight of around 80-100 g, is preferable for the handheld system. The tubes can be ordered as straight or twisted, also known as non-inverted or inverted. This option is of interest for analogue systems because of the way the image transforms on its way from the input lens via the IIT to the ocular. The image is rotated 180° when passing the input lens, which demands another 180° rotation for the observer to see an image that corresponds to the object observed. Digital systems have the possibility to rotate the image sensor according to the tube being used.

Photocathode

The input window of the IIT is provided with a photocathode covered by a semi-conductive coating. The purpose of the photocathode is to convert photons into electrons. Second generation IITs use a multialkali-coated photocathode. These cathodes respond to wavelengths between 150 and 900 nm with the peak response near 430 nm. During the 1980s, with the introduction of generation 3 IITs, came the gallium arsenide (GaAs) coated photocathode. These cathodes respond to wavelengths between 530 and 950 nm with the peak response near 840 nm. [2] This is a major advantage for the Gen III intensifiers because starlight contains much light in this region, which is therefore more easily detected and amplified in the tube. [3]

Microchannel Plate

The MCP is a component used for multiplying electrons inside the IIT. The disc shaped plate was originally manufactured from leaded glass punched with millions of channels parallel to each other. Today many MCPs are made of silicon based materials, see Figure 1.2. The disc is treated to optimize electron multiplication inside the channels. The surfaces of the plate are coated with a thin metallic layer to serve as electrodes. By applying a high voltage across the semi-conductive plate, each channel serves as an electron multiplier, allowing charge multiplication by secondary emission. [4] The channels vary in diameter from 0.5 to 25 µm (depending on model) and are non-perpendicular to the surface planes, with a tilt of some 5-15°. The tilt forces the electrons entering the channels to collide with the channel walls, bouncing back and forth and multiplying at every collision with the wall [5]. New technology allows production of silicon MCPs with better performance and higher resolution than those made of glass. Silicon MCPs are fabricated using a photolithographic process and offer smaller channels placed closer to each other. With this method more than 90 % of the plate may be used for open channels, allowing high resolution. The channels of silicon MCPs are coated with insulating and resistive layers to optimize performance, offering gains of 10^5 to 10^6 times depending on the length of the channels and the voltage applied. [6]

(a) Seen from the top  (b) Seen from the side

Figure 1.2: Close-up of a silicon microchannel plate. The holes are 6 × 6 µm. Images are taken from [6].
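The electron gain grows geometrically with the number of wall collisions. As a rough illustration of how gains of 10^5 to 10^6 can arise, the sketch below assumes an example secondary emission yield per collision and an example collision count; neither figure is taken from this thesis or from any MCP data sheet.

# Illustrative only: the per-collision secondary emission yield (delta) and the
# number of wall collisions (k) are assumed example values.
def mcp_gain(delta: float, collisions: int) -> float:
    """Electron gain of one MCP channel, assuming each electron releases
    `delta` secondary electrons at every wall collision."""
    return delta ** collisions

for delta, k in [(2.0, 17), (2.0, 20), (3.0, 12)]:
    print(f"yield {delta}, {k} collisions: gain ~ {mcp_gain(delta, k):.1e}")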

Phosphor screen

The final stage for the electrons inside the IIT is to be accelerated onto a phosphor screen. The phosphor is sensitive to the electrons, and when they crash into the screen their kinetic energy is transferred to the phosphor, which starts to glow with its characteristic green light. There are several types of phosphor used in night vision devices (NVDs), all with special characteristics such as spectral emission and decay time. In analogue devices the observer looks directly at the phosphor screen to see the image intensified by the device. Green phosphor is normally used in NVDs because the human eye is most sensitive to wavelengths in the green region around 500 nm. For digital devices it is important to match the emission intensity, spectral emission and decay time with the specifications of the image sensor.

1.4.2 Laser

Laser is an acronym for Light Amplification by Stimulated Emission of Radiation. The idea behind the laser was born in 1917 with a paper by Einstein on the quantum theory of radiation, but the first usable laser was constructed by Theodore Maiman in 1960. Laser light differs from ordinary light in four ways: it is monochromatic, parallel, coherent and can be intense [7]. Since the 1960s the development has been rapid, and lasers have a broad scope of use, e.g. medical surgery, cutting steel and communication. Laser wavelengths range from 180 nm to 1 mm [8], and peak powers range from less than one mW to several MW. Some lasers can operate continuously while others are pulsed. Pulses are often in the ns or µs range, but it is possible to create pulses as short as a few fs (10^-15 s).

Eye safety

The danger to the eyes depends on wavelength and intensity. Eye damage is usually caused by intense light that is focused on the retina. The light causes a momentary temperature rise and burns as a result. If the pulse energy is increased, steam blisters or plasma can appear, causing tearing damage. Damage to the retina can never be repaired, but small damage can be compensated for by the brain, and the individual may not even be aware that the eye has been damaged. Wavelengths between 400 and 1500 nm can enter the eye and be focused on the retina, and wavelengths outside the visible spectrum are especially dangerous, as the eye's reflexes do not work. Light that is absorbed before it reaches the retina can also cause damage, but that requires powerful lasers or long exposure times. The term "eye-safe laser" is often used. It may refer to lasers with a beam power of less than 1 mW, but it is also used for lasers operating near 1540 nm. Light near this wavelength neither reaches the retina, nor is it fully absorbed by the cornea; it is absorbed over larger portions of the eye, making the eye less sensitive. However, long enough exposure damages the eye even at 1540 nm. [7]

1.4.3 Range gating

There are two main groups of image intensifier systems: passive and active systems. Passive systems only intensify the illumination from the environment, e.g. moonlight and starlight. Active systems, on the other hand, use an illuminating source, usually a laser. The light emitted is usually not visible to the eye. The main benefit of a passive system is that it does not emit light and can therefore not be detected, which is important for, e.g., military use. The drawback of a passive system is that it does not work in absolute darkness, as there is no light to intensify. In absolute darkness an active system is needed. An active system illuminates the scene with a laser and amplifies the reflected light, thus giving a good image of the scene even at long distances. The downside is of course that it can be detected, but also that it needs more power. A range-gated night vision system is an active system with a pulsed laser and a time controlled gain. The IIT can be switched on and off very rapidly (in the ns range [9]). The intensifier tube is controlled with respect to the time passed (distance) since the illuminating pulse was sent; this enables control over which distances are viewed illuminated and which are viewed non-illuminated. As the pulse is transmitted the intensifier is off. Reflections from particles between the system and the desired distance are therefore not intensified when they return. When the reflections from the desired distance (time) are received, the intensifier is turned on. The intensification stays on for a time (depending on the depth of the viewing window) and is then turned off again. Reflections from the background are not intensified. This gives an improved signal to noise ratio and thus a better picture at the desired distance. Objects and particles in the foreground and background will not clutter the image to the same degree, thus giving a clearer image, see Figure 1.3. For an explanatory figure of the technique see Figure A.2 in Appendix A. This technique makes a great difference in, for example, low-visibility conditions like snow, fog and rain. It can also make it easier to distinguish objects, as the foreground and background do not occlude them.

(a) Beginning of corridor  (b) Middle of corridor  (c) End of corridor

Figure 1.3: Practical tests in a 100 m corridor with a target at the end of it. Tests were performed with a Sea Lynx system from Turn; gate depth is 5 m.

Limitations

The range and gate depth of a range-gated system have their limitations. Although both distances are related to time, the range is mostly dependent on the laser and optics. A powerful laser illuminates longer distances than a less powerful one, and the optics are important when viewing things at great distances. Some systems claim to have a range beyond 10 km. The gate depth, on the other hand, depends both on the length of the laser pulse and on the speed at which the IIT can be switched on and off. The longer the IIT is switched on, the deeper the gate will be. Light travels about one meter in air every 3.4 ns, so to have a short gate window you need a fast IIT. In most systems the smallest gate depth is between 5 and 60 m, which means that the IIT is turned on and off again within 15 to 200 ns [10][12][13]. There are, however, really fast systems with gating as fast as 250 ps [11], which would give a gate depth of 7.5 cm in air. The gate depth also depends on the length of the laser pulse; if the pulse is longer than the IIT gate time, the minimum gate would be half the length of the laser pulse. But short laser pulses are relatively easy to create, so the main problem is to minimize the IIT switching time.
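To make the timing concrete, the sketch below computes the trigger delay for a viewing window starting at a given range (plain round-trip time of flight) and converts a desired gate depth to a gate time using the roughly 3.4 ns per metre figure quoted above. The function names and the example distances are illustrative only.

C = 299_792_458.0  # speed of light (m/s); air is approximated by vacuum

def trigger_delay_ns(range_m: float) -> float:
    """Delay from laser pulse emission until the IIT is gated on:
    the round-trip time of flight to the start of the viewing window."""
    return 2.0 * range_m / C * 1e9

def gate_time_ns(depth_m: float) -> float:
    """IIT on-time for a desired gate depth, using the ~3.4 ns per metre
    figure quoted in the text."""
    return depth_m / C * 1e9

print(f"{trigger_delay_ns(100.0):.0f} ns")  # ~667 ns to open a window starting at 100 m
print(f"{gate_time_ns(5.0):.0f} ns")        # ~17 ns, in line with 15-200 ns for 5-60 m depths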

1.5 3D scanning

With the ability to take "slices" of the environment that a range-gated system gives, it is possible to step through the environment, with each step taking a slice and then increasing the range by a small amount. When the last slice is taken, the slices can be combined into a 2.5D representation of the environment. This representation can then be rotated to give a better understanding of the scene. [10] It is also possible to emphasize distances using colors [14]. Another possibility is to run the image through a target recognition system, which is able to, e.g., tell what type of vehicle is on the screen [15].
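A minimal sketch of how such slices could be combined into a 2.5D representation is shown below: for every pixel, the slice with the strongest return is selected and that slice's range becomes the depth. The array shapes, the threshold and the random test data are assumptions made for illustration, not the method used in [10].

import numpy as np

def depth_from_slices(slices: np.ndarray, ranges_m: np.ndarray,
                      threshold: float = 10.0) -> np.ndarray:
    """slices: (N, H, W) stack of gated images; ranges_m: (N,) gate ranges.
    Returns an (H, W) depth map in metres; pixels that never exceed
    `threshold` are marked as NaN (no detectable return)."""
    best = slices.argmax(axis=0)                     # brightest slice per pixel
    depth = ranges_m[best].astype(float)
    depth[slices.max(axis=0) < threshold] = np.nan   # reject empty pixels
    return depth

# Example: 20 slices stepped 5 m apart, starting 50 m from the system.
slices = np.random.randint(0, 255, size=(20, 576, 752)).astype(np.uint8)
ranges = 50.0 + 5.0 * np.arange(20)
print(depth_from_slices(slices, ranges).shape)       # (576, 752)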

1.6 Related work

The Russian company Turn is another supplier of larger systems for fixed installations. They have a product called Sea Lynx, which is comparable with the non-portable systems from Obzerv (section 1.6.1). These systems have a practical viewing range above 15 km and can recognize humans at distances of 8 km. The weight of the units is around 45-55 kg and they are powered with either 220 V AC or 18-30 V DC. The power consumption ranges from 108 to 350 W. [16] [13]


1.6.1 Competitors

The number of commercial range gated systems available on the market is very limited. The Canadian company Obzerv offers a variety of non-portable systems; they also offer a handheld unit called ATV-500. The product is aimed at military, police and maritime surveillance purposes and has a practical gate range of up to 2.5 km. The gate depth ranges from 20 to 250 m and the unit weighs 4.8 kg. Due to its high optical magnification the ATV-500 allows the user to read the license plate of a car at distances above 300 m. [12]

1.6.2 Research

True color night vision

One of the drawbacks of night vision technology is the lack of colors in the image received. The two main night vision technologies, thermal imaging and image intensification, both suffer from this flaw. In recent years a lot of research has been done on applying color to night vision images. Early techniques used artificial colors, which may lead to even more confusion for the observer since such images do not appear natural and are difficult to interpret in an intuitive way. In 2008 Alexander Toet and Maarten A. Hogervorst presented a technique that presents nighttime images with true daytime colors in real time. The idea is to use certain statistical properties of a reference color image taken in daytime and transfer them to nighttime images. In the first step some bands of the nighttime image are mapped onto the RGB channels of a false color image. The image is transformed into a perceptually decorrelated color space where the properties from the reference image are applied. The image is then transferred back into the RGB space with true colors applied. For best results, the reference image should be similar to the nighttime image in content and composition. Different reference images might be stored to be able to act in different environments such as desert, woods or urban terrain. Once a suitable reference is loaded, the method allows a real time implementation and gives true colors in nighttime images. [17]

Automotive systems

Night vision systems for cars are an area where a lot of research is currently being done. Some of the major car companies offer models where image sensors are coupled to a display on which intensified images are shown. The car industry is now developing smart night vision systems with, for example, pedestrian and wildlife detection. Different types of warning systems are being tested to alert the driver when pedestrians are crossing the future path of the vehicle. [18] Simplified night vision systems have been proposed where the display might be replaced by icons on the dashboard or an image projected on the windshield. The drawback of having to look at an external display is obvious, because the driver will draw his attention away from the road. One of the advantages of using night vision systems in cars is the increased distance the driver can see without increasing the glare for other drivers along the road. There are mainly two types of night vision systems used on vehicles today, far infrared (FIR) and near infrared (NIR) systems. FIR is passive and uses an infrared camera which senses temperature differences between objects. NIR systems are active and use an illuminating source to light up the road ahead, detecting light reflected from objects ahead of the vehicle. [19]

FOI

The Swedish defence research agency, FOI, has done several tests with range gated systems, both in air and under water. In the year 2000 they tested a system for long range viewing, up to 10 km. Their results show interesting facts about long-distance viewing. They discuss matters such as how heavy rainfall, snowfall and fog lower the distance at which objects can be detected. FOI also shows that turbulence is a factor that has to be considered; they point out the difference in turbulence between morning and afternoon, day and night, and summer and winter. The effect of turbulence also decreases with height above ground. FOI also shows the importance of an evenly distributed laser beam for a correct representation of a reflected object. [9] FOI has furthermore performed several tests with the gated technique for underwater applications such as mine identification. Due to low levels of light in deep waters it is necessary to use a light source of some kind to see clearly. Gated systems with the use of lasers clearly fulfill this criterion. Range gated systems have been shown to perform well in both clear and dirty waters because of their ability to deselect the backscattering medium and only illuminate the desired object for the viewer. [20]


Chapter 2

Problem formulation

2.1 Analysis

The current prototype of the handheld unit is analogue, which means that the user sees the image represented on the phosphor screen of the IIT. The next prototype, which is currently under development, is to have a digital interface: the user views the image on a display and has the possibility to connect the device to a computer or monitor. The components used in the system have to be upgraded to match the digital parts being introduced. The image intensifier tube is a key component in such an upgrade, and recommendations for an IIT shall therefore be given. An investigation has to be made to find out whether all distributors of IITs are able to deliver parts to Sweden and Scandilumen, since parts that could be used for military applications may be under heavy export regulations.

Another task is to suggest a suitable image sensor that fulfills the requirements of capturing the image from the phosphor screen with high quality. The image can then be processed and presented on a suitable screen. During this process the image quality has to remain as high as possible through every step. Image processing, such as noise reduction and contrast enhancement, will make it easier for the viewer to interpret the image. The possibility to connect the device to an external computer allows for video storage.

2.1.1 Model of complete system

A critical step when specifying the system is to make sure that as much information as possible is maintained through the whole system. The IIT has to show as high spectral response as possible to ambient light such as moonlight and starlight for passive use of the system. The wavelength of the laser has to match the spectral response of the IIT for active, or gated, use. When selecting an image sensor, its resolution has to be matched to the phosphor screen, and the frame rate must be high enough for the image processing requirements. The screen used to present the information also has to deliver a sharp picture with high enough resolution.

2.1.2 Halo effect

The halo effect is a phenomenon that appears in the IIT when an NVD is exposed to a bright light source. The headlights of a car can, even from a large distance, blind the NVD, causing a larger area of the screen to be lit up and resulting in saturation of the image. See Figure 2.1 for an example. One reason is that the incoming light may be reflected between the input window and the photocathode. For every reflection an amount of light is transmitted through a different point of the photocathode and hence appears over a wider area of the screen. Depending on the quality of the device and the brightness of the source, the image will be more or less distorted and show a brighter and larger defect. When operating the NVD in bright environments, or where bright light sources suddenly appear, there is a risk of amplifying the incoming light too much, which will cause either a halo to appear around the light source or even saturate the phosphor screen and fill it completely with its characteristic green light. When the photocathode converts a large number of photons into electrons, the channels of the MCP are flooded by electrons spilling into neighboring channels, causing the halo effect.

(a) Gated mode  (b) Passive mode

Figure 2.1: A car with full beam in gated mode compared to the same car in passive mode, clearly showing the halo effect.

2.1.3 Sensor requirements

To make the next prototype digital, an image sensor has to be built into the system to capture images from the IIT. In order to get the best possible picture out of the system, there are several things to consider when choosing an image sensor. The sensor will be mounted close to the light emitting phosphor screen and has to match the spectral emission of the phosphor. The intensity of the phosphor depends on the number of electrons being multiplied inside the IIT and accelerated onto the phosphor; under very dark conditions the screen intensity will therefore be very low. The sensitivity of the sensor must be high enough to work in low light conditions and capture even the smallest output from the phosphor screen. Some image processing techniques require a fast frame rate from the image sensor to perform calculations for image enhancement. Different phosphor types have different decay times, meaning that with a fast frame rate and a long decay time there will be an afterimage in the next frame. The image sensor therefore also has to match the requirements of the image processing technique and the decay time of the phosphor used in the system.

2.1.4 Image processing

The use of a digital interface opens up the possibility to enhance the images digitally. Different algorithms can be used to achieve different results. Algorithms that are suited for night vision images, and therefore make interpretation easier for the user, include contrast enhancement and noise reduction. There are several ways to practically implement the algorithms in hardware, all with their benefits and drawbacks. The hardware has to be small enough to fit in the device, and low power consumption is preferable. Another problem is that not all algorithms can be implemented in all hardware. It is preferable if the algorithms can be changed easily, for future updates of the device.


Chapter 3

Method

The work in this thesis follows a top-down approach, meaning the authors begin with a thorough investigation of work in the area covered by the report. The idea is to get an overall understanding and then go deeper into specific areas of the subject to learn about the details. After collecting suitable knowledge, an investigation is made where different components and algorithms are compared against the thesis requirements. The report ends with a summary of conclusions and finally leaves a specification for future work.

3.1 Literature

As a master thesis shall be of an investigative or research nature, the first step was a thorough literature study. All sorts of media were used to collect information about what has been done in the area of night vision so far. One resource used was the library at Mälardalen University, where databases such as IEEE Xplore and Engineering Village were used to search for papers and reports. Relevant phrases such as night vision, gated viewing, range gating and image intensifier were used to find information in the databases.

3.2 Empiric collection

Primary data regarding the current prototype of the handheld unit was collected from the company that developed the unit. Several e-mails and telephone calls were made throughout the project period. A delegation from the constructing company paid a visit to Västerås, where the authors had the chance to question the engineers more thoroughly. Schematics and sketches of the current prototype were viewed, while the prototype itself was thoroughly investigated. Interviews were always held in the form of a dialogue, where the person being interviewed had the chance to broaden or elaborate on the answer if so desired. Field tests were made to verify the functionality of the device and compare it to the submitted specifications. Information about different components such as IITs, camera sensors and displays was collected from manufacturers and compared. This information was collected via a requirements document sent to companies acting in the sector of interest.

3.3 Comparison

In the comparison part of the thesis, components and algorithms are tested and compared. Some components are compared theoretically while others are tested both in theory and practice.

3.4 Specification

The result of the thesis shall be used by Scandilumen AB to develop the second prototype of the handheld night vision system. The specification will consider factors such as price, availability and performance when making suggestions about parts and algorithms.


Chapter 4

Solution

4.1 Image intensifier tube

The input window of the tube is traditionally provided with a glass disc to protect the sensitive photocathode. Many manufacturers offer the possibility to have fiber optics installed instead of glass, which gives better light transmittance into the tube. The use of a fiber optic faceplate instead of glass gives a sharper image, because light is transmitted straight from the input window to the photocathode. When using the standard glass disc there is a risk of reflections between the glass and the photocathode, producing the so-called halo effect. Fiber optics is also an option for the output window of the tube, which becomes practical when coupling an image sensor to the IIT. This arrangement provides the image sensor with more light, thus giving the option to choose from a wider selection of sensors.

There are only a few companies producing IITs on the market today, for example Hamamatsu from Japan, Photonis from the Netherlands, ITT from the United States and Katod from Russia. The first step when choosing an IIT is to decide what specifications one needs. Some of the companies offer Gen 3 tubes with a different spectral response than Gen 2 tubes. IITs are normally optimized for passive use, which means that the ambient light is the main factor to consider when looking for a suitable IIT. In a range gated system one also has to match the spectral response to the laser used. The ideal way of choosing an IIT is to study data sheets from different suppliers and see which ones would satisfy the needs. After this study, a field test should be performed where different IITs can be compared in practice. High technology products that could be used for military operations are often under heavy export regulations, and some manufacturers might therefore not be able to deliver equipment to certain countries. The Dutch company Photonis is a manufacturer that offers a wide variety of tubes. They also offer a technique for light protection called auto gating. The idea is to turn the IIT on and off very rapidly when the unit is exposed to a bright light source or operated under daylight conditions. The number of photons being converted and intensified inside the IIT is in that way kept at a reasonable level, so the phosphor screen does not saturate. This kind of protection is a very important factor when selecting an IIT. Another way to protect the tube is to lower the voltage inside the IIT and thereby lower the intensification.

Due to the high cost and the limited time of the project, a practical test of IITs was never performed. Instead a theoretical comparison was made between the top models from different suppliers. The price of comparable tubes ranges from 30,000 to 100,000 SEK and is therefore a major factor to consider before placing an order. When investigating the possibility to buy tubes from the companies mentioned above, there were big differences in availability; some of the companies were not able to deliver tubes at all due to export regulations.


4.2 Image sensor

When looking for an image sensor there are a number of attributes that have to be satisfied, and depending on the needs the weight lies on different attributes. The attributes covered in this thesis are:

• Sensitivity, the amount of output signal the sensor delivers per unit of input optical energy.
• Dynamic range, the ratio between saturation and threshold.
• Resolution, the number of active pixels.
• Uniformity, the consistency of response between different pixels.
• Spectral response, the sensitivity at different wavelengths.
• Shuttering, the ability to start/stop the exposure.
• Speed, the frame rate at which the sensor can deliver images.
• Price, the cost of the sensor.

The sensor will be connected to the phosphor screen of an IIT. The sensor should detect as low light levels as possible and therefore requires high sensitivity. At the same time, if the phosphor screen is very bright, no information should be lost, so a high dynamic range is important. Also, the phosphor screen emits green light (around 550 nm), so the sensor should have a high spectral response in this region, see Figure A.1 in Appendix A. Some sensors have micro lenses to compensate for a lower fill factor; this means that each pixel has its own lens that directs the light onto the active part of the surface. The use of micro lenses boosts the effective fill factor, giving better sensitivity. Because the sensor will be connected to the phosphor screen with fiber optics, this technique cannot be used, so a sensor without micro lenses is needed. As the system should be able to display video, the sensor must have a frame rate of at least 20 frames per second (fps). A higher frame rate can be preferable for image processing, e.g. noise reduction, and is crucial for 3D scanning. [21]
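As a small aid when reading data sheets, the helper below relates the saturation-to-threshold ratio defined above to the decibel figures manufacturers usually quote (such as the 69 dB appearing later in Table 4.2). The example ratios are illustrative and not taken from any data sheet.

import math

def dynamic_range_db(saturation: float, threshold: float) -> float:
    """Dynamic range in dB for a given saturation-to-threshold ratio."""
    return 20.0 * math.log10(saturation / threshold)

def ratio_from_db(db: float) -> float:
    """Linear saturation-to-threshold ratio for a dynamic range in dB."""
    return 10.0 ** (db / 20.0)

print(f"{dynamic_range_db(1000, 1):.0f} dB")  # a 1000:1 ratio corresponds to 60 dB
print(f"{ratio_from_db(69):.0f}:1")           # 69 dB corresponds to roughly 2800:1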

4.2.1 Introduction to CCD and CMOS

Both the CCD and the CMOS technology were invented in the late 1960s and early 1970s. Although CCD has had the upper hand since then, CMOS is becoming more and more popular. CMOS was long believed to surpass CCD in performance, power consumption, integration and price. In some areas that is true, in others not, but the two technologies will probably coexist for a long time. Both CCD and CMOS are pixelated metal oxide semiconductors that work by converting photons into electrons, measuring the amount of charge and outputting an electrical signal. This is achieved in a few steps, some of which are the same in both technologies while others differ. Both accumulate signal charge in each pixel according to the illumination intensity. After this, the CCD transfers each pixel's charge through a capacitor chain to a common charge-to-voltage converter and sends it off chip. In CMOS the charge-to-voltage conversion is done in each pixel. The voltage can either be sent off chip or, more commonly, to an A/D converter integrated on the chip. As seen in Figure 4.1 and Figure 4.2, the CMOS chip has more electronics integrated directly on chip. This means CMOS is easier to implement and more reliable in rugged environments, while CCD is more flexible, as the electronics outside the chip can be changed without redesigning the image sensor. [22]


Figure 4.1: Overview of a CCD chip.

Figure 4.2: Overview of a CMOS chip.


4.2.2 CCD details

There are three ways to transfer the charge from the pixels to the charge-to-voltage converter: full frame, frame transfer and interline transfer, see Figure 4.3. [23]

Figure 4.3: Types of CCD chips.

Full frame

Each pixel can capture charge and transfer it to the next pixel. This gives a great fill factor (up to 100 %). However, a full frame sensor requires a mechanical shutter; otherwise the image would become smeared, as more light is captured while the pixels are being transferred to the output node.

Frame transfer

Each pixel charge is transferred at high speed to a light-shielded region of the same size as the imaging region. The active area can immediately start gathering charge for the next image. The charges in the light-shielded region are transferred to the output node in the same way as in the full frame case. This technique overcomes two major flaws that the full frame design suffers from: a mechanical shutter is no longer needed and the speed of the sensor is improved. The fill factor is about the same as for full frame.

Interline transfer

Interline transfer CCDs use photodiode pixels and a light-shielded vertical transfer channel. The sensitivity of the photodiodes is good, but the fill factor is around 30-50 %, much lower than for full frame and frame transfer. Most interline transfer sensors compensate for the low fill factor with micro lenses, boosting the effective fill factor to around 70 %. As mentioned in section 4.2, a sensor without micro lenses is needed for the handheld system.

4.2.3 CMOS details

Passive pixels

Passive pixels contain only a photo sensing element (typically a photodiode) and a switching MOS transistor for transferring the charge to the output node. This allows a CMOS sensor to read individual pixels or regions of interest (ROI). However, passive pixels have more noise and lower sensitivity than CCDs. [23]

Active pixels

Active pixels have the first stage of amplification implemented in every pixel, giving a better signal to noise ratio than passive pixels. Almost all CMOS sensors today use active pixels. The amplification is achieved with three transistors, known as 3T. More complex designs use more transistors (4T and 5T); these designs can reduce noise and/or add global shuttering. More transistors reduce the fill factor, so many CMOS sensors use micro lenses. [23]

4.2.4 Technology differences

When comparing the two technologies one will find benefits with both types, see Table 4.1. Basically, CCD and CMOS are good at different things, so which technology to choose depends on the application. CMOS has greater integration (see Figure 4.4), lower power dissipation, single voltage supply capabilities and ease of parallel readout. But due to the analog processing circuitry on chip it also has more noise and a higher dark current (around 100 pA/cm² compared to 10 pA/cm² and below for CCDs [24]). CCD, on the other hand, has better sensitivity and dynamic range, lower noise levels and better uniformity, but it also needs more supporting circuits and voltage supplies and has higher power dissipation, see Figure 4.4. [23][25]

Figure 4.4: Parts that can be integrated in the two technologies.

Table 4.1: Comparison of the CCD and CMOS technologies.

Sensitivity
  CCD: Better sensitivity, due to a better fill factor.
  CMOS: Has historically been much worse, but is closing in on CCD.
  Comment: CCD still has the upper hand.

Dynamic range
  CCD: Has about double the dynamic range.
  CMOS: Has improved, but is well behind CCD.
  Comment: CCD is clearly better.

Uniformity
  CCD: Has always had good uniformity.
  CMOS: Uniformity was a big problem, but has improved thanks to new techniques.
  Comment: CCD still has the upper hand, although CMOS is closer now than ever before.

Shuttering
  CCD: Superior electronic shuttering.
  CMOS: Usually uses a rolling shutter; a uniform (global) shutter requires more transistors in each pixel.
  Comment: CCD has the upper hand.

Speed
  CCD: Slower than CMOS.
  CMOS: Has the advantage because all camera functions can be put on the image sensor.
  Comment: CMOS has the upper hand.

Price
  CCD: Comparable.
  CMOS: Comparable.
  Comment: The prices aren't too far apart for comparable sensors.


The conclusion is that sensors of both technologies can be used in the system as long as they have good sensitivity, good dynamic range and high enough speed. When it comes to cost, one has to take into account that a CMOS sensor may need less supporting circuitry, and may therefore be cheaper and easier to implement. [21][22][23][25]

4.2.5 Sensor proposal

Two sensors were proposed: the CCD sensor ICX429ALL from Sony and the CMOS sensor IBIS4-1300 from Cypress. A wide range of sensors and manufacturers were reviewed and compared to the proposed sensors. The two proposed sensors were considered to fulfill the requirements and were compared in greater detail.

ICX429ALL

ICX429ALL is a CCD sensor with an 8 mm diagonal (type 1/2) and an effective number of pixels of 752x582 (approx. 440K pixels). It is available in black and white, and has high sensitivity, high dynamic range and a high signal to noise ratio.

IBIS4-1300

IBIS4-1300 is a CMOS sensor with an effective number of pixels of 1280x1024 (approx. 1.3M pixels). It is available in black and white, and has high sensitivity and high dynamic range.

4.2.6 Conclusion

Comparing the two sensors turned out to be very difficult. Properties given in one data sheet were missing in the other and vice versa, see Table 4.2, where data from the manufacturers' data sheets are presented.

Parameter           ICX429ALL        IBIS4-1300
Effective pixels    440K             1.3M
Pixel size          8.6 x 8.3 µm²    7.0 x 7.0 µm²
Sensitivity         1400 mV          7 V/lx.s

Parameters specified in only one of the data sheets: chip size 7.40 x 5.95 mm², dynamic range 69 dB, dark current 344 pA/cm², spectral sensitivity range 400-1000 nm, peak spectral sensitivity 610 nm, lag 0.5 %, smear -126 dB, operating temperature -10 to +60 °C, flicker -126 dB, MTF 0.4-0.5 @ 450 nm and 0.25-0.35 @ 650 nm.

Table 4.2: A comparison of the sensors' parameters.

Both data sheets give a sensitivity figure, so this should be comparable. However, they use different ways to present the data. The sensitivity of the IBIS4-1300 is given in volts per lux-second, while the ICX429ALL uses Sony's standard way of measuring and presenting sensitivity. The way Sony measures sensitivity is described in the data sheet as:

“Use a pattern box (luminance 706 cd/m2, color temperature of 3200K halogen source) as a subject. (Pattern for evaluation is not applicable.) Use a testing standard lens with CM500S (t = 1.4mm) as an IR cut filter and image at F8. The luminous intensity to the sensor receiving surface at this point is defined as the standard sensitivity testing luminous intensity. After selecting the electronic shutter mode with a shutter speed of 1/250 s, measure the signal output (VS1) at the center of the screen and substitute the value into the following formula. S1 = VS1 * 250/50 mV.”

After several attempts to convert the values to comparable units, this idea was abandoned due to lack of information. Practical tests would have been optimal, but the high price tag of the sensors and the long delivery time made this approach impossible. The next step was to look at other implementations that use these sensors. Both sensors were found working in low light conditions, e.g. in astronomical applications and surveillance, and they were also found in other night vision systems. Based on this empirical data the sensitivity can be considered good enough for the handheld system.
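To illustrate why the conversion was troublesome, the sketch below estimates what the IBIS4-1300 would output under the Sony test conditions. The lens transmittance and the absence of any spectral weighting are assumptions, which is exactly the kind of missing information that made the comparison unreliable; the printed numbers should not be read as a real measurement.

import math

LUMINANCE = 706.0       # cd/m^2, pattern box from the Sony test description
F_NUMBER = 8.0          # F8 lens, from the Sony test description
EXPOSURE_S = 1.0 / 50   # Sony normalises the reading to a 1/50 s exposure
TRANSMITTANCE = 0.9     # assumed lens and IR-cut-filter transmittance

# Illuminance on the sensor for a distant subject: E = pi * L * T / (4 * N^2)
illuminance_lx = math.pi * LUMINANCE * TRANSMITTANCE / (4.0 * F_NUMBER ** 2)

# IBIS4-1300 response under those (assumed) conditions, from its 7 V/(lx*s) figure
ibis4_output_mV = 7.0 * illuminance_lx * EXPOSURE_S * 1000.0

print(f"faceplate illuminance ~ {illuminance_lx:.1f} lx")    # ~7.8 lx
print(f"estimated IBIS4-1300 output ~ {ibis4_output_mV:.0f} mV "
      f"vs 1400 mV quoted for the ICX429ALL")                # ~1090 mV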

The IBIS4-1300 requires less supporting circuitry and has both analog and digital output, while the ICX429ALL requires more circuitry and only has analog output. The conclusion is that both sensors can be used, but that the IBIS4-1300 has the upper hand with its digital output and smaller amount of supporting circuitry.

4.3 Image processing

The difference between a good image and a bad one can mean different things in different situations. In this system, a good image means that the user can easily see what is shown on the screen and that the risk of misinterpretation is minimal. Different techniques to suppress unwanted components and enhance wanted components can be used to get a clearer representation. Some techniques are presented in section 4.3.1. These techniques need hardware to do the calculations; different solutions are presented in section 4.3.2. Furthermore, not all techniques are possible to use. This can be because an algorithm is too time consuming to support real time video on realistic hardware, or because it is not intended for real time video processing at all. The practical implementation is covered in section 4.3.3.

4.3.1 Different techniques

Noise reduction

The IIT is by nature noisy; if one looks directly at the phosphor screen one sees a flicker in the image, especially in the dark regions. If this flicker can be reduced it will give a more comfortable image to look at. This can be done in a few ways. One way is to use an algorithm that reduces the amount of noise, but unfortunately this also distorts details in the image. A better way was found: the adding of a few frames. By adding 4-5 frames the flicker in one frame is cancelled out by the other frames, resulting in a much improved image, see Figure 4.5. A third way is to do the noise reduction in an analogue manner, directly in the image sensor: the shutter time is extended, and when the image gathering time is extended the noise is averaged out. This gives the same result as adding frames.
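A minimal sketch of the frame-adding approach is given below: the last few frames are buffered and their average is displayed. The buffer length of five matches the 4-5 frames mentioned above; the frame size and the random test frames are placeholders for whatever the image sensor delivers.

from collections import deque
import numpy as np

class FrameAverager:
    def __init__(self, n_frames: int = 5):
        self.buffer = deque(maxlen=n_frames)   # keeps only the latest frames

    def push(self, frame: np.ndarray) -> np.ndarray:
        """Add a new 8-bit frame and return the average of the buffered frames."""
        self.buffer.append(frame.astype(np.float32))
        return (sum(self.buffer) / len(self.buffer)).astype(np.uint8)

averager = FrameAverager(5)
for _ in range(10):
    noisy = np.random.randint(0, 255, size=(576, 752), dtype=np.uint8)
    smoothed = averager.push(noisy)            # temporal flicker largely cancels out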

Contrast improvement

Improving the contrast may be the most important enhancement, making it easier to distinguish objects, see Figure 4.6. However, improving the contrast also improves the contrast of the noise. This means that although objects are easier to see, the noise becomes more prominent.
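Figure 4.6 was produced with Matlab's adapthisteq, which performs contrast-limited adaptive histogram equalization (CLAHE). A comparable operation is sketched below using OpenCV's CLAHE implementation; the clip limit and tile size are assumed values, the file name is a placeholder, and, as noted above, a higher clip limit also amplifies noise.

import cv2

def enhance_contrast(gray_u8, clip_limit: float = 2.0, tiles=(8, 8)):
    """Apply CLAHE to an 8-bit grayscale image."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    return clahe.apply(gray_u8)

# Usage: frame = cv2.imread("night_frame.png", cv2.IMREAD_GRAYSCALE)
#        enhanced = enhance_contrast(frame)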

Edge detection

Edge detection is an algorithm that scans the image, looking for places where the contrast is high; such places are considered to be edges. The output from this algorithm is a binary image, where one means edge and zero means no edge. This image can be used in several ways. It may be used to enhance the contours of objects: the edges can be colored or just made brighter, so that the objects are easier to distinguish, see Figure 4.7. The binary image can also be compared to previous images, and movement can thereby be detected. This can e.g. trigger an alarm, start recording video, take a snapshot, etc. Object recognition can also be used; the system may recognize cars, boats, people, etc. The system could, for example, raise the alarm if a person enters the field of view, but not if a cat or a bird does.


(a) Original image (b) Noise reduced image

(c) Original image zoomed (d) Noise reduced image zoomed

Figure 4.5: Example of noise reduction using merging of 5 frames.

Dark spot removal

All IITs have flaws in the MCP, resulting in dark spots in the image. These spots are constant and characteristic for each IIT, and originate in the manufacturing process. Techniques to remove these spots could improve the image quality. One idea is to calibrate the system against a white wall to see where the dark spots are in the specific system. This information is then stored in a Read Only Memory (ROM). Each frame coming from the image sensor would then be corrected with respect to the information in the ROM, minimizing the effect of the dark spots in the IIT.
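A sketch of the calibration idea is given below: a uniformly lit white reference is captured once, a per-pixel gain map (the contents of the ROM) is derived from it, and every incoming frame is multiplied by that map. The clipping, the epsilon and the synthetic test frames are implementation details assumed here, not taken from the thesis.

import numpy as np

def build_gain_map(white_frame: np.ndarray, eps: float = 1.0) -> np.ndarray:
    """Per-pixel gain that flattens the white reference frame."""
    ref = white_frame.astype(np.float32)
    return ref.mean() / np.maximum(ref, eps)

def correct_frame(frame: np.ndarray, gain_map: np.ndarray) -> np.ndarray:
    """Apply the stored gain map to a live frame."""
    out = frame.astype(np.float32) * gain_map
    return np.clip(out, 0, 255).astype(np.uint8)

# Synthetic example: one dark spot in an otherwise uniform white reference.
white = np.full((576, 752), 200, dtype=np.uint8)
white[100, 100] = 50                        # the dark spot of this particular IIT
gain = build_gain_map(white)                # computed once, stored e.g. in ROM
live = np.full((576, 752), 120, dtype=np.uint8)
live[100, 100] = 30
print(correct_frame(live, gain)[100, 100])  # brought back close to its neighbours (~120)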

4.3.2 Implementation

There are a number of ways to implement image processing, but one thing they all have in common is that some type of hardware is needed. The hardware can be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP) or a Field Programmable Gate Array (FPGA). All have their advantages and disadvantages.

CPU

CPUs have a wide range of high-level languages for code development, debugging environments, and profiling tools. This makes development for CPUs relatively easy. However, CPUs are comparatively poor at floating-point arithmetic and offer little parallelism. [26]


(a) Original image (b) Contrast improved image

Figure 4.6: Contrast improvement done in Matlab using function “adapthisteq”.

(a) Boat highlighted with white edge (b) Man in a corridor highlighted with red edge

Figure 4.7: Edge detection done in Matlab using the function “edge”.

GPU

General-Purpose computing on Graphics Processing Units (GPGPU) can be used with a GPU. Compared to CPUs, GPUs are good at handling floating point numbers and are highly parallel. However, GPUs have a higher development complexity than CPUs. Languages for development on GPUs have existed for a while (e.g. Cg, HLSL and GLSL), but in these languages computation must be expressed in graphics terms (e.g. vertices, textures, fragments, and blending). Higher level languages designed especially for general computation have later been developed. BrookGPU and Sh are two academic research projects, and Accelerator is a project by Microsoft with similar goals; other examples are Scout, CgiS and GPU++. Higher level GPU implementations can perform up to thousands of times better than CPU implementations, without a notable increase in source code complexity. [27][28][29][30]

DSP

A programmable DSP is a microcomputer designed particularly for real-time number crunching. DSPs can perform a multiply-accumulate (MAC) operation in one instruction cycle; a MAC is a multiplication whose result is added to the previous result, which is used in many algorithms. A high-level language can be used to develop the algorithms, making the solution portable. A DSP solution would be faster than a CPU solution. [26]


FPGA

An FPGA differs from the other implementations in that it does not run software; an FPGA is programmable hardware. It is programmed with the help of a hardware description language, VHDL or Verilog. An FPGA is inherently parallel, making it ideal for image processing. Tests in [26] compare an FPGA to three other solutions for digital image processing. The FPGA was more than 30 times faster than the DSP, and more than 1000 times faster than both the PC and the embedded microprocessor.

4.3.3 Implementation on an FPGA

As noted in section 4.3.2, an FPGA is a good platform for image processing. Thanks to the parallelism of FPGA technology it can reach speeds far higher than CPUs and other similar solutions. However, developing for an FPGA is more complex than for a CPU, and it also has limitations that CPUs don't have, e.g. poor floating-point number handling. The developer has to keep these limitations in mind when developing the system. It is also preferable not to use external memory, as the memory accesses can become a bottleneck and reduce the speed of the system.

Sliding window

Image processing algorithms based on sliding window operations (SWO) are common both in software and in FPGA solutions. SWOs are a way to process images where each pixel's nearest neighbors are used in the algorithm. SWOs are, however, very computationally intensive, and hardware acceleration using an FPGA is beneficial; the FPGA is in fact ideal for these types of operations.

A sliding window operation is just what it says: a window sliding through the data, step by step, executing an operation at each step. The window consists of an m × n matrix with m rows and n columns. Figure 4.8 shows an example of a 5 × 5 window; both the currently processed window and the next one are shown in the figure. The pixels marked red are buffered pixels; all of these will be used again as the window slides through the image. Only the leftmost pixel on the top row is discarded each time the window moves.

Figure 4.8: An example showing the current and the next processing window; the window size is 5 × 5.

In an FPGA the pixel stream would be guided through a path, and each time a pixel is inserted the algorithm would calculate and output a result. This results in an application with real time video and a small delay. The delay (d seconds) depends on the readout pixel rate (p pixels/second), the width of the image (w pixels), and the size of the window (m × n pixels). The delay is given by the formula:

    d = ( (n - 1)/2 + ((m - 1)/2) * w ) / p    (4.1)

As an example, using a 5 × 5 window on a 1286 pixel wide sensor with a pixel rate of 10 MHz, equation 4.1 gives 0.26 ms, a very low delay for this application.
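Equation 4.1 can be checked directly; the sketch below evaluates it for the example above.

def sliding_window_delay(m: int, n: int, width_px: int, pixel_rate_hz: float) -> float:
    """Latency in seconds before the first output pixel of an m x n sliding window."""
    return ((n - 1) / 2 + ((m - 1) / 2) * width_px) / pixel_rate_hz

print(f"{sliding_window_delay(5, 5, 1286, 10e6) * 1e3:.2f} ms")  # ~0.26 ms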


Figure A.3 in Appendix A shows how the pixels are read from the image sensor, how they then enter the path in the FPGA (highlighting the pixels used for the algorithm, a 5 × 5 matrix), and finally how the pixels are put on a display.

As seen in Figure A.3, there is a loss of pixels at the edges of the image. This is due to the nature of the algorithm: the pixels at the edges do not have enough neighbors to perform the calculations. A number of rows (m − 1) and columns (n − 1) will always be lost. Usually these rows and columns are divided evenly between top/bottom and left/right. [31][32]

Analysis of techniques

A few things are required of a technique for it to be implemented with sliding windows. The algorithm has to work on the nearest neighbors; pixels outside the window cannot be used. A sliding window does not store data in a memory and cannot, e.g., store previous frames. The algorithm also has to output pixels at the same rate as they are inserted into the FPGA.

Noise reduction

One technique to reduce noise presented in section 4.3.1 is to add a few subsequent frames, whereby the noise in one frame is cancelled out by the noise in the others. This technique is not suitable for sliding windows. To do noise reduction with a sliding window, a spatial algorithm is needed, and as mentioned in section 4.3.1, none of the algorithms that were tested performed satisfactorily. The adding of frames can however still be used, but in that case before or after the FPGA.

Contrast improvement

Figure 4.6 in section 4.3.1 was created using the Matlab function "adapthisteq". The code for this function is open to read and analyze. The function cannot be used as it is: the algorithm uses windows (the windows used in Figure 4.6 are 5 × 5 pixels), but the windows do not slide through the image. When one window is processed, the function jumps to the next window, until the whole image is processed. There are other algorithms that are better suited for a sliding window implementation, for example the algorithm presented in [33]. This algorithm uses a sliding window and is designed especially to work well on noisy images.

Edge detection

When looking for an edge in an image, the relationship a pixel has to its neighbors is studied. If the gray level of the pixel differs from the gray levels of its neighbors, it may be an edge; if the gray level is similar to the neighbors', it is probably not an edge. The fact that a pixel is compared to its neighbors makes the sliding window a perfect fit, and this could be implemented in an FPGA. An example of an edge detecting algorithm using sliding windows is presented in [34].
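A common neighborhood-based edge detector is the Sobel operator; the sketch below is a generic example of the principle and not the specific algorithm from [34]. The threshold value is an arbitrary assumption.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T  # vertical gradient kernel

def sobel_edges(image, threshold=100.0):
    """Mark a pixel as an edge when the gradient magnitude of its 3 x 3
    neighborhood exceeds the threshold."""
    img = image.astype(np.float32)
    rows, cols = img.shape
    edges = np.zeros((rows - 2, cols - 2), dtype=np.uint8)
    for r in range(rows - 2):
        for c in range(cols - 2):
            win = img[r:r + 3, c:c + 3]
            gx = float((win * SOBEL_X).sum())
            gy = float((win * SOBEL_Y).sum())
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[r, c] = 255
    return edges
```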

Dark spot removal

Dark spot removal can be done in the FPGA if a ROM is present. It does not require sliding windows, since only one pixel and one reference pixel are used at a time. If it is implemented inside the sliding window component anyway, the neighboring pixels can simply be disregarded.
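A software model of the idea could look like the sketch below, which assumes that the positions of the defective pixels are known in advance (in the FPGA they would be stored in the ROM) and replaces each of them with the previous pixel in readout order; the replacement strategy is an illustrative choice, as the exact correction scheme is not specified here.

```python
import numpy as np

def remove_dark_spots(image, defect_map):
    """Replace pixels flagged in defect_map (True = defective) with the value of the
    preceding pixel in readout order, mimicking a ROM-based correction."""
    out = image.flatten()  # copy of the pixels in readout order
    for i in np.flatnonzero(defect_map.flatten()):
        if i > 0:
            out[i] = out[i - 1]
    return out.reshape(image.shape)

# Example: a defect map with two known dark pixels.
img = np.random.randint(50, 200, size=(480, 640), dtype=np.uint8)
defects = np.zeros(img.shape, dtype=bool)
defects[100, 200] = True
defects[300, 450] = True
corrected = remove_dark_spots(img, defects)
```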

Conclusion

A good approach is to perform noise reduction before the FPGA, either by adding frames or by doing it in the image sensor (as described in section 4.3.1). Dark spot removal can be implemented in the FPGA, preferably before the sliding window component. The sliding window component would then be the last step in the image processing chain, and both contrast enhancement and edge detection can be implemented there.


Chapter 5

Result

5.1 Image intensifier tube

The handheld system has to be operational in urban environments where it could suddenly be exposed to bright light sources. It is therefore critical that the device has built-in protection against overexposure. Scandilumen could develop its own overexposure protection, e.g. through output brightness control, but it is recommended to buy an IIT with built-in protection. Auto-gating, developed by Photonis, has proven to work well in 24-hour operation and is a good option for the handheld system. Good light transmittance through the whole system is preferable, which is why a fiber optic faceplate and a fiber optic output window are recommended. This gives a sharper image, lowers the risk of the halo effect and provides an optimal way to mount the image sensor to the IIT. When viewing the image from an NVD on a display it is essential to have as sharp an image as possible, so an IIT with high resolution is of great interest. The resolution of the tube shall be no less than 64 lp/mm (typical value), and the signal-to-noise ratio shall be at least 25.

5.2 Sensor

One of the aims of this thesis was to investigate whether CMOS technology is sensitive enough to be used in a night vision system and perform as well as CCD technology. The two technologies were investigated and compared, and each was found to have advantages over the other, so there was no obvious choice of technology. In short, the technologies are good at different things.

Two sensors were then proposed and compared in detail: one CCD from Sony and one CMOS from Cypress. As in the technology comparison, both sensors were found to be strong in different ways. Regarding sensitivity, the CCD is believed to be more sensitive than the CMOS, which is an important property. Sensitivity is only one aspect, however; the CMOS requires less supporting circuitry, since much of it is built into the sensor, and it also has a digital output. While the CMOS is not as sensitive as the CCD, it is sensitive enough for this application. The conclusion is that the CMOS sensor can be used in the system.

5.3 Image processing

The advantage of having computational power built into the system by means of an FPGA is clear. During this thesis work several processing techniques were investigated and tested, and the conclusion is that some types of image enhancement make a big difference to the image. Since different techniques may interest different end users, an updatable system is recommended. Dark spot removal is one enhancement that could be implemented in every system, while noise reduction, contrast enhancement and edge detection could be optional, and the user should be able to turn such features off.


5.4 Future work

Depending on the end user, the requirements for output may differ. Some users may want a 7-inch LCD display where they can sit back and look at the screen, while other customers may require a more discreet way to observe. Small OLED displays with ocular viewing offer good viewing capabilities and, unlike an LCD, do not illuminate the viewer. OLED technology is under rapid development and has lower power dissipation than LCD screens, but it is still more expensive than LCD. Further development of the handheld unit would benefit from a thorough market investigation where potential customers are given the chance to influence the development according to their requirements.

Further tests with image processing algorithms should be done to find solutions that give better performance. The noise reduction algorithms tested in this thesis performed poorly, so further work should be done in this area. Although the contrast enhancement used works, a better option probably exists.


Appendix A

Descriptive images

Figure A.1: Curves showing the relative intensity/response for three phosphor screens and two image sensors.


Bibliography

[1] ITT, Generations of IITs. Available at: http://www.nightvision.com/night_vision/how_nv_works.html (2009-05-19)

[2] Liu L, Chang B, Du Y, Qian Y, Gao P (2005). The variation of spectral response of transmission-type GaAs photocathode in the seal process. Nanjing University of Science and Technology, Nanjing.

[3] Qiu Y, Qian Y, Liu L, Fu R, Chang B (2004). The Design of Low-Light Night Seeing Helmet. Nanjing University of Science and Technology, Nanjing.

[4] Wiza J (1979). Microchannel plate detectors. Nuclear Instruments and Methods, Vol. 162, pages 587-601.

[5] Winn D (2007). High gain photodetectors formed by nano/micromachining and nanofabrication. Fairfield University, Fairfield.

[6] Beetz C, Boerstler R, Steinbeck J, Lemieux B, Winn D (2000). Silicon-micromachined microchannel plates. NanoSciences Corporation, Oxford.

[7] Kariis H, Svensson S (2001). Skydd mot laser - en introduktion. FOI, Linköping.

[8] Strålsäkerhetsmyndigheten (Swedish Radiation Safety Authority) (2009). Strålsäkerhetsmyndighetens författningssamling.

[9] Klasén L, Steinvall O, Bolander G, Elmqvist M (2001). Gated viewing, initial tests at long ranges. FOI - Swedish Defence Research Agency, Linköping.

[10] Monnin D, Schneider A L, Christnacher F, Lutz Y (2006). A 3D Outdoor Scene Scanner Based on a Night-Vision Range-Gated Active Imaging System. French-German Research Institute of Saint-Louis (ISL), France.

[11] Thomas M C, Yates G J, Zagarino P (1995). Image Enhancement Using a Range Gated MCPII Video System With a 180-ps FWHM Shutter. SPIE's 1995 International Symposium on Optical Science, Engineering and Instrumentation.

[12] Obzerv ATV-500 system specifications. Available at: http://www.cenrex.pro24.pl/dokumenty/Spec_ATV-500_11-05.pdf (2009-05-19)

[13] TURN LLC, Description of Sea Lynx system. Available at: http://www.turn.ru/products/lynx_sea.htm (2009-05-19)

[14] Steinvall O, Carlsson T, Grönwall C, Larsson H, Andersson P, Klasén L (2003). Laser Based 3-D Imaging: New Capabilities for Optical Sensing. FOI, Linköping.

[15] Carlsson C (2000). Vehicle Size and Orientation Estimation Using Geometric Fitting. Division of Automatic Control, Department of Electrical Engineering, Linköpings universitet, Linköping.


[16] Obzerv ARGC-2400 system specifications. Available at: http://obzerv.org/english/products/argc-2400/overview/technical-sheet.download (2009-05-19)

[17] Hogervorst A, Toet A (2008). Presenting nighttime imagery in daytime colours. TNO Human Factors, Soesterberg.

[18] Tsimhoni O, Flannagan M J, Mefford M L, Takenobu N (2007). Improving pedestrian detection with a simple display for night vision systems. The University of Michigan, Transportation Research Institute.

[19] Källhammer J-E. Requirements and possible roadmap for FIR and NIR systems. Autoliv Research, Vårgårda.

[20] Karasalo I, Lindqvist P, Morén P, Olsson A, Staaf Ö, Ström P, Söderberg P (2004). Seamine detection and disposal, final report. FOI - Swedish Defence Research Agency, Stockholm.

[21] Litwiller D (2001). CCD vs. CMOS: Facts and Fiction.

[22] Litwiller D (2005). CMOS vs. CCD: Maturing Technologies, Maturing Markets.

[23] Dalsa (2002). Image Sensor Architectures for Digital Cinematography.

[24] Blanc N (2001). CCD versus CMOS - has CCD imaging come to an end? Photogrammetric Week '01 (131-137).

[25] Janesick J (2002). Dueling Detectors. SPIE's OE Magazine, February (30-33).

[26] Lujan C A, Mora F J, Atoche J R (2008). Comparative Analysis in the Implementation of Subtraction and Thresholding for Digital Image Processing. Department of Electrical and Electronics Engineering, Merida Yuc.; Department of Electronics Engineering, Valencia. International Conference on Electrical Engineering, Computing Science and Automatic Control (465-469).

[27] Owens J, Luebke D, Govindaraju N, Harris M, Krüger J, Lefohn A, Purcell T (2007). A Survey of General-Purpose Computation on Graphics Hardware. Computer Graphics Forum 26 (80-113).

[28] Owens J, Houston M, Luebke D, Green S, Stone J, Phillips J (2008). GPU Computing. Proceedings of the IEEE, Vol. 96, No. 5 (879-899).

[29] Jansen T (2007). GPU++: An Embedded GPU Development System for General-Purpose Computations. University of Technology, München.

[30] Lejdfors C, Ohlsson L (2006). Implementing an embedded GPU language by combining translation and generation. Department of Computer Science, Lund University, Lund.

[31] Cvetkovic S, Schirris J (2008). Non-Linear Locally-Adaptive Video Contrast Enhancement Algorithm Without Artifacts. IEEE Transactions on Consumer Electronics, Vol. 54, No. 1.

[32] Yu H, Leeser M (2006). Automatic Sliding Window Operation Optimization for FPGA-Based Computing Boards. Department of Electrical and Computer Engineering, Northeastern University, Boston.

[33] Khriji L, Cheikh F A, Gabbouj M (1998). Contrast Enhancement in Noisy Images Using Rational-Based Operators. Signal Processing Laboratory, Tampere University of Technology, Tampere.

[34] Shen J, Zen C. Using Sliding Window Technique to Explore the Variations of Image Pixels for Edge Detection. Department of Management Information Systems, National Chung Hsing University, Taiwan; Institute of Information Management, National Formosa University, Taiwan.
