Department of Science and Technology
Linköping University
SE-601 74 Norrköping, Sweden

LiU-ITN-TEK-A--08/026--SE

Construction of a high-resolution digital video camera

Rickard Hermansson

Master's thesis in electronics design, carried out at the Institute of Technology, Linköping University.

Supervisor: Henrik Hillberg
Examiner: Ole Pedersen


Copyright

The publishers will keep this document online on the Internet, or its possible replacement, for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/


Abstract

This thesis describes the design process of a high-resolution digital video camera for use in several ongoing projects at SAAB Bofors Dynamics in Linköping. Extensive research into image sensing technologies and available image sensors has been performed, and a suitable sensor has been selected for the design. The thesis has resulted in the construction of a fully functional proof-of-concept prototype camera that will be used as an evaluation platform to aid in the decision of whether to construct cameras in-house instead of buying them on the commercial market.

Initial tests of the camera show promising results for achieving high-quality images, but more testing and evaluation must be performed before any final decision is made to build custom-made cameras for use in the projects at SAAB Bofors Dynamics.


Acknowledgements

I would like to extend my gratitude to all the people who, in one way or another, have been a part of this work.

Kent Stein for granting me the opportunity of this thesis work, Ole Pedersen for taking on the role of examiner and delivering feedback on the report, and all the people at SAAB Bofors Dynamics who have shown a genuine interest in my work and supported me all the way.

A special thanks to Henrik Hillberg, who has been my supervisor and instructor at SAAB Bofors Dynamics. It is through his contributions, in terms of support, guidance and discussions, that the completion of this thesis has been made possible. Thank you all.

Rickard Hermansson
February 2008


Contents

1 Introduction
  1.1 Task definition
  1.2 Expectations
  1.3 Method
  1.4 Delimitations
  1.5 Thesis disposition
  1.6 List of abbreviations

2 Image sensors
  2.1 The image sensor
  2.2 The pixel
  2.3 CCD
  2.4 CMOS
  2.5 Choosing sensor
  2.6 Shutter
  2.7 Bayer pattern
  2.8 Raw images

3 Optics
  3.1 Resolution
  3.2 The modulation transfer function
  3.3 Mounting

4 Market survey
  4.1 Requirements
  4.2 Suitable sensors
    4.2.1 SONY IMX017
    4.2.2 Cypress LUPA-4000
    4.2.3 Micron MT9P001
    4.2.4 DALSA 4M60
  4.3 Summary

5 Construction
  5.1 Overview
  5.2 Schematics
    5.2.1 Camera
    5.2.2 FPGA
    5.2.3 Image sensor
    5.2.4 USB
    5.2.5 Camera link
    5.2.6 Power
    5.2.7 Decoupling
  5.3 Components
    5.3.1 FPGA
    5.3.2 Camera link
    5.3.3 USB/UART
    5.3.4 Power supply
    5.3.5 Logic analyzer
  5.4 Layout

6 VHDL code
  6.1 Master control unit
  6.2 Pixelgrabber
  6.3 Digital clock managers
  6.4 Serial interface
  6.5 PC-side

7 Test and verification
  7.1 Power distribution
    7.1.1 Goal
    7.1.2 Method
    7.1.3 Result
  7.2 Programming FPGA
    7.2.1 Goal
    7.2.2 Method
    7.2.3 Result
  7.3 Image sensor
    7.3.1 Goal
    7.3.2 Method
    7.3.3 Result
  7.4 System test
    7.4.1 Goal
    7.4.2 Method
    7.4.3 Result

8 Prototype evaluation
  8.1 Performance
  8.2 Possible improvements
  8.3 Further development

9 Conclusions and further work

Bibliography

A List of sensors
B Schematics
C Mechanical drawings
D Bill of materials
E eVision cam-file

List of Figures

2.1 Active pixel sensor photodiode
2.2 CCD image sensor architecture
2.3 CMOS image sensor architecture
2.4 Dynamic range versus noise floor
2.5 Shutter comparison - bottles
2.6 Shutter comparison - fan
2.7 Bayer pattern on image sensor
2.8 Foveon X3 pixel
2.9 Bayer pattern - prototype camera
2.10 Bayer pattern - prototype camera, color
3.1 Effects of diffraction - contrast vs. frequency
3.2 MTF for CCD + lens
3.3 C-mount lens holder (board)
5.1 Dataflow structure of prototype camera
5.2 Camera link feedback loop
5.3 Prototype camera board fitted with components
6.1 Master control unit symbol showing inputs and outputs
6.2 Pixelgrabber VHDL schematic
6.3 DCM configuration in prototype camera
6.4 I2C core with custom interface
6.5 Screenshot of serial_ctrl2.exe
7.2 Commands acknowledged by the image sensor (acknowledge is made by the sensor by pulling SDA low on the ninth SCL high period)
8.1 Image taken by prototype camera before color interpolation
8.2 Image taken by prototype camera after color interpolation

List of Tables

2.1 CMOS vs. CCD feature and performance comparison
3.1 Different standard mounts available
4.1 Readout modes for Sony IMX017
5.1 Prototype board power planes
5.2 Prototype board pinout
7.1 Power distribution test points
8.1 Performance evaluation of prototype camera board
8.2 Synthesis report of VHDL in FPGA


Chapter 1

Introduction

1.1 Task definition

At Saab Bofors Dynamics (SBD) there are several ongoing projects where high-resolution digital video cameras are used for collecting data for real-time analysis. The cameras used today are COTS (Commercial Off-The-Shelf), but much could be gained by constructing custom cameras in-house. Not only would the whole product be under SBD's control, but customizations could be made for each project and, hopefully, a more optimized solution in terms of power consumption, size and built-in functions could be manufactured.

When buying third-party products, several properties are introduced with the camera which cannot be controlled without affecting primary functions. As an example, there is often a standard set of connectors on the back of the camera, of which not all are used or needed. Although many camera manufacturers offer customized cameras, these generally come at a high price and can take some time to manufacture. As many of the projects at SBD result in a low-volume product, the development cost to the manufacturer would become very high in relation to the actual number of cameras delivered, making this a suboptimal solution.

Building a custom camera for each project would increase cost and development time, but this could still be the best solution for some of the more demanding implementations where size, functions and power consumption are prioritized. Also, some of, if not all, the image analysis could be moved into the actual camera, making it easier to reuse as a module in a later project. Another important factor that could be controlled is the ruggedness of the camera. The intended applications demand very robust systems that are insensitive to vibration and other harsh conditions.

This thesis is aimed at exploring the benefits and problems that arise in a camera development project. There is also a wish to manufacture a prototype camera board housing an image sensor, optics and the support circuits needed for generating an image. This would prove useful in upcoming evaluations of the intended construction.


The questions that are addressed in this thesis are the following:

• What does it take to interface an image sensor for use in a custom developed camera?

• What are the benefits of a custom developed camera, and do they outweigh the disadvantages in development costs and technical difficulties that have to be overcome?

• What is gained by implementing an own camera design instead of buying one off-the-shelf or a custom made solution available on the market?

• In today's cameras mainly two different types of image sensors are used, CCD and CMOS. Both have several attributes that speak for and against them. Is either of the technologies more suitable than the other for the work at hand?

The construction of a prototype camera could also provide answers to several other questions of interest:

• How much processing, if any, must be done to a captured image before it can be used in the intended applications?

• What functions can be built directly into the camera?

• What performance can be expected from the camera?

There is also an interest in a rough estimation of the total work needed for completing a working product, in terms of technological obstacles that have to be overcome rather than actual man-hours.

1.2 Expectations

The goal of this master's thesis is to investigate the design process and the work needed to interface an image sensor and construct a high-resolution digital video camera. An image sensor with equal or better performance than the current solution should be presented and the process of interfacing this sensor should be investigated. Apart from the selection of a sensor, the design of a prototype PCB should be completed. If there is time, the PCB should be fitted with circuits and tested.

Although the design of a prototype camera board will be a big part of the thesis work, there are several factors that can inhibit the final result, such as delivery times and access to personnel from other projects at SBD. However, the design should act as a reference in further work.


1.3 Method

The overall purpose of this project is to evaluate the advantages and disadvantages of designing and constructing a camera in-house at SBD for use in various projects.

The work was divided into five major parts:

1. Study of available image sensor technologies.

2. Market survey of suitable sensors.

3. Design of the camera, including schematics and PCB layout.

4. Writing of source code for the camera prototype.

5. Test and verification of the prototype camera board.

1.4 Delimitations

This project is performed as a thesis project in the scope of an M.Sc. degree in electronics design. This means that the deadlines are those of SBD as well as of Linköping University, where an M.Sc. thesis is constrained to 20 weeks. In this somewhat limited time frame a market survey comparing different image sensors and their technologies should be completed, together with the design and construction of a prototype board for the selected sensor.

To minimize development time, already available components will be used as far as possible, due to the time it takes to incorporate new components into the system. In this project the main focus will be on the selection of a suitable image sensor and the process of interfacing it, while some of the other goals of a custom camera, such as increased power performance compared to the current solution, have to stand back.

Optics is one of the most important parts of a camera for producing high-quality images. It will however only be briefly introduced in this report, and more effort will be put into the electronics design of a prototype camera board. A more extensive investigation should be made in the optics area before any final construction of a camera is undertaken.

1.5 Thesis disposition

This report has been written as a guide to the selection of an image sensor as well as a manual to the developed prototype camera. It is assumed that the reader has some experience in electronics design and FPGA development. Still, those with little or no knowledge of the design process should be able to attain a better understanding of the functions of an image sensor and the advantages and disadvantages of different types of cameras.


This report has been divided into nine chapters, whose contents are briefly described below:

• Chapter 1: Introduction, goals and background of the thesis.

• Chapter 2: A guide to the technology of image sensors.

• Chapter 3: Brief introduction to optics for digital cameras.

• Chapter 4: Market survey and evaluation of available image sensors.

• Chapter 5: Design, schematics and layout of a digital video camera prototype.

• Chapter 6: Software for verifying function of camera prototype.

• Chapter 7: Hardware testing of constructed prototype.

• Chapter 8: Performance evaluation of prototype.

• Chapter 9: Conclusions and further work.

Chapters five and six describe the design of the prototype camera board and are mainly intended as a reference for use in future developments. Chapter seven contains the methods used for testing the prototype board. All three of these chapters require the reader to have some experience in electronics design and/or VHDL programming. In chapter eight an evaluation of the constructed prototype and recommended improvements are presented.


1.6 List of abbreviations

CCD    Charge Coupled Device
CCTV   Closed-Circuit Television
CMOS   Complementary Metal Oxide Semiconductor
COTS   Commercial Off-The-Shelf
DCM    Digital Clock Manager
DLL    Delay Locked Loop
FIFO   First In First Out
FPGA   Field Programmable Gate Array
FPN    Fixed Pattern Noise
FPS    Frames Per Second
JTAG   Joint Test Action Group
LED    Light Emitting Diode
LVDS   Low Voltage Differential Signaling
MCU    Master Control Unit
PGA    Pin Grid Array
PLL    Phase Locked Loop
RGB    Red, Green, Blue
SBD    Saab Bofors Dynamics
SLR    Single Lens Reflex
SPI    Serial Peripheral Interface
TPI    Threads Per Inch


Chapter 2

Image sensors

Today there are two types of image sensors available, CCD and CMOS, but a common mistake when discussing digital cameras is to refer to all image sensors as CCDs. Although CCD has been the most frequently used technology, CMOS has grown much over the past ten years and today both technologies are comparable in terms of image quality, size and cost. Each technology has its own advantages and disadvantages, and the choice of sensor should be application specific since both are working, and will continue to work, complementary to each other. In this chapter both CCD and CMOS image sensing technologies are presented, as well as the different areas in which they have their respective strengths and weaknesses. The chapter is also intended to work as a guide to the attributes defining the performance of an image sensor.

2.1 The image sensor

In today's digital cameras the component capturing the actual image data is called an image sensor. This is basically an array of photodiodes measuring incoming light, in the form of photons, and converting it to an electrical charge. The charge is measured and turned into a voltage or current representing the light intensity of that pixel. This data is sent to an image processor which recreates the targeted object or scene. As mentioned above there are two dominant technologies used for building image sensors, CCD and CMOS. Both have their own advantages and disadvantages; traditionally CMOS has been easier to integrate into systems while CCD provides higher image quality.

Image sensors follow the trend of all electronic components, where new manufacturing processes and technology refinements enable the production of smaller and more power-efficient units while maintaining both speed and quality. This has opened many new markets. Small, high-quality and cost-effective sensors have introduced digital cameras into several new products, ranging from gaming consoles to cars, and one of today's dominating applications is the camera phone. This would probably not have been possible without the competition between the CMOS and CCD technologies and the increasing number of image sensor manufacturers.

2.2 The pixel

The smallest unit of an image is the pixel which in image sensors is represented by a photodiode that converts incoming photons to electrons. The charge of the electrons indicates the light intensity at the pixel and by combining these in arrays an image can be recreated by the image processor.

The structure of the pixel is one of the most important characteristics defining the image quality of the entire sensor. The layout and content of the pixel affect the ability to collect incoming photons. In CCD sensors each pixel contains only a photodiode converting photons to a charge, which is later transformed into a current or voltage in another part of the sensor. This gives CCDs exceptional performance in terms of image quality since the fill factor, the ratio of optically active area to total sensor area, becomes very high (the whole pixel is collecting light). In CMOS sensors several transistors are incorporated into each pixel to allow for more pixel-level functions, e.g. every CMOS pixel performs its own charge-to-voltage conversion and is individually addressable (see windowing in section 2.5). These features come at the cost of fill factor, and hence quality suffers since less pixel area is dedicated to the collection of photons; much of the incoming light will bounce off the optically dead transistors. CMOS pixels are often referred to as 3-T to 6-T, where the number stands for the number of transistors in each pixel. For example, the six transistors in a 6-T pixel are used for addressing, draining and reset/hold for global shuttering. A CMOS pixel is shown in figure 2.1.

Size has always been one of the biggest factors for achieving high-quality images. A large pixel has a bigger area for collecting incoming light, is therefore more sensitive to fluctuations in it, and can create a more detailed image. High resolution in this case means large sensors (many large pixels next to each other). In SLR cameras, used by expert photographers, a large sensor is not an issue since it is still relatively small compared to the camera body. In industrial, automotive and military vision applications, however, there is a need to keep the system small, and then size becomes a problem. Minimizing a sensor means minimizing pixel area, which places higher demands on the photodiodes in the pixels to still produce high-quality images despite receiving less light.

In recent years much work has been aimed toward constructing microlenses to place over each individual pixel. Their job is to focus all incoming light over an area onto the photodiode. This way it is possible to collect light that previously bounced off the optically inactive components and the areas between pixels. These lenses greatly increase the sensitivity of each pixel, and of the entire sensor, since more of the incoming light can be processed. The fill factor can be greatly increased using microlenses, improving the quality of the output image. An example of the difference in collecting area when using a microlens can be seen in figure 2.1. Microlenses have been greatly beneficial to CMOS sensors, bringing them closer to the performance of their CCD counterparts.


Figure 2.1. Anatomy of the Active Pixel Sensor Photodiode.[1]

2.3 CCD

CCD, or charge coupled device, has through time been the most dominant technology in the image sensor industry. This is due to its superior image quality and low noise levels. A CCD image sensor transports all charges from each pixel to one node where the voltage conversion is performed. This means that all conversions are affected in the same way if there are any irregularities in the conversion components; all noise introduced in the voltage conversion is uniform and can be corrected in software. One drawback is that forcing all data through one node inhibits speed, resulting in a lower data rate.

CCDs are fabricated in special foundries where the process is optimized for the task. This leads to high pixel counts (pixels are placed closer together) and low cost per unit in large series. A CCD pixel has a very large optically active area in ratio to its total size and can therefore accurately reproduce the total incoming light over the entire sensor. Almost all incoming photons hit the photodiodes and are transformed to electrons instead of bouncing off the optically inactive edges of the pixels.

As can be seen in figure 2.2, a CCD image sensor requires relatively much support electronics to function properly. For example, a CCD sensor delivers an analog signal representing the pixel value, and the analog-to-digital conversion has to be performed outside of the sensor. Many CCD sensors also require more than one clock signal and/or several bias voltages. This makes the process of interfacing the sensor somewhat more challenging than that of CMOS.


Figure 2.2. CCD image sensor architecture. [17]

2.4 CMOS

The CMOS technology is a commonly used method for manufacturing CPUs and memory modules as well as most modern ICs.

Unlike CCDs, most modern CMOS image sensors contain active pixels which perform the conversion from charge to an amplified voltage inside the pixel before sending it to an A/D converter. An active pixel contains, besides the photodiode, several transistors handling the charge conversion and the amplification of the generated signal. Thus a large area of the pixel is covered by optically inactive components, making the total pixel area less sensitive to incoming light. This has been one of the major drawbacks of CMOS image sensors, since much of the incoming information, i.e. light, is lost as it bounces off the transistors. Discarding that information leads to a less accurate depiction of the scene or object being reproduced. In recent years the use of microlenses (see section 2.2) has minimized this loss, making CMOS sensors more competitive, with regard to sensitivity, with their CCD counterparts.

One of the biggest benefits of using CMOS image sensors is the possibility of extensive on-chip solutions. Since the CMOS process is used for creating integrated circuits much of the control and processing circuitry can be realized inside the actual image sensor adjacent to the optical area. This leads to easy interfacing and a minimum need for support circuitry. Hence the final size of the construction can be kept as small as possible.


Figure 2.3. CMOS image sensor architecture. [17]

2.5 Choosing sensor

It is common to look only at resolution and fps when choosing an image sensor, but in reality there are several other factors that affect the quality of the produced image. As an example, a 2Mpixel camera with 20µm² pixels can have a worse resolution than a 1Mpixel camera with 8µm² pixels if all other parameters are equal. [2]

According to DALSA Vice President Dave Litwiller there are eight attributes that can be used to characterize image sensor performance.

• Responsivity - "The amount of signal the sensor delivers per unit of input optical energy." [17]

High responsivity leads to more accurate readings, which in turn results in more detailed images. It is the sensor's ability to detect and deliver slight changes in light intensity that defines how well an image can be reproduced. In CMOS image sensors it is possible to place transistors directly into the pixel, thereby getting a high gain directly at the source. Small changes then result in bigger voltage swings to the A/D, yielding a higher signal resolution. In CCDs the amplification of signals often comes at the cost of increased power, since all conversion and amplification is done outside the optically active area. CCD manufacturers are trying to address this problem by incorporating new readout amplifier techniques into their sensors.

• Dynamic Range - "The ratio of a pixel's saturation level to its signal threshold." [17]

This is each pixel's charge resolution. A large dynamic range means the pixel is able to provide a more accurate representation of the incoming light and differentiate small changes in intensity. Dynamic range is, to an extent, defined by the data resolution of each pixel. A pixel with an 8-bit resolution is able to deliver 256 different light intensities, whereas a 12-bit pixel can represent 4096. This, in relation to responsivity, defines how well a pixel can be reproduced in light and color. [3]
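As a point of reference beyond the cited sources, dynamic range is commonly expressed in decibels as the ratio between the saturation level and the noise floor:

    DR = 20 * log10(S_sat / N_floor)  [dB]

For example, a pixel that saturates at 40000 electrons over a noise floor of 20 electrons has a dynamic range of 20 * log10(2000), or about 66 dB.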

The greater the dynamic range, the better the sensor is at capturing both shadow and highlight details at the same time. As seen in figure 2.4, a high dynamic range increases the signal-to-noise ratio (SNR), enabling a higher quality image representation. In this area CCD sensors have an advantage due to their bigger fill factor and because there is less on-chip circuitry adding noise. These are some of the reasons for the superior image quality of high-end CCD image sensors.

Figure 2.4. Dynamic range versus noise floor.[3]

• Uniformity - "The consistency of response for different pixels under identical illumination conditions." [17]

The quality of an image is not only dependent on each pixel's ability to capture the amount of incoming light, but also requires the values from every pixel to be noise free, so that two different pixels receiving the exact same amount of incoming light return the exact same data value. CMOS sensors contain several transistors in every pixel, providing windowing abilities (see Windowing below) and fast readout, but although much is gained by introducing transistors into each pixel, this also leads to some uniformity problems since each pixel does its own charge-to-voltage conversion. Transistors have small variances in their amplification, due to the nature of the manufacturing process, making CMOS image sensors more susceptible to noise than CCD sensors, where each pixel is devoted only to light capturing, which makes it a far less "noisy" environment.


Some uniformity issues can be dealt with if the offset introduced in each transistor is constant. This produces a fixed pattern noise (FPN), i.e. the same noise is introduced in each frame by every transistor. A reference image taken of a black scene can be subtracted from each frame, removing the light intensity introduced by these optically inactive components.
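As an illustration of this correction (a minimal sketch in Python, not code from the thesis; the file names and the 12-bit value range are assumptions), the dark-frame subtraction amounts to a few array operations:

    import numpy as np

    # Average several dark frames to estimate the fixed pattern noise.
    dark_frames = np.stack([np.load(f"dark_{i}.npy") for i in range(16)])
    fpn = dark_frames.mean(axis=0)

    def correct_fpn(raw_frame):
        # Subtract the per-pixel offset and clip to the valid 12-bit range.
        corrected = raw_frame.astype(np.int32) - fpn.astype(np.int32)
        return np.clip(corrected, 0, 4095).astype(np.uint16)

Averaging several dark frames suppresses the temporal noise in the reference, so that mainly the fixed pattern remains.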

• Shuttering - "The ability to start and stop exposure arbitrarily." [17]

A shutter controls when exposure starts and when it ends. This feature is one of the most important in the projects at SBD, so section 2.6 has been devoted to this discussion.

• Speed - The sensor's ability to transfer collected information on- and off-chip.

Speed has always been one of the strongest reasons for selecting a CMOS sensor, since much of the control circuitry can be realized directly in the sensor. All signal and power traces are part of the actual die where the light-sensitive area is housed. This keeps their lengths, and with them inductance, capacitance and propagation delay, to a minimum.[17]

CCDs are more limited in this area due to the one-node output acting as a bottleneck.

• Windowing - The capability to read only a portion of the image sensor.

The windowing feature is available in most CMOS sensors, where every pixel is individually addressable using simple x,y coordinates. When reading only a part of the image, the frame rate can be increased: by reading a quarter of the frame, four frames can be produced and delivered in the same time as one full frame. This provides a tool for high-speed object tracking in a subregion of the image as well as temporary high frame rate output of a smaller area. As a complement to this, binning is available in many CMOS sensors, meaning several pixels are combined into one, generating an image of the whole area seen by the sensor at a lower resolution. The gain is that a full-frame readout can be performed at a higher speed. Using binning to detect movement in a larger area and windowing to follow the object at full resolution in a part of the image is only one of the possible applications (a rough numeric sketch of the frame-rate gain follows below).

Generally CCDs have limited capabilities in this area.
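As a rough numeric sketch of the frame-rate gain (illustrative only; the figures are assumptions rather than data from this project, and real sensors add per-row overhead):

    full_fps = 15                        # full-frame rate of a 5 Mpixel sensor
    full_rows, window_rows = 1944, 486   # window covering a quarter of the rows
    est_fps = full_fps * full_rows / window_rows
    print(est_fps)                       # about 60 frames per second

In practice the exposure time and row overhead limit the speedup, so the achievable frame rate is somewhat lower.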

• Antiblooming - "The ability to gracefully drain localized overexposure without compromising the rest of the image in the sensor." [17]

A very bright area in an image can lead to a pixel receiving too many incoming photons, resulting in the pixel well (the charge storage for incoming light) getting full and the charges "spilling" over into surrounding pixels. If this happens, nearby pixels of different colors (see section 2.7) fill up, resulting in white areas (red, green and blue values peak). This is called blooming, and it creates a ripple effect seen as a spreading white area in the image. Today most CCD sensors are engineered to dampen this effect as far as possible. CMOS sensors have a natural immunity against blooming from the way every pixel is built, i.e. there are drain transistors that prevent charge leakage into surrounding pixels.

• Biasing and clocking - Defines what support circuitry is needed for normal sensor operation.

CMOS imagers often need only one bias voltage and one master clock, generating other needed voltage levels on-chip. This makes CMOS sensors easier to interface due to the minimal need for support circuitry. CCD sensors typically require several bias voltages and sometimes more than one clock to function properly. It is important to know that different sensors require different biasing and clocking, and the number of these is individual to each sensor more than to a specific technology.

There are some other factors that affect the selection of an image sensor, such as reliability and cost. One of the early predictions about CMOS image sensors was that they would be much cheaper to produce than CCDs, since they could be manufactured in the same high-volume processing lines as RAMs, CPUs and other logic ICs. This is however not the whole truth, since CMOS imagers require some special processing to acquire good electro-optical performance. So even though CCD requires custom-built foundries and CMOS can be produced in already available processing lines, the cost of similar volume production is about the same for both technologies.

Looking at the system level, the CMOS technology is however less expensive, since much of the control logic is incorporated directly into the sensor, which keeps down the total number of components needed. But on the component level the price for the image sensing function remains about the same. [17]

2.6 Shutter

A shutter controls the exposure time of the image sensor, determining the amount of time each pixel has to collect light. This can be done either by placing a mechanical shutter in front of the sensor, physically blocking all incoming light from reaching it, or by electronically ending the sampling of photons. Depending on the intended use of the camera, the choice of shutter is vital.

Almost all CCD image sensors use a frame shutter where exposure is initiated and ended at exactly the same moment for every pixel. This is done by resetting all pixels at once and allow them to accumulate charge over a given time. At the end of exposure all charges are simultaneously transferred to a light-shielded area for readout. The shielded area prevents further accumulation during readout. This technique allows all pixels to depict the same moment for the same time and thereby gives a sensor the ability to “freeze” time. Frame shutters are required when capturing objects moving at a high speed. Frame shuttering is often called global shuttering since all pixels are shuttered at the same time.


Table 2.1. CMOS vs. CCD feature and performance comparison.[4]

Feature                CCD                          CMOS
Signal out of pixel    Electron packet              Voltage
Signal out of chip     Voltage (analog)             Bits (digital)
Signal out of camera   Bits (digital)               Bits (digital)
Fill factor            High                         Moderate
Amplifier mismatch     N/A                          Moderate
System noise           Low                          Moderate
System complexity      High                         Low
Sensor complexity      Low                          High
Relative R&D cost      Lower                        Higher
Relative system cost   Depends on application       Depends on application
Camera components      Sensor + multiple support    Sensor + lens possible, but
                       chips + lens                 additional support chips common

Performance            CCD                          CMOS
Responsivity           Moderate                     Slightly better
Dynamic range          High                         Moderate
Uniformity             High                         Low to moderate
Uniform shuttering     Fast, common                 Poor
Speed                  Moderate to high             Higher
Windowing              Limited                      Extensive
Antiblooming           High to none                 High
Biasing and clocking   Multiple, higher voltage     Single, low voltage

CMOS sensors often implement electronic rolling shutters (ERS). With these shutters all pixels collect light for the same period of time, but at slightly different points in time. This is due to the way the shutter "rolls" over the pixels. Exposure is initiated row by row, starting at the top and working its way to the bottom. After the given exposure time a new command tells the pixels to stop collecting light, in the same order as they started. This introduces a small delay between the exposures of each row. As long as the object being photographed isn't moving this is not a problem, but when depicting a car moving at 100 km/h this delay becomes apparent. The wheels of the car will seem to have moved further than the roof, since the bottom of the picture is captured somewhat later in time. The result is that the car appears to "lean". Two examples, using bottles passing on a conveyor and a rotating fan, are shown in figures 2.5 and 2.6. This phenomenon is called skewing.
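The magnitude of the skew can be estimated from the row-to-row delay (an illustrative calculation with assumed numbers, not measurements from this project):

    rows = 1944          # rows in the frame
    row_time = 30e-6     # assumed readout delay per row, in seconds
    speed = 100 / 3.6    # 100 km/h expressed in m/s

    frame_delay = rows * row_time   # delay between first and last row, ~58 ms
    skew = speed * frame_delay      # apparent displacement, ~1.6 m
    print(f"{skew:.2f} m")

With these numbers the bottom of the car would appear displaced by more than a meter relative to the roof, which is why a global shutter is needed for fast-moving scenes.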

In CMOS sensors global shuttering is often obtained by use of an external mechanical shutter together with a simultaneous reset of all pixels (the reset starts exposure and the shutter ends it). In this project a global shutter is needed to start and stop the sampling in each pixel at the same exact moment. However, using a mechanical shutter is not optimal due to the environments in which the camera should be able to function.

Figure 2.5. a: High-performance true global shutter. b: Rolling shutter. c: Motion blur (no shutter, or exposure period too long). d: Inefficient global shutter. [16]

Figure 2.6. a: True global shutter. b: Electronic rolling shutter. [5]

2.7 Bayer pattern

In image sensors capable of capturing color, each pixel is covered by a filter which only lets light of a certain wavelength through. By combining several adjacent pixels with different color filters, a "true" color can be generated by an image processor. This is called color interpolation and follows a given algorithm depending on which filter pattern is used. One of the most common patterns is the Bayer pattern, which can be seen in figure 2.7. This pattern combines four pixels into one RGB value in the interpolation process. Two of the four pixels are green because the human eye captures more detail in this part of the spectrum than in the blue or red.[6]
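As an illustration of the simplest possible interpolation (a sketch assuming an RGGB filter layout and even image dimensions; this is not the algorithm used elsewhere in this thesis), each 2x2 Bayer cell can be collapsed into one RGB value at half the resolution:

    import numpy as np

    def naive_demosaic(raw):
        # Collapse an RGGB Bayer image into a half-resolution RGB image.
        r  = raw[0::2, 0::2].astype(np.float32)   # red samples
        g1 = raw[0::2, 1::2].astype(np.float32)   # green samples on red rows
        g2 = raw[1::2, 0::2].astype(np.float32)   # green samples on blue rows
        b  = raw[1::2, 1::2].astype(np.float32)   # blue samples
        g  = (g1 + g2) / 2.0                      # average the two greens
        return np.stack([r, g, b], axis=-1)

Full-resolution demosaicing instead interpolates the two missing color values at every pixel position from the surrounding samples.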

Figure 2.7. Bayer pattern on image sensor. [7]

Before any processing has been done to the image, the Bayer pattern can be seen as adjacent pixels having different intensity values depending on the color of the photographed object. Figure 2.9 shows an extreme close-up of such an image, taken with the prototype camera, clearly showing the Bayer pattern as a difference in light intensity repeating over the entire image. In the same figure it also becomes clear that even if no color is present, the image is not true grayscale either. For this, an averaging algorithm has to be applied, converting the Bayer pattern to its represented grayscale.

The filter pattern can make the image appear out of focus, since the light intensities of different color wavelengths can be very different. Today there are image sensors available that deliver monochrome images, meaning there is no color pattern covering the pixels. The use of color patterns decreases the "real" resolution of the generated color picture, since all pixels in the final image will contain estimated color values rather than the accurate number of incident photons. Foveon Inc. has addressed this issue by constructing an image sensor where every single pixel records all colors on its own, by the use of a technology where light of different wavelengths penetrates to different depths of the silicon in the pixel. This technique is compared to the traditional Bayer pattern in figure 2.8. Although these sensors are only used in certain high-end cameras for still photography, it shows that the industry is continuously evolving to produce even better sensors.

Figure 2.8. Left: Foveon X3. Right: Traditional Bayer pattern. [8]

2.8 Raw images

A raw file contains the image data just as it was captured from the sensor, i.e. no compression or processing has been applied. In this state the data can be compared to a film negative. Raw images contain all errors that can be found on the sensor, like dead (black) or overexposed (white) pixels, and no gamma or contrast adjustments have been made to the image. In postprocessing these errors can be removed and adjustments to light can be made. However, the final image cannot contain more information about the depicted object than can be found in the raw file, because all error correction is based on values estimated from the original data. As an example, white pixels are replaced by an estimated value from surrounding pixels, based on averaging, to create a smooth and crisp image. Although the final result is a better looking picture, the averaged pixels are the result of calculations instead of the actual number of photons collected.
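A minimal sketch of such a correction (illustrative only; it assumes the defect coordinates are already known, and it ignores the fact that same-color neighbors in a raw Bayer image sit two pixels apart):

    import numpy as np

    def fix_defective_pixels(img, defects):
        # Replace each listed pixel with the mean of its 3x3 neighborhood.
        out = img.astype(np.float32).copy()
        h, w = img.shape
        for y, x in defects:
            ys = slice(max(y - 1, 0), min(y + 2, h))
            xs = slice(max(x - 1, 0), min(x + 2, w))
            patch = img[ys, xs].astype(np.float32)
            out[y, x] = (patch.sum() - img[y, x]) / (patch.size - 1)
        return out.astype(img.dtype)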

Customers buying cameras on the commercial market are often more focused on getting nice images, where the resulting picture is crisp, clear and has smooth lighting, rather than demanding completely accurate sensor data. Many camera developers therefore introduce some image processing directly in the camera to remove dead or overexposed pixels and reduce some of the noise in the images.[9] In industrial and military applications the demands are somewhat different. When applying advanced image processing for verifying a circuit board passing on a conveyor, or determining the distance to an object, there are strict demands that the image contains "real" data rather than approximations. Leveled light intensities create pictures that are "easy on the eye", while certain details like small shadows and other artifacts can be lost in the processing stage. At the same time, an image with many broken pixels is useless without processing.

One of the benefits of constructing the whole camera in-house is that the delivered data is completely untouched and can therefore be seen "as from the sensor". This is desired at SBD, since they want complete control over the image processing to be able to adapt it to their specific applications.

Note that a raw image contains no color information other than the light intensity at each pixel. Colors have to be artificially restored by an image processor, as discussed in section 2.7. A raw image taken by the prototype camera can be found in chapter 8.


Figure 2.9. Area zoom of figure 8.1 showing the Bayer pattern as it appears in a raw image before any processing. Image taken using the prototype camera.

Figure 2.10. Bayer pattern as it appears after each intensity has been converted to its corresponding color.


Chapter 3

Optics

As mentioned in the introduction, the time limit of this project prohibits more extensive research in the optics area. Instead, focus has been directed toward the design and construction of the camera electronics. This chapter will however briefly introduce the reader to some of the parameters to consider when choosing a suitable optical system.

3.1 Resolution

Resolution is the factor defining how well an optical system can distinguish small objects that are placed closely together. In a camera the general rule is to make sure the lens or objective has equal or higher resolution than the image sensor to ensure that the full potential of the sensor can be used. An image from a system where the lens isn’t capable of delivering a correct resolution to the sensor will seem out of focus since fine edges (white pixel next to a black pixel) will be smeared over several pixels.

In an image sensor the resolution is the number of pixels it contains in its optically active area. It isn't quite as easy to define the resolution of a lens. In optics, resolution is measured in line pairs per millimeter (lp/mm), where one line pair is one black and one white line next to each other. This quantity should not be confused with lines per millimeter (l/mm). "Typically engineering types refer to lines per millimeter, rightly assuming that to have a black line one must also have a white line". 50 l/mm to an engineer means 50 line pairs, since every black line must have a matching white line.[10]
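A standard relation (not stated explicitly in the thesis) connects the two: the highest line-pair frequency a sensor can sample, its Nyquist limit, follows from the pixel pitch p as

    f_N = 1 / (2p)

A 2.2 µm pixel pitch, for example, corresponds to roughly 227 lp/mm, indicating the resolution a lens must approach for such a sensor to be used to its full potential.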

Resolution also demands contrast to exist. A series of alternating black and white lines on a paper is differentiated by contrast. If the black lines turned white there would be no contrast between them and the lines would be invisible. This is why without contrast there can be no resolution, and it requires another way of defining the resolution of an optical system.


3.2 The modulation transfer function

To measure how well a lens or a film can reproduce details in an image, the Modulation Transfer Function (MTF) is often used by lens manufacturers, where modulation is basically the same as contrast. The MTF of a lens is a measurement of the lens' ability to transfer contrast at a particular resolution level from the object to the image. [18] By using the MTF one can take both resolution and contrast into consideration in a single specification, and hence it is the closest one can get to describing a lens' ability to transfer details to the image sensor.

An MTF graph plots the percentage of transferred contrast versus the frequency (lp/mm) of the lines. Contrast is expressed as a percentage of the difference between two lines, where 100% is black on white and 0% is gray on gray.[18]
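Expressed as a formula (the standard definition, not quoted from the thesis), the modulation of a line pattern with maximum and minimum intensities I_max and I_min is

    M = (I_max - I_min) / (I_max + I_min)

and the MTF at a given line frequency is the ratio of the modulation in the image to the modulation of the original object.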

An image containing only black and white lines at 100% contrast cannot be fully transferred by any lens because of the diffraction limit.[18] The closer together these lines are, i.e. the higher the line frequency gets, the harder it becomes to efficiently transfer this contrast. The result also depends on how far from the center of the image the lines are depicted. In figure 3.1 these phenomena are presented graphically.

Figure 3.1. Effects of diffraction and the amount of contrast imaged as the frequency is increased.[10]

Every component in an imaging system has its own MTF contributing to the total MTF. In contrast to many other systems there is no single weak link that sets the upper quality limit; instead the performance is a combination of all parts in the system (the system MTF at each frequency is the product of the component MTFs). In figure 3.2 the combination of a lens' and a CCD's MTFs is shown (several MTF plots describing the result at different distances from the lens axis are presented).

There are several other attributes defining the quality of a lens besides the MTF, such as vignetting, linear distortions and resistance to flare[10], but these will not be discussed in the scope of this thesis.


Figure 3.2. Illustration showing how the MTFs from a CCD camera and an imaging lens combine.[18]

3.3 Mounting

There are several different mounts available for holding lenses or objectives in place over the image sensor. Large camera manufacturers aimed at the consumer market often have their own type of mount, made specifically for their objectives, to keep customers from using lenses from competing companies. There are however some standard mounts available that are used mainly in machine vision applications, and by smaller manufacturers, to ease the process of selecting a suitable optical system, since they widen the range of lenses to choose from. The most common types of mounts are S-, C-, CS-, D- and F-mounts. Mounts are differentiated by their flange-to-back distances and the way the lens is fitted, which can be screw-in, bayonet or friction type. Specifications for the mount types mentioned above are found in table 3.1. The prototype camera is fitted with a C-mount holder, since several lenses of this type are available at SBD today from use in earlier projects.

Table 3.1. Different standard mounts available.

Mount      Camera type      Mount type                    Flange focal distance
D mount    8 mm and CCTV    Screw (0.625 inch x 32 TPI)   12.29 mm
S mount    12 mm            Screw (M12x0.5), board type   n/a
CS mount   16 mm and CCTV   Screw (1 inch x 32 TPI)       12.52 mm
C mount    16 mm and CCTV   Screw (1 inch x 32 TPI)       17.526 mm
F mount    35 mm still      Bayonet                       34.27 mm


To minimize construction time, a lens holder for mounting lenses directly onto the PCB will be used. The actual holder, mounted on a board and fitted with a lens, can be seen in figure 3.3. In most professional cameras the lens holder is part of the camera housing, while the sensor and electronics are mounted inside. This makes it easier to adjust the space between sensor and lens for optimal performance. For the prototype, a camera housing will not be constructed due to the time limit of the project.


Chapter 4

Market survey

4.1 Requirements

There are several criteria that the image sensor is required to meet to be suitable for the intended applications. Among the basic functions are a minimum resolution of four megapixels and the ability to depict moving objects without distortion. The parameters of the currently used camera have been used as a starting point to form the minimum requirements for the construction, i.e. the chosen sensor should have equal or better performance than the one used today.

Requirements for the selected image sensor:

• At least 4 megapixels.

• Global shuttering capabilities.

• 7 fps or more.

• Low power consumption.

• High image quality.

• The chosen sensor must be available immediately, to enable construction of a prototype board.

4.2 Suitable sensors

From the information gained in chapter 2, the CMOS technology seems to be the most suitable for this implementation, due to the ability to incorporate almost all control logic into the actual sensor. This brings the need for support circuitry to a minimum, thereby shortening the design process and keeping the final construction less sensitive to outer influences of both the physical and electrical kind. A CMOS sensor also, in general, requires less power and is easier to interface, which is important to ensure the function of the final construction as well as to shorten the development time. However, focusing on only one of the technologies drastically limited the selection of suitable sensors.

During the course of the survey it became clear that CMOS image sensors that meet all of the specified demands were in development, but would not be available until about 6-12 months into the future. The advantages of the CMOS technology were still deemed substantial, and a decision to stay with CMOS was made. However, this meant that one or more of the requirements on the sensor would not be met. The choice of sensor is at this time only for prototype construction and evaluation, which means that global shuttering is not necessary, while evaluation of the image quality is vital for the decision whether to build cameras in-house or not. This can be done with images of stationary objects as well as moving ones. However, the quality, fps and resolution are required for an accurate evaluation. A global shutter is an electronic function that can be incorporated directly into the sensor using no extra external signals (pins). Thus the selected sensor should be fairly easy to replace with a future sensor in the same series when the global shuttering feature becomes available. All other criteria, such as speed, resolution and size, were still met.

The four most suitable sensors found, together with their advantages and disadvantages, are presented below, and a complete table of sensors can be found in appendix A.

4.2.1 SONY IMX017

The IMX017 is a newly released CMOS sensor from Sony featuring fast image acquisition, high resolution and global shuttering capabilities. This high-speed, high-resolution sensor can deliver 6.4Mpixels at a speed of 60fps. The high frame rate is possible due to column-parallel A/D conversion, which means there are separate A/D converters for each column. This technique also reduces overall noise in the system, since more time is available for each conversion compared to a one-pixel-at-a-time conversion. [19]

Like in many other CMOS sensors, readout speed can be greatly increased by narrowing the region of interest (the number of pixels read). The IMX017 has four basic readout modes for different applications. These are described in table 4.1, where it can be seen, among other things, that the SONY IMX017 can capture images above 1Mpixel at 300fps.

Table 4.1. Readout modes for Sony IMX017.

Number of pixels   Frame rate   Data rate   Data width   Application
6.4 Mpixel         60 fps       432 MHz     10 bits      High-quality video
6.4 Mpixel         15 fps       108 MHz     12 bits      High-resolution photography
1.56 Mpixel        60 fps       108 MHz     10 bits      Video
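As a sanity check of these figures (simple arithmetic, not taken from the thesis), the full-resolution video mode corresponds to a pixel rate of

    6.4e6 pixels/frame x 60 frames/s = 384 Mpixel/s

which is consistent with the stated 432 MHz data rate if the difference is assumed to be taken up by blanking and protocol overhead.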


Communication with the IMX017 is handled through an LVDS interface capable of running at 432 MHz. The sensor has the ability to capture high-resolution photos during video capture without interrupting it. In most sensors this action would require the sensor to stop video, take the image and then resume video, causing multiple frame drops, i.e. missing frames.

Although the specification of this sensor fulfills the requirements of the camera on all points, Sony unfortunately only delivers this sensor to commercial developers ordering very large volumes at this time.

4.2.2 Cypress LUPA-4000

LUPA-4000 from Cypress is a 4Mpixel sensor with a full snapshot shutter (global shutter), available both as monochrome and with a Bayer pattern. It is capable of a frame rate of up to 15fps at full resolution, and since it is a CMOS sensor, higher speeds are possible if the region of interest is made smaller. The optical format is 1:1 and contains 2048x2048 pixels on an area of 24.6x24.6 mm. This means each pixel is about 12x12 µm, providing a base for high-quality images (see section 2.2).

The LUPA-4000 uses SPI to communicate with other devices, making interfacing fairly easy. On the other hand, the sensor is delivered as a 127-pin PGA package of 42x42 mm, requiring several clocks and power supplies. These features make the sensor less suitable for prototyping as well as for miniaturization, which is one of the goals for the final construction.

Finally, Cypress requires all customers to sign an agreement not to use their sensors for any military purposes. Even though today's intended projects for the camera are non-military, SBD is part of a defense industry. By selecting Cypress as supplier, future implementations could be affected.

4.2.3 Micron MT9P001

Micron is one of the leading image sensor manufacturers today and has a wide array of sensors to choose from. For the application at hand, their series of 5Mpixel sensors is very well suited in terms of resolution, frame rate and size. The MT9P001 has a resolution of 2592 x 1944 pixels and is able to deliver 15 full frames of 12-bit pixel data every second.

Writing and reading sensor registers is done through a two-wire serial interface which follows the I2C specification from Philips Semiconductors. This interface is widely used and provides easy communication with the sensor. The sensor has a pixel size of only 2.2x2.2 µm, which makes the total size of the sensor only about 10x10 mm. This is to be compared to the 42x42 mm of the Cypress LUPA-4000.
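As an illustration of what such register access can look like (a sketch only: it assumes a Linux I2C bus accessed through the smbus2 Python package, and the device address, register number and 16-bit register width are examples rather than values from the datasheet):

    from smbus2 import SMBus, i2c_msg

    SENSOR_ADDR = 0x5D  # example 7-bit address; consult the sensor datasheet

    def write_reg(bus, reg, value):
        # One transaction: register address, then the 16-bit value, MSB first.
        msg = i2c_msg.write(SENSOR_ADDR, [reg, (value >> 8) & 0xFF, value & 0xFF])
        bus.i2c_rdwr(msg)

    def read_reg(bus, reg):
        # Set the register pointer, then read two bytes back.
        set_ptr = i2c_msg.write(SENSOR_ADDR, [reg])
        data = i2c_msg.read(SENSOR_ADDR, 2)
        bus.i2c_rdwr(set_ptr, data)
        hi, lo = list(data)
        return (hi << 8) | lo

    with SMBus(1) as bus:
        print(hex(read_reg(bus, 0x00)))  # many sensors expose a chip ID here

In the prototype described in chapter 5 the same two-wire transactions are instead generated by the FPGA, with the PC issuing commands over the USB/UART link.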

The MT9P001 and other sensors in the same series are currently only capable of electronic shuttering.

4.2.4 DALSA 4M60

The 4M60 is a camera from DALSA which houses a 4Mpixel CMOS sensor that can deliver 8- or 10-bit full-resolution pixel data at 62 fps. It has an electronic global shutter and a pixel size of 7.2x7.2 µm. This makes the sensor a good candidate for the intended applications at SBD. However, at the time of this thesis, the sensor was not available for purchase but only used in DALSA's own products.

4.3 Summary

After careful consideration and numerous contacts with image sensor manufacturers, it became clear that the only sensor fulfilling all of the requirements, the Sony IMX017, was not available for purchase at the time of this thesis. A decision to still manufacture a prototype meeting all but one of the requirements was taken, as mentioned in the beginning of this chapter. To be able to meet the deadline of 20 weeks with a working prototype, the chosen sensor had to be available with a short delivery time.

Based on performance, availability and price, the final choice fell on the Micron MT9P001. All sensors in the MT9 series have the same pinout and can easily be replaced by a later model. This is a good attribute in this case, since there are indications from Micron that a sensor with global shuttering capabilities will be available in this series during 2008.


Chapter 5

Construction

5.1 Overview

Figure 5.1 shows the top overlay design, with communication paths, of the camera board. A master control unit handles all communication between image sensor and PC, as well as controlling start and stop of pixel grabbing. The pixelgrabber collects pixel data from the sensor and transfers it to a PC through a camera link interface (see section 5.3.2).

Figure 5.1. Dataflow structure of the camera board. Top view.

5.2 Schematics

This section contains descriptions of the schematic sheets of the camera board. Only the layout and function of each sheet as a whole is presented, without in-depth explanations of every component. For a detailed description of individual circuits, please refer to section 5.3. The schematic pages described are found in appendix B and are not placed in this chapter due to the size of the sheets. All schematic and layout work has been performed in Mentor Graphics.

(41)

5.2.1 Camera

The camera sheet is the top sheet of the schematics and defines through which signals the separate sheets communicate. It also provides an easy way to see all of the parts in the final construction.

5.2.2 FPGA

This sheet contains the FPGA, programming circuits, logic analyzer connectors and the 1:1 mapping¹ to the camera link interface and image sensor. Three status LEDs have also been added to allow visual feedback from the FPGA.

¹Pins are connected in order to match the layout of the channel link transmitter, i.e. all wires should be possible to draw in parallel without crossing each other. This is to ease the layout process and minimize the number of signal layers needed.

Figure 5.2 shows the feedback loop used to align data and clock signals for camera link transmissions. CLINK_SYNC is fed back into a DCM (see section 6.3) in the FPGA while CAM_LINK_CLK clocks the Channel Link transmitter. By locking the internal FIFO clocking to the CLINK_SYNC clock, a more accurate timing can be achieved, since data and clock can be configured to arrive at the transmitter at the correct times. With the feedback taken after D4 (see figure 5.2), the delays in the FPGA, clock driver and wires become irrelevant, since the clock is locked to its arrival at the transmitter circuit.

Figure 5.2. Feedback loop of camera link clock. Used for aligning data and clock arrival at the camera link transmitter.

5.2.3 Image sensor

The image sensor sheet basically contains the image sensor itself and the required connectors to other sheets. It also contains pull-up resistors for the serial interface and the reset signal of the sensor. The connections are based on those in the MT9P001 datasheet [14] p. 7.

5.2.4 USB

To handle the communication from a PC to the camera board, a UART-to-USB interface has been added. This connection eases the process of writing and reading registers in the image sensor, while at the same time providing a dynamic control interface for further development. The schematics have been created from datasheet specifications for a self-powered configuration of the FT232RL (refer to the FT232R datasheet [12] p. 20). TX and RX LED indicators have been added to enable visual verification of data transfers. In a final product, this interface can be removed and the built-in serial protocol of the camera link interface used instead (provided camera link is selected as the preferred image transfer protocol).

5.2.5 Camera link

The camera link sheet defines the connections between the DS90CR285 Channel Link transmitter and the camera link connector.

The complete connector pinout can be found in the camera link specification [11]. The pins left dangling on the connector belong to the LVDS pairs CC1 to CC4. CC stands for Camera Control; these signals can be used for controlling pan, tilt and zoom features (or as general control signals) but are not implemented in this design. The built-in camera link serial interface is enabled through components IC2 and IC4 for future use, but no code has been written for it, in favor of the USB/UART protocol.

5.2.6 Power

The power sheet houses the three voltage regulators needed for powering the camera board and a low voltage reset circuit that can be used to reset the FPGA in case of a voltage dropout or power reset. Besides the input voltage, there are currently six voltages used on the board; these are presented in table 5.1.

Table 5.1. Prototype board power planes.

Signal name    Voltage   Used for
UP5VD          5V        Input voltage for the camera board.
UP3V3D         3.3V      FPGA, Camera Link, USB.
UP3VD          3V        Interface between image sensor and FPGA.
UP2V8A         2.8V      Analog voltage for A/D conversion in the image sensor. Referenced to AGND.
U2V5D          2.5V      Auxiliary voltage for FPGA. Used for programming.
UP1V8D         1.8V      Interface between image sensor and FPGA.
UP1V2D         1.2V      Core voltage for the FPGA.
VDDQ_SENSOR    3V/1.8V   Chosen by setting a jumper.

VDDQ_SENSOR is the power signal for the interface between the image sensor and the FPGA and is selectable by jumper (1.8V or 3.0V). This was introduced as a solution for debugging and testing, due to a mismatch in the working regions of the FPGA and the image sensor. The electrical characteristics of the sensor specify a working region of 1.7V to 1.9V or 2.6V to 3.1V for the digital I/O voltage, where 3.0V thus is a valid value. By selecting 3.0V instead of the recommended 2.8V, the single-ended I/O standard LVTTL can be used in the FPGA. By setting VDDQ_SENSOR to 1.8V the LVCMOS18 standard is applicable instead, but this restricts the maximum data rate of the image sensor to 46Mpixel/s instead of the intended 71Mpixel/s (see section 6.2).

5.2.7 Decoupling

Decoupling prevents noise generated by the power supplies from reaching the circuits, where it could lead to unexpected and/or unwanted behaviour. Higher operating frequencies in general require smaller capacitances, and extra care in placing the decoupling capacitors, to achieve adequate decoupling. Each individual decoupling unit has been designed to meet the recommended specifications in the respective datasheets.
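As a rough illustration of why higher operating frequencies call for smaller capacitors mounted close to the pins (generic numbers, not taken from the actual design): a real capacitor is only effective up to its self-resonant frequency, which is set by its capacitance C and the parasitic mounting inductance L_s. Assuming L_s ≈ 1 nH,

    f_{res} = \frac{1}{2\pi\sqrt{L_s C}} \approx 16\,\mathrm{MHz} \text{ for } C = 100\,\mathrm{nF}, \qquad f_{res} \approx 160\,\mathrm{MHz} \text{ for } C = 1\,\mathrm{nF},

so the smaller capacitor is still capacitive at frequencies where the larger one already behaves inductively.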

All decoupling capacitors have been placed on an individual sheet to provide a better overview. The Xilinx Spartan-3E FPGA utilizes four different supply voltages, which leads to an increased number of decoupling capacitors. The current design matches the recommended number and sizes of decoupling capacitors as specified by Xilinx. In addition to the recommendations in [15], similar connections can be found in the schematics of the Xilinx Spartan-500E evaluation boards.

5.3 Components

5.3.1 FPGA

A Spartan-3E FPGA (XC3S500E) from Xilinx was selected for this prototype due to its combination of speed and relatively high gate count. It also provides four DCMs, covering the needed timing and clock deskewing. This FPGA has also been used in an earlier project at SBD with satisfactory results.

To allow smaller modifications in case of errors at the schematic stage, the PQ208 package was selected. Although larger than the other available packages, such as BGA, it ensures easier access to all pins, both for modification and for soldering verification. This subject is discussed further in section 5.4.

5.3.2 Camera link

Camera link is a communication interface based on the Channel Link technology provided by National Semiconductor. It was developed to provide the industry with a standard interface for vision applications. Earlier, almost all manufacturers used different connectors and protocols for their cameras and framegrabbers², something that complicated the task of selecting compatible components for the customers.

²A framegrabber is the unit placed in the computer, or other system, to capture incoming image data.

Channel Link uses LVDS (low voltage differential signaling) for all data transmissions. This provides a high-speed interface that does not consume much power while still being reliable. Thanks to the small voltage swing, 350 mV differential, the rise and fall times of the signals allow a theoretical transmission rate of up to 1.9 Gbps. By using four LVDS data streams and one LVDS clock, the Channel Link chipset supports transmission rates of up to 2.38 Gbit/s. [11]
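The 2.38 Gbit/s figure follows directly from the serialization scheme: the transmitter serializes 28 single-ended data bits onto the four LVDS pairs, seven bits per pair and clock cycle, so at the maximum camera link clock frequency of 85 MHz the aggregate rate becomes

    4 \times 7\,\mathrm{bits} \times 85\,\mathrm{MHz} = 2.38\,\mathrm{Gbit/s}.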

5.3.3 USB/UART

As a complement to the serial interface available in the camera link configuration, a USB-to-UART circuit has been added to the design. In this way, communication between a PC and the camera is possible even when no camera link cable is plugged in. Instead of implementing a USB protocol directly in the FPGA, UART was chosen due to its simplicity. The conversion is handled by the FT232RL from FTDI. [12]
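Since the FT232RL appears as a virtual COM port on the PC, host-side access needs no special drivers beyond FTDI's own. The sketch below (Linux, POSIX termios) illustrates the idea; the device path, baud rate and the command framing are assumptions for illustration only, as the actual protocol is defined by the FPGA design.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <termios.h>

    int main(void)
    {
        /* Assumed device node for the FT232RL virtual COM port. */
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);             /* raw mode: 8 data bits, no flow control */
        cfsetispeed(&tio, B115200);  /* assumed baud rate */
        cfsetospeed(&tio, B115200);
        tcsetattr(fd, TCSANOW, &tio);

        /* Illustrative frame: 'W' <reg> <msb> <lsb> could ask the FPGA to
         * forward a register write to the sensor over the two-wire bus. */
        unsigned char cmd[4] = { 'W', 0x09, 0x03, 0xE8 };
        if (write(fd, cmd, sizeof cmd) != sizeof cmd)
            perror("write");

        close(fd);
        return 0;
    }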

5.3.4 Power supply

To generate all six voltages needed for supplying both image sensor and FPGA, three power regulators are used. The TPS75003 is a triple-supply power management IC for powering FPGAs and DSPs. This regulator takes 5V and converts it to the 3.3V, 2.5V and 1.2V needed for normal FPGA operation. The 3.3V rail is also used for most other ICs in this design.

The TPS71202 generates 1.8V for the image sensor as well as an analog voltage of 2.8V. The latter is used for the internal A/D converters in the image sensor, converting pixel intensities to 12-bit parallel digital data. To provide a less noisy power signal, an EMI filter was placed at the output of this voltage.

The last voltage is generated by a low-dropout regulator transforming 3.3V to 3.0V, which is one of the possible I/O voltages for the image sensor. As mentioned in section 5.2.6, either 3.0V or 1.8V can be selected with a jumper to allow testing of a suitable supply for the sensor.

Finally, there is a voltage supervisory circuit driving the reset_l signal low when the voltage is below the recommended level. This ensures that power is stable before any processing is done in the FPGA. Reset_l is polled in software to reset all registers and signals before normal operation resumes.

5.3.5 Logic analyzer

Two Tektronix logic analyzer connectors have been added to the board to enable monitoring of internal FPGA signals. This eases the verification and debugging process.


5.4 Layout

The layout of this prototype has been done with ease of modification in mind, meaning that little effort has been spent on keeping the construction as small as possible, although this is an important criterion for a final construction. Instead, all signals have been routed to simplify the layout step (decreasing development time as much as possible), with as many signals as possible kept on the top and bottom layers to allow for modifications if necessary. Table 5.2 describes the physical connections on the board.

The board is built up of six layers: four signal layers, one layer for all the power planes and one for the ground plane. The intention was to use only two layers for signals, but the introduction of the logic analyzer connectors required more routing space.


Table 5.2. Pinout of the Xilinx Spartan-3E on the prototype camera board. Signal names as presented in the schematic, appendix B.

Signal             FPGA pin#        Signal      FPGA pin#
CAM_LINK_TX        2                DOUT[0]     145
CAM_LINK_RX        6                DOUT[1]     144
LED1               22               DOUT[2]     142
LED2               23               DOUT[3]     139
LED3               24               DOUT[4]     130
USB_TX             62               DOUT[5]     124
USB_RX             63               DOUT[6]     122
CAM_LINK_SYNC      74               DOUT[7]     118
CAM_LINK_CLK       75               DOUT[8]     116
RESET_L            82               DOUT[9]     115
FPGA_CLK           83               DOUT[11]    112
SENSOR_RESET       128
SENSOR_STANDBY     129
SENSOR_OE          132
SADR               133
SDATA              134
SCLK               135
TRIGGER            147
F_VALID            148
L_VALID            154
IMAGE_SENSOR_CLK   177
PIXCLK             183

CAMERA LINK IO PINOUT

CAM_LINK_IO[0]     50               CAM_LINK_IO[14]   30
CAM_LINK_IO[1]     49               CAM_LINK_IO[15]   29
CAM_LINK_IO[2]     48               CAM_LINK_IO[16]   28
CAM_LINK_IO[3]     47               CAM_LINK_IO[17]   25
CAM_LINK_IO[4]     45               CAM_LINK_IO[18]   19
CAM_LINK_IO[5]     42               CAM_LINK_IO[19]   18
CAM_LINK_IO[6]     41               CAM_LINK_IO[20]   16
CAM_LINK_IO[7]     40               CAM_LINK_IO[21]   15
CAM_LINK_IO[8]     39               CAM_LINK_IO[22]   12
CAM_LINK_IO[9]     36               CAM_LINK_IO[23]   11
CAM_LINK_IO[10]    35               CAM_LINK_IO[24]   9
CAM_LINK_IO[11]    34               CAM_LINK_IO[25]   8
CAM_LINK_IO[12]    33               CAM_LINK_IO[26]   4
CAM_LINK_IO[13]    31               CAM_LINK_IO[27]   3
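In the FPGA tool flow, Table 5.2 translates directly into pin location constraints. A few illustrative rows in Xilinx UCF syntax are shown below, assuming the standard PQ208 pin naming (pin n = Pn); this excerpt is derived from the table for illustration and is not the actual constraints file, which would also assign the I/O standards discussed in section 5.2.6.

    # Illustrative excerpt only; derived from Table 5.2.
    NET "CAM_LINK_TX" LOC = "P2";
    NET "CAM_LINK_RX" LOC = "P6";
    NET "FPGA_CLK"    LOC = "P83";
    NET "PIXCLK"      LOC = "P183";
    NET "DOUT<0>"     LOC = "P145";
    NET "DOUT<1>"     LOC = "P144";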

