Real-time HDR video reconstruction for multi-sensor systems

Joel Kronander, Stefan Gustavson, Jonas Unger∗

Linköping University

Figure 1: (a) Sketch of a typical multi-sensor HDR capturing system. (b) Assuming known transformations, sensor images can be warped to a common reference coordinate system; note that due to non-perfect pixel alignment, the transformed sensor pixels will generally be irregularly distributed in the reference grid. The reconstruction at a point z_j uses measured sensor pixels from all sensors inside the support of a finite window function with radius w. (c) A tonemapped HDR video frame reconstructed at a resolution of 2400x1700 using our method.

1 Introduction

HDR video is an emerging field of technology, with a few camera systems currently in existence [Myszkowski et al. 2008]. Multi-sensor systems [Tocci et al. 2011] have recently proved to be particularly promising due to superior robustness against temporal artifacts, correct motion blur, and high light efficiency. Previous HDR reconstruction methods for multi-sensor systems have assumed pixel-perfect alignment of the physical sensors. This is, however, very difficult to achieve in practice. It may even be the case that reflections in beam splitters make it impossible to match the arrangement of the Bayer filters between sensors. We therefore present a novel reconstruction method specifically designed to handle the case of non-negligible misalignments between the sensors. Furthermore, while previous reconstruction techniques have considered HDR assembly, debayering and denoising as separate problems, our method is capable of simultaneous HDR assembly, debayering and smoothing of the data (denoising). The method is also general in that it allows reconstruction to an arbitrary output resolution and mapping. The algorithm is implemented in CUDA, and shows video speed performance for an experimental HDR video platform consisting of four 2336x1756 pixel high-quality CCD sensors imaging the scene through a common optical system. ND filters of different densities are placed in front of the sensors to capture a dynamic range of 24 f-stops.
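As a back-of-the-envelope illustration of how ND filters of different densities combine into a 24 f-stop range, consider the following stop-count sketch. The per-sensor range and the ND spacings below are assumed values for illustration, not figures from the paper:

```python
# Illustrative sketch: combined dynamic range of a multi-sensor rig.
# sensor_stops and nd_stops are assumed values, not from the paper.
sensor_stops = 12.0                # usable f-stops of one CCD (assumed)
nd_stops = [0.0, 4.0, 8.0, 12.0]   # ND attenuation per sensor, in stops (assumed)

# With contiguous overlap (spacing <= sensor_stops), the union of the
# per-sensor ranges spans the single-sensor range plus the ND span.
total_stops = sensor_stops + max(nd_stops) - min(nd_stops)
print(total_stops)  # 24.0
```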

2 Reconstruction

We treat the reconstruction of each HDR video frame, F, as a separate problem. Using N sensors, the input data for each frame consists of a set of raw images, I_s(i, j), s = 1...N, each with a possibly varying exposure time, t_s, and ND-filter coefficient, n_s. We assume that there exists an affine transform, T_s, relating each image to a virtual reference coordinate system, in which we seek to reconstruct the HDR output image. In practice, we find the transforms, T_s, for each sensor by matching the detected corners of a chessboard calibration target (implemented in OpenCV 2.1); for our system, we obtain a maximum reprojection error of ≈ 0.1 pixels across the image.

∗e-mail: {joel.kronander, stefan.gustavson, jonas.unger}@liu.se

As a first step in the reconstruction, we perform shading correction for all sensor images, linearizing the sensor output to a common scale of reference. We then map all sensor images (pixel measurements) to the output coordinate system. This generally produces

an irregular distribution of red, green and blue pixel samples, y_{s,i,c}, captured from sensors s = 1...N, using a lexicographical ordering index i, see Figure 1(b). We now seek to reconstruct the output image on a regular grid, with spacings corresponding to the desired output resolution. For each output pixel location, z_j, we compute a locally weighted average of nearby pixel measurements, y_{s,i,c}, for each color channel c = R, G, B:

\hat{z}_{j,c} = \frac{\sum_s \sum_i w(j, s, i, c)\, y_{s,i,c}}{\sum_s \sum_i w(j, s, i, c)}    (1)

We compute the weights based on two criteria, w(j, s, i, c) = w_g(j, i) · w_r(j, s, i, c). The first factor, w_g(j, i), is a windowing function of finite support, giving higher weights to spatially nearby samples; in our implementation we use a Gaussian function with the Euclidean distance as its argument. The second factor is a radiometric weight, w_r(j, s, i, c), set according to a linear function of the raw digital input value with a small offset from the black level and the saturation point. Our method thus automatically removes saturated pixels before reconstruction/debayering by setting their radiometric weight to zero.
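As a plain, non-optimized reference for the weighted average in Eq. (1), the per-channel scatter-and-normalize loop can be sketched in NumPy as follows. The window parameters (sigma, radius) and the exact tent shape of the radiometric ramp (black, sat) are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def reconstruct_channel(sample_xy, sample_val, sample_raw, out_shape,
                        sigma=0.7, radius=2.0, black=0.05, sat=0.95):
    """Sketch of Eq. (1): weighted-average reconstruction of one color
    channel on a regular output grid from irregularly placed samples.

    sample_xy  -- (M, 2) sample positions in output pixel coordinates
    sample_val -- (M,)   linearized values y_{s,i,c}
    sample_raw -- (M,)   raw digital values in [0, 1], driving w_r
    """
    H, W = out_shape
    num = np.zeros((H, W))
    den = np.zeros((H, W))
    for (x, y), val, raw in zip(sample_xy, sample_val, sample_raw):
        # Radiometric weight w_r: linear ramp reaching zero at the
        # black level and at the saturation point (tent shape, assumed).
        wr = max(0.0, min(raw - black, sat - raw))
        if wr == 0.0:
            continue  # saturated / under-exposed samples are discarded
        # Visit output pixels inside the finite support window of radius w.
        for j in range(max(0, int(y - radius)), min(H, int(y + radius) + 1)):
            for i in range(max(0, int(x - radius)), min(W, int(x + radius) + 1)):
                d2 = (i - x) ** 2 + (j - y) ** 2
                if d2 > radius ** 2:
                    continue
                wg = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian window w_g
                num[j, i] += wg * wr * val
                den[j, i] += wg * wr
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
```

A real-time version would run this scatter as a CUDA kernel over output pixels; the loop form above is only meant to make the weighting explicit.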

Figure 1(c) shows a reconstructed HDR frame from our experimental HDR video platform. The CUDA implementation, running on an NVIDIA 580, performs simultaneous debayering, filtering and HDR reconstruction on the four 4-Mpixel input Bayer pattern images at a sustained rate of 26 fps with an output resolution of 1168x876 pixels. Compared to methods considering HDR reconstruction and debayering in separate steps, our method offers comparable quality and provides more flexibility in the choice of output resolution and mapping.
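For illustration, the chessboard-based estimation of the transforms T_s in Section 2 amounts to a least-squares affine fit from matched corner points. The paper uses OpenCV 2.1 for corner detection; the library-free fit below and its function names are our own sketch:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares estimate of a 2x3 affine transform T such that
    dst ≈ [src | 1] @ T.T, from matched 2D corner points (src in sensor
    coordinates, dst in the reference frame). Corner detection omitted."""
    A = np.hstack([src, np.ones((len(src), 1))])
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return T.T  # 2x3 affine matrix

def max_reprojection_error(T, src, dst):
    """Maximum per-corner reprojection error in pixels (the quantity
    reported as ≈ 0.1 pixels for the calibrated system)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    return np.max(np.linalg.norm(A @ T.T - dst, axis=1))
```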

References

MYSZKOWSKI, K., MANTIUK, R., AND KRAWCZYK, G. 2008. High Dynamic Range Video. Morgan & Claypool.

TOCCI, M. D., KISER, C., TOCCI, N., AND SEN, P. 2011. A Versatile HDR Video Production System. ACM Transactions on Graphics (TOG) (Proceedings of SIGGRAPH 2011) 30, 4.
