
DEGREE PROJECT IN MATHEMATICS, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2017

Automatic Projector Calibration for Curved Surfaces Using an Omnidirectional Camera

DAVID TENNANDER


Degree Projects in Optimization and Systems Theory (30 ECTS credits)
Degree Programme in Applied and Computational Mathematics (120 credits)
KTH Royal Institute of Technology, 2017

Supervisor at ÅF Technology AB: Mats Elfving
Supervisor at KTH: Xiaoming Hu

TRITA-MAT-E 2017:42 ISRN-KTH/MAT/E--17/42--SE

Royal Institute of Technology

School of Engineering Sciences

KTH SCI


Sammanfattning

This report presents a method for counteracting the distortions that arise when an image is projected onto a non-planar surface. By using an omnidirectional camera, an enclosing dome lit by multiple projectors can be calibrated. The camera was modelled with the Unified Projection Model, since that model can be adapted to a large number of camera systems. The projectors' images on the surface were read off using Gray code, and the optimal centre point of the calibrated image was then computed by numerically solving a quadratic NLP problem. Finally, a Spline surface counteracting the projection distortion is created through FAST-LTS regression. In the experimental set-up a RICOH THETA S camera was used, calibrated with the omnidir module in openCV. A result considered successful by the author was achieved, and with multiple overlapping projectors a maximum error of 0.5° was measured.


Abstract

This master's thesis presents one approach to remove distortions generated by projecting onto non-flat surfaces. By using an omnidirectional camera a full 360° dome could be calibrated and the corresponding angles between multiple projections could be calculated. The camera was modelled with the Unified Projection Model, allowing any omnidirectional camera system to be used. Surface geometry was captured by using Gray code patterns, the optimal image centre was calculated as a quadratic optimisation problem, and in the end a Spline surface countering the distortions was generated by using the FAST-LTS regression algorithm. The developed system used a RICOH THETA S camera calibrated by the omnidir module in openCV. A desirable result was achieved, and during use of overlapping projectors a maximum error of 0.5° was measured.


Acknowledgement

First of all I would like to thank Mats Elfving, my supervisor at ÅF, who has spent many hours discussing my ideas and kept me company in the otherwise lonely lab. In general I would like to thank all the great people in "Datakonsulter" at ÅF; you have made it a joy to spend time in Solna!

I also want to extend a big thank you to the people in Kårspexet, who made the hours not spent on the thesis a joy and have given me the energy to always be excited to keep going back to work.

Lastly I want to thank Xiaoming Hu, who has been my supervisor at KTH and who has had his inbox filled with my emails.

Thank you all!

Contents

1 Introduction
  1.1 Background
  1.2 Problem Statement

2 Theory
  2.1 Camera Model
    2.1.1 Pinhole Model
    2.1.2 Non Linear Distortion Model
    2.1.3 Unified Projection Model
  2.2 Calibration
    2.2.1 Calibration of Non Linear Distortion Model
    2.2.2 Calibration of the Unified Projection Model
  2.3 Spline Interpolation
    2.3.1 Catmull-Rom Spline
    2.3.2 Bézier Curve
  2.4 Linear Regression
    2.4.1 Least Squares
    2.4.2 Least Trimmed Squares
  2.5 Proof of Equivalence Between Optimization Problems

3 Method
  3.1 Solution Overview
  3.2 Camera Calibration
  3.3 Finding Projector Pixels in the Camera Image
  3.4 Lifting Image Points to a Unit Sphere
  3.5 Virtual Pinhole Projection
  3.6 Regression

4 Result
  4.1 Experimental Setup
  4.2 Camera Calibration
  4.3 Inversion of Camera Model
  4.4 Regression Method
  4.5 System Verification
    4.5.1 Calibration of Single Projector
    4.5.2 Calibration of Multi-Projector System

5 Discussion
  5.1 Analysis of the Camera Model
  5.2 Regression Implementation
  5.3 Projector Calibration System

6 Conclusion
  6.1 Fulfilment of Problem Statement
  6.2 Further Work

Chapter 1

Introduction

This chapter introduces and defines the problem solved in this thesis. Section 1.1 gives the background to the problem and Section 1.2 defines the problem this thesis aims to solve.

1.1 Background

When constructing a flight simulator the aim is to create the most realistic experience possible given a set of business constraints. One of the core parts of this experience is the visual feedback from the simulated world outside the plane. One way to present this feedback is to place the pilot and cockpit inside a dome and then project the simulated world onto the dome's surface. One example of this solution can be found at the Swedish Armed Forces' flight school; their set-up is shown in Figure 1.1.

This solution uses six projectors, all projecting different parts of the simulated world onto the dome. For the result to seem realistic the projector images need to be preprocessed before being sent to the projectors, to reduce the distortions produced by the projection angle and the surface curvature. This is today achieved by using an image processor produced and maintained by ÅF Technology AB. The process needs to be recalibrated whenever a projector or the dome moves or is replaced. As of this writing this is done by taking manual measurements of the dome and then creating a Spline warping partly by hand for every projector. Work has been done to improve and speed up this process by both Andersson [2] and Åberg [1]. Andersson created a method to automate the process using a camera positioned at the pilot's head, but the method lacked robustness and a multitude of images were needed to capture the whole dome. Åberg's solution solves the robustness problem by capturing hidden parts of the dome with a secondary camera and then triangulating each pixel's position. This thesis aims to improve on the work of both Andersson and Åberg by creating a faster and more robust calibration procedure that uses an omnidirectional camera to reduce the number of images needed, and also to reevaluate the method used to create the Spline surface.


Figure 1.1: A flight simulator used in the Swedish Armed Forces' training. Copyright belongs to Jan Basilius.

1.2 Problem Statement

Chapter 2

Theory

This chapter presents the theories used to create the resulting solution. It also includes some theories studied by the author during the thesis but not applied in the end result.

2.1 Camera Model

This section will present the pinhole model used for modelling camera sensors and then present the extensions needed to model cameras with a field of view greater than 180°.

2.1.1 Pinhole Model

A camera with a field of view less than 180° can be modelled as a pinhole camera, meaning all captured light rays need to pass through a single point in space before being captured on the camera sensor, as shown in Figure 2.1. A scene point, X ∈ R⁴, in homogeneous coordinates is mapped to a point, m, in the sensor space I_c ⊂ R³. This creates the following relation between world and sensor space:

λm = C P X    (2.1)

where λ is a scalar > 0, P is the transform from the homogeneous scene coordinate frame of reference to the frame of reference with origin in O, and C is the camera matrix on the form

C = \begin{pmatrix} f_1 & 0 & 0 & 0 \\ 0 & f_2 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}.    (2.2)

The pinhole model is far from realistic, as all modern cameras use a set of lenses to capture more light than would be possible with only a pinhole, but for a low field of view this can be compensated for by adding radial and tangential distortions. If the field of view exceeds 180° the angle θ will need to be able to take the value 90°, but with the pinhole model this results in m being infinitely far away from O_c.
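As a minimal illustration of (2.1) and (2.2), the following sketch (using the Eigen library; all function and variable names are illustrative, not from the thesis) projects a homogeneous scene point to sensor coordinates and divides out the scale λ:

#include <Eigen/Dense>
#include <iostream>

// Minimal pinhole projection sketch: lambda * m = C * P * X, with C as in (2.2).
// P is assumed to be a known 4x4 rigid transform into the camera frame.
Eigen::Vector2d projectPinhole(const Eigen::Vector4d& X,
                               const Eigen::Matrix4d& P,
                               double f1, double f2) {
    Eigen::Matrix<double, 3, 4> C = Eigen::Matrix<double, 3, 4>::Zero();
    C(0, 0) = f1;
    C(1, 1) = f2;
    C(2, 2) = 1.0;
    Eigen::Vector3d m = C * (P * X);  // homogeneous sensor point, scaled by lambda
    return m.hnormalized();           // divide by the third coordinate to remove lambda
}

int main() {
    Eigen::Matrix4d P = Eigen::Matrix4d::Identity();  // camera placed at the world origin
    Eigen::Vector4d X(0.5, -0.2, 2.0, 1.0);           // a scene point in front of the camera
    std::cout << projectPinhole(X, P, 400.0, 400.0).transpose() << std::endl;
}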

Figure 2.1: Visualisation of the Pinhole Model used for cameras with a Field of View less than 180°.

Figure 2.2: The transformation g : R2 → R3.

2.1.2 Non Linear Distortion Model

One solution to this problem is studied by Micusik and Pajdla [11]. They propose a model that expands (2.1) as

λ g(u) = P X    (2.3)

where u is the Euclidean representation of m and g : R² → R³ is a non-linear transform from the sensor space to the unit sphere around the camera centre point O, as shown in Figure 2.2. They then propose a g that can be expressed as

g(u_1, u_2) = \begin{pmatrix} u_1 \\ u_2 \\ f(r(u_1, u_2)) \end{pmatrix}    (2.4)

where f : R → R and r follows

r(u_1, u_2) = \sqrt{u_1^2 + u_2^2}.    (2.5)

For better results Scaramuzza et al. [14] proposed the idea that f could be modelled as a Taylor expansion,

f(r) = a_0 + a_1 r + a_2 r^2 + \cdots + a_N r^N.    (2.6)

This allows (2.3) to be written as

λ \begin{pmatrix} u_1 \\ u_2 \\ a_0 + a_1 r + \cdots + a_N r^N \end{pmatrix} = P X    (2.7)

resulting in a camera model that can be calibrated by estimating the extrinsic matrix P and then the intrinsic parameters a_0, a_1, . . . , a_N.

2.1.3 Unified Projection Model

Instead of using the above method, Ying and Hu [17] show that the Unified Projection Model for catadioptric cameras proposed by Geyer and Daniilidis [5] can also be made to fit a camera system using a fisheye lens.

The Unified Imaging Model projects a world point, X, by first moving it to the camera's frame of reference, F_c, by the transform

X_c = P X.    (2.8)

It then projects the point X_c onto the camera sensor, first by projecting it to the point X_S on the unit sphere around the origin, O, as

X_S = X_c / ‖X_c‖.    (2.9)

The projected point is then moved to the reference system F_ξ, whose origin, O_ξ, is at (0, 0, −ξ) in F_c, making

(X_S)_{F_ξ} = (X_S)_{F_c} + (0, 0, ξ)^T.    (2.10)

The point (X_S)_{F_ξ} is then projected to a point, m, on the normalized plane by the transform

h_{F_ξ} : (x, y, z) → (x/z, y/z, 1).    (2.11)

In the camera's reference system F_c we get the transform from X_S to m as

m = h(X_S) = ( X_{S1}/(X_{S3} + ξ), X_{S2}/(X_{S3} + ξ), 1 ).    (2.12)

Figure 2.3: The transformation from X to m described in Section 2.1.3.

The point m is then projected onto the sensor with a generalised camera matrix on the form

K = \begin{pmatrix} γ_1 & γ_1 α & u_0 \\ 0 & γ_2 & v_0 \\ 0 & 0 & 1 \end{pmatrix}    (2.13)

where γ_1 and γ_2 are the generalised focal lengths, α is the skewness and (u_0, v_0) the principal point. In Geyer's paper α is set to 0; the addition of α is done by Mei and Rives in [10].

To increase model accuracy Mei and Rives implement a distortion step, distorting the point m to a point p before applying the projection K. The distortion, D, is defined as

D(m) = \begin{pmatrix} L(ρ) m_1 + d_1(m) \\ L(ρ) m_2 + d_2(m) \\ 1 \end{pmatrix}, \quad ρ = \sqrt{m_1^2 + m_2^2}    (2.14)

where

L(ρ) = 1 + k_1 ρ^2 + k_2 ρ^4 + k_5 ρ^6    (2.15)

and

d_1(m) = 2 k_3 m_1 m_2 + k_4 (ρ^2 + 2 m_1^2)
d_2(m) = k_3 (ρ^2 + 2 m_2^2) + 2 k_4 m_1 m_2.    (2.16)

This approach results in a distortion controlled by the parameters k_1, . . . , k_5, which

can be determined during calibration.
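To make the chain of transforms concrete, here is a sketch of the forward model of this section written with Eigen. The struct layout and names are illustrative, and the tangential terms follow the k_3/k_4 form reconstructed in (2.16):

#include <Eigen/Dense>

// Sketch of the Unified Projection Model of Section 2.1.3 (forward direction).
struct UnifiedModel {
    double xi;                  // the offset ξ of (2.10)
    double k1, k2, k3, k4, k5;  // distortion parameters of (2.15)-(2.16)
    Eigen::Matrix3d K;          // generalised camera matrix (2.13)
};

// The distortion step D of (2.14)-(2.16), mapping m to p (without the K step).
Eigen::Vector2d distortPoint(const UnifiedModel& mod, const Eigen::Vector2d& m) {
    double rho2 = m.squaredNorm();
    double L = 1 + mod.k1 * rho2 + mod.k2 * rho2 * rho2
                 + mod.k5 * rho2 * rho2 * rho2;                         // (2.15)
    double d1 = 2 * mod.k3 * m.x() * m.y() + mod.k4 * (rho2 + 2 * m.x() * m.x());
    double d2 = mod.k3 * (rho2 + 2 * m.y() * m.y()) + 2 * mod.k4 * m.x() * m.y();
    return Eigen::Vector2d(L * m.x() + d1, L * m.y() + d2);             // (2.14)
}

// The full transform T = K ∘ D ∘ H ∘ W applied to a point in the camera frame.
Eigen::Vector2d projectUnified(const UnifiedModel& mod, const Eigen::Vector3d& Xc) {
    Eigen::Vector3d Xs = Xc.normalized();                               // (2.9)
    Eigen::Vector2d m(Xs.x() / (Xs.z() + mod.xi),
                      Xs.y() / (Xs.z() + mod.xi));                      // (2.12)
    Eigen::Vector2d p = distortPoint(mod, m);
    return (mod.K * Eigen::Vector3d(p.x(), p.y(), 1.0)).hnormalized();  // (2.13)
}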

2.2 Calibration

2.2.1 Calibration of Non Linear Distortion Model

In [14] Scaramuzza et al. start by assuming that all calibration points in the scene space are located in one plane (for example a flat checkerboard pattern). Thus X can be assumed to follow

X = (X, Y, 0, 1)^T.    (2.17)

Because of this, equation (2.3) can be simplified to

λ \begin{pmatrix} u_1 \\ u_2 \\ f(r) \end{pmatrix} = P X = \begin{pmatrix} p_{11} & p_{12} & t_1 \\ p_{21} & p_{22} & t_2 \\ p_{31} & p_{32} & t_3 \end{pmatrix} \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}.    (2.18)

To remove the dependency on λ, Scaramuzza et al. take advantage of the condition that P X and g(u_1, u_2) should be parallel. This is done by cross multiplying both sides of (2.18) by g(u_1, u_2), resulting in

\begin{pmatrix} u_1 \\ u_2 \\ f(r) \end{pmatrix} × \begin{pmatrix} p_{11} & p_{12} & t_1 \\ p_{21} & p_{22} & t_2 \\ p_{31} & p_{32} & t_3 \end{pmatrix} \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} = \vec{0}.    (2.19)

This system can be expanded to the three equations

u_2 (p_{31} X + p_{32} Y + t_3) − f(r)(p_{21} X + p_{22} Y + t_2) = 0    (2.20)
u_1 (p_{31} X + p_{32} Y + t_3) − f(r)(p_{11} X + p_{12} Y + t_1) = 0    (2.21)
u_1 (p_{21} X + p_{22} Y + t_2) − u_2 (p_{11} X + p_{12} Y + t_1) = 0.    (2.22)

During calibration each picture will contain L points for which P will be constant. One can then use equation (2.22) to build a system of equations for each calibration picture,

M · H = 0,    (2.23)

where

H = (p_{11}, p_{12}, p_{21}, p_{22}, t_1, t_2)^T    (2.24)

and

M = \begin{pmatrix} −u_2^1 X^1 & −u_2^1 Y^1 & u_1^1 X^1 & u_1^1 Y^1 & −u_2^1 & u_1^1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ −u_2^L X^L & −u_2^L Y^L & u_1^L X^L & u_1^L Y^L & −u_2^L & u_1^L \end{pmatrix}    (2.25)

where the superscript denotes the point in the calibration pattern. A linear estimate of H can then be constructed by solving the Least Square problem min ‖M · H‖² subject to ‖H‖² = 1. One can then use the orthogonality of P's columns to calculate p_{31} and p_{32}.

To estimate the internal parameters a linear system is built with the equations (2.20) and (2.21), using all L points in all captured pictures. This results in an overdetermined linear system depending only on the internal parameters and t_3. Scaramuzza et al. solve this system with the use of the pseudoinverse, and thus they can reconstruct all model parameters.
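The constrained problem min ‖M · H‖² subject to ‖H‖² = 1 in (2.23) is solved by the right singular vector of M belonging to the smallest singular value; a minimal Eigen sketch (assuming M has already been assembled from (2.22)):

#include <Eigen/Dense>

// The minimiser of ||M H||^2 under ||H|| = 1 is the right singular vector
// of M with the smallest singular value.
Eigen::VectorXd solveHomogeneous(const Eigen::MatrixXd& M) {
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(M, Eigen::ComputeFullV);
    return svd.matrixV().col(M.cols() - 1);
}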

2.2.2 Calibration of the Unified Projection Model

The calibration procedure developed by Li et al. for the Unified Projection Model builds on Scaramuzza's work in [14] and is described in [8]. It lets V be the set of all extrinsic and intrinsic model parameters in the Unified Projection Model,

V = {p_{11}, p_{12}, p_{21}, p_{22}, p_{31}, p_{32}, t_1, t_2, t_3, ξ, γ_1, γ_2, α, u_0, v_0, k_1, k_2, k_3, k_4, k_5},    (2.26)

and lets G be the projection function taking a world point X ∈ R³ to the sensor space. Given a set of m points, x_1, x_2, . . . , x_m, in the world space and the resulting points in the sensor space, p_1, p_2, . . . , p_m, the calibration problem can be described as the optimization problem

minimize_V  \frac{1}{2} \sum_{i=1}^{m} [G(V, x_i) − p_i]².    (MSP)

To solve (MSP) Li et al. propose the use of the Levenberg-Marquardt algorithm, while the implementation included in openCV_contrib [3] uses the Gauss-Newton method instead. In both cases an initial guess of V is needed. The extrinsic parameters p_{11}, p_{12}, p_{21}, p_{22}, p_{31}, p_{32}, t_1, t_2 are calculated in the same manner as in Section 2.2.1.

To find an initial guess, the estimated focal lengths γ_1 and γ_2 are set equal, henceforth denoted γ. Li et al. then let ξ be initialized to 1, α to 0 and all distortions to 0, resulting in the relation

h^{-1}(m) ∼ \begin{pmatrix} u_c \\ v_c \\ \hat f(r(u_c, v_c)) \end{pmatrix}, \quad \hat f(r) = \frac{γ}{2} − \frac{1}{2γ} r(u_c, v_c)^2,    (2.27)

where u_c and v_c are the centred image points in the sensor space. By substituting f(r) with \hat f(r) in (2.20) and (2.21), an overdetermined linear equation system depending only on γ, 1/γ and t_3 can be created for each calibration image. For each system the Least Square solution is found, and the resulting γ is the mean of all solutions. Li et al. note that because γ and 1/γ are seen as separate variables the estimate is less accurate, but also note that the inaccuracy should be removed when the initial guess is refined during the solving of (MSP).

2.3 Spline Interpolation

To represent a transform

T : D → R    (2.28)

where D is a quadratic subset of R² and R is a subset of R², a two-dimensional Spline surface can be used.

Figure 2.4: The transform T : D → R represented by a Spline surface defined by 25 control points.

The Spline surface is defined by a few control points, evenly spaced in D, and their known locations in R, as shown in Figure 2.4. To create the continuous transform in-between the control points, interpolation using Catmull-Rom Splines is used on the interior paths and Bézier Curves are used on the edges.

2.3.1 Catmull-Rom Spline

In the one dimensional case a point p(v) on a Catmull-Rom spline is given by the equation

p(v) = V M P    (2.29)

where

V = \begin{pmatrix} 1 & v & v^2 & v^3 \end{pmatrix}, \quad M = \begin{pmatrix} 0 & 1 & 0 & 0 \\ −τ & 0 & τ & 0 \\ 2τ & τ−3 & 3−2τ & −τ \\ −τ & 2−τ & τ−2 & τ \end{pmatrix}, \quad P = \begin{pmatrix} P_0 \\ P_1 \\ P_2 \\ P_3 \end{pmatrix},    (2.30)

and τ is a known tension parameter, as described by Twigg in [16].

In the two dimensional case, shown in Figure 2.5, the point p(u, v) is calculated by first calculating the points P_{v0}, P_{v1}, P_{v2} and P_{v3} using Equation (2.29) and the corresponding columns of points P, and then calculating p as

p(u, v) = \begin{pmatrix} 1 & u & u^2 & u^3 \end{pmatrix} M \begin{pmatrix} P_{v0} \\ P_{v1} \\ P_{v2} \\ P_{v3} \end{pmatrix}.    (2.31)

From Equation (2.29) the vector (P_{v0}, P_{v1}, P_{v2}, P_{v3}) can be written as

(P_{v0}, P_{v1}, P_{v2}, P_{v3}) = V M P    (2.32)

Figure 2.5: Figure explaining the notation used in the calculation of the point p(u, v) in a spline patch.

where

P = \begin{pmatrix} P_{00} & P_{01} & P_{02} & P_{03} \\ P_{10} & P_{11} & P_{12} & P_{13} \\ P_{20} & P_{21} & P_{22} & P_{23} \\ P_{30} & P_{31} & P_{32} & P_{33} \end{pmatrix}.    (2.33)

Combining Equation (2.31) and (2.32) we get an expression for p(u, v) as

p(u, v) = U M P^T M^T V^T    (2.34)

or, if a more standard notation is preferred,

p(u, v) = V M P M^T U^T.    (2.35)
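A sketch of evaluating a patch with (2.35), applied to one coordinate of the 4×4 control points of Figure 2.5; the function names are illustrative and τ defaults to the common value 0.5:

#include <Eigen/Dense>

// The row vector V = (1, v, v^2, v^3) of (2.30).
Eigen::RowVector4d powers(double t) {
    return Eigen::RowVector4d(1.0, t, t * t, t * t * t);
}

// The Catmull-Rom basis matrix M of (2.30) with tension tau.
Eigen::Matrix4d catmullRomBasis(double tau) {
    Eigen::Matrix4d M;
    M << 0,       1,       0,           0,
         -tau,    0,       tau,         0,
         2 * tau, tau - 3, 3 - 2 * tau, -tau,
         -tau,    2 - tau, tau - 2,     tau;
    return M;
}

// p(u, v) = V M P M^T U^T as in (2.35); P holds one coordinate (x or y)
// of the 16 control points P_00..P_33.
double evalPatch(double u, double v, const Eigen::Matrix4d& P, double tau = 0.5) {
    Eigen::Matrix4d M = catmullRomBasis(tau);
    return (powers(v) * M * P * M.transpose() * powers(u).transpose()).value();
}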

2.3.2 Bézier Curve

This definition of M_B is constructed to generate a smooth transition between the Bézier Curve and the Catmull-Rom Spline. With the use of equation (2.36) the point p(u, v) can be derived, in analogy with its construction in Section 2.3.1, for patches along the edge. This allows us to interpolate values in the whole domain, D, of the transform T defined in (2.28).

2.4 Linear Regression

To find an estimate, β*, to an overdetermined linear system on the form

y_i = β x_i + ε_i, ∀i    (2.38)

regression analysis can be applied. The first and most common method is the method of least squares developed by Legendre [7], presented in Section 2.4.1. A slower but more robust method is to find the Least Trimmed Squares estimator proposed by Rousseeuw in 1985 [12]. One algorithm to find this estimator was later developed by Rousseeuw & Van Driessen [13] and is described in Section 2.4.2.

2.4.1 Least Squares

The Least Square method finds the linear estimator β* which solves

minimize_β  \sum_{i=1}^{n} ‖β x_i − y_i‖²    (LS)

given a data set containing n data pairs {x_i, y_i}. By constructing the matrix and vector

X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}    (2.39)

the problem (LS) can then be rewritten as

minimize_β  ‖Xβ − y‖².    (LS)

The solution to this unconstrained problem is found where the gradient of ‖Xβ − y‖² equals zero. The gradient, ∇_β f, of the objective function is

∇_β f = 2 X^T X β − 2 X^T y.    (2.40)

β* is found by setting ∇_β f = 0, resulting in the normal equation

X^T X β* = X^T y.    (2.41)

To find β* solving Equation (2.41) one could use the inverse of X^T X, but to increase numerical stability a matrix factorization, such as the QR or singular value decomposition, is preferable [6, 15].
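As a sketch, assuming the Eigen library, (LS) can be solved through a QR factorization instead of explicitly inverting X^T X:

#include <Eigen/Dense>

// Minimises ||X beta - y||^2 via a column-pivoting Householder QR,
// avoiding the explicit inverse of X^T X in (2.41).
Eigen::VectorXd leastSquares(const Eigen::MatrixXd& X, const Eigen::VectorXd& y) {
    return X.colPivHouseholderQr().solve(y);
}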


2.4.2 Least Trimmed Squares

Instead of minimizing the total sum of squared errors, the Least Trimmed Squares method proposed by Rousseeuw [12] finds its estimator β̂ by solving

minimize_β  \sum_{i=1}^{h} (r(β)²)_{i:n}    (LTS)

where (r(β)²)_{1:n} ≤ (r(β)²)_{2:n} ≤ · · · ≤ (r(β)²)_{n:n} are the ordered squared residuals,

r_i(β)² = (β^T x_i − y_i)².    (2.42)

This is equivalent to finding the h-subset, H, of all measured points whose Least Square objective function is the lowest. One algorithm developed in 2006 by Rousseeuw & Van Driessen [13], called FAST-LTS, finds β̂ by selecting a large number of subsets H_i and then, for every subset, taking C-steps, as described by Algorithm 1, until convergence. The Least Square solution to the subset H_i with the lowest objective value is then set to be β̂.

Algorithm 1 The C-step developed by Rousseeuw & Van Driessen. Takes a set of indexes H_old and generates a new set H_new.
1: function C-Step(H_old)
2:   β ← Least Square solution based on H_old
3:   Compute all residuals r_i(β) for i = 1, . . . , n
4:   Find the permutation π solving ‖r_{π(1)}(β)‖ ≤ ‖r_{π(2)}(β)‖ ≤ · · · ≤ ‖r_{π(n)}(β)‖
5:   H_new ← {π(1), π(2), . . . , π(h)}
6:   return H_new
7: end function
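A compact sketch of one C-step on a linear model y = Xβ, using Eigen; the helper names and the QR-based Least Square solve are illustrative choices, not the thesis implementation:

#include <algorithm>
#include <numeric>
#include <vector>
#include <Eigen/Dense>

std::vector<int> cStep(const std::vector<int>& Hold,
                       const Eigen::MatrixXd& X, const Eigen::VectorXd& y,
                       int h) {
    // Least Square solution based on the rows of X, y indexed by Hold.
    Eigen::MatrixXd Xh(Hold.size(), X.cols());
    Eigen::VectorXd yh(Hold.size());
    for (size_t k = 0; k < Hold.size(); ++k) {
        Xh.row(k) = X.row(Hold[k]);
        yh(k) = y(Hold[k]);
    }
    Eigen::VectorXd beta = Xh.colPivHouseholderQr().solve(yh);

    // Order all n points by absolute residual and keep the h smallest.
    Eigen::VectorXd r = (X * beta - y).cwiseAbs();
    std::vector<int> pi(X.rows());
    std::iota(pi.begin(), pi.end(), 0);
    std::sort(pi.begin(), pi.end(), [&](int a, int b) { return r(a) < r(b); });
    pi.resize(h);
    return pi;  // H_new
}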

The probability of converging to a "good" result with FAST-LTS is calculated by Rousseeuw & Van Driessen to be

1 − (1 − (1 − η)^p)^m    (2.43)

where η is the probability of a data point being contaminated, p is the dimension of x_i and m is the number of starting sets. To achieve this probability the starting sets are generated with Algorithm 2.

Algorithm 2 Generates a subset H of all data points, with the size h, given that each point has the dimensionality p.
1: function SetGenerator(h)
2:   J ← a random p-subset.
3:   while J does not have full rank do
4:     Extend J by one random observation.
5:   end while
6:   H ← C-Step(J)
7:   return H
8: end function

2.5 Proof of Equivalence Between Optimization Problems

The problem of finding the direction with the lowest maximal angular error to a set, D, of n directions can be modelled as the problem

minimize_{w∈U, d∈R}  d
subject to  ‖w − v_i‖² ≤ d,  i = 1, 2, . . . , n    (W)

if U is the unit sphere and V is the set of unit vectors pointing in the directions contained in D. To simplify numerical calculation the following proposition was made:

Proposition 1. For the optimal solution, w̃*, to

minimize_{w̃∈R³}  ‖w̃‖²
subject to  w̃ · v_i ≥ 1,  i = 1, 2, . . . , n,    (W̃)

it holds that

w̃*/‖w̃*‖ = w*    (2.44)

if and only if w* is the optimal solution to (W).

To prove this, some lemmas need to be proven first. The first thing needing proof is that the active constraints of the two problems are constructed by the same vectors:

Lemma 1. The vectors in V* ⊆ V are part of the active constraints of (W) if and only if the vectors in V* are part of the active constraints of (W̃).

Proof. For the active vectors v_i* ∈ Ṽ* of (W̃) the following equality holds,

w̃* · v_i* = 1    (2.45)

or, as ‖v_i*‖ = 1, it can be written as

‖w̃*‖ cos α_i = 1    (2.46)

where α_i is the angle between w̃* and v_i*. Because v_i* is active we know α_i ≥ α_j for all j = 1, 2, . . . , n. Thus Ṽ* contains the vectors lying on the edge of the smallest possible cone containing all vectors in V.

For (W), the optimal w* minimizes the largest distance, d, to the set V. Because w* and all vectors in V have length 1, finding the w* which minimizes this distance d is equivalent to finding the smallest possible cone containing all vectors of V. The active vectors for (W) are then the vectors lying on the edge of this cone. Therefore the active vectors for (W) must be equal to the vectors in Ṽ*. Thus Lemma 1 holds.

Then we need to build up the proof that, for each dimensionality of the subspace spanned by V*, the logical conclusion is Proposition 1.

Lemma 2. If only one vector v_1* spans the subspace of all active vectors for the problems (W) and (W̃), it holds that

w̃*/‖w̃*‖ = w*    (2.47)

if and only if w* is the optimal solution to (W).

Proof. Given that all active vectors are in span(v_1*) it must hold that

w̃* = w* = v_1*.    (2.48)

⇒: If w* is the minimizer of (W), equation (2.48) holds and thus it follows that

w̃*/‖w̃*‖ = w*/‖w*‖ = w*.    (2.49)

⇐: Equation (2.48) tells us ‖w̃*‖ = 1 and thus

w̃*/‖w̃*‖ = w̃*.    (2.50)

Because of equation (2.48) it must then hold that w̃* also is the minimizer of (W).

Lemma 3. Given that the set V*, containing all vectors constructing the active constraints of both (W) and (W̃), is contained in the subspace

V* ⊂ span(v_1*, v_2*)    (2.51)

where v_1* and v_2* are two of the active vectors, it holds that

w̃*/‖w̃*‖ = w*    (2.52)

if and only if w* is the optimal solution to (W).

Proof. As both v_1* and v_2* are active constraints of (W) it is known that

(w* − v_1*)^T (w* − v_1*) = (w* − v_2*)^T (w* − v_2*).    (2.53)

Because ‖v_1*‖ = ‖v_2*‖ = 1 it then follows that

w*^T (v_2* − v_1*) = 0.    (2.54)

Because w* = arg min (W) it must hold that

w* ∈ span(V*).    (2.55)

From equation (2.55) it is then clear that there exist a, b such that

w* = a v_1* + b v_2*.    (2.56)

By combining equation (2.54) and (2.56) the conclusion is

(a − b) v_1*^T v_2* = (a − b).    (2.57)

Thus either v_1*^T v_2* = 1 or a = b. If v_1*^T v_2* = 1 the only conclusion is that v_1* = v_2*, and thus V* is one dimensional and we fall back to Lemma 2. If v_1*^T v_2* ≠ 1 it then holds that

a = b.    (2.58)

By combining equation (2.56) and (2.58) it follows that

w* = a (v_1* + v_2*).    (2.59)

For the problem (W̃) the two active constraints consisting of v_1* and v_2* are

w̃*^T v_1* = 1    (2.60)
w̃*^T v_2* = 1.    (2.61)

By subtracting equation (2.60) from (2.61),

w̃*^T (v_2* − v_1*) = 0    (2.62)

is known to be true. Equations (2.60) and (2.61) together with ‖v_i*‖ = 1 can be written as

‖w̃*‖ cos α_i = 1,  i = 1, 2,    (2.63)

where α_i is the angle between w̃* and v_i*. As w̃* is the minimizer of (W̃), both α_1 and α_2 are minimized, resulting in that w̃* must be contained in

w̃* ∈ span(v_1*, v_2*).    (2.64)

In the same fashion as above one can conclude from equation (2.62) and (2.64) that there exist α, β with α = β such that

w̃* = α v_1* + β v_2* = α (v_1* + v_2*).    (2.65)

⇒: If w* solves (W) we know that ‖w*‖ = 1 and that equation (2.65) holds, so w̃* is parallel to w*. Thus it is clear that

w̃*/‖w̃*‖ = α w*/‖α w*‖ = w*.    (2.66)

⇐: If we know equation (2.44) holds, we know w* is the unit vector pointing in the direction of w̃*; from equation (2.65) we know that the unit vector in the space spanned by w̃* fulfils all active constraints of (W), and thus w* is the solution to (W).

Lemma 4. Given that there exist three linearly independent vectors v_1*, v_2* and v_3* which are part of the active constraints of both (W) and (W̃), it holds that

w̃*/‖w̃*‖ = w*    (2.67)

if and only if w* is the optimal solution to (W).

Proof. From (W) we know

(w* − v_1*)^T (w* − v_1*) = (w* − v_2*)^T (w* − v_2*)    (2.68)
(w* − v_1*)^T (w* − v_1*) = (w* − v_3*)^T (w* − v_3*).    (2.69)

By using ‖v_j*‖ = 1, equations (2.68) and (2.69) give us

w*^T (v_1* − v_2*) = 0    (2.70)
w*^T (v_1* − v_3*) = 0.    (2.71)

For w̃* and v_1*, v_2* and v_3* solving (W̃) it is known that

w̃*^T v_1* = w̃*^T v_2* = 1    (2.72)
w̃*^T v_1* = w̃*^T v_3* = 1    (2.73)

and thus it must hold that

w̃*^T (v_1* − v_2*) = 0    (2.74)
w̃*^T (v_1* − v_3*) = 0.    (2.75)

Given that v_1*, v_2* and v_3* all are linearly independent it is clear that there exists an α such that

w̃* = α w*    (2.76)

as equations (2.70), (2.71), (2.74) and (2.75) assure that w* and w̃* are in the same

one-dimensional subspace.

⇒: If w* solves (W) we know that ‖w*‖ = 1 and equation (2.76) holds. Thus it is clear that

w̃*/‖w̃*‖ = α w*/‖α w*‖ = w*.    (2.77)

⇐: If we know equation (2.44) holds, we know w* is the unit vector pointing in the direction of w̃*; from equation (2.76) we know that the unit vector in the space spanned by w̃* fulfils all active constraints of (W), and thus w* is the solution to (W).

We can now conduct the proof of Proposition 1:

Proof of Proposition 1. Given Lemma 1 we know that (W) and (W̃) share the same set of constraining vectors, V*. Because V* ⊂ R³, the subspace spanned by V* has dimension one, two or three, and Proposition 1 then follows from Lemma 2, Lemma 3 and Lemma 4 respectively.

Chapter 3

Method

This chapter describes the developed solution and the methods used.

3.1 Solution Overview

The work in this thesis was to develop a method which calibrates one or more projectors such that the projector images have the perspective as if they were all projected from a specific vantage point. This was achieved by measuring the direction from the vantage point to the screen for each projector pixel and then fitting a bi-cubic Spline transform on top of the data, to minimize measuring noise and allow for manual recalibration. The following steps were designed and used to achieve this:

1. Calibrate the camera model
2. Map the projector pixels
3. Create a virtual pinhole projector model
4. Create a Spline to minimize the error between the projector and the pinhole model.

Step 1 was done by creating a stand-alone c++ program, and Steps 2 to 4 were implemented as a c++ library controlled by a c# solution.

3.2 Camera Calibration

The cameras were assumed to follow the Unified Projection Model described in Section 2.1.3. The model's parameters were found by analysing 50 images of checkerboard patterns for each sensor with the openCV_contrib module cv::omnidir [3].
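As a sketch of how such a calibration call can look with the cv::omnidir module (the surrounding wrapper and the flag/criteria choices are illustrative; the detected checkerboard corners are assumed to be collected beforehand):

#include <opencv2/ccalib/omnidir.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Runs the Unified Projection Model calibration of Section 2.2.2 and returns
// the RMS re-projection error; K, xi and D receive the intrinsic parameters.
double calibrateOmni(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                     const std::vector<std::vector<cv::Point2f>>& imagePoints,
                     cv::Size imageSize, cv::Mat& K, cv::Mat& xi, cv::Mat& D) {
    std::vector<cv::Vec3d> rvecs, tvecs;
    cv::Mat idx;  // indices of the images actually used by the calibration
    cv::TermCriteria crit(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 200, 1e-8);
    return cv::omnidir::calibrate(objectPoints, imagePoints, imageSize,
                                  K, xi, D, rvecs, tvecs, 0, crit, idx);
}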

3.3 Finding Projector Pixels in the Camera Image

To find the projector's pixels in the camera image, the technique described by Andersson in [2] was used. The projector was set to display horizontal Gray code patterns, and their inverses, for all levels m ∈ {1, . . . , n}, where n is the bit depth,

n = ⌈log₂(i)⌉,    (3.1)

and i is the pixel width of the projector image. Each pattern was captured with the camera and the difference in intensity, ∆_m, between the pattern and its inverse was calculated for each camera pixel. For each camera pixel the Gray code series {∆_1, . . . , ∆_n} was then decoded into a binary index by the use of Algorithm 3. Thus a mapping between the pixels of the projector and the camera could be established. This was then repeated for vertical indexes as well.

Algorithm 3 Converts a series {∆_1, . . . , ∆_n} of Gray code measures into the corresponding binary index, i.
1: i ← {0, . . . , 0}_{1×(n+1)}
2: for each m ∈ {1, 2, . . . , n} do
3:   if ∆_m > δ then
4:     i_m ← ¬i_{m−1}
5:   else if ∆_m < −δ then
6:     i_m ← i_{m−1}
7:   else if −δ < ∆_m < δ then
8:     i_m ← 0
9:     for each j ∈ {m, m + 1, . . . , n} do
10:      i_j ← 1
11:    end for
12:  end if
13: end for
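For reference, the binary-reflected Gray code used for such patterns can be generated from a column index c as c XOR (c >> 1); the following openCV sketch (the names and the MSB-first convention are illustrative, not the thesis implementation) renders one pattern level:

#include <opencv2/core.hpp>

// Renders the horizontal Gray code pattern for the given level: pattern level
// `level` displays one bit of the binary-reflected Gray code of each column.
cv::Mat grayCodePattern(int width, int height, int level, int bitDepth) {
    cv::Mat pattern(height, width, CV_8UC1);
    for (int c = 0; c < width; ++c) {
        unsigned gray = static_cast<unsigned>(c) ^ (static_cast<unsigned>(c) >> 1);
        unsigned bit = (gray >> (bitDepth - 1 - level)) & 1u;  // MSB first
        pattern.col(c).setTo(cv::Scalar(bit ? 255 : 0));
    }
    return pattern;
}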

3.4 Lifting Image Points to a Unit Sphere

Let X be a point in the world in the camera frame. As described in Section 2.1.3, the transform taking it to a point in the captured image is a composition of transforms.

W: The point, X, is projected to a point X_S on the unit sphere around the camera centre O_c.

H: This point is then projected to a point, m_p, on the reference plane as

m_p = h(X_S) = ( X_S/(Z_S + ξ), Y_S/(Z_S + ξ), 1 )^T    (3.2)

where (X_S, Y_S, Z_S) are the components of X_S.

D: The point m_p is then distorted to a point m_d by the distortion D of (2.14).

K: The point on the reference plane is then moved to the image plane by the linear transform

p = K m_d = \begin{pmatrix} f_1 η & f_1 η α & u_0 \\ 0 & f_2 η & v_0 \\ 0 & 0 & 1 \end{pmatrix} m_d.    (3.3)

As a result we can write the total transform, T, as

T = K ∘ D ∘ H ∘ W    (3.4)

as Mei et al. do in [10].

To lift an image point p to the unit sphere we need the inverse of T, or rather the inverse of T without W. Let us call this transform T⁻¹; it can be expressed as

T⁻¹ = H⁻¹ ∘ D⁻¹ ∘ K⁻¹.    (3.5)

By calculating each composite inverse we can then create T⁻¹. K⁻¹ is a simple matrix inverse of K and follows

K⁻¹ = \begin{pmatrix} \frac{1}{f_1 η} & \frac{−α}{f_2 η} & \frac{v_0 α}{f_2 η} − \frac{u_0}{f_1 η} \\ 0 & \frac{1}{f_2 η} & \frac{−v_0}{f_2 η} \\ 0 & 0 & 1 \end{pmatrix}.    (3.6)

The distortion transform D is non-linear, and work has been done by Drap and Lefèvre [4] to find an analytical inverse, but their work only takes radial distortion into account. A numerical approach to approximate the inverse has proven successful for distortion in both the radial and tangential directions [9]. Here the inverse is the result of using Algorithm 4, created by Mei et al. as described in [10].

Algorithm 4 Inverse calculation of the distortion, D, with fixed loop count.
1: function distortionInverse(m_d)
2:   ∆ ← D(m_d) − m_d
3:   m_p ← m_d − ∆
4:   for each i ∈ {1, 2, . . . , 6} do
5:     ∆ ← D(m_p) − m_p
6:     m_p ← m_d − ∆
7:   end for
8:   return m_p
9: end function

The last step of projecting the undistorted point onto the unit sphere is done with the use of h⁻¹(·), calculated by Mei et al. [10] to

h⁻¹(m) = ( s m_1, s m_2, s − ξ )^T, \quad s = \frac{ξ + \sqrt{1 + (1 − ξ²)(m_1² + m_2²)}}{m_1² + m_2² + 1}.    (3.7)
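A sketch of the lifting chain T⁻¹, reusing the UnifiedModel and distortPoint sketches from Section 2.1.3; the fixed six-iteration loop mirrors Algorithm 4 and the final step implements h⁻¹ as in (3.7):

#include <cmath>
#include <Eigen/Dense>

// D^{-1} by fixed-point iteration with a fixed loop count, as in Algorithm 4.
Eigen::Vector2d undistort(const UnifiedModel& mod, const Eigen::Vector2d& md) {
    Eigen::Vector2d mp = md - (distortPoint(mod, md) - md);
    for (int i = 0; i < 6; ++i)
        mp = md - (distortPoint(mod, mp) - mp);
    return mp;
}

// h^{-1} of (3.7): lifts an undistorted normalized point to the unit sphere.
Eigen::Vector3d liftToSphere(const Eigen::Vector2d& m, double xi) {
    double r2 = m.squaredNorm();
    double s = (xi + std::sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0);
    return Eigen::Vector3d(s * m.x(), s * m.y(), s - xi);
}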

(a) Construction of the new base. (b) Projection of a vector v onto the normalized plane Π.

Figure 3.1: Visualisation of the construction of the new orthonormal base (x′, y′, z′) aligned with w and the plane Π.

3.5 Virtual Pinhole Projection

To find the best fitted virtual pinhole projection capturing all seen projector pixels, the vector w on the unit sphere, U ⊂ R³, which solves the problem

minimize_{w∈U}  max_i ( ‖w − v_i‖² )    (W)

where v_i for i ∈ {1, 2, . . . , n} are the unit vectors pointing at each mapped projector pixel, is calculated. Thanks to Proposition 1 in Section 2.5, (W) can be reformulated to finding the vector w̃ solving

minimize_{w̃∈R³}  ‖w̃‖²
subject to  w̃ · v_i ≥ 1,  i = 1, 2, . . . , n.    (W̃)

After w̃ is found, a new left-handed base, (x′, y′, z′), can be defined by setting w as the new z′-axis and bounding the x′-axis to the xz-plane, as shown in Figure 3.1a. In this new base all vectors in {v_1, . . . , v_n} were then projected to the points {p_1, . . . , p_n} on the normalized plane Π.
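A sketch of the base construction in Figure 3.1a (the names are illustrative, and w is assumed not to be parallel to the y-axis):

#include <Eigen/Dense>

// z' is set to the direction of w and x' is bound to the xz-plane, i.e. x' is
// orthogonal to both the world y-axis and z'; y' completes the base.
void buildBase(const Eigen::Vector3d& w,
               Eigen::Vector3d& xp, Eigen::Vector3d& yp, Eigen::Vector3d& zp) {
    zp = w.normalized();
    xp = Eigen::Vector3d::UnitY().cross(zp).normalized();  // zero y-component
    yp = zp.cross(xp);  // flip the sign here for a left-handed convention
}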

3.6 Regression

To create a continuous approximation, T̃⁻¹, of the inverse, T⁻¹, of the transform

T : Π → P    (3.10)

where P is the R² space containing all projector pixel coordinates and Π is the virtual projector image calculated in Section 3.5, a Spline surface was used. This surface is defined by n × m control points, P_ij ∈ P, and the positions of these points were optimized by minimizing the squared error,

‖T⁻¹(p_k) − T̃⁻¹(p_k)‖²,    (3.11)

for a subset of all measured pairs (p_k, T⁻¹(p_k)).

To solve this regression problem a linear system of equations was constructed by looking locally, for each measurement, at equation (2.35) in tensor form,

T̃⁻¹(p_k) = \sum_{i=0}^{3} \sum_{j=0}^{3} (V_k M)_i (U_k M)_j P_{ij}^{local}.    (3.12)

This equation could then be translated to a linear equation in the global coordinate system as

T̃⁻¹(p_k) = ( 0 · · · (V_k M)_0 (U_k M)_0  (V_k M)_0 (U_k M)_1 · · · 0 · · · (V_k M)_1 (U_k M)_0 · · · ) \begin{pmatrix} P_{00} \\ P_{01} \\ \vdots \\ P_{0m} \\ P_{10} \\ P_{11} \\ \vdots \\ P_{nm} \end{pmatrix}    (3.13)

where the elements of V_k M and U_k M are placed in such a way that the scalars (V_k M)_i and (U_k M)_j map to the corresponding positions of the global points P_ij. When all N measurements are used, this creates the overdetermined linear system

A · P = ( T⁻¹(p_1), T⁻¹(p_2), . . . , T⁻¹(p_N) )^T    (3.14)

where A is constructed from all the row vectors from equation (3.13). The best estimation of the vector

P = ( P_{00}, P_{01}, · · · , P_{0m}, P_{10}, P_{11}, · · · , P_{nm} )^T    (3.15)

was then found with the FAST-LTS method described in Section 2.4.2.
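A sketch of stacking the rows of (3.13) into a sparse matrix A with Eigen; globalIndex, which maps each measurement's local 4×4 patch points to columns of the global vector P, and the other names are hypothetical:

#include <array>
#include <vector>
#include <Eigen/Sparse>

Eigen::SparseMatrix<double> assembleSystem(
        const std::vector<Eigen::RowVector4d>& VM,          // V_k M per measurement
        const std::vector<Eigen::RowVector4d>& UM,          // U_k M per measurement
        const std::vector<std::array<int, 16>>& globalIndex,
        int numControlPoints) {
    std::vector<Eigen::Triplet<double>> triplets;
    for (int k = 0; k < static_cast<int>(VM.size()); ++k)
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                triplets.emplace_back(k, globalIndex[k][4 * i + j],
                                      VM[k](i) * UM[k](j));  // (V_k M)_i (U_k M)_j
    Eigen::SparseMatrix<double> A(static_cast<int>(VM.size()), numControlPoints);
    A.setFromTriplets(triplets.begin(), triplets.end());
    return A;
}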


Chapter 4

Result

This chapter presents the experimental set-up used to verify the method presented in Chapter 3 and the corresponding results.

4.1 Experimental Setup

The developed system used the omnidirectional camera RICOH THETA S, and for more exact corner extraction a Logitec C920 was used.

During the system verification a soft paper composite was used as the projection surface. The surface was 3 × 1.2 meters with a curvature radius of approximately 2 meters. The omnidirectional camera was placed straight in front of the surface at a 2 meter distance. Projector 2, used in both the single projector case and the two projector case, was placed 1 meter out from the centre of the screen and 0.5 meters to the left. Projector 1, only used in the two projector case, was placed 1.5 meters from the screen centre and 0.5 meters to the right. The set-up is presented in Figure 4.1. Projector 1 was an Optoma EH2000ST and Projector 2 a BenQ MP780 ST.

All calculations were done on a DELL Latitude E7470, running an Intel Core i7-6600 CPU at up to 2.8 GHz with the operating system Windows 10. For all calculations the libraries Eigen and openCV were used in a Visual c++ library developed by the author.

Figure 4.1: Sketch of the experimental set-up: the screen, the camera and Projectors 1 and 2.

Table 4.1: The number of images successfully used during the calibration and the resulting re-projection error.

Camera   Images   RMS error [pixels]
Left     44       0.204585
Right    18       0.667097

4.2 Camera Calibration

50 pictures for each side of the camera were taken, and the omniDir library [3], using the calibration method described in Section 2.2.2, was used to run the calibration. The following intrinsic parameters were estimated:

K_left = \begin{pmatrix} 425.859 & 0.320319 & 320.164 \\ 0 & 426.268 & 318.298 \\ 0 & 0 & 1 \end{pmatrix}, \quad ξ_left = 1.27373

with the distortion parameters

k_1^left = −2.53718 · 10⁻¹, k_2^left = 1.47391 · 10⁻², k_3^left = −2.06352 · 10⁻⁴, k_4^left = −8.97213 · 10⁻⁴

for the left camera. For the right camera the following results were achieved:

K_right = \begin{pmatrix} 410.536 & −0.703598 & 319.560 \\ 0 & 409.634 & 317.808 \\ 0 & 0 & 1 \end{pmatrix}, \quad ξ_right = 1.20604

with the distortion parameters

k_1^right = −2.46282 · 10⁻¹, k_2^right = 1.63451 · 10⁻², k_3^right = −7.62555 · 10⁻⁴, k_4^right = −4.88076 · 10⁻⁴.

Not all images were successfully used in the calibration. The number of images used and the resulting root mean square error are presented in Table 4.1.

4.3 Inversion of Camera Model

The proposed undistort method described in Section 3.4 produced the error images shown in Figure 4.2. Here each pixel is moved to the sensor space with K⁻¹, and the error introduced when the distortion is inverted with Algorithm 4 is measured.

(a) Left camera. (b) Right camera.

Figure 4.2: The error of using Algorithm 4 as the inverse of the distortion function defined by equation (2.14). White corresponds to an error of 1 or more pixels and black to zero.

Figure 4.3: (a) Left camera. (b) Right camera.

Table 4.2: The seven different test cases used during the regression evaluation.

Case  f(x)                 ∆       v    N      r
1     2x + 10              0       0    2000   0
2     2x + 10              0       0.1  2000   0
3     2x + 10              −200    0    2000   1/20
4     2x + 10              10000   0    2000   1/20
5*    2x + 10              −20     5    2000   1/15
6     1 + 2x + 3x² + 4x³   −20     0    2000   0
7     1 + 2x + 3x² + 4x³   −20     0    20000  0

*The 5th case used f̂(x) = 6x + 10 on the outliers.

Table 4.3: The estimated function f(x) and the resulting objective value using FAST-LTS.

Case  Estimated f(x)           Objective value  Execution time [ms]
1     2x + 10                  0                29
2     2.000365x + 9.995482     1.460589         85
3     2x + 10                  0                41
4     2x + 10                  0                33
5     1.946463x + 10.266341    7040.234959      66
6     1 + 2x + 3x² + 4x³       0                64
7     1 + 2x + 3x² + 4x³       0                61

4.4 Regression Method

For each test case the FAST-LTS algorithm was tested on N generated data points {x_i, y_i} constructed as

y_i = f(x_i) + ε_i + I_i ∆    (4.1)

where f(·) is the function to be reconstructed, ∆ is a systematic error, I_i is an indicator function telling whether i is an outlier, and ε_i is noise drawn from N(0, v). The ratio of outliers in the data is r. The seven test cases can be seen in Table 4.2 and the results can be seen in Table 4.3. A comparison with the Least Square method is visualised in Figure 4.4.

4.5 System Verification

Figure 4.4: The data used in case 5 and the resulting regressions from both the FAST-LTS method and the Least Square method.

4.5.1 Calibration of Single Projector

The method presented by the author was tested on a single projector set-up, and the captured projector pixels can be seen in Figures 4.5a and 4.5b. Figure 4.5a presents all pixels and whether or not they were part of the optimal h-subset in the LTS regression, and Figure 4.5b shows each pixel with a colour corresponding to its measured position in the projector image. The resulting control points calculated in the regression can be seen in Figure 4.6 and the end results can be seen in Figures 4.7 and 4.8. The images are captured from the same position as the omnidirectional camera stood during calibration, with an accuracy of ±5 mm. In Figure 4.7 a static checker pattern is warped and in Figure 4.8 video from the flight simulator is warped.

4.5.2 Calibration of Multi-Projector System

Two projectors were aimed at the same surface from different positions. The two projector images were calibrated and a checker pattern was projected by each of them. Each pattern was captured with the omnidirectional camera and the corners were extracted. The extracted corners can be seen in Figure 4.9 and the pixel positions of the extracted corners can be seen in Figure 4.10. For each projector the corners were moved to the unit sphere as described in Section 3.4, and the minimum, maximum and mean angular errors are presented in Table 4.4.

(a) Ignored pixels. (b) Pixels marked with a colour gradient; red increases from top to bottom and blue from right to left in the projector space.

Figure 4.5: The projector pixels found during the calibration run, plotted in Π-space.

Figure 4.6: The resulting control points calculated by the FAST-LTS algorithm.

Table 4.4: The angular error between the extracted corners of projector 1 and projector 2.

Angular Error
Minimum [deg]   Maximum [deg]   Mean [deg]

(a) Original projection. (b) Warped projection.

Figure 4.7: The resulting warp used on a checker pattern, using the control points seen in Figure 4.6.

(a) Original projection. (b) Warped projection.

Figure 4.8: The resulting warp used on the actual flight simulator video feed, using the control points seen in Figure 4.6.

Table 4.5: Absolute error between extracted corners from the two projectors.

Camera          Minimum [pixels]   Maximum [pixels]   Mean [pixels]
Logitec C920    1.72235 · 10⁰      1.13668 · 10¹      4.42121 · 10⁰
RICOH THETA S   4.14754 · 10⁻¹     1.83618 · 10⁰      7.36507 · 10⁻¹

The corners extracted with the Logitec C920 can be seen in Figure 4.11. The pixel positions of the corners are presented in Figure 4.12 and a graph with the error enlarged 20 times is shown in Figure 4.13.

(a) Projector 1. (b) Projector 2.

Figure 4.9: The extracted corners, marked with black circles, in the checker pattern. Camera: RICOH THETA S.

Figure 4.10: The extracted corners' pixel positions for Projector 1 and Projector 2, as measured by the omnidirectional camera.

(a) Projector 1. (b) Projector 2.

Figure 4.11: The extracted corners, marked with black circles, in the checker pattern. Camera: Logitec C920.

Figure 4.12: The extracted corners' pixel positions. Camera: Logitec C920.

Table 4.6: Relative error between extracted corners from the two projectors. Each error is scaled based on the number of pixels in height and width between the two diagonal corners.

Camera          Minimum          Maximum          Mean
Logitec C920    8.71595 · 10⁻⁴   5.83431 · 10⁻³   2.27025 · 10⁻³

Figure 4.13: The extracted corners' pixel positions with the error between the projectors enlarged 20 times. Camera: Logitec C920.

Chapter 5

Discussion

In this chapter the author reflects on the choices made during the thesis and discusses the results.

5.1 Analysis of the Camera Model

The Unified Projection Model used in this thesis worked well, and judging by the results from Sections 4.2, 4.5.1 and 4.5.2 the author believes it was sufficiently accurate. The large difference in reprojection error between the left and right sensors is believed to come from the fact that the right lens of the used camera has a big scratch in the middle of the image. This reduces the quality of the calibration images, and thus a worse calibration is expected. It also explains the low number of images in which the calibration algorithm could locate the calibration pattern. Lastly, one should note that the high distortion errors presented in Figure 4.2 are not believed to be a problem, as all pixels with an error greater than one pixel are found outside the region hit by light rays.

5.2 Regression Implementation

The implemented regression algorithm FAST-LTS is concluded to work as expected when comparing the expected results in Table 4.2 with the measured results shown in Table 4.3. The ability to reject outliers, as seen in Figure 4.4, is deemed an important attribute by the author, as false readings during the pixel mapping have proven to result in significant errors in the resulting Spline surface. One should note that the choice of FAST-LTS might not have been optimal, since FAST-LTS only uses a random sub-sample of all measurements. This might not suit this application, as the control points of the Spline only depend on measurements close by; a random sub-sample could therefore contain too few measurements to give a stable regression for all control points. To account for this, the size of h was increased from around half of n to ∼ 99/100 of n.

5.3 Projector Calibration System

As one studies the results presented in Figures 4.7 and 4.8 it is clear that the desired outcome was achieved: straight lines in the reference pattern are kept straight during the projection. It should be noted, however, that information from the warped image is lost as parts of it are warped outside the projected image. This was a design decision, as the main application will be to calibrate surfaces covered by multiple projectors, and the lost information will thus be projected by another projector.

In Figure 4.5a it is notable that the unused pixels are all on the left hand side of the projector image. The author argues that this is a result of the fact that the left part of the image is under the highest amount of distortion. From Figure 4.5a one can note that zero or close to zero outliers were detected during the recorded run, and thus the disregarded pixels of the FAST-LTS should be found where the uncertainty is highest.

Judging by the calculated field of view and angle of each projector image, the overlap of the projectors should be identical. This is not the case, as one can see in Section 4.5.2. The measured mean error of 0.25° is quite high, but it can be put in relation to the relative pixel errors presented in Table 4.6.

Chapter 6

Conclusion

In this chapter the author draws conclusions from the presented results and describes parts of the research field that could use more work.

6.1 Fulfilment of Problem Statement

From the discussion in Chapter 5 the author states that the implemented c++ library works as expected. The process allows any type of calibrated omnidirectional camera to be used. The use of an omnidirectional camera allows all projectors to be captured without moving the camera, which removes the need for image stitching and allows the user to set up the camera only once. No knowledge of the projector placement or surface geometry is needed from the user, reducing the possibility of error injection. The software is built to reproduce a pinhole projection onto a flat surface. This is also the result seen in Figures 4.7 and 4.8: straight lines remain straight, as they should under a pinhole transform. The software also calculates the field of view and centre point for each projector, allowing for easy integration into 3D simulations where virtual cameras are used to capture the projected video.

The output of the software is the Spline control points, and tools to warp video feeds using Spline surfaces have already been developed by ÅF Technology AB. Thus the author of this thesis has done no work on the graphics engine doing the real-time warping.

6.2 Further Work

From the work of Åberg in [1] it is clear that stereo optics are a solution to the loss of information occurring when parts of the view are blocked. Work could thus be done on how omnidirectional cameras could be used in that case. The model presented in this thesis could also be expanded to allow the user to set the viewer's position relative to the camera. This would allow the camera to be placed in a position maximising the view of the projection surface.


Bibliography

[1] Viktor Åberg. Automatic projector warping using a multiple view camera approach. 2016.

[2] Carl Andersson. Seamless Automatic Projector Calibration of Large Immersive Displays using Gray Code. 2013.

[3] Baisheng Lai, Vladislav Sovrasov, and Maksim Shabunin. cv::omnidir. Version 3.2.0. Feb. 19, 2017. URL: http://docs.opencv.org/3.2.0/dd/d12/tutorial_omnidir_calib_main.html.

[4] Pierre Drap and Julien Lefèvre. "An Exact Formula for Calculating Inverse Radial Lens Distortions". In: Sensors 16.6 (2016). ISSN: 1424-8220. DOI: 10.3390/s16060807. URL: http://www.mdpi.com/1424-8220/16/6/807.

[5] Christopher Geyer and Kostas Daniilidis. "A unifying theory for central panoramic systems and practical implications". In: Computer Vision—ECCV 2000 (2000), pp. 445–461.

[6] Gene H Golub and Christian Reinsch. "Singular value decomposition and least squares solutions". In: Numerische Mathematik 14.5 (1970), pp. 403–420.

[7] Adrien Marie Legendre. Nouvelles méthodes pour la détermination des orbites des comètes. 1. F. Didot, 1805.

[8] Bo Li et al. "A multiple-camera system calibration toolbox using a feature descriptor-based calibration pattern". In: Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE. 2013, pp. 1301–1307.

[9] Lili Ma, YangQuan Chen, and Kevin L Moore. "Rational radial distortion models of camera lenses with analytical solution for distortion correction". In: International Journal of Information Acquisition 1.02 (2004), pp. 135–147.

[10] C. Mei and P. Rives. "Single View Point Omnidirectional Camera Calibration from Planar Grids". In: Proceedings 2007 IEEE International Conference on Robotics and Automation. Apr. 2007, pp. 3945–3950. DOI: 10.1109/ROBOT.2007.364084.

[11] B. Micusik and T. Pajdla. "Estimation of omnidirectional camera model from epipolar geometry". In: 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings. Vol. 1. June 2003, pp. 485–490. DOI: 10.1109/CVPR.2003.1211393.

[12] Peter J Rousseeuw. "Multivariate estimation with high breakdown point". In: Mathematical Statistics and Applications 8 (1985), pp. 283–297.

[13] Peter J Rousseeuw and Katrien Van Driessen. "Computing LTS regression for large data sets". In: Data Mining and Knowledge Discovery 12.1 (2006), pp. 29–45.

[14] D. Scaramuzza, A. Martinelli, and R. Siegwart. "A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion". In: Fourth IEEE International Conference on Computer Vision Systems (ICVS'06). Jan. 2006, pp. 45–45. DOI: 10.1109/ICVS.2006.3.

[15] A Schwarzenberg-Czerny. "On matrix factorization and efficient least squares solution." In: Astronomy and Astrophysics Supplement Series 110 (1995), p. 405.

[16] Christopher Twigg. "Catmull-Rom splines". In: Computer 41.6 (2003), pp. 4–6.

[17] Xianghua Ying and Zhanyi Hu. "Can We Consider Central Catadioptric Cameras and Fisheye Cameras within a Unified Imaging Model". In: Computer Vision – ECCV 2004: 8th European Conference on Computer Vision, Prague, Czech Republic, May 11–14, 2004. Proceedings, Part I. Ed. by Tomáš Pajdla and Jiří Matas. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004, pp. 442–455. ISBN: 978-3-540-24670-1. DOI: 10.1007/978-3-540-24670-1_34.
