
Institutionen för systemteknik

Department of Electrical Engineering

Master's Thesis

Transformations between Camera Images and

Map Coordinates with Applications

Master's thesis carried out in Automatic Control at Linköping Institute of Technology

by Nils Börjesson, LITH-ISY-EX--05/3702--SE, Linköping 2005

TEKNISKA HÖGSKOLAN

LINKÖPINGS UNIVERSITET


Transformations between Camera Images and Map

Coordinates with Applications

Master's thesis carried out in Automatic Control at Linköping Institute of Technology

by

Nils Börjesson

LITH-ISY-EX--05/3702--SE

Supervisor: Ph.D. Rickard Karlsson

ISY, Linköpings Universitet

Examiner: Prof. Fredrik Gustafsson

ISY, Linköpings Universitet


Abstract

The quality of cameras is currently increasing rapidly while their price is decreasing. The possibilities of using a camera as a measurement and navigation instrument are thus growing all the time. This thesis studies the transformation relations between a camera image and the scene in space that is projected to it. A theoretical derivation of the transform will be presented, and methods and algorithms for applications based on the transform will be developed.

The above-mentioned transform is called the camera matrix, which contains information about the camera attitude, the camera position, and the internal structure of the camera. Useful information for several different applications can be extracted from the camera image with the help of the camera matrix. In one of the applications treated in this Master's thesis, the camera attitude is estimated when the camera is calibrated and its position is known.

Another application is that of absolute target positioning, where a point in a digital map is sought from its position in a camera image. Better measurement accuracy can, however, be obtained with relative target positioning, i.e., estimation of the distance and angle between two points in the digital map by picking them out in the image. This is because the errors of the absolute target positioning for the two points are dependent and will thus partly cancel each other out when their relative position and angle are measured.


Acknowledgements

This report is the written presentation of my Master´s thesis in the program Applied Physics and Electrical Engineering at Linköping University. It was performed during the winter and spring 2005 at the Sensor system division, Vehicle systems business unit, AerotechTelub AB in Linköping, Sweden. I would like to thank a number of people for supporting me during this work:

My supervisor at AerotechTelub, Hans Bohlin, who came up with the idea for this thesis. The work would not have been possible without his support.

Prof. Fredrik Gustafsson and Dr. Rickard Karlsson, Linköping University, who have been the examiner and supervisor for this thesis. Thank you for guidance, help, and discussions.

My colleagues at AerotechTelub, for making me feel comfortable and for making my time at the workplace enjoyable.

I would also like to thank my family, a number of friends, and especially my girlfriend Therese Davidsson, for support and encouragement during the work of this thesis and during my years at LiTH.


Contents

1 INTRODUCTION
1.1 Background
1.2 Objectives
1.3 Limitations
1.4 Outline

2 CAMERA GEOMETRY
2.1 Perspective Projection
2.2 Cameras
2.2.1 Thin Lenses
2.2.2 Field Of View (FOV)
2.2.3 Radial Distortion
2.3 The Camera Parameters
2.3.1 The Extrinsic Parameters
2.3.2 The Intrinsic Parameters
2.4 The Camera Matrix
2.5 Usage of the Camera Matrix

3 ESTIMATION OF THE CAMERA MATRIX
3.1 The Direct Linear Transformation (DLT) Algorithm
3.1.1 Least Squares Solution of Homogeneous Equations
3.1.2 Data Normalization
3.2 The Gold Standard Algorithm
3.2.1 Geometric Error
3.2.2 A Geometric Interpretation of the Minimization of Geometric Errors
3.2.3 Minimizing the Nonlinear Geometric Functions
3.2.4 Estimating Parts of the Camera Matrix

4 CAMERA CALIBRATION
4.1 Data Collection
4.2 Derivation of the Internal Camera Matrix
4.3 Results

5 CAMERA ATTITUDE ESTIMATION
5.1 Introduction
5.2 A Solution to the Problem of Camera Attitude Estimation
5.2.1 The Initial Estimate
5.2.2 Minimization of the Geometric Cost Function
5.3 Results
5.3.1 Choice of Parameters
5.3.2 Comparing the Performance of the two Methods
5.3.3 The Performance Dependence of the Number of Corresponding Points
5.3.4 The Performance Dependence of the Pitch Angle
5.3.5 The Performance Dependence of the FOV
5.3.6 Convergence Velocity of the Iteration Process
5.3.7 Covariance Estimation of the Camera Attitude

6 TARGET POSITIONING
6.1 Absolute Target Positioning
6.1.1 The 2D-case
6.1.2 The 3D-case
6.2 Relative Target Positioning
6.3 Generating the Footprint
6.4 Covariance Estimation
6.5 Results
6.5.1 Evaluation of the Footprint Generation
6.5.2 Absolute Target Positioning
6.5.3 Relative Target Positioning

7 CONCLUSIONS
7.1 Discussion of the Results

A TRANSFORMATIONS BETWEEN DIFFERENT COORDINATE SYSTEMS
A.1 Transformation from Geodetic to Body
A.2 Transformation between Body and Camera
A.3 Transformation between Calibration Pattern and Camera

B HOMOGENEOUS REPRESENTATION OF VECTORS AND POINTS
B.1 The Difference between Homogeneous and Inhomogeneous Vectors and Points
B.2 Usage of the Homogeneous Representation


Notation

Matrices, Vectors and Points

A Matrix used to express the vector cross-products in DLT.

C Camera center.

D World point.

d Image point.

H Parameter vector used when minimizing the geometric cost function with Gold´s algorithm.

K The internal camera matrix.

L Transformation matrix used when normalizing the world points.

M   A matrix containing both R and t, used in target positioning.

P   The camera matrix.

p   P reshaped to a vector, used in DLT.

$q^{init}$   The initial parameter vector used in camera attitude estimation.

$q^{Cal}$   The parameter vector used in camera calibration.

$q^{Gold}$   The parameter vector used when minimizing the geometric cost function in camera attitude estimation.

$R_G^C$   DCM between the geodetic frame and the camera frame.

$R_G^B$   DCM between the geodetic frame and the body frame.

$R_B^C$   DCM between the body frame and the camera frame.

$R_K^C$   DCM between the calibration frame and the camera frame.

S   Diagonal m x n matrix used in SVD.

T   Transformation matrix used when normalizing the image points.

$t_{GC}^G$   Translation vector between the geodetic and the camera frame, expressed in the geodetic frame.

$t_{GC}^C$   Translation vector between the geodetic and the camera frame, expressed in the camera frame.

$t_{KC}^C$   Translation vector between the calibration and the camera frame, expressed in the camera frame.

U   Orthogonal m x m matrix used in SVD.

V   Orthogonal n x n matrix used in SVD.

X   Vector where all of the measured corresponding points are placed.

$\Sigma_X$   Covariance matrix of the measurement vector X.

$\Sigma_{q^{Cal}}$   Covariance matrix of the estimated $q^{Cal}$.

$\Sigma_{q^{Gold}}$   Covariance matrix of the estimated $q^{Gold}$.

Parameters and Symbols

B   The body coordinate frame.

C   The camera coordinate frame.

f   The focal length of the camera.

G   The geodetic coordinate frame.

K   The calibration coordinate frame.

l   The distance between two world points, used in relative target positioning.

s   The skew parameter. One of the intrinsic parameters used in K.

$u_0$   The x-coordinate of the principal point. One of the intrinsic parameters used in K.

$v_0$   The y-coordinate of the principal point. One of the intrinsic parameters used in K.

$\alpha$   Magnification factor in the x-direction. One of the intrinsic parameters used in K.

$\beta$   Magnification factor in the y-direction. One of the intrinsic parameters used in K.

The angle between two world points, used in relative target positioning.

n   Number of corresponding points.

N   Number of elements in the measurement vector X.

Abbreviations

DCM   Direction cosine matrix.
DLT   Direct linear transformation.
FOV   Field of view.
INS   Inertial navigation system.
LS   Least squares.
RMS   Root mean square.
RMSE   Root mean square error.
SVD   Singular value decomposition.
UAV   Unmanned aerial vehicle.


1

Introduction

This chapter begins with a presentation of the main problems that are going to be addressed in this Master's thesis, giving the reader a short and accessible summary of the thesis. That section is followed by the

background and a presentation of the principal aims. Finally, the limitations will be stated and an outline of the thesis is presented.

Using cameras for measurements, recognition, and navigation has become more and more interesting as camera quality increases and camera prices decrease. Computing capacity is also growing, which makes the possibilities of using a camera as a measurement and navigation instrument even better. Extracting information from a camera image is, however, a complex problem.

The essence of this Master's thesis is the transform that relates the camera image to the scene in space that is projected to it. This transform is called the camera matrix. The camera matrix can be thought of as a mathematical function that maps points in space to points in the image. It consists of three parts: the internal camera structure K, the camera attitude R, and the camera position t. These three parts contain all useful information about the

transform. The camera matrix can be used in many different applications, of which a few will be presented in this thesis.

The camera that will be studied is situated on an aeroplane. Assume that a digital map, with altitude information of the ground that the aeroplane flies over, is available. One interesting application would then be to find the map position of a point in the image. In Figure 1.1, a map and a synthetic camera image of the map are shown.


Figure 1.1: The transformation from a point in the synthetic image (a) to a point in the map (b). The synthetic image is produced in Matlab and taken at an altitude of 1500 meters with a pitch angle of -30°.

A fundamental problem here is that the transformation from an image point to a point in the map does not have an unambiguous solution. Going from an image point to a point in the map means moving from a point in $\mathbb{R}^2$ to a point in $\mathbb{R}^3$, and such a problem has an infinite number of solutions. The point will thus have to be found with an iterative algorithm which uses the altitude data from the digital map. The problem of finding the map coordinate from an image pixel will here be named target

positioning, and it will be further investigated later on in this Master’s

thesis.

Another application that is going to be presented in this thesis is that of estimating the attitude of a camera from its image. In this application it is assumed that both the camera position, t, and the internal camera matrix, K, are well known, leaving the camera attitude, R, as the unknown variable to be

estimated. The coordinate system used for expressing the camera attitude has its origin located in the center of the aeroplane carrying the camera. The attitude of the camera is assumed to be the same as that of the aeroplane i.e., the camera is located in the center of the aeroplane. Figure 1.2 shows the coordinate system used.

[Figure 1.2: the body axes $x_B$, $y_B$, $z_B$ of the aeroplane, with a rotation around the x-axis by the roll angle φ, around the y-axis by the pitch angle θ, and around the z-axis by the heading angle ψ.]

Figure 1.2: A coordinate system is attached to the body of the aeroplane. The Euler angles roll, pitch, and heading are a representation of the attitude of the body. The roll angle is the rotation of the body around its x-axis, which is pointing in the forward direction. The pitch angle is the rotation of the body around its y-axis, which is pointing out of the right wing. The heading angle is the rotation of the body around its z-axis, which is pointing down through the floor of the body.

The Euler angles that represent the camera attitude are illustrated in Figure 1.2 and in Figure 1.3. Notice that the angle ψ will be named the heading angle in this thesis, but the same angle may be referred to as true heading or yaw in other literature.


Figure 1.3: The Euler angles roll (φ), pitch (θ), and heading (ψ) shown on the body of the aeroplane. The heading angle might be referred to as true heading or yaw in other literature; in this thesis, however, heading is used for the angle ψ, where $x_G$ is pointing in the north direction.

Knowing the internal camera structure K and the camera position t is not

enough to estimate the camera attitude R. More information about the

relationship between the image and the map is needed. The approach is to pick out corresponding points between the image and the map i.e., points that represent the same object in the different frames, see Figure 1.4.


Figure 1.4: A set of corresponding points between the map (a) and the image (b) that correspond under the camera matrix P.

The main problems that are going to be treated in this Master's thesis have now been presented. Further explanations, solutions, and results will be given in the chapters that follow.

1.1 Background

This Master's thesis has been carried out in cooperation with AerotechTelub AB in Linköping, Sweden, at the Sensor System division, Vehicle Systems business unit. AerotechTelub provides consulting, systems research, maintenance work, and test and verification of remote-controlled unmanned aerial vehicles (UAVs) for the Swedish defence. Their work covers the avionic systems, the communication systems, and the control systems for controlling the UAVs from the ground. The UAVs are equipped with a camera, and the objective of this Master's thesis is to use this camera for applications in target positioning and navigation.

1.2 Objectives

The principal aims of this Master´s thesis are:

• To derive the transform that relates a camera image and the scene in space that is projected to the image.

• To find a method for estimation of this transform by the use of a set of corresponding points between the image and the scene in space.

• To find a method for camera calibration.

• To estimate the camera attitude by the use of a set of corresponding points between the image and the scene in space. It is assumed that the camera is calibrated and that the position of the camera is well-known.

• To find a map coordinate and its covariance from its corresponding pixel coordinate when the camera matrix and its covariance are known.

• To find the distance and angle, and their covariances, between two points in the world by picking out their corresponding points in the image.

1.3 Limitations

A couple of limitations and basic conditions were given for this Master´s thesis. They are:

• No real camera images are available. The algorithms and methods will thus be evaluated with synthetic camera images produced in Matlab.

• The camera is situated and the ground constructed in such a way that no objects are hidden by other objects lying in front of them.

• The pitch angle of the camera is less than 0°, i.e., the camera is pointing down at the ground and not up in the sky.

• A map with an altitude database over the ground is available.

1.4 Outline

The work in this Master´s thesis will be presented as follows:

Chapter 2 presents camera geometry theory. The transform, called the

camera matrix, that relates the image coordinates to the coordinates of the scene in space that is projected to the image, is derived. Finally, a "road-map" of the applications that are going to be treated in the thesis, and of how the camera matrix can be used in these situations, is presented.

Chapter 3 presents two methods of estimating the camera matrix. The first

method is the direct linear transformation (DLT) algorithm, which minimizes a linear algebraic cost function. The second method is Gold´s algorithm, which uses the result from DLT as an initial value and then refines this by minimizing a nonlinear geometric cost function. A suggested approach for estimating parts of the transform will also be presented here.

Chapter 4 explains a method for calibration of the camera. This method

consists of a laboratory part, where a set of corresponding points is obtained by taking an image of a calibration pattern. The internal camera matrix can then be found, since both the camera position and attitude are known with very good accuracy.

Chapter 5 treats the problem of estimating the camera attitude. A method

for doing this will be explained and evaluated through a couple of synthetic tests. This method assumes that the camera is calibrated and that the camera position is known. It is a variant of Gold´s algorithm, explained in Chapter 3.

Chapter 6 deals with the problem of target positioning. An algorithm for

finding the world point that corresponds to a specific point in an image is developed, and the performance of the algorithm is evaluated through simulations. The covariance of the target position is calculated through Monte Carlo simulations, and it is studied how this covariance depends on the surrounding area of the point. Another application in target positioning, relative target positioning, is to estimate the distance and angle between two points in the world that are chosen by picking them out in the image. It is investigated whether it is possible to measure the distance and angle between two points with better accuracy than the measurement of the absolute position of each of the points.

In Chapter 7, conclusions, a discussion of the results, and suggestions for future work are presented.

2

Camera Geometry

This chapter presents the camera geometry theory. The expressions for perspective projection, cameras with lenses, and rotation and translation of the camera are derived. The intrinsic and extrinsic parameters of the camera are defined and placed in the camera matrix P. Finally a road map

describing the usage of the camera matrix in the rest of the Master´s thesis will be presented. With the theory from this chapter, it will be possible to transform a point in space to a point in the image plane.

2.1 Perspective Projection

Consider a point D in space, which is a point that shall be projected to the

image. The origin of a Euclidean coordinate system is attached to the centre of projection C, called the camera centre. The plane Z=-f is called the image

plane, where f is referred to as the focal length. If a pinhole camera model is

assumed, the point D=(X,Y,Z)T will be mapped to the point d=(x,y,z)T, where a line joining D and C intersects the image plane. This is shown in

Figure 2.1.

Figure 2.1: Projection of a point D in space to a point d in the image. The

point d is found where the ray that passes C and D intersects

the image plane, located at Z=-f, where f is the focal length of

the camera. The principal axis is the axis that passes C and

intersects the image plane orthogonally at the principal point.

Figure 2.2 illustrates the yz-plane from Figure 2.1. From this it is possible to derive the expressions of perspective projection. By similar triangles the y-coordinate of the point d is found to be

\[ y = -f\,\frac{Y}{Z}. \tag{2.1} \]

The x-coordinate is found in a similar way, which gives

\[ x = -f\,\frac{X}{Z}. \tag{2.2} \]

Figure 2.2: The pinhole camera model in the yz-plane. Similar triangles give the y-coordinate of the point d when the point D is known.
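To illustrate (2.1)-(2.2), the following minimal sketch (assuming Python with NumPy rather than the Matlab environment used in the thesis) projects a camera-frame point through an ideal pinhole; the focal length and the point are arbitrary example values, not data from the thesis.

```python
import numpy as np

def pinhole_project(D, f):
    """Project a point D = (X, Y, Z), given in camera coordinates, onto the
    image plane Z = -f of an ideal pinhole camera, using (2.1)-(2.2):
    x = -f*X/Z and y = -f*Y/Z."""
    X, Y, Z = D
    return np.array([-f * X / Z, -f * Y / Z])

# Arbitrary example: a point 100 m in front of the camera, f = 50 mm.
print(pinhole_project(np.array([5.0, 2.0, 100.0]), f=0.05))
```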

2.2 Cameras

The pinhole camera, described in Section 2.1, is not often used in practice since most cameras are equipped with lenses. In [2] it is explained that the main reason for this is that the amount of light that a single ray through a pinhole carries is not enough to make a good picture. More light is needed and a lens gathers light from many rays to the same position in the image. Another reason for having a lens is to keep the picture in sharp focus while gathering light from a large area.

When a ray of light strikes the lens it will be refracted i.e., its direction changes. By the use of Snell´s law, see [6], it will be possible to compute the relation between the incident ray and the refracted ray as

\[ n_1 \sin(\alpha_1) = n_2 \sin(\alpha_2). \tag{2.3} \]

Here $n_1$ and $n_2$ are the refractive indices of air and of the lens respectively, and $\alpha_1$ and $\alpha_2$ are the angles of the incident and the refracted ray, measured from the normal of the plane that the incident ray strikes. This is presented in Figure 2.3.

Figure 2.3: The incident ray $r_1$ travels through a medium with refractive index $n_1$ and strikes the second medium at an angle $\alpha_1$ to its normal. This medium has another refractive index $n_2$, which refracts the ray $r_1$ into $r_2$ at an angle $\alpha_2$ to the normal.

2.2.1 Thin Lenses

A thin lens has two spherical surfaces with radius R. Because it is thin, the

rays entering the lens are refracted at the right boundary and then immediately refracted again at the left boundary. Consider a point D in

space. The ray passing through this point and the optical center of the lens O

is not refracted (this follows from Snell´s law) and it projects to the point d

in the image. All other rays passing through D are then projected by the lens

to the point d. This is illustrated in Figure 2.4.

The depth of the point D is set to z, and the depth of the point d to -z’. This

means that the distance z’ is the distance that is equal to f, the focal length,

in Section 2.1. Letting $z \rightarrow \infty$, it can be seen that these points focus on the plane at distance f from the lens. In the applications in this Master's thesis,

the focus will be at infinity, because objects within a range of distances will be in acceptable focus. Therefore, the distance between the image plane and the camera center will be the focal length, just as in the case of the pinhole camera. From here on it will be assumed that the distance between the camera center and the image plane is f.


Figure 2.4: All rays passing through the point D in space are projected by the thin lens to the point d in the image plane.

2.2.2 Field Of View (FOV)

Consider Figure 2.5. The field of view (FOV) of a camera is the part of the scene in space that is projected to the image. In [2] this is defined as

\[ \mathrm{FOV} = 2\varphi = 2\arctan\!\left(\frac{\delta}{2f}\right). \tag{2.4} \]

Here $\delta$ is the diagonal distance between two corners in the image plane. If f is much shorter than $\delta$, it is a wide-angle lens, and if f is much longer than $\delta$, it is a telephoto lens.

Figure 2.5: The field of view (FOV) of a camera is $2\varphi$, which is calculated as in (2.4).
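As a quick numeric illustration of (2.4), the sketch below (assuming NumPy) computes the FOV for an arbitrary example sensor diagonal and focal length; these numbers are not parameters of the camera studied in the thesis.

```python
import numpy as np

def field_of_view(f, delta):
    """FOV = 2*arctan(delta / (2*f)) from (2.4); f and delta in the same unit."""
    return 2.0 * np.arctan(delta / (2.0 * f))

# Example: a 43.3 mm sensor diagonal and a 50 mm focal length.
print(np.degrees(field_of_view(0.050, 0.0433)))   # roughly 47 degrees
```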

2.2.3 Radial Distortion

In the equations and relations that have been derived so far it has been assumed that the imaging process is linear. In this case the world point, image point, and optical centre lie along a line. For camera lenses, however, this is not the whole truth. The main deviation is radial distortion, which becomes larger with decreasing focal length and with decreasing camera price. To correct for it, the image measurements would be transformed to those that would have been obtained under perfectly linear conditions. This is, however, out of the scope of this Master's thesis, and it will therefore be assumed that the imaging process is linear, just as in the case of the pinhole camera.

2.3 The Camera Parameters

In (2.1)-(2.2) it is assumed that the origin of the coordinates in the image plane is at the principal point, that is, where the principal axis intersects the image plane, see Figure 2.1. The world and camera coordinate systems are related through a set of parameters, such as the focal length of the lens, the position and orientation of the camera, and the position of the principal point. The camera parameters are divided into two groups: the extrinsic parameters, which can change with time and describe the position and orientation of the camera, and the intrinsic parameters, which are permanent in the camera and do not change with time², such as the focal length and the position of the principal point.

2.3.1 The Extrinsic Parameters

The extrinsic parameters describe the external conditions of the camera, which are its position and attitude. In almost all situations the camera frame (C) is distinct from the world coordinate frame, or the geodetic frame (G). In [2] it is explained that when projecting a point D in space to the image plane it is necessary to first express D in C. The first transformation is thus the transformation between the two frames. The two frames are related to each other via a rotation matrix, $R_G^C$, and a translation vector, $t_{GC}^G$. This is illustrated in Figure 2.6. Expressing the point $D^G$, which is defined in geodetic coordinates, in camera coordinates $D^C$ gives

\[ D^C = R_G^C\,\big(D^G - t_{GC}^G\big), \tag{2.5} \]

where $R_G^C$ is the rotation matrix, also called the direction cosine matrix (DCM), between the geodetic frame and the camera frame. A thorough explanation of transformations between different frames is found in Appendix A. $t_{GC}^G$ is the translation vector between the geodetic frame and the camera frame, expressed in geodetic coordinates. The transformation is called a "rigid transformation" since both the origin and the basis vectors of the two coordinate systems are different.

² It is assumed that the camera does not have a zoom function. If it had one, the intrinsic parameters could change with time.


Figure 2.6: Rigid transformation between the geodetic frame (G) and the camera frame (C). This is done by first rotating the basis vectors with the rotation matrix $R_G^C$ and then translating them with the translation vector $t_{GC}^G$. A point a is also illustrated, and it is shown that this point can be expressed either in G, as $a^G$, or in C, as $a^C$.

With the use of homogeneous coordinates (see Appendix B for a detailed explanation of homogeneous coordinates), it is possible to express the rotation and translation in the same transformation matrix as

\[ D^C = R_G^C\,\big(D^G - t_{GC}^G\big) = \big(\,R_G^C \quad t_{GC}^C\,\big)\begin{pmatrix} D^G \\ 1 \end{pmatrix}, \qquad \text{where } t_{GC}^C = -R_G^C\,t_{GC}^G. \tag{2.6} \]

With this and Appendix A-B it is now apparent that only six extrinsic parameters are needed to express the external condition of the camera. These are the three parameters that define the attitude of the camera and the three coordinates of the translation vector, $t_x$, $t_y$, and $t_z$. The attitude of the camera can be represented by the three Euler angles roll, pitch, and heading. When the pitch angle is $\pm\pi/2$, however, the set of three Euler angles fails to describe the camera attitude, see [4].
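As an illustration of (2.5)-(2.6), the sketch below (assuming NumPy; the pose values are arbitrary examples, not data from the thesis) builds the 3x4 rigid-transformation matrix and applies it to a homogeneous geodetic point.

```python
import numpy as np

def rigid_transform(R_cg, t_gc_g):
    """Build the 3x4 matrix (R | -R t) of (2.6) that maps a homogeneous
    geodetic point (D_G, 1) to camera coordinates D_C."""
    return np.hstack((R_cg, -R_cg @ t_gc_g.reshape(3, 1)))

# Arbitrary example pose: camera 100 m above the geodetic origin,
# rotated 30 degrees about the z-axis.
psi = np.deg2rad(30.0)
R_cg = np.array([[np.cos(psi), np.sin(psi), 0.0],
                 [-np.sin(psi), np.cos(psi), 0.0],
                 [0.0, 0.0, 1.0]])
t_gc_g = np.array([0.0, 0.0, -100.0])

D_g_hom = np.array([10.0, 20.0, 0.0, 1.0])        # homogeneous geodetic point
D_c = rigid_transform(R_cg, t_gc_g) @ D_g_hom     # the same point in the camera frame
print(D_c)
```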

2.3.2 The Intrinsic Parameters

The intrinsic parameters are the internal parameters in the camera, which are fixed. They are decided by the manufacturer and do not change with time. The camera that is described from here on is a CCD-camera. This camera has an image sensor consisting of a grid with rectangular pixels. In (2.1)-(2.2), the equations of perspective projection are stated for the pinhole camera. In a CCD-camera, the distances are not measured in meters but in pixels and these pixels are not always square but rectangular. This adds, according to [2], two scale parameters k and l to the camera, which gives

\[ x = -kf\,\frac{X}{Z}, \qquad y = -lf\,\frac{Y}{Z}. \tag{2.7} \]

The parameters k and l are measured in pixels/m and f is measured in meters.

Applying the magnifications $\alpha = kf$ and $\beta = lf$ expresses the focal length in pixel units instead of meters. Another assumption made in (2.1)-(2.2) is that the origin of coordinates in the image plane is at the principal point. This is not always true. The origin of an image is often situated in the upper or lower left corner of the image plane, so the position of the principal point has to be described, which adds two parameters $u_0$ and $v_0$. This gives

\[ x = -\alpha\,\frac{X}{Z} + u_0, \qquad y = -\beta\,\frac{Y}{Z} + v_0. \tag{2.8} \]

In [1] it is explained that a skew parameter s is introduced for added generality. This parameter will, however, be zero for almost all cameras.

\[ x = -\alpha\,\frac{X}{Z} + s\,\frac{Y}{Z} + u_0, \qquad y = -\beta\,\frac{Y}{Z} + v_0. \tag{2.9} \]

Five intrinsic parameters $\alpha$, $\beta$, $u_0$, $v_0$, and $s$ are now present. Writing this in matrix form gives

\[ d = K\,(I \;\; 0)\,D^G, \qquad \text{where } K = \begin{pmatrix} -\alpha & s & u_0 \\ 0 & -\beta & v_0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{2.10} \]

K consists of the five intrinsic parameters and is called the internal camera matrix. $D^G$ and d are homogeneous vectors, i.e.,

\[ D^G = \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \quad \text{and} \quad d = \begin{pmatrix} x \\ y \\ z \end{pmatrix}. \]

The inhomogeneous pixel coordinates of d are thus obtained by dividing the first two coordinates of d by its third one.

2.4 The Camera Matrix

The internal camera matrix, K, makes it possible to transform a point D = (X, Y, Z)^T, described in G, to a point d = (x, y, z)^T, described in C. But this transformation assumes that the origins of G and C coincide. In (2.6) it was described how a transformation between two coordinate frames with different origin and orientation is made. If the transformation from a world point to an image point, (2.10), is used together with (2.6), the whole camera transformation is obtained:

\[ d = P\,D^G, \qquad \text{where } P = K\,\big(R_G^C \;\; t_{GC}^C\big). \tag{2.11} \]

Putting everything together gives the camera matrix:

\[ P = \begin{pmatrix} -\alpha\,r_1^T + s\,r_2^T + u_0\,r_3^T & -\alpha\,t_x + s\,t_y + u_0\,t_z \\ -\beta\,r_2^T + v_0\,r_3^T & -\beta\,t_y + v_0\,t_z \\ r_3^T & t_z \end{pmatrix}, \tag{2.12} \]

where

\[ R_G^C = \begin{pmatrix} r_1^T \\ r_2^T \\ r_3^T \end{pmatrix} = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} \quad \text{and} \quad t_{GC}^C = \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix}. \]

P is used to transform homogeneous vectors. This means that both the world point D and the image point d are expressed in homogeneous coordinates. To go from homogeneous image coordinates to ordinary inhomogeneous image coordinates, the first two coordinates of d are divided by the third one. In (2.11) it can be seen that d's third coordinate is $z = P_3 \cdot D$, so division of the first two coordinates of d by z gives

\[ x = \frac{P_1 \cdot D}{P_3 \cdot D}, \qquad y = \frac{P_2 \cdot D}{P_3 \cdot D}. \tag{2.13} \]

Here $P_1$, $P_2$, and $P_3$ are the rows of the camera matrix P.

2.5 Usage of the Camera Matrix

The camera matrix P has now been derived, which makes it possible to

transform a point in the 3D world to a point in a 2D image. In the remaining chapters this knowledge will be used in the applications in different ways. The camera matrix consists of 3 parts, the internal camera matrix K, the

attitude of the camera R, and the position of the camera t. In Table 2.1 a

road-map over the applications in this Master's thesis is presented. The table shows which variables are known in each application and which variable is derived.

Table 2.1: Description of what is well known and what is derived in each of the applications treated in this Master's thesis.

Application | Well-known | Derived | Section
Estimation of the camera matrix | A set of corresponding points $d_i \leftrightarrow D_i$ | P | Chapter 3
Camera calibration | A set of corresponding points $d_i \leftrightarrow D_i$, R, and t | K | Chapter 4
Camera attitude estimation | A set of corresponding points $d_i \leftrightarrow D_i$, K, and t | R | Chapter 5
Target positioning | P, and a point d in the image | D, the point in the world corresponding to d | Chapter 6

Each of the applications presented in Table 2.1 can be derived from (2.11), which can be expressed as

\[ d = P\,D = K\,\big(R_G^C \;\; t_{GC}^C\big)\,D. \]
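To make the road-map concrete, the following sketch (assuming NumPy; all numeric values are placeholders rather than parameters from the thesis) assembles K, R, and t into P = K(R | t) as in (2.10)-(2.11) and projects a world point according to (2.13).

```python
import numpy as np

def internal_matrix(alpha, beta, s, u0, v0):
    """Internal camera matrix K with the sign convention of (2.10)."""
    return np.array([[-alpha, s, u0],
                     [0.0, -beta, v0],
                     [0.0, 0.0, 1.0]])

def camera_matrix(K, R_cg, t_gc_c):
    """Camera matrix P = K (R | t) as in (2.11)."""
    return K @ np.hstack((R_cg, t_gc_c.reshape(3, 1)))

def project(P, D):
    """Project an inhomogeneous world point D with (2.13)."""
    d = P @ np.append(D, 1.0)      # homogeneous image point
    return d[:2] / d[2]            # divide by the third coordinate

# Placeholder intrinsics and pose, chosen only for illustration.
K = internal_matrix(alpha=800.0, beta=800.0, s=0.0, u0=50.0, v0=50.0)
R = np.eye(3)                      # camera axes aligned with the geodetic frame
t = np.array([0.0, 0.0, 1500.0])   # translation expressed in the camera frame
P = camera_matrix(K, R, t)
print(project(P, np.array([100.0, 50.0, 0.0])))
```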


3

Estimation of the Camera Matrix

The camera matrix P was derived in Chapter 2. This chapter describes how P can be estimated when only a set of point correspondences between an

image and the 3D-world is available. The first approach is a method based on minimization of a linear algebraic cost function, called the direct linear transformation (DLT) algorithm. The solution from this step is then improved via a nonlinear minimization of a geometric cost function, a method called the Gold standard algorithm. Finally, a suggested approach for estimating parts of P is presented.

3.1 The Direct Linear Transformation (DLT) Algorithm

The transformation between a point in space, $D_i$, and a point in the image, $d_i$, is given by the equation $d_i = P D_i$. Given a set of point correspondences $d_i \leftrightarrow D_i$, the goal is to obtain a good estimate $\hat{P}$ of P. Homogeneous vectors are used in the transformation, which means that $d_i$ and $P D_i$ have the same direction but may differ in magnitude; i.e., transforming $D_i$ with P results in a $d_i$ located anywhere on the ray that passes through $D_i$ and the camera center C. Hence it is not a good idea to demand that these vectors be equal. A better approach is to use the cross-product between the vectors, since this is always zero for vectors that have the same direction. This gives

\[ d_i \times P D_i = 0. \tag{3.1} \]

Let $d_i = (x_i, y_i, z_i)^T$; then the result of the cross-product is

\[ d_i \times P D_i = \begin{pmatrix} y_i\,p_3^T D_i - z_i\,p_2^T D_i \\ z_i\,p_1^T D_i - x_i\,p_3^T D_i \\ x_i\,p_2^T D_i - y_i\,p_1^T D_i \end{pmatrix} = 0, \qquad \text{where } P = \begin{pmatrix} p_1^T \\ p_2^T \\ p_3^T \end{pmatrix} = \begin{pmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \end{pmatrix}. \tag{3.2} \]

This gives a set of three equations in the entries of P, but only two of them are linearly independent; the third row is a linear combination of the first two. Each point correspondence thus gives rise to two independent equations in the entries of P.

Because of this, the last equation in (3.2) can be disregarded, which gives

\[ A_i\,p = \begin{pmatrix} 0^T & -z_i D_i^T & y_i D_i^T \\ z_i D_i^T & 0^T & -x_i D_i^T \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} = 0. \tag{3.3} \]

Each $A_i$ has dimension $2 \times 12$, because every point correspondence gives rise to two equations, and p is P reshaped to a $12 \times 1$ vector. The $2n \times 12$ matrix A can then be obtained from n point correspondences by stacking up the equations from (3.3) for each correspondence. This gives

\[ A\,p = 0, \tag{3.4} \]

which is solved for a non-zero solution p. The matrix P has 12 entries but only 11 degrees of freedom because of the scale factor. This implies that 11 equations are needed to obtain an exact solution for P. Since every point correspondence delivers 2 equations, "5½" point correspondences are needed. It may sound wrong to talk about ½ point correspondence, but it means that only the x- or y-coordinate is used. In most situations the data are not exact, because of measurement errors in the points. If a good estimate of P is wanted in all situations, more than 6 point correspondences are needed. According to [1], a rule of thumb is that the number of equations should exceed the number of unknown parameters by a factor of five. Since the number of unknown parameters is 11, at least 55 equations are needed, and since each point correspondence gives rise to two equations, at least "27½" point correspondences are needed.
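A minimal sketch of how the matrix A of (3.3)-(3.4) could be stacked from point correspondences, assuming NumPy; the synthetic camera matrix and points in the check below are placeholders, not data from the thesis.

```python
import numpy as np

def dlt_matrix(d, D):
    """Stack the two equations of (3.3) for every correspondence.
    d: n x 3 homogeneous image points, D: n x 4 homogeneous world points.
    Returns the 2n x 12 matrix A of (3.4)."""
    rows = []
    for (x, y, z), Di in zip(d, D):
        zero = np.zeros(4)
        rows.append(np.hstack((zero, -z * Di, y * Di)))
        rows.append(np.hstack((z * Di, zero, -x * Di)))
    return np.array(rows)

# Quick consistency check with a synthetic camera matrix (placeholder values):
P_true = np.arange(1.0, 13.0).reshape(3, 4)
D = np.hstack((np.random.rand(10, 3) * 100.0, np.ones((10, 1))))
d = (P_true @ D.T).T
A = dlt_matrix(d, D)
print(np.allclose(A @ P_true.ravel(), 0.0))   # the true P satisfies A p = 0
```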

3.1.1 Least Squares Solution of Homogeneous Equations

In Section 3.1 it was declared that more than "5½" point correspondences (11 equations) are needed to obtain a good result when estimating P. But more than 11 equations in (3.4) makes the equation system over-determined, and an over-determined equation system cannot, in general, be solved exactly. Instead of demanding an exact solution, an approximate solution that minimizes a suitable cost function is sought. A solution to an over-determined equation system is easily found by the least squares (LS) method, which minimizes the 2-norm between two vectors. The difference in this case is that one of the vectors is the null vector, see (3.4), which means that it is the norm $\|Ap\|$ that shall be minimized. To avoid the obvious solution p = 0, a constraint on p is needed. One can observe that if p is a solution, then so is any scalar multiple of p, so the constraint $\|p\| = 1$ can be imposed.

The problem can now be summarized as:

Find the vector p that minimizes $\|Ap\|$ subject to $\|p\| = 1$.

This problem is solved via a singular value decomposition (SVD) of A, described in [7]. The SVD is a factorization of A into

\[ A = U S V^T, \tag{3.5} \]

where A is an $m \times n$ matrix with $m \geq n$, U is an $m \times m$ orthogonal matrix, and V is an $n \times n$ orthogonal matrix, which implies that they have the property

\[ U^T U = U U^T = I \quad \text{and} \quad V^T V = V V^T = I. \tag{3.6} \]

A consequence of (3.6) is that U and V are norm preserving, i.e., that

\[ \|Up\| = \|Vp\| = \|p\|. \tag{3.7} \]

Furthermore, S in (3.5) is an $m \times n$ diagonal matrix whose diagonal entries are the singular values of A in descending order, and the problem can now be rewritten as

\[ \min_{p} \|U S V^T p\| \quad \text{subject to } \|p\| = 1. \tag{3.8} \]

But the property (3.7) implies that (3.8) can be rewritten as

\[ \min_{p} \|S V^T p\| \quad \text{subject to } \|V^T p\| = 1. \tag{3.9} \]

Letting $y = V^T p$, one easily sees that $\|Sy\|$ is minimized when $y = (0, 0, \ldots, 0, 1)^T$. This is because the smallest singular value of S is in its last entry. Inserting (3.6) into (3.9) gives

\[ y = V^T p \;\Longrightarrow\; p = V y. \tag{3.10} \]

Now (3.10) gives that the solution p is the last column of V. For a thorough explanation of the SVD, see [7].
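Continuing the sketch above, the constrained problem of minimizing ||Ap|| subject to ||p|| = 1 can be solved with NumPy's SVD; as (3.10) states, the solution is the last column of V, i.e. the last row of the V^T factor returned by numpy.linalg.svd. The demo matrix is an arbitrary placeholder.

```python
import numpy as np

def solve_homogeneous(A):
    """Return the unit vector p that minimizes ||A p|| subject to ||p|| = 1,
    i.e. the right singular vector belonging to the smallest singular value."""
    U, S, Vt = np.linalg.svd(A)
    return Vt[-1]                  # last row of V^T = last column of V

# Tiny demo with an arbitrary over-determined A (placeholder, not thesis data):
A = np.random.randn(20, 12)
p = solve_homogeneous(A)
print(np.linalg.norm(p))           # 1.0, so the constraint ||p|| = 1 holds
# p.reshape(3, 4) would then be the (scale-ambiguous) DLT estimate of P.
```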

3.1.2 Data Normalization

One weakness of the DLT algorithm is that it is not invariant to scaled Euclidean transformations. This means that the result of the DLT algorithm depends on the coordinate frame in which the points are expressed. Even if only the origin is changed, different results are obtained, and different coordinate frames lead to varying quality of the results. In [1] it is declared that data normalization is an essential step in the DLT algorithm, transforming the points to the coordinate frame for which the best results are achieved. Apart from better accuracy in the results, an algorithm with data normalization as an initial step will be invariant to arbitrary choices of the scale and coordinate origin. This is because the measurement points are transformed to a fixed canonical frame, and algebraic minimization, which is what DLT does, is invariant to scaled Euclidean transformations when carried out in a fixed canonical frame.

The data normalization transformation consists of a translation and an isotropic scaling of the points. The centroids of the points in both the 2D image and the 3D world are translated to the origin of each frame. After that an isotropic scaling is made so that the average distance from the origin is $\sqrt{2}$ in the image case and $\sqrt{3}$ in the 3D case. This gives rise to two different transformation matrices, T in the image and L in the 3D world. The complete procedure is summarized in Algorithm 3.1.

Algorithm 3.1 – Data normalization with isotropic scaling

Consider a set of point correspondences $d_i \leftrightarrow D_i$. A scaled Euclidean transformation T is used to normalize the 2D image points and a second scaled Euclidean transformation L to normalize the 3D points.

1. Calculate the centroid of each of the point sets:
\[ d_c = \frac{1}{n}\sum_{i=1}^{n}(x_i, y_i)^T \text{ in the 2D-case and } D_c = \frac{1}{n}\sum_{i=1}^{n}(X_i, Y_i, Z_i)^T \text{ in the 3D-case.} \]

2. Calculate the average root-mean-square (RMS) distance from the centroid:
\[ d_{RMS}^T = \frac{1}{n}\sum_{i=1}^{n} d(d_i, d_c) \text{ in the 2D-case and } d_{RMS}^L = \frac{1}{n}\sum_{i=1}^{n} d(D_i, D_c) \text{ in the 3D-case.} \]

3. Calculate the scale factor:
\[ s_T = \frac{\sqrt{2}}{d_{RMS}^T} \text{ in the 2D-case and } s_L = \frac{\sqrt{3}}{d_{RMS}^L} \text{ in the 3D-case.} \]

4. Put the terms together into the transformation matrices. Here $d_c = (x_c, y_c)^T$ and $D_c = (X_c, Y_c, Z_c)^T$:
\[ T = \begin{pmatrix} s_T & 0 & -s_T x_c \\ 0 & s_T & -s_T y_c \\ 0 & 0 & 1 \end{pmatrix} \quad \text{and} \quad L = \begin{pmatrix} s_L & 0 & 0 & -s_L X_c \\ 0 & s_L & 0 & -s_L Y_c \\ 0 & 0 & s_L & -s_L Z_c \\ 0 & 0 & 0 & 1 \end{pmatrix}. \]

5. Finally, the normalized point correspondences $\tilde{d}_i \leftrightarrow \tilde{D}_i$ are given by
\[ \tilde{d}_i = T\,d_i \quad \text{and} \quad \tilde{D}_i = L\,D_i. \]
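The steps of Algorithm 3.1 translate into a few lines of NumPy, sketched below for k-dimensional points (k = 2 for the image, k = 3 for the world); the example points are arbitrary placeholders, not thesis data.

```python
import numpy as np

def normalization_transform(pts):
    """Isotropic normalization as in Algorithm 3.1 for k-dimensional points.
    Returns the (k+1)x(k+1) transform that moves the centroid to the origin
    and scales the average distance from the origin to sqrt(k)."""
    k = pts.shape[1]
    centroid = pts.mean(axis=0)                               # step 1
    d_rms = np.mean(np.linalg.norm(pts - centroid, axis=1))   # step 2
    s = np.sqrt(k) / d_rms                                    # step 3
    T = np.eye(k + 1)                                         # step 4
    T[:k, :k] *= s
    T[:k, k] = -s * centroid
    return T

# Arbitrary 2D image points; for 3D world points the call is identical.
d = np.array([[10.0, 12.0], [55.0, 40.0], [90.0, 75.0], [30.0, 95.0]])
T = normalization_transform(d)
d_tilde = (T @ np.hstack((d, np.ones((4, 1)))).T).T           # step 5
print(np.mean(np.linalg.norm(d_tilde[:, :2], axis=1)))        # approximately sqrt(2)
```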


3.2 The Gold Standard Algorithm

The DLT algorithm described in Section 3.1 minimizes the norm $\|Ap\|$, which is the norm of a vector called the algebraic distance. The disadvantage of the DLT algorithm is that the cost function (the function that is minimized) is not geometrically or statistically meaningful. The most important thing when estimating the camera matrix is that the geometric error becomes as small as possible, since it is distances and coordinates that will be measured. But it is not the geometric error that is minimized with the DLT algorithm. With a good normalization it is nevertheless often possible to obtain very good results with the DLT algorithm, and its advantages are that the solution is linear and that the algorithm is fast.

In the Gold standard algorithm, which is suggested by [1], the result from the DLT algorithm is used as a starting point for a nonlinear minimization of a geometric cost function. In general a nonlinear minimization problem does not have a unique solution; it can have more than one minimum. One way of handling this is to use an iterative method, and this is what will be done here. Gold´s algorithm is thus much slower than the DLT algorithm, but it often gives a better result.

3.2.1 Geometric Error

Two different kinds of geometric errors will be studied. The first is when the world points are known with very good accuracy and the image points are measured with a measurement error that is assumed to be Gaussian. This assumption is valid in camera calibration, where the world points are measured with very good accuracy and the corresponding points in the image are measured with an error.

In other situations than camera calibration it is not probable that the world points are measured without any error. The world points are often taken from a map with an altitude database, and these are not exact. The assumption in this case is that both the world points and the image points are measured with a Gaussian measurement error. The two different geometric cost functions are presented below. From now on, vectors such as $x$ and $X$ represent the measured points or vectors, vectors with a hat such as $\hat{x}$ and $\hat{X}$ represent estimated points or vectors, and vectors with an overline such as $\bar{x}$ and $\bar{X}$ represent the true points or vectors.

Errors in the image points only:

\[ \varepsilon = \sum_i d(d_i, \hat{d}_i)^2, \qquad \hat{d}_i = \hat{P}\,D_i. \tag{3.11} \]

Errors in both image and world points:

\[ \varepsilon = \sum_i d(d_i, \hat{P}\hat{D}_i)^2 + d(D_i, \hat{D}_i)^2. \tag{3.12} \]

In (3.12), $\hat{D}_i$ is the point in space closest to $D_i$ that maps to the point $\hat{d}_i$ closest to $d_i$ in the image, via $\hat{d}_i = \hat{P}\hat{D}_i$. When errors are present in both the image and the world points, there will not be a well-defined camera matrix that relates them exactly. This is why the closest points $(\hat{d}_i, \hat{D}_i)$ to $(d_i, D_i)$ that do correspond under a well-defined camera matrix are searched for. (3.12) will thus be minimized both with respect to $\hat{P}$ and with respect to the estimated world points $\hat{D}_i$.

3.2.2 A Geometric Interpretation of the Minimization of Geometric Errors

Consider a measurement space $\mathbb{R}^N$, where the set of all measurements $\{d_i \leftrightarrow D_i\}$ for $i = 1, \ldots, n$ is represented by a single point X in this space. It is assumed that X has a Gaussian distribution with mean $\bar{X}$ and covariance $\sigma^2 I_N$, which means that each of the N components has variance $\sigma^2$, in both image points and world points. A parameter vector H in $\mathbb{R}^M$ is also considered. H contains the parameters needed to parameterize the transformation. This means that in the case of errors in the image points only, H will consist of the entries in P. In the case of errors in both image and world points, the parameter vector H will consist both of the entries in P and of the world points $\hat{D}_i$. The transformation can be thought of as a mapping f from $\mathbb{R}^M$ to $\mathbb{R}^N$, i.e., a set of parameters maps the coordinates in the image and in the 3D-world to each other.

The noise-free measurement point $\bar{X}$ is defined as the mapping f of the true value of the parameter vector, $\bar{H}$, as $f(\bar{H}) = \bar{X}$. If H is varied in a neighbourhood around $\bar{H}$, then f(H) traces out a surface $S_M$ in $\mathbb{R}^N$. In general this surface will not pass exactly through the noisy measurement point X. The task is then to find a point $\hat{X}$ closest to X that lies on $S_M$. The situation is drawn in Figure 3.1.

Figure 3.1: Mapping of a parameter vector H in $\mathbb{R}^M$ with the function f traces out a surface $S_M$ in $\mathbb{R}^N$.

Given a set of measurements corresponding to a point X in $\mathbb{R}^N$ and an estimated point $\hat{X}$ lying on $S_M$, the task is to minimize $\|X - \hat{X}\|$ over H, i.e., the parameters in H are changed until the minimum value of $\|X - \hat{X}\|$ is reached. The measurement vector X consists of different measurements in the two cases. When only errors in the image points are present, X consists of the measured image points $d_i$. When errors are present in both image and world points, X consists of both the measured image points $d_i$ and the measured world points $D_i$. In Table 3.1, the variables are specified for the two error cases.

Table 3.1: The different variables specified for the two error cases.

Errors in the image points only:
  X = $(x_1, y_1, \ldots, x_n, y_n)$
  H = $(p_{11}, \ldots, p_{34})$
  f: H $\mapsto$ $(\hat{x}_1, \hat{y}_1, \ldots, \hat{x}_n, \hat{y}_n)$, where $\hat{d}_i = P\,D_i$.

Errors in both image and world points:
  X = $(x_1, y_1, X_1, Y_1, Z_1, \ldots, Z_n)$
  H = $(p_{11}, \ldots, p_{34}, \hat{X}_1, \hat{Y}_1, \hat{Z}_1, \ldots, \hat{Z}_n)$
  f: H $\mapsto$ $(\hat{x}_1, \hat{y}_1, \hat{X}_1, \hat{Y}_1, \hat{Z}_1, \ldots, \hat{Z}_n)$, where $\hat{d}_i = P\,\hat{D}_i$.

3.2.3 Minimizing the Nonlinear Geometric Functions

Since the geometric cost functions are nonlinear, iterative techniques will be used to minimize them. First, a parametrization of the different parts of the transformation has to be done. The cost function that shall be minimized is then specified with these parameters, and the initial values of the parameters are taken from the result of the DLT algorithm. The parameters are then iteratively refined with the goal of minimizing the cost function.

The main task is:

Given a measurement vector $X \in \mathbb{R}^N$, a parameter vector $H \in \mathbb{R}^M$, and a mapping $f: \mathbb{R}^M \rightarrow \mathbb{R}^N$, minimize the cost function of the squared Mahalanobis distance $\|X - f(H)\|_\Sigma^2$ by iteratively refining H.

This means that a set of parameters H is searched for such that $f(H) = X$, or at least an H that brings f(H) as close to X as possible in Mahalanobis distance.

The Mahalanobis distance is a weighted distance where the weights are chosen to reflect the relative accuracy of the measurements of the image points and world points, see [1]. This accuracy is taken from the covariance matrix $\Sigma_X$ of X. The Mahalanobis distance between two vectors X and Y is expressed as

\[ \|X - Y\|_{\Sigma}^2 = (X - Y)^T\,\Sigma_X^{-1}\,(X - Y). \tag{3.13} \]

The parametrization of the cost function was explained in Section 3.2.2 and is listed in Table 3.1 for the two different situations. The iterative minimization method used to minimize the cost function is, for example, the Levenberg-Marquardt algorithm, suggested by [1]. The Levenberg-Marquardt algorithm is a combination of the more common methods "Gauss-Newton" and "gradient descent". Gauss-Newton gives rapid convergence in the neighbourhood of the solution, and the gradient descent guarantees a decrease in the cost function when the Gauss-Newton updates fail. For a thorough explanation of the Levenberg-Marquardt algorithm, see [12]. The Gold standard algorithm is summarized in Algorithm 3.2.
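As a rough sketch of the minimization step for the "errors in the image points only" case (3.11), SciPy's least_squares with method='lm' can play the role of the Levenberg-Marquardt iteration; an identity covariance is assumed here, and the camera matrix, points, and noise levels are placeholders rather than thesis data.

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(p, d, D):
    """Residuals d_i - dehom(P D_i) for the cost (3.11); p holds the 12 entries of P."""
    P = p.reshape(3, 4)
    proj = (P @ D.T).T
    return (d - proj[:, :2] / proj[:, 2:3]).ravel()

# Placeholder data: an arbitrary camera, noisy image points, a perturbed start value.
rng = np.random.default_rng(0)
P_true = np.array([[800.0, 0.0, 50.0, 10.0],
                   [0.0, 800.0, 50.0, 20.0],
                   [0.0, 0.0, 1.0, 30.0]])
D = np.hstack((rng.uniform(-50, 50, size=(30, 2)),
               rng.uniform(50, 150, size=(30, 1)),
               np.ones((30, 1))))
d = (P_true @ D.T).T
d = d[:, :2] / d[:, 2:3] + rng.normal(scale=0.5, size=(30, 2))

p0 = (P_true + rng.normal(scale=0.1, size=(3, 4))).ravel()   # stand-in for the DLT result
res = least_squares(reprojection_residuals, p0, args=(d, D), method='lm')
P_hat = res.x.reshape(3, 4)
```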


Algorithm 3.2 – The Gold standard algorithm

Given $n > 6$ point correspondences $d_i \leftrightarrow D_i$, determine the estimate $\hat{P}$ describing the transformation from world to image points.

1. Data normalization: Find the transforms T and L that transform $d_i$ and $D_i$ into $\tilde{d}_i = T d_i$ and $\tilde{D}_i = L D_i$.

2. DLT: Form the $2n \times 12$ matrix A by stacking up the equations (3.3) generated by the point correspondences $\tilde{d}_i \leftrightarrow \tilde{D}_i$. The solution to $Ap = 0$ subject to $\|p\| = 1$, where p is the vector of the entries in $\tilde{P}$, is the unit singular vector of A corresponding to the smallest singular value.

3. Minimize geometric error: The result from the DLT algorithm is used as a starting point for the iterative minimization of the geometric error. All measurements $\tilde{d}_i, \tilde{D}_i$ are placed in a measurement vector X, and the estimated ones $\hat{\tilde{d}}_i, \hat{\tilde{D}}_i$ are placed in a vector $\hat{X}$. The Mahalanobis distance

\[ \varepsilon = (X - \hat{X})^T\,\Sigma_X^{-1}\,(X - \hat{X}) \]

is then minimized, to obtain an optimal $\tilde{P}$, using an iterative algorithm such as Levenberg-Marquardt. Here $\Sigma_X$ is the covariance matrix of the measurements.

4. Denormalization: The camera matrix for the original coordinates is obtained from $\tilde{P}$ as

\[ P = T^{-1}\,\tilde{P}\,L. \]

3.2.4 Estimating Parts of the Camera Matrix

In many situations parts of P are well known; hence it is only parts of the camera matrix that are searched for. The internal camera matrix K is often determined once and for all through camera calibration. When using a camera, it is thus often only the position and attitude of the camera that are unknown. In Chapter 2 it was stated that P consists of three different parts,

the internal camera matrix K, the rotation matrix R, and the translation

vector t. In Chapter 5 the camera attitude will be estimated and it is assumed

that both K and t are well-known. In this situation the following approach is

suggested by [1].

1. Produce an initial estimate of P with DLT.

2. Decompose P into K, R, and t using QR-decomposition, see [7]. QR-decomposition can be used since it is known that K is upper triangular and that R is a DCM.

3. Construct a parameter vector q that consists of the parameters in K, R

and t. This parameter vector will have 14 entries and looks like

\[ q = \big(\alpha, \beta, s, u_0, v_0, r_{11}, r_{12}, r_{13}, r_{21}, r_{22}, r_{23}, t_x, t_y, t_z\big), \]

where the first five entries are the parameters in K, the next six entries

are the entries in the two upper rows of R, and the last three entries are

the coordinates of t.

4. Add an extra cost function c to (3.13) which forces the parameters in K

and t to their desired values. The cost function is built up like

\[ c = w\Big( (\hat{\alpha} - \alpha)^2 + (\hat{\beta} - \beta)^2 + (\hat{s} - s)^2 + (\hat{u}_0 - u_0)^2 + (\hat{v}_0 - v_0)^2 + (\hat{t}_x - t_x)^2 + (\hat{t}_y - t_y)^2 + (\hat{t}_z - t_z)^2 \Big), \tag{3.14} \]

where w is a weight that will increase with each iteration. This makes it

more and more important that the difference between the estimated and true values of the parameters is small. Here the same weight is given for the whole cost function, but it is possible to use different weights for each parameter.

The cost function (3.14) will be added to the cost function (3.13), and the sum of these will be minimized with Levenberg-Marquardt. To implement this extra cost function, the true values of the parameters will be placed at the end of the measurement vector X. The estimated values of the parameters will be placed at the end of the vector $\hat{X}$, and the weights, which change after each iteration, will be inverted and placed at the end of the covariance matrix $\Sigma_X$.
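A hedged sketch of step 2 above, assuming SciPy: the left 3x3 block of P is factored into an upper-triangular K and an orthonormal R with scipy.linalg.rq, and t follows from solving K t = p4, where p4 is the fourth column of P. The sign conventions (for instance the negative alpha and beta of (2.10)) are not resolved here.

```python
import numpy as np
from scipy.linalg import rq

def decompose_camera_matrix(P):
    """Split P = K (R | t) into K (upper triangular), R (orthonormal) and t.
    P is only defined up to scale, so K is rescaled so that K[2, 2] = 1."""
    K, R = rq(P[:, :3])                 # P[:, :3] = K R, K upper triangular
    K = K / K[2, 2]
    t = np.linalg.solve(K, P[:, 3])     # K t = fourth column of (the rescaled) P
    return K, R, t
```

In practice the signs of the diagonal of K and the determinant of R would be checked and corrected before the entries are copied into the parameter vector q above.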


4

Camera Calibration

In this chapter it is presented how the camera is calibrated. This means that the internal camera matrix K is determined through a laboratory test. In the first part of the chapter the laboratory setup and the method for calibrating the camera are presented. After that it is explained how the internal camera matrix K can be determined from the calibration data, and finally the test results are presented and the covariance of K is calculated through Monte Carlo simulations.

4.1 Data Collection

When calibrating a camera it is assumed that both the position and attitude of the camera can be measured with very good accuracy. Camera calibration is preferably made in a laboratory before using the camera in an aeroplane. This is because the internal camera parameters do not change with time. Recall from (2.11) in Section 2.4, that P consists of three parts, K, R, and t.

When calibrating the camera, R and t are well-known and (2.11) is solved

for K. The suggested laboratory arrangement that is used for camera

calibration in this Master´s thesis is illustrated in Figure 4.1.


Figure 4.1: The laboratory setup for camera calibration. A calibration pattern with marked dots is used to get the corresponding points, and an inertial navigation system (INS) is used to get the attitude of the camera. The rotation matrix $R_K^C$ is the transformation between the calibration frame (K) and the camera frame (C), and the vector $t_{KC}^K$ is the translation vector between the two frames.

The data collection procedure is quite straightforward. The transformation vector $t_{KC}^K$ is measured, and the camera attitude is taken from the inertial navigation system (INS), which measures with high accuracy. An image is then taken of the calibration pattern, and after that the corresponding points are picked out. Notice that the calibration pattern is planar, which means that the transformation of the points in space to the image plane is a transformation from $\mathbb{R}^2$ to $\mathbb{R}^2$. After picking out the corresponding points

the data collection part is finished and the next step is to calculate K from

the collected data.

4.2 Derivation of the Internal Camera Matrix

The method of finding the internal camera matrix K is quite analogous to that of finding $R_G^C$ in Section 5.2. The difference is that K is unknown while $R_K^C$, derived in Appendix A.3, and $t_{KC}^K$ are well known. It will, however, be easier to find K, because no orthonormality constraint has to be respected, which is the case in camera attitude estimation. Since $R_K^C$ is known this time, it is automatically orthonormal. It also turns out that the minimization of the geometric error is not necessary: when both $R_K^C$ and $t_{KC}^K$ are known and the transformation is from $\mathbb{R}^2$ to $\mathbb{R}^2$, the geometric cost function becomes equal to the algebraic cost function that is minimized by DLT, see [1].

The approach is to express P in the parameter vector $q^{Cal}$, which now looks like

\[ q^{Cal} = \big(\alpha, \beta, s, u_0, v_0\big). \tag{4.1} \]

DLT is then carried out with the elements of $q^{Cal}$ as parameters, which gives the estimate of K.
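Because $R_K^C$ and t are known, each correspondence gives two equations that are linear in $q^{Cal} = (\alpha, \beta, s, u_0, v_0)$, so a plain linear least-squares solve suffices. The sketch below assumes NumPy and the sign convention of (2.10); the argument names are illustrative and not taken from the thesis code.

```python
import numpy as np

def calibrate_internal(d, D, R_kc, t):
    """Estimate q_Cal = (alpha, beta, s, u0, v0) from correspondences d_i <-> D_i,
    given the known rotation R_kc and translation t of the calibration setup.
    Uses the sign convention of (2.10): x = -alpha*mx/mz + s*my/mz + u0,
    y = -beta*my/mz + v0, with m = R_kc D + t the point in the camera frame."""
    m = (R_kc @ D.T).T + t
    a = m[:, 0] / m[:, 2]
    b = m[:, 1] / m[:, 2]
    n = len(d)
    A = np.zeros((2 * n, 5))
    y = np.zeros(2 * n)
    A[0::2] = np.column_stack((-a, np.zeros(n), b, np.ones(n), np.zeros(n)))
    y[0::2] = d[:, 0]
    A[1::2] = np.column_stack((np.zeros(n), -b, np.zeros(n), np.zeros(n), np.ones(n)))
    y[1::2] = d[:, 1]
    q_cal, *_ = np.linalg.lstsq(A, y, rcond=None)
    return q_cal
```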

4.3 Results

No laboratory experiments have been carried out in this Master's thesis. All results are simulated with the camera model produced in Matlab. The calibration pattern is constructed to have a size of 5x5 metres, and the points are separated by 0.5 metres in each direction. The calibration pattern is planar, i.e., the points have no depth. There is no point in using a 3D calibration pattern since both the rotation matrix and the translation vector are known. It is assumed that both $R_K^C$ and $t_{KC}^K$ are measured with infinite accuracy.
