
DEGREE PROJECT IN THE BUILT ENVIRONMENT, SECOND CYCLE, 30 CREDITS

STOCKHOLM, SWEDEN 2017

Precision Analysis of

Photogrammetric Data Collection Using UAV

VIOLETA DE LAMA BLASCO

KTH ROYAL INSTITUTE OF TECHNOLOGY

SCHOOL OF ARCHITECTURE AND THE BUILT ENVIRONMENT


Master Thesis in

Precision Analysis of Photogrammetric Data Collection Using UAV

Violeta De Lama

Tuesday 13th June, 2017

KTH ROYAL INSTITUTE OF TECHNOLOGY

DIVISION OF GEODESY AND SATELLITE POSITIONING

KTH Supervisor: Milan Horemuz

External Supervisor: Erik Syrén (ÅF)

Examiner: Anna Jensen


Preface

This master thesis is the culmination of the master programme Transport and Geoinformation Technology that I started at KTH in 2015.

The majority of this work has been developed at the offices of ÅF in Solna, Stockholm, in the Measurements section. The study has been conducted under the supervision of docent Milan Horemuz from the Division of Geodesy and Satellite Positioning at KTH, to whom I am very grateful for his guidance, advice, time and patience during these months. I also want to express my gratitude to my opponent and examiner for the feedback received.

I would like to acknowledge Lennart Gimring and Erik Syrén, from ÅF, who gave me the chance to work on my thesis with them at the ÅF offices, and the rest of the section, who treated me very kindly from the very first day and made me feel part of the team from the beginning.

I would also like to thank my family, who are always supportive whenever I tell them I have got involved in another "adventure"; my friends in Sweden, who are part of my family now, and in Spain, who are always there to meet me when I come back, or ring, or write; and my classmates during the master studies, who always helped me and treated me like one of them.


Abstract

The focus of this work is on photogrammetric data collection using an Unmanned Aerial Vehicle (UAV). Its purpose is to study how terrain roughness, flight height, calibration of the camera, distribution of Ground Control Points (GCPs) and the position of the points to be estimated affect the precision of point estimation.

First, a case study has been designed containing points whose coordinates are well known (acting as GCPs) and points that need to be estimated. Then, two setups of the camera are outlined, in which two pictures of the points are assumed to be taken by the same camera. For each of them, the collinearity equations are derived. Applying the Structure from Motion (SfM) technique, the points (GCPs and unknown) are identified in the overlap of the images and used in a Least Squares Adjustment (LSA) to estimate the coordinates and variances of the unknown points.

Next, several tests are performed in different scenarios combining three terrains, two flying heights, calibration of the camera, two distributions of the GCPs, five different positions for points to be estimated and absence or presence of random and systematic distortions.

This study shows that better results are obtained with a calibrated camera, with a concentrated distribution of points, for unknown points surrounded by GCPs, at a lower flying height and when using GCPs of varying heights.

In addition, an example of data collected by a UAV is processed with a commercial software package (Agisoft Photoscan). The results for these estimated points are comparable to the analytical calculations in the Northing and Easting coordinates, and they are within the thresholds set by the standard deviations of the case study. The height estimates (except for one isolated point) are better than those of the case study (the differences with respect to the true values are smaller) and also lie within the ranges given by the standard deviations of the case study.


Contents

Preface
Abstract
1 Introduction
 1.1 Background
 1.2 Objectives
 1.3 Review of literature and other relevant sources
  1.3.1 Review of the current legislation
2 Methods
 2.1 Determination of point coordinates from camera images
  2.1.1 Definition of coordinate systems
  2.1.2 Collinearity equations
  2.1.3 Setups description
  2.1.4 Structure From Motion
  2.1.5 Least Squares Adjustment
  2.1.6 Image distortions
 2.2 Tests and simulations
  2.2.1 General Scenario
  2.2.2 Different terrains
  2.2.3 Different flying heights
  2.2.4 Calibrated and Uncalibrated camera
  2.2.5 Distribution of GCPs
  2.2.6 Summary of specific scenarios
 2.3 Implementation
  2.3.1 Initial values
  2.3.2 Matlab program
 2.4 Commercial Software
  2.4.1 Description of the data
  2.4.2 Processing of the data
3 Results and Analysis
 3.1 Precision at point estimation
  3.1.1 No distortions
  3.1.2 Random Distortions
  3.1.3 Systematic distortions
  3.1.4 Random and Systematic distortions
 3.2 Software estimation
 3.3 Other parameters of the adjustment
 3.4 Analysis
  3.4.1 Calibration of the camera
  3.4.2 Distribution of GCPs
  3.4.3 Position of the estimated points
  3.4.4 Flying height
  3.4.5 Different terrains
4 Discussion of the results, conclusions and recommendations
 4.1 Conclusions and Recommendations
 4.2 Discussion
References
A Appendix
 A.1 Tables


List of Figures

1 Recommendations by Transportstyrelsen for safely flying a drone.
2 Image showing world and image coordinate systems. The camera system is not explicitly shown (it has the same axes and center as the world coordinate system, but the units are pixels instead of meters).
3 Image showing the disposition of a point, the camera center and the projection.
4 Sketch of the Structure from Motion technique.
5 Radial distortion can be positive or negative. Image from MathWorks.com, last accessed 2017: https://se.mathworks.com/help/vision/ref/cameraparameters-class.html
6 Urban terrain at 50 meters from the camera. Pictures show the first (left) and second (right) camera setup images, without (circles) and with (crosses) systematic errors.
7 Geometric description from the side of the flying path: the distance between two positions of the UAV is given by the flight height and the internal parameters of the camera.
8 Geometric description from the top of the flying path.
9 Images of an urban terrain seen at 100 meters from the camera showing GCPs, points to be estimated and points containing systematic errors, for an uncalibrated camera.
10 Images of an urban terrain seen at 100 meters from the camera showing GCPs, points to be estimated and points containing systematic errors, for an uncalibrated camera where only 10% of the size of the systematic errors is considered, so their effect is reduced.
11 Spatial configuration of the urban scenarios showing GCPs (empty dots) and the points to be estimated (filled dots).
12 Top view of the spatial configuration of the GCPs and the points to be estimated.
13 Two dispositions of the points on urban terrain seen at 100 meters from the camera. The left disposition makes the geometry worse and the systematic errors towards the edges almost disappear, so their effect cannot be studied.
14 Flat terrain: dataset where points are at a distance of 50 m from the camera and heights over the terrain are within a range of 30 cm. Upper left corner: image from the first camera setup. Upper right corner: image from the second setup. Lower left corner: spatial disposition of the points in the real-world coordinate system. Lower right corner: a triangulation of the points. Filled points represent points to be estimated.
15 Telge Hus near Södertälje.
16 Example from the dataset. View of the ruins of the castle from above.
17 Disposition of control points around Telge Hus.
18 Process of creating a 3D model in Photoscan.
19 Model in Photoscan and location of GCPs.


List of Tables

1 Dimensions of the terrain covered at different flying heights.
2 Initial and true values for the heights of points P1a, P1b, P1c, P2a, P2b depending on the different terrains (F: Flat, S: Semi-flat, U: Urban), flying at 50 m from the ground level.
3 Summary of differences in coordinates when random distortions are present in the image.
4 Summary of differences in coordinates when systematic distortions are present in the image.
5 Summary of differences in coordinates when both random and systematic distortions are present in the image.
6 Output from Photoscan. Differences of true coordinates minus estimated (Northing, Easting, Height). Errors in estimated position in meters (distance between estimated and input value) are calculated only for the GCPs. The "root mean square reprojection error for the marker calculated over all photos where the marker is visible" was output for all created markers.
7 Data extracted from Tables 23 and 21 in Appendix A for data containing random and systematic distortions, corresponding to an uncalibrated camera, 100 m flying height, semi-flat terrain and spread geometry. Estimated coordinates and their variances and standard errors are shown.
8 Summary of the terrains where points P1c, P2a and P2b obtain smaller differences with respect to true values (comparison made between Flat and Urban terrains).
9 Data for uncalibrated camera. Estimated variances of all parameters. No errors.
10 Data for calibrated camera. Estimated variances of all parameters. No errors.
11 Data for uncalibrated camera. Differences between true and estimated values. No errors.
12 Data for calibrated camera. Differences between true and estimated values. No errors.
13 Data for uncalibrated camera. Estimated variances of all parameters. Random errors.
14 Data for calibrated camera. Estimated variances of all parameters. Random errors.
15 Data for uncalibrated camera. Differences between true and estimated values. Random errors.
16 Data for calibrated camera. Differences between true and estimated values. Random errors.
17 Data for uncalibrated camera. Estimated variances of all parameters. Systematic errors.
18 Data for calibrated camera. Estimated variances of all parameters. Systematic errors.
19 Data for uncalibrated camera. Differences between true and estimated values. Systematic errors.
20 Data for calibrated camera. Differences between true and estimated values. Systematic errors.
21 Data for uncalibrated camera. Estimated variances of all parameters. Random and systematic errors.
22 Data for calibrated camera. Estimated variances of all parameters. Random and systematic errors.
23 Data for uncalibrated camera. Differences between true and estimated values. Random and systematic errors.
24 Data for calibrated camera. Differences between true and estimated values. Random and systematic errors.


List of abbreviations

ALS Aerial Laser Scanner
DEM Digital Elevation Model
DSM Digital Surface Model
DTM Digital Terrain Model
EASA European Aviation Safety Agency
FOV Field Of View
GCP Ground Control Points
GIS Geographical Information System
GNSS Global Navigation Satellite System
IMU Inertial Measurement Unit
LiDAR Light Detection and Ranging
LSA Least Squares Adjustment
RPAS Remotely Piloted Aircraft Systems
SfM Structure from Motion
SIFT Scale-Invariant Feature Transform
TIN Triangulated Irregular Network
UAV Unmanned Aerial Vehicle


1 Introduction

1.1 Background

Unmanned Aerial Vehicles (UAVs), commonly called drones, are present in society today. A variety of them is available on the market: they have different sizes, characteristics and even different degrees of autonomy. These features make them affordable and adaptable to varying contexts.

At the same time, cameras are also a growing market. Diverse qualities and sizes are offered by many different manufacturers worldwide. Photogrammetry, a discipline born in the 19th century, is meanwhile expanding thanks to the digitalization of cameras and the digital processing of images. In particular, aerial photogrammetry has experienced a considerable change from analog to digital methods of analyzing data.

The merging of these disciplines gives rise to a variety of applications of aerial photogrammetry connected to positioning, which in turn is an attractive combination for surveying and modelling. Classic aerial photogrammetry, performed with aircraft and analog cameras, can nowadays be performed with a UAV carrying a digital camera as well as a GNSS receiver that registers the exact location where each image is taken.

The products to be obtained from this process could be used for modelling 3D city buildings, terrains or surfaces (obtaining products like DEMs, DTMs or DSMs), for visualization purposes (creation of maps), navigation, etc. Society could benefit from these products in construction and urban planning, hydrology modelling, vegetation monitoring, precision farming, forestry management, flood risk analysis, and more.

Modelling in three dimensions can also be achieved by using Light Detection and Ranging (LiDAR) technology. In this case, aerial photography is replaced by an Aerial Laser Scanner (ALS) and 3D models are obtained with different equipment: an aircraft on which a LiDAR, a GPS receiver and an IMU are mounted. The product obtained with this method is reliable, as laser measurements are very powerful, but the equipment and the procedure can be more expensive.

Another possibility is to take the pictures from a helicopter or a satellite instead of a UAV. But again, UAV technology is cheaper and faster and offers the possibility of flying much closer to the objects to be studied, which can be essential in search and rescue operations, for example.

A disadvantage that can discourage the use of UAVs with cameras is the changing legislation in the EU in general and in Sweden in particular. Privacy violations due to UAVs flying and taking pictures in towns or cities, the flying height of the drones, or their intrusion into restricted or private areas (like airports or military bases) can be reasons for not encouraging their use, as they can be identified as threats to security or authority.

This work will perform a study of aerial photogrammetry imagery collected by means of UAV devices and digital cameras. Sources of error and their contribution to the final accuracy of the product must be identified and studied so that their effect can be reduced.

1.2 Objectives

The principal objective of this work is to find out how the precision of point determination depends on different factors:

• flight height

• distribution of the GCPs

• terrain roughness

• calibration of the camera

• position of the estimated points relative to the GCPs

In order to study that dependency, a case study will be designed in Matlab, composed of a set of GCPs and points to be estimated. GCPs are placed over an area and their coordinates are assumed to be sufficiently well known. The unknown points will be placed at different spots with respect to the set of GCPs.

More specifically, GCPs are assumed to be surveyed over terrains of different roughness: flat, semi-flat and urban. At the same time, the UAV will fly at two different heights (50 m and 100 m) to take images of them together with the unknown points. These pictures are taken under the assumption of both an uncalibrated and a calibrated camera. The points will appear concentrated in the middle in some pictures and spread over the image in others. The unknown points will be both surrounded by GCPs and lying close to them but at a worse spot. Random and systematic errors produced in the process of photographing the points will be added separately in order to discern the effect that they produce on the results.

In addition, a set of images taken over the ruins of a medieval castle, from a photogrammetric project carried out in Södertälje, is processed with a commercial software package in order to compare results.

It is expected that better estimations are obtained on urban terrain, at a 50 m flight height, with a calibrated camera, for points surrounded by GCPs and for a concentrated geometry of the GCPs.


1.3 Review of literature and other relevant sources

In recent years, the combination of photogrammetry and UAVs has been attracting the attention of the research community. Every year, more articles relating photogrammetry and UAV systems are published.

One of them is the work by Rosnell and Honkavaara (2012) [9], in which point clouds are generated from images taken with a micro UAV and a digital camera. They investigated two different post-processing approaches, one commercial and one built by themselves, and took into account calibration of the camera and accuracy in height, among other topics.

Another study, of an ultra-light UAV, that focuses on its performance with a digital camera and in which calibration of the camera is a step in the process, is explained in the article by Vallet et al. (2011) [12].

Reshetyuk and Mårtensson (2016) [8] is another example to be taken into consideration. In it, two DEMs are created with the help of a UAV flown at two different heights and carrying a digital camera, and the data are processed using two different software suites.

An interesting study is the one performed by Harwin et al. (2015) [6], which considers that the camera can be calibrated beforehand, which can improve the results, or calibrated afterwards using the corresponding option in the Photoscan software. It also studies the effect of different overlaps between images, the oblique position of the camera, and the number and density of GCPs. To assess the accuracy, they compared "precisely-surveyed points" with the "photogrammetrically-derived coordinates", as this study does by comparing the true coordinates given to the points with the estimated ones.

1.3.1 Review of the current legislation

For this study, the flying of a UAV is essential; it is therefore important to review the legislation in Europe and in Sweden as well.

As the activities performed with UAV devices can be harmful to society in different ways, each country should regulate these activities accordingly. Different chapters of the legislation are dedicated to the two different purposes of use: recreational and professional. Only the criteria regarding professional use of UAVs are explained below.

Europe. At the time this work is being written, the information site Dronerules.eu [4] is still under construction, and it states (February 2017) that "the regulatory aviation framework is currently under development at European level, applicable rules and regulations are - as of today - different from country to country".

The purpose of this site is to provide European Remotely Piloted Aircraft Systems (RPAS) users with information regarding the regulation of drone activity within the European Union and awareness of aviation rules concerning aviation safety (endangerment of persons or property damage), privacy (data protection) and insurance.

In the course of this thesis, the European Aviation Safety Agency (EASA) has published a "proposal to regulate the operation of small drones in Europe" as of May 2017, open for comments until August 2017. The drones will be classified from C0 to C4 and each category will have different specifications. A poster [5] has also been released in which "do's and don'ts" are detailed for each category: the operating heights are different, as are the distances guaranteeing privacy, but there are some general criteria, such as the prohibition of flying close to large crowds or airports, taking pictures, etc.

Sweden. The Swedish authority in charge of regulating unmanned aircraft systems is the Swedish Transport Agency (Transportstyrelsen). Some requirements are stated in the regulations [11] (2009) in order to ensure a safe flight. Some of them, shown on a flyer, are:

• Permission is needed when flying within an airport control zone.

• Do not fly over groups of people; do not endanger anyone.

• Fly where other people are not disturbed. Take pictures only of those who have agreed to it.

• Always keep the drone in sight. Fly no higher than 120 m and no further than 500 m from yourself.

among other rules on permissions for professional flying and on special areas.

Figure 1: Recommendations by Transportstyrelsen for safely flying a drone.

Some news concerning the debate about privacy was published during April 2017, when the Swedish Government released a press note [10] announcing a change in the law on camera surveillance.


2 Methods

For this work, a compact camera (Canon S45), for which the internal and distortion parameters are available, is considered.

In the methodology to be used in this work, GCPs are assumed to be surveyed over different terrain roughness: flat, semi-flat and urban. At the same time, the UAV will fly at two different heights (50m and 100m) to take images of them together with the unknown points. These pictures will be taken under the assumption of both an uncalibrated and a calibrated camera.

The points will appear concentrated in the middle for some pictures and will be spread over the image in some others.

The unknown points will be both surrounded by GCPs and lying close to them but at a worse spot too. The addition of random and systematic errors produced in the process of photographing the points will be separated in order to discern the effect that they produce on the results.

2.1 Determination of point coordinates from camera images

This chapter explains how, with the help of two images with a certain overlap and information about GCPs, it is possible to simultaneously estimate the coordinates of an unknown point and the internal and external parameters of the camera. In addition, the estimated variances of the unknowns and the differences between the estimated parameters and the real values are obtained as well.

First of all, the collinearity equations must be derived. In order to do that, the coordinate systems to be used must be defined.

2.1.1 Definition of coordinate systems

First, a world coordinate system is needed. Its Z-axis points towards the objects in the real world, its X-axis runs along the vertical side of the picture to be taken by the camera, pointing upwards, and its Y-axis runs along the horizontal side of the picture, pointing rightwards, so that it is a right-handed coordinate system. Its center is at the origin point O, $(X_0, Y_0, Z_0)^t$, where the camera lens is. Measures in this system are considered to be in meters (m).

Figure 2: World and image coordinate systems. The camera system is not explicitly shown (it has the same axes and center as the world coordinate system, but its units are pixels instead of meters).

A second system, the camera coordinate system, is defined at this same origin point O; its axes are parallel to the ones described before, but measures are taken in pixels. There can exist more than one camera coordinate system (one is attached to each camera setup): each is defined at the exact place where the camera takes the image, and its axes are defined based on the directions of the image sides.

In the first setup to be considered in this work, these two systems, the world and camera systems, will be coincident.

A third system is needed: the image coordinate system. It is defined as a two-dimensional coordinate system with its $x'$-axis and $y'$-axis parallel to the camera system defined before, the $x'$-axis running along the vertical side of the image (pointing up) and the $y'$-axis along the horizontal side of the image (pointing rightwards), with its origin lying at the bottom left corner of the image (the system is attached to the picture). This coordinate system has its units in pixels.

The distance from the camera system (and also the world system, if coincident) to the image system is given by the constant c: the principal distance of the camera (measured orthogonally from the lens of the camera to the image plane).

Another important element is the perspective projection of the center O, $(X_0, Y_0, Z_0)^t$, onto the image plane; it has coordinates $(x_0, y_0)^t$ in the image system and is called the principal point.


Figure 3: Image showing the disposition of a point, the camera center and the projection.

2.1.2 Collinearity equations

With this disposition, a point P with coordinates $(X, Y, Z)^t$ (m) in the world coordinate system is considered, which can be seen in a picture taken by the camera.

The projection of point P in the image is P′. In the camera system it has coordinates $(x' - x_0,\; y' - y_0,\; -c)^t$, and $(x', y')$ in the image system (both in pixels).

Taking all the above into consideration, the collinearity equations can be computed from the fact that P, P′ and O lie on a straight line (as seen in Figure 3): the ray passing through the camera origin O and point P and the ray passing through O and the image projection P′ of point P are coincident. In this case, if one of the segments joining those points is stretched by a quantity not yet known (m > 0), the other one is obtained:

$$\overrightarrow{OP'} \propto \overrightarrow{OP} \;\Rightarrow\; \overrightarrow{OP'} = m\,\overrightarrow{OP} \qquad (1)$$

By using coordinate notation, one can express this vector relation as follows (as done, for example, in [2]):

$$\overrightarrow{OP'} = m\,\overrightarrow{OP} \;\Leftrightarrow\;
\begin{pmatrix} x' - x_0 \\ y' - y_0 \\ -c \end{pmatrix}
= m\,R \begin{pmatrix} X - X_0 \\ Y - Y_0 \\ Z - Z_0 \end{pmatrix} \qquad (2)$$

in which R is a rotation matrix ($R^{-1} = R^t$) that transforms coordinates from the world system to the camera system; its element in the i-th row and j-th column is $r_{ij}$, for i = 1, 2, 3 and j = 1, 2, 3, and it is composed of three rotation matrices, one around each axis, so that $R = R_3(\kappa) R_2(\varphi) R_1(\omega)$. Here m acts as a scale factor (meters are transformed to pixels).
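As an illustration of the composition $R = R_3(\kappa) R_2(\varphi) R_1(\omega)$, a minimal Matlab sketch is given below. This is not the thesis code; the function name and the sign convention of the elementary rotations are assumptions.

```matlab
% Sketch: build the rotation matrix R = R3(kappa)*R2(phi)*R1(omega) from the
% three rotation angles (radians). Elementary rotation conventions are assumed.
function R = rot_matrix(omega, phi, kappa)
    R1 = [1 0 0; 0 cos(omega) -sin(omega); 0 sin(omega) cos(omega)];   % about X
    R2 = [cos(phi) 0 sin(phi); 0 1 0; -sin(phi) 0 cos(phi)];           % about Y
    R3 = [cos(kappa) -sin(kappa) 0; sin(kappa) cos(kappa) 0; 0 0 1];   % about Z
    R  = R3 * R2 * R1;                                                 % composition used in the text
end
```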


Using the last line of (2), m can be identified as:

$$m = \frac{-c}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}$$

and so the collinearity equations are:

$$x' = x_0 - c\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}$$
$$y' = y_0 - c\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)} \qquad (3)$$

These two equations express the image coordinates of a projected point, $(x', y')^t$, as a function of the principal point $(x_0, y_0)^t$, the principal distance c, the position (in the world coordinate system, $(X_0, Y_0, Z_0)^t$) and orientation (R) of the camera center with respect to the world coordinate system, and the position of the point in the world coordinate system, $(X, Y, Z)^t$.

As explained above, a camera coordinate system is attached to each position of the camera, and there is one for each photograph taken. The world coordinate system is coincident with the first one, which allows some simplifications in the equations. The image coordinate system is attached to each picture. Thus, the coordinate systems are closely tied to the images taken.
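To make the mapping of equation (3) concrete, a minimal Matlab sketch is shown below. It is a hypothetical helper, not the thesis program; the function name and argument order are assumptions.

```matlab
% Sketch: evaluate the collinearity equations (3) for one world point P, given the
% camera center O0 (world coordinates, meters), rotation matrix R, principal point
% (x0, y0) and principal distance c (pixels). Returns image coordinates in pixels.
function xy = collinearity_project(P, O0, R, x0, y0, c)
    d  = R * (P(:) - O0(:));            % (U, V, W): the point expressed in the camera frame
    xy = [x0 - c * d(1) / d(3); ...     % x' image coordinate
          y0 - c * d(2) / d(3)];        % y' image coordinate
end
```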

2.1.3 Setups description

It is now assumed that there are two camera setups. In the first one, the position and orientation of the camera are well known: the first camera origin $O^1 = (X_0^1, Y_0^1, Z_0^1)$ is set at the origin of the world coordinate system, O(0, 0, 0), and its axes are oriented so as to coincide with the world coordinate system; the angles of the rotation matrix are set to $\omega^1 = 0$, $\varphi^1 = 0$, $\kappa^1 = 0$ (so that the rotation matrix is the identity). Thus no translation or rotation is needed to transform the camera coordinates to the world coordinate system (only the scale factor is needed).

Using (3), the collinearity equations for the first setup are:

$$x' = x_0 - c\,\frac{X}{Z}, \qquad y' = y_0 - c\,\frac{Y}{Z} \qquad (4)$$

where $(x', y')$ are the coordinates of a point in the first image.

A second setup of the camera is also considered. The coordinates of the origin of this second camera system, in the world coordinate system, are $O^2 = (X_0^2, Y_0^2, Z_0^2)^t$. The angles of the rotation matrix are $\omega^2, \varphi^2, \kappa^2$, and the corresponding rotation matrix is $R^2$ (upper indices refer to the second setup).

Using (3) again, the collinearity equations for the second setup are:

$$x'' = x_0 - c\,\frac{r_{11}(X - X_0^2) + r_{12}(Y - Y_0^2) + r_{13}(Z - Z_0^2)}{r_{31}(X - X_0^2) + r_{32}(Y - Y_0^2) + r_{33}(Z - Z_0^2)}$$
$$y'' = y_0 - c\,\frac{r_{21}(X - X_0^2) + r_{22}(Y - Y_0^2) + r_{23}(Z - Z_0^2)}{r_{31}(X - X_0^2) + r_{32}(Y - Y_0^2) + r_{33}(Z - Z_0^2)} \qquad (5)$$

where $(x'', y'')$ are the coordinates of a point in the second image and $r_{ij}$ are the elements of $R^2$. This second setup becomes relevant when at least two pictures are taken with a certain overlap. By using features and points recognized in the overlap of the two images, 3D structures can be recreated and used, in this case, for calculating the camera position and orientation, the internal parameters and the coordinates of unknown points.

2.1.4 Structure From Motion

Structure from Motion (SfM) is a photogrammetric technique by which 3D structures can be computed from 2D images. An introduction to this technique can be found in Westoby (2012) [13]. The technique is applied to recreate models of three-dimensional objects from images taken by moving the camera around the object of interest (as depicted in Figure 4).

Figure 4: Sketch of the Structure from Motion technique.

With the camera placed at the setups described above, one image is taken at each position. By comparing them, some points are recognized in overlapping areas. In the case of this study, the GCPs are placed in the overlap between the two images and can be identified and used as correspondences between them.

By using these correspondences, the relative positions of the camera and the scene geometry can be computed. Moreover, for this study, as the first setup is coincident with the world coordinate system, the absolute position and orientation of the second viewpoint can also be calculated.

The identified features are recognized in the set of available images (in the case of this work, only two). An initial estimation of the camera position and the coordinates of these features is iteratively improved by using a least squares adjustment. A 3D reconstruction of the object of the study can be obtained after this process. In a general case, the recreated model has an arbitrary scale, but in the case of this study, the information provided by the GCPs will give the correct scale to the world coordinates being estimated.


In computer vision, the correspondences are usually found by a feature-based algorithm called Scale-Invariant Feature Transform (SIFT).
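For completeness, a hedged Matlab sketch of such an automatic correspondence search is shown below; it assumes the Image Processing and Computer Vision Toolboxes and uses SURF instead of SIFT. The case study itself does not need this step, since the GCPs serve as correspondences, and the file names are placeholders.

```matlab
% Sketch of automatic feature matching between the two overlapping images.
I1 = rgb2gray(imread('image1.jpg'));      % placeholder file names
I2 = rgb2gray(imread('image2.jpg'));

pts1 = detectSURFFeatures(I1);            % detect interest points
pts2 = detectSURFFeatures(I2);
[f1, v1] = extractFeatures(I1, pts1);     % descriptors and valid points
[f2, v2] = extractFeatures(I2, pts2);

pairs    = matchFeatures(f1, f2);         % indices of matching descriptors
matched1 = v1(pairs(:, 1));               % corresponding points in image 1
matched2 = v2(pairs(:, 2));               % corresponding points in image 2
```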

2.1.5 Least Squares Adjustment

Summing up, $O^2$, $R^2$, $x_0$, $y_0$ and c are not known for a specific camera with two setups. If there is also a point P which can be seen in both images but whose world coordinates $(X_p, Y_p, Z_p)^t$ are unknown, these can be estimated as well.

Therefore, in a general case there are 12 unknowns: the coordinates of the principal point $(x_0, y_0)$, the principal distance c, the coordinates of the center of the second setup $(X_0^2, Y_0^2, Z_0^2)^t$, the orientation angles of the second setup $\omega^2, \varphi^2, \kappa^2$, and the coordinates of the point P to be estimated (more unknowns are added if more points need to be estimated: 3 for each point).

After the search for coincidences explained above, several features/points can be identified in both images (the GCPs, in this work). In order to compute a solution for these 12 parameters, at least 3 such points are needed: 2 equations correspond to each point in each image, one per coordinate, and 2 images are considered, so 4 equations can be written for each GCP. Usually, the number of recognized points is larger, so a Least Squares Adjustment (LSA) can be used to estimate the 12 parameters.

The equations above, (4) and (5), form a system of the form $l = F(x)$, where $l \in \mathbb{R}^n$ contains the coordinates in the images, $x \in \mathbb{R}^m$ contains the unknowns, and $F: \mathbb{R}^m \rightarrow \mathbb{R}^n$ is a nonlinear function that takes values of the unknowns as input and returns the coordinates in the images.

In order to apply an LSA, it is necessary to linearize the equations. As usual, by expanding F in a Taylor series and keeping only the first-order terms, a linear system of the form $A\,\delta x = L + v$ is obtained, where A is the $n \times m$ matrix containing the derivatives evaluated at each point, $\delta x$ is the correction to an approximate value of x, $L = l - F(x)$ contains the image coordinates minus those calculated with the approximate values, and v is the vector of residuals of the adjustment, needed for the equations to hold.
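A schematic Gauss-Newton loop for this adjustment is sketched below. It is not the thesis Matlab program: x_init, l, F_fun (the collinearity equations evaluated for all observed image points) and A_fun (the Jacobian, built from the partial derivatives derived below) are assumed inputs, and equal weights are used for all observations.

```matlab
% Sketch: iterative least squares adjustment of the collinearity equations.
x = x_init;                                   % initial values of the m unknowns
for iter = 1:20
    L  = l - F_fun(x);                        % observed minus computed image coordinates
    A  = A_fun(x);                            % n-by-m design matrix of partial derivatives
    dx = (A' * A) \ (A' * L);                 % normal-equation solution of A*dx = L + v
    x  = x + dx;                              % correct the approximate values
    if norm(dx) < 1e-8, break; end            % stop when the corrections are negligible
end
v       = F_fun(x) - l;                       % residuals after convergence
sigma02 = (v' * v) / (numel(l) - numel(x));   % a posteriori variance of unit weight
Qx      = sigma02 * inv(A' * A);              % covariance of the estimated unknowns
```

The diagonal of Qx then provides estimated variances of the unknowns, of the kind reported in the case study.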

To begin with the Taylor expansion, the partial derivatives of the equations (4) and (5) with respect to the unknowns must be obtained.

In the case of the first setup, the only derivatives needed are the ones related to the internal parameters and point P :

$$\begin{pmatrix}
\dfrac{\partial x'}{\partial x_0} & \dfrac{\partial x'}{\partial y_0} & \dfrac{\partial x'}{\partial c} \\[2ex]
\dfrac{\partial y'}{\partial x_0} & \dfrac{\partial y'}{\partial y_0} & \dfrac{\partial y'}{\partial c}
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & -\dfrac{X}{Z} \\[2ex]
0 & 1 & -\dfrac{Y}{Z}
\end{pmatrix} \qquad (6)$$

$$\begin{pmatrix}
\dfrac{\partial x'}{\partial X} & \dfrac{\partial x'}{\partial Y} & \dfrac{\partial x'}{\partial Z} \\[2ex]
\dfrac{\partial y'}{\partial X} & \dfrac{\partial y'}{\partial Y} & \dfrac{\partial y'}{\partial Z}
\end{pmatrix}
=
\begin{pmatrix}
-\dfrac{c}{Z} & 0 & \dfrac{cX}{Z^2} \\[2ex]
0 & -\dfrac{c}{Z} & \dfrac{cY}{Z^2}
\end{pmatrix} \qquad (7)$$


For the second setup, the derivatives can be analogously calculated, but some notation needs to be introduced: the following auxiliary functions U, V, W are defined, as in [2]:

$$\begin{pmatrix} U \\ V \\ W \end{pmatrix}
= R \begin{pmatrix} X - X_0^2 \\ Y - Y_0^2 \\ Z - Z_0^2 \end{pmatrix} \qquad (8)$$

then, the partial derivatives can be written as:

$$\begin{pmatrix}
\dfrac{\partial x''}{\partial x_0} & \dfrac{\partial x''}{\partial y_0} & \dfrac{\partial x''}{\partial c} \\[2ex]
\dfrac{\partial y''}{\partial x_0} & \dfrac{\partial y''}{\partial y_0} & \dfrac{\partial y''}{\partial c}
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & -\dfrac{U}{W} \\[2ex]
0 & 1 & -\dfrac{V}{W}
\end{pmatrix} \qquad (9)$$

$$\begin{aligned}
\frac{\partial x''}{\partial X_0^2} &= \frac{c}{W}\left(r_{11} - \frac{U}{W}\, r_{31}\right) &
\frac{\partial y''}{\partial X_0^2} &= \frac{c}{W}\left(r_{21} - \frac{V}{W}\, r_{31}\right) \\
\frac{\partial x''}{\partial Y_0^2} &= \frac{c}{W}\left(r_{12} - \frac{U}{W}\, r_{32}\right) &
\frac{\partial y''}{\partial Y_0^2} &= \frac{c}{W}\left(r_{22} - \frac{V}{W}\, r_{32}\right) \\
\frac{\partial x''}{\partial Z_0^2} &= \frac{c}{W}\left(r_{13} - \frac{U}{W}\, r_{33}\right) &
\frac{\partial y''}{\partial Z_0^2} &= \frac{c}{W}\left(r_{23} - \frac{V}{W}\, r_{33}\right)
\end{aligned} \qquad (10)$$

$$\begin{aligned}
\frac{\partial x''}{\partial \omega} &= \frac{c}{W}\left[r_{13}(Y - Y_0^2) - r_{12}(Z - Z_0^2) + \frac{U}{W}\bigl(-r_{33}(Y - Y_0^2) + r_{32}(Z - Z_0^2)\bigr)\right] \\
\frac{\partial x''}{\partial \varphi} &= \frac{c}{W}\left[\cos\kappa \cdot W + \frac{U}{W}\bigl(\cos\kappa \cdot U - \sin\kappa \cdot V\bigr)\right] \\
\frac{\partial x''}{\partial \kappa} &= -c\,\frac{V}{W}
\end{aligned} \qquad (11)$$

$$\begin{aligned}
\frac{\partial y''}{\partial \omega} &= \frac{c}{W}\left[r_{23}(Y - Y_0^2) - r_{22}(Z - Z_0^2) + \frac{V}{W}\bigl(-r_{33}(Y - Y_0^2) + r_{32}(Z - Z_0^2)\bigr)\right] \\
\frac{\partial y''}{\partial \varphi} &= \frac{c}{W}\left[-\sin\kappa \cdot W + \frac{V}{W}\bigl(\cos\kappa \cdot U - \sin\kappa \cdot V\bigr)\right] \\
\frac{\partial y''}{\partial \kappa} &= c\,\frac{U}{W}
\end{aligned} \qquad (12)$$

$$\begin{aligned}
\frac{\partial x''}{\partial X} &= -\frac{\partial x''}{\partial X_0^2} &
\frac{\partial x''}{\partial Y} &= -\frac{\partial x''}{\partial Y_0^2} &
\frac{\partial x''}{\partial Z} &= -\frac{\partial x''}{\partial Z_0^2} \\
\frac{\partial y''}{\partial X} &= -\frac{\partial y''}{\partial X_0^2} &
\frac{\partial y''}{\partial Y} &= -\frac{\partial y''}{\partial Y_0^2} &
\frac{\partial y''}{\partial Z} &= -\frac{\partial y''}{\partial Z_0^2}
\end{aligned} \qquad (13)$$

The last partial derivatives, (7) and (13), are only used when point P is being estimated; they are zero for the GCPs, since good accuracy is assumed for their coordinates.
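As a small illustration of how these partials enter the design matrix, the sketch below builds the two Jacobian rows contributed by point P in the first image, following the reconstructed equations (6) and (7); the unknown ordering and the function name are assumptions.

```matlab
% Sketch: Jacobian rows of the first-image observation of point P.
% Assumed unknown ordering: [x0, y0, c, X0^2, Y0^2, Z0^2, omega, phi, kappa, Xp, Yp, Zp].
function Arows = jacobian_first_image(X, Y, Z, c)
    Arows = zeros(2, 12);
    Arows(1, 1:3)   = [1, 0, -X/Z];             % dx'/d(x0, y0, c), equation (6)
    Arows(2, 1:3)   = [0, 1, -Y/Z];             % dy'/d(x0, y0, c)
    % Columns 4-9 (second-setup exterior parameters) stay zero for the first image.
    Arows(1, 10:12) = [-c/Z, 0, c*X/Z^2];       % dx'/d(X, Y, Z), equation (7)
    Arows(2, 10:12) = [0, -c/Z, c*Y/Z^2];       % dy'/d(X, Y, Z)
end
```

The rows for the second image are built analogously from equations (9), (10), (11), (12) and (13).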
