Simulated SAR with GIS Data and

Pose Estimation using Affine Projection

Martin Divak

Space Engineering, master's level 2017

Luleå University of Technology

Department of Computer Science, Electrical and Space Engineering


Simulated SAR with GIS Data

and

Pose Estimation using Affine Projection

Author Martin Divak

Thesis supervisor Zoran Sjanic

Examiner George Nikolakopoulos

Co-supervisor Christoforos Kanellakis


The work presented in this thesis was conducted at Saab Aeronautics in the Sensor Fusion and Tactical Control section. The subjects described in this thesis are of interest due to the section's development of Decision Support Systems for aircraft applications.


Abstract

Pilots or autonomous aircraft need knowledge of where they are in relation to the environment. On board aircraft there are inertial sensors that are prone to drift, which needs to be corrected by referencing against a known item, place, or signal. Satellite data is not always reliable, due to natural degradation or intentional jamming, so aircraft are dependent on visual sensors for navigation. Synthetic aperture radar, SAR, is an interesting candidate as a navigation sensor. SAR is a collection of methods used to generate high resolution radar images by using movement to increase the apparent antenna size, or aperture. Radar sensors are not dependent on daylight, unlike optical sensors. Infrared sensors can see in the dark but are affected by weather conditions. Radar sensors are active sensors, transmitting pulses and measuring echoes, in the microwave spectrum of electromagnetic radiation, which does not interact strongly with meteorological phenomena.

To use radar images in qualitative and quantitative analysis they must be registered with geographical information. Position data for an aircraft is not sufficient to determine with certainty what or where one is looking in a radar image without referencing other images over the same area. Laying an image on top of another and transforming it such that the image contents align in position is called registration.

One way of georeferencing is to simulate a SAR image and register a real image, from the same view, using corresponding reference points in both images. The present work demonstrates that a terrain model can be split up and classified into different types of radar scatterers. Different parts of the terrain yielding different types of echoes increases the amount of radar-specific characteristics in simulated reference images. A terrain that is relatively flat, having few geometric features, may still be used to create simulated radar images for image matching.

Computer vision with other types of sensors has a long history compared to radar-based systems, and corresponding methods in radar have not had the same impact. Among the systems with substantial underlying development are stereoscopic methods, where several images are taken of the same area but from different views, meaning angles and positions, and image depth can be extracted from the stereo images. Stereoscopic methods in radar image analysis have mainly been used to reconstruct objects or environments seen from known parallel flight and orbital trajectories. The reverse problem, estimating position and attitude given a known terrain, is not solved. This work presents an interpretation of the imaging geometry of SAR such that existing methods in computer vision may be used to estimate the position from which a radar image has been taken. This is direct image matching, without the registration required by other proposals for SAR-based navigation systems. By determining position continuously from radar images, aircraft could navigate independent of daylight, weather, or satellite data.


Sammanfattning

Pilots or autonomous aircraft need to know where they are in relation to their surroundings. On board aircraft there are inertial sensors that are affected by drift, which needs to be corrected by referencing against a known object, place, or signal. Since satellite data is not always reliable, due to natural degradation or intentional jamming, aircraft depend on visual sensors for navigation. Synthetic aperture radar, SAR, is an interesting candidate as a navigation sensor. SAR is a collection of methods used to generate high resolution radar images by using motion to increase the apparent antenna size, or aperture. Radar sensors are not dependent on daylight as optical sensors are. Infrared sensors can see in the dark but are affected by weather conditions that can block infrared radiation. Radar sensors are active sensors, transmitting pulses and measuring echoes, in the microwave spectrum of electromagnetic radiation, which does not interact particularly strongly with meteorological effects.

To use radar images for quantitative as well as qualitative analysis, they must be registered with geographic information. Position data for an aircraft is not sufficient to determine with certainty what or where one is looking in a radar image without referencing other images over the same area. Laying one image on top of another and transforming them so that the positions of the image contents match is called registration.

One way of doing this is to simulate what a radar image looks like from the same view, given that the terrain is known, in order to relate image coordinates to world coordinates. This work demonstrates that a terrain model can be divided up and classified into different types of radar scatterers. Different parts of the terrain giving different echoes increases the amount of radar-specific characteristics in simulated reference images. Even a terrain that is relatively flat, that is, one without radar-specific geometric characteristics, can still be used to create simulated radar images for image comparison.

Computer vision with other types of sensors has a longer history compared to radar-based systems, and corresponding methods in radar have not had the same impact. Among the systems with much underlying development are stereoscopic methods, where several photos are taken over the same area but from different views, meaning angles and positions, and image depth can be extracted from the stereo images. Stereoscopic methods in radar image analysis have mainly been used to reconstruct objects or environments seen from known parallel flight or orbital trajectories. The reverse problem, estimating position and attitude given a known terrain, does not have a solution. This work presents an interpretation of the imaging geometry such that existing methods in computer vision can be used to estimate the position from which a radar image has been taken. This is a direct comparison that does not require the image registration demanded by other proposals for SAR-based navigation systems. By determining position continuously from radar images, aircraft could navigate independently of daylight, weather, and satellite data.


List of Acronyms

AESA Active Electronically Scanned Array
ATR Automatic Target Recognition
BRDF Bidirectional Reflectance Distribution Function
CAD Computer-aided Design
CDT Constrained Delaunay Triangulation
CP Control Point
CPU Central Processing Unit
CV Computer Vision
DEM Digital Elevation Map
DLR Deutsches Zentrum für Luft- und Raumfahrt
DSM Digital Surface Map
DTM Digital Terrain Map
ESA European Space Agency
FOV Field of View
GCP Ground Control Point
GIS Geographic Information System
GMTI Ground Moving Target Identification
GNSS Global Navigation Satellite System
INS Inertial Navigation System
InSAR SAR Interferometry
KvD Koenderink and van Doorn
Lidar Light Detection and Ranging
LOS Line-of-Sight
MTI Moving Target Identification
NLOS Non-Line-of-Sight
Radar Radio Detection and Ranging
RCS Radar Cross-section
SAR Synthetic Aperture Radar
SLAM Simultaneous Localisation and Mapping
Sonar Sound Navigation and Ranging
UAV Unmanned Aerial Vehicle

Examples of letter designations of electromagnetic spectral bands are mentioned throughout the thesis. The letter designations used in this thesis follow IEEE standard nomenclature.1

Letter  VHF       UHF    L    S    C    X     Ku     K      Ka     V      W       mm
GHz     0.03-0.3  0.3-1  1-2  2-4  4-8  8-12  12-18  18-27  27-40  40-75  75-110  110-300

HH, VV, and HV represent Transmit-Receive Horizontal/Vertical linear polarization modes.

1IEEE Std 521-2002


Mathematical Notation

r Slant Range
V̄ Velocity Vector
R̄ Range Vector
f_DC Doppler Centroid Frequency
ω Squint
ω_w Azimuthal Beamwidth
Δt Illumination Time
θ_w Angular Width in Range/Swath Direction
θ Depression Angle
θ_near Near Swath Grazing Incidence
θ_far Far Swath Grazing Incidence
θ_diff Difference in Depression Angle for Parallel Stereo Channels
C Camera or Intrinsic Matrix
C_SAR SAR Intrinsic Matrix
P_k Normalized Orthographic Projection Matrix
P_Aff Affine Projection Matrix
P_SAR SAR Projection Matrix
G Pose or Extrinsic Matrix
G Virtual Orthographic Camera Pose
R Rotation Matrix
t̄ Translation Vector
u_0 Horizontal Image Centre
v_0 Vertical Image Centre
c_0 Speed of Light
λ Wavelength
δ_r Slant Range Resolution
δ_az Azimuth Resolution
R Bidirectional Reflectance Distribution Function
φ_inc Incidence Angle
φ_ref Reflection Angle


Contents

Abstract
Populärvetenskaplig Sammanfattning
List of Acronyms
Mathematical Notation

1 Introduction
1.1 Problem definition
1.2 Research Questions
1.3 Thesis Contributions
1.4 Thesis Outline

2 Background
2.1 Earlier Work
2.2 Need for GNSS Independent Navigation
2.3 Vision-aided Robotics
2.4 Platforms and Hardware
2.5 SAR-aided Navigation
2.6 Physical and Image Simulation of SAR

3 Theory
3.1 Observation Geometry
3.1.1 Resolution Cell
3.1.2 Geometric Effects
3.1.3 Radiometric Effects
3.2 Rendering
3.2.1 Scattering Models
3.3 Stereoscopic Radargrammetry
3.3.1 Parallax
3.3.2 Parallel Heading Configuration
3.3.3 Arbitrary Heading Configuration
3.4 Affine Structure in SAR
3.4.1 Epipolar Geometry
3.4.2 Affine Projective Algebra
3.4.3 SAR Sensor Model

4 Model Preparation
4.1 Lantmäteriet Dataset
4.2 Surface Generation
4.2.1 Polygon Vectordata
4.2.2 Line Vectordata
4.3 Varying Reflectivity Model for Objects
4.4 Results of Simulating SAR Images

5 Image Utilization
5.1 Stereoscopy
5.2 Affine Epipolar Analysis

6 Conclusion
6.1 Discussion
6.2 Answers to research questions
6.3 Future Work
6.3.1 Implementation of CV in SAR
6.4 Multistatic SAR
6.5 Polarimetric Decomposition

References

A SAR Frame Sequence


1 Introduction

The work presented in this thesis is a step towards the goal of using SAR as a navigation sensor. This introductory section presents the thesis in terms of the research questions identified and addressed through experimentation and a literature survey: from a problem definition, through the research questions derived from it, to the main contributions of this thesis. It concludes with an outline of the two main topics that are covered.

1.1 Problem definition

Some of the constraints on aircraft navigation are: degradation of GNSS signals, passive sensors constrained to certain weather conditions, and drift in inertial sensors. Inertial sensors can be used to estimate heading when GNSS is not functioning, and visual sensors can be used to correct for the drift. The additional constraints facing aircraft are weather and time of day: optical sensors require daylight, and whereas IR sensors can be used at night, they are limited by weather conditions, as are optical sensors.

SAR is an important candidate for addressing these constraints. It is an active sensor, thus not constrained by daylight, and it operates in a part of the electromagnetic spectrum largely unaffected by weather. Identified gaps in the research include robust automated geocoding of a SAR image and a lack of observation geometry models that can be used in positioning. The limits are mainly that image processing algorithms have not been developed specifically for SAR.

Local motion estimation, i.e. estimating divergence from a nominal trajectory, has enabled the use of SAR on smaller aerial platforms. What is lacking is reliable global estimation, meaning where the nominal trajectory is in relation to the image content. This is the gap that this thesis aims to address.

1.2 Research Questions

These questions have been formulated from the constraints described above, and addressing them will aid development and research into SAR-aided navigation. As the simulation work enables pursuing other topics in SAR image analysis, the main research questions defined first relate to simulation:

• 1) Is the method of using 3D terrain maps for SAR image reference good enough for use in positioning?

• 2) Can texture, based on optical information, be used to generate reference images with more information than elevation maps alone?

• 3) How can the amount of radar-specific information in simulated reference images be increased?

As work proceeded, other questions were identified relating to the georeferencing of a SAR image or to positioning an aircraft using radar image content:

• 4) What information in SAR images is used in registration and quantitative analysis?

• 5) Is it worth developing radar-specific image analysis methods and algorithms?

• 6) Is it possible to orient an image by direct matching from different views?

These questions will be revisited in the concluding section of this thesis, where background, theory, and the work presented are summarized as answers.


1.3 Thesis Contributions

Having surveyed the literature on SAR simulator use, SAR-aided Positioning, and use of SAR intensity maps, the contributions to these and related areas of research are:

• Reflectivity models have been added to terrain maps to create more realistic simulated SAR images.

The purpose is to increase the amount of radar-specific salient features to aid in georeferencing real SAR images. The terrain map has been classified into different types of reflecting surfaces using existing vector data over the same area.

• The SAR observation geometry has been re-interpreted such that existing algorithms used in multiple-view geometry can be implemented. This is to directly match a real image and a simulated reference, to be used in a stream of images from the same platform, or across multiple platforms illuminating the same area. This projection model for SAR was developed based on existing qualitative descriptions in stereoscopic radargrammetry. Multiple-view geometry for SAR is generally limited to parallel trajectories. This approach differs from previous SAR-aided positioning approaches, which typically solve the range-Doppler equations using geolocated images. Positioning was the main goal for this projection model, but it can also be used in scene reconstruction.

1.4 Thesis Outline

The main goal of this research and development effort is a working SAR-aided navigation system. The full set of processes that needs to be developed for this includes navigation algorithms and image processing procedures that are outside the scope of this thesis. The objectives that are covered in this thesis are presented in table 1, together with how the research questions and theoretical background serve these objectives.

Table 1: Thesis outline in the following order: themes or partial goals towards SAR-aided Navigation, objectives covered in this thesis, research questions relating to the objectives, and finally theory and methods, with motivation, used towards fulfillment of the objectives.

Simulation
• Objective: Model Preparation for Simulated SAR for use in Georeferencing by Image Registration
• Research questions: (1) Is terrain data enough? (2) Application of texture? (3) Backscatter modelling?
• Theory and methods:
– Observation Geometry to Motivate Method of Simulation
– Geometric Algorithms for Triangulation of Height Data
– Rendering Equation to Model Surface Backscatter

Positioning
• Objective: Aircraft Pose Estimation in Low Visibility using Affine Structure in Radargrammetry
• Research questions: (4) Information in SAR Images? (5) Radar Specific Methods? (6) Positioning by Direct Matching?
• Theory and methods:
– Radargrammetric Dual Problem: Reconstruction from Known Positions; Pose Estimation using Terrain Data
– Affine Epipolar Geometry for Pose Estimation
– Define Resolution for Image Calibration

Because the simulation work has the intended goal of being used in navigation, much of the background survey covers either both topics or navigation specifically, with the exception of SAR simulators, which may be considered more generally in this context. This background survey serves to get a sense of technology readiness level and to identify gaps in comparison with other navigation sensors.

The concluding section covering future work will set some goals or milestones in the effort of developing a navigation system using SAR as a visual sensor. This is intended to streamline further research into key enabling requirements.

SAR missions are typically divided into many areas of application. The interested reader is pointed towards the review paper [1] for an introduction to SAR in terms of application areas. This thesis will generally limit discussions to aircraft as platforms.


2 Background

This section presents concepts and available technologies in fields relevant to model preparation for simulated SAR and to the development or implementation of positioning algorithms. Research gaps are identified, and the problem of navigation in low visibility without the use of GNSS is clarified.

2.1 Earlier Work

Because of the reliability and performance issues of positioning in smaller and cheaper UAVs [2], there is a need to focus and position SAR images beyond input from the INS. It is shown in [2] that entropy as a focus measure can be used to estimate deviations from a known nominal trajectory. Another method, based on the phase of raw radar signals typically discarded in intensity image formation [3], can also be used to correct deviations in trajectory.

Earlier efforts have matched optical maps and SAR images directly [4] by feature extraction. The purpose of that work is to estimate a nominal position of a SAR platform by matching optical and SAR images in a sensor fusion framework. In practice this means that the position of a SAR image can be used in estimating aircraft parameters. These parameters are more global, as a nominal trajectory may not be known. It is also shown in [5] that the image matching method works for both focused and unfocused images, but it may be of interest to also introduce a focusing process in the sensor fusion. A combined cost function of image matching of a real scene with a focus measure of image entropy is shown in [6]. Because of the computational complexity it may be interesting to study the effect on the image focusing process of focusing only subimages and to see what the effect of different ranges is on the final product.

This thesis is a continuation of work presented in [7], where a simulated reference image using 3D terrain data, shown with a real SAR image over the same area in figure 1, is used for the matching process. A simulated SAR image using GIS data will contain more of the features in a real SAR image by taking radiometric effects into account, not only the geometric effects from elevation maps of a terrain. Optical sensors and SAR have very different image acquisition geometries, which means direct application of existing CV processes does not work, and photos cannot be used directly as textures in simulations.


Figure 1: a) Real SAR image and b) simulated SAR image. Results of feature extraction using the Canny detector are shown as red circles in the real image and white crosses in the simulated image [7]

The Canny detector [8] was used for the feature detection shown in figure 1 and the registration thereof in figure 2; its main principle is to use two threshold levels, which makes it insensitive to noise due to hysteresis. To measure image matching, the Chamfer algorithm [9] was implemented to match edges of the real SAR image and the simulated reference image. This image registration method is used because SAR and optical images have different types of features.

Figure 2: Image registration of the real and simulated SAR images [7] requires that features in both images that come from the same scatterer match.
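To make the matching principle concrete, here is a minimal sketch of edge-based chamfer scoring in Python with OpenCV. The function name, thresholds, and inputs are illustrative assumptions, not the implementation used in [7] or [9]:

```python
import cv2
import numpy as np

def chamfer_score(real_img, sim_img, lo=50, hi=150):
    """Edge-based chamfer score between two greyscale images; lower is better.

    Canny uses two threshold levels (lo, hi) with hysteresis, which is what
    makes it relatively insensitive to noise.
    """
    real_edges = cv2.Canny(real_img, lo, hi)
    sim_edges = cv2.Canny(sim_img, lo, hi)
    # Distance from every pixel to the nearest simulated edge pixel.
    dist = cv2.distanceTransform(cv2.bitwise_not(sim_edges), cv2.DIST_L2, 3)
    ys, xs = np.nonzero(real_edges)
    # Mean distance from real edges to their nearest simulated edges.
    return dist[ys, xs].mean()
```

Such a score can be evaluated over a grid of candidate transformations of the simulated reference, taking the minimum as the registration.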

It is remarked in [7] that additional radar-specific information can be used to make the image matching process more robust. The earlier simulations only use a height map with the same backscatter model everywhere. The proposed work is thus to investigate the effect of adding radar reflectance to a 3D map of an environment for use as a reference image in positioning. The approach presented later in this thesis uses a combination of diffuse and specular reflection to approximate most radar scattering behaviour over a real topography.


2.2 Need for GNSS Independent Navigation

GNSS signal degradation effects are typically grouped into

• Environmental

• Intentional

depending on the type of application discussed and on which types of degradation are most relevant to the system in question.

Demonstrations by University of Texas researchers are often referenced in the literature on GNSS signal degradation, denial, and spoofing. One of the demonstrations is the takeover of a UAV, following an incident where a military drone was allegedly hacked.2 Another demonstration, aimed at raising the issue in terms of civilian security, is the remotely asserted control of a yacht.3

Efforts into GNSS anti-spoofing are not limited to unencrypted civilian signals [10]. A discussion of INS in relation to GPS is presented in [11], covering different INS technologies, mitigation techniques against jamming, and some degradation effects.

Interest in GNSS independence has been raised earlier [12], with some demonstrations of spoofing that did not get much attention until the aforementioned incident and demonstrations by the Texas researchers. Aerial and underwater environments are noted as being susceptible to degraded GNSS signals [13].

2.3 Vision-aided Robotics

The context of this thesis is navigation and data fusion. The insight that one can use results from mapping as state or pose variables in estimating position enabled the development of SLAM [14]. Applications of SLAM, such as cooperative mapping [15], are enabled by integrating developments from many different specializations.

Compared to SfM, which is a mapping technique that relates camera pose to 3D structure from a set of images, VO only establishes the egomotion of an observation platform; the primary function of the visual system is to establish pose, not to map. SfM is more general and encompasses sequential and unordered sets of images as input. Feature extraction applied to large environments or long-range image matching has seen more concentrated research efforts over the past decade [16].

Another use of images in navigation is to apply image segmentation and classification before georeferencing. [17] demonstrates using RGB data, not only greyscale, which is a relevant point for future work, to classify extracted superpixels, or image segments, as asphalt, grass, or other environmental types. It is a rotation-invariant method, as the image position likelihood is calculated using histograms of circular image regions. There have also been efforts into applying SLAM to radar data [18]. Raw SAR data that is only range-compressed can possibly be used in environments with strong point scatterers [19], not for the purposes of mapping but for odometry, i.e. visual dead reckoning. This can also be seen as another approach to estimating divergence from a nominal trajectory.

The conclusion of [20] is that CV for UAVs lacks experimental evaluation. The research presented has not fully integrated CV techniques in navigation systems; validation work presented in the literature is limited to experimental tests under many assumptions and system simplifications, or consists of simulated behaviour. This is found to be true for the longer-range sensor SAR as a navigation aid.

2http://www.engr.utexas.edu/features/humphreysspoofing

3https://news.utexas.edu/2013/07/29/ut-austin-researchers-successfully-spoof-an-80-million-yacht-at-sea


Furthermore, it is highlighted in [20] that comparative studies of these techniques are made hard by a lack of benchmarking and of metrics transferable to different areas of application. Development of SLAM systems is application driven, making metrics less transferable. Ideal and unique solutions do not exist for every operational condition, platform, environment, and set of resources such as hardware and software [13]. One contributor to the lack of benchmarking is the need for phenomenological descriptions of methods and errors [21].

Autonomous underwater vehicles are also in need of GNSS-independent navigation. Efforts into terrain-aided navigation [22] include the use of synthetic aperture sonar and depth maps. Synthetic aperture sonar has also seen many developments similar to SAR [23], including sonoclinometry, interferometric sonar, autofocusing by motion compensation, and sonargrammetry, all of which have had research efforts into utilization in underwater mapping and navigation. This directly parallels the concept of using SAR as a navigation sensor.

Some differences in the state of the art between robotic vision using SAR and other visual sensors have been exemplified. The gap between SLAM and its equivalent in SAR is seen by comparing the state of the art in photogrammetry and radargrammetry. These topics are presented in greater depth, as they form the basis of the theory used in concept development for this thesis.

2.4 Platforms and Hardware

Modern processors, antennas, and algorithms enable SAR-aided navigation research. Before describing navigation research based on SAR, some background on hardware and platforms is presented. UAVs as SAR platforms are a recent research effort.

The study [24] shows the technical feasibility of SAR-based navigation for a selection of UAV classes, SAR systems, and DTM requirements. The phase difference between simulated and real phase maps for InSAR-based navigation did seem promising, and their study indicates that the position of the phase profiles should be used rather than the phase difference itself. They conclude that SAR intensity images can be used for the purpose of aiding navigation, and that the optimal settings and parameters for such a system are easily fulfilled by a commercial system such as PicoSAR.4,5

SAR architectures have been demonstrated for a range of aircraft. Examples of demonstrations of SAR systems carried by UAVs are highlighted here to get a sense of demonstrated operational conditions and an understanding of platform and UAV types. UAVs of different sizes are utilized for different purposes [20], and their relative advantages and disadvantages need to be taken into account when designing a mission. Some categorization of UAV types is presented in [25]; these classifications are commonly used and referred to in many applications. Development of motion compensation and miniaturized antenna and processor systems has enabled the use of SAR on smaller platforms.

• For multirotor UAVs, two examples of demonstrators are [26] in the X-band and [27] in the Ku-band. For rotor-based aircraft the implementation of SAR sensors is rare but possible, mainly due to developments in motion compensation, as stated in the demonstrator papers. CARABAS6 is flown on a single-rotor aircraft in the VHF and UHF bands for foliage-penetrating ability.

• Fixed-wing aircraft are a more common airborne platform for SAR, as they offer longer endurance and higher payload; SAR typically does not require the agility that comes with rotor-based platforms. Some demonstrators are SARape [28] in the W-band and SARENKA [29], a C-band SAR system. The WATSAR demonstrator [30] carries both S-band and Ku-band systems.

4http://www.leonardocompany.com/en/-/picosar-1

5Developed by the same company as the Raven ES-05 http://www.leonardocompany.com/en/-/raven-1

6http://saab.com/air/sensor-systems/ground-imaging-sensors/carabas/


The recent publication of the aforementioned papers indicates growing interest in SAR on UAV platforms. Development of SAR for aircraft, especially smaller ones, has gone from project presentations and feasibility studies to demonstrations. Demonstrations provide power- and mass-budget specifications and notes on control and image processing architectures. EM band is just one parameter that separates these systems.

2.5 SAR-aided Navigation

Navigation is the planning and filtering of a sequence of poses; filtering several simultaneous estimates from different types of sensors is fusion. Research into using SAR as a navigation sensor is presented here, exemplifying what gaps exist and clarifying the purpose of the theory and methods presented in this thesis.

The idea of using SAR as a navigation sensor has been investigated before, both via motion compensation [31] and via registration approaches [32]. Motion compensation for SAR images can be used as input to inertial sensor fusion; this is local motion estimation, or odometry. Global estimation is in relation to an environment. Another approach to acquiring position data from SAR images is to use the range-Doppler equations directly together with known stationary targets sensed by an aircraft [33].

An effort towards optimal global estimation using a multisensor fusion framework is presented in [34]. The researchers find that a globally optimal fusion approach to an INS/GPS/SAR integrated navigation system performs better than using only INS/GPS. The presented results are simulations of their proposal of two layers of decentralized filters before a global fusion filter. Experimental data using such an approach is presented in [35]. SAR-specific data processing is not presented; it is unusual for control theory papers to cover such processing.7 Pose estimation using SAR images is either based on georeferencing and the range-Doppler equations or on other undisclosed techniques.

A method of georeferencing both SAR and InSAR images is presented in the context of using SAR as a navigation sensor in [36]. The intensity image is referenced against a landmark database, assuming the scene has landmarks picked up by ATR, whereas phase maps are compared against an InSAR simulation over a DTM.

An experimental demonstrator for this approach was developed for two platforms, showing positive results implementing SAR as a navigation sensor and exemplifying the new developments required [37]. GNSS lacking integrity, all-weather capability, and altimetry being unreliable over flat areas are some comparisons made by the authors. Further discussion of InSAR-aided navigation, how imaging geometry relates to acquired phase maps, and error analyses is presented in [38].

Bistatic observation geometries may also be useful in navigation. Paper [39] presents a system of bistatic forward-looking observations with a spaceborne transmitter and an airborne receiver, where the main area of application is navigation; other uses are considered but are not the end goal of the authors' development. The use of a GNSS satellite, such as Galileo, as transmitter is presented by ESA researchers. Onboard recorded information about the terrain means one can reference images against a database, though this is unsuitable when flying in unknown environments and under scenario modification.

A simulation of the performance of SAR-aided navigation is presented in [40]. Simulated imaging is a crop of constant size from a reference SAR image along a linear trajectory without rotation. This cropped image is matched against the full reference image to estimate the position along the linear trajectory.

2.6 Physical and Image Simulation of SAR

Some highlights of the use of simulated SAR are correcting positional errors by geocoding, radiometric corrections of SAR images for quantitative analysis, and evaluating signal processing algorithms and observation

7Control theory papers focus mainly on the physical model of what is controlled, for example a robotic arm or an aircraft, and on the filtering method for sequences of state estimates.


geometries [41]. These applications have varied levels of simulation requirements, relating to which process needs to be simulated. This section covers some papers describing simulators and refers to papers detailing how simulators are used in research.

SAR simulators are typically described using two classifications [42]:

• Image Intensity Simulator - Typically using ray tracing or rasterization approaches to estimate an image.

• Raw Signal Simulator - Physics-based approach to simulate electromagnetic propagation and how it is recorded by an aircraft.

The paper [43] presents orthorectification of SAR images using a simulated reference over the same area. GRECOSAR, a SAR simulator based on GRaphical Electromagnetic COmputing software [44], solves the diffraction and geometrical optics equations given a scene with complex impedance. The simulator developed for the georeferencing use presented in [36] is a physics-based, or raw signal, simulator [45]. Single and multiple scattering sometimes use different rendering algorithms, followed by a summation of the different results, as explained for some SAR simulators compared in [46] and for the simulator for use in navigation in [45].

This thesis covers work with image simulators that yield intensity images, not complex-valued images. Computation of electromagnetic physical propagation may be unsuitable for the purpose of generating reference images if considering online onboard generation of references. In contrast to computer models, real signals propagate at c0, and ray tracing or other image simulators can be fast enough for rendering. What the true bottleneck will be in a real system, acquiring real frames, rendering, or matching, cannot be specified before building such a system.

Presented in [46] is a comparative study of three image simulators:

• RaySAR

• CohRaS [47]

• SARViz [48]

RaySAR [49] was developed during the course of a doctoral thesis [50] in cooperation with DLR, who have recently also released an online educational program for learning about SAR [51]. Comparative studies of SAR simulators are worth investigating to understand the potentials and limitations of the different approaches.

Comparative studies of simulated SAR, for instance [46], give insight into how our generation of reference images differs from other simulation efforts. Realism is not the end goal; our interest is in estimating position, with its related restrictions and requirements.

RaySAR has been used in interpreting scatterer distributions in urban environments [52]. SAR simulators have also been utilized in change detection [53]. As the mode of imaging of SAR is very different, there have been publications on what it is that is being imaged; for instance, [54] covers some effects of pyramids, courtyards, and multifaceted poles simulated using CohRaS. Interpreting images of man-made objects is not straightforward due to, typically, multiple specular reflections. Viewing certain scenes in SAR for the first time can be surprising unless investigated beforehand [55].

Physical simulators are used for system performance evaluation or radar algorithm development. Physical, or signal, simulators can use real trajectory data as input to study defocusing [6], to evaluate ATR or MTI algorithms and the RCS of physical models [56], and to evaluate degradation and mitigation algorithms due to environment or jamming [57].


3 Theory

Historically, SAR image processing was done with all-optical systems [58]. SAR was an important application of Fourier optics, and optical holography is sometimes stated as an analogue for SAR images [59].

Digital processors enabled flexibility in SAR processing [60]. This is mentioned here because the non-trivial processing of SAR data adds to the complexity of understanding SAR. This section presents a limited theoretical treatment of the concepts and methods required for the experimental work in this thesis and for the use of CV algorithms on SAR. First the imaging geometry is clarified, with its associated challenges. Differences between SAR and photography are highlighted with a representation in figure 3. Images are two dimensional: in photography there is depth ambiguity, meaning we do not know at what distance from the camera an object in a photo is located, whereas in SAR this ambiguity is in the height of the observed object, or rather where the object lies on a circle segment. This circular geometry of SAR requires understanding of the range-Doppler equations.

Figure 3: A comparison of projection models, or pixel contributions, for optical imaging and SAR imaging over the same set of features. Figure from [61]

The model of the optical system in figure 3 is perspective projection. The locus of pixels in photography lies along projection lines from the centre of the optical system, while the locus of pixels in SAR consists of circle segments with slant range radius. Another way of putting it: everything along the dotted lines in figure 3 contributes to the same pixel. Occlusion in optical imaging occurs when objects are in front of one another, and shadows depend on illumination angle. In SAR, occlusion and shadow are the same thing.

The reasons why SAR images are interpreted differently from optical images are geometric effects and radiometric effects [61]. A clarification of these types of effects follows the presentation of the observation geometry for SAR, since a discussion of these effects and distortions requires base knowledge of this geometry.

3.1 Observation Geometry

Monostatic, zero-Doppler processed scanning mode will be the fundamental capture mode for most of the thesis. Multistatic, squinted, and spotlight modes will be discussed where they add necessary practical context. In this thesis we will focus on some of the system design parameters that affect image output. The range sphere and Doppler cone equations [62] are used with georeferenced SAR images to position aircraft, also using onboard pointing parameters.


r = |R̄|    (1)

(λ f_DC r) / 2 = V̄ · R̄ = |V̄| r sin ω    (2)

The set of scatterers satisfying these equations, limited by the antenna beamwidth, is called the locus of a SAR pixel.

In imaging a flat plane, the isorange lines are a set of concentric circles, the flat plane intersecting spheres of different radii. The isoDoppler lines are a set of coaxial hyperbolas, along the nadir track of the trajectory, where the flat plane intersects the Doppler cone for a given angle. For no squint the Doppler cone degenerates to a flat plane orthogonal to the direction of platform motion.
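As a quick numerical illustration of equations (1) and (2), the following sketch evaluates slant range and Doppler centroid over a flat ground plane; the altitude, velocity, and wavelength values are illustrative assumptions:

```python
import numpy as np

p_s = np.array([0.0, 0.0, 3000.0])   # platform position [m] (assumed)
v = np.array([100.0, 0.0, 0.0])      # velocity vector [m/s] (assumed)
lam = 0.03                           # X-band wavelength [m] (assumed)

x, y = np.meshgrid(np.linspace(1.0, 8000.0, 400),
                   np.linspace(-4000.0, 4000.0, 400))
R = np.stack([x - p_s[0], y - p_s[1], -p_s[2] * np.ones_like(x)], axis=-1)
r = np.linalg.norm(R, axis=-1)       # eq. (1): isorange = concentric circles
f_dc = 2.0 * (R @ v) / (lam * r)     # eq. (2): isoDoppler = coaxial hyperbolas
# A pixel's locus is where both r and f_dc are constant; contouring both
# fields shows the circles and hyperbolas intersecting.
```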

3.1.1 Resolution Cell

A definition of the resolution cell is a good starting point for understanding any imaging system. The range resolution is typical of radar systems, whereas the azimuth, or cross-range, resolution is unique to SAR. Some derivations of SAR image parameters can be found in [1].

The slant range resolution from pulse bandwidth is given by

δ_r = c_0 / (2B)    (3)

Bandwidth is the start and end frequency difference in the case of a frequency modulated chirp, or the pulse repetition interval in the case of pulses. The azimuthal, or cross-range, resolution for a synthetic aperture is

δ_az = L_real / 2    (4)

A wider beamwidth gives a longer illumination time for a scatterer on the ground, which means a longer synthetic aperture. Neither resolution parameter depends on range, which is an unintuitive theoretical result. The longest possible synthetic aperture is given by the flight velocity and the illumination time of the same scatterer, as shown in figure 4. A visualization of range-related parameters is shown in figure 5.


Figure 4: Azimuth angle sensor parameters for a SAR sensor translating to the right.
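A minimal numerical check of equations (3) and (4); the bandwidth and antenna length values below are illustrative assumptions:

```python
c0 = 3.0e8  # speed of light [m/s]

def slant_range_resolution(bandwidth_hz):
    """Equation (3): wider pulse bandwidth gives finer slant range resolution."""
    return c0 / (2.0 * bandwidth_hz)

def azimuth_resolution(real_antenna_length_m):
    """Equation (4): half the real antenna length, independent of range."""
    return real_antenna_length_m / 2.0

print(slant_range_resolution(150e6))  # 1.0 m for a 150 MHz chirp
print(azimuth_resolution(2.0))        # 1.0 m for a 2 m real aperture
```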

The swath width, or the footprint of the beam, is dependent on the antenna size in the θ direction, or θ_w. It can also be limited by Doppler bandwidth, but it is unnecessary to have a larger than necessary aperture if the SAR hardware has digital beamforming. The term depression angle is preferred here as a system parameter rather than angle of incidence, because incidence is a scene-specific scattering or shading parameter; depression is directly related to the imaging geometry.


Figure 5: Angular sensor parameters in range direction.

A visualization of the squint angle is shown in figure 6. It shows how squinted scanning modes can look ahead of, or behind, the unsquinted mode. Zero-Doppler processed spotlight images have mean ω = 0.


Figure 6: Squint angle geometry and sensor pointing visualization, translating to the right. The angle between the zero-Doppler and squinted LOS is ω.

Having defined how SAR works as a sensor, we shift focus to more qualitative descriptions of the effects of radar geometry and signal propagation. This is necessary for formulating what information or features can be seen in SAR images and what limitations and strengths this imaging technique has.

Characteristic effects of SAR are typically grouped into geometric and radiometric. Understanding these effects aids in understanding what will contribute to the feature extraction and image matching process when designing a full positioning system based on SAR, both in terms of developing radar-specific procedures and in understanding why traditional CV techniques are applicable or fail.

3.1.2 Geometric Effects

These effects are dominant in SAR due to the EM spectrum used. How objects appear in SAR depends very much on shading, illumination angle, and pose, i.e. position and orientation. These effects are also the reason why SAR images are not as straightforward to interpret: the image records the distance to the scatterer, radar being a ranging instrument, not the angle of a ray projected onto an imaging plane as in photography. Consider figure 3.


Some effects, or distortions, are presented graphically in figure 7. The figure shows how scatterers A, B, C, ..., J, evenly spaced in ground range, appear in SAR. The approximation used here is that rays from the radar source are parallel to the slant range, which is one of the image coordinates; the other is azimuth, or cross-range.

Figure 7: Geometric distortion effects due to ambiguity in swath angle. Image from [63]. Bottom scale shows slant range and ground range for comparison. I and J are missing in slant range and this area of the SAR image is shadowed.

Layover, or foldover, as seen in figure 7, means that illuminated scatterers at the same range appear in, or contribute to, the same pixel.
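A small sketch of how layover arises from the slant range mapping: for a height profile over ground range, layover occurs wherever slant range decreases while ground range increases. The altitude and hill profile below are illustrative assumptions:

```python
import numpy as np

H = 3000.0                                         # platform altitude [m] (assumed)
x = np.linspace(500.0, 5000.0, 1000)               # ground range samples [m]
h = 400.0 * np.exp(-((x - 2500.0) / 200.0) ** 2)   # a steep illustrative hill
r = np.sqrt(x**2 + (H - h) ** 2)                   # slant range to each scatterer
# Layover: slant range decreasing with increasing ground range means the
# radar-facing slope folds over the terrain in front of it.
layover = np.diff(r) < 0
print(f"{layover.sum()} of {layover.size} samples are laid over")
```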

High urban backscatter depends on the number of planar surfaces at right angles [64]. These planar scatterers form dihedral and trihedral reflectors that appear in SAR images as lines and points known as phase centers. This is because the backscattered radiation will have traveled the same range independent of where on the surfaces making up the corner reflector it hits.

Ghost persistent scatterers located "under ground" result from multiple reflections, typically more than three; the ground acts as a mirror for dihedral and trihedral reflectors in the cases of four and five reflections [65]. See also the results on courtyard simulations in [54]. The ghost scatterer position is the same regardless of view, as shown in figure 8. Consider figure 9: the real bridge backscatter is positioned closest to the sensor, the apparent position of the dihedral scattering is on the ground, and the ghost scattering is a reflection of the bridge on the water that will appear under water when applying stereo SAR reconstruction.


Figure 8: Ghost scattering from an NLOS dihedral scatterer. The apparent position of this scatterer does not change with observation parameters.

A persistent scatterer is a scatterer that persists, or appears, in different views. This is useful because such scatterers can be applied in registration as corresponding points. Man-made objects are typically rectangular in shape and thus persist in many views, enabling easier image registration in, for example, InSAR.


Figure 9: a) SAR image of a bridge from [66] and b) illustration of radar return modes from bridges: red indicating direct backscatter, blue where the bridge and water act as a dihedral reflector with the phase center indicated in blue, and green the ghost scatterer below ground.

Foreshortening, dilation, layover, and shadowing are typically introduced as distortions in radar images. The method of positioning using image contents presented in this thesis will use these effects, as they are intrinsic to the observation geometry, with the exception of shadowing, which may instead be used for other functions.

3.1.3 Radiometric Effects

This category of effects includes atmospheric distortion and micro-Doppler effects that have an impact on image interpretation. Throughout the thesis there will be regular references to the effect that shading has on image matching. Speckle is an important radiometric feature of SAR, and of other coherent imaging techniques for that matter, when analyzing images.

A consequence of being a coherent imaging technique is the added complexity and image deterioration of speckle noise. This type of noise is multiplicative in nature and thus harder to filter, and it deteriorates texture information; typically, filtering this noise results in a deterioration of resolution. There exist metrology methods utilizing speckle, but in the context of SAR it is mainly unwanted noise.

A common model for speckle noise is, as mentioned, multiplicative, and some discussion of such modelling is found in [67]. Another way of thinking about speckle is as a random walk in the complex plane, where the backscatterers within a resolution cell are coherently summed. Estimators exist for additive noise; if these are used, consider that the statistics are different for logarithmically transformed speckle [68].
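A minimal sketch of the multiplicative model under the standard fully developed speckle assumption (unit-mean, exponentially distributed single-look intensity, gamma-distributed for multilook averages); the function and its parameters are illustrative, not the specific model discussed in [67]:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_speckle(intensity, looks=1):
    """Multiply a clean intensity image by unit-mean gamma noise.

    With looks=1 the noise is exponentially distributed, matching the
    random-walk picture of coherently summed scatterers in a cell.
    """
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=intensity.shape)
    return intensity * noise

clean = np.ones((128, 128))
single_look = add_speckle(clean)          # heavy speckle
multilook = add_speckle(clean, looks=8)   # averaging reduces variance
```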

SAR is often considered an all-weather sensor, but this is not technically correct: it depends on the frequency band and on whether the system is spaceborne or airborne. Airborne SAR is limited at higher frequencies [69], whereas spaceborne applications are also limited at lower frequencies due to the ionosphere.

One interesting feature appears in Video SAR demonstrations of traffic from Sandia Laboratories.8 As vehicles accelerate and decelerate, the strong scattering from the vehicles detaches from and approaches the shadow on the road. This is because SAR measures Doppler frequency, or frequency shift.

Shading, backscatter intensity as a function of surface orientation, is an important effect which is covered more extensively in the concluding discussion on the limitations of the proposed method of navigation and on where to shift future efforts. In fact, shading models can be used to estimate terrain using a smoothness assumption, the variable intensity over a surface, and some assumption on the backscatter model, which is typically Lambertian (covered in the section on Rendering). This method of shape estimation is called clinometry, or shape-from-shading [70].

The radiometric effects that we will look at more closely are shading and impedance. Complex impedance is implied by the simulation models having absorption. Shading is an emergent effect due to the incidence angle of illumination onto a surface. This will be explored in greater detail in section 3.2.

3.2 Rendering

The image simulation part of this thesis deals with rendering radar images as they may appear from some pose in an airspace. The rendering technique used by RaySAR is ray tracing, which is enough for simulating geometric features in a scene. Some representation of radiometry is additionally required for this thesis to make use of GIS data or textures.

Ray tracers follow rays projected from a source as an approximate illumination model. The main addition to POV-Ray developed for RaySAR is that the distance traveled by the rays forms the image rather than their direction, which is the same conceptual difference as between SAR and photography, as seen in figure 3. The purpose of this rendering section is to introduce some concepts that are relevant to this thesis; in particular, the shading equations are an important theoretical preamble.

The render equation, or shading equation, is what the BRDF is called in graphics computing [71]. The ray tracer that we will be using is built on a modified sum of ideal specular and Lambertian diffuse scattering. The modification is that the specular scatter is spread in a cone rather than a delta function [50]. These cone-shaped specular highlights are dependent on a surface roughness factor.

8http://www.sandia.gov/radar/video/ Visited 16/03/17



Gouraud shading, Phong shading, and rasterization are examples of different rendering algorithms approximating the render equation [71]. These have different procedures for treating graphics computing problems like z-buffering and interpolation. We will limit the discussion to reflection models. The main difference between ray tracing and rasterization approaches, as expressed in [50], is that ray tracing does well at representing multiple scattering in SAR, whereas rasterization is faster in purely diffusive scattering cases, as in most natural environments.

Ray tracers that handle refracting and volume scattering media exist [71], and this functionality is included in POV-Ray, and by extension RaySAR [50]. This may be of interest in the future for evaluating atmospheric and foliage effects, though physical simulators could be more important, so as not to limit such simulation work to emergent effects while disregarding their causes.

[71] also covers many radiative transfer processes: diffuse surface and volume scattering, translucency, multiple scattering due to reflections, and refraction. More advanced radiative transfer models in graphics rendering, such as physics-based rendering and diffraction shading, may be used in the scientific evaluation of simulated SAR. The application of textures and GIS data to a terrain is also a graphics problem.

This section contains generic rendering information, highlighted to give a broader picture of how much a radar image simulator can benefit from developments in computer graphics beyond ray tracing elevation data. Another point is that graphics computing is not an obstacle.

3.2.1 Scattering Models

We will primarily discuss the rendering procedure using a simpler shading equation; other models are described to check whether important features are neglected. We will focus on the models and methods necessary for the analysis and implementation issues of this thesis.

The BRDF is defined as

BRDF = Radiance / Irradiance    (5)

i.e. the reflected radiance per unit incident irradiance.

The backscatter is R(φ_inc, φ_ref) when the incidence and reflection angles are equal. For monostatic SAR, the case that is the focus of this thesis, only the backscatter case of the BRDF is relevant; bidirectional scattering would be important when working with bistatic SAR.

The importance of proper reflection models depends on having higher quality DTMs. Microwave backscattering is strongly dependent on the geometry of the observed terrain [64], so the application of advanced reflection models is unnecessary as long as the geometric models used are poor [49].

• Simple specular, or mirror, reflection model, also called the ideal specular reflector [50] [64]. The amplitude reflection coefficient for Fresnel reflection comes from the impedance of the dielectric media at a plane boundary.

• Simple diffuse scattering model, also called the ideal diffuse reflector [71] [64] [50]. For any angle of incidence the radiance is isotropic. Irradiance, the incident intensity onto a surface, follows the cosine law for Lambertian scattering. This cosine law determines the angle of incidence for an emergent radiance given a surface reflectivity, which is used in shape-from-shading, and it gives the incidence dependence of Lambertian backscattering.


Other electromagnetic scattering models include the small perturbation method, Bragg scattering, and the Kirchhoff approximation [64]. The sum of diffuse and specular scattering, varying the amplitudes of these scattering mechanisms, represents most radiometric radar-specific features [50]. These scattering mechanisms have other names in graphics computing, such as diffraction shaders and microfacet models [71].
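As a sketch of the diffuse-plus-specular combination described above: a Lambertian cosine term plus a cone-shaped specular lobe whose width is set by a roughness factor. The coefficients and the exact lobe shape are illustrative assumptions, not the model implemented in [50]:

```python
import numpy as np

def backscatter(inc_angle_rad, k_d=0.7, k_s=0.3, roughness=0.2):
    """Toy monostatic backscatter as a function of incidence angle.

    The diffuse term follows the Lambertian cosine law; the specular term
    peaks at normal incidence and falls off within a cone whose width is
    controlled by the roughness factor, instead of an ideal delta function.
    """
    diffuse = k_d * np.cos(inc_angle_rad)
    specular = k_s * np.exp(-(inc_angle_rad / roughness) ** 2)
    return diffuse + specular
```

Varying k_d and k_s per surface class is, in spirit, what the terrain classification described later in this thesis enables.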

3.3 Stereoscopic Radargrammetry

Photogrammetry and CV are not the same but are mathematically equivalent. Radargrammetry is not itself the type of algorithm for which a sensor model is developed, but it is a simpler way of highlighting some of the complexities of SAR in terms of positioning using image contents beyond just the geolocation of the image. This theory is intended to make the introduction to epipolar geometry for SAR smoother.

A method of estimating the position of point scatterers is presented in [72] using a same-side configuration. Another effort in stereo SAR is [73], where the novel idea presented is the use of bundle adjustment instead of GCPs for dense image correspondence. The idea of bundle adjustment is to minimize a distance function, representing reprojection error, between image points and coordinates in 3D space using a sensor model for projection [74]. Reprojection error is only used on visible features, which is why a visibility function is required in the error function. Visibility is different for SAR than for optical sensors, as shadowing and occlusion cover the same area and layover means features at different positions in 3D space may inhabit the same pixel.

Rectangular building extraction in SAR images, as in [75] and for use in online damage assessment [76], works on the assumption that parallel lines are preserved by the SAR imaging geometry.

Crossing flight path configurations for stereoscopic radargrammetry have been described as potentially useful for elevation extraction [77]. Practical work in stereoradargrammetry, for example [78], and simulation work, such as [79] investigating optimal trajectory parameters as a function of surface roughness, have generally been limited to parallel configurations from the same side.

Stereoscopic methods aim to create a sense of depth in overlapping images from different views, in stereophotogrammetry as well as stereoradargrammetry. This also requires an image registration, but not with the same requirements as for georeferencing or multisensor registration. In stereoscopy the demand put on image registration is that only parallax due to depth remains; in the case of SAR this is ground range or elevation. One way of doing this is to select feature points that are in the same plane and use a homographic transform; see the sketch below. Subimage correlation, or other dense matching done after registration, measures the parallax. A dense registration would defeat the purpose of stereoscopy.
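A sketch of the plane-based registration step, assuming matched feature points known to lie in a common ground plane; the function name and RANSAC threshold are illustrative:

```python
import cv2
import numpy as np

def register_on_ground_plane(img_a, img_b, pts_a, pts_b):
    """Warp img_b onto img_a using coplanar point correspondences.

    pts_a, pts_b: Nx2 float arrays of matched image coordinates.
    After warping, the residual offsets are the parallax to be measured
    by subimage correlation or other dense matching.
    """
    H, _ = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 3.0)
    h, w = img_a.shape[:2]
    return cv2.warpPerspective(img_b, H, (w, h))
```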

3.3.1 Parallax

Here follows a clarification of why image registration is required in stereoscopy. Photography captures azimuthal and elevation angles, not range. Radar images capture range and azimuthal direction, not swath angle. The effect of parallax in images is a result of information missing from a 3D scene when forming 2D images. Parallax in photography is an effect of depth, whereas parallax in radar images is an effect of elevation, or more precisely of position on the circular segment described earlier in section 3.1.

Images must be registered such that there is no Y parallax interfering with the stereoscopic evaluation. This type of parallax is induced by imaging from a different position, as can be seen in the differences between simulated figures from different positions. This parallax is more a measure of the baseline, the distance between platform positions in the stereo channels, than a measure of the terrain.

In the case of SAR, a different heading and altitude will produce a rotation of the image and a scaling in the slant range direction, respectively. The parallax that we want to measure in stereoscopy is called X parallax, or absolute stereoscopic parallax.

3.3.2 Parallel Heading Configuration

Parallel trajectories have equal heading. Parallax will then lie along parallel lines, though position varies nonlinearly with height due to the circular Doppler geometry. This is approximated as linear using the parallel ray approximation used throughout the thesis. Figure 10 shows the circular and linearly approximated sensor models used for parallel heading configuration stereoscopy.


Figure 10: SAR observation geometries from different altitudes. a) Contributions to a pixel in a SAR image come from a circle segment. b) Coregistration of parallel track stereo under the parallel rays assumption gives altitude by image correlation.
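To connect figure 10b to numbers: under the flat-terrain, parallel-ray approximation, the measured X parallax between two same-side ground-range images relates to height through the look angles. The following is a back-of-envelope sketch of that standard first-order relation, not the exact model used in [78]:

```python
# Height from X parallax for same-side, parallel-heading ground-range stereo,
# under the flat-earth/parallel-ray approximation. theta1, theta2 are look
# (incidence) angles measured from vertical for the two images.
import math

def height_from_parallax(parallax_m, theta1_deg, theta2_deg):
    c1 = 1.0 / math.tan(math.radians(theta1_deg))
    c2 = 1.0 / math.tan(math.radians(theta2_deg))
    return parallax_m / (c1 - c2)

print(height_from_parallax(5.0, 35.0, 50.0))  # ~8.5 m for a 5 m parallax
```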

References to trajectory configurations in radargrammetry are limited to parallel headings. [80] presents range-Doppler models for arbitrary headings, but it is stated in later articles that this approach does not improve dense reconstruction.

3.3.3 Arbitrary Heading Configuration

It is stated in [81] that a detailed feasibility study and rigorous description do not yet exist for epipolar lines or curves and their use in processing SAR images. Even when considering the circular Doppler geometry, the scatterer contributions to a pixel from equal headings lie on parallel lines. Position on these lines behaves nonlinearly, which can be corrected for by simply considering the curvature of the circles at the range value of the pixels investigated. An arbitrary configuration, varying the heading of either aircraft, is not straightforward in rigorous radargrammetric processing.
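As a back-of-envelope indication of the size of this curvature correction (an illustration added here, not taken from [81]): a scatterer displaced a distance $\Delta$ along an iso-range circle of radius $r$ deviates from the tangent line by approximately

$$\delta \approx r\left(1 - \cos\frac{\Delta}{r}\right) \approx \frac{\Delta^{2}}{2r},$$

which for $\Delta = 100$ m at a slant range of $r = 10$ km is only about $0.5$ m.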

The arbitrary heading case described in [77] presents a stereoscopic procedure where the trajectories intersect, meaning the trajectories must be at the same height, such that the image registration process is a rotation by the angle of intersection. The authors of [82] present a radargrammetry-based method for geolocation of point scatterers with differences in both heading and depression angle.


Figure 11: SAR observation geometries in 3D. a) Circular segment of scatterers contributing to one pixel seen from two different headings. b) Linearized observation model, from two headings, where the scatterers contributing to a pixel lie on a projective line normal to the slant range plane.

The parallax in the SAR images depends only on the heading and depression angles. Using the parallel ray approximation, any image taken from a position along the LOS will be the same image. Heading means the direction of flight in the x-y, or ground, plane, and depression angle is the angle at which the antenna points relative to the horizon. The sensor model that may be used with traditional CV algorithms is shown in figure 11.
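A small sketch of this parallel-ray viewing geometry follows; the axis conventions and the choice of a right-looking antenna are illustrative assumptions, not fixed by the thesis:

```python
# Flight (azimuth) axis and broadside, zero-squint LOS from heading psi and
# depression angle beta, in an x-y (ground) plane plus z-up frame.
import numpy as np

def flight_and_los(heading_deg, depression_deg):
    psi = np.radians(heading_deg)
    beta = np.radians(depression_deg)
    flight = np.array([np.sin(psi), np.cos(psi), 0.0])   # direction of flight
    right = np.array([np.cos(psi), -np.sin(psi), 0.0])   # 90 deg right of track
    los = np.cos(beta) * right - np.sin(beta) * np.array([0.0, 0.0, 1.0])
    return flight, los

flight, los = flight_and_los(heading_deg=90.0, depression_deg=30.0)
print(flight, los)
```

Under the parallel ray approximation, translating the platform along `los` leaves the image unchanged, which is exactly the depth ambiguity the model shares with orthographic cameras.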

The use of slant range normal projection is also presented in [83]. This type of sensor model has been described before, and simplifications have been used in stereoscopic radargrammetry, though rigorous use of affine projection CV algorithms in SAR is, to the author's knowledge, still unpublished.

3.4 Affine Structure in SAR

Here we build a sensor model that can be used for pose estimation and scene reconstruction with arbitrary headings and without the need for image registration, by using the theory of epipolar geometry and affine projections and applying what we know about SAR observation geometry.

3.4.1 Epipolar Geometry

A thorough introduction to epipolar geometry is beyond the scope of this thesis. The intention is to motivate approximations in SAR rather than give a complete treatment of the subject. Just enough to apply certain algorithms and analyze results and errors will be covered, with references to sources that expand on the topics mentioned.

Epipolar geometry is typically introduced using perspective projection models, figure 12. A point in an image reprojected out into space forms a line of projection along which any scatterer contributes to the same point in the image. This line, projected onto an image in a different view, produces the epipolar line given a point in the first image. Figures for orthographic projection are included in figure 13. This illustrates the added ambiguity of depth due to the parallel rays. [84]


Figure 12: Epipolar geometry for perspective views. Figure from [85]

Figure 13: Epipolar geometry for orthographic views. Figure from [85]

Mathematically, the relationship between corresponding points in epipolar geometry is described by the fundamental matrix. The problem of estimating pose is reduced to estimating the fundamental matrix, for example using the 8-point algorithm. However, the fundamental matrix elements are not directly related to aircraft states, which is why an estimated fundamental matrix is decomposed, or factorized, into pose parameters [86] [74] [87]. Points on a flat plane projected onto an imaging plane from different views are related by a homography transformation.

If a correspondence is found between a pixel in one image and a position on the projected reprojection, or epipolar line, in another image, we have solved for the 3D coordinate, given that we know the poses from which both images were taken. The dual problem is that we know the structure of the scene and can solve for the poses.

Derivations of affine epipolar geometry include many variable substitutions, which distract from the ideas that this section aims to clarify. The calculations leading to the affine fundamental matrix are presented in [85].
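As a concrete illustration of the linear estimation step (a least-squares sketch, not a robust implementation; see [85] for the full derivation), the affine fundamental matrix can be fitted by a hyperplane fit to the correspondences:

```python
# Least-squares estimate of the affine fundamental matrix from point
# correspondences, using the linear relation a x' + b y' + c x + d y + e = 0.
import numpy as np

def affine_fundamental(pts1, pts2):
    # pts1 (x, y) in image 1, pts2 (x', y') in image 2; N >= 4 correspondences.
    A = np.hstack([pts2, pts1])             # columns: x', y', x, y
    centroid = A.mean(axis=0)
    _, _, Vt = np.linalg.svd(A - centroid)  # hyperplane fit through centroid
    a, b, c, d = Vt[-1]                     # normal of best-fit hyperplane
    e = -Vt[-1] @ centroid
    return np.array([[0, 0, a],
                     [0, 0, b],
                     [c, d, e]])            # satisfies x2^T F x1 = 0
```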

Once a fundamental matrix estimate has been made, the matrix needs to be factorized into pose estimates, for example using algorithms based on [88].

3.4.2 Affine Projective Algebra

The discussion of algebraic projective geometry will be limited to the affine case. Orthographic, weak perspective, and paraperspective cameras can be modeled by the same affine projection model. Weak perspective and paraperspective cameras are orthographic approximations of narrow-FOV perspective cameras, meaning that objects that are small or far away are projected into a perspective camera along nearly parallel rays. [85]

Decomposition into intrinsic, projection, and extrinsic matrices for affine projection is

$$P_{\mathrm{Aff}} = C P_k G \qquad (6)$$

where, with the elements written out,

$$P_{\mathrm{Aff}} = \begin{bmatrix} \delta & 0 & u_0 \\ 0 & \delta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & \mathbf{v}_0 \\ \bar{0} & 1 \end{bmatrix} \qquad (7)$$

The intrinsic parameters in C are camera specific. For perspective cameras the focal length is also taken into account, but for parallel rays it is not considered. The extrinsic parameters in G are the pose parameters: rotation and translation. The image resolution δ does not necessarily have to be the same for the image coordinates u and v.
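A small numeric sketch of equations (6)-(7) with illustrative placeholder values (identity pose, arbitrary resolution):

```python
# Numeric sketch of P_Aff = C * P_k * G from equations (6)-(7).
import numpy as np

delta, u0, v0 = 0.5, 256.0, 256.0            # pixel scale, principal point
C = np.array([[delta, 0.0, u0],
              [0.0, delta, v0],
              [0.0,   0.0, 1.0]])            # intrinsics
Pk = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1]], dtype=float)   # affine projection: drops depth
R, v = np.eye(3), np.zeros(3)                # extrinsics: rotation, translation
G = np.vstack([np.hstack([R, v[:, None]]), [0.0, 0.0, 0.0, 1.0]])

P_aff = C @ Pk @ G                           # 3x4 affine camera matrix
x = P_aff @ np.array([10.0, 20.0, 5.0, 1.0]) # homogeneous scene point
print(x)                                     # (u, v, 1); last entry always 1
```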

With this basis for affine epipolar geometry in place, a linearized sensor model for SAR may reuse algorithms developed for the affine cases in photography.

3.4.3 SAR Sensor Model

Using the concept for a SAR sensor model seen in figure 11 and the earlier efforts in epipolar geometry, the conclusion one may draw is that the affine fundamental matrix can be applied as a constraint when positioning SAR sensors. A camera decomposition for SAR would be defined as

$$P_{\mathrm{SAR}} = C_{\mathrm{SAR}} P_k G. \qquad (8)$$

The matrices $C_{\mathrm{SAR}}$ and $P_k$ are conceptually straightforward: the coordinate space in SAR images depends on the image size and on the resolution in the azimuth and slant range directions, and the projection model is a consequence of the parallel ray approximation. $v_0$ depends on the number of pulses and $u_0$ on the Doppler bandwidth (footprint). Spot mode is different in $v_0$, and squint is not considered. This is covered in section 3.1.

$$C_{\mathrm{SAR}} = \begin{bmatrix} \delta_r & 0 & u_0 \\ 0 & \delta_{az} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (9)$$

How radar image content appears analogous to cameras is illustrated in figure 14. Objects appear to fold over in the direction of the sensor. We are looking at objects illuminated from the side, and occlusion is the same as shadowing in SAR. Another way of interpreting this point of view is that objects that are folded over appear transparent. This will not affect the CV algorithms, as they depend on scatterer position in space given a coherent projection model. The matrix G describes the pose of the virtual orthographic camera in the SAR sensor model.


Figure 14: Nominal Trajectory, Slanted Range Plane, and Virtual Orthographic Camera. LOS as dotted lines for both Trajectory and Virtual Camera.

This is an affine projection model that approximates SAR and is consistent with previous efforts in CV, which we can use for reference or build on.


4 Model Preparation

A point cloud height map will be used to generate surfaces for rendering orthographic projections and simulated SAR images. Vector data will be used in a constrained surface generation process on the point cloud, for use with different reflection models in rendering. Polygon vector data represent different areas of vegetation or other environmental scene classifications. Roads also contribute salient scattering mechanisms and are represented by linear vector data. The SAR simulator was developed during a PhD thesis at DLR [50].

As is demonstrated in [89], GIS can be utilized to simulate different scene elements with different radiometric properties. We could use the laser data to input building models into our DEM. Furthermore, we could use GIS data, vector data of classifications and the orthographic photography, to input 3D models of trees in areas without a dense forest canopy. Those researchers apply GIS data differently, using another simulator tool for rendering.
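One common way to turn classification polygons into a per-cell class map aligned with the 2x2 m elevation grid is to rasterize them; the sketch below assumes GeoJSON-like geometries and uses rasterio as one possible tool, not necessarily the one used in this work, and the class codes are illustrative:

```python
# Rasterize classification polygons onto the elevation grid so each terrain
# cell can later be assigned a radar scattering model.
from rasterio import features
from rasterio.transform import from_origin

CLASSES = {"water": 1, "forest": 2, "road": 3}   # illustrative class codes

def rasterize_classes(geoms_by_class, west, north, shape, cell=2.0):
    # west, north: upper-left corner of the grid; cell: 2 m resolution.
    transform = from_origin(west, north, cell, cell)
    shapes = [(geom, CLASSES[name])
              for name, geoms in geoms_by_class.items()
              for geom in geoms]
    return features.rasterize(shapes, out_shape=shape, transform=transform,
                              fill=0)  # 0 = unclassified ground
```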

In a cooperative effort with other actors working in the field of surveying and related areas, Lantmäteriet, among others, offers geodata free of charge for use in research, education, and cultural activities. The geodata acquired for this work is under a license applicable to student theses.9 Information about both the data acquisition process and the data representation can be found in the course compendium [90].

4.1 Lantmäteriet Dataset

The geodata used in this thesis for preparing a simulation model are

• 2x2 m resolution elevation maps

• Polygon data of vegetation classification

• Road networks from open database

The elevation map is the terrain over which we want to simulate an image. The polygon data and road network are vector data to be used in classifying different parts of the terrain into radar scatterers. Figure 15 presents the entire area over which terrain and polygon data have been ordered.

9 https://www.lantmateriet.se/sv/Om-Lantmateriet/Samverkan-med-andra/Forskning-utbildning-och-kulturverksamhet/



Figure 15: Full area for which we have datasets of vegetation classes, road network, and, seen in this figure, a) orthophoto and b) height map.

Orthophoto, orthorectified aerial photography, can be used to check, or correct, the application of vector data to the terrain. Figure 16 presents the height map and orthophoto over the area most used in the later simulation experiments. Note the three bodies of water to the left and the road to the right in figure 16a.

Ortofoto and "Öppet Vatten" Polygons

6.422 6.423 6.424 6.425 6.426 6.427 E-coordinate

×105 7.3037

7.3038 7.3039 7.304 7.3041 7.3042 7.3043

N-coordinate

×106

a) b)

Figure 16: a) Orthophoto with "Öppet Vatten" (open water) polygon vector data and b) height map over the same area.

The vector data categories are quite extensive, so the curious reader is referred to Lantmäteriet's documentation for GIS users. Here we will use approximations to the data set where water is strongly specular, forests are strongly diffuse, and roads are specular. Besides the terrain classifications, linear vector data, instead of polygons, are used to represent networks of roads. The roads seen in figure 15a are presented in figure 17.

The bounding box is the geographic boundary of figure 15.
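The class-to-scattering approximations above can be summarised as a simple lookup table for the simulation step; the parameter names and values below are illustrative placeholders, not the simulator's actual interface:

```python
# Terrain class -> simple scattering behaviour, per the approximations above.
SCATTERING = {
    "water":  {"model": "specular", "reflectivity": 0.9},  # strongly specular
    "forest": {"model": "diffuse",  "reflectivity": 0.6},  # strongly diffuse
    "road":   {"model": "specular", "reflectivity": 0.7},  # specular
    "ground": {"model": "diffuse",  "reflectivity": 0.3},  # default class
}

def material_for(class_name):
    """Fall back to plain ground for unclassified cells."""
    return SCATTERING.get(class_name, SCATTERING["ground"])
```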
