
Image simulations of the mission Martian Moons eXploration

Romain Domagala

KTH Royal Institute of Technology, School of Engineering Sciences, Stockholm, Sweden
Centre National d'Etudes Spatiales, Toulouse, France
2020

Abstract—Martian Moons eXploration is a JAXA-led mission aiming at exploring the moons of Mars, Phobos and Deimos. The hyperspectral infrared camera provided by France should perform global mapping of Phobos at high resolution in order to characterize surface composition and thereby help determine the origin of the moons. Based on 3D scenes and radiance modeling, a physical simulator is implemented to render images without aliasing. Satellite position and attitude on quasi-static orbits, as well as solar ephemeris and expected Phobos surface properties, are given as inputs. By combining geometry and radiometry simulations, hyperspectral cubes representative of the acquisitions from a matrix sensor are computed. Future prospects include the adaptation of the simulator to different instrument concepts and to Earth-observation contexts.

Sammanfattning—Martian Moons eXploration är ett JAXA-lett rymduppdrag som syftar till att utforska Mars två månar, Phobos och Deimos. En hyperspektral infraröd kamera utvecklad i Frankrike ska utföra en högupplöst global kartläggning av Phobos som senare ska kunna hjälpa till att analysera månens ytegenskaper i försök att fastställa dess ursprung. Baserat på 3D-bilder och strålningsmodellering kommer en fysiksimulator att rendera bilder utan vikningsdistorsioner. Indata till simulatorn är satellitpositionen, styrning av kvasistatiska omloppsbanor, bandata från solen samt Phobos ytegenskaper. Hyperspektrala kuber kommer att beräknas baserat på en kombination av geometri och radiometri. Dessa representerar den uppsamlade datan från optiska matrissensorer. Framtida möjligheter och studier inkluderar bland annat anpassningen av simulatorn till olika vetenskapliga instrument samt fjärranalys av jorden.

NOMENCLATURE

Acronyms

AOTF   Acousto-optic tunable filter
CESBIO Centre d'Etudes Spatiales de la BIOsphère
CNES   Centre National d'Etudes Spatiales
DART   Discrete Anisotropic Radiative Transfer
DEM    Digital Elevation Model
DLR    Deutsches Zentrum für Luft- und Raumfahrt
DTM    Digital Terrain Model
JAXA   Japan Aerospace Exploration Agency
MMX    Martian Moons eXploration
NIR    Near-Infrared
QSO    Quasi-static orbit

Symbols

ρ  Reflectance
E  Irradiance
L  Radiance
M  Exitance

I. INTRODUCTION

A. Martian Moons eXploration

Martian Moons eXploration (MMX) is a Japan Aerospace Exploration Agency (JAXA) mission aiming at exploring the moons of Mars, Phobos and Deimos. The main objective is to map Phobos, land a probe on its surface and bring back samples to Earth. In addition, flybys of Deimos and atmospheric studies of Mars shall be performed. The central scientific question to be answered is the origin of the moons.

Do they stem from giant impacts or asteroid capture by Mars?

Different chemical components would be present on their surfaces depending on the answer. In addition, this mission will further our understanding of how sensitive planetary evolution is to the dynamics of the solar system.

Phobos is the larger of the two moons and the main object of interest. It has an oblate shape with dimensions of about 26.8 × 22.4 × 18.4 km, which makes it a small body compared to other moons of the solar system. A textured model, obtained by mapping the highest-resolution map available (see Appendix IX-A) onto a 3D mesh, allows one to appreciate its shape and defining features in Figure 1. It orbits closer to Mars than Deimos, with a semi-major axis of 9376 km (about 6000 km from the surface of Mars), and rotates synchronously, meaning that the same side is always visible from the surface of the red planet.

In 1989, the Phobos mission acquired, for the first time, resolved spectra of the moon in the 0.33-3.1 µm range, covering the UV, visible and near-infrared (NIR) domains with a sub-kilometer footprint. Murchie et al. [1] showed that the surface is made of two spectrally distinct units: a "red unit" covering most of the surface, in particular the crater floors, and a "blue unit" constituting the ejecta of Stickney, Phobos' biggest crater. The two spectra, provided by scientists, are given in Appendix IX-B. Later, observations from OMEGA on Mars Express confirmed the presence of these two units at higher NIR resolution (50 m) and further motivated the interest in Phobos.

The probe, weighing 4 tons, is divided into three modules: propulsion (to bring the probe to Mars), exploration (containing the instruments) and return (to bring back the samples). The payload mainly consists of several visible-light and NIR cameras developed by Japan, a neutron and gamma-ray spectrometer from the USA, and a hyperspectral IR imager under Centre National d'Etudes Spatiales (CNES) supervision.


Figure 1: Different views of Phobos [2] [3]

This instrument is the French contribution to the payload and the object of study of this work. Its main objective is to globally map Phobos at a resolution under 20 m, ten times better than the currently available resolution. It aims at detecting oxides, silicates, minerals and possibly organic compounds on the surface. Moreover, the ambition is to discover a more diverse surface mineralogy than the two known units mentioned earlier.

The mapping is also important to optimize the choice of the landing site. In addition, the instrument will study the Martian climate and perform observations of Deimos. CNES also supports JAXA's flight dynamics unit and is developing, in collaboration with Deutsches Zentrum für Luft- und Raumfahrt (DLR), the rover that will land on Phobos.

Regarding mission planning, the satellite is to be launched in September 2023, perform several observation orbits, land the rover in September 2026, collect samples and leave the vicinity of Mars in August 2028, with an Earth return in July 2029. The trajectory and the different phases are illustrated in Figure 2.

Figure 2: MMX Mission outline [2]

B. Instrument concept and specifications

The three main types of sensors used in satellite imagery are presented in Figure 3. For scanners, a detector scans the image lines through rotation of the entire satellite or of a mirror; the columns are acquired either by the motion of the satellite or, again, by the rotation of a mirror. Push-broom sensors acquire each line simultaneously with a row of aligned detectors, while the columns are gathered by the satellite's along-track motion. Finally, matrix sensors rely on a photography-like acquisition with one detector per pixel.

When this work started, the French hyperspectral instrument was called MacrOmega and was based on a matrix sensor of 256×256 pixels. During the course of the work, the project moved to a push-broom detector built by another institute. It was nevertheless decided to keep working with a matrix sensor and to make the simulator as adaptable as possible to a push-broom geometry. The final products will thus not exactly reflect the true acquisitions of the new instrument but remain useful to the mission and to CNES, as explained later. From now on, the report refers to MacrOmega as the instrument simulated.

Figure 3: Sensor types: a) Scanner b) Push-broom c) Matrix [2]

While Mars is now quite well known, Phobos and Deimos can be full of surprises, as the Rosetta and Hayabusa-2 missions have shown. Since it is not known in advance what will be observed, the instruments, MacrOmega included, require a lot of flexibility. MacrOmega was thus meant to collect hyperspectral data in the form of three-dimensional data cubes with spatial dimensions x, y and wavelength λ, as seen in Figure 4.

The goal is to produce a global surface composition map of Phobos and to select the sample collection site. To do so, MacrOmega's optical system was based on an acousto-optic tunable filter (AOTF). From [4], "the principle behind the operation of acousto-optic tunable filters is based on the wavelength of the diffracted light being dependent on [an] acoustic frequency. By tuning the frequency of the acoustic wave, the desired wavelength of the optical wave can be diffracted acousto-optically". This device provides efficiency and flexibility, hence this choice.


Figure 4: Hyperspectral data cube [2]

The pointing system was also flexible thanks to a scanner compensating the relative motion of the platform. Overall, the specifications of the instrument were the following: resolution under 20 m at 25 km altitude, global mapping achievable in under a month, tight radiometric requirements (signal-to-noise ratio above 100), and 300 wavelengths between 0.95 and 3.6 µm with a spectral sampling under 10 nm.

In order to achieve its objectives, the probe will follow different types of quasi-static orbits (QSO) around Phobos. Because of the moon's limited sphere of influence, it is not possible to orbit it directly. Orbital dynamics experts have therefore chosen to take advantage of Mars and use it to perform innovative motions around Phobos. These orbits will vary in altitude (from 189 km down to 5.3 km from the center of Phobos; more on the orbital dynamics later) in order to achieve global mapping at high resolution (relative to previous missions). A typical MMX orbit around Phobos is illustrated in Figure 5.

Figure 5: Quasi-static 2D orbit around Phobos [2]

These considerations have a direct influence on MacrOmega because one image (i.e. one wavelength) requires 1 s of integration to reach a sufficient signal-to-noise ratio. This means that a whole cube (300 wavelengths) takes around 6 min, during which the satellite continues along its orbit. In this case, image quality is first about limiting relative motion and jitter during the capture of each wavelength, and then about maximising the area common to successive wavelengths. This is where co-registration comes into play: the images must be spatially aligned, with features overlapping as well as possible. The issue per se is not that the images do not overlap, but rather that without an accurate geometric model of the surface (as is the case for Phobos), these changes in point of view between wavelengths make co-localisation of the pixels more difficult. For an average case, an orbit between 17 and 38 km from the center of Phobos and a velocity around 5 m/s, the offset is about 0.6 px/s.
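As an order-of-magnitude illustration of this drift, a minimal sketch, assuming an instantaneous field of view derived from the "20 m at 25 km" specification (the 0.6 px/s figure above follows from the actual orbit and attitude geometry, which this toy model ignores):

```python
# Order-of-magnitude drift estimate between successive wavelengths.
# IFOV is assumed from the "20 m resolution at 25 km altitude" specification;
# the 0.6 px/s quoted in the text comes from the actual mission geometry.
IFOV = 20.0 / 25_000.0                 # rad per pixel (~0.8 mrad), assumed

def drift_px_per_s(altitude_m, velocity_m_s):
    gsd = IFOV * altitude_m            # ground sampling distance (m per px)
    return velocity_m_s / gsd          # apparent drift (px per s)

print(drift_px_per_s(25_000.0, 5.0))   # ~0.25 px/s for this toy model
```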

C. Image quality and purpose of the thesis

The image quality unit at CNES has a 30-year legacy of high-resolution Earth observation with technological breakthroughs (e.g. the Pléiades satellites) and operates for civil, defense and dual-use applications. It collaborates with and brings expertise to international missions such as Venµs, Trishna and Euclid. The three areas of interest regarding image quality are geometry, resolution and radiometry. Geometry relates to stability and localisation, so it depends mainly on position, velocity and attitude. Resolution concerns detail sharpness and is driven by optics, distortion, detection and the observed landscapes. Radiometry concerns homogeneity and color fidelity, depending on irradiance and atmospheric composition.

The image quality unit is responsible for guaranteeing that the image product meets user requirements. Two types of tools support this responsibility: specifications or budgets concerning quality criteria (radiometric noise, calibration precision, image localization, etc.) and representative image simulations. The latter are needed because user satisfaction is not always guaranteed by a collection of specifications; simulations constitute a natural discussion tool to translate requirements into measurable criteria. This responsibility operates at system level (onboard and ground) during all stages of a project: definition and refinement of requirements, development tracking and in-flight follow-up. The objectives of image simulation are to present future end users with representative images and to provide data for designing the system. CNES possesses renowned expertise in Earth-observation image simulations and physical phenomena modeling. However, the MMX mission introduces challenges that differ greatly from these contexts.

Initially, the purpose of the simulations, in addition to providing CNES' image quality unit with an innovative tool to be reused in other contexts, was to let scientists assess the geometric performance of the instrument. The idea for MacrOmega's provider was to evaluate fine co-registration (how pictures overlap) of the successive images through the co-localization of each pixel with an accuracy of 0.5 px (the position of every pixel in the image is known to within ±0.5 px). These co-registrations rested on image texture, which explains why the 3D scenes had to be representative of the moon. The analysis was to be based on the satellite orbit and attitude over time, and gave the project a geometric aspect, since the simulations were supposed to provide, in addition to the pictures themselves, the precise position on Phobos of each pixel. These localization assessments were supposed to frame the thesis within the scope of image geometry.


Final product performance could then have been determined as a function of different parameters: quality of attitude/orbit/instrument, topography, time, latitude-longitude on Phobos, shadows, etc.

However, as mentioned earlier, the change of instrument has modified the situation, and the simulations are no longer intended for such usage (in the sense that the algorithms are no longer available to CNES and the images cannot be tested the same way). They can nevertheless help the other two image quality areas. Regarding radiometry, the radiative budgets and the final radiance product should be representative of real acquisitions; as the new instrument is still being designed, they could provide pertinent values to strengthen specifications. As for resolution, product-analysis algorithms could be used to try to detect the different materials in the final images. Once again, CNES does not possess such algorithms, but should scientists want to conduct such a study, it would be possible with the simulations. Another potential usage of the simulations is compression, which also requires representative textures in order to test algorithms.

Traditionally, simulations are based on airborne pictures with a higher resolution than that expected of the rendered product. This makes it possible to apply step-by-step processes modeling the effects of the instrument, atmosphere, satellite motion, noise, compression and more. All these operations are gathered in what is called the image chain and can be applied when an initial model is available. As MMX is the first mission intended to acquire high-resolution images of Phobos, no such model exists, so one (or rather several) must be created. Another difference with Earth-observation simulation is how contrasts in the picture are generated. On Earth, the albedos, which measure how a surface reflects radiation (on a scale from 0 to 1, 0 for a black body absorbing everything and 1 for a perfect reflector), vary greatly from one surface to another. For example, the surface of a lake has a mean albedo of 0.03 while conifer trees are closer to 0.10 (some other trees even exceed 0.15). These heterogeneous albedos are enough to generate differences in the energy received by the sensor. On Phobos, this is not the case: previous missions have reported that albedos are very even, with an average value of 0.07. What will cause contrast when observing the moon is rather the topography. Local slopes are responsible for differences in the energy directed at the sensor. These two arguments are why it is necessary to model the physical phenomena involved (shadows, thermal emissions, Mars irradiance, etc.) in order to generate contrast. The objective is to create 3D scenes representative of Phobos (in terms of topography for the slopes and of different materials for the albedo) and to model the radiative transfers occurring between the surface and the sensor. This produces rendered radiance images that play a role similar to airborne acquisitions in Earth observation.

Even though the end goals changed during the course of the thesis, the plan remains similar to what was envisioned at the beginning, as illustrated in Figure 6. Available data from scientists are used to describe an assumed reality of Phobos' surface topography, and the acquisitions are then modeled from the point of view of the instrument MacrOmega as well as from the orbital dynamics of the MMX probe, in relation with the Flight Dynamics division of CNES.

All in all, there are many challenges to face: the absence of geometric libraries for Phobos, the implementation of a brand-new simulator encompassing physical and 3D models, the interfaces with the Flight Dynamics team and other entities, the adaptation to different missions, and the generation of 3D scenes at different scales.

Figure 6: Project outline

D. Key concepts in optics

In this part, the important physical quantities and concepts used in this work are defined. First, irradiance, E, is the radiant flux received by a surface per unit area, in W m⁻². Spectral irradiance is then the irradiance of a surface per unit wavelength, typically expressed in W m⁻² µm⁻¹ for the IR domains studied here. Similarly, exitance, M, is the radiant flux emitted by a surface. Another important concept is the solid angle, illustrated in Figure 7. This angle, under which an object is seen from an observation point O, is the ratio between the area of the spherical cap cut out by the cone subtending the apparent outline of the object and the square of the radius d of this cap. Its unit is the steradian [sr].

One can then define the radiance, L, which often characterizes a source with a directional behaviour. It is defined as the radiant flux per unit surface and per unit solid angle, in W m⁻² sr⁻¹, and its expression is:

L = \frac{d^2 P}{d\omega \, dS \cos\theta}    (1)

As with irradiance, a spectral radiance is defined per unit wavelength. These two quantities can be combined to define the bidirectional reflectance, which characterizes how a surface reflects the irradiance it receives.


Figure 7: Solid angle definition [5]

Figure 8 defines the incidence and azimuthal angles, the subscripts i and r respectively denoting the source direction and the direction of reflection. With these notations, the bidirectional reflectance is given by:

\rho_\lambda(\theta_i, \phi_i, \theta_r, \phi_r) = \frac{\pi L_\lambda(\theta_r, \phi_r)}{E_\lambda(\theta_i, \phi_i) \cos\theta_i}    (2)

Figure 8: Angular notations in spherical coordinates to define bidirectional reflectance [5]

II. GENERATION OF 3D LANDSCAPES - PHOBOS FORMING

The first step of the project is to generate 3D landscapes of Phobos. The types of structures created and their composition are based on data provided by scientists, corresponding to the expected scenery at the surface. The software used here is Blender, an open-source 3D creation suite supporting the entirety of the 3D pipeline: modeling, animation, simulation, rendering, motion tracking and much more [6]. Although it is not used to its full capability here, it provides a vast range of possibilities for generating 3D scenes. In this section, the principles behind scene creation are introduced, although they may not reflect exactly what is done for a simulation (especially concerning the application on Phobos and the association of materials to textures, which became obsolete for this work but can be reused for other purposes).

A. Creation of meshes with Blender

Prior to the implementation of scientists' inputs regarding the topography of the moon, a catalogue of possible structures was created. It encompasses different types of structures, either developed using Blender's native landscape-generation tools or by directly imposing digital elevation models (DEM) on the scene. The second approach is particularly interesting if the structures observable at 100 m resolution (the highest resolution available for Phobos as of today) are expected to repeat at smaller scales. DEMs of other celestial bodies can also be used, as was done here with the Moon. Another way to generate a landscape is to apply a panchromatic image (as a "texture") that the software rebuilds into a DEM (through undocumented algorithms). Several scenes generated with these three methods are given in Figure 9.

Figure 9: Range of landscape possibilities generated with Blender, top and oriented views. a) Rugged terrain generated with Blender's ANT Landscape b) High-density crater distribution generated with Blender's ANT Landscape c) Extract from Moon DEM [7] d) Extract from Moon DEM [7] e) Extract from Phobos DEM [8] f) Applied photo of Phobos [9] g) Extract from Phobos DEM [8] h) Low-density crater distribution generated with Blender's ANT Landscape i) Rugged dunes generated with Blender's ANT Landscape
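As an illustration of the DEM-based approach, a minimal Blender Python sketch (the path, grid resolution and displacement strength are placeholders; the actual scene-building scripts are more involved):

```python
# Minimal sketch: turn a DEM raster into a Blender terrain mesh by using it
# as a displacement texture on a subdivided grid. Paths and numerical values
# are illustrative placeholders.
import bpy

# A dense grid to receive the elevation data.
bpy.ops.mesh.primitive_grid_add(x_subdivisions=512, y_subdivisions=512, size=2000)
terrain = bpy.context.active_object

# Load the DEM (e.g. a grayscale raster) as an image texture.
dem = bpy.data.textures.new("dem", type='IMAGE')
dem.image = bpy.data.images.load("/path/to/phobos_dem.tif")  # placeholder path

# Displace the grid vertices according to the DEM values.
mod = terrain.modifiers.new("relief", type='DISPLACE')
mod.texture = dem
mod.strength = 100.0  # vertical amplitude in scene units, editable
```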


Overall size, location and amplitude of the structures are editable. Several structures can also be combined, for example to obtain 5 m wide craters on a 300 m² terrain sprinkled with rugged dunes. Apart from terrain structures, one can also expect to find rocks on the surface of a celestial body, whether they originate from initial shattering, continuous impacts or surface activity. They are the prime candidates when searching for different minerals or organic components. Therefore, it was key to be able to generate a random number and size distribution of rocks to spread across the scenes.

Blender provides such a tool and realistic editable rocks can be produced, as seen in Figure 10.

Figure 10: Procedural rock generation with Blender

Once a range of rocks has been generated, the software allows them to be sprinkled over the scene. This distribution can be randomised or not, meaning that the areas receiving a certain type of rock (in terms of shape and size, and therefore of composition, see II-B) can be chosen. For example, it can be decided to place rocks "A" in a chosen crater and rocks "B" outside it to reproduce ejecta structures. The number, size scale and randomness factor of the generated objects can also be controlled. The only issue with this spreading of rocks is that it does not account for the surface rigidity: the created objects simply go through it, overlapping on the far side. To solve this, it is necessary to change the center of rotation of the rocks and align them on an axis rotated 90°, normal to the surface. After this manipulation, the rocks no longer overlap the terrain, but some may appear to "float" slightly, due to the random rotations applied during distribution. This explains the small gaps between the surface and the rocks that may be observed later; as they are relatively small, they can be neglected for the rendering simulation (they do not affect the visible facets; more on that process in III).

At this point, all the tools required to generate realistic surfaces have been introduced. Once the landscape, in terms of relief and objects in the scene, has been manually created (automating this step is possible, but only for simple structures and when a significant number of them is required, which is not relevant here), two steps remain. First, it is necessary to assign textures to each object in order to know what real material they represent; this is presented in II-B. Second, the local relief of Phobos has to be taken into account. Indeed, due to the variable size of the scenes and the different locations of the observed sites, local deformation plays a role in how the image is formed on the focal plane of the instrument. The goal is thus to apply the 3D scene onto a Phobos 3D mesh, deform the former accordingly and extract it. To do so, an algorithm and a Python script (usable in Blender) have been developed. They rely on a "lattice" object to which the scene is "stuck"; the lattice is then "wrapped" onto Phobos using Blender deformation functions. The required inputs are the targeted point and the swath dimensions. These data are provided by CNES' flight dynamics unit and used to calculate the normal to the targeted surface and its orientation. An example of the result is given in Figure 11. Although the 3D model of Phobos is not very precise, it gives enough information and accurately reproduces wide local deformations (for example, the big crater Stickney, not seen in Figure 11). The exported scene has its vertex coordinates given in the rotating Phobos-centered frame.

Figure 11: Applying a scene on Phobos. a) First location example b) Second location example
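The lattice-wrap idea can be sketched with Blender's Python API as follows (object names are placeholders, and the modifier settings of the actual script may differ):

```python
# Minimal sketch of the lattice-wrap idea: the scene follows a lattice via a
# Lattice modifier, and the lattice itself is wrapped onto the Phobos mesh
# with a Shrinkwrap modifier. Object names are placeholders.
import bpy

scene_obj = bpy.data.objects["scene"]    # the generated 2x2 km landscape
phobos = bpy.data.objects["phobos"]      # global 3D model of Phobos
lattice = bpy.data.objects["lattice"]    # lattice enclosing the scene

# "Stick" the scene to the lattice: deforming the lattice deforms the scene.
lat_mod = scene_obj.modifiers.new("follow_lattice", type='LATTICE')
lat_mod.object = lattice

# "Wrap" the lattice onto Phobos around the targeted point.
wrap = lattice.modifiers.new("wrap_on_phobos", type='SHRINKWRAP')
wrap.target = phobos
wrap.wrap_method = 'PROJECT'             # project along the local normal
```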

B. Relating 3D meshes to spectral attributes

Initially, spectral attributes were to be assigned through texture application, but this was simplified along the way, and a (Blender) material is enough. To do so, a dictionary has to be created: the key is a color, and this color is linked to a spectral albedo (more on that later). These albedos then characterize a real material (a precise mineral or organic component) that the simulations should differentiate. There can be as many materials as there are rocks (or rock shapes). An example of material association is given in Figure 12. Note that the surface itself (i.e. everything but the rocks) has not been assigned a material in this example. It is obviously possible to add one, whether over a homogeneous zone or not. It is also possible to add an image as a texture to represent a local modulation of irradiance. This would allow modeling micro-deformations of the terrain that are not visible at the scale at which the scene is built. Another way to do this would be to linearly mix several materials for certain objects.
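As an illustration of the dictionary idea, a minimal sketch (material names and albedo values are hypothetical placeholders, not the project's data):

```python
# Minimal sketch of the material dictionary: a material color is the key and
# points to the spectral albedo curve of a real material. Names and values
# are illustrative placeholders, not the project's data.
import numpy as np

SPECTRAL_ALBEDOS = {
    # (R, G, B) key: spectral albedo sampled at a few wavelengths (um)
    (1.0, 0.0, 0.0): {"name": "red_unit",  "wl": [0.95, 2.0, 3.6], "rho": [0.06, 0.07, 0.08]},
    (0.0, 0.0, 1.0): {"name": "blue_unit", "wl": [0.95, 2.0, 3.6], "rho": [0.07, 0.07, 0.06]},
}

def albedo_for_color(rgb, wavelength_um):
    """Interpolate the spectral albedo attached to a color key."""
    entry = SPECTRAL_ALBEDOS[rgb]
    return float(np.interp(wavelength_um, entry["wl"], entry["rho"]))

# Example: albedo of the "red unit" material at 1.5 um.
print(albedo_for_color((1.0, 0.0, 0.0), 1.5))
```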

C. Building the surface of Phobos

All in all, Blender is a very powerful tool that fits the needs of the project perfectly. After this demonstration of its potential, three representative scenes were designed. For each of them, the variables are the quantities of craters and boulders and the positions of the two so-called spectral "endmembers".


Figure 12: Example with 2 types of rocks ("red" and "blue" material)

These endmembers represent the different materials expected to be found (there are two because of the red and blue units mentioned in I-A). All scenes were designed with dimensions of 2×2 km², but this can be adjusted according to the requirements (and will be anyway when applying the scenes on Phobos). By comparison, the maximum size of the boulders is about 12 m (also editable). The first scene contains many craters (generated via Blender's internal tools and from a Moon DTM) and boulders. The endmembers are represented in red and gray: the idea here was to assign the red endmember to a few boulders and the gray one to the rest of the scene. Figure 13 shows top and side views as well as the mesh itself. Scene 2, Figure 14, contains few craters and boulders. The craters are once again generated with a Blender tool and a Moon DTM (this time with a lower crater density). The red endmember is placed on a few craters, see the stains in Figure 14. Finally, scene 3, Figure 15, was designed with many craters and boulders, but also a large structure meant to represent severe slope irregularities on the surface of Phobos. The red endmember is present on a part of the structure.

Figure 13: Phobos Scene 1

Figure 14: Phobos Scene 2

Figure 15: Phobos Scene 3

III. RADIANCE MODELING FOR MMX

Once the textured mesh is generated, applied to the local relief of Phobos and extracted, the idea is to associate a radiance value with each facet of the mesh. Each triangle contains albedo data related to its material, and the goal is to render the radiance value of each facet on the instrument's pixel matrix. This section introduces how the radiance modeling was thought out. Initially, it was to be added to the same software that renders the images; however, another solution was found and is the subject of section V. The format is 256×256 pixels while the mesh typically contains more than 2·10⁶ facets, so one pixel obviously covers many facets.

The principle of radiance modeling is introduced in Figure 16. There are two main sources of light: obviously the Sun, but also Mars, since Phobos is the celestial body in the solar system that orbits closest to its parent body. Mars thus subtends about 60° in the sky when standing on Phobos. The light of Mars mainly influences the shadows, since it irradiates much less than the Sun (the surfaces lit by the Sun are also illuminated by Mars, but the contribution is small). For other applications, one can imagine a source other than Mars irradiating the shadows (for instance the Moon).

Thermal emissions from the surface are also identified as a source.

Then, these light sources are modulated by the spectral properties of the identified surface materials, the texture attributes and the orientation of each triangle relative to the camera. Each facet then contributes a certain radiance received by the sensor.


Figure 16: Radiance modeling of a facet

A. Sun and Mars irradiance

Solar irradiance, the dominant source of light, is fairly easy to compute once the average irradiance on Phobos and the solar incidence angle of the facet are known. Similar values are required for Mars' contribution. With the notations of Figure 16 and E_S (resp. E_M) the solar (Martian) spectral irradiance on Phobos (in W m⁻² µm⁻¹, different for each wavelength), each facet receives:

E_{Sun} = E_S \cos\theta_i \quad [\mathrm{W\,m^{-2}\,\mu m^{-1}}]    (3)

E_{Mars} = E_M \cos\theta_m \quad [\mathrm{W\,m^{-2}\,\mu m^{-1}}]    (4)

One can also imagine, for other applications, that the secondary source also casts shadows; the process described in IV would then have to be applied again.

Contrary to the solar irradiance (E_S), which is well known and can be calculated for any point in the solar system, the contribution of Mars to the illumination of Phobos (E_M) has to be calculated by hand. The following model is very basic and greatly simplifies the physical phenomena, but given Mars' secondary role as a light source in the simulations, it is sufficient. First, the solar irradiance on Mars (top of atmosphere) is obtained by scaling with the distance to the Sun, equation (5), with D_SunEarth (resp. D_SunMars) the distance between the Sun and Earth (Mars). Because of the distances involved (Phobos' mean orbit lies 6000 km from Mars, negligible compared with the 1.5 AU, i.e. more than 2·10⁸ km, of Mars' mean orbit around the Sun), the same value is used for the solar irradiance on Mars and on Phobos, meaning that E_S is taken equal to E_SunMars. The first assumption is then to use Mars' Bond albedo [10] of 0.25, called ρ_Mars, for every wavelength. This albedo accounts for all the light scattered by a body at all phase angles; it represents the "fraction of power in the total electromagnetic radiation incident on an astronomical body" that is reflected. Accordingly, the energy reflected by Mars, E_ref, is given by equation (6). The second assumption is that half of Mars is always lit, so that the reflected energy is constant and obtained by dividing the surface area of a sphere by 2. This energy is then equated, by energy conservation, to what is received by a spherical surface at the position of Phobos, equation (7), with R_Mars Mars' radius and D_MarsPhobos the distance between Mars and Phobos. As an example, with mean values of 1.5237 AU for the Sun-Mars distance and 9389 km for Phobos' orbit, the calculation yields equation (8): the irradiance from Mars on Phobos is equivalent to 6.5% of the solar irradiance. This predicts that Mars is almost negligible in the sunlit areas (this number has to be multiplied by the average albedo of 0.07, which results in a very small value) but contributes significantly to the shadows. Of all the numerical values used in these calculations, the only ones that vary between configurations are the distance between the Sun and Mars, because the red planet's orbit is quite elliptical (perihelion of 1.381 AU, aphelion of 1.666 AU), and the distance between Phobos and Mars (negligible for the solar irradiance on Phobos but important for the energy radiated by Mars).

E_{SunMars} = E_{SunEarth} \left( \frac{D_{SunEarth}}{D_{SunMars}} \right)^2 \quad [\mathrm{W\,m^{-2}\,\mu m^{-1}}]    (5)

E_{ref} = \rho_{Mars} E_{SunMars} \quad [\mathrm{W\,m^{-2}\,\mu m^{-1}}]    (6)

2\pi R_{Mars}^2 E_{ref} = \pi D_{MarsPhobos}^2 E_M    (7)

\frac{E_M}{E_S} = 0.065    (8)
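Equations (6) and (7) reduce to two lines of arithmetic; a minimal check of the 6.5% figure (Mars' radius is a standard value, not given in the text):

```python
# Numerical check of equations (6)-(8) with the mean values from the text.
R_MARS = 3389.5e3        # Mars radius in m (standard value, assumed)
D_MARS_PHOBOS = 9389e3   # mean Mars-Phobos distance in m
RHO_MARS = 0.25          # Bond albedo of Mars

# Eq. (7) solved for E_M, with E_ref = RHO_MARS * E_S from eq. (6):
ratio = 2.0 * R_MARS**2 * RHO_MARS / D_MARS_PHOBOS**2
print(f"E_M / E_S = {ratio:.3f}")   # -> 0.065, matching eq. (8)
```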

B. Thermal emissions

Thermal inertia is a weak phenomenon on Phobos, so a very simple model is used here: the temperature varies proportionally to the solar irradiance, with minimum and maximum values of 150 K (no irradiance) and 330 K (maximum irradiance, the facet facing the Sun, θ_i = 0). Another simplification is to assume that the emission factor of the surface (ε_surf) is 1. The black-body thermal radiance model can then be used, with h the Planck constant, c the speed of light, λ the wavelength and k the Boltzmann constant:

T = 150 + 180 \frac{E_{Sun}}{E_S} \quad [\mathrm{K}]    (9)

L_\lambda^{BB} = \epsilon_{surf} \, \frac{2 h c^2 \lambda^{-5}}{e^{hc/(\lambda k T)} - 1} \quad [\mathrm{W\,m^{-2}\,sr^{-1}\,\mu m^{-1}}]    (10)

Once again, this is a basic model, justified by Phobos' properties. The effect of thermal emission should only matter at wavelengths greater than 3.0 µm, so it may even be neglected elsewhere. To apply this to another context, one might develop other thermal emission models or use dedicated software.
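As an illustration, a minimal sketch of equations (9) and (10) (physical constants are standard values; the function names are illustrative):

```python
# Minimal sketch of the thermal model of eqs. (9)-(10): temperature scales
# linearly with solar irradiance and the facet radiates as a black body
# (emissivity taken as 1, as in the text).
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
K = 1.380649e-23     # Boltzmann constant (J/K)

def facet_temperature(e_sun, e_s):
    """Eq. (9): 150 K in full shadow, 330 K when facing the Sun."""
    return 150.0 + 180.0 * e_sun / e_s

def blackbody_radiance(wavelength_um, temperature_k, emissivity=1.0):
    """Eq. (10): spectral radiance in W m-2 sr-1 um-1."""
    lam = wavelength_um * 1e-6   # to metres
    l_per_m = emissivity * 2.0 * H * C**2 * lam**-5 / (np.exp(H * C / (lam * K * temperature_k)) - 1.0)
    return l_per_m * 1e-6        # per metre -> per micrometre

# Example: a facet facing the Sun, observed at 3.3 um.
print(blackbody_radiance(3.3, facet_temperature(1.0, 1.0)))
```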

C. Modulation with spectral properties and textures

As introduced in I-D, materials have different spectral properties that modulate their response to irradiation. Some models are simpler than others. The Lambertian model states that the reflectance of the surface does not depend on the viewing and incidence angles: ρ_λ(θ_i, φ_i, θ_r, φ_r) = ρ_λ.


A second possibility is azimuthal isotropy, where the reflectance only depends on the incidence and viewing zenith angles: ρ_λ(θ_i, φ_i, θ_r, φ_r) = ρ_λ(θ_i, θ_r). Otherwise, the reflectance can be bidirectional and depend on both incidence and viewing angles. Adding a functionality to read all kinds of spectral properties, be they Lambertian or more elaborate, was to be another important step for the simulation software.

The endmembers mentioned in II-C find their purpose here: they serve as dictionary keys connecting to the spectral properties of real materials.

However, assigning different materials to the facets is not enough. If the texture inside each triangle were not modulated, only the triangles themselves would be visible in the final picture. That is why a panchromatic texture can be added inside each facet to locally modulate the reflectance assigned to the materials. This texture can be similar to what is represented inside the triangle of Figure 16, stemming from a noise generator in Blender. In case of high spatial variability, it would also be possible to include the reflectances as a multi-band spectral texture, each layer representing the reflectances at a given wavelength. This would be useful for missions and scenes where each facet could be of a different material.

D. Result for each facet, MMX case

In the MMX context, a panchromatic texture is a bonus because there are only two different materials to simulate (and they are spectrally very close). First, lit and shaded facets have to be distinguished. The two cases are given respectively by equations (11) and (12):

E_\lambda^{tot} = E_{Sun} + E_{Mars} \quad [\mathrm{W\,m^{-2}\,\mu m^{-1}}]    (11)

E_\lambda^{tot} = E_{Mars} \quad [\mathrm{W\,m^{-2}\,\mu m^{-1}}]    (12)

Once the irradiance is computed, the spectral radiances can be calculated using the classical definition, the modulation by the integrated texture value R, and the thermal emission. The three reflectance models of the previous section give, respectively, equations (13), (14) and (15):

L_\lambda = \rho_\lambda \frac{E_\lambda^{tot}}{\pi} R + L_\lambda^{BB} \quad [\mathrm{W\,m^{-2}\,sr^{-1}\,\mu m^{-1}}]    (13)

L_\lambda = \rho_\lambda(\theta_i, \theta_r) \frac{E_\lambda^{tot}}{\pi} R + L_\lambda^{BB} \quad [\mathrm{W\,m^{-2}\,sr^{-1}\,\mu m^{-1}}]    (14)

L_\lambda = \rho_\lambda(\theta_i, \phi_i, \theta_r, \phi_r) \frac{E_\lambda^{tot}}{\pi} R + L_\lambda^{BB} \quad [\mathrm{W\,m^{-2}\,sr^{-1}\,\mu m^{-1}}]    (15)

Finally, the integrated radiance has to be calculated for each facet, by multiplying the previous radiance values by the bandwidth of each wavelength channel, equation (16). In the end, every facet has a radiance value at each wavelength.

L_\lambda^i = L_\lambda \, \Delta\lambda \quad [\mathrm{W\,m^{-2}\,sr^{-1}}]    (16)
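Gathering equations (11) to (13) and (16), the per-facet computation can be sketched as follows for the Lambertian case (a sketch with placeholder argument names, not the actual simulator code):

```python
# Per-facet spectral radiance for the Lambertian case, eqs. (11)-(13) and
# (16). A sketch with placeholder argument names, not the simulator code.
import math

def facet_radiance(rho, e_sun, e_mars, texture_mod, l_bb, bandwidth_um, lit=True):
    """
    rho          Lambertian spectral albedo of the facet material
    e_sun        solar irradiance on the facet, eq. (3)  [W m-2 um-1]
    e_mars       Martian irradiance on the facet, eq. (4) [W m-2 um-1]
    texture_mod  local panchromatic texture modulation R
    l_bb         black-body thermal radiance, eq. (10) [W m-2 sr-1 um-1]
    bandwidth_um spectral bandwidth of the channel [um]
    lit          False if the facet is shaded from the Sun
    """
    e_tot = (e_sun + e_mars) if lit else e_mars            # eqs. (11)/(12)
    l_spec = rho * e_tot / math.pi * texture_mod + l_bb    # eq. (13)
    return l_spec * bandwidth_um                           # eq. (16)
```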

E. Implementation

An existing tool at CNES is Simu3D, an image simulator developed by C-S Group and written in C++ and Python. It is used to generate rendered satellite images from a 3D textured mesh. It includes both push-broom and pinhole camera models and can simulate 3D images from stereoscopic views. The key point ensuring the geometric quality of the render is that each facet is rendered individually by numerical integration (instead of using ray tracing, which leads to aliasing). There are, however, a few limits to this simulator. First, it only works for Earth-observation applications. Second, it does not create radiance images but simply textures of a mesh; there is no physical modeling, and shadows, for example, are baked into the textures. A comparison of a Simu3D product and a real satellite acquisition is given in Figure 17. One can see that the simulator is geometrically very accurate but does not render physical reality, notably the Sun position: on the right, the facades are much more brightly lit because of the low solar elevation (the image was taken in winter), something the textured meshes cannot reproduce.

Figure 17: On the left, a Simu3D product; on the right, real satellite data.

The heart of Simu3D is called Girender (for Geometrical Integration Render). Simu3D encompasses Girender plus other tools to simulate camera models other than the pinhole and to "georeference" every image to Earth. Simu3D does not handle contexts other than Earth observation, but Girender itself does, which is why the work was mainly done on this part of the software. To implement the radiance model of the previous section, it was first envisioned to add the feature directly to Girender. This would have required an intelligent dictionary relating the materials of the obj file to the corresponding optical properties. Such a dictionary could have taken many forms and entries, with the surface being Lambertian or not, and could have been combined with an N-band texture file if the spatial variability was significant. However, it would have been impossible to take environmental effects into account, i.e. how a facet's surroundings influence what the facet radiates towards the camera. A good starting point was thus to add a feature to determine shadows, and that is what happened chronologically (this is the subject of the next section). In the end, however, this solution was not retained.


Instead, DART (Discrete Anisotropic Radiative Transfer), a software conceived by CESBIO (Centre d'Etudes Spatiales de la BIOsphère) [11], was used. It was designed to model radiative transfers between the Earth and the atmosphere from the visible to the thermal infrared, but it can also be applied to MMX. By combining the comprehensive physical model of DART with the image quality offered by Girender, the best possible products were computed.

IV. UPGRADE OF CURRENT TOOLS

This section focuses on the addition of shadow determination to Girender. As mentioned, it was done before DART was introduced as a solution and took significant time and commitment. In addition, it can be reused in other contexts.

A. Principle of shadow casting

Girender determines which parts of the scene are visible by projecting the 3D scene onto the (2D) camera frame and sorting the faces with the painter's algorithm. This algorithm decides which polygons are visible and is pictured in Figure 18. First, the intersections are computed. Then, for intersecting facets, a sort by depth determines which one is in front of the other. The hidden parts are finally removed, and the visible parts are divided into triangles for the geometric integration.

Figure 18: Painter's algorithm

This method determines which facets are visible, but it can be reused to determine which facets are shaded. Indeed, the way the simulator is built, it is possible to add the Sun's position as if it were a camera and apply the same algorithm. Instead of assigning an attribute marking whether a facet is visible, the attribute marks whether the facet is in shadow. The concept is summed up in Figure 19.

There are then three main steps to take the shadows into account. First, a camera model is placed at the Sun's position. Second, the facets are projected into the Sun "camera" frame. Third, the new facets, lit or shaded, go through the classic Girender loop and are integrated to render the image.
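Schematically, the two passes can be summarised as follows; painter_sort and painter_sort_and_integrate are hypothetical stand-ins for Girender's internal C++ routines:

```python
# Schematic two-pass shadow determination; the actual implementation is part
# of Girender's C++ painter's-algorithm code, and the two helper functions
# below are hypothetical stand-ins for it.
def render_with_shadows(mesh, sun_camera, true_camera):
    # Pass 1: "view" the scene from the Sun. Facets hidden from the Sun are
    # exactly the shaded ones; they are kept and tagged instead of discarded.
    for facet in mesh.facets:
        facet.face_shadow = 0                        # assume lit at first
    for facet in painter_sort(mesh, sun_camera).hidden:
        facet.face_shadow = 1                        # shaded facet

    # Pass 2: classic Girender loop with the true camera; FaceShadow decides
    # which irradiance budget, eq. (11) or (12), each facet receives.
    return painter_sort_and_integrate(mesh, true_camera)
```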

Girender encompasses complex C++ code and structures; in what follows, only the algorithms and the main obstacles encountered are explained.

Figure 19: Principle of shadow casting

B. Projection in the Sun camera frame

To begin with, the overall structure of the tool was modified to accept the Sun's position, in the form of a camera file, as a parameter of the main function. This camera file includes the center of the camera plane, its unit vectors and the intrinsic parameters of the instrument used for the projection into the image frame (the latter are not used for the Sun projection).

Figure 20 shows the different vector orientations.

Figure 20: Camera orientation

Once this camera file is created, the first step is to project the 3D scene into the 2D(+depth) Sun frame. Classical kinematics are used, with (x, y, z) the coordinates in the mesh frame, (h, v, u) the camera unit vectors and O the camera origin. (X, Y, Z) are the new coordinates in the Sun frame, the first two being divided by the depth (hence the 2D+depth projection):

Z = (x - O_x) u_x + (y - O_y) u_y + (z - O_z) u_z    (17)

X = \frac{1}{Z} \left( (x - O_x) h_x + (y - O_y) h_y + (z - O_z) h_z \right)    (18)

Y = \frac{1}{Z} \left( (x - O_x) v_x + (y - O_y) v_y + (z - O_z) v_z \right)    (19)

In the actual code, a small "jitter" term is used to ease the numerical calculations; it caused trouble when intersecting faces, but as this is purely a programming issue, it is not detailed here. Prior to this projection, when all facets are loaded into the simulator (to simplify the process), the aforementioned new attribute "FaceShadow" is initialized to 0 for every facet, 0 meaning that the facet is lit. At first, every facet is thus assumed visible and lit.
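Equations (17) to (19) amount to a change of basis followed by a division by depth; a minimal numpy sketch:

```python
# Eqs. (17)-(19): project a mesh-frame point into the Sun "camera" frame in
# 2D+depth form. O is the camera origin and (h, v, u) its unit vectors, all
# expressed in the mesh frame.
import numpy as np

def project_to_sun_frame(p, O, h, v, u):
    d = np.asarray(p, dtype=float) - np.asarray(O, dtype=float)
    Z = d @ u              # eq. (17): depth along the viewing direction
    X = (d @ h) / Z        # eq. (18)
    Y = (d @ v) / Z        # eq. (19)
    return X, Y, Z
```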


Then, all the facets go through a first method that flags visibility by detecting facets facing away from the camera (via the scalar product between the facet normal and the camera pointing vector). These are surely not visible and, in this case, shaded. Then the painter's algorithm is put to use, and this is where a major shift happens. The initial and new methods are illustrated in Figure 21: instead of keeping only what is visible, the hidden parts are also kept, tagged with FaceShadow = 1 to record that they are shaded. Finally, the new facets are cut into triangles and exported, along with their texture information (the positions of their mapped vertices in the texture image).

Figure 21: New selection in the painter's algorithm

C. Applying initial methods to facets with shadow attributes

Once the new facets with their shadow attributes are exported, they need to be projected back into the mesh frame, because they were generated in the Sun camera frame. If C is the origin of the mesh frame, (x, y, z) its unit vectors and the subscript Sun designates coordinates in the Sun frame, the back projection is:

z = (X_{Sun} Z_{Sun} - C_h C_u) z_h + (Y_{Sun} Z_{Sun} - C_v C_u) z_v + (Z_{Sun} - C_u) z_u    (20)

x = (X_{Sun} Z_{Sun} - C_h C_u) x_h + (Y_{Sun} Z_{Sun} - C_v C_u) x_v + (Z_{Sun} - C_u) x_u    (21)

y = (X_{Sun} Z_{Sun} - C_h C_u) y_h + (Y_{Sun} Z_{Sun} - C_v C_u) y_v + (Z_{Sun} - C_u) y_u    (22)

The multiplication by the depth component is needed because coordinates in the camera frame are written in the 2D+depth format mentioned before. The coordinates of the mesh-frame unit vectors in the camera frame are obtained by inverting the transfer matrix (known because the Sun-frame unit vectors are originally given in the mesh frame). Once this is done, the facets can be added to the mesh treated by the initial Girender loop, this time with the true camera file.
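Correspondingly, equations (20) to (22) can be sketched as follows (C and M_inv follow the definitions above; this is an illustrative sketch, not Girender's code):

```python
# Eqs. (20)-(22): from 2D+depth Sun-frame coordinates back to the mesh frame.
# C = (C_h, C_v, C_u) is the mesh-frame origin in the camera's 2D+depth
# format, and M_inv is the inverted transfer matrix whose rows hold the
# mesh unit vectors expressed in the camera frame.
import numpy as np

def back_project_to_mesh_frame(X_sun, Y_sun, Z_sun, C, M_inv):
    C_h, C_v, C_u = C
    # Undo the division by depth, then subtract the camera-frame origin.
    d_cam = np.array([X_sun * Z_sun - C_h * C_u,
                      Y_sun * Z_sun - C_v * C_u,
                      Z_sun - C_u])
    return M_inv @ d_cam   # mesh-frame coordinates (x, y, z)
```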

Other programming issues arose, for instance regarding the normals of the new facets or the intersection precision, but these are not discussed here.

After the function determining which facets are visible from the camera has been applied, only the geometric integration is left. As a reminder, the general idea is to integrate, within each facet, the value found in the texture at the facet's mapped vertices. At this point, the radiance integration modification was not yet implemented, so the tests demonstrating the shadow generation render shadows as black: all shaded facets are exported but not treated, and the default value in the integration gives a black render. The tests were made with a very simple mesh, with few facets, in order to reduce the calculation time. This scene is composed of a plane and two boulders, the bigger of which has a red texture while the rest of the scene is green, see Figure 22.

Figure 22: Test scene, seen by material (a) and texture (b)

The idea was first to position the Sun on the left or right side of the red boulder to have its shadow represented in the image. The camera is placed directly above the scene (similar to nadir pointing) at a height sufficient to capture it. Thus, the only black facets visible in the render are due to the shadows, since there are no facets the camera cannot observe. As an example of results, Figure 23 shows four different Sun positions: a) and b) are at the same height but opposite angles (45°), while c) and d) are at the same angle but lower heights. One can then appreciate the lengthening of the shadow as the height decreases. Other examples, with different illumination conditions, can be seen in Figure 24.

All in all, shadow casting is a useful addition to the software and can be reused for applications other than MMX. For instance, one can imagine applying this method to Earth-observation missions to produce images at different times of the day (with different Sun positions). Nevertheless, the shadows generated here are valid for a point source, not an area source. As visible in Figure 25, illumination by the Sun should produce two distinct shadow areas, umbra and penumbra. However, the average albedo on Phobos is so low that these differences are neglected here, and the Sun is treated as a point source.


Figure 23: Different cast shadows results

Figure 24: Cast shadows results with oblique enlightenment

D. Testing shadows with true MMX orbital data

To test this new feature, MMX orbital data and representative Phobos scenes were used. A few configurations (MMX position, Sun position, targeted point at the surface of Phobos) were provided by CNES' flight dynamics unit. Mars' position was also given but not used at this point. The three vectors defining the camera are initially taken to coincide with the satellite and are also given in the Phobos-centered frame. An example is given in Table I. This configuration sets the positions of the objects, but the instrument specifications are chosen such that the entire scene is visible; with the true field of view of MacrOmega, the 2×2 km² scenes generated for the tests are not completely observable. Unless otherwise specified, the same method (entire scene captured in the image) is applied to the simulation results from now on.

Given the (very) distant position of the Sun, an approximation is possible: Girender can be fed a smaller value so that the projections are done on a bigger scale.

Figure 25: a) Hard shadows obtained with a point source b) Soft shadows obtained with an area source

Object          x (m)          y (m)          z (m)
MMX             −20159.4581    87145.4911     3.7671
Sun             3.992·10¹⁰     2.1812·10¹¹    1.1501·10¹⁰
Targeted point  −2743.5946     11860.0360     0.51269

Table I: Example of a configuration; all data are expressed in the Phobos-centered frame

Indeed, if the real Sun position is kept, defects tend to appear on the rendered images, see Figure 26. First note that the mesh is deformed; this is due to the application of the scene at the targeted position on Phobos mentioned in II-A. Then note the slight differences between the shadows in images a) and b), which were obtained with the Sun position divided by 10⁸ and 10⁷ respectively. In c) and d), the shadows are the same, showing that dividing by 10⁷ or less yields the same result (regarding geometry). However, small lit areas that change the radiometry of the image appear with the larger division, so the 10⁷ factor is used, which for this particular configuration results in an approximated Sun position at 399/2181/115 km from Phobos.

To further test this new feature and its robustness, two new scenes were created, with 2·10⁵ and 2·10⁶ facets respectively. Another configuration, which generates more shadows because of the local relief at the targeted point on the surface of Phobos, was also tested. Results are given in Figure 27. The first scene contains a lot of craters and boulders; all the boulders carry the red endmember while the rest of the scene is blue. The second scene also contains a lot of boulders, but not all of them are red. For the first configuration, shadows are scarce and difficult to distinguish from the boulders, especially in the second scene. They are nonetheless present, and a deeper analysis of the image shows that they are consistent with what is expected from the topography and the Sun position. For the second configuration, the local relief is much more uneven, with greater slopes, which yields images with large shaded areas. It is difficult to spot the red endmembers at this scale without closer examination, but they are properly portrayed.

References
