Visualization of Space Debris using Orbital Representation and Volume Rendering

Academic year: 2021

Department of Science and Technology


LIU-ITN-TEK-A-19/050--SE

Visualization of Space Debris

using Orbital Representation

and Volume Rendering

Jonathan Fransson

Elon Olsson


Master's thesis carried out in Media Technology at the Institute of Technology, Linköping University


Supervisor: Emil Axelsson

Examiner: Anders Ynnerman


Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/


Abstract

This report covers a master's thesis project done at the University of Utah for the OpenSpace project. OpenSpace is an open-source astronomy visualization software, and the focus of the thesis was to visualize the ever-increasing amount of man-made space debris. Two different visualization methods have been used in this thesis: an orbital trail representation and a volume rendering, and the report evaluates how the volume rendering works in relation to the orbital trail representation. One of the aspects evaluated is whether the volumetric representation reduces cluttering; a more open-ended exploratory question is whether the volumetric representation can provide any new insights about the data. In short, can a volumetric representation give anything that an orbital representation cannot? A volume rendering can use different types of grids. The thesis evaluates the pros and cons of a Cartesian and a spherical grid, as well as different grid resolutions and tweaks to the transfer function.

An orbital trail representation was previously implemented in OpenSpace (referred to as the individual scene graph node implementation in this report), which had its advantages. One drawback, however, was that it did not scale well with an increasing number of data elements: visualizing all the data sets containing each trackable piece of space debris simultaneously causes the software to slow down significantly. An alternative implementation (referred to as the single draw call implementation in this report) was therefore tested in the hope of solving this issue. To measure the performance difference, tests were performed in which the frame time for the whole scene was measured.


Acknowledgments

We have had a wonderful time in Salt Lake City at the University of Utah while working on this master's thesis. The biggest thanks go to Gene Payne, who showed us the campus and the best restaurants in town, and who was also super helpful when the programming was difficult. A huge thanks also goes to Anders Ynnerman for providing us with the amazing opportunity of working on space debris visualization as our thesis. Thank you Emil Axelsson for putting up with our weekly meetings at inconvenient hours due to the time difference between Utah time and Swedish time. Thanks to Charles Hansen and Alexander Bock for also supervising and bouncing ideas. Thanks to all the people at SCI who welcomed us and helped us settle in.

Thank you to Micah Acinapura and Carter Emmart at the American Museum of Natural History in New York who welcomed us into the Hayden Planetarium and helped us set up and test our project, it was wild!

We want to thank every undergraduate, master's and PhD student, and all the other people we've met here during our stay. We now have a big network of smart, amazing and wonderful friends worldwide. Thank you for all the concerts, board game nights, adventures, hikes, beers, cinemas, and climbing sessions together! Finally, a huge thanks to our friends and family who went out of their way to come visit us! Our parents, Adam, Tove, and Emil, you guys rock!

Tack så mycket! (Thank you so much!)


Contents

List of Figures
List of Tables
1 Introduction
1.1 Background
1.2 Motivation
1.3 Aim
1.4 Research questions
1.5 Delimitations
2 Related work
2.1 Space debris visualizations
2.2 Volume rendering
3 Theory
3.1 Scene graph
3.2 Volume rendering
3.3 Orbits
4 Implementation
4.1 Orbital representation
4.2 Volumetric representation
5 Results
5.1 Performance
5.2 Perception / comprehension
6 Discussion
6.1 Performance
6.2 Perception
6.3 Implementation
6.4 The work in a wider context
7 Conclusion
7.1 Performance
7.2 Perception


List of Figures

3.1 Illustration showing the process of a single ray being cast through a 3D volume. The dot in the image plane represents the pixel currently being calculated; the dots in the 3D volume represent the sample points along the ray.
3.2 Depicting an orbit using ν for True anomaly, ω for Argument of periapsis, Ω for Longitude of ascending node, Υ for Reference direction, i for Inclination and ☊ for Ascending node.
3.3 Image showing how orbits tend not to pass over the poles, instead forming a ring around them, due to satellites launched into sun-synchronous orbit.
4.1 Illustration of how the fading of an orbit's trail line looks.
4.2 Showing vertices in the vertex buffer.
4.3 Showing the full vertex buffer, with all orbits next to each other.
4.4 2D illustration of moving the point where the indexing of a Cartesian grid starts, so that it centers around Earth instead of having its index zero at the origin of Earth.
4.5 A voxel in a spherical grid.
5.1 Comparison of frame time between ISGNI (individual scene graph node implementation) and SDCI (single draw call implementation) in relation to the number of data elements.
5.2 Volume based on a Cartesian grid with the resolution (a) 16x16x16, (b) 32x32x32, (c) 64x64x64 and (d) 128x128x128.
5.3 Volume based on a spherical grid with the resolution (a) 16x16x32, (b) 32x32x64, (c) 64x64x128 and (d) 128x128x256.
5.4 Using the transfer function to isolate denser areas by (a) increasing the threshold for when to render the less dense areas and (b) lowering the alpha value of low-density areas; (c) shows the same volume with the default thresholds and alpha values.
5.5 In the transfer function any color scheme can be chosen and applied to the different levels of density.


List of Tables

5.1 Hardware and OS specifics for the computer that ran the performance tests.
5.2 Average results in milliseconds from 1 h tests of ISGNI (individual scene graph node implementation) and of SDCI (single draw call implementation), as well as the time reduction in percent.
5.3 Result of the comparison of start-up times in minutes:seconds and a fraction of a second, where ISGNI is the individual scene graph node implementation and SDCI is the single draw call implementation.
5.4 Performance of volume rendering.


1 Introduction

Space debris is a topic that is increasing in popularity and relevance. It seems like an ever-growing issue that most people have heard about but know little about. The main focus of this thesis is to create a new way to visualize space debris, to hopefully convey a deeper understanding of the topic. A term that comprises this is "exploration" [1], the concept of how visualization methods can use exploratory data to communicate results and convey an understanding from it. This has been the motivation for this thesis project, which has been carried out in the software OpenSpace. The project itself is targeted towards the general public, as the topics discussed affect the population of Earth as a whole and not any specific group.

1.1 Background

To understand some of the technical terms, the data and the software used in this thesis, some explanatory background is given in this chapter.

1.1.1 OpenSpace

OpenSpace is an open-source interactive 3D visualization software partially developed at Linköping University to visualize the known universe [2] [3] [4]. It is designed to be usable as a tool for scientists, to be displayed in a dome or planetarium, and to be used by the general public on a home computer [5] [6]. The development of OpenSpace is a collaboration between Linköping University, the Scientific Computing and Imaging (SCI) Institute at the University of Utah, the American Museum of Natural History, New York University and NASA (National Aeronautics and Space Administration). OpenSpace is derived from another visualization software called Uniview [7]. Uniview held a similar purpose of visualizing the universe and serves as a predecessor to OpenSpace.

1.1.2 Space Debris

As of 2019, ESA (European Space Agency) estimates that there are roughly 900 000 pieces of space debris ranging from 1 to 10 cm in size, and 34 000 objects larger than that, orbiting around Earth. In addition, an estimated 128 million pieces are too small to track [8]. A piece of debris in LEO (Low Earth Orbit) travels at an average speed of 7-8 km/s, giving even small pieces a considerable impact force on collision. Space debris was originally a term for both naturally occurring debris, such as meteoroids, and artificial debris. Today, however, the term mostly refers to the man-made debris that orbits Earth. The problem with debris in orbit has become more relevant: with increasing amounts of debris, several problems can occur. For example, launching spacecraft can become more risky, and the chance that satellites and other spacecraft already in orbit are damaged rises as well.


1.1.3 Data

One piece of debris is described as one data element in a TLE file (two-line element file). The TLE format has been the standard data format for describing any object in orbit around the Earth since the early 1970s [9]. One data element in a TLE file consists of 24 parameters. Not all of these parameters describe the shape of a piece of debris' orbit; some represent information about the debris itself, for example the year of launch.

An orbit and the position of the celestial body are more or less represented by eight of the total 24 parameters: year of epoch, day of the year and fractional portion of the day of the epoch, inclination, right ascension of the ascending node, eccentricity, argument of perigee, mean anomaly and mean motion [10].
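As an illustrative sketch (not code from the thesis), these eight parameters can be read out of a TLE line pair using the standard fixed-column layout. The sample lines below are format-valid example values in the style of an ISS element set, not a verified record:

```python
# Sketch: pulling the eight orbit-describing parameters out of a TLE line pair.
# Column positions follow the standard NORAD TLE layout.
def parse_tle(line1: str, line2: str) -> dict:
    return {
        "epoch_year": int(line1[18:20]),            # two-digit year of epoch
        "epoch_day": float(line1[20:32]),           # day of year incl. fractional part
        "inclination_deg": float(line2[8:16]),
        "raan_deg": float(line2[17:25]),            # right ascension of ascending node
        "eccentricity": float("0." + line2[26:33].strip()),  # implied leading "0."
        "arg_perigee_deg": float(line2[34:42]),
        "mean_anomaly_deg": float(line2[43:51]),
        "mean_motion_rev_per_day": float(line2[52:63]),
    }

l1 = "1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9996"
l2 = "2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482"
elems = parse_tle(l1, l2)
# elems["inclination_deg"] -> 51.6439, elems["eccentricity"] -> 0.0007417
```

Note the eccentricity field: the TLE stores only the digits after the decimal point, so the leading "0." has to be restored when parsing.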

The visualizations of space debris in OpenSpace use five data sets categorized as space debris at www.celestrak.com [11]. They are named Breeze-M, Indian ASAT 2019, Iridium 33, Cosmos 2251 and Fengyun. In order, these data sets consist of 12, 58, 303, 1025 and 2565 data elements, 3963 in total. However, these data sets are continuously updated and may vary in number.

1.2 Motivation

The topic of space debris has lately become more widespread, and the notion that it is a growing issue is more prevalent than ever, especially in a time when things like commercial spaceflight are ideas that are not too far-fetched. One issue is how to convey to the general public how much space debris there is in orbit and how it behaves, to give a general understanding of it. Performance is an important part of presenting visualizations, as the user benefits from a smooth and pleasant experience when using and interpreting the visualization. The satellite and space debris visualization implementation existing in OpenSpace was not optimized to handle a large amount of data, and its performance was affected negatively by this. As the amount of space debris in orbit will most likely increase over time, this was an issue that could grow if not addressed. By improving the low performance, a user of OpenSpace will have a better experience while using the orbit representation of the space debris visualization. The orbital representation within OpenSpace also suffered from being very cluttered at already around 4000 pieces of debris, out of the 34 000 trackable pieces estimated to exist. That in itself is reason enough to start looking at other potential visualization methods for representing the space debris. In this thesis, an alternative method of visualization is examined to see if this issue can be improved upon to create a more comprehensible representation.


1.3 Aim

The purpose of this thesis was to create a more scalable implementation for visualizing satellites and space debris within OpenSpace; scalable in the sense that an alternative implementation would work smoothly even with a higher number of data elements. This more scalable implementation utilizes one scene graph node and therefore only requires a single draw call per data set. In this report it is called the single draw call implementation, unlike the previous implementation, which uses a single scene graph node per data element and is called the individual scene graph node implementation. Both implementation versions represent each data element, i.e. satellite, with its full elliptic orbit as a line. Since the data sets of space debris are already large and all orbit lines are drawn simultaneously, the problem of cluttering arises. Another aim of this thesis was therefore to compare the orbit line representation with a volume rendering representation, with the hypothesis that the volume rendering would reduce clutter and add a better way to view the density of the debris. With volume rendering, multiple parameters can be tweaked to get different results, for example the transfer function and the dimensions of the grid. Another interesting aspect of volume rendering is how the grid can be structured. Therefore, one aim of the thesis is to compare a Cartesian grid type with a spherical one to find the pros and cons of each.
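The core idea behind the single draw call approach can be sketched as follows: instead of one scene graph node (and one draw call) per orbit, all orbit trails of a data set are packed into one flat vertex buffer with per-orbit offsets, so the whole set can be issued in a single (multi-)draw call. The function and names below are illustrative, not OpenSpace's actual API:

```python
# Sketch: packing every orbit's trail vertices into one flat buffer so a whole
# data set can be drawn with a single call (e.g. glMultiDrawArrays-style),
# instead of one scene graph node and one draw call per data element.
def pack_orbits(orbits):
    """orbits: list of per-orbit vertex lists -> (buffer, first_indices, counts)."""
    buffer, firsts, counts = [], [], []
    for vertices in orbits:
        firsts.append(len(buffer))    # where this orbit's line strip starts
        counts.append(len(vertices))  # how many vertices it has
        buffer.extend(vertices)
    return buffer, firsts, counts

orbits = [
    [(7000.0, 0.0, 0.0), (0.0, 7000.0, 0.0)],                      # orbit A, 2 vertices
    [(8000.0, 0.0, 0.0), (0.0, 8000.0, 0.0), (0.0, 0.0, 8000.0)],  # orbit B, 3 vertices
]
buf, firsts, counts = pack_orbits(orbits)
# firsts -> [0, 2], counts -> [2, 3]
```

The `firsts`/`counts` arrays are exactly what a multi-draw style API consumes to render each line strip separately from the shared buffer.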

1.4 Research questions

The research questions of this report are divided into two groups, corresponding to the aims in section 1.3. On the one hand, a more efficient implementation was needed since the software became slow when visualizing space debris; on the other hand, the aim was to reduce cluttering. To improve on the performance of the individual scene graph node implementation (the previous implementation), an alternative implementation, called the single draw call implementation, is suggested, and with it follow the questions:

1. How significant an improvement can the single draw call implementation produce compared with the individual scene graph node implementation in terms of frame time?

2. Will the single draw call implementation always provide a performance improvement compared to the individual scene graph node implementation when the data input is scaled?

Regardless of whether the individual scene graph node implementation or the single draw call implementation of the orbit representation is in use, both will cause the same amount of cluttering, which this thesis aims to reduce with the use of a volume rendering representation of the space debris. From this follow the questions:

3. Will a volumetric rendering of space debris reduce the effect of cluttering?

4. What are the benefits of a volumetric representation that cannot be provided with an orbital representation?

5. In what aspects will the volumetric representation in a Cartesian grid be better than in a spherical grid, and vice versa?

In addition, it is of interest to be aware of the performance cost of using the volumetric representation. Therefore the following question is added:

6. How will the performance be affected by rendering the volumetric representation of space debris?


1.5 Delimitations

As described in section 1.1.2, there are estimated to be millions of pieces of space debris in orbit around Earth, and only a fraction of them are trackable. Access to the data for the tracked debris pieces is not trivial to get a hold of. Therefore, the data used in this thesis project has been limited to data sourced from NORAD, which is continuously updated and can easily be retrieved from www.celestrak.com [11]. Unfortunately, there are only 3963 data elements in total in the five data sets categorized as debris, in comparison to the about 34 000 trackable pieces mentioned in section 1.1.2.

OpenSpace is easily accessible and partly therefore has a wide variety of users, ranging from children to scientists. The visualizations of space debris created in this thesis project have had the general public as the target group. People in the target group can use the software and the visualizations created in this thesis project to build their own perception and understanding of how orbits, satellites, and space debris work. The focus has therefore not been to create a tool specifically for scientists.


2 Related work

There are a few other projects that have visualized space debris in orbit around Earth. Most of these visualizations utilize only one type of visualization method, which sparks the idea of trying different methods of visualizing space debris. By presenting these projects and methods and summarizing their strengths and weaknesses, a more solid motivation for the work done in this thesis is established.

2.1 Space debris visualizations

There are multiple existing space debris visualizations in different forms: pictures, animations, movies, and interactive 3D visualizations. The visualizations of interest for the general public are foremost the interactive ones. Most of the easy-to-access space debris visualizations use more or less the same type of visualization method, which is to render a dot at the location of each piece of debris at the current point in time. Clicking or hovering over a dot reveals some information about the object.

One example of a visualization like this is Stuff In Space [12]. It is a real-time visualization of objects in orbit around Earth on the web using WebGL. These objects come from data sets of both debris and satellites. The visualization itself relies mainly on differently colored dots which represent the positions of the objects at the current time. Stuff In Space also provides an orbital representation when selecting an object, along with some general information about it. It also shows an orbital representation of all pieces of debris within the same data set when a data set is selected. Stuff In Space suffers from a similar type of cluttering as is experienced in the OpenSpace representation, which this thesis sets out to solve. Another example of a similar visualization is made by DLR.de and is called Space debris viewer [13]. This space debris viewer also shows simple dots representing the position of each piece of debris. It has the same features as Stuff In Space when it comes to user interactivity. However, it lacks the orbital representation of multiple orbits from a data set.

At wearejust.com, a similar visualization can be found [14]. All three of these websites with space debris visualizations have one goal: to visually show the general public how much debris is in orbit. They are all simple visualizations with no intention of retrieving new information from the visualization. More complex space debris visualizations are difficult to find. There are a lot of papers and reports with analyses of events concerning space debris, such as the Chinese anti-satellite test (ASAT) in 2007 [15] and the collision between the operational US satellite Iridium 33 and a non-operational Russian communication satellite called Cosmos 2251 [16]. One or several more sophisticated software products are used to create images highlighting different aspects of these types of events. These software products are AGI products, which are not only visualization tools but also data analysis tools [17]. To use these products a membership is required, and they are therefore not used as visualization software for the general public.

As mentioned, most of these easily accessible space debris visualizations use more or less the same type of visualization method. Finding previous work or research on using volume rendering to visualize space debris proved very difficult; it may never have been done before. This is partly what sparked the idea of using volume rendering and ray casting to visualize space debris.

2.2 Volume rendering

Inspiration and motivation for why volume rendering was used in this project were taken from Alexander Bock [18]. Volume rendering has been used in visualizations of, for example, space weather data and atmospheric data. An example of this is an implementation by Lianqing Yu, who uses volume rendering to visualize spherically shaped atmospheric data that spans around the Earth [19]. The paper also tackles the issue of rendering spherical data, which is a problem faced in this thesis as well. Yu's method is to create a spherical shell from two spherical surfaces and four walls to encapsulate the atmospheric data.

There have been multiple works on the topic of illumination in volume rendering, which is important for creating a comprehensible result. An example is the proposal of image plane sweep volume illumination, which incorporates advanced illumination techniques into a volume ray caster with the help of the plane sweep paradigm [20]. There is also the use of Historygrams, which extends photon mapping techniques for interactive volumetric global illumination by reusing photon media interactions [21]. A comprehensive survey of the existing illumination techniques for interactive volume rendering was done by Daniel Jönsson, Erik Sundén, Anders Ynnerman and Timo Ropinski in 2014 [22]. The survey reviews the current techniques used in the field and discusses their future possibilities and challenges.

An important factor in volume rendering is the design of the transfer function. There are multiple proposals on how this can be done relating to the work in this thesis. An overview of the state of the art in transfer functions has been written by Patric Ljung, Jens Krüger, Eduard Gröller and Markus Hadwiger [23]. This report classifies research done on the topic of transfer functions and discusses the development of next-generation tools and methods for transfer functions. An interesting proposal is the use of local histograms when designing transfer functions. This has been done, for example, for capturing the characteristics of tissue [24], achieved by incorporating domain knowledge of the spatial relations of the data sets into the transfer function. Another method used for the detection of tissue is the alpha-histogram, which acts as an enhancement when creating transfer functions by amplifying the ranges that correspond to spatially coherent materials [25]. The transfer function can also be used to handle large data sets, where large parts of a volume would give little to no contribution to the finished rendering. This is done by using transfer functions as a guide in creating a scheme that selects the level of detail at decompression time [26].

There are also visualizations using volume rendering done in OpenSpace. These have been used to visualize space weather data. An example is the work done by Martin Törnros [27], in which he modeled space weather events using volume rendering techniques. Törnros also developed an implementation for ray casting in models that are defined in non-Cartesian coordinate systems. This method is used in this thesis when approaching similar obstacles.

A thesis project done within OpenSpace by Oskar Carlbaum and Michael Novén [28] uses volumetric data for space weather data. In their case they suggest that volumetric data often needs to be down-sampled for an interactive software like OpenSpace to be computationally efficient. However, the space debris data used in this thesis project can be seen as scatter plots in 3D when used with volume rendering. The number of data elements is therefore not large enough to warrant down-sampling.

Volume rendering has a very broad spectrum of use in visualization and 3D computer graphics. For example, a common use is in the visualization of medical data, where segments from a CT or MRI scan are combined into a volume. This provides an alternative way of examining the 2D images provided by these scans. However, these methods and use cases fall outside the scope of this thesis and will not be delved into, but they deserve mention as the topic of volume rendering is quite extensive.


3 Theory

To fully understand the methods and concepts used in the implementation of this project, some explanations of the basics are given in this chapter. The topics presented are fundamental for understanding how the implementation process unfolded and why certain choices were made. For example, this chapter clarifies the fundamentals of how an orbit is described by parameters, which is needed to understand how the visualized space debris orbits are calculated.

3.1 Scene graph

OpenSpace uses a scene graph as the data structure that keeps track of position, orientation, size, etc. for each object in the scene [29]. A scene graph is a general data structure used in many different types of applications, for example graphic animation and game development. A scene graph is built up of scene graph nodes, where an object in the scene corresponds to one node. Often the scene graph is created as a tree structure to establish a hierarchy, where an object's position, orientation and size relate to its parent node. The order of execution in a scene graph can be determined in many different ways; for a tree structure, depth-first or breadth-first traversal is usually used, depending on the application. OpenSpace uses a directed acyclic graph (DAG) instead of a tree structure. The DAG has a topological sorting that guarantees no cycles, and each node has dependencies that are executed before the node itself. In this thesis project, the nodes used to visualize the space debris really only have one dependency, which is the barycenter of Earth.
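As a small illustration of the execution-order guarantee (the node names are hypothetical, not OpenSpace's), a topological order over a dependency DAG can be computed with Kahn's algorithm, so that every dependency appears before its dependents:

```python
# Sketch: topological ordering of a dependency DAG (Kahn's algorithm).
# Each node's dependencies come earlier in the returned order.
from collections import deque

def topological_order(deps):
    """deps maps node -> list of nodes it depends on."""
    nodes = set(deps) | {d for ds in deps.values() for d in ds}
    indegree = {n: 0 for n in nodes}
    dependents = {n: [] for n in nodes}
    for node, ds in deps.items():
        for d in ds:
            indegree[node] += 1
            dependents[d].append(node)
    ready = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in dependents[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    return order

# Hypothetical scene: debris trails and Earth both depend on Earth's barycenter.
deps = {"DebrisTrails": ["EarthBarycenter"], "Earth": ["EarthBarycenter"]}
order = topological_order(deps)
# "EarthBarycenter" is guaranteed to come before both of its dependents.
```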

3.2 Volume rendering

Volume rendering is a technique in scientific visualization used to visualize sets of volumetric data. A volume, in the context of volume rendering, is a three-dimensional set of voxels. Each of these voxels represents a scalar value of the quantity to be visualized, e.g. density or temperature. There are several different methods of volume rendering, but the methods in this thesis are based on volume ray casting and the use of transfer functions.

3.2.1 Volume ray casting

The basis of ray casting is to produce a ray for each pixel on the image plane from the camera's viewpoint in the scene. By sampling at regular intervals along the ray, each sample corresponds to a data value as the ray travels through the volume. A visual representation of this can be seen in Figure 3.1. These values are then interpolated, and the finalized value can be mapped to a color and an alpha value with the use of a transfer function. The final color and opacity of the pixel in question are decided by these accumulated colors and alpha values. A common implementation of a volume ray caster was first presented by Krüger and Westermann [30], which renders the bounding box encapsulating the volume twice, in two passes.

In the first pass, the front faces of the bounding box are rendered to a 2D RGB texture. The value of this texture is based on the vertices' xyz-coordinates. This is done to determine the entry points of the rays, as the color components in the texture correspond to the intersection between the ray and the volume. In the second pass, the same procedure is performed but with the back faces of the bounding box instead, so that the exit point of each ray can be determined. When the entry and exit points are determined for each pixel, the color can be determined by sampling along the ray as described above.

Figure 3.1: Illustration showing the process of a single ray being cast through a 3D volume. The dot in the image plane represents the pixel which is currently being calculated. The dots in the 3D volume represent the sample points along the ray.
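The two-pass texture trick exists to find where each ray enters and exits the volume's bounding box. The same entry and exit distances can also be computed analytically with a slab test, sketched here on the CPU. This is not the thesis' implementation, only an illustration of what the two passes produce:

```python
# Sketch: slab test giving the ray parameters t at which a ray enters and
# exits an axis-aligned bounding box; sampling then runs between the two.
import math

def ray_box_entry_exit(origin, direction, box_min, box_max):
    """Returns (t_entry, t_exit) or None if the ray misses the box."""
    t_near, t_far = -math.inf, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if not (lo <= o <= hi):
                return None            # parallel to this slab and outside it
            continue
        t_a, t_b = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t_a, t_b))
        t_far = min(t_far, max(t_a, t_b))
    if t_near > t_far or t_far < 0.0:
        return None                    # box missed, or entirely behind the ray
    return max(t_near, 0.0), t_far

hit = ray_box_entry_exit((-2.0, 0.5, 0.5), (1.0, 0.0, 0.0),
                         (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
# hit -> (2.0, 3.0): the ray enters the unit box at t = 2 and exits at t = 3
```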

3.2.2 Transfer function

As described in section 3.2.1, a transfer function is used to map a data value to a color and an opacity. The value is mapped to an RGBA vector, making it possible to take a scalar input, typically between 0.0 and 1.0, and return an RGB color of choice together with an alpha value. The contributions from the samples are iteratively added from front to back. How the final color and alpha value for the pixel are calculated can be seen in equation 3.1, where C_acc,i and α_acc,i stand for the accumulated color and alpha value respectively, i represents the current sample, C is color and α is alpha value.

C_acc,i = C_acc,i-1 + (1 - α_acc,i-1) · C_i
α_acc,i = α_acc,i-1 + (1 - α_acc,i-1) · α_i        (3.1)
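The recurrence in equation 3.1 can be written directly as a front-to-back compositing loop. In this sketch the sample values are made up, and C_i is taken to be the opacity-weighted (associated) color delivered by the transfer function:

```python
# Sketch of equation 3.1: front-to-back accumulation of per-sample color and
# alpha along one ray, with optional early termination once nearly opaque.
def composite_front_to_back(samples):
    """samples: (C_i, alpha_i) pairs ordered front to back; C_i is a single
    opacity-weighted color channel here, to keep the sketch scalar."""
    c_acc, a_acc = 0.0, 0.0
    for c, a in samples:
        c_acc = c_acc + (1.0 - a_acc) * c
        a_acc = a_acc + (1.0 - a_acc) * a
        if a_acc >= 0.99:              # early ray termination
            break
    return c_acc, a_acc

color, alpha = composite_front_to_back([(0.25, 0.5), (0.25, 0.5)])
# color -> 0.375, alpha -> 0.75
```

The early-termination branch is a common optimization: once the accumulated alpha is close to 1, samples further along the ray can no longer contribute visibly.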

3.3 Orbits

As mentioned in section 1.1.3, eight parameters are used to describe the orbit trajectory and position. Some parameters use the equatorial plane as a reference. This plane of reference for satellites around Earth is a plane that cuts through the equator. It uses the equatorial coordinate system, which has its origin at the center of Earth. The coordinate system has its primary direction from Earth to the first point of Aries, a celestial reference point; call that direction the x-direction. It has its z-direction through the north pole and its y-direction defined as orthogonal to the xz-plane in a right-handed convention. The plane of reference is the xy-plane, which cuts through the equator of the Earth, depicted as the grey ellipse in the illustration in Figure 3.2 [31].


Figure 3.2: Depicting an orbit using ν for true anomaly, ω for argument of periapsis, Ω for longitude of the ascending node, Υ for the reference direction and i for inclination, with the ascending node marked where the orbit crosses the plane of reference

The orbital trail has an ascending and a descending node, where it crosses the plane of reference. One parameter in the TLE-file, called right ascension of the ascending node, also known as the longitude of the ascending node, is the angle from the reference direction, the direction pointing to the first point of Aries, to the ascending node [32]. Inclination, another parameter in the TLE-file format, also uses the ascending node and the plane of reference. The inclination is defined as the angle from the plane of reference to the orbital plane [33]. The argument of periapsis, or in the case of a celestial object orbiting Earth, the argument of perigee, is the angle from the ascending node to where the satellite has its lowest altitude in relation to its host, i.e. Earth in this case. In Figure 3.2, the true anomaly is shown. True anomaly is not a parameter in the TLE-file; instead, the mean anomaly is used. Mean anomaly is defined as the fraction of a theoretical orbit's period that has elapsed since the satellite passed the perigee. This applies under the condition that the theoretical orbit in question is perfectly circular and that the satellite has constant speed, but the same period as the actual elliptic orbit being calculated. The mean anomaly in the TLE-file is expressed as an angle in degrees. Eccentricity describes how elliptic an orbit is. It is a number between 0 and 1, where 0 is perfectly circular; as soon as the eccentricity of an orbit is greater than or equal to 1, the object is no longer in a closed orbit. An orbit with an eccentricity close to 1 is therefore extremely elliptic, with a high apogee and a low perigee. Mean motion is defined in the TLE-file as the number of revolutions per day an orbit makes [34]. The last two of the eight parameters that describe the position and orbital trail in the TLE-file can be combined; they describe the epoch. The first one is two digits which represent the last two digits of the year, and the second parameter represents the day of the year and the fractional portion of the day of the epoch. Epoch is a point of reference in time used as a temporal point of origin [10].

Orbits around Earth are categorized into three types: HEO (high earth orbit), MEO (medium earth orbit) and LEO (low earth orbit), where HEO is at an altitude of ≥ 35780 km, MEO: 2000-35780 km and LEO: 180-2000 km. In HEO, a satellite with an orbit 42164 km from the center of the Earth and with inclination and eccentricity at 0 will match the rotation of the Earth and will therefore appear stationary in the sky, seen from Earth, at all times. This type of orbit is called a geostationary orbit. This is where weather monitoring satellites and communication satellites, for phones and television, are placed. In LEO there is a type of orbit called sun-synchronous orbit, which is close to a polar orbit but with a slight angle. When a sun-synchronous satellite crosses the equator, the local solar time on the ground right below the satellite is always the same. Satellites in sun-synchronous orbits are used for science applications where changes, for example in the climate, are consistently monitored over time [35].

As seen in Figure 3.3, the orbits of the debris form a ring around the poles. This is caused by satellites launched into sun-synchronous orbits. Not a lot of satellites are launched in perfect or near-perfect polar orbits, because most require a sun-synchronous or another type of synchronous orbit. Another reason why almost no space debris orbits over the poles is that an orbit over the poles is less stable due to the Earth not being completely spherical. The Earth has a longer equatorial diameter compared to its polar diameter. The variation of gravitational pull at different locations on the Earth therefore affects an object's orbit.

Figure 3.3: Image showing how orbits tend to not orbit over the poles, instead forming a ring around them due to satellites launched into sun-synchronous orbit.


4

Implementation

This part will describe the implementation process of two different visualization methods to visualize space debris. One of them visualizes the space debris orbits and positions. The other method is a volume rendering representing the density of the debris in each area represented by a voxel in a grid. This grid encapsulates the Earth and all the space debris. The volume rendering was implemented with two different types of grids, cartesian and spherical. The orbital representation already implemented in OpenSpace handles each piece of debris' position as one scene graph node and uses another scene graph node for its orbit. When displaying all data sets of debris, with a total of 3936 data elements, the usage of 7872 scene graph nodes is therefore required. However, this individual scene graph node implementation can disable all scene graph nodes for the positions of the pieces of debris, to only visualize the orbits of the debris.

OpenSpace had a lot of useful functionality prior to starting this thesis project, such as interpreting the TLE-file data. The work done in this thesis is foremost written in the programming language C++ but also uses GLSL (OpenGL Shading Language) and some Lua. We propose an implementation that utilizes scene graph nodes more sparingly and effectively, an implementation that only uses one scene graph node for each data set. This single draw call implementation also calculates the fade of the orbit line in the fragment shader, using the position of the piece of debris calculated in the vertex shader. By doing these calculations in the shaders instead of in the render function in the C++ code run on the CPU, the implementation is anticipated to run faster, since shader code is run on the GPU. An algorithm for reading the data, calculating the orbital trails and drawing the orbits is implemented. This single draw call implementation will be described below.


4.1

Orbital representation

One implementation of the visualization method we call orbital representation in this thesis will be described in this chapter. The implementation is the single draw call implementation. The basic idea of an orbital representation is that a line is drawn for each piece of space debris' orbital trajectory. At a given position on the line, the line will fade out with time as the debris piece gets further and further away from that position. The orbit line will therefore be transparent just ahead of the position of the debris, and fully opaque at the actual position. In Figure 4.1 an illustration of a faded orbital trail line is shown.

Figure 4.1: Illustrating how the fading of an orbit's trail line looks.

4.1.1

Constructing the vertex buffer

The orbital data comes from the orbital parameters described in TLE-files; the details of the TLE format and contents are described in chapter 3.3. The relevant parameters needed to calculate the position of the debris are read from the TLE-file for each piece of debris in the data set, and the parameters are stored in a vector data container. The position for each vertex can then be calculated using equation 4.1, where rot_asc is a rotation around the z-axis to place the location of the ascending node, rot_inc is a rotation around the x-axis to get the correct inclination and rot_per is another rotation around the z-axis to place the closest approach (periapsis) at the correct location. The rotations are based on an assumed coordinate system where the z-axis is chosen as the axis around which the current orbit is rotating. The x-axis is pointing towards the first point of Aries, which is used as a reference point for determining positions of objects in orbit. The y-axis completes the right-handed system. These assumed coordinate systems are therefore different for each orbit that is calculated. In equations 4.2 and 4.3, C represents the semi-major axis, E_ano is the eccentric anomaly and e is the eccentricity of the orbit in question [36].

[x, y, z]^T = rot_asc · rot_inc · rot_per · [A, B, 0.0]^T   (4.1)

A = C · 1000 · (cos(E_ano) − e)   (4.2)

B = C · 1000 · sin(E_ano) · √(1.0 − e²)   (4.3)
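As an illustrative sketch of equations 4.2 and 4.3, the in-plane coordinates of a debris piece, before the three rotations of equation 4.1 are applied, might be computed as follows. The function name and signature are assumptions for this example; the semi-major axis C is taken in kilometers, hence the factor 1000 to meters:

```cpp
#include <cmath>

// In-plane position (A, B) per equations 4.2-4.3, before the ascending
// node, inclination and periapsis rotations of equation 4.1 are applied.
// C: semi-major axis in km, Eano: eccentric anomaly in radians,
// e: eccentricity. Outputs are in meters.
void inPlanePosition(double C, double Eano, double e,
                     double& A, double& B) {
    A = C * 1000.0 * (std::cos(Eano) - e);
    B = C * 1000.0 * std::sin(Eano) * std::sqrt(1.0 - e * e);
}
```

For a circular orbit (e = 0) this reduces to a point on a circle of radius C · 1000, as expected.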

With the positions calculated, the vertex buffer can now be filled. Along with the xyz-values, three other parameters are also fed to the buffer for each vertex: a time offset, the orbit's epoch and its period. A visual representation of this can be seen in Figure 4.2. The time offset gives a percentage of how far along the orbit the vertex in question is in relation to the epoch of the orbit. These three parameters will be used in the vertex shader and fragment shader, which will be covered in more detail in later sections.


The vertices are put in order in the vertex buffer, as seen in Figure 4.2. To distinguish what orbit a particular vertex belongs to, the number of vertices in each orbit needs to be stored. Knowing the number of vertices per orbit lets us know what epoch value, time offset and period value to associate the vertex position with. With all vertices of one orbit put in order, representing a whole orbit, the vertices of the remaining orbits are also put in order, as in Figure 4.3. The filled vertex buffer now holds all orbits in the data set.

Figure 4.2: Showing vertices in the vertex buffer; each vertex stores x, y, z, time offset, epoch and period

Figure 4.3: Showing the full vertex buffer, with all orbits stored next to each other

4.1.2

Rendering

As previously explained, the vertex buffer in this implementation holds all orbits in a data set at once instead of using a separate buffer for each orbit. The orbits are rendered with glDrawArrays using the GL_LINE_STRIP primitive. For each orbit, a GL_LINE_STRIP is drawn from a given starting index in the buffer using n vertices, where n is the number of vertices in that orbit. When an orbit has been drawn, the starting index is advanced by n + 1 and the next orbit is drawn starting from that index. The "+1" skips the line segment that would otherwise connect the last vertex of one orbit with the first vertex of the next orbit in the vertex buffer, which is not desired. This has to be done since the vertex buffer contains vertices for all orbits in the data set.
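The per-orbit draw loop described above can be sketched as follows. This is a hedged sketch, assuming vertexCounts[i] holds the number of vertices of orbit i and that the start index advances by n + 1 as the text describes; the actual glDrawArrays call is shown as a comment since it requires a GL context:

```cpp
#include <vector>
#include <cstddef>

// Compute the starting index of each orbit's GL_LINE_STRIP in the
// shared vertex buffer, advancing by n + 1 to skip the segment that
// would connect consecutive orbits.
std::vector<std::size_t> orbitStartIndices(
        const std::vector<std::size_t>& vertexCounts) {
    std::vector<std::size_t> starts;
    std::size_t first = 0;
    for (std::size_t n : vertexCounts) {
        starts.push_back(first);
        // The real renderer would issue here:
        //   glDrawArrays(GL_LINE_STRIP, first, n);
        first += n + 1; // "+1" skips into the next orbit's data
    }
    return starts;
}
```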

4.1.3

Shader implementation

The major advantage of this new shader implementation is that it simplifies the calculations for positions on the orbits. An initial calculation of the orbit positions is done only once to save computational time.

As described earlier, the position is passed as xyz-values to the vertex shader. To render the objects in camera space, the values are multiplied with a model view transform. The proper depth of the position in the scene is decided by multiplying the view space position with a projection transform. This results in a vector with four elements. After normalizing this vector, the fourth element represents the depth and is passed on to the fragment shader.


The current position of the debris is represented by the fade of the orbit itself, as the orbit line will have no opacity just before the debris position and max opacity just after it. In this implementation, the fade is calculated as a fraction. This is done by first calculating how big a fraction of the orbit the piece of debris has traveled relative to the epoch, as well as the fraction of the orbit at which the current vertex that the shader code is operating on lies. This is done in the vertex shader, and the two fraction values are passed to the fragment shader. To calculate the fraction of how far the debris has traveled along the orbit, the number of finished rotations since the epoch is first calculated. This is done by subtracting the epoch from the current timestamp and dividing the result by the period. The fractional part of the resulting value is the completed fraction of the current orbit rotation. This can be seen in equation 4.4. To calculate the fraction of how far along the orbit the current vertex is, the time offset parameter is divided by the period, which can be seen in equation 4.5. In the fragment shader, the difference between the fractions is calculated. If the difference is negative, one is added to it. Finally, this value is inverted to produce a fraction that is multiplied with a parameter called lineFade. This parameter connects to the GUI of OpenSpace, making it possible for a user to modify the fade of the orbit line at run time. The calculations for the fade are shown in equation 4.6. The FadeParameter needs to be clamped between 0.0 and 1.0. This final value can be multiplied with the current fragment's opacity to create the fade.

Rotations = (timestamp − epoch) / period

DebrisFrac = frac(Rotations)   (4.4)

OffsetFrac = timeOffset / period   (4.5)

Diff = DebrisFrac − OffsetFrac
Invert = 1 − Diff
FadeParameter = Invert · lineFade   (4.6)
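The fade computation of equations 4.4-4.6, including the negative-difference wrap and the final clamp, can be sketched in plain C++ as follows. In the thesis this runs per fragment in GLSL; the function here is an illustrative CPU-side equivalent with assumed parameter names:

```cpp
#include <cmath>
#include <algorithm>

// Fade factor per equations 4.4-4.6. lineFade is the user-controlled
// GUI parameter; the result is clamped to [0, 1] as described.
double fadeFactor(double timestamp, double epoch, double period,
                  double timeOffset, double lineFade) {
    double rotations = (timestamp - epoch) / period;
    double debrisFrac = rotations - std::floor(rotations); // frac(), eq. 4.4
    double offsetFrac = timeOffset / period;               // eq. 4.5
    double diff = debrisFrac - offsetFrac;
    if (diff < 0.0) diff += 1.0;       // wrap negative differences
    double invert = 1.0 - diff;
    double fade = invert * lineFade;   // eq. 4.6
    return std::clamp(fade, 0.0, 1.0);
}
```

A vertex exactly at the debris position yields a fade of lineFade, while a vertex just ahead of the debris wraps to a value near zero, which produces the fading trail of Figure 4.1.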


4.2

Volumetric representation

In addition to the different implementation of the orbital representation, a volumetric implementation was created. This was done using two different grids, cartesian and spherical. How this was implemented is described in this chapter.

4.2.1

Volume rendering space debris

Unlike the orbital representation, a volumetric representation was never previously implemented. Using volume rendering and ray casting to visualize space debris has, to our knowledge, never been tested. As chapter 2.1 implies, many other representations and visualizations of space debris use the same methods, which sparked the idea of testing this. The volume rendering is not run in real-time and is not as easily enabled, due to how OpenSpace currently is built and how the data needs to be processed. In the existing implementation, the volume data must first be generated separately before it can be rendered. This means that it cannot be done in real-time. First, a file is run from the OpenSpace Taskrunner. This file specifies the type of volume, the dimensions of the grid, the lower and upper bounds of the bounding box, the input file, the output files, the start time, the time step, the end time and the grid type. These parameters are interpreted in the volume C++ class. This class will create raw volume data. Using the OpenSpace command prompt, another file is run. This file specifies what data to use, what transfer function to use, the step size for the ray in the raycaster, the min and max values used to clamp the data output, in which folder of the GUI-menu the volume will be placed, and more. The output files that are created from the C++ code are one .rawvolume-file containing the data of the volume and one meta data file.

4.2.2

Volume grids

The C++ class specified in the task first reads the TLE-data file and loops over each time step, starting at the specified start time. A position buffer is then constructed with the positions of all pieces of debris in the data set. Knowing the size and resolution of the grid and the position of a piece of debris, the index of the voxel in the grid that the piece of debris is within can be calculated. The index for a voxel in a cartesian grid is calculated by first moving the point where the indexing of the grid starts, as in Figure 4.4, so that index zero does not start at the origin of Earth and the grid is instead centered around the Earth.

Figure 4.4: 2D illustration of moving the point where the indexing of a cartesian grid starts, so the grid centers around Earth instead of having its index zero at the origin of Earth.


The total size of the grid is two times the maximum apogee in all dimensions, because that guarantees that the piece of debris with the highest apogee of all is within the grid at all times, even when it is at its apogee. The shift of the grid is done by adding the maximum apogee to the position in all dimensions. A dimension coordinate is then constructed, defined by equation 4.7, where each dimension on the right side of the equation is floored to the closest integer before being assigned to dimcoord, the dimension coordinate.

dimcoord = ( (p.x + MA) · res.x / (2 · MA),
             (p.y + MA) · res.y / (2 · MA),
             (p.z + MA) · res.z / (2 · MA) )   (4.7)

In equation 4.7, p is the position of the debris, MA is the maximum apogee and res is the resolution in each dimension. Knowing the dimension coordinate, the index is the result of equation 4.8.

index = dimcoord.z · (res.x · res.y) + dimcoord.y · res.x + dimcoord.x   (4.8)

Calculating the index of a voxel in a spherical grid is different, since index zero should now be at the origin of Earth. There is therefore no shifting of the grid, and thus no adding of MA in equation 4.9. For the spherical grid, the x, y and z correspond to the radius, theta and phi, in that order. Notice also that each divisor in equation 4.9 is the maximum value that the grid can have in that spherical coordinate.

dimcoord = ( p.x · res.x / MA,  p.y · res.y / π,  p.z · res.z / (2 · π) )   (4.9)

A density array, with a size equal to the product of the resolutions in all three dimensions, is created. The indices of the array correspond to the indices of the voxels within the grid. The value at an index in the density array is increased when the position of a piece of debris corresponds to the index of that voxel. That is, if the piece of debris is within voxel k, then the value at index k in the density array is increased. For the cartesian grid, the value is increased by one, but for the spherical grid the sizes of the voxels vary. Therefore the density contribution from a single piece of debris in a voxel is normalized by the volume of the voxel, i.e. one divided by the volume.


To calculate the volume of a voxel in a spherical grid, a few things need to be known: the index of the voxel, the resolution of the grid in all dimensions and the max radius of the grid. The max radius of the grid is the maximum apogee of all space debris orbits. Unlike in a cartesian grid, a voxel in a spherical grid will not be a cube, as previously mentioned. Figure 4.5 shows a voxel in a spherical grid.


Figure 4.5: A voxel in a spherical grid

The volume of a voxel is calculated by equation 4.10, where r is the radius, Φ is the angle phi, Θ is the angle theta and R denotes the boundaries in each dimension for the specific voxel.

volume = ∭_R r² sin(Θ) dr dΘ dΦ   (4.10)
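Since the integrand of equation 4.10 separates in r, Θ and Φ, the voxel volume has a closed form. A sketch of evaluating it for one voxel, with assumed bound names, might look like:

```cpp
#include <cmath>

// Closed-form evaluation of equation 4.10 for one spherical-grid voxel
// bounded by [r0, r1] in radius, [t0, t1] in theta (polar angle, within
// [0, pi]) and [p0, p1] in phi (azimuth, within [0, 2*pi]).
double voxelVolume(double r0, double r1,
                   double t0, double t1,
                   double p0, double p1) {
    double radial  = (r1 * r1 * r1 - r0 * r0 * r0) / 3.0; // ∫ r² dr
    double polar   = std::cos(t0) - std::cos(t1);         // ∫ sin(Θ) dΘ
    double azimuth = p1 - p0;                             // ∫ dΦ
    return radial * polar * azimuth;
}
```

As a sanity check, integrating over the full sphere of radius 1 gives 4π/3. The sin(Θ) factor is also what makes voxels near the poles small, which relates to the polar pinch discussed in chapter 5.2.2.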

4.2.3

Sequence of volumes

In chapter 4.2.1 a starting time, a time step and an end time are mentioned. These are specified in the .task-file and are used to create a sequence of volumes. If there is only one volume, it will only depict the density of the space debris at one specific moment, namely the specified starting time. The volume is however shown at any time step of the OpenSpace simulation clock after that. To create a single volume, the starting time and end time simply have to be the same timestamp, regardless of the time step. If the time step is bigger than the difference between the starting time and end time, only one volume will be created. The time step is specified in seconds, and for each time step a new volume is created. There will be two new output files for each volume: one containing raw volume data and one containing meta data that specifies the parameters of the volume.
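The rules above determine how many volumes a sequence produces. A small sketch follows; the exact count formula (one volume per full time step plus the starting one) is an assumption consistent with the described behaviour, not taken verbatim from the implementation:

```cpp
// Number of volumes generated for a [startTime, endTime] range with a
// given timeStep (all in seconds). A single volume is produced when the
// range is empty or the step exceeds the range, as described above.
long long volumeCount(double startTime, double endTime, double timeStep) {
    if (endTime <= startTime || timeStep > (endTime - startTime)) {
        return 1; // only one volume is created
    }
    return static_cast<long long>((endTime - startTime) / timeStep) + 1;
}
```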


5

Results

Due to low performance and low frame rate, visualizing the space debris covering our Earth in OpenSpace has not been very convenient up to this point. The result of this thesis has however made it possible to do just that more smoothly, and it is now more convenient to visualize due to higher performance. It is now also possible to visualize the debris using volume rendering in OpenSpace. Since this thesis is divided into two main objectives, performance improvement for the orbital representation and trying the alternative visualization method with volume rendering, so is this chapter.

5.1

Performance

The performance tests done on the orbital representation in this project gave insight into how significant the performance improvement of the single draw call implementation was compared to the individual scene graph node implementation. In this section these results will be presented, as well as measurements for the volume rendering.

5.1.1

Performance on orbital representation

To measure the performance, the metric seconds per frame (SPF) was used. The main focus for the result was to compare the implementation using one scene graph node for each piece of debris with the one using one for each data set, to see how much the performance improved. In OpenSpace there is a frame rate written to the screen. It is calculated by timing the rendering; one iteration of the rendering loop results in one frame. Before the frames per second (FPS) value is written to the screen, it is also written to a file. It was this file that was used to calculate the result as an average SPF. The test was run on a desktop computer with the specifics shown in Table 5.1.

PC specifics         Description
CPU                  Intel i7-4770 3.4 GHz
GPU                  GeForce GTX 1070 Ti 8 GB
RAM                  32 GB 1600 MHz
OS                   Windows 10
Monitor resolution   1920x1200

Table 5.1: Hardware and OS specifics for the computer that ran the performance tests.

The test was run for one hour per data set. To keep the tests coherent they were all tested under the same circumstances. All tests were run in full screen on the same monitor. The simulation time in OpenSpace was set to start at a specific time (25th of June 2019 00:00:00 GMT) instead of the real-time to give the tests the same prerequisites. The first 500 samples from each test are not taken into account when averaging the SPF. No other applications were running at the same time other than potential background processes.


The results for the one hour tests are summarized in Table 5.2. Note that the last row depicts the result when all data sets are run simultaneously; these tests were run overnight for approximately 17 hours. Table 5.2 also contains a column named "Time ratio in %", which states how much time, in percent, the single draw call implementation took in comparison to the individual scene graph node implementation. The ratio is calculated according to equation 5.1, where ft1 is the frame time for the single draw call implementation and ft2 is the frame time for the individual scene graph node implementation.

Time ratio = (ft1 / ft2) · 100   (5.1)

Data set          ms/frame ISGNI    ms/frame SDCI    Time ratio in %
Breeze-m 2012     4,998986 mspf     4,77283 mspf     95,4759 %
India ASAT 2019   6,311607 mspf     4,79549 mspf     75,9789 %
Iridium 33        11,08822 mspf     5,34262 mspf     48,1829 %
Cosmos 2251       25,94926 mspf     6,11757 mspf     23,5751 %
Fengyun           76,33102 mspf     7,02952 mspf     9,20925 %
All combined      117,5122 mspf     8,75554 mspf     7,45075 %

Table 5.2: Average results in milliseconds from 1h tests of ISGNI (individual scene graph node implementation) and of SDCI (single draw call implementation), as well as a time ratio in percent.

Figure 5.1 compares the performance test results to give a better indication and understanding. On the x-axis is the number of data elements in the data set and on the y-axis the milliseconds per frame for the corresponding data set. The blue line is the individual scene graph node implementation and the red is the single draw call implementation.

Figure 5.1: Comparison between ISGNI (individual scene graph node implementation) and SDCI (single draw call implementation) of their frame time in relation to the number of data elements


In addition to the run-time performance tests, a test was performed measuring the start-up time for OpenSpace. The result of that test is seen in Table 5.3. The results are the average of five tests timed manually using the timer on a mobile phone; the results therefore have a ± human reaction error added. The format of the results is minutes:seconds,fraction of a second. The column named "Time ratio in %" shows how much time in percent the single draw call implementation took compared to the individual scene graph node implementation, also using equation 5.1.

Data set          time ISGNI    time SDCI    Time ratio in %
Breeze-m 2012     00:05,706     00:05,160    90,4311 %
India ASAT 2019   00:08,098     00:05,226    64,5345 %
Iridium 33        00:20,564     00:05,100    24,8006 %
Cosmos 2251       01:02,234     00:05,122    8,23023 %
Fengyun           03:16,788     00:05,254    2,66988 %
All combined      06:26,430     00:05,352    1,38499 %

Table 5.3: Comparison of start-up time in minutes:seconds,fraction of a second, where ISGNI is the individual scene graph node implementation and SDCI is the single draw call implementation.

5.1.2

Performance of volumetric representation

It is of interest to see whether using the volumetric representation is fast enough to be practical. Therefore a test was done rendering only the density volume and the Earth in the scene. Just like the other performance tests, this test was done by calculating an average of frame times during the first hour after start-up and under the same circumstances as described earlier. In addition, however, the volume was run using a ray casting sampling step size of 0.01, where the total size is 1 between the entry and exit points of the ray. The grid size was set to 32x32x32. Also, the first 1000 measurements were ruled out instead of the previous 500, to account for the process required to start up and include the volume. A volume represents the density at one specific moment, and since the positions of the debris are not updated while running the scene with one volume, and the sample step size is fairly low, the performance was expected to be close to that of running only the Earth without any satellites. These results can be seen in Table 5.4, in milliseconds per frame and a ratio in percent. The ratio describes the time it took to render each grid in comparison with not rendering a volume.

Volume           ms/frame         Time ratio in %
Spherical grid   4,900782 mspf    105,207 %
Cartesian grid   4,659940 mspf    100,037 %
No volume        4,658209 mspf    -

Table 5.4: Performance of volume rendering

These results were obtained on the desktop computer with the specifics listed in Table 5.1. The resolution of the monitor as well as the step size are big factors in the performance results. The more pixels that cover the area of a volume, the more rays the ray caster will use. A 4K HD monitor or projector will have 4 times more rays. A step size of 0.01 equals 100 samples per ray and 0.001 equals 1000 samples. In a dome, for example, where there may be six 4K HD projectors with stereo (3D), the performance will be significantly more affected than what the tests on the desktop computer showed.


5.2

Perception / comprehension

Two types of volume rendering have been tested in this thesis: one representation utilizing a cartesian grid and one that uses a spherical grid. The two representations differ slightly from each other and come with their own pros and cons. These results will be presented in the following sections. The goal was to test an alternative approach to visualizing space debris, with the aim to reduce clutter from the orbital representation and add an alternative way to represent debris. However, perception and comprehension of images can be a controversial topic and is highly dependent on the beholder. In this section, only objective comparisons and observations of the results will be given.


5.2.1

Cartesian grid

The result of the volume rendering using a cartesian grid can be seen in Figure 5.2. As shown in the screenshots in Figure 5.2, at a low grid resolution the representation is cloudy and each voxel covers a larger area, making it less precise. At higher resolution, precision increases but the representation gets closer to a dot representation and acts less like a volume. Since the cartesian representation is based on a cartesian grid, the voxels are cuboids. This results in a representation with some visible corners at certain places, mostly around the edges. These artifacts decrease in visibility at higher resolution grids but are hard to avoid when using a cartesian grid.

The step size used when screen capturing the pictures for Figure 5.2 was set to 0.001, unlike in the performance tests where it was set to 0.01.


Figure 5.2: Volume based on a cartesian grid with the resolution (a): 16x16x16, (b): 32x32x32, (c): 64x64x64 and (d): 128x128x128


5.2.2

Spherical grid

The result of the volume rendering using a spherical grid can be seen in Figure 5.3. The spherical representation behaves similarly to the cartesian one when increasing and decreasing the resolution. Notable is that a uniformly sized grid is not optimal in the spherical representation; by limiting the resolution in certain axes a more satisfying result is obtained. This is because the dimension theta is bounded by 0 and π, and phi by 0 and 2π. The spherical representation has an issue that is common with spherical grids, called polar pinch. Around the poles many voxels exist, and the voxels are therefore small and elongated as they converge to a single point at the poles. Since the density of debris is high around the poles, there can be sharp artifacts in these areas when using lower resolution grids. Around the edge of the volume, best seen in Figure 5.3a, it is smoother compared to the cartesian grid in Figure 5.2a. It no longer has the staircase pattern that a cartesian grid with low resolution would have.


Figure 5.3: Volume based on a spherical grid with the resolution (a): 16x16x32, (b): 32x32x64, (c): 64x64x128 and (d): 128x128x256


5.2.3

Transfer function

The screenshots seen in Figures 5.2 and 5.3 used the same transfer function. The colors go from a green color where the density is low, to a blue, and lastly to a red where it is dense. They all had the same alpha values: 0, 50, 250. The alpha values, as well as the color values, go from 0 to 255. The transfer function can also be used to specify at what density it should start going from green to blue, for example. These threshold values are bounded between 0 and 1 and are set to 0.01, 0.14 and 0.3 by default. The threshold values can be modified to specify when to start rendering the green color. The low-density areas can then be excluded to better isolate the denser areas. An example of that is shown in Figure 5.4a.

A similar result can be achieved by lowering the alpha value of the low-density areas, as shown in Figure 5.4b. The perceptual difference is that lowering the alpha values makes the result more cloudy or blurry, unlike when the density threshold for when to start rendering the low-density areas is increased, as in Figure 5.4a. The volume used to show this difference uses a cartesian grid of resolution 32x32x32. Figure 5.4a uses the same alpha values as earlier, 0, 50 and 250, while Figure 5.4b uses the alpha values 0, 9 and 250 but keeps the default threshold values. Figure 5.4a is however using the threshold values 0.09, 0.14 and 0.3. As comparison, Figure 5.4c uses both the default threshold and alpha values.


Figure 5.4: Using transfer function to isolate more dense areas by (a) increasing threshold for when to render the less dense areas and by (b) lowering the alpha value of low dense areas. (c) shows the same volume, but uses the default thresholds and alpha values


The color can be changed in the transfer function however the user prefers. So far the figures have used a default color scheme with three colors, ranging from green at low density, to blue where the density is somewhat higher, to red where the density is at its highest. The green color is, in RGB, 40, 160, 40; the blue 40, 40, 240; and the red 200, 80, 0. More or less any number of colors can be specified in the transfer function; all figures in this report use three colors. Figure 5.5 shows three examples of different color schemes.
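The color part of the transfer function can be sketched the same way, blending between the default control colors named above. Linear interpolation between the control points is again an assumption about the implementation:

```python
# Hypothetical sketch of the density-to-color mapping. The RGB control
# colors and density thresholds are the defaults from the text; the
# linear blending between them is assumed, not the thesis implementation.
COLORS = [(40, 160, 40),   # green at low density
          (40, 40, 240),   # blue at medium density
          (200, 80, 0)]    # red at high density
STOPS = [0.01, 0.14, 0.3]  # default density thresholds

def color_for_density(d):
    """Linearly blend between the neighbouring control colors."""
    if d <= STOPS[0]:
        return COLORS[0]
    if d >= STOPS[-1]:
        return COLORS[-1]
    for i in range(len(STOPS) - 1):
        if STOPS[i] <= d < STOPS[i + 1]:
            f = (d - STOPS[i]) / (STOPS[i + 1] - STOPS[i])
            return tuple(round(c0 + (c1 - c0) * f)
                         for c0, c1 in zip(COLORS[i], COLORS[i + 1]))

print(color_for_density(0.075))  # -> (40, 100, 140), halfway from green to blue
```

Swapping the entries of `COLORS` is all that is needed to produce alternative schemes like those in Figure 5.5.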


Figure 5.5: In the transfer function, any color scheme can be chosen and applied to the different levels of density

5.2.4 Volume sequence

It can be interesting to use a sequence of volumes. A sequence means that one volume is in use at a time, and after a pre-specified time step a new volume appears and the previous one disappears. With a small enough time step, one can see movement in the volume, in the sense that the user can get a feel for the directions and speeds of some individual pieces of debris. Where the density of debris is higher, however, it is more difficult to discern any notable directions, and with a larger time step, seeing the movement and direction of individual pieces of debris becomes increasingly difficult. One downside is that many volumes produce a lot of files, which take both time to generate and memory space to store. The biggest downside with this implementation, however, is that it is not automatically synced with the simulation timer, making it tedious to produce a real-time sequence. To see the volumes visualizing the density of the space debris in real time, the user needs to set the start time of the sequence equal to, or a few moments ahead of, the real time, and then wait until the starting time matches the real time. This takes some effort on the user's side and is not a streamlined process.
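The bookkeeping behind such a sequence amounts to mapping the simulation time onto a volume index. The sketch below is illustrative; the function name and time units are assumptions and not the OpenSpace API:

```python
# Hypothetical sketch: given the sequence start time and a fixed time step,
# pick which pre-generated volume covers the current simulation time.
def active_volume_index(sim_time, start_time, time_step, n_volumes):
    """Return the index of the volume to display, or None if the
    simulation time falls outside the pre-generated sequence."""
    if sim_time < start_time:
        return None
    index = int((sim_time - start_time) // time_step)
    return index if index < n_volumes else None

# A sequence of 24 hourly volumes starting at t = 0 s:
print(active_volume_index(3 * 3600 + 10, 0, 3600, 24))  # -> 3
print(active_volume_index(25 * 3600, 0, 3600, 24))      # -> None
```

The `None` cases are what makes the manual workflow tedious: if the sequence start time is not chosen to match the real time, the simulation time never lands inside the sequence.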



5.2.5 Orbital representation

Using the single draw call implementation instead of the individual scene graph node implementation introduces limitations on interaction: it is no longer possible to manipulate the color, width and alpha of an individual orbit's trajectory line. What could be done per piece of debris with the individual scene graph node implementation can instead only be done once per data set. On the other hand, this makes it easier to change the default color of a whole data set, since it only needs to be done once instead of once per piece of debris in that data set. If, for example, the user finds the color contrast between two data sets to be too low, the user can easily change one or both to more suitable colors. Figure 5.6 shows the orbital representation with different colors for all five data sets.
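The trade-off follows from the data layout of a single draw call: all orbit polylines of a data set are packed into one vertex buffer with per-orbit offsets and counts (as consumed by e.g. OpenGL's `glMultiDrawArrays`), and one color is set per data set rather than per orbit. The sketch below shows this packing under assumed names; it is not the actual OpenSpace implementation:

```python
# Hypothetical sketch of the vertex-buffer layout behind a single draw call.
def pack_orbits(orbits):
    """orbits: list of vertex lists, one per piece of debris.
    Returns (flat_vertices, firsts, counts) suitable for one
    multi-draw call instead of one draw call per orbit."""
    flat, firsts, counts = [], [], []
    for orbit in orbits:
        firsts.append(len(flat))   # offset of this orbit in the buffer
        counts.append(len(orbit))  # number of vertices in this orbit
        flat.extend(orbit)
    return flat, firsts, counts

orbits = [[(0, 0, 7000), (100, 0, 7000)],                  # debris piece 1
          [(0, 0, 8000), (50, 50, 8000), (100, 0, 8000)]]  # debris piece 2
flat, firsts, counts = pack_orbits(orbits)
print(firsts, counts)  # -> [0, 2] [2, 3]
```

Because the whole data set shares one buffer and one set of uniforms, per-debris color, width and alpha control is lost, which is exactly the limitation described above.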


6 Discussion

As seen in the results, the single draw call implementation gave a significant increase in performance and both methods of volume rendering gave satisfactory results, but there is more that needs to be analyzed from the results of this project. Therefore, this chapter discusses these results in greater detail. The topics brought up are how the results compare with the project aims, unexpected outcomes, and possible improvements for future implementations.

6.1 Performance

In this section, the method and results of the performance tests are discussed. The reasons for choosing the methods of testing the implementation are discussed, as well as their drawbacks and strengths.

6.1.1 Orbits

Even though an implementation using one scene graph node per data set is many times faster than using one per piece of debris, there are still reasons to use the individual scene graph node implementation. The single draw call implementation described in this thesis has the advantage of performing fast, but it comes with limitations.

When using the individual scene graph node implementation, the user can set the camera to focus on a specific piece of debris and follow its orbit. This makes it easier to see, for example, where one particular piece of debris was at a certain time. This implementation also makes it possible to get a list of every individual piece of debris in the OpenSpace GUI, where the user can more easily get specific information about a piece of debris of interest. This is because all of the data elements, i.e. the debris, are handled as single objects in the scene. With the single draw call implementation, however, these functionalities have been stripped.

The choice to run each test for one hour is motivated by three arguments. The first is that in the first few minutes after opening OpenSpace, the performance is unpredictable and varies a lot while components are starting up and being initialized; this is also why the first 500 samples from each test are not taken into account when calculating the results. The second concerns why the tests are run for only one hour and not longer: a session of running OpenSpace is probably no longer than an hour. The third is that after running a test for one hour, there will most likely be enough samples for the average to converge to a value that would probably not change drastically if the test were run for 24 hours instead.
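The averaging described above, dropping the warm-up samples before computing the mean, can be sketched as follows. The sample values are made up for illustration; only the 500-sample warm-up cutoff comes from the text:

```python
# Hypothetical sketch of the result calculation: discard the first 500
# warm-up samples (unstable start-up performance), then average the rest.
def average_sample(samples, warmup=500):
    """Mean of the samples recorded after the warm-up period."""
    stable = samples[warmup:]
    return sum(stable) / len(stable) if stable else None

# e.g. 500 noisy start-up samples followed by a stable one-hour run:
samples = [50.0] * 500 + [16.7] * 3600
print(average_sample(samples))  # ~16.7: the warm-up spike does not skew the mean
```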

Even though running the tests for one hour seems like an unnecessarily long time, we found it necessary because running them for only a minute or two yielded results that varied a lot. This can happen for a number of different reasons: background programs or other processes
