
Department of Science and Technology (Institutionen för teknik och naturvetenskap)
Linköping University (Linköpings universitet)

LiU-ITN-TEK-A-16/054--SE

Bridging the Gap: Providing Public Science Dissemination through Expert Tools

Master's thesis in Media Technology, carried out at the Institute of Technology at Linköping University

Michael Nilsson
Sebastian Piwell

Supervisor: Alexander Bock
Examiner: Anders Ynnerman


Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

Department of Science and Technology (Institutionen för teknik och naturvetenskap)

Master's thesis (Examensarbete)

Bridging the Gap: Providing Public Science Dissemination Through Expert Tools

by
Michael Nilsson & Sebastian Piwell

LIU-IDA/LITH-EX-A--16/001--SE
2016-11-14

Examiner: Anders Ynnerman
Supervisor: Alexander Bock

Abstract

This thesis aims to provide public science dissemination of space weather data by integrating a space weather analysis system used by experts in the field into an interactive visualization software called OpenSpace, which is designed to visualize the entire known Universe. Data and images from complex space weather models were processed and used as textures on different surface geometries, which are then positioned, oriented and scaled correctly relative to other planets in the solar system. The obtained results were within the goals of the thesis and successfully incorporate several features that will help the understanding of space weather phenomena.

Acknowledgments

We would like to thank professor Anders Ynnerman for the opportunity to work on this project and our supervisor Alexander Bock for the guidance we received throughout the duration of this thesis. Together with Emil Axelsson they have supported us with valuable input and suggestions every week. This thesis was supported by our friends at NASA's Goddard Space Flight Center, where we were offered an office and funding for travel expenses. A special thanks to Asher Pembroke, for assisting us in moving forward; Masha Kuznetsova, for allowing us to come and work at NASA; Justin Boblitt and Richard Mullinix, for answering questions concerning iSWA; Leila Mays, for her general helpfulness; and Lutz Rastaetter, for providing us with model output data and images.

We would also like to thank Carter Emmart for having us come to the American Museum of Natural History and test our work in the Hayden Planetarium. Last but not least, we would like to thank the rest of the OpenSpace team, and send our best wishes. Keep up the good work.

Contents

Abstract
Acknowledgments
Contents
List of Figures
1 Introduction
  1.1 Motivation
  1.2 Aim
  1.3 Research Questions
  1.4 Delimitations
2 Background
  2.1 OpenSpace
  2.2 Definitions
  2.3 Space Weather
  2.4 Integrated Space Weather Analysis System (iSWA)
  2.5 Reference Frames
  2.6 Software
3 Theory
  3.1 Standard Score Normalization
  3.2 Histogram Equalization
  3.3 Pseudo-Color Processing
4 Method
  4.1 Current Cygnets
  4.2 Visualization Cygnets
  4.3 CDF Cygnets
  4.4 Software Structure
5 Results
  5.1 Screen Space Cygnets
  5.2 Visualization Cygnets
  5.3 CDF File Cygnets
  5.4 Standard Deviation Interval
  5.5 Histogram Equalization
6 Discussion
  6.1 Performance
  6.3 Comparison with iSWA
  6.4 Rainbow Color Map
  6.5 Standard Deviation Interval
  6.6 Future Work
7 Conclusion

List of Figures

2.1 A Cygnet on iSWA generated from the ENLIL model output data
2.2 A Cygnet on iSWA generated from the BATS-R-US model output data
2.3 An overview of the iSWA system showing different data sources and cygnets
2.4 Example of an iSWA dashboard with different Cygnets (graphs, images, illustrations)
2.5 The magnetic axis is tilted at an angle of about 12 degrees with respect to the Earth's rotational axis
2.6 The celestial equator and ecliptic plane and their relation to Earth
2.7 An overview of Kameleon's architecture from model input to the interface layer
3.1 Comparison between two standard deviation intervals
3.2 Comparison between a histogram-equalized image and histogram and the original
3.3 The rainbow color scale
4.1 An example of two current cygnets in OpenSpace
4.2 Fisheye configuration with six viewports for dome environments
4.3 An example of the Visualization cygnet
4.4 An example of the CDF cygnet
4.5 A class diagram of the Visualization Cygnet component
4.6 A class diagram of the Data Processor component
4.7 A class diagram of the Cygnet Group component
4.8 A class diagram of the iSWAManager
4.9 A class diagram of the Screen Space Renderable component
5.1 Example of two Cygnets (Earth Connectivity and Fok Ring Current) displayed in Screen Space for both fisheye and flat screen projection
5.2 Three perpendicular cut planes visualizing pressure in the magnetosphere, presented as a time series
5.3 Visualization of the Magnetosphere, presented with three different data variables
5.4 Visualization of the Ionosphere, presented with four different data variables
5.5 Before and after applying histogram equalization on the Z-component of the magnetic field in the Magnetosphere
5.6 Before and after applying auto-filter on the x-component of the velocity in the Magnetosphere
5.7 Different color maps used for the same data
5.8 Three cut-planes interpolated from a CDF file, one of which is dragged through the volume
5.9 CDF file Cygnets visualized together with field-lines
5.10 Comparison between different standard deviation intervals
5.11 Visual comparison of the result from the histograms in figure 5.10
5.12 Visual comparison of the result from the histograms between standard deviation intervals for the eave component of the Ionosphere
5.13 Visual comparison of histogram equalization methods on the eave variable of the Ionosphere data set
5.14 Visual comparison of the different histogram equalization methods on the pressure variable of the global magnetosphere
6.1 Examples of images of the sun in different wavelengths available on iSWA

1 Introduction

1.1 Motivation

The Community Coordinated Modeling Center (CCMC) at NASA provides public space weather data in the form of images through a web-based analysis system called iSWA (integrated Space Weather Analysis). This system is primarily used by specialists in the field to forecast space weather, and in this context it can be hard to understand for anyone who is not conversant with the subject. The drawback of this is that the wide array of public space weather data never reaches the public.

Space weather consists mostly of our Sun's activity and the effect it has on the Earth and our solar system. It is important to predict for missions in space, but it can also disrupt technology we use daily (e.g. satellites). It is also the cause of the northern and southern lights. Visualizing space weather in OpenSpace can show the public that even though everything seems quiet when looking up at the stars, a lot more is going on.

1.2 Aim

The aim is to make the information on iSWA available to a broader audience by visualizing the streams of images (Cygnets) and data in a more suitable context and in bigger arenas. This will be accomplished by positioning, scaling, orienting and mapping the images in an interactive 3D environment of our solar system, where the extra spatial dimension and realistic surroundings will help more people understand the content. This also includes processing data from different sources and different aspects (variables) of space weather and visualizing them, sometimes at the same time. The implementation is to be integrated as a feature in OpenSpace, a program made for visualizing what we know about the universe and adapted to work in dome theaters. This will benefit the dissemination of the public data on iSWA by reaching more people and exposing it in an immersive and intuitive environment.

1.3 Research Questions

• Is there any way to enhance the cygnet visualization and make it more interactive?
• How can we work with User Experience to make cygnets easy to use?
• Can this be achieved in real time, seeing space weather as it is happening?

1.4 Delimitations

This thesis will primarily focus on visualizing 2-dimensional Cygnets and 2-dimensional simulation output data. Only a subset of Cygnets and model output will be chosen for testing, because what is currently available is not suitable for integration in any meaningful way as is (see section 6.3). We will also limit our work to uniformly sampled data (equally spaced samples). We will only deal with space weather within the solar system (around the Earth and from the Sun to the Earth) and concentrate on visualizing Cygnets and data on planes and spheres, but still make the software design easily extensible for other geometries (e.g. cylinders).

2 Background

2.1 OpenSpace

OpenSpace [26] is an open-source software application currently being developed to accurately visualize the known universe using public space-related data sources. The project is a collaboration between Linköping University (LiU), the Community Coordinated Modeling Center (CCMC) at NASA and the American Museum of Natural History (AMNH), and was initiated at the end of 2013. The purpose of the program is the dissemination of science by letting an audience interactively travel through space and time, experiencing the vast scale, details and phenomena of the universe. The application can be used either in home environments, using desktop computers, or in immersive environments, such as the Hayden Planetarium at AMNH in New York. This thesis focuses solely on integrating the space weather data from iSWA into OpenSpace.

In its current state, OpenSpace can accurately visualize our solar system, which includes the planets and the Sun at their correct size and position for a given time. It can also render some specifically chosen spacecraft and satellites with accurate models and positions. One module recreates the New Horizons fly-by of Pluto. OpenSpace has the ability to render magnetic field lines based on CCMC models of space weather, and supports interactive volumetric rendering in both space and time for different models of space weather.

2.2 Definitions

Plasma

One of the four states of matter that are observable in everyday life, the others being solid, liquid and gas. Plasma can be described as a gas of charged particles such as ions, electrons and protons [33].

Solar Wind

An outward-flowing, weakly magnetized plasma at a temperature of around 100 000 kelvin from the Sun's upper atmosphere. It reaches far beyond Pluto's orbit, filling a bubble-like region in space called the Heliosphere [3, 34].


Magnetosphere

A region around the Earth where the geomagnetic field determines the motion of charged particles, producing a cavity in the solar wind [17].

Ionosphere

Part of the Earth’s atmosphere that is ionized primarily through ultraviolet radiation from the Sun but also through particle precipitation from the magnetosphere. It is a highly variable region which roughly extends from 60 km to 1000 km altitude [17].

Magnetopause

The boundary between Earth’s magnetic field and the solar wind.

Solar Flares

A huge magnetic energy release process on the Sun (approximately 10²⁵ J within the full duration of 10 minutes), which accelerates particles and consequently emits electromagnetic radiation throughout the spectrum, from radio waves to X- and gamma-rays [17].

Coronal Mass Ejection (CME)

Large plasma and magnetic clouds ejected from the Sun that travel at speeds of roughly 280 to 750 km/s when reaching Earth. CMEs are often observed in conjunction with solar flares [17].

Celestial Equator

An imaginary circle of infinite extent, centered on the Earth and coplanar with Earth's equator.

Ecliptic plane

The plane in which the Earth orbits the sun, 23.4 degrees from the celestial equator.

First Point of Aries

The point defined by the intersection between the celestial equator and the ecliptic plane [32]. Also called the vernal equinox or spring equinox.

Astronomical Unit (AU)

A unit of measurement equal to 149.6 million kilometers, the mean distance from the center of the Earth to the center of the Sun [24].

2.3 Space Weather

Definition

Space weather is a field within solar-terrestrial physics, grown predominantly during the 1990s, that deals with solar activity which may affect technological systems in space and on the ground and endanger human health. The topics of interest include the conditions in the Sun, solar wind, magnetosphere and ionosphere, where space storms (e.g. Solar Flares and Coronal Mass Ejections) are the most harmful manifestations of space weather [17].

Space Weather Models

In order to analyze and forecast space weather there is an active development of models that can simulate the environment of interest. Space weather models use sets of mathematical equations to describe a complicated physical process from a limited amount of input [17]. CCMC at NASA hosts, develops and evaluates models that cover the entire domain from the solar corona to the Earth's upper atmosphere, providing services that are targeted toward space weather research and operational communities [4, 5]. The Space Weather Modeling Framework (SWMF) is a collection of modeling components with a common operating environment. Each of the components models particular aspects of space weather (e.g. Sun, heliosphere and magnetosphere) [13].

ENLIL model

The ENLIL model is a physics based model of the heliosphere. It solves equations for plasma mass, magnetic fields, momentum density and energy density to give a 1-4 day forecast of solar winds and CMEs. Its inner radial boundary is located at 21.5 to 30 solar radii and its outer boundary can stretch out to 10 AU. It covers 60 degrees north to 60 degrees south in latitude and 360 degrees in azimuth. The information from the ENLIL model can be used to predict geomagnetic storms and when CMEs will hit Earth (see Figure 2.1) [6].

Figure 2.1: A Cygnet on iSWA that is generated from the ENLIL model output data. Image taken from the iSWA web app [8].

BATS-R-US

The Block-Adaptive-Tree-Solarwind-Roe-Upwind-Scheme focuses on the global magnetosphere and space weather around the Earth. Its inputs are the solar wind plasma and magnetic field measurements propagated from solar wind monitoring satellites. Its outputs include the magnetospheric plasma parameters and ionospheric parameters. It is in the GSM coordinate system (see section 2.5), which is aligned with the Sun-Earth line. Its boundaries usually include about 30 Earth radii on the day side and stretch further back on the night side, with a gap around the Earth about 2-5 Earth radii wide. The model can be used to predict space weather effects on the magnetosphere, ionosphere and magnetopause (e.g. reconnection events that can give rise to auroras) (see Figure 2.2) [12].

Figure 2.2: A Cygnet on iSWA that is generated from the BATS-R-US model output data. Image taken from the iSWA web app [8].

2.4 Integrated Space Weather Analysis System (iSWA)

iSWA is a web-based dissemination system for NASA-related space weather information. The system provides a combination of forecasts from advanced space weather models with concurrent observational data through a flexible and configurable user interface. The space weather information is presented from a large and highly diverse set of sources in a form that is accessible and useful for NASA customers. iSWA is used as a decision-making tool when analyzing the present and expected future of NASA's human and robotic missions (see Figure 2.3) [7]. The system input is space weather data that can come from many different sources, including physics-based models. The system then sorts, characterizes and processes the data into Cygnets that can be more easily read by humans. These Cygnets can be plots, graphs, images or raw data depending on the data input. There are more than 500 different Cygnets, which display different values and different domains of space weather. The Cygnets can be saved in a layout to give an overview of the space weather at a specific time (see Figure 2.4) [9].

2.5 Reference Frames

Definition

A reference frame is an ordered set of (possibly time-dependent) reference points with an associated center that locates and orients a coordinate system. In simplified terms, this can be described as the physical object to which we attach a coordinate system. There are two classifications of reference frames that describe their characteristics: inertial frames are those in which Newton's laws of motion hold, typically reference frames with a constant velocity and no rotation, while non-inertial reference frames are accelerating (including by rotation) [14, 22].

The space weather models output data in a certain reference frame, which needs to be transformed in order to be rendered correctly in OpenSpace. NASA uses a number of well-defined reference frames to facilitate and clarify the analysis [23].


Figure 2.3: An overview of the iSWA system showing different data sources and cygnets. Image taken from CCMC's iSWA wiki page [9].

Figure 2.4: Example of an iSWA dashboard with different Cygnets (graphs, images, illustra-tions). Image created and taken from the iSWA web app [8].

Geocentric Systems

These reference frames have their coordinate system at the center of the Earth. In this category are systems based on the Earth's rotation axis, systems based on the Earth-Sun line and systems based on the dipole axis (see Figure 2.5) of the Earth's magnetic field [15]. Figure 2.6 shows some key features for these coordinate systems, including the ecliptic plane and the celestial equator.

Figure 2.5: The magnetic axis is tilted at an angle of about 12 degrees with respect to the Earth’s rotational axis. Image taken from Tufts University web page [18].

Figure 2.6: The celestial equator and ecliptic plane and their relation to Earth. Image taken from the earthsky web page [21].

Geocentric solar ecliptic (GSE)

This system has its X axis towards the Sun and its Z axis perpendicular to the ecliptic plane (positive North). This frame is widely used when representing vector quantities and is convenient for specifying boundaries in the magnetosphere [15].


Geocentric solar magnetospheric (GSM)

This system has its X axis towards the Sun and its Z axis is the projection of the Earth’s magnetic dipole axis (positive North) on to the plane perpendicular to the X axis. It is considered the best system to use when studying the effects of interplanetary magnetic field components on magnetospheric and ionospheric phenomena [15].

Geocentric equatorial inertial (GEI)

This system has its Z axis parallel to the Earth's rotation axis (positive to the North) and its X axis towards the First Point of Aries (or spring equinox). It is convenient for specifying the location of Earth-orbiting spacecraft. However, Earth-orbiting spacecraft are often specified in a separate but similar coordinate system, GEI2000, which is known as J2000. This reference frame is the same as GEI but for a standardized fixed epoch [15].

Heliocentric Systems

The coordinate frames in this category have their origin in the center of the Sun. Among those are frames that are based on the Sun's rotation axis and others that are based on the plane formed by Earth's orbit around the Sun. One example is the Heliocentric Earth equatorial (HEEQ) frame, which has its Z axis parallel to the Sun's rotation axis (positive to the North) [16].

2.6 Software

Kameleon

Kameleon is a software suite that addresses the complication of analyzing the various output data formats of space weather model simulations. By employing a format standardization procedure, Kameleon reads data directly from the model simulation outputs and converts it to a common science format, the Common Data Format (CDF). The new data files contain additional metadata properties to make them self-contained and platform independent. In addition, Kameleon provides a high-level interface for access and interpolation of these converted data files to facilitate data analysis and maximize code reuse. In other words, Kameleon works as a software layer between the model output and data dissemination by abstracting away the reading and interaction behind an easy-to-use interface (see Figure 2.7) [11]. Kameleon also offers transformations between commonly used coordinate frames.

Figure 2.7: An overview of Kameleon's architecture from model input to the interface layer. Image taken from slides by David Hyon Berrios [2].

The space weather community has defined many different coordinate frames for different domains of space. The frames that Kameleon supports are mostly dynamic frames, meaning that they depend on multiple celestial bodies and their relation to each other [10].


OpenSpace uses the Kameleon tool to render field lines from the CDF files. Given a set of points in 3D space (seed points), the application traces how these points would move in the world based on the data in the CDF file read by Kameleon.

The SPICE Toolkit

SPICE is an information system that supports scientists in planning and interpreting scientific observations from space-borne instruments and assists NASA engineers in preparing planetary exploration missions. The SPICE software toolkit uses "kernel" files with ancillary data¹ to compute observation geometry parameters (also known as derived data of interest) at selected times. This can be information such as the distance between two spacecraft, whether a comet is within the viewing angle of a certain instrument, or when an object is in the shadow of the Sun [27, 28]. In OpenSpace the SPICE software toolkit is used for calculating the positions of planetary bodies and transforming between reference frames.
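The snippet below is a minimal sketch, not OpenSpace code, of the kind of SPICE queries described above, using the SpiceyPy Python wrapper; the kernel file names are placeholders that would have to point at real kernel files on disk.

import spiceypy as spice

spice.furnsh("naif0012.tls")    # leapseconds kernel (placeholder file name)
spice.furnsh("de430.bsp")       # planetary ephemeris kernel (placeholder file name)
spice.furnsh("pck00010.tpc")    # planetary constants kernel (placeholder file name)

et = spice.str2et("2012-03-09T13:01:00")                      # UTC string -> ephemeris time
pos, lt = spice.spkpos("EARTH", et, "J2000", "NONE", "SUN")   # Earth position (km) and light time (s)
rot = spice.pxform("J2000", "IAU_EARTH", et)                  # 3x3 rotation between two frames

print(pos, rot)
spice.kclear()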

Simple Graphics Cluster Toolkit

SGCT is a windowing system for synchronizing clusters of computers to create one image. It can synchronize multiple screens and computers to run in different configurations. The setup is read from XML files and can, without rebuilding, render the application in a dome theater, a VR headset or a power wall [1]. OpenSpace uses SGCT as its windowing system.

¹ Data that describes properties like spacecraft, planet or comet positions as a function of time, instrument descriptions such as shape, field of view and orientation, transformation matrices providing spacecraft orientation angles for a specific time, events and reference frames, etc. [22].

3 Theory

This chapter introduces the techniques used for processing and visualizing the data; how these techniques are used in the project is presented in chapter 4. The techniques introduced are standard score normalization, histogram equalization and pseudo-color processing. Standard score normalization is used to process the data so that multiple data variables can be shown on the same plane, and to bring the data into a form that can be transformed into a texture. Histogram equalization is used to give the final visualization more contrast. The last technique, pseudo-color processing, is used to visualize the data as colors on the screen.

3.1 Standard Score Normalization

To visualize the data it first needs to be processed into a required format. To create a texture the data needs to be normalized into values between zero and one. However, space weather data comes in many different units of measurement and at drastically different scales. To be able to present multiple values together at the same time, a normalization method is used to bring them to the same unit of measure and to the same scale (otherwise values on a small scale could get lost among values on a large scale). When the values are at the same scale they are normalized into the required format, with values between zero and one.

The normalization method used to eliminate the unit is called standard score normalization [35][20]. The method transforms the data into a new score (value) with a mean of zero and a standard deviation of one. The absolute value of the new score represents the distance between the old value and the population mean of the data in units of standard deviations. The standard score (z) of the old value (x) is calculated as:

z = (x − µ) / σ    (3.1)

Where µ is the population mean of the data and σ is its standard deviation. The standard deviation of a data set is calculated as:

σ = √( (1/N) Σ_{i=1}^{N} (x_i − µ)² )    (3.2)

Where N is the number of data values in the data set population.

When the old values have been normalized to a standard score they are normalized once again to fit the required format for the texture (values between zero and one). This is done by considering a standard deviation interval [−n, n] around the mean (z = 0), clamping values outside this interval and normalizing values inside it to the range zero to one, as shown in equation 3.3.

z′ = 1,              if z > n
z′ = 0,              if z < −n
z′ = (z + n) / (2n), otherwise    (3.3)

Where z′ is the normalized value for the texture and n is the standard deviation distance considered in the standard deviation interval.

To get the best visualization result, different standard deviation intervals are required for different data sets. The optimal interval depends on how the data is distributed: for spread-out data sets a large interval is better, and for compact sets a small interval is better. A value that represents this well is the entropy of the histogram. The entropy is calculated as:

E = −Σ_{i=1}^{N} (x_i / N) · log(x_i / N)    (3.4)

Where E is the entropy of the histogram. This project uses the entropy as a heuristic when choosing the standard deviation interval. Figure 3.1 shows two identical histograms generated from a non-normalized data set. The red line is the standard deviation curve and the highlighted area represents the standard deviation interval. These graphs show which values are inside and outside the interval in the original data set. The first graph shows a standard deviation interval of one (n = 1) and the second graph shows the standard deviation interval set to the entropy of the histogram (n = E). The comparison shows that when the interval is one, a lot more values are clamped than when it is set to the entropy, which affects the visual result. Chapter 5 shows visual comparisons of this. (The interval [−1, 1] for the standard score values corresponds to µ + [−σ, σ] in the original data set.)

(a) µ + [−σ, σ]    (b) µ + [−Eσ, Eσ]

Figure 3.1: Comparison between two standard deviation intervals. The red line is the standard deviation curve. The red highlighted area represents the interval. σ is the standard deviation and E is the entropy of the histogram. (The data set is the pressure variable of the global magnetosphere.)
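To make the procedure concrete, the following is a minimal NumPy sketch of equations 3.1-3.4 (not the thesis implementation): standard-score normalization, clamping to a standard deviation interval [−n, n] rescaled to [0, 1], and the histogram entropy used as a heuristic for choosing n. The bin count and the toy data are arbitrary choices.

import numpy as np

def histogram_entropy(values, bins=256):
    # Entropy of the value histogram, used as a heuristic for the interval size n.
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def normalize(values, n=None):
    mu, sigma = values.mean(), values.std()
    z = (values - mu) / sigma                  # equation 3.1 (standard score)
    if n is None:
        n = histogram_entropy(values)          # entropy heuristic for the interval
    z_prime = (z + n) / (2.0 * n)              # equation 3.3, middle case
    return np.clip(z_prime, 0.0, 1.0)          # values outside [-n, n] clamp to 0 or 1

data = np.random.lognormal(size=100_000)       # stand-in for one model output variable
texture_values = normalize(data)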


3.2 Histogram Equalization

Contrast makes it easier to distinguish between objects in an image. Similarly, it makes it easier to see features in the visualization of space weather data. The normalization method described in the previous section may cause visualizations to have low contrast. To increase the contrast in the visualization Histogram Equalization is introduced.

Histogram equalization is a technique that improves the contrast in an image. The technique can also be applied to the normalized scores to get a more even spread of values between zero and one. It accomplishes this by spreading out the most frequent intensity values, so that areas of low contrast gain a higher contrast [30]. Figure 3.2 shows an example of this.

Figure 3.2: Comparison between a histogram-equalized image and histogram and the original. Image taken from the tutorialspoint web page [30].

The technique first requires a histogram to be created from the data. An image has a defined number of values (typically 256), but for a data set the number of levels must be chosen. The number of available value levels is denoted L, and an equal number of bins is used in the histogram. The mapping of a data value to an available level is:

l_i = (y_i − y_min) / (y_max − y_min) · L    (3.5)

The probability of an occurrence of a level in the data set is:

p_x(i) = l_i / N    (3.6)

Where N is the number of sampled data values. The cumulative distribution function corresponding to p_x is:

cdf_x(i) = Σ_{j=0}^{i} p_x(j)    (3.7)

The equalized bins/levels of the histogram are then defined from this cumulative distribution function (equation 3.8).


After calculating the cumulative distribution function of a data set with equation 3.7, an equalized value can be calculated by combining equations 3.5 and 3.8. By converting the normalized values from the previous section into equalized values, the contrast of the visualization will increase.
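A corresponding sketch of the equalization step is given below. Since equation 3.8 is not reproduced above, the remapping here uses the standard CDF-based formulation; it should be read as an assumption about the exact form rather than as the thesis code.

import numpy as np

def equalize(values, levels=256):
    # Equation 3.5: map each normalized value in [0, 1] to one of L discrete levels.
    level_idx = np.floor(values * (levels - 1)).astype(int)
    # Equations 3.6-3.7: level frequencies and their cumulative distribution.
    counts = np.bincount(level_idx, minlength=levels)
    cdf = np.cumsum(counts / counts.sum())
    # Assumed equalization step: replace each level by its CDF value in [0, 1].
    return cdf[level_idx]

equalized = equalize(np.random.rand(100_000) ** 3)  # skewed toy data

The refinement described in section 4.2, where the single most dominant (background) value is excluded before equalizing, is omitted from this sketch.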

3.3 Pseudo-Color Processing

Pseudo-color processing is a technique that maps each value in a grey level image to an assigned color from a color lookup table or function. This pseudo-colored presentation can increase the visible difference between pixels with small or gradual changes by taking advantage of the human ability to distinguish more colors than gray-scale values [31]. Pseudo-colors are often used in scientific visualization to represent properties like velocity, density, temperature or composition in order to make identification of certain features easier for the observer. These mappings are computationally simple and fast and can give contrast enhancement effects if they are used right. CCMC uses different color maps when generating the Cygnets on iSWA, and among those is the Rainbow color map, which is based on the order of colors in the spectrum of visible light.

This project uses the pseudo-color process with multiple color maps (chosen by the user) to map the values generated by the previous two techniques to a color for the visualization. Figure 3.3 shows the rainbow color map and how the generated values relate to it.

Figure 3.3: The rainbow color scale. The minimum value is mapped to the color red and the maximum is mapped to blue, the intermediate values are interpolated in between [25].
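A minimal sketch of the pseudo-color lookup itself is shown below. The five-entry red-to-blue table only illustrates the idea of a 1-D color lookup table; it is not one of the color maps actually shipped with iSWA or OpenSpace.

import numpy as np

# Color lookup table from low to high (here red -> blue, as in figure 3.3).
LUT = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)

def pseudo_color(values):
    # Scale [0, 1] values to fractional LUT positions and linearly interpolate entries.
    pos = np.clip(values, 0.0, 1.0) * (len(LUT) - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, len(LUT) - 1)
    t = (pos - lo)[:, None]
    return (1 - t) * LUT[lo] + t * LUT[hi]     # N x 3 array of RGB colors

colors = pseudo_color(np.linspace(0, 1, 10))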

4 Method

In order to integrate the iSWA Cygnets and the information generating them into OpenSpace, a new type of Cygnet was created. These Cygnets were produced in association with CCMC during the project and contain the necessary metadata to scale, orient and position the visualizations in the correct location. The new Cygnets provide either rendered images of the raw data or the raw data itself. They are called Visualization Cygnets and have higher resolution, and the images have a more intuitive and less misleading color scale than what the current Cygnets use (see section 6.4). This limits the number of Cygnets that could be integrated into OpenSpace (because CCMC cannot recreate all the current Cygnets), but it raises the quality of the ones that were recreated. The current Cygnets were also integrated into OpenSpace, but because of their lack of metadata these are rendered in screen space. Data from model output contained in CDF files was also integrated into OpenSpace. A CDF file contains all the necessary metadata and 3D data of a region, and 2D cut planes from this region were visualized. This gives three different parts of the project: the Visualization Cygnet integration, the current Cygnet integration and the CDF file integration.

4.1 Current Cygnets

The current Cygnets are only images and will not be rendered in 3D space, but instead in screen space. At the start of the program the information for all active Cygnets is pulled from the iSWA web API. A checklist with name and description is created in the GUI, allowing a user to choose which Cygnet to visualize. When requesting a Cygnet the image is loaded into memory and rendered on a rectangular plane. This plane is rendered on the screen with either Cartesian coordinates or polar coordinates, which allows a user to move it naturally both on a normal monitor and in a dome theater. More parameters like size, order and transparency are also controlled by the user through the GUI. The current Cygnets also have a timestamp, and when that time has passed in OpenSpace a new image is downloaded. Figure 4.1 shows an example of two cygnets from iSWA in screen space.

iSWA API

The iSWA system exposes several access points allowing users to stream Cygnets directly from the system. This allows for integrating specific instances of Cygnets in screen space using a single URL.


Figure 4.1: An example of two current cygnets in OpenSpace.

The base URL for the CygnetStreamingServlet is:

http://iswa.gsfc.nasa.gov/IswaSystemWebApp/iSWACygnetStreamer

To specify the Cygnet to stream, three parameters are required:

Timestamp

Specifies the point in date and time (UTC) that you want to request a cygnet from.

Window

This argument takes a number that will provide three different behaviors of the date search method.

• If passed a negative number (<0), the response will be the most recent available Cygnet for a specific date and time.

• If passed zero, the date and time have to match the exact timestamp for the Cygnet.

• If passed a number greater than zero (>0), the most recent cygnet within the range from timestamp − window to timestamp + window will be returned.

cygnetId

Each cygnet is given a unique identifier, which is used to select a specific iSWA Cygnet.

These arguments must be specified as a URL GET request:

http://iswa.gsfc.nasa.gov/IswaSystemWebApp/iSWACygnetStreamer?timestamp=2012-03-09 13:01:00&window=-1&cygnetId=205

For the purposes of this project the window parameter was fixed at −1, with a variable cygnetId depending on what the user wants to look at. The OpenSpace time was formatted and passed as the timestamp. All Cygnets on iSWA have a fixed update interval that is retrieved in JSON format, together with the Cygnet description and cygnetId, through the system's CygnetHealthServlet:

http://iswa3.ccmc.gsfc.nasa.gov/IswaSystemWebApp/CygnetHealthServlet

The update interval is then used to calculate how often a Cygnet needs to be updated with a new request.
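As an illustration, a request URL for the streaming servlet can be assembled as follows. The base URL and parameter names are the ones documented above, while the helper function itself is only a sketch and not part of OpenSpace.

from urllib.parse import urlencode

BASE = "http://iswa.gsfc.nasa.gov/IswaSystemWebApp/iSWACygnetStreamer"

def cygnet_url(cygnet_id, timestamp, window=-1):
    # window=-1 requests the most recent Cygnet at or before the given timestamp.
    query = urlencode({"timestamp": timestamp, "window": window, "cygnetId": cygnet_id})
    return f"{BASE}?{query}"

print(cygnet_url(205, "2012-03-09 13:01:00"))
# urlencode escapes the space and colons, which is equivalent to the literal URL above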

Projection

The difficult part with screen space projection in OpenSpace is the underlying configuration of SGCT. OpenSpace is meant to run not only on single monitors but also on large displays such as domes with multiple projectors run by a computer cluster. This means that each projector will have its own viewport and screen space coordinate system, which complicates the positioning of the images. When each computer in the cluster renders a Cygnet within its own viewport's screen space, the Cygnets will not appear in the same position across the full display. This is shown in figure 4.2.

Figure 4.2: Fisheye configuration with six viewports (shown in blue) for dome environments. Each Cygnet is positioned and rendered in the center of each viewport's screen space. Red lines represent the blend areas/blend centers.

By using the images as textures on a plane that is given a world space position at a fixed distance in front of the camera, the state of the plane is synchronized and rendered correctly over each node in the computer cluster by SGCT. As a result, the images appear to be rendered directly in screen space.

The plane is rotated to face positive Z at a fixed distance of −1 from the camera (within the viewing frustum). Subsequently the plane is textured with the Cygnet image and only translated in the xy-plane. A third coordinate is used for occlusion with the other screen space images. Finally, the view and projection matrix of the camera is applied.

The description of the position of an image in screen space differs between a flat screen (e.g. a computer monitor) and a spherical screen (e.g. a dome theater). For the flat screen it is intuitive to describe the position in Euclidean coordinates, with x and y relating to the height and width of the screen. For a spherical screen it is easier to describe the position with spherical coordinates: the polar angle (θ), the azimuthal angle (φ) and the radial distance (r), where the radius r is used for occlusion.

One requirement was that the user should be able to move a screen space image on a Euclidean xy-plane in front of the camera, on spherical displays as well. Therefore, conversion between the two coordinate systems was necessary. Given that the screen space images are described in Euclidean coordinates, they were converted to spherical coordinates with equation 4.1:

r = √(x² + y² + z²)
φ = arccos(y / r)
θ = arctan(y / x)    (4.1)

For conversion in the opposite direction, equation 4.2 is used.

x = r · sin(φ) · cos(θ)
y = r · sin(φ) · sin(θ)
z = r · cos(φ)    (4.2)
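A small sketch of the two conversions is given below. For a consistent round trip the inverse here measures the polar angle from the z axis, which matches equation 4.2; equation 4.1 as printed takes that angle from the y axis instead, so the exact axis convention in this sketch should be treated as an assumption.

import math

def to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    phi = math.acos(z / r)            # polar angle (assumed measured from the z axis)
    theta = math.atan2(y, x)          # azimuthal angle
    return r, phi, theta

def to_euclidean(r, phi, theta):      # equation 4.2
    return (r * math.sin(phi) * math.cos(theta),
            r * math.sin(phi) * math.sin(theta),
            r * math.cos(phi))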

4.2 Visualization Cygnets

The Visualization Cygnets are the new Cygnets that provide higher resolution visualizations than what is currently on iSWA; an example is shown in figure 4.3. The Visualization Cygnet pipeline begins with a request from a user for a specific Cygnet through the GUI. Then metadata for that Cygnet is downloaded into memory. At this point, the most important information is what data type the Cygnet contains and what geometry it has. This determines which Cygnet subclass will be created in the scene. The data type can either be image or raw data, and the geometry can be plane or sphere. The metadata provided is parsed and the necessary data for size and position is calculated.

An object with the right geometry and size is created and placed in the correct location. If the data is an image, the image can directly be used as a texture. If the data type is raw data, it needs some more processing before a texture can be made from it.

The first process is the normalization described in section 3.1. Which standard deviation interval to use is up to the user, who can either set it through the GUI or use the entropy heuristic described in the theory chapter. Since the data set is time varying, one problem is to keep the color mapping consistent. If the normalization method were applied to each time step individually, the standard deviation, mean and entropy would differ, the color mapping would also differ, and the same value would change color each time step. To avoid this, the standard deviation, mean and entropy for the data set are stored at its first normalization, and the same values are used later to keep the color mapping consistent.
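This idea can be sketched as a small stateful helper (illustrative only, not the DataProcessor implementation): the statistics are frozen on the first time step and reused for all later ones.

import numpy as np

class FrozenStats:
    """Freeze mu, sigma and the entropy-based interval n on the first time step."""
    def __init__(self, bins=256):
        self.bins = bins
        self.mu = self.sigma = self.n = None

    def normalize_step(self, values):
        if self.mu is None:                          # first time step only
            self.mu, self.sigma = values.mean(), values.std()
            counts, _ = np.histogram(values, bins=self.bins)
            p = counts[counts > 0] / counts.sum()
            self.n = float(-np.sum(p * np.log(p)))   # entropy heuristic for the interval
        z = (values - self.mu) / self.sigma
        return np.clip((z + self.n) / (2 * self.n), 0.0, 1.0)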


Figure 4.3: An example of the Visualization cygnet.

The next process is the histogram equalization described in section 3.2. It is an optional process that the user can turn on or off. For some data sets there is one very dominant value, such as data sets with large regions of uniform data that could be considered background. This dominant value will reduce the quality of the visualization when performing histogram equalization, and because of this it is not considered in the equalization process. More on this, with a visual comparison, is discussed in section 5.5.

After these processes each value in the data set has been transformed into a value between zero and one, and a texture can be made with the pseudo-color process described in section 3.3. Multiple variables of each Cygnet can be processed and visualized at the same time. Each variable is made into a 1-dimensional texture and sent to the shader, where the pseudo-color process is performed. Since multiple variables can be visualized, two different methods can be used to produce the final color. Method 1 uses one color map and the average of all values as the color value, as shown in equation 4.3. Method 2 performs the color mapping first, using one color map for each value, and then adds the colors together into one final color, as shown in equation 4.4. In the equations v denotes the values from the normalized data sets, n is the number of variables, colormap(x) is the function that generates a color from a value, and c is the final color.

colormap((v₁ + v₂ + ... + vₙ) / n) = c    (4.3)

colormap₁(v₁) + colormap₂(v₂) + ... + colormapₙ(vₙ) = c    (4.4)
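The two combination methods can be sketched as follows; colormap stands for any callable that maps normalized values to RGB colors (such as the lookup-table sketch in section 3.3), and the clamp in the additive method is only there to keep the illustrative output displayable.

import numpy as np

def combine_average(values, colormap):
    # Equation 4.3: average the normalized values, then apply a single color map.
    return colormap(np.mean(values, axis=0))

def combine_additive(values, colormaps):
    # Equation 4.4: color-map each variable separately, then add the colors.
    colors = sum(cmap(v) for cmap, v in zip(colormaps, values))
    return np.clip(colors, 0.0, 1.0)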

Metadata

To get the right position, orientation and scaling for the geometry, metadata about the Cygnets is needed. To assess what was needed, Kameleon and CDF files were used. The following list is the conclusion of this research: the preferred metadata needed to visualize the geometry in the right place in the virtual universe (a sketch of such a record follows the list).


• Coordinate frame - To instruct Spice and Kameleon to transform between different coordinate frames and rotate the geometry to the right orientation.

• Parent - The scene graph parent, in this case the planet that the coordinate frame is centered around. It could be deduced from the coordinate frame itself, but it is convenient to have it present.

• Coordinate type - Whether the coordinates are spherical or Euclidean. This is needed to choose the right geometry.

• Minimum and maximum coordinate of sampled grid - Used to calculate the offset from the center of the system, so that the geometry can be translated to the right location from the parent.

• Spatial scale - The unit of measurement. Used to know how far to translate the geometry (if needed) and what size it is.

• Update time - The time interval between two consecutive Cygnet images/data.
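A record carrying these fields could look roughly like the sketch below; the field names and values are illustrative assumptions, not the actual format agreed on with CCMC.

cygnet_metadata = {
    "coordinateFrame": "GSM",            # frame for SPICE/Kameleon transforms
    "parent": "Earth",                   # scene graph parent
    "coordinateType": "cartesian",       # "cartesian" or "spherical" geometry
    "gridMin": [-30.0, -20.0, -20.0],    # minimum coordinate of the sampled grid
    "gridMax": [15.0, 20.0, 20.0],       # maximum coordinate of the sampled grid
    "spatialScale": "earth_radii",       # unit of measurement
    "updateTime": 300,                   # seconds between consecutive Cygnets
}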

Demo Server API

The Visualization Cygnets and data files were not available through the iSWA API, therefore a server that mimics the current iSWA API was created. This allows for a smooth transition between the two in the future.

The main difference is how the parameters are appended to the URL: for iSWA they are added as query string parameters, whereas on the demo server they are added to the end of the path.

http://iswa.gsfc.nasa.gov/IswaSystemWebApp/iSWACygnetStreamer?timestamp=2012-03-09 13:01:00&window=-1&cygnetId=205

compared to

http://localhost/:id/:datetime

The window parameter was also removed from the demo API and set to −1 by default. The new Cygnet IDs were chosen arbitrarily, starting at 1 and increasing incrementally as more test files were added.

4.3 CDF Cygnets

The CDF Cygnets are Cygnets visualized with information from CDF files; an example of the Cygnet and field lines can be seen in figure 4.4. The CDF files are stored on disk because of their size. Information about available CDF files is loaded into OpenSpace with a small script and some configuration files. The user chooses which CDF file to load through a checklist of the available files. When a CDF file is chosen it is opened with Kameleon, which extracts all the necessary metadata to place the information in the right location. Kameleon provides an interpolator interface that is used to uniformly extract slices of 2D information from the 3D volumetric data that the CDF file contains. The slices' locations can be chosen dynamically by the user and are processed and visualized in the same way as the Visualization Cygnet data sets.
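Extracting a cut plane through the interpolator interface can be sketched as follows; interpolate(variable, x, y, z) is a placeholder standing in for Kameleon's interpolator, not its actual signature.

import numpy as np

def slice_plane(interpolate, variable, x_pos, y_range, z_range, resolution=128):
    """Sample a y-z plane of `variable` at a fixed x position from the 3D volume."""
    ys = np.linspace(*y_range, resolution)
    zs = np.linspace(*z_range, resolution)
    plane = np.empty((resolution, resolution))
    for i, y in enumerate(ys):
        for j, z in enumerate(zs):
            plane[i, j] = interpolate(variable, x_pos, y, z)
    return plane  # processed like any other data Cygnet before being turned into a texture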

The CDF file integration also comes with the option to render field lines. A list of seed points must be provided to start the field line tracing. Using the information in the CDF files, colored lines are rendered. The lines are colored differently depending on whether the field lines are open, closed or not connected to Earth.

Figure 4.4: An example of the CDF cygnet.

4.4 Software Structure

The project is structured around a few class trees that are separated by function. The classes are explained in the following sections.

The Visualization Cygnet Class Hierarchy

Figure 4.5: A class diagram of the Visualization Cygnet component.

This is the main tree for the Visualization Cygnets in world space. Its function is to render and update the Cygnets. It contains information about the size, position and update time for the Cygnet. The tree is split at two different levels. The first level separates what kind of data to visualize. The source could be an image (TextureCygnet) or a data stream (DataCygnet). The DataCygnet owns a DataProcessor to process the data into a texture that can be visualized. The DataProcessor is another class tree explained later. The Cygnet fetches 2D data or images from a server (except the KameleonPlane) at a regular interval based on the application time and visualizes it in the world.


At the second level the tree separates the geometry. The implementation can handle two types of simple geometries: planes and spheres. The Kameleon data source needs to be handled differently and adds some extra functionality; because of this it has its own class. The KameleonPlane gets its data from disk by reading CDF files, and the plane can therefore change resolution and slice the volume at different positions using the Kameleon application.
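The split described above can be summarized with the following stub classes. OpenSpace itself is written in C++; this is only an illustrative outline, and the names of the plane and sphere subclasses are guesses rather than the actual class names.

class IswaCygnet:
    """Base class: owns position, size and update time for one Cygnet."""
    def update(self, time): ...
    def render(self): ...

class TextureCygnet(IswaCygnet):      # data source is a ready-made image
    pass

class DataCygnet(IswaCygnet):         # data source is raw data; owns a DataProcessor
    def __init__(self, processor):
        self.processor = processor    # turns raw data into texture values

class DataPlane(DataCygnet): ...      # plane geometry (name assumed)
class DataSphere(DataCygnet): ...     # sphere geometry (name assumed)

class KameleonPlane(DataCygnet):      # reads CDF files from disk via Kameleon
    def set_slice_position(self, pos): ...   # can re-slice the volume interactively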

The Data Processor Class Hierarchy

Figure 4.6: A class diagram of the Data Processor component.

The data processors process data for the DataCygnets. They transform data from one format into one that can easily be turned into a texture. The data is time varying and its scale can vary greatly. The data processor keeps the visual output consistent, meaning that one value at one time step should correspond to the same color at another (this is not always the case, depending on how the normalization works). It achieves this by storing some key values in the beginning and by using the normalization and histogram equalization explained earlier. As of now the DataProcessor deals with three different kinds of data formats. The first one is unstandardized and is defined in regular text files, which are handled by the DataProcessorText. The second format is CDF, which is handled by the DataProcessorKameleon using the Kameleon library to read the CDF files. The last data format is JSON and is handled by the DataProcessorJson with a 3rd-party JSON parser [19].

The Cygnet Group Class Hierarchy

Figure 4.7: A class diagram of the Cygnet Group component.

iSWACygnets display 2D data from a 3D data source. To better understand the 3D structure, multiple 2D planes can be used. The IswaBaseGroup adds the functionality to group the iSWACygnets together. This allows the parameters to be updated simultaneously for every Cygnet in the group. The base group has the same type of data processor as the Cygnets, and the Cygnets will use the group's processor instead of their own. Each child of the base group controls more parameters. The IswaBaseGroup handles variables every IswaCygnet uses (e.g. transparency). The IswaDataGroup handles parameters for the DataCygnets (e.g. color maps and normalization variables). The IswaKameleonGroup handles KameleonCygnet variables (e.g. resolution).

The iSWAManager

Figure 4.8: A class diagram of the iSWAManager.

The iSWAManager controls everything in this module that is outside of the Cygnets' responsibility. When OpenSpace is started and the iSWA module is enabled, the iSWAManager queries the iSWA web app for healthy Cygnets and gathers information about them, such as description, name and cygnet id number. These Cygnets are the current Cygnets discussed in section 4.1 and will be displayed in screen space if the user chooses to display them. Some Cygnets on the iSWA web app are considered not healthy if they are down or have not been updated for some time.

The iSWAManager downloads and reads metadata from different sources (as text, JSON and Kameleon). It will format this metadata into a script with the necessary information to create a new Cygnet (screen space, plane, sphere or Kameleon plane).

The iSWAManager keeps and reads information about the available CDF files discussed in section 4.3. It will also create, update and destroy iSWAGroups.

Screen Space Renderables

Figure 4.9: A class diagram of the Screen Space Renderable component.

The ScreenSpaceRenderable controls the screen space information in OpenSpace. Not all information is suitable for the 3D world; some is better displayed in screen space. The information on the screen can be images, graphs or videos. The class renders a rectangular plane with information on the screen and can control its position, size and order (for occlusion). The ScreenSpaceImage class renders an image from disk or from the web. The ScreenSpaceCygnets class is a special case focused on streaming Cygnet images from the iSWA API.

5 Results

The obtained results are well within the goals of the thesis and include a number of different approaches to visualizing 2-dimensional space weather data. The results are grouped into four sections, where the first three correspond to the different sub-tasks of the thesis (Screen Space Cygnets, Visualization Cygnets and CDF file interpolated Cygnets) and the last covers the normalization and equalization results. The Visualization and Screen Space Cygnets can be updated both by going forward and backward in time, thus achieving an animation of their content.

5.1 Screen Space Cygnets

All the active Cygnets on iSWA are available for streaming in OpenSpace by selecting them from a list with their names and descriptions in the menu. Once loaded, the Cygnets will appear in the center of the screen and their properties become available for adjusting. The properties of screen space Cygnets that can be adjusted are position in screen space, transparency and size (with a fixed aspect ratio). This feature works just as well on a single computer as on a computer cluster with multiple projectors (Figure 5.1).

Figure 5.1: Example of two Cygnets (Earth Connectivity and Fok Ring Current) being displayed in Screen Space for both fisheye and flat screen projection.


5.2 Visualization Cygnets

The new Visualization Cygnets (including the raw data) were successfully retrieved from a local server, processed, positioned and rendered correctly in OpenSpace with a few interactive adjustments and options to manipulate them.

The Visualization Cygnets below are demonstrated with three perpendicular cut planes to give a better sense of their volumetric nature. The ready-made Cygnets are presented with a fixed data variable, leaving a user with the possibility to enable or disable cut-planes and adjust transparency. Figure 5.2 shows the Cygnet at different times.

Figure 5.2: Three perpendicular cut planes visualizing pressure in the magnetosphere, presented as a time series with approximately 10 hours between each time step.

Visualization of Cygnet data has been realized on plane and sphere geometries. The Cygnets demonstrated below have the option to visualize different data variables from the model outputs by themselves or in conjunction with others. The user has the possibility to change the data variable to be visualized (Figure 5.3), change the color map for each data variable (Figure 5.7), increase contrast through histogram equalization (Figure 5.5) and filter values automatically or manually (Figure 5.6). Just like the other Cygnets, these also give the possibility to adjust transparency.

Figure 5.3: Visualization of the Magnetosphere, presented with three different data variables: N (pressure), Vx (x-component of the velocity) and Bz (z-component of the magnetic field).



Figure 5.4: Visualization of the Ionosphere, presented with four different data variables: Eave (average energy), eflux (energy flux), ep (electric potential) and jr (radial current component), respectively from left to right.

Figure 5.5: Before (left) and after (right) applying histogram equalization on the Z-component of the magnetic field in the Magnetosphere.

Figure 5.6: Before (left) and after (right) applying auto-filter on the x-component of the velocity in the Magnetosphere.

5.3 CDF File Cygnets

Cut-plane interpolation of CDF files is accomplished with good results. Most of the features are the same as for the Visualization Cygnets (auto-filter, different color maps and histogram equalization).



Figure 5.7: Different color maps can be used for the same data. The first and second image portray the same data variable with color maps of different nuances. In the third image, three different data variables are rendered on the same planes but with different color maps: red, green and blue. All are contrast enhanced and auto-filtered.

Since these Cygnets represent a cut plane of a 3D region, the planes can be dragged through the volumetric data, generating new images at each time step (Figure 5.8).

Figure 5.8: Three cut-planes interpolated from a CDF-file (with auto-filter), one of which is dragged through the volume. The data variable being visualized is rho.

Field-lines

Using an existing module of OpenSpace, field-lines can be visualized from the same CDF files that are used to create the cut-planes (Figure 5.9). To achieve this, a file with seed points is needed, which defines the positions from which each field-line should be interpolated. These files can be loaded into the GUI as a check-box list so that the user can select the seed points during run-time.
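As a rough illustration, the C++ sketch below reads such a seed-point file, assuming a plain-text format with one "x y z" position per line; the actual format expected by the field-line module may differ.

    // Minimal sketch: parse the seed points that field-line tracing starts from.
    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct SeedPoint { double x, y, z; };

    std::vector<SeedPoint> readSeedPoints(const std::string& path) {
        std::vector<SeedPoint> points;
        std::ifstream file(path);
        std::string line;
        while (std::getline(file, line)) {
            std::istringstream ss(line);
            SeedPoint p;
            if (ss >> p.x >> p.y >> p.z) {
                points.push_back(p); // each point starts one traced field-line
            }
        }
        return points;
    }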

5.4 Standard Deviation Interval

Higher contrast visualizations can be made with the normalization method and histogram equalization. Depending on the standard deviation interval, the effect will differ. Figure 5.10 shows histograms representing the pressure variable of the global magnetosphere at different stages of the normalization method. The process goes from left to right. To the left is the original histogram of the raw data values with its normal distribution and a cut-off representing the standard deviation interval (shown as a red line and highlighted area). In the middle is a new histogram made from the clamped values of the original. To the right is the histogram equalization of the middle histogram. The top row of figure 5.10 uses a standard deviation interval of 1 and the bottom row an interval equal to the original histogram's entropy. Figure 5.11 shows the difference in the visual result: the left image represents the case with the standard deviation interval set to 1 and the right image has the interval set to the entropy. Another, more noticeable visual difference is shown in figure 5.12.
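A minimal C++ sketch of the clamping step is given below, assuming the interval is expressed as a factor k of the standard deviation (for example 1, or a value derived from the histogram entropy); the names and signature are illustrative only, not the thesis implementation.

    // Clamp all values to [mean - k*sigma, mean + k*sigma] before equalization.
    #include <algorithm>
    #include <cmath>
    #include <numeric>
    #include <vector>

    void clampToStdDevInterval(std::vector<float>& values, float k) {
        if (values.empty()) {
            return;
        }
        const float n = static_cast<float>(values.size());
        const float mean = std::accumulate(values.begin(), values.end(), 0.f) / n;

        float variance = 0.f;
        for (float v : values) {
            variance += (v - mean) * (v - mean);
        }
        const float stdDev = std::sqrt(variance / n);

        const float low  = mean - k * stdDev;
        const float high = mean + k * stdDev;
        for (float& v : values) {
            v = std::clamp(v, low, high); // values outside the interval are cut off
        }
    }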



Figure 5.9: CDF file Cygnets can be visualized together with field-lines (here representing the magnetic field) for added value. The planes have increased transparency to make the field-lines easier to view.

Figure 5.10: Comparison between different standard deviation intervals. On the top the interval is set to 1 and on the bottom it is equal to the original histogram's entropy. From left to right the figure shows the histogram of the raw data with the standard deviation interval (in red), the histogram of the clamped values, and the equalized histogram of the clamped values.

5.5 Histogram Equalization

Histogram equalization on data sets with one dominant value can decrease the contrast and visual quality of the final image. Figure 5.13 shows the eave component of the Ionosphere. The values in this data set are zero almost everywhere except at the poles of the Earth, which makes the zero value dominant in the histogram. The left image in the figure shows the data set without any histogram equalization. The middle image shows the data set with histogram equalization where the highest bin is considered, and the right image where the highest bin is not considered. All images have the standard deviation interval set to one. Not all data sets benefit as much from this alteration of the histogram equalization.
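The C++ sketch below illustrates this variant of histogram equalization, assuming the input values have already been normalized to [0, 1]; the bin count and function signature are assumptions made for the example rather than the thesis implementation.

    // Histogram equalization with an option to exclude the dominant bin
    // from the cumulative distribution.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    std::vector<float> equalize(const std::vector<float>& values,
                                std::size_t numBins = 512,
                                bool ignoreHighestBin = false)
    {
        auto binOf = [numBins](float v) {
            return std::min<std::size_t>(
                static_cast<std::size_t>(v * (numBins - 1)), numBins - 1);
        };

        // Build the histogram.
        std::vector<std::size_t> hist(numBins, 0);
        for (float v : values) {
            ++hist[binOf(v)];
        }

        // Optionally drop the most populated bin before accumulating.
        if (ignoreHighestBin) {
            *std::max_element(hist.begin(), hist.end()) = 0;
        }

        // Cumulative distribution, normalized to [0, 1].
        std::vector<float> cdf(numBins, 0.f);
        std::size_t total = 0;
        for (std::size_t i = 0; i < numBins; ++i) {
            total += hist[i];
            cdf[i] = static_cast<float>(total);
        }
        if (total > 0) {
            for (float& c : cdf) {
                c /= static_cast<float>(total);
            }
        }

        // Remap every value through the cumulative distribution.
        std::vector<float> out(values.size());
        for (std::size_t i = 0; i < values.size(); ++i) {
            out[i] = cdf[binOf(values[i])];
        }
        return out;
    }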



Figure 5.11: Visual comparison of the result from the histograms in figure 5.10.

Figure 5.12: Visual comparison between different standard deviation intervals for the eave component of the Ionosphere.

Figure 5.13: Visual comparison of histogram equalization methods on the eave variable of the Ionosphere data set. The left image is the result without equalization. The middle image is the equalized data with the highest bin of the histogram considered. The right image shows the result when equalizing the data without the highest bin considered.



Figure 5.14 shows the pressure variable of the global magnetosphere equalized with and without considering the highest bin. The visual results differ between the two images, but it is harder to tell which one is more favorable, and without a defined metric it is hard to test; we can only rely on the visual result. The left image in the figure shows the method with the highest bin considered, and the right one without it. Arguably, this data set would be better off with the highest value considered, because of the noticeable clamped values in the brightest areas.

Figure 5.14: Visual comparison of the different histogram equalization methods on the pressure variable of the global magnetosphere. The left image shows a histogram equalization with the highest bin considered and the right shows the equalization without the highest bin considered.

Because of this, the alteration may not favor every data set or variable and could therefore be improved. By defining a metric for when a data set contains a clearly dominant value, a test could be made to decide whether the highest bin should be considered in the equalization method or not. Such a metric could be whether the highest bin contains more values than all other bins together, or a height comparison between the highest and the second highest bin.
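Such a test could look like the C++ sketch below, using the first suggested metric (the highest bin holding more samples than all other bins combined); this is an illustrative assumption, not an implemented part of the thesis.

    // Decide whether the histogram has a clearly dominant bin.
    #include <algorithm>
    #include <cstddef>
    #include <numeric>
    #include <vector>

    bool hasDominantBin(const std::vector<std::size_t>& hist) {
        if (hist.empty()) {
            return false;
        }
        const std::size_t highest = *std::max_element(hist.begin(), hist.end());
        const std::size_t total =
            std::accumulate(hist.begin(), hist.end(), std::size_t{0});
        // Dominant if the highest bin holds more values than the rest combined.
        return highest > total - highest;
    }

The equalization could then skip the highest bin only when this test returns true, leaving data sets without a dominant value untouched.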


6 Discussion

6.1 Performance

To achieve a fluid animation of the Cygnets, new requests for data have to be performed very often, and the download time cannot be too long. Since we are dealing with relatively small 2D data files and images, this is unlikely to be a problem. However, choosing a reasonable file size that is not too large and does not affect network performance has been difficult, since we have mainly tested against the demo server, which can serve files without much delay. A delay problem would probably occur if these Cygnets were served through the iSWA API (which has not been tested). The data Cygnets are bigger than the image Cygnets because they provide multiple data variables in the same file and are uncompressed. This does, however, mean that there is room for improvement if necessary, by splitting the data variables into separate files and only requesting the ones to be visualized. It is likely that the network will not be the bottleneck even if we request the Cygnets from iSWA. Reading and parsing the data is a very performance-demanding process, which at the moment is performed on a single thread on the CPU. If the sample resolution were higher (to achieve better quality), this process would eventually have to be parallelized, or some of the workload moved to the GPU, to maintain support for low-performing PCs.

6.2 Data Format Standards

One challenge in this project has been working with different data formats. Since iSWA currently does not serve the data content we need, a standard for the data does not exist. To decide on one, the question of who this service is for must first be answered. If the service exists only for the purpose of space weather visualization in OpenSpace, the question is quite easy: we could specify an appropriate sample resolution, which data variables we need, which cut-planes are of interest, exactly what should be included in the metadata, and that it should conform to a universal standard such as JSON. If, however, the service is meant to be more general purpose, the question becomes more difficult to answer, and making the service as general purpose as possible would require a lot of work. The code has been designed to prepare for changes in the data format by separating the parsing functionality from the data processing.
