Graphical User Interfaces for Multi-Touch Displays supporting Public Exploration and Guided Storytelling of Astronomical Visualizations



Department of Science and Technology
Institutionen för teknik och naturvetenskap
LiU-ITN-TEK-A--18/041--SE

Grafiska användargränssnitt för multifunktionsdisplayer som stöder publik utforskning av astronomiska visualiseringar

Hanna Johansson
Sofie Khullar


Master's thesis in Media Technology at the Institute of Technology, Linköping University

Supervisor: Emil Axelsson
Examiner: Anders Ynnerman


Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

Linköping University | Department of Science and Technology
Master's thesis, 30 ECTS | Media Technology | LiU-ITN-TEK-A--18/041--SE

Graphical User Interfaces for Multi-Touch Displays supporting Public Exploration and Guided Storytelling of Astronomical Visualizations

Grafiska användargränssnitt för multifunktionsdisplayer som stöder publik utforskning av astronomiska visualiseringar

Hanna Johansson
Sofie Khullar

Supervisor: Emil Axelsson
Examiner: Anders Ynnerman

Linköpings universitet, SE-581 83 Linköping


© Hanna Johansson, Sofie Khullar


Abstract

This report presents the development and implementation of a graphical user interface (GUI) for multi-touch displays, as well as an application programming interface (API) for guided storytelling of astronomical visualizations. The GUI and the API are built using web technologies, and the GUI is rendered in an OpenGL environment. The API is meant to provide the infrastructure needed to create different stories for the public, based on astronomical data. Both the resulting GUI and the API are developed such that they can be further extended and customized for different purposes.


Acknowledgments

We would like to thank everyone involved at Linköping University and the Scientific Computing and Imaging (SCI) Institute at the University of Utah for giving us the opportunity to work on this master's thesis project, especially Anders Ynnerman and Dr. Charles Hansen for arranging the collaboration and making the project possible in the first place. We would also like to thank our supervisor Emil Axelsson and Alexander Bock for providing us with insights and guidance throughout the work. Special thanks also to Gene Payne in the OpenSpace developer group for not only being an awesome colleague and supporter of our work, but also a great friend who made us feel more than welcome in Utah and provided us with help and company during the entire stay. Matthew Territo, thank you for great laughs, awesome food and for sharing your craziest ideas with us.

Additionally, we would like to thank everyone that we have met during our years of study at Linköping University and during our studies abroad - friends, classmates, teachers. The student experience would not have been the same without the fun, cool and inspiring people around us. Lastly, we would like to thank our families for supporting us in good times and bad, always guiding us with love and great advice when we have needed it the most.

Hanna Johansson and Sofie Khullar, 2018


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables

1 Introduction
  1.1 OpenSpace
  1.2 Motivation
  1.3 Aim
  1.4 Research questions
  1.5 Delimitations

2 Related work
  2.1 Multi-touch interaction
  2.2 Web GUI
  2.3 Inside Explorer

3 Theory
  3.1 User Experience
  3.2 JavaScript UI frameworks
  3.3 3D coordinates to 2D screen space
  3.4 Navigation and transportation

4 Implementation
  4.1 Touch event handling
  4.2 Properties handling
  4.3 3D coordinates to 2D
  4.4 Design choices
  4.5 GUI
  4.6 Story API
  4.7 Transportation and navigation

5 Results

6 Discussion and analysis
  6.1 Results
  6.2 Implementation

7 Conclusion
  7.1 Research questions


List of Figures

2.1 Inside Explorer showing the Gebelein man. [3]
3.1 The design hierarchy of needs: functionality, reliability, usability, proficiency and creativity. [DesignHierarchy]
3.2 Sphere with a vector from the center position to the outer edge of the sphere.
3.3 The creation of a camera coordinate system with three perpendicular unit axes with the camera's position as the origin.
4.1 Overview of how the touch event handling works.
4.2 In Figure (a) the GUI can be seen as an overlay on top of OpenSpace, with Mars visible behind the GUI. In Figure (b) the layer mask for the GUI is displayed. Touch input on the white area (i.e. outside the layer mask) will send events to OpenSpace and input on the black area (i.e. the layer mask) will send events to the web GUI.
4.3 The start page of a story in the menu. Highlighted with blue rectangles are the information box displaying information to the user about the story, the dots at the bottom indicating which of the available stories is chosen, and one of the two arrows indicating the possibility to move between stories.
4.4 A few slides from the start menu.
4.5 Focus buttons in the story 'Solar System'.
4.6 Focus buttons in the story 'Jupiter and its Moons'.
4.7 Earth with label and info window active.
4.8 The time controller with the play button active and the pause button visible as an option to change from playing to pausing.
4.9 Event controller.
4.10 Sights controller.
4.11 Scale controller in its two states, inactive and active respectively. This particular button scales the moons in the current story and alternates between the moons' original sizes (1x original size, Figure (a)) and ten times their original sizes (10x original size, Figure (b)).
4.12 Toggle controllers for the story 'Galaxies'.
4.13 Utilities menu.
4.14 The story about weather and events on Earth with the help button active, displaying instructions to the user on how to interact with the visualization on the touch screen.
4.15 OpenSpace run in developer mode, displaying additional information as well as a menu for changing story (in this image highlighted with blue rectangles).
4.16 Overview of how the story communication works.
5.1 Final version of the story 'Jupiter and its Moons'.


List of Tables

4.1 Extraction from the story API documentation, showing some of the different functionalities.


1 Introduction

1.1 OpenSpace

OpenSpace is an open source astrovisualization software, tailored to spread knowledge and information about space through real-time data visualizations [9]. It is a collaborative project between Linköping University, NASA Goddard's Community Coordinated Modeling Center, the American Museum of Natural History, New York University's Tandon School of Engineering and the University of Utah's Scientific Computing and Imaging Institute. The software uses dynamic data to visualize the entire universe as we know it and allows interactive exploration of the universe. [6]

The platform enables interactive presentations of dynamic and time-varying processes and enables knowledge held by domain experts to be shared with the general public. Examples of available visualizations are image acquisitions of the New Horizons and Rosetta spacecraft, space weather phenomena and high-resolution images of planets. [7] New work is constantly performed to tailor and integrate different data processing and visualization methods to transform the raw data into something that can be better understood and grasped by non-experts. Currently the OpenSpace software enables interactive presentations in environments such as immersive dome theaters and virtual reality headsets. [5]

Another, very similar visualization and simulation software is Uniview [16]. Uniview, like OpenSpace, is a sophisticated system for the visual display and exploration of huge amounts of complex data about the universe [19].

1.2 Motivation

OpenSpace is currently a software application poorly adapted for users with little or no experience of how the software is built and run. Enabling the general public to explore and understand the complicated astronomical data that OpenSpace presents requires a guiding graphical user interface (GUI). A GUI provides the user with additional information about what is being displayed on the screen. The components that build up the GUI, such as buttons and labels, guide the user in the interaction with the data visualization.

The primary focus of OpenSpace has for a long time been the interactive presentation of dynamic data from observations (image sequences), astrophysical simulations (volumetric rendering) and space missions (observation geometry visualization) [13]. Little focus has, however, been placed on the user interface specifically and on how the user can actually interact with the available visualizations. Additionally, OpenSpace is currently primarily adapted for user interaction in a desktop environment. This motivates the work in this project of developing a GUI for multi-touch displays. Making OpenSpace available on large multi-touch displays, such as touch tables, will hopefully lead to the software being displayed in museum environments, exhibitions and science centers around the world.

1.3 Aim

The aim of this thesis project is to develop a GUI for multi-touch displays, such as touch tables. To do this, several functionalities within OpenSpace will have to be combined and adjusted for the purpose of letting users of all ages and backgrounds interact with the software and learn from it.

An additional goal of the project is to develop the GUI such that OpenSpace can be displayed to the general public and such that a user with no previous experience can interact with the software and learn from it. By developing an application programming interface (API), the concept of different stories about space is integrated into the software. The stories are meant to define scenes with adapted GUI components, which will enable a mix of independent exploration and guided storytelling for a broad target audience.
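As a rough illustration of what such a story definition might look like, the sketch below shows a hypothetical story object. All of the names here (identifier, focusNodes, guiComponents, and so on) are our own illustrative assumptions, not the actual OpenSpace story API.

```javascript
// Hypothetical sketch of a story definition for a story API of the kind
// described above. Every property name is an illustrative assumption.
const jupiterStory = {
  identifier: 'story_jupiter',
  title: 'Jupiter and its Moons',
  // Scene graph nodes the user could set as camera focus via GUI buttons.
  focusNodes: ['Jupiter', 'Io', 'Europa', 'Ganymede', 'Callisto'],
  // GUI components enabled for this particular story.
  guiComponents: { timeController: true, scaleController: true },
  // Initial simulation time for the scene (example value).
  startTime: '2018-07-01T00:00:00'
};

// A story-selection menu could then be populated by iterating over
// such definitions:
function storyTitles(stories) {
  return stories.map(function (story) { return story.title; });
}
```

A design like this keeps each story self-describing, so adding a new story amounts to adding one more definition object rather than changing GUI code.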

Topics that will be investigated are User Experience (UX), handling of properties, navigation in a C++ application and the underlying infrastructure. Along with these topics, previous work within OpenSpace will be studied and adjusted. For example, the current multi-touch navigation interface system will be targeted, as well as the already existing web-based GUI. The aim is to combine these functionalities and additionally look into how an API can be developed to add more value to the user experience when interacting with the software.

1.4 Research questions

Answers to the following research questions will be sought.

1. What design choices have to be considered when implementing a graphical user interface for large multi-touch displays that communicates scientific data?

2. What are the drawbacks and advantages of displaying information to the user in a web browser window compared with displaying information directly in a C++ application?

3. What difficulties need to be considered when allowing the user to explore freely while at the same time implementing predefined constraints of movement?

1.5 Delimitations

The project will be based on previous master's thesis work, which limits the choices of frameworks. The current web browser GUI is built with the JavaScript frameworks React and Redux, and therefore they will be used in this project. The web browser GUI also limits the development platform to Microsoft Windows only. The touch interaction gestures are also based on previous work and will not be changed for this project.

To keep the project within the time frame, formal user tests have not been given a specific slot of time during the development of the GUI. However, the design and the user experience will be tested continuously and informally throughout the entire project by colleagues and friends at the SCI Institute. A few qualitative testing sessions will be prioritized over many quantitative responses in order to receive meaningful feedback regarding our work.

The components developed will target touch interfaces only and not desktop environments. This is to keep a clear focus on the aim of the project - namely to develop a GUI for multi-touch displays.

2 Related work

Several students have done their master's thesis work related to the OpenSpace project, contributing to the development of the software. Previous related work includes making the software adapted to multi-touch exploration on touch tables, as well as adding React components and creating a web browser GUI (Graphical User Interface), making it possible to control the software from a web browser. Other related work includes similar applications where a GUI makes it possible for a user to interact with scientific data on touch screens.

2.1 Multi-touch interaction

In 2017, a master's thesis project within OpenSpace revolved around making OpenSpace available on multi-touch displays for public exploration and navigation [10]. Before that project was conducted, interaction with OpenSpace was limited to mouse and keyboard input, and required expert knowledge of the architecture of the system and the C++ GUI available at the time. This, of course, limited the interaction with OpenSpace to only one person at a time, and this person also had to have some knowledge about the software itself. By combining a velocity-based interaction model with a screen-space direct-manipulation formulation, Jonathan Bosson developed a user-friendly interface, which was integrated with a multi-touch navigation interface into the software. [10]
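As a minimal sketch of the velocity-based part of such an interaction model (our own simplification for illustration, not Bosson's actual formulation), a pan gesture can continue to drift after the fingers are lifted, with its velocity decaying by a friction factor each frame until it falls below a stop threshold:

```javascript
// Simplified velocity decay after a touch release. The friction factor
// and stop threshold are arbitrary illustration values, not taken
// from the interaction model in [10].
function decayVelocity(velocity, friction, threshold) {
  const next = velocity * friction;
  // Snap the drift to zero once it becomes imperceptible.
  return Math.abs(next) < threshold ? 0 : next;
}

// Each rendered frame would advance the camera by the current velocity
// and then decay it, so the motion eases out on its own after release.
```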

Enabling touch interaction on large tangible surfaces, such as touch tables, engages the user on a higher level and enhances the understanding of the navigation. In other words, the work of integrating touch interaction into OpenSpace was an important step in the process of making OpenSpace available to the general public and to users with little or no previous experience of the software. The learning time of the system's user interface decreased, and the added touch interaction functionality laid the foundation for further work on how to make OpenSpace accessible and usable in museum environments, exhibitions and science centers. [10]

2.2 Web GUI

Another master's thesis project within OpenSpace, which provides a fundamental foundation for the present work, was carried out during the summer of 2017 and focused on creating user interfaces using web-based technologies [14].

Previously, OpenSpace was limited to a C++ GUI, which had numerous drawbacks when it comes to further development of the software in the direction of making it more user-friendly. Therefore a project was carried out by Klas Eskilson, who developed and implemented a desktop user interface framework in OpenSpace with the use of web technologies and the JavaScript framework React. The React components were combined with a web socket server to render the GUI in the OpenGL environment, using the Chromium Embedded Framework (CEF). [14]

By adding React components and CEF to OpenSpace, further development of graphical user interfaces was improved, with the advantage of less time needed to develop, implement and improve a GUI. The React components are far more flexible when it comes to appearance and design than the previous C++ GUI, which was based on the framework Dear ImGui; this flexibility enabled styling adapted to a specific audience, such as users with little or no previous experience of the software. [14]
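The pattern described above - GUI components exchanging messages with the OpenSpace process over a web socket - can be sketched roughly as follows. The message shape, topic name, property URI and port are invented for illustration only; the real OpenSpace wire protocol may differ.

```javascript
// Build a JSON message asking the engine to set a property value.
// The topic name and property URI format are hypothetical examples,
// not the actual OpenSpace protocol.
function makeSetPropertyMessage(property, value) {
  return JSON.stringify({
    topic: 'setProperty',
    property: property,
    value: value
  });
}

// In a React component, such a message would be sent over a WebSocket
// connected to the engine process, e.g.:
//   const socket = new WebSocket('ws://localhost:4682');
//   socket.send(makeSetPropertyMessage('Scene.Earth.Renderable.Enabled', true));
```

Keeping the GUI on one side of a plain-text message channel like this is what lets the interface be developed and restyled independently of the C++ rendering engine.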

2.3 Inside Explorer

A software application visualizing data on multi-touch displays is not a new concept. The idea of using a GUI to interact with data on large screens has been realized before; one example is the interactive visualization system Inside Explorer [1].

Inside Explorer is an interactive exhibit system that enables museum and science center visitors to interactively explore data objects on touch screens, to learn and discover for themselves. The users of the system become explorers of the otherwise invisible interiors of unique artifacts and subjects [26]. The objects available for interaction and exploration have been scanned using CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) medical scanning systems. In general the idea is: "Anything that can be scanned, can be visualized". Users are able to peel away layers, rotate, zoom and cut through objects virtually, and thereby reveal hidden interior detail that was previously inaccessible. [2] One example of an object that can be explored through Inside Explorer is the Gebelein man, see Figure 2.1.

Figure 2.1: Inside Explorer showing the Gebelein man. [3]

The aim of this thesis project, as stated in Section 1.2, is to develop OpenSpace such that it can be used for science communication in public spaces. In other words, it is desired to make OpenSpace a software which can be displayed in interactive exhibits in museums and science centers, just like Inside Explorer. By using touch gestures, users of Inside Explorer can examine complex 3D data in an intuitive and fun way. [1] Since Inside Explorer is easy to use, requires no training or special knowledge, and is used by people of all ages, its audience is the same as the one targeted by this thesis project [2]. By investigating Inside Explorer, relevant conclusions can be drawn regarding what makes a GUI successful and how to motivate design choices.

3 Theory

In the introduction of this report it was stated that this work's contribution to the OpenSpace software is a graphical user interface (GUI) for multi-touch displays, as well as an application programming interface (API) for creating stories which highlight different parts of the available astronomical data. The development and implementation of the GUI and the API require some underlying theory. The theory covers user experience (UX), storytelling and design choices, JavaScript frameworks, coordinate handling, and navigation in a 3D environment displayed on a 2D screen.

3.1 User Experience

Developing a GUI for a specific target audience inevitably involves taking the user experience (UX) into account. How should the user experience the interaction with the software and the GUI? How can the GUI be designed to optimize the exploration of the scientific data presented? And additionally, can we use storytelling as a tool within astronomical visualizations to provide insights and enhanced knowledge for the user? To investigate the answers to questions like these, we need to present the theory behind storytelling as well as different design principles.

Storytelling

Storytelling is a technique for remembering things more easily by creating stories around them. The stories can both teach and entertain. In short, storytelling is a dramatic description of something.

There are several reasons why storytelling is a good way of teaching people things. First of all, it is a lot easier for someone to remember a story rather than just rational arguments or lists of descriptions. By adding the facts to a context, creating a bigger picture, people are more able to take in the information given. If the story also engages several senses, such as by describing smell, feel or appearance, more parts of our brains are active than if only words and facts are described.

Bringing storytelling into the OpenSpace project adds more possibilities and flexibility for the user. Previously, OpenSpace data had to be presented to the users by a developer or someone who is very familiar with the software, but adding a GUI based on storytelling enables any user to interact with the system on her own. The possibility of interaction encourages the user to not only look at the data presented, but actually control it with her own hands, read added descriptions and explore the data shown on screen.

Some benefits of using storytelling are: the creation of an emotional connection with the targeted audience (the users of the system), the possibility of creating richer content, and giving the users a reason to come back again for more.


When learning about storytelling, there are many sources to turn to, and there are several aspects and interpretations of the concept of storytelling. Constructing a Cultural Context through Museum Storytelling by Margaret DiBlasio and Raymond DiBlasio, published in 1983, examined an exhibition on the Vikings at the Minneapolis Institute of Arts and the Metropolitan Museum of Art. They investigated how that exhibition utilized storytelling to connect with children, and even though their seven principles of storytelling were written 35 years ago, they are still current today. [11]

Presented here are the DiBlasios' seven principles of museum storytelling. An effective museum story...

• ...balances entertainment and factual soundness.
• ...is compact.
• ...is concrete, employing highly visual language.
• ...is personally appealing.
• ...not only describes artifacts but tells how some of them are made.
• ...dislodges mistaken stereotypes.
• ...invites cross-cultural comparisons.

These principles can readily be applied to our work with a GUI for the OpenSpace software. The GUI should add value to the product - making it more entertaining and appealing for a user to explore scientific data about space. It should be visually appealing and straightforward, leaving the user without confusion. The GUI should also add information about objects in space and what is currently visualized on screen, telling the user about the history of space and how objects have evolved. Adding functionality to the GUI for the user to choose a date and/or geographical location on Earth may also encourage users to compare and discuss personal histories of places they have visited on Earth or events they have witnessed, such as hurricanes or solar eclipses.

In short, it is desired to bridge the gap between scientific discoveries and their public dissemination. By using the huge amount of data that exists about space and space missions from NASA and other sources, and combining it with storytelling and guided exploration, interactive experiences will help the public to engage with advanced space data as well as learn more about NASA missions, the solar system and outer space [8].

More information about how interactive visualizations used for science communication are supposed to be both exploratory and explanatory, as well as a description of the newly coined term exploranation, can be found in the article Exploranation: A New Science Communication Paradigm by Anders Ynnerman, Jonas Löwgren and Lena Tibell [25].

Stories

While researching storytelling, the idea of user stories came to our minds. A user story is a short, simple description of a feature or property, told from the perspective of someone who desires the new capability, usually a user or a customer of the system. User stories are usually part of an agile approach, and they strongly contribute to developers not only writing down features, but also discussing them. [23]


A user story typically follows one simple template:

As a <role>, I want <some feature or goal> so that <some reason> [23].

Planning the layout and functionality of the new OpenSpace GUI made it clear that several user stories were in focus throughout the entire work. These user stories all related to a user exploring different parts of space, one at a time. To distinguish between the different user stories, handling different parts of space to be visualized and explored, the concept of stories evolved. Examples of user stories that in the rest of this report will be presented as different stories are:

• As a user, I want to be able to see all the planets in the Solar System at once, so that I can compare them with each other.
• As a user, I want to explore Jupiter specifically, so that I can learn more about its moons.
• As a user, I want to be able to zoom in on Mars, so that I can see the high resolution data available on its surface.
• As a user, I want to see all the galaxies, so that I can analyze the different types of galaxies and see what differs between them.

To conclude, a story in this report is a scene in the OpenSpace software where only a certain part of space is displayed to the user. Each story focuses on one clear subject only, such as 'The Galaxies' and 'Jupiter and its Moons', and is accompanied by a GUI specifically designed for that story.

Design principles

When developing a graphical user interface (GUI) there are several design principles to be considered. There is no list of principles set in stone, but throughout the history of the human-computer interface a lot of research has been done within the field, and numerous people have attempted to define a set of general principles of interface design. As a tool for our work, a compilation of design principles has been studied and is presented here. Note that the design principles, or design goals, are not presented in order of importance, but alphabetically. [15]

As an overview, the design goals in creating a user interface are first presented in the list below; then some of them are described separately in more detail [15].

• Aesthetically Pleasing
• Clarity
• Compatibility
• Comprehensibility
• Configurability
• Consistency
• Control
• Directness
• Efficiency
• Familiarity
• Flexibility
• Forgiveness
• Predictability
• Recovery
• Responsiveness
• Simplicity
• Transparency
• Trade-Offs

To fulfill the goal of a GUI being aesthetically pleasing, it should provide visual appeal and be attractive to the eye. This means the design should provide meaningful contrast between screen elements, create motivated spatial groupings, align screen elements, and use color and graphics effectively and simply. By keeping these things in mind when designing and implementing a GUI, it is made more accessible and inviting to a user. [15]


Clarity covers several aspects of the design. The interface should be visually, conceptually and linguistically clear, meaning that everything from visual elements and functions to words and text should be understandable and clear to the user. Compatibility also covers different parts of the interface. The design must be appropriate for and compatible with the needs of the user, but task and product compatibility also have to be considered. [15]

A system should be easily learned and understood, providing a flow of actions, responses, visual presentations and information. In other words, it should be comprehensible. Additionally, this means that the user interacting with the system should know what to look at and what to do, as well as when, where, why and how to do it. Furthermore, the system's configurability enhances a sense of control, encourages an active role in understanding, and allows for differences in experience levels as well as for personal preferences. A good default configuration must be provided, and settings for configuration and reconfiguration must be available. [15]

The goal of consistency is especially important to keep in mind when implementing and designing for an already existing software system. The system should look, act and operate the same throughout, so as not to confuse the user. Similar components should simply have a similar look, have similar uses and operate similarly. This also means that the same action should always yield the same result, that standard elements should not change position and that the function of elements should not change. [15]

The next design principle is control. Control means that the user feels that the system is responding to her actions, that the user is in charge. The interface should present a tool-like appearance, providing the tools for the user to do what is intended while feeling in control. Adjectives describing an interface that provides control are simple, predictable, consistent, flexible, customizable and passive. An interface like this enables the user to control the interaction. Directness is closely related to this: the user is in control and the system provides direct ways to accomplish tasks, while available alternatives should also be visible. In short, tasks should be performed by directly selecting an object and then selecting an action, after which the action is performed. [15]

Efficiency in this context means minimizing eye and hand movements, and other control actions. The transitions between different system controls should be easy and free, navigation paths should be as short as possible and the eye movement across the screen should be sequential, natural and obvious. [15] When designing for large screens this is especially important to keep in mind, since it is desirable to keep the user's attention on relevant elements of the screen. To make the interaction with the system even more efficient, the principle of familiarity is used. The interface should build on the user's already existing knowledge of the system and/or similar interfaces. By using concepts, terminology, workflows and spatial arrangements that the user is already familiar with, the learning time of the system is reduced and the user can be more productive. [15]

One common and well-known principle, flexibility, concerns the system's ability to respond to individual differences in people. The system must be sensitive to the differing needs of its users, taking into account the user's knowledge and skills, experience, personal preferences and habits. A well designed, flexible system also increases user control. Furthermore, increased user control most often decreases user errors, but human errors can never be totally avoided; hence the principle of forgiveness. The system must be able to tolerate and forgive common and unavoidable human errors, as well as prevent errors from happening in the first place, provide constructive messages when an error occurs, and above all protect against errors with catastrophic outcomes. [15]


3.2. JavaScript UI frameworks

Predictability targets the fact that the user should be able to predict how the system will behave. The user should be able to anticipate the progression of each task, and all the user's expectations should be fulfilled uniformly and completely. Predictability is achieved by providing distinct and recognizable screen elements, and it is greatly enhanced by design consistency.

Next up is the principle of recovery, which means that a user should be able to retract an action by undoing it. The user should never lose her work as a result of an error on her part, or errors made by the system, such as hardware, software or communication problems. Recovery should be easy and natural to perform, and the goal is stability. Responsiveness concerns the system's ability to respond rapidly to the user's requests. Immediate feedback should be given, whether it is visual, auditory or textual.

The last three principles to be briefly described are simplicity, transparency and trade-offs. Simplicity is quite self-explanatory; an interface should be as simple as possible. This is achieved by presenting common and necessary functions first, providing uniformity and consistency, as well as providing defaults. Transparency in short means that the user should never be forced to think about the technical details of the system; the user should be able to focus on the task of interacting with the system, not on the computer communication process. Finally, the trade-off principle simply states that no matter how well a system is designed there will always be trade-offs. The designer must weigh different alternatives when it comes to decisions about time, cost, accuracy and ease of use. An easy rule is that people's requirements always take precedence over technical requirements.

To simplify and summarize the many design principles, they can be narrowed down to one design hierarchy of needs, as can be seen in Figure 3.1. Beginning from the bottom of the pyramid, it is clear that the first priority will always be functionality: making sure the design is compatible with the rest of the system, configurable and working properly. The next step targets reliability, covering principles such as forgiveness and recovery. Moving higher we reach usability, targeting principles such as clarity, comprehensibility and efficiency. At the top of the pyramid, proficiency and creativity are found. Here is where the design becomes aesthetically pleasing and flexible enough to meet the different needs of the user.

3.2 JavaScript UI frameworks

JavaScript frameworks are widely used today in modern web development. Using frameworks helps developers build better and more complex applications. The frameworks React and Redux will be used in this project.

React

React is an open source JavaScript framework developed for creating interactive user interfaces. React is based on creating components that manage their own state. The components can be reused and composed to create complex user interfaces. When there is a change of a component's state, React renders a virtual DOM (Document Object Model) and compares it with the previous virtual DOM. The differences are calculated and applied to the real DOM. This leads to an efficient way of updating only the changed component and not the whole view. [20]
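The idea behind the virtual DOM comparison can be illustrated with a much-simplified sketch. This is an illustration of the principle only, not React's actual reconciliation algorithm: two plain-object trees are compared and only the paths whose values changed are collected for updating.

```javascript
// Simplified illustration of virtual DOM diffing: compare two plain-object
// trees and collect the paths whose values changed. React's real algorithm
// is far more sophisticated, but the principle is the same.
function diff(prev, next, path = '') {
  const changes = [];
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
  for (const key of keys) {
    const p = prev[key];
    const n = next[key];
    const keyPath = path ? path + '.' + key : key;
    if (p !== null && n !== null && typeof p === 'object' && typeof n === 'object') {
      changes.push(...diff(p, n, keyPath)); // recurse into subtrees
    } else if (p !== n) {
      changes.push(keyPath); // only this path would need a real DOM update
    }
  }
  return changes;
}
```

Only the collected paths would then be applied to the real DOM, which is what makes the update efficient.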


Figure 3.1: The design hierarchy of needs; functionality, reliability, usability, proficiency and creativity. [24]

JSX

React components can be created in different ways; one way is by using JSX. JSX is an XML-like syntax extension to JavaScript. Using JSX with React makes it easier for a developer to understand what the component will look like. [17]
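As a sketch of what JSX compiles down to: a JSX tag becomes a call to React's createElement. The minimal stand-in below returns a plain object instead of a real React element, purely to illustrate the shape; the FocusButton component name is only an example.

```javascript
// Minimal stand-in for React.createElement, for illustration only:
// it returns a plain object describing the element.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// JSX such as  <FocusButton name="Earth">Earth</FocusButton>
// compiles to roughly the following call:
const el = createElement('FocusButton', { name: 'Earth' }, 'Earth');
```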

Redux

Redux is an open source JavaScript library for handling application state. Redux is based on three principles and by following them the application state is made predictable. The principles are:

• ’Single source of truth’
• ’State is read only’
• ’Use pure functions for changes’

In short, this can be described as follows: there is one store which holds the current state of the entire application; the only way to change the state is by emitting an action; and pure functions are written to specify how the actions change the state. A pure function takes the previous state and an action, and returns the next state. [21]
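The three principles can be sketched without the Redux library itself: one state object, actions as plain objects, and a pure reducer function mapping the previous state and an action to the next state. The SET_STORY action type and the state shape are made-up examples, not this project's actual code.

```javascript
// One state object ('single source of truth'), treated as read-only.
const initialState = { currentStory: null };

// A pure function: takes the previous state and an action, and returns
// the next state without mutating the previous one.
function reducer(state = initialState, action) {
  switch (action.type) {
    case 'SET_STORY':
      return { ...state, currentStory: action.story };
    default:
      return state;
  }
}

// The only way to change the state is to emit an action.
const next = reducer(initialState, { type: 'SET_STORY', story: 'story_solarsystem' });
```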

JSON

JSON stands for JavaScript Object Notation and is a lightweight data-interchange format. JSON syntax is easy for humans to read and understand because it is based on data objects with attribute-value pairs. The JSON format is plain text, and JavaScript has a built-in function to convert the text into a JavaScript object. [18]
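For example, JavaScript's built-in JSON functions convert between text and objects; the story fields below are illustrative.

```javascript
// JSON text, e.g. as it could appear in a story file.
const text = '{"storytitle": "Solar System", "timecontroller": true}';

const obj = JSON.parse(text);          // text -> JavaScript object
const roundTrip = JSON.stringify(obj); // object -> JSON text again
```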


3.3. 3D coordinates to 2D screen space

3.3 3D coordinates to 2D screen space

The conversion from 3D coordinates to 2D screen space is essential when working with a 3D visualization and a 2D web browser. The 2D screen space needs to know the on-screen position of objects in the 3D world in order to, for example, show a label next to an object.

3D point to 2D position

The object’s position in the 3D world space is known, and the first step in the conversion is to calculate the clip space coordinates. The clip space coordinates are calculated using Equation 3.1, where the camera’s projection and view matrices are used. From the clip space coordinates, the Normalized Device Coordinates (NDC) can be calculated with Equation 3.2. The division by the w component normalizes all the coordinates; this is called the perspective divide. The NDC are in the range -1 to 1, with the origin at the center of the screen. The last step is to convert the NDC to screen space coordinates, which have their origin in the lower left corner. Equation 3.3 is used, where ndc is the normalized device coordinates and res is the screen resolution.

clipSpace = cam.projectionMatrix * cam.viewMatrix * worldPos (3.1)

NDC = clipSpace.xyz / clipSpace.w (3.2)

screenSpacePosition = ((ndc.x + 1) * res.x / 2, (ndc.y + 1) * res.y / 2) (3.3)
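Equations 3.1–3.3 can be sketched in JavaScript with plain arrays. This is a simplified illustration, not OpenSpace's actual implementation; matrices are stored row-major here, and the view-projection matrix is assumed to be precomputed.

```javascript
// Multiply a 4x4 matrix (row-major) with a 4-component vector.
function mat4MulVec4(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[row][col] * v[col];
    }
  }
  return out;
}

// worldPos is [x, y, z, 1]; res is [width, height] in pixels.
function worldToScreen(viewProj, worldPos, res) {
  const clip = mat4MulVec4(viewProj, worldPos);       // Equation 3.1
  const ndc = [clip[0] / clip[3], clip[1] / clip[3]]; // Equation 3.2 (perspective divide)
  return [
    (ndc[0] + 1) * res[0] / 2,                        // Equation 3.3
    (ndc[1] + 1) * res[1] / 2,
  ];
}
```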

Radius sphere in 3D to 2D screen space

The conversion from the radius of a 3D sphere to a 2D radius is based on the same method as described above for a single point. To calculate the radius in 2D screen space, the center position of the sphere, the radius and the up vector in world space must be known.

The length of the vector between the center position and the outer ’top’ edge of the sphere gives the radius of the sphere. Figure 3.2 illustrates the vector and the two points. Using Equation 3.4, the radius position in world space is obtained, where upVector is the up vector in model view space. The radius position and the center position are then converted to screen space using the method in the previous section. The radius in 2D screen space is calculated using Equation 3.5 followed by Equation 3.6, using the screen space positions obtained from the previous equations.

radiusPosition = worldPos + planetRadius * normalize(upVector) (3.4)

radiusScreenSpaceVector = screenSpacePosition − radiusScreenSpacePosition (3.5)

radius2D = |radiusScreenSpaceVector| (3.6)
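Given the two screen space positions, the 2D radius is the length of their difference vector; a minimal sketch:

```javascript
// The 2D radius is the length of the vector between the sphere's center
// and its 'top' edge, both already converted to screen space.
function radiusInScreenSpace(centerScreen, radiusPosScreen) {
  const dx = centerScreen[0] - radiusPosScreen[0];
  const dy = centerScreen[1] - radiusPosScreen[1];
  return Math.hypot(dx, dy);
}
```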


3.4. Navigation and transportation

Figure 3.2: Sphere with a vector from center position to the outer edge of the sphere.

3.4 Navigation and transportation

Navigation in an OpenGL environment is complex and can be done in multiple ways depending on the purpose of the specific navigation handling. There are several built-in functions for handling positions of objects and the camera in the scene. One example is the function gluLookAt(), which creates a viewing matrix derived from an eye point, a reference point indicating the center of the scene, and an up vector [22].

In general, no navigation in OpenSpace can be done if it is not known where the camera is in relation to the objects in world space. To define a camera we need to know its position in world space, the direction it is looking in, a vector pointing to the right and a vector pointing upwards from the camera. From those known values we can derive the camera/view space, where all vertex coordinates are seen from the camera’s perspective, with the camera as the origin of the scene. How the view matrix transforms all the world coordinates into view coordinates can be seen in Figure 3.3. [12]

Figure 3.3: The creation of a camera coordinate system with three perpendicular unit axes with the camera’s position as the origin.

The focus in this master thesis work regarding navigation and transportation is to facilitate navigation by predefining camera movements, as well as to limit the navigation in a way that makes it easier for the user to maintain focus on what is most relevant.

In Section 3.1 the definition of stories was presented, and within every story there is a need to be able to explore different objects in space, such as planets and moons. To be able to look at these objects up close, the user needs a way to get to the chosen object. For example, if the user decides that she wants to explore Venus she needs to be able to


somehow get there. Instead of letting the user navigate there on her own, it is possible to create predefined movements, taking the user to Venus as she chooses to go there by clicking a certain component in the GUI (more information regarding picking a focus can be found in Section 4.5).

One way to navigate the camera from one position to another is to let the camera move along a straight vector from point A to point B. This gives the user the impression of traveling, or flying, from one place in space to the new, chosen position.

Previously mentioned was also the delimitation of the navigation. As the concept of stories is introduced to OpenSpace, it is desired that the user stays within the focus of each story. As an example: if the user has chosen a story that focuses on Jupiter and its moons, she should not be able to navigate to other parts of space that have nothing to do with Jupiter and its moons; she should simply stay within the chosen focus frame. How OpenSpace is built therefore has to be studied and considered when later implementing limitations on the user navigation.

More information about the difficulties of accurate, fast and dynamic positioning and navigation in OpenSpace can be found in the article Dynamic Scene Graph: Enabling Scaling, Positioning, and Navigation in the Universe by Emil Axelsson, Jonathas Costa, Cláudio Silva, Carter Emmart, Alexander Bock and Anders Ynnerman. They address the challenge of seamlessly visualizing astronomical data across large ranges of distance, size and resolution. [4]


4 Implementation

The work in this project can be divided into five main parts: communication between touch interactions and the web GUI, the story API, coordinate conversion between 3D and 2D, the web GUI, and transportation in the OpenGL environment. This chapter covers the implementation of these parts of the project.

4.1 Touch event handling

The GUI is implemented and adapted for multi-touch displays. The user input will therefore be touch events that have to be processed. The touch event handling in this project utilizes an event handler from previous work [14]. The event handler's main task is to distinguish whether a touch event hits the GUI or the OpenSpace content, and to pass the event to the right handler. Figure 4.1 illustrates an overview of how the touch event handling process works.

Figure 4.1: Overview of how the touch event handling works.

The first step in the process is to check whether the touch input corresponds to the web GUI or the OpenSpace content. This is done by using a transparency mask, created in previous work. An example of how the mask works can be seen in Figure 4.2b, with the corresponding original view in Figure 4.2a. Interactions on the white area in Figure 4.2b are passed on to OpenSpace's touch interaction module to handle. Interactions on the black area are passed to the web GUI.

The web GUI is disabled while OpenSpace's touch interaction module handles an event, and the touch interaction module is likewise disabled while the web GUI handles an event.
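The routing step can be sketched as follows. This is a simplified illustration; maskAt, guiHandler and openSpaceHandler are hypothetical names, not the actual functions from the previous work.

```javascript
// Route a touch event either to the web GUI or to OpenSpace's touch
// interaction module, based on the transparency mask.
// maskAt(x, y) is assumed to return true where the GUI layer mask is opaque.
function routeTouchEvent(event, maskAt, guiHandler, openSpaceHandler) {
  if (maskAt(event.x, event.y)) {
    guiHandler(event);       // black area in the mask: the web GUI handles it
  } else {
    openSpaceHandler(event); // white area: OpenSpace's touch module handles it
  }
}
```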

4.2 Properties handling

The web GUI needs to communicate with OpenSpace to be able to show the right content at the right position. The communication happens through different topics and properties. The topics are requests with different purposes. The properties can be a group of settings or information attached to an OpenSpace object [14].


4.3. 3D coordinates to 2D

(a) The web GUI on top of OpenSpace. (b) The layer mask.

Figure 4.2: In Figure (a) the GUI can be seen as an overlay on top of OpenSpace, with Mars visible behind the GUI. In Figure (b) the layer mask for the GUI is displayed. Touch input on the white area (i.e. outside the layer mask) will send events to OpenSpace and input on the black area (i.e. the layer mask) will send events to the web GUI.

Four topics are used in this project:

• The subscription topic is used to subscribe to the value of a property.
• The get topic is used to get the value of a property.
• The set topic is used to set the value of a property.
• The trigger topic triggers a trigger property.

The subscription topic is used for all the properties that can be changed from another instance of the web GUI, because all the instances of the GUI should show the same content. For example, for the positions of the labels (see Section 4.5), the subscription topic is used to update the position continuously. The get topic is used, for example, when checking whether there is an active story, see Section 4.6. An example of the set topic is when a new story is chosen: all the story-specific properties are then set with the set topic. The trigger topic is used when a function should be triggered, for example by the overview button, see Section 4.5.
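The four topics can be sketched as message shapes. The field and property names here are hypothetical; the actual message format is defined by OpenSpace's communication layer and may differ.

```javascript
// Build a message object for one of the four topic types.
function makeTopicMessage(topic, property, value) {
  switch (topic) {
    case 'subscribe': return { topic, property };        // follow a property's value
    case 'get':       return { topic, property };        // read a property once
    case 'set':       return { topic, property, value }; // write a property
    case 'trigger':   return { topic, property };        // fire a trigger property
    default: throw new Error('unknown topic: ' + topic);
  }
}
```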

4.3 3D coordinates to 2D

The conversion of 3D coordinates to 2D screen space is implemented to make it possible to track an object's position in the OpenSpace 3D world from the 2D web GUI. This can be used to, for example, place a label or an information button next to an object. The conversion can be performed for a single point and extended to calculate the radius of a sphere. The theory and equations can be seen in Section 3.3.

To place labels and information buttons next to, for example, a planet, the conversion between the planet's 3D position and the 2D screen space needs to be updated often. The conversions are performed in OpenSpace and then sent to the web GUI. To make the label follow the planet, the position needs to be passed to the web GUI every time the planet moves. To limit the amount of data sent, a threshold value was set: the planet has to move more than one pixel on screen to trigger data to be sent.
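The one-pixel threshold can be sketched as a simple check (a sketch only; the actual check is performed on the OpenSpace side):

```javascript
// Only send a new label position when the object has moved more than
// thresholdPx pixels on screen since the last position that was sent.
function shouldSendUpdate(prev, next, thresholdPx = 1) {
  const dx = next[0] - prev[0];
  const dy = next[1] - prev[1];
  return Math.hypot(dx, dy) > thresholdPx;
}
```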

4.4 Design choices

Designing and implementing a GUI for touch tables, or any screen in general, requires some important design choices to be made and considered. A large screen such as a touch table differs


4.5. GUI

in several aspects from a regular computer screen operated with a mouse and keyboard. Three examples considered in our design process and implementation are listed below:

1. The screen is a lot bigger.

2. Events are controlled by touch interaction with hands, not a mouse.

3. The screen can be tilted to either be seen from above (horizontally) or from the side (vertically).

The properties mentioned above require some consideration when designing a GUI to achieve the best possible user experience.

The size of the screen is important to take into account when implementing elements on the screen. Objects that may not look very big on a regular laptop screen might seem unreasonably large on the touch screen. For the GUI to work well on both larger and smaller screens, it was decided early on to implement responsive elements.

The fact that interaction is controlled by the user's hands, combined with the possibility of tilting the screen into different positions, adds other design constraints. What if the user is a child whose hands cannot reach the top of the screen when it is tilted vertically? The problem would be the same for a user in a wheelchair: clickable items should not be placed at the top of the screen, since this may exclude users from interacting with the system. If the screen, on the other hand, is displayed horizontally, users may not reach its middle area. The solution was to place the most important interaction buttons close to the bottom of the screen and mainly display readable, non-interactive information higher up on the screen; see an example in Figure 4.14.

Another design choice made early on in the working process was to implement reusable elements. Buttons and controllers of different types (further explained in Section 4.5) build up the menu in every story, no matter the content of the story. If the menus in the entire system look largely the same, this was assumed to give better consistency and cause less confusion for the users.

4.5 GUI

The GUI consists of three parts: the start menu, the menu bar and the labels. The menu bar is the core of the GUI, where all the different story components are stored.

Start menu

Providing the user with the possibility to choose among different stories was one feature decided on early in the working process. To enable this feature a start menu was implemented.

The start menu was implemented as an image carousel, much like the menu in Inside Explorer [2], visualizing each story as an image combined with additional information. The image is a screenshot taken in one of the scenes of the story, giving the user a 'sneak peek' of what the story is about. Additionally, a text box is displayed on top of the image, containing the title of the story, a short text about it and a button encouraging the user to pick and explore this particular story, see Figure 4.3.


The start menu also indicates which of the available stories the user is currently looking at, visualized by dots at the bottom of the screen. The user can move between the stories using two arrows, left and right respectively, as well as by clicking directly on one of the dots.

Figure 4.3: The start page of a story in the menu. Highlighted with blue rectangles are the information box displaying information about the story, the dots at the bottom indicating which of all the available stories is chosen, and one of the two arrows indicating the possibility to move between stories.

A few slides from the start menu can be seen in Figure 4.4. In the figure the slides represent the stories 'Weather and Events on Earth', 'The Galaxies' and 'Jupiter and its Moons'.

Figure 4.4: A few slides from the start menu.

Focus buttons

To enable the user to more easily navigate and focus on different objects in a story, focus buttons were implemented. A focus button can be generated for any scene graph node available in OpenSpace, such as planets, moons or satellites. The button consists of an image of the object, if available in the image database, and the name of the object. If the object lacks a corresponding image, an icon is shown on the button instead. An example of focus buttons can be seen in Figure 4.5, representing the planets in the Solar System. Another example can be seen in Figure 4.6, where the buttons represent the objects possible to explore in the story 'Jupiter and its Moons'.


The set of focus buttons (one for each object to explore in the story) is almost always accompanied by an overview button. This button can also be seen in the figures mentioned above and is further described below.

Figure 4.5: Focus buttons in the story ‘Solar System’.

Figure 4.6: Focus buttons in the story ‘Jupiter and its Moons’.

Overview button

An overview button was implemented to provide clarity and give the user the possibility to understand each story better. The button navigates the user from the current camera frame to a position where the entire scene can be viewed and explored. In this way the user can explore the objects in the scene in detail by navigating between them using the focus buttons (see above), as well as see the objects in a bigger context. The button, together with some focus buttons, can be seen in Figure 4.5 and Figure 4.6.

As an example, in the story covering the Solar System, the planets within the Solar System can be explored up close to each planet's surface using the focus buttons, but by pressing the overview button the entire Solar System can be seen at once, showing all the planets in relation to each other.

Labels and information buttons

To provide the user with information about different scene graph nodes (objects) in OpenSpace, labels and information buttons were implemented; see an example in Figure 4.7. The labels are text showing the name of the scene graph node and are placed just above it. The information button was created to provide additional information if the user is interested in learning more about the scene graph node. The button is an information icon, and when it is pressed an information window is shown. The information is retrieved from the story JSON file and differs for each story. If no information is specified in the story, the information button is not visible.

The positions of the labels are calculated in OpenSpace and then sent to the web browser. The positions come from the implementation for 3D coordinates to 2D, see Section 4.3.

Controllers

To allow the user to control and change the OpenSpace scene, different controller components were created. Some of the components are created as popup menus to save space on the main menu bar, and some are buttons. The popup menus show a list of the data from the story API. The buttons can be used to toggle different properties in OpenSpace.


Figure 4.7: Earth with label and info window active.

Time controller

To speed up or slow down the simulation, the time controller component was created, see Figure 4.8. The controller consists of five buttons: two for speeding up, two for slowing down, and one for pause and play. The last pressed button is colored blue to indicate that it is active.

Figure 4.8: The time controller with the play button active and the pause button visible as an option to change from playing to pausing.

Event controller

The event controller component makes it possible to go to a specific event in time. When the button is pressed, a popup menu is shown where it is possible to choose between different events. The component sets the time and moves the camera to the location when and where the event happens; see an example in Figure 4.9.

Sights controller

To enable the user to navigate quickly between different places in space, the sights controller component was created; see an example in Figure 4.10. The sights controller moves the camera to the location of the selected sight.

Scale controller

The scale controller makes it possible to scale scene graph nodes up and down. The scale controller shows the factor by which the scene graph node will be scaled and the name, or type, of the node. An example of a scale controller that scales moons can be seen in Figure 4.11.


Figure 4.9: Event controller. Figure 4.10: Sights controller.

(a) Inactive button. (b) Active button.

Figure 4.11: Scale controller in its two states, inactive and active respectively. This particular button scales the moons in the current story and alternates between the moons' original size (1x original size, Figure (a)) and ten times their original size (10x original size, Figure (b)).

Toggle controller

The toggle controller is a button that toggles any boolean property in OpenSpace. The component consists of a name and a default value, and toggles the property it is connected to. Figure 4.12 shows an example of how the toggle controller can be used. The toggle controllers in the figure toggle five different settings for the story 'Galaxies'.

Figure 4.12: Toggle controllers for the story ’Galaxies’.

Utilities

To provide consistency within the GUI and maintain clarity for the user, it was decided early on that some sort of utilities menu should be implemented in each story, no matter the content of the story. The choice was made to build this menu with three buttons, see Figure 4.13. The buttons are 'Home', 'Help' and 'Info'.

The home button takes the user to the start menu, where a new story (or the same one as before) can be chosen. The help button displays an image with illustrations instructing the user how to interact with OpenSpace on the touch screen, see Figure 4.14. The instructions can be hidden by clicking the button again, or they automatically fade and disappear after ten seconds. The info button displays a popup element with information about the content of the story, making it possible for the user to learn more about what is currently shown on the screen.
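The help overlay behavior (hide on a second click, or automatically after ten seconds) can be sketched as follows; show and hide are hypothetical callbacks standing in for the actual GUI code.

```javascript
// Returns a click handler for the help button: the first click shows the
// instructions and starts a ten-second timer; a second click, or the
// timer firing, hides them again.
function makeHelpToggle(show, hide, timeoutMs = 10000) {
  let timer = null;
  let visible = false;
  return function onHelpButton() {
    if (visible) {
      clearTimeout(timer);
      hide();
      visible = false;
    } else {
      show();
      visible = true;
      timer = setTimeout(() => { hide(); visible = false; }, timeoutMs);
    }
  };
}
```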


Figure 4.13: Utilities menu.

Figure 4.14: The story about weather and events on Earth with the help button active, displaying instructions to the user on how to interact with the visualization on the touch screen.

Developer mode

To honor the design principle of flexibility and meet the needs of different users of OpenSpace, it was decided to develop a developer mode. The main target audience of this thesis work is the general public, but for further development of the GUI and OpenSpace in general, the developers of the software will inevitably interact with it as well. For obvious reasons, the developers of OpenSpace have different needs than a user with no previous experience of the software.

The developer mode is easily toggled by pressing a specific key, 'D', on the keyboard, quickly displaying more information and enabling additional features not available in the default mode. A screenshot of the developer mode can be seen in Figure 4.15, where the additional information displayed to the user is highlighted by blue rectangles. The information displayed in the upper left corner, as well as in the lower right corner, is information that has previously been shown to developers of OpenSpace. This information can be highly relevant when developing OpenSpace, but is of no value to a user only interested in interacting with the newly developed web GUI and the stories provided. The information shown includes, for example, the simulation increment (in other words, the simulation speed), the average frame rate in FPS (frames per second), the git branch currently used and the version of OpenSpace.


4.6. Story API

Figure 4.15: OpenSpace running in developer mode, displaying additional information as well as a menu for changing stories (highlighted with blue rectangles in this image).

In the upper right corner a menu of buttons was implemented. The menu enables the developer to quickly switch between the available stories, instead of always having to reach the start menu by clicking the home button. This, in turn, reduces the working time for developers when they develop and interact with OpenSpace.

4.6 Story API

To make the creation of stories flexible and easy, a story API was created. The story decides which GUI components will be visible, as well as their content. The process of creating a story is thereby simplified to writing a story file; the GUI then adds components with the content from that file.

The story files use the JSON data format because it is readable and works well with JavaScript. An overview of the process of reading the story file and creating the GUI is illustrated in Figure 4.16. The web browser starts by asking for a specific JSON story file and, if the file is returned, reads the content and saves it in Redux. The story content is saved in Redux to make it accessible to all the components in the web GUI. The content of the story decides two main things: which GUI components will be rendered, and which properties should be set in OpenSpace.
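The step from story content to rendered components can be sketched like this. The fields follow Table 4.1, but the story content and the componentsFor helper are hypothetical examples, not the project's actual code.

```javascript
// A hypothetical story file, modeled on the fields in Table 4.1.
const storyJson = `{
  "storyidentifier": "story_solarsystem",
  "storytitle": "Solar System",
  "overviewlimit": "1.5e+13",
  "focusbuttons": ["Earth", "Sun", "Moon"],
  "timecontroller": true
}`;

const story = JSON.parse(storyJson);

// Decide which GUI components to render from the story content.
function componentsFor(story) {
  const components = [];
  if (story.focusbuttons && story.focusbuttons.length > 0) {
    components.push('FocusButtons');
  }
  if (story.timecontroller) {
    components.push('TimeController');
  }
  if (story.storyinfo) {
    components.push('InfoButton');
  }
  return components;
}
```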

The web browser is responsible for keeping track of the current story and needs to set properties in OpenSpace using Lua scripts. To handle story-specific properties in OpenSpace, a story handler class was created. The story handler is, for example, responsible for keeping track of the current story, the zoom limits and the interesting focus nodes, which need to live inside OpenSpace and cannot be kept in the GUI.

The story handler class is also necessary for handling several instances of the web browser GUI. When there are several instances of the GUI, all browsers should show the same content.


The process of handling stories can be divided into three cases:

No current story

If there is no current story set (i.e. the user is looking at the start menu) and the user selects one, the web browser sends commands to OpenSpace to set the properties of the chosen story. The GUI displays the components and content that belong to the selected story.

New web browser instance

When there is a new instance of the web browser GUI, the browser checks whether there is a current story in the story handler. If there is, the web browser does not send any commands to OpenSpace, because all the properties are already set; it simply displays the components that belong to the current story in the story handler class.

Current story selecting new story

If there is already a current story set and the user selects a new one, the web browser first tells OpenSpace to reset all properties to their default values; then the new story file is read and all its story-specific properties are set.
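The three cases can be sketched as a single decision function. The `storyHandler` object and the `sendToOpenSpace` callback below are illustrative stand-ins for the real OpenSpace interfaces, not actual API names:

```javascript
// Illustrative sketch of the three story-handling cases described above.
function selectStory(storyHandler, newStory, sendToOpenSpace) {
  if (storyHandler.currentStory === null) {
    // Case 1: no current story -- apply the chosen story's properties.
    sendToOpenSpace(newStory.properties);
    storyHandler.currentStory = newStory;
  } else if (storyHandler.currentStory === newStory) {
    // Case 2: a new browser instance joins -- the properties are already
    // set in OpenSpace, so only the GUI components need to be rendered.
  } else {
    // Case 3: switching stories -- reset to defaults first, then apply.
    sendToOpenSpace('reset-to-defaults');
    sendToOpenSpace(newStory.properties);
    storyHandler.currentStory = newStory;
  }
  return storyHandler.currentStory;
}
```

Centralizing the decision in one place mirrors the role of the story handler class: every browser instance consults the same current-story state, so no instance applies properties twice.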

Figure 4.16: Overview of how the story communication works.

The story API has many different functionalities implemented. Table 4.1 shows an extract of the documentation for the story API.

API             | Example input                  | Description
storyidentifier | 'story_name'                   | Should be the same as the filename.
storytitle      | "Name of story"                | Displayed as the title of the story.
storyinfo       | "Information about the story"  | Displayed in the information component.
overviewlimit   | "1.5e+13"                      | Zoom-out limit; also used for the Overview button.
inzoomlimit     | "1.2e+25"                      | Zoom-in limit.
focusbuttons    | ["Earth", "Sun", "Moon"]       | List of scene graph nodes.
start           | {"Earth", "2015-07-14", {pos}} | Sets the start conditions.
timecontroller  | true/false                     | Whether to show the time controller.
hideplanets     | ["Mercury", "Venus", "Earth"]  | Planets to hide.
infofile        | "info_jupitermoons"            | Name of the file with information for the story.

Table 4.1: Extract from the story API documentation, showing some of the different functionalities.


The limitations of the API are:

• It is mandatory for all stories to define 'storyidentifier', 'overviewlimit' and 'start'.

• The story API assumes that all the planets in the solar system are loaded by default.

• All story files must be named story_storyname.

• All story information files must be named info_storyname.
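These limitations lend themselves to a simple validation step before a story is loaded. The sketch below is a hypothetical validator, not part of the actual implementation; the field names follow Table 4.1:

```javascript
// Sketch of a validator enforcing the story API limitations listed above.
function validateStoryFile(filename, story) {
  const errors = [];
  // Mandatory fields for every story.
  for (const field of ['storyidentifier', 'overviewlimit', 'start']) {
    if (!(field in story)) {
      errors.push(`Missing mandatory field: ${field}`);
    }
  }
  // All story files must be named story_<storyname>.
  if (!filename.startsWith('story_')) {
    errors.push('Story file name must start with "story_"');
  }
  return errors;
}
```

Running such a check when a story file is read would surface misnamed files and missing mandatory fields early, instead of failing later when the GUI tries to render the story.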

4.7

Transportation and navigation

It was decided early on to let the user "fly" to a planet chosen with the focus buttons in the GUI. In other words, a predefined movement had to be implemented, where the camera in the OpenGL environment moves from one position to another in the 3D space. Vector arithmetic was used to calculate how to move the camera along a straight vector, giving the user the feeling of flying to a new position by simply moving closer and closer to the newly chosen object, in this case another planet, moon or equivalent. Implementing the functionality to choose an object and travel there by clicking a button in the GUI added a feature previously not available in OpenSpace, fulfilling the user's need to explore different planets from a closer distance.
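The vector arithmetic behind this movement can be illustrated with a minimal sketch: each frame, the camera advances a fraction of the remaining straight-line distance to the target. Positions here are plain `{x, y, z}` objects; the real OpenSpace camera API is not shown and the step function is an assumption about the general technique, not the thesis code:

```javascript
// Move the camera a given fraction along the straight vector from its
// current position towards the target position.
function stepTowards(camera, target, fraction) {
  return {
    x: camera.x + (target.x - camera.x) * fraction,
    y: camera.y + (target.y - camera.y) * fraction,
    z: camera.z + (target.z - camera.z) * fraction,
  };
}

// Repeated each frame, the camera moves closer and closer to the target,
// giving the impression of flying to the chosen object.
let cam = { x: 0, y: 0, z: 0 };
const target = { x: 100, y: 0, z: 0 };
for (let i = 0; i < 3; i++) {
  cam = stepTowards(cam, target, 0.5);
}
console.log(cam.x); // 87.5 after three half-steps
```

Taking a fixed fraction of the remaining distance gives a smooth ease-in towards the target; a constant step length along the same vector would instead give uniform speed.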

Additionally, the user's navigation had to be restricted within the different stories, for example by preventing the user from zooming out too far into space. An overview limit was therefore implemented, combined with an overview button, described in Section 4.5. The overview button transports the user to an overview of the current scene, while the overview limit is a limitation set in the code of the touch interaction that strictly prevents the user from zooming out beyond it. This was a conscious design choice, made so that the user does not wander off in space, become confused and lose the focus of the current story.
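In essence, the touch interaction clamps the camera's distance from the focus node between the story's zoom limits. The helper below is a hypothetical one-line illustration of that idea, using the 'overviewlimit' and 'inzoomlimit' values from the story API; it is not the actual touch interaction code:

```javascript
// Clamp the camera distance so the user can neither zoom out past the
// story's overview limit nor zoom in past its in-zoom limit.
function clampZoom(distance, overviewLimit, inZoomLimit) {
  return Math.min(Math.max(distance, inZoomLimit), overviewLimit);
}
```

Because the clamp sits in the interaction code rather than the GUI, it holds regardless of which gesture or button produced the zoom.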

Another implementation regarding transportation and navigation in the OpenSpace environment concerns how the user arrives at a certain location on a planet, such as the different sights available in the story 'Interesting sights on Mars'. Since this feature in the new GUI targets specific geographical locations, latitude and longitude values had to be handled. It was also noted that arriving at the "night side" of a planet, i.e. the side of the planet facing away from the Sun, would leave the user seeing only the silhouette of the planet, nothing but a big black circle on the screen. This problem was solved by making sure the user always arrives at the sunny side of a planet when a planet is chosen with the focus buttons, and by using different light settings when a sight chosen with the sights controller is currently on the night side.
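One common way to decide whether a surface location is on the night side is to take the dot product between the vector from the planet's centre towards the Sun and the vector from the centre to the surface point: a negative dot product means the point faces away from the Sun. This sketch is an assumption about the general technique, not the thesis implementation:

```javascript
// Returns true if the surface point faces away from the Sun, i.e. lies
// on the night side of the planet. Both arguments are vectors from the
// planet's centre: one towards the Sun, one towards the surface point.
function isOnNightSide(toSun, toSurfacePoint) {
  const dot = toSun.x * toSurfacePoint.x +
              toSun.y * toSurfacePoint.y +
              toSun.z * toSurfacePoint.z;
  return dot < 0;
}
```

A check like this would let the GUI choose between the normal and the adjusted light settings when transporting the user to a sight.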


5

Results

This chapter presents the results of the implemented graphical user interface (GUI) and the logic behind it.

The result of the work is a GUI designed and implemented for a user from the general public, someone who has no previous experience of the OpenSpace software. The GUI and an underlying API provide the software with the functionality to display one part of the space data at a time. The different scenes, or sets of data, have been named stories, where each story has a different focus within space and the GUI is adjusted to the content and role of the story. Two examples of stories taken from the resulting work are "Jupiter and its Moons" shown in Figure 5.1 and "Weather and events on Earth" shown in Figure 5.2.

Figure 5.1: Final version of the story ’Jupiter and its Moons’.

In other words, the software has been made more adjustable and flexible for the users, but also for the developers. A special developer menu is also part of the resulting work (see Section 4.5). The developer menu enables the developer to turn on additional features while implementing new content for the GUI and the stories, making it easy to track, for example, the simulation speed and the average frame rate in FPS.

The reason behind the increased flexibility is the relatively large number of components implemented for the GUI and the API. The components cover different functionalities which in different ways meet the needs of the user. The number of components available also makes it easier for the developers to create new stories without requiring too many new components to be implemented.

The available components in the resulting GUI are:

• Home button

• Help button

• Information button

• Information icon combined with text boxes

• Labels

• Reset button

• Focus buttons

• Overview button

• Time controller

• Event controller

• Sights controller

• Scale controller

• Toggle controller

The resulting GUI is made such that continuous development of the stories can be carried out to achieve the best possible user experience for users of the OpenSpace software. It provides the user with guidance on how to navigate in space, while still letting the user explore freely.
