LiU-ITN-TEK-A--20/050--SE

Interactive Sonification in OpenSpace

Master's thesis, 30 ECTS | Media Technology
Linköping University | Department of Science and Technology
Norrköping, 2020-08-28

Authors: Elias Elmquist, Malin Ejdbo
Supervisor: Niklas Rönnberg
Examiner: Camilla Forsell

Linköpings universitet, SE-581 83 Linköping, +46 13 28 10 00, www.liu.se

Copyright

The publishers will keep this document online on the Internet, or its possible replacement, for a period of 25 years starting from the date of publication, barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, download, or print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law, the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its home page: http://www.ep.liu.se/.

© Elias Elmquist, Malin Ejdbo

Abstract

This report presents the work of a master's thesis whose aim was to investigate how sonification can be used in the space visualization software OpenSpace to further convey information about the Solar System. A sonification was implemented using the software SuperCollider and was integrated into OpenSpace using Open Sound Control to send positional data that controls the panning and sound level of the sonification. The graphical user interface of OpenSpace was also extended to make the sonification interactive. Evaluations were conducted both online and in the Dome theater to assess how well the sonification conveyed information. The outcome of the evaluations shows promising results, which might suggest that sonification has a future in conveying information about the Solar System.

Acknowledgments

First of all, we would like to thank Niklas Rönnberg for creating this exciting opportunity that would become the subject of this master's thesis. You have been an invaluable asset as a supervisor for sonification, sound theory, evaluation methods and everything in between. Secondly, we would like to thank Alexander Bock for being our unofficial second supervisor by introducing us to OpenSpace and the Dome theater. Thank you for letting us into the Dome theater during inconvenient hours. Additional thanks to Lovisa Hassler for further introducing us to OpenSpace and pointing us in the right direction, to Camilla Forsell for her input on evaluation methods, and to everyone who participated in our evaluations.

Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables

1 Introduction
  1.1 Background
  1.2 Motivation
  1.3 Aim
  1.4 Research Questions
  1.5 Delimitations

2 Theory
  2.1 OpenSpace
  2.2 Sonification
    2.2.1 Strengths of Sonification
    2.2.2 Types of Sonification
    2.2.3 Types of Sounds
    2.2.4 Sound Parameters
  2.3 Related Work
    2.3.1 Sonifications with Visual Component
    2.3.2 Sonifications in Planetariums
  2.4 SuperCollider
  2.5 Open Sound Control

3 Method and Implementation
  3.1 OpenSpace Views
    3.1.1 Solar System View
    3.1.2 Planetary View
    3.1.3 Compare View
  3.2 Sonification
    3.2.1 Identifying Important Concepts
    3.2.2 Implementation of Sonifications
    3.2.3 Sound Architecture
    3.2.4 Surround in the Dome
  3.3 Integration with OpenSpace
    3.3.1 Extraction Method
    3.3.2 The Data
    3.3.3 Distances and Angles
    3.3.4 Precision Error
    3.3.5 Optimization
  3.4 Open Sound Control
  3.5 Graphical User Interface
  3.6 User Evaluation
    3.6.1 Initial Evaluation
    3.6.2 Online Surveys
    3.6.3 Dome Evaluation
    3.6.4 Analyzing the Data

4 Evaluation Results and Further Development
  4.1 First Survey
  4.2 Changes for Second Survey
  4.3 Second Survey
  4.4 Dome Evaluation

5 Discussion
  5.1 Evaluation Method
  5.2 Results
  5.3 Implementation
  5.4 Future Work

6 Conclusion

Bibliography

List of Figures

1.1  Screenshot of OpenSpace. The planets in order of distance to the Sun are named Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. All planets have been scaled up for visibility in this picture.
2.1  Screenshot of OpenSpace, showing the interface and settings of focus and time.
3.1  Screenshot of OpenSpace in the Solar System view, showing the inner planets.
3.2  Screenshot of OpenSpace in the planetary view, displaying Earth.
3.3  Screenshot of OpenSpace in the compare view, where Earth and Mars are compared.
3.4  The pitches of each planet, shown on a piano with 61 keys.
3.5  An overview of the structure of the sonification.
3.6  Ideal surround sound placement. Note that the LFE component does not emit directional sound and can be placed more freely.
3.7  The calculation of the angles for planets relative to the camera.
3.8  The black curve depicts how the sound level for the sonification of Earth changes depending on the distance to the planet. The value on the Y-axis is the sound level for the planet and the value on the X-axis is the distance to the planet in kilometers (km). The points where the lines of the same color intersect are points that the curve was desired to be close to.
3.12 The GUI in OpenSpace for all three views.
4.1  BUZZ results from the first online survey presented as a box plot. The statements have been simplified for clarity. Note that for the statements "Difficult", "Boring" and "Confusing" it is positive if the answer has a low value. The thick horizontal line in the box represents the median value and the cross the average value.
4.2  Result for each sonification for the first survey. The thick horizontal line in the box represents the median value and the cross the average value.
4.3  BUZZ result from both online surveys presented as a box plot. The statements have been simplified for clarity. Note that for the statements "Difficult", "Boring" and "Confusing" it is positive if the answer has a low value. The thick horizontal line in the box represents the median value and the cross the average value.
4.4  BUZZ score result from both surveys. The thick horizontal line in the box represents the median value and the cross the average value.
4.5  Result of the added BUZZ statements after each sonification for the second survey. The thick horizontal line in the box represents the median value and the cross the average value.
4.6  Result for the sonifications in both surveys. The thick horizontal line in the box represents the median value and the cross the average value.
4.7  BUZZ result for the second survey and the Dome evaluation. Note that for the statements "Difficult", "Boring" and "Confusing" it is positive if the answer has a low value. The thick horizontal line in the box represents the median value and the cross the average value.
4.8  Total BUZZ score result for all the evaluations. The thick horizontal line in the box represents the median value and the cross the average value.
4.9  Result of the three BUZZ statements after each sonification for the Dome evaluation. The thick horizontal line in the box represents the median value and the cross the average value.
4.10 Result for all the sonifications in the Dome evaluation. The thick horizontal line in the box represents the median value and the cross the average value.

List of Tables

3.1  List of planet properties for the eight planets. Temperature is given in unit Kelvin (K) and the other values are given as a ratio compared to Earth.
3.2  List of planet properties and their respective sonification mappings.

1 Introduction

There is no sound in space [16]. That is a fact that has discouraged the use of sound when it comes to representing scientific data in outer space. Instead, scientific visualization is the primary method of conceptualizing the information of space, and it is the method used in the visualization software OpenSpace [5].

1.1 Background

Sound relies on vibrations propagating as pressure waves through a transmission medium [16]. On Earth this medium is the air that surrounds us within the atmosphere [7]. In space, however, there is no transmission medium for sound to travel in. The average density of atoms in space is about one atom per cubic centimeter [16], compared to millions of billions of air molecules per cubic centimeter in Earth's atmosphere [7].

Even though sound does not exist in space, sound can still conceptualize data collected from space through sonification. Instead of visually showing data, sonification is a method of conveying information through non-speech audio [11]. It is a method that has been around for several decades but has only in recent years started to gain more attention. There are certain advantages to using sonification over visualization, since our hearing can be a powerful tool for analyzing data. By using these advantages, as well as immersing the audience with sound, a further understanding of astronomy can be acquired.

Sonification has proved itself an important tool in certain scientific discoveries. One example is when the space probe Voyager 2 approached Saturn's rings [15]. When scientists looked at the data, it appeared to be just noise. By sonifying the data, however, the scientists could discern a sound resembling a "machine gun", and from this they could conclude that micrometeoroids were hitting the hull of Voyager 2. This example shows that sound can perceptualize data in a way that visualization techniques cannot.

The Solar System contains one star and eight planets that orbit it in elliptical paths at different distances and speeds [7]. The differences between the planets can sometimes be very large; for example, the diameter of the largest planet, Jupiter, is almost 30 times larger than the diameter of the smallest planet, Mercury. One way to explore the Solar System is to use a visualization software such as OpenSpace, see Figure 1.1.

Figure 1.1: Screenshot of OpenSpace. The planets in order of distance to the Sun are named Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. All planets have been scaled up for visibility in this picture.

OpenSpace is an open source interactive data visualization software which uses scientific visualization to visualize the known universe in several ways [5]. It is a collaboration between Linköping University, the American Museum of Natural History and NASA Goddard's Community Coordinated Modeling Center. The software supports a wide variety of setups, ranging from personal computers to advanced planetariums with clusters of computers and projectors. It can show the Solar System at an accurate relative scale, while also being able to visualize dynamic simulations such as the magnetic field of Earth and other planets. OpenSpace does not output any sound, which is a missed opportunity, as sound can be used as a tool to further engage audiences and give new perspectives on the data [11]. This is what will be explored in this thesis by implementing sonification in OpenSpace.

1.2 Motivation

During a planetarium show, the visuals of OpenSpace are usually accompanied by background music while a presenter gives information through speech. The music does evoke a feeling of space and a sense of wonder, but it does not relate to anything shown visually or add any additional information or insight. By offering sonification in a planetarium show, the show could become more immersive and informative, as the sounds can be connected to what is shown on the screen [10]. The immersive feeling can create a sense of wonder for astronomy that is tied to astronomical data, inspiring more people to take an interest in it.

There are a couple of examples where sonification has previously been tested in planetariums as a demonstration for the general public [18, 25]. However, these sonifications were not part of any visualization software that could complement the audio. By adding sonification to a software like OpenSpace, an interactive and immersive audio-visual experience would be created, which in turn could lead to an increased understanding of astronomy for the audience.

1.3 Aim

The aim of this thesis is to investigate in what ways sonification can be used together with the software OpenSpace to increase understanding of astronomy, as well as to increase the immersiveness for the audience. Specifically, the sonification will be implemented to be used in the Dome theater at Visualization Center C (http://visualiseringscenter.se/en/about-c) in Norrköping, which provides sound and visuals in 360 degrees around the audience.

An implementation will be made to extract data from OpenSpace and send it to a real-time audio synthesizer, where the data will be sonified. Evaluations will be conducted during development to test how well the sonification conveys the data to the audience. Feedback from the evaluations will be used to improve the sonification further.

1.4 Research Questions

This thesis aims to answer the following research questions:

1. How can sonification give a comprehensible understanding of the Solar System?
2. What Solar System data should be used in the sonification, and what sound parameters should be manipulated to convey the data?
3. How can a sonification be integrated into an interactive data visualization software such as OpenSpace, and how can it make the experience more immersive?

1.5 Delimitations

The addition of sound in OpenSpace will increase the accessibility of the software to visually impaired people. However, the work of this thesis will not focus on replacing the existing visualization with a sonification; instead, the goal is to use sonification to complement the visualization of OpenSpace. The size of space will be limited to the Solar System, which includes the Sun, the eight planets and their respective moons. Data from other parts of space will not be part of the sonification. The sonification is going to be tailored to the setting of the Dome theater with a 7.1 surround sound system, although stereo speakers and headphones will be used to monitor the audio during development.

2 Theory

Creating a sonification for OpenSpace required knowledge about how OpenSpace works and how interaction with the software is done. It was also necessary to identify in what ways sonification could complement the experience of OpenSpace, which required knowledge about different types of sonification in order to find the most suitable way to represent the data. Related work was also studied as a source of inspiration for creating a sonification.

2.1 OpenSpace

The system behind OpenSpace is designed in such a way that it can easily be expanded. The system architecture is structured into four layers: the OpenSpace core, modules, OpenSpace applications and external applications [5]. This structure makes it possible to extend the functionality to send data to the sonification software.

The coordinate system in OpenSpace has its origin located at the Sun and uses single-precision floating point numbers with meters as the unit. This causes precision errors, since single-precision floating point numbers are not precise enough to describe every point in the large environment of the Solar System [2]. The precision decreases with increasing distance to the origin, because not every number can be represented in single precision. Numbers that cannot be represented are rounded to the closest representable number, which causes positions to jump quickly from one value to another. This could cause problems for the parts of the sonification that depend on positions.
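The rounding behavior can be reproduced directly in SuperCollider, the synthesis environment used later in this thesis. This is only a sketch, assuming that sclang's FloatArray stores 32-bit floats; the distance value is merely illustrative.

    (
    var pos = FloatArray.newClear(1); // 32-bit storage, like OpenSpace's coordinates
    var before;
    pos[0] = 1.496e11;          // roughly the Sun-Earth distance in meters
    before = pos[0];            // read back the stored single-precision value
    pos[0] = before + 5000;     // try to move the camera 5 km outward
    // Adjacent 32-bit floats are roughly 16 km apart at this magnitude, so the
    // 5 km step is rounded away and the stored position does not change at all:
    (pos[0] == before).postln;  // -> true
    )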

Figure 2.1: Screenshot of OpenSpace, showing the interface and settings of focus and time.

Navigation in OpenSpace is focus-oriented, which means that all navigation is relative to an object that is in focus. In order to navigate to a planet, the user must switch focus to the planet of interest by using the graphical user interface (GUI) of OpenSpace. Once the new planet is in focus, the user can navigate closer to it and explore it instead. The focus-oriented navigation makes it easy to navigate in the environment while also preventing the user from getting lost in an otherwise large and empty environment. The object in focus heavily influences what is shown on screen and should consequently also influence the sonification.

In OpenSpace it is possible to change how fast time is simulated, which makes the planets move faster either into the future or the past. This enables the user to appreciate changes that would otherwise be hard to discern in real-time, which includes almost everything temporal happening in space. The sonification should react to this change to create a more dynamic connection with the software.

The GUI of OpenSpace can be used to change the object in focus, change the speed of time and change settings for the individual objects in the scene, see Figure 2.1. It can also be used to change the settings of the individual modules in OpenSpace, which could also be used to control the sonification.

OpenSpace gives a visual representation of the Solar System and communicates facts about it that can be visualized. The appearances of the planets are shown using real photographs taken during previous space missions [4]. Their positions in the Solar System and their lengths of day and year are visualized by simulating the Solar System. However, OpenSpace does not show all of the information about the planets; this would instead be communicated by a presenter during a planetarium show. In similar software, such as NASA's Eyes on the Solar System (https://eyes.jpl.nasa.gov/eyes-on-the-solar-system.html), such information can be shown in an information window, but this would not be an immersive way of presenting the information in a planetarium show. Instead, audio in the form of a sonification could be used to complement the visuals of OpenSpace and mediate information that would otherwise not be seen.

2.2 Sonification

Sonification is the use of non-speech sound to convey and communicate information. It does this by translating relationships in data into sound, stimulating the human hearing in such a way that the data relationships become comprehensible [11]. Sonification shares the same purpose as visualization, to communicate information, with the biggest difference being that sonification transmits information sonically, to the ears.

2.2.1 Strengths of Sonification

The strengths of sonification are based on how the ears differ from the eyes. It is therefore important to identify specific concepts where sound can perceptualize a concept better than a visual representation can.

Compared to our vision, our auditory perception has a better sense for detecting temporal and other continuous changes [20, 11]. This can be useful when representing dynamic data by altering the sound according to it. Sound can also be positioned in a wider space than visual components. This can be referred to as spherical sound, which describes the multidimensional and multi-directional nature of hearing [23]. While our eyes are only capable of observing things that are in front of us, our ears can detect and focus on changes all around us. This opens the possibility to position objects all around the audience, which enables a sense of presence and immersion.

Planetariums and other science centers often rely on engaging audiences by offering an immersive and stimulating experience to spark their curiosity. This is described by Mark Ballora [3] as the "wow-factor", and he states that sonification is an under-utilized element for this purpose, and that it should be used because humans are audiovisual by nature. Ballora also touches on the fact that sound has a power to evoke emotions and memories, more so than its visual counterparts, which would enable the audience to be more emotionally connected to the experience. Presenting both visual and auditory components increases the chance that the audience remembers the information presented, as they can create associations to either of the senses.

With multidimensional data, several sounds can be played at once, as the hearing is capable of perceiving and distinguishing between several sounds simultaneously. This is why music often consists of several instruments playing together. When listening to music, it is also possible to focus on one instrument throughout the song. This relates to the Cocktail Party phenomenon [6], where a person can isolate another person talking among a crowd of other people. There is, however, a difference between perceiving several sounds and actually absorbing the information. Schuett and Walker [21] stated that an audience can focus on at least three auditory streams at the same time. This comprehension can also be improved by separating the sounds spatially, as shown by Song and Beilharz [22], where a separation of 120 degrees between two audio streams increased comprehension when listening to them simultaneously. Balancing the number of audio streams is important: the audience should be stimulated enough, while too many simultaneous sounds would instead lead to confusion.

2.2.2 Types of Sonification

Different types of sonification exist to cater to different situations.
For data exploration, which is the focus of this project, both direct and more abstract techniques are available.

Audification is the direct conversion of data into a waveform in the audible domain. A waveform that would normally not be audible can become audible by speeding up or slowing down the waveform, i.e. transposing it. Common applications of audification include putting ultrasound or infrasound, such as the vibrations of earthquakes, into the spectrum of human hearing. One of the more recent and famous examples is the audification of the gravitational waves detected by the laser experiment facility LIGO [1] in 2016. The detection of the gravitational waves resulted in a waveform increasing in frequency from 35 to 250 Hz, which was converted into an audio clip.

One of the simpler kinds of sonification is an auditory graph, which is a sonic representation of the data in a graph. Most commonly, the pitch of a sound is manipulated according to the value on the Y-axis and played continuously along the X-axis. This type of sonification is the most straightforward way of representing a graph to visually impaired users. Xsonify [8] is an example of software that can convert data into auditory graphs.

Parameter mapping is the most common type of sonification [15]. It connects the value of data to a parameter of a synthesized sound, such as pitch, loudness or rhythm. The type of conversion from data to synthesized sound determines the quality of the mapping, which is also affected by using an appropriate scaling. If data from a linear domain were mapped directly to pitch, for example, the relation between the data and the sonification might not be successful, because our hearing perceives pitch in an exponential manner.

Interactive sonification [12] is another type of sonification, achieved by enabling an interactive element in any of the types of sonification mentioned above. For parameter mapping this could mean that the user can manipulate the data, which in turn changes the parameters of the acoustic variables.

In the present study, parameter mapping with an interactive element was used, as it best suited the data and the software that the sonification was going to be integrated with. By using parameter mapping, the planets can be compared by letting their data affect the acoustic variables of the sonification. OpenSpace is an interactive software, which enables the sonification to follow the focus of the visualization and be played at the same time speed. The GUI can also be used to let the audience navigate through the different sounds.

2.2.3 Types of Sounds

There are different techniques that can be used to create a variety of sounds for a sonification, ranging from using simple audio clips to creating fully synthesized sounds. Using the correct type of sound can help the audience understand the data in a better way.

Auditory icons are symbolic audio clips that can be a straightforward way of representing certain types of data [11]. Just like visual icons, auditory icons are often used in the interfaces of computer software, and often refer to real-life sounds to be as intuitive as possible. One prime example is the sound of crumpling a piece of paper to indicate that a document or some other kind of file has been deleted. Auditory icons can also have a parametric approach, where the sound changes depending on the data. A size parameter can, for example, determine the pitch of an audio clip of a bouncing ball to convey the perceived size of an object.

For continuous values, synthesized sounds can be used to follow the data, as synthesized sounds can easily be manipulated. An initial approach can be to use pure tones such as sine waves to create a fundamental sound.
Several pure tones can then be combined to create a richer sound, using a technique called additive synthesis. This combines the pure tones to create more harmonics and overtones, which is perceived as a richer sound. On the other hand, subtractive synthesis can be used to remove features from an already rich sound. White noise can, for example, be filtered down until little more than a single tone remains, which creates yet other types of sounds. Both techniques are sketched below.
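As a minimal illustration, the two snippets below sketch one additive and one subtractive sound in SuperCollider (the synthesis environment introduced in section 2.4). The frequencies and levels are arbitrary, and each line should be evaluated on its own with the server booted.

    // Additive synthesis: six sine-wave partials at integer multiples of
    // 110 Hz, each quieter than the last, summed into one richer tone.
    { Mix.fill(6, { |i| SinOsc.ar(110 * (i + 1), 0, 0.4 / (i + 1)) }).dup * 0.2 }.play;

    // Subtractive synthesis: white noise through a resonant low pass filter,
    // leaving little more than a pitched band around 440 Hz.
    { RLPF.ar(WhiteNoise.ar(0.1), 440, 0.05).dup }.play;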

2.2.4 Sound Parameters

Sonification using parameter mapping involves using the different parameters of a sound. The knowledge of how humans perceive sound is known as psychoacoustics [13], and it guides how the parameters of a sound can be manipulated to convey information to the audience.

Pitch is the perceived frequency of a sound, which makes a higher frequency be judged as a higher-pitched sound. The human hearing recognizes pitch in an exponential manner with respect to frequency, within a range of about 20 - 20,000 Hz. Pitch can be divided into tones, which can be played together to create chords, and is one of the most common parameters to manipulate in sonification [9]. Pitch can often be mapped to the size of an object [9], where a bigger object corresponds to a lower frequency and a smaller object to a brighter frequency. This follows the results of an experiment by Walker [26], where a negative polarity mapping was considered most suitable for conveying the size of an object with pitch.

Loudness is the perceived volume of a sound, which is changed by altering the amplitude of the sound waves. This can affect how close an object is perceived to be: an object can be made to seem farther from the audience by decreasing the loudness [9]. This effect can be amplified by adding room characteristics such as reverb, placing the object in a room and creating a sense of distance.

Tempo is the pace of several consecutive sounds or tones and is often manipulated to convey temporal values [9]. It is measured in beats per minute and often influences the energy of a sound. A fast tempo can be perceived as more energetic, while a slow tempo can instead invoke calmness [9]. If a tempo is too slow, however, the audience will no longer perceive it as a beat, and if the tempo is too fast it will instead create new harmonics.

The timbre of a sound describes its character [13], which is based on how many harmonics and overtones are present and what relation they have to the fundamental. Timbre determines how soft or sharp a sound is and makes it possible to identify what the sound originated from. A simple example of timbre is the different harmonic content of a sawtooth wave compared to a square wave. A square wave contains only odd harmonics, creating a relatively mellow timbre, while a sawtooth wave contains both even and odd harmonics, which results in a sharper timbre.

2.3 Related Work

Sonification is used in many areas and applications, often as a scientific tool to make more sense of data. It can also be used to increase the scientific outreach of a visualization, making it more accessible and understandable.

2.3.1 Sonifications with Visual Component

One of the more common use cases for sonification is to complement an already implemented visualization. One example is in the area of chemistry, where Rau et al. [19] used sonification to indicate certain events in a molecular simulation. The sonification was useful for observing events in the simulation, even when they were occluded or outside the view. Spatial sound was used to position the events in the simulation using a Head-Related Transfer Function (HRTF) with headphones. The study shows that spatial sound can be particularly useful when an event occurs outside of the visual view.

For sonifications in astronomy, temporal activities are often highlighted to make use of this advantage of human hearing.
SYSTEM Sounds (http://www.system-sounds.com/jupiters-moons/) showcases the orbital resonance of the three inner moons of Jupiter, which are locked in a 4:2:1 ratio. This is done by signaling each completed orbit with a drum sound specific to each moon, where the pitch is determined by the orbital period. A similar method is used in Whitevinyl's Solarbeat application (http://www.whitevinyldesign.com/solarbeat/), where each planet of the Solar System also emits a sound after every completed orbit.

2.3.2 Sonifications in Planetariums

As mentioned in section 1.2, there are a couple of examples where sonification has been used in planetariums. Much inspiration was drawn from these projects, as they had goals and a target platform similar to the present study.

Quinton et al. [18] sonified the planets of the Solar System with a focus on testing and demonstrating the sonification for the end user. A planetarium representative was initially interviewed to grade the importance of each planetary parameter; the parameters of most importance were density, diameter, gravity, length of day, length of year, mean temperature and orbital velocity. These parameters were mapped to pitch, loudness, tempo and timbre, among others. An evaluation was conducted with 12 participants who were tasked with discerning properties of each planet by listening to the sonification. The evaluation showed promising results, stating that the participants could discern several characteristics of the planets. Quinton et al. stated that the use of surround sound could be especially effective for sonifying the orbits of the planets, as the planets could orbit around the audience. They also suggested that a sonification of the Solar System could act as a scientific tool for building comparative models of exo-solar systems. There was no visual component in this work, but it was stated that the sonification would be enhanced if one were present.

The second planetarium example is the work done by Tomlinson et al. [25], which shares many aspects with the work of Quinton et al. [18]. For example, a similar set of important parameters was arrived at by interviewing astronomy teachers for different levels of classes, and similar mappings were used to sonify the data. The sonification was divided into two views, the Solar System view and the planetary view. An evaluation was done in a planetarium using the available quadraphonic speaker system, with images of the planets shown during the demonstration for visual context. The evaluation included a survey of five questions about how interesting, pleasant, helpful and relatable the sonification was. The variety of questions shows that it is not enough for a sonification to increase understanding of something; it also needs to be pleasant to listen to. The survey was later developed into an audio user experience scale called BUZZ [24]. The results from the survey showed that the sonification managed to relay information to the audience while still being pleasant and interesting.

An important difference between these works was their sound design. Tomlinson et al. [25] could be perceived as having a more concrete sound design, focusing on straightforward mappings, while Quinton et al. [18] could be perceived as having a more abstract and musical approach. Both approaches have their pros and cons: straightforward mappings give a more intuitive and informative experience, while a musical approach gives a more pleasant and immersive experience. The aim of the present study was to lie somewhere in between, being concrete enough to be intuitive while still presenting a pleasant soundscape to create immersion.

2.4 SuperCollider
To create and manipulate a sonification depending on external data in real-time, an audio synthesizer was needed. SuperCollider (https://supercollider.github.io/) is a code environment which enables real-time audio synthesis and algorithmic composition, suiting sonification well [17]. The SuperCollider environment consists of a server (scsynth) and a client (sclang) [17], where the server contains the tools for real-time audio synthesis. Synthesis is done using unit generators (UGens) that generate audio. A UGen can be as simple as generating a sine wave, or as complex as modeling the impact of a bouncing ball. A combination of UGens with effects such as filters can be composited to create a richer sound. This is all combined in an object called a SynthDef, which works similarly to a programming function: variables can be used within UGens to create a dynamic sound by manipulating its sound parameters in real-time, and arguments can be used to access data from outside the SynthDef. This makes it possible to create different instances of a SynthDef, which are called synths. In this project each planet had its own synth to reflect the different properties of the planets. The server and client in SuperCollider communicate via OSC, which can also be used to transfer data to SuperCollider from external applications such as OpenSpace.

2.5 Open Sound Control

Open Sound Control (OSC, http://opensoundcontrol.org/) is a communication protocol used between computers, synthesizers and controllers. It is transport-independent, meaning that it can send information between devices through any transport layer, such as UDP or TCP [27]. OSC can send many arbitrary messages at the same time in the form of bundles. To organize the messages, OSC uses an addressing system where every message is given an address that labels it with what data it contains [28]. In this way, the data for each planet in the Solar System can be sent in a separate message with its own address, which reduces the risk of the data being mixed up.
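A minimal sketch of this addressing scheme in SuperCollider is shown below. The address layout and the payload (here a distance and an angle) are assumptions for illustration; the thesis's actual message format is not reproduced.

    (
    // One OSCdef per planet address keeps the planets' data separated.
    ~planetData = ();
    [\mercury, \venus, \earth, \mars].do { |planet|
        OSCdef(planet, { |msg|
            // msg[0] is the address; the remaining elements are the payload
            ~planetData[planet] = msg[1..];
        }, "/planets/%".format(planet).asSymbol);
    };
    )

    // A stand-in for OpenSpace, sending to sclang's own listening port:
    ~openspace = NetAddr("127.0.0.1", NetAddr.langPort);
    ~openspace.sendMsg("/planets/earth", 1.0, 0.5); // e.g. distance, angle
    ~planetData[\earth].postln; // -> [ 1.0, 0.5 ]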

3 Method and Implementation

The sonification was developed by identifying important concepts in the Solar System, which were scaled and mapped to different sound parameters. To integrate the sonification with OpenSpace, data such as distances and angles between the camera and the planets were extracted. This data was processed and converted to a suitable format before being sent to SuperCollider using the Open Sound Control protocol. SuperCollider then used this data to control the sonification depending on the state of OpenSpace. User tests were conducted to evaluate the sonification, and the feedback was used to make improvements. This was done iteratively two times before the final sonification was created.

3.1 OpenSpace Views

The aim of the sonification was to enhance the experience of using OpenSpace and convey more information to the audience by using sound. However, it was not possible to present all the information at once, since there are eight different planets to convey. The environment of OpenSpace was therefore divided into three different views in the software, conveying information at different levels of detail. The sonification was then developed with these views in mind.

3.1.1 Solar System View

The Solar System view was intended to give an overview of the Solar System, conveying simple information through sound about all the planets at the same time. To do this, the camera was positioned to give a top-down view of the Solar System, see Figure 3.1. This meant that all planets were visible and moved around the Sun in a counterclockwise orbit. The sounds would then be spatially positioned according to the planets' positions on the screen. This is especially fitting in the Dome theater, as the orbits can be fully visible in 360 degrees above the audience, while a 7.1 surround system gives accurate spatial positioning.
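One way to realize this circular positioning in SuperCollider is sketched below, assuming an eight-channel speaker ring as a stand-in for the Dome's actual layout. The \orbiting SynthDef and its angle control are hypothetical; the angle would ultimately come from OpenSpace via OSC.

    (
    SynthDef(\orbiting, { |out = 0, freq = 220, angle = 0, amp = 0.1|
        var sig = SinOsc.ar(freq, 0, amp);
        // PanAz places a signal on a ring of speakers; its position argument
        // is cyclic with period 2.0, so an angle in radians maps as angle/pi.
        Out.ar(out, PanAz.ar(8, sig, angle / pi));
    }).add;
    )

    // Let one planet revolve every 365 seconds (one year at 1 day/second):
    (
    x = Synth(\orbiting);
    Routine { inf.do { |i| x.set(\angle, (i / 3650) * 2pi); 0.1.wait } }.play;
    )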

Figure 3.1: Screenshot of OpenSpace in the Solar System view, showing the inner planets.

3.1.2 Planetary View

The planetary view was intended to be a close-up view of one planet at a time, where more sonifications could be used to convey more information about the specific planet, see Figure 3.2. Similarly to the Solar System view, the moons orbiting a focused planet would create a miniature Solar System view at a certain distance, conveying information about the moons of the planet through sonification. The sound level of the planet would also change depending on the distance to it, reinforcing the sense that the sounds were coming from the planet itself.
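The sketch below shows one possible distance-to-level mapping. The thesis tuned its own curve by hand (see Figure 3.8), so the breakpoints here are made up.

    (
    ~distToLevel = { |distKm|
        // full level within ~10^4 km, silent beyond ~10^7 km, interpolated
        // exponentially in distance in between
        distKm.clip(1e4, 1e7).explin(1e4, 1e7, 1.0, 0.0)
    };
    ~distToLevel.(2e4).postln; // 20,000 km from the planet -> ~0.9
    ~distToLevel.(5e6).postln; // far away -> ~0.1
    )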

Figure 3.2: Screenshot of OpenSpace in the planetary view, displaying Earth.

3.1.3 Compare View

The compare view was intended to be a hybrid of the views mentioned above, where the view of the Solar System was combined with the sonifications of the planetary view. The motivation behind this view was that the planetary view does not allow planets to be compared directly, and because the sonification relies heavily on the relations between the planets, such a view was considered useful. The compare view provides a way to listen to the sonifications of two planets at the same time for comparison. In order to make it more visible which planets are being compared, the selected planets are highlighted by increasing their size, see Figure 3.3.

Figure 3.3: Screenshot of OpenSpace in the compare view, where Earth and Mars are compared.

3.2 Sonification

The development of the sonification began by identifying important parameters and data of the Solar System to sonify. These parameters then needed to be scaled so that a fair comparison between the planets could be made. The sound design was then developed to create suitable sounds to represent each parameter, and an architecture was built up to easily manage the sonification. Finally, the sonification was prepared for the Dome theater by implementing surround sound and rerouting the channels to the correct speakers.

3.2.1 Identifying Important Concepts

The sonification design process began by studying which planet parameters were suitable for sonification. Informal interviews with OpenSpace developers were held to identify important concepts that could be sonified from OpenSpace. Most concepts were, however, obtained from related work [18, 25], where a planetarium representative and astronomy teachers were interviewed about which aspects of astronomy are the hardest to teach. In general, it was concluded that the differences between the planets of the Solar System were of most interest. Specifically, the properties of most importance were mass, density, diameter, gravity, length of day, length of year and temperature. Some of these properties are listed with their respective values for each planet in Table 3.1, with data from NASA's planetary fact sheets (https://solarsystem.nasa.gov/planet-compare/ and https://nssdc.gsfc.nasa.gov/planetary/factsheet/index.html). Parameters that were already visualized in an informative way in OpenSpace were given lower priority in the sonification; this included parameters such as the planets' distance from the Sun and their orbital eccentricity. Once the concepts were identified, the process continued with determining what the different properties would map to in the sonification.

Table 3.1: List of planet properties for the eight planets. Temperature is given in unit Kelvin (K) and the other values are given as a ratio compared to Earth.

             Diameter   Gravity   Length of Day   Length of Year   Temperature (K)
  Mercury    0.383      0.378     175.9           0.241            100 to 700
  Venus      0.949      0.907     116.8           0.615            735
  Earth      1          1         1               1                185 to 331
  Mars       0.532      0.377     1.03            1.88             120 to 293
  Jupiter    11.21      2.36      0.414           11.9             163
  Saturn     9.45       0.916     0.444           29.4             133
  Uranus     4.01       0.889     0.718           83.7             78
  Neptune    3.88       1.12      0.671           163.7            73

3.2.2 Implementation of Sonifications

Implementing a sonification using parameter mapping required consideration of the mapping, the scaling and the sound design. The mapping of data properties to sonification parameters was experimented with in the early stages of the project. Common mappings that had previously been used in other sonifications [9] were used as a starting point, while new mappings were also tried and inspiration was drawn from related work [25, 18]. All of this led to the final mappings shown in Table 3.2.

A problem with the parameters of the planets was the large differences between values, since the planets of the Solar System vary greatly across many parameters. It was, however, important that all planets could be compared within a reasonable scale to appreciate the differences, so scaling was used to make the data easier to compare. Following that, the sound design was developed to make the sonification as intuitive as possible, while also being immersive and pleasant to listen to. All the planets shared the same kind of sound design, but with different parameters to represent their differences.

In the following paragraphs, each planet property is explained with respect to its mapping, scaling and sound design. The sonifications can be listened to here: https://www.youtube.com/watch?v=JfPtZn2fgYs.

Table 3.2: List of planet properties and their respective sonification mappings.

  Planet Property             Sonification Mapping
  Mass and diameter           Pitch
  Type of planet (density)    Type of waveform (timbre)
  Length of day               Rate of oscillation
  Length of year              Spatial positioning
  Gravity                     Bouncing ball
  Atmosphere                  Wind depth
  Wind speed                  Wind intensity
  Temperature                 Density of grating sound

Mass and Diameter

Mass and diameter were conveyed as the pitch of the fundamental sound of the planet, in such a manner that a bigger planet had a lower pitch than a smaller one, according to the polarity mappings of Walker [26]. To determine the specific pitch of each planet, an interval of frequencies was first decided. The pitch intervals of related works were considered: Quinton et al. [18] used a two-octave range (C2-C4), while Tomlinson et al. [25] used a higher range of approximately six octaves. An intermediate option was chosen, creating a three-octave range (C2-C5). The lowest octave was dedicated to the gas and ice giants, and the higher octave was used by the terrestrial planets. This created a gap of almost one octave between the types of planets, representing the difference in size between the inner and outer planets.

The mass and the diameter of each planet were considered when placing each planet within its octave, and a balance between accuracy and musicality had to be struck. The eight planets can be seen as four pairs of planets of similar size; the planets in each pair were therefore placed within two semitones of each other. Placing some pairs even closer would have been more accurate considering their similarity in size, but the two-semitone difference was preferred for musicality. The aim of the sonification was not to create an exact representation of the data, but to increase the understanding of it; a sonification without musical elements could be perceived as less appealing and would therefore not be as useful. The resulting tones of the planets can be seen in Figure 3.4, and a sketch of the mapping follows below.
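The exact notes are the thesis's design choice (Figure 3.4); the assignment below is therefore only illustrative, but it follows the scheme just described: a C2-C5 range, giants an octave-plus below the terrestrials, bigger planet lower note, and size-pairs two semitones apart.

    (
    ~planetNotes = (
        jupiter: 36, saturn: 38,  // C2, D2: the gas giants, lowest pitches
        uranus: 43, neptune: 45,  // G2, A2: the ice giants
        earth: 60, venus: 62,     // C4, D4: the larger terrestrial pair
        mars: 69, mercury: 71     // A4, B4: the smallest planets, highest pitches
    );
    ~planetNotes.keysValuesDo { |planet, note|
        "% -> % Hz".format(planet, note.midicps.round(0.1)).postln;
    };
    )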

Figure 3.4: The pitches of each planet, shown on a piano with 61 keys.

The type of waveform used to sonify a planet depended on the type of the planet: a sawtooth wave for a terrestrial planet and a square wave for a gas or ice giant. The main difference between these waveforms is that the sawtooth wave creates more overtones than the square wave. This gave the terrestrial planets a sharper timbre, representing their higher density, while the gas and ice giants obtained a more mellow timbre, representing their lower density. Additionally, a phaser effect was layered on the sound to indicate whether a planet has a global magnetic field.

Length of Day

To convey the length of day of a planet, the analogy stated by Tomlinson et al. [25] was used, in which the brightness of daytime is mapped to the rate of an oscillator. This created an increase in sound level following the sunrise at a certain position on a planet, and a decrease in sound level following the sunset. The oscillation was applied to the fundamental sound of the planet by modulating the cutoff frequency of a resonant low pass filter, where the speed of the modulation depended on the length of day of the planet. This created an oscillation of the sound level, but also of the amount of high-frequency content in the sound. A small modulation of the pitch of the planet was also added to create a slight Doppler effect, which made the sound more dynamic.

One of the more important values to scale, shared by all the planets, was time. It was important that the sonification was interesting and informative to listen to in real-time, while not being so fast that information was lost. A default timescale of 24 hours/second (1 Earth day/second) was chosen to account for this, which is also what was used in the sonification by Tomlinson et al. [25]. The time speed could later be altered by changing the time speed in OpenSpace. For example, with a time speed of 1 day/second, the length of day of Earth is represented by an oscillator with a speed of 1 hertz.

Length of Year

Length of year was conveyed through the spatial position of the planet. Using the positional data extracted from OpenSpace (see subsection 3.3.3), a directional sound was created that follows the actual position of the planet in the software. A planet with a shorter length of year revolves faster around the audience, and vice versa. Using the same time speed as for length of day, 1 day/second, Earth takes 365 seconds to revolve around the audience. If a planet had moons, they were positioned in a similar way in the planetary view. The surround sound implementation is covered in subsection 3.2.4. A sketch of the fundamental sound described above follows.
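The following SynthDef condenses the fundamental sound described above into one sketch: the waveform is chosen by planet type, and a resonant low pass filter is swept at the planet's day rate (1 Hz equals one Earth day per second at the default time speed). The argument values are illustrative, not the thesis's.

    (
    SynthDef(\planet, { |out = 0, freq = 65.4, dayRate = 1, terrestrial = 0, amp = 0.1|
        var osc, cutoff, sig;
        // sawtooth for terrestrial planets (sharper), square wave for giants
        osc = Select.ar(terrestrial, [Pulse.ar(freq), Saw.ar(freq)]);
        // day/night oscillation: sweep the cutoff between the 2nd and 16th harmonic
        cutoff = SinOsc.kr(dayRate).exprange(freq * 2, freq * 16);
        sig = RLPF.ar(osc, cutoff, 0.5, amp);
        Out.ar(out, sig.dup);
    }).add;
    )

    // An Earth-like planet, then a Jupiter-like one (day of 0.414 Earth days):
    Synth(\planet, [\freq, 261.6, \terrestrial, 1, \dayRate, 1]);
    Synth(\planet, [\freq, 65.4, \terrestrial, 0, \dayRate, 1 / 0.414]);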

(28) 3.2. Sonification Gravity The sonification of gravity was inspired by Tomlinson et al. [25], where a bouncing ball was used to convey gravity. This worked as a parametric auditory icon, where a planet with low gravity caused the ball to bounce less frequently and for a longer time. Because gravity is related to the mass of the planet, the pitch of the ball would also be higher for a planet with lower gravity. The sound of the ball was created using the built-in UGen TBall, which models the impact of a bouncing ball. The UGen has a gravity parameter which was set in ratio to Earth for all the planets. Panning was also used on the ball sound so it would be perceived that the ball was bouncing sideways to create a wider stereo image. The ball was dropped every seventh second for every planet as it became a fitting time interval for all the ball bounces to fit within for each planet. It also worked as a time indicator when listening to every planet as seven seconds in the default time speed of 1 day/second represented a week on Earth. To scale the gravity of the planets, the unit used was the gravity of Earth, denoted as g. Because this gravitational unit is in relation to Earth, it gives Earth a value of 1, and the rest of the planets became a ratio of Earth’s gravity. Atmosphere and Wind Atmosphere and wind speed were conveyed with a sound that resembled the sound of wind. The depth of the wind corresponded to the density of the atmosphere, while the intensity of the wind corresponded to the average wind speed of the planet. The sound of the wind was created by letting noise be modulated by a low-pass filter where the cutoff frequency was swept randomly. The type of noise depended on the type of planet, where pink noise was used for the inner planets and brown noise was used for the outer planets. Brown noise has a steeper fall-off in sound level with increasing frequency compared to pink noise, which created a deeper wind, reflecting on the thick atmosphere of the outer planets. Additionally, if a planet did not have a defined atmosphere (like Mercury) the sound would not play. The density of the atmosphere of a planet was determined mainly by using the surface pressure of the planet. However, this value varied largely between the planets and did not apply for the outer planets since they lack a defined surface. Instead of mapping to the actual values of their atmosphere, the planets were instead ranked using these values and depending on the type of planet. A planet with a higher surface pressure would get a deeper wind sound, but not directly mapped to its values. A similar strategy was also used for the wind speed, since no definitive data source could be found for all of the wind speeds of the planets. It was also decided that the data used would be the wind speed of the overall atmosphere, and not necessarily the surface of planet. Venus for example has winds of up to 100 m/s in its atmosphere but decreases to just 3 m/s on its surface. Temperature Temperature was conveyed as the density of a grating sound. The analogy of the sound could either be linked to a frying sound, fire crackling, or the sound of a Geiger counter. Higher temperature resulted in a higher number of impulses, creating more noise. The lowest and highest temperature for a planet created an interval which was swept through with the speed of the length of day for the planet. 
Atmosphere and Wind

Atmosphere and wind speed were conveyed with a sound that resembled the sound of wind. The depth of the wind corresponded to the density of the atmosphere, while the intensity of the wind corresponded to the average wind speed of the planet. The sound of the wind was created by passing noise through a low-pass filter whose cutoff frequency was swept randomly. The type of noise depended on the type of planet: pink noise was used for the inner planets and brown noise for the outer planets. Brown noise has a steeper fall-off in sound level with increasing frequency than pink noise, which created a deeper wind, reflecting the thick atmospheres of the outer planets. Additionally, if a planet did not have a defined atmosphere (like Mercury), the sound would not play.

The density of the atmosphere of a planet was determined mainly by the surface pressure of the planet. However, this value varied greatly between the planets and did not apply to the outer planets, since they lack a defined surface. Instead of mapping to the actual values of their atmospheres, the planets were ranked using these values together with the type of planet. A planet with a higher surface pressure would get a deeper wind sound, but not one directly mapped to its values. A similar strategy was used for the wind speed, since no definitive data source could be found for the wind speeds of all the planets. It was also decided that the data used would be the wind speed of the overall atmosphere, and not necessarily that of the surface of the planet. Venus, for example, has winds of up to 100 m/s in its atmosphere, but these decrease to just 3 m/s at its surface.

Temperature

Temperature was conveyed as the density of a grating sound. The analogy of the sound could be a frying sound, fire crackling, or the sound of a Geiger counter. A higher temperature resulted in a higher number of impulses, creating more noise. The lowest and highest temperature of a planet created an interval which was swept through at the speed of the planet's length of day. A low-pass filter was also driven at the speed of the length of day to highlight the day (highest) and night (lowest) temperature of the planet. The sound was created using the UGen Dust in SuperCollider, which provides random impulses whose density increases with higher temperature.

Kelvin was used as the unit for temperature, mainly to avoid negative values, which would not be suitable to map to the frequency of the noise. The inner planets had a temperature range, since they had the most accurate data, while the mean temperature was used for the outer planets. The temperatures of Uranus and Neptune were clamped to the lowest temperature of Mercury to create a smaller interval of values between the planets. All the temperature values were then scaled down so that the planet with the lowest temperature received a value of 1. This meant that the coldest planets would output a sound only about once per second on average.
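A minimal sketch of this temperature mapping, using the scaled values described above (the filter range and parameter defaults are assumptions):

```supercollider
(
SynthDef(\planetTemp, { |out = 0, tempLo = 1, tempHi = 7, dayLengthHours = 24, amp = 0.2|
    var dayRate = 24 / dayLengthHours;
    var phase = SinOsc.kr(dayRate).range(0, 1);
    // Sweep the impulse density between the night (lowest) and day (highest)
    // temperature values over one day.
    var density = phase.linlin(0, 1, tempLo, tempHi);
    var crackle = Dust.ar(density);
    // A low-pass filter following the same sweep darkens the night side.
    var sig = LPF.ar(crackle, phase.linlin(0, 1, 500, 5000));
    Out.ar(out, (sig * amp) ! 2);
}).add;
)
```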

Rings

Rings were sonified by letting pure tones that fluctuated in frequency be played around the audience, where the number of pure tones represented how many ring groups were present. Because ring systems have no recorded mass or unified position, the sonification works more as an indication of the presence of the rings. Because only Saturn's rings are visible in OpenSpace, only Saturn was given this sound, even though all the outer planets have rings to some degree.

Solar System View

The Solar System view included the sonifications of mass, length of day and length of year of each planet. The sound design was, however, simplified to enable playing more planets at the same time. This was done by passing brown noise through a band-pass filter whose center frequency was set according to the size of the planet. The pitch was doubled for all the planets, both to signify that the planets appear smaller on the screen and to better suit the brown noise. Instead of a sweeping low-pass filter to convey the length of day, a pulse oscillator was used to simplify the sound. This type of sound was also used for the moons of the planets.

3.2.3 Sound Architecture

To create the sonification, an architecture was built in SuperCollider to make sure that the data for all the planets were processed in the same way but with different parameters. An overview of this structure can be seen in Figure 3.5. The data for each planet went through the same SynthDef, only with different values, creating an environment where the planets could be compared to each other, as their sounds originated from the same kind of sound generators. This created several instances of synths that were grouped together according to which planet they belonged to. Grouping the sounds made it easier to control all of the sounds of a specific planet.

One way of controlling the sounds was to turn them on and off, which was controlled from the GUI of OpenSpace (see section 3.5). This was done using a gate function, where a binary signal determined whether the sound should play, similar to a key being pressed down or released on a keyboard. An envelope, i.e. the contour of the amplitude, was used together with the gate to create a fade effect when starting and stopping the sound, i.e. the attack and release times.

Figure 3.5: An overview of the structure of the sonification.
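A minimal sketch of the gate-and-envelope behaviour described above (the synth name and fade times are assumptions):

```supercollider
(
SynthDef(\gatedPlanet, { |out = 0, gate = 1, freq = 110, amp = 0.2|
    // Env.asr fades in when the gate goes to 1 and out when it goes to 0;
    // doneAction: 2 frees the synth once the release has finished.
    var env = EnvGen.kr(Env.asr(2, 1, 2), gate, doneAction: 2);
    var sig = LPF.ar(Saw.ar(freq), freq * 4);
    Out.ar(out, (sig * env * amp) ! 2);
}).add;
)
// x = Synth(\gatedPlanet);   // sound fades in
// x.set(\gate, 0);           // sound fades out, then the synth is freed
```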

At the end of the SuperCollider pipeline, effects that were meant to affect all the sounds in the sonification were implemented. This included a reverb effect, which created a reverberation after every sound. This simulated how a sound would resonate in a room of a certain size, depending on the length of the reverberation time. Having the same reverberation time on every sound created a sense that all the sounds exist in the same space, which makes it easier to relate them to each other. The reverb was created by duplicating the original sound of all sound channels, delaying the duplicates randomly by a small margin, and lastly applying a reverb filter.

3.2.4 Surround in the Dome

SuperCollider offers native support for surround sound. By default, the first eight output channels are reserved for the eight speakers of a 7.1 surround setup. The order of these speakers and their ideal placement is illustrated in Figure 3.6.

Figure 3.6: Ideal surround sound placement. Note that the LFE-component does not emit directional sound and can be placed more freely.

Positioning a sound is called panning. For a stereo setup, a value of 0 in SuperCollider means that the sound is only played in the left speaker, a value of 1 plays the sound in the right speaker, and a value between 0 and 1 positions the sound somewhere in between. To extend this concept to a surround setup, the placement of the speakers is expressed as an angle within a circle, also known as the azimuth. However, the panning functions of SuperCollider use the channels in their numerical order, shown in Figure 3.6, so the channel order does not increase proportionally to the azimuth angle. To correct this, the channels had to be rerouted into their circular order. Positional data from OpenSpace (see subsection 3.3.3) could then be used to create the desired panning.

The third channel in SuperCollider represents the low frequency emitter (LFE), or subwoofer. Because this channel only handles low frequencies, which are not perceived as directional, it did not need to be part of the panning function. Instead, this channel received a weighted sum of all channels, which was low-pass filtered to suit the frequency range of the subwoofer.
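As an illustration, azimuth panning over the seven directional channels could look like the following minimal sketch. It assumes the channels have already been rerouted into circular order, and the use of PanAz is an assumption, as the thesis does not name the exact panning function:

```supercollider
(
SynthDef(\surroundPan, { |out = 0, azimuth = 0, amp = 0.2|
    var src = PinkNoise.ar(amp);
    // PanAz spreads the signal over numChans speakers arranged in a circle.
    // Its position argument covers the full circle over the range -1..1,
    // so the OpenSpace angle (-pi..pi radians) is divided by pi.
    var panned = PanAz.ar(7, src, pos: azimuth / pi, orientation: 0);
    Out.ar(out, panned);
}).add;
)
```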

To monitor the surround sound during development, a conventional stereo setup would not be enough. An external sound card was initially used to verify that surround sound was output outside of SuperCollider. There were, however, difficulties with getting the equipment for a surround speaker setup. Instead, a virtual surround software was used to be able to output all eight channels, albeit only virtually through two headphone speakers. The software used was an older version of the headphone virtualization software Razer Surround (https://www.razer.com/7.1-surround-sound), which enables any stereo headphones to output virtual surround.

3.3 Integration with OpenSpace

To integrate the sonification with OpenSpace, the positional and temporal data of OpenSpace needed to be used in the sonification to match what was shown on screen. How far away a planet was from the camera should affect how loud the sonification would be, and where the planet was on screen should affect the direction the sound would come from. To accomplish this, data needed to be extracted from OpenSpace and converted to be compatible with SuperCollider.

3.3.1 Extraction Method

The structure of OpenSpace allows for different approaches to extracting data from the software, which were explored to find the most suitable one. At the beginning of the project, an attempt was made to create an external C++ application that would extract data from OpenSpace. This application would be connected to OpenSpace with a network socket and communicate via JSON messages. However, it was discovered that OpenSpace received the messages but then ignored them. Therefore, it was decided to work more closely with the OpenSpace software and create a new module. This new sonification module started a new thread when it was initialized by the OpenSpace core. The thread was in charge of monitoring the data and sending it directly to SuperCollider when needed. When the module was de-initialized, a message was sent to SuperCollider to stop the sonification, and the thread was destroyed.

3.3.2 The Data

It was initially intended that all the data needed for the sonification would be extracted from OpenSpace. However, since some data for the planets does not change over time (for example mass, density and gravity), it would be unnecessary to extract it from a dynamic software like OpenSpace. Instead, this static data was acquired from the NASA planetary fact sheets (https://solarsystem.nasa.gov/planet-compare/ and https://nssdc.gsfc.nasa.gov/planetary/factsheet/index.html) and used in SuperCollider as constants.

Dynamic data of interest for the sonification were the positions of the camera, the planets and their moons. These positions could then be used to calculate the distances and angles from the camera to the planets. OpenSpace uses a scene graph to keep track of the objects in the scene, and every object is represented in the scene graph by one or more nodes. The positions were acquired by searching the scene graph for the corresponding node and extracting its world position.

3.3.3 Distances and Angles

Surround panning was used in the sonification to spatially position the planets around the audience, as previously mentioned in subsection 3.2.4. This required the angle between the camera and each planet, which was calculated in steps; see Figure 3.7 and Equation 3.1.

First the vector from the camera to the planet was calculated, giving $\vec{CP}$. This vector was then projected onto the horizontal camera plane, giving $\vec{CP}_{CH.plane}$, by subtracting $\vec{CP}_{C_{up}}$, the component of $\vec{CP}$ along the camera's up-vector. The angle $\theta$ between the projected vector and the forward vector of the camera, $\vec{C}_{forward}$, was calculated using the function orientedAngle from the library glm (documentation: https://glm.g-truc.net/0.9.4/api/a00210.html, source: https://github.com/g-truc/glm/blob/master/glm/gtx/vector_angle.inl), using the camera's up-vector $\vec{C}_{up}$ as the reference axis. The oriented angle was used instead of the absolute angle, since the surround system had to be able to distinguish between left and right. This angle was then sent to SuperCollider in radians, ranging from $-\pi$ to $\pi$.

Figure 3.7: The calculation of the angles for planets relative to the camera.

$$\begin{aligned} \vec{CP} &= \vec{P} - \vec{C} \\ \vec{CP}_{CH.plane} &= \vec{CP} - \vec{CP}_{C_{up}} \\ \theta &= \text{glm::orientedAngle}\left(\vec{C}_{forward},\ \vec{CP}_{CH.plane},\ \vec{C}_{up}\right) \end{aligned} \tag{3.1}$$

For other situations, the sonification required the angles to be calculated differently, depending on whether the sonification was supposed to be perceived spatially from the camera or from an object inside OpenSpace. One such case is when the angles to the moons are to be perceived from the planet they orbit instead of from the camera. If the angles were perceived from the camera, the audience would perceive the moons as emitting sound from where they are on the screen. If the angles are instead perceived from the planet, the audience hears the moons orbit around them. Since this was the wanted outcome, the angles were in this case calculated by first computing the vector from the planet to the moon, $\vec{PM}$. This vector was then projected onto the horizontal camera plane, giving $\vec{PM}_{CH.plane}$. The angle $\phi$ between the forward vector of the camera, $\vec{C}_{forward}$, and the projected vector was then calculated in the same way as previously; see Equation 3.2. This method was also used to calculate the angle from the Sun to the planets with respect to the camera. That case could, however, be simplified, since the origin of the world coordinate system is placed at the Sun.

$$\begin{aligned} \vec{PM} &= \vec{M} - \vec{P} \\ \vec{PM}_{CH.plane} &= \vec{PM} - \vec{PM}_{C_{up}} \\ \phi &= \text{glm::orientedAngle}\left(\vec{C}_{forward},\ \vec{PM}_{CH.plane},\ \vec{C}_{up}\right) \end{aligned} \tag{3.2}$$

The sound level $S_P$ of the sonification for each planet was mapped to the distance between the camera and the planet to create an immersive element.

The distance was calculated as the length of the vector between these objects. The rate at which the sound level would decrease with distance was determined by Equation 3.3, or, if the planet did not have any moons, by Equation 3.4.

$$k = \frac{0.5 - 1}{d_{moon} - d_{close}}, \qquad S_P = e^{a \cdot k \cdot d} \cdot b \tag{3.3}$$

$$S_P = \frac{3 \cdot I}{d} \tag{3.4}$$

In Equations 3.3 and 3.4, the diameter of the planet is denoted $I$, $d$ is the distance from the camera to the planet, and the values of the coefficients $a$ and $b$ were used to adjust the equation for the different planets. The equation depended on three distances for each planet: $d_{close}$ was the distance at which the camera was considered close to the planet, corresponding to the highest sound level of the planet sonification; $d_{moon}$ was the distance at which the camera approached the orbits of the planet's moons, creating a transition in sound level between the planet and its moon system; and lastly, $d_0$ was the distance at which the sound level of the sonification would be almost inaudible, because the planet would be too far away. Equation 3.3 created a sound level curve (see Figure 3.8) that was individual for each planet, since the planet's size, number of moons, and the distance from the planet to its moons all affect the sound level. Before any sound level was sent to SuperCollider, the values were clamped between 0 and 1 to make sure that unreasonably high sound levels would not be sent.

Figure 3.8: The black curve depicts how the sound level for the sonification of Earth changes depending on the distance to the planet. The value on the Y-axis is the sound level for the planet and the value on the X-axis is the distance to the planet in kilometers (km). The points where lines of the same color intersect are points that the curve was desired to pass close to.
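To illustrate Equation 3.3 and the clamping step, a minimal sclang sketch follows; the distances and coefficients are placeholders, not the per-planet values actually used:

```supercollider
(
~planetLevel = { |d, dClose = 1e4, dMoon = 4e5, a = 1, b = 1|
    // Equation 3.3: k < 0, so the level decays exponentially with distance,
    // starting from b at d = 0.
    var k = (0.5 - 1) / (dMoon - dClose);
    (exp(a * k * d) * b).clip(0, 1);   // clamp to 0..1 before sending
};
~planetLevel.value(2e5).postln;        // level at a distance of 200 000 km
)
```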

It was also desired that the sound level of the moon sonification would be loudest when the planet's sound level was half of its maximum, and fade away as the planet sonification became either louder or quieter. This was done to put more focus on the moons when they were more present on screen. Therefore, the sound level $S_M$ of the moon sonification was calculated separately with Equation 3.5, where $S_P$ is the sound level of the planet that the moons orbit.

$$S_M = -4(S_P - 0.5)^2 + 1 \tag{3.5}$$

3.3.4 Precision Error

As mentioned in section 2.1, the single-precision floating-point errors in OpenSpace [2] caused the extracted positions to become unstable, which in turn caused the calculated angles to fluctuate as well. This problem became more prominent for the planets further away from the Sun and when the time was simulated at a faster pace. It caused unwanted sound artifacts when SuperCollider applied a new angle that differed substantially from the previous one. To solve this, a lag was used in SuperCollider, which interpolated from the old value towards the new one to remove the errors that caused the unwanted sound. However, as mentioned earlier, the angles range from $-\pi$ to $\pi$, where both $-\pi$ and $\pi$ are positioned behind the audience. When the lag function receives the command to go from $-\pi$ to $\pi$, the mathematical solution is to interpolate through 0, which is in front of the audience. This causes the sound to pan around the front to get to the other side instead of crossing the gap directly. This problem was not solved and is further discussed in section 5.3.

3.3.5 Optimization

In games and computer graphics there is a method called level of detail (LOD) that saves rendering time by using the appropriate resolution of the 3D models in a scene depending on the distance to the camera [14]. Similarly, the data from OpenSpace was not sent continuously to SuperCollider, but only when it had changed from what was previously sent. For example, if the time speed in OpenSpace was real-time and the camera was not moving, the values would only change by a small margin, which made it unnecessary to send the same data again. However, since the environment of OpenSpace is large and some planets move faster than others, the distance could change more rapidly for some planets and trigger data to be sent more frequently, even when those planets were not in focus or in the frame. To avoid this, two different sensitivities for what was classified as new data were used. If the planet was in focus, the sensitivity was higher than if it was not, giving more accurate data for the planet of interest while the less interesting planets sent less data. Specifically, when the planet was in focus, data was sent if the new angle differed by more than 3 degrees or the distance by more than 1000 km from the previous data; if the planet was not in focus, the sensitivity was lowered to 6 degrees and 10000 km.

3.4 Open Sound Control

Open Sound Control (OSC) was the protocol used to send the extracted data from OpenSpace to SuperCollider. It was integrated into the sonification module in OpenSpace with the library oscpack (http://www.rossbencina.com/code/oscpack?q=~rossb/code/oscpack/). The data was written to a buffer that was connected to an outbound packet stream. Before the data was written to the stream, an address was defined for each planet, and all data relevant to that planet was sent with that address. The data was written to the stream in a defined order to make it easier for SuperCollider to know which part of the message contained a particular type of data. The data included the distance and angle to the planet, an array with the GUI settings for that planet, and the angles to the major moons if the planet had any.
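On the receiving side, such a message might be handled with an OSCdef; in this minimal sketch the address, the argument order and the synth variable are assumptions based on the description above:

```supercollider
(
OSCdef(\earthData, { |msg|
    // msg[0] is the OSC address; the remaining elements follow the
    // order defined in the OpenSpace module.
    var distance = msg[1];   // distance from the camera to the planet
    var angle = msg[2];      // horizontal angle in radians, -pi..pi
    ~earthSynth.set(\dist, distance, \azimuth, angle);
}, '/Earth');
)
```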
The array of the GUI settings was sent as a Binary Large Object (Blob) [27], which behaves as an array with binary content in SuperCollider. For the Solar and Compare
