
SENSITIV – Mapping Design of Movement Data to Sound Parameters when Creating a Sonic Interaction Design Tool for Interactive Dance


Academic year: 2021



DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING,
SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2020

SENSITIV

Mapping Design of Movement Data to Sound

Parameters when Creating a Sonic Interaction

Design Tool for Interactive Dance


SENSITIV – Mapping Design of Movement Data to Sound Parameters when

Creating a Sonic Interaction Design Tool for Interactive Dance

ABSTRACT

Technology has during the last decades been adopted into the dance art form, which has given rise to interactive dance. Many studies and performances have investigated this merging of dance and technology and the mapping of motion data to other modalities. But none have previously explored how the introduction of technology affects the mutually interdependent relationship, the co-play, between a dancer and a live musician in a completely live setting. This thesis specifically explores this novel setting by investigating which sound parameters of a live drummer's sound a dancer should be able to manipulate, through the use of motion tracking sensors, to alter the dancer's experience positively compared with not using the tool. For this purpose, two studies have been conducted. First, a development study to create a prototype from the first-person perspective of a professional dancer and choreographer. Second, an evaluative study was conducted to evaluate the applicability of the prototype, and the experience of manipulating the chosen sound parameters, with a larger group of professional dancers. The studies showed that the sound parameters of delay and pitch altered a dance experience most positively. This thesis further shows that it is important for the user to get enough time to truly get to know the interactions allowed by the system, to actually be able to evaluate the experience of the sound parameters.


SENSITIV – Mapping of Movement Data to Sound Parameters to Create a Sonic Interaction Design Tool for Interactive Dance

SUMMARY

Technology has in recent decades been adopted into the dance art form, which has emerged as interactive dance. Many studies and performances have been carried out to investigate this merging of dance and technology, and the mapping of movement data to other modalities. But no one has previously examined how the introduction of technology affects the interplay, the mutually interdependent relationship, between a dancer and a live musician in a completely live setting. This thesis specifically explores this new setting by investigating which sound parameters of a live drummer's sound a dancer should be able to manipulate, through the use of motion sensors, to change a dancer's experience positively compared with not using the tool. For this purpose, two studies were conducted. First, a development study to create a prototype from the first-person perspective of a professional dancer and choreographer. Then, an evaluation study was conducted to evaluate the usability of the prototype, and the experience of manipulating the chosen sound parameters, with a larger group of professional dancers. The studies showed that the sound parameters delay and pitch changed a dance experience most positively. This thesis further shows that it is important for the user to get enough time to truly get to know the interactions that the system allows, in order to actually be able to evaluate the experience of the sound parameters themselves.


SENSITIV

Mapping Design of Movement Data to Sound Parameters when Creating

a Sonic Interaction Design Tool for Interactive Dance

Lisa Andersson López

Media Technology and Interaction Design
KTH Royal Institute of Technology
Stockholm, Sweden
lisaa3@kth.se

ABSTRACT

Technology has during the last decades been adopted into the dance art form, which has given rise to interactive dance. Many studies and performances have investigated this merging of dance and technology and the mapping of motion data to other modalities. But none have previously explored how the introduction of technology affects the mutually interdependent relationship, the co-play, between a dancer and a live musician in a completely live setting. This thesis specifically explores this novel setting by investigating which sound parameters of a live drummer's sound a dancer should be able to manipulate, through the use of motion tracking sensors, to alter the dancer's experience positively compared with not using the tool. For this purpose, two studies have been conducted. First, a development study to create a prototype from the first-person perspective of a professional dancer and choreographer. Second, an evaluative study was conducted to evaluate the applicability of the prototype, and the experience of manipulating the chosen sound parameters, with a larger group of professional dancers. The studies showed that the sound parameters of delay and pitch altered a dance experience most positively. This thesis further shows that it is important for the user to get enough time to truly get to know the interactions allowed by the system, to actually be able to evaluate the experience of the sound parameters.

KEYWORDS

Interactive dance; sonic interaction design; mapping design; movement data; sound parameters; Max/MSP

INTRODUCTION

The practice of mixing dance, music and technology has been around for many decades. The expansion of technology during the past century has radically shifted the artistic processes in dance performances, because technology has been embraced in dance contexts as a new creative tool for exploring interactive performance environments

[4]. Dance has been described as one of the art forms most widely influenced by the development of digital technology in recent decades [26]. Many studies have explored the embodied interaction and relationship between music and dance through innovative ideas and technology. The modification of playback music by a dancer has been studied [22], as has how dancing can create music through sonification, i.e. the use of non-speech audio to convey information, and the mapping of movements [2, 29, 30]. But there is a lack of studies exploring how the mutually interdependent relationship between a dancer and a live musician, improvising music for the dancer, changes when sensor technology is introduced. In such cases, the musician and the dancer affect each other at the same time, even without the use of technology, and are thereby both influenced by its introduction. In this paper, we refer to this co-creation and mutually interdependent relationship between dancer and musician as the co-play between them. There are many possibilities when designing the mapping between movement and sound using technology. Different mappings of movement to sound will of course produce different results when studying this kind of sensory collaboration between dancer and musician and how the experience and the co-play are affected. This way of constructing a performance creates, besides the actual musical instrument, an interesting setting where the dancer acts as an instrument herself. This project, named SENSITIV, explores the relationship and co-play between a dancer and a drummer using sensor technology, where the dancer manipulates the drum sound, which itself is created through percussion sensors on the drum set.
The project name SENSITIV is Swedish for sensitive, intended to manifest that the dancer's movements are, in this project, explicitly significant in creating the soundscape while dancing. Specifically, this study investigates what real-time motion-to-sound-parameter mappings the sensor system should have to alter the experience positively for the dancer. For this purpose, a prototype is built to examine the novel experiences obtained for the interactive dance practice


that is shaped through letting both the dancer and the drummer concurrently control the sound through their different approaches to it. This stance is novel in the sense that no previous studies have been found that examine the introduction of technology into the co-play between a dancer and a live musician in a completely live setting, where both are interacting with each other beyond the classical dancer-musician interaction that happens without technology. This interaction is unique, as the dancer's real-time manipulation of the sound qualities of the digital sound produced by the drummer adds a new layer of complexity to the interaction. With the help of this tool, part of the physical control of the sound produced is taken away from the musician, who normally has full control over the sound produced, and put in the hands of the dancer; the musician thus loses control in a sense not explored before.

The research question for this thesis has therefore been formulated as: "What mappings from a dancer's movement to sound parameters of the musician's instrument influence the dancer's experience positively in the context of an interactive co-play tool between a dancer and a drummer?" This question will be answered by taking previous work in the field into consideration, by conducting an iterative development study to create a first-person perspective prototype of a system from the perspective of a professional dancer and choreographer, and by testing the final prototype with a set of dancers to evaluate the applicability of the tool. A literature review is first presented, followed by a detailed description of the methodology used and the results of the chosen examination. Both the method and the results section are divided into two parts: one for the development study and one for the evaluation study. The results are thereafter analyzed and discussed.

BACKGROUND

Interactivity, Dance & Technology

Dance is an art form that has been "significantly influenced by the development of digital technology over the last two decades" [26]. When dance and technology are combined in a performance, it becomes an interactive dance performance. The first word in the term interactive dance, interaction, can be defined as an action of at least two things that affect each other and work together [33]. Interactivity in dance has been defined by Birringer [5] as a collaborative performance with a control system, making use of movement tracking technology such as cameras or sensors, which activates or controls components from chosen media. Birringer develops this by stating that this kind of interaction becomes a dialogue

between the technology and the human performer [5]. An interactive dance is thereby a performance where a dancer combines dance movements with something else, in most cases technology, such that they affect each other mutually. Interactive dance can thus be defined as a performance where a dancer's movements and actions are interpreted in real-time by sensory devices, translated into digital information, processed by a computer program, and returned as output that shapes the environment of the performance in real-time [26]. Through interactive dance, and through the embodied interaction of the dancer's body with technology, meaning is created in the environment setting as the actions in the performance and interaction are enhanced through the use of technology [6, 11]. A mutually beneficial relationship is created between the human body and the technology used, where the dance itself can be perceived as a musical composition, as sound is produced or manipulated by body movement [8]. Still, tensions can occur when integrating technology in art creation, as negotiations between art and the limitations of technology have to be considered [15]. Different users can have divergent experiences of technology, even in the same setting: some might perceive technology as augmenting artistic expression, while others perceive it as limiting bodily and artistic expression [15].

One of the first to incorporate interactive technology into dance was Merce Cunningham. He was a significant dancer and choreographer in the area of interactive dance performances, specifically known for his innovative use of technology in his many interactive performances in the 20th century. In 1965 he created a landmark performance called Variations V together with composer John Cage, where dancers interacted with antennas on the stage that reacted to the dancers' movements and thereby triggered sounds [25]. Besides this novel performance, Cunningham created many more innovative and interactive dance performances using technology, sound and visuals that have been crucial in the field [10, 36].

Systems for Interactive Dance Performances

Since Cunningham's first steps in the area, a considerable number of interactive performances, experiments and systems that make use of technology in dance performances and music exploration have been created. One such study was conducted by Jap [22], who investigated the effect of letting dancers control playback music tempo in real-time. This was implemented through real-time tracking and estimation of the dancer's movements using wearable wireless Inertial Measurement Unit (IMU) sensors, which measure rotation and acceleration in 3D space. The prototype created by Jap was shown to enhance engagement and enjoyment for the dancer controlling the tempo of the music.


Many other systems studying the interaction of dance and technology have been built [1, 2, 7, 9, 12, 13, 15, 30]. A handful of these let the dancer compose music in one way or another. One such study is VIBRA by Bergsland et al. [2], an interactive and embodied dance project using sensory technology to investigate the impact of the chosen technology on the artistic expressivity of the dancers. This was explored by letting dancers create sound in real-time through their movements, wearing IMU sensors on different body parts. Another approach was taken by Palacio & Bisig [30], who developed an interactive dance system where a dancer composes a music score through movement, which is immediately mapped to a piano and played in real-time. CalmusWaves [29] was another interactive dance performance, where dancers composed musical notation through the mapping of their gestures and movements, with the help of artificial intelligence. The notation created was simultaneously played live by musicians.

A kind of co-control between dancer and musician has previously been considered by Erdem et al. [12] in their music-dance performance Vrengt. The focus was to explore the boundaries between standstill and motion of the dancer. For this purpose, Erdem et al. developed a co-performance between dancer and musician with the aim of creating a shared instrument, letting the dancer and musician co-control the same musical parameters instead of controlling them separately, making use of very small movements of the dancer. Furthermore, some interactive dance projects do not focus on interaction through sound but on other approaches or media. Nautilus [17] was one such project, focusing on real-time visuals and projections of dancers in a dance performance instead of sound control.

Interaction design using a first-person perspective has been adopted in the process of developing interactive dance performances [13, 15]. Eriksson et al. [13] created the interactive dance performance Dancing With Drones, making use of an optical motion capture system to track a dancer's movement, where the dancer controlled several drones through movement. The design process focused on creating the project together with one dancer, from a personal, first-person felt perspective. Designing from such a first-person perspective has been shown to be favorable when the developers of a work have a different background and mindset than those the prototype is designed for [21]. The research method of research through practice has also been used effectively alongside a first-person perspective [15]. Both design from a first-person perspective and research through practice have been applied in this study.

Besides studies whose ambition has been to develop interactive dance systems through exploring the aesthetics and development of dance and technology, there are works with many other underlying purposes. One such purpose was to explore the possibilities of creating educational tools through interactive dance systems [8, 14]. Another was to develop an interactive dance system, and thereby make use of technology, to attract a younger audience to dance performances [7]. There are endless possibilities and purposes for developing and exploring interactive dance systems.

Sensors and Technology for Different Approaches

Wireless interfaces and wearable devices have grown to be a natural part of the music, dance and performance field. Specifically, IMU sensors have gained traction in movement-based interaction design [22]. Several of the interactive movement-sound projects found have used IMU sensors [1, 2, 14, 24, 27, 30]. Some other wireless interfaces used in interactive dance performances or other experiments with motion and sound are: Myo armbands [12, 15], Nintendo Wii [31], Kinect [9, 17], optical motion capture systems [13, 23], infrared light and camera [7], and VR technology such as the HTC Vive [6]. For some studies, the authors have created their own wearable sensors [32].

A commonly used software for creating interactive sonic interfaces is Max/MSP, the visual programming language for music and multimedia [1, 2, 3, 12, 15, 16, 18, 22, 23, 24, 31, 32].

Mapping Methods and Strategies

Mapping is a way of assigning output variables to input variables. Schacher [32] defined mapping as the "perceptual impact of interaction", meaning that mapping is what is perceived in the interactive action. The design of the mapping of movement to sound is crucial for the perception of movement and gestures in a real-time interactive audio and dance performance [6, 32]. This is because mapping and perception are strongly interconnected, which makes the mapping important for the musical expression [32]. Previous research indicates that the mapping of sound parameters is very important when designing new instruments and should therefore not be overlooked in the design process [20]. Motion data can also be mapped to other modalities, such as from movement to visuals [17].
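In its simplest form, assigning an output variable to an input variable is a clamped linear rescaling. The following sketch is illustrative only; the parameter ranges (rotation speed, delay time) are assumptions and do not come from the thesis.

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly rescale an input (e.g. a motion feature) into an output
    (e.g. a sound-parameter) range, clamping to stay within bounds."""
    value = max(in_min, min(in_max, value))
    normalized = (value - in_min) / (in_max - in_min)
    return out_min + normalized * (out_max - out_min)

# Hypothetical example: rotation speed 0-500 deg/s driving a 0-250 ms delay time
print(map_range(250.0, 0.0, 500.0, 0.0, 250.0))  # prints 125.0
```

Real mapping designs layer such one-to-one scalings into one-to-many or many-to-one relations, which is where the design choices discussed below come in.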

A number of studies particularly exploring mapping strategies can be found [3, 6, 9, 18, 19, 24, 28, 32, 35, 37]. These previous works show that the mapping design is significant for interactive performances. Bomba et al. [6] explored interactive dance by letting visitors move and use a dancer's body, wearing motion tracking sensors, as an instrument to control and create sound. The authors mention that the motion-to-sound mapping design was essential for the study. Bomba et al. [6] use the definition of mapping from Hunt et al. [19], who define mapping as the act of taking real-time performance data from an input device and using it to control parameters of a sound. Further, a strong correspondence between movement and sound is required for the mapping of movement to sound to be substantial [9]. The process of mapping gestures and movement to sound is, in other words, significant for building a well-functioning prototype and performance. Both the choice of sound properties and parameters to map, and the design of their mapping to the movement data, are significant for successfully implementing a meaningful experience.

A number of studies have investigated solely how to create a mapping tool for movement and sound [3, 18, 35]. Gelineck et al. [18] focused on creating an educational mapping tool that was easy to set up and use by everyone. They found it beneficial to implement many constraints in the tool to make it effortless for the users. When Bevilacqua et al. [3] created a mapping tool, they similarly aimed to create a simple, practical and intuitive tool capable of implementing complex audio mappings. Van Nort et al. likewise aimed to create an easily accessible mapping tool, specifically for collaborative mapping design [35]. These works show that it is favorable to create simple and straightforward mapping tools when creating for others.

METHOD

For this study, a prototype was designed with the purpose of investigating what mappings of sound parameters to movement alter the dance experience positively. During the development of the prototype, there were four collaborators: the author of this thesis; main collaborator Thelma Svenns, who collaborated in the co-development of the technology and specifically focused on the input design [34]; drummer Jakob Klang, who contributed as a live musician and with the artistic sound design; and dancer and choreographer Isabell Hertzberg, who contributed to the development study with personal input on the design of the prototype.

An initial development study was conducted together with all collaborators through iterative testing and development of the initial prototype. Through this process, the initial prototype was developed into the final prototype via proof-of-concept mapping of movement to sound, designed from a first-person perspective based on the experience and input of Isabell. After the development study, an evaluation study was conducted together with Thelma and Jakob, in which the final prototype was tested and evaluated individually by a larger group of dancers. The prototype was tested with seven dancers as participants in the evaluation study (see Figure 1).

Figure 1. Overview of the method process.

Equipment

The material used for the prototype consisted of:

• Mesh drum set: 1. snare drum; 2. rack tom; 3. floor tom; 4. bass drum (see Figure 3)
• Four Sensory Percussion (SP) sensors from Sunhouse (https://sunhou.se/sensorypercussion) on the drum set for analog-to-digital conversion (ADC) of the drum sound (Figure 4)
• Four Next Generation Inertial Measurement Unit (NGIMU) sensors from x-io Technologies Ltd (https://x-io.co.uk/ngimu/) for tracking of dancer movement (Figure 4)
• TP-Link AC750 router for Wi-Fi communication between the NGIMU sensors and the computer
• MacBook Air laptop
• MOTU 8pre sound card for connecting the SP sensors to the computer and the Max/MSP sound to the loudspeakers

The programming language Max/MSP was chosen as the coding environment, as it has previously been a popular choice for creating interactive dance performances.

From now on in this paper, when sensors are mentioned but not specified, this always refers to the NGIMU sensors. Otherwise, the specific sensor is explicitly named, such as "SP sensors".

System Design

An overview of the system design for the prototype, with the aim of letting a dancer manipulate and filter the sound created by the drums through movement, can be seen in Figure 2.

In both the development study and the evaluation study, the dancer wore the wireless NGIMU sensors placed on body parts, and the drummer used the SP sensors on the different drums in the drum set for ADC and for sound filtering. The NGIMU sensors use the Open Sound Control (OSC) protocol, which the Max/MSP patch also uses to receive data; this was one reason for choosing the NGIMU sensors as the motion tracking sensor technology. The sensor data from the dancer's movement interacted with the sound produced by the drummer and the SP sensors, through the Max/MSP patch, and thereby the dancer controlled the characteristics of the sound outcome.

Figure 2: Overview of the system design of the prototype.

Figure 3: The mesh drum set with mounted SP sensors.

The SP sensors mounted on the drum set converted the analog mesh drum signal to digital. Inside the SP software, which is built as a music production tool for drums, the digital sound was manipulated depending on which drum was played and in what way, and on how the mapping between drum and sound was constructed in the SP software. Through the router, the NGIMU sensors were connected wirelessly to the laptop in real-time. These sensors were then connected to the Max/MSP patch, which showed the real-time NGIMU sensor data of the dancer's movement. The motion parameters from the NGIMU sensors used in the initial prototype were the accelerometer and gyroscope data. These movement parameters were mapped to different sound parameters that are presented further on. The SP software was then connected to the NGIMU sensors in the Max/MSP patch through an SP VST (Virtual Studio Technology) plugin. The digital sound signal created in the SP software was sent through the sound card and picked up by the Max/MSP patch. The Max/MSP patch then combined the SP sound signal with the NGIMU sensor data, so that the NGIMU sensors could manipulate the sound produced by the drummer through the SP technology and software. The final audio output, the sound created by the drummer and manipulated by the dancer in real-time, was sent out from the Max/MSP patch, through the sound card and out to the external speakers.
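The patch's first job in this pipeline is to unpack each incoming OSC message into gyroscope and accelerometer values. The routing step can be sketched as a plain function, independent of Max/MSP; the "/sensors" address and the gyro-then-accel argument order are assumptions for illustration, not taken from the thesis.

```python
def handle_osc_message(address, args, state):
    """Route one incoming OSC-style message (address pattern plus argument
    list) into a state dict, mimicking how the patch unpacks the NGIMU
    stream. Address and argument order are assumed, not documented here."""
    if address == "/sensors":
        state["gyro"] = tuple(args[0:3])   # assumed: angular rate x, y, z
        state["accel"] = tuple(args[3:6])  # assumed: acceleration x, y, z
    return state

state = {}
handle_osc_message("/sensors", [10.0, 0.0, 5.0, 0.1, 1.0, 0.2], state)
print(state["gyro"])   # prints (10.0, 0.0, 5.0)
print(state["accel"])  # prints (0.1, 1.0, 0.2)
```

In the actual system this routing happens inside the Max/MSP patch via its OSC objects; the sketch only shows the shape of the data flow.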

Figure 4: Top: placement of one of the SP sensors on the drum set. Bottom: placement of one NGIMU sensor on Isabell.

Initial Mapping and Input Design

The initial prototype made use of one NGIMU sensor. In the development study, this sensor was initially placed on one of Isabell's arms. During the development study, different placements were explored, which was the focus of the study presented in [34]. The initial mapping consisted of mapping the raw acceleration data from the NGIMU sensor to the sound parameters volume and pitch. These were initially chosen as they were clearly noticeable and the difference in sound was easily heard when manipulated. Specifically, one of the x-, y- and z-axes of the raw acceleration data was used at a time, alternated between tests, and mapped to one of the initial sound parameters. The sounds mapped to the drum set through the SP software were at this stage classical drum sounds. These were the sounds to be controlled by a dancer.
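The initial one-axis mapping can be sketched as two small transfer functions; the acceleration range and the semitone range are illustrative assumptions, not values from the thesis.

```python
def accel_to_gain(a, a_max=4.0):
    """Map one raw acceleration axis to a gain in [0, 1]:
    faster movement -> louder. Range a_max is an assumed full scale."""
    return min(abs(a) / a_max, 1.0)

def accel_to_pitch_shift(a, a_max=4.0, max_semitones=12.0):
    """Map the same axis to an upward pitch shift in semitones,
    saturating at max_semitones (assumed to be one octave here)."""
    return min(abs(a) / a_max, 1.0) * max_semitones

print(accel_to_gain(2.0))          # prints 0.5
print(accel_to_pitch_shift(4.0))   # prints 12.0
```

Taking the absolute value makes the direction of movement irrelevant, so only the intensity of the motion drives the sound, which matches the "clearly noticeable" criterion described above.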

Development Study

The development study was conducted over a five-week period together with dancer Isabell. The aim of the development study was to design a prototype from a first-person perspective through research through practice, focusing on creating a satisfying tool specifically for Isabell. The study consisted of iterative tests of the prototype in the form of 1-2 h meetings once a week with all the collaborators: developers, musician and dancer. During these meetings, the prototype was tested and developed based on the dancer's feedback from think-aloud sessions about her dance experience. The iterative tests focused on the development and evaluation of the mapping between movement and sound. From the data collected through Isabell's feedback, the mapping provided by the prototype was iteratively reshaped.

Figure 5: Isabell dancing during a session in the development study.

During the development study, different approaches to mapping were tested and examined. In the very first test of the initial prototype with Isabell, the initial mapping was evaluated and the acceleration motion parameter values were explored. All the axes (x, y and z) were individually mapped and tested with the initially chosen sound parameters. After every test with the dancer, the developers adjusted and redesigned the prototype based on the feedback given. The final sounds mapped to the drums through the SP software were designed and chosen by Jakob (see Appendix C). The outcomes obtained from this part of the study were used in the final prototype.

Lastly in the development study, all the collaborators designed the evaluation study together for it to fit for a larger group of dancers to test the prototype.

Evaluation Study

The goal of the evaluation study was to evaluate the final prototype with participants other than the dance collaborator Isabell. For this purpose, seven participants were recruited. All the participants were professional dancers in the genres jazz, modern and/or ballet, and accustomed to improvising dance. The participants were between 20 and 32 years old (mean age 28 years). All of them were female. Each participant was video recorded, and gave consent before the experiment.

Figure 6: Environment setting and equipment used during both the development and evaluation studies.

The experiment was executed individually, one participant at a time. It took place at Stockholm University of the Arts (SKH) and lasted 1-2 h for every participant. Each experiment was divided into three dancing sessions:

1. Dance with the NGIMU sensors turned off;
2. Exploration of the mapping environment and sensors with the NGIMU sensors turned on;
3. Dance with the NGIMU sensors turned on.

Every dance session lasted 5 minutes, 15 minutes of dancing in total. Jakob played the drums using the same sounds mapped to the SP sensors for all the dance sessions and played similar beats for all participants.

The experiment started with a brief introduction to the project and the experiment, while the participant put on all four sensors. All participants wore four NGIMU sensors during the entire experiment: one on each wrist and one on each ankle. Each participant was then told to start the first session, which consisted of improvising dance in interaction with the beat played by Jakob through the SP sensors. The NGIMU sensors were not in use and therefore turned off during this session. The purpose of the first session was for the participant to be able to compare the experience of the dance and co-play without, and then with, the NGIMU sensors turned on (later in session three), and also to get used to wearing the sensors, though turned off. During the second session, the NGIMU sensors were used and turned on. Session two was a warm-up session where the participant was told to explore the mapping environment of the prototype and sensors, without being given any information about how the prototype worked or about the mapping. In the third and main experiment session, each participant was given the same task as in the first session: to improvise dance to the music played by Jakob through the SP sensors. During the third session, the sensors were used and turned on. For this session the participant was given some information, though not in detail, about how the prototype worked and about the mapping, and was then told to explore the impact of movement on the music through dance.

Figure 7: Jakob playing for one of the participants testing the prototype during the evaluation study.

To evaluate the participants' experience of the prototype, each participant filled out a questionnaire followed by an interview together with the developers and Jakob. The questionnaire and the interview each took around 20-30 minutes, 40-60 minutes in total. The participants were first given the questionnaire (see Appendix A). It included both open- and closed-ended questions, for the purpose of collecting both qualitative and quantitative data. The questionnaire treated both the experience of the mapping designed for this study and the input design intended for main co-developer Thelma's study [34]. Second, interviews were conducted with each participant directly after the questionnaire was filled out. During the interview, the participant watched the video from the third session together with the developers and Jakob. The interview was designed to be semi-structured, with some specific questions asked of every participant (see Appendix B), while also making space for open discussion. The participant was also told to think aloud while watching the video and to describe the experiences from the session. The audio of the interview was recorded. The data extracted were qualitative.

RESULTS

In this section, the results gathered from both the initial development study and the evaluation study are presented. The process of the development study is presented together with the results from the iterative testing with Isabell. These results disclose the mapping design of the final prototype. Thereafter, the results of the evaluation study of the prototype, gathered from both the questionnaire and the interviews, are presented.

Development Study

The result of the initial development study was the final prototype built together with Isabell and the remaining collaborators. During the five-week period of this study, several sound filters were developed in Max/MSP, mapped to the sensor and movement data, and iteratively tested by Isabell.

The initial sound parameters mapped to the sensors were volume and pitch control. These were initially mapped to the accelerometer data of the NGIMU sensors: both volume and pitch were mapped to increase when the acceleration increased. The raw accelerometer data was mapped and scaled adequately for both sound parameters. Volume was mapped and controlled through the “gain~” object in Max/MSP. Isabell expressed that manipulation of volume was not satisfying, as it did not give her much effect. Nor did Jakob like the manipulation of volume, as he felt that it disturbed his playing when he could not hear himself playing the drums at the times the volume got too low. Manipulation of volume was therefore dropped. At this point, the mapping between the acceleration values of Isabell’s movement and the sound parameters was switched to a mapping between gyroscope (rotation) data and the sound parameters instead, as Isabell preferred the sense of rotational control over acceleration control [34]. The square root sum of the x, y and z gyroscope values was calculated in Max/MSP and mapped to some of the sound parameters tested later. As a result of Thelma’s study [34], we used only one NGIMU sensor at a time when testing different sound parameters, tested on different body parts. Only one sound parameter was tested at a time. The sound parameters tested were mapped to manipulate all four drums equally and at the same time.
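A minimal sketch of this kind of sensor-to-parameter scaling, assuming the “square root sum” denotes the Euclidean magnitude of the three gyroscope axes; the ranges and function names are illustrative, as the actual mapping was built in Max/MSP:

```python
import math

def gyro_magnitude(x, y, z):
    """Combine the three gyroscope axes (deg/s) into a single rotation
    intensity, read here as sqrt(x^2 + y^2 + z^2)."""
    return math.sqrt(x * x + y * y + z * z)

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale a sensor reading into a sound-parameter range,
    clamping to the output bounds."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# e.g. mapping a rotation intensity of 0-500 deg/s onto a 0-1 gain value
gain = scale(gyro_magnitude(120.0, 160.0, 0.0), 0.0, 500.0, 0.0, 1.0)
```

In the actual patch the scaled value would drive an object such as gain~ rather than a Python variable; the sketch only shows the shape of the calculation.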

When experimenting with the mapping of pitch, different kinds of mappings and scalings of the pitch control value range were tested. The objects “pitchshift~” and “gizmo~” were used initially but were dropped due to too much latency when connected to both the NGIMU sensors and SP. Instead, the “freqshift~”5 object was used for pitch manipulation. First, we mapped and scaled the gyroscope square root sum value to a change in a low frequency range between +0 and +500 Hz, where the pitch value determined what frequency value to add to the current sound created by Jakob through SP. Thereafter, a change in a higher frequency range between +0 and +1000 Hz was tested. Several approaches were evaluated, such as starting the sound manipulation only once a certain threshold of movement data was passed. Isabell experienced that none of these scaled pitch control value ranges or mappings was satisfying or lived up to her expectations. She expressed that this mapping between her movements and the pitch control of the SP sound gave her a distant feeling and made her feel disconnected from, and not as one with, the music. Isabell commented: “When I made intense movements, then the sound didn’t get as intense as I intended, instead the sound became more like foggy”. Regarding the mapping of the pitch range between +0 and +1000 Hz Isabell commented: “The effect felt weak. I want the sound effect to feel closer to the instrument. The pitch felt like outside the co-play, I want the pitch and Jakob together, I want them to become one.”.

As none of these approaches to pitch manipulation was satisfying for Isabell, one last mapping approach was tested: only the y-axis of the gyroscope data, which corresponded to vertical rotational movement, was mapped to pitch control. The gyroscope measures rotational motion, the angular velocity in degrees per second. The raw rotational data was mapped directly to the freqshift~ object and had the intrinsic range -500 to +500 Hz, with a base of 0 Hz pitch shift when there was no movement. The mapping was in this case direct and without any calculations, meaning that the raw rotational value was set as the change in frequency in Hz. Isabell really liked this mapping of pitch, specifically that she could both increase and decrease the frequency: “[...] super clear, exciting and effective. [...] It felt hands-on.”. She further expressed that she felt closer to the sound: “I liked this very much. I heard more of Jakob’s rhythms and sounds and got a sense of controlling the pitch. [...] I didn’t at all have the same distant feeling as before”. Jakob had a similar experience and expressed: “Nice that the pitch also went down.”. When coding the pitch manipulation for the final prototype, we set a threshold for the detection of movement, so that not every small movement was detected (the NGIMU sensors were very sensitive), by setting the step size to 100 Hz. Isabell’s feedback regarding this step size was that it gave her a better experience.

5 The freqshift~ object in Max/MSP is a time-domain frequency shifter. The object takes either an integer or float as argument; this value is used to shift the frequency of the sound in Hertz. https://docs.cycling74.com/max8/refpages/freqshift~
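One way to read the direct y-axis mapping and the 100 Hz step size is as a clamped, quantized frequency shift; the sketch below is an illustrative assumption, as the actual logic lived in the Max/MSP patch and may have differed:

```python
def pitch_shift_hz(gyro_y, step=100.0, limit=500.0):
    """Map the raw vertical rotation value (deg/s) directly to a frequency
    shift in Hz, clamped to the intrinsic -500..+500 Hz range and rounded
    to `step`-sized increments so that tiny movements cause no change."""
    clamped = max(-limit, min(limit, gyro_y))
    return step * round(clamped / step)

print(pitch_shift_hz(237.0))  # a moderate rotation shifts by 200 Hz
print(pitch_shift_hz(-30.0))  # a tiny rotation is rounded away to 0 Hz
```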

The next sound parameter experimented with was reverb, mapped to the gyroscope square root sum value. Neither Isabell nor Jakob felt that the manipulation of reverb was rewarding, and it did not alter Isabell’s dance experience. Reverb was therefore not used for the final prototype, and neither was volume control. Isabell commented: “We get reverb all the time. [...] It doesn’t feel like I’m affecting the sound.” And: “[...] becomes a powerful soundscape but it was difficult to hear that I’m controlling anything”. Jakob mentioned: “Reverb doesn’t stand out that much, instead it smears together the soundscape”.

Distortion of the sound was thereafter explored and mapped to the sensors, making use of the “clip~” object in Max/MSP. The mapping of this sound parameter was tested but not evaluated further due to disturbing glitches in the sound when the values were manipulated through movement.

Next, manipulation of the delay of the drum sound was experimented with, using the comb filter object “comb~” in Max/MSP mapped to the square root sum of the rotation data. The delay value was set to 400 ms. Isabell experienced that the delay was the most rewarding sound parameter to control, and that it altered her experience a lot. She explicitly expressed that she got very emotional when controlling the delay: “I got a strong and powerful feeling. This is the test where I got most emotionally involved.”, motivated by: “When some sounds sound dark, like when the world is going under in movies – I like that. This sound effect reminded me of that.”.
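As an illustration of what a feedback comb filter such as comb~ does to the drum signal, here is a minimal offline sketch; the feedback and mix values are illustrative assumptions, and only the 400 ms delay time is taken from the text:

```python
def comb_filter(signal, delay_samples, feedback=0.5, mix=0.5):
    """Feedback comb filter in the spirit of Max/MSP's comb~: each output
    sample mixes the dry input with a delayed copy that is fed back into
    the delay line, producing a decaying train of echoes."""
    buf = [0.0] * delay_samples  # circular delay line
    out = []
    for i, x in enumerate(signal):
        y = x + feedback * buf[i % delay_samples]
        buf[i % delay_samples] = y
        out.append((1.0 - mix) * x + mix * y)
    return out

# a 400 ms delay at a 44.1 kHz sample rate is a 17640-sample delay line
DELAY_SAMPLES = int(0.400 * 44100)
```

Feeding an impulse through the filter shows the characteristic echoes spaced `delay_samples` apart, each attenuated by the feedback factor.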

At this point, all the collaborators were satisfied with the sound parameters tested, and no more sound parameters were developed or tested. From the verbal feedback from Isabell, two sound parameters were chosen for the final prototype: pitch manipulation between -500 and +500 Hz, and delay. Different placements and numbers of sensors were also examined. As a result of Thelma’s study [34], four NGIMU sensors were used in the final prototype, one on each wrist and one on each ankle of Isabell. Each of the sound parameters (pitch and delay) was mapped to two of the four sensors, arranged diagonally. In the final prototype, each NGIMU sensor was mapped to only one drum (see Equipment), i.e. to only one SP sensor each. This design choice was made because both Isabell and Jakob gave positive feedback when each movement sensor was mapped to only one drum; Isabell expressed that the sound, and therefore also her experience, felt messy when each sensor was mapped to all the drums in the drum set. This meant that each body part with a sensor manipulated only one of the drums. The sound control therefore had no effect if Jakob was not playing the specific drum mapped to the specific body part and sensor moved at that specific time. The final prototype setup was therefore:

• Pitch manipulation was mapped to the sensor on the left wrist (mapped to drum 2) and to the sensor on the right ankle (drum 4)

• Delay manipulation was mapped to the sensor on the right wrist (drum 1) and to the sensor on the left ankle (drum 3)
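The routing in the two bullet points can be summarized as a small lookup table, which also makes explicit the limitation that a sensor has no audible effect unless its mapped drum is being played; the names below are illustrative, not taken from the actual patch:

```python
# Final prototype routing: each NGIMU sensor controls one sound parameter
# on exactly one drum/SP sensor, assigned diagonally across the body.
SENSOR_MAP = {
    "left_wrist":  {"parameter": "pitch", "drum": 2},
    "right_ankle": {"parameter": "pitch", "drum": 4},
    "right_wrist": {"parameter": "delay", "drum": 1},
    "left_ankle":  {"parameter": "delay", "drum": 3},
}

def active_parameter(sensor, drum_being_played):
    """Return the parameter this sensor manipulates, or None when the drum
    it is mapped to is not currently played (so movement has no effect)."""
    entry = SENSOR_MAP[sensor]
    return entry["parameter"] if entry["drum"] == drum_being_played else None
```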

Isabell expressed that her experience of the final prototype and mapping was altered positively and described it as: “I really felt that I owned the sound. It felt like layers in the sound. [...] The body became an instrument itself. That was very obvious.”. It was through Isabell’s positive feedback that each sensor was mapped to only one drum, but she also explicitly commented on a problem with that approach: “The problem is that if I move my arms but Jakob is not playing on those drums, then the sound effect disappears from that body part”. This final version of the prototype (see Appendix C) was the prototype evaluated in the evaluation study. Isabell was video recorded while testing the final prototype.6

Lastly, the conduct of the evaluation study was designed together with all collaborators, primarily shaped by Isabell’s preferences. Isabell expressed that the participants should not be given too much information about the mappings of the tool, given the short time they would have to test it. She explained that they might focus more on the sensors than on their experience of the dance if given detailed information about the mappings. Therefore, it was decided to give the participants only some information about how the prototype worked and about the mapping before the third dance session. Each participant was told that the different sensors worn controlled either delay or pitch through rotation values, and that each sensor was mapped to only one drum. The participants were not told which specific sound parameter or drum each sensor was mapped to.

Evaluation Study

6 Video of Isabell testing the final prototype: https://vimeo.com/414692921

The results presented below were gathered in the evaluation study through the questionnaire and interview conducted with each of the seven participants. This section evaluates how the sound parameter mappings of the final prototype, created for and together with Isabell, were experienced during improvised dance by a larger group of dancers who had not been part of the design process. Whereas the development study answered the research question in an individual context, this part investigated how far this answer generalizes to other dancers. The sound parameters are referred to as “sound effects” in the questionnaire and interview so that participants could easily understand and misunderstandings were avoided.

General Experience of the Prototype

When asked during the interview about their general experience of dancing with the sensors, all participants expressed excitement and that the overall experience was fun, such as: “It felt very fun, something very exciting” (P4). Along with this positive feeling, all participants also expressed feelings of confusion while using the prototype in the third and last session. Participant P3 followed up by explaining: “I liked a lot to dance with the sensors, but I got stuck in trying to understand, I wanted to understand more, it would had been cool to really know the system first”. Another participant (P5) explained it as “Exciting to explore dance with sensors. But sometimes I didn’t know what was me and what was Jakob.” while P7 explained it as “Very fun to have control, even though did not understand.”.

When asked how much in control they felt during the third session, a slight majority of four participants expressed in the questionnaire that they felt little sense of control, while the remaining felt “barely” in control (two participants) or “neither nor” (one participant). The participants disagreed about which sound parameters they felt in control of. In the questionnaire, a slight majority of three rated delay as the sound parameter they felt in control of, while two felt in control of pitch and the remaining two of both sound parameters. During the interview, P7 described her experience as “I didn’t understand what gave me the control nor when. Maybe if we would have done it even one more time.” and explained that she believed more time was needed to gain a greater sense of control. P6 declared that “It was difficult to understand exactly where and how I had control”.

Change in Experience

Figure 8: Result of the change in experience.

Figure 8 shows the results from the questionnaire when asked how the experience changed between the first and last sessions. A majority of five participants expressed that the experience changed positively, and one of these five felt that the experience changed to “much better”. One of the participants who had rated the experience as “worse” (P1) motivated this by “I experienced myself being more in the present and in contact with the music the first time, and more tentative the last session. But it was interesting.”, while the participant rating the experience as “much better” (P4) motivated it by “I was more awake in body and mind. I felt like there were strong effects.”. When discussing during the interviews what the difference in their experience between the sessions was, all participants stated that they noticed that they affected the sound, even though none really understood how or when. P7 described that “It was clear that something was happening, but it was difficult to pinpoint what movements. [...] I heard difference but did not know what did it” and P1 expressed “I felt that I affected the sound but I was not so aware of how, it was difficult to distinguish”. One participant (P6) commented that she actually got a sense of control: “I got a sense of control in the third session. It became like a dialogue with the music instead of just follow after. With the sensors I got some of the control the musician often has.”.

P1, P3 and P7 mentioned that their sense of co-play changed. P3 expressed “I got more into the co-play in first session, you could say that I “found” the co-play. The last session was more of a searching of the co-play.” while P7 motivated: “The co-play changed. I listened more to Jakob. I was more responsive and receptive”. Further, two participants mentioned that the terms changed and became unequal between the dancer and Jakob, explaining that Jakob had the advantage of knowing the system: “I want to break down the technology and understand [...] it has to do with the co-play, so that both me and Jakob have equal terms and the same understanding of the prototype” (P3). Four participants explicitly expressed that their movement and awareness of the body changed in the third session. P6 was positive about this experience and motivated it by: “I used the entire body. I explored more.”. P3 stated “I felt more aware of my movements, the form and what effect the movement had” and P7 “I tried to play more in the third session and to do movements that would make a difference”.

Positive or Negative Experience

The participants were asked Was your experience of the dance positively or negatively influenced by controlling the sound?, where six participants expressed in the interview that their experience was generally positive, and one (P1) expressed neutrality: “It wasn’t negative, it just became more neutral”. All again expressed that they felt confused and had a split feeling about the third session. All participants elaborated that they wanted to explore the prototype more. P7 stated that it was “Positive, absolutely. Difficult to know, but not in a negative way. I did just not know which movements did the sounds.” and P2 explained that she had felt both positivity and frustration at the same time: “The experience was positive, but I felt frustrated. I first felt “wow”, but when I started to understand a little, I got very frustrated. I wanted to understand and really feel in control of what I do. I would have needed more time. It’s like learning a new instrument.”. Another participant (P5) also mentioned that the lacking sense of control was frustrating for her. But at the same time, she also questioned the need to feel in control: “It was difficult to sense control. But is it important to sense control? You need to let go of control to do these kinds of things”.

One participant (P3) commented that she had a positive but less relaxed experience: “It was a positive experience, and super cool to get an elevation of yourself in movement, but I felt more relaxed when not using them. [...] It was very satisfying when I felt that I did a specific sound”. P6 said that it was a positive experience but that she still would prefer to dance without the sensors.

Figure 9: Rating of overall experience to control sound.

Further, four of the participants expressed in the questionnaire that they felt neutral to the experience of controlling the sound, while two felt that it was a positive experience and one felt that it was a very positive experience (see Figure 9).

Figure 10: Result of participants rating satisfaction when controlling delay and pitch respectively.

As can be seen in Figure 10, a great majority of the participants expressed satisfaction in the questionnaire when controlling the sound parameters. Five out of seven felt that it was little or very satisfying to control the delay, while six out of seven participants felt the same when controlling the pitch.

The participants expressed that they did not fully understand the system, but that they still sensed the manipulations of the sound parameters and therefore felt satisfaction when controlling the sound.

Specific Experience of Sound Parameters

Figure 11 further shows that all participants agreed in the questionnaire that their overall experience of both the delay and the pitch was positive despite feelings of confusion. P5 motivated this with “It was exciting to explore” while P4 motivated it with “It was lovely to be able to influence the soundscape”.

Figure 11: Result of participants experience of delay and pitch respectively.

When asked in the questionnaire about the experience of controlling the specific sound parameters, the answers differed. Figure 12 shows the results regarding this experience.

Figure 12: Rating of the experience of controlling delay and pitch respectively.

In the discussions during the interviews, there was no consensus about which sound parameter the participants liked the most. Some mentioned that delay was more satisfying, such as P3 expressing that “It was more rewarding with delay, pitch was more difficult”, while others understood the pitch better: “I liked the sound effects. I noticed that I changed pitch. Delay was more difficult to know.” (P5). When asked what other sound parameters they thought would give them a positive experience, controlling the volume, manipulating the sound to get bass sounds, and being able to start and stop the music were brought up. P2, P3 and P4 further expressed that they would have wanted to create completely their own music through their movements: “It would have been more satisfying if I was a synth or instrument myself” (P2).

DISCUSSION

This project has explored the real-time interaction between a dancer and a live musician where both parties co-created and controlled the music produced. The aim of this study was to discover which sound parameters of a drummer’s live digital sound a dancer should be in control of to alter the dancer’s experience positively. A prototype was developed together with, and from the input of, the professional dancer and choreographer Isabell. This process took the approach of creating the prototype through research through practice and from a first-person felt perspective. The final prototype was the result of the initial development study together with Isabell: a tool with which a dancer could manipulate the sound parameters of pitch and delay of the sound created by Jakob, through rotational movements of both arms and legs captured by the wearable motion sensors in real-time. The second part of this study was to evaluate this tool individually with a larger group of dancers who had not been part of the design process. The results from the second study indicated that the dance experience was positively altered for most participants, in comparison with not using the tool.

Experience of Dancers

The experience of the dancers in the two studies was distinctly different. The participants in the evaluative study implied that their experience was mostly positive and fun but at the same time confusing and difficult to understand, and that it evoked frustration, declaring a lacking sense of control. In contrast, Isabell clearly expressed that the prototype solely positively altered her experience and gave her a positive sense of control. The obvious difference between Isabell and the participants was that Isabell had been an important part of and collaborator in the design process, as the prototype was built from her feedback. She had tested the prototype many times over a long period of time and had therefore been able to incorporate the system. In contrast, the participants got to use the tool for two 5-minute sessions while the system was completely new to them. In other words, the main difference was that the tool had not been designed with or for them personally. The participants expressed this themselves, describing that they would have wanted and needed more time to test the tool. This result could imply that it is difficult to design a general tool of this level of complexity for a general user. This seems reasonable, as nobody can learn to play a complex instrument, for example the violin, in only 10 minutes.

The tool was very specific and complex as a consequence of the distinct mappings between all the different kinds of sensors, data, sounds and drums. The design choice of mapping each motion sensor to only one drum and SP sensor came with both positive and negative characteristics. It was positive for Isabell, as she knew the prototype and felt satisfaction in not controlling the entire soundscape when moving but only one part of the entire sound. It was evident that much of the participants’ confusion came out of this design choice, due to the fact that the sound responded to their movement only if the specific drum mapped to that specific body part and sensor was played at that specific time. This might not give a coherent and logical sensation of responsiveness. However, this should not be stamped as simply negative, as Isabell expressed that this design choice added a complexity that she valued. Again, this is further evidence that the experience depends on the user’s knowledge and integration of the tool. This should be taken into consideration in the design process when crafting for interactive real-time practices. Therefore, when designing complex artistic systems and tools, it can be considerably beneficial to design the tool together with the user, letting the user be part of the process as Isabell was during the development study. If this is not possible, it can instead be favorable to construct the tool and its mappings to be less complex and specific, e.g. like a piano, where one can press a key and easily get a nice sound. As stated, participants lacked a sense of control due to the confusion. This result implies that not feeling completely in control can make the dance experience less positive than if the dancer sensed full control.
Letting users be a collaborative part of the design process may enhance the sense of control in the interaction and thereby alter experience more positively, irrespective of the motion to sound mapping.

Regardless of this outcome, an interesting finding from the evaluation study concerning control was that the majority of the participants found it positive to manipulate the sound despite feelings of frustration and lack of control. One might believe that a lacking sense of control, and the use of a tool that is not completely understood, would not have had the positive impact on the dance experience that was observed in this study. It was further interesting that one participant (P2) expressed that she felt “It’s like learning a new instrument”. I believe this comment pinpoints the sense of lack of control and frustration many feel when learning a new instrument, especially a complex acoustic instrument such as the violin. Further, this positive experience despite the lack of control might come from the dancers’ experience of trying out something new and therefore exciting. None of the dancers had experienced interactive dance with technology before. I believe that the sense of novelty could have contributed to experiencing satisfaction in spite of not sensing control. In addition, one participant (P7) expressed that the interaction became a dialogue between herself as a dancer and the musician, which was likewise stated by Birringer in [5] and might be another reason for feeling satisfaction. These findings could imply that enjoyment of an instrument does not have to be strictly correlated with the user’s sense of control.

Pitch and delay were the sound parameters that altered Isabell’s experience most positively, while volume and reverb were not satisfying for her. The participants evaluating the tool did not feel the same satisfaction as Isabell while manipulating the sound parameters of pitch and delay. As mentioned above, and explicitly by the participants, a reason could be the short amount of time each participant had when thrown into trying out the prototype. The time available for testing was significant for the users. More time could have helped the participants to better incorporate their bodily interaction with the system and to more easily detect and notice the sound parameter manipulation. These are tensions similar to the ones mentioned in [15] that one comes across when integrating technology into art.

A reason for Isabell and the participants having a positive experience when using the prototype and controlling the delay and pitch could be that their bodily expressions were augmented by the sonification of their movements. This created in some of the participants a sense of an extended version of the dancer, which in itself might be experienced as thrilling. Further, as implied by the results, the use of the tool appeared to foster a new awareness and sensitivity in the dancers. This shows that the adoption of technology enables dance to take new forms.

Critical Discussion

An obvious result from the evaluation study was that the dancing sessions performed during this study should have been much longer. A suggestion is that the length should have been at least doubled, to give participants more time to practice and get to know the mapping environment. Additionally, a larger group of participants would have been favorable and given more accurate results, as the seven recruited for this study were quite few. Also, all participants were young women, which is not truly representative of what all dancers might think of the sound parameters mapped in this tool.

Mapping a different soundscape to SP could also have changed the outcome of this paper. The sound design by Jakob did not use classical drum sounds, as SP was used and other sounds were chosen and mapped to the different drums. Further, though several different sound parameters were tested for manipulation through movement, there are still countless other sound parameters to explore. This tool’s mapping setup was very detailed and specific, and we could have tested some more sound parameters, as the question of this paper could really be stated as: which of the sound parameters tested altered the dance experience most positively? Lastly, and as mentioned previously, the design choice from the development study of mapping one motion sensor to only one drum and SP sensor was evidently difficult for the participants. Even though this was a result of the development with Isabell, another mapping strategy for the connection between the sensors could have been developed to more easily address this paper’s aim: to investigate what sound parameters are meaningful and satisfying for dancers to manipulate through their movement. The fact that the participants had to spend time being confused about how the entire system worked took time from experiencing the delay and pitch manipulation.

Future Research

I believe this tool has great potential and could be developed to further explore how the co-play between a dancer and a musician in a completely live setting is altered through integration of technology. This was specified as the novelty of this paper and could favorably be explored further.

There are many possible developments. In this study, some of the control has been taken away from the musician, who is used to sensing control over the music, and given to the dancer. This unique setting could have been explored further by making the project larger and testing the prototype on a much bigger group of dancers. Most importantly, the prototype needs to be experimented with for a longer time. Also, many different tools could be developed, one for each participant, as in the development study of this paper. There are endless possibilities regarding the mapping and sound filtering design between movement and sound, also since the sound design can be made differently to alter specific sound parameters. As stated earlier in this paper [6, 9, 20], the mapping of movement to sound is very important. Several participants mentioned that it would be interesting to experiment with the tool in groups or ensembles of dancers. Furthermore, participants also suggested mapping the sensors to create sound, so that the dancers would become instruments of their own. This would, though, make the project lose the aim of studying the complex co-play between dancer and musician when the dancer manipulates the music produced live. A suggestion would therefore be to instead let the dancer both manipulate the live music and create their own sounds. Lastly, the audience’s perception when using this kind of tool could be studied and explored.

All the collaborators of SENSITIV intend to develop this project further together. The aim is to further investigate the interaction and co-play between a dancer and a musician in this novel collaborative setting, with the ambition to create a final interactive dance performance.

CONCLUSION

This thesis explored which sound parameters should be mapped in an interactive dance tool to alter a dancer’s experience positively, where the dancer manipulates a drummer’s live music by controlling these sound parameters through movement in real-time. Motion tracking sensors were used for this purpose. This was explored during the SENSITIV project in two studies: first a development study to develop a prototype together with a professional dancer, and thereafter an evaluation study where the tool developed was evaluated by a larger group of dancers. Of the sound parameters explored in this study, manipulation of pitch and delay was found to alter the dancer’s experience most positively. It was further found that the experience differed depending on whether the dancer had been part of the development and design process of the tool or not. When built together with the user, from a first-person perspective, the experience was found to be clearly altered positively. Further, the time spent evaluating the tool by the dancers who had not been part of the development process needed to be long enough for them to understand the system before specifically evaluating the sound filtering parameters.

ACKNOWLEDGEMENTS

I would like to thank this project's creative collaborators Isabell Hertzberg and Jakob Klang: Isabell for giving her time, her space, and her full heart to the development of the tool, and Jakob for being an invaluable artistic collaborator throughout the entire project.

I also wish to thank my supervisor André Holzapfel for his great support and guidance through the entire process. Most of all, I wish to thank Thelma Svenns for being the greatest collaborator and co-developer I could have wished for.

REFERENCES

[1] Ryan Aylward and Joseph A. Paradiso. 2006. Sensemble: a wireless, compact, multi-user sensor system for interactive dance. In Proceedings of the 2006 conference on New interfaces for musical expression, 134-139.

[2] Andreas Bergsland, Sigurd Saue, and Pekka Stokke. 2019.
