
A Dichotic Test of Pitch Induced Lexical and Emotional Language Prosody



Erik Witte

Örebro University, School of Health and Medical Sciences, Audiology

Audiology, Advanced Course, Degree Project, 15 credit points, Spring term 2013

Abstract: This study investigated the possibility of creating a dichotic listening test for the perception of pitch induced language prosody. It further examined the effect of the choice of response hand in dichotic listening, as well as exploring possible reasons for the origin of dichotic ear advantages. The fundamental frequency contours of a set of recordings were digitally morphed in order to create fully synchronised dichotic stimuli differing in lexical pitch accent and emotional quality. Test results of 25 normal hearing adult mother tongue speakers of Swedish were analysed. The analysis indicated that the perception of Swedish lexical pitch accent generates a right ear advantage in Swedish subjects, and that the perception of pitch induced emotional qualities generates a left ear advantage. The analysis also indicated that the choice of response hand has no effect on the ear advantages in dichotic listening. Furthermore, the analysis indicated that it is the nature of the dichotic task (lexical or emotional) that determines the direction of the ear advantages attained in the dichotic listening tests of this study. This finding lends support to the attentional models of auditory asymmetry. Due to largely varying laterality configurations for the lexical and the emotional tasks among the different participants, as well as marked stimulus dominance in the accent tests, the current dichotic test was considered not to be of any clinical value.

Key terms: Dichotic listening tests, pitch perception, pitch accent, emotion perception, language prosody, attention, response hand.


Abbreviations

A1 Swedish pitch accent 1
A2 Swedish pitch accent 2
AT Accent test
BOR Bored emotional quality
(C)APD (Central) auditory processing disorders
CB Bilateral score, percent correct for both ears
CL Left ear score (in percent correct)
CR Right ear score (in percent correct)
CV Consonant-vowel
DA Divided attention
EA Ear advantage
ENT Enthusiastic emotional quality
ET Emotion test
FA Focused attention
LEA Left ear advantage
LI Lateralisation index
LII Lateralisation index for intrusions
LIPOC Lateralisation index of the percent-of-correct type
LIPOE Lateralisation index of the percent-of-error type
LIRT Lateralisation index for response time
NEA No ear advantage
NEU Neutral emotional quality
REA Right ear advantage
RHL Response hand left
RHR Response hand right
TF Test word carries clausal focus
TNF Test word does not carry clausal focus


Contents

1. Introduction
1.1 Dichotic listening tests
1.2 Language prosody and dichotic tests
1.3 Dichotic testing and (C)APD
1.4 A Swedish prosody dichotic listening test
1.5 Complicating factors
1.6 Purpose
1.7 Research questions
2. Method
2.1 Stimuli
2.2 The computer software
2.3 Choosing the measure of laterality
2.4 Data aggregation
2.5 Statistical analyses
2.6 Procedure
2.7 Apparatus
2.8 Participants
2.9 Ethical considerations
3. Results
3.1 Individual laterality configurations
3.2 Group level ear advantages
3.3 Comparisons between focused and divided attention
3.4 Comparisons of the ear advantages generated by the four different Swedish pitch accent contours
3.5 Comparisons of the ear advantages generated by the three different emotional pitch contours
3.6 The effects of the choice of response hands on ear advantages
3.7 Comparisons of the ear advantages generated by similar stimuli in the two different test domains
4. Discussion
4.1 Ear advantages for pitch accents and emotional pitch
4.2 The different test modes
4.3 The effects of response hand
4.4 Task dependent ear advantages
4.5 Stimulus dominance
4.6 Clinical use
5. Conclusions
Acknowledgements
References
Appendix 1 - Phonetic descriptions of the stimuli


1. Introduction

1.1 Dichotic listening tests

The central subject of this study is dichotic listening. In dichotic listening tests, different competing stimuli are presented simultaneously, one to each ear.

The concept of dichotic listening tests was introduced by Donald Broadbent in the early 1950s, with the purpose of studying how people deal with competing signals. His studies resulted in a model in which the central auditory system is capable of splitting sounds from the different ears into separate channels and attending selectively to only one of them. As proof of the human ability to choose selectively which stimuli to attend to in a purely mental way, his discovery had large implications for the behaviourist black box theory of the human mind, and contributed significantly to the cognitive revolution (Bryden, 1988).

Dichotic testing, however, was not popularised until Doreen Kimura's discovery in 1961 of the asymmetry of verbal perception in dichotic listening tests. This discovery is considered among the most important neuropsychological findings ever made (Hiscock & Kinsbourne, 2011). What she discovered was that verbal stimuli were reported more correctly from the right ear than from the left. This better performance of the right ear for verbal stimuli has since been referred to as the right ear advantage (REA). Kimura subsequently created a model in which the origin of the REA was explained by the structure of the auditory neural anatomy. In this model, the REA emerges due to the more direct neural pathways from the right ear to the left cortical hemisphere, in which verbal processing is assumed to take place (Hiscock & Kinsbourne, 2011).

Since these early days of dichotic listening tests, two different types of models have emerged within the field, explaining the origin of ear advantages (EA) rather differently. The structural models build on the thoughts of Kimura and attribute the origin of the EAs attained in dichotic listening either to structural differences in the ascending auditory neural pathways, to the efficiency of the inter-hemispheric connections of the corpus callosum, to a naturally superior sensitivity in the right cochlea, or to acoustic factors present in the stimuli (Hiscock & Kinsbourne, 2011).

Quite contrary to focusing on the auditory pathways, the attentional models build on the tendency of the brain to direct attention to the hemisphere contralateral to any sensory input. Since most sensory neural pathways are crossed, a sensory input from the right hand will arrive in the left cortical hemisphere, whereby that hemisphere is activated. There need, however, not be any actual sensory input in order to activate different regions of the brain; merely expecting a sensory input will have the same result. Therefore, a subject's expectancy of a sensory input from one side will automatically activate the contralateral cortical hemisphere. In a similar fashion, expectancy of a verbal stimulus will always activate the left hemisphere, since this is where most linguistic processing takes place. This lateralised activation of the brain biases attention towards the stimuli arriving from the contralateral side, creating a REA for verbal stimuli and a left ear advantage (LEA) for other stimuli (Kinsbourne, 1970).


Possibly, both attention and the neural structure of the auditory pathways play a role in creating the ear advantages. Several such two-component models have been proposed (Hiscock & Kinsbourne, 2011).

Since the neural pathways of the auditory system are both ipsilateral and contralateral, a mechanism of inhibition of the ipsilateral pathways is likely to be active during dichotic listening, ensuring that the dichotic stimuli arrive primarily at the contralateral cortical hemisphere (Brancucci et al., 2004).

As of today, many different dichotic tests have been developed for different purposes. Commonly the test stimuli comprise consonant-vowel (CV) patterns, digits, words, and sentences (Keith & Anderson, 2007), but other stimuli such as musical tones, musical chords, and environmental sounds have also been used (Bryden, 1988).

One especially large collection of dichotic test results is housed by Kenneth Hugdahl and co-workers at the University of Bergen. This so-called Bergen test has been developed in several different languages, including Swedish, and consists of dichotic CV-patterns (Hugdahl, 2002). In dichotic listening tests, a substantial number of different test modes can be used. Among these, two deserve to be mentioned. In the focused attention mode (FA), the subject is asked to direct attention to only one ear at a time and report what is heard there. In the divided attention mode (DA), the subject is asked to report what is heard in both ears. Dichotic tests are often administered in several different test modes. Normally, ear advantages are larger in FA modes than in DA modes (Bryden et al., 1983). The overall scores, however, should normally be equal between DA and FA test modes. Superior scores in the DA mode, compared to the FA mode, are commonly interpreted as indications of some kind of cognitive dysfunction (Jerger & Martin, 2006).

1.2 Language prosody and dichotic tests

Complementary to the segmental features of human language, there are also supra-segmental features, often referred to as language prosody. Prosodic features are constantly present in spoken language and convey several different types of information, which may be of much importance for the understanding and interpretation of any utterance. The information that can be carried by language prosody ranges from clues to the identity of the speaker (individual voice qualities), to affective, grammatical, pragmatic, or even lexical information (Peppé, 2009).

Language prosody is mainly conveyed by variations in the intensity, duration, and pitch of an utterance (Peppé, 2009). The use of pitch as a feature of prosody differs between languages, especially as only some languages use different pitch contours to contrast between different lexemes. These languages can be described as either tonal languages (e.g. Mandarin and Thai) or languages with lexically contrastive tone accent (e.g. Swedish, Norwegian, and Japanese) (Bruce, 2012).


Several different prosodic features have been studied in dichotic listening tests, including emotion perception, lexical tone (e.g. in Thai and Mandarin), and lexical pitch accent (e.g. in Norwegian and Japanese). A challenge in interpreting the results of such dichotic tests is some uncertainty as to where different types of pitch are normally processed in the brain. Most researchers agree that pitch extraction takes place in some cortical pitch extraction centre. The issue, however, concerns firstly whether this centre is lateralised to the temporal lobe of one hemisphere only, whether there are bilateral pitch extraction centres, or possibly even multiple pitch extraction centres in several different locations in the brain; and secondly, if one assumes that bilateral pitch centres exist, to what extent these differ functionally and why (Wong, 2002).

Several studies have indicated left hemisphere dominance for lexical tone and pitch accent (e.g. Van Lancker & Fromkin, 1973; Moen, 1993; Wang et al., 2001, 2004), and several others have indicated right hemisphere dominance for emotional stimuli (e.g. Voyer et al., 2009; Rodway & Schepman, 2007; Brådvik et al., 1991). Hence it could be assumed that the cortical processing of linguistic stimuli is dominantly located in the left hemisphere and the processing of emotional stimuli in the right. There are, however, a number of studies which have failed to confirm such a hemispheric dominance pattern (Brådvik et al., 1990; Wu et al., 2012; Baudoin-Chail, 1986; Wong, 2002).

1.3 Dichotic testing and (C)APD

Dichotic speech tests are capable of assessing the function of several important components of central auditory processing, including the auditory pathways and nuclei of the brainstem, the auditory cortex, and the interhemispheric connections of the corpus callosum (Keith & Anderson, 2007). In particular, an impaired ability of binaural separation, which is needed for instance to extract speech from background noise, can be indicated by the use of dichotic listening tests (Johnson Martin et al., 2013). As such, dichotic listening tests have gained a place among other tests of central auditory function in the test battery recommendations issued for the diagnosis of (central) auditory processing disorders ((C)APD) by the American Speech-Language-Hearing Association (ASHA, 2005).

Abnormal test results from a dichotic speech test are defined by various deviations from a normal REA (Keith and Anderson, 2007). However, since REAs can only be assumed to be generated by lexical stimuli, these diagnostic indications would not apply to other kinds of dichotic verbal stimuli expressing, for instance, emotional quality.

1.4 A Swedish prosody dichotic listening test

To the knowledge of the author, no dichotic test for the perception of prosodic features has previously been developed in the Swedish language. Apart from the Bergen test referred to above, which only uses simple CV-patterns, there is only one other Swedish dichotic speech test available (Hällgren et al., 1998). The Hällgren test uses a set of phonemically contrasting CV-syllables, a set of one-syllable digits, a set of spondees, and a set of sentences as stimuli. For digits and CV-patterns, both divided attention and focused attention test modes are used. The test has been seen to generate a significant REA for the sentences and the CV-syllables, with the CV-syllables showing the largest REA (Hällgren et al., 1998).

The selection of stimuli used in the Hällgren et al. (1998) test includes quite a variety of Swedish speech sounds. There are, however, no prosodic features included in the test. A dichotic test utilising a set of important prosodic features in the Swedish language as stimuli could therefore be a possible complement to the existing test in the diagnosis of people with (C)APD. A potentially useful method would be to manipulate the fundamental frequency contours of recorded sentences in order to create stimuli that differ in Swedish pitch accent on the one hand, and in emotional quality on the other. This study is a first step towards developing such a dichotic listening test.

1.5 Complicating factors

In developing such a test, a number of factors need to be taken into consideration. Firstly, the ear advantages generated in dichotic tests have been seen to be affected by many different factors, such as gender (Voyer & Rodgers, 2002), age (Jerger et al., 1994), the amount of musical training (Burton et al., 1989), the mother tongue of the listener (Wang et al., 2001), menstrual stage (Sander & Wenmoth, 1998), and many more. All of these would be interesting to study, but are beyond the scope of this study.

As has already been mentioned, most evidence points to a left hemisphere dominance in lexical processing. However, a few studies have indicated that the right hemisphere could also be responsible for at least some lexical processing. In the Grimshaw et al. (2003) study, evidence was found for a right hemisphere ability for lexical processing which appeared when the stimuli used were emotionally loaded. The authors proposed that this right hemispheric lexical processing occurs because the emotional component in the stimuli activates the right hemisphere, which hence becomes the dominant hemisphere for the processing of the stimuli. In a study of Japanese lexical pitch accent, Wu et al. (2012) found that different pitch accents were lateralised to different hemispheres. This finding, too, could lend some support to a right hemisphere ability for lexical processing, even if the authors themselves attribute it mainly to differing acoustic properties of the stimuli (Wu et al., 2012).

Concerning the hemispheric dominance for the processing of emotions, Reuter-Lorenz and Davidson (1981) found a valence-dependent difference, whereby positive emotions tend to be processed in the left hemisphere and negative emotions in the right hemisphere. Later investigations, however, have failed to replicate this result in a dichotic listening test (Rodway & Schepman, 2007).

In the light of these results, it would be interesting to investigate whether the different Swedish pitch accent fundamental frequency contours, as well as different types of negative, neutral and positive emotional pitch, generate ear advantages in the same or in different directions.


Also of interest in connection with this study is the question whether the choice of response hand in a manual response mode affects the ear advantages attained. Since many (probably most) dichotic listening tests use a verbal response mode, there is no way to effectively control the amount of bias on the ear advantages that is due to cerebral asymmetries in speech production rather than, as intended in dichotic tests, asymmetries in speech perception. When reporting a Swedish pitch accent word or an emotional quality, it is probably most appropriate to use a manual response mode, in which the subject responds by pointing at different images or signs. By using a manual response mode, it should be possible to investigate any effect on the ear advantage that derives from the production/motor side, rather than from auditory perception, simply by alternating response hands. In a study of emotional and lexical perception, Grimshaw et al. (2003) found that alternating response hands affected ear advantages for response times but not ear advantages for accuracy. However, as Grimshaw et al. (2003) used ordinary voice recordings, in which the speech segments may naturally differ in duration between the different test stimuli used, it would be interesting to see whether this ear advantage could also be attained using fully synchronised stimuli which still differ in emotional and lexical meaning. As mentioned above, such stimuli could be created by altering the fundamental frequency contours of Swedish sentences.

And finally, given that Swedish pitch accent generates a REA and emotional pitch generates a LEA, by using a neutral voice for the pitch accent discrimination task as well as including a neutral voice in the emotion perception test, it would be possible to shed some light on the role of attention versus the role of auditory neural structure in the origin of the ear advantages. If the neutral voice generates a similar ear advantage when the task is to identify a lexical pitch accent word as when the task is to identify an emotion, then it is probable that the ear advantage attained originates from structural asymmetries in the auditory neural pathways. Such a result would favour the structural model described above. If however, different ear advantages are generated by similar stimuli in the different tasks, then the ear advantages are likely to arise from the task at hand, favouring the attention model.


1.6 Purpose

The purposes of this study were firstly to develop a dichotic listening test for the perception of different fundamental frequency contours commonly used in the Swedish language, secondly to evaluate the test on a group of normal hearing adults by investigating the test results it generated, thirdly to examine the effect of the choice of response hands on the test results, and fourthly to explore possible reasons behind the ear advantages attained in the dichotic listening tests.

1.7 Research questions

1. What ear advantages are generated by Swedish pitch accent and emotional pitch in normal hearing adult subjects?

2. Are the overall scores generated by Swedish pitch accent and emotional pitch in normal hearing adult subjects equal between a focused attention test mode and a divided attention test mode?

3. Do different Swedish pitch accent fundamental frequency contours generate ear advantages in different directions?

4. Do different emotional pitch fundamental frequency contours generate ear advantages in different directions?

5. Does the choice of response hand influence ear advantages generated by Swedish pitch accent and emotional pitch?


2. Method

In this section, the design and recording of the dichotic stimuli, the development of the testing software, the data analysis, the test procedures and technical equipment, the selection of participants, as well as some ethical considerations will be described.

2.1 Stimuli

There were two types of stimuli used in the dichotic test; one type signalling different pitch accents, and one signalling different emotions.

2.1.1 The pitch accent stimuli

All stimuli used in the dichotic test consisted of the carrier phrase "Peka på den där..." (in English: Point at that...) and a subsequent set of eight different test words. Each test word was a member of a minimal pair differing only in pitch accent. The different pitch accents are normally referred to as accent 1 (A1) and accent 2 (A2). At least around 150 such minimal pairs for pitch accent exist in Swedish.1 Out of these, four minimal pairs were selected to be included in the test. Additionally, one more minimal pair was selected to be included in a practise test.

The following criteria were used to select the test words. Firstly, all had to be easily illustrated using a picture. Secondly, in order to fit within the syntax of the carrier phrase, they had to be neuter gender nouns in a singular definite grammatical form. And lastly, the words with the least number of voiceless segments were selected. The reason why voiceless segments were avoided was that these naturally have no pitch.

In table 2.1, the five minimal pairs best fulfilling these criteria are listed. Of these words, the minimal pair <moppen> was the least suitable because of its long medial voiceless segment /pː/, and therefore it was selected for use in the practise test.

Accent 1   English translation      Accent 2   English translation
anden      the duck                 anden      the genie / the spirit
kullen     the litter (animals)     kullen     the hill
tanken     the tank                 tanken     the thought
tummen     the inch                 tummen     the thumb
moppen     the mop                  moppen     the moped

Table 2.1. The four sets of minimal pairs for pitch accent used for the test words, as well as the minimal pair <moppen> which was used in the practise tests.

Obviously, in a dichotic task of binaural separation, there need to be more than two alternative stimuli, lest it would be possible to deduce from the stimulus heard in one ear which stimulus was presented in the other. Such a possibility might be suitable in a dichotic test of binaural integration; in a test of the ability of binaural separation, however, the ears should not be allowed to cooperate to solve the task. Even though Swedish pitch accent has only a two-way lexical distinction, in several dialects each of these accents has two different fundamental frequency patterns depending on whether the word carries clausal focus or not (Bruce, 2012). Since this phenomenon provides four distinctly different fundamental frequency contours, four different sound files containing the two accents were created and used in the test. In those stimuli where the test word lacked clausal focus (TNF), the word <där> (in English: that) in the carrier phrase was focused instead. By randomly alternating between presenting the same accent bilaterally and presenting different accents bilaterally, never using the same focus position in both ears, it was made virtually impossible for the participant to deduce from the stimulus in one ear which stimulus was presented in the other. Table 2.2 lists the eight combinations of stimuli that were used bilaterally for each minimal pair to create the dichotic stimuli. Since there were four minimal pairs and eight different fundamental frequency contour combinations, a total of 32 different dichotic stimuli were created for use in the pitch accent test.

1 For a collection of Swedish minimal pairs for pitch accent see:

                         Right ear
Left ear     TNF-A1   TNF-A2   TF-A1   TF-A2
TNF-A1                           √       √
TNF-A2                           √       √
TF-A1          √        √
TF-A2          √        √

Table 2.2. Bilateral combinations of pitch accent fundamental frequency contours used in the accent test. TNF-A1 represents Accent 1 without clausal focus, TNF-A2 represents Accent 2 without clausal focus, TF-A1 represents Accent 1 carrying clausal focus, and TF-A2 represents Accent 2 carrying clausal focus.
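Purely as an illustration of the combinatorics described above (a hypothetical sketch, not the author's actual stimulus-generation procedure), pairing the four contours across the two ears while excluding pairings that share the same focus position yields exactly the eight combinations of table 2.2, and 32 dichotic accent stimuli in total:

```python
from itertools import product

contours = ["TNF-A1", "TNF-A2", "TF-A1", "TF-A2"]
minimal_pairs = ["anden", "kullen", "tanken", "tummen"]

def focus(contour: str) -> str:
    # "TF" = the test word carries clausal focus, "TNF" = it does not.
    return contour.split("-")[0]

# Keep only left/right pairings that never use the same focus position in both ears.
combinations = [(left, right)
                for left, right in product(contours, contours)
                if focus(left) != focus(right)]

assert len(combinations) == 8                         # eight contour combinations per minimal pair
assert len(combinations) * len(minimal_pairs) == 32   # 32 dichotic accent stimuli in total

for left, right in combinations:
    print(f"left ear: {left:7s}  right ear: {right}")
```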

2.1.2 The emotional pitch stimuli

The emotional fundamental frequency contours used in the dichotic test signalled neutral (NEU), enthusiastic (ENT), and bored (BOR) emotions. These too were signalled through differences in the fundamental frequency contours. For the emotional pitch stimuli, the same carrier phrase and test words as in the accent stimuli were used. However, the shifts in fundamental frequency were applied not only to the test words but also to the carrier phrases. The enthusiastic emotional pitch was signalled by large variations in fundamental frequency, as well as a slightly elevated average fundamental frequency. Bored was signalled by minor variations in fundamental frequency as well as a slightly lower average fundamental frequency. Neutral was in between these, both concerning variation and average of fundamental frequency (cf. Lindblad, 1995).

In the dichotic stimuli, all emotions were used bilaterally in all possible combinations. Consequently, there were also a number of diotic stimuli, in which the same emotional pitch was presented to both ears. With nine possible bilateral combinations of the three different emotional pitch contours and eight different test words, this would have rendered a total of 72 dichotic stimuli. To reduce the number of test trials, therefore, only one word from each minimal pair was used. Those words were the stimuli Anden (A1), Kullen (A2), Tanken (A1)2 and Tummen (A1). This reduction rendered a total of 36 different dichotic stimuli for use in the emotion test.

2.1.3 Regional variations in Swedish pitch accent

In the Swedish language, there are five different types of pitch accent, distributed roughly in a number of different geographical regions, illustrated in figure 2.1. The main difference between the different accents lies in the timing of the different fundamental frequency peaks. Furthermore, only two of these, 2A and 2B, take different forms depending on the location of the clausal focus (Bruce, 2012).

For the stimuli used in this study, accent type 2A has been selected. This could possibly make the test more difficult for speakers from other accent regions. However, there is probably rather extensive mutual intelligibility between all different Swedish dialects concerning pitch accent (Bruce, 2012).

2 Initially, another minimal pair with accent 2, namely the word <klaven>, was used instead of <tanken> with accent 1. Thus, there were two test words with each accent used in the emotion tests. However, during pilot testing, the minimal pair <klaven> proved to be too difficult in the accent test, probably since it is rather unusual in the Swedish language. It was therefore changed to the word <tanken>, which initially was planned to be used as a second minimal pair in the practise tests. However, since <tanken> occurred with accent 1 in the emotion practise tests, this change meant that there were three words with accent 1, and only one word with accent 2, included in the emotion tests. Probably, this has no large implications for the test results.

Figure 2.1. Approximate geographical distribution of different pitch accent types. Adapted from Bruce & Gårding (1978). 0 = no differentiation between accent 1 and accent 2.


2.1.4 Creation of the dichotic sound files

The sound files used in this study were recorded in a sound treated booth using a Neumann TLM 103 microphone and a TASCAM US-122MKII external sound card connected to a portable computer. The author used his own voice in the recordings.

In order to create fully synchronised dichotic sound files containing the different pitch accents and emotional pitch contours, only one recording was made for each minimal pair. The fundamental frequencies of these recordings were then morphed digitally, using the PitchTier function in the software PRAAT (Boersma & Weenink, 2013). As a guide to how these changes in fundamental frequency concerning the different pitch accent contours should be applied, separate recordings of the same carrier phrase and test words were used, as well as pitch contours found in the literature (Bruce, 2012; Gårding, 1977; Bruce & Gårding, 1978). The fundamental frequencies of the sound files were thus manually adjusted so as to sound natural.

Concerning the morphing of the emotional pitch contours, several different ways of modifying the fundamental frequency contours were tested and subjectively evaluated by the author. In the end, a set of formulas was used which held the frequency 99 Hz constant while changing all other fundamental frequencies in the stimuli proportionally to their distance from 99 Hz, so that all NEU stimuli would have a maximum frequency of 192 Hz, all ENT stimuli a maximum frequency of 288 Hz, and all BOR stimuli a maximum frequency of 128 Hz. The relations between the maximum frequencies of BOR and NEU, on the one hand, and of NEU and ENT, on the other, were musical fifths (2:3), and between BOR and ENT the relation was a musical ninth (4:9). These relations were chosen in an attempt to maximize the degree of inhibition of the ipsilateral auditory pathways, which is known to be significantly smaller for certain frequency relations (Sidtis, 1981), without making the different stimuli too difficult to distinguish.

The reason 99 Hz was held constant was that this seemed to be the natural pitch floor of the speaker, since all four original recordings ended at approximately this frequency.
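As a rough illustration of the scaling described above (a sketch only, under the assumption that the verbal description corresponds to a linear rescaling around the 99 Hz floor; the actual morphing was carried out manually with the PitchTier function in PRAAT), the following function maps a fundamental frequency value so that the contour maximum lands on the target maximum for each emotional quality:

```python
# Target maximum F0 per emotional quality, as described in the text.
TARGET_MAX_HZ = {"NEU": 192.0, "ENT": 288.0, "BOR": 128.0}
PITCH_FLOOR_HZ = 99.0  # held constant; the speaker's natural pitch floor

def rescale_f0(f0_hz: float, original_max_hz: float, emotion: str) -> float:
    """Scale an F0 value proportionally to its distance from the 99 Hz floor,
    so that original_max_hz is mapped onto the target maximum for the emotion."""
    scale = (TARGET_MAX_HZ[emotion] - PITCH_FLOOR_HZ) / (original_max_hz - PITCH_FLOOR_HZ)
    return PITCH_FLOOR_HZ + (f0_hz - PITCH_FLOOR_HZ) * scale

# Hypothetical example: a recording whose original maximum F0 was 210 Hz.
for emotion in ("BOR", "NEU", "ENT"):
    print(emotion,
          round(rescale_f0(210.0, 210.0, emotion), 1),   # the contour maximum itself
          round(rescale_f0(150.0, 210.0, emotion), 1))   # an intermediate F0 value
```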

Originally, the four different recordings had maximum fundamental frequencies of between 204 and 222 Hz. The emotional quality of the voice used in the original recordings is probably best described as neutral. Still, the author decided to shift their maximum frequencies somewhat downwards when these recordings were used for the NEU stimuli in the emotion test, the reason being that they were judged by the author to sound a little more neutral this way.3

After all fundamental frequency contours needed were plotted, the original sound recordings were resynthesised by the PitchTier function in PRAAT, producing all the different stimuli needed in the test as mono sound files.

3 Unfortunately, the author did not make an equivalent frequency shift for the sound files used in the accent tests. Therefore these stimuli are not exactly the same. The maximum frequencies for the stimuli used in the accent tests are between 12 and 30 Hz higher than for the same stimuli used in the emotion tests. However, the pitch floor of the speaker remains the same in the two domains.


Having created all the mono sound files, their mean intensities were equalised using the audio editing software Audacity (ver. 2.0.3) and the WaveStats Audacity add-in. The intensities were equalised to the same mean dBA-weighted intensity level. Since the frequency ranges differed between the sound files, and since they were going to be presented at a "most comfortable level" of approximately 72 dB SPL, the dBA-weighting was used in the intensity equalisation in order to ensure equal loudness of the sound files.

Finally the appropriate sound files were mixed into dichotic stereo tracks using the Audacity software.
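The mixing itself was done in Audacity; purely as an illustration of what a dichotic stereo track amounts to (a sketch using the third-party numpy and soundfile packages, with hypothetical file names), one mono stimulus is placed entirely in the left channel and another entirely in the right:

```python
import numpy as np
import soundfile as sf  # third-party WAV I/O library; any equivalent would do

# Hypothetical file names for two already intensity-equalised mono stimuli.
left_signal, fs = sf.read("anden_TNF_A1.wav")
right_signal, fs2 = sf.read("anden_TF_A2.wav")
assert fs == fs2 and len(left_signal) == len(right_signal)  # fully synchronised stimuli

# Column 0 feeds the left ear, column 1 the right ear: each ear receives a different stimulus.
dichotic = np.column_stack([left_signal, right_signal])
sf.write("anden_TNFA1_left__TFA2_right.wav", dichotic, fs)
```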

By this way of producing the stimuli, the only parameter modified is the fundamental frequency. Other parameters such as intensity and duration largely remained constant. The morphed pitch accent contours for the test words <anden> and <tanken> can be seen in figure 2.2, and in figure 2.3 the different emotional pitch contours are presented.4

[Figure 2.2: fundamental frequency contour plots; panels <Peka på den där anden> and <Peka på den där tanken>, showing the contours TNF A1, TNF A2, TF A1 and TF A2.]

Figure 2.2. Modifications of the fundamental frequency contours in order to create the four different pitch accent contours TNF-A1, TNF-A2, TF-A1 and TF-A2. The vertical lines mark approximately the beginnings of the test words. Only sentences for the test words <anden> and <tanken> are presented.

4 The minimal pairs <anden> and <tanken> are presented since, among the test words, these two differ the most in terms of the number of voiced segments. In addition to these fundamental frequency contours, a phonetic transcription as well as spectrograms of all four stimuli with A1-TNF are presented in Appendix 1.


[Figure 2.3: fundamental frequency contour plots; panels <Peka på den där anden> and <Peka på den där tanken>, showing the contours BOR, NEU and ENT.]

Figure 2.3. Modifications of the fundamental frequency contours in order to create the three different emotional pitch contours bored (BOR), neutral (NEU) and enthusiastic (ENT). The vertical lines mark approximately the beginnings of the test words. Only sentences for the test words <anden> and <tanken> are presented.

2.2 The computer software

The dichotic test was administered by means of computer software especially developed for the purpose of this study. This software was programmed using the Visual Basic 2010 Express platform.

The user interface of the software was developed to be used with two different screens connected to the host computer. On the test administrator's computer screen there was a control interface from which the test was run. On the participant's screen, which was a touch sensitive screen, the test itself appeared. Throughout the test, the participants reported any stimuli heard by touching different objects presented on the touch screen.

There were two different test domains, one for the pitch accent tests and one for the emotional pitch tests. These test domains would more correctly be described as the pitch accent fundamental frequency contour perception test domain, and the test domain for the perception of fundamental frequency contour generated emotional qualities. For brevity, however, these are henceforth referred to as the accent test domain and the emotion test domain.

In the accent test domain, the participants were presented with two images depicting the different members of the minimal pairs and instructed to point at the picture illustrating the test word he or she perceived. The pictures were presented in a random order, positioned vertically, one above the other. In the emotion test domain, the participants were always presented with three different, vertically ordered, emoticons. The emoticon 😑 was used for the bored emotion, the emoticon 🙂 was used for the neutral emotion, and the emoticon 😃 was used for the enthusiastic emotion.5 These were always presented in the same vertical order, with enthusiastic at the top of the screen and bored at the bottom of the screen. The reason why their order was not randomised was that it was assumed that a randomised order would make it too difficult for the participants to locate them.

In addition to the different test domains, there were also different test modes. In the binaural recognition test mode, diotic stimuli (i.e. the same sound presented bilaterally) were presented to the participants. This test mode was used partly as a practise test, and partly as a test to see whether participants were able to perceive the different pitch accents and emotions. This was especially important for the accent tests, since not all dialects differentiate between accent 1 and accent 2 (see figure 2.1).6 In the binaural test mode, the 16 different stimuli of the accent domain and the 12 different stimuli of the emotion domain were presented in a predetermined random order.

In the focused attention (FA) test mode, participants were presented with dichotic stimuli, as well as the appropriate pictures or emoticons, with the instruction to focus on only one ear (the test ear) at a time, ignoring the stimulus in the other ear. The pictures or emoticons were presented laterally on the screen, on the same side as the test ear. The participants were instructed by means of message boxes appearing on the screen which ear they should focus on. The side of the test ear was switched at regular intervals: in the accent test every eighth trial, and in the emotion test every sixth trial. Since all 32 dichotic stimuli of the accent domain and all 36 stimuli of the emotion domain were tested for both ears separately, the number of stimuli presented in the FA mode added up to 64 in the accent domain and 72 in the emotion domain.

For the FA mode, there were two different predetermined randomizations of the stimuli, which were alternated between the tests with different response hands.

In the divided attention (DA) test mode, participants were instructed to report the stimuli presented to both ears. The stimuli presented in the right ear were reported on a set of pictures or emoticons appearing on the right side of the screen, and the stimuli presented in the left ear were reported on a set of pictures or emoticons appearing on the left side of the screen. For each stimulus, the participants could choose which side they wanted to report first. Since both ears were tested at the same time in this test mode, there were only 32 trials in the accent domain, and 36 trials in the emotion domain. Also here the stimuli were presented in a predetermined random order.

For each different test mode in each of the two test domains, there were short practise tests which used separate stimuli with the minimal pair <moppen> (See table 2.1 above).

5 These emoticons are Unicode characters from the Symbola font, with the character codes 1F611 for bored, 1F642 for neutral and 1F603 for enthusiastic.

6 As will also be mentioned in section 2.8 below, only participants from dialects that distinguish between the pitch accents were included in the study. Since the accent region categorisation is somewhat outdated, as well as imprecise, the participants' ability to make lexical distinctions between the pitch accents could not be guaranteed.


In all tests, there was a fixed response period within which the participant had to report the answer. If no response was given by the end of the response period, this was counted as a wrong answer. The response period started with the presentation of the sound stimulus and lasted 3 seconds beyond the length of the sound in the accent bilateral recognition and the focused attention test modes. In the divided attention test mode, the response period lasted 5 seconds beyond the length of the sound files.

Response times were measured in all test modes using the computer's internal clock.

Test results and response times for each test trial on the one hand, and for each completed test on the other, were exported to two separate semicolon-delimited text files, which were imported into Microsoft Excel and further into IBM SPSS Statistics for statistical analysis.

Included in the computer software was also a 10-second 1000 Hz calibration tone. To ensure that no peak clipping of the sound files occurred when the sound was directed through the audiometer, the calibration tone had an RMS level 15 dB above the average RMS level of the sound files (cf. Frank & Rosen, 2007). The calibration tone was generated in the software Audacity (ver. 2.0.3), and its RMS level was measured with the WaveStats Audacity add-in. A computerised version of the Swedish APHAB questionnaire was also incorporated into the software.7
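A minimal sketch of the headroom calculation (assuming numpy and the same soundfile package as above; the study itself generated and measured the tone in Audacity with the WaveStats add-in) for a 10-second 1000 Hz calibration tone whose RMS lies 15 dB above a given reference RMS:

```python
import numpy as np
import soundfile as sf

def calibration_tone(reference_rms: float, fs: int = 44100,
                     duration_s: float = 10.0, headroom_db: float = 15.0) -> np.ndarray:
    """A 1000 Hz sine tone whose RMS level lies headroom_db above reference_rms."""
    target_rms = reference_rms * 10 ** (headroom_db / 20.0)   # +15 dB is a factor of ~5.6 in RMS
    t = np.arange(int(duration_s * fs)) / fs
    tone = np.sin(2 * np.pi * 1000.0 * t)
    return tone * target_rms * np.sqrt(2.0)                   # a sine's RMS is amplitude / sqrt(2)

# Hypothetical example: stimuli with an average RMS of 0.05 in full-scale units.
sf.write("calibration_1kHz.wav", calibration_tone(reference_rms=0.05), 44100)
```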

2.3 Choosing the measure of laterality

The choice of a laterality measure to use in a dichotic listening test is a difficult matter. In the early days of dichotic testing, a straightforward CR-CL index was commonly employed (Harshman & Krashen, 1972). However, this simple measure exhibited some serious problems. When used to assess hemispheric lateralisation in children of different ages, it implied that lateralisation decreased with age. As this was contrary to expectation, Harshman and Krashen (1972) analysed the results of a large number of dichotic listening tests and found that CR-CL was in fact significantly negatively correlated with overall accuracy (Pearson's r = -0,448). Thereby, Harshman and Krashen (1972) showed that it was in fact children's improvement in overall test results with increasing age that spuriously caused their degree of lateralisation to appear to decrease. For this reason, Harshman and Krashen (1972) developed two alternative lateralisation indices, which they called POC8 (percent of correct) and POE (percent of errors). In a comparison of these indices, they found the POC to be even more strongly negatively correlated with overall accuracy (Pearson's r = -0,749). The POE, however, was only slightly correlated with accuracy, with a Pearson's r value of +0,21. Moreover, this weak correlation did not show any statistical significance (p > 0,05).

7 The Swedish APHAB questionnaire is available at http://www.harlmemphis.org/files/2713/4618/0945/SWEDISH.pdf

8 This index is calculated as POC = CR / (CR + CL) and is equivalent to the (CR - CL) / (CR + CL) index, which is also commonly used.


This lack of correlation to accuracy was one reason why the LIPOE was chosen in this study. A further reason was that the LIPOE is relatively resistant to the amount of guessing on the part of the participant (Harshman & Krashen, 1972). A drawback of the index, however, is that it is sensitive to what Harshman and Krashen (1972) call information loss, namely that errors occur which are not created by the ear asymmetries themselves, but by loss of memory, or other similar factors. In this study, it was assumed that a similar effect would occur due to the fact that errors were reported when the participants failed to respond within the fixed response period. As Harshman and Krashen (1972) point out, this weakness is not present in the POC index. Considering the experiment in this study, however, the amount of guessing was assumed to extend far beyond the amount of information loss. Taken together, POE arguably seemed to be the most suitable laterality measure for this study.9

A third, simpler reason for choosing the POE in this study was that several important similar studies have used this laterality index, e.g. Van Lancker and Fromkin (1973) and Wang et al. (2001). In comparison to the latter, however, no type of single error trial strategy was employed.10

2.4 Data aggregation

In this section, a number of explanations and formulas are presented which have been used to generate all the data needed to analyse the test results in this study.

For each participant, and each different test, the following scores and indices have been calculated:

The right ear score (CR) expresses the percentage of correctly reported stimuli when the participant was instructed to listen and respond to what was heard in the right ear.

The left ear score (CL) expresses the percentage of correctly reported stimuli in the left ear.

The bilateral score (CB) is calculated both for the bilateral recognition tests and the dichotic tests. For the bilateral recognition tests, CB expresses the percentage of correctly reported stimuli, while for the dichotic tests it expresses the mean of CR and CL.

Right ear errors (ER) and left ear errors (EL) express the percentage of incorrectly reported stimuli in the right ear and left ear respectively, and are derived from CR and CL in the following way:

ER = 100-CR

EL = 100-CL

The laterality index percent-of-error (LIPOE) (Harshman & Krashen, 1972) is used to express ear advantage. It is derived from ER and EL and expresses the percentage of all errors that were made by the left ear. It stretches from 0 %, indicating a complete LEA, to 100 %, indicating a complete REA. A value of 50 % indicates no ear advantage (NEA). LIPOE is calculated according to the following formula:

LIPOE = [EL / (ER + EL)] * 100

9 There are, however, a number of more complex laterality measures, including non-metric ones, which have not been considered in this selection of laterality measure (see Harshman & Lundy, 1988).

The laterality index for intrusions (LII) also expresses ear advantage, but in a somewhat different manner than the LIPOE. An intrusion occurs when the stimulus in the ear contralateral to the test ear is identified and given as the response for the test ear. It is believed to be a result of the competition between the two ears induced by the dichotic stimulation. As such, intrusions can give vital information as to which ear is the more dominant (Harshman & Krashen, 1972; Van Lancker & Fromkin, 1973). To count the number of intrusions in this study, the character of the errors made in the test ear was therefore analysed. Only when an incorrect response given for the test ear equalled the contralateral stimulus was it counted as an intrusion. Like the LIPOE, the LII stretches from 0 to 100, where 0 % indicates a complete LEA, 100 % a complete REA, and 50 % NEA. In this study, the LII is calculated according to the following formulas:11

IL = [the total number of intrusions from the left ear] / [the total number of right ear trials]

IR = [the total number of intrusions from the right ear] / [the total number of left ear trials]

LII = (([IR - IL] * 100) + 100) / 2

A third laterality index has also been calculated, namely the laterality index for response time (LIRT). LIRT expresses the difference in response time between the ears in a manner corresponding to the other laterality indices. A value of 50 for the LIRT means that the response times were equal for both ears. A value above 50 specifies that the response time was shorter for the right ear than for the left, hence a REA is denoted. In a corresponding way, a value below 50 denotes a LEA. The LIRT was calculated according to the following formula, using average response times as well as the response periods specified for each different test (see section 2.2):

LIRT = (([average response time for left ear] / [response period] - [average response time for right ear] / [response period]) * 100 + 100) / 2
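To make the indices above concrete, the following is a small worked sketch with hypothetical scores (not data from the study), computing LIPOE, LII and LIRT for one participant in one focused attention test:

```python
def lipoe(cr: float, cl: float) -> float:
    """Percent-of-error laterality index: the share of all errors made by the left ear."""
    er, el = 100.0 - cr, 100.0 - cl
    return el / (er + el) * 100.0

def lii(intr_from_left: int, n_right_trials: int, intr_from_right: int, n_left_trials: int) -> float:
    """Laterality index for intrusions."""
    il = intr_from_left / n_right_trials
    ir = intr_from_right / n_left_trials
    return ((ir - il) * 100.0 + 100.0) / 2.0

def lirt(mean_rt_left_s: float, mean_rt_right_s: float, response_period_s: float) -> float:
    """Laterality index for response time."""
    diff = mean_rt_left_s / response_period_s - mean_rt_right_s / response_period_s
    return (diff * 100.0 + 100.0) / 2.0

# Hypothetical example: CR = 80 % and CL = 60 % correct over 64 trials per ear,
# 4 left-ear intrusions when testing the right ear, 10 right-ear intrusions when
# testing the left ear, and mean response times of 2.1 s (left) and 1.8 s (right)
# within a 6-second response period.
print(round(lipoe(80, 60), 1))          # 66.7 -> above 50, i.e. a REA
print(round(lii(4, 64, 10, 64), 1))     # 54.7 -> above 50, i.e. a REA
print(round(lirt(2.1, 1.8, 6.0), 1))    # 52.5 -> shorter right ear response times, i.e. a REA
```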

2.5 Statistical analyses

The statistical analyses of all test scores and ear advantages were performed on a group level. With regard to the results from previous research presented in the introduction, hypotheses were formulated for each of the research questions. Concerning ear advantages, the null hypotheses were always that no ear advantage would be found. For the overall ear advantages generated in the different tests, the alternative hypotheses were always one-sided, stating that in the accent test domain right ear advantages (REA) would be found, and in the emotion test domain left ear advantages (LEA) would be found. When investigating the ear advantages generated by each type of stimulus separately, the alternative hypotheses were always two-sided, since it was not assumed that all the different pitch accent contours in the accent tests, and all the emotional pitch contours in the emotion tests, would necessarily generate ear advantages in the same direction.

11 The transformation expressed by the final "+ 100) / 2" in both the LII and the LIRT formulas serves the purpose of adjusting the range of possible values to 0-100. Without this adjustment, the indices would range from -100 to 100. The adjustment is made for the sole purpose of making the three different laterality indices comparable.

Concerning differences between different ear advantages and bilateral test scores, the null hypotheses were always that there would be no differences. The alternative hypotheses in these cases were always two-sided, denoting that there would be a difference between different ear advantages or between different bilateral test scores, but that the direction of the difference would not be possible to deduce in advance.

Before any statistical analyses were performed, each included variable was examined using the Kolmogorov-Smirnov test with the Lilliefors significance correction, in order to test whether the variable could be assumed to be normally distributed.

In establishing significant ear advantages for normally distributed variables, the appropriate laterality indices were tested with one sample T-tests, using a test value of 50, representing no ear advantage (NEA). One sided significance levels were noted. In this way, it was assessed whether the group means deviated significantly from NEA in the hypothesised direction. In establishing significant ear advantages for variables that were not normally distributed, the appropriate laterality indices were tested with the Wilcoxon signed rank tests for paired samples, comparing the values of each laterality index to a dummy variable with the constant value of 50 (representing NEA). One sided significance levels were noted. Hence, it was assessed whether the group medians deviated significantly from NEA in the hypothesised direction.

In assessing whether differences between two normally distributed variables were statistically significant, paired samples T-tests were used, noting two-sided significance levels. If one of the variables in a comparison was not normally distributed, a Wilcoxon signed rank test for paired samples was used instead.

In all figures in chapter 3, statistical significance is marked by an asterisk and non-normally distributed variables are marked by daggers. Asterisks in bar charts always refer to a group mean value, tested parametrically with a T-test. Conversely, asterisks in boxplots always refer to a group median value, tested non-parametrically with Wilcoxon signed rank tests. Throughout the statistical analyses, an α-level of 0,05 (i.e. a 95 % confidence level) was used.
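A sketch of the testing logic described above, using scipy rather than the IBM SPSS software actually used in the study, and with a Shapiro-Wilk test standing in for the Kolmogorov-Smirnov/Lilliefors normality test (an assumption for illustration only):

```python
import numpy as np
from scipy import stats

NEA = 50.0  # test value representing no ear advantage

def test_ear_advantage(li_values: np.ndarray, alpha: float = 0.05,
                       alternative: str = "greater") -> dict:
    """One-sample test of a laterality index against NEA (50): a t-test if the
    values look normally distributed, otherwise a Wilcoxon signed-rank test
    against the constant 50."""
    normal = stats.shapiro(li_values).pvalue > alpha
    if normal:
        res = stats.ttest_1samp(li_values, NEA, alternative=alternative)
    else:
        res = stats.wilcoxon(li_values - NEA, alternative=alternative)
    return {"normal": normal, "statistic": float(res.statistic), "p": float(res.pvalue)}

# Hypothetical LIPOE values for 25 participants in an accent test, where a REA
# (values above 50) is the one-sided alternative hypothesis.
rng = np.random.default_rng(0)
example = np.clip(rng.normal(57.0, 12.0, 25), 0.0, 100.0)
print(test_ear_advantage(example, alternative="greater"))
```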


2.6 Procedure

The testing sessions took place at the department of audiology at Örebro University. Each test session was preceded by pure tone audiometry using the modified Hughson-Westlake method (Roeser & Clark, 2007), by which the hearing status of the participant was assessed. Hearing thresholds were measured down to -10 dB HL.

During the test sessions, the participant was seated in a sound treated booth with the test administrator seated outside, observing the participant through a window. To reduce glare on the participant's computer screen, the main lights in the booth were turned off during the test session. The participant was seated on a chair with the touch screen placed on a table at a comfortable distance.

The test session was divided into four parts. In the first part, the two bilateral recognition tests were administered. In the second part, the three tests of one of the two dichotic test domains (i.e. the ATs or the ETs) were conducted. In the third part, the APHAB questionnaire was presented to the participant, and in the fourth part came the three tests of the remaining test domain.

All tests were preceded by short practise tests to ensure that the participant had understood the task at hand.

The dichotic tests of both test domains were always administered in a fixed order beginning with the FA test with the initial response hand, continuing with the FA test with the other hand as response hand, and finishing with the DA test.

In the computer software, sixteen different test orders were incorporated, of which one was chosen for each participant at the start of the test session. These test orders automatically varied a number of test order parameters between different subjects according to a predetermined scheme. The parameters that were changed between subjects were the initial test domain and, in the FA mode, the initial test ear, the initial response hand, and the initial stimuli randomisation. The reason why these parameters were changed between different subjects was to decrease the influence of factors such as weariness and training effects acquired during the test session.

At the outset, the tests were presented at a mean RMS level of 72 dB SPL. After the first practise test, the participant was asked whether this sound level was comfortable. If not, it was possible to adjust the sound level to the most comfortable level for the participant. However, all participants were comfortable with the preset presentation level, and thus all tests in the entire study were performed using the same average sound level of 72 dB SPL.

2.7 Apparatus

The computer hosting the dichotic testing software was a standard portable computer, with an internal sound card run by the Realtek high definition audio driver.

An Interacoustics Clinical Audiometer of the model AC40, with Telephonics ear phones (TDH-39P), was used both for the pure tone audiometry and for the dichotic test. During the dichotic test, the internal sound card of the computer hosting the testing software was connected to the speech audiometry input channel of the audiometer, delivering the sound from the computer to the TDH ear phones.

The touch screen used by the participants was a 15 inch TFT type resistive touch screen (Deltaco, TM1500).

The amount of total harmonic distortion (THD) generated by the audiometer circuitry was checked by directing a 1000 Hz pure tone, with an RMS level equal to the highest peak RMS level found in the test stimuli, from the computer hosting the dichotic test software through the audiometer, and finally from the ear phone output of the audiometer into the input jack of a second computer. Using the software Visual Analyser 2011, THD values for the presentation levels used in the dichotic test were measured to be 0,34 % for the right channel and 0,18 % for the left channel.

2.8 Participants

Twenty-nine right-handed mother tongue speakers of Swedish were initially selected as participants in this study. They were recruited partly through personal contacts of the author and partly through a small invitation campaign held at Örebro University. The results from one participant had to be excluded from the analysis due to technical problems during the test session. Moreover, three participants were excluded due to asymmetric hearing thresholds (see section 2.8.1 below). The remaining group of participants consisted of 10 men and 15 women. The participants were between 20 and 61 years old, and the median age of the group was 29 years. Only participants who had grown up in one of the pitch accent regions 2A and 2B in figure 2.1 were included in the study. Of those, 12 originated from pitch accent region 2A and 13 from region 2B. In addition, all participants had at least one parent who had grown up in that same accent region. The reason why only pitch accent regions 2A and 2B were selected was that these are the regions in which changes in pitch accent pattern occur due to changes in clausal focus (Bruce, 2012).

None of the participants had any language impairment diagnosis.

2.8.1 Exclusions due to asymmetric hearing thresholds

To ensure normal hearing, pure tone averages (PTA) were calculated for each participant for the audiogram frequencies 500, 1000, 2000, and 4000 Hz. Participants with PTAs equal to or better than 20 dB HL were included. No participant was excluded due to hearing loss.

Asymmetric hearing thresholds could obviously bias the ear advantages attained in a dichotic listening test. In order to ensure symmetric hearing thresholds on the part of the participants, pure tone thresholds were compared across the frequency range 125-4000 Hz. The reason why asymmetries above 4000 Hz were not grounds for exclusion is that such frequencies do not affect the perception of pitch (Wang & Bendor, 2010). Three participants with bilateral hearing threshold differences larger than 10 dB at one frequency, or more than 5 dB (in the same direction) at more than two adjacent frequencies, were excluded.
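The exclusion rule can be expressed as a small sketch (a hypothetical helper for illustration, assuming thresholds in dB HL at the audiogram frequencies from 125 to 4000 Hz in ascending order):

```python
def asymmetric(right_db_hl, left_db_hl) -> bool:
    """True if the interaural threshold difference exceeds 10 dB at any single frequency,
    or exceeds 5 dB in the same direction at more than two adjacent frequencies."""
    diffs = [r - l for r, l in zip(right_db_hl, left_db_hl)]
    if any(abs(d) > 10 for d in diffs):
        return True
    run, prev_sign = 0, 0
    for d in diffs:
        sign = (d > 5) - (d < -5)   # +1 or -1 only when the difference exceeds 5 dB
        run = run + 1 if sign != 0 and sign == prev_sign else (1 if sign != 0 else 0)
        prev_sign = sign
        if run > 2:                 # i.e. at three or more adjacent frequencies
            return True
    return False

# Example with thresholds at 125, 250, 500, 1000, 2000, 3000 and 4000 Hz:
print(asymmetric([10, 10, 15, 15, 15, 10, 5],
                 [5, 0, 5, 5, 5, 10, 5]))   # True: 10 dB differences at more than two adjacent frequencies
```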


2.8.2 Exclusions due to subjective hearing problems

To ensure normal subjective hearing, a global APHAB score for each participant was calculated from the partial APHAB scores and subsequently compared to the norms of the English language APHAB for young subjectively normal hearing people published by Cox (1997). Since all participants were well within the norms of normal hearing young people on the global (average) scale, none were excluded due to subjective hearing problems. These norms are presented together with the global APHAB results of each participant in figure 2.4.

Figure 2.4. The results of the APHAB global score for all included participants. Horizontal lines indicate the APHAB norms of normal hearing young people for the English language version of the APHAB (Cox, 1997). The columns indicate individual scores for all 25 participants.

2.8.3 Exclusions due to low overall test results

In order for the test results of any participant to be included in the analysis, his or her overall results in the different test domains needed to be significantly better than the results generated simply by guessing. This was tested using one sample T-tests.

In the accent domain, one sample T-tests with the test value 50 (representing the result generated by guessing) were used. The test variables were each participant's bilateral scores on the three different accent tests. Only the bilateral scores of one participant did not differ significantly from the tested value (p=0,168). This participant was excluded from further analysis of the accent tests.

For the emotion test domain, one sample T-tests were used with the test value 33,33 (representing the result generated by guessing in these tests). The test variables were each participant's bilateral scores on the three different emotion tests. Here the bilateral scores of all participants differed significantly from the tested value, hence no participant was excluded due to low overall test results in the emotion test domain.
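A hedged sketch of this screening step is given below, using scipy's one sample T-test. The variable names and example scores are hypothetical; only the chance levels (50 for the accent domain, 33,33 for the emotion domain) come from the procedure described above.

```python
from scipy import stats

def differs_from_chance(bilateral_scores, chance_level, alpha=0.05):
    """One sample T-test of a participant's bilateral scores against chance."""
    t_stat, p_value = stats.ttest_1samp(bilateral_scores, chance_level)
    return p_value < alpha

# Hypothetical bilateral scores (%) from the three tests in each domain.
accent_cb = [62.5, 58.3, 66.7]
emotion_cb = [55.6, 61.1, 58.3]

keep_accent = differs_from_chance(accent_cb, chance_level=50.0)
keep_emotion = differs_from_chance(emotion_cb, chance_level=33.33)
```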

2.9 Ethical considerations

All participants were informed that their participation in the study was confidential and voluntary and that they could abort their participation at any time. Informed consent was obtained from all participants. No information that could be used to identify the participants was stored anywhere, except on their individual informed consent forms, on which either an e-mail address or a postal address was kept for the purpose of mailing the final results of the study, should the participant wish to be informed of them. See appendix 2 for the informed consent form.

Immediately after each test session the participants were offered the opportunity to see their individual test results and to receive an explanation of their audiograms.


3. Results

3.1 Individual laterality configurations

In figure 3.1, individual laterality configurations are presented for each of the 25 participants, generated from the mean ear advantages of the dichotic FA tests with RHR and the DA tests in both the pitch accent and the emotional pitch domains. As can be seen, there is extensive variation between subjects. Approximately a third of the participants show the hypothesised laterality configuration. About a third show very little or even no ear advantage in any test domain. Some show a laterality effect in only one test domain, and a few even show a reversed laterality.

Figure 3.1. Mean laterality for each participant, calculated from LIPOE and LII for the dichotic FA tests with RHR and DA tests in both test domains. Triangles indicate the mean EA for the accent tests, and squares indicate the mean EA for the emotion tests. A value above 50 indicates a REA, a value below 50 indicates a LEA, and the value 50 indicates NEA.

3.2 Group level ear advantages

In this analysis, all three laterality indices (i.e. LIPOE, LII and LIRT) as generated by the FA tests with RHR and the DA tests in both test domains were studied. Null hypotheses were rejected if at least one of the laterality indices indicated a statistically significant (p < 0,05) ear advantage in the hypothesised direction, as long as there were no statistically significant ear advantages in the other direction.

In figure 3.2a, mean LIPOE and LII values from the four tests are presented. Asterisks indicate significant mean EAs, and daggers indicate that the variable was not normally distributed. For the LIRT there were essentially no ear advantages found in any of the tests, hence this index is not presented graphically.12

12 The LIRT showed the following values. LIRT for FA-AT-RHR: 50,18. LIRT for DA-AT: 50,60. LIRT for FA-ET-RHR: 50,07. LIRT for DA-ET: 50,15. One sample T-tests indicated that these small differences were not statistically significant.


As can be seen in figure 3.2a, the group means of all tests show the expected EAs. However, only four of these EA measures are statistically significant, namely LIPOE (p = 0,048, 1-tailed sig.) and LII (p = 0,014, 1-tailed sig.) in the DA accent test, as well as LIPOE (p = 0,005, 1-tailed sig.) and LII (p = 0,003, 1-tailed sig.) in the FA emotion test with RHR.

In figure 3.2b, the minima, 1st quartiles, medians, 3rd quartiles, and maxima of the LI values for the four tests are presented. Here asterisks indicate statistically significant deviations of the group median from NEA. No statistically significant deviation of the group median from NEA could be established by the Wilcoxon signed rank tests for the two non-normally distributed variables in figure 3.2a.

For the DA accent test and the FA emotion test with RHR, the null hypotheses can therefore be rejected and the alternative hypotheses accepted, establishing the expected EAs for these tests. For the FA accent test with RHR and the DA emotion test, however, the null hypotheses cannot be rejected on statistical grounds.

Because no response time differences between the ears were found, the LIRT has largely been left out of the further analysis.
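The statistical scheme described above, testing each laterality index against the NEA value of 50 with a one sample T-test when the values are normally distributed and with a Wilcoxon signed rank test otherwise, could be sketched as below. The use of the Shapiro-Wilk test for the normality check is an assumption, as are the function and variable names.

```python
from scipy import stats

def test_ear_advantage(li_values, nea=50.0, alpha=0.05):
    """Return (statistic, p value) for the deviation of a laterality index from NEA."""
    _, p_normality = stats.shapiro(li_values)
    if p_normality >= alpha:
        # Values consistent with a normal distribution: one sample T-test.
        return stats.ttest_1samp(li_values, nea)
    # Otherwise: Wilcoxon signed rank test on the deviations from NEA.
    deviations = [value - nea for value in li_values]
    return stats.wilcoxon(deviations)
```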

Figure 3.2a. Mean ear advantages for the FA tests with RHR and the DA tests. Black columns indicate LIPOE and white columns indicate LII. A number above 50 indicates a REA, a number below 50 indicates a LEA, and the number 50 indicates NEA. Asterisks indicate statistically significant deviations from NEA (p < 0,05). Daggers indicate that T-tests could not be performed due to lack of normal distribution.


Figure 3.2b. Boxplot displaying the group minima, 1st quartiles, medians, 3rd quartiles, and maxima for the LI values for the FA tests with RHR and the DA tests. A value of LIPOE above 50 indicates a REA, a value below 50 indicates a LEA, and the value 50 indicates NEA. Asterisks indicate that the group median deviated significantly from NEA (p < 0,05).

3.3 Comparisons between focused and divided attention

In this analysis the bilateral scores of the FA tests were compared to those of the DA tests in both test domains. All these bilateral scores were normally distributed.

The mean bilateral scores for the FA accent test with RHR, the DA accent test, the FA emotion test with RHR and the DA emotion test can be seen in figure 3.3a below. Paired sample T-tests revealed that the difference in CB between the FA accent test with RHR and the DA accent test was highly statistically significant (p < 0,001). This difference is marked by an asterisk in figure 3.3a. The difference between the FA emotion test with RHR and the DA emotion test, however, was not statistically significant.
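A minimal sketch of this comparison is shown below, with hypothetical example scores (one FA and one DA bilateral score per participant); none of these values are the study's data.

```python
from scipy import stats

def compare_fa_da(fa_scores, da_scores):
    """Paired sample T-test between FA and DA bilateral scores."""
    return stats.ttest_rel(fa_scores, da_scores)

# Hypothetical bilateral scores in percent, one pair per participant.
fa_accent_cb = [80, 72, 78, 74, 81]
da_accent_cb = [70, 65, 72, 69, 73]
t_stat, p_value = compare_fa_da(fa_accent_cb, da_accent_cb)
```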

In figure 3.3b the group CB minima, 1st quartiles, medians, 3rd quartiles, and maxima are presented for the FA accent test with RHR, the DA accent test, the FA emotion test with RHR and the DA emotion test.



Figure 3.3a. Mean bilateral scores (CB) for the FA accent test with RHR, the DA accent test, the FA emotion test with RHR and the DA emotion test. The asterisk indicates a statistically significant difference between the indicated columns (p < 0,001).

Figure 3.3b. Boxplot displaying the group minima, 1st quartiles, medians, 3rd quartiles, and maxima for the bilateral scores (CB) for the FA accent test with RHR, the DA accent test, the FA emotion test with RHR and the DA emotion test. The asterisk indicates a statistically significant difference between the indicated columns (p < 0,001).

For the emotion tests, therefore, the null hypothesis cannot be rejected, indicating that no difference between the overall scores of the FA and the DA test modes could be demonstrated in this group of normal hearing subjects.

For the accent tests, however, the null hypothesis must be rejected and the alternative hypothesis accepted, establishing that there is a difference in the overall results between the two test modes.

3.4 Comparisons of the ear advantages generated by the four different Swedish pitch accent contours

In order to answer the third research question, two steps were required. The first step was to examine the ear advantages of each separate pitch accent contour. Since it was not assumed that all pitch accent contours would necessarily generate ear advantages in the same direction, two-sided hypotheses were used in this analysis. In the second step, any significant ear advantages generated in the first step were compared to see whether they had the same or different directions.

Step 1. Establishing ear advantages

Due to the small number of intrusions generated for each group of stimuli, the LII was also left out of the analysis at this point, leaving the LIPOE as the only laterality measure used in this analysis.



The FA test mode

In figure 3.4a, group means of the LIPOE for each pitch accent contour in the FA accent test with RHR are presented. The asterisk indicates a significant deviation of the group mean from NEA, and the daggers indicate that the variable was not normally distributed. TNF-A2 was the only pitch accent contour which had a normal distribution. A one sample T-test indicated that the REA for TNF-A2 was statistically significant (p = 0,01, 2-tailed sig.).

In figure 3.4b, minima, 1st quartiles, medians, 3rd quartiles, and maxima of the LIPOE values for the different pitch accent contours are presented. Here asterisks indicate statistically significant deviations of the group median from NEA. Wilcoxon signed rank tests indicated that there were no significant REAs for any of TNF-A1, TF-A1 or TF-A2.

In the FA mode then, only one significant REA for the different pitch accent contours could be established.

Figure 3.4a. Mean ear advantages (LIPOE) for the different pitch accent fundamental frequency contours in the FA accent test with RHR. A number above 50 indicates a REA, a number below 50 indicates a LEA, and the number 50 indicates NEA. The asterisk indicates a statistically significant deviation from NEA (p < 0,01). Daggers indicate that a T-test could not be performed due to lack of normal distribution.

Figure 3.4b. Boxplot displaying the group minima, 1st quartiles, medians, 3rd quartiles, and maxima for the LIPOE of the different pitch accent fundamental frequency contours in the FA accent test with RHR. A value above 50 indicates a REA, a value below 50 indicates a LEA, and the value 50 indicates NEA. Asterisks indicate that the group median deviated significantly from NEA (p < 0,05).

The DA test mode

In figure 3.5a, group means of the LIPOE for each pitch accent contour are presented for the DA accent test. Here, all variables were normally distributed. However, T-tests indicated that none of the EAs seen in figure 3.5a were statistically significant.

In the DA mode therefore, it was not possible to establish any significant ear advantages for the different pitch accent contours separately.



Figure 3.5a. Mean ear advantages (LIPOE) for the different pitch accent fundamental frequency contours in the DA accent test. A number above 50 indicates a REA, a number below 50 indicates a LEA, and the number 50 indicates NEA.

Figure 3.5b. Boxplot displaying the group minima, 1st quartiles, medians, 3rd quartiles, and maxima for the LIPOE of the different pitch accent fundamental frequency contours in the DA accent test. A value above 50 indicates a REA, a value below 50 indicates a LEA, and the value 50 indicates NEA.

Step 2. Comparison of the significant ear advantages

In this step, a comparison of the significant ear advantages attained for the different pitch accent contours was planned. However, across all pitch accent contours in both the FA and the DA mode, only one ear advantage, namely the REA for TNF-A2 in the FA mode, proved to be statistically significant, so no comparison was possible. Still, this analysis suggests, however weakly, that the Swedish pitch accent contours cannot be said to generate ear advantages in different directions.

During this analysis, it was further noted that the bilateral scores differed markedly between the TNF and the TF stimuli. In the FA mode, the mean values for TNF-A1 and TNF-A2 were 60 % and 57 % respectively, while for TF-A1 and TF-A2 they were 91 % and 93 % respectively. The same difference could be seen in the DA mode, in which the mean values for TNF-A1 and TNF-A2 were 37 % and 23 % respectively, while the values for TF-A1 and TF-A2 were 87 % and 81 % respectively.

3.5 Comparisons of the ear advantages generated by the three different emotional pitch contours

In order to answer the fourth research question, two steps were likewise required. The first step was to examine the ear advantages of each separate emotional pitch contour. Since it was not assumed that all emotional pitch contours would necessarily generate ear advantages in the same direction, two-sided hypotheses were used in this analysis. In the second step, significant ear advantages generated in the first step were then compared to see whether they had the same or different directions.

