
Audio Quality of Film Dialogue Across the Listening Space

Do Untrained Listeners Perceive a Difference in Audio Quality when Switching Between Ideal and Less Ideal Listening Positions?

Johan Ehn

2016


Audio Quality of Film Dialogue Across the Listening Space:

Do untrained listeners perceive a difference in audio quality when switching between ideal and less ideal listening positions?

Johan Ehn
Bachelor Thesis
Luleå University of Technology 2016


Abstract

The use of surround sound in cinema can be considered fairly ubiquitous today, and with it the practice of using the center channel for playback of dialogue content. Previous studies of dialogue or speech content in the center channel have utilised speech-in-noise tests, without placing the test procedure in a realistic application. This study investigated how untrained listeners rated different excerpts of film with accompanying surround sound under three different playback options for dialogue content: center channel, phantom center, and dialogue positioned in the soundscape according to the actors' positions on screen. The purpose of these ratings was to examine whether one of the three playback options better preserved audio quality in a non-ideal listening position compared to an ideal one. The results show that none of the rated qualities changed significantly between the two listening positions. The conclusion was that either the tasks were too complex for untrained subjects, or untrained listeners do not perceive a detriment in the audio quality of dialogue when sitting in a non-ideal listening position.


1. Introduction
1.2 Background
1.2.1 History of multi-channel audio in film
1.2.2 The center channel
1.2.3 The professionals' view on the use of a center channel
1.2.4 Breaking down audio quality
1.2.5 Earlier research
1.3 Purpose
2. Method
2.1 Stimuli
2.1.1 Stimuli playback manipulation
2.2 Materials
2.3 Subjects
2.4 Rating of excerpts
2.5 Listening Environment
2.6 Test procedure
3. Results
3.1 Analysis
4. Discussion
4.1 Critique of the method
4.2 Results
4.3 Conclusion
4.4 Further Research
Acknowledgements
5. References
Appendix A - Demby's original response
Appendix B - Listening test instructions and rating scales
Appendix C - Compiled ratings for each trial and clip


1. Introduction

In today's film industry, the use of a center channel in multi-channel audio can be considered fairly ubiquitous, and as with any widespread technique there are practices developed through trial and error alongside industry practices benefitting from academic research projects. The use of the center channel for dialogue content is of the former variety, and research on the subject is relatively recent compared to the timespan of multi-channel audio in commercial use.

The presumed benefits of placing dialogue in the center channel are the great increase in perceived lateral stability of the soundscape, creating an anchor of sorts for the audience to rely on, and the absence of acoustical crosstalk when using a single playback source instead of several, which should in theory produce a signal higher in clarity and intelligibility (Shirley & Kendrick, 2005) as well as prevent cancellation effects that may reduce qualities related to presence in the audio content, making it feel distant (Holman, 2008).

1.2 Background

1.2.1 History of multi-channel audio in film

The first case of true multi-channel audio for cinema dates back to 1940, when Walt Disney's Fantasia (Disney, 1940) premiered and with it a brand new way to play back film audio in a cinema setting. The system, dubbed Fantasound, featured three separate frontal loudspeakers as well as an array of surround speakers fed from the same surround audio track, making it a true four-channel recording and playback system. A frontal loudspeaker setup with left, center and right channels had previously been tested by Bell Labs in 1937, but had stopped at the trial stage. Fantasound, too, did not last very long as a cinema audio format, due to its expensive and overly complex nature (Kernis, 2011).

In 1987, a subcommittee of SMPTE (Society of Motion Picture and Television Engineers) decided on 5.1 as the digital standard for audio in cinema, with the motivation that six channels is the minimum number of discrete channels required to create the feeling of a surrounding soundscape without leaving holes in the surround field. The practice of five channels plus one channel for low-frequency content was already in use in analogue cinema with 70mm film, and the standard can be viewed as a codification of that practice (Holman, 2008).

1.2.2 The center channel

The center channel was from the beginning used to improve the creation of a horizontal soundscape that could extend the stereophonic soundstage. Today it is primarily used for dialogue, but also to deliberately fill the aforementioned "hole" that can arise in a stereophonic 2-channel soundscape, especially in larger playback settings such as a full-scale cinema (Holman, 2008).

1.2.3 The professionals' view on the use of a center channel

Christoffer Demby, mixing engineer at LjudBang, was asked to give his opinions on the purpose and benefits of the center channel (his response has been translated from Swedish by the author). Demby's original answer in Swedish can be found in Appendix A.

"When it comes to the center speaker the easiest answer is that the direction (panning) becomes more defined to the visuals and is more cohesive to the picture.

Dialogue does not necessarily need to be placed in the center channel 100% of the time, but one should be aware of the fact that in a cinema auditorium people are sitting all over the room and the listening spectrum, which can decrease intelligibility when dialogue is placed in channels other than the center channel."

(C. Demby, personal communication, 21 Jan 2016)

Demby's practical perspective matches up quite nicely with the literature used for this study, and his statement reinforces several of the points Holman makes in his writings.

1.2.4 Breaking down audio quality

To study the term audio quality in a way that is both accurate to film audio conditions and uses terminology that can be internalised and understood by untrained listeners (individuals who do not have any experience with critical listening training), it needs to be broken down into parts that can be addressed and evaluated with greater precision than the term "quality".

Let us begin by looking at what constitutes audio quality as a whole. One of the goals of a film sound engineer could be considered to be creating a listening environment that either augments or reflects the visual information passed on to the consumer through the picture, by surrounding the listener with a soundscape that achieves this vision. Klas Dykhoff (2002) writes that one of the main objectives of the sound designer is to further and augment the narrative, working in conjunction with the visual information to tell a story rather than just adding sounds without motivation. The conclusion could be made that this approach includes the dialogue treatment as well: making it seem related and thematically correct in regard to the other elements in the soundscape, as well as making it a tool to help tell the same story as the picture.

It is also worth considering that when discussing different aspects of audio quality with untrained listeners, as is the case for this study, the terminology and definitions of technical terms must be adapted to be understandable and approachable for individuals without formal training or experience in sound engineering or critical listening. It is therefore more feasible to use descriptive terms such as depth, clarity, naturalness, etc. instead of technical terms relating to amount of distortion, frequency distribution, dynamic range and so on.


With this in mind, the term audio quality in regards to dialogue was broken down into parts, to decide which of them seemed like the best fit for untrained listeners to internalise and assess critically.

Several studies on intelligibility ratings for speech in noise have been carried out, and though they are somewhat related to this research in methodology, intelligibility has been ruled out as a feasible component of the term audio quality of speech in film. This is because intelligibility is only tangibly degraded when the listening conditions are degraded to a degree that does not reflect a realistic film product aimed at consumers. Kalikow et al. (1977), for example, conducted a listening test where subjects were tasked with identifying single words in a sentence masked by noise, maintaining a signal-to-noise ratio of -5 dB to +10 dB. Similarly, Shirley & Kendrick (2005) conducted comparable tests with an SNR of -2 dB. Listening conditions as non-ideal as these simply do not occur in a cinema showing commercially produced films, and intelligibility is therefore considered too good to test accurately, regardless of which dialogue playback is used.

Clarity, while somewhat related to the term naturalness (how natural a certain sound is perceived to be in a given context), is not quite the same thing. Clarity can be described as the amount of high-frequency content in a given audio signal; a lack of high-frequency content, a loss of clarity, can make the signal feel muddled and fuzzy or even incoherent. This seems a better fit for testing, and the fact that a loss of clarity can easily be described verbally, for example by likening it to the speaker's mouth being obscured, should make it easier for untrained subjects to comprehend this quality.

Naturalness could have been considered a good fit for testing, but its similarities to clarity and the amount of overlap in their layman's-terms definitions make the author consider it redundant on this occasion. The same can be said for the terms airy and sharp, which can both be related to the same energy concentration in the high-frequency range: Harald Gagge (2014) concluded that both of these terms are often related to energy concentrations in the range of 8,000-16,000 Hz.

Depth as a part of audio quality in dialogue could also be argued to relate to naturalness, covering a different part of that term than clarity does. Depth relates to naturalness in that the right amount of depth can enhance the depiction of the room, increasing the feeling of naturalness when the dialogue content seems to reflect the space depicted in the visual information. In contrast, a video of a cave with completely dry and present dialogue could be considered not very natural, with no depth quality to observe either. Together with clarity, depth covers a part of audio quality in dialogue that can be deemed easy enough for untrained listeners to comprehend and accurately assess: the two qualities cover quite a large area of audio quality without being overly technical in their definitions, while being tangible enough in a soundscape that listeners can separate them from one another.

1.2.5 Earlier research

As for research done on the center channel, Holman (1991, 1996) concluded that dialogue content in noise suffers less from losses in clarity when it is played back through a physical center channel instead of a phantom center. Shirley and Kendrick (2005) state that one of the benefits of using a center channel is the lack of acoustical crosstalk, which otherwise can cause phase cancellation in the midrange frequency area (about 1.5-3 kHz) that masks the speech content.
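The cancellation mechanism is a comb filter: when both stereo speakers reproduce the same dialogue signal, an off-center listener receives the two contributions with a path-length difference, and frequencies whose period is twice the resulting delay arrive in opposite phase. A minimal sketch of where those notches land (the seat distances below are hypothetical, not measurements from any study cited here):

```python
# Sketch (not from the thesis): comb-filter notches caused by the
# path-length difference between two speakers reproducing the same
# (phantom-center) signal, for an off-center listener.
import math

def notch_frequencies(d_left, d_right, f_max=4000.0, c=343.0):
    """First few cancellation frequencies (Hz) for a path difference in metres."""
    delta = abs(d_left - d_right)
    if delta == 0:
        return []  # on the center line there is no comb filtering
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) * c / (2 * delta)  # odd multiples of c / (2 * delta)
        if f > f_max:
            break
        notches.append(round(f, 1))
        k += 1
    return notches

# Hypothetical off-center seat: 2.0 m from the near speaker, 2.1 m from the far one.
print(notch_frequencies(2.0, 2.1))
```

For a path difference of only 10 cm the first notch already falls at about 1.7 kHz, squarely inside the 1.5-3 kHz consonant region the cited authors point to.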

In his book Surround Sound: Up and Running, Holman discusses the different localization cues used by the human auditory system, and paints a picture of the differences that arise between stereo and surround playback. Because localization of upper-midrange and treble frequencies is derived from level differences between the listener's left and right ears, the center image (where dialogue and speech content is most often played back) will shift quite easily when the listener moves his or her head in any direction. The center channel is more resilient to this: as a physical source, it will be interpreted as coming from that physical location even if the listener moves around. Holman states this as a flaw in stereo reproduction, mentions the same dip in consonant information due to crosstalk as Shirley and Kendrick do, and adds that this might also have a subjective detrimental effect, as content played back in a phantom stereo image may seem more distant because it lacks the frequencies most commonly associated with perceived presence (Holman, 2008).

It can be concluded from this research that the center channel is quite important to multi-channel audio in films, and that it can improve the intelligibility and clarity of speech in a speech-over-noise setting. It also seems to be preferred by mixing engineers for placing dialogue content.

What is lacking on the academic front, though, is a study where these claims from engineers and textbooks about the potential benefits of using a center channel for dialogue reproduction are put to the test, and where earlier studies of audio quality in speech are furthered in a test applied specifically to film.

1.3 Purpose

The main topic that this thesis aims to investigate is: do untrained listeners perceive a difference in audio quality of dialogue content in 5.1 film between an ideal listening position and an adverse counterpart when utilising three different dialogue playback options?

By investigating this matter the author hopes to find indications of which playback option best preserves the audio quality of dialogue content when the listening position is equidistant from each of the three front speakers (this position is often referred to as the sweet spot) compared to an off-center listening position. These two positions are of equal importance, as very few film consumers get to sit in the ideal listening position; audiences are scattered across the whole listening space.

Earlier studies on the audio quality of speech have been done with the subject in the sweet spot, and several of the claims that Holman and other authors make about the benefits of the center channel should in theory be amplified when the listener is placed outside the optimal listening position. Off-center, the stereo playback will be skewed to one side because of the listener's increased proximity to one speaker over the other, causing that half of the stereo playback to be perceived as disproportionally loud compared to the other half. In theory, this phenomenon will not occur with physical center channel playback, as the acoustic signal only emanates from a single point regardless of listening position.

As the difference in perceived position of speech between the playback options is quite significant in theory, the stability ratings should reflect that difference quite clearly. The clarity ratings will likely be higher for the stimuli utilising center channel playback, although the difference between listening positions is less predictable. The acoustical crosstalk of two-channel dialogue reproduction should show in the results in such a way that the difference in depth ratings between the two listening positions ought to be larger for the stimuli utilising two-channel reproduction than for the ones using center channel playback.
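The off-center level skew can be roughly quantified under a free-field assumption (an idealisation; a real room's reflections soften it): a point source falls off about 6 dB per doubling of distance, so the nearer speaker's advantage is 20·log10(d_far/d_near). The seat distances below are hypothetical.

```python
# Sketch (free-field idealisation, not a measurement from the thesis):
# interchannel level imbalance at an off-center seat, assuming 6 dB of
# attenuation per doubling of distance from each point source.
import math

def level_imbalance_db(d_near, d_far):
    """Level advantage of the nearer speaker, in dB."""
    return 20 * math.log10(d_far / d_near)

# Hypothetical seat 2 m from the left speaker and 3 m from the right one.
print(round(level_imbalance_db(2.0, 3.0), 2))
```

Even this modest geometry yields an imbalance of roughly 3.5 dB, enough to pull a phantom center audibly toward the nearer speaker.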


2. Method

To investigate how untrained listeners perceive a difference in audio quality of dialogue when comparing two listening positions, a listening test was constructed where subjects would watch and rate six different film excerpts according to three component qualities of audio quality. These qualities were selected to be easy to detect and understand for untrained listeners while still covering as much as possible of the umbrella term audio quality of dialogue. The film excerpts utilised three different playback methods for the dialogue content (more on this in section 2.1.1, Stimuli playback manipulation), with two video stimuli for each playback variation. This produced six different stimuli excerpts that each subject watched and rated twice, once from each of two listening positions.

2.1 Stimuli

The stimuli used for this listening test were two excerpts extracted from a Swedish short film named "Istället för Abrakadabra" (Eklund, 2008), provided by Ljudbang AB.

The first video excerpt takes place outside in a backyard, with a father and son discussing a broken garden ornament. They are interrupted by a third actor, the mother, shouting from the upstairs window of the house behind the father. The father and son are positioned on the left and right of the screen, with the perspective changing from over the shoulder of one to the other. The mother is at the far right of the screen. The second video excerpt is situated indoors, in the waiting room of a hospital. The father is featured in the first few seconds of the clip and walks off to the right. The rest of the clip is a back-and-forth conversation between the son and a nurse, again changing the perspective from one shoulder to the other.

Figure 1: Freeze-frame of the first video excerpt. This perspective is the main one, along with the mirrored one from behind the shoulder of the man in the red hat.


Figure 2: Freeze frame of the first video excerpt. This is the third perspective, the mother in the picture being behind the older man from the first and second perspective.

Figure 3: Freeze frame of the second video excerpt. The initial perspective of the scene, with the father soon walking away to the right.


Figure 4: Freeze frame of the second video excerpt. The main perspective of this excerpt, again coupled with a mirrored perspective from over the man's shoulder facing the woman.

Owing to availability, the mix used for testing was not the final mix Ljudbang AB delivered to the customer, but rather a mix created by the author using the same dialogue and foley assets. The atmospheric and effects assets were not provided by Ljudbang; instead they were procured by the author from various sources. A 5.1 mix was created from these assets, and from this mix three different exports were made for each video excerpt: one with the dialogue placed in the center channel, one with the dialogue placed in the middle of the left and right channels (also known as phantom center), and lastly one with the dialogue panned in LCR according to the actors' positions on screen.

Table 1: Visual stimuli and playback option for each excerpt used for testing.

Clip 1: Outside scene (#1), Physical Center
Clip 2: Outside scene (#1), Phantom Center
Clip 3: Outside scene (#1), Panned to Video
Clip 4: Inside scene (#2), Physical Center
Clip 5: Inside scene (#2), Phantom Center
Clip 6: Inside scene (#2), Panned to Video

Each of the six final audio files (Left, Center, Right, Left surround, Right surround and LFE) for each video excerpt was exported at a 48 kHz sample rate and 24-bit depth. They were combined with the corresponding video file and embedded into .mpg files using Premiere Pro CC 2015 and the Minnetonka SurCode Dolby Digital encoder. To counteract the increased level of dialogue in the phantom center playback, due to two channels playing back the dialogue rather than one, the dialogue track was lowered 3 dB, following recommendations from Holman's book Surround Sound (2008), to keep the sound power level equal between clips.


For the panned playback option, levels were manually observed and adjusted to match the center playback in dialogue power level. This turned out to be an attenuation of 3dB in overall level.
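The 3 dB figure follows from power summation: duplicating the same dialogue signal into two speakers doubles the total radiated acoustic power, and 10·log10(2) ≈ 3.01 dB. A small sketch of the arithmetic (illustrative only, not code from the production workflow):

```python
# Sketch: why a phantom-center pair needs roughly -3 dB per channel to
# match the acoustic power of a single center speaker (power summation
# of two equal contributions).
import math

def db(power_ratio):
    """Convert a power ratio to decibels."""
    return 10 * math.log10(power_ratio)

one_speaker = 1.0        # reference acoustic power from the center channel
two_speakers = 2 * 1.0   # same signal duplicated into left and right
excess = db(two_speakers / one_speaker)
print(round(excess, 2))  # power excess of the pair, in dB

# Attenuating each channel by that amount restores equal total power:
per_channel_gain = 10 ** (-excess / 10)   # linear power gain per channel
print(round(2 * per_channel_gain, 2))     # total power back to 1.0
```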

In the interest of determining whether the spectral distribution of audio frequencies differed between video stimuli, a spectral analysis was done for each of the two video excerpts' corresponding audio stems. Since the audio mix should be of equal levels and elements across all stimuli featuring the same video file, only two analyses were made: one for the six mono master stems of video excerpt #1, and one for the master stems of excerpt #2. The spectral distribution plots can be observed below.

Figure 5: Spectral distribution of all six audio channels correlated to movie excerpt 1.
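The analysis itself was done in Audacity (see section 2.2); a comparable long-term average spectrum can be sketched with an FFT, as below. The framing parameters, and the use of a synthetic test tone in place of the actual stems, are assumptions of this sketch.

```python
# Sketch: long-term average spectrum of a mono stem, comparable in
# spirit to Audacity's "Plot Spectrum" analysis.
import numpy as np

def average_spectrum(samples, sample_rate, frame=4096):
    """Average magnitude spectrum (dB) over non-overlapping Hann-windowed frames."""
    window = np.hanning(frame)
    n_frames = len(samples) // frame
    acc = np.zeros(frame // 2 + 1)
    for i in range(n_frames):
        chunk = samples[i * frame:(i + 1) * frame] * window
        acc += np.abs(np.fft.rfft(chunk)) ** 2
    freqs = np.fft.rfftfreq(frame, d=1.0 / sample_rate)
    return freqs, 10 * np.log10(acc / max(n_frames, 1) + 1e-12)

# Demo with a synthetic 1 kHz tone instead of an actual stem:
sr = 48000
t = np.arange(sr) / sr
freqs, spectrum = average_spectrum(np.sin(2 * np.pi * 1000 * t), sr)
print(round(freqs[np.argmax(spectrum)]))  # peak bin lands near 1000 Hz
```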


2.1.1 Stimuli playback manipulation

Each of the two video excerpts was exported in three versions: one with the dialogue in the center channel, one with the dialogue in the phantom center, and lastly one with the dialogue panned according to the position of the corresponding actor on screen. The center channel playback option placed no dialogue content in the left or right channel; only the center channel was used. The phantom center option utilised only the left and right channels, and the panned playback option utilised all three front speakers. This gives six different excerpts.

These playback options were chosen for several reasons. The physical center playback option is taught to engineering students as the "norm" for dialogue playback in surround soundscapes, as well as being the default technique for most mixing engineers working in the film industry (see section 1.2.3 in the Introduction). The phantom center playback is the default approach when mixing in a stereo environment, and was used in this study to provide a clear alternative to physical center channel playback. The last playback option was used to provide an extreme alternative; it is neither anchored in common mixing techniques nor taught in academic literature.
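For the panned option, a mono dialogue source has to be positioned between speakers; the standard way to do this is a constant-power pan law, sketched below. This illustrates the technique in general and is not a claim about which pan law the actual mix's panner applied.

```python
# Sketch: constant-power stereo pan law, the usual way a mono source is
# positioned between two speakers while keeping total acoustic power
# constant at every pan position.
import math

def constant_power_pan(position):
    """position in [-1, 1]: -1 = full left, 0 = center, 1 = full right."""
    angle = (position + 1) * math.pi / 4  # map [-1, 1] onto [0, pi/2]
    left = math.cos(angle)
    right = math.sin(angle)
    return left, right

l, r = constant_power_pan(0.0)
print(round(l ** 2 + r ** 2, 6))  # total power stays 1.0 at any position
```

Note that at center, each speaker gets a gain of cos(45°) ≈ 0.707, i.e. -3 dB, the same figure used to balance the phantom-center export.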

The original idea was to use three different video clips to increase the variation of the stimuli and avoid systematic errors stemming from too few stimuli. Such errors can manifest as learning effects from watching the same stimuli too many times in rapid succession, possibly lowering subjects' sensitivity to differences between the stimuli. Having very few variations of stimuli also decreases the randomisation available for stimuli playback: if one stimulus affects the next in a detrimental way and is played back in the same order every time, the same error would be present in every test procedure, possibly skewing the results.

This was later revised after consultation and trials with colleagues on campus, who voiced concerns about the length of each test trial and the amount of critical listening demanded of untrained subjects. To counteract this problem, the third video excerpt was scrapped, leaving two excerpts for testing.

2.2 Materials

The materials used for producing the stimuli and for conducting the test procedures are listed in table 2 below.

Table 2: List of materials used for construction and execution of listening tests.

Software:
Digital Audio Workstation for producing audio stimuli: Pro Tools HD 10.3.7
Embedding of mono stems into an .ac3 file: Apple Compressor 3
Editing software for video stimuli and embedding: Premiere Pro CC 2015 with Minnetonka SurCode Dolby Digital encoder
Playback software used in testing: VLC Media Player 2.0.8
LEQ(A) metering: Phasescope, Pro Tools 10.3.7
Spectrum analysis: Audacity 2.1.2

Hardware:
Speakers for stimuli playback: Bowers & Wilkins 803
Subwoofer for stimuli playback: Bowers & Wilkins DB1
Speaker amplifier/processor: Classé SSP-800
Sound pressure level meter: Norsonic Nor131
RT60 meter for the listening environment: Norsonic Nor131

2.3 Subjects

For this study, very few restrictions on subjects were made, except for two criteria: firstly, the subjects could not be trained listeners (e.g. sound engineers), and secondly, they were required to have fully functional hearing. These restrictions were made to replicate as accurately as possible the traits of a typical cinema consumer, who is assumed not to have partaken in any critical listening training.

This limitation also favors the type of test being conducted, as the differences between the excerpts are quite large. If trained listeners were included, the data would be skewed in two ways: the subject group would no longer accurately portray the population, and the rating scales would be distorted, as trained listeners would have considerably less difficulty identifying the different playback options.

There were a total of 20 subjects, ranging in age from 20 to 29. The subjects were both male and female, and all were students at LTU Campus Piteå.

2.4 Rating of excerpts

Continuing from the background section (see section 1.2.4, Breaking down audio quality), audio quality of dialogue is too broad a term for subjects to rate, especially untrained listeners. Instead, three of the qualities discussed earlier were chosen for rating in this study, as they portray parts of audio quality related to dialogue and were deemed uncomplicated enough for untrained listeners to properly understand what to listen for.

Table 3: Definitions and motivations for the three rating qualities used in testing.

Depth

Depth has been chosen on the grounds that it is easy enough for untrained listeners to comprehend and properly evaluate, while still being a relevant part of the umbrella term audio quality. For this test, it has been specified as the feeling of the soundscape continuing beyond the physical line of the front speakers, together with the perceived depth of the soundscape as a whole.

Bech & Zacharov (2006), who developed a list of attributes in different playback situations for use in audio evaluation applications, do not list depth, but use the terms sense of space and distance to events, which in combination match this definition of depth to an acceptable degree. Their attributes were not specifically developed for audio quality evaluation in cinema, but for listening tests utilising loudspeakers in general.

Stability

One of the most common arguments for using the center channel for dialogue is that the dialogue content will be physically anchored to the center of the viewing area, whereas a phantom center will skew towards whichever side of the center line the subject is sitting on, owing to the precedence effect, giving the impression that the dialogue originates from the left or the right (Holman, 2008). This is seemingly a vital part of the perceived audio quality of dialogue content, which makes it a good candidate for rating. A lack of stability may not necessarily be a detrimental quality, but for predominantly static video excerpts such as the ones used in this study, a dialogue track that skews to one side may cause a feeling of disorientation when the visuals and the audio do not match up.

This attribute is not anchored in any previous studies of audio quality evaluation in cinema, but was determined and defined in discussion between the author and his supervisors, on the grounds that it carries a great deal of potentially critical information on whether the audio quality of dialogue can be considered adequate for a given playback option.

Clarity

Due to the nature of one playback source (center channel) versus two sources (stereo with phantom center), the frequency response of the acoustic signal will differ between the playback options. This is because of acoustic interference when playing back from two sources, which amplifies some frequencies and attenuates others. The acoustic signal will also differ between playback options due to the different acoustic attenuations and reflections the sound is subjected to when hitting the torso, shoulders and head of the listener (described by HRTFs, or Head-Related Transfer Functions) (Holman, 2008; Rumsey & McCormick, 2009). This might have an audible effect on the dialogue content, and it is therefore a quality to be measured and quantified.

Bech & Zacharov (2006) do list clarity as an attribute related to audio quality evaluation, but they list it under attributes for listening tests conducted with headphones rather than loudspeakers. On the topic of speech quality, Bech & Zacharov list the pair clear vs. muffled, which matches both Holman's use of clarity (2008) and Rumsey & McCormick's (2009).

These qualities were rated on a linear scale of 1-6 for each film excerpt. The scales were not labeled in the questionnaire, but it was verbally explained to the subjects before testing that a 1 represented very little of the quality in question and a 6 represented a lot of it. Labels were omitted so as not to clutter the scales and risk confusing the subjects; it was deemed easier to understand when the scales were explained verbally by the instructor during testing. The 1-6 scale was chosen to force the subjects to rate either above or below the scale's central value (3.5 for this particular scale), and the subjects were only allowed to answer in whole integers (e.g. 2 or 5) to reinforce this. This approach was chosen because the differences in the rated qualities might be quite small for an untrained listener, which could otherwise produce an overrepresentation of indifferent answers.

Given that the subjects and the stimuli were Swedish, the three qualities were translated into Swedish as well. Stability was translated as Stabilitet, a literal translation carrying the same definition in Swedish as in English. Clarity was translated as Tydlighet, a term which in Swedish means the quality of being clear to understand or comprehend; this term is somewhat broader in both languages than the definition used for this study, but the translation preserves the meaning of the English term. Depth was translated as Djup, again a literal translation that conveys the same meaning and definition.

The subjects also received Swedish explanations of the terms on their instruction sheet. These definitions, as stated earlier, were meant to convey the technical definitions in layman's terms, so as to instruct the subjects without overly complicated terminology while keeping the definitions as close as possible to the technical definitions used by the author, in order to get accurate ratings from all subjects. Below is a translation of these definitions as written in the test instructions, with their Swedish counterparts. The full instruction sheet and questionnaire can be found in Appendix B.

Depth

Depth can be observed as the feeling of the soundscape continuing beyond the speakers, creating a "room" of sorts. The contrary of this being that all sound seems to emanate from the line created by the three front speakers. To compare this to a visual media, depth can be likened to the feeling of 3D rather than 2D.

Djup kan betraktas som känslan av att ljudlandskapet fortsätter bortom högtalarna och skapar ett slags "rum", motsatsen är att ljudet upplevs komma från en linje längs med högtalarna. För att likna detta vid bild så kan djup jämföras med känslan av 3D gentemot 2D.

Stability

This quality shows how stable the soundscape seems to be; in this case I want you to focus on the dialogue. A stable dialogue is perceived to come from the actors in question, while an unstable one seems to come from somewhere else.

Denna kvalitet visar hur stabil ljudbilden upplevs vara, i detta fall är det dialogen jag vill att du fokuserar på. En stabil dialog uppfattas komma från skådespelarna i fråga, medan en instabil upplevs komma någon annanstans ifrån.

Clarity

Clarity conveys how clearly and distinctly a sound is perceived to be. In this case I want you to focus on the dialogue: is it perceived as clear and defined, or is there something that is perceived as unclear or "fuzzy"?

Tydlighet berättar hur tydligt och klart ett ljud uppfattas vara. I detta fall vill jag att du fokuserar på dialogen: upplevs den som klar och tydlig, eller är det något som upplevs som otydligt eller "luddigt"?

2.5 Listening Environment

The test procedure took place in the film screening room at Musikhögskolan Piteå designated L158. The room features a 5.1 audio setup and is built to mimic a small movie theater, with 28 seats in four rows. The rows are placed on risers so that each row sits slightly higher than the row in front of it, creating an incline across the listening area. The room configuration includes a vertically "slanted" speaker setup, with the surround speakers suspended higher above the floor than the front speakers to account for the incline of the seats in the listening space.

The listening space of the room is an approximate square with sides measuring 3.5 meters, as seen in figure 6, and the room as a whole measures eight by six meters, with a ceiling height of six meters. The room is acoustically treated to shorten the reverberation time, as well as to minimise external noise leaking in through the walls.


Figure 6: Representation of the listening environment.

The two listening positions used for the test were in the second row: the ideal one (henceforth referred to as the sweet spot) in the middle of the row's seven seats, and the non-ideal one (henceforth referred to as off-center) at the leftmost seat in the row.

Table 4: Distances to each speaker, measured from each listening position.

             Left    Center   Right   Left Surround   Right Surround
Sweet Spot   3m      2.4m     3m      2.8m            2.8m
Off-Center   2.4m    3m       4.2m    1.8m            4m

The test conductor was placed in the upper right corner with a MacBook Pro, from which each stimulus was played back.

2.6 Test procedure

Each subject started the test in one of the two listening positions: the ideal listening position, or the position 1.5 meters to its left known as the less ideal listening position. The starting position was randomised across all twenty subjects so that ten subjects started in the ideal position and the other ten in the less ideal position.

The subjects were then instructed to read a one-page instruction sheet before starting the procedure that briefly described the focus of the test as well as explaining the three qualities to be rated and how the procedure was constructed. These instructions can be found in the appendix.

When the subjects had read the instructions it was clarified how the three qualities were defined to ensure that every subject had approximately the same frame of reference and perspective on the rating scales.

Each subject then got to watch one of the six clips as a test sequence to familiarize themselves with the stimuli, and was given an extra rating sheet if they for some reason wanted to practice that aspect as well.

The test began with showing the subject the first clip, and then having the subject rate this clip according to clarity, depth and stability. After the subject had rated the clip, they were instructed to move to the other listening position, where they were presented with the same clip again. They rated the clip once more from this new listening position, and finally moved back into the original listening position. This procedure was then repeated six times until each subject had watched and rated each clip twice, once from each listening position. The playback order of the stimuli was randomised for each subject.
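The counterbalancing described above can be sketched as follows. This is an illustrative sketch only, not the tool actually used in the study; the function name and structure are hypothetical.

```python
import random

CLIPS = [1, 2, 3, 4, 5, 6]
POSITIONS = ["sweet spot", "off-center"]

def make_session(subject_index, seed=None):
    """Hypothetical sketch of the counterbalancing described above:
    subjects alternate starting position (ten start in each), the
    clip playback order is randomised per subject, and each clip is
    rated twice, once from each listening position."""
    rng = random.Random(seed)
    start = POSITIONS[subject_index % 2]
    other = POSITIONS[1 - subject_index % 2]
    order = rng.sample(CLIPS, len(CLIPS))  # randomised playback order
    # For each clip: rate from the starting position, then move and rate again.
    return [(clip, pos) for clip in order for pos in (start, other)]
```

Each session yields twelve (clip, position) trials, matching the six clips rated twice each.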

The test conductor was in the room at all times, both to control stimuli playback and also to provide assistance and/or explanations should the subject need it during testing.

All stimuli were played back at an approximate level of LEQ(A) = 71 dB, a level chosen by the author through practical trial at the testing location as a comfortable listening level in a silent environment.

3. Results

Below is a compilation of all ratings collected in testing. They are divided into three bar charts, one for each quality. Since no comparison of video stimuli will be made, each stimulus that featured the same listening position and playback option has been compiled into the same frequency distribution. A full compilation of the results of each trial can be found in the appendix.


Compiled rating scores for Depth quality

Figure 7: Graphical representation of the depth rating scores for each stimulus group. Each colour represents one playback option from one listening position. SS indicates Sweet Spot and OC indicates Off-Center.

Compiled rating scores for Stability quality

Figure 8: Graphical representation of the stability rating scores for each stimulus group. Each colour represents one playback option from one listening position. SS indicates Sweet Spot and OC indicates Off-Center.


Compiled rating scores for Clarity quality

Figure 9: Graphical representation of the clarity rating scores for each stimulus group. Each colour represents one playback option from one listening position. SS indicates Sweet Spot and OC indicates Off-Center.

3.1 Analysis

To get some useful information out of all the data collected, a comparison was made between the two listening positions for each quality rating group and playback option. To increase the sample size in this comparison, each data group that had the same playback option and listening position but different video stimuli was combined, as it was deemed irrelevant to compare the two video excerpts with each other; they are quite similar except for the environment they portray.

As the rating scales cannot be guaranteed to be equidistant, no comparison can be made using average rating values. The rating scales do not have a point of reference or a value on the scale representing no occurrence of the given attribute, which defines the data as ordinal.
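The compilation step described here can be reproduced from the appendix data. As a sketch in Python (assuming the ratings are transcribed from Table 11), the depth ratings for Clips 1 and 4 from the sweet spot combine into the CLIP 1+4 SS distribution shown in Table 5:

```python
from collections import Counter

# Depth ratings from the sweet spot for Clip 1 and Clip 4 (Table 11, appendix).
clip1_ss = [2, 5, 1, 2, 1, 3, 4, 5, 3, 3, 4, 4, 4, 3, 4, 3, 5, 6, 4, 2]
clip4_ss = [5, 6, 2, 5, 1, 4, 2, 5, 5, 5, 4, 6, 6, 3, 3, 6, 3, 6, 4, 4]

# Combine the two groups and count how often each rating 1-6 was given;
# this reproduces the CLIP 1+4 SS row of the depth frequency table.
freq = Counter(clip1_ss + clip4_ss)
print([freq[r] for r in range(1, 7)])  # → [3, 5, 8, 10, 8, 6]
```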


Below, each quality rating group for each listening position is compiled into frequency tables depicting the spread of rating values for each quality.

Depth

Table 5: Compiled data for each depth rating. SS indicates Sweet Spot and OC indicates Off-Center listening positions.

Rating         1    2    3    4    5    6
CLIP 1+4 SS    3    5    8   10    8    6
CLIP 1+4 OC    2    4    8   13   10    3
CLIP 2+5 SS    0    4   11    7   15    3
CLIP 2+5 OC    0    5   13   14    5    3
CLIP 3+6 SS    2    1   10   14    9    4
CLIP 3+6 OC    4    8    9   12    3    4


Stability

Table 6: Compiled data for each stability rating. SS indicates Sweet Spot and OC indicates Off-Center listening positions.

Rating         1    2    3    4    5    6
CLIP 1+4 SS    0    5    9   14   11    1
CLIP 1+4 OC    1    6   10   13    8    2
CLIP 2+5 SS    0    4    6    7   14    9
CLIP 2+5 OC    1    4   10   13   11    0
CLIP 3+6 SS    0    3    7   14   10    6
CLIP 3+6 OC    0   10    9   11    8    2


Clarity

Table 7: Compiled data for each clarity rating. SS indicates Sweet Spot and OC indicates Off-Center listening positions.

Rating         1    2    3    4    5    6
CLIP 1+4 SS    0    0    2    7   22    9
CLIP 1+4 OC    0    2    1   16   16    5
CLIP 2+5 SS    0    3    3    8   14   12
CLIP 2+5 OC    0    0    4    9   21    6
CLIP 3+6 SS    0    1    2   12   17    8
CLIP 3+6 OC    1    0    5   12   18    4

When the data had been compiled, a statistical analysis was performed on each pair of rating groups with the same playback option and quality but different listening positions, e.g. CLIP 2+5 SS and CLIP 2+5 OC for the Stability quality. This was done to examine whether the subjects actually rated the clips differently between listening positions, and by extension how much the different playback options influenced these differences in rating scores.

The chosen analysis method for this study was the Wilcoxon Signed-Ranks Test. This test is commonly used to compare two sets of ordinal data and determine whether the differences between them are large enough to be considered not merely the result of random chance, but rather a statistically significant difference. The test computes the difference between each pair of frequency values, ranks the absolute differences from smallest to largest and gives each rank the sign of its difference. The positive and negative ranks are then summed separately, and the smaller of the two sums is the test statistic T, which is converted into a z statistic and compared against the critical z value needed to reject the null hypothesis, H0.

The z statistic is computed as

z = (T − n(n+1)/4) / √( n(n+1)(2n+1) / 24 )

where T is the smaller of the two signed-rank sums and n is the number of ranked differences.

For a two-sided analysis with the significance level α = 0.05, the critical z statistic is z_crit = ±1.96.
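As a rough illustration, the normal approximation of the test can be sketched in Python. This is a generic implementation, not the author's original calculation, and it may not reproduce the exact T values in Tables 8-10, which depend on how ties and zero differences were handled in the original analysis.

```python
from math import sqrt

def wilcoxon_z(x, y):
    """Wilcoxon Signed-Ranks Test (normal approximation) for paired data.

    Returns (T, z): T is the smaller of the positive/negative rank sums,
    and z is compared against z_crit = ±1.96 at alpha = 0.05 (two-sided).
    """
    # Paired differences; zero differences are discarded, as is standard.
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    if n == 0:
        raise ValueError("all paired differences are zero")
    # Rank absolute differences from smallest to largest,
    # assigning tied values the average of the ranks they span.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j):
            ranks[order[k]] = (i + j + 1) / 2.0  # average 1-based rank
        i = j
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
    T = min(w_pos, w_neg)
    mu = n * (n + 1) / 4.0                       # mean of T under H0
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    return T, (T - mu) / sigma
```

For example, `wilcoxon_z([5, 4, 6, 3], [3, 4, 4, 5])` gives T = 2 and z ≈ -0.53, which would not reject H0 at α = 0.05.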

For this analysis, the null hypothesis can be defined as follows:

H0 = There is no difference in rating scores between the ratings for the sweet spot and the ratings for the off-center listening position.

The alternative hypothesis, which will be accepted if the null hypothesis is rejected, can be defined as follows:

H1 = There is a difference in rating scores between the ratings for the sweet spot and the ratings for the off-center listening position.

Below is each analysis with its z statistic.

Depth

Table 8: Statistical values for each Wilcoxon Signed-Rank test for the Depth quality.

           Playback   T-value   Z-statistic   H0 status
Clip 1+4   Center     3         -1.57         Not rejected
Clip 2+5   Phantom    6         -0.94         Not rejected
Clip 3+6   Panned     3         -1.57         Not rejected

Stability

Table 9: Statistical values for each Wilcoxon Signed-Rank test for the Stability quality.

           Playback   T-value   Z-statistic   H0 status
Clip 1+4   Center     10        -0.11         Not rejected
Clip 2+5   Phantom    6         -0.94         Not rejected
Clip 3+6   Panned     3         -1.57         Not rejected

Clarity

Table 10: Statistical values for each Wilcoxon Signed-Rank test for the Clarity quality.

           Playback   T-value   Z-statistic   H0 status
Clip 1+4   Center     3         -1.57         Not rejected
Clip 2+5   Phantom    6         -0.94         Not rejected
Clip 3+6   Panned     6         -0.94         Not rejected


Each of the analyses showed that the differences were not statistically significant at α = 0.05, and that H0 was not rejected. This means that, for this test, the subjects could not be considered to rate the stimuli differently when seated in the off-center listening position rather than the sweet spot. Every playback option proved equally good at preserving the chosen audio qualities when the listening position is non-ideal.


4. Discussion

4.1 Critique of the method

When conducting tests with untrained listeners as opposed to trained counterparts, it is important to adapt the tasks given to the subjects, as well as the terminology and approach, to match the subjects' level of knowledge. The benefit of better imitating the population (in this case, cinema consumers) is exchanged for the lost opportunity of more technically detailed data collection.

This exchange may have been done incorrectly for this study, meaning that the qualities the subjects rated during the test procedure might have been too complicated for them to fully understand and internalize, making the test unreliable since the subjects had differing degrees of understanding of the task at hand. During testing, several subjects communicated orally that they had trouble either defining and understanding the attributes to rate, or that they reacted to other elements in the sound design that had a detrimental effect on the experience and made it harder to focus on the dialogue. Examples of this include comments on uneven atmosphere sounds, background sound effects that seemed to mask the dialogue, and remarks that it did not sound like a finished film. Some subjects also had difficulties understanding the rating scales. One question that arose several times was how to rate correctly when the observed quality was considered too prominent in a given stimulus, indicating a tendency to give the scale positive and negative extreme points rather than rating the amount of the given quality in that stimulus. This could indicate that the task given to the subjects was too complex and better suited to a test with trained subjects, and that their inability to fully understand what to listen for skewed the results toward more insignificant scores.

To counter this, an argument could be made for a preference study instead of attributes to rate, making the listening test easier for untrained listeners to comprehend and carry out. This, however, would mean that the frame of reference for each subject might vary even more because of the inherent vagueness of preference: some subjects might choose the stimulus that sounds more natural to them, while the next subject prefers the other stimulus because it sounds better according to their frame of reference. This would also require the research question to be altered, and was thus not a feasible option when researching this topic in particular.

As for the qualities/attributes chosen to represent the umbrella term audio quality in dialogue, an argument could be made that these three terms do not accurately represent audio quality in dialogue, and thus skew the test towards a given part of the whole definition. Audio quality will inherently, at its core, be a subjective term, and the definition will vary depending on the application and on whom the question is asked. Another way to give subjects a better chance to comprehend the qualities would have been to label the scales with the three qualities at the high end and their respective opposites at the low end. For example, the Clarity scale could have been labeled clear at the high end and muddled at the low end to better guide the critical assessment of stimuli. Bech & Zacharov (2006) have produced attribute lists that are more particular to loudspeaker testing scenarios, as well as lists relating to speech quality evaluation, which could have been useful to follow in a more literal manner. These lists were used as references for the chosen attributes after the actual testing rather than to facilitate well-researched definitions and motivations beforehand, and this was an oversight.

However, certain differences do occur when testing is applied directly to a specific field of audio engineering, such as dialogue audio in a film context. This means that a list such as Bech & Zacharov's might not be useful to follow literally, but rather serves as a grounded point of reference when evaluating which attributes to examine in future studies.

The stimuli themselves were not professionally created and produced, and this might have had a detrimental impact on the listening test and its results: the sound design and mixing of sound elements might not match the expectations for a commercial product, diverting the listener's attention away from the dialogue playback and onto the sub-optimal treatment of the soundscape. This was mostly a question of availability; had the study been redone, it would have been a top priority to acquire a finished consumer product and manipulate that instead, to more accurately depict a real-world listening scenario.

The excerpts chosen for testing were also of a low-intensity nature, both scenes being relatively static, with calm back-and-forth conversation between characters. This put very little strain on the depth and stability attributes, possibly giving them too much inherent prominence in the stimuli from the start and making the playback methods matter less. Had a scene with higher intensity been chosen, perhaps with characters moving around and with loud effects and atmospheric sounds, the visual stimuli would have been more dependent on high stability in order not to disorient the consumer. With a moving scene rather than a static one, the depth quality might have varied to a greater degree between the different playback options, making the values of these attributes span a wider range than they did this time.

The environment this kind of listening situation would occur in is most likely a larger listening location such as a film theater, and while the film screening room at Campus Piteå is built to mimic these conditions, there are still factors that differentiate it from a real theater. Perhaps most significantly, the room is quite small, and this may influence playback through an increase in direct sound relative to reverberant sound, which would dominate in a bigger listening space. The smaller size also means that the distances between the different listening positions are much smaller than in a real-world scenario, and this might lessen the differences in rating scores between sweet spot and off-center simply because the physical distance between the two positions is smaller. The results may thus be affected by both the design of the stimuli and the complexity of the subjects' task during testing.

Although no significant results were observed, some clip comparisons show tendencies. The depth rating differences for clips 1+4 and 3+6 are not far from reaching statistical significance, both with a z statistic of -1.57. Why the depth category showed the biggest differences is hard to tell for certain, but it could well be connected to the subjects' previously mentioned difficulties in understanding the qualities to rate. The same z statistic can be observed in the stability score comparison for clip 3+6 and the clarity score comparison for clip 1+4. From these, albeit statistically insignificant, data, the phantom center playback appears to be the most resilient to the change of listening position. This does not match the textbook claims discussed earlier, nor the studies by Holman (1991, 1996) or Shirley & Kendrick (2005).

The difficulties some of the subjects experienced in comprehending the qualities and rating scales, and in critically assessing the different stimuli during testing, might skew the results more in this study because of the low number of subjects. With a larger pool of subjects the outliers become less and less indicative of the results as a whole, and this may have affected the results somewhat.

Other subjects also communicated that they felt insecure about how to rate a quality that is well represented but in a negative way (e.g. too much depth). This implies that the untrained listeners were unable to rate the stimuli in an objective fashion, or that they were not aware that the scales represented objective amounts of the attributes rather than subjective evaluations of them. This might have had a detrimental effect on the results, as some subjects may have rated on a subjective level without communicating this to the conductor, making the results somewhat unpredictable to analyze. This could probably have been avoided to some degree had the instructions clearly stated that the subjects were to rate how much of a given attribute was present in the given stimulus, and not according to their preference as to whether the attribute was too strongly or too weakly represented.

4.3 Conclusion

The results of this study suggest that untrained listeners do not perceive any difference in the audio quality of dialogue between an ideal and a non-ideal listening position, when audio quality of dialogue is represented by Depth, Clarity and Stability, regardless of dialogue playback option. This suggests either that the design and execution of the listening test was sub-optimally adapted to untrained listeners and too complicated; that the attributes used for rating the stimuli misrepresent the potential differences in audio quality between the two listening positions, making the subjects rate the wrong components of the stimuli; or that untrained listeners either are not aware of, or do not care about, the playback of the dialogue and the potential effects it might have on the listening experience.


The results also show an apparent consensus among subjects on the overall quality of the individual clips, in that the compiled rating data are centered around 4 and 5 on every scale. This may very well be another effect of the subjects' inability or unwillingness to assess each stimulus critically, leading them to rate each stimulus with a high, though not the highest, rating. If this is not the case, it implies that the subjects perceived the qualities as fairly prominent in every stimulus.

No conclusion can be drawn that any of the playback options tested increased or decreased the difference in rating scores; thus the expected result, that the center channel playback option would stabilize the measured qualities when comparing the two listening positions, was not observed in this batch of tests.

4.4 Further Research

If this study were to be used as a stepping stone for a future project, it might be of interest to run a similar test with trained listeners, as they might be better able to comprehend and critically rate the stimuli along the given attributes than their untrained counterparts, enabling a comparison between trained and untrained listeners.

It would also be of interest to investigate whether a different, higher intensity set of film excerpts would affect the results of untrained listeners.

Lastly, the same test could be repeated with either a preference rating or a single quality rating scale, further removing complexity that might reduce the reliability of the results due to the subjects' inability to critically assess audio quality on several levels at once.

Acknowledgements

The author would like to thank Nyssim Lefford and Roger Johnsson for their support and guidance. He also thanks Jonas Ekeroot for his guidance on the matter of statistical analysis. Thank you also to LjudBang AB for sharing their assets for the test stimuli.


5. References

Bech, S., & Zacharov, N. (2006). Perceptual Audio Evaluation. West Sussex: John Wiley & Sons Ltd.

Dykhoff, K. (2002). Ljudbild eller Synvilla? (1st ed.). Malmö: Dramatiska Institutet & Liber AB.

Everest, F. A., & Pohlmann, K. C. (2009). Master Handbook of Acoustics. (5th ed.). New York: McGraw Hill.

Gagge, H. (2014). What Frequency Ranges do Audio Engineers Associate with the Words Thick, Nasal, Sharp and Airy? (Bachelor's Thesis). Luleå Tekniska Universitet, Institutionen för Konst, Kommunikation och Lärande, 941 63 Piteå.

Holman, T. (2008). Surround Sound: Up and Running. (2nd ed.). Abingdon: Focal Press.

Holman, T. (1991). New Factors in Sound for Cinema and Television. Journal of the Audio Engineering Society, Vol. 39(7), pp. 529-539.

Holman, T. (1996). The Number of Audio Channels. (Paper presented at the 100th AES Convention, Copenhagen, 1996).

Kalikow, D. N., Stevens, K. N., & Elliot, L. L. (1977). Developing a test of speech intelligibility in noise using sentence materials with controlled word predictability. The Journal of the Acoustical Society of America, Vol. 61(5).

Kerins, M. (2011). Beyond Dolby (Stereo): Cinema in the Digital Sound Age. Bloomington: Indiana University Press.

Rumsey, F., & McCormick, T. (2009). Sound and Recording. (6th ed.). Oxford: Focal Press.

Shirley, B., & Kendrick, P. (2005). Measurement of speech intelligibility in noise: A comparison of a stereo image source and a central loudspeaker source. (Paper presented at the 118th AES Convention, Barcelona, May 28th-31st, 2005).

Wilcoxon Signed-Ranks Test. (n.d.). Retrieved March 19th, 2016, from StatisticsLectures.com, http://statisticslectures.com/topics/wilcoxonsignedranks/


Appendix A - Demby's original response

Vad gäller centerhögtalaren är väl det lätta svaret att

riktningen (pan) blir tydligare till bilden och sitter ihop mer med bilden.

Dialog behöver nödvändigtvis inte alltid ligga i centern 100% men man ska vara medveten om att i en bio sitter publik överallt i rummet och lyssningspekrtat, vilket kan göra att hörbarhet försämras när man lägger ut framför allt dialog i andra kanaler än centern.


Appendix B - Listening test instructions and rating scales

Hej!

Vad kul att du deltar i detta lyssningstest.

Det här testet undersöker den upplevda ljudkvaliteten hos dialog i film, och det du ska få göra idag är att titta på ett antal kortare klipp som sedan skattas enligt tre kvaliteter, mer om det nedan.

Du kommer att få se 18 stycken filmsekvenser (9st olika, två ggr var)(30- 40s/sekvens), med en paus efter varje sekvens där du får skatta klippet och sedan byta position för att se samma klipp en gång till.

För varje klipp har du tre skalor med numreringarna 1-6 som var och en tillhör en kvalitet som du ska skatta. Dessa kvaliteter är:

Djup

Djup kan betraktas som känslan av att ljudlandskapet fortsätter bortom högtalarna och skapar ett slags "rum", motsatsen är att ljudet upplevs komma från en linje längs med högtalarna. För att likna detta vid bild så kan djup jämföras med känslan av 3D gentemot 2D.

Stabilitet

Denna kvalitet visar hur stabil ljudbilden upplevs vara, i detta fall är det dialogen jag vill att du fokuserar på. En stabil dialog uppfattas komma från skådespelarna i fråga, medan en instabil upplevs komma någon annanstans ifrån.

Tydlighet

Tydlighet berättar hur tydligt och klart ett ljud uppfattas vara. I detta fall vill jag att du fokuserar på dialogen: upplevs den som klar och tydlig, eller är det något som upplevs som otydligt eller "luddigt"?

För varje sekvens ska du alltså betygsätta dessa tre kvaliteter efter ett av de sex värdena på skalan.

Du får i varje paus läsa dessa hjälptexter om du glömmer bort vad kvaliteterna betyder eller om du känner dig osäker. När du är redo går vi vidare till nästa klipp.

När testet är färdigt får du om du vill skriva ned dina övriga tankar i fritext på baksidan av ditt svarshäfte.

När du tittat på ett klipp kommer du att få byta lyssningsposition för att sedan titta på och skatta samma klipp igen.

För att du ska få ett hum om vad det är som behöver göras så börjar vi med en testomgång där du får se endast ett klipp och skatta det.

Lycka till!


Klipp Nr:

Djup

1 2 3 4 5 6

Stabilitet

1 2 3 4 5 6

Tydlighet

1 2 3 4 5 6

______________________________________________________________

Djup

1 2 3 4 5 6

Stabilitet

1 2 3 4 5 6

Tydlighet

1 2 3 4 5 6


Appendix C - Compiled ratings for each trial and clip

Table 11: All ratings for the depth quality, divided into Sweet Spot and Off Center, respectively. 1 represents very little of the quality, and 6 represents a lot of the same quality.

Sweet Spot

TRIAL   Clip 1   Clip 2   Clip 3   Clip 4   Clip 5   Clip 6
1       2        4        5        5        5        5
2       5        5        4        6        3        2
3       1        3        3        2        2        3
4       2        4        3        5        3        5
5       1        5        1        1        5        3
6       3        4        6        4        5        6
7       4        4        4        2        4        4
8       5        4        5        5        5        4
9       3        4        4        5        3        3
10      3        3        4        5        3        3
11      4        3        3        4        5        4
12      4        5        4        6        5        6
13      4        5        4        6        5        4
14      3        3        5        3        5        5
15      4        2        4        3        2        3
16      3        6        6        6        6        4
17      5        3        3        3        5        5
18      6        6        4        6        5        5
19      4        2        1        4        3        3
20      2        5        4        4        3        5

Off-Center

TRIAL   Clip 1   Clip 2   Clip 3   Clip 4   Clip 5   Clip 6
1       4        4        5        5        5        5
2       3        6        3        6        4        1
3       2        2        4        4        2        2
4       3        3        2        4        2        3
5       1        4        1        1        3        2
6       4        3        6        5        5        6
7       4        4        4        3        4        4
8       4        3        4        4        4        2
9       3        4        4        5        3        3
10      4        3        4        5        4        3
11      5        3        4        3        3        4
12      5        4        4        5        5        5
13      4        4        3        3        4        3
14      3        3        6        5        3        4
15      4        4        4        2        2        2
16      3        6        6        5        6        3
17      4        2        1        4        3        2
18      5        5        3        6        4        4
19      2        3        1        2        3        2
20      4        4        2        6        5        3
