
CORTICAL PHASE SYNCHRONISATION MEDIATES

NATURAL FACE-SPEECH PERCEPTION

RINA BLOMBERG

DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE
LINKÖPING UNIVERSITY

COGNITIVE SCIENCE MASTER'S THESIS


Copyright © 2015 Rina Blomberg


ABSTRACT

It is a challenging task for researchers to determine how the brain solves multisensory perception, and the neural mechanisms involved remain subject to theoretical conjecture. According to a hypothesised cortical model for natural audiovisual stimulation, phase-synchronised communications between participating brain regions play a mechanistic role in natural audiovisual perception. The purpose of this study was to test the hypothesis by investigating oscillatory dynamics from ongoing EEG recordings whilst participants passively viewed ecologically realistic face-speech interactions in film. Lagged-phase synchronisation measures were computed for conditions of eyes-closed rest (REST), speech-only (auditory, A), face-only (visual, V) and face-speech (audiovisual, AV) stimulation. Statistical contrasts examined AV > REST, AV > A, AV > V and AV-REST > sum(A,V)-REST effects. Results indicated that cross-communications between the frontal lobes, intraparietal associative areas and primary auditory and occipital cortices are specifically enhanced during natural face-speech perception, and that phase synchronisation mediates the functional exchange of information associated with face-speech processing between both sensory and associative regions in both hemispheres. Furthermore, phase synchronisation between cortical regions was modulated in parallel within multiple frequency bands.


CONTENTS

ABSTRACT

ACKNOWLEDGMENTS

INTRODUCTION
    Theories of Multisensory Processing
    Phase Synchronisation and EEG
    Study Purpose

METHOD
    Participants
    Stimulus and Experimental Procedure
    Data Preprocessing
    Regions of Interest
    Lagged Phase Synchronisation Analysis
    Statistical Analyses

RESULTS
    Group-level Phase Synchronisation Values
    Face-Speech versus Resting-State Activity
    Face-Speech versus Speech
    Face-Speech versus Faces
    Face-Speech versus Sum(Face, Speech)

DISCUSSION

CONCLUSION

REFERENCES

APPENDIX I
APPENDIX II
APPENDIX III
APPENDIX IV


ACKNOWLEDGMENTS

To my supervisors, Dr. Carine Signoret and Prof. Thomas Karlsson, who, throughout my studies, have provided me with invaluable opportunities to learn and experiment with methodologies in cognitive neuroscience: thank you! I hope we one day become colleagues.

To my wonderful husband, thank you for your patient nature, your genuine interest in all things puzzling and your genius brain that has challenged me since the day we first met. I wouldn't have arrived at these crossroads without you.


INTRODUCTION

A critical goal for understanding the link between cognition and perception involves understanding the way in which a single contextual source of environmental information, entering the brain simultaneously through different sensory channels, is processed. With seemingly little effort, the brain evaluates multisensory information and can rapidly determine, to an impressive degree of success, whether the incoming signals are incongruently associated and the information should be processed separately, whether the information is redundant and can be considered noise, or whether the separate signals are congruently meaningful and should be integrated into a coherent percept (Senkowski, Schneider, Foxe, & Engel, 2008). These complex processing decisions are often computed so implicitly that it is a challenging task for researchers to determine how the brain solves multisensory perception, and the neural mechanisms involved remain subject to theoretical conjecture and experimental investigation.

Theories of Multisensory Processing

Multisensory perception is a young yet rapidly expanding field of scientific investigation. Historically, sensory perception was studied from a unimodal perspective because traditional theories of multisensory perception conceptualised processing through a feedforward, hierarchical system, in which pathways from sensory-specific sites converged progressively into "specialised" heteromodal regions such as the prefrontal cortex and the inferior parietal lobule (Alais, Newell, & Mamassian, 2010; Foxe & Schroeder, 2005). From this perspective, multisensory perception was assumed to arise only once processed unisensory signals merged together in heteromodal regions. Neurons in primary sensory cortices were thus considered strictly unimodal, whereas neurons in heteromodal regions were hypothesised to have multisensory receptive fields (Meredith, 2002; Stein & Meredith, 1993).

Recently, this traditional model of multisensory perception has been challenged by a competing view that argues for the involvement of synchronised oscillatory feedback communications across widely distributed brain regions, which can influence early unisensory processing. When two neural regions are synchronised, the membranes of the underlying populations of neurons phase into a mutually optimal state of excitability, which in turn enables the efficient reception and projection of net information between regions (Bauer, 2008; Fries, 2005). Transient phase synchronisation of oscillatory signals is therefore interpreted as "functional connectivity" between brain regions. Because spatially distributed neuronal populations can be effectively modulated by phase synchronisation, some theorists argue that it is the mechanism through which sensory information is integrated into a coherent percept (Varela, Lachaux, Rodriguez, & Martinerie, 2001).

An interesting consequence of this theoretical perspective is that brain regions traditionally considered to be exclusively unisensory may only be so with respect to the afferent projections they receive (Driver & Spence, 2000; Foxe & Schroeder, 2005). Through phase synchronisation, spatially distant cortical regions are able to project meaningful information into sensory regions and influence processing of the unisensory signal. Indeed, neuroanatomical studies in non-human primates indicate that reciprocal connections from non-sensory cortices to specialised sensory sites have the ability to influence early stages of sensory processing (Barone, 2005). Thus unisensory regions may have the physiological means to receive and integrate perceptually congruent information from other unisensory sites together with contextually relevant cognitive expectations (generative models) from memory (Altieri, 2014; Engel, Fries, & Singer, 2001). Traditional convergence theories of sensory processing are therefore argued to offer an incomplete account of multisensory perception vis-à-vis the influence that descending signals and projections from other primary sensory regions have upon processing within a primary area.

Phase Synchronisation and EEG

A variety of EEG (electroencephalographic) studies have shown that multisensory interactions can elicit changes in oscillatory responses associated with task-specific perceptual, sensorimotor and cognitive processes (Başar, Başar-Eroglu, Karakaş, & Schürmann, 2001; Fries, 2005; Güntekin & Başar, 2014; Senkowski et al., 2008). EEG oscillations are a direct measure of variations in cyclical activity from stimulated populations of neurons. Active populations oscillate at frequencies which represent different spatial scales of brain function (Lakatos et al., 2005) and by convention have been categorised into five generic frequency bands: delta (0.5-3 Hz), theta (4-7 Hz), alpha (8-12 Hz), beta (13-30 Hz) and gamma (31-80 Hz). In electrophysiological studies investigating multisensory interactions, phase-synchronised oscillations have been observed in all frequency bands (Sakowitz, Quiroga, Schürmann, & Başar, 2005) and are thought to reflect functional connectivity within and across the underlying active populations of neurons (Fries, 2005; Senkowski et al., 2008; Womelsdorf et al., 2007). The shift in theoretical perspective from convergence theories to integration-through-phase-synchronisation has thus been particularly supported by investigations into the functional underpinnings of the oscillatory EEG signal, together with observations that multisensory interactions educe condition-specific changes in oscillatory responses (see Senkowski et al., 2008, and Engel et al., 2012, for reviews). Phase synchronisation is therefore hypothesised to play a mechanistic role in multisensory perception (Fries, 2005; Senkowski et al., 2008; Varela et al., 2001; Womelsdorf et al., 2007).

In EEG studies, the terms phase coherence and phase synchronisation are sometimes used interchangeably because qualitatively both are conceptualised as measures of "functional connectivity" (Pascual-Marqui, 2007); they are, however, quantitatively different measures. Phase coherence is a linear measure of the consistency between the phases of two oscillatory signals over time. Phase synchronisation is a non-linear case of phase coherence, in which the two oscillatory signals share identical phase angles in each cycle (Senkowski et al., 2007, 2008). Phase synchronisation measures have an advantage over coherence measures in that the phase angle values are derived in isolation from amplitude components for a given frequency (Poil, 2014). Phase coherence, on the other hand, is weighted by power values (µV²); coherence analysis can therefore be strongly influenced by large increases or decreases in power, which may produce biased results (Cohen, 2014).

Another advantage of phase synchronisation over coherence in EEG studies is that neural oscillators in the brain do not cease oscillating when coupling is absent; rather, the two signals continue to oscillate independently of each other. These are ideal conditions for conducting phase synchronisation analysis, because even for weak coupling and in the presence of noise, phase synchronisation can reliably detect coupled interactions (Orn, Winterhalder, Timmer, & Urgen, 2007). Generally, though, phase coherence and phase synchronisation produce interpretably similar results (Cohen, 2014).
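To make the amplitude-independence concrete, the following minimal sketch (illustrative, not the thesis code; all signal parameters are assumptions) estimates a phase-locking value between two synthetic 10 Hz signals whose amplitudes differ fivefold. Because only phase angles enter the measure, the power imbalance that would bias an amplitude-weighted coherence estimate has no effect here.

```python
# Minimal phase-locking sketch: phase angles only, no amplitude weighting.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs, n_trials = 250, 50                      # 1 s epochs at 250 Hz (as in the thesis)
t = np.arange(fs) / fs

plv_sum = 0j
for _ in range(n_trials):
    phi0 = rng.uniform(0, 2 * np.pi)        # random starting phase per trial
    x = 1.0 * np.cos(2 * np.pi * 10 * t + phi0) + 0.1 * rng.standard_normal(fs)
    y = 5.0 * np.cos(2 * np.pi * 10 * t + phi0 - 0.3) + 0.5 * rng.standard_normal(fs)
    # instantaneous phase difference via the analytic (Hilbert) signal
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    plv_sum += np.exp(1j * dphi).mean()     # mean phasor over the epoch

plv = abs(plv_sum / n_trials)
print(f"PLV = {plv:.2f}")                   # near 1 despite the 5x amplitude difference
```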

Phase coherence is the method used in most of the studies cited in this report, but owing to the aforementioned advantages, the research conducted here adopted lagged-phase synchronisation measures of functional connectivity (see METHOD section) as the preferred approach. When the term coherence is used with respect to reported findings, it refers to the general notion of connectivity between two neural regions.

Study Purpose

In a review of the field, Senkowski et al. (2008; see also Engel et al., 2012) proposed a cortical model of multisensory processing during perception of naturally occurring audiovisual interactions (Figure 1). Their model hypothesises that under complex scenarios of audiovisual stimulation, multisensory perception is mediated by phase-coherent oscillatory signals engaging reentrant communications between the prefrontal cortex, higher-order heteromodal associative areas and the primary sensory cortices. Evoked changes in phase synchrony within the primary occipital and auditory cortices have the potential to induce enhanced oscillatory activity in the heteromodal intraparietal associative area, forming an efficient exchange of bottom-up and top-down interactions. In addition, oscillatory coupling with prefrontal cortical regions might provide a valuable cognitive influence upon processing within sensory and associative areas (see Burgess, Simons, Dumontheil, & Gilbert, 2005, for a similar perspective).


Figure 1. Cortical model of complex audiovisual perception proposed by Senkowski et al. (2008). Phase-synchronised oscillatory signals engage in interconnected reentrant communications between the frontal cortex (blue), primary sensory cortices (auditory, red; visual, purple), and the heteromodal intraparietal associative area (yellow). The model is depicted on the left hemisphere for simplicity but also includes interconnections between the same regions in the right hemisphere and across hemispheres. Cortical image retrieved from sLORETA (Pascual-Marqui, 2002) imaging software.

EEG paradigms investigating multisensory perception typically employ highly controlled, often artificial stimuli (e.g. avatars) in event-related designs, which enable the researcher to manipulate the congruency and/or perceptual clarity of modal inputs. Participants are typically subjected to large numbers of trials across repeated conditions in an effort to increase the signal-to-noise ratio after trial averaging, allowing the researcher to observe the neural correlates of a hypothesised evoked brain state. Very few EEG studies have investigated multisensory processing under more ecological conditions, thus the model suggested by Senkowski et al. (2008; see also Engel et al., 2012) remains to be extensively tested.

The purpose of the current study was to test Senkowski et al.'s (2008) hypothesis under ecologically realistic conditions. To this end, source-localised oscillatory dynamics from EEG recordings were investigated whilst participants passively viewed an 8-minute audiovisual extract from the Swedish short film Mitt liv som en trailer (Öhman, 2009). The film was chosen because it included numerous scenes in which conversing characters looked directly into the lens of the camera (i.e. scenes with full frontal face-speech stimulation). In everyday social encounters, we readily perceive concurrent facial expressions together with speech, and the unfolding conversations depicted in this film closely resemble the kinds of face-speech experiences we often encounter in our natural, daily lives.

If, as the model suggests, naturally occurring audiovisual perception is mediated by phase-synchronised oscillatory signals and involves reentrant communications between the frontal cortex, the intraparietal associative areas and the primary auditory and occipital cortices, then increased functional connectivity in this network (as determined by phase synchronisation) should be observed in the source-reconstructed EEG when participants view complex, storyline-related face-speech interactions in the film, compared to conditions of unisensory (face or speech only) and suppressed-sensory (silent, eyes-closed) stimulation. EEG investigations that have explored oscillatory dynamics during passive auditory, visual and audiovisual stimulation have reported evoked oscillatory responses across a diverse range of frequencies and electrode sites (Sakowitz, Quiroga, Schürmann, & Başar, 2001; Sakowitz et al., 2005; Sakowitz, Schürmann, & Başar, 2000; von Stein, Rappelsberger, Sarnthein, & Petsche, 1999). Based on these findings, it is reasonable to postulate that face-speech stimulation in this study will involve synchronous oscillations in multiple frequency bands between interacting brain regions within the network, rather than being associated with any specific frequency band (see Engel et al., 2012, for a similar hypothesis).

Because complex and ongoing sensory stimulation tends to evoke highly transient state changes in the EEG signal that lack an easily observable structure at the level of group analysis, this study adopts an intra-participant design and compares differences in lagged-phase synchronisation within classical frequency bands between face-speech perception and conditions of eyes-closed resting-state activity, speech-only and face-only perception. Intra-participant designs have the advantage over traditional within-participant designs in that they provide the researcher with additional information that would not be readily observable if the data were collapsed to a within-participant mean (Pernet, Sajda, & Rousselet, 2011). A particular advantage in this study is that participant-specific functional connectivity can be systematically mapped to changes in stimulus conditions, allowing within-participant variances to be qualitatively examined.

METHOD

Participants

Thirteen right-handed, native Swedish-speaking adults (8 female; Mage = 27.4 ± 8 years) volunteered for the study. All had normal or corrected-to-normal vision and no reported hearing problems.

Stimulus and Experimental Procedure

The selected stimulus for this study was an unedited, eight-minute extract from the Swedish short film Mitt liv som en trailer (Öhman, 2009). This particular film was chosen because the exchange of dialogue between characters was portrayed such that the characters were looking directly into the camera lens when talking. This enabled the extraction of 1 s epochs of EEG activity time-locked to the first frame of each character's (1 female, 2 male characters) line of dialogue when looking directly into the lens (Figure 2). None of the participants reported having seen the film prior to the experiment.

Figure 2. The ongoing EEG was partitioned into 1 s epochs time-locked to the first frame of each character’s

line of dialogue when looking directly into the lens. The diagram shows a recorded neural response (bandpass 1-45 Hz) from one participant at the TP9 electrode and corresponding screen-shots of character dialogue are taken at 200 ms intervals over the duration of the epoch.
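As an illustration of this epoching step, the sketch below cuts 1 s windows from a continuous multichannel recording at sample indices derived from annotated film frames. The array shapes, the 25 fps frame rate and the `onset_frames` values are hypothetical stand-ins; the thesis performed this step on the recorded EEG with its own annotations.

```python
# Hedged sketch: 1 s epochs time-locked to dialogue-onset film frames.
import numpy as np

fs = 250                                   # EEG sampling rate (Hz), as in the thesis
frame_rate = 25                            # assumed film frame rate
eeg = np.random.randn(129, fs * 480)       # 129 channels x 8 min, placeholder data
onset_frames = [312, 1480, 2990]           # hypothetical dialogue-onset frames

epochs = []
for f in onset_frames:
    start = int(round(f / frame_rate * fs))      # frame index -> EEG sample index
    if start + fs <= eeg.shape[1]:
        epochs.append(eeg[:, start:start + fs])  # 1 s epoch from the onset frame
epochs = np.stack(epochs)                  # (n_epochs, 129 channels, 250 samples)
print(epochs.shape)
```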

The continuous cortical activity was recorded using a 129-channel net montage (Electrical Geodesics, Inc., 2013; impedances < 50 kΩ; vertex reference; 250 Hz sampling rate; online bandpass filter 0.1–200 Hz) inside a shielded, sound-proof room. Participants were requested to limit eye and head movements as much as possible. E-Prime software (Psychology Software Tools, Inc., 2012, v. 2.0.8.73) was used to present the stimulus on a 15-inch monitor from which participants sat circa 60 cm. Sound was distributed via two (left, right) loudspeakers positioned either side of the monitor. The volume was kept at a constant level for all subjects (the loudest effect was music: 70-80 dB).

Participants viewed the eight-minute film clip two times. Prior to, between and after each viewing, three minutes (in total) of silent, eyes-closed resting (REST) activity was recorded (Figure 3). In order to compare synchronised neural responses across face-speech (AV), face-only (V) and speech-only (A) conditions, the first eight-minute film viewing included four 45 s (± 4.6 s) segments in which only audio (i.e. a black, blank screen in view) or video (muted sound) was perceived.

Figure 3. A one-minute eyes-closed resting period was recorded prior to, between and after the film viewing sessions. The first eight-minute film session included four segments with only audio (light orange shading) or video (light green shading). 1 s epochs corresponding to scenes of speech without accompanying video (A: orange vertical line), faces without accompanying audio (V: green vertical line) and full face-speech stimulation (AV: blue vertical line) respectively were extracted from the film clip.

The duration of the audio-only and video-only segments varied so as not to disrupt the general flow of the film; hence changes in stimulus conditions were made at points of explicit scene transitions within the film. In the second eight-minute viewing session, participants watched the entire clip with full audiovisual stimulation¹.

¹ The reason for this particular experimental design is that the recorded neural responses were also investigated in a between-groups study which examined the effects of multisensory representations during unisensory stimulation; see Blomberg (2013) for more details.

Data Preprocessing

Preprocessing procedures used the EEGLAB MATLAB toolbox. Continuous EEG was band-pass filtered (1–45 Hz) to remove nonphysiological artifacts (e.g. slow signal drift, alternating current and screen refresh rate) and re-referenced to the average reference. Independent component analysis was then applied to detect and remove ocular and muscular artifacts. Bad channels were removed and interpolated using EEGLAB's automated algorithm (kurtosis, ± 5 z-score threshold). The data were then partitioned into 1 s epochs corresponding to REST (n = 135; the first and last 7.5 s of the 60 s REST periods were discarded), AV (n = 50), A (n = 21) and V (n = 23) conditions (Figure 3). AV epochs were time-locked to the first frame of each character's line of dialogue (Figure 2).
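The thesis carried out these steps in the EEGLAB MATLAB toolbox; the sketch below reproduces the same pipeline shape in MNE-Python purely as an analogue, with random data standing in for the 129-channel recording and the component count chosen arbitrarily.

```python
# Analogous preprocessing pipeline (MNE-Python stand-in for the EEGLAB steps).
import numpy as np
import mne

fs = 250
info = mne.create_info([f"E{i}" for i in range(1, 130)], sfreq=fs, ch_types="eeg")
raw = mne.io.RawArray(np.random.randn(129, fs * 60) * 1e-5, info)

raw.filter(l_freq=1.0, h_freq=45.0)      # band-pass 1-45 Hz, as in the thesis
raw.set_eeg_reference("average")         # re-reference to the common average

# ICA decomposition; in practice ocular/muscular components would be
# identified and excluded before applying the solution back to the data
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
raw = ica.apply(raw)
```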

Regions of Interest

Applying exact low-resolution electromagnetic tomography (eLORETA; Pascual-Marqui, 2007a) to the scalp EEG, phase synchronisation analysis was performed on intracerebral electrical sources of interest. eLORETA is a weighted minimum-norm inverse solution with exact localisation (although closely proximal neural sources are highly correlated), providing the researcher with a 3D reconstruction of the EEG current source density (CSD). To test the model (Figure 1) proposed by Senkowski et al. (2008), cortical regions of interest (RoIs) were selected using Brodmann area (BA) anatomical definitions provided by the eLORETA software package (Pascual-Marqui, 2007a; http://www.uzh.ch/keyinst/loreta.htm). Due to the low spatial resolution of eLORETA, only single-voxel centroids of BAs corresponding to the rostral superior frontal gyrus (SFG), the lingual gyrus (LG) of the primary occipital cortex, the inferior parietal lobule (IPL) and the transverse temporal gyrus (TTG) of the primary auditory cortex were generated, as per Senkowski et al.'s (2008) hypothesis. The procedure resulted in eight RoI seed points (Table 1).

Table 1.

A total of eight single-voxel RoIs (MNI coordinates) were generated using the eLORETA software package.

X-MNI  Y-MNI  Z-MNI  Lobe                  Structure                   BA
  20     50      0   Right Frontal Lobe    Superior Frontal Gyrus      10
 -20     50      0   Left Frontal Lobe     Superior Frontal Gyrus      10
  15    -85      0   Right Occipital Lobe  Lingual Gyrus               17
 -15    -85      0   Left Occipital Lobe   Lingual Gyrus               17
  45    -50     40   Right Parietal Lobe   Inferior Parietal Lobule    40
 -45    -50     40   Left Parietal Lobe    Inferior Parietal Lobule    40
  55    -25     10   Right Temporal Lobe   Transverse Temporal Gyrus   41
 -55    -25     10   Left Temporal Lobe    Transverse Temporal Gyrus   41

Lagged Phase Synchronisation Analysis

Phase synchronisation measures are susceptible to contamination by instantaneous, nonphysiological contributions due to volume conduction and low spatial resolution. To overcome these problems, eLORETA estimates lagged phase synchronisation between pairs of cortical RoIs with the exclusion of the instantaneous contribution (Pascual-Marqui, 2007b; Thatcher, North, & Biver, 2007). Lagged phase synchronisation is a nonlinear measure of functional connectivity between cortical RoIs and is calculated in the frequency domain using normalised Fourier transforms. The resulting measure ranges from zero coherence (no synchronisation) to a coherence of one (perfect synchronisation) (Pascual-Marqui, 2007a).
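A minimal numerical sketch of the idea, assuming the lagged form given in Pascual-Marqui (2007b): Fourier coefficients are normalised to unit modulus (so only phase survives), averaged across trials within a band, and the instantaneous (real) part of the resulting coherency is partialled out of the denominator. Trial arrays and the band choice are illustrative; the thesis used the eLORETA implementation rather than this code.

```python
# Sketch of lagged phase synchronisation between two source time series.
import numpy as np

def lagged_phase_sync(x_trials, y_trials, fs, band):
    """x_trials, y_trials: (n_trials, n_samples) arrays; band: (lo, hi) in Hz."""
    n = x_trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    fx = np.fft.rfft(x_trials)[:, sel]
    fy = np.fft.rfft(y_trials)[:, sel]
    fx /= np.abs(fx)                  # unit modulus: phase only, no amplitude
    fy /= np.abs(fy)
    c = (fx * np.conj(fy)).mean()     # complex coherency of the normalised data
    # partial out the instantaneous (real, zero-lag) contribution
    return c.imag**2 / (1.0 - c.real**2)

rng = np.random.default_rng(1)
x = rng.standard_normal((21, 250))    # e.g. 21 one-second trials at 250 Hz
y = rng.standard_normal((21, 250))
print(lagged_phase_sync(x, y, fs=250, band=(8, 12)))   # alpha band
```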

For dimension-reduction purposes, phase measures between bilateral RoIs (e.g. right-SFG versus left-SFG) were ignored in this study. Because evoked oscillatory responses to audiovisual stimulation have been observed in all frequency bands (Sakowitz et al., 2000, 2001, 2005), phase synchronisation measures were computed for the classical frequency bands: theta (4-7 Hz), alpha (8-12 Hz), beta1 (13-18 Hz), beta2 (19-21 Hz), beta3 (22-30 Hz) and gamma (31-45 Hz). The delta frequency range (< 4 Hz) was not investigated because, with 1 s epochs, frequencies below 3 Hz provide fewer than three cycles per epoch, which negatively affects the signal-to-noise ratio of the single trials (Cohen, 2014).

Because calculations were performed over relatively narrow frequency bands for EEG segments that were only 1 s long, measures of nonlinear dependence for the AV, A and V conditions were computed for pairs of single trials. This ensured that the intra-participant measures were mathematically well defined (Pascual-Marqui, 2007). Due to the high number of available trials from the REST condition (n = 135), phase measures for REST were computed in groups of five trials rather than pairs.

To calculate the statistical interaction AV-REST > sum(A,V)-REST, 21 single trials from the A and V conditions were first linearly summed in the time domain before lagged phase synchronisation values were computed for pairs of trials between RoIs (Senkowski et al., 2007) using the LORETA-KEY utilities module (Pascual-Marqui, 2015). Phase synchronisation values for the REST condition were then subtracted from the phase synchronisation measures for the AV and sum(A,V) conditions in the LORETA-KEY statistical software package (Pascual-Marqui, 2015).
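The sketch below illustrates the construction of this interaction for one RoI pair, reusing a compact restatement of the lagged-phase-synchronisation function from the previous sketch. All arrays are random placeholders; the thesis computed these quantities in the LORETA-KEY utilities on source time series, not with this code.

```python
# Sketch of the AV-REST > sum(A,V)-REST interaction for one RoI pair.
import numpy as np

def lps(x, y, fs=250, band=(4, 7)):
    """Compact lagged phase synchronisation (see previous sketch)."""
    f = np.fft.rfftfreq(x.shape[1], 1 / fs)
    s = (f >= band[0]) & (f <= band[1])
    fx, fy = np.fft.rfft(x)[:, s], np.fft.rfft(y)[:, s]
    c = ((fx / np.abs(fx)) * np.conj(fy / np.abs(fy))).mean()
    return c.imag**2 / (1 - c.real**2)

rng = np.random.default_rng(2)
roi1 = {k: rng.standard_normal((21, 250)) for k in ("A", "V", "AV", "REST")}
roi2 = {k: rng.standard_normal((21, 250)) for k in ("A", "V", "AV", "REST")}

# A and V single trials are summed linearly in the time domain first
sum1, sum2 = roi1["A"] + roi1["V"], roi2["A"] + roi2["V"]

rest = lps(roi1["REST"], roi2["REST"])           # common baseline
interaction = (lps(roi1["AV"], roi2["AV"]) - rest) - (lps(sum1, sum2) - rest)
print(f"AV-REST vs sum(A,V)-REST interaction: {interaction:+.3f}")
```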

Statistical Analyses

Statistical nonparametric mapping (Nichols & Holmes, 2002) was used to identify intra-participant differences in lagged-phase synchronisation for the AV > REST, AV > A, AV > V and AV-REST > sum(A,V)-REST comparisons within each frequency band. To correct for multiple comparisons across all RoIs and frequencies, a total of 5000 random permutations were used to calculate the critical probability threshold (α = .01, one-tailed) for log F-ratio values under the null hypothesis of zero coherence between conditions. Lagged-phase synchronisation measures were not explored statistically at the group level because complex and ongoing sensory stimulation tends to evoke highly transient state changes in the EEG signal; however, observations regarding the number of participants showing significant effects and group-level coherence values between RoIs are reported.
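A toy sketch of the permutation logic (the maximum-statistic variant of Nichols & Holmes, 2002): condition labels are shuffled, and the maximum statistic over every RoI pair and band builds a family-wise-corrected null distribution. For simplicity a mean difference replaces the log F-ratio used in the thesis, and all data are random placeholders.

```python
# Max-statistic permutation test across RoI pairs x frequency bands.
import numpy as np

rng = np.random.default_rng(3)
n_pairs, n_bands, n_trials = 24, 6, 21
cond_a = rng.standard_normal((n_pairs, n_bands, n_trials))        # e.g. AV values
cond_b = rng.standard_normal((n_pairs, n_bands, n_trials)) - 0.5  # e.g. REST values

def stat(a, b):
    return a.mean(-1) - b.mean(-1)        # simple difference statistic (illustrative)

observed = stat(cond_a, cond_b)
pooled = np.concatenate([cond_a, cond_b], axis=-1)

max_null = np.empty(5000)
for i in range(5000):
    perm = rng.permutation(pooled.shape[-1])          # shuffle condition labels
    pa, pb = pooled[..., perm[:n_trials]], pooled[..., perm[n_trials:]]
    max_null[i] = stat(pa, pb).max()      # max over all pairs/bands -> FWE control

threshold = np.quantile(max_null, 0.99)   # alpha = .01, one-tailed
print((observed > threshold).sum(), "significant pair x band cells")
```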

RESULTS

Group-level Phase Synchronisation Values

Group-averaged lagged phase synchronisation measures (averages across trials and participants) are reported in Table 2. At the group level, increases in phase synchronisation (albeit not always surmounting the significance threshold) between RoI couplings in the network were more frequently observed during multisensory stimulation than during unisensory and suppressed-sensory (REST) stimulation.

Table 2.

Group-averaged lagged phase synchronisation measures. Measures range from zero coherence (an absence of phase synchronisation) to a coherence of one (perfect phase synchronisation). Each cell gives the values for the REST, A, V and AV conditions, in that order.

BAND   HEMISPHERE   LG-SFG                 IPL-SFG                IPL-LG
THETA  LEFT         0.48 0.71 0.69 0.88    0.47 0.68 0.66 0.86    0.57 0.79 0.79 0.92
       RIGHT        0.51 0.74 0.73 0.90    0.53 0.72 0.73 0.90    0.59 0.80 0.79 0.93
       LEFT-RIGHT   0.47 0.67 0.70 0.89    0.44 0.67 0.67 0.86    0.45 0.69 0.69 0.88
       RIGHT-LEFT   0.46 0.68 0.68 0.87    0.45 0.67 0.66 0.87    0.49 0.70 0.69 0.88
ALPHA  LEFT         0.41 0.60 0.61 0.78    0.41 0.61 0.61 0.79    0.49 0.69 0.68 0.84
       RIGHT        0.44 0.64 0.69 0.80    0.46 0.61 0.64 0.81    0.52 0.71 0.73 0.85
       LEFT-RIGHT   0.40 0.60 0.61 0.78    0.38 0.58 0.57 0.75    0.41 0.58 0.57 0.78
       RIGHT-LEFT   0.38 0.60 0.61 0.77    0.39 0.56 0.58 0.76    0.42 0.61 0.62 0.79
BETA1  LEFT         0.36 0.52 0.53 0.70    0.35 0.53 0.52 0.70    0.44 0.61 0.61 0.79
       RIGHT        0.38 0.53 0.57 0.73    0.38 0.57 0.55 0.72    0.47 0.63 0.66 0.79
       LEFT-RIGHT   0.33 0.50 0.51 0.68    0.32 0.46 0.50 0.67    0.32 0.52 0.50 0.68
       RIGHT-LEFT   0.33 0.53 0.49 0.68    0.32 0.50 0.52 0.68    0.34 0.54 0.55 0.70
BETA2  LEFT         0.47 0.71 0.71 0.88    0.48 0.72 0.69 0.87    0.57 0.79 0.78 0.92
       RIGHT        0.50 0.70 0.70 0.88    0.52 0.74 0.76 0.89    0.60 0.77 0.79 0.93
       LEFT-RIGHT   0.46 0.67 0.67 0.87    0.43 0.68 0.69 0.86    0.46 0.71 0.67 0.87
       RIGHT-LEFT   0.46 0.69 0.68 0.87    0.43 0.67 0.66 0.86    0.47 0.71 0.70 0.88
BETA3  LEFT         0.23 0.35 0.36 0.50    0.23 0.35 0.33 0.52    0.31 0.46 0.49 0.62
       RIGHT        0.25 0.37 0.39 0.52    0.16 0.24 0.25 0.34    0.34 0.50 0.51 0.64
       LEFT-RIGHT   0.23 0.33 0.39 0.49    0.21 0.32 0.34 0.49    0.22 0.34 0.34 0.49
       RIGHT-LEFT   0.22 0.31 0.33 0.48    0.21 0.32 0.34 0.49    0.24 0.37 0.38 0.52
GAMMA  LEFT         0.14 0.22 0.24 0.33    0.13 0.22 0.22 0.33    0.20 0.30 0.30 0.42
       RIGHT        0.16 0.25 0.25 0.33    0.53 0.72 0.73 0.90    0.23 0.34 0.33 0.45
       LEFT-RIGHT   0.14 0.23 0.23 0.31    0.13 0.20 0.19 0.30    0.13 0.23 0.20 0.29
       RIGHT-LEFT   0.13 0.21 0.21 0.30    0.13 0.21 0.20 0.30    0.15 0.24 0.23 0.33

BAND   HEMISPHERE   TTG-SFG                TTG-LG                 TTG-IPL
THETA  LEFT         0.48 0.73 0.70 0.88    0.57 0.78 0.77 0.92    0.56 0.77 0.78 0.92
       RIGHT        0.51 0.74 0.75 0.90    0.54 0.77 0.79 0.92    0.60 0.80 0.82 0.93
       LEFT-RIGHT   0.48 0.68 0.70 0.88    0.49 0.71 0.71 0.89    0.49 0.70 0.69 0.90
       RIGHT-LEFT   0.47 0.69 0.68 0.88    0.51 0.73 0.73 0.90    0.48 0.72 0.72 0.87
ALPHA  LEFT         0.41 0.61 0.62 0.78    0.48 0.68 0.67 0.83    0.51 0.71 0.69 0.86
       RIGHT        0.45 0.63 0.66 0.81    0.49 0.68 0.68 0.83    0.54 0.72 0.73 0.86
       LEFT-RIGHT   0.40 0.59 0.60 0.78    0.43 0.59 0.60 0.79    0.43 0.61 0.63 0.80
       RIGHT-LEFT   0.40 0.60 0.60 0.78    0.43 0.63 0.63 0.82    0.43 0.61 0.59 0.80
BETA1  LEFT         0.35 0.50 0.52 0.71    0.43 0.59 0.60 0.77    0.44 0.63 0.59 0.79
       RIGHT        0.37 0.55 0.55 0.72    0.42 0.62 0.59 0.75    0.48 0.64 0.68 0.81
       LEFT-RIGHT   0.34 0.52 0.52 0.69    0.36 0.54 0.51 0.69    0.36 0.55 0.56 0.72
       RIGHT-LEFT   0.35 0.52 0.51 0.69    0.37 0.54 0.54 0.72    0.35 0.51 0.52 0.71
BETA2  LEFT         0.48 0.69 0.71 0.88    0.54 0.78 0.77 0.92    0.56 0.82 0.78 0.92
       RIGHT        0.50 0.72 0.71 0.90    0.55 0.76 0.74 0.92    0.62 0.78 0.80 0.94
       LEFT-RIGHT   0.46 0.67 0.69 0.88    0.47 0.72 0.70 0.88    0.48 0.70 0.72 0.89
       RIGHT-LEFT   0.45 0.70 0.69 0.87    0.49 0.76 0.73 0.89    0.48 0.69 0.68 0.88
BETA3  LEFT         0.24 0.35 0.35 0.51    0.28 0.43 0.45 0.60    0.31 0.48 0.47 0.64
       RIGHT        0.26 0.39 0.40 0.54    0.29 0.43 0.45 0.59    0.35 0.49 0.54 0.66
       LEFT-RIGHT   0.23 0.34 0.37 0.50    0.24 0.36 0.37 0.52    0.25 0.37 0.39 0.54
       RIGHT-LEFT   0.22 0.31 0.36 0.49    0.25 0.39 0.42 0.54    0.23 0.38 0.39 0.52
GAMMA  LEFT         0.14 0.20 0.21 0.33    0.19 0.30 0.30 0.40    0.21 0.32 0.30 0.43
       RIGHT        0.16 0.25 0.25 0.34    0.19 0.30 0.28 0.39    0.24 0.37 0.35 0.48
       LEFT-RIGHT   0.14 0.22 0.20 0.31    0.15 0.23 0.24 0.33    0.15 0.24 0.23 0.33
       RIGHT-LEFT   0.14 0.23 0.22 0.31    0.16 0.24 0.25 0.35    0.14 0.23 0.22 0.32

Face-Speech versus Resting-State Activity

AV stimulation elicited global cortical responses across all participants that were significantly more coherent than in the REST condition. These widespread differences in phase synchronisation were observed between pairs of RoIs within and across both hemispheres, for theta, alpha and beta2 frequencies in particular. Table 3 shows the number of participants, per frequency band and hemispheric interaction within RoI couplings, whose phase synchronisation significantly increased under AV relative to REST stimulation.

Table 3.

The number of participants with significant (p < .01) increases in phase synchronisation associated with AV versus REST conditions. Results are reported for both within and across left and right hemispheres per RoI pair and frequency band. NB: inter-hemispheric couplings do not imply directed connectivity but a logical representation of RoI pairings (i.e. LEFT-RIGHT = RoIleft – RoIright; RIGHT-LEFT = RoIright – RoIleft).

BAND   HEMISPHERE   LG-SFG   IPL-SFG   IPL-LG   TTG-SFG   TTG-LG   TTG-IPL
THETA  LEFT           13       13        13       13        11       13
       RIGHT          13       13        13       13        13       13
       LEFT-RIGHT     13       13        13       13        13       13
       RIGHT-LEFT     13       13        13       13        13       11
ALPHA  LEFT            8       10        13       13        13       13
       RIGHT          12       13        13       12        11       13
       LEFT-RIGHT     13       10        13       11        13       12
       RIGHT-LEFT     13       13        12       13         8       13
BETA1  LEFT            9        1        13       13        13       10
       RIGHT          13       10        12        9        13       12
       LEFT-RIGHT      3        7        13        4        10       13
       RIGHT-LEFT      3        4        13        6        13       11
BETA2  LEFT           13       13        13       13        13       12
       RIGHT          13       13        13       13        13       13
       LEFT-RIGHT     13       13        13       12        13       13
       RIGHT-LEFT     12       13        13       13        13       12
BETA3  LEFT            2        2         0        1        13       13
       RIGHT           2        4         9        3         6       12
       LEFT-RIGHT      2       13         3        0         2        5
       RIGHT-LEFT      0        0         1        2         1       10
GAMMA  LEFT            0        0         3        0         0        6
       RIGHT           0        0        13        1         1        6
       LEFT-RIGHT      0        0         0       12         0        0
       RIGHT-LEFT      0        0         0        0         0       12

Face-Speech versus Speech

In general, face-speech processing elicited a greater increase in synchronised oscillations between pairs of RoIs than perception of speech alone (Figure 4). Significant left-hemispheric communications between the LG in the primary occipital cortex and the SFG were seen in the alpha frequency range for the majority of participants (N = 10).

Figure 4. Graphs indicating the number of participants (y-axis) who showed significant (p < .01) synchronised activity associated with AV > A perception. The top two graphs show results for oscillatory communications between RoIs in the left and right hemispheres respectively. The bottom two graphs show inter-hemispheric communications between pairs of RoIs. NB: the bottom two graphs do not imply directed connectivity but a logical depiction of RoI pairings on the x-axis (i.e. LEFT-RIGHT = RoIleft – RoIright; RIGHT-LEFT = RoIright – RoIleft).

Right-hemispheric processing revealed coherent interactions of the auditory and occipital cortices at mid-to-high beta frequencies with the IPL and SFG at theta-alpha bands across the majority of participants (N > 7). Inter-hemispheric communications associated with AV perception were also observed in the majority of participants. In particular, both the left primary occipital (LG) and auditory (TTG) cortices demonstrated highly synchronised activity with the right SFG in theta and mid-beta frequencies (N ≥ 8). Cross-hemispheric communications between the left IPL and right SFG were also commonly observed across participants (N = 10) in the alpha frequency band.

Face-Speech versus Faces

In contrast to the AV > A comparison, synchronised oscillations associated with the AV > V comparison demonstrated remarkably little within-participant variance and involved highly specific frequencies between RoI couplings (Figure 5). Synchronised beta2 activity was robustly observed across all participants (N = 13) between the right IPL and bilateral SFG regions, the right IPL and left LG, the right TTG and left LG, and the right TTG and left SFG. Robust theta synchronisation in the left hemisphere between the IPL and SFG, and predominant low-beta communications between the left IPL and LG, the left TTG and SFG, and the right LG and SFG, were also observed in all participants. Coherent upper-beta and gamma oscillations predominated in the majority of participants (N ≥ 12) between the left TTG and LG, the right IPL and LG, and the left TTG and right SFG. Alpha synchronisation was strongest across all participants between the right auditory cortex (TTG) and the left SFG.

Figure 5. Graphs indicating the number of participants (y-axis) who showed significant (p < .01) synchronised activity associated with AV > V perception. The top two graphs show results for oscillatory communications between RoIs in the left and right hemispheres respectively. The bottom two graphs show inter-hemispheric communications between pairs of RoIs. NB: the bottom two graphs do not imply directed connectivity but a logical depiction of RoI pairings on the x-axis (i.e. LEFT-RIGHT = RoIleft – RoIright; RIGHT-LEFT = RoIright – RoIleft).

Face-Speech versus Sum(Face, Speech)

Results from the AV-REST > sum(A,V)-REST contrast were, in general, highly variable across participants, but some noteworthy similarities could be seen between certain RoIs and frequency bands (Figure 6). In the left hemisphere, high-beta oscillations were associated with communications between the IPL and SFG in the majority of participants (N = 10); couplings between the TTG and LG were predominantly observed in the theta band (N = 9); and oscillations synchronised at low-beta frequencies involved interactions between the TTG and IPL (N = 9). Communications between the left LG and right SFG were observed at mid-beta frequencies for the majority of participants (N = 10). Oscillations synchronised at theta frequencies were associated with inter-hemispheric communications between the left LG, the left TTG and the right IPL (N ≥ 9). The right TTG and the left LG were also highly coupled at alpha frequencies in most participants (N = 10).

Figure 6. Graphs indicating the number of participants (y-axis) who showed significant (p < .01) synchronised activity for the [AV – REST] > [V – REST] + [A – REST] contrast. The top two graphs show results for oscillatory communications between RoIs in the left and right hemispheres respectively. The bottom two graphs show inter-hemispheric communications between pairs of RoIs. NB: the bottom two graphs do not imply directed connectivity but a logical depiction of RoI pairings on the x-axis (i.e. LEFT-RIGHT = RoIleft – RoIright; RIGHT-LEFT = RoIright – RoIleft).

DISCUSSION

This study tested Senkowski et al.'s (2008) proposed model, in which natural audiovisual perception is mediated by a synchronised network involving the primary cortices, the heteromodal intraparietal lobule and the prefrontal cortex. To test the model, source-localised oscillatory dynamics from EEG recordings were investigated whilst participants passively viewed realistic face-speech conversations. It was hypothesised that if naturally occurring audiovisual perception is mediated by phase-synchronised oscillatory signals across the proposed cortical network, then connectivity measures at specific frequency bands should be regularly observed in the EEG signal. Statistical comparisons in audiovisual-related network connectivity were made against resting-state, face-only and speech-only conditions for each participant.

As predicted, when connectivity measures for face-speech perception were compared to connectivity measures from eyes-closed resting, speech-only and face-only processing, results revealed that the face-speech condition elicited a pattern of widely distributed coherence in the network. The following is a general summation of findings for these three statistical comparisons:

• When compared to the resting condition, face-speech perception produced highly synchronised activity in theta-to-mid-beta frequency bands across the inter-hemispheric network. This effect was observed in all participants.

• The face-speech condition tended to elicit a greater coherence between pairs of RoIs than perception of speech only, although moderate inter-participant variation was observed. One relatively consistent finding within participants involved greater communications between hemispheres, which consisted of mid-to-high beta modulated couplings between right heteromodal/frontal regions and left primary sensory areas.

• Comparisons with the face-only condition revealed remarkably specific frequency-band increases in coherence between a wide number of RoIs across the network, and these results were highly consistent across all participants.

Whilst these listed findings provide some evidence that face-speech perception (as opposed to resting and unimodal conditions) elicited greater network coherence, the results are difficult to interpret with respect to the hypothesis that functional connectivity in the network is a specific mechanism for audiovisual integration. For instance, comparisons with unimodal conditions do not reveal whether the greater network coherency associated with the AV condition specifically involved audiovisual integration or was simply an additive effect of an additionally active sensory channel. The purpose of the AV-REST > sum(A,V)-REST contrast was to shed more light on this issue. In a seminal paper addressing methodological approaches for investigating multisensory integration, Calvert and Thesen (2004) suggested that testing interaction effects against a common baseline (REST) reference allows the researcher to robustly identify neural regions that show supra-additive activity during multisensory stimulation; i.e. a positive interaction effect identifies neural responses to audiovisual stimulation which cannot be predicted by the sum of the unimodal responses alone. Although the authors were specifically recommending the method for hemodynamic signal measures, its application to eLORETA lagged-phase synchronisation dependencies was used here to infer whether synchronous couplings within the network were functionally related to face-speech perception, such that the data could not be explained by the sum of face and speech perception in isolation.
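Written out, the supra-additivity criterion and its algebraic rearrangement make clear why the common baseline matters: the REST terms do not fully cancel on the right-hand side.

```latex
% Supra-additivity criterion (after Calvert & Thesen, 2004)
(\mathrm{AV}-\mathrm{REST}) \;>\; (\mathrm{A}-\mathrm{REST})+(\mathrm{V}-\mathrm{REST})
\quad\Longleftrightarrow\quad
\mathrm{AV} \;>\; \mathrm{A}+\mathrm{V}-\mathrm{REST}
```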

Results from the AV-REST > sum(A,V)-REST interaction were, in general, highly variable across participants, but significant interaction effects found in each participant provided convincing evidence that (a) the hypothesised network is specifically activated during natural face-speech perception, and (b) phase synchronisation mediates a functional exchange of information specifically associated with face-speech processing. In light of these findings, it is reasonable to conjecture, as per Senkowski et al. (2008), that natural audiovisual perception is the result of widely distributed top-down and bottom-up interactions that are supported by neural coherence.

Senkowski et al.'s (2008) model proposed that evoked changes in phase synchrony within the primary occipital and auditory cortices induce enhanced oscillatory activity in the heteromodal intraparietal associative area and the prefrontal cortex, forming an efficient exchange of bottom-up and top-down interactions which ultimately gives rise to the subjectively unified multisensory percept. The authors did not form any direct hypotheses about which roles different frequency bands might play across the integrative process, but suggested in a later publication (Engel et al., 2012) that oscillatory activity in response to passive audiovisual stimulation would be observed across a wide range of frequencies. Başar et al. (2001) have also argued, based upon an extensive review of the field, that complex sensory stimulation (and cognition) involves multiple, superimposed oscillations (i.e. parallel oscillations) with varying durations and delays. The results from this study are consistent with this prediction; however, the approach here was to observe the stability of phase differences over time segments (1 s epochs) in the frequency domain rather than time-varying connectivity. Perhaps if temporal factors were taken into consideration, a more informative pattern of frequency changes between RoIs would emerge.

Although the method used in this report does not allow for inferences regarding temporal dynamics (i.e. when couplings occurred) or directed connectivity (the direction of information flow between any two RoIs), results consistently demonstrated across participants that distributed, synchronised couplings in multiple frequency bands are actively engaged during natural face-speech perception.

CONCLUSION

Modern cortical theories of multisensory perception have shifted away from the traditional view that primary sensory brain regions process afferent signals in a strictly modular fashion. Rather, it is proposed that the outcome of processing in primary regions might be influenced by lateral communications from other active sensory regions and by top-down communications from heteromodal associative areas. Furthermore, it is hypothesised that this interaction between brain regions is made possible through neural coherence. When two neural regions are synchronised, the underlying populations of neurons phase into a mutually optimal state of excitability, which in turn enables the efficient reception and projection of information between regions. Under complex, ecological multisensory perception, functional communications between low-level and high-level regions in the cortical processing hierarchy are likely to engage in continuous reciprocal signalling, such that the outputs of sensory regions are influenced by the inputs of associative areas and the outputs of associative areas are influenced by inputs from sensory regions.

The goal of the research conducted in this study was to provide evidence for stimulus-specific phase synchronisation between a network of neural regions involved in processing and integrating ecologically realistic audiovisual information. The results weigh heavily in favour of the thesis that cross-talk within the cortical network under investigation is specifically enhanced during natural face-speech perception, and that phase synchronisation mediates the functional exchange of information associated with face-speech processing between both sensory and heteromodal regions in both hemispheres. Furthermore, phase synchronisation during complex perception is modulated in parallel within multiple frequency bands.

Precisely how and when the subjective flow of coherent percepts emerges from this complex interplay of functional connectivity remains open to theoretical conjecture and further experimental investigation.


REFERENCES

Alais, D., Newell, F. N., & Mamassian, P. (2010). Multisensory processing in review: from physiology to behaviour. Seeing and Perceiving, 23. http://doi.org/10.1163/187847510X488603

Altieri, N. (2014). Multisensory integration, learning, and the predictive coding hypothesis. Frontiers in Psychology, 5(257), 1–3. http://doi.org/10.3389/fpsyg.2014.00257

Barone, P. (2005). Heteromodal connections supporting multisensory integration at low levels of cortical processing in the monkey. European Journal of Neuroscience, 22, 2886–2902. http://doi.org/10.1111/j.1460-9568.2005.04462.x

Başar, E., Başar-Eroglu, C., Karakaş, S., & Schürmann, M. (2001). Gamma, alpha, delta, and theta oscillations govern cognitive processes. International Journal of Psychophysiology, 39(2-3), 241–248. http://doi.org/10.1016/S0167-8760(00)00145-8

Bauer, M. (2008). Multisensory integration: a functional role for inter-area synchronization? Current Biology, 18(16), 709–710. http://doi.org/10.1016/j.cub.2008.06.051

Blomberg, R. (2013). Evoked multisensory cortical representations during unisensory stimulation. DiVA Portal.

Burgess, P. W., Simons, J. S., Dumontheil, I., & Gilbert, S. J. (2005). The gateway hypothesis of rostral prefrontal cortex (area 10) function. In J. Duncan, L. Phillips, & P. McLeod (Eds.), Measuring the Mind: Speed, Control, and Age (pp. 217–248). Oxford: Oxford University Press.

Calvert, G. A., & Thesen, T. (2004). Multisensory integration: methodological approaches and emerging principles in the human brain. Journal of Physiology - Paris, 98(1-3), 191–205. http://doi.org/10.1016/j.jphysparis.2004.03.018

Cohen, M. X. (2014). Analyzing Neural Time Series Data: Theory and Practice. Cambridge, MA: MIT Press.

Driver, J., & Spence, C. (2000). Multisensory perception: beyond modularity and convergence. Current Biology, 10(20), 10–12. http://doi.org/10.1016/S0960-9822(00)00740-5

Engel, A. K., Fries, P., & Singer, W. (2001). Dynamic predictions: oscillations and synchrony in top-down processing. Nature Reviews Neuroscience, 2(10), 704–716. http://doi.org/10.1038/35094565

Engel, A. K., Senkowski, D., & Schneider, T. R. (2012). Multisensory integration through neural coherence. In M. M. Murray & M. T. Wallace (Eds.), The Neural Bases of Multisensory Processes (pp. 115–130). Boca Raton: CRC Press.

Foxe, J. J., & Schroeder, C. E. (2005). The case for feedforward multisensory convergence during early cortical processing. Neuroreport, 16(5), 419–423. http://doi.org/10.1097/00001756-200504040-00001

Fries, P. (2005). A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9(10), 474–480. http://doi.org/10.1016/j.tics.2005.08.011

Güntekin, B., & Başar, E. (2014). A review of brain oscillations in perception of faces and emotional pictures. Neuropsychologia, 58, 33–51. http://doi.org/10.1016/j.neuropsychologia.2014.03.014

Lakatos, P., Shah, A. S., Knuth, K. H., Ulbert, I., Karmos, G., & Schroeder, C. E. (2005). An oscillatory hierarchy controlling neuronal excitability and stimulus processing in the auditory cortex. Journal of Neurophysiology, 94(3), 1904–1911. http://doi.org/10.1152/jn.00263.2005

Meredith, M. A. (2002). On the neuronal basis for multisensory convergence: a brief overview. Cognitive Brain Research, 14(1), 31–40. http://doi.org/10.1016/S0926-6410(02)00059-9

Nichols, T. E., & Holmes, A. P. (2002). Nonparametric permutation tests for functional neuroimaging: a primer with examples. Human Brain Mapping, 15, 1–25.

Öhman, A. (2009). Mitt liv som en trailer. Sweden: Folkets Bio. Retrieved from http://folketsdvd.se/kortfilm/svensk-kortfilm-co-folkets-bio-2

Orn, B. J., Winterhalder, M., Timmer, J., & Urgen, J. (2007). Phase synchronization and coherence analysis: sensitivity and specificity, 17(10), 3551–3556.

Pascual-Marqui, R. D. (2002). Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details. Methods and Findings in Experimental and Clinical Pharmacology, 24(Suppl D), 5–12.

Pascual-Marqui, R. D. (2007). Coherence and phase synchronization: generalization to pairs of multivariate time series, and removal of zero-lag contributions. Retrieved from http://arxiv.org/abs/0706.1776

Pascual-Marqui, R. D. (2007a). Discrete, 3D distributed, linear imaging methods of electric neuronal activity. Part 1: exact, zero error localization, 1–16. http://doi.org/10.1186/1744-9081-4-27

Pascual-Marqui, R. D. (2007b). Instantaneous and lagged measurements of linear and nonlinear dependence between groups of multivariate time series: frequency decomposition. Retrieved from http://arxiv.org/abs/0711.1455

Pascual-Marqui, R. D. (2015). LORETA-KEY alpha software. Retrieved from https://www.uzh.ch/keyinst/loreta

Pernet, C. R., Sajda, P., & Rousselet, G. A. (2011). Single-trial analyses: why bother? Frontiers in Psychology, 2, 322. http://doi.org/10.3389/fpsyg.2011.00322

Poil, S. S. (2014). Phase locking value. Retrieved May 26, 2015, from http://www.nbtwiki.net/doku.php?id=tutorial:phase_locking_value#.VWO7Ok_zqWg

Sakowitz, O. W., Quiroga, R. Q., Schürmann, M., & Başar, E. (2001). Bisensory stimulation increases gamma-responses over multiple cortical regions. Cognitive Brain Research, 11(2), 267–279. http://doi.org/10.1016/S0926-6410(00)00081-1

Sakowitz, O. W., Quiroga, R. Q., Schürmann, M., & Başar, E. (2005). Spatio-temporal frequency characteristics of intersensory components in audiovisually evoked potentials. Cognitive Brain Research, 23(2-3), 316–326. http://doi.org/10.1016/j.cogbrainres.2004.10.012

Sakowitz, O. W., Schürmann, M., & Başar, E. (2000). Oscillatory frontal theta responses are increased upon bisensory stimulation. Clinical Neurophysiology, 111(5), 884–893. http://doi.org/10.1016/S1388-2457(99)00315-6

Senkowski, D., Gomez-Ramirez, M., Lakatos, P., Wylie, G. R., Molholm, S., Schroeder, C. E., & Foxe, J. J. (2007). Multisensory processing and oscillatory activity: analyzing non-linear electrophysiological measures in humans and simians. Experimental Brain Research, 177(2), 184–195. http://doi.org/10.1007/s00221-006-0664-7

Senkowski, D., Schneider, T. R., Foxe, J. J., & Engel, A. K. (2008). Crossmodal binding through neural coherence: implications for multisensory processing. Trends in Neurosciences, 31(8), 401–409. http://doi.org/10.1016/j.tins.2008.05.002

Stein, B. E., & Meredith, M. A. (1993). The Merging of the Senses. Cambridge, MA: MIT Press.

Thatcher, R. W., North, D., & Biver, C. (2007). Intelligence and EEG current density using low-resolution electromagnetic tomography (LORETA). Human Brain Mapping, 28(2), 118–133. http://doi.org/10.1002/hbm.20260

Varela, F., Lachaux, J. P., Rodriguez, E., & Martinerie, J. (2001). The brainweb: phase synchronization and large-scale integration. Nature Reviews Neuroscience, 2(4), 229–239. http://doi.org/10.1038/35067550

von Stein, A., Rappelsberger, P., Sarnthein, J., & Petsche, H. (1999). Synchronization between temporal and parietal cortex during multimodal object processing in man. Cerebral Cortex, 9(2), 137–150. http://doi.org/10.1093/cercor/9.2.137

Womelsdorf, T., Schoffelen, J.-M., Oostenveld, R., Singer, W., Desimone, R., Engel, A. K., & Fries, P. (2007). Modulation of neuronal interactions through neuronal synchronization. Science, 316(5831), 1609–1612. http://doi.org/10.1126/science.1139597
