
Measuring Patient Experience in

Swedish Hospital Maternity Care

HEIDI WAHL

KTH ROYAL INSTITUTE OF TECHNOLOGY

Measuring Patient Experience in Swedish Hospital Maternity Care

Heidi Wahl

School of Science

Thesis submitted for examination for the degree of Master of Science in Technology.

Espoo 21.11.2019

Examiners

Assoc. Prof. Ylva Fernaeus, KTH Royal Institute of Technology

Prof. Marko Nieminen, Aalto University

Advisors

Dr. Marianela Ciolfi Felice, KTH Royal Institute of Technology


Author Heidi Wahl

Title Measuring Patient Experience in Hospital Maternity Care

Degree programme Master's Programme in ICT Innovation

Major Human-Computer Interaction and Design

Code of major SCI3020

Examiners Assoc. Prof. Ylva Fernaeus, KTH Royal Institute of Technology, and Prof. Marko Nieminen, Aalto University

Advisors Dr. Marianela Ciolfi Felice, KTH Royal Institute of Technology, and Prof. Marko Nieminen, Aalto University

Date 21.11.2019

Number of pages 107+20

Language English

Abstract

This thesis concerns Patient Experience (PX) in hospital maternity care in Sweden. The focus lies on the development of a measure that describes the current state of PX. The thesis uses a semi-sequential mixed-methods study design: exploration of the patient journey through qualitative methods informs the adaptation of an existing maternity care experience survey instrument. The resulting survey instrument is tested in a pilot study and yields a composite measure of PX. Part of the analysis is dedicated to understanding the effect of information and communication on PX; Exploratory Factor Analysis is used to test the model and attempt an answer.

The results show that it is possible to describe PX using the proposed survey instrument. The composite measure preserves differences in perceptions better than an arithmetic average of two discrete VAS-1 type measurements, and is more appropriate when measuring attitudes and opinions using Likert-type measures. A three-component solution describes 65.44% of the total sample variance. The degree to which PX is influenced by information and communication remains difficult to quantify, but these initial results indicate that the manner of the attending staff during aftercare and the respondent's mastery of information during discharge are important dimensions of patients' total PX (ANOVA R .695, R Square .483).
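Two of these quantitative claims can be made concrete with a small, hypothetical sketch. The scores below are invented for illustration only; the thesis's actual composite follows the scoring schema in Appendix G, and the geometric mean here is merely a stand-in for a multiplicative composite, not the method used in the study.

```python
# Hypothetical illustration: two patients rate two VAS-1 type items
# (e.g. birthing experience and aftercare) on a 1-10 scale. These
# scores are invented; they are not thesis data.
patient_a = (10, 2)   # excellent birth, poor aftercare
patient_b = (6, 6)    # uniformly average experience

def arithmetic_mean(scores):
    return sum(scores) / len(scores)

def geometric_mean(scores):
    # Stand-in for a multiplicative composite; penalises uneven profiles.
    product = 1.0
    for s in scores:
        product *= s
    return product ** (1 / len(scores))

# The arithmetic average collapses both profiles to the same value...
assert arithmetic_mean(patient_a) == arithmetic_mean(patient_b) == 6.0
# ...while a multiplicative composite still separates them.
print(round(geometric_mean(patient_a), 2))  # 4.47
print(round(geometric_mean(patient_b), 2))  # 6.0

# Consistency check of the reported regression statistics:
# R = .695 implies R Square = .695 ** 2, matching the reported .483.
print(round(0.695 ** 2, 3))  # 0.483
```

The last line confirms that the reported statistics are internally consistent: an ANOVA R of .695 squares to the reported R Square of .483.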

The model's three components are almost entirely built from items that address interpersonal skills and information assimilation. These correspond to two of the three Service Quality Dimensions, namely Interaction Quality and Outcome Quality. The most important of the three is the component "Chemistry in aftercare". The model's predictive strength shows merit in the context of the study and could inform further efforts to develop measurements of PX in maternity care in a Swedish hospital setting.

Lastly, this study contextualises Service Design in hospital maternity healthcare; the study therefore offers ample opportunity for innovation.

Keywords Patient Experience (PX), Performance in healthcare, Service Quality (SQ),


Contents

Abstract
Contents
Symbols and abbreviations
1 Introduction
2 Background
   2.1 Research question
      2.1.1 Specified problem definition & expectations
      2.1.2 Case environment
3 Important concepts
   3.1 Quality & Performance in healthcare
      3.1.1 Outcome measures
      3.1.2 Critique surrounding PROM/PREM
   3.2 Patient Experience, PX
      3.2.1 Patient-centeredness
      3.2.2 PX from a Service Experience perspective
   3.3 Measurement of Patient Reported Outcomes and Experiences
      3.3.1 First considerations when contemplating measurement
      3.3.2 Instrument design
   3.4 Information processing & PX
      3.4.1 Communication
      3.4.2 Barriers to communication
      3.4.3 Improving communication in healthcare
   3.5 Synthesis and conceptual framework
4 Initiatives to measure PX
   4.1 VAS-1 Birthing Experience
   4.2 The Danderyds Sjukhus Patient Satisfaction Survey
   4.3 CIF Observations
   4.4 Picker Institute Maternity Survey
   4.5 Summary of gaps in current practices
5 Research design
6 Qualitative field study
   6.1 Procedures
      6.1.1 Data collection & Participants
      6.1.2 Analysis of qualitative findings
   6.2 Qualitative Study Results & Analysis
      6.2.1 Maternity care context
      6.2.2 Manifestations of PX
      6.2.3 Relating results to the Service Quality Model
7 PX Survey study
   7.1 Motivation
   7.2 Objectives
   7.3 The Patient Experience Survey 2017
      7.3.1 Section details
      7.3.2 Piloting the survey
   7.4 PU-17 Survey results
      7.4.1 Pilot performance
      7.4.2 Characteristics of the population
      7.4.3 Performance of maternity care at Danderyd
      7.4.4 Integrative analysis and discussion (Select disambiguation)
8 Discussion
   8.1 Measuring attitudes and the state of PX
   8.2 Effects of Information and Communication in PX
   8.3 Tool for policy development
   8.4 Reflection
      8.4.1 Limitations of the study
      8.4.2 Development ideas for the hospital
      8.4.3 Future research
9 Conclusions
References
A Danderyd Sjukhus Patient Satisfaction Survey (sample)
B Interview & Observation guides used in the field study
C Manifestations of PX mapped to the SQ dimensions
D Important policy areas in the UK
E PU-17 Dispatch specimen & cover page samples
F PU-17 Hospital Maternity Care Survey (Swedish)
G Scoring Schema
H Side by side comparison between PI-15 and PU-17

Symbols and abbreviations

Symbols

α  Cronbach's coefficient alpha
β  Standardised coefficient beta

Abbreviations

DS  Danderyds Sjukhus / Danderyd Hospital (Principal)
PX  Patient Experience
PU  Patientupplevelse / Patient Experience
CIF  Clinical Innovation Fellows
KK  Kvinnokliniken / Women's Department
BB  Barnbördshus / Maternity ward
MVC  Mödravårdscentralen / Antenatal care unit
BVC  Barnavårdscentralen / Postnatal care unit
PROM  Patient Reported Outcome Measure
PREM  Patient Reported Experience Measure
KPI  Key Performance Indicator
DRG  Diagnosis Related Group
RCOM  Routine Clinical Outcome Measure
UX  User Experience
HCD  Human Centered Design
SQ  Service Quality
IQ*  Interaction Quality
PEQ*  Physical Environment Quality
OQ*  Outcome Quality
WHO  World Health Organization
IOM  The American Institute of Medicine
STEEEP  Safe, Timely, Effective, Efficient, Equitable, Patient-centered (dimensions of quality in healthcare, according to IOM)
CQC  Care Quality Commission (UK)
NHS  National Health Service (UK)
PI  Picker Institute (UK)

1 Introduction

models. Typical indicators of quality and efficiency are activity (as in number of treatments), length of stay and cost, mortality, 30-day mortality and number of readmissions; see e.g. [22][50]. Performance is thus measured as both a) the utilisation of resources (fiscal efficiency) and b) the clinical success achieved by treatment (technical efficiency). There is a certain tension between these elements of performance. For example, a desire to free up bed spaces may lead to inadequately early discharges, and a physician may report a positive outcome while the patient experiences a severe decline in her quality of life [40][26]. The latter conflict in particular has led to an increasing interest in investigating healthcare performance from the patient's perspective [22][61][50][64][90]. Research has shown a positive correlation between patient experience and technical efficiency; patients with positive experiences show up for their check-ups, follow instructions for care [39] and suffer fewer complications [91][11]. PX has also been linked to lower readmission and mortality rates [37][11][69][34].


2 Background

In the autumn of 2016, Danderyd Hospital Obstetrics Department invited a team of innovators, the Clinical Innovation Fellows (CIF), to help the hospital uncover unaddressed needs in the organisation. The team identified a need to improve inpatient experience as a route to more efficient care; CIF deemed that personnel spent inordinate amounts of time managing patients' expectations and correcting their misunderstandings. CIF proposed that a link exists between information and communication and how well prepared patients were; unprepared patients did not collaborate well with staff. Satisfactory collaboration was expected to lead to more efficient care because personnel would waste less time. Unsatisfactory collaboration was, according to CIF, a result of poor inpatient satisfaction. See Figure 1, Causality according to Clinical Innovation Fellows, below.

1. Poor information & communication → 2. Unprepared patients → 3. Unsatisfactory collaboration → 4. Poor inpatient satisfaction → 5. Inefficient care

Figure 1: Causality according to Clinical Innovation Fellows

Topics that emerge from this hypothesis include: information and communication, cooperation, patient satisfaction, efficiency, performance and improvement. In the thesis assignment, efficiency is seen as utilisation of (personnel) resources; it is assumed that improvements in patient satisfaction will lead to efficiency improvements through increased staff performance. Increased staff performance is equated to efficient care.

2.1 Research question

The initial hypothesis by the Principal was transformed to the following research question:

In the context of maternity care at Danderyd Hospital Obstetrics Department, how can information and communication towards patients improve patient experience, so that the benefits of a positive PX can be realised?


2.1.1 Specified problem definition & expectations

In order to answer the research question, we must first understand the current state of PX and examine the role that information and communication play in building the PX. The initial research question is therefore divided into two areas of enquiry:

1. Can we quantify the current state of PX in hospital maternity care?

2. Can we determine to what extent PX in hospital maternity care is influenced by information and communication?

The study will result in an instrument to measure patient experience in hospital maternity care and enable us to arrive at a measure of PX. It will also serve to substantiate existing assumptions and uncover the policy areas where PX can be addressed to advantage. The study may enable us to modify the content and communication of information so as to contribute to improvements in PX. The underlying ambition from the Principal's perspective is to improve personnel efficiency. The author's ambition is to provide a better understanding of PX and its dimensions at present, and to identify where possible interventions may translate into meaningful improvements in PX.

2.1.2 Case environment

Danderyd Hospital Women's Department (Kvinnokliniken, KK) consists of two divisions: Gynaecology and Obstetrics. Obstetrics consists of Birthing (Förlossning), Maternity ward 16 (BB Avdelning 16), Maternity ward 17 (BB Avdelning 17) and Specialist antenatal care (Specialistmödravård); see Figure 2, Organisation of maternity care at Danderyd Hospital (Sweden).


Birthing caters for natural, assisted and emergency caesarean section births, and Maternity ward 16 for their aftercare. Maternity ward 17 is reserved for planned c-section patients and offers preparation and aftercare for its patients. Maternity aftercare policy at the hospital promotes self-sufficiency in patients and their companions.

Danderyd is one of the largest maternity hospitals in Sweden; roughly 10% of all newborns are delivered at Danderyd, and the hospital performed the most caesarean sections in the country (2016). The closure of the private clinic Sophia BB in late 2016 and a strong baby-boom trend have resulted in an unusually high inflow of patients and an acute lack of space. According to personnel at Ward 17, it is not uncommon for maternity patients to receive care at the "wrong" obstetrics ward. Occasionally, Ward 17 patients must be referred to BB Stockholm, a semi-private affiliate co-owned by Danderyd, or to other hospitals outside the Stockholm region (for any type of delivery).


3 Important concepts

This chapter explores concepts of quality and performance in healthcare. Here, we also examine how PX relates to different Service Quality dimensions, and we look into important aspects of communication. A synthesis, which serves as a conceptual framework for this study, concludes this chapter.

3.1 Quality & Performance in healthcare

The question of the performance of a healthcare system concerns the utilisation of finite resources to maximise favourable health outcomes, without waste, prejudice or discrimination. To this end, The World Health Report 2000, Health Systems: Improving Performance, compiled three goals for a healthcare system: 1) good health, 2) responsiveness to the expectations of the population and 3) fairness of financial contribution. The report also highlighted the role of evidence and system oversight [67]. The WHO goals were expanded upon by The American Institute of Medicine (IOM) in 2001. The IOM report identified six aims for modern healthcare; healthcare should be: 1) Safe, 2) Effective, 3) Patient-centered, 4) Timely, 5) Efficient and 6) Equitable. The acronym STEEEP is nowadays used globally to describe the dimensions of quality in healthcare [66][9].

The quality of a healthcare system, that is, the performance of such a system, combines technical (safety, effectiveness, efficiency) and non-technical (patient-centeredness, equitableness) assessment of healthcare delivery. There is a clear societal agenda behind these goals and an interest in overseeing their fulfillment. Indeed, beyond the quality aims, the IOM proposed four additional strategic steps to modernise healthcare: changing the patient-clinician relationship, democratising evidenced best practices, aligning financial incentives to coincide with quality aims, and reducing the fragmentation of care. The report signals the need for fundamental structural and mindset changes in how a system answers to the needs of the people it serves [96][66][94]. A natural consequence is the evolution of outcome measures. It is fair to say that outcome measures originally emerged to monitor and steer the performance of the healthcare system from a societal/systemic perspective [67][30].

3.1.1 Outcome measures

Outcome measures have evolved in roughly three stages; see Figure 3 below. First came crude Key Performance Indicators (KPIs), e.g. diagnosing conditions according to harmonised Diagnosis Related Group (DRG) codes. DRG codes supported financial incentivisation by recording and compensating for actually delivered and comparable units of care, but gave little insight into whether a treatment had been successful [22]. Next came Routine Clinical Outcome Measures (RCOMs). RCOMs were devised to improve the quality of target-driven data and are geared towards rewarding medical results. RCOMs capture technical performance using measures that are considered important by clinicians, e.g. phosphate levels post intervention in a renal failure case [18]. RCOMs record the clinician's impression of improvement in their patient and not the patient's point of view; these two perspectives have been known to diverge [26][89][18][22][40][1].

KPI → RCOM → PROM/PREM

Figure 3: Gradual evolution of outcome measures in healthcare.


of the patient [89][13][18][95][48].

Importance of PROM/PREM

Research has shown that positive patient perceptions of care correlate with better clinical outcomes in both a fiscal and a technical sense [91]. Higher performance in this area has been linked to lower readmission and mortality rates [39][37][85][69] and, more recently, to staff wellbeing [73]. There is also evidence that interpersonal skills and communication can influence and improve patients' perceptions of healthcare quality [50][3][10][99][64][69]. Performance in healthcare has thus moved beyond the provisioning of adequate care to a state in which patients' perceptions are considered important indicators of quality. Outcomes from the patients' point of view are now used to guide improvement in healthcare [5][2][64].

3.1.2 Critique surrounding PROM/PREM

While there is agreement that patients' perceptions are important indicators of performance, and evidence that PX correlates with fiscal and technical efficiency, there is also critique around what importance PROM/PREM should carry. The application of PROM/PREM to drive reform has been found lacking; for example, Greenfield et al concluded that a systematic implementation of patient-centeredness is still missing in the UK, and that a successful PX is more a chance occurrence than the result of conscious change in care delivery [43]. Similar conclusions were reached by [74] when investigating how hospital management and front-line clinicians operationalise PX. According to the authors, investments in PX measurement were not coupled with structured plans of action for PX improvement. Glenn et al found that the NHS national surveys support provider accountability when it comes to centrally driven initiatives, but have failed to promote interest at the local level: "Indeed, the survey programme may have contributed to the failure of hospital management boards to reflect sufficiently on their own responsibilities for collecting and using patient experience data to improve the quality of local services." [72]. Bleich et al state that patient experience is only a small dimension in the evaluation of satisfaction with a healthcare system [13]. Farley et al question the appropriateness of using PROM/PREM as performance indicators in the Emergency Room for remuneration purposes, stating that technical outcomes risk being obscured in this context [36]. The concerns reported here may also be seen as opposition to the general principle of measurement; indeed, the expert consensus meeting for renal registries in Europe in 2015 noted difficulties in implementing PROM/PREM programmes due to lack of organisational support [18]. A certain mistrust directed at measurement was also identified by [74]. Glenn et al find that measurement in its current forms suffers from not being systematic, timely and relevant with respect to what matters most to patients [73]. Mountford and Shojania advocate local ownership of measurement by clinicians to overcome these problems [60].

What people are assessing

Another form of critique concerns laypeople's ability to judge performance, and what is actually judged by laypeople, that is, the validity of the measurement. Since laypeople lack medical expertise, non-technical aspects of healthcare delivery are judged to assess service quality, e.g. waiting times or level of pain [3]. Sitzia and Wood suggest that the reason technical aspects are not judged is that they are not considered with the same rigour and scrutiny by patients [76]. Williams critiques the utility of measurement instruments which do not consider how service users think about and rate experiences: "Patients may have a complex set of important and relevant beliefs which cannot be embodied in simple questions of satisfaction" [91]. Beattie et al caution that PX measurement instruments based on normative expectations may be using outdated priorities from a population standpoint [9].


How people assess

The third form of critique concerns the behaviours of raters and the reliability of the instruments. It is a well-recognised problem that patients tend to overrate satisfaction due to gratitude bias [9]. Measuring the fulfilment of positive expectations yields high scores and little response variability [5][76]. Similarly, the unfulfillment of negative preconceptions can also pass as a high satisfaction score [6][91]. Williams, and Williams et al, caution against interpreting a good satisfaction score as meaning good care; rather, it simply means that nothing extremely bad happened [91][92]. Williams et al found that high satisfaction scores may result from an absence of dissatisfaction in a low or wrongly scoped expectation setting; individuals tend not to express dissatisfaction with an experience if they do not believe it to be part of the provider's responsibilities [92]. The layout, order and wording of an instrument may also insufficiently consider other types of response bias, such as acquiescence bias, social desirability bias and extreme responding, as highlighted by [31]. The critique signals the difficulties in establishing a coherent definition for PX. PX adopts different meanings and scopes; its measurement also serves different and partly unrealised purposes. So how can one assess perceptions of care from the individual's perspective? The next section describes the origins of PX, some interpretations of its scope and how PX relates to the Service concept.
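The ceiling effect described above can be sketched numerically. The two response sets below are invented for illustration and are not survey data; the point is only that when gratitude bias pushes ratings toward the top of a 5-point Likert scale, the variance, and with it the instrument's ability to discriminate between raters, collapses.

```python
# Invented 5-point Likert responses, for illustration only.
unbiased = [2, 3, 4, 3, 5, 2, 4, 3, 5, 1]   # spread across the scale
biased   = [4, 5, 5, 4, 5, 5, 4, 5, 5, 4]   # gratitude-bias ceiling effect

def variance(scores):
    """Population variance of a list of scores."""
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores) / len(scores)

print(round(variance(unbiased), 2))  # 1.56 -> responses discriminate between raters
print(round(variance(biased), 2))    # 0.24 -> little variability left to analyse
```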

3.2 Patient Experience, PX

In 2000, the WHO first described PX in terms of healthcare (system) responsiveness. Healthcare responsiveness is how well a system answers to the individual's legitimate expectations of autonomy, choice, communication, confidentiality, dignity, prompt attention, quality of basic amenities and support. The concept was further organised into two categories: Respect for the Individual and Client orientation. See Figure 4 below.

Respect for the individual
● Respect for the dignity of the person; no humiliating or demeaning of patients.
● Confidentiality: the right to determine who has access to one's personal health information.
● Autonomy to participate in choices about one's own health.

Client orientation
● Prompt attention: immediate attention in emergencies, and reasonable waiting times for non-emergencies.
● Amenities of adequate quality, such as cleanliness, space and hospital food.
● Access to social support networks (family and friends) for people receiving care.
● Choice of provider: freedom to select which individual or organisation delivers one's care.

Figure 4: Healthcare responsiveness categories. Adapted from [67] p. 32

Substantial effort was invested in arriving at global populations' legitimate expectations so that they could be held as universally applicable [67][30]. It is therefore unsurprising to find some form of these dimensions a) at the heart of well-established (Quality) PX assessment instruments, like CAHPS (US), the Picker Institute PX surveys (UK) [30], GS-PEQ (Norway) [77] and QPP (Sweden) [52], b) in studies assessing determinants of PX [97][98], and c) in policy and compliance reports as well as in agendas for improvement [96][79][33][57].


Despite much work dedicated to understanding PX, there is to this day no clear definition of the concept; see e.g. [53][95][47]. PX has been conceptualised as: the responsiveness domain by the WHO [13], (a sub-domain of) patient satisfaction [40][9], the non-fiscal and non-technical outcomes of care [30], the P in STEEEP [43], the gap between expectations and results or lived experiences, or a form of customer experience in a service setting [40][83][49]. Figure 5 shows some of the interpretations of PX and its assigned scope.

INTERACTION QUALITY: understood as the most important (but not the only) aspect of service quality evaluation. Encompasses: attitude, behaviour, expertise.

CUSTOMER EXPERIENCE: what matters to consumers in their interaction with providers: responsiveness, ease of access, intimacy, flexibility, feeling valued, courtesy and competence of staff.

PATIENT-CENTEREDNESS (STEEEP) (IOM): a change in how care is delivered and a revision of roles: compassion, empathy, responsiveness (to the needs, values and expressed preferences of the individual patient).

EXPERIENCE OF PATIENT-CENTEREDNESS IN INTEGRATED CARE: what individuals consider of central importance in how they partake of care, within an emotional and physical space: being acknowledged, being respected, being understood, being seen, being heard.

HEALTHCARE RESPONSIVENESS (WHO): the ability to respond to individuals' legitimate expectations of: autonomy, choice, communication, confidentiality, prompt attention, quality of basic amenities, support.

THE GAP BETWEEN EXPECTATIONS AND THE RESULTS OR LIVED EXPERIENCE.

THE NON-FISCAL AND NON-TECHNICAL OUTCOMES OF CARE.

Figure 5: Some interpretations of PX and its scope

The past 30 years have seen a steady evolution in how involved patients and their families are in the design, consumption and evaluation of their care [8][73]. The trend is not surprising: the WHO legitimised the placement of the individual at the center of their care in the responsiveness framework. Patient-centeredness is an important goal and evidence of quality in modern healthcare [57][96][33].

3.2.1 Patient-centeredness


From passive recipient to engaged participant.
From fitting the individual to the system to fitting the system to the individual.

Figure 6: Patient-centeredness in healthcare builds on HCD principles.

e.g. consumer journeys and touchpoints, value proposition, customer loyalty, image, consumer life-cycle, critical events, frontline and backstage, servicescape [46][83][20]. One could say that the popularisation and adoption of HCD principles by management disciplines has contributed to enhancing and amplifying the voice of the user as a consumer.

3.2.2 PX from a Service Experience perspective

The Beryl Institute⁵ considers PX as "the sum of all interactions, shaped by an organisation's culture, that influence patient perceptions across a continuum of care", which shows strong affinity with the Service concept (see below). Much like UX is not an attribute of a system but a result of its use, PX can be thought of as the individual's evaluation of the quality of interactions with healthcare in all its manifestations. Both Service and Experience interact and coalesce in PX, and both terms deserve further exploration.

Experience

[59] describe experience as complex, fluid, holistic, reflexive (it has a subject-object relationship) and recursive (it is continuously re-evaluated). The authors further clarify that experience is constructed through the process of sense-making, which entails anticipating, connecting, interpreting, reflecting, appropriating and recounting, without a fixed order between the steps. The construction of experience through sense-making describes, in essence, human information processing and the associated creation of mental models that help us understand and act in the world [55][88][62] (see Mental models). Experience is clearly personal; it involves a personal interpretation of one's own perceptions, even when the experience is shared with others.

In order to conceptualise and design for experience, [59] proposed that we contemplate Experience from four different but fundamentally intertwined vantage points or "threads". The Compositional structure thread deals with our understanding of the parts and wholes of an experience. The Sensual and Emotional threads help us direct attention to our sensory and emotional engagement in an experience. Similarly, the Spatio-Temporal Perception thread aids our awareness of how experiences might be influenced by our perception of time or space; for example, from time flying to time dragging slowly, or from our surroundings feeling intimate and cosy to feeling confining. The point of the framework is not to deconstruct experience, but to help us understand its complex nature and consider different perspectives of Experience to better guide our design efforts.

⁵ See The Beryl Institute's definition of PX.


Service

The service concept is described as an interplay between service operations (how), service outcomes (what), and the consumer’s (user’s/patient’s) direct experience of the service and her judgement of attained value compared to cost [49]. The service concept framework recognises the difficulties in separating experience from outcome, much like it may be difficult for a patient to discriminate between her PX and her medical outcomes or the tools, materials, environments and processes involved in her treatment. This is because experience is directly lived in interactions with the provider and all its manifestations (staff, technology, facilities, processes). Experiences result in expected tangible benefits, judgements, emotions and intentions. Expectations exist even before a direct personal experience takes place, as expectations are influenced by an individual’s knowledge and perceptions, her previous experiences, her personal values and beliefs, by her own and others’ judgements, by marketing, word of mouth, etc. [42].

The term Healthcare Consumerism reflects the steady shift towards regarding healthcare as a service like any other, for consumers like any others, in a free market. This means that consumers of healthcare are thought to be knowledgeable, conscious and engaged parties who exercise free choice through their “purchase power”. This orientation has clear commercial, service design origins. The expectation (realised or not) of increased productivity and reduced waste is coupled to this term. [87][23][20][28].

Service Quality, SQ

A long standing model to evaluate healthcare services and quality of care is the Donabedian triangle model from 1966. In this model, the dimensions of quality in healthcare are structures, processes and outcomes [58][4]. Satisfaction is "the gap between expectations and outcomes" [58][40]. PROM corresponds to Patient Satisfaction whereas PREM is encapsulated in the assessment of processes and structures of care [85][3].

A more general framework for service quality was presented by [17]. In their model, Service Quality is composed of three primary dimensions: Interaction Quality (IQ), Physical Environment Quality (PEQ) and Outcome Quality (OQ). Figure 7 below shows the dimensions, with Interaction Quality expanded; the three dimensions interact with each other.

[Diagram: Service Quality branches into Physical Environment Quality, Outcome Quality and Interaction Quality; Interaction Quality is expanded into its sub-dimensions Attitude, Behaviour and Expertise.]

Figure 7: The three Service Quality Dimensions according to [17]

Interaction Quality

The Interaction Quality sub-dimensions in the model are: Attitude, Behaviour and Expertise. The authors find ample support in literature suggesting that "the interpersonal interaction during service delivery often have the greatest effect on service quality perceptions." [17]. In a similar manner, [43] describes patients' desire to be seen and heard in their encounters with healthcare providers. Interaction Quality, or how the service is delivered, clearly adopts and promotes a patient-centric orientation.

Physical Environment Quality

Physical Environment Quality (PEQ) is composed of Ambient Conditions, Design and Social Factor. Ambient Conditions refers to non-visual aspects of the environment, whereas Design refers to the layout, architecture, function and aesthetics of the environment. Social Factor describes how the absence or presence, and the behaviours, of others who share in your experience affect you. PEQ is also conceptualised as Ambient Conditions (AC) and Social Factor, with Design implicitly included in AC.

Outcome Quality

Outcome Quality (OQ) gathers Waiting Times, Tangibles and Valence. Waiting Time may be likened to responsiveness and is the time that passes without progress: "dead time", "delay". Tangibles are the physical evidence of the experience, for example a scar, a tracking number, etc. Valence is a positive or negative judgement of the whole outcome, which is independent of the evaluation of any other aspect of the experience and therefore difficult to manage [17]. OQ has also been conceptualised as "what the customer is left with once the transaction process is complete" [44], and as Waiting Times, Patient Satisfaction, Loyalty and Image, where Patient Satisfaction corresponds to Valence [25][17].


3.3 Measurement of Patient Reported Outcomes and Experiences

Patients’ perceptions of healthcare, that is PROM/PREM, are commonly assessed by means of questionnaires. There are a number of recognised instruments for the purpose, e.g. the Hospital Consumer Assessment of Healthcare Providers and Systems ((H)CAHPS), the Picker Patient Experience Questionnaires (e.g. PPE and PPE-15), the Quality from the Patient’s Perspective (QPP and QPPS), and more, see e.g. [36][76][41][52]. Measurement instruments are generally co-designed between healthcare professionals, healthcare management and organisations that represent the interests of patients [18][27][97][25]. The instruments vary in length, time of delivery and method of completion, and may be generic or disease-specific. In a consensus meeting by the renal registries in Europe in 2015, a preference towards specificity was voiced. In the same paper, shorter questionnaires were thought to have a positive effect on response rates. Further, self-completion outside of the hospital setting (after the patient has been discharged) was deemed more appropriate by practitioners as well as patient representatives, and also seen as affecting response rates favourably [18]. Also [31] advocates specificity, which, according to the author, comes from having clarity about the intentions behind measurement.

3.3.1 First considerations when contemplating measurement

Developing a measurement instrument takes considerable effort; implementing measurement of any phenomenon comes with a responsibility to secure the necessary resources and to cater for the purposeful use of the results. Several authors invite us to consider the cost and possible benefits of measurement before engaging in it. Indeed, [9] state that shortened versions of instruments are sometimes motivated by practical arrangements surrounding the organisation of a study, e.g. material costs, distribution, collection and analysis. For this purpose, and beyond the crucial Validity and Reliability, the authors examine the criteria for assessing Utility and arrive at three additional criteria: 1) Cost efficiency, which includes considerations such as the knowledge and expertise required in the organisation to design instruments and analyse results; 2) Acceptability, which is concerned with the respondents' reception of, or ability to work with, the instrument; and 3) Educational impact, which describes whether the results can be translated into learnings or provide ground for decision making. Also [7] raise the necessity to examine utility, but those authors equate the term with how practical it is to use the instrument in the field. Related to the argument of utility, when assessing or designing an instrument, [71] advise to consider a) if measurement by means of a questionnaire study is adequate for the purpose, and b) if existing instruments might fulfil the purpose as they are or with small adaptations. A clarity of purpose before design starts, and a census and consideration of existing similar instruments, is also advocated by [31].

What is being measured and why?

Questionnaires like the ones mentioned above commonly share thematic areas in the care domain, for example: continuity & coordination, access, cleanliness, emotional support, timeliness, physical comfort, empathy, respect, interpersonal skills, communication & education. Amongst these areas, instruments include aspects of communication which largely correspond to the Interaction Quality dimension in the SQ model (e.g. cordiality, patient-centeredness, being listened to, etc.). Most of these constructs can be traced back to WHO definition work, where the concept of Healthcare System Responsiveness was first coined (see 3.2). Ultimately, the content and choice of instrument reflect the needs of the institution, in terms of policy and oversight work.

Could an existing instrument be the solution?

The Picker Institute's maternity survey instrument (PI-15) covers the following maternity care areas:

1. Access to care
2. Choice
3. Continuity of care
4. Communication
5. Hospital experience
6. Well-being and Involvement

These correspond to important policy areas in the UK maternity care services today and translate to existing and evolving guidelines and goals for maternity services. Some examples of important policy from the UK include promoting the midwife's competence, and encouraging mothers with low-risk pregnancies to consider less resource-intensive delivery options, like having a home birth or giving birth at a midwife-led unit, instead of opting for a hospital delivery.

It should be noted that the content and focus of PI-15 is maternity care arrangements, but the question of how the particular policy areas emerged remains unclear. The areas could have emerged as a result of instrument development efforts, they could be based on what was deemed important (by industry, policy makers, patient interest organisations), with an instrument designed and retrofitted for that purpose, or they could have been arbitrarily selected. In their evaluation of instruments to report patient experience, [9] find evidence that instruments incorporate more and more policy items, transforming them into instruments to measure healthcare policy implementation instead of the patient experience of the quality of hospital care. To some extent, the mixed purposes of measuring PX appear to jeopardise instrument validity. The content and fit of the PI-15 instrument will be further analysed in section 4.4 Picker Institute Maternity Survey, on page 32.

3.3.2 Instrument design

Scales, Indices and the matter of Causality

Questionnaires can form scales, indices, both or neither, see e.g. [31][12][32][15]. A scale describes a latent theoretical concept that is not directly observable, but which causes the scores for its observable variables; the latent variable is the reason why respondents answer as they do to the proxies (variables) representing it. According to [31], it is easy to confuse the observable proxy and the unobservable variable by how the items in the scale are phrased, but the author offers further clarification by stating that in a scale, the latent variable captures attributes of respondents rather than features or dimensions of the phenomenon of interest. An index is also a latent construct, but with reversed causality; an index can therefore be understood as a summary of directly observable phenomena. Figure 8, on page 21, shows a schematic representation of the two concepts using path diagrams.

Variables in an index may not share a common cause but, according to several authors, they all lead to a common effect, see e.g. [31][15][12]. However, [54] warn that causality in formative models is often misinterpreted; the formative latent variable is not distinct from its indicators, therefore the indicators do not cause the latent variable; rather, they are the latent variable. The authors do not denounce graphical representations of causality for formative models as shown in Figure 8, as long as the "causality" is understood as described above.
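The difference in causal direction between the two models can be made concrete with a small simulation. The sketch below is illustrative only (hypothetical data generated with numpy; none of the variable names come from the thesis): in the reflective case the latent variable generates correlated items, while in the formative case the index is simply computed from its indicators, which need not correlate at all.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Reflective (scale): the latent variable y causes each item X_i,
# plus an item-specific error term e_i.
y = rng.normal(size=n)                      # latent variable
loadings = np.array([0.8, 0.7, 0.9])        # strength of y's influence on each item
errors = rng.normal(size=(n, 3)) * 0.5
x_reflective = y[:, None] * loadings + errors

# Items caused by the same latent variable are intercorrelated.
r12 = np.corrcoef(x_reflective[:, 0], x_reflective[:, 1])[0, 1]

# Formative (index): the "latent" variable is computed FROM the
# indicators; the indicators carry no expectation of intercorrelation.
x_formative = rng.normal(size=(n, 3))       # independent indicators
index = x_formative.mean(axis=1)            # the index IS its indicators

print(round(r12, 2))                                                  # high
print(round(np.corrcoef(x_formative[:, 0], x_formative[:, 1])[0, 1], 2))  # near zero
```

Because reflective items share a common cause, their intercorrelation is exactly what internal consistency statistics reward; formative indicators carry no such expectation, which is why those statistics can be misleading for indices.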

[Diagram, two panels: (a) Scale — the latent variable Y is the common cause of the items X1, X2 and X3, each with its own error term e1, e2, e3; (b) Index — Y is the common "effect" of X1, X2 and X3.]

Figure 8: Path diagram with error terms. (a) represents a typical reflective measurement model (adapted from [31], p. 29), while (b) represents a typical formative model

Scales and indices are notoriously difficult to separate, even for established researchers; [32] cautions that several instruments going by the name of scale are in fact indices, because they describe formative measurement variables and models. [12] states that the terms scale and index are used interchangeably. Because causality is difficult to establish in cross-sectional studies, he simplifies the definition by considering scales to be instruments with proven unidimensionality. Unidimensionality, that is, all variables describing the same concept, is generally examined by means of Factor Analysis, see e.g. [31][12]. Perhaps contributing to the confusion is the fact that Likert-type measures, colloquially referred to as "Likert scales", are commonly used in healthcare to assess opinions, attitudes and beliefs [71][31][63][80]. The "Likert scale" is a recording format; the use of the word scale is unfortunate in the context, as scale has a distinct meaning in measurement theory [71][24]. Defining a causal model has a strong impact on item development and on how the instrument can be assessed; for example, excluding variables in a formative model means the phenomenon risks being only partially described, see e.g. [54][32][31].
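Unidimensionality is commonly screened by inspecting the eigenvalues of the inter-item correlation matrix, which is the logic behind the Kaiser criterion used in Factor Analysis. The following is a minimal, hypothetical numpy sketch of that check, not the analysis performed in this thesis:

```python
import numpy as np

def dominant_eigenvalues(items: np.ndarray) -> np.ndarray:
    """Eigenvalues of the inter-item correlation matrix, in descending order.

    One eigenvalue far above 1 with the rest well below it suggests the
    items tap a single dimension; several eigenvalues > 1 hint at a
    multidimensional structure.
    """
    corr = np.corrcoef(items, rowvar=False)
    eig = np.linalg.eigvalsh(corr)          # symmetric matrix -> real eigenvalues
    return eig[::-1]                        # descending order

# Simulated unidimensional data: four items driven by one latent score.
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
items = latent[:, None] * 0.8 + rng.normal(size=(500, 4)) * 0.6
print(dominant_eigenvalues(items))          # one dominant eigenvalue expected
```

For this simulated data the first eigenvalue dominates while the remainder fall below 1, the pattern one would expect of a unidimensional item set.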

Measurement theory

Although measurement theory goes beyond the scope of this thesis, some important concepts are worth refreshing. The interested reader may find that [31] offers a clear and comprehensive review of these concepts. The book is the main source for this section.

Path diagrams

Path diagrams are a way to represent how items are related to a latent variable. In Figure 8 (left), for example, the three individual scale items X1, X2 and X3 are related to a single latent variable Y. Each scale item is also influenced by its distinct error variable, e. The error variable also has a causal relationship and explains all variation that is not accounted for by other causal relationships. The important takeaway is that the latent variable Y score contains a portion of error through each of its indicator variables; the true score is thus the measured score minus error. The path diagrams in the example show very simple causalities. In reality, causality may prove to be a complex multidimensional construct, see e.g. [25][32][12].

Measurement models

In measurement, certain assumptions are made that underlie many analytical procedures. [31] reminds us that under the Classical Measurement Model, it is assumed that error terms vary randomly and that, when aggregated over a large pool of respondents, error for each item will have a mean of zero. In this model, it is also assumed that there is no correlation between the error terms and the true score of the latent variable. Under the Parallel Test Model, the amount of error in each item is considered equal. Further, the latent variable will influence each item to the same degree. Both of these assumptions are very strict and unlikely. Less strict models are the Tau-Equivalent Test and the Essentially Tau-Equivalent Test, which allow for unequal error while still assuming that each item is influenced equally by the latent variable. The Congeneric Model removes these restrictions; all items in the scale need only share a common latent variable, and the strength of the relationship may vary, as well as the amount of error influencing each item. The most relaxed model is the General Factor Model, which allows for multiple latent variables to influence a set of indicators. Given the very strict assumptions made in each of the other models, it is perhaps not surprising to find that the General Factor Model has improved accordance with real-world data [31][25][54].
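Under these models, an observed item score is the true score plus error, and averaging several items cancels much of the item-specific error; this is the statistical argument for multi-item scales over single items. A hypothetical numpy illustration (simulated data, not from this study):

```python
import numpy as np

rng = np.random.default_rng(1)
n_respondents, n_items = 2000, 10

true_score = rng.normal(size=n_respondents)          # latent "true" score
error = rng.normal(size=(n_respondents, n_items))    # item-specific error terms
observed = true_score[:, None] + error               # measured = true + error

# A single item carries its full error; the mean of 10 items carries
# only 1/10 of the error variance, so it tracks the true score better.
r_single = np.corrcoef(true_score, observed[:, 0])[0, 1]
r_composite = np.corrcoef(true_score, observed.mean(axis=1))[0, 1]
print(round(r_single, 2), round(r_composite, 2))
```

The composite correlates far more strongly with the true score than any single item does, illustrating why the error terms are assumed to average out over items and respondents.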

Validity

Validity tells how well an instrument (e.g. a scale or an index) measures a specific concept; according to [7], validity is not an absolute judgement but a matter of degree, meaning it is preferably an assessment of multiple qualities of the instrument. In a similar train of thought, [71] reminds us that an instrument's validity is never absolute, but is the result of an ongoing process to collect evidence to support the researcher's argument. Validity is therefore examined for different purposes: the instrument measures what is intended (content validity), the instrument can predict events (criterion-related validity), or it correlates to other measures (construct validity) [31][71][7]. Face validity appears in literature as another popular form of content validation [48][7]. Face validity is, however, often influenced by personal opinion; the steps taken to select, create and validate items are indicative of content validity, see e.g. [31][7]. Criterion-related validity describes the strength of the relationship between two events; it represents a prediction of two variables' movements together but is not concerned with whether the criterion precedes, follows or coincides with the measurement [31]. The correlation coefficient (R) is used to describe criterion-related validity [12][31]. Construct validity also uses R, but for a different purpose; the correlation coefficient is investigated to predict the magnitude of an unobservable variable (e.g. PX) by its observable indicator (e.g. the Average Patient Experience Score) [31]. The construct validity measure demonstrates that covariance between items was not caused by similarities in how the instrument was applied, but by what items the instrument contained, see e.g. [31][12][71].
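Criterion-related validity is reported as the correlation coefficient R between the instrument score and the criterion. As a sketch with invented scores (plain numpy; the data below is hypothetical, not from the pilot study):

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson correlation coefficient between two score arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical: instrument scores vs. an external criterion measure.
instrument = [6.0, 7.5, 5.0, 9.0, 8.0, 4.5]
criterion  = [5.5, 7.0, 5.5, 8.5, 8.5, 4.0]
print(round(pearson_r(instrument, criterion), 2))
```

A value near 1 would indicate that the two measures move together closely, the kind of evidence criterion-related validation looks for.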

Reliability

Reliability is the proportion of variance that can be attributed to the true score of what is being measured instead of other extraneous factors; this means that a reliable instrument will produce consistent results across different population samples and separate administrations [31][7]. Regression analysis and Analysis of Variance (ANOVA) are known methods for accomplishing assessment of reliability, see e.g. [25][71][31]. Both methods compare an estimation of the true score variance and the total variance. The instrument is reliable when it can produce consistent results [12]. Internal consistency reliability builds on the Classical Measurement Model; since items do not share a common error source, high inter-item correlations indicate strong links to a single latent variable, thus fulfilling the expectations of homogeneity of scale items implied by the model [31]. A typical measure of internal consistency is Cronbach's coefficient alpha (α), which can be calculated using either covariances or standardised scores. An α of .70 is held as an acceptable lower bound for group data, but much higher standards, in the range of .90, must apply for instruments that will be used for assessing individuals (medical condition, aptitude, etc.) [31][12]. For a scale to be internally consistent, it follows that its items need to be highly intercorrelated [31], but this poses a problem when developing an index; excessive collinearity amongst indicators makes it difficult to separate each indicator's contribution to the latent construct, and there is a risk of introducing trivial redundancy in the instrument. This is why the maximum Variance Inflation Factor (VIF) for items constituting an index should be below 10 [32].
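Both statistics mentioned here are straightforward to compute. The sketch below uses hypothetical Likert-type item data (numpy only): α is computed from the item-variance form of the formula, and the VIF by regressing each indicator on the remaining ones.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def max_vif(items: np.ndarray) -> float:
    """Largest Variance Inflation Factor: VIF_i = 1 / (1 - R^2_i), where
    R^2_i comes from regressing item i on the remaining items."""
    vifs = []
    for i in range(items.shape[1]):
        y = items[:, i]
        X = np.delete(items, i, axis=1)
        X = np.column_stack([np.ones(len(X)), X])      # add intercept column
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1 - resid.var() / y.var()
        vifs.append(1 / (1 - r2))
    return max(vifs)

# Hypothetical 5-item Likert block for 200 respondents, driven by one latent score.
rng = np.random.default_rng(7)
latent = rng.normal(size=200)
items = np.clip(np.rint(3 + latent[:, None] + rng.normal(size=(200, 5))), 1, 5)
print(round(cronbach_alpha(items), 2), round(max_vif(items), 2))
```

For data like this, α lands well above the .70 group-data bound while the maximum VIF stays far below the collinearity threshold of 10 mentioned above.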


Cronbach's α is, however, criticised: its value increases as the number of items in the instrument grows [12] and, finally, it is easy to manipulate into the appearance of increased reliability through progressive item deletion8 [31]. There are alternative ways to estimate a measurement instrument's reliability. For example, by splitting equivalent item scales into two sets, administering both sets to the same respondents, and comparing the obtained scores [12][31]. Splitting does not necessarily involve separate administrations of the instrument [31]. Inter-rater agreement, measured using Cohen's kappa (κ) coefficient, is an alternative form; it tells the frequency of exact agreement between respondents that exceeds what would be expected by chance [31]. The Test-retest reliability method measures fluctuations in scores between administration occasions and is sometimes referred to as instrument stability [7][31][12]. [31] concludes that despite its shortcomings, α remains a useful estimation of instrument reliability.
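Cohen's κ can be computed directly from two raters' (or two administrations') categorical answers as κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance. A sketch with invented toy ratings:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b) -> float:
    """Agreement beyond chance: kappa = (p_o - p_e) / (1 - p_e)."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(a, b)
    p_o = np.mean(a == b)                               # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c)         # chance agreement
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical: two raters classify 10 responses as 1 = "agree", 0 = "disagree".
a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
b = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1]
print(round(cohens_kappa(a, b), 2))
```

For the toy data above, the raters agree on 8 of 10 responses (p_o = 0.8) while chance alone would give p_e = 0.52, so κ = (0.8 − 0.52)/0.48 ≈ 0.58.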

Eight steps in instrument (scale) development

[31] describes eight steps for scale development and corresponding guidelines for each. From clarifying what will be measured (Step 1), to item generation, choice of recording format and review by experts (Steps 2–4), to considerations like including validation items (Step 5), piloting the survey, evaluating the items, to optimising the scale length (Steps 6–8). Even though these relate specifically to scale development, the guidelines are equally applicable in instrument development where causality is reversed. A few of these recommendations are elaborated below.

Item generation (Step 2)

When generating items, one should keep in mind that each item is in itself a test of the strength of the latent variable and should reflect the scale's purpose; therefore, non-trivial redundancy increases the reliability of the set of items [31]. According to the author's experience, mixing positively and negatively worded items in long questionnaires tends to confuse respondents rather than prevent acquiescence bias. Generating items may be guided by literature reviews, theoretical models, qualitative findings or similar [25].

Recording format (Step 3)

Likert-type measures, or "Likert scales", are a recording format commonly used in instruments that measure attitudes, opinions and beliefs, see e.g. [71][80][63]. Items are declarative sentences with several equidistant options to mark agreement or endorsement of the statement, e.g. from "strongly disagree" to "strongly agree". A neutral midpoint is commonly included. See Figure 9 below.

1. Exercise is an essential component of a healthy lifestyle

(a) 1 Strongly Disagree · 2 Moderately Disagree · 3 Mildly Disagree · 4 Mildly Agree · 5 Moderately Agree · 6 Strongly Agree

(b) 1 Completely True · 2 Mostly True · 3 Equally True and Untrue · 4 Mostly Untrue · 5 Completely Untrue

Figure 9: Example item and two alternative response spectrums: (a) even number of options, and (b) uneven number of options. The latter allows for a neutral response. Adapted from [31] p. 129

8 Relates to deletion of items that decrease the portion of error in an instrument, instead of improving its


Level of measurement

Related to the recording format is the actual level of measurement. Perhaps the clearest explanation of the level of measurement in a Likert-type measure is by [31], who states that Likert-type measures capture ordinal (ranked) continuous data, ordinally. This is because researchers generally assume equidistance between response options: "...(when equidistance is assumed), an ordinal-level measure becomes an interval-level measure with discrete categories." [12], p. 24.

Whether an attitudinal response represents ordinal or interval data has been debated for a long time [80][24]. Interval (continuous) data comes with more powerful parametric analysis opportunities, and it is therefore tempting to run this type of analysis on Likert-type measure data. But this is a questionable practice, as it is difficult to transform an opinion into a number that carries the same meaning across the population [31][71]. The controversy is resolved by [71]; the authors state that also ordinal (read: "ordinally captured") data from e.g. Likert-type measures can be analysed in parametric fashion when it is formed as a composite score of unweighted average scores of all items within a scale (read: "construct"). The essential distinction is that a construct is composed of several individual items, as opposed to each item (question/statement) being a "scale" on its own.

Phrasing to elicit variation of opinion

According to [31], statements in Likert-type measures should be created to capture differences of opinion; therefore, they should be fairly strong. The idea is that the choice of response option will be the mechanism that moderates or "declares" the strength of the opinion. The author states that, ideally, a statement should be worded such that typical respondents will tend to mark an option near or at the midpoint of the range of options. This way of phrasing statements will capture a wider score variance, meaning the item will have a better chance to correlate well with other items in the instrument (which in turn has a positive effect on instrument reliability). Related to how items are phrased, according to [9], experience measuring instruments should use questions that capture what actually happened instead of asking for satisfaction with an event: "the emphasis is on asking patients whether or not, or how often, they have experienced certain care processes, rather than rating aspects of the care treatment."

Piloting the survey (Step 6)

Instrument development is an iterative process [31][12][25][97]. It is recommended to administer the candidate instrument to a large enough sample of representative respondents; however, what constitutes a large enough sample has no clear-cut answer [14]. [31] states that the sample size is determined by the number of scales to be extracted from the items; for example, for a single scale from a 20-item pool, he recommends 300 subjects (a 15:1 subject-to-item ratio). Both [14] and [31] offer rules of thumb made by other authors in the past, where subject-to-item ratios range from 5–15:1, up to a sample size of 300 individuals, at which point the ratio criterion may be relaxed to a degree. Beyond the sample size, the composition of the "development" sample is also important. Non-representativeness may be due to quantitative (level of attribute present) or qualitative (unlikeness in important ways, an atypical sample) differences from the target population [31].
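The cited rules of thumb can be condensed into a small helper. This is only a sketch of the heuristic (the function name is illustrative; the 5–15:1 ratios and the ~300-subject point come from the sources cited above):

```python
def recommended_sample_size(n_items: int, ratio: int = 10, ceiling: int = 300) -> int:
    """Subject-to-item ratio heuristic for piloting a single-scale instrument.

    Ratios of 5-15 subjects per item are cited in the literature; once the
    sample reaches roughly 300 subjects, the ratio criterion may be relaxed.
    """
    if not 5 <= ratio <= 15:
        raise ValueError("ratio should lie in the cited 5-15 range")
    return min(n_items * ratio, ceiling)

print(recommended_sample_size(20, ratio=15))   # 20-item pool at 15:1 -> 300
```

This reproduces the worked example in the paragraph above: a 20-item pool at a 15:1 ratio gives the recommended 300 subjects.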


Optimise instrument length (Step 8)

The final guideline by [31] is about optimising the length of the instrument, keeping in mind that there is a trade-off between brevity and reliability. The strategy is to examine the items that contribute least to internal consistency [12][25]. The extent of shared variance with other items is the item's communality; items with low communalities are candidates for exclusion. A split sample can be used to cross-validate the optimised scale; unequal-size split samples are acceptable, provided the larger sample is used during scale development [31]. [12] goes a step further in instrument development by also examining the relationships between excluded items. The author does this in order to identify or rule out the existence of other latent constructs. Investigating excluded items and their relationships is an application of the General Factor Model (see Measurement models). As stated earlier, the General Factor Model accepts a complex causality between indicators and latent variables.
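An item's communality can be estimated from its factor loadings as the sum of squared loadings across the retained factors. A hypothetical sketch (the loading matrix below is invented and assumed to come from an already-fitted factor analysis):

```python
import numpy as np

def communalities(loadings: np.ndarray) -> np.ndarray:
    """Sum of squared loadings per item = variance shared with the factors."""
    return (loadings ** 2).sum(axis=1)

# Hypothetical loading matrix: 4 items x 2 retained factors.
loadings = np.array([
    [0.80, 0.10],   # item 1: loads on factor 1
    [0.75, 0.05],   # item 2: loads on factor 1
    [0.10, 0.70],   # item 3: loads on factor 2
    [0.20, 0.15],   # item 4: low communality -> exclusion candidate
])
h2 = communalities(loadings)
print(np.round(h2, 2))
```

Items like item 4 above, which share little variance with the retained factors, are the candidates for exclusion discussed in Step 8.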

3.4 Information processing & PX

In a service-oriented worldview, outcome and experience are inextricable (see The service concept). The quality of interaction, e.g. intimacy, flexibility, feeling valued, courtesy and competence of staff, has a strong effect on the customer (service user) experience [49][16]. Boulding et al. found a strong correlation between staff-patient communication (R=0.85 and R=0.7 for nurses and doctors respectively) and overall patient satisfaction [16]. Instruments that measure PX routinely include information and communication themes, as these have been found to be important to patients [21]. It is clear that interpersonal communication affects how we perceive and rationalise situations and make judgements, but why should how we partake of information have such a strong effect? To answer this, it is necessary to briefly go through how we process information.

3.4.1 Communication

Communication is both the conveyance of information and the forming of a common understanding about the information. The communication process involves a sender, a message, a medium and a recipient, alongside elements of feedback and noise. Problems with any of the communication elements decrease the quality of communication [56]. Communication may be preceded by intent by one or more of the parties, but may also be unconscious [84]. Physiologically, information is processed and assimilated in stages; we perceive information through the senses (e.g. audio, visual, touch, smell), and decode information for sense-making, storage and retrieval [88]. Sense-making involves processing new information alongside information that has been accumulated through earlier exposure, and storing and adjusting one's own mental model (see below). Information processing steers both conscious and unconscious actions. When multiple senses are engaged, enabling cross-referencing and linking, information is strongly integrated. Strong integration, repeated exposure and repetition increase the strength of remembrance and the practical utility of information [55][88]. Spoken instructions in unfamiliar settings are demanding, but visual cues or storytelling improve retention over time [82].

Mental models

The human information processing system explains how we perceive, learn and use information. Of particular interest are the internal mental models we create to explain and rationalise what we perceive around us and about ourselves, about our role in a situation. Internal models are not necessarily complete or accurate, but they are still purposeful, since they help us conceptualise ideas and function in the world [62]. Mental models thus help us not only to understand but also to plan our actions, and to act. Our models are continuously evaluated and refined as more experiences are gained and are essential in learning. Information that is well structured can facilitate the forming of purposeful mental models [55][62]. Johnston and Clark talk of “Service in the mind”, which corresponds to a mental model of a service (a phenomenon of interest). Each individual, be it staff, patient or companion, will have a different image of what is intended (service design) and what is experienced (service delivery) [49].

3.4.2 Barriers to communication

It is understood that no communication between us is completely incomprehensible, nor can it ever be completely comprehensible; however, shared world views, language and culture facilitate understanding and agreement [56][84]. In the study of intercultural communication, the degree of "interculturalism" in communication is commonly explored along the variables: a) code, meaning


Referential distance is the size of the difference that exists between individuals along communication variables. Short referential distances increase the possibility of successful communication. [84]. It is reasonable to think of referential distances as noise and interference in the communication process, e.g. wrong choice of words or symbols, wrong choice of medium, etc. Also self-image/projected image (attitude), pitch, pauses, gestures, posture and other non-verbal cues may introduce noise in interpersonal communication [56][45]. "Noise" (Referential distance) may predispose individuals to reject the sender, the message or both. [56][38].

Communication in hierarchies

Interaction between individuals contains a form of role-playing where one’s self-image is adjusted throughout the interaction. According to Berne’s Transactional Analysis (TA) framework (included in [38]), people assume one of three roles when engaged in communication: that of a child, that of a parent or that of an adult. Ideal communication happens between individuals who assume the role of adults; this implies communication where parties stand on equal terms and is inherently open and respectful (I am OK – You are OK). In contrast, a child-role, communicating with a parent-role or an adult-role, may desire to please or seek reassurance (I am not OK – You are OK). A parent-role, communicating to a child-role or an adult-role, may demand obedience or determine that the other party behaves with obstinacy (I am OK – You are not OK). The TA framework invites reflection on how individuals communicate both with superiors, externals (e.g. a patient) and colleagues including subordinates. [38].

Healthcare organisations are strictly hierarchical. Studying how nurses, patients and doctors communicate shows that hierarchy makes it difficult to uphold the ideals of communication on equal terms (adult to adult). In a study by [45], communication difficulties were attributed to differences in training; doctors were trained to be brief and succinct, whereas nurses and bedside staff were trained to be verbose and descriptive. The difficulties communicating across hierarchies led to poor job satisfaction. The study also found that nurses were hesitant to voice recommendations to doctors because nurses did not feel it was their place to do so. Similarly, a North American survey by [75] about errors, stress and teamwork in medicine and aviation found that errors in medicine (emergency room and operating room) are difficult to discuss for many reasons, including fear of loss of personal reputation, fear of the threat of malpractice, and going against the expectations or egos of other team members. The study found that the perception of teamwork, or how well collaboration between doctors, anaesthesiologists and nurses works, is not reciprocated across hierarchies; doctors consistently rate collaboration with non-doctors higher than non-doctors rate collaboration with doctors. Senior staff were also found to be more reluctant to take input from junior staff [75].

The power discourse


3.4.3 Improving communication in healthcare

SBAR (Situation, Background, Assessment, Recommendation) is a method within Crew Resource Management (CRM), which first emerged in the military and aviation industries. SBAR was developed as a means to improve communication and has since been adapted to the healthcare sector. The method consists of structuring verbal communication according to a fixed pattern, as shown in Figure 10 below.

Figure 10: SBAR communication cards (in Swedish), available from SKL. (a) Acute situations; (b) non-acute situations; (c) for patients. Each card walks the speaker through the four SBAR steps: Situation (state your own name, title and unit, the patient’s name and age, and the reason for contact), Background (a brief, relevant medical history, including current problems and treatments, allergies and infection risks), Assessment (current status and vital signs, and what you believe the problem is) and Recommendation (the proposed action, the time frame, and confirmation that both parties agree). The patient card, developed by Sveriges Kommuner och Landsting in August 2011, prompts the patient to state why they are contacting care and their expectations for the visit, their medical history and current symptoms, and what they themselves need to do and remember after the contact.

SBAR facilitates active listening and empowers each party to think through their communication and to focus it on relevant information, delivered in an objective, predictably structured manner. SBAR has been found to reduce risk by reducing errors in communication. It has also been found to help personnel overcome false perceptions of hierarchical entitlement [45][93].
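The fixed S-B-A-R pattern can be thought of as a simple message template. The sketch below (not part of the thesis or of SKL’s material; all names and the example content are hypothetical) illustrates the four headings as a data structure rendered in the prescribed order.

```python
from dataclasses import dataclass

# Illustrative sketch: an SBAR handover represented as a data structure.
# Field names follow the four SBAR headings; the example content below
# is invented for illustration only.

@dataclass
class SBARMessage:
    situation: str       # S: who is calling, about whom, and why
    background: str      # B: brief, relevant history up to now
    assessment: str      # A: current status and suspected problem
    recommendation: str  # R: proposed action, time frame, confirmation

    def render(self) -> str:
        """Render the message in the fixed S-B-A-R order."""
        return "\n".join([
            f"S (Situation): {self.situation}",
            f"B (Background): {self.background}",
            f"A (Assessment): {self.assessment}",
            f"R (Recommendation): {self.recommendation}",
        ])

msg = SBARMessage(
    situation="Nurse on ward 3 calling about patient X, age 29.",
    background="Admitted yesterday after an uncomplicated delivery.",
    assessment="Temperature 38.9 C, pulse 110; I suspect an infection.",
    recommendation="Please review within 30 minutes; I will re-check "
                   "vital signs every 15 minutes until then.",
)
print(msg.render())
```

The point of the fixed ordering is that the listener always knows which kind of information comes next, which is precisely how SBAR reduces communication errors.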

In Sweden, the Swedish Association of Local Authorities and Regions, SKL9, launched SBAR at its 2010 Patient Safety Conference [81]. While SBAR is now an established part of the healthcare profession, SKL extends its use and recommends that patients, too, adopt SBAR in their communication with healthcare professionals10 [65][78]. The unvoiced expectation that patients communicate using the SBAR model places a great responsibility on laypeople, who must first discover and learn SBAR communication. [86], in contrast, advocates a patient-first, patient-centred philosophy for healthcare professionals and their communication, which suggests that the patient should not carry the burden of ensuring good communication. The author concludes that there must, by necessity, exist a power imbalance in the relationship between patient and healthcare, where the “power” position is held by the patient and her needs: “The care dialogue on equal terms demands an unequal relationship, the sub-ordination of the caregiver to the patients’ needs and situation” [38] p. 92 (translated by the author).

The importance of communication, and of addressing problems in communication (attitude, reception, professionalism in encounters, ethical behaviour, bedside manners), is evidenced in the training curricula for healthcare professionals. The prestigious universities Karolinska Institutet and Lunds universitet, for example, have established professorships in medical ethics [68].

9 SKL: Sveriges Kommuner och Landsting

References
