
Flight Safety from a Reality-based Systems Approach

Tomas Johansson

Institutionen för juridik, psykologi och socialt arbete, Örebro Universitet

Supervisor: Mats Liljegren
Examiner: Katja Boersma

Degree thesis, PS3111, 2021-05-24

Summary

Traditional Safety Management methods have contributed to an exceptionally high level of flight safety. However, events in the aviation industry and increasing system complexity have raised the point that systems theory can provide complementary perspectives and contribute to flight safety. The purpose of the study was to describe and understand pilots' and managers' experiences of flight safety, risk and adaptation of work practises. Qualitative interviews were conducted with five pilots and five Flight Operations Chiefs/Safety Managers in civil airlines and the Swedish Air Force (Flygvapnet). An inductive thematic analysis method was used. The results identified themes of conflicts and contradictions in the systems studied. The resulting conflict themes were between production and safety, a strong system and the role of the individual, standardisation and pilots' discretionary space, compliance and flexibility, trust and distance, and flight safety culture and flight safety measurement. The systems examined exhibited complex characteristics, which indicated a risk of systems accidents and made system conflicts more difficult to resolve. Unresolved conflicts, for example between production and safety, were connected to organisational and practical drift, in some cases compounded by secrecy. The conflicts appeared differently in different organisations. The results of the study highlight the importance of using qualitative data to gain multiple perspectives, increase system adaptability, and monitor the balance between production and safety. The size of the pilots' discretionary space should be deliberately managed, and the addition of further traditional safety barriers should be evaluated in terms of their impact on system transparency and complexity.


Abstract

Traditional aviation safety management methods have contributed to exceptional aviation safety. Recent aviation events and increasing system complexity have, however, highlighted that a systems theory approach could provide complementary perspectives and contribute to aviation safety. The purpose of the study was to describe and understand pilots' and managers' experiences of flight safety, risk and adaptations of work practises. Qualitative interviews were performed with five pilots and five managers/safety managers in civil airlines and an air force. An inductive thematic analysis method was used. The results identified themes of conflicts and contradictions inherent in the aviation systems studied. The conflict themes were production and safety, a strong system and the role of the individual, standardisation and discretionary space, compliance and flexibility, trust and distance, and safety culture and safety measurement. The systems studied showed complex characteristics, making them liable to systems accidents and making system conflicts more difficult to solve. Unresolved conflicts, such as between production and safety, were connected to organisational and practical drift, in some cases compounded by secrecy. The conflicts appeared differently in different organisations. The study results highlight the importance of using qualitative data to gain a multitude of perspectives, increase system adaptability, and monitor the balance between production and safety. The size of the pilots' discretionary space should be deliberately managed, and added traditional safety barriers should be evaluated by their effect on system opaqueness and complexity.

Keywords: Flight safety, systems safety, qualitative interview, inductive thematic analysis


Acknowledgements

I would like to extend my deepest appreciation and thanks to my mentor Mats Liljegren for his continuous encouragement and support throughout this process. Our discussions, as well as his helpful questions and suggestions, have been highly valuable. A warm thank you to the study participants for taking the time to share personal experiences in interviews, check quotes and provide feedback on the thesis. Thanks also to examiner Katja Boersma and opponents Jennifer and Ida for providing valuable feedback.

I would also like to thank the following for discussions and reflections about the thesis subject in the early stages of the thesis work: Roel van Vinsen, Hans Gordon, Statens Haverikommission (Swedish Accident Investigation Board), and Transportstyrelsen (Swedish Transport Agency).


Contents

Introduction
Theoretical Framework
Systems safety
Complex Systems
Tradeoffs and boundaries of acceptable performance
Normal Accident Theory and Systems Accidents
Organizational and Practical Drift
Work-as-done and other varieties of work
High Reliability Theory
Resilience Engineering
Summary of theoretical framework
Recent empirical studies
Gap between research and practise
Purpose
Method
Design
Author's pre-understanding
Sample
Participants
Data collection
Analysis
Ethical considerations
Results
The Operational Decision Situation
Production and Safety
Trust and Distance
A Strong System and the Role of the Individual
Standardisation and Discretionary Space
Compliance and Flexibility
Safety Culture and Safety Measurement
Discussion
Analysis
Practical implications and management
Results and analysis discussion
Methods discussion
Further research
Conclusions


Flight Safety from a Reality-based Systems Approach

Flight safety is a continuous focus in aviation, pursued through the adoption and refinement of safety enhancements. These include improved technology, human-machine interfaces, regulations, standard operating procedures, safety management systems, flight data monitoring, redundant safety barriers, crew training, and crew resource management (communication, assertiveness, leadership, decision making etc.). The effect is evident in significant declines in accident rates between 2010 and 2019, both globally and in Europe (International Air Transport Association [IATA], 2020a; International Civil Aviation Organization [ICAO], 2021). Commercial aviation is considered Ultra Safe (Amalberti, 2013), with a global safety level of 1.13 accidents per 1 000 000 flights and safety levels in Europe even better (IATA, 2020a). The strategy of an Ultra Safe industry is to control and avoid risk as much as possible (Amalberti, 2013). The Ultra Safe level is achieved in part by reducing the variability of performance through traditional safety management strategies such as standardisation, compliance, quality assurance and crew training in procedures and standards.
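To make the cited rate concrete, the short sketch below converts 1.13 accidents per 1 000 000 flights into the probability of at least one accident over different flight counts. It assumes, simplistically, independent flights with a constant accident rate; the flight counts are hypothetical and chosen only for illustration.

```python
# Illustration only: implications of an accident rate of 1.13 per million
# flights (IATA, 2020a), under a simplifying independence assumption.

rate_per_flight = 1.13 / 1_000_000  # global accident rate per flight

def p_at_least_one_accident(n_flights: int) -> float:
    """Probability of at least one accident in n independent flights."""
    return 1 - (1 - rate_per_flight) ** n_flights

# Hypothetical flight counts (e.g. roughly a pilot career, a large fleet-year,
# and several years of global traffic):
for n in (20_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} flights -> P(at least one accident) = "
          f"{p_at_least_one_accident(n):.4f}")
```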

At the same time, with both technical systems development (more efficient components, improved warning and awareness support systems, more efficient flight planning systems and computerized performance calculations) and non-technical development (increased efficiency in utilizing resources such as crew and aircraft, increased procedural specification, increased commercial success, an increasing amount of air traffic), the already highly complex aviation environment keeps growing even more complex. Dekker (2011) argues that the human ability to construct advanced technological and organisational systems has developed faster than our understanding of how complex systems interact and fail. Hollnagel and Woods (2005) observed that technical growth and automation have resulted in a self-reinforcing complexity cycle: technical development increases productivity but also complexity, which leads to more opportunities for malfunctions. When redundancies are then built into the system to protect against failures, complexity increases further. This creates a vicious cycle driving towards ever greater complexity.

The Covid-19 crisis has required significant adaptation on all levels in an airline, and while the focus of this thesis is not adaptations to the Covid-19 crisis, the crisis has highlighted that aviation is highly influenced by its environment and therefore must be able to adapt. An example is that the number of unstabilised approaches (approaches to land that do not meet predetermined safety criteria, increasing the risk of runway excursions) increased by a factor of three during the first months of the pandemic (IATA, 2020b), only to decrease towards the end of 2020 (IATA, 2021). The reason for this is not entirely evident even in hindsight, but it seems that the complexity of the aviation system made it possible for the acute Covid-19 crisis to reverberate, in unnoticed and intractable ways, through the system and into the airline flight decks.

Several recent high-profile accidents have arguably been the results of system processes, including Air France 447, where an Airbus 330 stalled and crashed into the Atlantic Ocean, as well as the two Boeing 737 MAX accidents of Lion Air 610 and Ethiopian Air 302. In the Air France 447 accident, it has been demonstrated that it was not the pilots but rather the sociotechnical system that lost situational awareness, leading to the accident (Salmon et al., 2016). In the case of the two Boeing 737 MAX crashes, it has been concluded that the accidents happened because of a combination of system factors such as production pressure, faulty design assumptions about human performance, a culture of concealment, and lack of independence of the regulator from the aircraft manufacturer (U.S. House Committee on Transportation & Infrastructure, 2020). These accidents could not have been avoided by better pilot compliance with regulations; to understand and prevent future similar accidents, the complexity of the aviation system must be recognised. The European Union Aviation Safety Agency (EASA, 2020), in its plan for aviation safety 2021-2025, states that "the latest accidents and serious incidents and the massive worldwide impact of the COVID-19 pandemic on the aviation system underline the complex nature of aviation safety and the significance of addressing human and organisational factor aspects" (p. 27). EASA (2020) concludes there is a need to improve systemic safety through improved safety management.

Despite this, traditional safety management strategies remain major parts of the International Air Transport Association's Safety Strategy (IATA, 2020a). Of the six focus areas of the IATA Safety Strategy shown in table 1, three contain reducing performance variability through standardization, training and compliance assurance, two relate to individual hazards, and one relates to specific measures aiming to develop Safety Management Systems (SMS) (IATA, 2020a). Standardisation, compliance and testing of individuals are more prominent than strengthening organizational capacities.

Amalberti (2013) suggests that there is an over-optimization of traditional solutions to manage risk in Ultra-Safe systems, leading to the system becoming highly unadaptive.

Traditional safety management strategies have taken aviation to an Ultra Safe level, but Amalberti (2013) argues that further optimization of present strategies will not lead to increased safety. This author proposes that consequences of traditional safety management, such as reduced performance variability and reduced adaptability, can lead to decreasing safety levels. Safety management based on individual hazard assessment, added technical and procedural barriers as a response to incidents, and efforts to reduce performance variability seem to make the system insensitive to processes that lead to accidents emerging in the absence of serious breakdowns and unrelated to identified individual hazards. Amalberti (2013) suggests complementing traditional hazard and barrier risk management by pursuing qualitatively different ways of managing safety. The author suggests that one important aspect of intervening effectively in safety is to "gain and effectively communicate a systemic view of the risks facing the enterprise" (Amalberti, 2013, p. 121). Systemic is different from systematic, in that it is a view on a system level rather than a systematic view of individual hazards. For Ultra Safe systems, Amalberti (2013) argues that resilience, the ability to successfully adapt to exceptional conditions,

automatically disappears as a result of using the traditional tools that enhance safety in industry and service sectors. In many cases it would be preferable [...] to reintroduce it in ultra-safe systems (because it would permit adaptation to exceptional situations, an ability which by this stage has largely been lost). (p. 78)

In summary, there is a critique (Amalberti, 2013; Dekker, 2011) of traditional methods of safety management, which are said to reduce variability of performance through highly specified procedures and compliance, to locate the need for improvement predominantly at the individual level, and to control individual hazards with individual barriers, all the while increasing complexity. The most influential global association of airlines, IATA, seems to focus only on traditional safety management strategies. In the remainder of this section, the systems safety approach will be introduced, together with some of its most influential theories, highlighting how such an approach could provide a complementary perspective beyond traditional safety management and contribute to aviation safety.

Table 1

IATA Safety Strategy Priority areas (IATA, 2020a).

Focus area                            Primary means
Reduce operational risk               Technology, training, compliance, incident analysis
Enhance quality and compliance        Audits and quality control programmes
Improved aviation infrastructure      Improving regulations, awareness of specific hazards
Consistent SMS implementation         Safety culture surveys, exchange between organisations
Effective recruitment and training    Pilot aptitude testing and pilot training


Theoretical Framework

Systems safety and the relationship to psychology

Systems theory or systems thinking is a perspective that emphasises holism, processes and interactions across levels of a system or in subsystems, rather than reductionism focusing on system components (Dekker, 2011). Systems safety is a perspective within safety science embracing systems thinking and applying it to safety critical work and industries. Within a systems safety approach, a system has been defined as "the deliberate arrangement of parts (e.g. components, people, functions, subsystems) that are instrumental in achieving specified and required goals" (Hollnagel & Woods, 2005, p. 3). Chapanis (as cited in Wilson, 2012) provides a similar definition: "an interacting combination, at any level of complexity, of people, materials, tools, machines, software, facilities and procedures designed to work together for some common purpose" (p. 3862). At the essence of a system are thus interactions, relationships and goal-directedness. System models focus on how different structural, relational, technological and task characteristics interact across the individual, group and organisational levels and beyond (Dekker, 2011).

Systems safety is a multi-disciplinary area of research drawing on disciplines such as psychology, sociology, ergonomics, engineering and economics, as well as general systems theory and cybernetics. Psychology, as the science of human mind and behaviour, is a core discipline in safety science because human functioning and interactions play a vital part in the outcomes of sociotechnical systems. Psychology contributes specifically with knowledge on organizational, group and individual levels, as well as the interactions between these levels, for example through organizational culture and climate, group processes, social psychological influences, and attributional and perceptual processes.

For example, safety climate, which is an aspect of the psychological construct organizational climate, has been the subject of much research focusing on organizational factors that predict and influence safety outcomes. It has been found that certain safety climate aspects, such as management safety involvement, can predict safety related outcomes (Zohar & Luria, 2005), which is in line with a systems safety perspective highlighting that errors are not random but systematically connected to the operational environment (Dekker, 2014). At the same time, it has been difficult to scientifically prove a causal connection between aspects of safety climate and accident levels (Chmiel, 2009). From a systems perspective this is not surprising, since stable cause-effect relationships are difficult to capture in complex systems. Psychology as a scientific discipline may need to be integrated with other disciplines in a systems approach to describe and understand complex sociotechnical systems. It is coherent with an integrative tradition in psychology to take advantage of several theories and even different scientific disciplines to explain human and organizational functioning.

Complex Systems

In a complex system there is high interdependence between a large number of components, activities and functions. Cilliers (in Dekker, 2011) has formulated six main characteristics of complex systems: (1) it is an open system, meaning that it is constantly influencing and being influenced by the environment; (2) system components respond locally to information available to them, according to their local rationality; (3) no component has the capacity to represent the complexity of the whole system in itself; (4) inputs need to be made constantly by system components in order to keep the complex system functioning; (5) complex systems have a history or a path-dependence, where past events and processes influence how the system works today; (6) interactions in complex systems are non-linear, so that a small change or failure can lead to a large consequence.

The openness of a complex system means there is a constant need for the system to adapt to outside influences. Because of the many interactions and activities in a complex system, it is intractable, i.e. its internal functioning is difficult or impossible to fully describe (Flach, 2012). A complex system therefore needs to be self-organising, and components must use their local rationality and respond to changes by adapting their interactions with the system. Complexity arises from the large networks of interactions resulting from the components' local behaviour, where effects spread through the system in non-linear ways. Therefore, reductionistic linear cause-effect models are insufficient to understand a complex system (Dekker, 2011).
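A toy simulation can make the non-linearity point concrete. The sketch below is an invented illustration, not a model from the thesis or from Cilliers: each failed component may trip any other component with a small coupling probability, and nearly identical coupling strengths yield radically different cascade sizes. All names and parameter values are assumptions made for the example.

```python
# Toy cascade model (invented for illustration): a single local disturbance
# either damps out or spreads system-wide depending on coupling strength.
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def cascade_size(n_components: int, coupling: float) -> int:
    """Seed one failure, then let each failure trip coupled components."""
    failed = {0}        # one small initial disturbance
    frontier = [0]
    while frontier:
        current = frontier.pop()
        for other in range(n_components):
            if other not in failed and random.random() < coupling:
                failed.add(other)       # failure propagates via an interaction
                frontier.append(other)
    return len(failed)

# Small changes in coupling, disproportionate changes in outcome:
for c in (0.0005, 0.005, 0.05):
    print(f"coupling {c}: {cascade_size(200, c)} of 200 components affected")
```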

A complicated system is not necessarily complex. In a complicated (but not complex) system there might be many components, but the relationships between them are linear and system outcomes can be fully accounted for by closely studying system components (Dekker, 2011). Cause and effect can be understood and predicted as the sum of the component behaviours. In a complex system, on the other hand, outcomes emerge from system interactions and cannot be reduced to the behaviour of system components (Dekker, 2011). The human component or human error often gets to "carry the explanatory and moral load of accidents" (Dekker, 2011, p. 76). This is in line with psychological research on the fundamental attribution error: humans have a systematic tendency to attribute other people's errors, or group failures, to personal factors, while attributing their own mistakes to situational factors. In addition, hindsight bias makes known outcomes seem more probable in hindsight and makes us use the outcome to judge the process rather than evaluating the process itself (Woods et al., 2010). In this way, we protect our own sense of competence, professionalism and control of our own outcomes and exposure to risk. Nevertheless, such simplistic and judgemental explanations as "human error" can only ever be valid if there is a direct cause-effect relationship between component and system functioning. Because outcomes emerge from system interactions, identifying and rectifying a faulty component or hazard, or replacing a human, will likely not be enough to reduce the risk of future problems in a complex system. A systems approach can give perspectives on how system factors such as structures, change processes, collective beliefs, interactions between functions, resources, and multiple interacting organisational goals contribute to system outcomes. Since no component can have complete knowledge of the whole system at a given time, it is beneficial to use multiple perspectives to describe a complex system (Dekker, 2011). When drawing conclusions from events or about how a complex system functions, it is therefore not sufficient to try to describe "objectively what actually happened" or to use only one source of information on the system. One also needs to understand the different perspectives and experiences of actors. Any empirical or theoretical account will be one view contributing towards understanding a complex system better. Several different systems safety theories will therefore be explored in the following sections.

Tradeoffs and boundaries of acceptable performance

Rasmussen's (1997) model of boundaries of acceptable performance describes and visualises the tradeoffs between three main types of system objectives: safety, workload and financial (see figure 3, p. 73). Gradients from the three objectives form the area where the operation can take place. This is called the discretionary space and is the area in which the system allows and provides the capacity for sharp-end operators to use their expertise and judgement to make decisions at their discretion. The operator is guided in adapting to the situation by criteria such as cost, workload and risk of failure (Rasmussen, 1997). The size of the gradients determines the available discretionary space. Strong gradients can also make the system migrate. For example, strong gradients from financial objectives lead the system to migrate towards the safety and workload boundaries. The boundaries represent the points where safety margins are exhausted and an accident can happen, where workload becomes unmanageable, or where finances are so strained that the system breaks down.

Since, for the system to function, constant inputs must be made in a complex system to respond to the local situation and environment, decisions and the associated tradeoffs must be made all the time in many different parts of the system, and the search for a solution will depend on criteria set by the gradients (Rasmussen, 1997). This is in line with findings in social psychology that human behaviour is indeed, in many situations, more affected by external influences than by general personal attitudes (Wicker, 1969). According to Rasmussen (1997), accidents happen not because of faulty components, but because failures emerge from unexpected interactions that the system controls cannot cope with and that push the system over the boundary.
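The migration idea can be caricatured in a few lines of code. The sketch below is a one-dimensional reading of Rasmussen's (1997) model, not an implementation of it: an operating point starts with a full safety margin, and a cost-efficiency gradient that outweighs the safety counter-gradient slowly drifts it towards the safety boundary. Every parameter value is invented for illustration.

```python
# A minimal caricature of migration under Rasmussen-style gradients
# (my own simplification; every number below is an invented assumption).

safety_boundary = 0.0   # margin exhausted: accidents become possible
margin = 1.0            # current distance from the safety boundary

cost_gradient = 0.08    # yearly pull towards efficiency, eroding the margin
safety_gradient = 0.03  # weaker yearly pushback from safety management

for year in range(1, 21):
    margin += safety_gradient - cost_gradient  # net drift towards the boundary
    if margin <= safety_boundary:
        print(f"year {year:2d}: boundary of acceptable performance crossed")
        break
    print(f"year {year:2d}: remaining safety margin {margin:.2f}")
```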

Normal Accident Theory and Systems Accidents

Normal Accident Theory (NAT) by Charles Perrow (1999) describes how certain structural characteristics make some types of systems liable to breakdown due to unexpected interactions in the system, despite significant redundancies. NAT classifies systems as either tightly or loosely coupled and as either low or high in interactive complexity. Coupling has to do with how dependent and time sensitive different system components are on each other. In a tightly coupled system, interdependent components or technical subsystems may require inputs in certain sequences and there is little slack. Interactive complexity describes whether interactions are linear and tractable or non-linear with difficult-to-understand causal mechanisms, and therefore complex, such as having common-mode connections, so that one failure can cascade through interactions in the system and create unexpected outcomes.

Perrow (1999) describes how system control design should be a function of the two dimensions of NAT. An interactively complex system should be controlled at the local level, because a centralized control component cannot capture a complete understanding of the whole complex system, and because the dynamic nature of the system requires constant and timely inputs. A tightly coupled system should be controlled centrally, because of the dependencies and time-sensitivities between different components.
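As a compact restatement of this control-design rule, the sketch below encodes NAT's two dimensions and the recommendation each combination implies. The encoding is my own, made for illustration; Perrow (1999) does not, of course, express the theory in code.

```python
# NAT's two dimensions and Perrow's control-design rule, restated as a small
# lookup (my own encoding for illustration, not part of the theory itself).
from enum import Enum

class Coupling(Enum):
    LOOSE = "loose"
    TIGHT = "tight"

class Interactions(Enum):
    LINEAR = "linear"
    COMPLEX = "complex"

def control_recommendation(coupling: Coupling, interactions: Interactions) -> str:
    if coupling is Coupling.TIGHT and interactions is Interactions.COMPLEX:
        return ("contradictory demands (centralise and decentralise): "
                "liable to normal accidents")
    if interactions is Interactions.COMPLEX:
        return "decentralise: local actors must respond to local conditions"
    if coupling is Coupling.TIGHT:
        return "centralise: dependencies and time-sensitivity need coordination"
    return "either control mode can work"

# Aviation is classified in NAT as tight and interactively complex:
print(control_recommendation(Coupling.TIGHT, Interactions.COMPLEX))
```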


The difference between the two kinds of control seems similar to the organizational psychological difference between Weber's traditional machine bureaucracy, with centralized decision making and a high level of formalization, and the professional bureaucracy, where professionals have more autonomy and there is decentralized decision authority, but the operation is still stable enough to allow standardization of many tasks (Kaufmann & Kaufmann, 2010). The machine bureaucracy, where the organization is seen as one mechanical machine, has very low flexibility and adaptability but requires less professional competence than the professional bureaucracy. Too high competence can even be unfavourable for professionals' ability to adapt to work in a machine bureaucracy.

Systems that are both tightly coupled and highly interactively complex are, according to Perrow (1999), liable to a certain kind of accident called Normal Accidents. These can be described as systems accidents, happening because of the properties of the system as a whole rather than faulty components. Thus, according to NAT, systems accidents happen not because of individual operator or design errors. Rather, when the interactive complexity is large and there is no slack in the system to cope with it, unanticipated interactions and consequences can occur. Perrow (1999) is pessimistic about the possibility of applying both centralized and decentralised control at the same time, and therefore advises against designing systems that have too much of both interactive complexity and tight coupling.

Aviation is, according to Perrow (1999), a tightly coupled and interactively complex system and is therefore liable to systems accidents. This is because of "proximity and common-mode connections [...], very limited time for recovery from failures, almost no buffers or redundancies other than those designed-in and subject to failure, limited slack" (Perrow, 1999, p. 132). Invariant sequences in technical systems, and redundancies built in according to regulations rather than as a general slack from which margins can be created ad hoc, add to the tight coupling. The extensive system redundancies and automatic safety systems constructed to cope with risks brought about by complexity in turn increase the interactive complexity (Perrow, 1999).

One critique is that NAT is overly pessimistic in asserting that systems accidents are inevitable in certain kinds of systems. High Reliability Theory proposes that systems or normal accidents can be avoided through a focus on reliability of system components, competent decision authority at the operational level and a good safety culture (Rochlin, 1999). Leveson et al. (2009) agree that NAT is overly pessimistic but put forward that reliability is not the same thing as safety, and that in a complex system a systems approach that looks beyond reliability of components is required and can aid in avoiding accidents even in tightly coupled, interactively complex systems. The strength of NAT seems to be the concepts of coupling and interactive complexity, not whether accidents are "normal" or not.

Coupling has later been developed into a situational and dynamic concept (Snook, 2000), as opposed to the structural coupling Perrow (1999) suggested. This means coupling between processes and components in the system can vary over time and between different situations. In critical situations coupling can increase. The concept of complexity has also advanced beyond technical complexity to include the history of, and interactions between, social, organisational, cultural and political phenomena (Vaughan, 2016). The psychological concept of organizational culture, and change in such culture, is thus introduced into NAT, both as a model for action and as an integrating function creating a common meaning (Kaufmann & Kaufmann, 2010). This can be useful to describe how beliefs, processes and practises in an organisation can change over time. Such change can, according to NAT, be absorbed by slack or redundancy in the organisation. Then, in a critical situation where there is little slack but the coupling between processes tightens, the system may no longer be able to cope (Snook, 2000). Organizational culture and processes can then interact with a dynamic operational situation, in which there are many variables and only limited access to information, and in the absence of failures in individual components create a systems accident, while people indeed are just doing normal work (Snook, 2000).

Organizational and Practical Drift

Change of collective beliefs, attitudes, processes, interactions, practises etc. over time is in safety science often called drift. Drift can be conceptualised as risk attenuation (Pettersen Gould & Fjaeran, 2020), either as organisational drift through social and cultural processes (Vaughan, 2016) or as practical drift, a change in practises over time (Snook, 2000). Organizational drift entails a gradual acceptance of risk through normalization of deviance, affecting both structure and social processes in the organisation (Vaughan, 2016). Organizational drift can be seen as closely related to change in organizational culture, which can develop through shared experiences and a common understanding formed through being together at the workplace (Kaufmann & Kaufmann, 2010). Practical drift happens when, from a local rationality perspective, rules do not fit the actual situation experienced by practitioners, and practises therefore migrate over time (Snook, 2000). Both types of drift describe how the history of the organisation can influence current practises and processes, and lead up to unanticipated consequences.

Work-as-done and other varieties of work

Shorrock (2016) describes varieties of work, distinguishing between work-as-done, work-as-disclosed, work-as-imagined and work-as-prescribed. Work-as-done is the actual activity, how work gets done by sharp-end staff. This can be different from work-as-imagined by regulators, management or outside observers, because of the "multiple, shifting goals, variable and often unpredictable demand, degraded resources (staffing, competency, equipment, procedures and time), and a system of constraints, punishments and incentives, which can all have unintended consequences" (Shorrock, 2016, p. 10). Work-as-done can change through processes of practical drift. According to Shorrock (2016), work-as-done is almost impossible to prescribe precisely, because of the variations, adaptations, tradeoffs and the operational know-how involved in actual work. In a reality where adaptations may be needed to meet demand, but where this at the same time can increase risk, Shorrock (2016) asks if it is ethically right that practitioners routinely need to work around procedures, or be excessively flexible in applying them, to get the work done.

Work-as-prescribed is the rules and procedures that prescribe how work should be done, often written by senior members of an organisation (Shorrock, 2016). It is generally upheld as the correct way to do things and the standard for judging whether performance is satisfactory. However, it is very difficult to prescribe all aspects of human work, to imagine and write a rule for every condition, and to articulate in written linear form exactly how work in dynamic situations shall be carried out.

Work-as-disclosed is how work is described, often by the sharp-end staff performing the work. What is said can be consciously tailored to the purpose of the message, to what the recipient expects, or to the possible consequences of what is said. In addition, a finding in social psychology is that such messages can be tailored to positive self-presentation. Secrecy about how work is done can also be due to a fear of withdrawn resources, sanctions or reduced safety margins. Shorrock (2016) writes that "in an environment where people are punished for trade-offs, workarounds, and compromises, which the staff believe to be necessary to meet demand, then the overlap between work-as-disclosed and work-as-done may be deliberately reduced" (p. 5). On the other hand, where there is mutual trust, there is a larger chance that staff disclose to managers how work is done to a greater extent, even if rules are broken.

Work-as-imagined is how work is imagined, for example from experience of work-as-done, from work-as-prescribed and from work-as-disclosed. Shorrock (2016) argues that all these imaginations are more or less biased, partial, outdated, simplified and/or wrong. This aligns with common findings in psychology that our preconceptions and previous experiences determine a great deal of how we see the world around us. Shorrock (2016) concludes that for analysis, investigations, design or prescribing rules, one must be aware that work-as-imagined, work-as-prescribed, work-as-done and work-as-disclosed to varying degrees do not align.

High Reliability Theory

High Reliability Theory prescribes that safe operations can be reached through reliability of system components combined with competent decision authority at the operational level (Rochlin, 1999). HRT addresses the combination of interactive complexity and tight coupling and the resulting duality between centralised and decentralised control. This is done through detailed procedures to cope with tight coupling, but also flexibility, degrees of freedom and a readiness to act when unexpected things happen. HRT emphasises that there must be decision authority at low levels in the organisation. This decentralisation is deemed successful in High Reliability Organizations through a culture that enhances reliability among the operators and encourages social learning, where operators are constantly developing their skills with the help of each other. Psychological research shows that such experiential and situated learning is more beneficial in challenging job situations and can happen when interaction between participants serves to increase expertise through "a social process of constructing meaning and understanding" (Sonnentag, Niessen & Ohly, 2008, p. 63). Safety stories that the members of the organisation tell, created from operators' experiences, interactions and collective beliefs, are according to Rochlin (1999) the basis for the collective commitment to safety, and hence reliability, through a constant enactment and reproduction of the stories by the operators. Such stories and enactment are part of building a strong organizational culture (Kaufmann & Kaufmann, 2010). HRT in this way accounts for the resulting duality between confidence in stories about a safe organisation and the mindfulness of system state and alertness to act, as well as the duality between the responsibility of the organisation and the responsibility of the practitioner, which is seen as "a constitutive property that enters both as agency and structure" (Rochlin, 1999, p. 1553). The operators have the autonomy and competence to handle situations that arise from unexpected interactions.

As a critique of HRT, Leveson et al. (2009) point out that there is an assumption in HRT that if each person and part of the organisation is reliable, works as it should and has the competence and authority to make good decisions, there will automatically be safety. However, reliability and safety are not necessarily the same thing, especially not in complex systems (Leveson et al., 2009). Accidents in complex systems can happen even when every part of the system is reliably doing what it is supposed to do (Dekker, 2011). Also, it seems difficult to fully engineer high reliability, since the safety stories are interactively constructed collectively by people in the organisation (Leveson et al., 2009). In addition, perspectives from Resilience Engineering highlight that performance variability is required to adapt to the variability of situations that can be encountered in an open and complex system (Haavik et al., 2019). High Reliability Theory is normative rather than adaptive in the Resilience Engineering sense, and reliability often suppresses variability. In HRT, learning increases the familiar system states and attempts to restrict the operation to those states. The resilience present in HRT is therefore a simpler ability to rebound to normal operation after a disruption and to increase robustness by increasing the number of known conditions that the system can handle (Pettersen & Shulman, 2019; Woods, 2015).

Resilience Engineering

Hollnagel (2016) defines that "a system is resilient if it can adjust its functioning prior to, during, or following events (changes, disturbances, and opportunities), and thereby sustain required operations under both expected and unexpected conditions". Hollnagel's (2016) definition includes an ability that lasts over time, includes opportunities as well as disruptions, and the ability to sustain operations by adapting to changes rather than rebounding to normal operation or increasing robustness. In Resilience Engineering (RE), adaptation is therefore more a normal state of affairs than in HRT. Resilience in RE means avoiding brittleness by graceful extensibility, i.e. being able to adjust functioning to cope with events without sudden failure, and the ability to survive changes long-term by sustained adaptability (Woods, 2015). The unit of analysis in RE is a sociotechnical system rather than individuals or even organisations.

For developing graceful extensibility and sustained adaptability, Hollnagel (2009) describes four cornerstones, or basic system capacities, of a resilient sociotechnical system: (1) respond to changes such as disturbances or opportunities by using prepared actions or adjusting current functioning, (2) monitor and know what in the system and in the environment could affect system performance, (3) learn from experience, and (4) anticipate future developments such as demands, disruptions, constraints and opportunities. Sustained adaptability is not limited to safety but includes the long term survival of the system, including other constraints such as financial ones (Hollnagel, 2016). Research in organizational psychology has highlighted the importance of flexible and updated mental models for strategic management, to remain adaptive to the environment (Hodgkinson, 2005).

The concepts of graceful extensibility and sustained adaptability are more about making sure that things go well than about focusing on when things go wrong. They build on the assertion that it is the same processes of adaptation of practises and interactions that lead both to successful adaptations and to failure (Hollnagel, 2016; Woods, 2015). Building on control theory and cybernetics, the concept of variability has been introduced, explaining that unexpected variability in complex systems must be met by self-organisation and variability in performance to maintain safety (Flach, 2012). According to Resilience Engineering, it is therefore not unproblematic to suppress performance variability, as is done in Ultra Safe systems (Amalberti, 2013) and to some extent in High Reliability Organizations (Rochlin, 1999), since a complex system needs constant inputs to remain operative and performance variability is needed to meet unexpected variability in the system. Since it is the same performance variability that explains both such normal system control and system failures (Flach, 2012), an ecological, descriptive research approach to such adaptations in normal work could be useful to increase understanding of these processes.

Functional resonance highlights that variability in several functions or tasks can intensify each other and lead to large system-wide variability in safety. The Functional Resonance Analysis Method (FRAM) is a method increasingly used to describe variability and interactions between system functions, to understand how they are coupled and how variability may lead to unexpected results (Hollnagel, 2012; Salehi et al., 2020). The method uses decomposition, meaning that the system is decomposed into its functions, not into structures or components. While FRAM seems to be a promising way to describe and understand a part of a complex system, the time required to create the models and the complexity of the resulting models affect their usability and the possibility to comprehend them. Thus, it seems not even this state-of-the-art method can defy the complexity of a complex system.

Summary of theoretical framework

A foundation of a systems approach is a breadth of perspectives, and this inherently means using a wide range of theoretical perspectives. Aviation sociotechnical systems are too complex to be portrayed from only one perspective. As Hollnagel (2020) puts it, "the main issue with safety – and the reason why it is an ever-growing problem – is that the lack of safety neither is due to a single factor nor can be comprehended by a single view" (p. 266). The theories presented in this section are complementary, since they have different focuses and incorporate several theoretical backgrounds including sociology, psychology, control theory and general systems theory. In addition, HRT originates from empirical research on aircraft carriers and air traffic control. The different theories focus on a breadth of issues, such as what makes systems complex, influences on decision making and migration of practises and beliefs in a system, the implications of different kinds of system control, the possibility to combine different forms of control to restrict variability in performance, and the ability of systems to adapt in variable conditions.

The theories have in common that they focus on the system rather than the individual, and that they see safety as the presence of system capacities and resources rather than the absence of component failures, individual mistakes or accidents. The theories also focus on work-as-done and normal work carried out, and use a multitude of perspectives to reach a richer understanding of how safety and risks emerge from dynamic actual conditions and contexts.

Recent empirical studies

Adriaensen et al. (2019) built a systems model of the work in an airline flight deck when slowing down during the approach, using a modified FRAM method. The researchers interviewed former pilots of the DC-9 aircraft and studied documents and manuals to build models of work-as-imagined and work-as-done. The method and the resulting model are complicated but nevertheless show how variability in different functions can interact in the flight deck, while maintaining a systemic perspective. It was found that both internal variability, such as a pilot selecting an incorrect speed, and external variability, such as congested airspace, could be coped with through e.g. cross-checks and flight path management. It was found, however, that the variability of an incorrect Zero Fuel Weight propagates to many other functions without detection and leads to incorrect approach and landing speeds. Only flight crew experience, creating a feeling that something is not right in the relation between weight and speed, could catch this variability. Adriaensen et al. (2019) concluded that the non-linear understanding of complex systems that Resilience Engineering and FRAM make possible is appropriate for investigating and describing issues in complex sociotechnical systems. They recommend that, for the resulting model to remain comprehensible, FRAM be used for a limited part of the system, such as particular tasks in an aircraft flight deck.

Studic et al. (2017) built a framework for airport ground operations, based on FRAM methods, to understand ground operations in an aircraft turnaround from a systems perspective. The model afforded a better understanding of the variabilities in different system functions and the coupling between them, which could be used to anticipate hazardous system variability, as well as to retrospectively study incidents and come to different conclusions than a common linear cause-effect based method had done. In a case study it was found that, rather than the cause of an incident being a faulty technical component, what controlled the outcome lay at the authority/regulatory level. Studic et al. (2017) emphasize that a strength of their study was to utilize work-as-done, through interviews and observations, instead of work-as-imagined, to base the analysis on actual conditions.

Three other empirical studies used interviews with middle managers, safety managers and flight safety investigators respectively (Callari et al., 2019; Ioannou et al., 2017; Macrae, 2014) to describe interactions between actors in aviation organisations. The role of middle managers in the management of safety, as self-reported in interviews with 48 middle managers from European aviation organisations, was found to be mainly decision making, information management and influencing other key stakeholders (Callari et al., 2019). Managers said a combination of qualitative and quantitative information is required to manage safety and that information is collected through both formal and informal channels, making use of personal networks or contacts with staff, supervisors and experts. To influence top managers, staff, other departments and external stakeholders, middle managers frequently had collaborations and discussions with people having different perspectives and used non-technical skills such as negotiating, explaining rationales in a clear way and becoming trustworthy. This indicates middle managers use a large amount of formal and informal interaction to manage safety.

Safety Managers were interviewed to identify factors that affect the safety performance of an organisation by hindering safety management or the development of good safety performance indicators (Ioannou et al., 2017). One factor found to hinder safety management was top management decisions, including allocation of resources, failure to show commitment to safety and a feeling that top management do not want to know. Another factor was a lack of Safety Culture and Just Culture, meaning insufficiency or absence of several elements including safety promotion by management, training in identifying hazards, trust, and management attention to issues brought up. The last factor was impractical and fearful data collection, meaning that reporting was not encouraged by management and that a fear of punishment prevailed among staff, which kept them from reporting. While two of the three co-authors of this study (Ioannou et al., 2017) are affiliated with a Middle East airline, the cultural and geographical context of the study sample is not known, and it is difficult to draw conclusions about applicability to a European aviation setting.

Macrae (2014) made a qualitative study, interviewing 39 flight safety investigators as well as observing safety work and reviewing flight incidents. The author describes the expertise and important role of safety practitioners in creating safety and resilience in airlines, not only through analytical work but also in generating action from the organisation. Safety practitioners, after identifying emerging risks, used both discursive practises to set an agenda around safety issues, and reported to managers and other accountable staff to influence through their accountability.

In summary, to the knowledge of the author of the present study, there is limited empirical research into sociotechnical systems in an aviation context from a systems perspective. Interviews with safety managers and middle managers add valuable perspectives on actual interactions and processes in parts of such systems. The studies utilising FRAM, although the method is underspecified and requires specific expertise, add an understanding of how variabilities in functions can be coupled and interact, creating undesired consequences in a system or subsystem. There seems to be a lack of studies crossing system levels in aviation systems. The lack of recent empirical studies does, however, not correspond to a lack of theory, since there is extensive non-empirical theoretical development within the field.

Gap between research and practise

There seems to be a gap between research and practise in safety science. A topic discussed in the special issue on the future of safety science in the academic journal Safety Science, in press at the start of this thesis, was how to close this gap. According to Le Coze (2020), researchers in safety science frame their contributions within the reference of their academic discipline, while safety practitioners rely more on their practical experience. Rae et al. (2020) write that safety practitioners are primarily guided by laws and standards rather than theory and empirical evidence, "given how divorced the theories are from safety practice, and how a-theoretical most safety practice is" (p. 5). The same authors conclude that safety research should be informed by and built on the current practise of safety and operational work.

Rae et al. (2020) present a manifesto on Reality-based Safety Science, with commitments for future empirical safety research. The first commitment is to "investigate work, rather than accidents, as our core object of interest" (Rae et al., 2020, p. 4). Because aviation accidents are extremely rare, a focus on investigating accidents will potentially miss many sources of knowledge, and "consistently leads safety researchers into the trap of drawing conclusions about how work is, and how work should be, based on single instances of work that, by definition, have unusual outcomes" (Rae et al., 2020, p. 4). Reality-based Safety Science recommends that research focus on work and on questions concerning how work happens, how practitioners make sense of what they do, what events are meaningful to them, how events are interpreted, and how work varies in the short and long term.

The second commitment of Reality-based Safety Science is to "describe current work before we prescribe changes" (Rae et al., 2020, p. 4), stating that what can best help develop current practises is research firmly based in those practises. This can be done by "a deep interest in the current practice of safety work and operational work. By describing these things back to safety practitioners in new ways, we seek to give them improved understanding and capability to do their jobs" (Rae et al., 2020, p. 5). Given the gap between research and real world safety practice, these authors assert that descriptive research should for the near future dominate safety science, and only after a more thorough understanding of current practise has been attained should there be attempts at intervention research.

It is seen as important for the development of safety science that future research within the field takes place within its associated disciplines, taking advantage of research practises and advances in those disciplines (Le Coze, 2020; Rae et al., 2020). Psychology is such an associated discipline and is well positioned to contribute to progress within safety science, with a research area spanning individual, group and organisational levels across different contexts, including organisational psychology, work psychology and social psychology, and with developed research methods for both qualitative and quantitative research.

Bergström (2020) argues that there is a need for more descriptive rather than normative research in safety science and cautions against "the reductionist trap of locating system resilience at the level of adaptive, sharp-end staff" (p. 181). Indeed, the majority of Resilience Engineering research is about resilient behaviours on the individual level, in the operator or the manager, rather than resilience on a system level (Bergström, Van Winsen & Henriqson, 2015). As discussed above, several of the key focus areas of the IATA (2020a) Safety Strategy focus on individual resilience, through pilot training and aptitude tests, and on restricting variability by compliance, rather than enhancing the system's ability to provide the adaptive resources and capacities needed. Also, the European Union Aviation Safety Agency (EASA) has made it a mandatory requirement that pilots undergo a resilience development training programme intended to improve individual mental flexibility and performance adaptation (Bergström, 2020). Bergström (2020) highlights that complexity theory is about understanding interactions between levels rather than prescribing the behaviour at one level. He therefore argues that improving resilience should not be about enhancing mental processes and motivation, but about "the logics of organisational work design and structures of organisational power" (Bergström, 2020, p. 181) and how the system provides the adaptive resources and capacities needed for sharp-end staff to adapt to challenging situations.

Purpose

Using a systems approach framework, the present study is an attempt to understand the experiences of work of actors on different levels, including pilots, safety management and Flight Operations Chiefs, within an aviation organisation context. Specifically, the purpose of the study is to describe and thematize pilots’ and managers’ experiences of flight safety, risk and adaptations of work practises. Themes will then be interpreted in the light of complex system theories.

Method

Design

In line with the purpose of the study, a qualitative design with interviews was used. The analysis of the empirical data was done using a thematic analysis method. The resulting themes were then analysed deductively with systems safety theories.


Author’s pre-understanding

This is a master thesis in psychology, 30 ECTS, within the Psychologist Programme at Örebro University, Sweden. The author has an interest in Human Factors and Systems Safety and has been an airline pilot for 12 years, presently an airline captain on the Boeing 737 aircraft type. The combination of studies and experience of the environment studied creates a pre-understanding of the research area. It is assumed that the author as observer plays an active part in creating and interpreting what is observed, and it is therefore necessary to disclose the author's background (Dekker, 2011; Langemar, 2008). It is recognised that the study and the analysis are one account of how the collected data can be understood in relation to the research question and be connected to the chosen theory.

Sample

To increase the qualitative variation in interview data, theoretical sampling was used to choose participants that would contribute to an increased understanding of the complexity of the phenomenon studied (Langemar, 2008). Since the intention was to collect different experiences and perspectives and to cross system levels, perspectives from different organisations and from both active line pilots and management were sought. It was considered beneficial if participants had experience of both normal work in different circumstances and difficult situations, and therefore all participants had considerable flying experience.

The sample was initially drawn from participants with experience from airlines within Europe. The sample was then allowed to grow strategically to increase the qualitative variation in the material, based on collected data and the analysis of the data, which started after the third interview and continued through the data collection process. After three interviews with commercial pilots, it was decided to also include participants from the Swedish Air Force (SwedAF), to see what could be understood from the similarities and differences between these operational environments. Within the criteria, the participants were recruited based on availability, perceived openness to disclose, and consent.

The required size of a qualitative sample depends primarily on how large the qualitative variation in the data is (Langemar, 2008). After 10 interviews, it was judged that saturation and exhaustiveness had been reached in the data in relation to finding a pattern of themes in the data set.

Participants

The study contained a total of 10 participants, all active or former commercial or air force pilots. Seven participants were commercial airline pilots in western Europe and three were pilots in the SwedAF. Eight of the participants were male and two were female. Participants had flying experience ranging from around 10 years to over 30 years.

Five of the participants had experience of a management role, two in flight operations management and three as Safety Managers. Some of the managers combined this role with flight duties as commanders. Two of the participants were active in their management roles during the study and three had held the role 1-5 years before this study.

All five non-management pilots were actively operating as pilots at the time of the interviews. They held the ranks of instructor/training captain, commander/captain and first officer (commercial), and pilot-in-command (SwedAF).

To ensure confidentiality, participants have been assigned a number 1-10 in the results section and a two-letter code for their rank: FO = First Officer (airline), CP = Captain/Pilot-in-command (airline), FV = Air force Pilot-in-command, SM = Safety Manager, FC = Flight Operations Chief (Flygchef). For example, 5CP is participant number 5, who is a captain in an airline. Other titles have been written in English and, where a title has been translated from Swedish, the Swedish title is stated within parentheses.
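As an illustration only, and not part of the thesis method, the participant code convention above can be expressed as a small parsing routine. The rank labels are taken directly from the list above; the function name and structure are hypothetical.

    import re

    # Rank codes and descriptions as defined in the text above.
    RANKS = {
        "FO": "First Officer (airline)",
        "CP": "Captain/Pilot-in-command (airline)",
        "FV": "Air force Pilot-in-command",
        "SM": "Safety Manager",
        "FC": "Flight Operations Chief (Flygchef)",
    }

    def parse_participant_code(code):
        """Split a code such as '5CP' into participant number and rank description."""
        match = re.fullmatch(r"(\d+)([A-Z]{2})", code)
        if match is None or match.group(2) not in RANKS:
            raise ValueError("Unrecognised participant code: " + code)
        return int(match.group(1)), RANKS[match.group(2)]

    print(parse_participant_code("5CP"))
    # -> (5, 'Captain/Pilot-in-command (airline)')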


Data collection

Ten semi-structured interviews were performed in person, by video call or by phone call. A semi-structured interview form gives the opportunity to probe deeper into the participants' answers while still ensuring that relevant areas are covered (Langemar, 2008). Interview guides were created at the start of the project and then modified and developed through the data collection stage. The guides had main questions with examples of follow-up questions intended to probe deeper. The follow-up questions were allowed to vary depending on the participants' answers.

The interview guide for line (non-management) pilots consisted of two main questions: (1) Can you describe a difficult operational situation experienced? (2) Are there adaptations that are part of normal work?

The interview guide for Safety Managers and Flight Operation Managers consisted of four main questions: (1) What is most important in flight safety work? (2) How do you know if the operation is safe? (3) How do you view pilots’ adaptations of normal work and rule compliance? (4) What kind of objectives and tradeoffs are there in the organisation and are these monitored?

The interviews were audio recorded and then transcribed for the analysis, with the exception of one interview that could not be recorded but where detailed and extensive notes were taken. The interviews took approximately 45-90 minutes each. The interviews were de-identified during transcription and the audio files were deleted after transcription.

Most interviews were conducted in Swedish, the common mother tongue of the author and most of the participants. Non-Swedish-speaking participants were interviewed in English. The interviews were transcribed in the original language of the interview, kept in that language throughout the analysis and translated into English as needed during production of the written thesis.
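As a minimal sketch of the de-identification step described above, the following hypothetical routine replaces identifying details with placeholders. In the study this was done manually during transcription; the pattern lists here are invented examples, not the actual redaction rules.

    import re

    # Hypothetical redaction patterns; the real de-identification was done by hand.
    REDACTIONS = [
        (r"\b\d{1,2}:\d{2}\b", "[TIME]"),                      # clock times, e.g. 14:35
        (r"\b(?:Stockholm|Arlanda|Gothenburg)\b", "[PLACE]"),  # example place names
        (r"\b(?:Anna|Erik|Johan)\b", "[NAME]"),                # example first names
    ]

    def deidentify(text):
        """Apply each redaction pattern over a transcript string."""
        for pattern, placeholder in REDACTIONS:
            text = re.sub(pattern, placeholder, text)
        return text

    print(deidentify("We left Arlanda at 14:35 and Erik was pilot flying."))
    # -> "We left [PLACE] at [TIME] and [NAME] was pilot flying."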


Analysis

A thematic analysis has been used, drawing on the methods and the analytical choices described by Braun and Clarke (2006) for thematic analysis in psychology. The present study has been conceptualized from a constructionist epistemology, with an emphasis on the social origin of meaning and experience. A constructionist thematic analysis has been attempted, where the analysis "does not seek to focus on motivation or individual psychologies, but instead seeks to theorize the sociocultural contexts, and structural conditions, that enable the individual accounts that are provided" (Braun & Clarke, 2006, p. 85). The steps of thematic analysis in psychology described by Braun and Clarke (2006) have guided an inductive thematic analysis. While the method is described as a sequence of steps, it has been necessary to move back and forth in the analysis, revisiting and validating steps taken against previous phases and the full data set. The process is illustrated in Figure 1.

Figure 1

Illustration of Analysis

After the interviews had been transcribed verbatim by the author, and the author had become thoroughly familiar with the material, initial ideas were noted down and the data was coded in a systematic way over the entire data set, resulting in 121 codes. The coding was explicitly data-driven at this stage. While it is recognised that the author cannot be free of theoretical and epistemological preconceptions, it was explicitly attempted to stay close to the data and to avoid fitting the data into a pre-existing theoretical framework. An inductive analysis was performed and 19 preliminary themes were created by collating the codes identified. These themes were then reviewed in relation to the data extracts and the entire data set to check that the themes corresponded to the data. From the preliminary themes, some main themes relating to managing and understanding organisational conflicts, on a slightly more latent level, started to become apparent. Because this is a single-author thesis, these themes were validated in discussions with the thesis mentor, who had read and was familiar with the full data set but had not taken part in the coding. This resulted in six main themes involving a conflict or contradiction and one theme describing the operational work situation. The themes were further defined, example data extracts were chosen and analysed, and the themes are presented in the results section.

A guiding principle for determining what constitutes a theme has been that it "captures something important about the data in relation to the research question, and represents some level of patterned response or meaning within the data set" (Braun & Clarke, 2006, p. 82). As far as possible, overlap between themes has been avoided (Langemar, 2008). However, because of the interactive and open nature of the systems studied and the conflicts and contradictions evident, the themes are not easily delimited, and some overlap has been accepted rather than drawing up artificial boundaries. Care has, however, been taken to ensure that each theme is coherent and has a separate and distinct core. While the themes cover most of the data set, it was not a criterion that they must cover the entire data set. The resulting themes can be seen as one plausible way of constructing themes based on the majority of the data. The research question has to some extent been allowed to develop in parallel with the analysis. The results have then been analysed and discussed using relevant safety science and psychological theories.
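To make the collation step concrete, the following sketch shows the bookkeeping involved in grouping coded extracts under candidate themes. The codes and extracts are hypothetical stand-ins; the actual analysis of the 121 codes was performed qualitatively, not with software.

    from collections import defaultdict

    # Hypothetical coded extracts: (code, shortened data extract).
    coded_extracts = [
        ("commercial pressure", "the airline has this commercial pressure"),
        ("fuel decision", "there is always a pressure to take less fuel"),
        ("deviation in emergency", "you have to go outside SOPs if it is for the safe conduct"),
    ]

    # Hypothetical mapping from codes to candidate themes.
    code_to_theme = {
        "commercial pressure": "Production and Safety",
        "fuel decision": "Production and Safety",
        "deviation in emergency": "Compliance and Flexibility",
    }

    # Collate extracts under each candidate theme and report coverage.
    themes = defaultdict(list)
    for code, extract in coded_extracts:
        themes[code_to_theme[code]].append(extract)

    for theme, extracts in themes.items():
        print(theme, "-", len(extracts), "extract(s)")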


Ethical considerations

The thesis project received internal ethical approval from the course coordinator (kursansvarig) at Örebro University. All participants consented to participation and had received a letter ahead of the interview explaining the purpose of the study, how interview answers would be used, that interviews would be audio recorded, that participation was voluntary and that consent could be withdrawn at any time. Since the interviews were anticipated to involve sensitive material, confidentiality of the participants was guaranteed. No information was saved about the identity of the participants, and any information that risked making a participant identifiable, such as times, places and other identifying details, was removed during transcription. Audio files were deleted immediately after transcription. Respondent validation was used: participants were given the opportunity to review the full thesis, including their quotes, and to comment, for example on misunderstandings, translation issues or risks of identifying participants. One participant commented on a possible risk of misunderstanding, which led to a change of wording.

Results

Seven themes were identified during the inductive thematic analysis. The first theme is the pilot's operational decision situation and pilot interactions in that situation. The other themes relate to conflicts, contradictions, tradeoffs and ambiguities present in the aviation systems studied. These themes are Production and Safety, Trust and Distance, A Strong System and the Role of the Individual, Standardisation and Discretionary Space, Compliance and Flexibility, and Safety Culture and Safety Measurement.

The Operational Decision Situation

Pilots described using procedures, their own experience, colleagues and resources available in the context to make decisions and solve tasks in an operational situation. The aircraft commander has the ultimate responsibility for decisions and is fully accountable for them, both to the organisation and to the judiciary. The aircraft commander is a focal point for information and decision making through a high number of interactions with other actors.

Two out of four commercial pilots interviewed said that medical emergencies, where a passenger needs urgent medical attention and a landing as soon as possible, and which therefore involve difficult tradeoffs, were the most difficult situations they had encountered. Similarly, one air force Flight Operations Chief (10FC) stated that rescue missions were

“probably the most difficult tradeoffs to make decisions about - that we turn back or we do not initiate this mission knowing that this will end very badly for someone”.

These situations seem to have in common an acute tradeoff between the safe operation of the aircraft according to the prescribed procedures and acting to save persons in immediate danger.

In a medical emergency, one commercial pilot stated that rigidly following the procedures, and weighing the airline's expectations heavily, made it more difficult to solve the situation efficiently. Pilot 1CP described:

“such situations are tricky, I would say… when faced with wanting to make many happy, or you want to meet the needs of many. Both this [ill] person, it should go quickly and smoothly, but the airline has this commercial pressure”.

It was expressed that it is easier to handle such situations with more experience, if one does not become preoccupied with possible reactions from the organisation and if one applies the rules in a more flexible manner. In the interest of increasing the chances of survival of the sick passenger by landing as soon as practicable, pilot 1CP explained that procedures that in this pilot’s judgement did not contribute to safety on that day had been disregarded:

“I should personally be able to look at the person who is laying there and needs medical care and say that… I chose to do something completely unnecessary so that you received care later. I would not be able to motivate that, and therefore I omitted what I thought was ... unnecessary [...] I ignored what I knew was not needed [...] I made a quick assessment of what was worth mentioning and not.”

It can be noted that the decision not to comply with certain procedures was taken by the aircraft commander on the commander's own accountability, and had to be kept secret because it was not acceptable to the organisation.

One air force pilot (7FV) said that a situation where people's lives were directly at stake is one where the pilot would consider not complying with rules:

“when it comes to people's lives, I would not hesitate for a second to break a rule. Unless it puts me in dangerous situations, of course [...] people's lives are more important than what's in the papers.”

The same pilot (7FV) continues:

“Say I have someone ill on board and choose to land somewhere where I may not have the right to land, I might go under my minimum fuel or similar if I land there, I exceed the maximum landing weight or... such things, for me that does not matter... if I have someone on board, whose life depends on whether I break the rule or not.”

In addition, it was explained that cooperation with the other pilot, and the other pilot's experience level, had a great influence on the possibility of solving the situation efficiently. Several captains reported that good cooperation with an experienced first officer greatly enhanced the possibility of a good outcome.

One pilot (5CP) had experienced an emergency in which the colleague in the flight deck chose to perform one step of the non-normal procedures in a different way than prescribed and did not communicate this, which caused considerable confusion for pilot 5CP. This pilot emphasised that it is very important to comply with procedures, especially in emergency situations, and that any deviation from procedures must be communicated to the other pilot so that the pilots can have a shared understanding of the situation and the actions taken. There was agreement among the pilots interviewed, however, that pilots have the right to, and in certain situations should, deviate from procedures when required to keep the operation safe, for example in an emergency. Pilot 5CP:

“of course you have to go outside SOPs if it is for the safe conduct of the flight [...] I do not go outside the SOP if it is not needed, but if it's needed, I have to do it”.

Another situation discussed by commercial pilots was the decision about the amount of fuel to take on a flight. There is a tradeoff between taking more fuel, which gives longer endurance and more options in case of, for example, bad weather, and taking less fuel, which lowers the mass of the aircraft and thus the fuel consumption, making the flight more cost efficient and environmentally friendly. Several pilots said that there is always a pressure to take less fuel, but that they try not to be too influenced by it. Several pilots expressed that it is considered important that pilots have the integrity not to let their decisions be overly affected by this pressure. A back-of-the-envelope illustration of this tradeoff is sketched below.
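The sketch below is illustrative only: the cost-of-carriage factor is an assumption (figures in the region of 3-4% of the extra fuel per flight hour are often quoted for narrowbody jets) and does not come from the interviews or the organisations studied.

    # Assumed cost-of-carriage factor: fraction of the extra fuel burned per
    # flight hour simply to carry it. The 3.5% value is an assumption, not a
    # figure from the interviews.
    COST_OF_CARRIAGE_PER_HOUR = 0.035

    def extra_fuel_remaining(extra_fuel_kg, flight_hours):
        """Extra fuel still usable at destination after the burn penalty."""
        return extra_fuel_kg * (1 - COST_OF_CARRIAGE_PER_HOUR) ** flight_hours

    remaining = extra_fuel_remaining(1000, 3)  # 1000 kg extra on a 3-hour flight
    print(round(remaining), "kg remain;", round(1000 - remaining), "kg burned to carry it")
    # -> roughly 899 kg remain; about 101 kg burned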

The pilot's own experience was considered important in decision situations and in the tradeoffs made to keep the operation both safe and efficient. This sometimes means increasing the safety margins above what is required by procedures when, according to the pilot's experience, this is needed. It can also mean applying experience to make what were described as smaller deviations from SOP to increase the chances of meeting other targets such as efficiency, on-time performance or workload constraints.

All participants agreed that it is important to follow the rules and procedures. In the decision situation, the pilots interviewed reported that they follow the rules when there is no adverse consequence of doing so. One pilot said that in an emergency, they would actively try to think about how to solve the situation safely and try not to think too much about what the organisation would prefer the pilot to do. At the same time, this pilot and several other pilots pointed to the importance
