
The overarching purpose of this chapter has been to give a brief overview of a couple of general approaches to the study of complex systems that have influenced the research presented in this thesis. Keeping the overview brief was difficult because the “paradigms”, systems theory in particular, are very broad. In addition, views often diverge regarding the content of the paradigms, and this review does not claim to address all relevant characteristics and diverging claims about them. It is especially important to note that these disciplines should not be seen as mutually exclusive approaches to understanding and analysing complex systems. On the contrary, many principles are acknowledged by all of the disciplines, and in many senses they overlap or complement each other and can be used in conjunction. The clearest commonality between the disciplines is the shift from reductionistic thinking to holistic thinking. All of the disciplines can be said to follow the “systemic principle”,

which “emphasises the understanding of the interactions between agents [or components] and looks for the nature of the relationships between them, instead of decomposing the system into parts, to study them independently” (Le Coze, 2005).

With the holistic thinking common to the three disciplines comes a concern for capturing emergent behaviours; what differs between the approaches is the means of capturing these behaviours. In system dynamics, for example, top-down approaches are used, in which causal relationships between variables on the same system level are investigated and modelled. In agent-based simulations, on the other hand, bottom-up approaches are used, in which the units, and the relationships between units, at a lower system level are specified in order to arrive at conclusions about emergent, system-level properties. Proponents of each approach sometimes argue that using the opposing approach inhibits the possibility of capturing emergent properties. No single approach is, however, believed to be preferable in all situations; instead, the appropriateness of an approach must be decided in the context of the particular situation and problem at hand;

however, using both approaches in parallel when analysing a complex system may very well provide complementary insights into the system’s function.
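The bottom-up idea can be illustrated with a minimal sketch. The following example is purely illustrative and not taken from the thesis: each cell in a one-dimensional automaton follows a single local rule (its next state is the XOR of its two neighbours, elementary rule 90), yet a global Sierpinski-triangle pattern emerges that is nowhere encoded in the rule itself. In the terminology above, the units and their local relationships are specified, and the system-level property is observed.

```python
# Illustrative bottom-up model (not from the thesis): purely local rules
# produce an emergent global pattern.

WIDTH, STEPS = 33, 16

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # a single active cell in the middle

history = [row]
for _ in range(STEPS - 1):
    # Local rule: a cell's next state depends only on its two neighbours.
    row = [row[i - 1] ^ row[(i + 1) % WIDTH] for i in range(WIDTH)]
    history.append(row)

# The emergent, system-level pattern (a Sierpinski triangle):
for r in history:
    print("".join("#" if c else "." for c in r))
```

No description of a triangle appears anywhere in the rule; the pattern is a property of the system as a whole, which is exactly what a bottom-up specification aims to reveal.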

The field of risk and safety engineering is today largely influenced by the systems movement and can be seen as a branch of systems engineering (Haimes, 1998). However, the focus in this field has not always been on system-level properties. Earlier, the focus of risk management and related paradigms was on the reliability of, and risk associated with, individual components and devices (Saleh and Marais, 2006).

Saleh and Marais argue that it was during the 1970s that system-level safety, reliability and related properties became the focus of study. Much of the development took place in the oil, gas, chemical and nuclear industries, where the Rasmussen report (Rasmussen, 1975), mentioned in chapter 1, was one of the pioneering efforts.

Still, there are researchers who argue that many of the methods for risk analysis are not able to capture “systemic accidents”. Hollnagel, for example, argues that systemic models of accidents, which have implications for risk analysis as well, go beyond causal chains and try to describe system performance as a whole, i.e. they view safety as an emergent property, where “the steps and stages on the way [to an accident] are seen as parts of a whole rather than as distinct events” (Hollnagel, 2004). It is of interest not only to model the events that lead to the occurrence of an accident, as is done in, for example, event and fault trees, but also to capture the array of factors at different system levels that contribute to the occurrence of these events: factors stemming from the local workplace; factors at the management, company, regulatory or governmental level; and factors associated with social norms and morals. The main point of the systems-oriented approach to accidents, which is also acknowledged by Leveson (2004a), is that a single factor is seldom the only “cause” of an accident; more commonly, the “cause” stems from a complex set of factors and their interactions.

In conclusion, it is argued that all three of the general approaches described above can be seen as frameworks for understanding and analysing complex systems. The systems of interest in the field of risk and emergency management often involve elements and sub-systems of various types: social, technical, natural, organisational, biological, and so on. Any approach used for analysis in such a context needs to be able to incorporate these multidisciplinary aspects of risks and emergencies. The described approaches, taken separately or used in conjunction, provide methods, tools, concepts and a vocabulary for addressing such systems, and they have been an important source of influence on the present thesis, as will be seen in the following chapters. Furthermore, it is very clear today that we need well-developed and appropriate methods in order to gain insight into systems with extensive interactions among their components. “Pure brainstorming” and intuition alone are simply not sufficient for analysing the risks and vulnerabilities of such complex systems, because even a small number of feedback loops, dependencies and interdependencies makes it difficult to grasp how a system’s behaviour will change over time.
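The point that even a single feedback loop can defeat intuition can be made concrete with a small sketch. The model below is hypothetical and chosen only for illustration: a stock is adjusted toward a target of 100 units, but each correction takes three time steps to arrive. Intuitively one might expect the stock to rise smoothly to the target; instead, the delayed feedback produces a large overshoot followed by damped oscillations.

```python
# Hypothetical one-loop system-dynamics sketch (illustrative values):
# a stock corrected toward a target, with a 3-step delivery delay.

TARGET, DELAY, STEPS = 100.0, 3, 40

stock = 40.0
pipeline = [0.0] * DELAY        # corrections ordered but not yet arrived

trajectory = []
for _ in range(STEPS):
    stock += pipeline.pop(0)            # a delayed correction arrives
    order = 0.5 * (TARGET - stock)      # feedback: order half of the gap
    pipeline.append(order)              # ...which arrives DELAY steps later
    trajectory.append(stock)

peak = max(trajectory)
print(f"peak stock {peak:.1f} (target {TARGET:.0f})")  # overshoots the target
```

The decision rule ignores the corrections already in the pipeline, so the system keeps ordering after enough is already on its way; the stock climbs well past 100 before the negative corrections take hold. With several interacting loops, dependencies and delays, such dynamics become correspondingly harder to anticipate without an explicit model.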

5 Operational definitions of risk and vulnerability

The concepts of risk and vulnerability are central to the present thesis; however, since they are applied across a wide range of research disciplines and professions, interpretations of them often vary. Efforts to develop standard, “universal” definitions of such concepts seldom succeed. An example is the effort initiated by the Society for Risk Analysis to define risk, where a committee laboured with the concept for four years before giving up, concluding that “maybe it’s better not to define risk. Let each author define it in his own way, only please each should explain clearly what that way is” (Kaplan, 1997). Not clearly defining the concepts when they are being used can be a serious problem. This chapter therefore aims to provide such clarification by presenting definitions of both risk and vulnerability. It is not claimed that the proposed definitions are the only ones possible, or the best ones in every situation. However, experience with the definitions, gained in the research group of which the author is a part, has been positive. In addition, the definition of risk that will be described has been frequently used in the risk field for more than two decades.

This chapter primarily reviews how the concepts of risk and vulnerability are commonly used in the fields of risk and emergency management. In addition to reviewing the concepts, the chapter describes how they, especially vulnerability, have been developed in the research group of which the author has been a part. This development mainly stems from a perceived need to develop and operationalise the concept of vulnerability and to clarify the important role played by values in defining, analysing and evaluating risks and vulnerabilities.

The definitions given in this chapter are operational definitions. An operational definition is a special type of definition that has its roots in the methodological position advocated by Percy Bridgman at the beginning of the 20th century (Ennis, 1964). What characterises an operational definition is that it provides an operation, procedure or method that can be used to measure (if quantitative) or characterise (if qualitative) the concept of interest. Operational definitions are common in the social sciences, since the concepts investigated there are often quite abstract, such as the psychological states of happiness or distress, or the distribution of power in groups. In such cases, and in many others, it is important to be specific about the concepts; as Ennis argues, “concreteness is one of the virtues of operational definitions” (Ennis, 1964). Furthermore, defining concepts in an operational way provides a means for other scholars to understand them and how they are employed in a particular study. Operationalising abstract concepts also makes it easier for an external researcher to critically review scientific work, which in turn will increase the quality of a scientific discipline in the long run. Operational definitions also exist in the hard sciences, such as physics, where elementary entities such as length and mass can be defined in operational terms. Since many of the concepts used in the field of risk and vulnerability analysis are not straightforward, operational definitions are very useful there too: they can guide the development of methods for measuring or characterising a concept, guide the measurement or characterisation of a concept for a specific system of interest, or guide the evaluation of an existing method for measuring or characterising a concept.

It is important to note that in this thesis a distinction is made between a method for measuring risk or vulnerability and an operational definition: the operational definition provides an ideal or theoretical way of measuring the concept, whereas a method provides a practical way of complying with the operational definition, at least approximately. In practice, many aspects affect the appropriateness of different methods for complying with the operational definition, such as the resources available, the analysts’ competence, and the scientific knowledge available about the system of interest. Which method to choose, if several methods are available that all comply with the operational definition, is thus contingent on the particular situation and system being studied.