
A New Manhattan Project? Interoperability and Ethics in Emergency Response Systems of Systems

Monika Buscher, mobilities.lab, Lancaster University, UK, m.buscher@lancaster.ac.uk

Markus Bylund, Swedish Institute of Computer Science, Kista, Sweden, bylund@sics.se

Pedro Sanches, Swedish Institute of Computer Science / Royal Institute of Technology, Kista, Sweden, sanches@sics.se

Leonardo Ramirez, Fraunhofer Institute for Applied Information Technology (FhG-FIT), Sankt Augustin, Germany, leonardo.ramirez@fit.fraunhofer.de

Lisa Wood, School of Health and Medicine, Lancaster University, UK, l.a.wood@lancaster.ac.uk

ABSTRACT

In this paper we discuss ethical challenges arising around IT-supported interoperability in multi-agency emergency management and explore some methodological responses.

Keywords

Interoperability, emergency response, systems of systems, ethics

INTRODUCTION

Post-disaster reflections on response efforts across the globe highlight coordination and collaboration as problematic. The difficulties are often described as symptomatic of a lack of ‘interoperability’, referring to ‘the ability of different organisations to conduct joint operations’ (NATO, 2006). Systems of systems approaches, which integrate multiple technical and organisational systems, promise some reprieve:

A system of systems exists when a group of independently operating systems—comprised of people, technology, and organizations—are connected, enabling emergency responders to effectively support day-to-day operations, planned events, or major incidents. (US Department of Homeland Security, 2004)

Systems of systems innovation in information technologies is characterized by autonomy, connectivity, diversity, and emergence (Jamshidi, 2011) and requires ‘interoperation not only at the mechanistic level, but also at the levels of system construction and program management’ (Morris, Levine, Meyers, Place, & Plakosh, 2004). The drive for low-level integration embeds interoperability between emergency agencies, local authorities, government, military and volunteer organizations and their information systems into broader contexts of ‘smart cities’, ubiquitous healthcare, and ‘ICT services for a resilient society’ (Maeda et al., 2010). Citizens, equipped with mobile digital devices and smart cards, and registered in – to name but a few systems – electoral rolls, electronic health records (EHR), and social media networks naturally generate information that can be used for a range of different purposes, from e-government, to corporate services, to crisis management. However benignly intended, these visions have a dark side.

A NEW MANHATTAN PROJECT?

Calls by the Pentagon’s Defense Science Board (DSB) for a ‘New Manhattan Project’ of advanced information technology (2004, in Crang & Graham, 2007) cast a shadow of such darkness over the efforts of the designers and practitioners involved in the design of interoperable emergency management systems of systems, ourselves included. We are reminded of Robert Oppenheimer, a physicist at the centre of the original Manhattan Project, who has become a symbol for the folly of thinking that scientists or designers can control the consequences of innovation. Oppenheimer’s story highlights the dilemma of moral responsibility in science. While the original Manhattan Project aimed to create weapons of mass destruction, the aim of the new Tracking, Tagging and Locating (TTL) Manhattan Project would be global mass surveillance. Calls for integration between diverse information systems are motivated mainly by the fact that terrorists are able to disappear ‘in an environment of one in a million’, and only the power to process the vast amounts of personal data from global populations might allow identification of suspicious patterns and practices to ‘give the United States the same advantages in asymmetric warfare [as] it has today in conventional warfare’ (DSB 2004, in Crang & Graham 2007: 800). To our knowledge the DSB’s suggestions have not been implemented, and there is no concerted New Manhattan Project of TTL. Moreover, even with extensive TTL interoperability, the kind of advantages the DSB envisages are unlikely to materialize. Complete situation awareness is impossible,

omniscience is elusive. As anyone who has ever tried to resolve a simple billing dispute will know, even the telephone company lacks enough internal coordination to make sense of its data to you. … Generally, as information becomes more and more abundant, clear views through it become less and less possible.

(McCullough 2004: 15, cited in Crang & Graham 2007: 813)

However, many individual and distributed innovation projects in smart city, ubiquitous healthcare, resilient society, or emergency management systems of systems may still amount to an inadvertent Manhattan Project of disorganized, yet deeply transformative mass surveillance. Moreover, coupled with new capabilities of Big Data processing (Cohen, Dolan, Dunlap, Hellerstein & Welton, 2009; Zhang, Zhao & Li, 2013), such technological advances may slide societies into unprecedented levels of surveillance and an erosion of civil liberties (Dennis & Urry, 2009; Lyon, 2002), where social sorting splinters societies and increases exclusion (Graham & Marvin, 2001). For example, in their analysis of ubiquitous healthcare systems, which utilize personal digital devices and wearable bio-sensors to monitor individuals’ health status and support interventions, Brown & Adams (2007) show that privacy, agency, equity and liability are serious ethical challenges. New uses of personal data trigger foundational transformations of human agency, autonomy and responsibility. For example, they blur the boundary between lifestyle and healthcare, allowing doctors, policy-makers and insurers ever deeper influence in people’s everyday affairs, while responsibility for error in cases of software failure or misuse cannot be determined easily, leading to transformations of medical liability.

Surveillance studies like these try to understand how the increasingly interoperable ways in which personal data is used affect people and populations. They problematize the roots of surveillance, expose relationships between surveillance and power, outline means of resistance and discuss the place of new technologies. Concepts such as bias and social sorting – i.e. the ways that classifications made by surveillant systems affect the life chances of the data subjects – transcend the individualistic definitions of data control that traditional privacy mechanisms (such as encryption and role-based access control) embody, requiring broader innovation in social and organizational practices and regulatory frameworks. Surveillance studies raise a series of questions: Is the price worth paying? How much do researchers, commercial developers, practitioners who commission and implement such systems, policy-makers and the public know about the dynamics of unintended consequences? How can they take more responsibility? What are alternatives? How might emergency responders and designers of systems of systems for emergency response better understand and control undesirable unintended consequences?

Interoperability and Ethics in Emergency Management Systems of Systems

As far as we are aware, there are no comprehensive studies specifically aimed at understanding ethical issues of systems of systems interoperability in emergency management. Apart from surveillance studies of smart cities, cloud computing and ubiquitous healthcare, there are, however, attempts to build sensitivity for ethical issues into the design of individual Emergency Management Information Systems (EMIS) (Jillson, 2010).

Jillson (2010) discusses ethical opportunities, such as the capability of EMIS to extend surge capacity, to maximize availability and enable more equitable distribution of services, and to enhance risk communication. But the informational and communicative advances EMIS can enable complicate adherence to core ethical principles of beneficence (do no harm), respect for human dignity, and distributive justice (equal access). Jillson specifically considers public health emergencies and asks how EMIS might support people in

balancing individual right[s] to privacy with the need for information on which to base protection of the public health, including defining “confidentiality” in practice – are infectious diseases included [in the provision of medical confidentiality] or not? Should public health agencies and other emergency response agencies have access to individual electronic medical records? How should informed consent ... be applied when records may be lost and the situation requires urgent decision making?

In their current use of IT, emergency responders often err on the side of caution when faced with such questions, especially in multi-agency collaboration, choosing not to share data. Fragmentation of response through ‘silo-thinking’ is a common result (Cole, 2010). Paradoxically, this is, at least partially, a result of the very capability of information systems to enable data sharing. They enable others to monitor professional communications and decisions, including decisions over data sharing. In an environment where such data can be treated as evidence and attract blame and punishment, people are likely to be reluctant to take risks.


Novel support for ‘balancing’ individual rights with the demands of protecting the public might include measures that help emergency responders and members of the public better understand the positive and negative consequences of both sharing and not sharing data. And systems that monitor the communications and decisions of professional responders could be embedded in cultures of responsibility that better recognize the complexities of vulnerabilities (Perrow, 2011), as well as employ mechanisms of anonymization and forgetting. But to develop such support, technology designers must gain a better understanding of actual, real-world practices.
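To make ‘anonymization and forgetting’ more concrete, the following is a minimal sketch, not part of any deployed BRIDGE component, of how a pseudonymisation vault with a retention window might work: shared records carry only pseudonyms, and destroying the vault’s secret salt after the retention period severs the link back to identities. The class, method and parameter names (PseudonymVault, pseudonymise, forget_if_expired, the 72-hour retention) are our own illustrative assumptions.

```python
import hashlib
import os
import time


class PseudonymVault:
    """Links identities to pseudonyms; destroying the salt 'forgets' the link."""

    def __init__(self, retention_seconds: float):
        self._salt = os.urandom(16)      # secret that ties identities to pseudonyms
        self._created = time.time()
        self._retention = retention_seconds

    def pseudonymise(self, identifier: str) -> str:
        """Return a stable pseudonym, e.g. for a casualty's ID number."""
        if self._salt is None:
            raise RuntimeError("link to identities has already been forgotten")
        return hashlib.sha256(self._salt + identifier.encode("utf-8")).hexdigest()[:16]

    def forget_if_expired(self) -> bool:
        """After the retention window, destroy the salt and sever the link."""
        if time.time() - self._created >= self._retention:
            self._salt = None
            return True
        return False


# Illustrative use: agencies exchange triage status under pseudonyms during the
# response; once the retention window lapses, re-identification is no longer possible.
vault = PseudonymVault(retention_seconds=72 * 3600)
record = {"patient": vault.pseudonymise("19850412-1234"), "triage": "amber"}
```

Forgetting is sketched here as deletion of the linking secret rather than of the shared records themselves, so aggregate situation data remains usable while re-identification becomes impossible once the response phase is over.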

DESIGNING FOR AN ETHICS OF EMERGENCE

In the domain of crisis management, calls for more interoperability in the face of difficulties with communication, coordination and collaboration have inspired a wave of innovation. The BRIDGE project, for example, where some of the authors of this paper seek to inform design, develops middleware for dynamic assembly of systems of systems. By supporting emergent interoperability (Mendonça, Jefferson, & Harrald, 2007) between existing and legacy systems (such as police and healthcare databases, ambulance dispatch systems) and novel systems (such as crisis informatics apps on personal mobile devices, or environmental sensors), new architectures for large-scale multi-agency collaboration will be developed. A first experimental implementation and evaluation took place in September 2012. During an exercise where a train caught fire in a tunnel, fire fighters and emergency medical teams deployed ad-hoc ‘mesh’ networks, attached e-triage devices to ‘victims’ and explored a set of ‘master’ and ‘risk analysis’ tools. The prototype novel systems were strung together – conceptually and in parts physically – by a prototype BRIDGE middleware. Evaluation was carried out through a series of ‘cold runs’, where the professionals involved in the exercise, an ‘End-user Advisory Board’ (comprising high-ranking European professionals from the police, fire and medical emergency services and industrial producers of emergency response technologies), and a group of technical experts were presented with the individual ‘systems’ and an explanation of the middleware functionality. This was followed by a ‘hot run’ where some of the novel systems were deployed during the fire fighting and rescue operations.

Figure 2 Video screenshots from the first BRIDGE Project Evaluation
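As a rough illustration of what ‘dynamic assembly’ of a system of systems can mean at the software level, the sketch below shows a hypothetical adapter registry in which legacy and novel sources are wrapped behind a common query interface and attached or detached at run time. This is a simplified sketch under our own assumptions, not the actual BRIDGE middleware API; the names SystemAdapter, Assembly, ETriageFeed and common_picture are illustrative.

```python
from typing import Dict, List, Protocol


class SystemAdapter(Protocol):
    """Common interface wrapping a legacy or novel system for the assembly."""

    name: str

    def query(self, area: str) -> List[dict]:
        """Return situation-relevant records for a geographic area."""
        ...


class Assembly:
    """Registry that lets responders attach and detach systems at run time."""

    def __init__(self) -> None:
        self._adapters: Dict[str, SystemAdapter] = {}

    def attach(self, adapter: SystemAdapter) -> None:
        self._adapters[adapter.name] = adapter

    def detach(self, name: str) -> None:
        self._adapters.pop(name, None)

    def common_picture(self, area: str) -> List[dict]:
        """Merge whatever the currently attached systems can contribute."""
        picture: List[dict] = []
        for adapter in self._adapters.values():
            picture.extend(adapter.query(area))
        return picture


class ETriageFeed:
    """Stand-in for a feed of e-triage devices on a mesh network."""

    name = "e-triage"

    def query(self, area: str) -> List[dict]:
        # A real deployment would read from the mesh-networked devices.
        return [{"source": self.name, "area": area, "casualties": 12}]


assembly = Assembly()
assembly.attach(ETriageFeed())
print(assembly.common_picture("tunnel-section-3"))
```

The point of the sketch is that each attach or detach decision is also an ethical decision about which data flows are brought into the response, which is precisely what the living laboratory work aims to surface.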

The reasoning behind involving practitioners in such ‘living laboratories’ (that is, as functional as possible experimental implementations of systems of systems innovations in the context of as realistic as possible real work contexts) is that with the appropriation of such new technologies, new work practices evolve, which need to be anticipated. Ideally, BRIDGE systems and middleware should support future work practices in a way that is sensitive to new, emergent ethical challenges and opportunities. The hope is that ethical challenges and opportunities will surface concretely in experimental assembly and appropriation of systems of systems and enable design for an ethics of emergence that matches the scope of such systemic innovation. For example, if emergency responders discover that existing or novel external systems – such as insurance or transport management databases, medical health records, or surveillance cameras mounted on unmanned aerial vehicles – might aid in the production of situation awareness, the BRIDGE middleware should allow (simulation of) such an assembly. And the practicalities of assembling and of using the resulting system of systems should raise ethical questions in a very concrete manner, as well as generate ideas about how data protection might be managed practically and technically, e.g. through privacy by design and accountable data mining protocols (Langheinrich, 2001; Weitzner et al., 2008; Buscher, Wood & Perng, 2013). However, while some such questions and ideas did arise from the first BRIDGE evaluation, the harvest was too thin to sufficiently explore ethical issues arising in practice. In the discussion below, we explore methodologies that may be able to support a broader and deeper form of ethically sensitive co-design of systems of systems.
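Before turning to those methodologies, one way such accountable handling of personal data could look in code is sketched below: a wrapper that records who accessed which record and for what stated purpose, in the spirit of information accountability (Weitzner et al., 2008). It is a minimal sketch under our own assumptions; AccountableStore, access and audit_trail are illustrative names, not a published protocol.

```python
import json
from datetime import datetime, timezone
from typing import Callable, List


class AccountableStore:
    """Wraps a data source so that every access leaves an auditable trace."""

    def __init__(self, fetch: Callable[[str], dict]):
        self._fetch = fetch
        self._log: List[dict] = []

    def access(self, record_id: str, requester: str, purpose: str) -> dict:
        # The access is not blocked up front; it is made accountable after the fact.
        self._log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": requester,
            "what": record_id,
            "why": purpose,
        })
        return self._fetch(record_id)

    def audit_trail(self) -> str:
        """Return the usage log, e.g. for review by a data protection officer."""
        return json.dumps(self._log, indent=2)


# Illustrative use: an incident commander pulls a medical record during the response;
# the stated purpose becomes part of the record of the response itself.
store = AccountableStore(fetch=lambda rid: {"id": rid, "allergies": ["penicillin"]})
store.access("patient-042", requester="commander-1", purpose="triage decision")
print(store.audit_trail())
```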

Designing for an ethics of emergence through a methodological ethics of emergence

Clearly, even simulated real-time functional integration of existing and novel databases and systems is nigh on impossible as part of living laboratories. Nevertheless, some form of experimentation with systems of systems is necessary to grasp the complex and dynamic interdependencies between information flows concretely enough to understand and address ethical challenges and opportunities. Engineers, prospective users and other stakeholders, including members of the public involved in the development of surveillant systems, need experience of such systems to be able to consciously contribute to the dialectic between social practices and new technology, to shape transformations from an ethical and reflective standpoint, and to create socio-technical systems that support ethical conduct. The gap between ethically-sensitive design methods and the growing body of knowledge about challenges documented in surveillance studies calls for new design methods and methodologies that designers of surveillant ICT can use. If living laboratories fail to generate sufficient insight and ideas because simulation of working systems of systems, interoperability and data processing is impossible, how might we otherwise elicit user input regarding ethical issues and social acceptance? We discuss four candidates below.

Firstly, we can use methods that enable system designers and users to envision dystopian systemic effects of the systems they produce. These methods, e.g. value scenarios (Nathan, Klasnja, & Friedman, 2007), developed within the tradition of Value Sensitive Design (Friedman, 1996), allow for accounting of human values in principled and comprehensive ways and are especially useful in early stages of the design process.

Secondly, following Agre’s method of critical technical practice (Agre, 1997), design methods that make strange what is familiar, with the aim of uncovering hidden assumptions, might be employed in conjunction with living laboratories. Critical technical practice workshops and value scenarios can be used to generate contextualized knowledge about the emerging domain which can be translated into non-functional requirements and hypotheses about the likely impact of the systems we develop. Such workshops might explore the topics of trust, privacy, social sorting, marginalization and exclusion, agency, and liability with the following activities:

• Designing for non-users, i.e., users who would be unlikely to use the system

• Designing for settings which are not intended by the designers (e.g. festival management)

• Using the system to accomplish goals other than those originally intended

Thirdly, living-lab style co-design with end-users can be augmented with cultural probes or critical design-inspired activities (Dunne & Raby, 2001; Gaver, Dunne, & Pacenti, 1999), where future users are challenged to experiment with artifacts (probes) that mimic some of the functionalities that the ability to assemble systems of systems might provide in their everyday work practice. The purpose is to stimulate reflective appropriation in everyday contexts and feedback from users based on experiences (rather than just discursive imagination or ethical impact assessment). User evaluations supported by probes can be done iteratively in several phases of the design process, as functionality develops.

Fourthly, design projects produce visual and written documents, describing functional properties, scenarios and use cases for the envisaged systems. By analyzing these documents through the lens of surveillance theory and forms of discourse analysis – a form of ethical ‘archaeology’ (Introna, 2009) – it is possible to expose implicit assumptions and their potentially surveillant dynamics to debate. By continuously inspecting systems under development and the design process at multiple levels and making assumptions transparent, we seek to both create knowledge about societal, political and ethical implications of the technologies we develop, and contribute to the creation of a culture of transparency and accountability. The methods we advocate work towards a methodological ethics of emergence that acknowledges all participants’ situatedness within a force field of complex interdependencies and transformations (Suchman, 2002).

CONCLUSION

We have discussed a range of ethical challenges arising around innovations in systems of systems approaches in the connected fields of emergency management and response, smart cities, cloud computing and ubiquitous healthcare. These innovations make a deliberate or inadvertent New Manhattan Project of mass surveillance a possibility. We have discussed methodologies that – we hope – might allow designers and prospective users to take more informed responsibility in the emergent foundational transformations that are taking shape.


REFERENCES

Agre, P. E. (1997). Toward a critical technical practice: Lessons learned in trying to reform AI. In G. Bowker et al. (Eds.), Social Science, Technical Systems and Cooperative Work: Beyond the Great Divide, 1–17.

Brown, I., & Adams, A. A. (2007). The ethical challenges of ubiquitous healthcare. International Review of Information Ethics, 8, 53–60.

Buscher, M., Wood, L., & Perng, S-Y. (2013). Privacy, security, liberty: Informing the design of EMIS. In T. Comes, F. Fiedrich, S. Fortier, J. Geldermann & L. Yang (Eds.), Proceedings of the 10th International ISCRAM Conference, Baden-Baden, Germany, May 2013.

Cohen, J., Dolan, B., Dunlap, M., Hellerstein, J. M., & Welton, C. (2009). MAD skills: New analysis practices for big data. Proceedings of the VLDB Endowment, 2(2), 1481–1492.

Cole, J. (2010). Interoperability in a Crisis: Human Factors and Organisational Processes. http://www.rusi.org/downloads/assets/Interoperability_2_web.pdf [accessed 15 January 2013]

Crang, M., & Graham, S. (2007). Sentient cities: Ambient intelligence and the politics of urban space. Information, Communication & Society, 10(6), 789–817.

Dennis, K., & Urry, J. (2009). After the Car. Polity Press.

Dunne, A., & Raby, F. (2001). Design Noir: The Secret Life of Electronic Objects. August/Birkhauser.

Friedman, B. (1996). Value-sensitive design. Interactions, 3(6), 16–23.

Gaver, B., Dunne, T., & Pacenti, E. (1999). Design: Cultural probes. Interactions, 6(1), 21–29.

Graham, S., & Marvin, S. (2001). Splintering Urbanism. Routledge.

Introna, L. (2009). Ethics and the speaking of things. Theory, Culture & Society, 26(4), 25–46.

Jamshidi, M. (2011). System of Systems Engineering: Innovations for the Twenty-First Century. Wiley.

Jillson, I. (2010). Protecting the public, addressing individual rights: Ethical issues in emergency management information systems for public health emergencies. In B. van de Walle, M. Turoff, & S. Hiltz (Eds.), Information Systems for Emergency Management (pp. 46–61). New York: Sharpe.

Langheinrich, M. (2001). Privacy by design: Principles of privacy-aware ubiquitous systems. Proceedings of UbiComp 2001 (pp. 273–291).

Lyon, D. (2002). Surveillance as Social Sorting. Routledge.

Maeda, Y., Higashida, M., Iwatsuki, K., Handa, T., Kihara, Y., & Hayashi, H. (2010). Next generation ICT services underlying the resilient society. Journal of Disaster Research, 5(6), 627–635.

Morris, E., Levine, L., Meyers, C., Place, P., & Plakosh, D. (2004). System of Systems Interoperability (SOSI): Final report. http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA455619 [accessed 15 January 2013]

Naphade, M., Banavar, G., Harrison, C., Paraszczak, J., & Morris, R. (2011). Smarter cities and their innovation challenges. Computer, 44, 32–39. IEEE.

Nathan, L. P., Klasnja, P. V., & Friedman, B. (2007). Value scenarios: A technique for envisioning systemic effects of new technologies. Computers and Society (pp. 2585–2590).

NATO. (2006). Interoperability for Joint Operations. http://bit.ly/UoVQP2 [accessed 15 January 2013]

Perrow, C. (2011). The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters. Princeton University Press.

Suchman, L. (2002). Located accountabilities in technology production. Scandinavian Journal of Information Systems, 14(2), 91–105.

US Department of Homeland Security. (2004). The System of Systems Approach for Interoperable Communications. http://1.usa.gov/Y7JYFO [accessed 15 January 2013]

Weitzner, D. J., Abelson, H., Berners-Lee, T., Feigenbaum, J., Hendler, J., & Sussman, G. J. (2008). Information accountability. Communications of the ACM, 51(6), 82–87.

Zhang, C., Zhao, T., & Li, W. (2013). Towards improving query performance of Web Feature Services (WFS) for disaster response. ISPRS International Journal of Geo-Information, 2(1), 67–81.
