On the nature of open computational systems

ONLINE ENGINEERING

MARTIN FREDRIKSSON

Department of interaction and system design
Blekinge institute of technology

Sweden


Blekinge institute of technology
Dissertation series No. 2004:05
ISSN 1650–2159

ISBN 91–7295–045–5

Published by Blekinge institute of technology

© Martin Fredriksson, 2004

Jacket illustration – In the loop – by Societies of computation laboratories

© Tomas Sareklint, 2004
Printed by Kaserntryckeriet, Karlskrona, Sweden, 2004


This thesis is submitted to the Faculty of technology at Blekinge institute of technology, in partial fulfillment of the requirements for the degree of Doctor of philosophy in computer science.

Contact information

Martin Fredriksson

Department of interaction and system design
School of engineering
Blekinge institute of technology
Box 520
372 25 Ronneby
Sweden


ABSTRACT

Computing has evolved from isolated machines, providing calculative support of applications, toward communication networks that provide functional support to groups of people and embedded systems. Perhaps one of the most compelling features and benefits of computers is their overwhelming computing efficiency. Today, we conceive distributed computational systems of an ever-increasing sophistication, which we then apply in various settings – critical support functions of our society just to name one important application area. The spread and impact of computing, in terms of so-called information society technologies, has obviously gained a very high momentum over the years and today it delivers a technology that our societies have come to depend on. To this end, concerns related to our acceptance of qualities of computing, e.g., dependability, are increasingly emphasized by users as well as vendors.

An indication of this increased focus on dependability is found in contemporary efforts of mitigating the effects from systemic failures in critical infrastructures, e.g., energy distribution, resource logistics, and financial transactions. As such, the dependable function of these infrastructures is governed by means of more or less autonomic computing systems that interact with cognitive human agents. However, due to intricate system dependencies as well as being situated in our physical environment, even the slightest – unanticipated – perturbation in one of these embedded systems can result in degradations or catastrophic failures of our society. We argue that this contemporary problem of computing is mainly due to our own difficulties in modeling and engineering the involved system complexities in an understandable manner. Consequently, we have to provide support for dependable computing systems by means of new methodologies of systems engineering.

From a historical perspective, computing has evolved, from being supportive of quite well defined and understood tasks of algorithmic computations, into a disruptive technology that enables and forces change upon organizations as well as our society at large. In effect, a major challenge of contemporary computing is to understand, predict, and harness the involved systems' increasing complexity in terms of constituents, dependencies, and interactions – turning them into dependable systems. In this thesis, we therefore introduce a model of open computational systems, as the means to convey these systems' factual behavior in realistic situations, but also in order to facilitate our own understanding of how to monitor and control their complex interdependencies. Moreover, since the critical variables that govern these complex systems' qualitative behavior can be of a very elusive nature, we also introduce a method of online engineering, whereby cognitive agents – human and software – can instrument these open computational systems according to their own subjective and temporal understanding of some complex situation at hand.


TABLE OF CONTENTS

PREFACE

Part 1 – INTRODUCTION

Chapter 1 – OUTLINE OF THESIS
1.1 Introduction
1.2 Challenges in dependable computing
1.3 Contributions from the author
1.4 Guidelines to the reader
1.5 Concluding remarks

Chapter 2 – DEPENDABLE COMPUTING SYSTEMS
2.1 Introduction
2.2 General concerns
2.3 Cognitive agents
2.4 Concluding remarks

Chapter 3 – METHODOLOGY OF COMPUTING
3.1 Introduction
3.2 Framework of instruments
3.3 Principles
3.4 Models
3.5 Methods
3.6 Technologies
3.7 Concluding remarks

Part 2 – THEORY

Chapter 4 – ISSUES OF COMPLEXITY
4.1 Introduction
4.2 Evolution of systems
4.3 Isolation
4.4 Adaptation
4.5 Validation
4.6 Concluding remarks

Chapter 5 – OPEN COMPUTATIONAL SYSTEMS
5.1 Introduction
5.2 Model for isolation
5.3 Environment
5.4 Domain
5.5 System
5.6 Fabric
5.7 Concluding remarks

Part 3 – PRACTICE

Chapter 6 – ONLINE ENGINEERING
6.1 Introduction
6.2 Method of adaptation
6.3 Articulation
6.4 Construction
6.5 Observation
6.6 Instrumentation
6.7 Concluding remarks

Chapter 7 – ENABLING TECHNOLOGIES
7.1 Introduction
7.2 Architecture for validation
7.3 SOLACE
7.4 DISCERN
7.5 Concluding remarks

Part 4 – CONCLUSION

Chapter 8 – NETWORK ENABLED CAPABILITIES
8.1 Introduction
8.2 Experimenting with dependability
8.3 TWOSOME
8.4 Benchmark
8.5 Concluding remarks

Chapter 9 – SUMMARY OF THESIS
9.1 Introduction
9.2 Experiences
9.3 Assessment
9.4 Future challenges

Part 5 – REFERENCES

Appendix A – GLOSSARY
Appendix B – NOTES
Appendix C – BIBLIOGRAPHY


PREFACE

Great discoveries and improvements invariably involve the cooperation of many minds.

A. G. Bell

I was first introduced to the wonderful world of computing in the early 1980s. At that point in time, I considered computing a mere area of personal indulgence, i.e., game playing. However, it would not be long until my curiosity regarding the construction of software came into focus. How did people go about creating those entertaining artifacts of computer games? Unfortunately, it was not until the late 1980s that I had the opportunity to get some real experience from construction of software. At that time, I finally managed to get my hands on a Commodore Amiga 500 and this was really one fine piece of machinery. Not only did it support the notions of multitasking and graphical user interfaces; under the hood, it even had specifically tailored chipsets that relieved the central processing unit from media intensive tasks. By means of a few hardware reference manuals and assorted construction tools, I soon indulged myself in the activity of creating and integrating computer graphics, audio, and software into quite entertaining artifacts of my own – demos.1

However, when you get sufficiently skilled in programming, it is typically not long before you start asking yourself certain questions regarding the limits of computing.

Moreover, you naturally also wonder whether there are other individuals out there who ponder the same questions. Through social networking and so-called bulletin board systems, I soon found out that it most certainly was only the individual who set those limits, and that there were plenty of other individuals concerned with pursuing the same questions. In fact, there was a whole underground movement out there, who constantly tried to surpass the known limitations of some particular piece of computing machinery and, at the same time, demonstrate their excellence in a visually tractable and entertaining manner – the scene.2


The most important things that I learned during this time of my life were that technological innovation is pointless if one cannot demonstrate its superiority in an easily accessible format and, moreover, that innovation is best served by a whole team of dedicated individuals. Experiencing those two lessons in such a concrete and practical manner has led me to believe that almost any creative result in computing necessarily has to be produced as a team effort, emphasizing the vivid graphical and aural experience of real world computing phenomena. The potential benefit from such a mindset should not come as a surprise to anyone and yet, during my time as a student of software engineering as well as computer science, I have come to understand that there is a somewhat peculiar reluctance to acknowledge its importance.

During my time as a research student, people have continuously told me that the only thing I ought to concern myself with is to write one scientific report after another. Then, as soon as I had mustered a sufficiently large number of reports, my research education would come to an end and I would become a full-fledged researcher. I consider this state of affairs most unfortunate and, in essence, a philosophy of research education that will lead us to a dead end. In my humble opinion, we ought to educate future researchers with the sole intent of making them skilled in the art of science as well as engineering. That is, I strongly believe that a research student needs to experience as many aspects of this fine art as possible and, preferably, to acquire proficiency in funding, investigations, authoring, demonstrations, development, experimentation, and innovation as a team effort.

This has always been, and still is, my fundamental conviction regarding what I consider the most important activities involved in computer science as well as software engineering research. Consequently, when I met Professor Rune Gustavsson in the mid 1990s, who held a similar conviction close at heart, I decided to pursue the opportunity of an education in his research group – Societies of computation. In doing so, one of the first challenges I was introduced to was the group's need for a demonstrations facility. In my own opinion, this was indeed a very promising start to this thing they call research education. In essence, the group's research programme needed a facility, including personnel, that could provide for demonstrations of proficiency in development, experimentation, and innovation. It would not be long until our first laboratory was born – Societies of computation laboratories.3

Ever since the start of my research education, until this very day, it is almost exclusively through the challenges and opportunities in managing this laboratory that I have gained experience as a student of research. Of course, as is required, the monographic thesis presented herein is based on a number of publications that were authored or co-authored by myself (see Appendix B – Notes – for their individual abstracts4–10), and I sincerely hope that they convey at least some of my experiences over the years. However, this sufficient number of publications was not produced as a matter of focusing on refining my proficiency in authoring, but rather from demonstrating real world phenomena of computing – as a cooperative effort involving a whole team of dedicated individuals and organizations. I would therefore like to express my everlasting gratitude toward all parties involved, for helping me acquire the experience and confidence that I certainly will come to depend on as a practitioner of research.

Firstly, I would like to express my gratitude toward the agencies and organizations, in no particular order, that certainly made the trip as efficient and pleasant as I could ever have hoped for: The knowledge foundation (KKS), The Swedish agency for innovation systems (VINNOVA), Societies of computation (SOC), Societies of computation laboratories (SOCLAB), and Kockums AB (KAB). Secondly, there were past or present individuals in these organizations who, by means of most capable suggestions, guidance, and help, have been instrumental in ensuring that I would reach my intended destination. In particular, the individuals that immediately come to mind are Rune Gustavsson and Paul Davidsson (SOC); Markus Andersson, Jimmy Carlsson, Anders Johansson, Johan Lindblom, Fredrik Nilsson, Jonas Rosquist, Robert Sandell, Tomas Sareklint, Christian Seger, Björn Ståhl, Björn Törnqvist, and Tham Wickenberg (SOCLAB); Pär Dahlander, Ola Gullberg, Jens-Olof Lindh, and Tom Svensson (KAB).

Finally, I have reached the destination of one very interesting journey and, to my great relief, without any alarming indications of a dead end in sight. I therefore feel quite confident and intrigued when I see a whole plethora of alternative routes and possible experiences that lie ahead.

Martin Fredriksson – August 2004 – Ronneby


INTRODUCTION


Chapter 1
OUTLINE OF THESIS

We cannot too carefully recognize that science started with the organisation of ordinary experiences.

A. N. Whitehead

1.1 INTRODUCTION

Computing has evolved from isolated machines, providing calculative support of applications, toward communication networks that provide functional support to groups of people and embedded systems. Perhaps one of the most compelling features and benefits of computers is their overwhelming computing efficiency. Today, we conceive distributed computational systems of an ever-increasing sophistication, which we then apply in various settings – critical support functions of our society just to name one important application area. The spread and impact of computing, in terms of so-called information society technologies, has obviously gained a very high momentum over the years and today it delivers a technology that our societies have come to depend on. To this end, concerns related to our acceptance of qualities of computing, e.g., dependability, are increasingly emphasized by users as well as vendors.

An indication of this increased focus on dependability is found in contemporary efforts of mitigating the effects from systemic failures in critical infrastructures, e.g., energy distribution, resource logistics, and financial transactions. As such, the dependable function of these infrastructures is governed by means of more or less autonomic computing systems that interact with cognitive human agents. However, due to intricate system dependencies as well as being situated in our physical environment, even the slightest – unanticipated – perturbation in one of these embedded systems can result in degradations or catastrophic failures of our society. We argue that this contemporary problem of computing is mainly due to our own difficulties in modeling and engineering the involved system complexities in an understandable manner. Consequently, we have to provide support for dependable computing systems by means of new methodologies of systems engineering.

From a historical perspective, computing has evolved, from being supportive of quite well defined and understood tasks of algorithmic computations, into a disruptive technology that enables and forces change upon organizations as well as our society at large. In effect, a major challenge of contemporary computing is to understand, predict, and harness the involved systems’ increasing complexity in terms of constituents, depen- dencies, and interactions – turning them into dependable systems. In this thesis, we therefore introduce a model of open computational systems, as the means to convey these systems’ factual behavior in realistic situations, but also in order to facilitate our own understanding of how to monitor and control their complex interdependencies.

Moreover, since the critical variables that govern these complex systems' qualitative behavior can be of a very elusive nature, we also introduce a method of online engineering, whereby cognitive agents – human and software – can instrument these open computational systems according to their own subjective and temporal understanding of some complex situation at hand.

To this end, the model and method advocated herein should merely be considered as examples of certain instruments that we need in addressing the notion of complex computing phenomena. Therefore, in this introductory chapter's first section – Challenges in dependable computing – we introduce some general concerns that, in effect, are addressed throughout the whole thesis. In particular, we emphasize the dichotomy of computer science and software engineering, as the basic means to attain more confidence in complex and embedded computing systems. In the following section – Contributions from the author – we therefore elicit those instruments that could be considered as applicable in dealing with the complex phenomena at hand. In fact, the coherence and complementary nature of these instruments reflect what should be considered as the thesis' major emphasis, i.e., putting theory into practice. Consequently, in the following section – Guidelines to the reader – we present the general structure of the thesis as well as the specific topics of each chapter. Finally, in this chapter's last section – Concluding remarks – we emphasize that almost all of the material presented herein has evolved as a matter of experience from designing as well as developing enabling technologies and systems.

However, without any further ado, let us start with some general concerns and challenges in dependable computing systems.

1.2 CHALLENGES IN DEPENDABLE COMPUTING

During the last centuries, the scientific approach towards understanding of our physical world has been tremendously successful in areas such as physics, chemistry, and biology. As such, science relies on the continuous establishment of certain methodological instruments, i.e., principles, models, methods, and technologies, aiming at revealing the very nature of some particular phenomenon under investigation. Moreover, we conduct and evaluate real world experiments in order to, on the one hand, verify the functionality of methodological instruments in isolation and, on the other hand, validate the applicability of comprehensive methodologies as such.

In effect, the engineering community relies on science to establish the soundness of these methodological instruments as well as related work practices, in order to facilitate the construction of dependable systems with desirable qualities, no matter how complex or delicate their behavior might seem. Moreover, we consider the qualitative behavior of such systems to be more than the sum of their parts, i.e., there are constitutive qualities of an integrated whole that are difficult or even impossible to derive by means of analyzing isolated entities of some system. Due to this elusive nature of qualities in complex systems, a plausible answer to our concern regarding dependability is probably not to be found in the deployed systems as such, but rather in the methodological approaches we apply in dealing with them – from a scientific as well as engineering perspective. An illustrative example of such a state of affairs is found in the evolution of present day transportation systems, which has been going on for more than a century. In this particular case, systemic qualities, such as security and safety in transport, are an overall concern of our society at large. However, one does not address the concerns involved by emphasizing the qualities of one component in isolation, e.g., a car, but rather by emphasizing certain criteria of invariants that are relevant at all times, e.g., traffic regulations.

As we have previously indicated, we consider any scientific methodology to involve fundamental instruments such as principles, models, methods, and technologies. It is by means of such instruments that we can design and implement qualitative systems.

Moreover, in this context, we conduct scientific experiments in order to verify not only the isolated instruments, but also in order to validate the soundness of whole methodologies as such. In the field of computing, however, such methodological trials and experiments are quite seldom the primary subject of study. In any field of research, or development for that matter, the lack thereof is indeed most unfortunate. In fact, without the continuous development and validation of comprehensive and scientific methodologies, the engineering communities are left with no other alternative than to provide us with systems developed according to the art of best practice, i.e., when principles and models derived by science are nowhere to be found. In this respect, one should indeed question our confidence in the notion of dependable computing systems.

The main difference between the physical world and the world of computing lies in the latter being designed and implemented, in contrast to the former, which is there waiting for exploration. In effect, the dichotomy of science and engineering – already established in the disciplines of natural science – is less understood or established in the field of computing. Consequently, if our primary concern is to acquire a certain level of confidence that computing systems of today can be ascribed the quality of being dependable, we must establish the conception of a comprehensive methodology that is grounded in the systems as such. Obviously, we cannot prove the applicability of such a methodological approach by means of formal models and methods. Instead, we must try to establish the soundness of such a methodological stance by means of concrete trials and real world experimentation. The principal challenges of concern that we address in such an experimental endeavor are as follows.

PROBLEM 1.1 In what way can we understand the nature of dependable computing systems?

PROBLEM 1.2 In what way can we harness the complexity of dependable computing systems?

The hypothesis that we emphasize in this thesis could therefore be stated as follows.

Firstly, in order to address certain crucial challenges of dependable computing systems, we need to develop and establish a methodological approach that – in a comprehensive manner – addresses the dichotomy of science and engineering in the field of computing.

Secondly, since the applicability of such a methodological approach is impossible to establish by means of formal verification in isolation, we can instead validate its soundness by means of assessing the quality of service as provided for by its resulting products – developed and assessed by means of the methodological instruments.

SOLUTION 1.1 We aim at revealing system complexity – constituents, dependencies, and interactions – by means of a model of open computational systems, and we aim at governing the qualitative behavior of these systems by means of a method of online engineering.

1.3 CONTRIBUTIONS FROM THE AUTHOR

From the perspective of challenges in dependable computing systems, the author's intent with this thesis is to present a comprehensive set of methodological instruments – developed by means of real world trials and experimentation. Certain principles, models, methods, and technologies are therefore discussed and should, from the perspective of this particular thesis, be considered as the major contributions from the author. However, the author would like to emphasize that the thesis as such should be interpreted as an even more important contribution than these components in separation. That is, the thesis provides for a context in which all instruments of the methodology fit quite naturally and, in addition, it provides for a context in which an important example and evaluation of the methodology in its applied form can be introduced.

The first contribution of this thesis outlines a framework of computing that, in essence, comprises a number of principled areas of investigation that could guide us in reasoning about theory and practice of dependable computing systems. In effect, the author advocates a principled approach toward theory and practice of computing, in which we consider computer science and software engineering as two interrelated disciplines that are addressing the same academic and industrial school of prowess – dependable computing systems. As such, our framework provides for guidance in emphasizing various concerns of computing professionals, i.e., models, methods, and technologies.

The second contribution of this thesis involves a model of open computational systems that aims at conveying the essential characteristics of complexity in contemporary computing systems – constituents, dependencies, and interactions. As such, the model involves four abstraction layers that each address particular concerns in modeling of open and fielded computational systems. Moreover, the foremost benefit of this model is its capability to capture the factual constituents of physical environments and cognitive domains of online computational systems.
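To make the layered structure easier to picture, the following is a minimal sketch – not the thesis' formal model – of the four layers named in Chapter 5 (environment, domain, system, fabric) rendered as nested data structures. All class and field names here are hypothetical.

```python
# Hypothetical rendering of the four abstraction layers of an open
# computational system; the layer names follow Chapter 5, everything
# else is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class Fabric:
    """Physical substrate: processors, networks, sensors, actuators."""
    devices: list[str] = field(default_factory=list)

@dataclass
class System:
    """Computational entities hosted on the fabric, and their interactions."""
    fabric: Fabric
    entities: list[str] = field(default_factory=list)

@dataclass
class Domain:
    """Cognitive view: the concepts agents use to interpret a system."""
    system: System
    concepts: dict[str, str] = field(default_factory=dict)

@dataclass
class Environment:
    """The physical setting in which domains, systems, and fabric are embedded."""
    domains: list[Domain] = field(default_factory=list)
```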

The third contribution brought forward in this thesis is a principled method and approach toward establishment of open computational systems – online engineering. The method emphasizes four iterative activities in dealing with complexity in open computational systems. As such, these activities all aim for an evolutionary development of system qualities, in such a manner that we can avoid the divide–and–conquer as well as debug–repair approaches toward engineering and verification of complex systems. Instead, the method advocated herein focuses on the continuous in situ refinement of behavior and qualities in open computational systems, according to temporary concerns and individual aspects of cognitive agents – human and software.
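As an illustration only, these four activities (detailed in Chapter 6 as articulation, construction, observation, and instrumentation) can be read as one iterative control loop. The sketch below assumes hypothetical agent and system interfaces; it is a reading of the method, not its implementation.

```python
# Hedged sketch: the online-engineering cycle as an iterative loop over a
# running system. The agent/system methods used here are assumed, not given.
def online_engineering(system, agent, max_iterations=100):
    for _ in range(max_iterations):
        concerns = agent.articulate(system)   # state current, temporal concerns
        changes = agent.construct(concerns)   # build or modify constituents
        system.apply(changes)                 # deploy in situ, without stopping
        observed = agent.observe(system)      # monitor the factual behavior
        agent.instrument(system, observed)    # adjust critical variables online
        if agent.satisfied(observed):         # quality attained, for now
            break
```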

The fourth contribution of this thesis involves an outline of certain enabling technologies, as required in modeling and development of open computational systems. A software architecture that emphasizes the activities of exploration and refinement simulation is consequently introduced. As such, the architecture's constituent technologies form the basis in applying the methodological approach advocated throughout this thesis. With such technological instruments at hand, we are properly equipped to conduct real world trials and experiments in the field of dependable computing systems.

The final contribution of this thesis involves a practical example of applying our advocated methodology. In essence, this particular contribution constitutes the summary and assessment of a real world experiment, involving the model of open computational systems as well as the method of online engineering. As such, the experiment aims to establish the soundness of our approach – toward a better understanding regarding the dichotomy of science and engineering in the field of dependable computing systems. As the means to an end, the application domain of the experiment was that of network-enabled capabilities, in which concerns such as situational awareness and information fusion are of the essence. The author's main experience in performing this methodological experiment was, indeed, that addressing qualitative behavior in complex computational systems is a daunting challenge in every respect. Just consider the time and effort it took us to gain confidence in the qualitative and dependable behavior of modern day transportation systems. However, for better or worse, in doing so one can gain immense support from a methodology that at least includes the essential instruments required.

1.4 GUIDELINES TO THE READER

In order to emphasize the intent and specific contributions of this thesis, a general overview of the material presented herein as well as guidelines from the author to a potential reader would perhaps be appropriate. In essence, the author's intent with this thesis is, on the one hand, to frame the complex challenges of dependable computing systems and, on the other hand, to exemplify those aspects of the frame that the author believes could be of benefit to a broader public of computing practitioners. Consequently, the material presented herein has been divided into five consecutive parts in order to reflect these intentions: Introduction, Theory, Practice, Conclusion, and References.

Excluding this introductory chapter, the first part of this thesis includes two chapters: Dependable computing systems and Methodology of computing. As an introduction to the first part of the thesis, the second chapter introduces the reader to contemporary concerns, approaches, and challenges in the field of dependable computing systems. As such, this introduction depicts the field of computing as both highly innovative and in great need of addressing some major concerns. Consequently, the aim of this chapter is to strengthen certain problem statements that we, in effect, address throughout all chapters of this thesis. In the third chapter of the thesis, we therefore introduce a framework that addresses the previous chapter's problem statements, by means of certain methodological instruments at hand.

The thesis’ second part includes two chapters; Issues of complexity and Open computa- tional systems. As a follow-up on the first part of the thesis, the fourth chapter introduces certain general concerns regarding complexity, due to interactions at different levels of system abstraction. In particular, the chapter highlights notions such as openness, surviv- ability, and quality of behavior as fundamental concepts to consider in dependable systems. Consequently, in the thesis’ fifth chapter, we introduce a model of dependable systems that aims to capture the above concepts of openness and systemic quality. The

(25)

model depicts four abstraction layers, each with its particular constructs and relations.

The aim of this chapter, with its focus on a model of open systems in nature, is to act as a foundation for investigations as well as design of open and physically grounded computa- tional systems.

The third part of this thesis includes two chapters: Online engineering and Enabling technologies. As such, this part of the thesis delves further into certain practical issues that one faces in dealing with open computational systems. Therefore, the sixth chapter of the thesis introduces a method for attaining approximate quality of service in complex systems of contemporary computing. The method, as such, characterizes control as a matter of continuous system evolution and, in effect, the aim of this particular chapter is to identify and discuss certain fundamental activities that are involved in so-called online engineering. The seventh chapter of this thesis introduces a number of enabling technologies of our methodological approach. These technologies, as such, are not only the practical result from many years of development, but they also represent the essence of our attempt to develop a comprehensive methodology for dependable computing systems. The aim of this chapter is, consequently, to summarize and discuss the architectural framework that, in fact, is the foundation of all material in this thesis.

The fourth part of the thesis includes two chapters: Network enabled capabilities and Summary of thesis. In order to conclude the previous parts of the thesis, including this introductory chapter, the eighth chapter outlines a synthesis of what we consider as a first example of putting our theory into practice. As such, the example depicts a real life system in the domain of network enabled capabilities. The aim of this chapter is, consequently, to evaluate the model of open computational systems and the method of online engineering, but also to put our methodological approach to the test in a relevant application domain of societal concern. Finally, in the ninth chapter of the thesis, we introduce the reader to an important discussion regarding our results as well as an assessment and comparison with related work. In this particular chapter of the thesis, we also present what should be considered as relevant directions of future work in the field of dependable computing systems. As such, we emphasize the challenges of establishing qualitative performance envelopes of open computational systems as well as the conduct of empirical investigations in doing so.

The fifth and final part of this thesis includes three appendices: Glossary, Notes, and Bibliography. As such, this part of the thesis is intended to function as a practical reference for those readers who call for more information regarding some particular topic. Consequently, the first appendix enumerates a number of basic terms and their general interpretation as applicable in the context of this thesis. When such a term appears for the first time throughout the chapters of this thesis, it is marked with a '*' symbol. The second appendix includes the author's various notes as they appear throughout the thesis' chapters. The third and final appendix includes a list of publications that should provide for additional and complementary information regarding certain topics addressed in the thesis.

1.5 CONCLUDING REMARKS

As previously indicated, the material presented herein constitutes the author's intent and subsequent results from framing the complex challenge of dependable computing systems. As such, one could obviously question the author's ambition, in that the problems at hand are of such a magnitude that any results presented would be of but marginal value.

However, the material must not be interpreted as a rigorous claim that any technical problems have been fully solved, but merely as an initial attempt to introduce a comprehensive methodology toward dependable computing systems, i.e., a methodology that includes all the essential instruments needed by a practitioner of computer science and software engineering. In effect, the author considers a methodology to guide us in matters of a theoretical as well as practical nature.

To consider methodological instruments in isolation, devoid of contextual dependencies, is of little benefit to a practitioner – scientist as well as engineer – who has to face the actual consequences of applying some particular instrument in a real world context. As the means to an end, the methodological approach advocated herein has therefore evolved as a matter of experience. It has continuously been refined over a number of years, by means of experimental trials, where all of its constituent instruments have been subjected to evaluation and supplemental refinement. Obviously, the experience from such an endeavor is difficult, if not impossible, to convey in terms of a discourse where the experience of others is the general origin of thought. Instead, the author's currently acquired experience and understanding of a particular issue at hand – qualitative system evolution – has provided the point of departure and subsequent goal of the material introduced herein.

As such, this thesis is intended to convey a seldom acknowledged fact of scientific research and development. People make mistakes and it is from our experience in making those mistakes that we can engage in corrective actions. To this end, the foremost contributions brought forward in this thesis have all been, and still are, subjected to this continuous trial–and–error evolution of theory and practice. However, as previously indicated, this thesis is based on the current understanding of not only the author himself but also on the practical experience and efforts of many individuals in his own research and development group.

In summary, the material presented herein should merely be interpreted as an initial effort to provide for a first outline of what we consider an applicable and comprehensive methodology toward dependable computing systems. As such, the methodology and its constituent instruments should be of particular interest to those practitioners of computer science and software engineering who value experimental research and development over scholastic proficiency. Obviously, in the particular context of this thesis, it is therefore very important to stress the general stance taken toward a better understanding of the dichotomy of computer science and software engineering.

Of course, there is a difference between the disciplines of computer science and software engineering. One might be tempted to interpret this difference as a matter of exclusive concern with theory and practice, i.e., computer science exclusively encompasses the theoretical part of computing whereas software engineering only deals with the practical part. However, this depiction of the difference between the disciplines is by no means correct. Naturally, they both concern themselves with theory as well as practice of computing. Instead, in the context of this particular thesis, we consider the principal difference between computer science and software engineering a matter of differing emphasis in the disciplines' everyday conduct. On the one hand, we argue that software engineering emphasizes organizational theory and work practice, in attaining quality assurance of computing systems' function. On the other hand, we argue that computer science emphasizes complexity theory and simulation practice, in attaining knowledge discovery of computing systems' nature.

In a sense, the material presented in this thesis could therefore be interpreted as aiming to even the balance between these two disciplines, i.e., to better understand the attainability of complex and dependable computing systems from the perspective of computer science as well as software engineering. It is in order to make this intent and goal explicit that the thesis was named Online engineering: On the nature of open computational systems. However, without any further ado, let us start our methodological journey toward dependable computing systems.


Chapter 2
DEPENDABLE COMPUTING SYSTEMS

The cheapest, fastest, and most reliable components of a computer system are those that aren’t there.

G. Bell

2.1 INTRODUCTION

From a historical perspective, there are many great ideas that have contributed to the birth of computing. However, the one idea that probably has had the most profound impact on our everyday lives, as well as society at large, is that of algorithmic computation [76]. The general idea is, in essence, to remove the need for human-based computing – performing computations according to a fixed set of rules and without the authority to deviate from these rules in any detail – by means of a mechanical counterpart. However, when the activity of human computation (reasoning) is projected onto a physical machine, it is important to understand that a fundamental change of the prerequisites has occurred – the notion of semantic awareness and rationality is transformed into that of syntax interpretation and performance.

That is, by means of skills in observation, reasoning, and instrumentation, a human can decide to deviate from some pre-assigned set of rules whereas a machine cannot. The capability and performance of digital computers is limited to the way their physical constituents and mechanical components* interact – information storage, instruction execution, and program control – and the extent to which they are capable of interpreting and manipulating discrete representations of information. At first, the qualitative envelope of such machinery can indeed seem quite limited but, taking the performance of physical machinery into account, the distributed digital computer* certainly provides for an awesome quality of service that far surpasses its human counterpart in specific computations. In this respect, the limiting factors of digital computers and their quality in performance are actually those capabilities, i.e., awareness and rationality, which their human operators employ to instrument the internals of computational systems.*

Still, irrespective of this natural limitation, we have found that the number of application areas where computing machinery can provide for an essential and much appreciated quality of service is of an ever increasing importance. At first, during the second world war, we came to depend on its service in such isolated application domains as cryptanalysis and trajectory prediction. Then, with the advent and subsequent incorporation of programming paradigms and digital communication [68], we eventually came all the way to depend on computing machinery in critical support functions of society, e.g., energy distribution, resource logistics, and financial transactions. However, as was the case in past decennia, instead of isolated machine instrumentation being the sole limiting factor of computing's quality of service, we are now facing new factors of limitation, e.g., system complexity and behavioral semantics.

Over the years, we have engineered ourselves into a situation where, on the one hand, we have become dependent on computing in critical support functions of society and, on the other hand, the quality of service provided by computing machinery, e.g., dependability, is severely hampered by limitations in our own understanding thereof. Even though the service of computing certainly should be considered as highly innovative and beneficial for humanity – individuals, organizations, and society – we need to deal with certain critical issues of great concern [62]. As a result, from contemporary trends of computing – developing distributed and interdependent service-oriented systems of an ever-increasing complexity – we would be well advised to consider the following concerns:

PROBLEM 2.1 What do we mean by dependable computing systems?

PROBLEM 2.2 Can we design and maintain dependable computing systems?

From a scientific perspective, there are typically three instruments at hand which would support us in providing an answer to these particular questions, i.e., ontology (theory of reality), epistemology (theory of cognition), and methodology (science of method). The aim of this particular chapter is, consequently, to further discuss the issue of dependable computing systems from the perspective of these scientific instruments, but also to strengthen and clarify certain problem statements that we, in effect, will address throughout the remaining chapters of this thesis. In the first section of this chapter – General concerns – we depict computing machinery as embedded in nature and, consequently, discuss the implications of such a stance. In particular, to consider computing machinery as a physical phenomenon introduces us to the occurrence of natural as well as unforeseen events and stimuli that inevitably will affect system quality as well as our ability to comprehend and control their operation. Consequently, in the following section – Cognitive agents – we discuss the implications of a contemporary approach toward modeling and coordinating complex computational systems by means of cognitive agents.* Moreover, in that particular section, we also discuss the implications of taking on a cognitive agents approach, i.e., introducing operators that are capable of perceiving as well as experiencing their physical environment. Finally, in the last section of this chapter – Concluding remarks – we introduce the paradox of dependable computing systems and our understanding of the fundamental challenge at hand. As such, we consider practitioner confidence in the methodological approaches as the foremost challenge at hand. Any methodological approach advocated should at least emphasize requirements of a theoretical as well as practical nature, i.e., the dichotomy of science and engineering. To this end and without any further ado, let us start with a brief introduction to our general concerns of dependable computing systems.

2.2 GENERAL CONCERNS

The applicability of computing is gaining momentum and we are constructing, operating, and using this technological service in situations of ever-increasing sophistication and complexity. As a result, numerous paradigms and approaches toward computing are currently flooding the communities of computer science and software engineering, e.g., autonomic and ubiquitous computing. However, in this somewhat chaotic situation, one might wonder what essential problems all of these approaches actually address. We argue that the common set of concerns they all implicitly share involves considering computational systems to be (1) embedded in nature, (2) constituted by programmable entities, and (3) governed by cognitive agents.

However, with respect to the first property in this list – embedded in nature – there are certain implications that call for our attention. As is the case with all systemic phenomena in nature, embedded systems are under the influence of physical events and stimuli that have precedence over the best of our rational intentions in designing, constructing, and deploying some particular service of computing. Since the occurrence of these disruptive events is an unfortunate and unavoidable fact, one must necessarily take their occurrence as well as impact into account when constructing the involved systems, but also in operating their continuous behavior in a dependable manner.

DEFINITION 2.1 Dependability: The trustworthiness of a computing system which allows reliance to be justifiably placed on the service it delivers.11

Moreover, since the cardinality of system constituents and interactions tends to grow, as a matter of more sophisticated ways of service provisioning and safeguarding against disruptive events, the complexity of constituents, dependencies, and interactions in these systems also tends to grow at an ever-increasing pace. In effect, the sustainable behavior of embedded systems, as well as our understanding thereof, is no longer easily comprehended or harnessed – neither on an individual basis nor from a group perspective.
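A back-of-the-envelope calculation makes this growth concrete; the figures below are ours, purely for illustration.

```python
# With n constituents, the number of potential pairwise dependencies alone
# grows quadratically (n choose 2); possible configurations grow far faster.
for n in (10, 100, 1000):
    pairs = n * (n - 1) // 2
    print(f"{n:>5} constituents -> {pairs:>7} potential pairwise interactions")
# 10 -> 45, 100 -> 4950, 1000 -> 499500: enumerating every dependency in
# advance quickly becomes infeasible, which motivates online observation.
```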

The general vision of the future in computing involves a universal communication network* of automatically integrated physical devices of computing – sensors, computers, and actuators. Consequently, the involved systems are characterized as being immersed in a physical environment of pervasive communication and ubiquitous computing. The physical manifestation of these systems is considered to be of a temporal nature, in that the various devices and their constituent services are integrated with each other in an automated and ad-hoc fashion. Simply power on a particular device and, in a dynamic manner, it will automatically be integrated into a universe of interaction and service provisioning [15].12 An innovative vision indeed, but more importantly; what are the implications?
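As a toy illustration of this "power on and integrate" idea, consider a minimal service registry in which devices announce and withdraw themselves at run time. The API is entirely hypothetical and glosses over the discovery protocols, security, and failure handling that a real ad-hoc infrastructure would require.

```python
# Hypothetical sketch of ad-hoc service integration: devices come and go,
# and consumers discover whatever happens to be available right now.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def announce(self, device_id, services):
        """Called when a device powers on and joins the network."""
        self._services[device_id] = set(services)

    def withdraw(self, device_id):
        """Called when a device disappears; consumers must tolerate this."""
        self._services.pop(device_id, None)

    def discover(self, wanted):
        """Return the devices currently offering the wanted service."""
        return [d for d, s in self._services.items() if wanted in s]

registry = ServiceRegistry()
registry.announce("sensor-17", {"temperature"})
registry.announce("hvac-2", {"cooling"})
print(registry.discover("temperature"))  # ['sensor-17']
registry.withdraw("sensor-17")
print(registry.discover("temperature"))  # []
```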

The envisioned emergence of ubiquitous computing obviously introduces us to many interesting and unique opportunities – quality of computing at its finest. However, when the notion of ubiquitous computing was introduced at the beginning of the 1990s, it was not introduced as a vision of some distant future, but rather in order to frame an already existing application domain in great need of more appropriate models, methods, and technologies than those currently available. At the time, even though the existence or attainability of this universal system of ubiquitous computing was somewhat questionable, the response from various computer science and software engineering communities was immediate – a plethora of competing models, methods, and technologies started to emerge, e.g., multi-agent systems, grid computing, ambient intelligence, and the semantic web.

Considering the exponential growth of the Internet and its impact on our perception of computing's potential in providing for quality of service, the attainability of ubiquitous computing should indeed not be considered as fiction. However, as advocated by some of its proponents, further analysis of this topic reveals certain profound implications. On the one hand, if we consider our physical environment as constituted by an ever-increasing amount of computing devices – automatically integrated by means of pervasive communication – the summative performance envelope of these computational systems would be immense. On the other hand, due to their complex and situated nature, the constitutive capability envelope will continue to elude us. We have to realize that ubiquitous computing implies an unfolding of the standard chain of command and control in computational systems.

As previously described, the quality of an isolated computer is limited to some particular operator's performance in instrumenting its internal constituents. However, when additional machines and operators are integrated into this chain of command, the previously isolated computer is all of a sudden subject to command and control instrumentations that emanate from potentially unknown as well as unanticipated origins. We are dealing with the complex dynamics of open systems. Moreover, when pervasive communication enters the scene, these computational systems and their performance are suddenly under the influence of not only operator and machine interactions, but physical environment* interactions as well. That is, by means of introducing sensor and actuator technologies into the command and control loop of computing systems, we have immersed the service of computing into nature itself and must now face the interesting consequences. As a matter of ubiquitous computing, the primary concern of computing has started to move away from mere computation, i.e., manipulation of isolated information, and instead we are emphasizing interaction [80] and coordination [50] of distributed computations.

The prevailing model of computation is the Turing machine from 1936. As such, the model was not intended to convey aspects of some natural phenomenon. Instead, it was a mathematical model that addressed the Entscheidungsproblem, put forward by the mathematician David Hilbert [74]. From a contemporary perspective, theoretical computer science has since the 1930s devoted much effort to investigating complementary models of computation. Particular strands involve investigations regarding computational power, equivalence, and strengths in verification. Moreover, the practical applications of those models typically included investigations regarding algorithm complexity, distributed computations, deadlock avoidance, fairness, and so on. However, all of these efforts address particular theoretical concerns and refinements regarding the general model of computation. There is, obviously, also a somewhat more practical aspect to these efforts of progress in the field of computing – programming and operating the machinery.
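For readers who prefer the prevailing model in executable form, here is a minimal Turing machine interpreter. The transition-table format and the unary-increment example program are our own illustrative choices, not drawn from the thesis.

```python
# Minimal Turing machine: transitions map (state, symbol) to
# (next_state, symbol_to_write, head_move), with '_' as the blank symbol.
def run_turing_machine(transitions, tape, state="q0", halt="halt", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape over the integers
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example program: unary increment (scan right over 1s, write a 1 on the
# first blank cell, then halt).
program = {
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("halt", "1", "R"),
}
print(run_turing_machine(program, "111"))  # -> 1111
```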

The first American general-purpose digital computer – ENIAC – was designed by Eckert and Mauchly, with the abstract Turing machine serving as a blueprint for von Neumann's subsequently formulated architecture of computing machines. The digital computer executed stand-alone numerical algorithms in a way that faithfully mimicked the formal transitions defined by the Turing machine [75]. Since the semantics of these numeric calculations were faithfully mirrored by means of formal symbol manipulation, there was no semantic gap between the abstract model and the concrete architecture. The involved problems were, however, of a practical nature – how to engineer and maintain the digital computer's operations as well as how to implement the algorithms in a correct and pragmatic way.

Besides these practical problems, problems of semantics entered the scene when the involved algorithms were to be executed on different machines. The real semantics, i.e., semantics regarding the behavior of physical machinery, could then differ due to implementations of the same algorithm on different hardware platforms* – a problem of interpretation. The introduction of formal languages, abstract machines, and compiler techniques from the 1950s through the 1970s, combined with advancements in software engineering during the same period, led to the idea that formally defined and verified algorithms could, in principle, be implemented on different hardware platforms and still deliver the same computational behavior.

In 1968, however, NATO sponsored a historic meeting with leading academic as well as industrial participants in order to discuss what they dubbed the software crisis. They saw software systems rapidly growing in size and complexity, at the same time as they were providing for computing services in application areas where failures could cost lives and ruin business. They believed that the fundamental notion behind programming – construction of programs that implement mathematical functions – could not cope with the complexity and fuzziness of requirements in embedded (safety-critical) systems. In short, they advocated a new discipline – software engineering – to remedy the situation.

However, we have come to learn the hard way that we still have a long way to go toward such a principled discipline of (software) systems engineering.

We are still infatuated and captured by the tradition, as well as mindset, where programs are considered in terms of mathematical functions, and programming is the fundamental practice in translating these functions into qualitative systems. To this end, we have only partially embraced the mindset that emphasizes software as systems of systems, necessarily modeled and designed under the severe constraint of unanticipated environmental events. However, even though industrial actors have emphasized the latter mindset for quite some time, the academic community persists in focusing on the former [12].

Meanwhile, more semantically complex applications have emerged since the 1960s.

Notable examples are found in the areas of artificial intelligence and knowledge engineering. Techniques supporting knowledge representations – formal concepts and logics – were developed, as well as knowledge acquisition techniques to capture the relevant semantics of system behavior in some particular domain under investigation [69].

The limits of symbol processing, when conveying meaningful and understandable semantics of system behavior to human users, were tested in several applications.

Typically, we were successful in closed domains where the involved concepts had an unambiguous formal meaning and were not susceptible to ambiguous interpretations by different users [72].

2.3 COGNITIVE AGENTS

Today, the techniques of artificial intelligence and methods of knowledge engineering have partly been incorporated into a research and development area of computing that typically is referred to as multi-agent systems. As such, the area primarily emphasizes the notion of computational entities – agents – that are endowed with mental capabilities and social competencies. If one feels confident enough in trusting the sustainable behavior of such autonomous entities, one can even empower them with responsibilities and authority – on behalf of their human operators. However, in this respect, an important challenge that so far has attracted little, if any, attention from the academic community involves the extent to which the involved models* of these autonomous agents, and the subsequent multi-agent systems they form (described by formal semantics), depict the factual behavior that they are said to exhibit in their natural habitat – physical environments.

The main reason for advocating autonomous agents and multi-agent systems is the involved models' high level of abstraction as well as the agents' natural capacity to exhibit capabilities similar to those of their human counterparts. In this respect, the notion of autonomous agents can best be characterized as human actors in their technological form. There is, however, something of a puzzle here. As we previously stated, digital computers already occupy such a role of performing complex and tedious tasks of computation. Why then do we introduce the notion of autonomous agents? In essence, these computational entities are not introduced in order to take over the role of the computer, but rather to take over the role of operators.

Consequently, the paradigm of multi-agent systems aims to recapture the loss that occurs in projecting the role of a human computer onto a mechanical counterpart, i.e., involving cognitive capabilities such as awareness and rationality of reasoning. As previously stated, this projection typically involves the notion of authority being transformed into that of a mechanical capability to perform computations. However, in doing so, we still require the performance of computations to be supervised by an operator, and this is where the notion of autonomous agents appears. These computational entities are empowered with the authority to carry out such cognitive actions that their human counterparts – operators – otherwise would be responsible for. In every respect, autonomous agents and multi-agent systems therefore aim to provide an operational solution toward automation and control of complex computational systems, where cognitive capabilities are of the essence.

In practice, the agent paradigm emphasizes the notion of a piece of software exhibiting certain behavioral qualities of cognition. With a focus on automation of cognitive tasks, these qualities typically involve the capability of an agent to autonomously perceive its environment through sensors as well as to act on it by means of actuators. Moreover, this particular focus of the agent paradigm also introduces the need for additional capabilities that often are associated with cognitive entities – learning and reasoning. Consequently, agents are required to exhibit the capability of discovering facts and rules about their surrounding environment, i.e., facts and rules regarding other agents that proliferate in the agent's environment [36]:

(36)

C O G N I T I V E A G E N T S 1 8

D E F I N I T I O N 2 . 2 An agent operates in some physical or computational environment.

An agent is itself a physical system of some sort. Even a pure software agent is embodied on a computer that gives a home to the agent's internal structures (data structures, if you will) and enacts its program.
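Read operationally, this definition amounts to a simple control loop in which the agent's internal structures mediate between sensing and acting. The following is a minimal sketch only – the environment, the percept vocabulary, and the threshold rule are hypothetical stand-ins, not part of the cited definition.

    import random

    class Environment:
        def sense(self):
            # A percept delivered through the agent's sensors.
            return {"temperature": random.gauss(20, 5)}

        def act(self, action):
            # The effect of the agent's actuators on the environment.
            print("actuator command:", action)

    class Agent:
        def __init__(self):
            self.beliefs = {}                     # the agent's internal structures

        def step(self, env):
            percept = env.sense()                 # perceive through sensors
            self.beliefs.update(percept)          # revise internal state
            if self.beliefs["temperature"] > 25:  # a trivial rule of reasoning
                env.act("start_cooling")          # act by means of actuators

    env, agent = Environment(), Agent()
    for _ in range(3):
        agent.step(env)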

When dealing with the agent paradigm, it is implicitly understood that the environment as such is of either a physical or a computational nature. However, first class citizens in this world of agents are the agents themselves. In effect, the models applied in cognitive activities such as sensing and reasoning typically emphasize normative behavior and mental constructs of cognitive agents – humans. In ubiquitous and dependable computing, on the other hand, the environment at hand is in a similar manner also of a physical nature, but the first class citizens involved are primarily physical artifacts – machines – not normative entities of a computational nature. Consequently, one would be well advised to note that there is a semantic difference between the systemic nature depicted by models of the agent paradigm and the models of physical systems. The former primarily conveys constituents and behavior of normative and sociological phenomena [9], whereas the latter conveys the behavior of causal and natural phenomena.

In addressing the ideas and intent of the agent paradigm, a common approach is however to assimilate conceptions and metaphors related to the notion of system organization and control in the area of sociology [25], and recently from the area of biology as well [52; 55; 56]. As argued by Malsch [44], there is a danger in such an approach, since it more often than not focuses on construction of cognitive agents according to models and metaphors of sociological origin. In fact, the notion of agents as sociological entities, i.e., endowed with human capabilities such as beliefs, desires, and intentions, was at first only intended to identify the capabilities required in reasoning about sociological systems and contexts. Today, however, this intention and model still prevails as the major approach toward construction of autonomous and cognitive agents, even though the problems addressed in, e.g., dependable computing, exhibit little resemblance to systems of a sociological nature. In fact, most application areas advocated by industry and society are of a technological nature – embedded systems.

It is important to understand that organization and control activities of cognitive agents in physical systems will not be attained as long as the idea of single agents with predetermined motives, interests, and intentions prevails. The general message brought to light by such statements is twofold. First, we must avoid making use of metaphors in the construction of complex computational systems if the sought-for system behavior does not conform to the interaction principles of the metaphors in question. Second, since we consider complex systems, control over system behavior is a result of coordination among multiple agents with cognitive capabilities, as opposed to operations governed by a single agent with normative performance constraints; the sketch below illustrates the difference.
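To make the second point concrete, consider the following toy sketch, in which control emerges from coordination among several imperfect observers rather than from a single controlling agent. All names, biases, and thresholds are invented for illustration.

    class VotingAgent:
        def __init__(self, sensor_bias):
            self.sensor_bias = sensor_bias

        def observe(self, true_value):
            # Each agent perceives the shared environment imperfectly.
            return true_value + self.sensor_bias

        def vote(self, reading, threshold=25.0):
            return reading > threshold

    agents = [VotingAgent(bias) for bias in (-2.0, 0.5, 3.0)]
    true_temperature = 26.0
    votes = [a.vote(a.observe(true_temperature)) for a in agents]

    # Coordinated control: act only when a majority of agents agree,
    # so no single agent's predetermined view governs the system.
    if sum(votes) > len(votes) / 2:
        print("coordinated action: start_cooling")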


Still, we are ourselves cognitive agents and, as such, we notice what goes on around us. This unique capability helps us in our everyday activities by suggesting what events we might expect and even how to prevent unwanted outcomes. In effect, the capability to observe and, subsequently, to understand our surroundings actually fosters our very survival. However, this peculiar expedient works only imperfectly. There are surprises – unanticipated events – and they are unsettling. We rely on our capability to perceive events in our surroundings and, by means of internal models of cause and effect scenarios, we feel confident in what to expect. Still, since unanticipated events keep occurring, what could possibly be wrong with our expectations? We are faced with the problem of error.

These are errors regarding our knowledge of the external world, i.e., how we come to understand things. Philosophers such as Thales, Socrates, and Plato devoted much thought to these questions. A crucial breakthrough came in the thirteenth century, when Roger Bacon introduced experiments and observations as key components in the construction and instrumentation of our knowledge regarding the external world and how it evolves. In fact, this approach was ingeniously geared so that cognitive experience could be maximized. Philosophers and researchers such as Copernicus and Galileo later refined Bacon's fundamental approach toward knowledge exploration and discovery – experimental research – and we entered the era of natural science. In many respects, the introduction of experimental research can, at least in the domain of natural science, be considered the introduction of what we today call methodology.*

Meanwhile, other philosophers addressed the deeper issues of knowledge itself – epistemology. Key ideas in that regard were Hobbes' materialistic view that our sensations are effects enforced upon us by an otherwise unknowable world, Renk's and Descartes' thoughts about mind and matter, Locke's notion of knowledge as a result of the coherence of ideas, and Bishop George Berkeley's intriguing stance that nothing exists except that which can be perceived. Hume focused on the issues of identification and validation – whether two objects observed on different occasions are indeed the same – and thus brought up the nature of, and difference between, concepts such as identity and similarity.

Locke, Berkeley, and Hume were the classic British empiricists, and all three agreed that our lore about the world is a fabric of ideas based on sense impressions. As Wittgenstein observed about 150 years later, even simple sensory qualities are elusive unless they are mapped onto constructs of public language. For example, an individual might come to understand that many environmental events are recurrences of the same subject's qualities, despite a substantial accumulation of slight differences between observations. Consequently, public naming and cognitive inspection are essential capabilities in the approximation of physical identity and similarity. Public words anchor ideas and are the basis for common awareness of natural phenomena.
