
Simulation and Measurement of Non-Functional Properties of Web Services in a Service Market to Improve User Value

Academic year: 2022

Share "Simulation and Measurement of Non-Functional Properties of Web Services in a Service Market to Improve User Value"

Copied!
58
0
0

Loading.... (view fulltext now)


Degree Project

Simulation and Measurement of Non-Functional Properties of Web Services in a Service Market to Improve User Value

Author: Catherine Catherine
Supervisor: Welf Löwe
External Supervisor: Jens Kirchner, Karlsruhe University of Applied Sciences
Date: 2013-12-31
Course Code: 5DV01E, 15 credits
Level: Master
Department of Computer Science

Abstract

In a Service Oriented Architecture, information systems are composed of different individual services. This is comparable to Lego bricks, which can be arbitrarily combined with other bricks. In the context of Service Oriented Computing, the Internet is becoming a market of services in which providers and consumers meet. In such a market, multiple vendors may provide the same service functionality, which makes it difficult for consumers to select the provider that best fulfils their requirements and needs. Nevertheless, the existing service varieties distinguish themselves in their non-functional characteristics, such as response time, availability, or cost. As part of a joint research project of the Karlsruhe University of Applied Sciences and the Linnaeus University in Växjö, Sweden, a framework is being developed which automatically binds service consumers to their best-fit service providers based on non-functional properties and the consumers’ requirements. This form of service market is currently emerging, which is conspicuous in a rapidly growing Internet world. This work aims to validate whether the application of the developed framework in such a market would bring additional value for service consumers. For this purpose, a simulation environment is created in which a futuristic service market is projected. Within the simulation environment, the network communication of service invocation scenarios is simulated based on a simplified model. The goal is to manipulate the measured non-functional properties of services, like response time and availability, to mimic real-world scenarios.

Keywords: Service Oriented Computing, Service Oriented Architecture, services, framework, simulation

Disclaimer: This work was done under the supervision of researchers at the Karlsruhe University of Applied Sciences, Germany, and Linnaeus University, Växjö, Sweden. The report is therefore presented and published at both places.

Table of Contents

1 Introduction
  1.1 Service Oriented Architecture
  1.2 Problem Definition
  1.3 Framework of Service Oriented Computing
    1.3.1 Motivation and Project Background
    1.3.2 Framework Approach
    1.3.3 Framework Architecture
  1.4 Objective of the Thesis
  1.5 Report Layout
2 Background
  2.1 Discrete Event Simulation
  2.2 General Simulation System
    2.2.1 DESMO-J
    2.2.2 J-Sim (Java Simulator)
  2.3 Network Simulation
    2.3.1 WANem (Wide Area Network Emulator)
    2.3.2 NS2 (Network Simulator version 2)
    2.3.3 OPNET (Optimized Network Engineering Tool) Modeler
    2.3.4 OMNeT++ (Objective Modular Network Testbed in C++)
    2.3.5 NIST Net (Network Emulation Tool)
3 Architectural Design
  3.1 Construction of Existing Framework
  3.2 Architecture Concept of Simulation Environment
    3.2.1 Full Virtualization using Virtual Machines
    3.2.2 Virtual Hosting using Distinct ESB Instances
4 Simulation Design and Concept
  4.1 Non-functional Characteristics of Services
    4.1.1 Response Time
    4.1.2 Availability
    4.1.3 Throughput
    4.1.4 Reliability
  4.2 Implementation Concept
    4.2.1 Interception of the Simulator
    4.2.2 Simulating Response Time
    4.2.3 Simulating Availability
5 Implementation
  5.1 The Simulator as Java Servlet
  5.2 Simulating Response Time
  5.3 Simulating Availability
6 Experiment and Evaluation
  6.1 Test Scenario
  6.2 Evaluation
7 Conclusion
  7.1 Summary
  7.2 Future Work
References
Appendix A: Detailed Validation Results

List of Figures

Figure 1.1: SOC Framework Architecture (Kirchner et al., 2011)
Figure 2.1: Configuration options of WANem
Figure 3.1: Current system construction
Figure 3.2: System Architecture for Virtualization of Integration Environment
Figure 3.3: Architecture for Virtualization of Integration Platform
Figure 4.1: Request Flow in SOC Framework
Figure 4.2: Web Proxy Server (Schnabel, 2004)
Figure 4.3: Request passing the simulator
Figure 4.4: Additional headers in SOAP message
Figure 4.5: Service response flow
Figure 5.1: Sequence diagram of response time simulation
Figure 6.1: Response time comparison of service calls from Karlsruhe to United Kingdom
Figure 6.2: Response time comparison of service calls from Kalmar to United Kingdom
Figure 6.3: Response time comparison of service calls from Karlsruhe to Korea
Figure 6.4: Response time comparison of service calls from Kalmar to Korea
Figure 6.5: Network as a graph (based on Peterson and Davie, 2011)
Figure 6.6: Comparison between time for ping calls and simulation from Kalmar to Korea
Figure 6.7: Comparison between time of ping calls and simulation from Kalmar to United Kingdom
Figure 6.8: Response time comparison of service calls within simulation
Figure 6.9: Comparison of simulated service availability with the same availability value

List of Abbreviations

ACA  Autonomous Component Architecture
API  Application Programming Interface
Bps  Bits per Second
DARPA  Defense Advanced Research Projects Agency
DES  Discrete Event Simulation/Simulator
DESMO-J  DES Modelling in Java
ESB  Enterprise Service Bus
GUI  Graphical User Interface
HTTP  Hypertext Transfer Protocol
IC  Integrated Circuit
ICV  Intelligent Collaboration and Visualisation
J-Sim  Java Simulator
JVM  Java Virtual Machine
MTU  Maximum Transmission Unit
MURI/AFOSR  Air Force Office of Scientific Research’s Multidisciplinary University Research Initiative
NAM  Network Animator
NED  Network Description
NF  Non-functional
NIST  National Institute of Standards and Technology
NIST Net  Network Emulation Tool from NIST
NMS  Network Measurement and Simulation
NS2  Network Simulator version 2
OMNeT++  Objective Modular Network Testbed in C++
OPNET  Optimized Network Engineering Tool
OS  Operating System
OTcl  Object-oriented Tool Command Language
QoS  Quality of Service
SL  Service Level
SLA  Service Level Agreement
SOA  Service Oriented Architecture
SOAP  Simple Object Access Protocol
SOC  Service Oriented Computing
Tcl  Tool Command Language
VM  Virtual Machine
WAN  Wide Area Network
WANem  Wide Area Network Emulator
XML  Extensible Markup Language


1 Introduction

This chapter provides background information about the research project in the field of Service Oriented Computing and the framework that is developed within the project and forms the basis of this work. Afterwards, the aim of the thesis is presented.

1.1 Service Oriented Architecture

IT systems form the base of almost all business processes, regardless of product, market segment, and enterprise size. Typical enterprise software provides a large range of functionality consisting of a set of functions that are parameterised and combined in a task-specific manner. To allow efficient development and high flexibility in reacting to changing requirements, those functions, called services, can be deployed independently of each other. The combination of those services into software that supports full business processes is called Service Oriented Computing (SOC) / Service Oriented Architecture (SOA). SOC focuses on the development of software solutions based on services that use standard technologies. “A primary goal of SOC is to make a collection of software services accessible via standardized protocols, whose functionality can be automatically discovered and integrated into applications or composed to form more complex services.” (Bichler and Lin, 2006) The implementation of such models relies on SOA. SOA is an architectural approach to building software systems from loosely coupled components (Weerawarana, 2005). This kind of system design provides flexibility and allows enterprise-wide reusability of systems and services (Krafzig et al., 2005). The usage of SOA services is platform independent, i.e. clients are not limited to specific hardware or software platforms to invoke the services. As standardised technologies are used to implement these services and standardised interfaces are used for their interaction, the concrete implementation of a service becomes less important. Instead, the service can be performed by any implementation that provides the required functionality and obeys the interface definition. This makes implementations exchangeable.

Georgakopoulos and Papazoglou (2008) argue that Web services have become the preferred implementation technology for realising SOA. Web services build on technologies such as HTTP, SOAP, and XML. There are three major components in the Web service architecture: service provider, service registry, and service consumer. Service providers publish their services’ definitions to a service registry hosted by a service discovery agent (service broker) like UDDI (Papazoglou, 2008). Service consumers search the service registry for providers of corresponding services. Discovery agents then respond to the consumer with information about the service providers, the service description. Finally, service consumers use the provided information to establish contact with a service provider and invoke the service (Papazoglou, 2003).

1.2 Problem Definition

In the past years, the number of available services on the Internet has increased. As a consequence, the same service functionality is offered by multiple providers. This situation makes it difficult for service consumers to choose the right service provider that best fits the consumer’s functional and non-functional (NF) requirements. As many

providers offer the same functionality, the consumer must rely on NF requirements to choose the right service. Therefore, Georgakopoulos and Papazoglou (2008: 192) point out that consumers need to be able to distinguish services on the basis of their NF properties, such as quality of service (QoS). They define QoS as a set of quantitative (e.g. response time and price) and qualitative (e.g. reliability and reputation) characteristics of a service which correlate service delivery with consumer requirements.

For example, a consumer from Sweden preferably uses a service that is also located in Sweden due to its geographical proximity. But since the consumer invokes the service during lunch time, which in this scenario is the service’s busiest time, not every request can be processed in time. The consumer decides to call another service provider, which offers the same functionality but is located in America. Due to the different time zone, that service’s workload is lower at the given invocation time, i.e. within this specific time frame the response time of the American service is shorter than that of the Swedish one.

NF properties of services have so far been defined in so-called Service Level Agreements (SLAs), which service consumers need to rely on when deciding on a service instance. Since service providers want to sell their services profitably, they tend to describe their services in SLAs as better than they actually are (Kirchner et al., 2011). For the consumer, this raises the question of which of the available services reliably provides both the required logic and the required NF properties. To address this issue, a study was conducted within a research project to measure and analyse the NF characteristics of services. This study resulted in the development of a framework which helps service consumers find their best-fit service provider based on the NF properties of services.
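As an illustration of such NF-based selection, the following Java sketch scores each provider with the consumer's weighted quality goals and picks the best fit. All names, the weights, and the 1000 ms normalisation constant are invented for this example; this is not the framework's actual API.

```java
import java.util.List;

/**
 * Hypothetical sketch: choosing a best-fit provider from measured
 * non-functional (NF) properties instead of static SLA claims.
 */
public class BestFitSelector {

    /** Aggregated measurements for one provider of the same functionality. */
    public record NfProfile(String provider, double avgResponseMs, double availability) {}

    /**
     * Scores a provider with the consumer's weighted quality goals:
     * lower response time and higher availability are better.
     */
    static double utility(NfProfile p, double wTime, double wAvail) {
        // Normalise response time into (0, 1]; 1000 ms is an assumed reference value.
        double timeScore = 1000.0 / (1000.0 + p.avgResponseMs());
        return wTime * timeScore + wAvail * p.availability();
    }

    /** Returns the provider with the highest utility for this consumer. */
    public static String select(List<NfProfile> candidates, double wTime, double wAvail) {
        NfProfile best = candidates.get(0);
        for (NfProfile c : candidates) {
            if (utility(c, wTime, wAvail) > utility(best, wTime, wAvail)) best = c;
        }
        return best.provider();
    }

    public static void main(String[] args) {
        // The lunch-time scenario from the text: the Swedish provider is overloaded,
        // the American one is in a quieter time zone.
        List<NfProfile> measured = List.of(
            new NfProfile("SE-provider", 850.0, 0.95),
            new NfProfile("US-provider", 400.0, 0.97));
        System.out.println(select(measured, 0.5, 0.5));  // prints US-provider
    }
}
```

The point of the sketch is that the decision uses measured values, not the providers' own SLA claims; with different weights or fresh measurements the same consumer may be bound to a different provider.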
The next section gives an overview of this framework.

1.3 Framework of Service Oriented Computing

The following section describes the framework developed within a joint research project in the SOC field, which is the basis of this thesis.

1.3.1 Motivation and Project Background

A joint research project of the Karlsruhe University of Applied Sciences and the Linnaeus University in Växjö, Sweden, has shown that NF properties of services such as response time can vary over time (Kirchner et al., 2011). Therefore, “the created static SLAs do not always reflect the current NF characteristics of a service” (Kirchner et al., 2011). A decision based solely on information provided by the providers themselves is not a reliable basis for selecting the best-fit services. Within this research project, a framework has been developed as a remedy: it evaluates and compares the NF characteristics of different service providers from the service consumer’s point of view. It should help service consumers find and select the service providers which can best fulfil their requirements.

1.3.2 Framework Approach

The developed framework intends to continuously select the most suitable service provider for each consumer and thereby ensure improved QoS. The NF properties are measured every time a service is used, which is why the decision regarding the service binding adapts dynamically to the changing environment and the varying quality of the service providers. In addition to the NF properties of service providers, the selection of the best-fit service also depends on the service consumer context (e.g. service call time,

consumer location, etc.) and the consumer utility (i.e. weighted quality goals) (Kirchner et al., 2011).

1.3.3 Framework Architecture

The framework is composed of different components, which are explained in detail below (Emmel, 2011):

1. Integration Platform

The Integration Platform is an Enterprise Service Bus (ESB). This part of the architecture can be integrated into an existing business environment. A so-called Local Component plugin extends the integration platform with functionality for administering the framework’s components and participants. This component is responsible for the integration and interaction of the framework’s participants with each other and with the central component (point 2). There are three major participants in this framework, also called service components: service consumer, service provider, and service intermediaries. The combination of the service components with the integration platform is called the integration environment (see Figure 1.1).

The Service Consumer is the end customer, who searches for and consumes services. During the integration process, the service consumer provides the central component with information about its calling context, such as location or bandwidth. Each consumer is assigned a so-called Service Binding, which describes the current best-fit service instance for this consumer, based on the consumer’s own statements about its NF requirements.

The Service Provider supplies services for consumers and intermediaries. The communication between service providers and consumers or intermediaries within the integration environment, and also with those outside the environment, is always performed through the integration platform. Each service provider can be assigned to multiple service consumers.

The Intermediary takes the role of service consumer as well as service provider within the integration environment.
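The division of roles above, including the intermediary's dual role, can be summarised in a short Java sketch; the interface and class names are illustrative assumptions, not the framework's actual types.

```java
/** Illustrative sketch of the participant roles; not the framework's real types. */
interface ServiceProvider {
    /** Handles a service call; in the framework, all traffic passes through the integration platform. */
    String handle(String request);
}

interface ServiceConsumer {
    /** Binds this consumer to its current best-fit provider (its Service Binding). */
    void bind(ServiceProvider bestFit);
}

/** An intermediary is both: it consumes other services in order to provide its own. */
class Intermediary implements ServiceProvider, ServiceConsumer {
    private ServiceProvider upstream;  // assigned via the current Service Binding

    @Override public void bind(ServiceProvider bestFit) {
        this.upstream = bestFit;
    }

    @Override public String handle(String request) {
        // Delegates to the upstream provider and wraps the result.
        return "intermediary(" + upstream.handle(request) + ")";
    }
}
```

When the central component finds a better-fitting provider, only the binding changes; the intermediary's own consumers are unaffected, which is the loose coupling that SOA promises.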
If a service consumer invokes a service, the local component monitors this call and generates a so-called data feedback[1] about the particular service call. It captures information about the consumer and the call context, like location and call time, as well as NF properties of the service, like response time or availability. These data are then sent to the central component to be evaluated.

2. Central Component

The Central Component, which is a self-learning broker (Kirchner et al., 2011), is the key component of this architecture. It is referred to as “central, adaptive service broker” in Figure 1.1. There is only one central component, with which multiple integration platforms can communicate. It has the responsibility of determining the current best-fit service instance for a service user in the integration environment. To do this, the central component analyses the data received from the local components. It reviews the results of past service calls and evaluates them to draw comparisons between service instances. If a service instance is found which can better fulfil the requirements of a specific consumer profile, the corresponding local component is informed and the Service Binding is updated. Additionally, the “central component also monitors the service providers’ current achievements to (possibly) adjust future best-fit decision” (Kirchner et al., 2011).

[1] In this work, the term feedback refers to a report or protocol.

The central component is developed as a Web service running on an application server.

Figure 1.1: SOC Framework Architecture (Kirchner et al., 2011)

1.4 Objective of the Thesis

This framework addresses a currently emerging service market whose final form is still in development. It is uncertain whether the application of this framework would offer added value and actually bring improvements. Therefore, a test environment is required which can adequately represent the environment of future service markets and enables the validation of the developed framework. To obtain useful results, the validation process must be performed in various realistic scenarios. As mentioned before, NF properties of services are not static performance indicators; their values may change over time depending on several factors. The simulation environment should reproduce this kind of scenario, i.e. the simulation should be able to control or manipulate NF properties of services with regard to realistic factors. The simulation environment should become a component of the existing framework environment described in the previous section and provide the capability to simulate generic network characteristics of service calls. Realistic means that the determining factors created in the simulation should comply with those in reality. This includes delay or jitter during service request transmissions, service outages, service overloads, dropped service calls when the bandwidth is exceeded, and variable service response times depending on aspects like bandwidth, invocation time, and other factors. Distinct services behave differently in various network environments. Service instance characteristics like provider location or utilisation rate are not the only influencing factors in measuring NF properties; so is the consumer context, such as

bandwidth or time of call. The simulator should observe these properties and facilitate the configuration of these parameters for each service component. In this way, various scenarios can be created during the simulation process. Moreover, considering the huge number of factors and conditions affecting the behaviour of services in a real-world scenario, the simulation environment is built in a controlled manner. It is desirable to create a reproducible world within the simulation. Reproducible means that service calls under identical conditions can be repeated and generate similar measurement results with only a moderate deviation. To achieve this, a good understanding of the implementation of the simulation environment is necessary so that the desired outcome is verifiable. By these means, service request traffic with varied settings can be produced, local components can capture data feedback, and the feedback can afterwards be forwarded to the central component and used to verify the framework using machine learning and data mining approaches.

In conclusion, the simulation system should be a stand-alone component which can be integrated into the SOC framework without affecting the function of any framework component or participant. The following goal criteria should be considered in developing the simulation system:

1. The simulation system can be integrated into the existing laboratory SOC framework environment.
2. The simulation system can be easily detached from the framework environment.
3. The simulation system can realistically simulate NF properties of services such as response time, availability, etc.
4. Users of the simulation system should be provided with a graphical user interface to configure the different input parameters for the execution of the simulation process.
5. The simulation results of NF properties of services can vary depending on the defined model and input parameters.

1.5 Report Layout

This thesis is organised as follows.
The next chapter gives an overview of the background in the simulation area: general service simulation as well as network simulation. In Chapter 3, the architecture of the existing framework is introduced and ideas for building simulation environments based on the existing architecture design are presented. The decision to create our own simulation software is also explained. Chapter 4 gives an overview of the NF characteristics which should be simulated. Then, concepts to simulate the selected NF properties are presented. The implementation of the introduced concepts is presented in Chapter 5 and the validation of the developed solution in Chapter 6. The last chapter concludes this work and suggests possible improvements for future work.

2 Background

Simulation techniques have been applied and developed in different research fields. Hartmann (2005) argues that research cannot be imagined without simulation and that almost every academic discipline has at least some use for simulations. In science and engineering, simulations are used to recreate a real-world system. Real-world systems are in most cases too complex to be studied directly; computer simulation is therefore a convenient way to evaluate a model of a real system (Law, 2000). Hartmann (2005) introduces five motivations for running simulations:

1. Technique: Simulations allow researchers to investigate the detailed dynamics of a real system and to obtain very accurate answers to the questions of interest.
2. Heuristic tool: “Simulations have a major role in the process of developing hypotheses, models and new theories.”
3. Substitute for an experiment: Simulations allow researchers to perform otherwise impossible experiments, e.g. studying the formation of galaxies.
4. Tool for experimentalists: Simulations support experiments.
5. Pedagogical tool: Simulations can help students understand an underlying process during learning.

In the SOC context, service vendors create simulation models of the offered services to verify their functionality before release. The vendors thus have the possibility to address potential issues before these become problems (Chandrasekaran et al., 2002). Miller et al. (2002) also argue that, since flexibility is one of the requirements for competitiveness in the open market, simulation can serve the purpose of service design correction and improvement as well as adaptive changes at runtime. Considering that the success of organisations depends on the efficiency and effectiveness of their business processes, and that most business processes are implemented by combining services, it is useful to analyse the overall structure and functionality as well as the NF characteristics of the services (Chandrasekaran et al., 2002). Simulation of NF aspects answers “what-if” questions during process design and helps investigate how composed Web services will perform when they are deployed (Chandrasekaran et al., 2002).

This chapter points out some examples of service simulation systems and environments for the purpose of (non-)functionality validation, correction, and improvement. Furthermore, some network simulators that can be used for QoS analysis are introduced. The next section gives a short introduction to discrete event simulation, on which most simulation technologies are based.

2.1 Discrete Event Simulation

In order to simulate something, an appropriate model is required. A model usually contains several systems that are themselves models of logic representing behaviour, together with variables representing the state of the model. The process of simulation is usually the initialisation of the variables followed by either manual or automated change of a given variable and the observation of how the other variables change in reaction. According to Banks et al. (2005), discrete event simulation (DES) is a methodology for modelling systems in which the state variables change only at a discrete set of points in time. “The state of a system is a collection of variables necessary to describe a system at any time” (Banks et al., 2010: 30). A possible state variable in a production system is, for example, the status of the machines (busy, down, etc.). A system is composed of a number of entities. Entities are system components, e.g. a server or a machine. Each

entity has attributes, which are part of the system state. If a production machine is the entity, its attribute could be the machine’s status (busy or idle) (Law, 2007). The changes of these variables are organised in terms of events (Göbel, 2006). Events occur instantaneously and may change the state of the system (Law, 2007), e.g. the breakdown of a production machine. Events are processed sequentially. The completion of one event may trigger the start of another event (event routine). Future events are stored in an event list (Buss, 2002). If two or more events are scheduled at the same time, a rule is required to determine which event is to be executed first (Buss, 2002), for example by prioritising events based on their execution time.

Simulation processes usually start at a fictional time T by initialising the required parameters, such as the simulation clock (the value of the simulated time), the state variables, the statistical counters (which hold information about the system performance for the statistical report at the end of the simulation), and the event list (a list of future events). Data and events are often relative to the time T, for example the simulated bandwidth utilisation over time. Banks et al. (2010) recommend that the event list be ordered by event time, i.e. events are arranged chronologically. The determination of future events is therefore based on the event list. If the next event to occur is associated with time t1, the simulation clock is advanced to t1 and the event is executed. During this step, activities occur such as updates of the state variables, gathering of information about the system performance, or the scheduling of subsequent events. Each finished event is removed from the event list. If another future event is found, the process is repeated until a stopping condition arises. On termination, the statistical counters are analysed and a simulation report is generated.
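The clock-advancing loop just described can be condensed into a minimal Java sketch. This is an illustration only, not one of the simulators discussed below; the class and method names are invented for this example.

```java
import java.util.PriorityQueue;

/** Minimal discrete event simulation loop: a time-ordered future event list. */
public class MiniDes {

    /** A future event: a timestamp plus the state change to execute. */
    record Event(double time, Runnable action) {}

    // The future event list, ordered chronologically as Banks et al. recommend.
    private final PriorityQueue<Event> eventList =
        new PriorityQueue<>((a, b) -> Double.compare(a.time(), b.time()));
    private double clock = 0.0;  // simulation clock, starts at fictional time T = 0

    /** Schedules an event at an absolute simulation time. */
    public void schedule(double time, Runnable action) {
        eventList.add(new Event(time, action));
    }

    /** Runs events in chronological order; each event may schedule new ones. */
    public double run() {
        while (!eventList.isEmpty()) {
            Event next = eventList.poll();   // remove the next event from the list
            clock = next.time();             // advance the clock to its time
            next.action().run();             // execute the state change
        }
        return clock;                        // final simulated time
    }

    public static void main(String[] args) {
        MiniDes sim = new MiniDes();
        // A "machine breakdown" at t=2 triggers a "repair done" event 5 units later,
        // showing how the completion of one event can schedule another.
        sim.schedule(2.0, () -> sim.schedule(sim.clock + 5.0, () -> {}));
        System.out.println(sim.run());  // prints 7.0
    }
}
```

Note how simulated time jumps directly from one event timestamp to the next; nothing happens between events, which is what distinguishes DES from continuous simulation.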
(Law, 2007; Banks et al., 2010)

The main focus in this thesis is network simulation because, as explained in Chapter 3, this is the only simulated resource that does not exist as part of the framework.

2.2 General Simulation System

This section describes some representative simulation environments.

2.2.1 DESMO-J

The information about DESMO-J in this section is based on Page and Kreutzer (2005). DESMO-J (DES Modelling in Java) is a Java-based development framework for DES modelling. It contains a library of reusable classes for simulation experiments. It was developed at the University of Hamburg and is available as free and open-source software under the Apache License, Version 2.0.

DESMO-J is divided into two major components. Using a so-called black box framework, a number of pre-defined components can easily be chosen, used, and suitably instantiated. Domain experts with basic programming skills should be able to integrate them conveniently, since only simple parameter changes are necessary. This framework provides the core functionalities for implementing DES, for example event lists, simulation clocks and schedulers, methods to record and store simulation results and protocols, support for queues and random processes, and the capability to create statistical data for a simulation.

The other component of DESMO-J is referred to as the white box framework. It offers abstract classes which must be adapted or customised to fulfil model-specific requirements. This framework extends the core functionalities mentioned above with model-specific active entity types.

The creation of a DESMO-J model can be divided into a number of tasks. First, the selection of black box components is required; then the lifecycles of the white box components must be implemented by customising suitable abstract classes. Finally, top

level functionality such as the instantiation of model components and entity lifecycles must be implemented.

DESMO-J supports different modelling styles, such as:

1. Event-oriented simulation: This style describes state transformation as a series of events. Subclasses of the so-called Entity class must be implemented for each of the model-specific objects (e.g. a ship or crane to model container traffic) and extended with suitable attributes and functionalities. Additionally, an Event class must be defined for each type of event in the model, e.g. the event of a ship arrival in a container traffic model.

2. Process-oriented simulation: DESMO-J defines processes as active entities which model properties (data structure) and behaviour (a lifecycle which controls the process's state). In this style, each active entity must be defined as a subclass of SimProcess, i.e. ship and crane are described as processes. Each of these must implement the lifeCycle() method, which describes the entity's behaviour with respect to state changes, waiting queues, and interaction with other processes in its different phases.

DESMO-J comes with a graphical user interface (GUI) to configure the model and experiments. This enables users to change model parameters and control the experiment. Statistical measures of the simulation process are also presented visually.

2.2.2 J-Sim (Java Simulator)

The majority of the information below is taken from the official J-Sim website (J-Sim Official, n.d.). J-Sim (formerly JavaSim) was developed by a team at the Distributed Realtime Computing Laboratory of the Ohio State University. The project has since been supported by the National Science Foundation (NSF) within their Next Generation Software program, the Defense Advanced Research Projects Agency (DARPA) network modelling and simulation program, MURI/AFOSR, Cisco Systems Inc., Ohio State University, and the University of Illinois at Urbana-Champaign.
J-Sim is an open-source, component-based simulation environment implemented in Java. It is used for building quantitative numerical models (Siraj et al., 2012). The Tutorial – Working with J-Sim (2003) documents that, in addition to the Java environment, there is a scripting interface framework which enables the integration and use of different scripting languages. Tool Command Language (Tcl) is currently supported in J-Sim through a Tcl/Java extension called Jacl. While Java is used to create the simulation's entities, called components (Małowidzki, 2004), J-Sim makes use of Tcl to manipulate these objects, to configure and control the simulation execution process, and to collect simulation data.

The component-based software architecture J-Sim is based on is called the Autonomous Component Architecture (ACA). Its development was inspired by the design and manufacturing model of integrated circuits in terms of how components are specified, designed, and assembled. "Integrated circuit (IC) or also known as Microchip is a blackbox fully specified by the function specification and the input/out signal patterns in the data cookbook. Changes in input signals trigger an IC chip to perform certain functions, and change its outputs according to the chip specification after a certain delay. The fact that an IC chip is interfaced with other chips/modules/systems only through its pins (and is otherwise

shielded from the rest of the world) allows IC chips to be designed, implemented, and tested, independently of everything else." (Component Architecture, 2005)

One of the features of the ACA is that components are loosely coupled. They communicate with each other via their ports and are bound to contracts. A contract specifies the behaviour of and the interaction between the components. (Małowidzki, 2004; Component Architecture, 2005)

The architecture documentation (Component Architecture, 2005) states that J-Sim provides an independent execution context for a component to handle incoming data. This is implemented using Java threads, with the thread scheduler of the Java Virtual Machine (JVM) scheduling thread execution. When a component receives data, a background thread manager (called runtime) creates a thread as a new execution context to process the data. After execution, the created threads are not discarded but recycled and kept alive in a sleep state.

A Swing-based graphical editor, called gEditor, is available to create, configure, and execute the simulation model (Clemmensen et al., 2004).

2.3 Network Simulation

"In the network research area, simulation is often used to evaluate newly developed network protocols under varied real-world conditions or to change existing protocols in a controlled and reproducible environment" (Siraj et al., 2012). There exist a number of network simulators with different features for various target areas. This section introduces some examples of well-known network simulators.

2.3.1 WANem (Wide Area Network Emulator)

WANem is open-source software developed by Tata Consultancy Services – Performance Engineering Research Center in 2007. The background is that applications often perform differently in production systems than during the test phase. The reason is that a local testing network usually provides better network conditions than the application will actually encounter in production.
In the real world, there are various types of user networks which affect the behaviour of applications. WANem provides functionality to emulate a Wide Area Network (WAN) or the Internet. It allows developers to simulate network characteristics like network delay, packet loss, jitter, packet duplication, etc. By doing this, developers can monitor an application's behaviour and performance as on the Internet and take action before the application is released to end users.

WANem is based on the Linux kernel, providing the network emulation characteristics through extended kernel modules. It is distributed in the form of a bootable Knoppix Linux live CD, which can easily be launched as a virtual appliance, for example using VMware (WANem, 2008). WANem acts as a gateway between two hosts whose network connection should be simulated. Each host needs to set its routing parameters towards the WANem system, so that all packets between the hosts are routed through WANem. WANem provides a Web interface allowing users to configure the network characteristics (see Figure 2.1). WANem also comes with WANalyzer, a simple tool used to measure network characteristics such as bandwidth, jitter, etc., to help users "in giving realistic input to WANem" (WANem, 2008).
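To illustrate the kind of effects such an emulator applies, the following Java sketch adds a fixed delay and seeded random packet loss to simulated packet deliveries. It is a toy model only; the class name, parameters, and the two-effect model are our own assumptions and do not reflect WANem's actual implementation:

```java
import java.util.Random;

/** Illustrative WANem-style link effects: fixed delay plus random packet loss.
 *  All names and parameters here are hypothetical, not WANem's real internals. */
public class LinkEffects {
    private final double delayMs;   // one-way added delay in milliseconds
    private final double lossRate;  // probability in [0,1] that a packet is dropped
    private final Random random;

    public LinkEffects(double delayMs, double lossRate, long seed) {
        this.delayMs = delayMs;
        this.lossRate = lossRate;
        this.random = new Random(seed); // fixed seed keeps runs reproducible
    }

    /** Returns the simulated arrival time of a packet sent at sendTimeMs,
     *  or -1 if the packet is dropped. */
    public double deliver(double sendTimeMs) {
        if (random.nextDouble() < lossRate) {
            return -1.0;                 // packet lost
        }
        return sendTimeMs + delayMs;     // packet delayed
    }

    public static void main(String[] args) {
        LinkEffects link = new LinkEffects(40.0, 0.2, 42L);
        for (int i = 0; i < 5; i++) {
            double arrival = link.deliver(i * 100.0);
            System.out.println(arrival < 0 ? "packet " + i + " lost"
                                           : "packet " + i + " arrives at " + arrival + " ms");
        }
    }
}
```

Seeding the random generator mirrors an important requirement that recurs throughout this thesis: emulated network effects must be reproducible across runs.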

Figure 2.1: Configuration options of WANem

2.3.2 NS2 (Network Simulator version 2)

The majority of the information in this section originates from the NS2 website (ns-2, 2011). NS2 is an open-source discrete event network simulator. It supports simulation of routing algorithms, TCP, UDP, and multicast protocols over wired as well as wireless networks. The development of NS2 is based on the REAL network simulator from 1989. REAL was developed in cooperation between the University of California and Cornell University. It was aimed at studying the "dynamic behaviour of flow and congestion control schemes in packet-switched data networks" (Keshav, 1997). Since 1995, NS2 development has been supported by DARPA through the Virtual Internetwork Testbed project and by the NSF. Małowidzki (2004) suggests that NS2 is the most often used simulator in research projects because it is free, easy to use, and built with contributions from well-recognised scientists.

NS2 is based on two programming languages: C++, which defines the mechanics of the simulation objects (e.g. protocol behaviour) and runs the simulation (backend), and OTcl (an object-oriented extension of Tcl), which assembles and configures simulation objects as well as schedules events (frontend) (Issariyakul and Hossain, 2009). Köksal (2008) notes that NS2 provides a huge library of simulation objects and protocols. Each simulated packet is an event in the simulation; no real data is actually transferred over a real network. "NS2 supports designated and random packet loss, deterministic and stochastic modelling of traffic distribution, allows conditioning disturbance and corruptions in network like link interruption, node stoppage, and recovery. It is also possible to connect NS2 with a real network, capture live packets, and inject packets into live network." (Köksal, 2008: 6)

The first step before running the simulation is to design the simulation itself. Users need to create simulation scenarios, i.e.
determine the purpose of the simulation, the network configuration, the performance measurements, and the expected results, within the

Tcl scripting language (Issariyakul and Hossain, 2009). While running the script, NS2 records all processes and events. When the simulation is finished, it creates trace files containing this information, either as text-based files or in the NAM (Network Animator) format. These files contain packet flow details such as source node, transmission time, packet size, destination node, and reception time (Hasan et al., 2005). Using the NAM program, the results of a simulation run can be reviewed as an animation (Altmann and Jiménez, 2003).

2.3.3 OPNET (Optimized Network Engineering Tool) Modeler

The OPNET Modeler is developed by OPNET Technologies, Inc. It was originally developed at the Massachusetts Institute of Technology in 1987 (Siraj et al., 2012). OPNET is a commercial DES tool which is available free of charge for academic research purposes. OPNET provides a development environment for designing protocols and modelling different kinds of network types, behaviours, and technologies (Köksal, 2008). It is based on the C programming language. The Modeler is delivered with an advanced GUI used for the creation of network topologies or protocol models, parameter settings, simulation execution, and data analysis (Köksal, 2008).

There are a number of models provided by the OPNET Modeler. Varga and Hornig (2008) suggest that OPNET probably has the largest range of ready-made protocol models, including IPv6, MIPv6, QoS, Ethernet, and many others. According to Köksal (2008), users can modify these models or implement their own. After creating a model of the network system, users need to specify which information is to be collected during the execution. It is possible to create a number of network simulation scenarios, configure their parameters, and run them concurrently. Małowidzki (2004) argues that through its detailed implementation the OPNET simulation process closely reflects real-world network behaviour.
Simulation results are graphically displayed by the built-in analysis tool. It can generate different forms of output, such as numerical data, animations, and detailed traces. (Köksal, 2008; Siraj et al., 2012; Chang, 1999)

2.3.4 OMNET++ (Objective Modular Network Testbed in C++)

OMNET++ was designed as a general-purpose DES framework. Yet its main application area is the simulation of computer networks (wired and wireless) and other distributed systems (OMNeT++, n.d.). It has been used for queuing, wireless, ad-hoc, peer-to-peer, optical switch, and storage area network simulations (Köksal, 2008; Siraj et al., 2012). OMNET++ is developed by OpenSim Ltd. in Budapest, Hungary, and is available as an open-source product under a dedicated Academic Public License for academic and non-profit use.

According to Varga and Hornig (2008), OMNET++ provides an extensible, modular, and component-based architecture. Its components (simple modules) are written in C++, using the simulation class library, "which consists of simulation kernel and utility classes for random number generation, statistics collection, topology discovery etc." (Köksal, 2008) Multiple simple modules can be grouped into compound modules. Modules communicate with each other through messages, which can be sent either directly to the destination modules or via their gates (similar to ports in J-Sim). Gates are linked to each other via connections. Users can assign properties to connections such as propagation delay, data rate, or error rate. (Varga and Hornig, 2008)

According to Weingartner et al. (2009), the structure of the simulation model (i.e. the modules and their connections) is described in the OMNET++ topology description language, NED (Network Description). "NED makes it possible for users to define simple modules, connect and assemble them into compound modules" (User

Manual Omnet++, n.d.). When the simulation is compiled as a whole, NED is rendered into C++ code. The OMNET++ package comes with an Eclipse-based Integrated Development Environment containing a graphical editor for NED, allowing design, execution, and evaluation of the simulation (Siraj et al., 2012).

At the beginning of the simulation execution, modules are instantiated and a simulation model is built by the simulation kernel and the class library. The simulation executes in an environment provided by the user interface libraries (Tkenv, Cmdenv). Tkenv provides the user with a graphical view of the simulation process, e.g. the simulation progress or the simulation results at runtime. Cmdenv is designed for batch execution of simulations. The results are stored in output vector (.vec) and output scalar (.sca) files. Examples of recorded information are queue length over time, delays of received packets, packet drops, and more. Users can configure which outputs should be recorded. (Varga and Hornig, 2008; Varga, 2010; Köksal, 2008)

2.3.5 NIST Net (Network Emulation Tool)

The majority of the information below is drawn from Carson and Santay (2003) and from the NIST Net homepage (NIST Net Home Page, n.d.). NIST Net is a Linux-based network emulation package. It was developed by Mark Carson at the National Institute of Standards and Technology (NIST) in 1997 and supported in part by DARPA's Intelligent Collaboration and Visualisation (ICV) and Network Measurement and Simulation (NMS) projects. It is a tool for emulating performance dynamics on IP packets through a Linux-based PC router. It is implemented as a kernel module extension of the Linux operating system (OS) (NIST Net Home Page, n.d.). The intention of this tool is to provide a controlled, reproducible environment for testing network-adaptive applications. NIST Net allows users to emulate common network effects such as packet loss and duplication, delay, and bandwidth limitation.
Emulators are not identical with simulators. Here, emulation is defined as a combination of two techniques for testing a network model: simulation and live testing. It exploits the advantages of both: simulation is relatively quick and easy to assemble, and live testing avoids any uncertainty about the correctness of the model representations.

Carson and Santay (2003) describe NIST Net as a specialised router which emulates network characteristics as data traffic passes through it. It replaces the normal Linux IP forwarding mechanism in a way that enables network administrators to specifically control and configure a number of key network behaviours (Dawson, 2000). NIST Net has a table of so-called emulator entries. Based on these entries, the emulator identifies the packets that pass through it, applies the configured effects to matching packets, and stores statistics about them. The entries can be added and changed manually as well as programmatically during the emulator's operation.

NIST Net has two principal components: "A loadable kernel module, which hooks into the normal Linux networking and real-time clock code, implements the run-time emulator proper, and exports a set of control APIs; and a set of user interfaces which use the APIs to configure and control the operation of the kernel emulator: a simple command line interface for scripting and an interactive GUI for controlling and monitoring emulator entries simultaneously." (Carson and Santay, 2003: 114)

The NIST Net homepage (NIST Net Home Page, n.d.) states that NIST Net is no longer actively maintained: "Its functionality has been integrated into netem and the iproute2 toolkit, which are available in a current Linux kernel distribution."

In the next chapter, the design concept of the simulation environment's architecture is introduced, and the decision whether to use an existing network simulator or to create a new one is explained.

3 Architectural Design

This chapter describes the construction of the existing framework environment. Afterwards, two alternatives considered for building the simulation environment are introduced, together with their advantages and drawbacks.

3.1 Construction of Existing Framework

Currently, a laboratory environment exists which represents a test scenario of the existing framework. The framework components are constructed to run on virtual machines (VMs) (Figure 3.1). One VM represents one integration environment and is analogous to a company environment. As described in Chapter 1.3.3, the integration platform, which is realised as an ESB implementation, can be deployed into the existing system landscape of any enterprise. In this case, the ESB with its plugins (local components) is installed on each VM. Additionally, a number of Web services run on each VM and act as service components. In an enterprise context, these could be end users (employees) or services offered by the company itself or by service intermediaries. The central component runs on a separate VM. It is an independent service which can be operated by an independent organisation (Kirchner et al., 2011).

All components are connected to each other. A user from one enterprise (VM1) can access services from other enterprises (VM2). All service invocation processes are monitored by the local component, which then reports the monitoring feedback to the central component for assessment purposes.

Figure 3.1: Current system construction

3.2 Architecture Concept of Simulation Environment

During the creation of the simulation environment based on the existing laboratory environment, two approaches were taken into consideration. Both are based on virtualization, but with a different virtualization focus.
3.2.1 Full Virtualization using Virtual Machines

The first design concept to simulate a real-world environment is based on the existing laboratory environment and a virtualization approach suggested by Liu et al. (2010).

The basic idea is to keep the currently running environment and minimise the effort of building a completely new system. In this concept, the existing VM pool is extended by another VM hosting WANem or NIST Net (see Chapters 2.3.1 and 2.3.5). In this scenario, VM1, VM2, and the other VMs in the pool are configured to use VM3 (the virtual machine where WANem is running) as their gateway, so that all requests from one integration platform to another pass through WANem and network characteristics can be emulated. Similar to WANem, NIST Net acts as a router. Using the provided client module, the administrator is able to configure emulation rules. These follow a principle equivalent to a router's routing table, with additional information for emulation purposes such as delay, bandwidth, or packet loss.

One special component is the VM Monitor, also called hypervisor. It is responsible for the management, resource allocation, and isolation of the VMs (Thorns, 2008). Liu et al. (2010) chose VMware Server to manage a number of VM entities. It is a freely available software product and widely used in virtualization environments. However, VMware ended its support for VMware Server at the end of 2011 (VMware Support Policies, n.d.). As alternatives, VMware also offers the free VMware vSphere Hypervisor and VMware Workstation. Both work basically like VMware Server (Paiko, 2012).

Figure 3.2: System Architecture for Virtualization of Integration Environment

In the case of VMware Server, one interface to the VM Monitor is the VIX API (VIX 1.12 Getting Started, n.d.), a library for writing scripts and programs that manipulate VMs. An additional function of the VM Monitor is to interpret the configuration data received from the management console. Using the management console, an administrator is able to set the desired configuration for any VM. The configuration data is saved as an XML file.
This file is sent to the VM Monitor, which applies the settings. There are several advantages of designing the system architecture in this manner. Since the existing laboratory environment is already built this way, no big effort needs to be invested to create the VM pool. The only additional task is adding WANem (a VMware virtual disk file can be downloaded from the WANem website) or

NIST Net to the pool and setting up the routing rules. One VM can be considered as one company infrastructure in the real world, which can easily be customised in terms of CPU speed, memory size, or storage by simply changing the settings, without any action on the physical hardware. This gives the flexibility to allow testing under different software architecture circumstances. Through the VMware platform's functionality to create snapshots (backup copies of the current system state) of a VM, it is easy to recover the whole system in case of crashes or errors.

Liu et al. (2010) mention good resource isolation as one benefit of virtualization technology. Each VM runs as a complete standalone system on the hypervisor. Changes on one system will not be noticed by others and will not affect them, so that services running on a VM cannot easily be interrupted by events issued by services on other VMs (Thorns, 2008; Liu et al., 2010). In this case, performance isolation is obtained, which is an important requirement of a virtualised environment (Popek and Goldberg, 1974; Gupta et al., 2006). Considering the maintainability of all VMs, VMware provides the VIX API, which allows the creation of scripts or programs to administer the VMs.

Despite this flexibility, virtualization technology also entails certain drawbacks. Goldberg (1974) argues that there are apparent disadvantages in using a VM instead of a real machine: the extra resources needed by VMs (overhead) and the resulting loss of potential system throughput. The additional resources are used for the maintenance of the virtual processor, privileged instruction support, paging support within VMs, console functions, as well as I/O operations (Goldberg, 1974; Huang et al., 2006). Since the resources of a host machine are not unlimited and need to be shared among the VMs, the performance of each VM may become a concern if multiple VMs work concurrently.
If the implementation of the local component changes, high administration effort is necessary, because the change needs to be deployed on each VM. This can, however, easily be solved by creating a script that deploys the change to all VMs. In the real world, there is a huge number of service instances and company environments. Accordingly, a large number of integration environments is required to make the simulation environment as realistic as possible. This demands more VM instances, i.e. more resources to construct new VMs, which leads to the performance issues described by Goldberg (1974). Beyond these drawbacks, services within an integration environment cannot be customised to act differently, i.e. all services in one VM have the same network parameters/characteristics. Using WANem or NIST Net, only the connection between a host (VM) and these emulation systems is affected, not the connection of an individual service. Considering that the available resources are limited, performance issues can lead to uncontrolled side effects in VM behaviour. These side effects might compromise the reproducibility of results, which is an important characteristic of the simulator. Therefore, the research project team decided against this approach and in favour of the method described next.

3.2.2 Virtual Hosting using distinct ESB Instances

A different approach for the simulation environment is also based on virtualization technology. But instead of virtualizing the whole operating system, virtual hosts are configured for the integration platform. Figure 3.3 shows the system architecture of this

virtualization approach. This approach virtualizes several ESBs in one OS instead of virtualizing several OSs with one ESB each. All components are now located on one physical machine. The purpose of the management console and the interpreter remains similar to the previous approach. The management console enables customisation of the simulation context, such as a service consumer's location or bandwidth. The configuration is saved in an XML file, which is then analysed by the interpreter and applied to the simulation.

The system hosting the integration platform is configured in such a way that only a single installation of an ESB server is required, of which multiple instances are run. This is done by modifying the default configuration of the ESB server so that it is possible to create multiple ESB instances which run independently from one another. Thus each ESB instance is considered a single independent server. Equivalent to the previous approach, service components are available on the machine and connected to the integration platform (ESB instance), whose composition forms an integration (enterprise) environment.

Figure 3.3: Architecture for Virtualization of Integration Platform

By constructing the integration platform in this way, the virtualization goal can be reached conveniently. It does, however, not enforce strong isolation at the ESB. In addition, there is a risk that the services influence each other adversely because they run on the same hardware, so no resource isolation as with VMs can be guaranteed. Beyond that, this kind of architecture also has a number of advantages. Since all systems run on a real machine, performance problems, e.g. caused by CPU overhead, are directly handled by the system scheduler.
Even though the server instances need to share resources with each other, the amount of resources required for each instance is considerably less than for running VMs. Thus, all systems benefit from the high availability of system resources.

One important component in this architecture is the simulator, which is connected to the integration platform. It plays the key role in the system. In the previous approach, WANem or NIST Net can be used to simulate network characteristics. In this approach,

the simulator replaces the functionality of those systems. The principle is still the same: all service calls go through the simulator to be manipulated or modified. In Chapter 2.3, several commonly used network simulators are presented. One option was to evaluate these existing network simulators, to choose the most suitable one, and to integrate it into the existing laboratory environment. The research team, however, decided to create its own simulator. The reasons for this decision are outlined in the following paragraphs.

One significant drawback of this approach is that extra effort is required to implement the simulator and the various functions WANem or NIST Net offer, since no existing network simulator complies with the goals of the simulator. A big challenge was to analyse the real networking system and recreate it as a model for the simulation environment. A self-developed network simulator, however, can be placed between the virtual ESB instances without the real routing used between the VMs. Self-developed software gives the flexibility to define the model as desired, without depending on any third-party libraries or tools, and to keep extending it with desired functionality. This leads to a better understanding of, and more control over, what actually happens during the simulation process (controlled and reproducible simulation scenarios). For these reasons, it was decided to implement the simulation environment based on this approach.

The aim in creating the simulation environment is to simulate realistic network conditions by manipulating service calls within the laboratory environment of the existing framework. In doing so, the relevant framework components can record the process and produce data similar to realistic conditions. Later on, the central component can analyse the captured data in order to give an appropriate recommendation or assignment of best-fit service instances to specific consumer profiles.
For this purpose, the existing environment needs to be involved in the simulation process, since monitoring service calls and collecting the data is the task of the local components. For this reason, the simulator cannot be merely a standalone software product with which users design the network topology, define input parameters and simulation scenarios, execute the simulation, and receive a visualised simulation result at the end. It must rather be part of the existing environment. Among the simulation tools with integration capability, NS2 is worth mentioning. It is a well-established network simulation tool which provides advanced features to simulate various kinds of network characteristics (e.g. packet loss or channel throughput) and, through its open-source nature, even offers users the flexibility to build and extend its components for their specific environment. Unfortunately, NS2's backend is implemented in C++. Since the framework components were developed in Java, it would be preferable to have a simulation tool based on Java for integration convenience. Indeed, there are some ready-made Java-based simulators available, such as J-Sim. Yet this does not solve the integration problem, because most of them are standalone products which are difficult to integrate or not designed to work with other systems at all. One issue in using ready-made tools is the uncertainty about the implementation of the software/framework: how it really works at the backend, how it e.g. drops or duplicates packets. It is certainly possible to examine the open-source simulators, but this requires extra effort and time, and perhaps modifications to fit the research project team's needs, since it is essential to understand and precisely control the simulation process.
Furthermore, for the purpose of understanding, reusing and enhancing the simulator, a simplified but accurate network model is preferable to the advanced and complex models offered by the available simulators. For these reasons, it is desirable to build a Java-based simulator which is simple, easy to understand, and accurate in reproducing network effects.
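To give a first impression of how simple such a model can be kept, the following Java sketch computes a one-way delay from transmission and propagation time, the two parameters formalised in the next chapter. The class name, signature and example values are illustrative assumptions only:

```java
// Hypothetical sketch of a simplified network model: one-way delay from
// transmission time (message size / bandwidth) and propagation time
// (distance / propagation speed, ~2*10^8 m/s in optical fibre).
public class NetworkDelayModel {

    // Returns the modelled one-way delay in milliseconds.
    static double delayMillis(long messageBits, long bandwidthBps, double distanceMetres) {
        double transmissionSeconds = (double) messageBits / bandwidthBps;
        double propagationSeconds  = distanceMetres / 2e8;
        return (transmissionSeconds + propagationSeconds) * 1000.0;
    }

    public static void main(String[] args) {
        // A 12 kbit message over a 1 Mbit/s link across 1000 km:
        // about 12 ms transmission plus 5 ms propagation, i.e. roughly 17 ms.
        System.out.println(delayMillis(12_000, 1_000_000, 1_000_000.0));
    }
}
```

Such a model is deliberately simpler than what NS2 or J-Sim offer, but it is transparent, controllable, and easy to extend with further parameters.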

4 Simulation Design and Concept

This chapter first describes some non-functional characteristics of services; these are the service properties that should be simulated. Afterwards, the design concept of the implemented simulation tool is explained.

4.1 Non-functional Characteristics of Services

As mentioned before, the market of services on the Internet is growing and covers more and more functionality. This puts service consumers in a difficult position regarding the selection of their best-fit service provider. Many services may offer the same functionality; however, they are expected to differ in their NF characteristics. These properties are highly significant for the comparison of the services by the central component. In the framework, the central component makes estimations and decisions, based on its measured values, about which service should be assigned to which service consumer. The following sections discuss the NF service characteristics taken into consideration for the service selection.

4.1.1 Response Time

In a Web service context, response time can be defined as the time a service consumer has to wait until he/she receives an answer to the requested service function. In some literature, response time is also referred to as execution time, although the latter term is usually used to evaluate CPU processors (Shute, 2007). In this work, both terms are used synonymously. Response time or execution time is the time needed to execute a service request, starting from the submission of the request until the response has completely arrived at the requestor. Response time depends on several factors such as message size, distance between source and destination, bandwidth, etc. The execution time is defined by summing the following parameters:

1. Transmission time: The time required to send a message from source to destination (Forouzan, 2006). Transmission time must be computed separately for the request and the response, since they may differ in size. The calculation depends on the size of the transmitted message and the bandwidth:

       Transmission time = Message size / Bandwidth

   Bandwidth is a measure of the network capacity to send data and is usually specified in bits per second (bps). The message size is indicated in bits and includes the size of the request body and the length of the TCP/IP header.

2. Propagation delay, also called latency or delay: The time required for a bit to travel from source to destination (Forouzan, 2006).

       Propagation delay = Distance / Propagation speed

   The propagation speed varies depending on the medium that transports the signal. The speed of propagation through free space or vacuum is 3·10^8 m/s and through optical fibre cable around 2·10^8 m/s.

3. Processing time: The time that a service needs to process a request. "Web services usually announce their processing time publicly or provide a formula to calculate it" (Zeng et al., 2003). For this work, the processing time will be measured at runtime. The processing time is not static over time. Instead, the processing time of several identical requests from the same client depends on several factors such as the capacity of the server on which the Web service is running, its load at the point in time of the call, etc.

4. Segmentation overhead: When sending a message to a communication partner over the network, a limitation of the packet size has to be considered. The so-called maximum transmission unit (MTU) defines the maximum size of a packet, including header and data payload, which can be transmitted by a communication layer, e.g. 1500 bytes for Ethernet. The Internet is composed of heterogeneous networks and network hardware with diverse MTUs. This is an issue, because e.g. a router can only receive and transfer data not larger than the MTU. When the packet size is greater than the MTU, it is divided into smaller segments, and each segment is then transmitted independently over the network (Comer, 2009).

       Number of segments = Message size / MTU
       Segmentation overhead = (Number of segments × Header length) / Bandwidth

   The calculation of this value correlates with the transmission time.

5. Invocation time overhead: In the world of cloud services, a service consumer does not have to bother about the resources used to execute tasks, but simply uses the provided resources. The service provider is responsible for allocating an appropriate size of shared resources to be able to reliably serve his service consumers. These provided resources are in fact not unlimited and need to be shared among the service consumers, which eventually could lead to a resource bottleneck. Such a bottleneck can be noticed through longer service waiting times, i.e. higher response times. Kirchner et al. (2011) and Chen et al. (2011) have established in their experiments that the network utilisation rate (and in turn the available resources) of service providers varies over the time of day; in fact, it is higher during certain working hours. Besides the time of day, Kirchner et al. (2011) also assumed that calling a service on a working day and at the weekend yields different response times. The overhead is calculated as a multiplicative variable on top of the response time calculated from the former parameters:

       Invocation time overhead = n × (Transmission time + Propagation delay + Processing time)

   Variable n is a multiplying factor determined based on the invocation time. As an example, suppose the average response time of a weather service during the day is 250 ms. In the morning hours between 9:45 and 10:45 (Chen et al., 2011), when most people start and plan their day depending on the weather, many people invoke weather services. Since there are only limited resources, not all requests can be processed simultaneously. As a consequence, some users have to wait and the response time exhibits a higher value than the normal or average value, e.g. 400 ms instead of 250 ms (i.e. 60% slower than the response time outside the peak period, n = 0.6).

In the real world, there are many more factors affecting the service response time, such as the computer capacity to process the request, the type of requested information, the device used to call a Web service, etc. For the purpose of this work, the focus is only on those mentioned above.

4.1.2 Availability

Availability is a runtime quality-of-service parameter. It is usually defined by the percentage of service operation time. In this work, service availability is simply represented as true or false; no percentage is calculated. The service is considered available as long as it replies to a request, regardless of the waiting time. However, the waiting time is limited by the timeout of the service user. The connection timeout differs for

each system. It depends on the users' requirement of the maximum time a user is able or willing to wait until a service responds. If the timeout is exceeded, the service is assumed to be unavailable. Moreover, a service may respond with an error message, i.e. its HTTP response code is in the 4XX or 5XX range (Leach et al., n.d.). In this case, the service is also interpreted as unavailable.

4.1.3 Throughput

This parameter answers the question of how many users or requests a service can process within a period of time. Throughput can be a performance indicator of the service provider and its ability to process jobs concurrently (Lee, 2005). In this working context, throughput describes a statistic about the Web service usage within the framework over a given period of time.

4.1.4 Reliability

Reliability, also called successful execution rate, can be seen as both a functional and a non-functional property of services. Reliability indicates the correctness of the service responses within a specified time frame (Zeng et al., 2003). On the one hand, correct implies that the information delivered by the service provider is accurate and matches the request (a functional requirement). An example from real life is that the Google search engine gives proper results based on the search query. On the other hand, correct also means that a service is delivered successfully within a maximum time defined in a service contract (SLA) or Web service description (Zeng et al., 2003); this indicates an NF requirement. The correctness of the delivered information is difficult to measure and is not part of this work. The second, NF understanding of reliability is more suitable in the context of this work. For this work, reliability is proposed as a comparison of target and actual performance, i.e.
the promised NF properties of services defined in the SL Agreements are compared with the actually measured performance of the services (SL Achievement). If, for example, the SL Agreement states that the service is unavailable for less than one hour per year, the service calls are monitored over the whole year and then analysed as to whether the service unavailability in reality is indeed less than one hour. Service providers compete with each other on a service market to acquire as many consumers as possible and gain a large market share. Therefore, they need to convey a good quality impression of their service to potential consumers. All these characteristics can be found in the SL Agreement. Likewise, service intermediaries "need to specify their own SLAs while depending on the correctness of the specifications provided by their sub-services. [...,] deviations of actual non-functional characteristics and those specified in SLAs may propagate and spread even unintendedly and without control of the providers" (Kirchner et al., 2011). SL Achievement, as proposed by Kirchner et al. (2011), on the other hand provides information about the actually measured NF properties from the service consumers' point of view. In this way, service consumers can analyse whether a service provider stands by its commitment and hence is trustworthy. The practical implementation of the comparison of these two components is not part of this work.

4.2 Implementation concept

Currently, the existing framework environment scenarios run on the same physical machine, i.e. in an isolated localhost environment. Under this condition, services can perform perfectly, since there are no fundamental interruptions compromising their efficiency in executing tasks. This circumstance does not exist in the real world. To be able to generate reliable data which represent a genuine
