IT Licentiate theses 2010-003
Managing Applications and Data in Distributed Computing
Infrastructures
SALMAN ZUBAIR TOOR
UPPSALA UNIVERSITY
Managing Applications and Data in Distributed Computing
Infrastructures
Salman Zubair Toor
salman.toor@it.uu.se
March 2010
Division of Scientific Computing, Department of Information Technology
Uppsala University, Box 337, SE-751 05 Uppsala
Sweden
http://www.it.uu.se/
Dissertation for the degree of Licentiate of Philosophy in Scientific Computing
© Salman Zubair Toor 2010   ISSN 1404-5117
List of Papers and Report
This thesis covers three areas in the field of enabling distributed computing infrastructures for scientific applications.
In the first paper we present tools for general purpose solutions using portal technology, while the second paper addresses access to grid resources from within an application problem solving environment for a specific domain.
• E. Elmroth, S. Holmgren, J. Lindemann, S. Toor, and P–O. Östberg.
Empowering a Flexible Application Portal with a SOA-based Grid Job Management Framework. Accepted for publication in Proc. 9th Workshop on State-of-the-art in Scientific and Parallel Computing (PARA 2008), Lecture Notes in Computer Science, Springer-Verlag.
• M. Jayawardena, C. Nettelblad, S. Toor, P–O. Östberg, E. Elmroth and S. Holmgren. A Grid-Enabled Problem Solving Environment for QTL Analysis in R. Accepted for publication in Proc. 2nd International Conference on Bioinformatics and Computational Biology (BICoB 2010), 2010.
The next two papers focus on architectural design for distributed storage systems. Here, the third paper in the thesis presents the architecture of the Chelonia system and a proof-of-concept implementation, and the fourth paper focuses on extensive system stability and performance testing.
• Jon K. Nilsen, Salman Toor, Zsombor Nagy and Bjarte Mohn. Chelonia – A Self-healing Storage Cloud. Accepted for the Proc. Cracow Grid Workshop 2009.
• J. K. Nilsen, S. Toor, Zs. Nagy, B. Mohn, and A. L. Read. Performance and Stability of the Chelonia Storage Cloud. Submitted to the Journal of Parallel and Distributed Computing, special issue on Data Intensive Computing.
The final paper in the thesis presents a review of grid resource allocation models in different grid middlewares and proposes modifications to build a more efficient and reliable resource allocation system.
• S. Toor, B. Mohn, D. Cameron, S. Holmgren. Case-Study for Different Models of Resource Brokering in Grid Systems. Technical Report no.
2010-009, Department of Information Technology, Uppsala University.
Contents
1 Introduction 3
1.1 Grid Technology . . . 4
1.2 Cloud Technology . . . 5
1.3 Grids vs Clouds . . . 6
1.4 Service-Oriented Architectures and Web Services . . . 6
1.5 Middlewares . . . 7
2 Application Environments for Grids 11
2.1 Grid Portals . . . 11
2.2 Application Workflows . . . 12
2.3 The Job Management Component . . . 12
3 Distributed Storage Systems 13
3.1 Characteristics of Distributed Storage . . . 14
3.2 Challenges of Distributed Storage . . . 15
4 Resource Allocation in Grids 17
4.1 Models for Resource Allocation . . . 17
5 Summary of Papers in the Thesis 19
5.1 Paper-I . . . 19
5.2 Paper-II . . . 19
5.3 Paper-III . . . 20
5.4 Paper-IV . . . 20
5.5 Paper-V . . . 21
Abstract
Over the last few decades, the need for computational power and data storage within collaborative, distributed scientific communities has increased very rapidly. Distributed computing infrastructures such as computing and storage grids provide means to connect geographically distributed resources and help address the needs of these communities. Much progress has been made in developing and operating grids, but several issues still need further attention. This thesis discusses three different aspects of managing large-scale scientific applications in grids:
• Using large-scale scientific applications is often in itself a complex task, and setting them up and running experiments in a distributed environment adds another level of complexity. It is important to design general purpose and application specific frameworks that enhance the overall productivity of the scientists. The thesis presents further development of a general purpose framework where existing portal technology is combined with tools for robust and middleware independent job management. Also, a pilot implementation of a domain-specific problem solving environment based on a grid-enabled R solution is presented.
• Many current and future applications will need large-scale storage systems. Centralized systems are eventually not scalable enough to handle huge data volumes, and they can also have additional problems with security and availability. An alternative is a reliable and efficient distributed storage system. In the thesis, the architecture of a self-healing, grid-aware distributed storage cloud, Chelonia, is described and performance results for a pilot implementation are presented.
• In a distributed computing infrastructure it is very important to manage and utilize the available resources efficiently. The thesis presents a review of different resource brokering techniques and how they are implemented in different production level middlewares. Also, a modified resource allocation model for the Advanced Resource Connector (ARC) middleware is described and performance experiments are presented.
Chapter 1
Introduction
Currently, a wide range of application areas present increasing requirements for utilizing distributed computing infrastructures. The rapid acceptance of the concept of e-Science [8] indicates that this rapid growth will continue in the future, and to fulfill the requirements, more computing and storage resources need to be made available. During the last decades, a number of different projects have been run to design systems which enable efficient use of geographically distributed resources to fulfill computational and storage requirements. Several names have been used to describe different distributed computing infrastructures, e.g. utility computing, meta computing, scalable computing, internet computing, peer-to-peer computing, and grid computing. Today, service-oriented architectures also enable cloud computing, which focuses on providing non-trivial quality of service to both computational and storage users.
The idea of building a computational grid evolved from the concept of electric grids [51]. Under the headline of grid computing, issues of efficient, reliable and seamless access to geographically distributed resources have been extensively studied, and a number of production level grids are today essential tools in different scientific disciplines. The work presented in this thesis addresses three areas in this field: application environments, storage solutions and resource allocation for distributed computing infrastructures.
Below, a brief introduction to the challenges studied in each field is given.
Application environments: When building distributed computing infrastructures it has been realized that, to get the maximum benefit out of this framework, two major actions should be taken. First, the monolithic design of many applications needs to be modified so that they are not tightly coupled to a specific type of resource or system for execution, or to a specific type of user interface for user communication. Second, more user friendly and flexible application environments are required to execute and manage complex applications in distributed environments. Many efforts have been made in these directions, and a number of solutions have been proposed
based on high-level client APIs, web application portals and workflow management systems.
Storage solutions: Many applications utilizing distributed computing infrastructures use large amounts of data storage. This means that the storage system is a vital component of the overall distributed infrastructure. The task of building a large-scale storage system using geographically distributed storage resources is non-trivial, and achieving production level quality requires functionality such as security, scalability, a transparent view over the geographically distributed resources, simple and easy data access, and a certain level of self-healing capability where components can join and leave the system without affecting the system's availability. Different projects have been run to develop production quality solutions, both as completely independent storage middlewares and as parts of computational grid systems.
Resource allocation: For grid systems, efficient selection of the execution or storage target within the set of available resources is one of the key challenges. The heterogeneous nature of most grid environments makes the task of resource discovery and selection cumbersome. Here, a number of solutions and strategies for resource allocation have been proposed. Each offers certain features but also introduces limitations. One of the challenges is that the resources are normally administrated by different organizations and thus their availability is not guaranteed. A monitoring system is therefore required to identify the available resources. A comprehensive view of the available resources requires up-to-date information, and the task of collecting this information is expensive and requires network bandwidth.
1.1 Grid Technology
Grid technology provides means to facilitate work in collaborative environments formed across the boundaries of institutions and research organizations.
In [49], grid technology is stated to “promise to transform the practice of science and engineering, by enabling large-scale resource sharing and coordinated problem solving within far-flung communities”. Over the last decade, a number of research and development projects have put a lot of effort into making grid technology stable enough to provide a production infrastructure for both computation and data.
Grid technology allows different kinds of resources to be seamlessly available over geographical and technological boundaries. A resource can be anything from a single workstation, a rack mounted cluster, a supercomputer, or a complex RAID storage, to e.g. a scientific instrument that produces data. These resources are normally independent and managed by different administrative domains. This brings many challenges in how to enable
different virtual organizations [52] to access resources in different domains.
A basic question is to select which resource to use to run the application or store the data. Since each set of resources is subject to different access policies, how can one enable a standard access mechanism? And how can the environment be made secure enough to maintain the integrity of the system? How can one build a reliable monitoring and accounting system with low overhead? What protocols should be used to communicate with users, between computing resources and between storage centers? Each of these questions has emerged as a sub-field of grid computing research, in which different research groups have come up with various types of solutions.
The uptake of grid technology within the scientific community can be measured by the number of middleware initiatives and the number of projects utilizing grid resources through these middlewares. For example, the gLite middleware [9] has more than 260 sites all over the world, which together provide 150,000 processing cores, 28 petabytes of disk space and 41 petabytes of long-term tape storage. More than 15 different scientific domains benefit from this infrastructure. The Advanced Resource Connector (ARC) middleware [46] by NorduGrid [22] has 65 sites in which more than 41,960 CPUs are in use [7]. Many other middlewares, such as Condor-G, Globus [15] and Unicore [33] for computing grids, and dCache, CASTOR, DPM and SRB for storage grids, are also heavily used in different scientific experiments.
Apart from these production middlewares for computational and storage grids, there are a number of research projects which have developed different application specific and general purpose environments based on these middlewares.
1.2 Cloud Technology
Clouds address the complexity of large-scale storage and computing infrastructures by providing a certain level of abstraction. This technology has gained much attention over the last few years, and companies like Amazon, Yahoo and Google have presented their own solutions. There are a number of definitions [32, 81] explaining the concept of a cloud; one example, found in [83], states that “A Computing Cloud is a set of network enabled services, providing scalable, QoS guaranteed, normally personalized, inexpensive computing platform on demand, which could be accessed in a simple and pervasive way”.
The basic idea of cloud technology is to provide a given level of quality of services while keeping the infrastructural details hidden from the end users.
The customer pays for and gets the services on demand. In [81], the set-up of a cloud service is based on two actors: Service Providers (SPs), which provide a set of different services (e.g. Platform as a Service (PaaS) or Software as a Service (SaaS)) and ensure that the customers can access these, and Infrastructure Providers (IPs), which are responsible for the hardware infrastructure.
Actors with specialized roles introduce flexibility in the system; for example, one SP can utilize the infrastructure of multiple IPs, and a single IP can provide infrastructure for one or several SPs.
Having actors responsible for providing services that fulfill a certain Service Level Agreement (SLA), together with an economic model, encourages companies to adopt cloud technology and sell computing and storage services like other utilities such as electricity or gas.
1.3 Grids vs Clouds
Currently, a discussion aiming at pinpointing the differences between clouds and grids is ongoing. In [50], a detailed comparison of these technologies is presented, and it is clarified that there are differences in security, computing and programming models. However, there are also similarities in vision, sometimes in the architecture, and also in the tools that are used to build the systems.
1.4 Service-Oriented Architectures and Web Services
One framework for implementing loosely coupled distributed applications is to use Service Oriented Architectures (SOA). Here, the definition given in [24] is that “Service Oriented Architecture (SOA) is a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains”. In general, SOA allows for a relationship between needs and capabilities. This relation can be one-to-one, where one need can be fulfilled by one capability, or it can be many-to-many.
A service is defined as a “mechanism by which needs and capabilities are brought together”. The visibility of the capabilities offered by entities is described in the service description, which also contains the information necessary for the interaction. The service description also states what result will be delivered and under what conditions the service can be invoked.
Web services [77] form one implementation of a service-oriented architecture. A web service is basically a distributed application that offers functionality by publishing its functions and interfaces while hiding the implementation details. Clients communicate using standard protocols without actually knowing the platform or the implementation details. The success of web service technology is due to the acceptance of standards. Usually the communication process is based on three components: XML (eXtensible Markup Language) [11] for data exchange between the client application and the service, SOAP (Simple Object Access Protocol) [29] and HTTP(S). Here, SOAP is an XML-based protocol used to envelope the information and HTTP(S) is used as the transport. Also, WSDL (Web Service Description Language) [34], which is an XML-based language for describing the attributes, interfaces and other properties of a web service, is used.
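To make the interplay between these components concrete, the sketch below shows a minimal SOAP request sent over HTTP from Python. It is only an illustration: the service endpoint, operation name and namespace are hypothetical placeholders, and a real client would normally derive them from the WSDL description of the service.

```python
import urllib.request

# Hypothetical service endpoint; in practice this is obtained from the
# WSDL description of the web service.
ENDPOINT = "https://example.org/grid/JobService"

# A minimal SOAP 1.1 envelope: XML encodes the request data, SOAP provides
# the envelope, and HTTP(S) carries it to the service.
envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetJobStatus xmlns="urn:example:jobservice">
      <jobId>job-42</jobId>
    </GetJobStatus>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:example:jobservice#GetJobStatus"},
)

# The response is again a SOAP envelope; the client never sees how the
# service is implemented, only the published interface.
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```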
Some grid initiatives are based on the SOA model. For example, the Open Grid Forum (OGF) [14] has defined the Open Grid Service Architecture (OGSA) [25]. OGSA describes a service-oriented grid in which a range of higher level services use the core services to provide data management, workload management, security etc.
1.5 Middlewares
The term grid middleware is used to describe a component-based software stack designed to enable seamless, reliable, efficient and secure access to geographically distributed resources. A number of different middleware initiatives have been started over the years, and the following description only gives a brief overview of a few production level middlewares for computational and storage grids.
• Globus Toolkit: Globus is a pioneering project that provides tools to build grid middlewares. The toolkit [61] provided by Globus contains several components which can broadly be categorized into five classes:
Execution Management [10], which executes, monitors, and schedules grid jobs; Information Service [20], which discovers and monitors resources in the grid; Security [28], which provides the Grid Security Infrastructure (GSI); Data Management [18], which allows for handling of large data sets; and finally Common Runtime, which is a set of tools and libraries used to build the services. Most of these components are based on web services.
Other middleware initiatives provide more full-blown solutions for distributed computational and storage resources and are directly used in different application areas:
• Advanced Resource Connector (ARC): The Advanced Resource Connector (ARC) grid middleware is developed by the NorduGrid consortium [21] and the EU KnowARC project [19]. The next generation of this middleware is SOA-based, where services run in a customized service container called the Hosting Environment Daemon (HED) [40]. HED comprises pluggable components which provide different functionalities. For example, Data Management Components are used to transfer data using various protocols, Message Chain Components are responsible for the communication between clients and services, ARC Client Components are plug-ins used by the clients to connect to different grid flavors, and Policy Decision Components are responsible for the security model within the system. There are a number of services available for fulfilling the fundamental requirements of a grid system. For example, grid job execution and management is handled by the A-REX service [69], policy decisions are taken by the Charon service, the ISIS service [56] is responsible for information indexing, and batch job submission is handled by the Sched service.
The work presented in this thesis is based on the ARC middleware.
In [23], further details on each of the components and services in ARC are presented.
• gLite: The gLite middleware [70] is the interface to the resources in the EGEE [57] infrastructure. gLite is also SOA-based. Two core components of the gLite middleware stack are gLiteUI, a specialized user interface to access the available resources, and the Virtual Organization Membership Service (VOMS), which manages information and access rights of the users within a VO. Resource level security is managed by the Local Centre Authorization Service (LCAS) and the Local Credential Mapping Service (LCMAPS). The Berkeley Database Information Index (BDII) is used for publishing the information. The Workload Management System (WMS) [73] is a key component of the system and distributes and manages user tasks across the available resources. The lcgCE and CREAM-CE (Computing Resource Execution And Management Computing Element) are services providing the computing element, and lcgWN is the service for a worker node.
For Data Management [78], the LFC (LCG File Catalog) and the FTS (File Transfer Service) are used. R-GMA [37] and FTM [12] are used for monitoring and accounting.
• UNICORE: UNICORE [75] is a middleware based on a three-layered architecture. Here, the top layer deals with the client tools, and the second, service layer consists of core middleware services such as authentication, job management and execution. Application workflows are managed by the Workflow Engine and the Service Orchestrator. The bottom layer is the systems layer, which contains the connection between UNICORE and the autonomous resource management system. External storage is accessed using the GridFTP protocol.
• OGSA-DAI: The Open Grid Services Architecture – Data Access and Integration (OGSA-DAI) [68] is a storage middleware solution that allows uniform access to data resources using a SOA approach. OGSA-DAI consists of three core services: the Data Access and Integration Service Group Registry (DAISGR), which allows other services in the system to publish metadata and capabilities; the Grid Data Service Factory (GDSF), which has a direct connection to the data resource, contains additional metadata about the resource, and creates Grid Data Services; and the Grid Data Service (GDS), which is used by the clients to access the data.
A set of Java-based APIs allows clients to communicate with the system.
• Meta-middlewares: The problem of having to learn and use multiple middlewares has been addressed by adding another layer on top of the existing middlewares. This meta-layer interacts with the underlying middlewares and can also add new functionality. The Grid Job Management Framework (GJMF) [47] is an example of a middleware independent resource allocation framework.
• Amazon Services: In contrast to the grid middleware initiatives described above, Amazon [58, 6] provides commercial solutions for computing and storage capabilities through the Elastic Compute Cloud (EC2) [1] and Simple Storage Service (S3) [4] web services. The Amazon cloud provides a seamless view of the computing and storage services with a pay-as-you-go model. Here, the S3 service is based on the concept of buckets: containers that store objects and can be configured to reside in a specific region. S3 provides APIs using REST [67] and SOAP for the most common operations, like creating and deleting buckets, writing and reading objects, and listing keys (as sketched below). EC2 allows access to the computational resources using web service interfaces. When communicating with the EC2 service, a client selects an instance with the required operating system, sets the security and network settings, loads the application environment and runs the image on the desired number of systems. The EC2 service also provides tools to monitor running applications. Apart from these two services, Amazon also provides SimpleDB [5] for core database functions like indexing and querying in the cloud, while RDS [3] addresses users that need a relational database system and the Elastic MapReduce [2] service enables users to process massive amounts of data.
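To make the bucket/object model concrete, the sketch below performs the operations listed above with the boto3 Python SDK. This is a later AWS SDK than existed when these services were introduced and is used here only as an illustration; the bucket name, keys and contents are placeholders, and valid AWS credentials are assumed to be configured in the environment.

```python
import boto3

# Credentials and region are read from the environment or configuration;
# the bucket name below is a placeholder.
s3 = boto3.client("s3")

# Create a bucket (the container for objects, tied to a region).
s3.create_bucket(Bucket="example-thesis-bucket")

# Write and read an object identified by its key.
s3.put_object(Bucket="example-thesis-bucket",
              Key="results/run-001.txt",
              Body=b"simulation output")
obj = s3.get_object(Bucket="example-thesis-bucket", Key="results/run-001.txt")
print(obj["Body"].read())

# List the keys stored in the bucket.
listing = s3.list_objects_v2(Bucket="example-thesis-bucket")
for entry in listing.get("Contents", []):
    print(entry["Key"], entry["Size"])
```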
Chapter 2
Application Environments for Grids
Grid systems provide a means for building large-scale computational and storage environments meeting the growing demands of scientific communities. There are challenges in building and managing efficient and reliable grid software components, but another area that also requires serious attention is how to enable applications to use the grid environment. Often, scientific applications are built using a monolithic approach which makes it difficult to exploit a distributed computing framework. Even for a very simple application, the user needs certain expertise to run the job on a grid system: the client tool has to be installed and configured, a job description file has to be prepared, credentials have to be handled, commands to submit and monitor the job have to be issued, and finally the output files might have to be downloaded (the sketch below illustrates these steps). Complex scientific applications use external libraries, input data sets, external storage space and certain toolkits, which adds further complexity when running the application in a grid environment. Large efforts are needed to handle all these issues, and this greatly affects the overall progress of the real scientific activity.
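To illustrate the steps listed above, the sketch below drives a command-line grid client from Python. The job description uses ARC's xRSL syntax and the client commands follow the ARC client suite (arcsub, arcstat, arcget), since ARC is the middleware used later in this thesis; the computing element name, file names and job name are hypothetical placeholders.

```python
import subprocess, tempfile

# A minimal xRSL job description (attribute names follow the ARC xRSL
# syntax; the script name, output files and job name are placeholders).
xrsl = (
    '&(executable="run.sh")'
    '(inputFiles=("run.sh" ""))'
    '(stdout="stdout.txt")(stderr="stderr.txt")'
    '(jobName="demo-job")'
)

with tempfile.NamedTemporaryFile("w", suffix=".xrsl", delete=False) as f:
    f.write(xrsl)
    job_file = f.name

# Assumes a valid proxy credential already exists (e.g. created with
# arcproxy) and that 'example-ce.grid.org' is a reachable computing element.
subprocess.run(["arcsub", "-c", "example-ce.grid.org", job_file], check=True)

# Monitoring and output retrieval are separate client commands; the job ID
# printed by arcsub would be passed to them:
#   arcstat <job-id>
#   arcget <job-id>
```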
To get the maximum benefit out of a grid computing infrastructure, there is a need to provide the user community with flexible, transparent and user friendly general purpose and application specific environments. Such environments can also, for example, handle several different middlewares in a transparent way.
2.1 Grid Portals
Grid application portals represent one way to address the requirements mentioned above. The goal is to access the distributed computational power using a web interface and make application management as simple as using the web for sharing information. A number of different projects
have developed production level application portals. For example, GridSphere [16], the LUNARC portal [71], GENIUS [35] and the P-GRADE portal [42] together with GEMLCA [79] provide middleware independent grid portals.
2.2 Application Workflows
Scientific applications are often quite complex, and a computerized experiment is built up from the execution of multiple dependent or independent components. Single or bulk job submission and management systems cannot handle such applications. Enabling complex applications to utilize grid resources requires a comprehensive execution model. In a grid environment such models are known as application workflows [87]. In [53] a formal definition of a grid workflow is given as “The automation of the processes, which involves the orchestration of a set of grid services, agents and actors that must be combined together to solve a problem or to define a new service”.
Apart from different independent web based or desktop applications for handling workflows, different middlewares provide separate components for managing workflows. These components allow for submitting a workflow as one single, complete task. Condor's DAGMan (Directed Acyclic Graph Manager) [74] and Unicore's Workflow Engine [44] are examples of such components. Other extensive efforts include Triana [31], an open source problem solving environment, Pegasus [27], and Taverna [80] for bioinformatics applications.
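As a toy illustration of the directed-acyclic-graph model used by components like DAGMan, the sketch below represents an experiment as tasks with dependencies and runs them in a valid topological order. The task names are invented, and in a real workflow system each task would correspond to a grid job rather than a local print statement.

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on; in a real workflow
# system the nodes would be grid jobs rather than local function calls.
workflow = {
    "preprocess": set(),
    "simulate_a": {"preprocess"},
    "simulate_b": {"preprocess"},
    "analyse":    {"simulate_a", "simulate_b"},
}

def run(task):
    # Placeholder for submitting the task to a grid resource.
    print(f"running {task}")

# Execute the tasks so that every dependency finishes before its dependants.
for task in TopologicalSorter(workflow).static_order():
    run(task)
```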
2.3 The Job Management Component
The job management component is an important basic building block of an application environment. The task of this component is to handle job submission, management, resubmission of failed jobs and possibly also migration of jobs from one resource to another. Often the job management component is designed as a set of services with well-defined tasks, and the functionality is exposed by client tools or a set of APIs. This component works together with the client side interface to provide a flexible, robust and reliable management component. The job management component is also responsible for providing seamless access to multiple middlewares. One example is the GEMLCA integration with the P-GRADE portal, in which the layered architecture of GEMLCA provides a grid-middleware independent way to execute legacy applications. In other examples, the GridWay [45]
metascheduler provides reliable and autonomous execution of grid jobs, and GridLab [62] produces a set of application-oriented grid services which are accessed using the Grid Application Toolkit (GAT). Using these tools, application developers can build and run applications on the grid without needing to know too many details.
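The resubmission and migration behaviour described above can be sketched as a simple loop over candidate resources. The sketch below is purely conceptual: submit and poll are placeholders standing in for calls to an actual middleware client, and the resource names and state strings are invented for the example.

```python
import time

def submit(resource, job):
    # Placeholder for a middleware-specific submission call.
    return f"{resource}/job-0001"

def poll(job_id):
    # Placeholder; a real client would query the middleware for the state.
    return "FINISHED"

def run_with_resubmission(job, resources, poll_interval=30):
    """Try the job on one resource after another, resubmitting on failure."""
    for resource in resources:
        job_id = submit(resource, job)
        state = poll(job_id)
        while state == "RUNNING":
            time.sleep(poll_interval)
            state = poll(job_id)
        if state == "FINISHED":
            return job_id
        # state == "FAILED": fall through and resubmit on the next resource
    raise RuntimeError("job failed on all attempted resources")

print(run_with_resubmission({"executable": "run.sh"},
                            ["ce1.site-a.org", "ce2.site-b.org"]))
```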
Chapter 3
Distributed Storage Systems
Large-scale storage systems have become an essential computing infrastructure component in both research and commercial environments. Distributed storage systems already hold petabytes of data, and the volumes are constantly increasing. The challenges of handling huge data volumes include requirements on consistency, reliability, long term archiving and high availability. In distributed collaborative environments, such as particle physics [26], earth sciences [76] and biomedicine [63], the need for a distributed storage system is even more pronounced. In order to efficiently utilize the computational power, high availability of the required data is essential. In commercial environments, companies like Amazon, Yahoo and Google are working on solutions to provide “unlimited storage anytime, anywhere”.
Centralized storage solutions cannot handle data challenges in a scalable way, but by instead using a distributed storage system (DSS) such challenges might be handled. Network Attached Storage (NAS) and Storage Area Networks (SAN) provide limited solutions, but for large scale storage requirements the concept of geographically distributed resources in a data grid [38] appears as the viable solution. The concept of data grids is to create large, virtual storage pools by connecting a set of smaller, geographically distributed storage resources.
During the last years, the challenge of designing DSS for huge data sets has been addressed in a number of projects. Solutions such as Google BigTable [41], which is a distributed storage system for managing petabytes of data over thousands of machines, have been deployed. BigTable is based on the Google file system [59] and is in use with some highly data intensive applications like Google Earth, Google Analytics and the Google personalized search engine. Amazon Dynamo [43] is a storage system used by the world's biggest web store, Amazon.com. Hadoop [17] is another effort aimed at designing a reliable, scalable, distributed storage system.
In the research community there are several projects where different solutions have been developed. For example, CASTOR and DPM [78] from CERN and dCache [55] from Fermilab and the DESY laboratory are in use to handle the petabytes of data generated by the Large Hadron Collider (LHC) experiment. Here, the data centers are located all over the world and the DSS are used to store the data on geographically distributed storage nodes. dCache is also capable of handling tertiary storage for long term data archiving.
Tahoe [30] is an open source filesystem which utilizes several nodes in a resilient architecture. XtreemFS [60] addresses the same problem of distributed storage over heterogeneous environments using an object-based filesystem. iRODS [84] provides a layer on top of third party storage solutions and gives high level, seamless access to different storage systems.
The projects listed above show the variety of large scale distributed storage systems available for both the commercial and research communities.
Despite all these big projects, new efforts are needed to assess limitations in the current DSS.
3.1 Characteristics of Distributed Storage
To address the challenges of distributed storage systems, various solutions are emerging. Different studies have been conducted to identify the key features or characteristics of large-scale storage systems. In [82], a comprehensive view of the requirements and the key characteristics of such systems is given:
• Reliability: The system should be capable of reliably storing and sharing the data generated by various applications.
• Scalability: The system should have a scalable architecture in which thousands of geographically distributed storage pools can dynamically join and leave the system.
• Security: The security model is an essential part of the DSS. It is important that users can share the data in an easy-to-use but secure environment. The security is required at different levels in the system, e.g. between different components of the system, when transferring data, when accessing data, and to determine ownerships on files and collections.
• Fault Tolerance: While handling large amounts of data in a geographically distributed environment, it is expected that the system experiences hardware or component failures. The system should have the capability to recover transparently from a certain level of problems.
• High Availability: To run the system in a production environment it is important that the system is highly available.
• Accessibility: To make the system practically usable it is very important that the interfaces are simple enough to hide the overall complexity from the end user.
• Interoperability: The diversity of the overall system requires that different projects can build solutions that address the needs of different communities. It is important to follow standards that allow for interoperability between such systems.
3.2 Challenges of Distributed Storage
Designing large-scale distributed storage systems is a non-trivial task. All the characteristics of DSS listed above have been extensively studied in the past years. In [38], core components for distributed data management have been identified. Several projects have been initiated that help to increase the overall progress. Below, the most commonly identified technical challenges in building a reliable, efficient, scalable, highly available and self-healing distributed storage system are listed:
Data Abstraction or Virtualization: The system should provide a high level abstraction when utilizing storage resources across independent administrative domains.
Data Transfer: Data intensive applications and replication mechanisms require protocols for efficient and reliable data transfer.
Metadata Management: Decoupling and management of information about the available data in the system is a serious challenge in the design of a DSS. For large scale systems, the metadata store is often the scalability bottleneck and a single point of failure in the system.
Authentication and Authorization: Resources running in independent administrative domains must have a security layer which allows single sign-on access to the resources. In grid systems, security is often handled by X.509 certificates signed by a certificate authority. Also, the concept of a virtual organization has evolved to make it possible to apply policies or rules by defining groups of individuals or projects in the same field.
Replica Management: High availability and reliability of the data are often ensured by creating multiple copies of the data. A number of strategies have been proposed and studied for offering efficient and reliable replica management in a DSS.
Resource Discovery and Selection: The heterogeneous nature of most DSS results in a need for a mechanism that gives information about the availability of data and its replicas in the system. This information helps to select the source that can most efficiently deliver the data to the destination; a minimal sketch of such replica selection is given below.
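The sketch below illustrates replica management and selection in the simplest possible terms: a replica catalogue maps logical file names to physical locations, and the client picks the replica with the best estimated transfer cost. The catalogue contents, URLs and cost function are invented for the example; a production DSS would obtain this information from its catalog and monitoring services.

```python
# Replica catalogue: logical file name -> list of (physical replica, cost).
# All entries below are placeholders; in a real DSS the catalogue is a
# service and the cost comes from monitoring data such as bandwidth and load.
catalogue = {
    "lfn:/experiment/run42/data.root": [
        ("srm://storage.site-a.org/data.root", 0.8),
        ("srm://storage.site-b.org/data.root", 0.3),
        ("srm://storage.site-c.org/data.root", 0.5),
    ],
}

def select_replica(lfn):
    """Return the physical replica with the lowest estimated transfer cost."""
    replicas = catalogue.get(lfn)
    if not replicas:
        raise LookupError(f"no replicas registered for {lfn}")
    return min(replicas, key=lambda replica: replica[1])[0]

print(select_replica("lfn:/experiment/run42/data.root"))
```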
Chapter 4
Resource Allocation in Grids
Resource allocation is considered to be one of the most important areas in the design of computational grids. The task is to hide the complexity of the underlying system and select the best possible resource from the available pool of resources. This requires information from e.g. information and cataloging components in the system, and sometimes also information directly from the resources (depending on the architecture of the system).
In grid systems, the process of resource allocation and the actual task submission to the selected resource are normally two separate processes. The grid resource broker, also known as the high-level or meta-scheduler, selects a resource on the basis of the available information. The local resource management system is then responsible for submitting jobs to the underlying cluster. Different strategies for resource brokering in the meta-scheduler have been chosen in different middlewares like gLite [13], Condor [72], ARC [46] and Nimrod-G [36].
In many cases it has been observed that the brokering component is a scalability bottleneck and a single point of failure within the whole grid system. Here, a tight connection between different components in the system affects the overall performance while a too loosely coupled approach affects the resource selection criteria. A lack of well defined responsibilities of the components can increase the communication overhead.
4.1 Models for Resource Allocation
The non-trivial issue of selecting the best resources for a given set of tasks has been addressed with many different approaches. Recognizing the complexity of the task, an abstract level approach has been adopted by defining taxonomies. In [65], this approach has been studied in detail in the context of computational grids.
A number of grid middlewares are currently using different models for resource allocation [39]. For the meta-level scheduler, a centralized or a distributed brokering model can be used. The centralized model can provide a complete view of the overall load on the system, hence a more effective distribution of the load on the available resources can be achieved. gLite and Condor are examples of middlewares using the centralized resource allocation model. In the distributed model, each user has a separate broker (a user-centric brokering model). The ARC middleware uses an implementation of the distributed model. Agent based approaches are also employed for efficient and reliable resource allocation. Here, agents are software components considered to be intelligent and autonomous in nature, with the capability of self-healing and of taking decisions. Examples of systems using agents for resource allocation are given in [54, 85, 86].
These basic models have been further developed into models using market-oriented resource allocation [66, 64]. Here, the concept is to create a virtual market in which the resources (computational or storage) are considered as commodities. Resources can be purchased from the resource providers.
The prices vary according to the resource demand, as in a real market.
Nimrod-G and Tycoon use market-based strategies for resource allocation.
For mission critical applications, the result is needed within a certain time frame. Finding a resource which can fulfill the job requirements and also provide the result within a given time adds another level of complexity to the allocation model. To address such requirements the concept of advance reservations [48] has emerged. An advance reservation allows for determining the job's starting time in advance.
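As a minimal illustration of the brokering step common to the models above, the sketch below filters a set of advertised resources on static requirements and then ranks the remaining candidates on dynamic load information. The resource records and the ranking rule are invented for the example; production brokers such as those in gLite or ARC use far richer information and policies.

```python
# Each record mixes static attributes (architecture, installed runtime
# environments) with dynamic ones (free CPUs, queued jobs); all values
# here are invented for the example.
resources = [
    {"name": "ce1.site-a.org", "arch": "x86_64", "envs": {"R-2.10"},
     "free_cpus": 16, "queued_jobs": 120},
    {"name": "ce2.site-b.org", "arch": "x86_64", "envs": {"R-2.10", "ROOT"},
     "free_cpus": 4, "queued_jobs": 3},
    {"name": "ce3.site-c.org", "arch": "ppc64", "envs": set(),
     "free_cpus": 64, "queued_jobs": 0},
]

job = {"arch": "x86_64", "needs_env": "R-2.10"}

def matches(resource, job):
    # Static pre-filtering: architecture and required runtime environment.
    return resource["arch"] == job["arch"] and job["needs_env"] in resource["envs"]

def rank(resource):
    # Simple dynamic ranking: prefer free CPUs, penalize long queues.
    return resource["free_cpus"] - resource["queued_jobs"]

candidates = [r for r in resources if matches(r, job)]
best = max(candidates, key=rank)
print("selected resource:", best["name"])
```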
Chapter 5
Summary of Papers in the Thesis
5.1 Paper-I
This paper presents a reliable, robust and user-friendly environment for managing jobs on grids. The presented architecture is based on the integration of the LUNARC Application Portal (LAP) and the Grid Job Management Framework (GJMF). LAP provides a user-friendly environment for handling applications, whereas GJMF contributes reliable, robust and middleware independent job management. A Java-based component, the Portal Integration Extensions (PIE), is used as an integration bridge between LAP and GJMF. The scalability and flexibility of the integration architecture means that a single LAP can make use of multiple GJMFs, while multiple LAPs can make use of the same GJMF. Similarly, a single GJMF can make use of multiple middleware installations concurrently, and multiple GJMFs can utilize the same middleware installation. The components of the architecture are designed to function non-intrusively for seamless integration in production grid environments. The architecture also allows for backward compatibility. Using the proposed model, and with the help of applications from different research fields, the results presented show that such an application environment can enhance the progress of research in the application fields.
5.2 Paper-II
This paper describes a Grid-enabled problem solving environment (PSE) for Quantitative Trait Loci (QTL) analysis, which allows end-users to operate within familiar settings and provides transparent access to computational grid resources. The computational environment is targeted towards end-users with limited experience of grid computing, and supports workflows where small tasks are performed locally on PSE hosts, while larger, more computationally intensive tasks are allocated to grid resources. In this model, the grid computations are scheduled asynchronously. The architecture integrates the R statistical environment with the computational power of grid environments. By exploiting the GJMF within this architecture, the PSE is decoupled from any specific grid middleware, and reliable access to the grid resources through concurrent use of multiple grid middlewares is provided.
5.3 Paper-III
In this paper we present the architecture of a self-healing, grid-aware and resilient storage cloud called Chelonia. This storage system is based on a Service Oriented Architecture (SOA) in which each service is responsible for a well-defined task. Chelonia consists of five core services. The Bartender, which is a stateless service, provides a high level interface for user interaction. The Librarian is a stateless service which works as a catalog service.
The metadata store, A-Hash, follows a master-client model and provides metadata replication amongst the available A-Hashes. The Shepherd runs on the storage node and is responsible for checking all the available files and sending reports to the Librarian. The Hopi service provides the actual transfer service. The security in Chelonia is divided into three levels: inter-service, transfer and high-level security. By using a gateway module, Chelonia also provides access to third party storage systems. The first proof-of-concept test setup described in this paper shows the self-healing and resilient capabilities of the Chelonia cloud.
5.4 Paper-IV
This paper provides extensive performance and stability tests using different deployments of the Chelonia cloud. For example, the depth test shows the average amount of time taken by the system to create and list collections, and the width test illustrates the average response time when a collection contains 1000 entries. The performance of the system while multiple clients are interacting simultaneously is also examined, and the difference in performance between using a centralized and a distributed A-Hash is studied. It is expected that some of the storage nodes will go offline and later rejoin the system. The file replication test describes how the system identifies that a Shepherd is offline and replicates its files to the other available Shepherds to achieve high availability of the data. The stability test depicts the memory and CPU utilization of the Chelonia services over a full week while clients were regularly interacting with the system.
5.5 Paper-V
This technical report reviews different models for resource allocation and provides a brief comparison which highlights the advantages and disadvantages of each. Also, an architecture for improving the resource allocation model in the ARC middleware is presented. It is based on dividing the information related to the resources into static and dynamic categories.
The proposed modifications to the ARC resource allocation allow pre-filtering of the available resources on the basis of static information (accessed via the information system), after which individual resources are contacted for dynamic information. This approach reduces the number of connections needed to retrieve information from the individual resources, which is the most time-consuming part.
The results show that the proposed model significantly improves the resource allocation process.
Acknowledgment
First of all, I would like to express my deep gratitude towards my supervisor Sverker Holmgren for his valuable support, guidance and encouragement.
Our regular discussions and frequent email exchanges have given me the confidence to complete this work. I am also very grateful to my second supervisor Mattias Ellert for his excellent technical input. Since 2008 I have been working with the team of ARC developers, and I am really thankful for their thought-provoking discussions. A very special thanks to the Chelonia team, Bjarte Mohn, Zsombor Nagy and Jon K. Nilsen, for such great team work.
I would also like to acknowledge my other co-authors P–O. Östberg, Mahen Jayawardena, Carl Nettelblad, Jonas Lindemann, David Cameron and Erik Elmroth for sharing valuable ideas. I am also thankful to Tore Sundqvist, Jukka Komminaho and Carina Lindgren for always being very helpful.
On a personal level, I would like to thank my family, especially my parents, my wife Sana and my brother Imran Toor, for always being very supportive.
The work presented in this thesis is co-funded by the Innovative Tools and Services for NorduGrid (NGIn) project within the Nordunet3 program and by Uppsala University, Sweden.
Bibliography
[1] Amazon ec2 : http://aws.amazon.com/ec2/ [12th jan 2010].
[2] Amazon elastic mapreduce : http://aws.amazon.com/elasticmapreduce/ [12th jan 2010].
[3] Amazon rds : http://aws.amazon.com/rds/ [12th jan 2010].
[4] Amazon s3 : http://aws.amazon.com/s3/ [12th jan 2010].
[5] Amazon simpledb : http://aws.amazon.com/simpledb/ [12th jan 2010].
[6] Amazon web services: http://aws.amazon.com/what-is-aws/.
[7] Arc monitor: http://www.nordugrid.org/monitor [4th jan 2010].
[8] e-science: http://www.e-science.stfc.ac.uk/guide/index.html [14th jan 2010].
[9] Egee: http://project.eu-egee.org/index.php?id=417 [4th jan 2010].
[10] Execution mangement: http://www.globus.org/toolkit/docs/4.0/execution/
[13th jan 2010].
[11] extensible markup language (xml): http://www.w3.org/xml/ [12 jan 2010].
[12] Ftm: https://twiki.cern.ch/twiki/bin/view/lcg/ftsftm [15th jan 2010].
[13] glite: http://glite.web.cern.ch/glite/.
[14] Global grid forum: http://www.ogf.org [12 jan 2010].
[15] Globus alliance: http://www.globus.org/alliance/.
[16] gridsphere portal framework: http://www.gridsphere.org/gridsphere/gridsphere [12th jan 2010].
[17] Hadoop: http://hadoop.apache.org/index.html [7th jan 2010].
[18] http://www.globus.org/toolkit/docs/4.0/data/key/ [13th jan 2010].
[19] Knowarc project: http://www.knowarc.eu.
[20] Monitoring and discovery service: http://www.globus.org/toolkit/mds/
[13th jan 2010].
[21] Nordu grid: http://www.nordugrid.org/.
[22] Nordugrid: http://www.nordugrid.org.
[23] Nordugrid papers: http://www.nordugrid.org/papers.html.
[24] Oasis reference model for soa: http://www.oasis- open.org/committees/download.php/16587/wd-soa-rm-cd1ed.pdf [11th jan 2010].
[25] Open grid service architecture (ogsa):
http://www.gridforum.org/documents/gwd-i-e/gfd-i.030.pdf [12 jan 2010].
[26] Particle physics data grid: http://www.ppdg.net/ [7th jan 2010].
[27] Pegasus : http://pegasus.isi.edu/index.php [12th jan 2010].
[28] Security component: http://www.globus.org/toolkit/docs/4.0/security/key- index.html [14th jan 2010].
[29] Simple object access protocol (soap):
http://www.w3.org/tr/2000/note-soap-20000508/ [12 jan 2010].
[30] Tahoe: http://allmydata.org/~warner/pycon-tahoe.html [6th jan 2010].
[31] Triana : http://www.trianacode.org/index.html [12th jan 2010].
[32] Twenty experts define cloud computing: http://cloudcomputing.sys- con.com/node/612375 [5th jan 2010].
[33] Unicore: http://www.unicore.eu/.
[34] Web service description language (wsdl): http://www.w3.org/tr/wsdl [12 jan 2010].
[35] A. Falzone, P. Kunszt, G. Lo Re, A. Pulvirenti, A. Andronico, R. Barbera and A. Rodolico. Genius: a simple and easy way to access computational and data grids. Future Generation Computer Systems, 19:805–813.
[36] David Abramson, Rajkumar Buyya, and Jonathan Giddy. A compu- tational economy for grid computing and its implementation in the nimrod-g resource broker. Future Gener. Comput. Syst., 18(8):1061–
1074, 2002.
[37] Andy Cooke, Alasdair J.G. Gray, Lisha Ma, Werner Nutt, James Magowan, Manfred Oevers, Paul Taylor, Rob Byrom, Laurence Field, Steve Hicks, Jason Leake, Manish Soni, Antony Wilson, Roney Cordenonsi, Linda Cornwall, Abdeslem Djaoui, Steve Fisher, Norbert Podhorszki, Brian Coghlan, Stuart Kenny and David O'Callaghan. R-GMA: An information integration system for grid monitoring. volume 2888/2003 of Lecture Notes in Computer Science, pages 462–481, Berlin / Heidelberg, 2003. Springer.
[38] Carl Kesselman Charles Salisbury Ann Chervenak, Ian Foster and Steven Tuecke. The data grid: Towards an architecture for the dis- tributed management and analysis of large scientific datasets. Journal of Network and Computer Applications, Volume 23, Issue 3:187–200, 2000.
[39] S. DiNucci D Buyya, R. Chapin. Architectural models for resource man- agement in the grid. LECTURE NOTES IN COMPUTER SCIENCE, Volume 1971/2000:18–35, 1999.
[40] D. Cameron, M. Ellert, J. Jönemo, A. Konstantinov, I. Marton, B. Mohn, J. K. Nilsen, M. Nordén, W. Qiang, G. Rőczei, F. Szalai, and A. Wäänänen. The Hosting Environment of the Advanced Resource Connector middleware. NorduGrid. NORDUGRID-TECH-19.
[41] Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Debo- rah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. Bigtable: A distributed storage system for struc- tured data. ACM Trans. Comput. Syst., 26(2):1–26, 2008.
[42] Róbert Lovas, Csaba Németh, Gábor Dózsa and Péter Kacsuk. The p-grade grid portal. Volume 3044/2004:10–19.
[43] Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubra- manian, Peter Vosshall, and Werner Vogels. Dynamo: amazon’s highly available key-value store. In SOSP ’07: Proceedings of twenty-first ACM SIGOPS symposium on Operating systems principles, pages 205–220, New York, NY, USA, 2007. ACM.
[44] Daniel Mallmann Roger Menday Mathilde Romberg Volker Sander Bernd Schuller Philipp Wieder Dirk Breuer, Dietmar Erwin. Scientific computing with unicore. In NICSymposium2004, Proceedings, pages 429–440, 2003.
[45] Rubén S. Montero, Eduardo Huedo and Ignacio M. Llorente. The gridway framework for adaptive scheduling and execution on grids. Scalable Computing - Practice and Experience 6 (3): 1-8, 2005.
[46] M. Ellert, M. Grønager, A. Konstantinov, B. Kónya, J. Lindemann, I. Livenson, J. L. Nielsen, M. Niinimäki, O. Smirnova, and A. Wäänänen. Advanced resource connector middleware for lightweight computational grids. Future Gener. Comput. Syst., 23(2):219–240, 2007.
[47] E. Elmroth, P. Gardfjäll, A. Norberg, J. Tordsson, and P-O. Östberg. Designing general, composable, and middleware-independent grid infrastructure tools for multi-tiered job management. In T. Priol and M. Vaneschi, editors, Towards Next Generation Grids, pages 175–184.
Springer-Verlag, 2007.
[48] Erik Elmroth and Johan Tordsson. Grid resource brokering algorithms enabling advance reservations and resource selection based on perfor- mance predictions. Future Gener. Comput. Syst., 24(6):585–593, 2008.
[49] I. Foster. The grid: A new infrastructure for 21st century science.
Physics Today, pages 42–47, 2002.
[50] I. Foster, Yong Zhao, I. Raicu, and S. Lu. Cloud computing and grid computing 360-degree compared. In Grid Computing Environments Workshop, 2008. GCE ’08, pages 1–10, Nov. 2008.
[51] Ian Foster and Carl Kesselman. The globus toolkit. pages 259–278, 1999.
[52] Ian Foster, Carl Kesselman, and Steven Tuecke. The anatomy of the grid: Enabling scalable virtual organizations. Int. J. High Perform.
Comput. Appl., 15(3):200–222, 2001.
[53] Geoffrey Fox and Dennis Gannon. Workflow in grid systems. Concur- rency and Computation: Practice and Experience, 2006:1009–1019.
[54] Pascal Dugenie Stefano A. Cerri Frederic Duvert, Clement Jonquet.
Agent-grid integration ontology. Volume 4277/2006(1):136–146, 2005.
[55] Patrick Fuhrmann and Volker Gulzow. dcache, storage system for the future. In Euro-Par 2006 Parallel Processing, volume Volume 4128/2006 of Lecture Notes in Computer Science, pages 1106–1113.
Springer Berlin / Heidelberg, 2006.
[56] Ivan Marton Gabor Roczei, Gabor Szigeti. ARC peer-to-peer informa- tion system. NorduGrid. NORDUGRID-TECH-21.
[57] Fabrizio Gagliardi. The egee european grid infrastructure project. vol- ume 3402/2005 of Lecture Notes in Computer Science, pages 194–203, Berlin / Heidelberg, 2005. Springer.
[58] S. Garfinkel. Commodity grid computing with amazon's s3 and ec2.
[59] Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. The google file system. SIGOPS Oper. Syst. Rev., 37(5):29–43, 2003.
[60] Felix Hupfeld, Toni Cortes, Björn Kolbeck, Jan Stender, Erich Focht, Matthias Hess, Jesus Malo, Jonathan Marti, and Eugenio Cesario. The xtreemfs architecture—a case for object-based file systems in grids.
Concurr. Comput. : Pract. Exper., 20(17):2049–2060, 2008.
[61] Carl Kesselman Ian Foster. Globus: a metacomputing infrastructure toolkit. International Journal of High Performance Computing Appli- cations, 11, No. 2:115–128.
[62] Seidel E.; Allen G.; Merzky A.; Nabrzyski J. Gridlab: a grid application toolkit and testbed.
[63] Amarnath Gupta Mark James Bertram Ludaescher Maryann E. Mar- tone Philip M. Papadopoulos Steven T. Peltier Arcot Rajasekar Si- mone Santini Ilya N. Zaslavsky Mark H. Ellisman Jeffrey S. Grethe, Chaitan Baru. Biomedical informatics research network: Building a national collaboratory to hasten the derivation of new understanding and treatment of disease. In From Grid to Healthgrid: Proceedings of Healthgrid 2005, volume Volume 112/2005, pages 100–109. IOS Press, 2005.
[64] Ying Li Feng Hong Ming Gao Jiadi Yu, Minglu Li. A framework for price-based resource allocation on the grid. Volume 3320/2005(1):341–
344, 2005.
[65] R. Buyya K. Krauter and M. Maheswaran. A taxonomy and survey of grid resource management systems for distributed computing. SOFT- WARE PRACTICE AND EXPERIENCE, 32:135–164, 2002.
[66] Bernardo A. Huberman Kevin Lai and Leslie Fine. Tycoon: A Dis- tributed Market-based Resource Allocation System. Technical Report arXiv:cs.DC/0404013, HP Labs, Palo Alto, CA, USA, April 2004.
[67] Rohit Khare and Richard N. Taylor. Extending the representational state transfer (rest) architectural style for decentralized systems. In ICSE ’04: Proceedings of the 26th International Conference on Soft- ware Engineering, pages 428–437, Washington, DC, USA, 2004. IEEE Computer Society.
[68] Malcolm Atkinson Neil Chue Hong Tom Sugden Alastair Hume Mike Jackson Amrey Krause Konstantinos Karasavvas, Mario Antonioletti and Charaka Palansuriya. Introduction to ogsa-dai services. Volume 3458/2005:1–12.
[69] A. Konstantinov. The ARC Computational Job Management Module - A-REX. NorduGrid. NORDUGRID-TECH-14.
[70] E Laure, F Hemmer, F Prelz, S Beco, S Fisher, M Livny, L Guy, M Bar- roso, P Buncic, Peter Z Kunszt, A Di Meglio, A Aimar, A Edlund, D Groep, F Pacini, M Sgaravatto, and O Mulmo. Middleware for the next generation grid infrastructure. (EGEE-PUB-2004-002):4 p, 2004.
[71] Jonas Lindemann and Göran Sandberg. An extendable grid application portal. Volume 3470/2005:1012–1021.
[72] Michael Litzkow, Miron Livny, and Matthew Mutka. Condor - a hunter of idle workstations. In Proceedings of the 8th International Conference of Distributed Computing Systems, June 1988.
[73] Cecchi Marco, Capannini Fabio, Dorigo Alvise, Ghiselli Antonia, Gia- comini Francesco, Maraschini Alessandro, Marzolla Moreno, Monforte Salvatore, Pacini Fabrizio, Petronzio Luca, and Prelz Francesco. The glite workload management system. In GPC ’09: Proceedings of the 4th International Conference on Advances in Grid and Pervasive Com- puting, pages 256–268, Berlin, Heidelberg, 2009. Springer-Verlag.
[74] Alain Roy Jeff Weberand Kent Wenger Peter Couvares, Tevfik Kosar.
Workflow management in condor. pages 357–375.
[75] Mathilde Romberg. The unicore architecture: Seamless access to dis- tributed resources. High-Performance Distributed Computing, Interna- tional Symposium on, 0:44, 1999.
[76] A. Negro S. Fiore, S. Vadacca and G. Aloisio. Data issues at the euro- mediterranean centre for climate change. pages 23–35.
[77] Latha Srinivasan and Jem Treadwell. An overview of service- oriented architecture, web services and grid computing:
http://devresource.hp.com/drc/technical papers/grid soa/soa-grid- hp.pdf [11th jan 2010].
[78] Graeme A Stewart, David Cameron, Greig A Cowan, and Gavin Mc- Cance. Storage and data management in egee. In ACSW ’07: Pro- ceedings of the fifth Australasian symposium on ACSW frontiers, pages 69–77, Darlinghurst, Australia, Australia, 2007. Australian Computer Society, Inc.
[79] Ariel Goyeneche Gabor Terstyanszky Stephen Winter Thierry Delaitre, Tamas Kiss and Peter Kacsuk. Gemlca: Running legacy code applica- tions as grid services. Journal of Grid Computing, pages 75–90.
[80] Justin Ferris Darren Marvin Martin Senger Mark Greenwood Tim Carver Kevin Glover Matthew R. Pocock Anil Wipat Tom Oinn, Matthew Addis and Peter Li. Taverna: a tool for the composition and enactment of bioinformatics workflows. 2004.
[81] Luis M. Vaquero, Luis Rodero-Merino, Juan Caceres, and Maik Lind- ner. A break in the clouds: towards a cloud definition. SIGCOMM Comput. Commun. Rev., 39(1):50–55, 2009.
[82] Srikumar Venugopal, Rajkumar Buyya, and Kotagiri Ramamohanarao.
A taxonomy of data grids for distributed data sharing, management, and processing. ACM Comput. Surv., 38(1):3, 2006.
[83] Lizhe Wang, Jie Tao, M. Kunze, A.C. Castellanos, D. Kramer, and W. Karl. Scientific cloud computing: Early definition and experience.
In High Performance Computing and Communications, 2008. HPCC
’08. 10th IEEE International Conference on, pages 825–830, Sept. 2008.
[84] Andrea Weise, Mike Wan, Wayne Schroeder, and Adil Hasan. Manag- ing groups of files in a rule oriented data management system (irods).
In ICCS ’08: Proceedings of the 8th international conference on Com- putational Science, Part III, pages 321–330, Berlin, Heidelberg, 2008.
Springer-Verlag.
[85] Maria Ganzha, Maciej Gawinecki, Ivan Lirkov, Svetozar Margenov, Wojciech Kuranowski and Marcin Paprzycki. Agents as resource brokers in grids – forming agent teams. Future Gener. Comput. Syst., Volume 4818/2008(1):489–491, 2005.
[86] Maria Ganzha, Maciej Gawinecki, Ivan Lirkov, Svetozar Margenov, Wojciech Kuranowski and Marcin Paprzycki. Efficient matchmaking in an agent-based grid resource brokering system. pages 327–335, 2006.
[87] Jia Yu and Rajkumar Buyya. A taxonomy of scientific workflow systems for grid computing. SIGMOD Rec., 34(3):44–49, 2005.
Recent licentiate theses from the Department of Information Technology
2010-002 Carl Nettelblad: Using Markov Models and a Stochastic Lipschitz Condition for Genetic Analyses
2010-001 Anna Nissen: Absorbing Boundary Techniques for the Time-dependent Schrödinger Equation
2009-005 Martin Kronbichler: Numerical Methods for the Navier-Stokes Equations Applied to Turbulent Flow and to Multi-Phase Flow
2009-004 Katharina Kormann: Numerical Methods for Quantum Molecular Dynamics
2009-003 Marta Lárusdóttir: Listen to Your Users - The Effect of Usability Evaluation on Software Development Practice
2009-002 Elina Eriksson: Making Sense of Usability - Organizational Change and Sensemaking when Introducing User-Centred Systems Design in Public Authorities
2009-001 Joakim Eriksson: Detailed Simulation of Heterogeneous Wireless Sensor Networks
2008-003 Andreas Hellander: Numerical Simulation of Well Stirred Biochemical Reaction Networks Governed by the Master Equation
2008-002 Ioana Rodhe: Query Authentication and Data Confidentiality in Wireless Sensor Networks
2008-001 Mattias Wiggberg: Unwinding Processes in Computer Science Student Projects
2007-006 Björn Halvarsson: Interaction Analysis and Control of Bioreactors for Nitrogen Removal
2007-005 Mahen Jayawardena: Parallel Algorithms and Implementations for Genetic Analysis of Quantitative Traits