Reactive Async: Safety and efficiency of new abstractions for reactive, asynchronous programming


Degree Project in Computer Science and Engineering, Second Cycle, 30 Credits

Stockholm, Sweden 2016

Reactive Async

Safety and efficiency of new abstractions for reactive, asynchronous programming

SIMON GERIES

KTH ROYAL INSTITUTE OF TECHNOLOGY


Reactive Async

Safety and efficiency of new abstractions for reactive, asynchronous programming

SIMON GERIES

Master’s Thesis at CSC

Supervisor: Philipp Haller

Examiner: Mads Dam


Abstract

Futures and Promises have become an essential part of asynchronous programming. However, they have important limitations: they allow at most one result, and they provide no support for cyclic dependencies, which instead result in deadlocks.

Reactive Async is a prototype of an event-based asynchronous parallel programming model that extends the functionality of Futures and Promises, supporting refinement of results according to an application-specific lattice. Furthermore, it allows for completion of cyclic dependencies through quiescence detection of a thread pool.

The thesis demonstrates the practical applicability of Reactive Async by applying the model to a large static analysis framework, OPAL. Benchmarks comparing Reactive Async with Futures and Promises show that the flexibility of the model comes at a cost in efficiency.


Referat

Reactive Asynchronicity: Safety and efficiency of new abstractions for reactive, asynchronous programming

Futures and Promises have become an important part of asynchronous programming. However, they have important limitations, among them that at most one result may be written, and that there is no support for cyclic dependencies, which instead result in deadlocks.

Reactive Async is a prototype of an event-based asynchronous parallel programming model that extends the functionality of Futures and Promises to support refinement of results according to an application-specific lattice. In addition, the model enables completion of cyclic dependencies by detecting when a thread pool has no unfinished tasks.

The thesis shows that Reactive Async is practically applicable by applying the model to a large framework for static analyses. Performance tests comparing Reactive Async with Futures and Promises show that the flexibility of the model comes at the cost of efficiency.


Contents

1 Introduction
  1.1 Background
  1.2 Goal
  1.3 Why Reactive Async?
  1.4 Scala syntax
  1.5 Ethics
  1.6 Outline

2 Programming with Reactive Async
  2.1 Write operations
  2.2 Dependencies
  2.3 Callbacks
  2.4 Thread pool and quiescence

3 Implementation
  3.1 Cell
    3.1.1 Callbacks
    3.1.2 Dependencies
    3.1.3 State
  3.2 HandlerPool

4 Case study
  4.1 Purity analysis
  4.2 Immutability analysis
  4.3 Results

5 Performance evaluation
  5.1 Static analyses
  5.2 Micro Benchmarks

6 Related Work
  6.1 FlowPools
  6.2 Habanero-Scala

7 Conclusion and future work

Bibliography


Chapter 1

Introduction

No longer can we rely on achieving faster programs simply by increasing the processing power of a computer. Most devices are shipped with multiple processing cores, making parallelism and distribution the way to increase performance. By using multi-threaded programming, one can utilize all cores in order to make an application or system run faster. However, building such systems is not a trivial task, due to common hazards encountered in parallel programming like race conditions and deadlocks. Because multiple threads run in parallel on different cores, the order in which threads finish depends on the process scheduler. This can lead to hard-to-reproduce bugs, putting a lot of responsibility on the programmer. Moreover, maintaining software systems is costly for companies and organizations. By reducing the likelihood of introducing bugs in a program, companies can reallocate their resources to developing the system, instead of focusing heavily on bug hunting, reducing the maintenance cost. Reducing the number of bugs also adds to the safety of a program, making it harder to use applications deployed in society in unintended ways by exploiting bugs. All of this increases the need for new abstractions for thread handling that can reduce, or even remove, the possibility of a programmer writing an application with incorrect behavior due to concurrency.

In this thesis, a prototype of a new parallel programming model called Reactive Async is introduced. It is implemented using the Scala programming language and is inspired by the LVars deterministic-by-construction parallel programming model [12]. Moreover, the model is an extension of the functionality provided by Futures [6] and Promises [14], where it maintains the expressivity of the Scala versions [5].

A case study was made showing that applying Reactive Async to a Scala-based static analysis from the OPAL static analysis framework [3] could reduce the code size significantly, while still performing slightly better. However, when comparing Reactive Async to Futures and Promises in Scala by applying similar operations, we observe a significant performance loss due to the additional overhead. Finally, in terms of memory usage, Futures and Promises are significantly more lightweight than Reactive Async.


1.1 Background

One risk factor when developing multi-threaded applications is mutability, which is a key concept in object-oriented (OO) and imperative languages such as Java, C++ and C#. Mutating a shared variable, that is, a variable that can be accessed by multiple threads at the same time, has to be done in a mutually exclusive fashion. By using mutual exclusion with locks, one can prevent unexpected thread interleavings from causing bugs, at the cost of the parallelism of an application. Therefore, one strives for an asynchronous parallel programming model, that is, a lock-free model containing no thread-blocking parts in the code.

Sometimes, limitations can be a good thing, such as forcing or encouraging a programmer to use immutable data [10]. Another limitation, used by pure data-parallel languages, is to force concurrent tasks to independently produce new results [9]. Though these strategies are mostly adopted by functional languages, an increasing number of OO and imperative languages are blending the two paradigms together, such as Java 8 and Scala, giving these languages some benefits of both worlds.

Although shared variables potentially introduce problems, some algorithms are more naturally written using them. In those cases, one can use atomic operations such as compare-and-swap (CAS), which guarantees that changes to a variable are made by one thread at a time. A CAS operation usually takes two parameters: the expected current value of a variable and the new value that is to be set. If the expected value matches the current value of the variable, the current value is replaced with the new one; otherwise the operation fails to set the new value and returns false.
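As an illustration, the JVM's java.util.concurrent.atomic classes (also usable from Scala) expose CAS directly. The sketch below shows the succeed-or-fail behavior just described:

```scala
import java.util.concurrent.atomic.AtomicInteger

object CasDemo {
  def main(args: Array[String]): Unit = {
    val counter = new AtomicInteger(0)
    // compareAndSet(expected, update) succeeds only if the current value
    // equals the expected value.
    assert(counter.compareAndSet(0, 5))  // current is 0, so the value becomes 5
    assert(!counter.compareAndSet(0, 7)) // fails: current is 5, not the expected 0
    assert(counter.get() == 5)
  }
}
```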

However, using CAS operations does not guarantee determinism in a concurrent program. Threads may still be scheduled to execute in different orders on different runs of an application. Guaranteed determinism, that is, that an execution succeeds on every run with the same result, is a rarely pursued goal in concurrent programming due to the difficulty of achieving it. Instead, deterministic parallel programming models, such as the LVars model [11], and deterministic data structures, such as FlowPools [17], use a quasi-determinism definition which states: if an execution fails for some execution schedule, then it fails for any execution schedule; otherwise it always terminates with the same final result.

There are many programming models that use more event-based approaches [12, 15, 5] to communicate results in different forms. One event-based approach is to allow attaching defined callback functions to a construct, that are triggered and run asynchronously once some intermediate or final result has been computed [12].

Another is to allow subscription and notification communication, where a dependent process running on a thread can subscribe to a process of another thread. When a result is received, a notification message is sent to all subscribers [15].

Using event-based approaches based on callback functions can have multiple advantages. Utilizing such a model encourages running task-unique threads, which makes it safe to run asynchronously. Although chopping a program into threads that are task-unique is not always possible, an event-based approach raises the abstraction level, making it more explicit what each task depends on.


Some of the most commonly used models are Futures [6] and Promises [14].

In Scala, a future is a reference to a future value and a promise is a one time writable placeholder for a value. The result of a future can be determined by an asynchronously running function. These models are event-based, where one can define callback functions to run once a future or promise has a result. However, Futures and Promises have some limitations, such as allowing at most one write.

Consider a case where a value is being computed, and at one point in the computation we have a preliminary result we want to write. Later in the computation, we obtain a new, better result with which we want to update the previous one. Refining results in such a way is not possible with Futures and Promises because of the one-time-write limitation. One possible workaround is to create an array of futures, where each future can hold an intermediate result. Though this sounds feasible, the number of intermediate results needed cannot always be determined beforehand. Another limitation is that futures do not support resolving cyclic dependencies; such dependencies result in a deadlock.
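The one-time-write limitation is directly observable in Scala's standard library: completing a Promise a second time is rejected. A minimal sketch:

```scala
import scala.concurrent.Promise
import scala.util.Success

object OneWriteDemo {
  def main(args: Array[String]): Unit = {
    val p = Promise[Int]()
    p.success(1)              // the single permitted write
    assert(!p.trySuccess(2))  // a refinement attempt is rejected
    // Calling p.success(2) here would throw an IllegalStateException instead.
    assert(p.future.value == Some(Success(1)))
  }
}
```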

It is very important that program executions are efficient on modern multi-core processors. The energy consumption of data centers has been shown to have an environmental impact [4]. Therefore, it is essential to utilize the capacity of data centers as efficiently as possible. Given this, it is important to evaluate the performance and efficiency of the implementation of Reactive Async, which is done in chapter 5.

1.2 Goal

The aim of this thesis is to implement a prototype parallel programming model, namely Reactive Async, where the objective of the model is to be an extension of Futures and Promises, both in functionality and expressivity. For the extended functionality, it should support dependency handling, allowing for cyclic dependency resolution. Also, it should support refinement of results. The implementation of the model is to include the following key properties:

• Performance. Reactive Async should have reasonable performance, com- pared to Futures and Promises for small tasks, measured in execution time.

• Determinism. Reactive Async should include some determinism properties by limiting write operations to be monotonic (see chapter 2). However, no claims are made that the model itself is deterministic.

• Practicality. Reactive Async should be applicable to real scenarios and tasks, including large applications that make heavy use of the model.


1.3 Why Reactive Async?

There are already many usable parallel programming models, and all have their advantages and disadvantages. A model usually tries to tackle some specific problem or use case to ease the process of creating a concurrent application. In addition, some try to reduce the possibility of common concurrency hazards occurring, all while still keeping good performance. Reactive Async also attempts to tackle these issues. The following shows what characterizes Reactive Async.

An asynchronous model Reactive Async is an asynchronous parallel programming model, that is, it is completely lock-free. No thread-blocking is used in the model to synchronize two threads, so synchronization does not reduce the parallelism of the model. Instead, Reactive Async uses CAS operations when a state changes.

Builds on Futures and Promises Because Futures and Promises are already widely used and effective, Reactive Async builds on their functionality. By preserving many of the key features of Futures and Promises, one can use Reactive Async the same way one would use Futures and Promises.

Has internal dependency control Concurrent programming has many use cases where one thread's result depends on the result of another computing thread (sections 4.1 and 4.2 show two implemented examples). An example is an application applying the producer-consumer concurrency model, where several threads produce data and other threads use, or consume, that data in some way. Dependency handling is supported in Reactive Async in the form of events, to ease the use of thread dependencies while keeping the model asynchronous. Moreover, the dependency handling allows for detection of cyclic dependencies, which is used to prevent deadlocks and incomplete results.

Refinement of results Futures and Promises only allow writing once, which can be a troublesome limitation for some use cases. Refinement of results is supported in Reactive Async by allowing shared variables to be updated with some limitations, which are explained in chapter 2 (see section 4.2 for use cases).

Determinism properties Reactive Async contains some determinism properties by allowing for monotonic updates through lattice-based operations (see chapter 2).

1.4 Scala syntax

The following gives a brief description of how the Scala syntax is used in this thesis, in order to make it more understandable for readers not familiar with the syntax.


val defines a final value

var defines a variable

def defines a method

Unit corresponds to Java's void

trait is similar to a Java interface

object defines a new singleton object

sealed trait is used as an enum type

case object is used as enum values

Finally, a method with the name apply is different from normal methods, and is easier described with an example:

object Print {
  def apply(s: String) = println(s)
}

Print("Hello")

Print("Hello") invokes the apply method of the object Print with the argument "Hello".

1.5 Ethics

Ethical considerations do not apply to this project, since no experiments involv- ing humans or animals were performed.

1.6 Outline

The next chapter describes the properties of Reactive Async and how it works from a programmer’s perspective. Chapter 3 goes into implementation details of the model.

Chapter 4 talks about how Reactive Async was used to implement two Scala-based static analyses from the OPAL static analysis framework [3]. Chapter 5 shows performance results for the static analyses implemented using Reactive Async, and micro benchmarks, comparing Reactive Async to Futures and Promises. Chapter 6 compares Reactive Async with existing parallel programming models and data structures, showing the similarities and differences between them. Finally, chapter 7 concludes and points in which direction the model needs to move next.


Chapter 2

Programming with Reactive Async

Reactive Async is a prototype implementation of an event-based asynchronous parallel programming model that can be used to create multi-threaded applications.

The model can be decomposed into five parts: Cell, CellCompleter, HandlerPool, Key and Lattice.

A cell is a shared memory location, that is, an object that can be accessed and written to by several threads at the same time. It is also a placeholder for some value, meaning it contains a value similar to how a future contains a value. Writing to a cell is done by using a cell completer, similar to how a promise is used for writing to a future. However, the write operations are limited to being monotonic, where the writes are defined by an application-specific lattice. Furthermore, a cell can be completed, permanently locking it from further changes.

A lattice is a partially ordered set, that is, a set where each element is ranked by some order. For example, a natural number lattice can be defined, where every element is a natural number and the order of the elements is defined by the magnitude of their number (as shown in figure 2.1). Another example is a set lattice, where every element is a set and the order of the elements is defined by set inclusion. Every two elements in a lattice have a unique least upper bound, or join. In Reactive Async, the lattice is represented by the trait Lattice[V], where V is the type of the value in a cell.

trait Lattice[V] {
  def join(current: V, next: V): V
  def empty: V
}

A Lattice[V] is required to define a join operation and an empty element: join takes two elements and returns their least upper bound, and empty defines the smallest element of the lattice.

Example 2.1.

class NaturalNumberLattice extends Lattice[Int] {
  override def join(current: Int, next: Int): Int = {
    if (current < next) next else current
  }
  override def empty: Int = 0
}

Example 2.2.

class SetLattice[V] extends Lattice[Set[V]] {
  override def join(current: Set[V], next: Set[V]): Set[V] = {
    current union next
  }
  override def empty: Set[V] = Set()
}

Examples 2.1 and 2.2 show implementations of the natural number lattice and the set lattice. For the natural number lattice, the join of two numbers is their maximum. Given an instance val nnl = new NaturalNumberLattice, an example of the join operation is nnl.join(2, 5) = 5. The element with the lowest order in the natural number lattice is 0. For the set lattice, the join of two sets is defined by their union. Given an instance val sl = new SetLattice[Int], an example of the join operation is sl.join(Set(1, 2), Set(2, 3)) = Set(1, 2, 3). The element with the lowest order in the set lattice is the empty set.

Figure 2.1: The natural number lattice order, where the element with the lowest order is the bottom element ⊥, and the one with the highest order is the top element ⊤.

Tying Cell together with Lattice, the empty element is the initial value of the cell, and join is used when writing to a cell. A write does not actually mean writing some specific value; instead, it means writing the least upper bound of the current value of the cell and the new value. More formally:

Definition 2.3. A write to a cell with some value next implies writing the result of lattice.join(current, next), where lattice is an instance of a subclass of Lattice[V] and current is the current value of the cell.


If we take the example of a cell that uses the set lattice and contains the current value Set(2, 3), then writing Set(2, 4) to that cell actually results in the cell holding Set(2, 3, 4).
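This join-based write semantics can be checked with a small standalone sketch; the SetLattice here is restated from Example 2.2 (without the Lattice trait) so the snippet runs on its own:

```scala
object JoinWriteDemo {
  // SetLattice restated from Example 2.2 so this sketch is self-contained.
  class SetLattice[V] {
    def join(current: Set[V], next: Set[V]): Set[V] = current union next
    def empty: Set[V] = Set.empty[V]
  }

  def main(args: Array[String]): Unit = {
    val sl = new SetLattice[Int]
    // "Writing" Set(2, 4) to a cell currently holding Set(2, 3)
    // stores the join of the two values:
    val result = sl.join(Set(2, 3), Set(2, 4))
    assert(result == Set(2, 3, 4))
  }
}
```

Because union is commutative and associative, concurrent writes of this kind commute, which is the basis for the determinism properties discussed below.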

The purpose of using a lattice to limit writes to a cell is to reduce the risk of introducing non-determinism to an application. Moreover, in some cases it could also help by explicitly showing non-determinism in an application by throwing an exception.

For example, if we have a lattice that allows for a one time write to a cell (see section 4.1 for an example of such a lattice), it would explicitly show by throwing a LatticeViolationException if an application was to write twice with two different values to the same cell.

trait Key[V] {
  val lattice: Lattice[V]
  ...
}

What lattice a cell receives is determined by the key the cell is given when created (as shown in example 2.4). Key[V] requires all subclasses to hold an instance of type Lattice[V], which is used to assign a cell's initial value and determines, via join, how the monotonic write operations work for a cell. For the NaturalNumberLattice, the key looks like the following:

object NaturalNumberKey extends Key[Int] {
  val lattice = new NaturalNumberLattice
  ...
}

There are two different CellCompleters in Reactive Async: a trait CellCompleter[K <: Key[V], V] and a singleton object CellCompleter. The trait describes the API of a cell completer, taking two type parameters, a subtype of Key[V] and a value type. The singleton object CellCompleter is a factory object used to create both a cell completer and a cell. A cell completer works as a placeholder for the cell it created and is used to operate on that cell.

Example 2.4.

val pool = new HandlerPool
val cellCompleter =
  CellCompleter[NaturalNumberKey.type, Int](pool, NaturalNumberKey)
val cell = cellCompleter.cell

Example 2.4 describes how to create a cell completer and a cell that are restricted to the natural number lattice. As shown, CellCompleter takes two parameters: a HandlerPool object and a key object, which has the same type as specified in the type parameter. HandlerPool is explained in section 2.4.


2.1 Write operations

A cell completer is used to directly apply a write to the cell it holds. There are two methods supported for this:

def putNext(x: V): Unit
def putFinal(x: V): Unit

• putNext(x)

  – Incomplete cells. When writing to an incomplete cell, putNext(x) writes some value x.

  – Complete cells. When writing to a completed cell, the result of putNext(x) depends on the result of the join. The operation fails, that is, throws an exception, if join(current, x) != current; otherwise, if join(current, x) == current, nothing happens.

• putFinal(x)

  – Incomplete cells. When writing to an incomplete cell, putFinal(x) first writes x if it is an element with a higher order than the current value, or if the two values are equal, then finishes by completing the cell. The operation fails if the current value in the cell is an element of higher order than x.

  – Complete cells. When writing to a completed cell, if current != x, then putFinal(x) fails; otherwise nothing happens.

If many threads perform a putNext on one cell at the same time, the cell will always end up with the same value, because each write is a lattice join operation; for the natural number lattice, for example, this is the element with the highest number. However, if one thread performs cc.putNext(5) for a cell completer cc and some other thread performs cc.putFinal(3), then the execution will always fail, no matter which thread succeeds in writing first. This is because either

1. cc.putNext(5) writes to a complete cell containing the value 3,

2. or cc.putFinal(3) writes to an incomplete cell containing the value 5,

which according to the rules above results in failure. However, if cc.putNext(1) is performed by one thread and cc.putFinal(2) by another, the result is always a completed cell with the value 2. Finally, if many threads perform a putFinal on one cell at the same time, where one of the written elements differs from the others, then the execution always fails, because of the case of performing putFinal on a complete cell.

This explains the determinism properties achieved by having monotonic write operations. It also shows that no contention when completing a cell can affect the determinism property.
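To make the semantics above concrete, the following standalone sketch models a cell over the natural number lattice (join = max) with CAS-based monotonic writes. It is a simplified illustration only, not the actual Reactive Async implementation; the names MiniCell and State are invented here:

```scala
import java.util.concurrent.atomic.AtomicReference

// Simplified model of a cell over the natural number lattice (join = max).
final class MiniCell {
  private case class State(value: Int, complete: Boolean)
  private val state = new AtomicReference(State(0, complete = false))

  private def join(a: Int, b: Int): Int = math.max(a, b)

  def putNext(x: Int): Unit = {
    while (true) {
      val s = state.get()
      val joined = join(s.value, x)
      if (s.complete) {
        // Writing to a completed cell fails unless the join leaves it unchanged.
        if (joined != s.value) throw new IllegalStateException("write to completed cell")
        return
      }
      // Monotonic write: store the join of the current and the new value.
      if (state.compareAndSet(s, State(joined, complete = false))) return
    }
  }

  def putFinal(x: Int): Unit = {
    val s = state.get()
    if (s.complete) {
      if (s.value != x) throw new IllegalStateException("conflicting final value")
    } else {
      // Fails if the current value is of strictly higher order than x.
      if (join(s.value, x) != x) throw new IllegalStateException("final value below current")
      if (!state.compareAndSet(s, State(x, complete = true))) putFinal(x) // retry on contention
    }
  }

  def value: Int = state.get().value
}
```

Under this model, the scenario described above plays out mechanically: after putFinal(3), a later putNext(5) throws, and after the cell holds 5, putFinal(3) throws, so the race fails either way.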


2.2 Dependencies

Dependency handling is provided in Cell, where one can explicitly specify that the result of a cell depends on the result of another cell. Explicit dependency assignment is mainly used so Reactive Async can keep track of all the dependencies, in order to later find the cyclic dependencies and resolve them.

Definition 2.5. If cell A depends on cell B, then A is referred to as the dependent, B as the dependee, and the relationship is expressed as A dep B.

There are two types of dependencies that can be assigned by invoking the following two methods of the dependent:

def whenNext(dependee: Cell[K, V],
             predicate: V => WhenNextPredicate,
             shortcutValue: Option[V]): Unit

def whenComplete(dependee: Cell[K, V],
                 predicate: V => Boolean,
                 shortcutValue: Option[V]): Unit

These two methods cause a callback function to be triggered that can write to the dependent cell when the dependee is written to with a new value. In the following, this will be referred to as a dependency being triggered.

The parameters of the methods are the following:

– dependee, which is the dependee cell

– predicate, which is a function

– shortcutValue, which is the new value to possibly be written to the dependent cell

The predicate function determines whether the shortcutValue is written to the dependent cell once the dependency is triggered. Finally, the shortcutValue is an Option,1 which can be defined as Some(v), where the value v is written to the dependent cell. It can also be defined as None, implying that the value written to the dependent cell is the same as the new value written to the dependee that triggered the dependency.

That means, if the dependee is written to with value 3, triggering the dependency, then the dependent is also written to with value 3. Whenever a dependency is assigned, a callback function is created that evaluates the predicate and applies the changes accordingly. This callback is executed asynchronously whenever the dependency is triggered. The dependency relationship between two cells is removed once the dependee is completed. However, what triggers the dependency depends on the dependency operation.

The whenNext dependency is triggered either when the dependee value changes, when it is completed, or if the dependee is already completed when the dependency is being assigned. Each time the dependency is triggered, the predicate is evaluated with the written value (the join of the current and new value). The return type of the predicate is WhenNextPredicate, which is a sealed trait with three case objects extending it. The returned result determines the outcome of a triggered whenNext dependency.

1http://www.scala-lang.org/api/2.11.8/#scala.Option

DoNothing indicates that nothing happens to the dependent cell.

DoPutNext indicates that the dependency triggers a putNext operation with the shortcutValue.

DoPutFinal indicates that the dependency triggers a putFinal operation with the shortcutValue.

Example 2.6.

cell1.whenNext(cell2, (x: Int) =>
  x match {
    case 1 => DoPutNext
    case 2 => DoPutFinal
    case _ => DoNothing
  },
  Some(3))

Example 2.6 shows cell1 assigning a whenNext dependency on cell2: if cell2 receives the new value 1, then putNext(3) is performed on cell1; if cell2 receives the new value 2, then putFinal(3) is performed on cell1; otherwise no changes are made to cell1.

Example 2.7.

cell1.whenNext(cell2, (x: Int) =>
  x match {
    case 1 => DoPutNext
    case 2 => DoPutFinal
    case _ => DoNothing
  },
  None)

Example 2.7 is similar to example 2.6. The difference is that example 2.6 always writes 3 to the dependent, whereas in example 2.7 the value written to the dependee is the same value written to the dependent. For example, if cell2 receives the new value 1, then putNext(1) is performed on cell1; if cell2 receives the new value 2, then putFinal(2) is performed on cell1; otherwise no changes are made to cell1.

The whenComplete dependency is triggered either when the dependee is being completed or if the dependee is already completed when the dependency is assigned.

Similar to whenNext, the predicate is evaluated with the written value, but has the return type Boolean, with the following outcomes:

false indicates that nothing happens to the dependent cell.

true indicates that the dependency triggers a putFinal operation with the shortcutValue.

Example 2.8.

cell1.whenComplete(cell2, (x: Int) => x == 1, Some(3))

Example 2.8 shows cell1 assigning a whenComplete dependency on cell2: if cell2 is completed with the value 1, then putFinal(3) is performed on cell1; otherwise no changes are made to cell1.

It is possible to have both a whenNext and a whenComplete dependency on the same dependee cell from the same dependent cell. In this scenario, everything works as previously explained, except that completing the dependee cell only triggers the whenComplete dependency.

2.3 Callbacks

Similar to Futures and Promises, one can assign callback functions on a cell that are run asynchronously once triggered. There are two types of callback functions:

def onNext[U](callback: Try[V] => U): Unit
def onComplete[U](callback: Try[V] => U): Unit

onNext The callback function is triggered whenever the cell receives a new intermediate value, or when it is completed if no onComplete callback is assigned to the same cell.

onComplete The callback function is triggered once a cell is completed.

Both onNext and onComplete take a function with a parameter of type Try[V] that returns some value of type U. A Try[V] object contains either Success(v), where v is a successfully written value of type V, or Failure(e), where e is an exception.

Example 2.9.

cell1.onNext {
  case Success(v) => println("This is my intermediate result: " + v)
  case Failure(e) => println("Error: " + e)
}

cell1.putNext(4)

Example 2.9 shows how to assign an onNext callback to cell1, which is triggered by the putNext operation on cell1. If the intermediate value 4 was successfully written, the callback prints it; otherwise it prints the exception. Note that because callbacks execute asynchronously, one has to be careful where they are placed, so that they can finish before the application terminates.


2.4 Thread pool and quiescence

Reactive Async has its own thread handling interface, provided by the HandlerPool. When creating an instance of a HandlerPool, one can specify the number of threads the pool should contain. A pool is used to assign a given task for execution by calling execute with a given function. Creating a cell completer requires a pool, which is used to execute the callback functions asynchronously as tasks on that same pool. Furthermore, HandlerPool supports detection of quiescence in a thread pool.

Definition 2.10. A pool is quiescent when there are no unfinished submitted tasks currently queued or running on it.
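A very small sketch of the idea behind quiescence detection: count submitted-but-unfinished tasks and signal when the count reaches zero. CountingPool is an invented helper for intuition, not the actual HandlerPool; among other things, a real implementation must handle the count transiently reaching zero between independent submissions:

```scala
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.{CountDownLatch, Executors}

final class CountingPool(threads: Int) {
  private val pool = Executors.newFixedThreadPool(threads)
  private val pending = new AtomicInteger(0)
  private val quiet = new CountDownLatch(1)

  def execute(task: () => Unit): Unit = {
    pending.incrementAndGet() // one more unfinished task
    pool.execute(new Runnable {
      def run(): Unit =
        try task()
        finally if (pending.decrementAndGet() == 0) quiet.countDown()
    })
  }

  // Blocks until there are no unfinished submitted tasks, i.e. quiescence.
  def awaitQuiescence(): Unit = { quiet.await(); pool.shutdown() }
}
```

Tasks submitted from inside a running task keep the pending count above zero, so quiescence is only reported once the whole task tree has finished.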

This is useful due to two cases where a pool can finish all task executions and become quiescent, but still have incomplete cells.

1. A cyclic dependency, that is, each cell's result depends on another cell's result, where this dependency chain forms a cycle. A simple example: three cells A, B and C, where A dep B, B dep C and C dep A. A cyclic dependency where each cell in the cycle can only reach cells within the cycle is called a closed strongly connected component (CSCC) (see figure 2.2 (a)).

2. Cells without any dependencies that cannot be completed for some reason, which may be dependees of other dependent cells that are then also unable to complete. Such a dependee cell is referred to as an independent unresolvable cell (IUC) (see figure 2.2 (b)).

Figure 2.2: (a) shows a dependency chain where cell2, cell3 and cell4 form a CSCC; (b) shows a dependency chain where cell3 is an IUC.

In order not to deadlock or leave unfinished computations when a CSCC or IUC occurs, one needs some defined behavior for how these cases are handled and resolved. For this, Reactive Async provides an interface for resolving those cells. This is where Key[V] plays a role. Apart from holding the lattice instance used by a cell, Key[V] requires defining two methods:

trait Key[V] {
  ...
  def resolve[K <: Key[V]](cells: Seq[Cell[K, V]]): Seq[(Cell[K, V], V)]
  def default[K <: Key[V]](cells: Seq[Cell[K, V]]): Seq[(Cell[K, V], V)]
}

resolve A method that takes a list of cells whose dependencies form a CSCC, and returns a list of tuples (cell, value), where the first element is a cell and the second element is the value that cell is to be completed with.

default A method that takes a list of incomplete cells that do not form a CSCC, and returns a list of tuples (cell, value), where the first element is a cell and the second element is the value that cell is to be completed with.

These methods are invoked by a method called quiescentResolveCell provided by HandlerPool. This method returns a future that can be used to place a barrier, preventing an application from terminating before all results are computed.

Example 2.11.

val pool = new HandlerPool(4)
pool.execute(() => someHeavyTask())
val future = pool.quiescentResolveCell
Await.ready(future, 15.minutes)

Example 2.11 starts by creating a thread pool with 4 threads, then submits a task to the pool, which is executed as soon as a thread is free. The current thread continues and invokes quiescentResolveCell, which returns a future. The future is used to create a barrier with Await.ready, where the current thread waits until all submitted tasks have finished executing, or until 15 minutes have passed. When all submitted tasks are finished, the pool becomes quiescent. Then quiescentResolveCell resolves all incomplete cells according to the resolve and default methods, and finally completes the future, which lowers the barrier so the blocked thread can proceed.
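The barrier pattern used here can be sketched with plain scala.concurrent primitives. The background thread and the 50 ms sleep below are stand-ins for the pool's resolution work, not part of the Reactive Async API:

```scala
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

// A Promise whose future acts as the barrier, as quiescentResolveCell's does.
val resolved = Promise[Boolean]()

// Stand-in for the pool: complete the promise once "all work" is done.
new Thread(() => { Thread.sleep(50); resolved.success(true) }).start()

// The current thread blocks here until the promise is completed,
// or until the timeout expires (cf. Await.ready in example 2.11).
val ok = Await.result(resolved.future, 1.minute)
```

Completing the promise is what "lowers the barrier": Await.result returns as soon as success is called on the other thread.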

quiescentResolveCell first finds all CSCCs using an algorithm provided by OPAL, a static analysis framework, and invokes resolve for each CSCC with the incomplete cells forming it. Finally, default is invoked with the rest of the cells, which are incomplete due to IUCs.

To see two complete static analysis implementations using Reactive Async, go to chapter 4.


Chapter 3

Implementation

Reactive Async is a prototype implementation of an event-based asynchronous programming model implemented in Scala. The implementation provides an API usable for developing concurrent applications using refinable shared variables, according to an application-specific lattice.

The two major components in Reactive Async are the cell and the thread pool. A cell is represented using two different interface types, used for reading from and writing to the cell, respectively. These types are generic in two type parameters: the value the cell contains and the key, which determines the lattice and the resolution of cyclic dependencies, and provides default values. The resolution of a cell ends in a callback function being executed by the HandlerPool, which registers a task for each execution.

3.1 Cell

The interface type used for reading from a cell is Cell[K, V], and the type used for writing to a cell is CellCompleter[K, V]. However, CellCompleter[K, V] should not be confused with the factory singleton object CellCompleter, which is used for creating a cell. The following code shows the factory singleton object CellCompleter.

object CellCompleter {
  def apply[K <: Key[V], V](pool: HandlerPool, key: K): CellCompleter[K, V] = {
    val impl = new CellImpl[K, V](pool, key)
    pool.register(impl)
    impl
  }
}

It creates an object of CellImpl[K, V], a class that implements the functionality of both Cell[K, V] and CellCompleter[K, V], where K is the key type and V is the value type. It takes a HandlerPool parameter that determines which pool the cell registers to, which is then used for executing


the callback functions assigned to the created cell. It then registers the cell with the pool and finally returns the CellImpl[K, V] object, typed as CellCompleter[K, V]. The Cell[K, V] object is contained in the CellCompleter[K, V] object, as can be seen in example 2.4. The CellCompleter[K, V] object is used to perform putNext and putFinal operations on a cell, while the Cell[K, V] object is used for assigning callbacks and dependencies. A cell contains the following information:

• A value: The value the cell holds

• Callbacks: The functions that are to be executed when this cell is written to

• Dependencies: Which cells this cell depends on

The type of the value a cell holds is determined by the type parameter V, which is specified at the creation of a cell and a cell completer.

3.1.1 Callbacks

Callbacks are separated into two categories:

1. NextCallback
2. CompleteCallback

These two sorts of callbacks are represented as classes that contain the necessary information about the callbacks. Both work almost identically; the difference is that NextCallback is used for onNext-typed callbacks, triggered when a cell receives a new intermediate value, while CompleteCallback is used for onComplete-typed callbacks, triggered when a cell completes.

Objects of NextCallback are created either by assigning onNext callbacks or whenNext dependencies. Similarly, all objects of CompleteCallback are created either by assigning onComplete callbacks or whenComplete dependencies.

Furthermore, all callback objects are stored in the cell that triggers them. How everything works for the dependencies is explained in section 3.1.2.

Both NextCallback and CompleteCallback take the following information as parameters:

– A HandlerPool providing the threads to execute callbacks on
– A callback function to execute when triggered

– A source cell, that is, the cell used to create the callback

A callback function contained in a NextCallback object will be referred to as a NextCallback, whereas a callback function contained in a CompleteCallback object will be referred to as a CompleteCallback.

def onNext[U](callback: Try[V] => U): Unit = {
  val newNextCallback =
    new NextCallback[K, V](pool, callback, this)
  dispatchOrAddNextCallback(newNextCallback)
}


def onComplete[U](callback: Try[V] => U): Unit = {
  val newCompleteCallback =
    new CompleteCallback[K, V](pool, callback, this)
  dispatchOrAddCompleteCallback(newCompleteCallback)
}

The code above shows the implementation of onNext and onComplete, which create a new object of NextCallback and CompleteCallback, respectively. The pool the objects take is the same pool assigned to the cell, while the source cell for onNext and onComplete assignments is the cell on which the method is called.

Example 3.1.

c.onNext {
  case Success(v) => println("Value: " + v)
  case Failure(e) => println("Error: " + e)
}

Take example 3.1, where the cell c calls the onNext method. The new NextCallback object created in onNext takes the same pool that was assigned to c at creation, and c is the source cell, since it is the cell used to create the callback. Finally, when a callback is created, it is either executed instantly or stored in the cell it was assigned to. This is determined by dispatchOrAddNextCallback and dispatchOrAddCompleteCallback: if the callback is assigned to an already completed cell, the callback is triggered instantly with the value the cell was completed with; otherwise the callback is stored. In example 3.1, the callback is stored in c. How these methods determine whether a cell is completed is explained in section 3.1.3.
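The dispatch-or-add decision can be sketched as follows. This is a simplified stand-in that uses a lock instead of the lock-free state described in section 3.1.3, and SketchCell is a hypothetical name, not the CellImpl code:

```scala
import scala.util.{Try, Success}

// Simplified sketch: run a callback immediately if the cell is already
// complete, otherwise store it to be triggered on completion.
final class SketchCell[V] {
  private var completedWith: Option[Try[V]] = None
  private var stored: List[Try[V] => Unit] = Nil

  def dispatchOrAddCallback(cb: Try[V] => Unit): Unit = synchronized {
    completedWith match {
      case Some(result) => cb(result)    // already complete: trigger instantly
      case None         => stored ::= cb // incomplete: store in the cell
    }
  }

  def complete(result: Try[V]): Unit = synchronized {
    completedWith = Some(result)
    stored.foreach(cb => cb(result)) // trigger everything stored so far
    stored = Nil
  }
}
```

A callback registered before completion is stored and fires when complete is called; one registered afterwards fires immediately, mirroring the behavior described above.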

The NextCallbacks that are executed in a cell c differ between a putNext and a putFinal operation performed on c.

putNext All NextCallbacks are executed if putNext causes a value change in c

putFinal A NextCallback nc is executed if there exists no CompleteCallback in c that has the same source cell as nc

The CompleteCallbacks in c are executed once a putFinal operation is performed on c.

The only callbacks that can be removed are callbacks assigned by whenNext and whenComplete dependencies, which is explained in the following section.

3.1.2 Dependencies

Similar to callbacks, the dependencies are also separated into two categories:

1. NextDependency


2. CompleteDependency

These two are represented as classes that contain the necessary information about the dependencies. NextDependency and CompleteDependency take the following information as parameters:

– The dependee cell

– A predicate to be evaluated
– A shortcut value

– The dependent cell completer

For each whenNext assignment, a new object of NextDependency is created and stored in the dependent cell, where all the parameters specified in the whenNext call are transferred to the new instance of NextDependency.

def whenNext(dependee: Cell[K, V],
             predicate: V => WhenNextPredicate,
             shortcutValue: Option[V]): Unit = {
  ...
  val newDep = new NextDependency(dependee, predicate, shortcutValue, this)
  ...
  dependee.addNextCallback(newDep, this)
  ...
}

def addNextCallback[U](callback: Try[V] => U, source: Cell[K, V]): Unit = {
  val newNextCallback = new NextCallback[K, V](pool, callback, source)
  dispatchOrAddNextCallback(newNextCallback)
}

The code shows the important parts of the whenNext implementation. It starts by creating a new NextDependency, where this is the dependent cell completer, and later calls the method addNextCallback on the dependee, which creates a new NextCallback. As explained, all callbacks are stored in the cells that trigger them. In the case of a dependency, the cell that triggers the callback is the dependee, which is why addNextCallback is called on the dependee. addNextCallback takes two parameters, the callback function and the source cell. It may seem that addNextCallback is passed the newDep object, but it is actually passed the apply method defined in NextDependency as the callback function, which is explained later in this section. The source cell for dependencies is always the dependent cell. addNextCallback then creates a new NextCallback object, which takes the pool assigned to the cell, the callback function and the source cell, and finally calls dispatchOrAddNextCallback, similar to the onNext method.

Everything explained about the whenNext implementation, and how NextDependency and NextCallback are used, works almost identically for


the whenComplete implementation. The difference is that whenComplete uses CompleteDependency's apply method as the callback function, which is wrapped in a new CompleteCallback instead. Also, the apply method in CompleteDependency differs from the one in NextDependency.

def apply(x: Try[V]): Unit

In general, for both NextDependency and CompleteDependency, the apply method takes the newly written value that triggered the callbacks as an object of type Try[V]. If the apply parameter x contains Failure(e), nothing happens; otherwise, if x contains Success(v), the predicate is evaluated with the value v. The differences between the apply method in NextDependency and the one in CompleteDependency are the following:

1. The predicate is handled differently due to the differences in the return value.

2. The removal of the dependency objects stored in the dependent cell needs to be handled differently.

In the case of a NextDependency nd, if nd is triggered by a completing dependee cell, the apply method in nd removes all NextDependency objects from the dependent cell whose dependee cell matches that of nd. In the case of a CompleteDependency, the behavior depends on whether predicate returns true or false:

false The apply method removes both the NextDependency objects and the CompleteDependency objects in the dependent cell whose dependee cells match that of the triggered CompleteDependency callback.

true Implies that the dependent cell is also completed, and therefore there is no need to remove any dependencies.

As for the removal of NextCallbacks and CompleteCallbacks created by dependency assignments, they are removed when the dependent cell is completed, that is, when putFinal is performed on the dependent cell.

Let's consider a complete scenario where every step of creating, storing and removing callback objects and dependency objects is described:

val pool = new HandlerPool
val completer1 =
  CellCompleter[NaturalNumberKey.type, Int](pool, NaturalNumberKey)
val completer2 =
  CellCompleter[NaturalNumberKey.type, Int](pool, NaturalNumberKey)
val cell1 = completer1.cell
val cell2 = completer2.cell
cell1.whenNext(cell2, (x: Int) =>
  x match {
    case 2 => DoPutNext
    case _ => DoNothing
  },


  Some(4))
completer1.putFinal(1)
completer2.putNext(2)

The code shows how cell1 adds a whenNext dependency on cell2: if cell2 receives the new value 2, then putNext(4) is performed on cell1. A NextDependency object created by whenNext is stored in cell1, while the NextCallback object created is stored in cell2. Then completer1 completes cell1 with the value 1 by calling putFinal, triggering the dependency callback in cell2. The callback completes by removing the NextDependency object in cell1. Finally, putFinal ends by removing the callback from cell2. When putNext(2) is executed, no callbacks are executed, because cell2 no longer contains one.

3.1.3 State

Everything in a cell that can be changed asynchronously through the public API is a shared variable: the cell value, the NextDependency and CompleteDependency objects, and the NextCallback and CompleteCallback objects. The cell's state changes when one of these shared variables is changed, which happens whenever one of the following operations is used:

– putNext, updates a cell
– putFinal, completes a cell
– whenNext, assigns an update dependency
– whenComplete, assigns a complete dependency
– onNext, adds an update callback
– onComplete, adds a complete callback

In order to ensure that a state change is atomic, all shared variables of a cell are clustered and contained in a single state object, an instance of the class State. In State, all dependencies and callbacks are contained in hash maps. The callbacks are separated into two maps, where the source cell is the key, mapping to a list of NextCallback or CompleteCallback objects. The dependencies are also separated into two maps, where the dependee cell is the key, mapping to a list of NextDependency or CompleteDependency objects. This ensures fast lookups and removals no matter how many dependencies or callbacks there are. Furthermore, the state is an AtomicReference,¹ and is manipulated using a CAS operation. When the state is changed, the following steps ensure the change is made atomically:

1. Read the current state

2. Create a new state object that contains the change in the state
3. Execute compareAndSet(currentState, newState)

¹ https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicReference.html


4. If the CAS operation fails, then go to step 1
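The four steps above correspond to the following retry loop over an AtomicReference, sketched here on a simplified state holding only a list of callbacks. TinyState and TinyCell are illustrative names, not the actual CellImpl code:

```scala
import java.util.concurrent.atomic.AtomicReference

// Simplified immutable state: in the real State this also holds the cell
// value, dependency maps and callback maps.
final case class TinyState(callbacks: List[String])

final class TinyCell {
  private val state = new AtomicReference(TinyState(Nil))

  @annotation.tailrec
  final def addCallback(cb: String): Unit = {
    val current = state.get()                     // 1. read the current state
    val next = TinyState(cb :: current.callbacks) // 2. new state with the change
    if (!state.compareAndSet(current, next))      // 3. attempt the CAS
      addCallback(cb)                             // 4. on failure, start over
  }

  def callbacks: List[String] = state.get().callbacks
}
```

Because each attempt copies the old state into a fresh immutable object, a failed CAS loses no information: the thread simply re-reads the state and retries, so no update from a competing thread is overwritten.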

With this approach to changing the state of a cell, efficiency can suffer when there is contention for the state, that is, when several threads use the operations mentioned above on the same cell at the same time. Because this was not observed, nothing is currently implemented to ease the contention if it occurs. However, if this were to become an issue, the contention could be reduced with exponential backoff: a thread waits for a short time before retrying to update the state, and each retry increases the waiting time.

When a cell is incomplete, the state object is of type State, which contains all the information explained previously. However, when a cell is completed, the state object's type changes to Try[V]. This is how dispatchOrAddCompleteCallback and dispatchOrAddNextCallback know whether a cell is completed: they look at the type of the state object in the cell. Once the state has type Try[V], no more state changes can be applied to the cell.

Recall that Try[V] can only contain a value of type V. When a cell's state switches to type Try[V], it only contains the value it was completed with, and consequently loses all information about the dependencies and callbacks. Therefore there is no need to remove any dependencies or callbacks attached to a cell once it has been completed.

3.2 HandlerPool

The HandlerPool has two roles: it is a thread pool used to execute tasks asynchronously, which can be user-specified tasks or callbacks, and it monitors the resolution status of a set of cells. The HandlerPool is implemented using a ForkJoinPool [13], a thread pool with a number of live threads on which one can execute tasks.

Monitoring of cells Whenever a cell is created, it is registered with the HandlerPool by adding it to a hash map of incomplete cells. When a cell is completed, it is deregistered from the HandlerPool, that is, removed from the hash map of incomplete cells, because it has been resolved and its resolution status no longer needs monitoring. Registering and deregistering cells can happen asynchronously, therefore the hash map is represented as an AtomicReference, where changes are made using the same steps as explained in section 3.1.3. Also, similar to cell state changes, performance can be affected by contention when many threads are registering and deregistering cells at the same time. The implementation does not include a way to ease the contention if it occurs.

Quiescence The HandlerPool, as mentioned in section 2.4, can detect when the pool becomes quiescent. This is done by tracking the number of unfinished submitted tasks, that is, the number of functions that are executing or queued to be executed


using the execute method provided by the HandlerPool. Whenever execute is called with some function, a counter is increased, registering the newly submitted task with the pool. When a task finishes executing, the counter is decreased, deregistering the task from the pool. A pool becomes quiescent when this counter reaches zero, which causes all onQuiescent callbacks to be executed asynchronously. An onQuiescent callback is the product of a quiescentResolveCell call, which creates and assigns an onQuiescent callback function to the pool. This callback is executed once the pool becomes quiescent. Because the task counter and the onQuiescent callbacks are both strictly correlated with the quiescence of a pool, they are contained in a state object, similar to the state of a cell. This state object is an instance of the class PoolState and is represented as an AtomicReference, since it can be changed asynchronously. The pool state changes are made using the same steps as mentioned in section 3.1.3.

Contention could decrease performance if many threads are submitting tasks while many tasks are finishing. The implementation does not include a way to ease the contention if it occurs.
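The counter-based quiescence detection described above can be sketched on top of a plain Java executor. QuiescencePool and its members are illustrative names, not the Reactive Async API, and the sketch ignores the state clustering and contention concerns discussed above:

```scala
import java.util.concurrent.Executors
import java.util.concurrent.atomic.{AtomicInteger, AtomicReference}

final class QuiescencePool(threads: Int) {
  private val pool = Executors.newFixedThreadPool(threads)
  private val unfinished = new AtomicInteger(0)
  private val handlers = new AtomicReference(List.empty[() => Unit])

  def execute(task: () => Unit): Unit = {
    unfinished.incrementAndGet() // register the submitted task
    pool.execute { () =>
      try task()
      finally if (unfinished.decrementAndGet() == 0) {
        // counter reached zero: the pool is quiescent, fire all handlers once
        handlers.getAndSet(Nil).foreach(h => h())
      }
    }
  }

  def onQuiescent(handler: () => Unit): Unit =
    handlers.updateAndGet(hs => handler :: hs)

  def shutdown(): Unit = pool.shutdown()
}
```

Every submission increments the counter before the task is enqueued, and every completion decrements it, so the handlers fire exactly when no submitted work remains.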

Resolution of incomplete cells When the pool becomes quiescent and there are still incomplete cells, a resolution process starts, defined by the resolve and default methods. These are executed when the onQuiescent callback function created and assigned by quiescentResolveCell is executed.

def quiescentResolveCell[K <: Key[V], V]: Future[Boolean] = {
  val p = Promise[Boolean]
  this.onQuiescent { () =>
    // Find all cSCCs
    val incompleteCells = ...
    val cSCCs =
      closedSCCs(incompleteCells,
                 (cell: Cell[K, V]) => cell.totalCellDependencies)
    cSCCs.foreach(cSCC => resolveCycle(cSCC))
    // Find the rest of the unresolved cells
    val rest = ...
    resolveDefault(rest)
    p.success(true)
  }
  p.future
}

The quiescentResolveCell method first creates a promise of type Boolean, then creates the onQuiescent callback, and finally returns the future of the promise. The created onQuiescent callback first finds all CSCCs formed by the incomplete cells, using closedSCCs provided by the OPAL framework. Then, for each CSCC, the onQuiescent callback calls the method resolveCycle.

def resolveCycle[K <: Key[V], V](CSCC: Seq[Cell[K, V]]): Unit = {
  val key = CSCC.head.key
  val result = key.resolve(CSCC)


  for((c, v) <- result) c.resolveWithValue(v)
}

resolveCycle takes the list of cells forming the CSCC, extracts the key of a cell and invokes its resolve method with the CSCC. The resolve method returns a list of tuples (cell, value), where cell is to be completed with value. Finally, resolveCycle iterates through the tuples and calls resolveWithValue(v), a method that performs putFinal(v) on the cell on which it is called.

def resolveDefault[K <: Key[V], V](cells: Seq[Cell[K, V]]): Unit = {
  val key = cells.head.key
  val result = key.default(cells)
  for((c, v) <- result) c.resolveWithValue(v)
}

Once the cycle resolution is done, quiescentResolveCell finds the rest of the incomplete cells and invokes resolveDefault with those cells. The resolveDefault method is identical to resolveCycle, except that it invokes the default method with the remaining incomplete cells.

The last thing the onQuiescent callback in quiescentResolveCell does is complete the future returned by quiescentResolveCell with the value true. This makes it possible to place a barrier until the onQuiescent callback is finished.


Chapter 4

Case study

The OPAL [3] static analysis framework implements many different forms of concurrently executed analyses for Java Bytecode. In order to show the practical applicability of Reactive Async, two analyses from the OPAL framework have been reimplemented using the model, namely a purity analysis and an immutability analysis.

This chapter gives some insight into determining the purity of a method and the immutability of a class: how the analyses work, what approach was used to implement them, and how they are actually implemented using Reactive Async. The final section reports on the experience of applying Reactive Async to these analyses and shows the differences between the OPAL implementations and the Reactive Async implementations, ending with results showing that applying Reactive Async to an OPAL analysis implementation can reduce the code size significantly. This is shown for the immutability analysis, where using Reactive Async halves the code size.

4.1 Purity analysis

Purity analysis is about determining if a method is pure or impure. A method is called a pure method if all the following statements hold:

– The method body does not contain an instruction that reads from or writes to a mutable field.

– All invoked methods are pure.

If one of these statements does not hold, the method is called an impure method. That means a pure method, given the same input, always produces the same output, whereas an impure method might not.

Example 4.1.

class Demo {
  val finalField = 0
  var mutableField = 0


  def pure(): Int = finalField + 5

  def impure(): Int = mutableField + finalField
}

Example 4.1 shows that pure is a pure method, due to only reading from an immutable value, whereas impure is an impure method, due to reading from a mutable field.

sealed trait Purity
case object UnknownPurity extends Purity
case object Pure extends Purity
case object Impure extends Purity

Each method is represented as a cell that holds the Purity value of the method, which can be either UnknownPurity, Pure or Impure.

class PurityLattice extends Lattice[Purity] {
  override def join(current: Purity, next: Purity): Purity = {
    if(current == UnknownPurity) next
    else if(current == next) current
    else throw LatticeViolationException(current, next)
  }

  override def empty: Purity = UnknownPurity
}

Figure 4.1: Purity lattice.

The purity lattice allows a one-time write to a cell, where UnknownPurity has the lowest order and is the initial value of a cell, as defined by empty and shown in figure 4.1. If join(Pure, Impure) ever occurs, the method throws a LatticeViolationException, which only happens if a cell contains Pure and then receives Impure, or vice versa.
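The join behavior of figure 4.1 can be exercised stand-alone. The types below are re-declared locally, and LatticeViolation is a stand-in for the LatticeViolationException used in the thesis code:

```scala
sealed trait Purity
case object UnknownPurity extends Purity
case object Pure extends Purity
case object Impure extends Purity

final case class LatticeViolation(current: Purity, next: Purity)
    extends RuntimeException(s"illegal join: $current -> $next")

// Mirrors PurityLattice.join: a single refinement from the bottom element.
def join(current: Purity, next: Purity): Purity =
  if (current == UnknownPurity) next // first real value wins
  else if (current == next) current  // re-writing the same value is allowed
  else throw LatticeViolation(current, next) // Pure <-> Impure is a violation
```

The only legal transitions are thus UnknownPurity to Pure or Impure, and idempotent re-writes of the same value.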

An analysis for each method is executed asynchronously on a thread pool to determine the purity of each method. Each executed analysis reads all Java Bytecode instructions in the body of the method being analyzed. If an instruction is encountered that causes the method to be impure, the cell completer performs putFinal(Impure) on that method's cell. If the analysis for a method


encounters a method invocation, the cell adds a whenComplete dependency on the other cell which represents the invoked method.

invokerCell.whenComplete(invokedMethodCell,
  (x: Purity) => x == Impure,
  Some(Impure))

The dependency implies that, if the dependee is completed with the value Impure, then the dependent cell, that is, the invoker, is also completed with the value Impure.

object PurityKey extends Key[Purity] {
  val lattice = new PurityLattice

  def resolve[K <: Key[Purity]](cells: Seq[Cell[K, Purity]]):
      Seq[(Cell[K, Purity], Purity)] = {
    cells.map(cell => (cell, Pure))
  }

  def default[K <: Key[Purity]](cells: Seq[Cell[K, Purity]]):
      Seq[(Cell[K, Purity], Purity)] = {
    cells.map(cell => (cell, Pure))
  }
}

When the pool becomes quiescent and there are still incomplete cells left, forming CSCCs or being IUCs, then no impurity was found for those methods, meaning the cells could not be resolved to Impure. Consequently, all incomplete cells must represent pure methods, and therefore receive the value Pure according to PurityKey's resolve and default methods shown above.

4.2 Immutability analysis

Immutability analysis is a more advanced and larger analysis. It determines the immutability of a class, that is, whether a class is immutable, conditionally immutable or mutable. Furthermore, the immutability of a class is built on two different immutabilities: object immutability and type immutability. A class whose object immutability value is mutable is referred to as a MutableObject, and a class whose type immutability value is mutable is referred to as a MutableType.

The same pattern is used in referring to all the possible immutability combinations.

Object immutability is determined by the immutability of the fields in a class. If some field in the class or in its superclasses is mutable, the class is a MutableObject. For ConditionallyImmutableObject, consider a field that is an immutable reference, but whose referenced object is mutable.

An example of this would be an immutable list containing objects that can change state. If such a field exists in a class or its superclasses, the class is a ConditionallyImmutableObject. Finally, if a class is neither


found to be a MutableObject, nor a ConditionallyImmutableObject, then consequently it is an ImmutableObject.

Type immutability of a class is restricted by the type immutability of its subclasses. If a class has no subclasses, its type immutability is restricted to its object immutability. For example, if a class has no subclasses and its object immutability is MutableObject, then its type immutability is MutableType. If a class has subclasses, where some are ImmutableType and some are ConditionallyImmutableType, then the class is a ConditionallyImmutableType.
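The rules above can be sketched as a small lattice computation. This is illustrative only; rank, lub and typeImmutability are not part of the thesis code:

```scala
// Order the three levels: Immutable < ConditionallyImmutable < Mutable.
sealed trait Immutability { def rank: Int }
case object Immutable extends Immutability { val rank = 0 }
case object ConditionallyImmutable extends Immutability { val rank = 1 }
case object Mutable extends Immutability { val rank = 2 }

// Least upper bound: the "more mutable" of the two levels.
def lub(a: Immutability, b: Immutability): Immutability =
  if (a.rank >= b.rank) a else b

// Type immutability is restricted by the class's own object immutability
// and by the type immutability of all of its subclasses.
def typeImmutability(objectImm: Immutability,
                     subclassTypeImms: Seq[Immutability]): Immutability =
  subclassTypeImms.foldLeft(objectImm)(lub)
```

With no subclasses the result collapses to the object immutability; a class whose own object immutability is Immutable but that has a Mutable-typed subclass becomes MutableType, matching class X in example 4.2 below.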

Example 4.2.

class A (val i: Int)              // ImmutableObject, ImmutableType
class X (val i: Int)              // ImmutableObject, MutableType
class Y (var j: Int) extends X(j) // MutableObject, MutableType
class Z {                         // ConditionallyImmutableObject,
                                  // ConditionallyImmutableType
  val x: X = new X(10)
}

Example 4.2 shows four different classes with different immutability properties. The first class, A, is an ImmutableObject, due to only having immutable field references, where the fields refer to objects that are also immutable. Since A has no subclasses, its type immutability is restricted to its object immutability, therefore the class is an ImmutableType. Looking at Y, however, the class is a MutableObject, because it has a mutable field reference. Because Y has no subclasses, its type immutability is restricted to its object immutability, which is why the class is a MutableType. X is an ImmutableObject for the same reason class A is. However, X is a MutableType because there exists a subclass, namely Y, that is a MutableType. Finally, Z is a ConditionallyImmutableObject because it has an immutable field reference, but the referenced object is a MutableType. Because Z has no subclasses, its type immutability is restricted to its object immutability, which is why the class is a ConditionallyImmutableType.

sealed trait Immutability
case object Mutable extends Immutability
case object ConditionallyImmutable extends Immutability
case object Immutable extends Immutability

Each class has one cell representing its object immutability and one representing its type immutability. Both use Immutability values, that is, they contain either Immutable, ConditionallyImmutable or Mutable. If a cell representing a class's object immutability contains Immutable, this is interpreted as ImmutableObject. The same goes for the cells representing a class's type immutability, only that it is interpreted as ImmutableType.


References
