
Distributed balanced edge-cut partitioning of large graphs having weighted vertices

JOHAN ELANDER AMAN

Master's Thesis at SICS Swedish ICT
Supervisor: Fatemeh Rahimian

Examiner: Seif Haridi

TRITA-ICT-EX-2015:46


Abstract

Large-scale graphs are sometimes too big to store and process on a single machine. Instead, these graphs have to be divided into smaller parts and distributed over several machines, while minimizing the dependency between the different parts. This is known as the graph partitioning problem, which has been shown to be NP-complete. The problem is well studied; however, most solutions are either not suitable for a distributed environment or unable to do balanced partitioning of graphs having weighted vertices.

This thesis presents an extension to the distributed balanced graph partitioning algorithm JA-BE-JA that solves the balanced partitioning problem for graphs having weighted vertices. The extension, called wJA-BE-JA, is implemented both in the Spark framework and in plain Scala. The two main contributions of this report are the algorithm and a comprehensive evaluation of its performance, including a comparison with the recognized METIS graph partitioner. The evaluation shows that a random sampling policy in combination with the simulated annealing technique gives good results. It further shows that the algorithm is competitive with METIS, as the extension outperforms METIS in 17 of 20 tests.

Contents

1 Introduction
2 Background information
 2.1 Problem statement and terminology
 2.2 Related work
 2.3 JA-BE-JA
  2.3.1 Problem statement
  2.3.2 Solution
  2.3.3 Findings
  2.3.4 Limitations
 2.4 Spark
3 Solution
 3.1 wJA-BE-JA
  3.1.1 Complexity
  3.1.2 False evaluation
 3.2 Implementation
 3.3 Distributed initialization
4 Evaluation methods
 4.1 What to measure
 4.2 Graph selection
  4.2.1 Stanford Large Network Dataset Collection
  4.2.2 Facebook and Twitter
  4.2.3 Initialization of weights
  4.2.4 Summary of test graphs
 4.3 Test environment
5 Results
 5.1 Effect of sample size
 5.2 Effect of sampling policy
 5.3 Effect of δ parameter
 5.4 Effect of T_0 parameter
 5.5 Effect of α parameter
 5.6 Scaling and comparison to METIS
 5.7 Complexity
6 Analysis
 6.1 Sample size
 6.2 Sampling policy
 6.3 δ parameter
 6.4 T_0 parameter
 6.5 α parameter
 6.6 Scaling and comparison to METIS
 6.7 Complexity
 6.8 Data migration
7 Conclusion
Bibliography
Appendix A

List of Figures

2.1 Example of potential color exchange [1]
3.1 Message flow for a node p during one iteration
5.1 Effect of sample size Twitter
5.2 Effect of sample size Facebook I
5.3 Effect of sample size Brightkite
5.4 Effect of sample size Facebook II
5.5 Effect of sampling policy
5.6 Effect of δ Twitter
5.7 Effect of δ Facebook I
5.8 Effect of δ Brightkite
5.9 Effect of δ Facebook II
5.10 Effect of T_0 Twitter
5.11 Effect of T_0 Facebook I
5.12 Effect of T_0 Brightkite
5.13 Effect of T_0 Facebook II
5.14 Effect of α Twitter
5.15 Effect of α Facebook I
5.16 Effect of α Brightkite
5.17 Effect of α Facebook II
5.18 Comparison between wJA-BE-JA and METIS for k = {4, 8, 16, 32, 64}
5.19 Histogram of complexity

List of Tables

3.1 Input parameters
4.1 Input parameters selected for evaluation
4.2 Test graphs
5.1 Resulting values of sample size test Twitter
5.2 Resulting values of sample size test Facebook I
5.3 Resulting values of sample size test Brightkite
5.4 Resulting values of sample size test Facebook II
5.5 Resulting values of sampling policy test
5.6 Resulting values of δ test Twitter
5.7 Resulting values of δ test Facebook I
5.8 Resulting values of δ test Brightkite
5.9 Resulting values of δ test Facebook II
5.10 Resulting values of T_0 test Twitter
5.11 Resulting values of T_0 test Facebook I
5.12 Resulting values of T_0 test Brightkite
5.13 Resulting values of T_0 test Facebook II
5.14 Resulting values of α test Twitter
5.15 Resulting values of α test Facebook I
5.16 Resulting values of α test Brightkite
5.17 Resulting values of α test Facebook II
5.18 Comparison between wJA-BE-JA and METIS for k = {4, 8, 16, 32, 64}

Chapter 1

Introduction

Graphs are widely used to model data and the dependencies between data. They have many applications; a good example is modeling the members of a social network as nodes and letting the edges represent connections between members. Other examples are websites and the links between them, or biological networks [2, 3]. The number of nodes in a graph can range from just a few to millions or even billions [1, 2]. Graphs of such magnitude are sometimes too big to store and process on a single machine. Instead, these graphs have to be divided into several smaller parts and distributed over several machines, while minimizing the dependency between the different parts. Dividing or partitioning a graph into k parts while minimizing the number of cross-partition edges (the edge-cut) is known as the graph partitioning problem, which is proven to be NP-complete [4].

There exist several versions of the graph partitioning problem, depending on the nature of the graph or its application. The problem common to all graph partitioning algorithms is to divide a graph into k non-empty partitions such that the number of edges between different partitions is minimized. A very common variation adds the condition that all partitions should contain the same number of nodes; this is known as the balanced graph partitioning problem. Some graphs have a weight or cost associated with each edge, representing, e.g., the cost of traversing a link in a network. In this case the goal is to minimize the total edge weight of all cross-partition edges, as opposed to minimizing their number. Weights can also be associated with nodes, representing, e.g., how much data is associated with a social network user. When partitioning graphs having weighted nodes, the goal is to have the same total weight in each partition, instead of the same number of nodes.

A graph can naturally have weights associated with both nodes and edges; in that case both weight balancing and edge weight are considered. Which goal is more important depends heavily on the application, as each is usually a trade-off against the other.

Graph partitioning is a well-studied problem with many proposed solutions. However, many of them, including parallel ones, are not suited for a distributed environment and require cheap global access to the entire graph [5–9]. There do exist algorithms designed for distributed environments, one of which is JA-BE-JA [1]. Its authors propose a distributed, local-knowledge, heuristic algorithm to solve the balanced graph partitioning problem. JA-BE-JA has been shown to perform well in comparison to the recognized METIS [6] algorithm. JA-BE-JA cannot, in its current form, handle balanced partitioning of a graph with weights associated with the nodes such that each partition has roughly the same weight.

The main objective of this thesis is to extend JA-BE-JA so that it can, in a distributed environment using only local knowledge, partition graphs into parts of roughly equal weight. The solution is delimited to solve the problem without relaxing the equal-partition-weights condition. Further, the solution is delimited to not consider edge weights or directed graphs. The extension is implemented both in the Spark cluster computing framework [10] and in plain Scala. Spark is built on Scala and designed for distributed environments, achieving scalability while being fault tolerant. It does so by introducing a storage abstraction named resilient distributed datasets (RDDs). Spark is presented in detail in section 2.4.

The contributions of this thesis are as follows:

• an algorithm that solves balanced partitioning of graphs having weighted nodes, using only local knowledge.

• a Spark and a Scala implementation of the algorithm.

• a comprehensive evaluation of its performance, which shows that the algorithm is capable of reducing the edge-cut, depending on the input graph, to between 4% and 24% of the initial edge-cut for a 4-way partition.

• a comparison between the algorithm and the METIS graph partitioner.


Chapter 2

Background information

This chapter covers the background information for this thesis. Section 2.1 presents the problem and its terminology. Section 2.2 presents work related to this thesis. Section 2.3 describes the original JA-BE-JA algorithm in detail. The Spark framework is presented in section 2.4.

2.1 Problem statement and terminology

An unweighted graph is denoted G(V, E) and consists of a set of vertices V and a set of edges E. A graph having weighted nodes is denoted G(V, E, w_v), where w_v is a set of node-weight pairs (v, w). Edges are directed or undirected, depending on whether they can be traversed in one or in both directions, respectively. A graph is referred to as directed or undirected depending on which edge type it contains. Throughout this report the terms nodes and vertices are used interchangeably.

The problem is to partition an undirected weighted graph G(V, E, w_v) into k non-empty partitions P(i), such that all partitions have the same weight, while minimizing the edge-cut. Recall that the edge-cut is defined as the number of cross-partition edges. The number of partitions is denoted k, and a node p is said to have a color c_p ∈ {1, .., k}, which represents the partition it belongs to. The total weights of the graph, of partition i, and of a node p are denoted G_w, P_w(i), and p_w, respectively. If a node p has a weight of p_w = 3 and the color c_p = c, the node has three units of color c. The degree of a node with respect to a color c, denoted d(c), is defined as the number of its neighbors that have the color c.

The set of nodes returned from a sampling method is referred to as the candidate set. A small set of nodes whose combined weight equals a predefined value, i.e., the weight of another node, is referred to as a combination C, and the total weight of the combination is denoted C_w. For example, a node of weight 3 could swap with a combination of two same-colored nodes of weights 1 and 2. A combination is not necessarily the same as a candidate set; rather, it is a subset of a candidate set, C ⊆ candidateSet.

Some algorithms relax the balancing condition in order to achieve a lower edge-cut. The relaxation is quantified by a value called the imbalance. By allowing an imbalance of x%, the maximum deviation from the optimal size is Size_opt ± x%, or in the case of weighted nodes, P_w_opt ± x%. This thesis only considers 0% imbalance.

2.2 Related work

Garey et al. [4] show that the problem of partitioning a graph into balanced partitions while minimizing the edge-cut is NP-complete, meaning that no known algorithm finds an optimal solution efficiently, i.e., without essentially testing all solutions. A good, though not necessarily optimal, solution can be found in reasonable time by using a heuristic algorithm.

Rahimian et al. [1] present a heuristic algorithm designed to partition large graphs with unweighted vertices, G(V, E), into k balanced partitions such that the edge-cut is minimized. The algorithm is fully distributed and does not require any strict synchronization. In a distributed environment there is only partial knowledge about the graph, and it is not possible to have global access to the entire graph. The partial view can consist of neighboring nodes, of a random subset of nodes, or of a combination of both. The solution they propose is to first uniformly assign a color c ∈ {1, .., k} to each node, which represents the partition it belongs to. After the initialization, nodes swap colors with each other if the outcome of the swap is a decreased edge-cut. The swapping is executed iteratively until the partitioning converges. By swapping colors, the amount of each color in the graph is preserved while the edge-cut is decreased. Their algorithm is, however, limited to partitioning unweighted graphs. It is covered in more detail in section 2.3.

Chow et al. [11] present both a vertex partitioner and an edge partitioner. The partitioners are based on breadth-first search and require synchronization on each level of the search. They test their implementations exclusively on small-diameter graphs, where they show that good performance is possible. Their main contribution is a comparison between the two partitioners. Their vertex partitioner is limited to unweighted graphs. The edge partitioner solves the problem of partitioning edges instead of vertices and is of no interest to this thesis.

The bisection algorithm [12] recursively divides an unweighted graph into two non-empty subgraphs until a predefined limit is reached. In each division step, the algorithm attempts to find partitions such that edge-cut is minimized.

Both McSherry [13] and Boppana [14] discuss a spectral method for partitioning graphs. The partitioner is similar to bisection partitioners, but uses mathematical models from linear algebra to determine how to partition the graph. Stanton and Kliot [15] show that spectral methods do not scale to large graphs, since they require global access to the graph.

A common approach to solving the balanced partitioning problem is to use some variation of the multilevel graph partitioning (MGP) algorithm. Many papers [2, 6, 8, 9, 16–19] cover slightly different versions of the algorithm. MGP generally works in three phases. First it coarsens the graph in several steps, until the graph has a small number of vertices. In the second phase the coarsened graph is partitioned using a k-way partitioner. In the last phase the coarsened graph is stepwise projected back onto the original graph. Many of the authors are mainly concerned with the related problem of partitioning an unweighted graph. Most of the papers cover serial or parallel versions that require cheap global access to the entire graph; [19] proposes distributed versions, although some of their underlying components require cheap global access. The graph partitioning algorithm developed by Kernighan and Lin [7], or some variation of it, is often used as the k-way partitioner in MGP [6, 16, 18]. Their algorithm is similar to the bisection algorithm [12]. METIS [6] is a well-recognized MGP implementation, which has the option to perform balanced partitioning of vertex-weighted graphs; it will be used for performance comparison.

2.3 JA-BE-JA

This section covers JA-BE-JA in detail; it is based on the work of Rahimian et al. [1].

2.3.1 Problem statement

As previously stated in section 2.2, JA-BE-JA is a heuristic, iterative algorithm designed to solve the balanced graph partitioning problem in a distributed environment, using only local knowledge and without any strict synchronization. Given a graph G(V, E), it should be partitioned into k non-empty partitions such that each partition holds an equal number of nodes, while minimizing the edge-cut.

2.3.2 Solution

The algorithm treats each node as a processing unit, with information about itself, its neighbors, and optionally a small random subset of other nodes. Each node is initialized with a color, selected uniformly at random from a predefined set of k colors. Based on the information about its neighbors, each node tries to swap its color with other nodes in order to achieve a lower edge-cut. This is done iteratively, until the partitioning converges. Because nodes swap colors with each other, the number of units of each color c is preserved, and therefore the uniformly sized partitions keep their sizes. A node evaluates candidate nodes from its neighboring set, or from a random set, in order to find a possible swapping partner. If a swap that lowers the pairwise edge-cut is found, a color exchange handshake is executed; otherwise, the node keeps its color. The local view of the graph that each node maintains is often a very small subset of the graph. Because local views are so small, the search may get stuck in local optima. To overcome this problem, JA-BE-JA uses the simulated annealing (SA) technique [20], presented in detail below. The core of the JA-BE-JA algorithm is shown in algorithm 1.

Swapping

JA-BE-JA has three different modes of choosing candidate sets to evaluate:

• Local (L): the node evaluates its neighboring nodes.

• Random (R): the node evaluates a uniformly random sample set.

• Hybrid (H): the node first evaluates its neighbors (L). If it did not find a candidate to swap with, it tries to find another candidate from the random set (R).

After a candidate set is selected, the node must determine which node to swap with. To minimize the pairwise edge-cut, the pair of nodes maximizes their combined degree. The condition for swapping between nodes p and q is defined as:

d_p(c_q)^α + d_q(c_p)^α > d_p(c_p)^α + d_q(c_q)^α

The parameter α regulates how the swapping is executed. If α = 1, only exchanges that decrease the pairwise edge-cut are accepted. However, if α > 1, some swaps that do not decrease the edge-cut are also accepted. These swaps work in favor of higher-degree nodes. Figure 2.1 presents an example where d_p(c_q) = 1, d_q(c_p) = 3, d_p(c_p) = 2, and d_q(c_q) = 2. In the case where α = 1 the condition reads 1 + 3 ≯ 2 + 2, and the swap is not accepted. However, if α > 1, the swap is accepted. This does not decrease the edge-cut, but it increases the probability that the two yellow nodes neighboring node q swap in the future. Introducing α in the condition has a potential drawback. For example, with d_p(c_q) = 1, d_q(c_p) = 6, d_p(c_p) = 4, d_q(c_q) = 4, and α = 2, the condition becomes 1 + 36 > 16 + 16, i.e., 37 > 32, even though 1 + 6 < 4 + 4.

Figure 2.1: Example of potential color exchange [1].
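To make the arithmetic of the drawback example concrete, here is a quick Scala check (a sketch using the numbers from the text above, not code from the thesis):

    // Check of the drawback example: with α = 2 the condition accepts the
    // swap (37 > 32) even though the raw degrees satisfy 1 + 6 < 4 + 4.
    val (dpq, dqp, dpp, dqq) = (1.0, 6.0, 4.0, 4.0)
    val alpha = 2.0
    val newUtility = math.pow(dpq, alpha) + math.pow(dqp, alpha) // 1 + 36 = 37
    val oldUtility = math.pow(dpp, alpha) + math.pow(dqq, alpha) // 16 + 16 = 32
    val accepted = newUtility > oldUtility // true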

Simulated Annealing

As discussed above, the local search policy may get stuck in local optima. JA-BE-JA utilizes simulated annealing to overcome this problem by introducing a temperature factor T into the swapping condition. The new condition is defined as:

(d_p(c_q)^α + d_q(c_p)^α) · T > d_p(c_p)^α + d_q(c_q)^α

This allows the algorithm to relax the condition and perform swaps that increase the pairwise edge-cut. By overestimating the post-swap utility, nodes can escape local optima. The factor T decreases by a value δ each iteration until it reaches a threshold; this process is referred to as cooling. During the cooling process, the condition becomes stricter.
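As an illustration, a minimal Scala sketch of the temperature-relaxed swap test and the cooling step (the helper names are hypothetical; the four degrees are assumed precomputed):

    // Temperature-relaxed swap condition: T > 1 relaxes the test,
    // T = 1 restores the plain swap condition.
    def acceptSwap(dpq: Double, dqp: Double, dpp: Double, dqq: Double,
                   alpha: Double, t: Double): Boolean =
      (math.pow(dpq, alpha) + math.pow(dqp, alpha)) * t >
        math.pow(dpp, alpha) + math.pow(dqq, alpha)

    // Cooling: T decreases by δ (delta) each iteration, floored at 1.
    def cool(t: Double, delta: Double): Double = math.max(1.0, t - delta)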

2.3.3 Findings

The authors evaluate the algorithm by conducting tests on a collection of graphs. They test how the parameters affect the result of the partitioner. Their testing shows that both the random and hybrid sampling modes are superior to the local mode, as the local policy is more likely to get stuck in local optima. For their test set, the best results are most often achieved with α = 2. They further show that using an initial T_0 = 2 for the SA generally yields the best results, and that the δ parameter is a trade-off between the number of swaps and the quality of the partitioning. A lower δ value yields a lower edge-cut, at the cost of more swaps. A noteworthy observation is that graphs representing social networks seem to get good partitionings even with a high δ value.

Algorithm 1 Original JA-BE-JA algorithm [1].

Requires: Any node p in the graph has the following methods:
• getNeighbours(): returns p's neighbors.
• getSample(): returns a uniform sample of all the nodes.
• getDegree(c): returns the number of p's neighbors that have color c.

 1: // sample-and-swap algorithm at node p
 2: procedure SampleAndSwap
 3:   partner ← FindPartner(p.getNeighbours(), T_r)
 4:   if partner = null then
 5:     partner ← FindPartner(p.getSample(), T_r)
 6:   end if
 7:   if partner ≠ null then
 8:     color exchange handshake between p and partner
 9:   end if
10:   T_r ← T_r − δ
11:   if T_r < 1 then
12:     T_r ← 1
13:   end if
14: end procedure

15: // find the best swap partner for node p
16: function FindPartner(Node[] nodes, float T_r)
17:   highest ← 0
18:   bestPartner ← null
19:   for q ∈ nodes do
20:     d_pp ← p.getDegree(p.color)
21:     d_qq ← q.getDegree(q.color)
22:     old ← d_pp^α + d_qq^α
23:     d_pq ← p.getDegree(q.color)
24:     d_qp ← q.getDegree(p.color)
25:     new ← d_pq^α + d_qp^α
26:     if (new · T_r > old) ∧ (new > highest) then
27:       bestPartner ← q
28:       highest ← new
29:     end if
30:   end for
31:   return bestPartner
32: end function

2.3.4 Limitations

JA-BE-JA is designed to partition graphs with unweighted nodes. It lacks the notion of node weight and is therefore unable to do balanced partitioning of graphs having weighted nodes.

2.4 Spark

Spark [10] is an open-source cluster computing framework, first presented in 2010. It is designed to provide both scalability and fault tolerance, which it achieves by introducing a storage abstraction called resilient distributed datasets (RDDs) [21]. Spark was first developed at UC Berkeley and is, at the time of writing, an Apache top-level project with more than 250 developers and frequent releases. The framework was originally developed for Scala, although Java and Python APIs were added later [22].

The Spark programming model is based around RDDs and parallel operations on them. The core of a Spark application is a driver program that defines the operations to be executed on the RDDs. The driver deploys workers, which execute those operations and return the results to the driver. A Spark application can be launched in standalone mode or be deployed on a cluster.

An RDD is a read-only set of objects, partitioned and distributed over the machines in a cluster. Being read-only, it cannot be modified; instead, transformations can be applied to it, each returning a new RDD holding the post-transformation state. An RDD can be created directly from a file, by parallelizing a collection, e.g., an array, or by changing the persistence level of an existing RDD. By adding filters to the transformations, the programmer can choose not to transform objects that do not fit the criteria. This gives fine-grained control over the data flow of the execution. The transformations can be performed in parallel by the workers. All transformations are lazy and are only computed when required. An RDD achieves fault tolerance by storing its lineage, a record of all transformations performed on it. In case of failure, the lineage can be applied to the RDD in storage to recompute the RDD state.
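As a brief illustration of these concepts (a self-contained sketch, not code from the thesis), the following Spark program creates an RDD from a collection, applies lazy transformations, and forces computation with an action:

    import org.apache.spark.{SparkConf, SparkContext}

    object RddSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("rdd-sketch").setMaster("local[2]"))

        // Create an RDD by parallelizing a collection; RDDs are read-only.
        val numbers = sc.parallelize(1 to 1000)

        // Transformations (filter, map) are lazy: they only extend the lineage.
        val evenSquares = numbers.filter(_ % 2 == 0).map(n => n.toLong * n)

        // An action (reduce) triggers the actual distributed computation.
        println(evenSquares.reduce(_ + _))
        sc.stop()
      }
    }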

The Spark framework also supports two types of shared variables: broadcast variables and accumulators. A broadcast variable is a read-only object meant to be used as a look-up object; a copy is sent to each worker only once, avoiding resending it every time a worker needs to do a look-up. An accumulator is a shared variable for parallel sums: workers are only allowed to perform simple additions to it, and the driver is the only thread allowed to read its value.
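A small sketch of the two shared-variable types, using the Spark 1.x API the thesis targets (sc is assumed to be an existing SparkContext; the names are illustrative):

    // Broadcast variable: a read-only look-up table shipped to each worker once.
    val lookup = sc.broadcast(Map(1 -> "a", 2 -> "b"))

    // Accumulator: workers may only add to it; only the driver reads it.
    val hits = sc.accumulator(0)
    sc.parallelize(Seq(1, 2, 3)).foreach { key =>
      if (lookup.value.contains(key)) hits += 1
    }
    println(hits.value) // read on the driver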


Chapter 3

Solution

This chapter presents the solution, its parameters, and its different modes of operation. Section 3.1 presents the algorithm and its subparts. Section 3.2 presents the implementations. Section 3.3 presents an alternative initialization that could be used in a distributed environment.

3.1 wJA-BE-JA

This solution is, as previously mentioned, based on JA-BE-JA [1], which treats each node as a processing unit. The basic idea is to add a function that, instead of single candidate nodes, finds combinations of nodes with a given color and weight to swap with. Similar to JA-BE-JA, it starts by assigning a color to each node in the graph, uniformly at random, such that each partition has the same weight. After the color assignment, each node tries to decrease its edge-cut by iteratively swapping colors with other nodes. The balance achieved by the initialization of the partitions is preserved during color swapping, as no node is allowed to change its color otherwise. In the original algorithm, each node evaluates a candidate set in pursuit of a candidate node that yields the lowest pairwise edge-cut. If such a node is found, the nodes perform a color exchange handshake. The two nodes can swap colors without considering the quantity of color units each node has, as each node only has one unit of color. However, when a node p has a weight p_w > 1, it must either find another node q such that p_w = q_w, or find a combination of nodes, all having the same color, C = {v_0, v_1, ..., v_n}, such that p_w = C_w. If a combination suitable for a swap is found and all the nodes participating in the swap accept the proposal, an all-or-nothing swap is executed, meaning that either all nodes swap colors or none of them do. The all-or-nothing condition must be satisfied at all times in order to guarantee that the balance is preserved. Just like the original algorithm, the extension may get stuck in local optima, and it therefore uses the SA technique as well. The extension provides the same sampling policies as the original JA-BE-JA algorithm, local, random, and hybrid, described in section 2.3.2.

Start up and initialization

The first step of the algorithm is to assign colors to the nodes in the graph. Depending on the implementation, it can use either a centralized or a distributed assignment. The latter is presented in more detail in section 3.3. With a centralized solution it is possible to perform a perfectly balanced assignment, since the total weight of the graph is known. If G_w is not divisible by k, the remaining units of weight are distributed as evenly as possible over the partitions. This means that the largest difference in weight between any two partitions is one unit. In a distributed environment a perfect assignment is highly unlikely; however, it is possible to produce partitions whose sizes differ only by a small amount. When all nodes are initialized, each node starts the iterative swapping part of the algorithm. The pseudocode of wJA-BE-JA is shown in algorithm 2.

Algorithm 2 wJA-BE-JA (hybrid sampling policy).

Requires:
• initiateColors(): initiates the color units.
• generateRequestMessages(partners): creates request messages.
• findPartners(policy): returns the combination that offers the highest degree. Presented in detail below.
• evaluateRequests(messages): evaluates requests and generates a response message. Presented in detail below.
• evaluateResponses(messages): evaluates responses and returns new node states. Presented in detail below.
• requestMailbox/responseMailbox: objects that other nodes can send messages to.

 1: procedure SampleAndSwap(T_0, δ, maxIter, sampleSize)
 2:   initiateColors()
 3:   T_r ← T_0
 4:   for i → maxIter do
 5:     T_r ← T_r − δ
 6:     if T_r < 1 then
 7:       T_r ← 1
 8:     end if
 9:     partners ← findPartners(getNeighbours())
10:     if partners = null then
11:       partners ← findPartners(getRandomSample(sampleSize), T_r)
12:     end if
13:     if partners ≠ null then
14:       send(generateRequestMessages(partners))
15:     end if
16:     responseMessages ← evaluateRequests(requestMailbox)
17:     if responseMessages ≠ null then
18:       send(responseMessages)
19:     end if
20:     allAccept ← evaluateResponses(responseMailbox)
21:     if allAccept = true then
22:       all-or-nothing swap
23:     end if
24:   end for
25: end procedure

Find partners

The function is very similar to the original findPartner function, shown in algorithm 1. The new version, presented in algorithm 3, evaluates combinations of a given weight C_w, instead of evaluating a single candidate node. Most notable is the addition of the loop (lines 10 to 17), where the pre- and post-swap degrees of the combinations are calculated. Recall that the degree is the number of neighboring nodes that have a given color c, and by maximizing it the edge-cut is minimized. The other addition compared to JA-BE-JA is the combinationsAll function (line 2), which computes all possible combinations with a given weight C_w, such that all nodes in the combination have the same color. The combinationsAll function is shown in algorithm 7 (appendix A). After the function has evaluated all combinations in the candidate set, it returns the best degree and the corresponding combination. If no suitable combination is found, the function returns null.
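Algorithm 7 itself is relegated to appendix A and is not reproduced here. Purely as an illustration, the following hedged Scala sketch enumerates single candidates and same-colored pairs whose weights sum to the node's weight, which is consistent with the worst case of s(s − 1)/2 + 1 stated in section 3.1.1 (the real algorithm 7 may differ in detail):

    case class Node(id: Int, color: Int, weight: Int)

    // Sketch of a combinationsAll-style enumeration: singles of matching
    // weight, plus same-colored pairs whose weights sum to self's weight.
    def combinationsAll(self: Node, sample: IndexedSeq[Node]): Seq[Seq[Node]] = {
      val singles = sample.filter(_.weight == self.weight).map(Seq(_))
      val pairs = for {
        i <- sample.indices
        j <- (i + 1) until sample.length
        if sample(i).color == sample(j).color
        if sample(i).weight + sample(j).weight == self.weight
      } yield Seq(sample(i), sample(j))
      singles ++ pairs
    }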

Algorithm 3 findPartner algorithm.

Requires: Any node p in the graph has the following methods:
• combinationsAll(p, nodeSample): returns all combinations of nodes that fulfill C_w = p_w. Shown in algorithm 7 (appendix A).
• getDegree(c): returns the number of p's neighbors that have color c.

 1: function FindPartner(self, nodeSample, T)
 2:   combinations[] ← combinationsAll(self, nodeSample)
 3:   highestDegree ← 0
 4:   highestDegreeIndex ← −1
 5:   i ← 0
 6:   p ← self
 7:   for C ∈ combinations do
 8:     old ← 0
 9:     new ← 0
10:     for q ∈ C do
11:       d_pp ← p.getDegree(p.color)
12:       d_qq ← q.getDegree(q.color)
13:       old ← old + d_pp^α + d_qq^α
14:       d_pq ← p.getDegree(q.color)
15:       d_qp ← q.getDegree(p.color)
16:       new ← new + d_pq^α + d_qp^α
17:     end for
18:     if (new · T > old) ∧ (new > highestDegree) then
19:       highestDegree ← new
20:       highestDegreeIndex ← i
21:     end if
22:     i ← i + 1
23:   end for
24:   if highestDegreeIndex = −1 then
25:     return null
26:   end if
27:   return (combinations(highestDegreeIndex), highestDegree)
28: end function

Generate request messages

If a combination is found, swap request messages are generated, one for each node in the combination. A message consists of a destination address, a source address, the utility of the swap, and optionally the iteration in which the message was sent. The requesting node must store the utility it offered, as it is later compared against offers received from other nodes. It can do so either by storing the information locally or by sending one additional message to itself. If the node does not find any combination, it skips this step and waits for incoming request messages. A flowchart of the message handling is shown in figure 3.1.
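As a sketch of what such a message might look like in the Scala implementation (the field names are hypothetical, not taken from the thesis code):

    // A swap request: destination, source, offered utility, and (optionally)
    // the iteration in which the message was sent.
    case class SwapRequest(dest: Int, source: Int, utility: Double, iteration: Int)

    // The self-message trick: a node records its own offer by addressing a
    // request to itself, avoiding extra local state.
    def selfMessage(nodeId: Int, utility: Double, iter: Int): SwapRequest =
      SwapRequest(nodeId, nodeId, utility, iter)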

Evaluate requests

When a node receives request messages, it evaluates them and selects the offer with the highest utility. A node is only allowed to answer one request in each iteration, as it must swap all of its color units in each swap. If the node's own offer outweighs the highest received bid, it does not send a response message to any other node and instead waits for responses to its own request. If the node did not offer the highest bid, it aborts its own swap request and sends a response message to the highest bidder. This function is presented in algorithm 4.

In order to minimize the number of messages, only accept messages are generated; the algorithm is designed not to require any decline messages.

Algorithm 4 evaluateRequests.

Requires: Any message msg requires the following methods:
• getUtility(): returns the utility offered.
• getSender(): returns the id of the sender.
Requires: Any node p requires the following method:
• getId(): returns the id of a node.

 1: function EvaluateRequests(self, msgArray[])
 2:   highest ← −1
 3:   sender ← −1
 4:   myID ← self.getId()
 5:   // the node's own message is included in msgArray
 6:   for msg ∈ msgArray do
 7:     if msg.getUtility() > highest then
 8:       highest ← msg.getUtility()
 9:       sender ← msg.getSender()
10:     end if
11:   end for
12:   if sender = myID then
13:     return null
14:   end if
15:   return (sender, myID, highest)
16: end function

Evaluate responses

All nodes evaluate their received response messages, unless they have previously aborted their own request in favor of a higher bid. A node does not store a list of the nodes it previously sent messages to. Instead, it checks that the weights of all the responding nodes sum up to its own weight. This means that no decline messages are required, as discussed earlier. If the weight condition is met, an all-or-nothing swap is performed. The all-or-nothing swap is initiated by the evaluating node, which sends a swap message to each node in the combination and then swaps its own color. In a distributed environment, a failure detector is required to guarantee that all nodes swap their colors. The node initiating a swap knows that all nodes in the combination have accepted the request, which means that the other nodes in the combination do not need to check with each other whether they have accepted. The function is presented in algorithm 5.

Algorithm 5 evaluateResponse.

Requires: Any node self requires the following:
• getId(): returns the id of a node.
• getWeight(): returns the weight of a node.
• getColor(): returns the color of a node.
Requires: Any message msg requires the following:
• getId(): returns the id of the sender.
• getWeight(): returns the weight of the sender.
• getColor(): returns the color of the node sending the message.

 1: function EvaluateResponse(self, myOfferedBid, myReceivedBid, msgs)
 2:   if myOfferedBid ≤ myReceivedBid then
 3:     return null
 4:   end if
 5:   myId ← self.getId()
 6:   myWeight ← self.getWeight()
 7:   myColor ← self.getColor()
 8:   swapColor ← −1
 9:   weightSum ← 0
10:   res[] ← new Array[]
11:   for msg ∈ msgs do
12:     if msg.getId() ≠ myId then
13:       weightSum ← weightSum + msg.getWeight()
14:       swapColor ← msg.getColor()
15:     end if
16:   end for
17:   if weightSum = myWeight then
18:     for msg ∈ msgs do
19:       if msg.getId() ≠ myId then
20:         res[] ← res[] + (msg.getId(), msg.getWeight(), myColor)
21:       end if
22:     end for
23:     res[] ← res[] + self
24:     return res
25:   end if
26:   return null
27: end function

Figure 3.1: Message flow for a node p during one iteration

3.1.1 Complexity

It is important that the computational complexity at each node is low in order for the algorithm to scale with the number of nodes. The theoretical worst-case complexity of the combinationsAll function in findPartner is (s(s − 1)/2) + 1, where s is the sample size; for s = 10 that is at most 10 · 9/2 + 1 = 46 evaluated combinations. This growth is almost quadratic and would be problematic for large s; however, s is small (less than 10). As shown in figure 5.19, the average complexity is about half of the theoretical worst case. The complexity of the loop (lines 7-23) in findPartner (algorithm 3) is c · m, where c is the number of combinations and m is the number of nodes in a combination. The larger c is, the smaller the maximum of m is; note that m is not necessarily the same for all combinations. The worst case for c is s(s − 1)/2, and the worst case for m is s. However, as figure 5.19 shows, the average complexity of c · m is less than s. As the average cases for both findPartner and combinationsAll are quite small, the average workload is low as well. The workload is higher than for JA-BE-JA, which only has complexity s for findPartner and has no equivalent of the combinationsAll function. This is expected, however, as more information is required to solve the balanced partitioning problem for graphs having weighted nodes.

3.1.2 False evaluation

When using a random sampling policy, a node calculates its utility of swapping under the assumption that its neighboring nodes will not swap their colors. If two neighboring nodes do this at the same time and both swap their colors, the actual utility differs from the calculated value. This phenomenon will be called false evaluation. It occurs frequently; however, if the outcome is bad, a corrective swap will probably occur in the next iteration. That swap may itself involve false evaluation. However, the frequency and impact of false evaluation decrease over time, as the graph converges and the swap rate drops.

3.2 Implementation

The objective of this thesis is to develop a distributed algorithm in the Spark framework that solves the balanced partitioning problem for graphs having weighted nodes. A distributed implementation was developed; however, it had extremely poor performance, and running any kind of test with it was infeasible. The poor performance was due to the slow communication between random nodes. Therefore, another implementation was developed that uses a mailbox system to overcome the slow communication. This effectively means that the solution is not distributed; however, it still only requires local knowledge and is amenable to parallel execution.

Since the solution is implemented in Spark, it is designed around RDDs and a driver program. The graph is stored in two RDDs, one for the nodes and one for the edges; the weights are stored within the nodes. The driver allows for centralized initialization of the graph, which makes perfect balancing possible. The driver invokes all functions to be performed by the workers, collects and handles the results from the workers, and updates the state of the graph. The workers carry out the main workload of the algorithm, which is the node-centric evaluation of combinations, as well as the generation and evaluation of messages. Messages returned from the workers to the driver are inserted into a mailbox according to recipient. Spark requires all nodes that do not generate messages to return a default message to the driver instead; these messages are filtered out by the driver and not inserted into the mailbox. During the generation of request messages a node also generates and sends a message to itself; this way it later knows the utility it offered without storing the information locally. Storing it locally would require updating each node that made an offer, which is more expensive than sending one additional message to that node. If all nodes accept a swap proposal, the initiating node sends the driver information about the nodes to be updated, along with their new post-swap states. As each node can only participate in one swap per iteration, no collisions can occur when updating the state of a node.
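As an illustration of the mailbox step (a sketch with a hypothetical Msg type, not the thesis code), grouping the collected messages by recipient id is sufficient:

    case class Msg(dest: Int, source: Int, utility: Double)

    // fillMailbox, as in algorithm 6: each node later reads exactly the
    // bucket keyed by its own id.
    def fillMailbox(msgs: Seq[Msg]): Map[Int, Seq[Msg]] = msgs.groupBy(_.dest)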

A Scala version was also developed by replacing all Spark data structures and functions with ordinary Scala equivalents. The Scala version produces the same results given the same input and seed, but with better runtimes. Common pseudocode for both driver implementations is shown in algorithm 6, and the input parameters for both implementations are shown in table 3.1.

Table 3.1: Input parameters

Parameter | Description
k | number of partitions that the graph shall be divided into.
s | maximum sample size returned by the getNonOverlappingSample function.
Sampling policy | how sampling shall be executed.
Seed | initial global random generator seed.
T_0 | initial temperature for the simulated annealing (SA).
δ | cooling speed of T.
α | the α parameter of the swapping condition.

Algorithm 6 Driver algorithm.

Requires: The driver requires:
• initiateColors(seed): initiates the graph.
• updateGraph(vertices): updates the current state of the graph.
• generateRequestMessages(combinations): creates request messages.
• fillMailbox(messages): inserts messages in the mailbox according to recipient id.
Requires: Any node p requires:
• findPartners(size, T_r): returns the combination that offers the highest degree.
• evaluateResponses(messages): evaluates responses and returns new node states.
• evaluateRequests(messages): evaluates requests and generates a response message.
Requires: Any graph G requires:
• getVertices(): returns the vertices in the graph.

 1: procedure Driver(G, T_0, δ, maxIter, sampleSize, seed)
 2:   initiateColors(seed)
 3:   T_r ← T_0
 4:   for i → maxIter do
 5:     v ← G.getVertices()
 6:     T_r ← T_r − δ
 7:     if T_r < 1 then
 8:       T_r ← 1
 9:     end if
10:     requestMsgs ← generateRequestMessages(v.map(findPartners(sampleSize, T_r)))
11:     requestMailbox ← fillMailbox(requestMsgs)
12:     evaluatedMsgs ← v.map(evaluateRequests(requestMailbox))¹
13:     responseMailbox ← fillMailbox(evaluatedMsgs)
14:     updatedVertices ← v.flatMap(evaluateResponses(responseMailbox))²
15:     v ← v ∖ updatedVertices
16:     v ← v + updatedVertices
17:     G ← updateGraph(v)
18:   end for
19:   return G
20: end procedure

¹ map(f): applies a function f to each element of a collection.
² flatMap(f): same as map, but flattens the results.

3.3 Distributed initialization

If the algorithm is to be deployed in an environment where centralized perfect initialization cannot be performed, a distributed initialization is possible. In this version each node initializes its units of color using a uniform random initiator. Each node then iteratively swaps individual color units until all units in the node have the same color, or until a predefined maximum iteration threshold is reached. The node is still allowed to swap units when it is internally balanced; however, at that point it is only allowed to swap all of them at the same time for a single color. Any node that still has more than one color at the time of termination assigns all of its units to the color it has most of.
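A small Scala sketch of the random unit initialization described above (illustrative names; the unit-swapping rounds themselves are omitted):

    import scala.util.Random

    // A node of weight w holds w color units, each drawn uniformly
    // from the k available colors.
    def initColorUnits(weight: Int, k: Int, rng: Random): Array[Int] =
      Array.fill(weight)(rng.nextInt(k))

    // Fallback at termination: assign all units to the node's majority color.
    def majorityColor(units: Array[Int]): Int =
      units.groupBy(identity).maxBy(_._2.length)._1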

During initial development a serialized version of this initialization was tested. However, as Spark gives the opportunity to use central initialization, it was no longer required and was therefore not developed further. The initial testing showed consistently good results using the hybrid sampling policy, with a maximum deviation of 0.81% from the optimal P_w(i) for a graph of 22499 nodes and k = 4. Since this initialization only requires local knowledge and the hybrid swapping policy, it should be implementable in a distributed environment.

Chapter 4

Evaluation methods

This chapter presents how the tests are conducted and how performance is measured. It further presents the test graphs, the input parameters, and the reasoning behind selecting them. Section 4.1 describes how the tests are conducted and what evaluation metrics are used. Section 4.2 presents the graphs selected for the evaluation, the reasoning behind the choices, and the modifications applied to them. Section 4.3 presents the test environment and the software versions used.

4.1 What to measure

The most important metric of the evaluation is the achieved edge-cut. The mean value each parameter setting produces is of interest, as there is a high probability that those settings will reproduce similar results. The minimum edge-cut is also of interest, as it is the best observed value; achieving it, however, depends not only on the parameters but also on the global random generator seed. The edge-cut is presented relative to the initial edge-cut achieved by random initialization. Furthermore, the workload of the algorithm is also measured, in order to capture its average complexity.
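For concreteness, a minimal Scala sketch of the metric (not the thesis code): the edge-cut counts edges whose endpoints carry different colors, and results are reported relative to the initial cut:

    // Edge-cut: number of cross-partition edges for a given coloring.
    def edgeCut(edges: Seq[(Int, Int)], color: Map[Int, Int]): Int =
      edges.count { case (u, v) => color(u) != color(v) }

    // Relative edge-cut in percent, as reported in chapter 5.
    def relativeEdgeCut(cut: Int, initialCut: Int): Double =
      100.0 * cut / initialCut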

The efficiency of the algorithm is captured by measuring the total number of swaps required to achieve the final result. Another metric that indicates the efficiency of the algorithm is data migration, that is, how many nodes have ended up in a different partition than their initial one.

The global random generator seed is very important to the algorithm, as it decides the sample returned from random sampling. Since the seed is central to the execution, each test is performed with ten different global seeds. When testing the effect of a parameter, all other parameters are set to their respective default values. All parameters affect each other to some degree; however, only one parameter is changed at a time, in order to get a meaningful analysis of the impact of that individual parameter. The default parameters are chosen based both on behavior observed during development and on the results of the JA-BE-JA paper [1]. Note that the results from JA-BE-JA are not for the same problem type, but for a similar problem. A short summary of the findings in the JA-BE-JA paper can be found in section 2.3.3.

The first test investigates whether there is a sample size, for each graph in the test set, that yields good performance, with all other parameters set to their default values. The sample size that yields the best result for each graph is used in the subsequent tests. The sampling policy is also crucial to the results, as a good result is only obtainable if each node finds good partners to swap with. The second test investigates how the sampling policies affect the execution. Based on the characteristics of JA-BE-JA [1], the local sampling policy is excluded, as it does not yield good results. Further, the impact of the SA is investigated by tuning its parameters. These parameters are strongly related, as T_0 determines the number of iterations during which the SA is active. The impact of the α parameter, as well as the performance with different values of k, are also investigated.

The parameter values chosen for evaluation are presented in table 4.1. The default parameters chosen are: k = 4, T_0 = 2.0, δ = 0.003, α = 1, sampling policy: random. These are used for the scaling and comparison tests as well.

Table 4.1: Input parameters selected for evaluation

Parameter | Value
k | {4, 8, 16, 32, 64}
s | 2 - 10
Seed | 0 - 9
Sampling policy | Random, Hybrid
α | {1, 2, 3}
T_0 | {1.0, 2.0, 3.0, 4.0}
δ | {0.001, 0.003, 0.01, 0.03, 0.1, 0.3}

4.2 Graph selection

Graphs from categories such as social networks and Internet networks are of interest, as their nature fits this problem. Recall that a node in the graph can model a social network user and the edges its connections. Each partition can be seen as a server storing users. The weight of a node can represent the amount of data associated with the user, e.g., meta-data, photos, videos, etc. Since the distribution of such data across users is highly skewed, a balanced partitioning that considers node weights becomes critical.

4.2.1 Stanford Large Network Dataset Collection

The collection [23] consists of more than 50 graphs from various domains, such as social networks, web graphs, road networks, and Internet networks. It contains suitable undirected social network graphs sampled from sites such as Facebook and Brightkite; both of these graphs are used in the tests and are referred to as Facebook I and Brightkite.

4.2.2 Facebook and Twitter

An anonymized Facebook graph from the work of Viswanath et al. [24] is obtainable on request at their webpage [25]. The Facebook graph was created by breadth-first-search crawling of open profiles, starting from a single user in New Orleans. This graph is referred to as Facebook II.

The graph sampled from Twitter, produced as part of the work by Galuba et al. [26], is well suited to the problem. The graph was created by querying Twitter's public API for a period of 300 hours. The graph was originally directed but, for the experiments, was made undirected by dropping all edge directions.

4.2.3 Initialization of weights

The graphs obtained for testing are unweighted and therefore not directly applicable to the problem. In the absence of real-world data for graphs with weighted vertices, weights were synthetically assigned to the vertices with the following distribution: 50% of the nodes have w = 1, 25% have w = 2, 12.5% have w = 3, and the remaining 12.5% have w = 4. The distribution was chosen as a long-tail distribution [27], as it is reasonable to believe that there are more casual, lightweight users than heavy users.
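A small Scala sketch of this synthetic assignment (illustrative, seeded for reproducibility):

    import scala.util.Random

    // 50% of nodes get w = 1, 25% get w = 2, 12.5% get w = 3, 12.5% get w = 4.
    def assignWeight(rng: Random): Int = {
      val d = rng.nextDouble()
      if (d < 0.50) 1
      else if (d < 0.75) 2
      else if (d < 0.875) 3
      else 4
    }

    val rng = new Random(0)
    val weights: Map[Int, Int] =
      (0 until 1000).map(id => id -> assignWeight(rng)).toMap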

4.2.4 Summary of test graphs

Table 4.2: Test graphs

Graph | Vertices | Edges | Abbreviation | Type | Source
Twitter | 2731 | 329258 | T | Social network | [26]
Facebook I | 4039 | 88234 | FB1 | Social network | [23]
Brightkite | 58228 | 214078 | BK | Social network | [23]
Facebook II | 63731 | 817090 | FB2 | Social network | [24]

4.3 Test environment

The tests are conducted on a virtual machine with 4,096 MB of RAM and two virtual dual-core Intel® Core™ 2 Duo T7700 CPUs. Each core is clocked at 2.4 GHz. The virtual machine runs Ubuntu 14.04 on kernel version 3.13.0-24-generic. The Spark tests are made with the newest stable release of Spark, which at the time of testing was 1.1.0. The Spark implementation is written against the Scala version of the API, using Scala 2.10.4. The Scala implementation is used for the evaluation; recall that the two implementations produce the same results given the same input and seed. Scala 2.10.4 is used for the evaluation as well. For the comparison tests, version 5.1.0 of METIS is used.

Chapter 5

Results

This chapter presents the results of all tests in figures 5.1 to 5.18; each figure is coupled with a corresponding table (tables 5.1 to 5.18). Further, figure 5.19 shows the measured workload of the algorithm.

Subplot (a) in figures 5.1 to 5.17 shows the achieved edge-cut in percent, relative to the initial state. Each marker represents an observed value; the mean value and a 95% confidence interval are also shown. An edge-cut of 100% means that the result has not improved at all over the initial random partitioning. An edge-cut of 0% would mean that the edge-cut is zero, which is not possible when k > 1. Subplot (b) in figures 5.1 to 5.17 shows the edge-cut, in percent, over the iterations, for the seed that was closest to the mean value. Subplot (c) in figures 5.1 to 5.17 shows the cumulative number of swaps, over the iterations, for the seed closest to the mean value. The vertical line in subplots (b) and (c) marks where the SA stops being active.

Based on the results in section 5.1, an appropriate sample size is chosen for the remainder of the tests, for each graph. The sample size test for Brightkite was done with 20 seeds in order to get a conclusive result.

The best observed mean edge-cut of each test is marked in bold in all subsequent tables in this chapter. Additionally, the sample sizes chosen for the remainder of the tests are marked in bold in tables 5.1, 5.2, 5.3, and 5.4. The lowest observed edge-cut is also reported, as it is the best observed result.

Section 5.6 presents both the performance scaling as k increases and a comparison with METIS. All percentages in figure 5.18 and table 5.18 are relative to the initial state of wJA-BE-JA, which is equivalent to a random partitioning. The best performance for each test is marked in bold in table 5.18.

Section 5.7 presents the measured workload of the algorithm. Figure 5.19 shows histograms of how often the two functions combinationsAll and findPartner run for a certain number of iterations. The graphs show the measured workload for the seed that was closest to the mean value in the sample size test, as the parameters are the same. The dashed line shows the measured average complexity, while the solid line shows the theoretical worst-case complexity.


5.1 Effect of sample size

[Figure 5.1: Effect of sample size Twitter. (a) final edge-cut; (b) edge-cut over iterations; (c) cumulative sum of swaps.]

Sample size | avg. Edge-cut (%) | avg. Edge-cut | avg. nr. of swaps | avg. Data migration (%) | min Edge-cut (%) | min Edge-cut

2 32.24 79744.4 336470.7 75.20 31.53 78002

3 31.59 78138.8 365533.0 74.73 31.31 77448

4 31.73 78489.0 375430.9 74.84 31.05 76808

5 31.86 78801.8 383673.7 74.75 31.48 77872

6 32.18 79604.2 384362.6 75.06 31.71 78450

7 32.31 79927.0 385542.1 74.93 31.58 78120

8 32.33 79978.2 382444.8 75.00 31.90 78912

9 32.12 79464.4 375307.1 75.16 31.68 78372

10 32.42 80185.8 370463.3 74.97 31.98 79110

Table 5.1: Resulting values of sample size test Twitter


[Figure 5.2: Effect of sample size Facebook I. (a) final edge-cut; (b) edge-cut over iterations; (c) cumulative sum of swaps.]

Sample size | avg. Edge-cut (%) | avg. Edge-cut | avg. nr. of swaps | avg. Data migration (%) | min Edge-cut (%) | min Edge-cut

2 5.45 3603.9 389016.6 74.96 4.34 2868

3 5.06 3346.7 467151.4 74.69 3.19 2108

4 5.33 3522.3 518499.2 74.59 3.32 2196

5 5.13 3394.6 554019.9 75.01 4.08 2700

6 4.91 3247.3 574760.5 74.96 3.04 2013

7 4.59 3036.6 583135.3 74.62 3.03 2005

8 5.20 3436.7 585031.9 74.83 3.55 2349

9 4.71 3113.6 582055.1 74.93 3.10 2050

10 4.63 3062.6 576281.8 75.02 2.82 1865

Table 5.2: Resulting values of sample size test Facebook I


[Figure 5.3: Effect of sample size Brightkite. (a) final edge-cut; (b) edge-cut over iterations; (c) cumulative sum of swaps.]

Sample size | avg. Edge-cut (%) | avg. Edge-cut | avg. nr. of swaps | avg. Data migration (%) | min Edge-cut (%) | min Edge-cut

2 21.91 35232.9 5544497.85 74.89 20.46 32904

3 21.92 35256.6 6780750.25 74.90 20.72 33327

4 21.75 34980.85 7569092.20 74.90 20.52 32996

5 21.72 34926.7 7954241.95 74.77 20.32 32686

6 21.57 34696.45 8096611.85 74.84 20.48 32940

7 21.84 35121.45 8083638.65 74.82 20.78 33414

8 22.06 35476.75 7987246.10 74.84 20.69 33277

9 22.23 35746.15 7853532.75 74.80 21.98 33735

10 22.55 36261.65 7693189.10 74.84 21.09 33957

Table 5.3: Resulting values of sample size test Brightkite.


[Figure 5.4: Effect of sample size Facebook II. (a) final edge-cut; (b) edge-cut over iterations; (c) cumulative sum of swaps.]

Sample size | avg. Edge-cut (%) | avg. Edge-cut | avg. nr. of swaps | avg. Data migration (%) | min Edge-cut (%) | min Edge-cut

2 19.97 122848.9 6585271.6 75.00 18.92 116391

3 19.11 117585.6 7676907.5 75.01 17.89 110027

4 18.66 114808.1 8376006.3 74.95 17.81 109551

5 18.90 116234.4 8755634.5 74.98 17.72 109004

6 19.06 117266.9 8917889.8 74.98 17.98 110628

7¹ 18.69 114995.8 8914624.1 74.89 18.23 112156

8 18.99 116805.2 8818279.0 74.93 18.19 111925

9 18.65 114737.0 8689378.8 74.89 17.63 108466

10 18.89 116210.7 8540164.6 74.89 17.61 108320

Table 5.4: Resulting values of sample size test Facebook II

¹ Chosen even though it does not achieve the lowest mean edge-cut; its result is only slightly higher, with better confidence.


5.2 Effect of sampling policy

[Figure 5.5: Effect of sampling policy. (a) final edge-cut; (b) edge-cut over iterations; (c) cumulative sum of swaps.]

Graph & Policy | avg. Edge-cut (%) | avg. Edge-cut | avg. nr. of swaps | avg. Data migration (%) | min Edge-cut (%) | min Edge-cut

T (H) 33.24 82222.8 94989.9 74.63 32.95 81518

T (R) 31.59 78138.8 365533.0 74.73 31.31 77448

FB1 (H) 10.11 6683.4 244176.1 73.02 9.10 6020

FB1 (R) 4.59 3036.6 583135.3 74.62 3.03 2005

BK (H) 29.02 46670.8 3973304.7 72.9 26.32 42337

BK (R) 21.67 34850.3 8102061.1 74.82 20.48 32940

FB2 (H) 23.79 146320.1 3129735.6 74.51 22.16 136321

FB2 (R) 18.69 114995.8 8914624.1 74.89 18.23 112156

Table 5.5: Resulting values of sampling policy test


5.3 Effect of δ parameter

[Figure 5.6: Effect of δ Twitter. (a) final edge-cut; (b) edge-cut over iterations; (c) cumulative sum of swaps.]

δ | avg. Edge-cut (%) | avg. Edge-cut | avg. nr. of swaps | avg. Data migration (%) | min Edge-cut (%) | min Edge-cut

0.001 31.99 79126.0 1055995.2 74.91 31.46 77818

0.003 31.59 78138.8 365533.0 74.73 31.31 77448

0.01 31.91 78944.4 114130.8 74.95 31.09 76904

0.03 32.11 79423.4 43380.9 74.99 31.34 77518

0.1 32.49 80359.0 18314.8 74.79 31.97 79096

0.3 32.72 80934.2 12157.7 74.79 32.38 80088

Table 5.6: Resulting values of δ test Twitter


[Figure 5.7: Effect of δ Facebook I. (a) final edge-cut; (b) edge-cut over iterations; (c) cumulative sum of swaps.]

δ | avg. Edge-cut (%) | avg. Edge-cut | avg. nr. of swaps | avg. Data migration (%) | min Edge-cut (%) | min Edge-cut

0.001 4.54 3005.2 1752559.6 74.66 2.93 1936

0.003 4.59 3036.6 583135.3 74.62 3.03 2005

0.01 4.88 3229.9 173872.6 74.33 3.62 2395

0.03 5.55 3672.6 60355.2 74.79 3.31 2187

0.1 6.90 4561.0 22836.2 74.39 5.16 3413

0.3 7.27 4804.7 15088.9 73.85 5.54 3661

Table 5.7: Resulting values of δ test Facebook I


[Figure 5.8: Effect of δ Brightkite. (a) final edge-cut; (b) edge-cut over iterations; (c) cumulative sum of swaps.]

δ | avg. Edge-cut (%) | avg. Edge-cut | avg. nr. of swaps | avg. Data migration (%) | min Edge-cut (%) | min Edge-cut

0.001 21.59 34716.1 24347379.3 74.79 20.24 32555

0.003 21.67 34850.3 8102061.1 74.82 20.48 32940

0.01 22.38 35994.8 2412286.6 74.78 21.00 33773

0.03 23.78 38246.4 823107.5 74.56 22.06 35479

0.1 26.23 42188.0 291926.7 73.74 24.33 39129

0.3 27.56 44317.5 188419.3 72.85 25.80 41497

Table 5.8: Resulting values of δ test Brightkite


[Figure 5.9: Effect of δ Facebook II. (a) final edge-cut; (b) edge-cut over iterations; (c) cumulative sum of swaps.]

δ | avg. Edge-cut (%) | avg. Edge-cut | avg. nr. of swaps | avg. Data migration (%) | min Edge-cut (%) | min Edge-cut

0.001 18.64 114650.8 26892292.7 74.89 17.71 108970

0.003 18.69 114995.8 8914624.1 74.89 18.23 112156

0.01 19.20 118087.8 2631432.3 74.92 18.18 111841

0.03 20.23 124424.3 904386.2 75.01 18.87 116095

0.1 21.74 133733.5 370236.0 74.73 19.36 119108

0.3 22.51 138440.7 272352.3 74.64 20.95 128872

Table 5.9: Resulting values of δ test Facebook II


5.4 Effect of T_0 parameter

[Figure 5.10: Effect of T_0 Twitter. (a) final edge-cut; (b) edge-cut over iterations; (c) cumulative sum of swaps.]

T_0 | avg. Edge-cut (%) | avg. Edge-cut | avg. nr. of swaps | avg. Data migration (%) | min Edge-cut (%) | min Edge-cut

1.0 32.49 80381.6 9036.9 75.05 32.01 79194

2.0 31.59 78138.8 365533.0 74.73 31.31 77448

3.0 31.62 78229.0 780530.7 74.84 31.46 77816

4.0 32.86 81290.6 1198811.9 74.94 31.69 78380

Table 5.10: Resulting values of T_0 test Twitter
