
…discussion on the results and methodology used.

           Strong link   Weak link   Link between clusters
downlink   10 Mb/s       384 Kb/s    100 Gb/s
uplink     10 Mb/s       64 Kb/s     100 Gb/s
delay      5 ms          115 ms      50 ms

Table 4.1: Properties of access links in simulation

                                    Period [s]
Neighbor ping period                4
Leafset maintenance                 5
Local routing table maintenance     5
Global routing table maintenance    10
Data storing maintenance            10

Table 4.2: Management traffic periods in seconds

longer than the time it takes them to perform the requests they want from the DHT, due to pricing models and battery limitations. With flat-rate pricing, users may be more willing to keep nodes online for longer periods, but we have not taken that into account in our scenarios; hence the quite heavy churn.

In addition to bandwidth, delay, and other network-related properties, each node in a DHT also has a number of properties directly related to the DHT service, e.g., timeouts and how often management data is sent.

We used the parameters described in table 4.2 for both our simulations and real experiments.
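As an illustration, the management periods of table 4.2 could be collected in a configuration structure like the following sketch; the field names are hypothetical and do not correspond to identifiers in our implementation.

    /* Illustrative configuration mirroring table 4.2; all values in seconds.
       The field names are hypothetical. */
    struct dht_mgmt_config {
        int neighbor_ping_period;       /* 4  */
        int leafset_maintenance;        /* 5  */
        int local_rt_maintenance;       /* 5  */
        int global_rt_maintenance;      /* 10 */
        int data_storing_maintenance;   /* 10 */
    };

    static const struct dht_mgmt_config default_mgmt_config = {
        .neighbor_ping_period     = 4,
        .leafset_maintenance      = 5,
        .local_rt_maintenance     = 5,
        .global_rt_maintenance    = 10,
        .data_storing_maintenance = 10,
    };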

4.3.1 Simulation setup

We have performed simulations using our own NS-2 implementation of a Bamboo-like DHT. The details of the implementation and our approach to simulations can be found in a technical report [9].

To allow comparison with our PlanetLab experiments, described later, we simulated networks of 170 nodes, as that was the number of nodes we could use on PlanetLab. Each run lasts 10 minutes, with the ratio of weak nodes ranging from 20% to 50%. Ten strong nodes were used as bootstrap nodes, to which new nodes connected when joining the network. Nodes in the DHT were evenly distributed over a physical network modeled as three clusters of nodes connected by very high bandwidth links with high delay (figure 4.1). The clusters represent different continents, while the links connecting them have high delays and high bandwidth to model transcontinental backbone connections. For the simulations, we limited the bandwidth and delay of access links according to table 4.1; these values were chosen based on results from a previous study of a commercially available 3G service [5].

Figure 4.1: An NS-2 network layout with 3 clusters, where the clusters model continents.

To limit the time needed to simulate, the overlay network is built offline. This means that we let all nodes have complete knowledge about which nodes participate in the DHT, so that all nodes start the simulation with routing tables and leafsets that are as populated as possible; the only restriction is that nodes cannot optimize for network latencies, as they have not yet measured them. Starting the network in such an optimal state is of course unrealistic, but since simulations have shown that a network started in an ordinary fashion eventually reaches a similar stable state, we believe it to be an acceptable method. With our approach, Bamboo stabilizes within 80 seconds of starting. We want to reduce the stabilization time as much as possible, since we later filter out that period; this optimization thus yields a larger amount of useful data from the same amount of simulated time. As the time scales we can simulate are limited, this makes a significant difference.

After the DHT has stabilized, each node performs a GET operation every 10 seconds to measure the performance of the network. For each GET, we measure whether the operation succeeds and, if so, how long it takes. In the simulations, each node holds about 10 keys that other nodes can request through GET operations. This is fewer keys than in the PlanetLab experiments, where we could insert more keys per node.
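The measurement loop can be sketched as follows; dht_get() and the key selection are hypothetical placeholders, as the exact client API is not described here.

    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* Hypothetical DHT client call: returns 0 on success, -1 on failure. */
    extern int dht_get(const char *key);

    static double now_seconds(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    void measurement_loop(const char **keys, int nkeys) {
        for (int i = 0; ; i = (i + 1) % nkeys) {
            double start = now_seconds();
            int ok = dht_get(keys[i]);          /* one GET per period */
            double latency = now_seconds() - start;
            printf("GET %s %s %.3f s\n",
                   keys[i], ok == 0 ? "ok" : "fail", latency);
            sleep(10);                          /* 10 s between operations */
        }
    }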

4.3.2 PlanetLab setup

The distributed testbed PlanetLab [2] has become a popular approach to evaluating new distributed services and systems. PlanetLab is a collection of 700+ Linux machines spread over the world, on which researchers can get accounts to run application-level experiments.

Unfortunately, it is hard to create the heterogeneous network environments we want to study with PlanetLab nodes, as users do not have the privileges needed to modify the network stack. For this purpose, we have developed a lightweight connectivity emulation library called Dtour.

Figure 4.2: Dtour design

Dtour

The Dtour design is based on our need to filter an unmodified application in user space to mimic network dynamics as perceived by the application. That need is met by implementing a layer between the application and the network stack (figure 4.2). All system calls that involve outgoing network traffic go through Dtour, where the traffic is filtered. Dtour might drop packets due to, for example, emulated bandwidth limitations or loss models.

The design of Dtour is deliberately kept simple. All functionality is implemented in a dynamically loaded library, without any active threads or daemons. This means that we do all filtering and state updates when a library function is called. If we instead let a separate thread handle the filtering, we could do state updates continuously, but it would increase the complexity of Dtour. Some operating systems offer the possibility to have shared libraries loaded before the normal system libraries. The library functions in libdtour.so are then the entry points into the Dtour system.

Currently, only outgoing traffic is filtered in Dtour, so the strong nodes filter traffic destined for weak nodes. The path from a strong to a weak node is limited to 384 kbit/s, and all outgoing traffic from a weak node goes through a 64 kbit/s bandwidth limiter. We have, however, added a static filter connected to the read() function which logs the amount of received traffic.
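Such a bandwidth limiter is commonly realized as a token bucket that accrues credit at the emulated link rate and drops packets when the credit is exhausted. The following sketch shows the general technique for the 64 kbit/s weak uplink; it illustrates the principle and is not Dtour's actual code.

    #include <stddef.h>
    #include <sys/time.h>

    /* Token bucket: rate_bps bits of credit accrue per second, capped
       at burst_bits. */
    struct token_bucket {
        double rate_bps;    /* e.g. 64000.0 for the weak uplink */
        double burst_bits;  /* maximum accumulated credit       */
        double tokens;      /* current credit in bits           */
        double last;        /* time of last update, in seconds  */
    };

    static double now_s(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    /* Returns 1 if a packet of len bytes may pass, 0 if it is dropped. */
    int tb_allow(struct token_bucket *tb, size_t len) {
        double t = now_s();
        tb->tokens += (t - tb->last) * tb->rate_bps;
        if (tb->tokens > tb->burst_bits)
            tb->tokens = tb->burst_bits;
        tb->last = t;

        double need = 8.0 * (double)len;  /* bytes -> bits */
        if (tb->tokens >= need) {
            tb->tokens -= need;
            return 1;
        }
        return 0;  /* not enough credit: drop, emulating the slow link */
    }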

Events

When using Dtour, network dynamics are expressed as events. A typical event might be: at time t, add a path to the path set, initialized as down. The time can be expressed either as global time or as time relative to when the scenario is started; which kind of time is used to describe events is configurable at runtime. The IP and port numbers can be set to 0, which matches all values.
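Conceptually, an event is thus a timestamped update to the state of a path, with 0-valued IP or port fields acting as wildcards. A minimal sketch of such a representation follows; the structure and names are hypothetical, as Dtour's internal data structures are not described here.

    #include <stdint.h>

    /* Hypothetical event record: field names are illustrative. */
    struct dtour_event {
        double   time;      /* global or scenario-relative time */
        uint32_t src_ip;    /* 0 matches any address */
        uint16_t src_port;  /* 0 matches any port    */
        uint32_t dst_ip;
        uint16_t dst_port;
        int      link_up;   /* 0 = path down, 1 = path up */
    };

    /* A field of 0 in the event acts as a wildcard when matching. */
    static int field_matches(uint32_t rule, uint32_t actual) {
        return rule == 0 || rule == actual;
    }

    int event_matches(const struct dtour_event *e,
                      uint32_t sip, uint16_t sport,
                      uint32_t dip, uint16_t dport) {
        return field_matches(e->src_ip, sip)
            && field_matches(e->src_port, sport)
            && field_matches(e->dst_ip, dip)
            && field_matches(e->dst_port, dport);
    }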

Dtour can react to events in two modes. In the first mode, you provide a scenario file with network events, which is loaded when the libdtour.so library is initialized. The event file is parsed, and the events are stored sorted per path to minimize lookup time when filtering.

The second mode is to use Dtour interactively. If this mode is enabled, Dtour polls a named pipe and applies events as they are read from the pipe. Events written to the pipe have the same format as in the scenario file, apart from not having a timestamp. The two modes can be combined by providing a scenario file and later, or in parallel, modifying the links interactively.
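Since Dtour has no threads, the named pipe must be polled opportunistically from the overridden library functions. A minimal sketch of such polling, assuming a hypothetical pipe path and a parse_event() helper:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #define DTOUR_PIPE "/tmp/dtour.pipe"   /* hypothetical path */

    /* Hypothetical parser for one event line in the scenario format. */
    extern void parse_event(const char *line);

    /* Called from the overridden library functions: reads any pending
       events without blocking the application. */
    void poll_pipe(void) {
        static int fd = -1;
        char buf[512];

        if (fd < 0) {
            fd = open(DTOUR_PIPE, O_RDONLY | O_NONBLOCK);
            if (fd < 0)
                return;  /* pipe not created yet; try again next call */
        }
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            /* A full implementation would buffer partial lines. */
            for (char *line = strtok(buf, "\n"); line;
                 line = strtok(NULL, "\n"))
                parse_event(line);
        }
    }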

Function overriding

When the rule set is loaded, Dtour opens the actual system libraries using dlopen() to be able to reach the functions to be overridden. Any number of system calls could be overridden by Dtour, but we currently override only the functions that are used to send data.
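On Linux this pattern is typically realized by preloading the library and resolving the real functions with dlopen()/dlsym(). The sketch below shows the idea for send(); dtour_allow() is a hypothetical stand-in for the filtering logic, and the libc soname is an assumption.

    #include <dlfcn.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Hypothetical filtering hook: returns nonzero if the data may pass. */
    extern int dtour_allow(int sockfd, size_t len);

    static ssize_t (*real_send)(int, const void *, size_t, int);

    /* Overridden send(): resolve the real function lazily, filter, forward. */
    ssize_t send(int sockfd, const void *buf, size_t len, int flags) {
        if (real_send == NULL) {
            void *libc = dlopen("libc.so.6", RTLD_LAZY);  /* Linux soname assumed */
            if (libc != NULL)
                real_send = (ssize_t (*)(int, const void *, size_t, int))
                                dlsym(libc, "send");
            if (real_send == NULL)
                return -1;  /* could not resolve the real function */
        }

        if (!dtour_allow(sockfd, len))
            return (ssize_t)len;  /* drop silently: pretend the data was sent */

        return real_send(sockfd, buf, len, flags);
    }

With the library preloaded (for example via the LD_PRELOAD environment variable), the application's calls to send() resolve to this wrapper without any modification or recompilation of the application.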

We have considered overriding read(), recvfrom(), etc., but have not yet implemented it. We believe it would be harder to remain completely transparent to the application if we altered how reads are done; we would probably also have to alter the behavior of select() to handle incoming data without returning to the application.

When we simulated the same scenario, we could add extra delay on the weak nodes' access links, but that possibility is currently unavailable to nodes on PlanetLab. While we do not churn strong nodes ourselves, they experience a low churn rate caused by the dynamics of the Internet combined with occasional crashes of PlanetLab nodes. Due to the dynamics of PlanetLab and the rollout of PlanetLab v4, we limited the size of the DHT network to about 170 nodes, even though in simulation we could run up to 500 nodes on short time scales.
