
One-Way Transit Time Measurements


Blekinge Institute of Technology Research Report 2004:06

One-Way Transit Time Measurements

Doru Constantinescu, Patrik Carlsson, Adrian Popescu

Department of Telecommunication Systems, School of Engineering, Blekinge Institute of Technology

One-Way Transit Time Measurements

Doru Constantinescu (dco@bth.se), Patrik Carlsson (pca@bth.se), Adrian Popescu (apo@bth.se)
Dept. of Telecommunication Systems, School of Engineering, Blekinge Institute of Technology, 371 79 Karlskrona, Sweden

Abstract

The main goal of this technical report is to further the understanding of the delay process in best-effort Internet for both non-congested and congested networks. A dedicated measurement system for delay measurements in IP routers is reported, which follows the specifications of IETF RFC 2679. The system uses both passive measurements and active probing. Dedicated application-layer software is used to generate UDP traffic with TCP-like characteristics, and Pareto traffic models are used to generate self-similar traffic on the link. The reported results take the form of several important statistics regarding the processing delay of a router, router delay for a single data flow, router delay for several data flows, as well as end-to-end delay for a chain of routers. We confirm earlier reports that the delay in IP routers is generally influenced by traffic characteristics, link conditions and, to some extent, details of the hardware implementation and the IOS release. The delay in IP routers usually shows heavy-tailed characteristics. It may also occasionally show extreme values, which are due to improper functioning of the routers.

Keywords: traffic measurements, one-way delay, traffic capturing software, IP routers, traffic self-similarity.

1 Introduction

As the Internet emerges as the backbone of worldwide business and commercial activities, end-to-end (e2e) Quality of Service (QoS) for data transfer becomes a significant factor. End-to-end delay is a key metric in evaluating the performance of networks as well as the quality of service perceived by end users. Today, network capacities in the Internet are deliberately overengineered so that the packet loss rate is very low.
Throughput maximization can be achieved by minimizing the e2e delay. However, given the heterogeneity of the network and the fact that the overengineering solution is not adopted everywhere, especially not by backbone operators in developing countries, the question arises as to how delay impacts e2e performance. Several important parameters may impact the e2e delay performance on a link, e.g., traffic self-similarity, routing flaps and link utilization [10, 12]. Several papers report on e2e delay performance, considering both Round-Trip Time (RTT) and One-Way Transit Time (OWTT) [3, 6, 12, 13]. Traffic measurements based on passive measurements and/or active probing are used. Generally, RTT measurements are simpler to perform, but their analysis is more complex due to problems related to clock synchronization, packet timestamping, protocol complexity, asymmetries between the forward and return paths, as well as path variations [6]. Difficulties in measuring queueing delays in operational routers and switches further complicate the picture [12]. As a general comment, it has been observed that both RTT and OWTT show large "peak-to-peak" variations, in the sense that maximum delays far exceed minimum delays. Further, it has been observed that OWTT variations (for opposite directions) are asymmetric in most cases, with different delay distributions, and they seem to be correlated with packet loss rates [13]. Periodic delay spikes and packet losses have also been observed, which seem to be a consequence of routing flaps [12]. Typical distributions for OWTT have been observed to have a Gamma-like shape and to possess heavy tails [3, 11]. The parameters of the Gamma distribution have been observed to depend upon the path (e.g., regional, backbone) and the time of day.
The main goal of this technical report is to further the understanding of the delay process in best-effort Internet for both non-congested and congested networks. We have designed a measurement system for delay measurements in IP routers, which follows the specifications of IETF RFC 2679 [1]. The system uses both passive measurements and active probing. Dedicated application-layer software is used to generate UDP traffic with TCP-like characteristics; the well-known interactions between TCP sources and the network are thus avoided.

UDP is not aware of any network congestion, which gives us the option of conducting experiments where the focus is on the network only. The software consists of a client and a server running on two different hosts, which are separated by a number of routers. Pareto traffic models are used to generate self-similar traffic on the link; both packet inter-arrival times and packet sizes match real traffic models. A passive measurement system is used for data collection, based on several so-called Measurement Points (MPs), each of them equipped with DAG monitoring cards [5, 8]. Hashing is used for the identification and matching of packets. The combination of passive measurements and active probing, together with the DAG monitoring system, gives us a unique possibility to perform precise traffic measurements as well as the flexibility needed to compensate for the lack of analytic solutions. The real value of our study lies in the hop-by-hop instrumentation of the devices involved in the transfer of IP packets. The mixture of passive and active traffic measurements allows us to study changes in traffic patterns relative to specific reference points and to observe the different factors contributing to the observed changes. This approach lets us better understand the diverse components that may impact one-way transit time, as well as measure queueing delays in operational routers. The rest of the report is organized as follows. In Section 2 we provide a short review of methodologies used in traffic measurements. In Section 3 we describe the delay components associated with One-Way Transit Time. In Section 4 we describe the measurement system and the technology used to collect data, and we discuss the implementation as well as the accuracy and the limitations of our system. In Section 5 we describe the experiments performed and report specific details related to them.
Section 6 is dedicated to reporting the results obtained on delay performance. Finally, Section 7 concludes the technical report.

2 Traffic Measurements

There are two main approaches to traffic measurements: active probing and passive traffic measurements. Active probing consists of creating pre-defined, artificial packets and injecting them into the network. These packets can be captured and timestamped with the help of various capturing software, e.g., tcpdump [15]. Different traffic metrics (e.g., minimum, maximum, variance, probability distribution function) are then calculated. The main drawback of this approach is the potential distortion of, and interference with, real traffic by the injected traffic. If, for example, too much artificial traffic is inserted into the network, this can easily lead to an overload situation, and the obtained results are no longer relevant for the network under study. This must be carefully considered when measurements are conducted during periods of high network load. Passive traffic measurements do not have such a limitation. No "artificial" traffic is generated in this case, and consequently there is no interference with existing network traffic. Passive measurements instead rely on directly capturing the traffic at the link layer. This approach often uses specialized capturing hardware, e.g., DAG cards [8]. An important aspect to consider in the case of passive measurements is storage: a passive measurement tool often needs large amounts of storage capacity. Since such a tool usually captures every packet on the link, the storage requirements can quickly grow to tens or hundreds of gigabytes, or even terabytes. The active measurement approach scales much better, as it captures only the injected traffic and the storage requirement is therefore often much lower.
Another important aspect is user privacy. In the case of passive measurements, the captured traffic contains user data. This is a major source of difficulty when trying to monitor an operational network. Some of the privacy concerns can be addressed by eliminating unnecessary data from the captured packets and by anonymizing the monitored IP addresses. The active probing approach is not affected by this issue, as it generates its own traffic and only this specific traffic is captured and stored for later analysis. We have designed a hybrid traffic measurement system for traffic measurements in IP best-effort networks. Our system uses both passive measurements and active probing. We use specialized commercial hardware to obtain accurate, high-precision traffic traces, while the active probing is done with the help of dedicated application-layer software that generates artificial traffic with "real-life" characteristics. The passive measurement system used for data collection is based on several so-called Measurement Points (MPs), each of them equipped with DAG monitoring cards [5, 8]. Dedicated application-layer software is used to generate UDP traffic with TCP-like characteristics. By doing so we avoid the well-known interactions between TCP sources and the network. UDP is not aware of any network congestion, which gives us the option of conducting experiments where the focus lies on the network behavior only. The traffic generating software uses the client-server paradigm. The client and the server run on different hosts separated by a number of routers. We generate network traffic in which packet interarrival

times and packet sizes match real traffic models. Pareto traffic models are used to generate self-similar traffic at the link level. An important aspect is the correct identification of packets present in multiple captured traces. We use both hashing and masking before processing the collected traffic traces. The hashing function is based on the SHA-1 Secure Hash Algorithm [16], which provides a very low probability of hash collisions. The packet identification software, i.e., hashing and matching, is implemented in C/C++. In order to store all relevant information necessary for fast and accurate packet processing, we use the template containers defined by the Standard Template Library (STL) [14]. The combination of passive and active measurements, together with the use of a DAG monitoring system, gives us a unique possibility of performing precise traffic measurements as well as the flexibility needed to compensate for the lack of analytic solutions.

3 Delay Components

One-Way Transit Time (OWTT) is measured by timestamping a specific packet at the sender, sending the packet into the network, and then comparing the timestamp with the timestamp generated at the receiver [1]. Packet timestamping can be done either in software (for delay measurements at the application level) or in hardware (for delay measurements in the network, in which case special hardware is used). Clock synchronization between the sender and receiver nodes is important for the precision of one-way delay measurements. In addition, delay measurements at the application level are sensitive to uncertainties related to the difference between "wire time" and "host time". "Wire time" is defined as the time when the first bit of a packet leaves the network interface of the sender, or the time when the last bit of the packet has completely arrived at the network interface of the receiver.
If timestamping at the sender and receiver is instead done in software, then these timestamps can only be taken just after the software sends or receives the respective packet. These are referred to as "host times" [1]. OWTT has several components:

   OWTT = D_prop + sum_{i=0}^{N} D_{n,i}                    (1)

where the delay at node i, D_{n,i}, is given by:

   D_{n,i} = D_{tr,i} + D_{proc,i} + D_{q,i}                (2)

The components are as follows:

• D_prop is the total propagation delay along the physical links that make up the Internet path between the sender and the receiver. This time is solely determined by the properties of the communication channel and the distance; it is independent of traffic conditions on the links.

• N is the number of nodes between the sender and the receiver.

• D_{tr,i} is the transmission time at node i. This is the time it takes for node i to copy the packet into the first buffer and to serialize the packet onto the communication link. It depends on the packet length and is inversely proportional to the link speed.

• D_{proc,i} is the processing delay at node i. This is the time needed to process an incoming packet (e.g., to decode the packet header, check for bit errors, look up routes in a routing table, and recompute the checksum of the IP header), as well as the time needed to prepare the packet for further transmission on another link. This delay depends on parameters like the network protocol, the computational power at node i, and the efficiency of the network interface cards.

• D_{q,i} is the queueing delay at node i. This delay refers to the waiting times in buffers and depends upon traffic characteristics, link conditions (e.g., link utilization, interference with other IP packets), as well as implementation details of the node.

Statistics like mean, median, maximum, minimum, standard deviation, variance, peakedness, probability distribution function, etc., are usually used in the calculation of delay for all non-corrupted packets.
Typical values obtained for OWTT range from tens of µs (between two hosts on the same LAN) to hundreds of ms (for hosts on different continents) [4].

For a general discussion, the OWTT delay can be partitioned into two components, a deterministic delay D_d and a stochastic delay D_s:

   OWTT = D_d + D_s                                         (3)

D_prop, D_tr and (partly) D_proc contribute to the deterministic delay D_d, whereas the stochastic delay D_s is created by D_q and, to some extent, D_proc. The stochastic part of the router processing delay can be observed especially in the case of low and very low link utilization, i.e., when the queueing delays are minor.

4 Measurement Setup

We report delay measurements done at the network level [5]. Figure 1 shows the measurement configuration used in our experiments.

Figure 1: Measurement configuration (source host A, routers R1, R2 and R3, sink host E; links L1, L2 and L3; Measurement Points MP03, MP04, MP05 and MP06 with DAG3.5E cards attached via wiretaps; cross-traffic hosts B, C and D)

The key component in the system is a Measurement Point (MP), the device that does the actual packet capturing. The capabilities of an MP are determined by the capture hardware installed in it; in our experiments we use the DAG 3.5E network monitoring card. The MPs are capable of collecting and timestamping frames with an accuracy of better than 100 ns. Data analysis is done off-line and the MPs are synchronised locally. The system is capable of collecting and timestamping traces consisting of the first 96 bytes (and possibly more) of every frame captured on 10 Mbps Ethernet links. Thus, the Ethernet, IP and transport headers are collected, as well as part of the payload. Packets smaller than 96 bytes are zero-padded. The clocks of the DAG cards, which generate the timestamps, are synchronized locally in the sense that all clocks are synchronised to one MP's DAG clock which, in turn, is synchronised to an NTP server. A high timestamp accuracy is obtained (better than 100 ns), compared with computer timestamps of about 10 µs. This offers the advantage of fairly accurate delay measurements that are suitable for our experiments.
For instance, the smallest events on a 10 Mbps Ethernet (back-to-back 64-byte packets) have a minimum inter-frame gap of 9.6 µs, so the timestamping system provides a precision about two orders of magnitude better than the object of observation. Trace collection is done using four Measurement Points (MPs) [5]. The networks that we measure are 10 Mbps full-duplex Ethernets. On a 10 Mbps Ethernet the maximum frame rate is 14881 frames/s, which corresponds to a frame interarrival time of 67.2 µs. This time is significantly larger than the timestamp accuracy of the MP. The routers R1, R2 and R3 are all of the same type (Cisco 3620). The source host A, the sink host E and the hosts that generate cross traffic, B, C and D, are all identical with regard to hardware and software configuration. During a test run each MP generates a packet trace and stores it locally on a hard disk. MP03 differs from the other MPs in that it uses two independent wiretaps to collect data; however, this has no effect on the collected trace. Once a test has been completed, all traces are collected and analyzed off-line. To calculate the delay that a packet experiences, we need to accurately identify the packets as they pass the MPs on their way through the routers. Hashing is used for the identification and matching of packets. The hashing function is implemented with the SHA-1 Secure Hash Algorithm [16]. All captured packets are bitmasked before hashing. The hash covers the entire IP header, including the source and destination IP addresses, the IP header Identification field, etc., with the exception of the Time To Live (TTL) and Header Checksum fields (as they are changed at every router). 37 bytes of the IP payload (including IP options and eventual padding) are included in the hash as well. The traffic generating software uses the client-server model and consists of a

client (traffic sink) and a server (traffic generator) running on two different computers separated by a number of routers.

4.1 Packet Generation

Traffic generation can be described as a process where artificial traffic is inserted into a network. The generated traffic can be used for different network tests, provided that normal traffic is low or nonexistent. Traffic can be generated at almost any layer in a protocol stack. When traffic is generated at the application layer, the resulting behavior of the traffic at the link layer is influenced not only by the characteristics of the traffic generator but also by the intermediate layers. If the goal is to generate traffic at the link layer that contains "real" TCP segments, then the traffic generator should inject traffic into a TCP socket. On the other hand, if the goal is to generate IP packets with properties similar to TCP segments, then the traffic generation can be done at a lower layer, or by using a UDP socket. Assuming that the goal is to generate traffic at the application level, the generated traffic can be described by two independent random variables X and Y and the associated probability density functions, P_X and P_Y. The random variable X specifies the payload length (in bytes) while Y specifies the inter-packet time. The inter-packet time is defined as the time interval between the end of a packet and the arrival time of the next packet at the transport layer, under the assumption that the transport layer has a constant processing speed of C bps.

Figure 2: Packet generation, timing issues (application/transport timeline showing T_{s,1}, T_{r,1}, T_{T,1} = 8 L_1 / C, D_T, T_{IP,1}, T_{s,2}, T_{r,2} and 8 L_2 / C)

A difficult problem is that we do not know when the data segments are processed by the transport layer. Figure 2 illustrates this situation. At time T_{s,1} we send L_1 bytes to the transport layer. This is done using a blocking call, and at time T_{r,1} this call is completed.
However, the completion of the blocking call does not imply that the data segment has been processed; it only indicates that the buffer at the transport layer had enough space to store L_1 bytes. If the transport layer did not have enough space at the time of sending, then the call blocks until one of two things occurs: either space becomes available or we have a timeout. Based on this, the call processing time is given by T_d = T_{r,1} - T_{s,1}. Assuming that no problems occur, T_d will generally be very small. To estimate the pause time required before the traffic generator generates the next data segment, we assume that the transport layer started processing the segment at T_{s,1}. Ideally, the pause time is given by the sum of the transmission time, i.e., T_T = 8 L_1 / C, and the inter-packet time T_IP, provided that the call processing time is accounted for in case the call was blocked. Thus, the total time needed for generating an L_1-byte data segment is given by:

   D_T = T_{T,1} - T_d + T_{IP,1}                           (4)

After the pause time is over, a new data segment and a new inter-packet time can be generated and the entire procedure is repeated. Although this type of traffic generator is very crude, as there are no requirements on the contents of the data segments delivered to the transport layer except for their length, it is suitable for the specific traffic measurements we require. Our goal is to generate data at the application layer while using UDP as the transport protocol. The data lengths are selected from a truncated Pareto distribution. Truncation is done in such a way that the maximum

allowed IP datagram size is not violated, i.e., 65515 bytes. The inter-packet times are selected from an exponential distribution. Furthermore, since the OWTT estimation software uses parts of the IP payload field for identification, a specific data structure is used to create a unique payload for each generated UDP packet (figure 3). This is achieved by including a packet identification number and a timestamp in each payload. The timestamp variable contains the time just prior to the transmission of the first packet; this makes it easy to identify the "session" that a given sequence of packets belongs to. The packet identification number variable is updated after each transmitted packet. In addition to these two fields, the packet payload also contains a block of ASCII data. The ASCII block provides a more rapid and simplified procedure for creating packets of arbitrary data sizes. By using this specific data structure and the way the send function is implemented (it takes a pointer to the data structure and the number of bytes to read), we can easily send payloads of different sizes without needing to generate new data.

   Packet = {
       int     counter;
       timeval firstPkt;
       char    junk[66000];
   }

Figure 3: Packet payload data structure

Two generators have been used for traffic generation. The first generator emulates a single traffic source, while the second aims at emulating an arbitrary number of traffic sources. The main difference between the two generators is the implementation of the pause function. For the case of a single generation process, a high-accuracy pause function can be used. Pause accuracy defines how close the actual pause time is to the requested time.
The use of the high-accuracy pause function can be illustrated by the following pseudo code:

   Tend = getTimeofDay() + DT;
   while (Tend > getTimeofDay()) {
       nop;
   }
   return;

The accuracy is obtained by not letting the host's operating system handle the pause, as would be done in the normal case:

   pause = nanosleep(∆T);
   return;

However, the high accuracy comes at the cost that only one traffic generation process can run at a time. For this reason, we only use the high-accuracy pause function for the traffic generator that emulates a single source. Apart from this difference in handling the pause function, the traffic generation software is identical for both traffic generators. The generating software has three possible operating modes: packet, time and infinite. In packet mode, the software generates a fixed number of packets and terminates. In time mode, traffic is generated for a finite amount of time. Finally, in infinite mode, traffic generation terminates when CTRL-C is pressed. In addition to the operating modes, the software also takes several distribution parameters, e.g., the desired shape parameter α and location parameter β for the Pareto distribution and λ for the exponential distribution, as well as the link capacity and the destination IP address of the traffic sink.

4.2 Packet Identification

A meaningful analysis of network delay measurements requires correct identification of packets present in multiple traffic traces. Hashing is used to uniquely represent each captured packet from the traffic traces. Before hashing, each packet is masked (bitwise ANDed) with a previously defined mask. As an IP packet travels through the network, the payload remains unchanged but some fields in the IP header (i.e., TTL, Header Checksum) change their values. In order to correctly identify each packet we must first mask out these fields. The bit-mask can easily be adapted to serve different purposes in more specific traffic measurements.

We have chosen to process only uncorrupted IP packets carrying UDP payload. The main reason is that we want to analyse the injected traffic, as this gives us control over the traffic conditions. Thus, we filter out any other traffic that may appear in a trace, e.g., ARP, ICMP, HELLO, etc., in the sense that such traffic is discarded.

4.2.1 Hashing

The main purpose of a hash function is to produce a unique identification of a file or message, known as the message digest, e.g., h = H(M), where M is the message, H() the one-way hash function and h the result, of length m. We use an implementation of the SHA-1 hashing algorithm. SHA-1 identifies and matches packets efficiently and accurately [16], and it also provides the necessary protection against hash collisions. SHA-1 is a cryptographic one-way hash function with an output size of 20 bytes. It takes as input a message of length L (at most 2^64 bits) and outputs a 160-bit message digest. The length of the padded message must be congruent to 448 modulo 512; if needed, the message is padded with a 1000...0 bit sequence. After padding, the 64-bit representation of the message length L is appended for security reasons, as it makes the SHA-1 algorithm more robust against a specific type of attack known as a "padding attack" [7]. The original message is thus divided into 512-bit blocks that are processed sequentially, i.e., the output of each block is input to the processing of the next block (figure 4). This ensures that the output of the algorithm is well mixed. Finally, the last stage of the algorithm yields the 160-bit message digest of the initial message (the SHA-1 algorithm uses an initial 160-bit buffer as input to the first 512-bit block). SHA-1 provides a very low collision probability: it takes on the order of 2^64 attempts to find two messages with the same hash value, assuming that all hashed items are independent and equally likely.

Figure 4: SHA-1 message hashing (the padded input is split into 512-bit blocks; each 160-bit intermediate digest feeds the SHA-1 stage of the next block, yielding h = SHA-1{Input Message})

In our case we need hashing to correctly "tag" target packets among the other traffic present in the network (cross traffic). Hashing also brings an important advantage in the form of increased computational speed, as we only need to compare 20 bytes (the hash value) instead of the 96 bytes representing the total captured length of each packet. This is of course only valid if the computational power required for hashing is lower than that required for comparing 96 bytes (e.g., with memcmp()). The use of both masking and hashing ensures that every packet is uniquely identified by its hash value. An example of how masking works for a 96-byte packet with the associated bit-mask is illustrated in figure 5. The packet is bitwise logically ANDed with a previously defined mask. Both the bit-mask and the packet must have equal lengths in order to avoid erroneous bit-by-bit logical faults. The result is a modified packet with the "undesired" information removed, or "masked away". If other bit-masks are needed, they can easily be defined in an intuitive way in the header file.

Figure 5: Packet masking (the original packet is bitwise ANDed with the mask, yielding the resulting packet with the undesired fields removed)

As we process each packet, we need to store both the hash value and the original packet. Copying is needed as we would otherwise lose the most valuable information, the timestamp, since this field must be masked in the frame header because it changes at every MP. This is done with the help of the Vector class from the STL library. We use STL for frame handling as it provides a rich variety of generic algorithms designed to operate on data structures defined within the STL framework. We define our own object classes for packet and hash processing (StorePkt and HashPkt), which are based on the Vector templates.

4.2.2 Matching

An important factor in the OWTT delay analysis is the correct identification of target packets at each hop (Measurement Point). In this section we describe our solution for uniquely identifying the target packets. The packets are read sequentially from the input trace file. They are copied into a StorePkt Vector and then hashed. The SHA-1 hash value is stored in a HashPkt Vector data structure, together with a unique identification of the specific frame, namely its index in the StorePkt Vector data structure. Each frame provided by the MPs is copied and masked before hashing. The masking ensures that only 54 bytes are hashed, starting from the IP header. The hash covers selected fields in the IP header, including the source and destination IP addresses, the IP header Identification number and other fields, including possible IP options. The only fields in the IP header that are not hashed are the Time-To-Live field and the Header Checksum field, as they are changed at every router. Before the matching procedure begins, the HashPkt Vector is sorted based upon the hash values it contains.
Sorting is done to improve the computational performance of packet matching, as a sorted data structure can be searched much more quickly than an unsorted one. For sorting, we use the sort and find_if generic algorithms provided by the STL library. The sort algorithm makes use of an underlying quicksort, which sorts a sequence of length N using O(N log N) comparisons on average [14]. The find_if algorithm is the predicate version of the find algorithm implemented in STL; it searches for the first occurrence of an element for which the given predicate is true [14]. In our case it applies a predicate object class to the HashPkt Vector for accurate identification of the hashed object types. Our predicate class, FindHash, overloads the needed assignment operator for the HashPkt Vector as well as its default copy constructor. The call to the find_if algorithm can be illustrated in pseudo code as follows:

   search = find_if(Vector.begin(), Vector.end(), FindHash(targetHash));

where search is the matched hash value, targetHash is the target predicate hash value and Vector is the vector to be searched for the occurrence of targetHash. Vector contains the hash values obtained from processing the input trace file taken at the next measurement point. The matching procedure can be described by the pseudo algorithm below; the flowchart for the identification/matching software is illustrated in figure 6.

Pseudo algorithm for the identification/matching software. For each of the input capture files, do the following:

• Read each packet from the trace file (until EOF is reached).

• Copy and hash the bit-masked packet.

• Store the original packet in a StorePkt Vector and the hash value in a HashPkt Vector, together with a pointer to the original packet.

Figure 6: Matching software flowchart (the MASK & STORE stage and the MATCHING stage; information on unmatched and duplicate packets is also written to the output file)

• If the hash value already exists, i.e., when the packet being hashed is a duplicate, mark this event in the HashPkt object for later analysis and discard the duplicate packet.
• Sort the HashPkt_Vector based upon the hash value.
• Store both the StorePkt_Vector and the corresponding HashPkt_Vector in another, separate, STL vector. These vectors are StoragePacket_Vector and StorageHash_Vector, respectively.
• Read and process the next input file in the same way. When all input files are processed, continue with the next step.
• Read, in order, each hash (which we call the target hash) from StorageHash_Vector(n), and look for a match in the next vector, StorageHash_Vector(n+1).
• When a match is found, extract all relevant information from StoragePacket_Vector(n) (recall that we have a pointer to the original packet in each entry of the HashPkt_Vector) and from the matched packet in StoragePacket_Vector(n+1).
• If the target hash is not matched in StorageHash_Vector(n+1), i.e., when a packet was discarded, mark this event in the output text file with a timestamp denoted 0.0.
• Similarly, if the hash has duplicates, find how many duplicates the specific packet has and write this information to the text file for later processing.

The strength of the identification software is that it is not restricted to a certain number of input files (traffic traces). It can process any given number of traces, producing output files of timestamp readings (and possibly any information stored in the IP datagram) for every pair of consecutive MPs, i.e., n input files will result in n − 1 output files. A restriction of the hashing and matching software is related to the physical limitations of the computer on which it runs, e.g., processor capacity and/or available internal memory. Large input traces (e.g., millions of captured packets) require large amounts of computational capacity, as in such cases we must deal with very large data structures that must be read, sorted and searched.
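The sort-and-search core of the steps above can be sketched as follows, with simplified types: the digest is shown as a std::string instead of a 20-byte SHA-1 value, and FindHash is reduced to a plain function object (the real class also overloads assignment and the copy constructor, as described above):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// One entry per captured frame: the digest plus the index of the
// original packet in the StorePkt_Vector.
struct HashPkt {
    std::string digest;
    std::size_t pktIndex;
};

// Predicate object in the role of the FindHash class described above.
struct FindHash {
    explicit FindHash(std::string t) : target(std::move(t)) {}
    bool operator()(const HashPkt& h) const { return h.digest == target; }
    std::string target;
};

// Sort one MP's hash vector, then locate a target digest taken from the
// previous MP.  Returns the packet index, or (size_t)-1 when the packet
// is unmatched (e.g., discarded between the two measurement points).
// On the sorted vector, std::lower_bound could replace the linear
// find_if; find_if is kept here to mirror the description.
std::size_t match_hash(std::vector<HashPkt>& v, const std::string& target) {
    std::sort(v.begin(), v.end(),
              [](const HashPkt& a, const HashPkt& b) {
                  return a.digest < b.digest;
              });
    auto it = std::find_if(v.begin(), v.end(), FindHash(target));
    return it == v.end() ? static_cast<std::size_t>(-1) : it->pktIndex;
}
```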
4.3 OWTT Estimation

The matching and identification software outputs a formatted text file that contains all information needed for the OWTT estimation. The output file (figure 7) can easily be modified to contain any other information present in the captured packet. In the OWTT analysis we use the timestamp information T_i for a given packet, the place of timestamping, i.e., the DAG interface on which the packet was captured, as well as the size of the packet's payload, regardless of its content. The timestamps taken at different locations, i.e., different MPs, give us the possibility of estimating the one-way delay. OWTT is estimated by taking the time difference between two consecutive timestamp readings for the same packet n:

OWTT_i,j(n) = T_i(n) − T_j(n)    (5)

where OWTT_i,j(n) is the estimated one-way transit time for the nth packet, T_i(n) is the timestamp reading taken at measurement point i and T_j(n) is the timestamp reading taken at measurement point j.

4.4 Sources of Errors in OWTT Estimation

In order to have a precise and correct OWTT estimation, we need accurate samples of T_i(n) and T_j(n). The most common sources of error in obtaining accurate timestamps are the presence of duplicate and unmatched packets in the traffic traces. The presence of duplicate packets in network traffic has been reported in several publications [12, 13]. Duplicate packets can be created by, e.g., datagrams taking different paths towards the destination, some protocols sending identical packets, a wrongly configured DHCP server, faulty manual configuration of a MAC address, or defective hardware such as faulty switches and routers. Because we work in a strictly controlled environment and generate our own network traffic, the probability of duplicates occurring is low. For duplicate packets, all 54 hashed bytes are identical, thus giving the same SHA-1 digest.
For a correct handling of possible duplicate packets, we require that a match of a packet occurs within a specified time window. We have selected this window to be three seconds, as this is likely to be the maximum delay that a datagram may experience in a

router. The collected traffic traces show that, when a duplicate occurred, the duplicate packet arrived at MP_j after more than 10 seconds, thus making the identification of duplicate packets unambiguous. In fact, the collected traffic traces recorded only duplicate ARP packets and no duplicate IP packets in any experiment. The occurrence of unmatched packets can be explained by, e.g., local cross traffic at Ethernet hubs, packets destined for or originating from one of the routers' IP addresses, or fragmentation required in a router for an outgoing link. In our experiments we have found a low occurrence of unmatched packets (between 0.01 % and 5 % unmatched for more than 1,000,000 packets processed).

DAG1  Timestamp1               DAG2  Timestamp2               UDP-load
======================================================================
da00  1077891807.498262047750  da00  1077891807.498436808500    115
da00  1077891747.259334564250  da00  1077891747.259505927500     97
da00  1077891724.471761584250  da00  1077891724.472232580250     92
da00  1077891750.368737459250  da00  1077891750.368934393000     70
da00  1077891665.152725994500  da00  1077891665.153405010750    154
da00  1077891737.334839403750  da00  1077891737.335472524250     75
da00  1077891653.611759066500  da00  1077891653.611901641000     68
da00  1077891755.960862875000  da00  1077891755.961207032250    345
da00  1077891673.019720614000  da00  1077891673.020247757500     70
da00  1077891699.723473131750  da00  1077891699.724875092500    130
da00  1077891724.630867004500  da00  1077891724.631119907000     71
da00  1077891683.186750173500  da00  1077891683.186886668250     72
da00  1077891698.999204635500  da00  1077891698.999365449000    270
da00  1077891656.918469905750  da00  1077891656.918698728000     88
da00  1077891741.106358170500  da00  1077891741.106673956000     68
da00  1077891631.460798323250  da00  1077891631.461426258000     76
da00  1077891793.429994821500  da00  1077891793.430272519500   1269
 ...   ...                      ...   ...                        ...

Figure 7: Example of output file
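Per-record processing of such an output file can be sketched as follows. The field layout follows figure 7; the return-value conventions for unmatched packets and out-of-window matches are illustrative, and a plain double does not preserve every sub-microsecond digit of the DAG timestamps (real processing keeps more precision):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Process one record of the output file in figure 7:
// "DAG1 Timestamp1 DAG2 Timestamp2 UDP-load".
// Returns the OWTT in milliseconds, or a negative sentinel: -1.0 for an
// unmatched packet (0.0 timestamp written by the matching software) and
// -2.0 for a match outside the three-second acceptance window used to
// rule out duplicates.  The sentinels and the function name are
// illustrative conventions of this sketch.
double owtt_ms(const std::string& record) {
    std::istringstream in(record);
    std::string dag1, dag2;
    double t1 = 0.0, t2 = 0.0;
    long payload = 0;
    in >> dag1 >> t1 >> dag2 >> t2 >> payload;
    if (t1 == 0.0 || t2 == 0.0) return -1.0;  // unmatched packet
    const double d = t2 - t1;                 // Eq. (5), in seconds
    if (d < 0.0 || d > 3.0) return -2.0;      // outside the match window
    return d * 1000.0;
}
```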
We believe that unmatched packets appear because of other interfering traffic, i.e., ARP and inter-router traffic, as well as datagrams discarded due to congestion avoidance in the router under heavy-load conditions. The routers were configured to run OSPF, and all routing protocols generate network traffic, e.g., when a router advertises its link status, responds to database inquiries or exchanges other routing-related messages.

5 Experiments

Dedicated application-layer software is used to generate UDP traffic with TCP-like characteristics. The traffic generated between the source A and the sink E (see figure 1) follows a Pareto distribution for the packet length, with the shape parameter α (which determines the mean and the variance) and the location parameter β (which determines the minimum value). An exponential distribution with parameter λ is used for the inter-packet gap times, and the (measured) link utilization ρ depends on λ. This model matches well the traffic models observed for the World Wide Web, which is one of the most important contributors to Internet traffic [9]. Higher traffic intensities than those measured in real networks are considered in our experiments as well; the consequence is higher loads on the routers, which allows us to obtain better delay models. The cross traffic generated by the traffic generators B, C and D approaches fractional Brownian motion (fBm) traffic with self-similar characteristics, which is typical for Ethernet [10]. This traffic is generated in a similar way, i.e., a Pareto distribution for packet sizes and an exponential distribution for inter-packet times. The difference, however, is that a large number of processes are used in this case to generate a large number of Pareto traffic flows in every traffic generator.
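The traffic generation just described can be sketched with inverse-transform sampling; the function names are illustrative and the random-number plumbing is simplified:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Pareto packet sizes via inverse-transform sampling, X = beta * U^(-1/alpha),
// with location beta (the minimum size) and shape alpha, plus exponentially
// distributed inter-packet gaps with rate lambda.  This only mirrors the
// distributions named in the text, not the actual generator software.
double pareto_size(double alpha, double beta, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double x = 1.0 - u(rng);  // in (0, 1]; avoids a zero argument to pow
    return beta * std::pow(x, -1.0 / alpha);
}

double exp_gap(double lambda, std::mt19937& rng) {
    std::exponential_distribution<double> e(lambda);
    return e(rng);
}
```

Aggregating many such Pareto on/off flows is what yields the fBm-like aggregate: for heavy-tailed activity with shape α, the Hurst parameter of the aggregate is H = (3 − α)/2 [10], so α = 1.2 corresponds to the H ≈ 0.9 targeted in the cross-traffic experiments.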
This gives us the possibility of doing experiments where we can control diverse parameters of the traffic mixture in the link, especially the Hurst parameter H and the link utilization ρ. Table 1 summarizes the experiments and the associated traffic parameters. In experiment 1, only the source computer A generates traffic, with diverse characteristics. One process is used to generate Pareto traffic with the α parameter shown in the table and a traffic intensity λ that corresponds to the (measured) ρ values shown in the table. Further, the parameter β = 40. Nine traces have been generated in this case, denoted 1-1, 1-2, ..., 1-9. In experiment 2, both computers A and B generate traffic. The traffic generated by computer A has

Table 1: Summary of experiments and traffic generation parameters (the number of generator processes per source is given in parentheses; in experiment 3, "50+50" means 50 merging and 50 crossing flows)

Experiment 1:
  Exp    A: α, ρ (1)
  1-1    2, 0.2
  1-2    2, 0.4
  1-3    2, 0.6
  1-4    1.6, 0.2
  1-5    1.6, 0.4
  1-6    1.6, 0.6
  1-7    1.2, 0.2
  1-8    1.2, 0.4
  1-9    1.2, 0.6

Experiment 2:
  Exp    A: α, ρ (1)    B: α, ρ (100)
  2-1    2, 0.2         1.2, 0.1
  2-2    2, 0.4         1.2, 0.1
  2-3    2, 0.6         1.2, 0.1
  2-4    1.6, 0.2       1.2, 0.1
  2-5    1.6, 0.4       1.2, 0.1
  2-6    1.6, 0.6       1.2, 0.1
  2-7    1.2, 0.2       1.2, 0.1
  2-8    1.2, 0.4       1.2, 0.1
  2-9    1.2, 0.6       1.2, 0.1

Experiment 3:
  Exp    A: α, ρ (1)    B: α, ρ (50+50)   C: α, ρ (50+50)   D: α, ρ (50+50)
  3-1    2, 0.2         1.2, 0.1          1.2, 0.1          1.2, 0.1
  3-2    2, 0.4         1.2, 0.1          1.2, 0.1          1.2, 0.1
  3-3    2, 0.6         1.2, 0.1          1.2, 0.1          1.2, 0.1
  3-4    1.6, 0.2       1.2, 0.1          1.2, 0.1          1.2, 0.1
  3-5    1.6, 0.4       1.2, 0.1          1.2, 0.1          1.2, 0.1
  3-6    1.6, 0.6       1.2, 0.1          1.2, 0.1          1.2, 0.1
  3-7    1.2, 0.2       1.2, 0.1          1.2, 0.1          1.2, 0.1
  3-8    1.2, 0.4       1.2, 0.1          1.2, 0.1          1.2, 0.1
  3-9    1.2, 0.6       1.2, 0.1          1.2, 0.1          1.2, 0.1

the same characteristics as the traffic generated in experiment 1. Computer B generates fBm-like traffic with H ≈ 0.9 and ρ ≈ 0.1; one hundred processes are used for traffic generation in computer B. Nine traces have been generated in experiment 2, denoted 2-1, 2-2, ..., 2-9. In experiment 3, computer A still generates traffic with the same characteristics as in experiment 1. The difference is that now all three computers B, C and D generate fBm-like traffic flows, with H ≈ 0.9 and ρ ≈ 0.1, using one hundred processes in every computer. The generated traffic flows are broken up in the routers R1, R2 and R3 in the sense that 50 % of every flow merges with the traffic coming from computer A, while the remaining 50 % (of every flow) only crosses the routers. Nine traces have been generated in experiment 3 as well, denoted 3-1, 3-2, ..., 3-9. Figure 8 shows examples of traces collected at the computers A, B, C, D and E, together with the associated histograms.
Figure 8: Traffic collected in the measurement set-up and the associated histograms (bit rate [Mbps] vs. time [s] for the output from A, the output from B, C and D, and the input to E, with the corresponding bit-rate histograms; bin width ∆ = 100 kbps)

6 Delay Performance

6.1 Processing Delay of a Router

Figure 9 shows typical processing delays (D_proc) in a router, measured for IP packets containing ICMP and UDP payloads, with payload sizes varying between 32 and 1450 bytes. The associated histograms are reported in figures 10 (ICMP payload) and 11 (UDP payload), respectively. Cisco 3620 routers have been used for this experiment, and 10,000 samples have been generated for every payload size. The ICMP tests were done using ping, with no options that would have required an extra processing burden for the router. Very low traffic intensities were used (in the order of 10 frames/s) so as to avoid the presence of queueing delay in the router. Further, the transmission times D_tr of the IP packets have been removed as well.

Figure 9: Example of router processing delay for ICMP and UDP payloads (D_proc [µs] vs. payload size [bytes], with per-payload means)

Figure 10: Router processing delay for ICMP payloads with different sizes

Figure 11: Router processing delay for UDP payloads with different sizes

The mean delay for the UDP samples was found to be 97.9 µs, with a minimum of 74.2 µs and a maximum of 1861.2 µs. The mean delay for the ICMP samples was found to be higher (as expected), 101 µs, with a minimum of 76.5 µs and a maximum of 1958.8 µs. We have also done similar experiments on other routers (e.g., Cisco 1605, Cisco 2514). The processing delays look similar; the difference is that the delays may vary by up to about 100 µs compared to the values reported above.
This is clearly a difference that depends upon details in hardware implementation and different IOS (Internetwork Operating System) releases.

6.2 Router Delay for a Single Data Flow

This is a set of experiments where the first Cisco 3620 router, R1, receives traffic from computer A only. A single process is used for UDP traffic generation, which follows a Pareto distribution for the packet length with the parameters β = 40 and α = 2, 1.6, 1.2. Furthermore, for every α value, three experiments have been done with different levels of link utilization, i.e., Lu = 0.2, 0.4 and 0.6, respectively. The traffic generated in the link has no fBm-like characteristics. The measurement points MP03 and MP06 are used to capture the traffic. The routers R2 and R3 are not used, and MP04 and MP05 are not connected either. Nine traces have been generated, which correspond to

different values of the burstiness and the traffic intensities of the generated traffic. Every generated trace is quite large, about one million packets, because of the well-known problems related to the slow convergence to steady state in the case of heavy-tailed workloads [10]. Table 2 reports the main statistics of the router delay measured between the sink computer E and the generator A. The most obvious feature we observe is the rather limited disparity of these results. The traces show that packets experience delays with quite similar statistics, with mean and variance values that increase slightly with H and ρ. Further, the experiment also shows that the number of samples with large delays is quite low. Most samples have maximum delays of less than 7 ms, and we believe that they are due to queueing at the output link. There is however one delay (in trace 1-3) with a maximum that is unusually large (55.5 ms) compared to the rest of the traces. This is likely not created by queueing, but by other causes, such as the router stopping to forward packets for a while. Further, the associated histograms have been observed to have a Gamma-like shape with a heavy tail that depends on α and ρ.

Table 2: Summary of OWTT results for experiment 1 (delays in ms)

Experiment   Mean    95% Conf.     Variance   Max    Min    Peakedness   Dups^a / Unm^b
1-1          0.205   ± 0.0002335   0.0142     3.98   0.13   0.069        0 / 0
1-2          0.241   ± 0.0002895   0.02182    1.79   0.12   0.091        0 / 0
1-3          0.287   ± 0.0006054   0.09539    55.5   0.12   0.33         0 / 0
1-4          0.227   ± 0.0003033   0.02395    2.08   0.12   0.11         0 / 0
1-5          0.267   ± 0.0003604   0.03381    6.63   0.13   0.13         0 / 0
1-6          0.317   ± 0.0004015   0.04195    1.97   0.13   0.13         0 / 0
1-7          0.262   ± 0.0004293   0.04798    3.0    0.13   0.18         0 / 0
1-8          0.301   ± 0.0004874   0.06184    3.62   0.13   0.21         0 / 0
1-9          0.349   ± 0.0005405   0.07604    2.11   0.13   0.22         0 / 0

^a Total number of duplicate packets
^b Total number of unmatched packets
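The statistics in the table can be reproduced from a delay series as follows. Peakedness is the variance-to-mean ratio (for trace 1-1: 0.0142 / 0.205 ≈ 0.069), and the 95 % confidence interval follows the normal approximation ±1.96·sqrt(variance/N), which matches the tabulated ± 0.0002335 for the roughly one million samples of trace 1-1; the struct and function names are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct DelayStats {
    double mean, variance, peakedness, conf95;
};

// Summary statistics as used in the OWTT tables.  The sample variance
// (N-1 denominator) is used; at one million samples the difference from
// the population variance is negligible.  Requires at least two samples.
DelayStats summarize(const std::vector<double>& d) {
    const double n = static_cast<double>(d.size());
    double sum = 0.0;
    for (double x : d) sum += x;
    const double mean = sum / n;
    double ss = 0.0;
    for (double x : d) ss += (x - mean) * (x - mean);
    const double var = ss / (n - 1.0);
    return {mean, var, var / mean, 1.96 * std::sqrt(var / n)};
}
```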
Figures 13 to 21 (appendix A) show the results of this experiment. All figures are arranged in the same way: in the upper left corner is the plot of the traffic collected at the wiretap from MP03 near host A, with the associated histogram to its right. Below it is the traffic captured by MP06, again with the associated histogram to the right. The next plot shows the measured OWTT, followed underneath by the associated histogram and the summary statistics for the particular trace.

6.3 Router Delay for More Data Flows

In experiment 2 the router R1 receives traffic from both computers A and B. In this case, computer A generates non-fBm-like traffic with the same characteristics as in experiment 1, while computer B generates fBm-like traffic. The routers R2 and R3 are still not used, and MP04 and MP05 are not connected either. One process generates UDP traffic with a Pareto distribution for the packet length with the parameters β = 40 and α = 2, 1.6, 1.2. For every α value, different traffic intensities are generated, such that the link utilization of computer A is Lu = 0.2, 0.4 and 0.6, respectively. Computer B generates cross traffic with fBm-like characteristics, with H = 0.9 and Lu ≈ 0.1 in all experiments. One hundred processes are used in this case to generate the fBm-like traffic. Every process generates UDP traffic with a Pareto distribution for the packet lengths, with β = 40 and different α values, i.e., α = 2, 1.6, 1.2. The traffic generated by the hundred processes is mixed at the UDP level, thus creating the fBm-like traffic observed in the link. As a result, nine traces have been generated, with different ρ for the traffic observed at the sink computer E. Every trace is about one million packets long. The Hurst parameter of the bit rate has been measured to be H = 0.9 for most traces. There are however several traces showing larger values of H (e.g., H = 1.09) for large ρ and low α.
These are traces corresponding to traffic with infinite variance. We are aware of this problem; however, we wanted to test the router under very severe conditions and therefore did not exclude these traces from our experiments. Table 3 reports the main statistics of the router delay measured between the sink computer E and the generator A. We observe in this case that the delays show a larger disparity in the range of statistics. Though the mean values are quite similar, the variance and the peakedness show a larger disparity. Furthermore, this experiment shows that the number of samples with large delays is rather high, and we believe that most of them are due to queueing. No delay is observed in this case that is extremely large compared to the rest. The associated histograms have been observed to have a Gamma-like shape with a heavy tail that depends on both α and ρ. The results are reported in figures 22 to 30 (appendix B). The plots use the same format as for the previous experiment, i.e., the traffic from source A is plotted in the upper left corner and the associated histogram is plotted

to the right. The trace collected at the destination E is plotted below the corresponding plot for A, together with the associated histogram; finally, we plot the measured OWTT, followed underneath by the histogram and the summary statistics.

Table 3: Summary of OWTT results for experiment 2 (delays in ms)

Experiment   Mean    95% Conf.    Variance   Max    Min    Peakedness   Dups^a / Unm^b
2-1          0.758   ± 0.003186   2.64       27.8   0.12   3.5          0 / 463
2-2          1.39    ± 0.005123   6.808      34.9   0.12   4.9          0 / 2792
2-3          1.82    ± 0.006031   9.421      33.8   0.13   5.2          0 / 3999
2-4          0.723   ± 0.002944   2.253      36.4   0.13   3.1          0 / 211
2-5          0.855   ± 0.003588   3.347      33.7   0.13   3.9          0 / 607
2-6          1.7     ± 0.006046   9.467      44.4   0.13   5.6          0 / 3887
2-7          0.873   ± 0.003843   3.822      49.3   0.12   4.4          0 / 4912
2-8          1.41    ± 0.00584    8.854      51.4   0.13   6.3          0 / 1861
2-9          3.62    ± 0.009987   25.68      55.8   0.13   7.1          0 / 9997

^a Total number of duplicate packets
^b Total number of unmatched packets

6.4 End-to-End Delay for a Chain of Routers

In experiment 3 the entire setup shown in figure 1 is used. Computer A generates the same traffic pattern as in the previous experiments. Computers B, C and D generate both merging traffic and crossing traffic. The crossing traffic enters on the same router port as the merging traffic, but leaves on a port different from the one used by the A-to-E traffic stream. Table 4 reports the main statistics of the router delay measured between the sink computer E and the generator A. The main observation is the large disparity of all collected statistics, except for the minimum. We also observe a large number of samples with large delays, and we believe that most of them are due to queueing in the routers. We do not observe any sample with an extremely large delay, from which we conclude that all three routers seem to work properly. The associated histograms have been observed to have a Gamma-like shape with a heavy tail. The tail looks similar for most traces and depends on α and ρ.
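A simple way to inspect the Gamma-like body and the heavy tail of these delay histograms is the empirical complementary CDF; a minimal sketch (function name illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Empirical survival function P[D > x] of a delay sample: sort once,
// then the tail probability at x is the fraction of samples above x.
// On log-log axes a heavy (e.g., Weibull-like) tail shows up as a
// slowly decaying curve instead of the straight drop of a light tail.
double ccdf(std::vector<double> d, double x) {
    std::sort(d.begin(), d.end());
    auto above = d.end() - std::upper_bound(d.begin(), d.end(), x);
    return static_cast<double>(above) / static_cast<double>(d.size());
}
```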
Table 4: Summary of OWTT results for experiment 3 (delays in ms)

Experiment   Mean    95% Conf.    Variance   Max     Min    Peakedness   Dups^a / Unm^b
3-1          2.59    ± 0.008419   18.32      70      0.4    7.1          0 / 7180
3-2          4.88    ± 0.01214    37.36      92.2    0.41   7.6          0 / 25657
3-3          7.56    ± 0.01324    43.56      77.7    0.41   5.8          0 / 44961
3-4          2.58    ± 0.008468   18.56      77.3    0.41   7.2          0 / 5880
3-5          3.02    ± 0.009749   24.53      80.9    0.41   8.1          0 / 8496
3-6          3.02    ± 0.009749   24.53      80.9    0.41   8.1          0 / 51138
3-7          3.13    ± 0.01016    26.69      102     0.41   8.5          0 / 6199
3-8          5.1     ± 0.01493    56.96      107     0.41   11           0 / 18238
3-9          16      ± 0.02087    100.2      110     0.42   6.3          0 / 115763

^a Total number of duplicate packets
^b Total number of unmatched packets

Figures 31 to 39 (appendix C) show the results obtained in this experiment. The plots are formatted in a similar way as for the first two experiments. Finally, figure 12 shows the measured one-way delay performance (mean and variance) obtained for all three experiments. A strong dependence of the mean and the variance on traffic characteristics and link utilization is observed. Further, it is the α parameter of the generated traffic that mostly influences the router behavior at large link utilization. This is of course a consequence of queueing behavior, which typically shows heavy-tailed delay performance (e.g., a Weibull distribution) in the case of traffic with long-range dependence.

7 Conclusions

In this report we presented a dedicated passive measurement system designed for collecting high-quality and accurate data traces. The measurement infrastructure deals with problems such as synchronization, high-accuracy timestamping and the processing of multiple data traces collected at various measurement points.

Figure 12: Summary of measured OWTT performance (mean delay [ms] and variance vs. link utilization ρ, for α = 2, 1.6, 1.2 in experiments 1, 2 and 3)

A measurement study of one-way delays through operational IP routers has been reported. The measurement setup is described in detail, together with the set of experiments done. Several important statistics are reported about the delay for a single router and for a chain of routers. Our results confirm other results reported earlier that the delay in IP routers is generally influenced by traffic characteristics, link conditions and, to some extent, details in hardware implementation and IOS releases.

Our future work is to obtain an analytical model for OWTT. This model should include possible correlations between queues as well as traffic characteristics. The targeted analytical model should then be tested and validated in real networks, under conditions of real network traffic.

References

[1] Almes G., Kalidindi S. and Zekauskas M., A One-way Delay Metric for IPPM, IETF RFC 2679, 1999.
[2] Borella M. S., Uludag S. and Sidhu I., Self-Similarity of Internet Packet Delay, IEEE ICC '97, Montreal, Quebec, Canada, 1997.
[3] Bovy C. J., Mertodimedjo H. T., Hooghiemstra G., Uijterwaal H. and Van Mieghem P., Analysis of End-to-End Delay Measurements in Internet, ACM PAM, Fort Collins, Colorado, USA, 2002.
[4] CAIDA Network Measurement Metrics WG, 2001, http://www.caida.org/outreach/metricswg/faq.xml
[5] Carlsson P., Ekberg A.
and Fiedler M., On an Implementation of a Distributed Passive Measurement Infrastructure, COST 279 TD(03)042, 2003.
[6] Claffy K. C., Polyzos G. C. and Braun H. W., Measurement Considerations for Assessing Unidirectional Latencies, Journal of Internetworking, Vol. 4, No. 3, 1993.
[7] Tsudik G., Message Authentication with One-Way Hash Functions, IEEE INFOCOM '92, 1992.
[8] Endace Measurement Systems, http://www.endace.com
[9] Jena A. K., Popescu A. and Nilsson A. A., Modeling and Evaluation of Internet Applications, International Teletraffic Congress ITC-18, Berlin, Germany, 2003.
[10] Leland W. E., Taqqu M. S., Willinger W. and Wilson D. V., On the Self-Similar Nature of Ethernet Traffic (Extended Version), IEEE/ACM Transactions on Networking, Vol. 2, No. 1, 1994.

[11] Mukherjee A., On the Dynamics and Significance of Low Frequency Components of Internet Load, Internetworking: Research and Experience, Vol. 5, 1994.
[12] Papagiannaki K., Moon S., Fraleigh C., Thiran P. and Diot C., Measurement and Analysis of Single-Hop Delay on an IP Backbone Network, IEEE Journal on Selected Areas in Communications, Vol. 21, No. 5, August 2003.
[13] Paxson V., Measurements and Analysis of End-to-End Internet Dynamics, PhD Dissertation, University of California at Berkeley, 1997.
[14] Musser D. R. and Saini A., STL Tutorial and Reference, Addison Wesley, 1999.
[15] TCPDUMP, http://www.tcpdump.org
[16] Secure Hash Algorithm, Announcement of Weakness in the Secure Hash Standard, National Institute of Standards and Technology (NIST), 1994.

A OWTT for a Single Data Flow for a Single Router

Plots, histograms and summary statistics obtained in experiment 1.

Figure 13: Results obtained in experiment 1-1 (A: ρ = 0.2, α = 2, no cross traffic) and associated histograms. Summary statistics (OWTT, ms): var 0.0142, std.dev 0.119, median 0.16, mean 0.205, max 3.98, min 0.128.

Figure 14: Results obtained in experiment 1-2 (A: ρ = 0.4, α = 2, no cross traffic) and associated histograms. Summary statistics (OWTT, ms): var 0.0218, std.dev 0.148, median 0.172, mean 0.241, max 1.79, min 0.125.

Figure 15: Results obtained in experiment 1-3 (A: ρ = 0.6, α = 2, no cross traffic) and associated histograms. Summary statistics (OWTT, ms): var 0.0954, std.dev 0.309, median 0.216, mean 0.287, max 55.5, min 0.125.

Figure 16: Results obtained in experiment 1-4 (A: ρ = 0.2, α = 1.6, no cross traffic) and associated histograms. Summary statistics (OWTT, ms): var 0.024, std.dev 0.155, median 0.168, mean 0.227, max 2.08, min 0.125.

Figure 17: Results obtained in experiment 1-5 (A: ρ = 0.4, α = 1.6, no cross traffic) and associated histograms. Summary statistics (OWTT, ms): var 0.0338, std.dev 0.184, median 0.187, mean 0.267, max 6.63, min 0.127.

Figure 18: Results obtained in experiment 1-6 (A: ρ = 0.6, α = 1.6, no cross traffic) and associated histograms. Summary statistics (OWTT, ms): var 0.042, std.dev 0.205, median 0.241, mean 0.317, max 1.97, min 0.125.

Figure 19: Results obtained in experiment 1-7 (A: ρ = 0.2, α = 1.2, no cross traffic) and associated histograms. Summary statistics (OWTT, ms): var 0.048, std.dev 0.219, median 0.178, mean 0.262, max 3, min 0.127.

[Figure omitted. OWTT for a single data flow − single router; A: ρ = 0.4, α = 1.2, no cross traffic. Summary statistics (OWTT in ms): var = 0.0618, std.dev. = 0.249, median = 0.196, mean = 0.301, max = 3.62, min = 0.128.]

Figure 20: Results obtained in experiment 1-8 and associated histograms.

[Figure omitted. OWTT for a single data flow − single router; A: ρ = 0.6, α = 1.2, no cross traffic. Summary statistics (OWTT in ms): var = 0.076, std.dev. = 0.276, median = 0.235, mean = 0.349, max = 2.11, min = 0.128.]

Figure 21: Results obtained in experiment 1-9 and associated histograms.

B OWTT for More Data Flows for a Single Router

Plots, histograms and summary statistics obtained for Experiment 2.

[Figure omitted. OWTT for more data flows − single router; A: ρ = 0.2, α = 2; B: 100 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 2.64, std.dev. = 1.62, median = 0.273, mean = 0.758, max = 27.8, min = 0.124.]

Figure 22: Results obtained in experiment 2-1 and associated histograms.
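In Experiment 2, the cross traffic consists of 100 Pareto on/off sources. The report's dedicated application-layer generator is not reproduced here; the following is a minimal Python sketch of the underlying technique only — inverse-transform sampling from a Pareto distribution and one on/off source — with all function names and the mean on/off period values chosen by us for illustration, not taken from the report.

```python
import random

def pareto(alpha, xm=1.0, rng=random):
    """Inverse-transform sample from a Pareto(alpha, xm) distribution.

    For U uniform on [0, 1), X = xm * (1 - U)^(-1/alpha) is Pareto
    distributed with shape alpha and scale xm; samples are always >= xm.
    """
    return xm * (1.0 - rng.random()) ** (-1.0 / alpha)

def onoff_source(alpha, mean_on, mean_off, horizon):
    """Yield (start, duration) ON periods of one Pareto on/off source.

    The Pareto mean is alpha * xm / (alpha - 1), so for alpha > 1 the
    scale xm = mean * (alpha - 1) / alpha gives the requested mean period.
    """
    xm_on = mean_on * (alpha - 1.0) / alpha
    xm_off = mean_off * (alpha - 1.0) / alpha
    t = 0.0
    while t < horizon:
        on = pareto(alpha, xm_on)
        # Clip the last ON period so it does not run past the horizon.
        yield (t, min(on, horizon - t))
        t += on + pareto(alpha, xm_off)
```

Aggregating many such sources with heavy-tailed (1 < α < 2) period distributions is a standard way to obtain self-similar link traffic; the per-source load is mean_on / (mean_on + mean_off).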

[Figure omitted. OWTT for more data flows − single router; A: ρ = 0.4, α = 2; B: 100 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 6.81, std.dev. = 2.61, median = 0.601, mean = 1.39, max = 34.9, min = 0.124.]

Figure 23: Results obtained in experiment 2-2 and associated histograms.

[Figure omitted. OWTT for more data flows − single router; A: ρ = 0.6, α = 2; B: 100 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 9.42, std.dev. = 3.07, median = 0.869, mean = 1.82, max = 33.8, min = 0.126.]

Figure 24: Results obtained in experiment 2-3 and associated histograms.

[Figure omitted. OWTT for more data flows − single router; A: ρ = 0.2, α = 1.6; B: 100 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 2.25, std.dev. = 1.5, median = 0.268, mean = 0.723, max = 36.4, min = 0.127.]

Figure 25: Results obtained in experiment 2-4 and associated histograms.

[Figure omitted. OWTT for more data flows − single router; A: ρ = 0.4, α = 1.6; B: 100 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 3.35, std.dev. = 1.83, median = 0.324, mean = 0.855, max = 33.7, min = 0.127.]

Figure 26: Results obtained in experiment 2-5 and associated histograms.

[Figure omitted. OWTT for more data flows − single router; A: ρ = 0.6, α = 1.6; B: 100 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 9.47, std.dev. = 3.08, median = 0.738, mean = 1.7, max = 44.4, min = 0.126.]

Figure 27: Results obtained in experiment 2-6 and associated histograms.

[Figure omitted. OWTT for more data flows − single router; A: ρ = 0.2, α = 1.2; B: 100 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 3.82, std.dev. = 1.96, median = 0.311, mean = 0.873, max = 49.3, min = 0.125.]

Figure 28: Results obtained in experiment 2-7 and associated histograms.

[Figure omitted. OWTT for more data flows − single router; A: ρ = 0.4, α = 1.2; B: 100 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 8.85, std.dev. = 2.98, median = 0.493, mean = 1.41, max = 51.4, min = 0.126.]

Figure 29: Results obtained in experiment 2-8 and associated histograms.

[Figure omitted. OWTT for more data flows − single router; A: ρ = 0.6, α = 1.2; B: 100 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 25.7, std.dev. = 5.07, median = 1.57, mean = 3.62, max = 55.8, min = 0.127.]

Figure 30: Results obtained in experiment 2-9 and associated histograms.
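Each figure reports the same six summary statistics for the OWTT sample vector: variance, standard deviation, median, mean, maximum and minimum. The report does not state whether population or sample variance is used; the sketch below (the function name is ours) reproduces these statistics with the population forms from Python's standard statistics module, which is one plausible convention.

```python
import statistics

def owtt_summary(samples_ms):
    """Return the six summary statistics reported with each OWTT histogram.

    samples_ms is a sequence of one-way transit times in milliseconds.
    Population variance/std.dev. are assumed; sample forms (statistics.variance,
    statistics.stdev) differ only by the n/(n-1) correction factor.
    """
    return {
        "var": statistics.pvariance(samples_ms),
        "std.dev": statistics.pstdev(samples_ms),
        "median": statistics.median(samples_ms),
        "mean": statistics.mean(samples_ms),
        "max": max(samples_ms),
        "min": min(samples_ms),
    }
```

With roughly 1,000,000 samples per experiment, the population and sample forms are numerically indistinguishable, so the ambiguity does not affect the tabulated values.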

C OWTT for More Data Flows for a Chain of Routers

Plots, histograms and summary statistics obtained in Experiment 3.

[Figure omitted. OWTT for more data flows − chain of routers; A: ρ = 0.2, α = 2; B, C, D: 50+50 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 18.3, std.dev. = 4.28, median = 1.26, mean = 2.59, max = 70, min = 0.403.]

Figure 31: Results obtained in experiment 3-1 and associated histograms.

[Figure omitted. OWTT for more data flows − chain of routers; A: ρ = 0.4, α = 2; B, C, D: 50+50 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 37.4, std.dev. = 6.11, median = 2.91, mean = 4.88, max = 92.2, min = 0.407.]

Figure 32: Results obtained in experiment 3-2 and associated histograms.

[Figure omitted. OWTT for more data flows − chain of routers; A: ρ = 0.6, α = 2; B, C, D: 50+50 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 43.6, std.dev. = 6.6, median = 5.94, mean = 7.56, max = 77.7, min = 0.412.]

Figure 33: Results obtained in experiment 3-3 and associated histograms.

[Figure omitted. OWTT for more data flows − chain of routers; A: ρ = 0.2, α = 1.6; B, C, D: 50+50 sources with ρ = 0.1. Summary statistics (OWTT in ms): var = 18.6, std.dev. = 4.31, median = 1.21, mean = 2.58, max = 77.3, min = 0.406.]

Figure 34: Results obtained in experiment 3-4 and associated histograms.
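The OWTT histograms in these figures plot the relative frequency P[x1 < X < x2] over equal-width bins on a fixed 0–10 ms axis, even though the maximum observed delays exceed 10 ms. A minimal sketch of such a relative-frequency histogram (the function name, the default bin count, and the handling of out-of-range samples are our assumptions, not taken from the report):

```python
def relfreq_histogram(samples, x_min=0.0, x_max=10.0, bins=100):
    """Relative-frequency histogram over equal-width bins on [x_min, x_max).

    Returns (edges, probs): bins+1 bin edges and, per bin, the fraction of
    ALL samples falling in that bin. Samples outside [x_min, x_max) are
    counted in the total but not plotted, which is consistent with axes
    that stop at 10 ms while the maxima lie beyond it.
    """
    width = (x_max - x_min) / bins
    counts = [0] * bins
    for x in samples:
        if x < x_min:
            continue  # guard: int() would truncate negatives toward bin 0
        i = int((x - x_min) / width)
        if i < bins:
            counts[i] += 1
    n = len(samples)
    edges = [x_min + i * width for i in range(bins + 1)]
    probs = [c / n for c in counts]
    return edges, probs
```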
