Multipath TCP : Performance in a LTE Environment



Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Computer Science

Master thesis, 30 ECTS | Datateknik

2016 | LIU-IDA/LITH-EX-A--16/015--SE

Multipath TCP

Performance in a LTE Environment

Axel Pyk

Supervisor: Vengatanathan Krishnamoorthi
Examiner: Niklas Carlsson



Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.



Abstract

The market penetration of mobile access devices with multiple network interfaces has increased dramatically over the last few years. As a consequence, the quest for a widespread multi-path transport protocol that takes advantage of all available interfaces simultaneously, to increase data throughput and improve robustness, has received considerable attention. One prominent protocol introduced by the IETF is Multipath TCP (MPTCP). MPTCP is an extension to the predominant single-path transport protocol, the Transmission Control Protocol (TCP), that enables multihomed devices to aggregate available resources transparently to the applications.

Combining multiple radio access technologies, like LTE and Wi-Fi, with diverse characteristics in terms of transmission rates and fluctuations opens up novel challenges that may disrupt and even harm data throughput. Therefore MPTCP must take path heterogeneity into account. For MPTCP to supersede single-path TCP, it must always achieve at least the throughput of the best individual TCP path.

This thesis investigates whether MPTCP with uncoupled congestion control fulfills this condition, and if so, how much it improves the throughput. By examining the protocol in a deterministic emulated environment defined by the characteristics of LTE, we identify two key factors impacting the outcome: the download size and the difference in characteristics between the paths. Our experiments show that MPTCP overall fulfills this task, especially during path homogeneity, where it achieves near-aggregated results. But we also show that MPTCP may decrease data throughput by 16% compared to TCP during path heterogeneity. Hence MPTCP does not always fulfill its throughput goal. We therefore conclude that further intelligence is needed in the packet scheduling mechanism to avoid throughput degradation in the initial phase of a transmission.


Acknowledgments

Thank you to my supervisor Vengatanathan Krishnamoorthi, PhD student, and examiner Niklas Carlsson, Associate Professor, at Linköping University for the help and support during this whole process. I also thank Rasmus Axén, Thomas Walldeen and Helena Westlinder at Ericsson, Linköping, for being most accommodating and helpful throughout my work. I would also like to thank the responsive technician of the prototype implementation at Ericsson R&D, Stockholm, for providing detailed information.


Contents

Abstract

Acknowledgments

Contents

List of Figures

List of Tables

1 Introduction

2 Motivation

3 Background and Theory
3.1 Internet's challenges for MPTCP
3.2 Bandwidth aggregating solutions
3.3 Long Term Evolution (LTE)
3.4 Transmission Control Protocol (TCP)
3.5 Multipath TCP (MPTCP)

4 Methodology
4.1 UE experimental setup
4.2 Server experimental setup
4.3 Experiments

5 Results
5.1 Aggressive retransmit feature
5.2 HTTP GET experiment
5.3 Bulk transfer experiment
5.4 Varying characteristics experiments

6 Discussion
6.1 Results
6.2 Methodology
6.3 The work in a wider context

7 Conclusions

A Terminology and acronyms

B Tables


List of Figures

2.1 shows the global smartphone data traffic per region and month in exabytes.
3.1 illustrates a modified 5-layer version of the TCP/IP model with descriptions and example techniques.
3.2 illustrates the principle of encapsulation used in network communication systems.
3.3 displays an incomplete illustration of the hourglass architecture of the Internet.
3.4 illustrates the initial design of the Internet where the transport layer provided end-to-end functionality.
3.5 illustrates how a middlebox violates the end-to-end argument, falsely indicating content is delivered to the destination by acknowledging data to the sender. Due to the middlebox the transport layer does not fulfill the end-to-end principle.
3.6 illustrates the partitioning of the current transport layer into three different layers by the TNG model.
3.7 illustrates the TNG architecture with end-to-end at the transport layer.
3.8 illustrates an incomplete overview of the Evolved Packet System (EPS) including bearers.
3.9 illustrates the E-UTRAN user plane protocol stack.
3.10 illustrates three different allocation schemes. OFDMA is used in the LTE downlink and SC-FDMA is used in the LTE uplink.
3.11 illustrates the generic radio frame of LTE.
3.12 illustrates a block diagram of the physical layer of LTE when transmitting a bit stream.
3.13 illustrates the LTE Quality of Service (QoS) topology.
3.14 illustrates the TCP header structure in solid with the accompanying data part dotted.
3.15 illustrates the three-way handshake exchange.
3.16 illustrates the TCP flow control.
3.17 shows the impact of packet error probability p on the maximum data rate of a TCP transmission with packet RTT = 10 ms and 60 ms and 1460 octets MSS.
3.18 illustrates the four-way handshake process used when terminating a TCP connection using the finished (FIN) and acknowledged (ACK) flags.
3.19 illustrates the relationship between standard TCP (left) and MPTCP (right).
3.20 shows the MPTCP data sequence numbering.
3.21 illustrates a blocking scenario during path heterogeneity leading to throughput degradation.
3.22 shows the mandatory MPTCP option field format.
3.23 illustrates the connection sequence of an MPTCP capable operation followed by an MPTCP join connection operation.
4.1 shows a blueprint of the experimental setup with flow of data.


4.2 shows the architecture of the simulated receiving-side user equipment in the experimental setup as a stack. At the top are the applications (wget and netperf) and at the bottom the network interface controllers. Traffic comes in through the NIC, moves up the stack to the destined application, then down the stack and out through the NIC.
4.3 shows the server-side architecture in the experimental setup as a stack with the applications at the top and the network interface controller at the bottom.
5.1 shows the throughput T in two separate 20-second TCP transmissions. TCP1 allocates 40 Mbit/s stationary. TCP2 alternates allocated bandwidth between 40 and 1 Mbit/s at 0.2 Hz.
5.2 shows the throughput T for two MPTCP subflows, Subflow1 and Subflow2, and the MPTCP goodput G when the aggressive retransmit feature is off. Subflow1 allocates 40 Mbit/s stationary. Subflow2 alternates allocated bandwidth between 40 and 1 Mbit/s at 0.2 Hz. This is the same characteristics scenario as Figure 5.1. The arrows mark the areas where Subflow1 is affected by the changes in Subflow2 and flow starvation occurs.
5.3 shows the throughput T for two MPTCP subflows, Subflow1 and Subflow2, and the MPTCP goodput G when the aggressive retransmit feature is activated. Subflow1 allocates 40 Mbit/s stationary. Subflow2 alternates allocated bandwidth between 40 and 1 Mbit/s at 0.2 Hz. This is the same characteristics scenario as Figure 5.1.
5.4 shows the average download times as a function of file size when using either regular TCP (red squares) or MPTCP (blue circles) for the best-case scenario. Both lines are equipped with a dashed linear fit. The graph includes a zoomed area displaying file sizes 32 KB to 512 KB.
5.5 shows the MPTCP rate as a percentage of the rate of the best TCP path given the test scenario in Figure 5.4.
5.6 shows the average download times as a function of file size when using either regular TCP (red squares) or MPTCP (blue circles) for the worst-case scenario. Both lines are equipped with a dashed linear fit. The graph includes a zoomed area displaying file sizes 32 KB to 512 KB.
5.7 shows the MPTCP rate as a percentage of the rate of the best TCP path given the test scenario in Figure 5.6.
5.8 shows the average download times as a function of file size between 32 KB and 4 MB when using either MPTCP or regular TCP. Red squares indicate the best download time when using regular TCP, blue circles indicate the worst attained download time when using MPTCP and green crosses indicate the best download time when using MPTCP. Both lines are equipped with a dashed linear fit. The grayed area indicates the MPTCP upper and lower bounds. Each gray dot is a test result. The graph includes a zoomed area for sizes 32 to 512 KB.
5.9 shows the normalized gain compared to the best TCP path as a function of two subflow RTTs when BW1 = 40 Mbit/s and BW2 = 8 Mbit/s. The markers indicate the three different packet error probabilities.
5.10 shows the normalized gain compared to the best TCP path as a function of two subflow RTTs when BW1 = BW2 = 40 Mbit/s. The markers indicate three different packet error probabilities.
5.11 shows the normalized gain compared to the best TCP path as a function of two subflow RTTs for two test scenarios. The markers indicate the three different packet error probabilities.
5.12 shows the throughput T for Subflow1 and Subflow2, the RTT for Subflow2 and the MPTCP goodput in the scenario without the bufferbloat phenomenon. At T = 0 the bandwidth drops from 40 Mbit/s to 256 Kbit/s.


5.13 shows the throughput T for Subflow1 and Subflow2, the RTT for Subflow2 and the MPTCP goodput in the scenario with the bufferbloat phenomenon. At T = 0 the bandwidth drops from 40 Mbit/s to 256 Kbit/s. It is reset after 5 seconds.
5.14 shows the RTT of Subflow2 when either exposed (solid blue) or unexposed (dashed red) to the bufferbloat phenomenon.
5.15 shows the MPTCP transmission rate for both subflows and the goodput when forcing the bufferbloat phenomenon by changing the bandwidth of Subflow2 between 40 and 1 Mbit/s with a frequency of 2 Hz. Subflow1 allocates a stationary 40 Mbit/s with RTT = 10 ms and p = 0%.
5.16 shows the results from the packet loss experiment. Both subflows allocate 40 Mbit/s and ΔRTT = 50 ms. At T = 0 the packet error probability changes from 0% to 0.1% and p is reset at T = 5. The lines indicate the average rate during the interval.
C.1 shows the MPTCP resource utilization percentage as a function of two subflow RTTs when BW1 = 40 Mbit/s and BW2 = 40 Mbit/s. The markers indicate three different packet error probabilities.
C.2 shows the MPTCP resource utilization percentage as a function of two subflow RTTs when BW1 = 40 Mbit/s and BW2 = 8 Mbit/s. The markers indicate three different packet error probabilities.
C.3 shows the MPTCP resource utilization percentage as a function of two subflow RTTs when BW1 = 40 Mbit/s and BW2 = 1 Mbit/s. The markers indicate three different packet error probabilities.


List of Tables

B.1 LTE QoS Class Identifiers (QCI).
B.3 The MPTCP option subtypes.
B.5 Details the average improvement when using MPTCP compared to the best constituent path with TCP.
B.6 Details the resulting throughput in Mbit/s from the bulk experiment when Subflow1 allocates 40 Mbit/s stationary and Subflow2 allocates 8 Mbit/s. Viewed in the table are the two different packet latencies used in the bulk experiment; to the left a latency difference of 0 ms (10 ms, 10 ms), and to the right a 50 ms difference (10 ms, 60 ms).
B.7 Details the resulting throughput in Mbit/s from the bulk experiment when Subflow1 allocates 40 Mbit/s stationary and Subflow2 allocates 40 Mbit/s. Viewed in the table are the two different packet latencies used in the bulk experiment; to the left a latency difference of 0 ms (10 ms, 10 ms), and to the right a 50 ms difference (10 ms, 60 ms).
B.8 Details the resulting throughput in Mbit/s from the bulk experiment when Subflow1 allocates 40 Mbit/s stationary and Subflow2 allocates 1 Mbit/s. Viewed in the table are the two different packet latencies used in the bulk experiment; to the left a latency difference of 0 ms (10 ms, 10 ms), and to the right a 50 ms difference (10 ms, 60 ms).
B.9 The goals of the MPTCP architecture.
B.11 The specifications of the experimental setup hardware.


1

Introduction

In recent years, a variety of Radio Access Technologies (RATs) with diverse transmission features have enabled network operators to offer services to customers with varying demands. Examples include short- and long-ranged RATs such as IEEE 802.11, commonly known as Wi-Fi, and Long Term Evolution (LTE), which forms the basis for the fourth generation mobile broadband (4G). RATs like these are responses to increasing demands for mobile network resources, due to an on-going influx of bandwidth-intensive services, increased availability of low-price portable devices and the evolution of the Internet of Things (IoT)1. Owing to these factors and more, it is forecast that mobile data traffic will grow eight-fold between 2014 and 2020 globally [14]. Because today's radio resources are already severely limited, one vision discussed by the telecommunication industry is wireless networks that allow an integration of different RATs to form heterogeneous wireless networks [41].

A Heterogeneous Wireless Network (HWN) allows network operators to combine advantages, and thereby overcome disadvantages, of different RATs [11]. They allow operators to deliver high-quality services to customers with smartphones, tablets and other multi-RAT portable devices. A multi-RAT device is a unit equipped with multiple controllers, hence able to communicate using different radio technologies. Benefits of HWNs include additional capacity, better coverage and better bandwidth. Depending on desired characteristics, subscribers with multi-RAT devices can switch between or combine RATs in overlapping coverage areas to meet their application requirements. By creating a single logical link separated over multiple resources, the principle of bandwidth aggregation is achievable. It allows resource maximization and increased redundancy without any changes at the application level. Multi-RAT devices are potentially capable of aggregating bandwidth since they are equipped with multiple network controllers [65].

The most used transport protocol within the core of the Internet is the Transmission Control Protocol (TCP), standardized in 1981 in RFC 793 [43]. TCP was designed and introduced along with the Internet Protocol (IP) [42] to ensure reliability, avoid congestion collapse and facilitate fair resource sharing in a network. Together they are commonly known as the TCP/IP protocol suite. The generic term TCP/IP usually means anything and everything related to the specific protocols of TCP and IP. It can include other protocols, applications, and even the network medium [55]. Although TCP has remained mostly unchanged in the past thirty years, it still fulfills its purpose eminently. Over the same time period, applications and services communicating over the Internet have evolved exponentially and sandwiched TCP/IP between an influx of application layer protocols and link layer protocols, making development at the transport layer difficult. This is further explained in Chapter 3.

1 The Internet of Things is a network of interconnected physical objects, or "things", offering advanced exchange of information between devices, systems and services. This is to achieve greater value for the customers by exchanging data with the manufacturer, operator and/or other connected devices [59].

Because modern networks aim for heterogeneity, with the ability to combine resources from multiple technologies, this has been a subject for the research community to solve over the past decade. Today's networks are restricted by TCP's inability to operate over multiple interfaces. To solve this issue, the Internet Engineering Task Force (IETF) published RFC 6824 [18] in January 2013. RFC 6824 details an experimental protocol standard called Multipath TCP (MPTCP) with a set of extensions to TCP. These extensions enable TCP to operate across multiple paths simultaneously. MPTCP operates by transparently opening multiple TCP connections to an MPTCP-capable host with the goal of improving throughput and resilience to network failures.
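As a side note, MPTCP has since been merged into the mainline Linux kernel (v5.6 and later), where an application can request it per socket via the IPPROTO_MPTCP protocol number. The sketch below illustrates this modern API, not the out-of-tree prototype evaluated in this thesis; on kernels without MPTCP support it falls back to plain TCP, mirroring MPTCP's transparent-fallback design.

```python
import socket

# IPPROTO_MPTCP (value 262) is the upstream Linux MPTCP protocol number;
# socket.IPPROTO_MPTCP exists from Python 3.10, so fall back to the raw value.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def mptcp_socket():
    """Return an MPTCP socket, falling back to plain TCP if unavailable."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP support: the application keeps working over
        # regular TCP, transparently to the code above the socket layer.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = mptcp_socket()
```

From the application's point of view the returned object behaves like any TCP stream socket, which is exactly the transparency property MPTCP was designed for.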

Aggregating different technologies' resources with dissimilar characteristics in terms of bandwidth, latencies and packet error probabilities opens the possibility of new disruptive issues, like synchronization, that may degrade the throughput of a multi-path transmission [65]. One of the functional goals of MPTCP, quoted from RFC 6182, is that "to meet the minimum performance incentives for deployment, a Multipath TCP connection over multiple paths should achieve no worse throughput than a single TCP connection over the best constituent path." This thesis investigates if it is possible to fulfill this goal in practice with the proposed design and asymmetric characteristics. We perform a performance evaluation of MPTCP to determine how MPTCP impacts application layer throughput in short-burst and bulk transfers given dissimilar transmission characteristics, compared to regular TCP. Experiments were conducted in a closed experimental setup to create reproducible test scenarios with minimal influence from external artifacts.

Our results show that MPTCP attains a rate close to the aggregated rates of the constituent links during path homogeneity in bulk transfers. With increasing differences during path heterogeneity, the gain decreases until the rate is near equal to the rate of the best TCP connection; thus MPTCP more or less fulfills its goal in bulk transfers. Our key findings include that MPTCP can degrade the performance by up to ~16% compared to the best TCP path. Web browsing in particular is likely to be harmed by an introduction of MPTCP. Hence the download size is a key factor in whether uncoupled MPTCP is advantageous over TCP. This opens the discussion on how and when to introduce MPTCP ubiquitously.

The remainder of this thesis is structured as follows: Chapter 2 details the motivation of this thesis. Chapter 3 provides a brief background on the problems behind the development of MPTCP and the next generation architecture. This is followed by an overview of transport layer bandwidth aggregation, the LTE radio access technology and its characteristics according to its class-based QoS. The remainder of Chapter 3 details TCP and MPTCP in terms of design, goals and theory. Chapter 4 details the methodology of the thesis. The results from the experiments are detailed in Chapter 5, followed by the discussion, including future work, in Chapter 6. The conclusions are presented in Chapter 7.


2

Motivation

Since the introduction of the Internet Protocol (IP) and Transmission Control Protocol (TCP) in the beginning of the eighties, the Internet1 has reached unbelievable proportions. As a result of the World Wide Web (WWW), the Internet moved into family homes during the nineties, establishing it globally. Twenty years later, the number of active users on the Internet is estimated to be approximately three billion, which is one third of the world's population [63]. Today's interconnectivity between people and services makes the Internet a key part of most people's daily life in the industrialized world. Social media like Facebook and Twitter have active users ranging from hundreds of millions to billions per month. Services earlier dominated by physical retail, e.g. audio and video, are moving from the streets to the cloud with on-demand streaming services like Netflix, Spotify, HBO, etc. Thus the transition of audio/video services to the cloud has seen the Internet supersede the CD/DVD era.

Driven by fixed broadband in the past, network operators today provide high-speed connections with mobile standards like Wideband Code Division Multiple Access (WCDMA), High Speed Packet Access (HSPA) and Long Term Evolution (LTE). Providing peak rates of 1 Gbit/s stationary and 100 Mbit/s in motion, mobile broadband and LTE are able to play a complementary or even replacing role to fixed broadband [2]. With on-going infrastructural development of mobile broadband and increased availability of low-price smartphones, people are able to change from cellular phones to smartphones, especially in the developing world. Thus the number of mobile broadband subscribers is forecast to grow nearly three times, from 2.9 to 8.4 billion, between 2014 and 2020 [15]. The largest portion of the growth is related to LTE (3.1 billion).

Along with the increasing number of mobile broadband subscribers and services moving to the cloud, applications become more network intensive. The evolution of portable devices commits access networks to deliver larger amounts of data to provide higher image, music and video quality. Ericsson forecasts 45% annual growth in the video segment between 2014 and 2020. With such growth, video will comprise the largest of all segments (e.g. audio, social networking, web browsing, etc.) of mobile data traffic in 2020 (55%) [15]. Given these factors, the global average data traffic per smartphone and month in 2020 is projected to be four times that of 2014: 4 GB compared to 1 GB. Besides this, the growing phenomenon referred to as the Internet of Things requires enormous amounts of traffic when billions of devices communicate.

1 The Internet can be defined as the world's largest meta-network. It is a global system of interconnected computer networks without centralized governance. Interconnected networks are implemented by a common network model referred to as the Internet protocol suite or TCP/IP model, referring to the core protocols of the Internet [62].

All these factors together increase the demand for mobile network resources. Since the amount of radio frequencies is already constrained, and therefore the amount of resources limited, this will be a challenge for the network operators. Figure 2.1 is referenced from the Ericsson Mobility Report February 2015 [15] and displays the global data traffic per month in exabytes, separated into geographical regions, for the smartphone segment. As viewed in the figure, global mobile data traffic is projected to grow 40% annually, leading to an 8-fold increase in six years. According to Ericsson, 70% of mobile broadband data traffic will be from smartphones by the end of 2020 [15].
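The two figures in the cited forecast are mutually consistent: compounding 40% annual growth over the six years from 2014 to 2020 gives roughly the 8-fold increase mentioned above.

```python
# 40% compound annual growth over six years (2014-2020)
growth = 1.40 ** 6
print(round(growth, 2))  # roughly the 8-fold increase cited in the report
```

The exact compounded factor is about 7.5, which the report rounds to 8-fold.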


Figure 2.1 shows the global smartphone data traffic per region and month in exabytes.

When the Internet evolved from the ARPANET2 backbone, multi-path was not a primary concern and therefore was not included in the initial design. With the increased demand for network resources, multi-connectivity alternatives can offload demands by moving traffic to other less congested paths and improve performance by combining resources. Pooling resources is one of the most established principles to enhance performance in terms of robustness and throughput. In order to understand the challenges in extending the Internet model to enable multi-path, it is therefore necessary to understand the architecture behind it. In the next chapter we detail why earlier multipath transport protocols have not succeeded, and how the new design used in Multipath TCP makes it the protocol with the most potential.

The first ubiquitous operating system released with Multipath TCP (MPTCP) included was Apple iOS 7 in September 2013 [24]. However, the implementation of MPTCP in iOS is restricted to the Siri application, to enhance robustness rather than throughput. From an end-user's perspective, the quality of a network connection is often determined by the application level throughput. There are essentially three fundamental factors impacting the throughput of a network transmission:

• Bottleneck bandwidth defines the amount of resources available to transfer information.

• Packet latency defines the time it takes a packet to travel from one end-point to another. Round-Trip Time (RTT) refers to the duration between a packet being sent and an acknowledgment received, indicating the packet was successfully received at the destination.

• Packet error probability defines the amount of packets lost in transmission or corrupted at the destination.

2 The Advanced Research Projects Agency NETwork (ARPANET) was developed by the U.S. Department of Defense.
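The interplay of the three factors above is captured by the well-known square-root approximation of steady-state TCP throughput by Mathis et al. (the relationship plotted in Figure 3.17, with MSS = 1460 octets and RTTs of 10 and 60 ms). The sketch below is an illustration of that published model, not a formula taken from this thesis.

```python
from math import sqrt

def tcp_rate_limit(mss_bytes, rtt_s, p):
    """Mathis et al. approximation of the maximum TCP rate in bit/s:
    rate <= (MSS / RTT) * sqrt(3/2) / sqrt(p),
    where p is the packet error probability."""
    return (mss_bytes * 8 / rtt_s) * sqrt(1.5) / sqrt(p)

# Parameters matching Figure 3.17: MSS = 1460 octets, p = 0.01%
for rtt in (0.010, 0.060):
    rate = tcp_rate_limit(1460, rtt, 1e-4)
    print(f"RTT {rtt * 1000:.0f} ms: {rate / 1e6:.1f} Mbit/s")
```

The model makes the qualitative behavior explicit: the achievable rate falls linearly with RTT and with the square root of the packet error probability, which is why the 60 ms path in Figure 3.17 sits well below the 10 ms path.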

To understand the impact of an introduction of MPTCP to multi-RAT devices, there is a need to understand how the throughput of MPTCP in practice depends on different transmission characteristics in different scenarios. Other studies exist in the area; most of them are simulation- and measurement-based. Raiciu et al. [47], [66] show that MPTCP improves load distribution in data centers compared to standard TCP in combination with randomized flow-level load balancing. Chen et al. [7], [6] show that MPTCP is robust in achieving close to the best single-path TCP connection, that the download size is a key factor in the gain of MPTCP compared to TCP, and that MPTCP supports mobility without breaking existing connections. Paasch et al. [40] studied the mobile/Wi-Fi handover performance of MPTCP when the Wi-Fi interface goes down. They conclude MPTCP is robust during mobile/Wi-Fi handovers.

Measurement studies have also reported download speed differences that can be leveraged by multi-homed clients. For example, Linder et al. [32] use crowd-based measurements to characterize the download speeds seen with different technologies and operators. In their work they identify many locations with significant download speed differences between operators. With no operator consistently outperforming the others and the best operator often varying between locations, their data suggests that there may be significant advantages for multi-homed users running a multi-path transport protocol such as MPTCP. In contrast to their work, this thesis focuses on the potential performance improvement seen by such clients.

This Master's Thesis presents an initial and comprehensive study using a performance-focused prototype implementation to evaluate an introduction of MPTCP. Based on the fundamental functional goal of MPTCP mentioned earlier, the following questions were defined:

a) Does an introduction of MPTCP degrade end-user throughput compared to the best path of regular TCP in path heterogeneity?

b) How responsive is MPTCP to varying link characteristics?

c) Does MPTCP utilize given resources?

Our goal is to determine if MPTCP fulfills its goal in short-burst traffic and bulk transfers. In our results we compare the gain, resource utilization and responsiveness of MPTCP in situations related to LTE. The MPTCP stack was provided by the Ericsson Research & Development department and later applied in a closed network environment with the ability to emulate desired characteristics. Based on the class-based QoS of LTE, we design a set of experiments with characteristics ranging from good to bad. To determine the attained gain when introducing MPTCP, we use the best-case scenario with single-path TCP as the reference value in the process of evaluation. In the second part we stress-test the protocol by performing rapid changes in the characteristics and monitoring how the protocol reacts.
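The two evaluation metrics used throughout the results (gain relative to the best constituent path, and resource utilization relative to the aggregate) can be stated concretely. The helper names and example numbers below are my own illustration, not code or data from the thesis.

```python
def normalized_gain(mptcp_rate, tcp_rates):
    """MPTCP rate relative to the best constituent single-path TCP rate.
    Values >= 1.0 mean the RFC 6182 functional goal is met; < 1.0 means
    MPTCP degraded throughput compared to the best path."""
    return mptcp_rate / max(tcp_rates)

def resource_utilization(mptcp_rate, tcp_rates):
    """Fraction of the aggregated single-path capacity that MPTCP achieved."""
    return mptcp_rate / sum(tcp_rates)

# Hypothetical numbers shaped like the bulk setup: paths of 40 and 8 Mbit/s.
print(normalized_gain(44.0, [40.0, 8.0]))       # > 1: goal met on this run
print(resource_utilization(44.0, [40.0, 8.0]))  # share of the 48 Mbit/s aggregate
```

With these definitions, question a) asks whether normalized gain ever drops below 1.0 under path heterogeneity, and question c) asks how close resource utilization gets to 1.0.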


3

Background and Theory

This chapter discusses the model design of the Internet and why it is difficult to implement new protocols like Multipath TCP. The chapter then describes the benefits of bandwidth aggregation and the technology of LTE, with the background of radio access technologies. Finally it discusses the Transmission Control Protocol and the intended protocol, Multipath TCP. The design of modern communication systems can be illustrated with a variety of models, e.g. the four-layered TCP/IP model, the seven-layered OSI model [49], AppleTalk, Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX), etc. We use a five-layered stack model, based on the four layers in the TCP/IP model plus the physical layer from the OSI model [29]. The model is shown below in Figure 3.1.

Figure 3.1 illustrates a modified 5-layer version of the TCP/IP model with descriptions and example techniques.

The goal of these models is to characterize and standardize the internal functionality of communication systems according to a stack. Our stack is composed of five independent layers, each providing a service to the layer above. Structuring communications according to these models forms a flexible, robust and interoperable architecture that allows systems to communicate using a broad range of transmission mediums and services without any hardware or software requirements. Through protocol multiplexing, multiple instances per protocol are allowed to operate simultaneously within the same infrastructure over a single network link.


The fundamental principle behind this layered model is encapsulation. Encapsulation creates abstraction between the services at different layers. When data is sent from an application, it is further encapsulated at each logically lower layer, with additional header (and footer) information included to direct it towards its destination.

Figure 3.2 illustrates the principle of encapsulation used in network communication systems.
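The wrapping at each layer can be sketched in code. The following Python sketch uses invented, toy header layouts purely for illustration (real TCP/IP headers carry many more fields such as sequence numbers, TTL and flags); it only shows how each layer prepends its own header, and the link layer a footer, around the payload from the layer above:

```python
import struct

# Toy, invented header layouts purely for illustration; real TCP/IP
# headers contain many more fields (sequence numbers, TTL, flags, ...).
def encapsulate(message: bytes, src_port: int, dst_port: int,
                src_ip: bytes, dst_ip: bytes) -> bytes:
    # Transport layer: message -> segment (prepend port numbers).
    segment = struct.pack("!HH", src_port, dst_port) + message
    # Network layer: segment -> datagram (prepend 4-byte IP addresses).
    datagram = src_ip + dst_ip + segment
    # Link layer: datagram -> frame (header in front, checksum footer).
    footer = struct.pack("!H", sum(datagram) & 0xFFFF)
    return b"\xAA\xAA" + datagram + footer

frame = encapsulate(b"GET / HTTP/1.1", 49152, 80,
                    bytes([192, 168, 0, 2]), bytes([93, 184, 216, 34]))
print(len(frame))  # 30: 14-byte message + 4 + 8 + 4 bytes of headers/footer
```

At the receiver the process runs in reverse: each layer strips its own header before handing the remaining payload upwards.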

The application layer is the highest layer, closest to the end-users through the network applications. It provides process-to-process services between two or more entities in a communication system across a TCP/IP internet. Usually network applications rely on existing network protocols to provide intercommunication between hosts. A network protocol is a system of rules that defines a syntax, semantics and synchronization standard of transmitted messages to provide a service. Examples include common protocols like the File Transfer Protocol (FTP), the Hypertext Transfer Protocol (HTTP), the Simple Mail Transfer Protocol (SMTP) and many more. There is no requirement for a network application to use an existing network protocol. Instead it is able to send information directly to the endpoint of the application layer using the network socket Application Programming Interface (API). The socket API allows direct access to the network. A network socket is a process end-point, defined by an IP address that determines the destination host, and a port number directing the information to the correct process.
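As a minimal illustration of the socket API just described, the assumed toy example below uses Python's standard `socket` module over the loopback interface: a server endpoint is bound to an IP address and an OS-chosen port, and a client then reaches it through exactly that (address, port) pair:

```python
import socket
import threading

def run_echo_server(ready: threading.Event, port_holder: list) -> None:
    # Server endpoint: bind to an IP address and let the OS pick a port.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))           # port 0 -> OS assigns a free port
    srv.listen(1)
    port_holder.append(srv.getsockname()[1])
    ready.set()                          # tell the client the port is known
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))    # echo the payload back
    srv.close()

ready, port_holder = threading.Event(), []
threading.Thread(target=run_echo_server, args=(ready, port_holder)).start()
ready.wait()

# Client endpoint: the (IP address, port) pair identifies the destination
# host and the process (socket) on it.
with socket.create_connection(("127.0.0.1", port_holder[0])) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
print(reply)  # b'hello'
```

The same pattern applies regardless of whether the application speaks a standard protocol such as HTTP on top of the socket or a purely application-defined one.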

The transport layer is the layer closest to the network and defines the edge between the host and the network. It provides host-to-host transport services to applications using different transport layer protocols. This is to hide the complexity of the network from the upper layers. Depending on the behaviour desired by the applications, the transport layer provides a wide variety of services, e.g. connection-oriented communication, reliability, flow and congestion control. Connection-oriented communication is a mode in which a transmission session must be established before information can be transferred, and delivery is always error-corrected and in-order. TCP is a connection-oriented protocol. The opposite is connection-less communication, found in stateless protocols, e.g. the User Datagram Protocol (UDP) and IP. Within computer networking, reliability refers to the sender’s assurance that the content is successfully delivered to the destination.

The network layer defines the entry point of the network. It provides global interconnectivity with the task of routing packets across networks toward multiple remote destinations. The most used interconnecting network layer protocol is the Internet Protocol. It provides connection-less packet routing, host addressing and packet forwarding. To receive information from an interconnected unit a device must have at least one unique IP address within that particular network. To interconnect multiple networks, devices named routers or gateways direct packets towards their destination using information in their routing tables or routing policies. A network may be a home or small office network that routes to the Internet



through a router connected to an Internet Service Provider (ISP). Larger business or ISP networks use enterprise routers to connect up to powerful core routers in the conglomeration of multiple, redundant networks owned by numerous companies, referred to as the Internet backbone.

The data link layer focuses on local delivery between adjacent nodes within a subnetwork. Nodes are interconnected by network devices operating at the physical or data link layer, e.g. hubs, bridges and switches. It provides services like physical addressing, packet switching, error detection and correction for information sent at the physical layer, flow and QoS control, frame synchronization and media arbitration. The data link layer can be subdivided into two sublayers. The Logical Link Control (LLC) sublayer provides node-to-node flow and error-management control using ARQ protocols, as in the transport layer. The underlying Media Access Control (MAC) sublayer acts as an interface between the LLC and the physical layer. It provides addressing within a subnetwork using unique MAC addresses, and channel access control mechanisms to detect and avoid interference between nodes sharing the same medium.

The physical layer is the lowest layer in the network model and refers to the basic hardware communication technologies used within a transmission. It defines the means of transmitting raw bits rather than logical data packets over a physical link between two network nodes. The bits are converted to a physical signal that is transmitted over hardware transmission mediums, e.g. electrical connectors or electromagnetic waves.

3.1 Internet’s challenges for MPTCP

Numerous multi-path transport protocols have already been published. Some are based on TCP [20], [23], [33], [50] and others on the Stream Control Transmission Protocol (SCTP) [26], [31]. Still, TCP is the most used transport protocol and it is restricted to operate over a single path. This section clarifies the origin of the problem in more detail.

While networks’ extensibility at both high and low levels has evolved, the core of the Internet has become trapped in the middle. This has resulted in an hourglass design, illustrated in Figure 3.3. On top of the already complex design is the current version of the transport layer, providing a widespread selection of functionality: identifying application end-points via port numbers [44], [42], end-to-end congestion control and fairness [30], utilizing alternate end-to-end paths [58] and providing reliable in-order delivery [42], [58]. The approach of merging much functionality into a single layer has made the transport layer difficult to evolve. Thus there are only two transport layer protocols working ubiquitously: TCP and the User Datagram Protocol (UDP) [44].

To obtain neutral networks, network devices, e.g. routers, bridges and hubs, were designed to operate at the network layer and below in the initial design of the Internet, illustrated in Figure 3.4. Switches and hubs would only operate at the data link and physical layers to interconnect nodes. Flexibility was a main factor in the initial design of the Internet. As stated by the end-to-end principle, intermediate systems should not care about the transferred content, only provide interconnected hop-to-hop services using the IP protocol. As computer networks have evolved, a variety of extensions affecting the behaviour of the networks have been introduced.

Kurose et al. [45] published an analysis showing that approximately half of the IP traffic carrying information in the options field was discarded due to middleboxes, raising serious dependability issues. The provision of the optional header field allows protocols to be extended with further functionality without affecting the standard. Network component manufacturers implemented such drop modes, e.g. Cisco’s “ACL IP Options Selective Drop"



[9], to prevent their routers’ CPUs from overloading. Although IP options were fully standardized in RFC 791 [42], the demand for improved throughput, monitoring and security made the Internet irregular.


Figure 3.3 displays an incomplete illustration of the hourglass architecture of the Internet.

Figure 3.4 illustrates the initial design of the Internet where the transport layer provided end-to-end functionality.

Middlebox

A middlebox is defined as any intermediary device performing functionality other than the standard functions of a network router on the path between a source host and destination host [39]. By this definition middleboxes include a variety of devices, e.g. firewalls, NATs, application layer gateways, performance enhancing proxies, traffic normalizers, etc. The functionalities of such devices are essential in today's access networks. Examples include functionality to prevent malicious users from accessing confidential information and causing damage to the network, routing, and improving performance. The proliferation of mobile devices is set to further expand the range of middlebox applications as new markets are faced with new challenges and possibilities.

The fundamental principle behind the Internet and large distributed communication networks is the end-to-end argument. Quoted from Saltzer et al. [51] in their article about system design:

"The principle, called the end-to-end argument, suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level."



The end-to-end argument implies neutral networks where communication protocol operations should be defined to occur at the end-points of the communication system rather than in intermediary nodes, provided they can be implemented completely and correctly in the end hosts. This is to avoid implementing intelligence beyond bit error recovery, security using encryption, duplicate message suppression, recovery from system crashes and delivery acknowledgment in the networks. This is necessary because any network, however carefully designed, will be subject to failures of transmission at some statistically determined rate. Hence the best solution is to place responsibility for the integrity of communication with the end systems.

An example of how middleboxes violate the end-to-end principle together with TCP is illustrated in Figure 3.5. When a middlebox receives data it sends an acknowledgment to the sender. The sender is then falsely led to believe the transmission was successfully delivered to the host, while the data forwarded from the middlebox towards the destination may be lost. The outcome of the incident is that the sender falsely thinks the content is delivered. This has left the application layer as the first end-to-end layer.

Figure 3.5 illustrates how a middlebox violates the end-to-end argument by falsely indicating content is delivered to the destination when acknowledging data to the sender. Due to the middlebox the transport layer does not fulfill the end-to-end principle.

The problem in introducing multi-path communication arises due to the uncertainty of how already deployed middlebox applications behave towards these new standards. Past introductions of transport layer protocols have been unsuccessful as middleboxes perform processing at higher layers, thus terminating the new protocols and preventing widespread dissemination. Examples include SCTP [58] and the Datagram Congestion Control Protocol (DCCP) [16]. Network component manufacturers spread diverse functionality throughout networks when implementing proprietary solutions in middleboxes that override standards, instead of concentrating it at the end-systems. They may damage the end-to-end nature of the Internet. Without any global standard for how middleboxes should behave towards new protocols and act upon failures, no one can predict how middleboxes will behave towards MPTCP.

In 2012 the first large-scale study of middlebox deployment was published by Sherry et al. [54]. Their study is based on surveyed data from 57 enterprise networks. They concluded that the number of middleboxes operating in networks is almost as large as the number of routers. This emphasizes the importance of middleboxes in networks and of considering middleboxes when designing new protocols to introduce centralized standards.

As the middlebox problem grew, the number of unsuccessful transport protocols providing simultaneous use of multiple paths increased. Thereby the IETF and the research community realized they had to adapt. Rather than introducing a new protocol, they had to reuse an existing one and design MPTCP around the middlebox problem.



Middleboxes’ effect on the TCP options field

To successfully introduce a transport protocol at the global level, it is necessary to design the protocol to be accepted by intermediate devices already deployed in the networks. Honda et al. [22] published an extensive study on 142 networks in 24 countries, where they probed the networks for middleboxes violating the end-to-end principle. Their study showed that at least 25% of paths interfered with traffic at the transport layer in some way, firewalls excluded. But their study also concluded that it is possible to extend TCP using the options field, as long as such a design is robust and adaptive.

Next generation transport layer model

To solve the middlebox problem, mentioned in Section 3.1 (page 9), and reapply the end-to-end argument to the transport layer, Ford et al. [19] presented a new design model expanding the transport layer, called Transport Next Generation (TNG). TNG was invented with the goal of regaining end-to-end connectivity at the transport layer. Their approach suggested a new layering model, factoring the transport layer into three separate layers, illustrated in Figure 3.6.

Figure 3.6 illustrates the partitioning of the current transport layer into three different layers by the TNG model.

The three layers can be divided into two categories: application-oriented and network-oriented. The logically highest layer supersedes the current transport layer and provides limited application-oriented functionality to protect and support the end-to-end communication for higher-level layers, e.g. reliability, ordering and error recovery. The two lower network-oriented layers provide network functionality and operate segment-by-segment in the transmission. The Endpoint layer is an extension of the network layer and provides functionality shared among transport protocols, e.g. application end-point identification via port number. The Flow Regulation layer, hereafter called the Flow layer, provides congestion control and performance-related mechanisms.

The most important benefit of refactoring the functionality of the transport layer into application- and network-oriented layers is that intermediate devices are allowed to traverse the network-oriented layers without affecting the transport layer. This preserves the end-to-end principle at the transport layer, illustrated in Figure 3.7. TNG also benefits deployments of new transport protocols by facilitating the transition when moving new protocols from user-space libraries into operating system kernels. Additionally TNG extends the flexibility for applications to decide which transport protocol to use rather than opening new sessions per specific protocol. Similar functionality is found today at the Session and Presentation layers in the OSI model.

It may seem optimistic to introduce a major architectural change such as TNG in the Internet, but it is possible to achieve incremental deployment. By utilizing existing protocols in the lower layers, e.g. UDP at the Endpoint layer or TCP at the Flow Regulation layer, it provides immediate deployment and a basis for long-term evolution.



Figure 3.7 illustrates the TNG architecture with end-to-end connectivity at the transport layer. Due to the large diversity within network architectures, the problem has been unsolved for decades. But with the promising MPTCP extension, a solution may be within reach as long as it does not impact the current situation within the networks. This is not included in the scope of this thesis.

3.2 Bandwidth aggregating solutions

In this section we further detail the benefits of transport layer bandwidth aggregation and the aggregating approach used in MPTCP.

According to an overview published by Ramaboli et al. [48], bandwidth aggregating solutions can be divided into two approaches:

a) Non-adaptive solutions do not adjust their resource allocation and traffic schedules. This means links send data whenever possible, expecting the data to be successfully transferred and, upon failure, handling the consequences. The downside is that non-adaptive solutions may be negatively impacted by changes in link characteristics.

b) Adaptive solutions correct the pitfalls of non-adaptive practices by listening to the behaviour of the network and adapting the transmission accordingly. They are more complex and harder to implement.

MPTCP solution: The MPTCP standard defined in RFC 6824 [18] can be classified as a non-adaptive solution. The protocol does not include intelligent interface selection and traffic distribution mechanisms to reduce the occurrence of packet reordering at the destination in a heterogeneous network. To handle reordering, MPTCP employs a buffer to store out-of-order segments before delivering them in-order to the application.
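The buffering principle can be illustrated with a short sketch. This is a simplified model of in-order delivery from a reorder buffer, not the actual MPTCP implementation (which maps data at byte granularity via data-sequence numbers):

```python
# Simplified model: out-of-order segments are held in a buffer keyed by
# their sequence number and flushed to the application as soon as the
# next expected number is available.
def deliver_in_order(segments, expected=0):
    buffer, delivered = {}, []
    for seq, data in segments:        # segments may arrive out of order
        buffer[seq] = data
        while expected in buffer:     # flush every contiguous run
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered

# A fast subflow delivers segment 2 before a slow subflow delivers 0 and 1.
print(deliver_in_order([(2, "c"), (0, "a"), (1, "b")]))  # ['a', 'b', 'c']
```

In path heterogeneity the buffer must hold everything received on the fast path while waiting for the slow path, which is why reordering can hurt MPTCP throughput and memory usage.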

Benefits of aggregating heterogeneous RATs

As mentioned in the beginning of Chapter 3, the primary benefits of aggregating resources are increased bandwidth and improved failure robustness. When combining different RATs there are also a couple of additional benefits.

a) Aggregating resources of several radio access technologies allows multi-RAT devices to improve bandwidth without the need for infrastructural developments [48].

b) Using more than one RAT, traffic can be distributed evenly or unevenly over multiple computing resources depending on different conditions. Hence congestion may be avoided and efficient usage of network resources allowed. By moving load balancing from the Internet Service Providers (ISPs) to the end-systems, resource utilization can be optimized globally, not only locally.



c) Transport layer protocols provide full congestion detection and avoidance on the links and direct contact with the destination. This makes it possible to leverage TCP handshakes directly and thus bootstrap subflows quickly. Thereby transport layer protocols achieve better load balancing than application layer protocols [48].

3.3 Long Term Evolution (LTE)

This section details factors impacting the end-user experience in Radio Access Technologies and why the physical layer within Radio Access Networks (RANs) often is the main bottleneck.

The theoretical rate at which information can be transmitted through any analog communication channel, regardless of the technology, is fundamentally limited by two parameters: the amount of allocated bandwidth and the signal-to-noise ratio between transmitter and receiver using a single antenna. This is known as the Shannon–Hartley channel capacity [53]:

C = BW · log2(1 + S/N).    (3.1)

Here, C denotes the channel capacity, which defines the theoretical upper information rate (excluding error correcting codes) of clean (or arbitrarily low bit error rate) data that can be sent through an analog communication channel in bits/s. The channel capacity is affected by the amount of allocated bandwidth BW of the channel in hertz, the average signal power S in watts and the average noise or interference power N in watts. The ratio S/N is commonly referred to as the signal-to-noise ratio (SNR). Due to the occurrence of interference in radio communication systems compared to wired ones, it is necessary to include it using the signal-to-interference-plus-noise ratio (SINR). With encapsulation overhead excluded from the channel capacity, the theoretical rate of a radio transmission is referred to as the peak rate.
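As a numerical illustration of Equation 3.1, the sketch below computes the capacity bound for example figures: a 20 MHz channel at 15 dB SNR. These values are assumptions chosen for illustration, not measurements from this thesis:

```python
from math import log2

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley capacity C = BW * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * log2(1 + snr_linear)

# 15 dB SNR converted to a linear ratio: 10^(15/10).
snr = 10 ** (15 / 10)
capacity = channel_capacity(20e6, snr)
print(capacity / 1e6)  # roughly 100 Mbit/s upper bound for this channel
```

Note that this is the single-antenna bound; the MIMO techniques discussed below multiply the usable capacity by adding spatial layers rather than bandwidth.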

As shown in the Shannon–Hartley equation, the capacity depends on the quality of the radio signal. The reduction in power density an electromagnetic wave is subjected to when it propagates through the environment is referred to as the signal path loss.

To handle larger differences in SNR there has been an extensive drive for the development of new adaptive modulation schemes in RATs compared to wired access technologies. Two prominent radio technologies in use are spatial multiplexing, also known as multiple-input multiple-output (MIMO), and adaptive modulation. MIMO provides the ability to achieve link aggregation at the physical layer, and adaptive modulation enhances the throughput by adapting the robustness of the digital encoding (error rate coding) according to the signal quality, thus balancing throughput with error resilience.

History

When further development of 3G standards became costly due to inherent design limitations in UMTS, 3GPP initiated a work item termed “3G Long Term Evolution - Evolved Packet System RAN", referred to as LTE. The LTE work item was commenced to redesign both the core network and the radio networks to create a new, highly flexible standard. This standard would lay the groundwork in the quest to achieve the requirements for the fourth generation (4G) mobile access service. These requirements are issued by the Radiocommunication Sector of the International Telecommunication Union and are referred to as International Mobile Telecommunications-Advanced, or IMT-Advanced. IMT-Advanced requires a nominal data rate of 100 Mbit/s during high speed vehicular mobility (120 to 350 km/h) and 1 Gbit/s stationary. IMT-Advanced systems shall also be able to achieve a user plane latency of less than 10 ms in unloaded conditions [25].

The development of a new global standard was divided into two work items: LTE, targeting the radio access network, and System Architecture Evolution (SAE), targeting the packet core network. The output from the work items is embodied in the Evolved Universal Terrestrial Radio Access Network (E-UTRAN) and the Evolved Packet Core (EPC) standards, together referred to as the Evolved Packet System (EPS). In contrast to earlier standards, the LTE standard was designed to be built with only a packet switched domain, thus removing the circuit switched domain. This reduced the number of network elements involved in LTE.

Architecture

The LTE access network, also referred to as the E-UTRAN, is the access part of the SAE core network and comprises a network of base stations, named Evolved Node B (eNodeB), illustrated in Figure 3.8. E-UTRAN operates without a centralized intelligent controller and distributes its intelligence amongst the base stations. This enhances the connection set-up and reduces the time when performing a handover.


Figure 3.8 illustrates an incomplete overview of the Evolved Packet System (EPS) including bearers.

The Mobility Management Entity (MME) is the control-node of E-UTRAN and handles the bearer activation/deactivation process, the paging of idle mode UEs and choosing an S-GW for a UE. The Serving Gateway (S-GW) routes user data packets and acts as an anchor during both inter-eNodeB handovers and vertical handovers between LTE and other 3GPP technologies. The Packet Data Network (PDN) Gateway (P-GW) connects SAE to external access networks and is the point of exit and entry of traffic for the UE. The P-GW is also responsible for IP address allocation for the UE, as well as QoS enforcement by filtering downlink packets into different QoS bearers.

To successfully transmit data between the P-GW and the UE through the eNodeB, IP packets are encapsulated and tunneled with different tunneling protocols across different interfaces. Between the SAE interfaces (S1 and S5/S8) the 3GPP-specific tunneling protocol called the GPRS Tunneling Protocol (GTP) is used; for further information see [1].

The E-UTRAN user plane protocol stack is shown in gray in Figure 3.9, with the sublayers Packet Data Convergence Protocol (PDCP), Radio Link Control (RLC) and Medium Access Control (MAC). They are terminated in the eNodeB on the network side. Further details regarding the protocols can be found in Chapter 4 of [52].



Figure 3.9 illustrates the E-UTRAN user plane protocol stack.

Technology

The key element of LTE is the technology used as signal bearer, called Orthogonal Frequency-Division Multiplexing (OFDM). In LTE, OFDM is used with two adapted access schemes depending on downlink operations (eNodeB to UE) or uplink operations (UE to eNodeB):

a) Orthogonal Frequency-Division Multiple Access (OFDMA) in downlink operations.

b) Single-Carrier Frequency-Division Multiple Access (SC-FDMA) in uplink operations.

OFDM is a commonly used modulation format found in other wireless standards, e.g. Wi-Fi. An OFDM signal in LTE comprises a maximum of 2048 different closely-spaced sub-carriers, allocating 15 kHz each. These sub-carriers are orthogonal to each other to create a transmission scheme with the ability to support parallel channel operations. Thus transmitted data is spread across the multiple carriers of an OFDM signal with reduced data rate per carrier.

Figure 3.10 illustrates the difference between the allocation schemes OFDM, OFDMA and SC-FDMA. Each row refers to a sub-carrier and the colors refer to data transmitted by a specific user per time unit. In OFDM all frequency resources (sub-carriers) are allocated to provide data for a single user per time unit. OFDMA enables dynamic allocation in both the time and frequency domains, thus multiple users can allocate bandwidth per time unit.


Figure 3.10 illustrates three different allocation schemes. OFDMA is used in LTE downlink and SC-FDMA is used in LTE uplink.

This increases the efficiency and interference resilience. SC-FDMA is beneficial in LTE uplink operations as it lowers the peak-to-average power ratio and thereby benefits the UE’s transmit power efficiency compared to OFDM and OFDMA. The decision to use the OFDM modulation scheme with support for both Frequency-Division Duplex (FDD) and Time-Division Duplex (TDD) yields the flexibility of the LTE standard. In FDD the downlink and uplink are separated in frequency and in TDD they are separated in time.

The generic radio frame of LTE has a time duration of 10 ms, divided into 10 sub-frames. One sub-frame consists of two slots of length 0.5 ms, illustrated in Figure 3.11.

Each slot consists of seven or six OFDM symbols with guard bands between them. A guard band, marked with light gray in Figure 3.11, is a space included between symbols with the purpose of preventing interference and multipath propagation. Depending on the number of symbols used per slot, they are fitted with either short/normal Cyclic Prefixes (CPs) or long/extended CPs. The CP is a portion of the end of the time domain block, copied to the beginning of the symbol. CPs along with guard bands are the overhead in an OFDM system, as they do not carry any useful information.


Figure 3.11 illustrates the generic radio frame of LTE.

Twelve OFDM sub-carriers are then grouped into a so-called resource block with a total size of 12 × 15 kHz = 180 kHz in the frequency domain and 0.5 ms in the time domain. A user is allocated a number of resource blocks, either in the time-frequency domain during downlink operations with OFDMA or in the frequency domain during uplink operations with SC-FDMA, shown in Figure 3.10 b) and c). The more resource blocks a user allocates and the higher the modulation format used, the higher the bit rate.
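A quick sanity check of these figures can be expressed as a small sketch, using the numerology stated above (15 kHz sub-carrier spacing, 12 sub-carriers per resource block, 0.5 ms slots in a 10 ms frame). The 100-resource-block figure for a 20 MHz carrier is a standard LTE value assumed here for illustration:

```python
# Numerology from the text above; all values in SI base units.
SUBCARRIER_SPACING_HZ = 15_000
SUBCARRIERS_PER_RB = 12
SLOT_MS, FRAME_MS = 0.5, 10

rb_bandwidth_hz = SUBCARRIERS_PER_RB * SUBCARRIER_SPACING_HZ
slots_per_frame = int(FRAME_MS / SLOT_MS)

print(rb_bandwidth_hz)               # 180000 Hz = 180 kHz per resource block
print(slots_per_frame)               # 20 slots = 10 sub-frames of two slots
print(100 * rb_bandwidth_hz / 1e6)   # 18.0 MHz occupied of a 20 MHz carrier
```

The remaining spectrum of the carrier is left unused at the edges as guard band.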

Figure 3.12 shows a block diagram of the physical layer in LTE. When viewing the block diagram it is notable that SC-FDMA can be considered as an OFDM system with a Discrete Fourier Transform (DFT) mapper, by the extra DFT component viewed in gray.

An OFDM transmitter includes a baseband modulator, sub-carrier mapping, Inverse Fast Fourier Transform (IFFT), parallel to serial conversion, Cyclic Prefix (CP) addition, Pulse Shaping (PS) and Digital to Analog Conversion (DAC), followed by a Radio Frequency (RF) modulator.

The baseband modulator transforms the data into different modulation formats. Here the adaptive modulation balances the robustness of the format against the SNR. LTE provides several modulation formats: Quadrature Phase-Shift Keying (QPSK) and 16 or 64 Quadrature Amplitude Modulation (16QAM/64QAM). The modulated symbols are then mapped to sub-carriers. It is possible to use different modulation formats over multiple sub-carriers as the interference level on the carriers can vary with time. The Inverse Fast Fourier Transform (IFFT) transforms the modulated sub-carriers in the frequency domain to time domain samples. Cyclic prefixes are then applied to the samples before transforming them from digital to analog signals. The OFDM receiver then reverts the process according to the scenario in the lower row of Figure 3.12 to decode the binary stream.


Figure 3.12 illustrates a block diagram of the physical layer of LTE when transmitting a bit stream.

The difference between SC-FDMA and OFDM is the transformation to the frequency domain by a Discrete Fourier Transform (DFT) before going through the standard OFDM modulation, shown as dark gray in Figure 3.12.

LTE is able to deploy and allocate spectrum from 1.4 MHz up to 20 MHz ({1.4, 3, 5, 10, 15, 20} MHz). Combined with spatial multiplexing of four layers (4×4 MIMO), LTE release 8, frozen in December 2008, supported peak data rates up to 300 Mbit/s in the downlink and 75 Mbit/s in the uplink using a single antenna (1×1 MIMO). Since LTE release 10, frozen in April 2011 and also referred to as LTE-Advanced, the standard fulfills the IMT-Advanced requirements of 4G with 8×8 MIMO [10]. These values define the theoretical limit a compatible UE can obtain in a lab environment with ideal radio signal quality and no error-rate coding, which is not applicable in real-life scenarios. With 5/6 error-rate coding the peak rate decreases to ≈200 Mbit/s.
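The 300 Mbit/s downlink figure can be approximated with a back-of-envelope calculation. The 25% overhead fraction below is an assumption, loosely covering control channels, reference signals and the cyclic prefix, not a value from the standard.

```python
# Back-of-envelope LTE downlink peak rate for a 20 MHz carrier.
# The overhead fraction is a rough assumption; exact figures
# depend on the control-channel and reference-signal configuration.

subcarriers = 100 * 12       # 100 resource blocks x 12 sub-carriers
symbols_per_sec = 14 * 1000  # 14 OFDM symbols per 1 ms subframe
bits_per_symbol = 6          # 64QAM
layers = 4                   # 4x4 MIMO spatial multiplexing

raw = subcarriers * symbols_per_sec * bits_per_symbol * layers
print(raw / 1e6)             # 403.2 Mbit/s raw physical-layer rate

overhead = 0.25              # assumed overhead fraction
print(raw * (1 - overhead) / 1e6)  # 302.4 Mbit/s, near the quoted peak
```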

Quality of Service (QoS)

To cope with the demands from subscribers and to attain a better end-user experience, EPS implements a sophisticated class-based QoS and defines priorities for certain customers and services during times of high congestion in the network. In the EPS architecture, QoS is implemented using a set of bearers between a UE and the PDN gateway, illustrated in Figure 3.8. A bearer is essentially a virtual concept that defines how data should be treated as it travels across the network, e.g. "UE A will always achieve at least 512 Kbit/s download speed, while UE B is not guaranteed any bit rate and might face extremely bad download speed at times of high congestion". LTE QoS can be divided into two types of EPS bearers, dedicated and default, creating the topology illustrated in Figure 3.13.

An LTE QoS Class Indicator (QCI) defines the IP-level packet characteristics; the table defining the classifiers is reproduced below from Table 6.1.7 of 3GPP TS 23.203 (v11.3.0).

Default bearer When an LTE UE attaches to the network, it is assigned a default bearer, which maps the UE to an IP address and remains attached as long as the UE is connected. Default bearers provide best-effort service and do not guarantee any bit rate. Therefore only QoS class identifiers (QCIs) 5-9 (non-GBR) can be assigned to a default bearer, see Table B.1 (page 61). The other parameters (APN-/UE-AMBR etc.) specify the upper rate of a subscriber; for further information see A.1.1.4 in 3GPP TS 23.203 V11.6.0 [3].



Dedicated bearer A dedicated bearer can provide either a non-guaranteed bit rate (non-GBR) or a guaranteed bit rate (GBR) service. A dedicated bearer is applied on top of a default bearer to provide a dedicated tunnel to one or more services. For high-priority services, like VoLTE, dedicated bearers are essential. Unlike the default bearer, a dedicated bearer uses Traffic Flow Templates (TFTs) to map services to the bearer; a TFT gives special treatment to a specific service. Dedicated GBR bearers map to QCI 1-4, whereas non-GBR bearers map to QCI 5-9.
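The bearer rules above can be sketched as a small validity check. The QCI groupings follow the text (GBR for 1-4, non-GBR for 5-9); the function and its name are our own illustration, not anything defined by 3GPP.

```python
# Sketch of the bearer/QCI rules described above: a default bearer
# must use a non-GBR QCI (5-9), while a dedicated bearer may carry
# either a GBR QCI (1-4) or a non-GBR QCI.

GBR_QCIS = {1, 2, 3, 4}
NON_GBR_QCIS = {5, 6, 7, 8, 9}

def valid_bearer(bearer_type, qci):
    """Check whether a QCI may be assigned to a given bearer type."""
    if bearer_type == "default":
        return qci in NON_GBR_QCIS             # best effort only
    if bearer_type == "dedicated":
        return qci in GBR_QCIS | NON_GBR_QCIS  # GBR or non-GBR
    raise ValueError(bearer_type)

print(valid_bearer("default", 9))    # True
print(valid_bearer("default", 1))    # False: GBR needs a dedicated bearer
print(valid_bearer("dedicated", 1))  # True, e.g. VoLTE voice
```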

Figure 3.13 illustrates the LTE Quality of Service (QoS) topology.

Transmission characteristics

Several main factors impact the end-user experience, especially for short-burst traffic. First, the time it takes for a UE to access the radio and core network resources. Here the benefit of LTE's flat IP architecture becomes obvious: the number of connection states is reduced to only two, compared to the four states in HSPA. This is possible because fewer network elements are involved in the access network path. The IMT-Advanced control-plane latency requirement is less than 50 ms to establish a connection with the access network, and less than 5 ms latency post-establishment. This characteristic is comparable with Wi-Fi technologies such as IEEE 802.11n-2009 and 802.11ac.

Bufferbloat is a phenomenon that leads to large variations in RTT due to the use of large buffers in networks. The bufferbloat phenomenon is especially common in radio access networks, where large buffers are used to compensate for the bandwidth bottleneck of the air interface that occurs when the SNR degrades. During a transmission, the RTT can grow by a factor of several hundred as the buffers fill, which may lead to flow starvation. Flow starvation occurs when a flow is restricted from continuing its transmission because it is denied the resources necessary to proceed. This may occur if a flow is waiting for the last packet sent to be acknowledged, whether that packet was sent on its own flow or on another flow; the flow then starves and the throughput is degraded. The occurrence of bufferbloat is less prominent in most wired and Wi-Fi networks. Chen et al. [8] present a study of the impact of the bufferbloat phenomenon on MPTCP in wireless networks. Their study revealed that MPTCP in some cases suffered severe performance degradation due to bufferbloat when combining resources from both cellular and Wi-Fi networks. To further understand how MPTCP behaves when exposed to bufferbloat, one of our experiments includes a scenario that exposes MPTCP to the phenomenon.
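A back-of-envelope model shows how a filled buffer inflates the RTT: the queueing delay is simply the time needed to drain the buffer at the bottleneck rate. All numbers below (base RTT, link rate, buffer size) are assumptions chosen only to illustrate the order of magnitude.

```python
# Illustration of bufferbloat: queueing delay added by a filled
# buffer in front of a cellular bottleneck link. All numbers are
# assumptions, not measurements.

base_rtt = 0.050                # 50 ms propagation RTT
link_rate = 5e6 / 8             # 5 Mbit/s bottleneck, in bytes/s
buffer_bytes = 2 * 1024 * 1024  # 2 MiB radio-network buffer

queue_delay = buffer_bytes / link_rate  # time to drain a full buffer
rtt = base_rtt + queue_delay
print(round(rtt / base_rtt))    # RTT inflated ~68x over the base RTT
```

With an even larger buffer or a lower SNR (and thus a lower bottleneck rate), the inflation factor easily reaches into the hundreds, matching the behavior described above.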



3.4 Transmission Control Protocol (TCP)

Since MPTCP is an extension to TCP, this section details the TCP header structure, the flow and congestion control, and the impact of packet error probability on TCP throughput. This section is followed by the details regarding the theory of MPTCP.

As illustrated in Figure 3.3, there are two prominent transport protocols used ubiquitously: the connection-oriented protocol TCP [43] and the connection-less protocol UDP [44]. A connection-oriented communication session is established before any data can be transmitted, and the stream of data sent by the protocol will be delivered ordered and error-checked at the destination port if the transmission succeeds. However, providing such functionality on top of potentially unreliable datagram services in lower-level protocols comes at a price in terms of increased overhead and latency compared to using UDP.

The TCP protocol data unit is referred to as a TCP segment. Figure 3.14 illustrates the structure of a TCP segment, divided into two blocks:

a) A TCP header block that directs the segment to its destination.
b) A TCP data block carrying the application layer payload.

To successfully provide its functionalities, the TCP header is structured with ten mandatory fields, resulting in a minimum header length of 20 octets or 160 bits (the optional field excluded). Given these fields, TCP achieves the ability to exchange information bidirectionally between the hosts.
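To make the ten mandatory fields concrete, the following sketch packs a minimal 20-octet header with Python's struct module. The field values (an arbitrary ephemeral port, a SYN flag, the checksum left at zero) are assumptions for illustration, not a complete implementation.

```python
import struct

# Packing a minimal 20-octet TCP header (the ten mandatory fields).
# The checksum is left at zero here; a real stack computes it over
# a pseudo-header plus the segment.

def tcp_header(src, dst, seq, ack, flags, window):
    offset_flags = (5 << 12) | flags  # data offset 5 (20 octets) + flags
    return struct.pack("!HHIIHHHH",
                       src, dst,       # source / destination port
                       seq,            # sequence number
                       ack,            # acknowledgment number (if ACK set)
                       offset_flags,   # data offset, reserved, flags
                       window,         # window size
                       0,              # checksum (left zero here)
                       0)              # urgent pointer (if URG set)

hdr = tcp_header(49152, 80, seq=1000, ack=0, flags=0x02, window=65535)
print(len(hdr))  # 20 octets = 160 bits, the minimum header length
```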

Offset (bits)  Fields
0-31           Source port (16) | Destination port (16)
32-63          Sequence number (32)
64-95          Acknowledgment number (32, if ACK set)
96-127         Data offset (4) | Reserved (3) | Flags (9) | Window size (16)
128-159        Checksum (16) | Urgent pointer (16, if URG set)
160-           TCP optional field (if Data offset > 5), followed by data (up to MSS octets)

Figure 3.14 illustrates the TCP header structure (solid) with the accompanying data part (dotted). When the transport protocol receives a stream of information to be transmitted from the upper-layer application, it is fragmented into payload blocks of at most the Maximum Segment Size (MSS) in octets. The MSS is derived from the Maximum Transmission Unit (MTU) of the outgoing interconnecting service for that particular link. In LTE the MTU is 1428 octets, and 1500 in Ethernet. Depending on the interconnecting protocol, IP version 4 (IPv4) [42] or IP version 6 (IPv6) [12], the IP header is 20 resp. 40 octets. Therefore the MSS in LTE is 1428 − 20 − 20 = 1388 resp. 1428 − 40 − 20 = 1368 octets, leading to an overhead of ≈2.8% resp. ≈4.2%. In comparison, for Ethernet the MSS is 1460 resp. 1440 octets.
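The MSS arithmetic above can be checked with a small helper; the header sizes are the standard minima quoted in the text, and the function names are our own.

```python
# MSS and per-segment overhead derived from the MTU, as in the text.
IP_HDR = {"IPv4": 20, "IPv6": 40}  # IP header size in octets
TCP_HDR = 20                       # minimum TCP header, no options

def mss(mtu, ip_version):
    """Payload octets left after IP and TCP headers."""
    return mtu - IP_HDR[ip_version] - TCP_HDR

def overhead(mtu, ip_version):
    """Fraction of each packet spent on headers."""
    return 1 - mss(mtu, ip_version) / mtu

print(mss(1428, "IPv4"), mss(1428, "IPv6"))    # 1388 1368 (LTE)
print(mss(1500, "IPv4"), mss(1500, "IPv6"))    # 1460 1440 (Ethernet)
print(round(100 * overhead(1428, "IPv4"), 1))  # 2.8 (percent)
print(round(100 * overhead(1428, "IPv6"), 1))  # 4.2 (percent)
```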

Connection initiation

In order to exchange the starting sequence numbers the two entities plan to use, the procedure to establish a TCP connection is a handshake process involving an exchange of three messages, referred to as a three-way handshake. The three-way handshake was first introduced 1978 by
