Vendor-Independent Software-Defined Networking : Beyond The Hype





Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Computer and Information Science

Master thesis, 30 ECTS | Datateknik

2019 | LIU-IDA/LITH-EX-A--19/031--SE

Vendor-Independent

Software-Defined Networking

Beyond the hype

Leverantörsoberoende Mjukvarudefinerade Nätverk

Santiago Pagola Moledo

Supervisor: Abhimanyu Rawat
Examiner: Andrei Gurtov


Upphovsrätt

Detta dokument hålls tillgängligt på Internet - eller dess framtida ersättare - under 25 år från publiceringsdatum under förutsättning att inga extraordinära omständigheter uppstår.

Tillgång till dokumentet innebär tillstånd för var och en att läsa, ladda ner, skriva ut enstaka kopior för enskilt bruk och att använda det oförändrat för ickekommersiell forskning och för undervisning. Överföring av upphovsrätten vid en senare tidpunkt kan inte upphäva detta tillstånd. All annan användning av dokumentet kräver upphovsmannens medgivande. För att garantera äktheten, säkerheten och tillgängligheten finns lösningar av teknisk och administrativ art.

Upphovsmannens ideella rätt innefattar rätt att bli nämnd som upphovsman i den omfattning som god sed kräver vid användning av dokumentet på ovan beskrivna sätt samt skydd mot att dokumentet ändras eller presenteras i sådan form eller i sådant sammanhang som är kränkande för upphovsmannens litterära eller konstnärliga anseende eller egenart.

För ytterligare information om Linköping University Electronic Press se förlagets hemsida http://www.ep.liu.se/.

Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a period of 25 years starting from the date of publication barring exceptional circumstances.

The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.


Abstract

Software-Defined Networking (SDN) is an emerging trend in networking that offers a number of advantages over traditional networks, such as smoother network management. By decoupling the control and data planes from network elements, a huge number of new opportunities arise, especially in network virtualization. In cloud datacenters, where virtualization plays a fundamental role, SDN presents itself as the perfect candidate to ease infrastructure management and to ensure correct operation. Even though the original SDN ideology advocates openness of source and interfaces, multiple networking vendors offer their own proprietary solutions. In this work, an open-source SDN solution, named Tungsten Fabric, will be deployed in a virtualized datacenter and a number of SDN-related use-cases will be examined. The main goal of this work is to determine whether Tungsten Fabric can deliver the same set of use-cases as a proprietary solution from Juniper, named Contrail Cloud. Finally, this work will give some guidelines on whether open-source SDN is the right candidate for Ericsson.


Acknowledgments

First of all, I would like to express my gratitude to Ericsson for giving me the opportunity to do this Master’s thesis work. During the past months I have worked under the guidance of many professionals and I have learned a lot from them. I am very grateful for all the help and supervision that I have been given.

I would like to mention my team members David, Thorsten, Peter, Marcus and Younger for the continuous support and feedback I have received from them throughout this thesis work. A special mention goes to Michael Lennartz for his constant feedback and guidance when planning the Tungsten Fabric deployment. Thank you all for the time you took when I needed help.

Last, but not least, a huge thanks goes to Abhi and Andrei for the valuable support and guidance I have received throughout this thesis.

And of course, I would not be here without my parents, my brother and Ida. Thank you for providing me with your non-technical but endless support.


Contents

Abstract

Acknowledgments

Contents

List of Figures

List of Tables

1 Introduction
1.1 Motivation
1.2 Aim
1.3 Research questions
1.4 Delimitations
1.5 Document structure

2 Background
2.1 SDN architecture
2.2 History
2.3 SDN standardization
2.4 SDN and NFV

3 Related Work
3.1 General applications of SDN
3.2 SDN in datacenters
3.3 Network management with SDN
3.4 Extending cloud computing platforms with SDN

4 Method
4.1 Foreword
4.2 Pre-study
4.3 Tungsten Fabric deployment
4.4 Evaluation of Tungsten Fabric

5 Results
5.1 Tungsten Fabric Evaluation

6 Discussion
6.1 Results
6.2 Tungsten Fabric on E2C
6.3 Method
6.4 Economic impact of adopting Tungsten Fabric in Ericsson's datacenters

7 Conclusion
7.1 Future work

Bibliography

A Abbreviations

B Tungsten Fabric Overview
B.1 Introduction
B.2 Architecture Overview
B.3 Tungsten Fabric Docker containers


List of Figures

1.1 Simplified conceptual view of any of Ericsson's three GIC networks.

2.1 Simplified logical view of traditional vs SDN-based network architectures with some example devices interconnected.
2.2 Simplified logical layer-wise decomposition of the SDN architecture.
2.3 Simplified hardware abstraction from the underlying infrastructure for SDN (left) and a regular operating system (right).
2.4 Simplified logical view of the NFV architecture, showing the main interfaces between its key components and interconnections.

3.1 Concept of migrating a VM from one server (light green) to a destination (dark green).
3.2 Concept of Service Function Chaining (SFC).
3.3 Logical view of OpenStack's most common building blocks.

4.1 Layer-wise representation of the mini-PoC.
4.2 Host networking setup for the Tungsten Fabric PoC.
4.3 Use Case 0: A VM with external connectivity beyond the DC-GW.
4.4 Use Case 1: Two VMs, same PoD, interconnected by a virtual L2 domain.
4.5 Use Case 2: Two VMs, in different PoDs, interconnected by a virtual L2 domain.
4.6 Use Case 3a: Two initially isolated IP networks interconnected by a logical router.
4.7 Use Case 3b: Two initially isolated IP networks interconnected by a network policy.
4.8 Concept of importing a cloud-internal virtual network into an existing MPLS L3VPN.
4.9 Proposed modification to the deployed infrastructure.
4.10 Use Case 5: Migration (regardless of its nature) between availability zones.

5.1 Ping Round-Trip Time (RTT) during a live migration between availability zones.
5.2 Ping Round-Trip Time (RTT) during a cold migration.
5.3 Ping Round-Trip Time (RTT) during a cold migration.

6.1 Ping Round-Trip Time (RTT) during a live migration, with a ping interval of 100 ms.
6.2 Datacenter layout of the parallel Contrail Cloud PoC.
6.3 Monthly salary given by the number of internal manpower dedicated to Tungsten Fabric.

B.1 Physical implementation and logical outcome of Tungsten Fabric.
B.2 Architectural overview of the vRouter within a compute host.


List of Tables

2.1 High-level key components of an OpenFlow-compliant network device.
2.2 Components of a flow entry within a flow table.
2.3 Standardization bodies, the working groups within them, their areas of focus and their status as of the time of writing (March 2019).

4.1 Network types in the Tungsten Fabric ecosystem.
4.2 Host inventory of this PoC.
4.3 Evaluation metrics for the use-cases to be executed.

6.1 Extract of virtual network "VN1"'s VRF on OS-CM01, where VM1 runs.
6.2 Extract of virtual network "VN1"'s VRF on OS-CM03, where VM1 runs.
6.3 Approximate values of the different steps in a cold VM migration.
6.4 Inventory of the Contrail Cloud PoC.
6.5 Important configuration files and directories for the Tungsten Fabric deployment in contrail-ansible-deployer.
6.6 Important configuration files and directories for the Tungsten Fabric deployment in contrail-kolla-ansible.

A.1 Abbreviations and their meanings.

B.1 Container grouping into PoDs in the microservice-oriented Tungsten Fabric architecture.

1 Introduction

Software-Defined Networking (SDN) has long been a promising technology with the potential to change both the economics of networking and the way we design and manage our network infrastructure [22]. Unlike traditional IP networks, where configuration and maintenance are carried out on every network element (NE) individually in a time-consuming manner [6], SDN is believed to provide new ways to automate parts of today's network configuration, particularly within the context of cloud environments [64].

SDN is a networking paradigm that aims to separate the network control and data planes. In addition, it proposes a centralized network control that has a global overview of the underlying forwarding plane, thus making network management more effective, scalable and agile [45]. By transitioning from a distributed to a centralized network control, the network becomes programmable, yielding smoother configuration and maintenance. For a more detailed history of SDN and its predecessor ideas, the reader is referred to chapter 2.

1.1 Motivation

The concept of SDN has been promoted for a long time within the scientific community and the network industry, especially after the first release of the OpenFlow protocol back in 2008 [54], but its large-scale application has turned out to be somewhat limited. This is partly due to two reasons, the latter being a consequence of the former: the historical lack of standardization of the SDN architecture and the never-ending vendor race to produce proprietary SDN solutions. Even today, these vendor-dependent solutions appear to continue to dominate, which yields a complex multi-vendor network ecosystem.

As a consequence, vendor-interoperability issues start to arise. For instance, if a company C acquires vendor A's proprietary SDN solution and later on tries to add more network devices provided by vendor B, the integration may fail due to the use of vendor-dependent protocols. Another problem company C may face is that it will remain tightly coupled to vendor A. This may be undesirable if another vendor D starts offering more economical solutions, such that company C would need to re-acquire all its network infrastructure from D, resulting in a significant capital investment. As a consequence, these example scenarios break the

fundamental SDN ideology of decoupling the control and data planes, and their interoperability becomes somewhat limited.

The problem

Ericsson owns three Global ICT Centers (GICs) across the globe, two of which are in Sweden (Linköping and Rosersberg) and one in Canada (Montreal). GICs are data centers for global R&D and product testing activities. These GICs are interconnected with each other and with other networks, composing the Global GIC Network (GGN).

One of the first limitations the GGN faced when it was deployed was that if a user wanted to spawn a VM, e.g. to have a dedicated and isolated environment for testing purposes, there was no standard way of requesting it. After finding the right person within some IT team, a time-consuming process started which would forward the VM request across several IT teams, and after some time, access was granted to the user that requested the VM. This was an inflexible solution, making users wait a long time and thus affecting their productivity.

As a result, a cloud-based, multi-site solution built on top of the GGN architecture, named Ericsson's Engineering Cloud (E2C), was launched with the vision to provide a fast transformation towards a cutting-edge engineering environment. However, this network currently presents other limitations, two of which are described below. These limitations have made Ericsson start exploring SDN-based solutions. Figure 1.1 shows a simplified view of the GIC network architecture. Points-of-Delivery (PoDs) are shown at the bottom, two of which contain a virtual machine (VM).

• Complexity of spawning virtual networks. If VM1 and VM2 need to reside in the same virtual network, the solution today is to manually edit the static configuration on the uppermost routers, since many dependencies exist. This process can take up to several hours and is error-prone.

• VM migration problems. If, for load-balancing purposes, a certain VM needs to be moved from one compute server (or PoD) to another, there is currently no way of performing live migrations, i.e., moving a VM from one physical server to another without interrupting its activity. Even the alternative, called cold migration, is a cumbersome process that currently cannot be done in a proper way.

These limitations exist because the GIC network is not currently implemented as a virtualized solution on top of a static underlay infrastructure, as most modern cloud solutions are, but is instead deployed on physical routers and switches with static configuration. This results in problems ranging from IP networks being tied to static L2 domains to VMs within the same subnet needing to be hosted within the same PoD.

With this in mind, Ericsson is exploring SDN-based networking solutions, such as Contrail Cloud from Juniper or its open-source equivalent Tungsten Fabric from the Linux Foundation, that would mitigate these existing problems and make network administration and management a more straightforward task.


Figure 1.1: Simplified conceptual view of any of Ericsson’s three GIC networks.

1.2 Aim

The primary goal of this Master's thesis work is to explore the possibility of deploying Tungsten Fabric [27] on Ericsson's engineering cloud, E2C, in order to provide means for a transition towards an open-source and vendor-independent SDN-based cloud ecosystem. This goal will be pursued by carrying out a Proof-of-Concept (PoC) in a contained lab environment. Formerly OpenContrail [60], Tungsten Fabric is a network virtualization solution that provides secure connectivity across well-known orchestrators such as OpenStack, Kubernetes or vCenter. Acting as a "plug-in" to such orchestrators, Tungsten Fabric offers a vast array of features and services ranging from networking (overlay network deployments, DHCP/DNS services, etc.) to multi-tenant management. More about Tungsten Fabric's architecture follows in appendix B.

Another goal of this thesis is to make an objective side-by-side use-case comparison between Tungsten Fabric and Juniper's Contrail Cloud SDN solution, which, as of February 2019, has been deployed in one of Ericsson's data centers. The purpose of this comparison is to decide whether it is feasible to deploy Tungsten Fabric at massive scale and whether the same functionality can be achieved compared to Juniper's proprietary solution. This way, not only would the vendor dependency be removed, but active development would also become possible for Ericsson's network administrators, thus achieving more flexibility by not having to deal with a black-box solution.


1.3 Research questions

The following is a list of research questions that this Master's thesis work will try to answer:

1. What are the shortcomings of OpenStack that make SDN-based solutions such as Contrail Cloud or Tungsten Fabric necessary?

OpenStack is an open-source cloud computing platform, usually deployed as Infrastructure-as-a-Service (IaaS), that aims to control and manage large pools of cloud resources, such as compute, networking and storage, through a centralized dashboard. Widely used in datacenters, OpenStack is a collection of projects that together form a complete toolbox to provide resources on demand via a web or command-line interface. OpenStack has previously proven to be the most complete cloud orchestration solution compared to other projects such as Eucalyptus and CloudStack [46], but cloud-oriented SDN solutions such as Tungsten Fabric still aim for a ubiquitous network virtualization standard. Are there any limitations in OpenStack's networking project "Neutron" that SDN-based solutions can mitigate?

2. Does Tungsten Fabric offer better use-cases compared to its counterpart proprietary solution Contrail Cloud?

Contrail Cloud is a telecommunications cloud solution developed and maintained by Juniper Networks, aimed at operating and managing network infrastructure [40]. It offers a large number of features, such as VNF service assurance, an underlying Red Hat OpenStack (RHOS) for virtualization management, Red Hat Ceph for Software-Defined Storage (SDS) and Contrail Networking, among others. Tungsten Fabric also aims to provide similar features to yield connectivity across popular cloud orchestrators such as Kubernetes or OpenStack. The question that arises at this point is: does Tungsten Fabric offer use-cases that are of higher interest for Ericsson's E2C compared to those offered by Contrail Cloud?

3. What are the challenges and implications that may arise when scaling up the Tungsten Fabric setup to Ericsson's E2C if it is to replace Juniper's Contrail Cloud in the foreseeable future?

The previous section stated that a PoC will be carried out in which Tungsten Fabric will be studied and deployed. This will naturally be done in a lab environment with a handful of servers and OpenStack instances. When it comes to a massive-scale deployment, as is the case for E2C, a number of hindrances are likely to come up. These need to be identified, analyzed and overcome by conducting a thorough analysis of both E2C's and Tungsten Fabric's respective architectures.

1.4 Delimitations

The deployment of Tungsten Fabric, although it conceptually targets Ericsson's engineering cloud E2C, will be carried out in a contained lab environment. The reason is the following: as already mentioned, Juniper's proprietary Contrail Cloud solution has recently been deployed in Ericsson's facilities, meaning that there already is a working SDN solution in service. As mentioned in 1.2, the goal of this thesis is to carry out a feasibility study on whether Tungsten Fabric may replace Contrail Cloud in the future.

Generally speaking, a well-known pitfall of transitioning towards open-source software is the lack of 24/7 support when problems arise. This support is instead replaced by an active community of developers, network administrators and users. This poses the following possible risk: other people may have never stumbled upon the exact same problem that one


is facing, thus needing to wait until a discussion is started or until one solves the problem oneself.

1.5 Document structure

This thesis report is structured as follows: chapter 2 covers a summarized history of the SDN concept, standardization bodies and a brief comparison of the SDN and NFV concepts. Chapter 3 presents related work on applications of SDN. Chapter 4 covers the methodology used in this thesis work, chapters 5 and 6 present the results obtained and a discussion of these, respectively, and finally the conclusions are drawn in chapter 7. Additional material is provided in the appendices.

2 Background

This chapter provides some background on the SDN architecture, its history and evolution since the first network virtualization attempts, and also presents past and current standardization efforts together with the primary drivers behind them.

2.1 SDN architecture

Software-Defined Networking (SDN) is a popular network architecture paradigm that, as opposed to traditional IP networks, aims to logically detach the control from the network forwarding devices; in other words, to separate the control and data planes. In legacy IP networks, the control layer, responsible for making decisions on incoming packets, such as routing, forwarding and path computation, is tightly coupled to the data layer, which actually performs the actions dictated by those decisions. This has a limitation: when a need to reconfigure the network arises, every device needs to be reprogrammed individually, which is considerably time-consuming and error-prone. By pulling the control plane out of every network device, these devices become much simpler and network configuration is highly simplified. As an analogy, the control layer is often referred to as the "brains" of the network, providing the intelligence and decision-making, whereas the data/forwarding layer is referred to as the "muscles", i.e., the actual forwarding of packets through the network.

In an SDN architecture, the control of the network is logically centralized, meaning that a single, logical entity, called the SDN controller or Network Operating System (NOS), has an abstract view of the underlying data plane composed of the forwarding devices. These devices may be switches, routers or middleboxes (such as firewalls). The NOS provides means for configuring these devices, located on the data plane, via southbound application programming interfaces (APIs), the most common, widely accepted and extremely configurable being OpenFlow, originally designed to work with Ethernet-based switches [54].

Figure 2.1 illustrates the clear separation between the control and data planes described above. The red color depicts the control layer and the blue color represents the forwarding plane or data layer. In figure 2.1b the SDN controller (or NOS) has an abstract view of the data plane (blue) representing the network topology, as opposed to figure 2.1a, where every


(a) Traditional IP network example (b) SDN-based network example

Figure 2.1: Simplified logical view of traditional vs SDN-based network architectures with some example devices interconnected.

forwarding device has policies and configuration directly done on it.

Above the control layer resides a third layer, usually referred to as the management or application layer. This is the layer from which network configuration and management are initiated. The NOS is connected to this layer through its northbound API, which provides an interface for applications residing on this layer to talk to the NOS.

Figure 2.2: Simplified logical layer-wise decomposition of the SDN architecture.

A layer-wise representation of SDN is shown in figure 2.2. Two additional interfaces between SDN controllers, i.e., the eastbound and westbound interfaces, have also been depicted. These are used for inter-controller communication [39]. The reason


for this naming is that, in the traditional "vertical" representation of network stacks, the y (vertical) dimension is normally used to show a layer's relationship with upper and lower layers, whereas in a multi-SDN scenario communication may be needed between several NOSes, thus requiring an extra x (horizontal) dimension.

With this in mind, one can think of SDN as a means of abstracting the underlying complexity of the network topology from the administrator who configures it, in a similar way that a regular operating system abstracts the details of the hardware it is running on from the user. It is this layer of abstraction that makes applications running in user-space (or in the management plane in the case of SDN) perceive an abstract and simplified view of the network, thus making configuration - and use - noticeably easier. Figure 2.3 depicts the resemblance of the abstraction layer that the (network) operating system provides.

Figure 2.3: Simplified hardware abstraction from the underlying infrastructure for SDN (left) and a regular operating system (right)

As already explained, the forwarding devices residing on the data plane of an SDN architecture perform the forwarding of packets according to the decisions made by the NOS. An aspect of these forwarding actions worth mentioning is that the decisions are flow-based, as opposed to packet-based as is the case in traditional networks [45]. A flow is defined as a sequence of one or more packets from a source to a destination. Examples of flows are a file download via FTP or all web traffic from a server to a client. The next subsection presents the OpenFlow protocol.
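The flow abstraction can be made concrete with a short sketch. The following toy Python model (the dict-based packet representation and its field names are invented for illustration, not any controller's actual API) groups packets into flows by their 5-tuple, the classic flow identifier:

```python
from collections import defaultdict

# A flow is commonly identified by its 5-tuple: all packets sharing
# these fields belong to the same flow and receive the same
# forwarding decision under flow-based forwarding.
def five_tuple(pkt):
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["proto"])

def group_into_flows(packets):
    flows = defaultdict(list)
    for pkt in packets:
        flows[five_tuple(pkt)].append(pkt)
    return flows

packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 34512, "dst_port": 80, "proto": "tcp"},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 34512, "dst_port": 80, "proto": "tcp"},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.2",
     "src_port": 40001, "dst_port": 80, "proto": "tcp"},
]
flows = group_into_flows(packets)
# Two distinct flows: one with two packets, one with a single packet.
```

Under this view, a flow-based device makes one forwarding decision per key in `flows` rather than one per packet, which is precisely what makes centralized, rule-driven control tractable.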

OpenFlow

Undoubtedly the most common southbound protocol for network device configuration, OpenFlow is one of the first standards within SDN. Maintained by the Open Networking Foundation (ONF), this protocol allows network administrators to define rules that are pushed down to the forwarding-layer devices in order to control flows, perform traffic engineering and test new configurations. The latest version of OpenFlow is 1.5.1, released in 2015 by the ONF.

In order to be able to use the OpenFlow protocol, the underlying networking hardware must include support for it. There are a number of advantages this protocol offers, such


Group Table: Consists of group entries, where each entry has a unique identifier, a group type, some counters and action buckets.

Flow Tables: Flow entries, or rules, are installed in flow tables. An OpenFlow-based network device can contain multiple flow tables, and these form its internal pipeline through which every incoming packet travels.

OpenFlow Channels: These act as the interface towards the SDN controller on the southbound interface. It is through these channels that SDN controllers can both manage OpenFlow-based networking devices and receive events from them.

Table 2.1: High-level key components of an OpenFlow-compliant network device.

as enforcing a centralized intelligence (as the SDN paradigm dictates) and facilitating innovation due to its ease of use. Furthermore, any given OpenFlow-compliant networking device can be either OpenFlow-only or OpenFlow-hybrid. While the former is self-explanatory, the latter supports both OpenFlow and traditional Ethernet L2 switching operations.

Originally developed in 2008 by Nick McKeown et al. [54] at Stanford University, OpenFlow was introduced as a draft for running experimental protocols in Ethernet-based switches. The authors believed that network protocol development had stagnated due to the enormous amount of already deployed IP-based network equipment. This motivated them to propose OpenFlow as a means of facilitating programmable networks that could be adapted or configured according to researchers' needs. This would eventually lead to the coining of the SDN concept [31].

OpenFlow is based around the idea of enabling flow-table programming of networking devices, such as switches or routers. An OpenFlow-compliant network device comprises the following elements: a group table, one or more flow tables and one or more OpenFlow channels [26]. Table 2.1 summarizes their functionality.

Of particular interest are flow entries, which reside inside flow tables. Each flow entry is composed of a number of match fields, a set of counters and instructions to execute upon a successful match. Table 2.2 provides a brief description of these components.

Although OpenFlow is a widely supported southbound protocol in SDN networks, there exist other common protocols, such as NETCONF, Open vSwitch Database (OVSDB) or Border Gateway Protocol (BGP), that are supported by well-known SDN controllers such as OpenDaylight. Another example is Tungsten Fabric, which, as will be thoroughly explained in appendix B, uses an Extensible Messaging and Presence Protocol (XMPP)-based variant on its southbound interface between the Tungsten Fabric SDN controller and the vRouter. Nevertheless, the OpenFlow protocol still allows rapid prototyping and deployment of experimental software-defined networks, something of particular interest for the scientific community due to the compatibility with traditional and vendor-based Ethernet switches, as was Das et al. [17]'s original vision.
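To make the flow-entry structure concrete, the minimal Python sketch below models a flow table whose entries carry match fields, a packet counter and an action, with a table-miss fallback that sends unmatched packets to the controller. This is an illustrative toy model only: the class and field names are invented and do not correspond to the OpenFlow wire format or any real switch implementation.

```python
class FlowEntry:
    """One rule in a flow table: match fields, a counter, an action."""
    def __init__(self, match, action, priority=0):
        self.match = match          # dict of field -> required value
        self.action = action        # e.g. ("output", port) or ("drop",)
        self.priority = priority
        self.packet_count = 0       # counter, queryable by the controller

    def matches(self, pkt):
        # A packet matches if every specified field agrees;
        # fields absent from the match act as wildcards.
        return all(pkt.get(f) == v for f, v in self.match.items())


class FlowTable:
    def __init__(self, table_miss=("to_controller",)):
        self.entries = []
        self.table_miss = table_miss  # default action when nothing matches

    def install(self, entry):
        # The controller "pushes down" rules; highest priority wins.
        self.entries.append(entry)
        self.entries.sort(key=lambda e: -e.priority)

    def process(self, pkt):
        for entry in self.entries:
            if entry.matches(pkt):
                entry.packet_count += 1
                return entry.action
        return self.table_miss


table = FlowTable()
table.install(FlowEntry({"dst_ip": "10.0.0.2", "in_port": 1},
                        ("output", 2), priority=10))
table.install(FlowEntry({"proto": "udp"}, ("drop",), priority=5))

action = table.process({"dst_ip": "10.0.0.2", "in_port": 1, "proto": "tcp"})
# -> ("output", 2); a packet matching no entry triggers the table-miss action
```

The sketch captures the division of labor described above: the controller installs the entries, the device merely evaluates matches and executes actions, and the counters give the controller visibility into the data plane.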

Match Fields: A subset of bits within a flow entry that is matched against an incoming packet. OpenFlow supports a vast array of matching criteria, such as source/destination MAC address, VLAN ID, L4 transport protocol, source/destination IP address, switch ingress port, etc.

Counter Set: These provide OpenFlow statistics that can be queried by the SDN controller, such as the number of packets or bytes matching the criteria specified in the match fields, the number of dropped packets, etc.

Instruction Set: The actions that take place upon a successful match of a given packet against a given flow entry. Examples of instructions are, but are not limited to: dropping the packet, forwarding it through a specific port of the switch, or sending it back to the controller.

Table 2.2: Components of a flow entry within a flow table.

2.2 History

Even though the concept of SDN has attracted a lot of attention throughout the past 10 years as a promising technology for interacting with networks, the underlying pillars SDN is based upon have a longer history. Some of these foundations are network programmability, separation of the control and data planes, and network virtualization. This section aims to provide a brief timeline of related efforts made within these topics throughout the years, which would eventually lead to the inception of the software-defined networking concept.

Active Networks

Many scientific articles [45, 22, 56, 67, 11] about SDN history and evolution agree on active networks being the first attempt at network programmability. The term originated within the Defense Advanced Research Projects Agency (DARPA) in 1994 [66], when the future of networking was being discussed. In short, the main idea behind active networks is that networking devices, such as switches and routers, have the ability to perform simple computations on, or modifications of, packets flowing through them. Incoming packets can contain small-sized executable code, originated from a user application, that is run on such devices. This is where the idea of active networking came from: networks are active in the sense that nodes can modify packets instead of acting as purely passive forwarding devices.

In a survey conducted by Tennenhouse et al. [66] in 1997, the authors present the research status of active networks until then, and they discuss two approaches that the active networking community was pursuing at that time. They claim that active networks research was driven by technology pushes, such as the emergence of new active technologies that make network programmability possible, and pulls, representing the collection of network elements that enable user-originated computations on the network nodes. The authors go on to present two distinct approaches to active networks that were actively pursued, namely the programmable switches and capsule models. The former clearly separates the code to execute at each node from the processing of messages contained in incoming packets, whereas the latter encapsulates the executable code in transmission frames, which is run at every node along the path. Later on, the capsule model would end up being more closely related to active networking in the sense that it conceived the idea of data-plane programmability by carrying code in data packets.

Other authors, such as Calvert et al. [8], focus on the future directions active networking was heading towards in 1998. In particular, they state the challenges such networks should address, such as preventing networks from wasting resources due to implicit limitations, developing efficient schedulers to tackle protocol-related processing issues and



fast access to low-level resources. Closely related to the last challenge, Hicks et al. [35] present a lightweight programming language called Packet Language for Active Networks (PLAN) to program packets flowing through the network, which would replace packet headers and thus simplify packet processing.

Many lessons have been learned from active networks since their creation, three of which are remarked upon by the authors of [22], who believe that such contributions are tightly coupled to SDN: programmable network parts (or functions) which enable innovation, network virtualization (in terms of experimenting with multiple network programming models running on top of the same physical infrastructure) together with fast packet demultiplexing (as was the goal of PLAN), and the perception of a unified architecture for orchestrating middlebox instances.

Separation of control and data planes

As described in section 2.1, one of the fundamental aspects that SDN advocates is the clear separation of the control and data planes. This can be seen as a key enabling factor for achieving programmable networks, since the intelligence of the network is pulled out of the packet forwarding devices, yielding centralized network control.
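
As a purely illustrative sketch (no real controller or protocol is modeled; the Controller and Switch classes below are hypothetical), the separation can be pictured as a centralized control program that computes paths over a global topology view and pushes forwarding entries down to otherwise dumb switches:

```python
# Toy model of control/data plane separation. A logically centralized
# Controller holds the global topology and computes shortest paths;
# Switch objects do nothing but look up pre-installed entries.
from collections import deque

class Switch:
    """Data plane: a bare forwarding table, no routing logic."""
    def __init__(self, name):
        self.name = name
        self.table = {}              # destination -> next hop

    def forward(self, dst):
        return self.table.get(dst)   # None means no rule installed

class Controller:
    """Control plane: global network view plus path computation."""
    def __init__(self, links):
        self.adj = {}
        for a, b in links:
            self.adj.setdefault(a, []).append(b)
            self.adj.setdefault(b, []).append(a)

    def shortest_path(self, src, dst):
        prev, seen, queue = {}, {src}, deque([src])
        while queue:                 # plain breadth-first search
            node = queue.popleft()
            if node == dst:
                path = [dst]
                while path[-1] != src:
                    path.append(prev[path[-1]])
                return path[::-1]
            for nxt in self.adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    prev[nxt] = node
                    queue.append(nxt)
        return None

    def install_route(self, switches, src, dst):
        path = self.shortest_path(src, dst)
        for here, nxt in zip(path, path[1:]):
            switches[here].table[dst] = nxt  # push rule to the data plane

switches = {n: Switch(n) for n in "ABCD"}
ctl = Controller([("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")])
ctl.install_route(switches, "A", "C")
print(switches["A"].forward("C"))    # next hop chosen by the controller
```

The point of the sketch is that the switches never run a routing algorithm themselves; all path computation happens once, in the controller, which then programs the forwarding tables.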

However, this idea of control and data plane separation has its roots back in the 1980s, when Sheinbein et al. [63] proposed the Stored Program Controller (SPC) Network, an architectural concept applied to AT&T's so-called 800 service used in their national telephony infrastructure. This SPC network was composed of several interconnected switching offices (or ACPs) using a packet signaling system called Common-Channel Interoffice Signaling (CCIS). This would allow AT&T to overcome existing challenges in their legacy infrastructure, such as routing inflexibility and restricted network management. Of particular interest in this architecture was the introduction of processor-controlled databases across the network that were accessible via CCIS, called Network Control Points (NCPs). ACPs could then query the different NCPs to find out how to establish a call. The global view of the network that these NCPs provided boosted innovation and efficiency in AT&T's networks. This is considered one of the first attempts to separate the control (NCPs) and data (underlying switching infrastructure) planes and to provide a logically centralized network view (physically distributed across multiple NCPs).

Since the NCPs, there have been numerous initiatives and working groups (WGs) targeting the separation of control and data planes. One of the most important WGs, which has published a handful of standards within this topic, is the Internet Engineering Task Force's (IETF) Forwarding and Control Element Separation (ForCES) [23]. The primary motivation behind the creation of ForCES was the need for open and standardized interfaces between Control Elements (CEs) and Forwarding Elements (FEs). During its active period, this WG produced a set of standards that systematically define how to perform such separation by creating a well-defined system model of the forwarding plane network elements. Moreover, the standards proposed by ForCES provide means to build ForCES-based network elements and information on how to integrate these into already deployed network infrastructure. The WG was terminated in April 2015 and no further standards or activity are expected. Refer to section 2.3 for a list of some standards produced by ForCES.

1 The mailing list thread can be found here, and the working group was later marked as concluded in the IETF's

Haleplidis et al. [33] present in their work an in-depth study of how ForCES defines control and data plane separation. The authors believe that a major advantage of this model is that any ForCES-compliant network element is virtually identical to that of a



proprietary vendor, thus making integration and deployment uncomplicated. They compare the ForCES model with state-of-the-art management (SNMP, NETCONF) and control (GSMP, OpenFlow) protocols and describe how these relate to ForCES.

There have been more recent approaches to separating the control and data layers. Two notable projects are SANE [10] and ETHANE [9], both authored by Martin Casado, co-founder of Nicira Networks. Secure Architecture for the Networked Enterprise (SANE) was brought to life in 2006 as a way to address some limitations that enterprise network connectivity was facing, such as multiple Access Control Lists (ACLs) and complex routing and bridging policies that hindered enterprise network protection. The authors provide an overview of the existing techniques for securing networks, identify their flaws (such as distributed policies and management complexity [6]) and, as a result, propose SANE. The fundamental ideas behind SANE are the following: (1) the allowance of high-level, topology-independent, simple policies that are enforced in the network devices; (2) policy enforcement carried out at the link layer; (3) hiding the network topology from attack-vulnerable end-hosts; and (4) having a unique, logically centralized trusted entity in the network that is responsible for enforcing the defined policies. They go on to analyze the practical implications of deploying SANE on a real network and show that it can successfully be deployed with only light modifications, despite being a clean-slate approach to enterprise network security.

ETHANE was proposed in 2007, roughly a year after SANE's creation. Being very similar to SANE, the authors [9] explain that the deployment of SANE turned out to be more difficult than they had anticipated, due to the fact that such an approach was unprecedented. ETHANE extended SANE in terms of incremental deployment: while its predecessor required enterprises to replace the whole network infrastructure and end-hosts, ETHANE made a progressive delivery process possible, so that installation would be incremental, thus making it more attractive for enterprises. Another considerable improvement with respect to SANE is that a network-wide, policy-aware approach to enterprise network management also makes security easier since, according to the authors, security is a subset of management. They describe their experience with the deployment of ETHANE in the Stanford Computer Science department's Ethernet network with approximately 300 hosts. The authors conclude by stating that (1) ETHANE management was easier than expected; (2) inclusion of new features or protocols is straightforward; (3) the controller can be scaled up to support thousands of hosts; and (4) switches perform best when they are kept simple, with minimal or no control capabilities whatsoever.

From SANE and ETHANE one can clearly see how they aim for control and data plane separation by defining a logically centralized network control plane that supervises underlying forwarding plane elements. One of the most recent approaches to such separation is, as described in section 2.1, the OpenFlow protocol, managed by the ONF. One key characteristic these protocols and projects have in common is that they do not require substantial modification of the forwarding devices (most likely only a careful firmware upgrade [48]). This is one important reason why OpenFlow has gained a lot of traction in industry during the past years, with support from big network hardware vendors such as Cisco2, Juniper3 and D-Link4.
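
To make the flow-entry idea concrete, the following toy fragment mimics OpenFlow-style matching: each entry pairs a priority and a set of match fields with a list of actions, and a packet is handled by the highest-priority entry whose fields all match, with a zero-priority entry playing the role of the table-miss entry. The field names loosely follow OpenFlow terminology, but this is an illustrative sketch, not a real OpenFlow implementation:

```python
# Hypothetical flow table: (priority, match fields, actions).
FLOW_TABLE = [
    (200, {"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.2"},
          ["output:2"]),
    (100, {"in_port": 1}, ["output:controller"]),  # punt unmatched traffic
    (0,   {},             ["drop"]),               # table-miss entry
]

def apply_flow_table(packet):
    """Return the actions of the highest-priority matching entry."""
    for priority, match, actions in sorted(FLOW_TABLE, key=lambda e: -e[0]):
        # An entry matches when every one of its fields equals the
        # corresponding packet field; an empty match matches anything.
        if all(packet.get(field) == value for field, value in match.items()):
            return actions
    return ["drop"]

pkt = {"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.2"}
print(apply_flow_table(pkt))             # ['output:2']
print(apply_flow_table({"in_port": 3}))  # ['drop']
```

In a real switch this table is populated by the controller over the southbound interface; here the entries are hard-coded purely to show the match/action shape of a flow rule.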

2 Cisco Plug-in for OpenFlow Configuration Guide 1.1.5 - link
3 OpenFlow Support on Juniper Networks Devices - link
4 D-Link press release about offering SDN-enabled switches - link



Network virtualization

There is a third important idea from which SDN takes inspiration, namely network virtualization. It can be defined as the process of defining a network by abstracting it from its underlying hardware infrastructure. In network virtualization, multiple (virtual) networks can co-exist on the same physical infrastructure (or substrate), thus allowing for network technology innovation and mitigating the so-called network ossification [68].

One of the pioneering efforts on virtual networking was The Tempest [55], dating back to the late 1990s. In their work, van der Merwe et al. introduce The Tempest, a framework for providing a programmable network environment in ATM networks. It is based on the idea of partitioning the resources of existing switches into separate logical entities, called switchlets [47], which are managed by distinct switch controllers, thus allowing multiple control architectures to run on top of the same physical switch fabric. The Tempest therefore allows both the creation and enforcement of network policies that are internal to the different virtual networks. Furthermore, it allows end-users to act as network administrators, in the sense that they can dynamically associate and control network resources from the application level, a concept the authors refer to as connection closure. As a consequence, The Tempest frees network operators from having to define a unified control architecture that supports all possible future services; instead, it offers the freedom to select the right architecture for their proprietary services.

A more recent network virtualization initiative was carried out by Koponen et al. [43], who introduce NVP, an SDN-based network virtualization platform whose main target environments are enterprise multi-tenant datacenters (MTDs). This comes as a response to the lack of scientific papers on the actual implementation of network virtualization systems. The authors believe that MTDs are complex to construct because networking is not virtualized as a single abstract entity, but rather in individual parts, such as L2 domains virtualized as VLANs or paths virtualized as MPLS. This makes network configuration error-prone and demands excessive operator activity. NVP is implemented around the concept of a network hypervisor, which provides the necessary virtualization control and packet abstractions needed to host and manage the overlay virtual networks. The network hypervisor resides on a separate logical layer between the physical forwarding infrastructure and the control plane of the tenants. NVP uses Open vSwitch (OVS) to forward packets through the nodes and OpenFlow as a southbound protocol to configure those switches.

Feamster et al. [22] draw a relationship between SDN and network virtualization, stating that both concepts are interrelated in three ways. Firstly, they believe SDN acts as a technology that facilitates network virtualization: by having a logically centralized control as SDN dictates, virtualization solutions can be deployed as overlay networks (as is the case for NVP [43]) to serve tenants with the view of a single switch interconnecting all virtual machines. This logical switch is managed by rules (flow entries) installed from the controller, following the SDN philosophy. Secondly, network virtualization serves as a technology to experiment with SDN; Feamster et al. mention Mininet [58] as an example of a process-based network virtualization platform broadly used in the scientific community to test new SDN-based architectures in a controlled environment. Finally, SDN virtualization, or slicing, becomes feasible since virtualizing a network device that has no embedded control plane is much easier than virtualizing its traditional counterpart, whose embedded control plane must be virtualized too, once per virtual device instance.
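
As a toy illustration of the slicing idea discussed above, one can picture a thin layer that dispatches each packet to the controller owning its slice. The Slicer class and tenant tags below are hypothetical, loosely inspired by how slicing platforms partition traffic among per-tenant controllers:

```python
# Minimal sketch of network slicing: one shared substrate, several
# virtual networks, each identified by a tenant tag and served by
# its own controller callback.
class Slicer:
    def __init__(self):
        self.slices = {}                 # tenant tag -> controller callback

    def register(self, tag, controller):
        self.slices[tag] = controller

    def dispatch(self, packet):
        """Hand the packet to the controller owning its slice."""
        handler = self.slices.get(packet["tenant"])
        return handler(packet) if handler else "drop: unknown slice"

slicer = Slicer()
slicer.register("red",  lambda p: f"red ctl handles {p['dst']}")
slicer.register("blue", lambda p: f"blue ctl handles {p['dst']}")
print(slicer.dispatch({"tenant": "red", "dst": "vm-1"}))
```

Each tenant's controller sees only its own slice of the traffic, which is the isolation property that makes multiple control architectures on one substrate possible.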

Similarly, Liyanage et al. [50] explain in a pedagogical manner the relation of SDN to network virtualization and NFV. As Feamster et al. [22] also describe, the idea of decoupling control from the underlying physical infrastructure is common ground



for both SDN and network virtualization. Furthermore, Liyanage et al. [50] remark that SDN is not necessarily a requisite for achieving network virtualization: SDN can act as an enabler for network virtualization, but network virtualization can also be implemented without the help of SDN.

In conclusion, even if SDN appears to be a new technology, its foundations date back up to 40 years, and they have made it possible for SDN to gain the attention and popularity it has today in modern networks. Although arguments such as reduction in capital and operational expenditure (better known as CAPEX and OPEX, respectively) and dynamic network configuration are commonly quoted as benefits of SDN, many of these advantages come from the very foundations SDN is based upon, which have been described in this section.

2.3 SDN standardization

This section aims to provide an overview of some Standards Developing Organizations (SDOs) that focus on SDN or NFV, together with past and current Working Groups (WGs) within these and their purpose.

Institute of Electrical and Electronics Engineers (IEEE)

The Standards Association of the well-known engineering institution (IEEE-SA) has numerous ongoing working groups focusing on different areas, such as future overlay networks, virtualized environments and middleware for network control and management.

Among their most notable standard contributions is IEEE 1903.3-2017 [20] for next generation service overlay networks. The working group in charge of this standard bears the same name, i.e., NG-SON, and it attempts to specify self-organizing management protocols in such networks. Other working groups worth remarking on are SVE, PVE and RVE, which focus on the security, performance and reliability of virtualized environments, respectively.

In addition, IEEE-SA also has an additional working group, called QuantumComm, whose focus is Software-Defined Quantum Communication (SDQC), an application-layer protocol which attempts to include quantum endpoints in communication networks. This represents a long-term vision and belief in SDN from IEEE.

Internet Engineering Task Force (IETF)

The IETF is undoubtedly one of the main drivers of SDN standardization. It produces standards in numerous areas: applications and real-time, general, Internet, operations and management, routing, security and transport. Within each of these areas, numerous working groups work on specific standards and recommendations that are then sent for review and eventually published.

There are a number of standards that the IETF has produced around SDN that are widely used and adopted in industry, such as the MPLS protocol, ForCES and NETCONF. The working groups in charge of these standards bear the same names as the protocols they standardize. It is worth noting that while the IETF has working groups, which focus on short-term engineering issues and standardization, the IRTF, a sister organization of the IETF, has research groups that instead focus on longer-term Internet-related issues.

Internet Research Task Force (IRTF)

Closely tied to the IETF, the IRTF promotes long-term research on Internet-related issues. As already mentioned, it is formed by various research



groups, the most relevant of which to SDN is the SDNRG.

Closed in January 2017 [24], this research group focused on areas of interest such as definitions and taxonomies of SDN models, scalability issues, system complexity and surveys of SDN approaches and technologies.

Open Networking Foundation (ONF)

The ONF is a member of the Linux Foundation (LF), meaning that all ONF-hosted projects are considered part of the LF's project portfolio. It is a non-profit organization led by members from multiple companies with different roles: partners such as AT&T, Comcast, Google and Dell EMC; innovators such as Ericsson, Broadcom and Huawei; and collaborators such as Akamai, Infosys and Ubuntu, among others. The ONF advocates open-source software, network disaggregation and software-defined standards, and it actively aims to address existing limitations in both computer and communication networks [69].

All SDN-related standards the ONF works on are published as either (1) Technical Specifications, including protocol definitions, information models and framework specifications; (2) Technical Recommendations, i.e., standards containing APIs, data models and protocols licensed under the Apache 2.0 license or even copyrighted; or (3) Informational documents, which help convey ONF's mission and development.

Among the most famous protocols and standards managed by the ONF are OpenFlow [26], Mininet [58] and ONOS [25]. To this day, the ONF is the primary organization behind SDN that actively promotes standardization of SDN technology.

3rd Generation Partnership Project (3GPP)

Although 3GPP mostly covers the standardization of telecommunication networks, one working group within the Service and Systems Aspects group (TSG-SA), named SA5 (Telecom Management), has produced multiple specifications in the area of NFV applicability in mobile core networks, for instance TS 28.500 [30], TS 32.871 [29] and TS 28.311 [28].

European Telecommunications Standards Institute (ETSI)

ETSI produces multiple standards in areas such as NFV and telecommunications because they believe that standards provide reliability, interoperability and business benefits such as open market access. ETSI's Industry Specification Group (ISG) for NFV was formed by a multitude of telecom network operators in November 2012 in order to produce recommendations and requirement specifications for NFV.

ETSI ISG NFV releases NFV architectural versions every two years, the latest being NFV Release 3. The different working groups within the ISG work on various architectural areas, such as information modeling, policy management and security analysis.

Table 2.3 summarizes the aforementioned SDOs, WGs, their objective and current status.

SDO | WG | Purpose | Status
IEEE | SDN-MCM | SDN-based Middleware for Control and Management. Middleware specification for vendor-independent management and control of wireless networks in accordance with the Software Defined Networking (SDN) paradigm. | Active
IEEE | SDNBP | SDN Bootstrapping Procedures. Specification of bootstrapping mechanisms for SDN architectures. | Active
IEEE | NG-SON | Next Generation of Service Overlay Networks. Protocol development for service composition, content delivery and self-organizing management. | Active
IEEE | PVE, RVE, SVE | Performance/Reliability/Security for Virtualized Environments. Framework development, including characteristics, metrics, requirements, models and use-cases for SDN/NFV. | Active
IEEE | QuantumComm | Software-Defined Quantum Communication. Defines the Software-Defined Quantum Communication (SDQC) protocol that enables configuration of quantum endpoints in a communication network in order to dynamically create, modify or remove quantum protocols or applications. | Active
IETF, IRTF | SDNRG | Software-Defined Networking Research Group. One of the main RGs within the IRTF and no longer active, SDNRG focused on various SDN model aspects such as classification, definitions and taxonomies, as well as scalability, applicability, programmability and complexity of SDN networks. Issues such as security, network description languages and interfaces were also investigated. | Inactive
IETF, IRTF | I2RS | Interface to the Routing System. Real-time and event-driven interaction with the routing system in IP networks through protocols, abstractions and interfaces. High-level architecture design for I2RS, specific and well-defined use cases such as interaction with the RIB/FIB. | Active
IETF, IRTF | MPLS | Multiprotocol Label Switching. Standardization of technology for label switching, implementations for label-switched paths to be used in packet-based link-level environments. | Active
IETF, IRTF | PCE | Path Computation Element. Responsible for specifying protocols for PCE-based path computation in MPLS/GMPLS networks. They also work on the definition and extension of existing architectures for Traffic Engineering. | Active
IETF, IRTF | NVO3 | Network Virtualization Overlays. Development of protocols and/or protocol extensions to enable network virtualization in data centers with IP-based underlays. | Active
IETF, IRTF | SFC | Service Function Chaining. Among their most notable contributions are RFCs 7665 and 8300 for service function chaining. | Active
IETF, IRTF | TEAS | Traffic Engineering Architecture and Signaling. Responsible for defining IP, MPLS and GMPLS traffic engineering architecture and identifying required related control-protocol functions, i.e., routing and path computation element functions. The TEAS group is also responsible for standardizing RSVP-TE signaling protocol mechanisms that are not related to a specific switching technology. | Active
IETF, IRTF | NETCONF | Network Configuration. Development and maintenance of the NETCONF and RESTCONF protocols. | Active
IETF, IRTF | ForCES | Forwarding and Control Element Separation. Creation of a framework, requirements and protocols for ForCES, an initiative to separate and standardize interfaces between Control Elements (CEs) and Forwarding Elements (FEs) of networks. As described in 2.2, this working group advocated the need for open and well-defined interfaces between control and data planes. | Inactive
ONF | TC | Test & Certification. Accelerate development and adoption of OpenFlow through testing and certification. | Active
ONF | MEC | Market Education Committee. Educate the SDN community on OpenFlow-based SDN network solutions and promote ONF standards. | Active
ONF | EXT | Extensibility. Address the needs for OpenFlow switch deployments and application-specific extensions to be developed. | Active
ONF | CM | Configuration & Management. Address core operations, administration and management issues such as bootstrapping operations of OpenFlow switches. | Active
ONF | Migration | Development of methods to migrate traditional network services to an OpenFlow-based SDN architecture. | Active
ONF | ARCH | Architecture & Framework. Identification of broad problems to be addressed in the SDN architecture. | Active
ONF | OT | Optical Transport. SDN and OpenFlow-based control capabilities for optical transport networks. | Active
ONF | FA | Forwarding Abstractions. Definition of hardware forwarding abstractions such as TTP and the forwarding plane model. | Active
ONF | WM | Wireless & Mobility. Identification and development of use cases and extension of ONF-based technologies to these two domains. | Active
ONF | NBI | Northbound Interfaces. Standardization of various SDN controller northbound interfaces. | Active
3GPP | SA5 | Telecom Management. Specification of requirements, architecture and solutions for network provisioning and management. | Active
ETSI | NFV-TST | Testing, Experimentation and Open Source. Specific metrics, testing and benchmarking. | Active
ETSI | NFV-SOL | Solutions. Definition of APIs, data models and artifacts. | Active
ETSI | NFV-REL | Reliability, Availability and Assurance. Development of techniques and specifications in areas of reliability and availability in NFV-based virtual environments. | Active
ETSI | NFV-IFA | Interfaces & Architecture. Description of architecture, interfaces and information models. | Active
ETSI | NFV-SEC | Security. Identification of security challenges and recommended actions. | Active
ETSI | NFV-EVE | Evolution and Ecosystem. Documentation of NFV-related best practices, use-case analysis of infrastructure-, software-, management- and orchestration-related features. | Active
ETSI | NFV-SWA | Software Architecture. Definition of a reference software architecture for VNFs in NFV environments. | Inactive
ETSI | NFV-MAN | Management and Orchestration. Management and operations role of NFV-based virtual environments. It has been continued as part of the OPNFV project hosted by the Linux Foundation. | Inactive
ETSI | NFV-PER | Performance. Performance- and portability-related recommendations and NFV issue identification. | Inactive
ETSI | NFV-INF | Architecture of the Virtualization Infrastructure. Proposal of and requirements for a reference architecture of the NFV virtualization infrastructure. | Inactive

Table 2.3: Standardization organizations, the working groups within these, their area of focus and status as of the date of writing of this document (March 2019).

2.4 SDN and NFV

So far in this chapter, the concept of SDN has been thoroughly introduced: first by presenting its architecture, then by establishing a technology timeline of the very foundations SDN is built upon, and finally, in the previous section, by discussing the standardization efforts being made around SDN, showing the major organizations, and working groups within these, that are actively involved in developing requirement specifications and standards to yield an open SDN ecosystem.

One of the main SDN-related technologies that is likely to appear when reading about SDN is Network Functions Virtualization (NFV). Although mostly targeting telecommunication networks, NFV is tightly linked to SDN and cloud computing in the sense that it advocates virtualization in terms of abstracting functionality from the underlying hardware [50]. NFV can be defined as a networking paradigm whose fundamental core concept is to decouple network functions (NFs) from the hardware they run on. This way, capital and operational expenses are dramatically reduced, since multiple Virtual Network Functions (VNFs) can co-exist on the same hardware platform, removing the need to acquire dedicated equipment to run them. Another advantage of NFV is the rapid deployment of services that this decoupling of software from hardware enables, as well as dynamic network scaling of already in-place VNFs, which provides great flexibility to the network [57].

The concept of NFV was born in 2012 after a joint effort made by multiple telecommunication service providers (TSPs) [13]. Shortly after that, the European Telecommunications Standards Institute (ETSI) was designated as the organization to lead NFV-related standardization [36], specifically the Industry Specification Group (ISG) within ETSI. The ISG has proposed a handful of use-cases for NFV, such as virtualization of the mobile core network and the IP Multimedia Subsystem (IMS) and virtualization of mobile base stations, detailed in [21].

NFV has three key components in its architecture, as defined by ETSI, namely the NFV Infrastructure (NFVI), the VNFs and Services, and the NFV Management and Orchestration (NFV MANO), depicted in figure 2.4. The NFVI provides the software and hardware resource infrastructure (compute, storage, network, etc.); VNFs and services, which can be groups of



Figure 2.4: Simplified logical view of the NFV architecture, showing the main interfaces between its key components and interconnections.

VNFs, run on top of the NFVI; and finally, the NFV MANO provides the functionality needed to manage and configure the VNFs and to operate the NFVI these VNFs are running on.

Even though SDN and NFV appear to be similar technologies at first glance, they are slightly different concepts: the former aims to create an abstraction of the network, putting more focus on the network programmability aspect from a logically centralized control layer, whereas the latter targets individual service or function abstraction or, in other words, data-plane programmability. In addition, NFV's main targets are telecommunication networks, whereas SDN has most commonly been deployed in datacenters. Nevertheless, both technologies can be used together to bring even more flexibility to the network and to allow programmability at both the control and data planes. A good example of how SDN and NFV can complement each other, according to the authors of [57], is that an SDN controller can run as a single software instance on standard server technology, thus making it possible to deliver it as a VNF.

IHS research director and advisor Jeff Wilson wrote a white paper back in 2015 where he explained how SDN and NFV can deliver security "virtually everywhere" [71]. In his work, Wilson argues that both technologies aim in the same direction: to make networks and services more programmable and agile. He claims that service providers around the world are willing to leverage SDN and NFV in order to achieve improvements in cost and agility. He goes on to list the primary drivers behind SDN and NFV: on one hand, NFV aims for quick, on-demand service scalability, reduced CAPEX and OPEX, quick introduction of new services and real-time network optimization, among others. On the other hand, SDN's objectives include quick network service creation, simplification of network provisioning, real-time network configuration and optimization, and simple, low-cost networking devices. To conclude, Wilson states that a transition to an SDN/NFV-based ecosystem is already on its way due to the enormous advantages both technologies separately provide, such as service



agility leading to a shorter time-to-market and a global view of multi-vendor, multi-network domains.


3 Related Work

Software-Defined Networking has been thoroughly researched for a long time and, as described in section 2.2, the name was coined after several previous efforts in related fields, such as network virtualization and the decoupling of control and forwarding planes. It is no surprise that there have also been multiple surveys about SDN, presenting all studies leading up to their date of publication and providing a thorough overview of the state of the art of this network paradigm. Although most surveys usually share a common backbone, i.e., a history or timeline of some kind, three well-known studies follow below that use different approaches to introduce what SDN is and where it comes from.

A comprehensive survey about SDN was carried out by Kreutz et al. [45] in 2015, presenting an extensive review of SDN ranging from a full architectural description to future challenges. The authors motivate the need for SDN based on the fact that traditional vertically-integrated (tightly-coupled control and data planes inside network elements), IP-based network management is cumbersome [6]. They introduce the architectural details of SDN in great detail using a bottom-up approach: they start off with the forwarding devices composing the data plane, move on to the southbound interfaces connecting the forwarding and control planes, introduce the concepts of network hypervisors and the network operating system (NOS) running on the SDN controller, and finally reach the management plane (MP) via the different northbound interfaces. In addition, the authors present in extensive detail the different SDOs, and working groups within these, that have contributed to the standardization of the SDN architecture. According to the authors, this particular work represented the most comprehensive literature survey of SDN until the date of its writing (2015).

Feamster et al. [22] conducted a study about the history of programmable networks and their impact on SDN. Their approach to introducing SDN is to present the different related technologies and fundamental ideas SDN is based upon, such as active networks, control and data plane decoupling and network virtualization. In addition, the authors describe the different technology pushes and pulls that each of these experienced and provided, and the intellectual contributions that these efforts made in order to consolidate the SDN concept. Furthermore, their flaws and limitations are also described, thus providing a pedagogical explanation of why these related technologies did not succeed as individual efforts

but rather inspired and enabled the creation of SDN.

Another popular survey on the state-of-the-art and research challenges of SDN was done by Jammal et al. [38]. In this thorough paper, the authors set out to define the SDN architecture, taking as their starting point the argument that concepts such as interconnected datacenters and virtualized servers have significantly increased network demand during the past years. They go on to argue that flexible network management requires an SDN-based solution. Hence, the authors start by providing a detailed architectural description of SDN, stating the numerous benefits it offers, such as enabling network programmability and easing device configuration and troubleshooting. Next, they move on to explain OpenFlow's architecture and how it became the first standardized southbound interface protocol for managing forwarding-layer devices. Network virtualization is also introduced as a fundamental outcome (and foundation) of SDN, NFV is introduced as an application of virtualization, and a comparison is drawn between SDN and NFV. One of the most remarkable parts of this work is the section on SDN applications: on the one hand, the idea of applying SDN in datacenter networks (DCNs) is introduced, motivating how it can make DCNs more flexible and centrally manageable; on the other hand, the authors present SDN as a means of providing Network-as-a-Service (NaaS), using SDN as a technology to control and view network layers in NaaS scenarios. Finally, the existing SDN challenges, such as reliability, scalability and performance, are stated, solutions to these are discussed, and some current research initiatives are presented.
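
The OpenFlow match-action model that Jammal et al. discuss can be illustrated with a small sketch. Two of its distinctive features are that rules carry a priority (the highest-priority matching rule wins) and that an omitted field acts as a wildcard, while a table miss punts the packet to the controller. The snippet below is a toy model of that lookup logic, not the OpenFlow wire protocol; the function and field names are invented for illustration:

```python
def best_rule(flow_table, packet):
    """Return the action of the highest-priority rule matching `packet`.
    A rule matches when every field it specifies equals the packet's value;
    fields absent from the match act as wildcards."""
    candidates = [
        (prio, action)
        for prio, match, action in flow_table
        if all(packet.get(field) == value for field, value in match.items())
    ]
    if not candidates:
        return "packet_in"            # table miss: send to the controller
    return max(candidates)[1]         # highest priority wins


table = [
    (100, {"ip_dst": "10.0.0.1", "tcp_dst": 80}, "output:1"),  # exact rule
    (10,  {"ip_dst": "10.0.0.1"},                "output:2"),  # coarser rule
    (1,   {},                                    "drop"),      # default
]

print(best_rule(table, {"ip_dst": "10.0.0.1", "tcp_dst": 80}))  # output:1
print(best_rule(table, {"ip_dst": "10.0.0.1", "tcp_dst": 22}))  # output:2
print(best_rule(table, {"ip_dst": "8.8.8.8"}))                  # drop
```

This priority-ordered, wildcard-capable table is what makes the forwarding layer programmable: the controller only rewrites rules, never the device firmware.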

3.1 General applications of SDN

SDN is a technology that offers a vast array of use-cases and applications which have dramatically changed the way networks are operated. Examples of successful applications range from SDN-based architectural models for 5G to improvements made to existing IoT applications.

For instance, Guerzoni et al. [32] propose a plastic architecture for the advanced 5G mobile network using an SDN-based approach. In their work, the authors present a unified control plane composed of three logical controllers: (1) the device controller (DC), running on the UE, which is responsible for providing L1 connectivity to the 5G network and handling access-stratum (AS) functions; (2) the EDGE controller (EC), which implements the 5G network control plane and comprises functions such as network access control, packet routing and security. This controller can be further split into two instances: one running in the edge cloud infrastructure and the other implemented directly in the UE; and (3) the orchestration controller (OC), whose objective is to coordinate cloud resource utilization, e.g., compute, memory or storage. The OC is divided into the resource orchestration (RO) and topology management (TM) modules, responsible for defining the physical resource allocation needed to instantiate EC control applications and for directly managing the physical resources, respectively. Furthermore, the authors define a new, clean-slate 5G data plane design with no dedicated data plane network elements nor logical elements for the device population. They propose that when a UE performs a network attachment request through the Radio Access Network (RAN), an address and a Last Hop Routing Element (LHRE) are assigned to it, the latter being responsible for chaining the access point of the UE to the backhaul infrastructure. After introducing the proposed architecture, the authors present some high-level 5G procedures, such as the initial attachment and general device mobility management, and conclude with a theoretical comparison between the latencies present in current 4G networks and how these could be reduced with an SDN-based approach in 5G networks.
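
The attachment procedure summarized above can be sketched as a toy edge-controller routine. This is purely illustrative and not code from Guerzoni et al.; the class, the address scheme, and the LHRE naming are all invented for the example, keeping only the idea that a successful attachment yields an address plus an LHRE tied to the UE's access point:

```python
import itertools


class EdgeController:
    """Toy EC: on attachment, assigns the UE an address and a Last Hop
    Routing Element (LHRE) chaining its access point to the backhaul."""

    def __init__(self):
        self._addr = itertools.count(1)   # naive address allocator
        self.sessions = {}                # ue_id -> (address, lhre)

    def attach(self, ue_id, access_point):
        address = f"10.0.0.{next(self._addr)}"
        lhre = f"lhre-{access_point}"     # LHRE bound to the UE's AP
        self.sessions[ue_id] = (address, lhre)
        return address, lhre


ec = EdgeController()
addr, lhre = ec.attach("ue-42", "ap-7")
print(addr, lhre)   # 10.0.0.1 lhre-ap-7
```

The sketch highlights the architectural point made in the paper: attachment is handled entirely in the control plane, with no dedicated data-plane element per device.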
