
Name Resolution Information Distribution in NetInf

Jiawei Zhang & Yongchao Wu

Master’s Degree Project

Stockholm, Sweden 2015-05


Name Resolution Information Distribution in NetInf

Jiawei Zhang jiaweiz@kth.se

Yongchao Wu yonwu@kth.se

Examiner:

Peter Sjödin

KTH Royal Institute of Technology

Supervisor:

Anders Eriksson

Adeel Mohammad Malik

Börje Ohlman

Karl-Åke Persson

Ericsson AB


Abstract

Information-Centric Networking (ICN) is a different architecture from today’s Internet, which is host-centric. In ICN, content is requested by the names of content objects instead of network or host addresses. This feature allows for a number of advantages, such as in-network caching and request aggregation.

Network of Information (NetInf) is an ICN architecture. It is an overlay on TCP/IP that translates content object names into locators or IP addresses.

NetInf is designed to facilitate initial deployment and migration from today’s networks to ICN.

In an ICN network, content can be cached at numerous locations, giving a client the possibility to retrieve content from a number of available sources. In order to retrieve a content object, a client does a lookup in a Name Resolution Service (NRS) to resolve the content name into a set of locators where the content is stored. By distributing the location information of content objects from the NRS to NetInf nodes in the network, the lookup time and the overhead caused by lookup messages can be reduced, resulting in a better end-user experience and more efficient network utilization. In this thesis, two methods to distribute the location information of content objects in a NetInf network are proposed and evaluated against a reference model where the location information is hosted centrally in an NRS. The effectiveness of the proposed methods has been evaluated by running simulations on a NetInf simulator (built on OMNeT++) that was developed during the course of this project.

Evaluation results show that the proposed methods reduce the lookup/name resolution latency of content objects. The results also compare the overhead caused by each one of the proposed methods in terms of network utilization.

We also show that the network topology has an impact on the effectiveness of the proposed methods and is therefore a factor that needs due consideration when deciding which method is suitable.


Acknowledgments

We would like to thank our manager Patrick Sellstedt for giving us the opportunity to carry out our Master's thesis project in the Evolved Packet Core team within Ericsson Research. It is our honor to work with such a great team. Thanks to Patrick, we had the chance to be involved in a cutting-edge technology and project and to work with the most professional experts in ICN technology. This will remain a precious memory for us.

Thanks to our supervisors at Ericsson Research: Anders Eriksson, Adeel Mohammad Malik, Börje Ohlman, and Karl-Åke Persson. Thank you for all the support, discussions, and suggestions. We appreciate the freedom and trust we enjoyed from the beginning to the end of the project, from proposing our ideas to making decisions at every stage. It has been our pleasure to work with you for a year.

We would like to express our sincere gratitude to our examiner Peter Sjödin at KTH for giving us enough time to finish the hard work in this project and for his insightful comments on the thesis.

Thanks to my parents in China, who supported me throughout all my years of education. They always showed me unconditional love and support when I got stuck in some part of the project. Thanks to my friends in Stockholm.

Special thanks to Yao Lu for giving me a lot of useful technical support on MATLAB during the project.

Jiawei

I would like to thank my friend Jiawei. He helped me a lot during the project and shared many programming and project management skills with me, so I made great progress during the project. Most importantly, I would like to thank my family. With their support, I could realize my dream of studying in such a wonderful country and at a great university.

Yongchao


Contents

List of Figures

List of Tables

1 Introduction
   1.1 Background
   1.2 Problem
   1.3 Purpose
   1.4 Goal
      1.4.1 Ethics
      1.4.2 Sustainability
   1.5 Methodology
   1.6 Contribution
   1.7 Outline

2 Background
   2.1 Information-Centric Networking
   2.2 Network of Information
      2.2.1 Named Data Objects
      2.2.2 NetInf Protocol
      2.2.3 Convergence Layer
      2.2.4 Routing and Name Resolution
      2.2.5 On-Path and Off-Path Caching
      2.2.6 Message Flow
   2.3 Related Works
      2.3.1 Breadcrumbs
      2.3.2 Cache "Less for more"

3 Environment
   3.1 OMNeT++
      3.1.1 Modeling
      3.1.2 Result Recording
   3.2 MATLAB

4 Design and Implementation
   4.1 Method Design
      4.1.1 Motivations
      4.1.2 Limitations
      4.1.3 Overview
      4.1.4 Neighbor Discovery
      4.1.5 Method 1 - Central
      4.1.6 Method 2 - Active
      4.1.7 Method 3 - Passive
      4.1.8 Method 4 - Hybrid
      4.1.9 Discussion
   4.2 Implementation
      4.2.1 Message
      4.2.2 Routing Function
      4.2.3 Name Resolution
      4.2.4 Request Aggregation
      4.2.5 Source Selection and Cost Model
      4.2.6 Node
      4.2.7 Methods

5 Evaluation
   5.1 Scenarios
      5.1.1 Network Topology
      5.1.2 Content Objects
      5.1.3 Requests
      5.1.4 Packet Length
      5.1.5 Compared Methods
      5.1.6 Warm-Up Period
   5.2 Results
      5.2.1 NDO Retrieval Time
      5.2.2 Signaling Overhead
   5.3 Discussion

6 Conclusion and Future Work
   6.1 Conclusion
   6.2 Future Work
      6.2.1 More Complicated Cost Model
      6.2.2 Dynamic Networks
      6.2.3 NRS Information Update
      6.2.4 Network Caches

Bibliography

Appendix A. Terminology

Appendix B. Method Description

Appendix C. Packet Length Specification


List of Figures

1.1 Mobile devices using cellular connection
2.1 NetInf Protocol Stack
2.2 Message Flow
4.1 Neighbor Discovery Process
4.2 Abstract Structure of a NetInf Message
4.3 Name Resolution Service Table (NRST)
4.4 Request Aggregation
4.5 Request Aggregation List
4.6 Request Aggregation (Mode 1)
4.7 Request Aggregation (Mode 2)
4.8 Source Selection based on Weight
4.9 Node Architecture
4.10 Method Composition
5.1 Tree Topology
5.2 Mesh Topology
5.3 NDO Retrieval Time
5.4 NDO Retrieval Time (frequency)
5.5 NDO Retrieval Time (interval)
5.6 Signaling Data Ratio (interval)


List of Tables

4.1 Message Kind
5.1 Common Parameters
5.2 Parameters for Tree Topology
5.3 Parameters for Mesh Topology


Chapter 1

Introduction

Information-Centric Networking (ICN) [1] is a different architecture from today's Internet, which is host-centric. ICN provides the service of locating content: clients request content using names or identifiers without having to worry about where the content is located. This is unlike the current host-centric Internet paradigm, in which end-hosts explicitly communicate with each other.

1.1 Background

Figure 1.1 shows the growing number of mobile devices using cellular connections, including smartphones, mobile PCs, tablets, etc. By 2019, there will be more than 6 billion cellular connections in the world, which will result in a higher demand for network capacity than ever before.

Large crowd events are also a problem in today's mobile networks. When numerous users access the same content simultaneously (for example, a popular football game), the server in a host-centric network faces unusual pressure, and the users may suffer from network congestion and large latency, resulting in a bad user experience. Since one-to-many communication scenarios generate most of the Internet traffic today, more efficient methods to deal with them are needed [2].

To handle the problem, Information-Centric Networking has been proposed by the research community. In recent years, many information-centric approaches have emerged: Content Centric Networking (CCN) [3], Data Oriented Network Architecture (DONA) [4], Publish/Subscribe Internet Routing Paradigm (PSIRP) [5], Network of Information (NetInf) [6, 7], etc. Different approaches have different models and components; however, they all share the same principle: focusing on the content itself rather than on the location of the content in the network. This could fit the current Internet traffic trends better than host-centric architectures [8].


Figure 1.1: Mobile devices using cellular connection

1.2 Problem

In NetInf, content providers publish the locations of content objects to the Name Resolution Service (NRS) [9]. The NRS is a service that resolves the names of content objects into locators or IP addresses. In order to retrieve a content object, a client does a lookup in the NRS to resolve the content name into a set of locators where the content is stored. Because all requests are forwarded to the NRS, clients may suffer from large latency, and the NRS server might be overloaded under heavy traffic. Thus, the name resolution mechanism needs to be improved to reduce the latency, lower the workload of the NRS, and save bandwidth.

In this thesis, we focus on the design of methods to reduce the latency (content retrieval time) and the network overhead. The problem addressed in this thesis is defined by the following question:

How should the method for distributing Name Resolution Information be designed in order to reduce the latency of content retrieval and the network overhead?

The effectiveness of the proposed methods has been evaluated by running simulations on a NetInf simulator (built on OMNeT++) that was built during the course of this project.


1.3 Purpose

The thesis presents the work of designing and evaluating experimental solutions for improving the Name Resolution mechanism in NetInf.

1.4 Goal

The goal of the work is to distribute the Name Resolution Information in NetInf to reduce the content retrieval time and network overhead. The result of the work is an increased understanding of how latency and network overhead are affected by different experimental methods for distributing Name Resolution Information, and of which method is suitable for which scenario. The results could benefit further research on the utilization of Name Resolution Information in NetInf.

1.4.1 Ethics

There are some ethical issues in the ICN field. One of them is the cache-on-path mechanism: if content objects can be cached by any node in the network, there may be legal issues, since owners should have some access control over the content they publish. Another issue is confidentiality. The requests sent by clients are forwarded by the nodes in the network, so these nodes may inspect the content of a request and its response, and the privacy of the clients might be violated. To ensure the privacy of the clients, the content can be encrypted.

1.4.2 Sustainability

The methods proposed in this thesis bring the information about content locations closer to the clients, thus reducing the traffic generated by the name resolution process. The caches in the network also decrease the distance from the clients to the source node, which further reduces network traffic. These features of NetInf can be used to build an energy-efficient network that supports sustainable development.

1.5 Methodology

The project consists of five main phases: literature study, method design, implementation, evaluation, and documentation.

During the literature study, several scientific papers related to ICN were reviewed to understand the background, the differences from host-centric networks, and the NetInf architecture. During the method design phase, the scope of the thesis was defined and initial solutions were proposed. During the implementation phase, a NetInf simulator was implemented step by step, and the methods were implemented and deployed within the simulator. A quantitative method was applied in the evaluation and documentation phase; since numerical data is collected in experiments, an experimental research method with a deductive approach was employed. In this phase, experiments were conducted on two different scenarios with the simulator, and MATLAB was used to process and visualize the data generated during the simulation runs. Based on the metrics, conclusions were drawn and documented.

1.6 Contribution

The three major contributions from the thesis project are:

• (1) Two experimental methods (the Active and Passive methods) for distributing Name Resolution Information were designed to reduce the content retrieval time and network overhead in NetInf;

• (2) The methods proposed in the thesis were evaluated together with the conventional method in NetInf to show the pros and cons of the different methods;

• (3) For the research community, the results of the evaluation can lead to a better understanding of how to take advantage of Name Resolution Information to improve the performance of the network.

1.7 Outline

Chapter 1 gives a brief introduction to the thesis work. Chapter 2 describes Information-Centric Networking (ICN) and the Network of Information (NetInf). Chapter 3 gives a general introduction to the technical environment used for the evaluations. Chapter 4 describes the design and implementation of the NetInf simulator and the proposed methods in detail. Chapter 5 presents the scenarios used for evaluation and the results generated from the collected data. Chapter 6 summarizes the work and presents the conclusion and future work.


Chapter 2

Background

2.1 Information-Centric Networking

Information-Centric Networking (ICN) is a different architecture from today's Internet, which is host-centric. The core idea of ICN is to focus on the content objects to be retrieved rather than on the devices and networks where the content is stored. ICN is a young research field, and many different approaches are currently being developed [3, 4, 5, 6, 7]. Some attempts have been made to define ICN at a mechanism-independent service level [9].

2.2 Network of Information

Network of Information (NetInf) is an ICN approach. It was first developed in the FP7 4WARD project [10, 11] and then further developed in the FP7 SAIL project [12, 13].

2.2.1 Named Data Objects

NetInf provides a service for accessing Named Data Objects (NDOs) in the network. NDOs can be web pages, videos, pictures, etc. Each NDO in NetInf has a unique name. A flat, URI-based naming scheme is employed in NetInf:

ni://[Authority]/[Digest Algo];[Digest Value]?[Query Params]

The Authority field helps applications access content objects; both name-based routing and name resolution can make use of it. The Digest Algo field specifies which hash function is employed to calculate the Digest Value field, and the Digest Value field represents the content.


With these two fields, NetInf can provide name-data integrity validation, which is the basic security service in NetInf. The validation can be performed without infrastructure support such as a Public Key Infrastructure (PKI); it is implemented by embedding the output of the hash function applied to the NDO in its name [14].

Query Params is an optional field; it can be used to provide extra parameters for different purposes [15].
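The name-data integrity check described above can be sketched as follows. This is an illustrative sketch, not the exact NetInf implementation: real ni names (RFC 6920) encode digests in base64url form, whereas this example uses hex digests and a hypothetical authority name for readability.

```python
import hashlib

def parse_ni_name(name):
    """Split an ni:// name into authority, digest algorithm, digest value,
    and optional query parameters (illustrative parser, not the full spec)."""
    assert name.startswith("ni://")
    rest = name[len("ni://"):]
    authority, _, tail = rest.partition("/")
    digest_part, _, query = tail.partition("?")
    algo, _, value = digest_part.partition(";")
    return {"authority": authority, "algo": algo, "digest": value, "query": query}

def validate_name_data_integrity(name, content):
    """Name-data integrity: the digest embedded in the name must equal
    the hash of the received content (only sha-256 handled here)."""
    fields = parse_ni_name(name)
    if fields["algo"] != "sha-256":
        raise ValueError("unsupported digest algorithm")
    return hashlib.sha256(content).hexdigest() == fields["digest"]

content = b"hello NetInf"
digest = hashlib.sha256(content).hexdigest()
name = "ni://example.com/sha-256;" + digest
print(validate_name_data_integrity(name, content))      # True
print(validate_name_data_integrity(name, b"tampered"))  # False
```

Because the name commits to the content's hash, any receiver can verify the object without contacting a trusted third party, which is the point made above about avoiding a PKI.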

2.2.2 NetInf Protocol

Figure 2.1 shows the NetInf protocol stack [6]. Real-world networks can be very different, and different deployments may be based on different link layers and underlays. NetInf solves this problem by introducing Convergence Layers (CLs) that convert the conceptual protocol into specific messages for a concrete protocol. For example, NetInf-over-IP requires a CL that encapsulates and fragments NetInf messages into IP packets and validates message integrity. Some CLs can also provide transport functions, such as reliability, flow control, and congestion control.

Figure 2.1: NetInf Protocol Stack (applications and transport on top of the NetInf layer, which runs over convergence layers CL 1 and CL 2 on different lower layers above the physical layer)

In the NetInf protocol, the following message types are defined:

GET/GET-RESP A GET message is sent to request an NDO in a NetInf-capable network. A GET message contains the unique name of an NDO, following the ni URI scheme. A node that receives a GET request will either generate a GET-RESP or forward the request and wait a reasonable time (a timeout) before sending the response.

PUBLISH/PUBLISH-RESP A PUBLISH message is sent to tell a NetInf node (usually the NRS) about the existence and basic information of an NDO in the network. The PUBLISH message should contain the name of an NDO and locators showing where the NDO is stored in the network. A PUBLISH-RESP message notifies the publisher whether the publication succeeded; it contains status values showing the result of the operation.

SEARCH/SEARCH-RESP A SEARCH message is sent to look for a specific NDO in a NetInf network. The SEARCH request contains rules (tokens) describing the NDOs it looks for. A SEARCH-RESP may contain a list of NDO names that match the rules (tokens) in the request.
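The request/response pairs above can be modeled as simple message records. This is a sketch only; the field names (`msg_id`, `ndo_name`, `locators`) are assumptions for illustration, not the NetInf wire format.

```python
from dataclasses import dataclass, field

@dataclass
class NetInfMessage:
    kind: str     # GET, GET-RESP, PUBLISH, PUBLISH-RESP, SEARCH, SEARCH-RESP
    msg_id: int   # lets a response be matched to its request

@dataclass
class Get(NetInfMessage):
    ndo_name: str = ""

@dataclass
class GetResp(NetInfMessage):
    ndo_name: str = ""
    locators: list = field(default_factory=list)  # set when the NRS resolved the name
    data: bytes = b""                             # set when a cache or source had the object

def make_get(msg_id, ndo_name):
    return Get(kind="GET", msg_id=msg_id, ndo_name=ndo_name)

def make_get_resp(request, locators=None, data=b""):
    """A GET-RESP echoes the request id; it carries locators, data,
    or neither (neither signals a failed request)."""
    return GetResp(kind="GET-RESP", msg_id=request.msg_id,
                   ndo_name=request.ndo_name,
                   locators=locators or [], data=data)

req = make_get(1, "ni://example.com/sha-256;abc")
resp = make_get_resp(req, locators=["netinf://source-1"])
print(resp.msg_id == req.msg_id)  # True
```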


2.2.3 Convergence Layer

The Convergence Layer (CL) provides services that enable communication between NetInf nodes. Currently, three CLs are being worked on: an HTTP CL, a UDP CL [7], and a Bluetooth CL [2]. The names of these CLs reflect the lower-layer protocols they are built on.

2.2.4 Routing and Name Resolution

NetInf supports both name resolution and name-based routing. The name resolution service in the network maps NDO names to network locations. Name-based routing enables a NetInf node to determine the next hop of a request or response from the NDO name alone. More details are discussed in the implementation part.

Name-based routing can simply match NDO names against routing rules by pattern matching. In this way, a request from a local network can reach an edge node with external access, at which point a name resolution service is provided.

NetInf nodes are responsible for forwarding request and response messages. Different routing protocols apply to different parts of the network, just like the multiple routing protocols for the Internet Protocol (IP) today. For example, NetInf supports Open Shortest Path First (OSPF) for local domains and the Border Gateway Protocol (BGP) at the global level.
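The pattern matching described above can be sketched as a longest-prefix match over routing rules, with a fallback to a default route (for example, an edge node that offers name resolution). The rules and next-hop names below are hypothetical.

```python
def longest_prefix_next_hop(routing_rules, ndo_name, default_hop=None):
    """Name-based routing by pattern matching: pick the rule with the
    longest prefix that matches the requested NDO name; fall back to
    a default route when nothing matches."""
    best = None
    for prefix, next_hop in routing_rules.items():
        if ndo_name.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, next_hop)
    return best[1] if best else default_hop

rules = {
    "ni://example.com/": "router-2",
    "ni://example.com/sha-256;aa": "cache-1",  # more specific rule wins
}
print(longest_prefix_next_hop(rules, "ni://example.com/sha-256;aabb"))     # cache-1
print(longest_prefix_next_hop(rules, "ni://other.org/sha-256;ff", "edge")) # edge
```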

2.2.5 On-Path and Off-Path Caching

NetInf supports on-path (request/data path) and off-path caching. When a GET-RESP with NDO data traverses a NetInf node that performs on-path caching (cache on path), the NDO may be cached on the node, depending on the node's configuration. On-path caching brings content closer to consumers; hence bandwidth consumption and content retrieval time can be reduced [6]. Off-path caching is an alternative mechanism that can avoid duplication and increase the overall hit rate; it tries to cache popular content at optimal locations [16].

2.2.6 Message Flow

Figure 2.2 shows an example of name-based routing, name resolution, and the hybrid of both in NetInf. Steps A1-A6 show how name-based routing works. First, the client sends a GET message to Router 1 (step A1). Router 1 then checks the NDO name in the GET message to decide the next hop (step A2). Router 2 also checks the NDO name to decide the next hop, in this case the Source node (step A3). Finally, the NDO is sent back to the client (steps A4-A5-A6). In name-based routing, the GET request is forwarded hop by hop until a copy of the NDO is found. If the router does not have enough information about where to forward the request in step A2, a name resolution step can be performed (steps A1.1-A1.2) before step A2.



Figure 2.2: Message Flow

This is the hybrid approach: a combination of name-based routing and name resolution.

Steps B1-B4 show how name resolution works. First, a client sends a GET request to the NRS server (step B1). The NRS then translates the NDO name into source locations and sends them to the client (step B2). After receiving the resolved locations, the client sends another GET request to the selected source location (step B3). Finally, the source node sends the NDO back to the client (step B4). In name resolution, a GET message sent to the NRS is resolved into locators, which are then used to retrieve the object via the underlying network, for example an IPv4 network [6].
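Steps B1-B4 can be sketched as a table lookup in the NRS followed by a second GET to one of the returned locators. The NRS table contents and addresses below are hypothetical, and source selection is reduced to picking the first locator (the thesis uses a weight-based cost model for that step).

```python
# Hypothetical NRS table: NDO name -> locators where copies are stored.
NRS_TABLE = {
    "ni://example.com/sha-256;abc": ["10.0.0.5", "10.0.0.9"],
}

# Hypothetical sources holding the actual object bytes.
SOURCES = {
    "10.0.0.5": {"ni://example.com/sha-256;abc": b"ndo-bytes"},
    "10.0.0.9": {"ni://example.com/sha-256;abc": b"ndo-bytes"},
}

def retrieve(ndo_name):
    # B1/B2: the client asks the NRS, which resolves the name to locators.
    locators = NRS_TABLE.get(ndo_name, [])
    if not locators:
        return None
    # Source selection: simply the first locator in this sketch.
    source = locators[0]
    # B3/B4: a second GET goes to the chosen source, which returns the NDO.
    return SOURCES[source].get(ndo_name)

print(retrieve("ni://example.com/sha-256;abc"))  # b'ndo-bytes'
print(retrieve("ni://example.com/sha-256;zzz"))  # None
```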

2.3 Related Works

2.3.1 Breadcrumbs

Breadcrumbs [17] is an architecture for caching guidance information in the network. It is a simple content caching, location, and routing system. In a Breadcrumbs (BC) network, each router has a local log file that records the content objects passing through it. When a content object is downloaded, the routers on the download path generate minimal information for routing requests. When a request for content encounters this guidance information on its way to the source node, the request is routed to the nodes that may have the content. This enables clients to make use of content previously downloaded by other nodes in the network. Recently, a demo on content sharing among mobile users in a Breadcrumbs-enabled cache network was published; it proposed several extensions to overcome the drawbacks of the original Breadcrumbs, such as ABC (Active Breadcrumbs) [18], HBC (Hop-aware Breadcrumbs) [19], BC+ [20], and MSCR (Mapping Server with Cache Resolution) [21].

Breadcrumbs is similar to the Passive method developed in this thesis project, but they differ. Both Breadcrumbs and the Passive method store guidance information in the network, but the guidance information and the routing method are different: in Breadcrumbs the guidance information contains the ID of the node from which the file arrived and the ID of the node to which the file was forwarded [17], while in the Passive method the guidance information contains the location of the source node that actually has the content. In short, Breadcrumbs can be used to improve name-based routing in NetInf, while the Passive method in this thesis is designed to improve name resolution.

Since this thesis focuses on the distribution of name resolution information, Breadcrumbs is not included in the analysis.

2.3.2 Cache "Less for more"

In-network caching is one of the outstanding features of Information-Centric Networking (ICN). The Cache "Less for more" approach [22] has been proposed to replace the universal caching strategy, in which nodes always cache all content. The goal is to reduce the cache replacement rate while still caching the content that is more likely to be hit in the cache. The main idea of the work is to cache only the popular content, using a popularity factor.

The methods developed in this thesis share the "less for more" concept in a different way. The "Less for more" approach caches the most popular content in the network, while in our methods only the name resolution information is cached rather than the content itself. A least recently used (LRU) cache eviction policy from "Less for more" can also be used to update the name resolution information stored in the network.


Chapter 3

Environment

This chapter gives a brief introduction to the simulation framework applied during the implementation of the NetInf simulator and to the analysis tools used during the evaluation process.

3.1 OMNeT++

OMNeT++ is an extensible, modular, component-based C++ simulation library for building network simulators [23]. It is a discrete-event simulation environment and provides an Eclipse-based IDE, a graphical runtime environment, and many other supporting frameworks. OMNeT++ 4.5 is used in this thesis to build a NetInf simulator, implement the methods, and record the results.
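The discrete-event principle behind OMNeT++ can be illustrated with a minimal event loop: events sit in a time-ordered queue, and the simulation clock jumps from one event timestamp to the next. This is a generic sketch of the concept, not the OMNeT++ API.

```python
import heapq

class EventLoop:
    """Minimal discrete-event core: events are (time, seq, callback)
    tuples processed strictly in timestamp order."""
    def __init__(self):
        self.queue, self.now, self._seq = [], 0.0, 0

    def schedule(self, delay, callback):
        # The sequence number breaks ties between same-time events.
        self._seq += 1
        heapq.heappush(self.queue, (self.now + delay, self._seq, callback))

    def run(self):
        while self.queue:
            self.now, _, callback = heapq.heappop(self.queue)
            callback()

log = []
loop = EventLoop()
loop.schedule(2.0, lambda: log.append(("t=2", "GET arrives")))
loop.schedule(1.0, lambda: log.append(("t=1", "HELLO sent")))
loop.run()
print(log)  # [('t=1', 'HELLO sent'), ('t=2', 'GET arrives')]
```

Note that the events fire in timestamp order, not insertion order; this is what makes module behavior in such simulators independent of scheduling order.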

3.1.1 Modeling

The NED (Network Description) language is used in OMNeT++ to describe the structure of a simulation. Users can declare simple modules and connections between modules. Compound modules can be assembled from simple modules, which enables users to describe more complex networks. Simple modules are the active components of the model; they are defined in NED and implemented in C++ code, and all of the handlers (that do the real work) are implemented in simple modules. Gates are the connection points of modules and the abstraction of the network interface in OMNeT++. Channels are the links between gates; users can set the data rate, drop rate, and loss rate of the channels.

3.1.2 Result Recording

OMNeT++ provides two methods for result recording: signal-based statistics and a built-in C++ library. With built-in support for recording simulation results, users can record anything that is useful to get a full picture of what happened in the model during the simulation run. For this thesis, signal-based statistics were chosen for result recording because they enable users to record results in the form they need, without continuous tweaking of the simulation code.

3.2 MATLAB

MATLAB is used to process and visualize the data generated by the simulations. MATLAB is a high-level language and interactive environment for numerical computation, visualization, and programming [24]. OMNeT++ also has a built-in analysis tool, but it is somewhat weak and has some bugs when processing large amounts of data, which is why MATLAB was chosen instead of the OMNeT++ built-in tools.


Chapter 4

Design and Implementation

Section 4.1 introduces the methods proposed and developed in this thesis project. Section 4.2 specifies how these methods are implemented.

4.1 Method Design

In this section, details of the methods developed during the thesis are presented.

4.1.1 Motivations

The motivation of the methods proposed in the thesis is to distribute the Name Resolution Information in the network in order to reduce NDO retrieval time and network overhead.

4.1.2 Limitations

There are many KPIs that can be used to measure the performance of different methods. Given the limited scope of the thesis, only the latency of content retrieval and the signaling overhead are considered.

4.1.3 Overview

A Neighbor Discovery method is proposed and presented in Section 4.1.4. The Neighbor Discovery method is the fundamental module of the Active method, in which the information about NetInf neighbors is used to populate the information about NDOs.

In the Active method, when a node caches a new NDO, it sends a special type of message to its neighbors telling them what NDO it has. A NetInf node takes this action actively, hence the name Active method. In contrast to the Active method, a Passive method is also proposed.

In the Passive method, the Neighbor Discovery method is not needed. The information about available NDOs in the network is populated (cached) passively on the path (request/NRS).

The Central method is the conventional method in NetInf: the NRS server always resolves requests. In other words, the information about NDOs is centralized only in the NRS server, which is why it is called the Central method in this thesis.

The Hybrid method is a combination of the Active and Passive methods, taking the good features of both.

4.1.4 Neighbor Discovery

To let a NetInf node discover its neighbors in the NetInf network, some extensions are made to the NetInf protocol: the HELLO and HELLO-RESP messages. Note that a neighbor in this thesis means a direct neighbor in the NetInf network, that is, a node within one hop in the NetInf overlay.

When the Neighbor Discovery process starts, the node that needs to learn its neighbors broadcasts HELLO messages. A HELLO message contains the sender's address and, optionally, the sender's host name. These HELLO messages cannot be forwarded by other nodes because the NetInf TTL (Time-To-Live), which controls how many hops a NetInf message can be forwarded, is limited to 1. This ensures that only direct neighbors can receive and reply to HELLO messages.

Figure 4.1: Neighbor Discovery Process (Alice, Bob, and Router 1 exchange HELLO and HELLO-RESP messages and record each other's addresses in their Neighbor Information Tables)

A node that receives a HELLO message first updates its Neighbor Information Table (NIT) and then sends a HELLO-RESP message back to the sender. The NIT maintains the list of neighbors of a NetInf node. Figure 4.1 shows an example of the Neighbor Discovery process and the NIT tables: Alice broadcasts HELLO messages into the network and then waits for replies. When Router 1 or Bob receives the HELLO message, they send back HELLO-RESP messages and add the information about Alice to their NITs. Upon receiving a HELLO-RESP message, Alice adds the sender's information to her NIT. After the Neighbor Discovery process, the NITs have the contents shown in Figure 4.1.
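The HELLO/HELLO-RESP exchange and the NIT updates described above can be sketched as follows, using the nodes from Figure 4.1. Delivering HELLOs only over direct links stands in for the NetInf TTL = 1 restriction; the names and addresses follow the figure, and the class structure is an illustrative assumption, not the simulator's implementation.

```python
class Node:
    def __init__(self, name, address):
        self.name, self.address = name, address
        self.links = []   # directly connected nodes (one overlay hop away)
        self.nit = {}     # Neighbor Information Table: name -> address

    def broadcast_hello(self):
        # NetInf TTL = 1: the HELLO reaches direct neighbors only
        # and is never forwarded further.
        for neighbor in self.links:
            neighbor.receive_hello(self)

    def receive_hello(self, sender):
        # Update the NIT first, then answer with a HELLO-RESP.
        self.nit[sender.name] = sender.address
        sender.receive_hello_resp(self)

    def receive_hello_resp(self, sender):
        self.nit[sender.name] = sender.address

alice = Node("Alice", "192.168.31.66")
bob = Node("Bob", "192.168.0.25")
router1 = Node("Router 1", "192.168.31.1")
alice.links = [bob, router1]   # Bob and Router 1 are Alice's direct neighbors
bob.links = [alice]
router1.links = [alice]

alice.broadcast_hello()
print(sorted(alice.nit))  # ['Bob', 'Router 1']
print(bob.nit)            # {'Alice': '192.168.31.66'}
```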

4.1.5 Method 1 - Central

The Central method is the conventional method in NetInf. The reason it is called the Central method in this thesis is that the information about where to retrieve an NDO is centralized in the NRS server. When a client requests an NDO, it first sends a GET message to the NRS. The NRS resolves the NDO name into locators and sends them back to the client in a GET-RESP message. After receiving the GET-RESP from the NRS, the client performs Source Selection locally and then sends another GET request to the selected source. Finally, the source sends the NDO to the client in a GET-RESP message.

Upon receiving a GET message, a NetInf node will check the conditions and perform one of the following operations in order. Only the first matching operation will be performed.

• (1) If the requested NDO is stored in the local cache, a GET-RESP message with the NDO data will be generated and sent back to the requester;

• (2) If the node is the NRS, it will resolve the NDO name into locators by querying the Name Resolution Service Table (NRST), then send a GET-RESP with the locators to the requester;

• (3) If the node is the destination of the GET message and the requested NDO is not in its local cache, a GET-RESP without NDO data will be sent back to the requester, indicating that the request failed;

• (*) If no condition above is true, the node simply forwards the message to its destination.
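The ordered condition check above can be sketched as a dispatch function in which only the first matching rule fires. The node and message fields (`cache`, `is_nrs`, `nrst`, `dest`) are illustrative assumptions, not the simulator's actual data structures.

```python
def handle_get(node, msg):
    """Ordered dispatch for a GET message at a NetInf node (Central
    method); only the first matching rule is applied."""
    # (1) Local cache hit: answer with the NDO data.
    if msg["ndo"] in node["cache"]:
        return ("GET-RESP", {"data": node["cache"][msg["ndo"]]})
    # (2) NRS node: resolve the name via the NRST and return locators.
    if node["is_nrs"]:
        return ("GET-RESP", {"locators": node["nrst"].get(msg["ndo"], [])})
    # (3) Destination without the NDO: an empty GET-RESP signals failure.
    if msg["dest"] == node["name"]:
        return ("GET-RESP", {})
    # (*) Otherwise just forward the message toward its destination.
    return ("FORWARD", msg)

router = {"name": "r1", "cache": {}, "is_nrs": False, "nrst": {}}
nrs = {"name": "nrs", "cache": {}, "is_nrs": True,
       "nrst": {"ni://x/sha-256;a": ["10.0.0.5"]}}
msg = {"ndo": "ni://x/sha-256;a", "dest": "nrs"}
print(handle_get(router, msg))  # ('FORWARD', {'ndo': 'ni://x/sha-256;a', 'dest': 'nrs'})
print(handle_get(nrs, msg))     # ('GET-RESP', {'locators': ['10.0.0.5']})
```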

Upon receiving a GET-RESP message, a NetInf node will check the following conditions in order and perform only the first matching operation.

• (1) If the GET-RESP message contains NDO data and the current node performs caching on path, a copy of the NDO will be stored in its local cache;

• (2) If the current node is the destination of a GET-RESP message with NDO data, it will empty the aggregation list and send GET-RESP messages with the NDO to the aggregated requesters;

• (3) If the current node is the destination of a GET-RESP message with locators, the Source Selection process will be performed and another GET message will be sent to the best source location (when Request Aggregation Mode = 1);

• (4) If the current node is the destination of a GET-RESP message with locators, the node will send the GET-RESP to the aggregated requesters and empty the aggregation list (when Request Aggregation Mode = 2);

• (*) If no condition above is true, the current node forwards the message toward its destination.

The details about Request Aggregation are presented in Section 4.2.4.

4.1.6 Method 2 - Active

In the Active method, when a node caches a new NDO, it sends out an NRS Information Update (NIU). An NIU message contains information about which NDOs are available at the sender. Because it actively populates Name Resolution Information in the network, this is called the Active method.

The Active method is designed to reduce the lookup time and to take advantage of the NDOs that have already been downloaded into the network, thereby reducing the network load.

Upon receiving a GET message, a NetInf node will check the following conditions in order and perform only the first matching operation.

• (1)–(3) Same as in the Central method;

• (4) If the node can resolve the NDO name into locators using its NRST, a Source Selection process will be performed and another GET message will be sent to the best source from the perspective of the current node (when Request Aggregation Mode = 1);

• (5) If the node can resolve the NDO name into locators using its NRST, a GET-RESP message with these locators will be sent back to the requester (when Request Aggregation Mode = 2);

• (*) If no condition above is true, the current node forwards the message toward its destination.

Upon receiving a GET-RESP message, a NetInf node will check the following conditions in order and perform only the first matching operation.

• (1) If the GET-RESP message contains NDO data and the current node performs caching on path, a copy of the NDO will be stored in its local cache and NIU messages will be sent to the neighbors in its NIT;

• (2)–(4) Same as in the Central method;

• (5) If the current node is the destination of a GET-RESP message with NDO data and it stores the NDO in its local cache, NIU messages will be sent to the neighbors in its NIT;

• (*) If no condition above is true, the current node forwards the message toward its destination.

Upon receiving a HELLO or HELLO-RESP message, a NetInf node performs the following operation.

• (1) The node inserts the sender's information from the HELLO/HELLO-RESP into its NIT after checking for duplicates. If the message type is HELLO, the node additionally sends a HELLO-RESP with its own information back to the sender.

Upon receiving an NIU message, a NetInf node performs the following operation.

• (1) The TTL of the NIU is decreased by 1. The node extracts the Name Resolution Information from the NIU and adds it to its NRST. If the TTL of the message is still greater than 1 after the decrement, the node sends the NIU on to the neighbors in its NIT.
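The NIU handling above (TTL decrement, NRST update, conditional re-flooding) can be sketched as follows (the field names `ttl`, `entries`, `nrst`, and `nit` are illustrative assumptions, not the protocol's wire format):

```python
# Sketch of NIU processing: decrement the TTL, merge the carried
# name-locator bindings into the local NRST, and re-flood the NIU to all
# NIT neighbors while the decremented TTL is still greater than 1.

def handle_niu(node, niu, send):
    """node: dict with 'nrst' (name -> set of locators) and 'nit' (neighbors);
    send(neighbor, msg) delivers a message over the network."""
    niu = dict(niu, ttl=niu["ttl"] - 1)             # decrement TTL on a copy
    for name, locator in niu["entries"]:            # merge entries into NRST
        node["nrst"].setdefault(name, set()).add(locator)
    if niu["ttl"] > 1:                              # still alive: re-flood
        for neighbor in node["nit"]:
            send(neighbor, niu)

sent = []
node = {"nrst": {}, "nit": ["A", "B"]}
handle_niu(node, {"ttl": 3, "entries": [("NI://X", 7)]},
           lambda n, m: sent.append(n))
assert node["nrst"]["NI://X"] == {7} and sent == ["A", "B"]
```

With a TTL of 3, as used in the evaluation, an NIU is thus propagated up to roughly three hops from the caching node.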

4.1.7 Method 3 - Passive

In contrast to the Active method, a Passive method is proposed. The Passive method also populates Name Resolution Information in the network, but in a passive way: it brings the information held by the NRS closer to the clients. The Passive method is designed to let NDO names be resolved as early as possible, so that the NDO retrieval time can be reduced.

Upon receiving a GET message, a NetInf node checks the same conditions and performs the same operations as in the Active method.

Upon receiving a GET-RESP message, a NetInf node will check the following conditions in order and perform only the first matching operation.

• (1)–(4) Same as in the Central method;

• (5) If the GET-RESP contains locators and the node caches NRS information on path, the node will extract the Name Resolution Information from the GET-RESP and update its NRST;

• (*) If no condition above is true, the current node forwards the message toward its destination.
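Step (5), caching name-locator bindings from a passing GET-RESP, can be sketched as follows (the field names are our own, chosen for illustration):

```python
# Sketch of the Passive method's on-path step: a node configured to cache
# NRS information on path copies the name -> locator bindings from any
# GET-RESP carrying locators into its own NRST.

def on_get_resp(node, msg):
    if msg.get("locators") and node["cache_nrs_on_path"]:
        node["nrst"].setdefault(msg["ndo"], set()).update(msg["locators"])

router = {"cache_nrs_on_path": True, "nrst": {}}
on_get_resp(router, {"ndo": "NI://KTH1400050", "locators": [3, 4]})
assert router["nrst"]["NI://KTH1400050"] == {3, 4}
```

Because only responses that already traverse the node are inspected, this adds no signaling messages of its own, in line with the design goal stated above.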

4.1.8 Method 4 - Hybrid

The Hybrid method is a combination of the Active and Passive methods. It is proposed to find out whether combining the two methods yields better performance than either method alone.

In the Hybrid method, there are no new operations beyond those of the Active and Passive methods. Upon receiving a GET, HELLO, HELLO-RESP, or NIU message, a NetInf node performs the same operations as in the Active method.

Upon receiving a GET-RESP message, a NetInf node performs the same operations as in the Active method for messages with NDO data, and as in the Passive method for messages with locators.

4.1.9 Discussion

The Central method is not proposed in this thesis; it is the conventional method in NetInf and is used as a baseline for comparison with the other methods in the evaluations.

In the Active method, after finding its neighbors, a node sends out an NIU once it caches a new NDO. An NIU is designed to carry one or more name resolution entries, where a name resolution entry is a name-locator binding. Thus, to reduce the signaling overhead caused by NIUs, they can be sent out periodically carrying more than one name resolution entry each.

The Passive method is inspired by the Active method, but is designed to resolve names as early as possible without increasing the signaling overhead.

The Hybrid method combines the advantages of the Active and Passive methods in order to achieve better performance than either single method.

4.2 Implementation

To evaluate a new network protocol or an extension of one, it is natural to use a simulator, because the complexity and scale of the experiments make them difficult to deploy in the real world.

4.2.1 Message

NetInf is a message-based protocol; it provides functions such as forwarding requests and responses, caching, and name resolution. The representations of the different types of NetInf messages are a fundamental part of the NetInf Simulator. Figure 4.2 shows the abstract structure of a NetInf message, and Table 4.1 lists the different message types with brief descriptions.

The messages with type values 5, 6, and 7 are not in the original NetInf protocol. They are extensions proposed to help solve the problem addressed in this thesis. PUBLISH and PUBLISH-RESP messages are only used to set up the initial state of the simulation during the warm-up period.

[Figure 4.2: Abstract Structure of a NetInf Message — fields: TYPE, NDO ID, METADATA (key: value pairs), DATA (PAYLOAD)]

Kind  Type           Short Description
1     GET            Request to retrieve an NDO.
2     GET-RESP       The response to GET; carries an NDO or a list of locators.
3     PUBLISH        Publish an NDO to the NRS.
4     PUBLISH-RESP   The response to PUBLISH; contains the result of the operation.
5     HELLO          Find direct neighbors (within one NetInf hop).
6     HELLO-RESP     Response to HELLO.
7     NIU            Populate Name Resolution Information.

Table 4.1: Message Kind

4.2.2 Routing Function

To enable the NetInf routing function, a basic underlying routing function needs to be implemented in the simulator. The lower-layer routing function delivers a message, via the shortest path, to the destination given in the destination field of the message.

NetInf supports both name-based routing and name resolution. Name-based routing is a pre-configured routing mechanism; for example, it can match the prefix of an NDO's name to decide to which node the message should be sent, or what the next hop is. In this thesis project, the name resolution service is implemented for the experiments. In the real world, the two routing mechanisms can be used simultaneously as a hybrid mechanism.

In the implementation, a NetInf node checks its local Name Resolution Information upon receiving a request. If it has the Name Resolution Information, it either sends the resolved information (source locators or routing hints) to the requester or performs Request Aggregation, depending on the settings. Details about Request Aggregation are discussed in Section 4.2.4.

4.2.3 Name Resolution

Name Resolution maps NDO names to network or host identifiers in different name spaces; these identifiers are called routing hints. Routing hints carry information about where to find copies of the object. NetInf supports different kinds of routing hints, such as IP addresses of source nodes and pointers to another node that has information about the source nodes [6].

Figure 4.3 shows a simple example of the Name Resolution Service Table (NRST). It stores the name resolution information entries; each entry maps an NDO name to a routing hint. In this thesis, the routing hints stored in the NRST are the node addresses (integers) of the source nodes that hold copies of the NDO.

NDO ID            Publisher     Locator
NI://EAB1400001   Ericsson AB   5
NI://EAB1400005   Ericsson AB   5
NI://KTH1400001   KTH           6
NI://KTH1400027   KTH           6
NI://KTH1400050   KTH           6

Figure 4.3: Name Resolution Service Table (NRST)

The NRST is used not only by the Name Resolution Server (NRS) but also by ordinary NetInf nodes; that is, ordinary NetInf nodes can also maintain an NRST and provide Name Resolution services to other nodes.
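As a rough illustration, the NRST of Figure 4.3 can be modeled as a plain mapping (a sketch only; a real NRST implementation would hold richer metadata and support updates):

```python
# The NRST of Figure 4.3 as a mapping from NDO name to publisher and
# locators. resolve() mirrors the lookup a node performs on a GET.

nrst = {
    "NI://EAB1400001": {"publisher": "Ericsson AB", "locators": [5]},
    "NI://EAB1400005": {"publisher": "Ericsson AB", "locators": [5]},
    "NI://KTH1400001": {"publisher": "KTH", "locators": [6]},
}

def resolve(table, ndo_name):
    """Return the locators bound to an NDO name, or [] if unresolved."""
    entry = table.get(ndo_name)
    return entry["locators"] if entry else []

assert resolve(nrst, "NI://KTH1400001") == [6]
assert resolve(nrst, "NI://UNKNOWN") == []
```

An empty result corresponds to the case where the node cannot resolve the name and must forward the request toward the NRS.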

4.2.4 Request Aggregation

Request Aggregation is a very important feature in NetInf and other ICN approaches; compare the pending interest table in CCN [25]. Request Aggregation is a mechanism to reduce network load. One case is when many users request the same object simultaneously, for example a live stream of a popular event. Without request aggregation, a large amount of traffic is added to the infrastructure, and the user experience may suffer due to the high load.

For a NetInf node, all incoming requests for the same NDO are aggregated, and a single request for the NDO is sent out. Figure 4.4 shows how Request Aggregation works. Four clients request the same NDO at the same time: the requests sent by Client 1 and Client 2 are aggregated by NetInf router R1, and only one request is forwarded to R0; the requests sent by Client 3 and Client 4 follow the same pattern. In the end, NetInf router R0 sends only one request to the source node S. To implement this function, the nodes that perform request aggregation create aggregation lists for aggregating the requests for the same NDO. When the aggregation point receives the response, it empties and deletes the list after sending responses to all the requesters in the list. An aggregation list maintains the pending requests that have been aggregated by a node for an NDO; it records who the requesters are and which object is requested. Figure 4.5 shows an example of an aggregation list.

[Figure 4.4: Request Aggregation — Clients 1 and 2 attach to router R1, Clients 3 and 4 to router R2; both routers connect to R0, which connects to the source node S]

NDO ID: NI://EAB1400001 — Requesters: Node 1, Node 3
NDO ID: NI://EAB1400014 — Requesters: Node 4, Node 7, Node 2

Figure 4.5: Request Aggregation List

The destination of a request is rarely the source node itself; requests are usually sent to the NRS. A response carrying routing hints (IP addresses or locators) is then sent back by some node that can resolve the NDO name in the request message. In this case, the aggregation point has two choices.

The first choice (Mode 1) is that the node does not empty the list; it sends out another request using the routing hints. Source Selection may be performed in this case if more than one routing hint is available. The second choice (Mode 2) is that the node empties the aggregation list and sends the routing hints back to the aggregated requesters. Figures 4.6 and 4.7 show the sequence charts of these two modes.

[Figure 4.6: Request Aggregation (Mode 1) — sequence chart: Clients A and B both send GET (to NRS); the aggregation point forwards a single GET to the NRS, receives the GET-RESP with locators, sends its own GET to the source, and on receiving the GET-RESP with data answers both clients]

[Figure 4.7: Request Aggregation (Mode 2) — sequence chart: Clients A and B both send GET (to NRS); the aggregation point forwards a single GET to the NRS, sends the GET-RESP with locators back to both clients, and each client then sends its own GET to the source]

There are two modes for Request Aggregation because there is no precise definition of Request Aggregation; it can be implemented in different ways, and some implementations may cause problems in a NetInf network. In this project, Mode 2 is activated for all the methods.
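The aggregation-list behavior in Mode 2 can be sketched as follows (function names are illustrative; the first GET for an NDO is forwarded, later GETs are queued, and the response is fanned out to every queued requester before the list is deleted):

```python
# Sketch of Request Aggregation (Mode 2) at an aggregation point.

aggregation = {}   # NDO name -> list of pending requesters

def on_get(ndo, requester, forward):
    if ndo in aggregation:            # request already pending: aggregate
        aggregation[ndo].append(requester)
    else:                             # first request: forward upstream once
        aggregation[ndo] = [requester]
        forward(ndo)

def on_response(ndo, payload, reply):
    # Fan the response out to every aggregated requester, then delete the list.
    for requester in aggregation.pop(ndo, []):
        reply(requester, payload)

forwarded, replies = [], []
on_get("NI://E1", "client1", forwarded.append)
on_get("NI://E1", "client2", forwarded.append)
on_response("NI://E1", "<locators>", lambda r, p: replies.append(r))
assert forwarded == ["NI://E1"]              # only one upstream request
assert replies == ["client1", "client2"]     # both requesters answered
```

Mode 1 would differ only in `on_response`: instead of fanning out the locators, the aggregation point would keep the list and issue its own GET toward the selected source.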

4.2.5 Source Selection and Cost Model

In NetInf, there might be multiple copies of the same NDO stored in the network. When a client requests an NDO, it might receive a response that contains several different source locations or routing hints; the client then knows that multiple copies (sources) are available in the network. The client compares the source locations and sends another request to the best one. The process of finding the best source among many source locations is called Source Selection. To perform Source Selection, a client needs some knowledge on which to base the decision. In the implementation of the simulator, Source Selection is done with the help of path cost, which is computed from the link weights on the path. Figure 4.8 shows how Source Selection works.

[Figure 4.8: Source Selection based on Weight — the client (node 1) resolves NDO "NI://KTH1400050" to locations [3, 4]. The path cost from node 1 to node 3 is 3 + 10 = 13, and from node 1 to node 4 it is 3 + 7 = 10. Since 10 < 13, node 4 is the best source.]
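The computation in Figure 4.8 can be reproduced in a few lines (the path costs below are the link-weight sums shown in the figure; in the simulator they would be derived from the topology):

```python
# Sketch of Source Selection: among the candidate sources returned by name
# resolution, pick the one with the lowest total path cost (sum of link
# weights on the path from the client).

def select_source(path_costs, candidates):
    """Return the candidate source with the minimum known path cost."""
    return min(candidates, key=lambda src: path_costs[src])

path_costs = {3: 3 + 10, 4: 3 + 7}   # costs from the client (node 1)
best = select_source(path_costs, [3, 4])
assert best == 4                      # 10 < 13, so node 4 wins
```
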

4.2.6 Node

Figure 4.9 shows the basic architecture of a NetInf node. In the implementation, all nodes share the same definition, but they can exhibit different behaviors given different initialization parameters.

4.2.7 Methods

Four methods are implemented in the simulator. Some components (handlers for messages) are shared by more than one method. Figure 4.10 shows the composition of each method; "GET (Central)" denotes the handler for GET messages in the Central method. More detail about how each method is implemented in the simulator can be found in Appendix B.

[Figure 4.9: Node Architecture — a node consists of a Cache, an NRST, an NIT, and handlers for GET, GET-RESP, PUBLISH, PUBLISH-RESP, HELLO, HELLO-RESP, and NIU messages]

Central: GET (Central), GET-RESP with Data (Central), GET-RESP with Locators (Central)
Active:  GET (Active), GET-RESP with Data (Active), GET-RESP with Locators (Central), NIU (Active), Neighbor Discovery
Passive: GET (Active), GET-RESP with Data (Central), GET-RESP with Locators (Passive)
Hybrid:  GET (Active), GET-RESP with Data (Active), GET-RESP with Locators (Passive), NIU (Active), Neighbor Discovery

Figure 4.10: Method Composition


Chapter 5

Evaluation

5.1 Scenarios

Simulations are conducted on two scenarios (tree scenario and mesh scenario) with Central, Active, Passive and Hybrid methods. The Central method is the conventional method in NetInf. Active and Passive are the methods developed in the thesis to distribute Name Resolution Information in the network. The Hybrid method combines the features of Active and Passive methods. The scenarios are deployed in the NetInf simulator built in Chapter 4.

5.1.1 Network Topology

Two typical topology structures are chosen for evaluation: tree and mesh. A tree topology resembles the network deployed within an organization, whereas today's Internet is more similar to a mesh topology. Common parameters for both the tree and mesh scenarios are shown in Table 5.1.

Parameter                      Value
# Nodes                        26
# Clients                      11
# Named Data Objects (NDOs)    500
Average interval of requests   100 s
Simulation time per run        100,000 s
Simulation runs per method     100
Interval of log                1,000 s
Link bandwidth                 1 Mbps
Link propagation delay         100 ms
Request Aggregation Mode       2

Table 5.1: Common Parameters


• Tree Topology

The structure of the tree topology is shown in Figure 5.1. In addition to the common parameters, some parameters specific to the tree topology are shown in Table 5.2.

[Figure 5.1: Tree Topology — 26 nodes rooted at the NRS (node 0); see Table 5.2 for the role of each node]

Node Type            Value (Node index)
Router (Blue)        1, 2, 3, 4, 5, 6, 7, 8, 9
Client (White)       10, 11, 13, 14, 16, 18, 20, 21, 22, 24, 25
Source (Orange)      12, 15, 17, 19, 23
NRS Server (Green)   0

Table 5.2: Parameters for Tree Topology

• Mesh Topology

The structure of the mesh topology is shown in Figure 5.2. Some parameters specific to the mesh topology are shown in Table 5.3.

Node Type            Value (Node index)
Router (Blue)        1, 2, 3, 4, 5, 6, 7, 8, 9
Client (White)       10, 11, 13, 14, 16, 17, 18, 20, 22, 23, 25
Source (Orange)      12, 15, 19, 21, 24
NRS Server (Green)   0

Table 5.3: Parameters for Mesh Topology

5.1.2 Content Objects

At the beginning of each simulation run, 100 distinct NDOs are generated in each source node. Source nodes are represented by orange circles in Figures 5.1 and 5.2. All generated NDOs have the same size of 50,000 bytes.

For each NDO, a name is allocated. This name is selected from a set of 10,000 unique names; thus, there might be some duplication of NDOs in the network.

[Figure 5.2: Mesh Topology — 26 nodes; see Table 5.3 for the role of each node]

After the NDOs are generated on the source nodes, the information about these NDOs is published to the NRS server during the warm-up period of the simulation. These publish messages are not counted as signaling messages, since they are only used to set up the initial state of the simulation.

5.1.3 Requests

During the simulation, clients should send requests only for NDO names that exist in the network. To ensure this, a list of the names of objects that exist in the network is generated during the warm-up period. Each client picks the NDO to request uniformly at random from the available NDO names in the network, and sends requests at exponentially distributed random intervals with an average interval of 100 seconds.
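The request workload described above can be sketched as follows (a minimal illustration; `next_request` is our own helper, not part of the simulator):

```python
import random

# Sketch of a client's request generation: pick an existing NDO name
# uniformly at random, and wait an exponentially distributed interval with
# mean 100 s (rate 1/100 per second) before the next request.

def next_request(available_ndos, mean_interval=100.0, rng=random):
    ndo = rng.choice(available_ndos)              # uniform over existing NDOs
    wait = rng.expovariate(1.0 / mean_interval)   # exponential inter-arrival
    return ndo, wait

rng = random.Random(42)   # a fixed seed, as the evaluation varies seeds per run
ndo, wait = next_request(["NI://A", "NI://B"], rng=rng)
assert ndo in ("NI://A", "NI://B") and wait >= 0
```

Seeding the generator per run mirrors how the evaluation uses a distinct random seed for each of the 100 simulation runs.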

5.1.4 Packet Length

The length of each packet is computed dynamically according to what the packet contains. Appendix C shows the packet length specification. The propagation delay is the same for all links: 100 ms. The packet length affects the transmission delay of the packet; thus, large packets suffer a higher network delay than small packets.
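Under these parameters, the per-hop delay of a packet is its transmission delay plus the fixed propagation delay. A sketch for a 50,000-byte NDO on a 1 Mbps link (`hop_delay` is our own helper, not a simulator function):

```python
# Per-hop delay = packet length / link bandwidth + propagation delay.
# With the simulation parameters: 1 Mbps links, 100 ms propagation delay.

def hop_delay(packet_bytes, bandwidth_bps=1_000_000, propagation_s=0.1):
    return packet_bytes * 8 / bandwidth_bps + propagation_s

delay = hop_delay(50_000)
# 50,000 B = 400,000 bits -> 0.4 s transmission + 0.1 s propagation = 0.5 s
assert abs(delay - 0.5) < 1e-9
```

This explains why the number of hops to the selected source dominates the NDO Retrieval Time in the results of Chapter 5: each extra hop adds roughly half a second for an NDO-sized packet.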


5.1.5 Compared Methods

The following four methods are compared: the Central, Active, Passive, and Hybrid methods. There is nothing technologically novel about the Central method; it is only used as a reference to evaluate the effectiveness of the other methods in terms of overhead and latency.

5.1.6 Warm-Up Period

When the simulation is started, it first enters a warm-up period. In order to build up the initial state of the simulation, the following actions are performed during this period.

• Source nodes generate NDOs and publish them to the NRS. All the NDOs in the source nodes are published to the NRS during this warm-up period.

• In the Active method, the source nodes send NIU messages to their neighbors after generating all the NDOs. All the NIU messages sent during the warm-up period arrive at their destinations within the period.

After the warm-up period, the first GET request is sent out by a client in the network. The messages transmitted during the warm-up period are ignored in the simulation results.

5.2 Results

After running the simulation 100 times for every method on the tree and mesh scenarios with different random seeds, the collected data is used to generate metrics. The random seeds increase the randomization across simulation runs: for each run, a random seed is used to generate the NDOs and the requests. The results are shown in Figures 5.4, 5.5, and 5.6.

[Figure 5.3: NDO Retrieval Time — the client sends a GET to the NRS at time t0, receives a GET-RESP with locators, sends a GET to the source, and receives the GET-RESP with the NDO at time t1; NRT = t1 − t0]


The NDO Retrieval Time (NRT) and signaling overhead metrics are generated to show the performance of each method in the different scenarios. As shown in Figure 5.3, the NRT is the time needed to retrieve an NDO after the client sends the initial GET request to the NRS.

5.2.1 NDO Retrieval Time

Figure 5.4 shows the frequency distribution of the NRT for all methods in the tree and mesh scenarios.

(a) Tree scenario (b) Mesh scenario

Figure 5.4: NDO Retrieval Time (frequency)

As can be seen in Figure 5.4(a), there are several peaks in the graph. For example, the Central method (blue solid line) has three peaks: two of them are near 2×10^5 and the last one is near 6.5×10^5 on the Y-axis, so the highest peak is nearly twice as high as the small peaks. This is reasonable in the tree scenario. Since the requests are generated following a uniform distribution, each source node has the same chance of receiving a request. In the tree topology shown in Figure 5.1, if a client (for example, node 10) requests an NDO, the NRT mainly depends on the source node from which the NDO is retrieved. From the perspective of node 10, node 12 is two hops away, node 15 is four hops away, and nodes 17, 19, and 23 are all six hops away. Since the bandwidth is the same for all links in the simulation, the transmission time of the NDO dominates the NRT; therefore, the graph shows two small peaks and one big peak. The other methods also have three peaks, for the same reason. One may notice that the Active method curve is shifted left compared to the Central method, which indicates that the Active method performs better than the Central method. The Passive method is shifted left even farther than the Active method, so it might perform better than both the Active and Central methods. The Hybrid method nearly coincides with the Active method on the first two peaks and with the Passive method on the third peak; this comes from combining the features of the Active and Passive methods to achieve better performance than either of the two. In Figure 5.4(b), the Active, Passive, and Hybrid methods all have a smaller NRT than the Central method, but the differences among them are not very clear.

Figure 5.4 gives a general overview of the performance of each method in the different scenarios. To find out more about what happens during the simulation, Figure 5.5 is introduced.

(a) Tree scenario (b) Mesh scenario

Figure 5.5: NDO Retrieval Time (interval)

The simulation runs for 100,000 seconds (virtual time in the simulator), and the logging operation is performed every 1,000 seconds. This results in 100,000/1,000 = 100 intervals; in other words, the simulation can be divided into 100 equal-length intervals in order to examine the results in each interval. In Figure 5.5, the X-axis is the interval number of the simulation, and the Y-axis is the average NRT during that interval.

Figure 5.5(a) shows the NRTs for each simulation interval in the tree topology.

Since the information about the NDOs in the source nodes is already populated into the network by the Active method during the warm-up period, the Active method has a stable NRT of about 3.35 seconds from the beginning to the end of the simulation. For the Central method, the NRT stays near 3.7 seconds, because no information about the NDO locations is exchanged or cached as in the Active or Passive methods. At the beginning of the simulation, the NRT of the Passive method is nearly the same as that of the Central method, but it decreases as the interval number increases, and during intervals 60 to 100 it becomes stable.

Since there is no memory limit on caching Name Resolution Information in nodes for the Passive method, all the information stored in the NRS server is eventually cached in the network; this is why the Passive method becomes stable during intervals 60 to 100. During intervals 1 to 10, the Active method has a lower NRT than the Passive method, because at that point the Active method has already cached enough information about the NDO locations, while the Passive method is only starting to cache Name Resolution Information. After interval 10, the Passive method has a lower NRT than the Active method, because the Name Resolution Information exchanged via NIUs in the Active method can only cover parts of the tree topology, while the Passive method eventually brings Name Resolution Information to all the nodes.

The Hybrid method has the same starting point as the Active method, and its NRT decreases faster than that of the Passive method. This is because, before any Name Resolution Information is cached on path by the Passive-method feature, some Name Resolution Information populated by the Active method already exists in the network. All of the proposed methods clearly have a lower NRT than the Central method, but which method performs best depends on many factors, such as the simulation length: if the simulation ran for only the first ten intervals, the Passive method would never perform better than the Active method. Compared to the Active and Passive methods, the Hybrid method is always better than either of them, since it takes advantage of both.

Figure 5.5(b) shows the NRT for each simulation interval in the mesh topology. The Passive method follows the same trend as in the tree scenario. One thing to highlight is that the Active method has nearly the lowest NRT from the beginning to the end of the simulation. The reason it looks quite different from the tree-topology graph is that the population of Name Resolution Information by the Active method may cover nearly all the nodes in the mesh topology: the TTL of the NIU used in the Active method is 3, and in the mesh topology most nodes can be reached within three hops. Thus, the initial state of the simulation for the Active method is quite different from the tree scenario, where the population of Name Resolution Information during the warm-up period can only reach the subtree in which the sender resides.

5.2.2 Signaling Overhead

Figure 5.6 shows the signaling overhead for all the methods in the tree and mesh scenarios. The interval number has the same meaning as in Figure 5.5 (Section 5.2.1). The signaling data ratio is calculated by the following equation; the signaling data includes NRS look-up and response messages, Neighbor Discovery messages, and GET requests (to a source).

Signaling Data Ratio = (Signaling Data Transmitted in the Network) / (Total Data Transmitted in the Network)

In both the tree and mesh scenarios, the Central method has a higher signaling overhead than the other methods. One thing to notice is that in the Active method no NIU messages are sent out by clients during the simulation, since clients do not save an NDO after retrieving it and no node caches NDOs on path. For the Active and Passive methods, once the Name Resolution Information is cached in nodes in the network, subsequent NRS look-up requests no longer have to traverse links all the way to the NRS; look-ups can be resolved by nodes on path, which reduces the signaling. The small peaks in the Active method curve are caused by the periodic Neighbor Discovery process: in the simulation, the interval between two Neighbor Discovery processes is set to 300 seconds, during which nodes broadcast messages to the network to find their neighbors.

(a) Tree scenario (b) Mesh scenario

Figure 5.6: Signaling Data Ratio (interval)

In Figure 5.6(a), the signaling overhead of the Passive method decreases between intervals 1 and 60 and becomes stable after interval 60. The trend in Figure 5.6(a) is due to reasons similar to those behind the trend in Figure 5.5(a): the Passive method needs some time to populate the Name Resolution Information in the network. In Figure 5.6(b), the Hybrid method follows a trend similar to the Active method.

This is caused by the mesh topology: in the mesh topology, the Name Resolution Information is populated to most nodes in the network during the warm-up period, so this information is used to resolve NDO names in the network, and only a small fraction of requests are sent to the NRS to be resolved. That is why the Hybrid method curve lies just slightly below the Active method curve and does not show a trend similar to the Passive method.

5.3 Discussion

In the simulation, the network is static and the content objects are always available, which is quite different from the real world, where the network is dynamic, with nodes joining and leaving, and content might be available only for a short period. Thus, it is not possible to say which method is always the best; it depends on the network situation and the structure of the network.

For example, in Figure 5.5(a), if the churn of the network is very high, the NRT may keep suffering from the same situation as in intervals 1 to 10. An algorithm may also work well in one topology but not in another; for example, as shown in Figures 5.5(a) and 5.5(b), the Active method performs better in the mesh topology than in the tree topology.


Chapter 6

Conclusion and Future Work

6.1 Conclusion

The goal of the research in this thesis is to investigate possible methods for improving the performance of a NetInf network by distributing Name Resolution Information to nodes in the network. The proposed methods are evaluated on tree and mesh topologies; the tree topology resembles the network of an organization, while the mesh topology is closer to today's Internet. The design and implementation of the methods are intended for experiments only; to deploy these methods in real-world networks, more factors need to be considered, for example, expired information about NDOs. Even though the methods proposed in the thesis are experimental, their evaluation leads to a better understanding of what can be done to improve the performance of NetInf networks.

The Active, Passive, and Hybrid methods enable clients to resolve NDO names more efficiently than the Central method. From the results presented in Chapter 5, it is clear that both proposed methods and their combination outperform the Central method in NDO Retrieval Time and signaling overhead. The performance of each method is also affected by the topology in which it is deployed. Some conclusions may be drawn from the results. If the network is static (nodes rarely join and leave) and the running time is long, the Passive and Hybrid methods might perform better than the Active method in the tree topology; however, if the network is more dynamic (nodes join and leave very often), the Passive method might suffer and perform worse than the Active method. In the mesh topology, if the scale of the network is relatively small or the TTL of the NIU in the Active method is large, the Active method tends to perform better than the Passive method, but the population of Name Resolution Information may increase the signaling overhead of the network. The TTL value of NIU messages in the Active method should therefore be a trade-off between lower latency and higher signaling overhead.

References
