
http://www.diva-portal.org

Postprint

This is the accepted version of a paper presented at The 13th Swedish National Computer Networking Workshop (SNCNW 2017), Halmstad University, May 29-30.

Citation for the original published paper:

Ahlgren, B., Hurtig, P., Abrahamsson, H., Grinnemo, K-J., Brunström, A. (2017)
Are MIRCC and Rate-based Congestion Control in ICN READY for Variable Link Capacity?
In: Halmstad: Högskolan i Halmstad

N.B. When citing this work, cite the original published paper.

Permanent link to this version:


Are MIRCC and Rate-based Congestion Control in ICN READY for Variable Link Capacity?

Bengt Ahlgren†, Per Hurtig‡, Henrik Abrahamsson†, Karl-Johan Grinnemo‡, Anna Brunström‡

†RISE SICS, ‡Karlstad University

ABSTRACT

Information-centric networking (ICN) has been introduced as a potential future networking architecture. ICN promises an architecture that makes information independent from location, application, storage, and transportation. Still, it is not without challenges. Notably, there are several outstanding issues regarding congestion control: Since ICN is more or less oblivious to the location of information, it opens up for a single application flow to have several sources, something which blurs the notion of transport flows, and makes it very difficult to employ traditional end-to-end congestion control schemes in these networks. Instead, ICN networks often make use of hop-by-hop congestion control schemes. However, these schemes are also tainted with problems, e.g., several of the proposed ICN congestion controls assume fixed link capacities that are known beforehand. Since this is seldom the case, this paper evaluates the consequences, in terms of latency, throughput, and link usage, that variable link capacities have on a hop-by-hop congestion control scheme such as the one employed by the Multipath-aware ICN Rate-based Congestion Control (MIRCC). The evaluation was carried out in the OMNeT++ simulator, and demonstrates how seemingly small variations in link capacity significantly deteriorate both latency and throughput, and often result in inefficient network link usage.

Keywords

ICN, congestion control, hop-by-hop, MIRCC, link capacity

1. INTRODUCTION

The tremendous growth of the internet in the past decade, together with a new breed of applications, such as Facebook, Twitter, YouTube, and Netflix, that are content rather than location centric, puts great pressure on the way the internet currently works, and in many ways challenges its architecture. Information Centric Networking (ICN) emerged as an alternative architecture that addresses the requirements of content-centric applications: it rethinks and redesigns the current internet architecture, which originally was designed for communication between pairs of end-hosts. In contrast, at the heart of the ICN architecture lies retrieving content by location-independent names that might correspond to content copied and stored on several hosts.

Communication in CCN, one particular ICN design, is initiated by the consumer in the form of an interest packet. When an interest packet reaches a content provider, a data packet is sent back with the requested content. Since the content name is included in the interest as well as the data packet, the data packet is able to retrace the path of the interest packet in reverse. Nodes in CCN, both end nodes and routers, have three primary data structures: a Forwarding Information Base (FIB), a Content Store (CS), and a Pending Interest Table (PIT). The FIB acts like a routing table and stores next hops for content names; the CS caches copies of data packets passing through the node; and the PIT maintains an entry for each incoming interest packet until its corresponding data packet arrives. Whenever an interest packet arrives at a CCN node, the node tries to look up the associated content name in the CS. If the content name is found, the corresponding data packet is returned to the consumer who is the originator of the interest packet; if not, the node searches the PIT for the content name. Only if the content name is not in the PIT is the interest packet passed to the FIB, which looks up the next node and forwards the interest packet. Otherwise, the interest packet is aggregated on a list in the PIT holding pending interests for this content name, i.e., it waits on an already-sent interest packet requesting the data packet with this content name.
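As a concrete illustration of this lookup order, consider the following minimal sketch (class and method names are ours, not from any CCN codebase; real CCN forwarders also use longest-prefix FIB matching, which the exact-match dictionaries below simplify away):

```python
class CcnNode:
    """Sketch of CCN interest/data processing: CS, then PIT, then FIB."""

    def __init__(self):
        self.cs = {}    # Content Store: content name -> cached data packet
        self.pit = {}   # Pending Interest Table: name -> faces awaiting data
        self.fib = {}   # Forwarding Information Base: name -> next-hop face

    def on_interest(self, name, in_face):
        if name in self.cs:
            in_face.send_data(self.cs[name])    # cache hit: answer directly
        elif name in self.pit:
            self.pit[name].append(in_face)      # aggregate onto pending entry
        else:
            self.pit[name] = [in_face]          # record, then forward upstream
            self.fib[name].send_interest(name)

    def on_data(self, name, data):
        self.cs[name] = data                    # cache a copy on the path
        for face in self.pit.pop(name, []):     # retrace the interest path:
            face.send_data(data)                # satisfy all aggregated faces
```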

Although ICN is indeed a promising alternative internet architecture that solves many problems inherent in the current internet architecture, there remain several open issues that need to be solved, e.g., congestion control. Since ICN intrinsically supports multiple providers for a single, named content item, with interest and data packets not tied to any particular network paths between consumer and providers, the single-source, single-path congestion control schemes used in the current internet architecture cannot be directly applied in this type of network architecture. In view of this, hop-by-hop congestion control schemes have been proposed, complemented with end-to-end congestion control schemes [4, 9]. Since hop-by-hop congestion control schemes work on a per-link basis, the traditional congestion control schemes of the current internet apply. Still, as a way of simplifying their design, several hop-by-hop congestion control schemes work under the assumption that the link capacity is fixed and known beforehand. In this paper, we demonstrate the consequences of such an assumption. Particularly, the paper shows that fairly small fluctuations in the link capacity can have a significant negative impact on throughput, latency, and link utilization for a hop-by-hop congestion control scheme such as the one employed by Multipath-aware ICN Rate-based Congestion Control (MIRCC) [4].

The remainder of the paper is organized as follows. Section 2 puts this work into context by providing a brief overview of related work, with a bias towards the works of Wang et al. [9] and Mahdian et al. [4], which constitute the foundation of the work presented in this paper. Section 3 describes our study: the experiment setup, the methodology used, and the results from the study. The paper concludes in Section 4 with a summary of the results, their significance, and how we intend to utilize them in our future work.

2. RELATED WORK

As previously mentioned, ICN employs a consumer-driven communication model in which requested data packets can be accessible from several different providers, something which to a large extent invalidates the use of the traditional congestion control schemes found in the current internet. For example, it makes it very hard to use an RTO mechanism similar to transport protocols such as TCP and SCTP, and the self-clocking mechanism employed by these and other transport protocols causes, due to in-network caching, fairness problems between competing flows. Still, some of the earliest proposed congestion control schemes for ICN work under the assumption of a single provider: Both the Interest Control Protocol (ICP) [2] and the Information Centric Transport Protocol (ICTP) [8] use AIMD window-based schemes that work very much the same as in TCP.
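For reference, the window dynamics that such schemes inherit from TCP look roughly as follows, applied to a window of outstanding interests rather than bytes (a minimal AIMD sketch; names and constants are illustrative, not ICP's or ICTP's actual code):

```python
class AimdWindow:
    """Minimal TCP-style AIMD sketch, counting outstanding interests."""

    def __init__(self):
        self.cwnd = 1.0                        # window of outstanding interests

    def on_data(self):
        self.cwnd += 1.0 / self.cwnd           # additive increase: ~ +1 per RTT

    def on_timeout_or_loss(self):
        self.cwnd = max(1.0, self.cwnd / 2.0)  # multiplicative decrease: halve
```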

As a way to solve the multi-provider problem inherent in ICN, proposals such as ConTug [1] and Content Centric TCP (CCTCP) [7] use separate congestion windows and RTO values for each provider location, something which either imposes some severe restrictions on the way data is retrieved, or results in complex solutions: ConTug assumes that the consumer knows beforehand the locations of the content it is about to request, and that these locations do not change; CCTCP does not have this limitation, however, in order to keep track of the locations of content items, it implements an intricate prediction algorithm.

An alternative way of dealing with the multi-provider problem of ICN is to handle congestion on a per-link or hop-by-hop basis: Each node along the path from a consumer to a provider is responsible for detecting congestion, and adjusts the rate of data packets from the provider by adjusting the forwarding rate of interest packets. A typical representative of this category of congestion control schemes is Hop-by-Hop Interest Shaping (HoBHIS) [6]. In HoBHIS, nodes monitor the size of the queue of arriving data packets and adjust the forwarding rate of interest packets accordingly. As a natural continuation of HoBHIS, Yi et al. [10] propose a congestion control scheme that complements the adjustment of the interest-packet rate with an interest NACK: When a node can neither satisfy nor forward an interest packet, it sends an interest NACK back to the downstream node. In so doing, it avoids so-called dangling PIT entries that might prevent other interest packets from being forwarded due to lack of space in the PITs of nodes.

Since the interest packets are typically much smaller than the data packets, most hop-by-hop congestion control schemes only consider the contribution of data packets to congestion. As an exception, the congestion control scheme suggested by Wang et al. [9] takes into account data packets as well as interest packets in its interest-packet rate computations: On the basis of the constraint that the sum of the interest- and data-packet rates should be less than the link capacity, the optimal interest-packet rate over a link is computed; the actual incoming interest-packet rate to a node is estimated, and the outgoing interest-packet rate is adjusted based on the difference between the optimal and actual interest-packet rates. In particular, the outgoing interest-packet rate $i_o$ is computed as follows:

$$ i_o = i_o^{\min} + \left( i_o^{\max} - i_o^{\min} \right) \left( 1 - \frac{i_i^{\mathrm{obs}}}{i_i^{\mathrm{expmin}}} \right)^{2}, $$

where $i_o^{\min}$ and $i_o^{\max}$ are the minimum and maximum outgoing interest-packet rates, $i_i^{\mathrm{obs}}$ is the observed incoming interest-packet rate, and $i_i^{\mathrm{expmin}}$ is the expected minimum incoming interest-packet rate.
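A direct transcription of this update into code (argument names mirror the symbols above; this is our illustrative reading of the formula, not Wang et al.'s implementation):

```python
def outgoing_interest_rate(i_o_min, i_o_max, i_i_obs, i_i_expmin):
    """Shaped outgoing interest-packet rate i_o, per the formula above.

    i_o_min, i_o_max: minimum and maximum outgoing interest-packet rates
    i_i_obs:          observed incoming interest-packet rate
    i_i_expmin:       expected minimum incoming interest-packet rate
    """
    deviation = 1.0 - i_i_obs / i_i_expmin
    return i_o_min + (i_o_max - i_o_min) * deviation ** 2
```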

A major problem with the congestion control scheme suggested by Wang et al., and with several other proposed hop-by-hop congestion control schemes for that matter, is that they rely on a known link capacity, something that in many cases is unrealistic to assume. For example, link bandwidths change dynamically over wireless links, and in those cases where the ICN network is built as an overlay network, the link between two ICN nodes may consist of several physical links.

Since hop-by-hop congestion control schemes only work on a per-link basis, they need to be complemented with congestion control schemes that work between the consumer and the providers. The Multipath-aware ICN Rate-based Congestion Control (MIRCC) [4] is an example of a congestion control scheme that works on a hop-by-hop basis as well as between end nodes. In fact, it incorporates the hop-by-hop congestion control scheme of Wang et al., and extends this scheme with a rate-based end-to-end congestion control scheme which borrows heavily from the Rate Control Protocol (RCP) [3]. As a consequence of MIRCC re-using the hop-by-hop congestion control scheme of Wang et al., it inherits its deficiencies. Particularly, it inherits the problem of requiring known link capacities, the extent of which is studied in the following section.
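For reference, an RCP-style per-link rate update of the kind MIRCC's end-to-end scheme borrows from looks roughly like the sketch below (our illustrative transcription of RCP's control law [3], not MIRCC's actual algorithm; note that the update, too, depends explicitly on a known link capacity):

```python
def rcp_rate_update(rate, capacity, input_rate, queue_bits, avg_rtt,
                    interval, alpha=0.4, beta=0.2):
    """One RCP-style update of the advertised per-link rate (bits/s).

    capacity and input_rate are in bits/s, queue_bits is the standing
    queue in bits, avg_rtt and interval are in seconds; alpha and beta
    are gain parameters (the values here are illustrative).
    """
    spare = alpha * (capacity - input_rate)   # reclaim spare capacity
    drain = beta * (queue_bits / avg_rtt)     # drain the standing queue
    feedback = (interval / avg_rtt) * (spare - drain) / capacity
    return rate * (1.0 + feedback)
```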

                                      Data Packet Throughput (Mbps)               RTT average (ms)
                                      OMNeT++          ndnSIM (Wang et al. [9])   OMNeT++
Scenario                              C/S1     C/S2    C/S1     C/S2              C/S1     C/S2

No hop-by-hop controller
1) Baseline                           8.20     8.04    N/A      N/A               165.2    165.7

Wang controller, known link capacity
1) Baseline                           9.53     9.52    9.56     9.56              88.5     88.1
2) Randomised packet size             9.11     9.16    9.43     9.43              90.6     90.1
3) Asymmetric size ratio              9.64     9.64    9.37     9.33              109.3    108.9
4) Asymmetric link bandwidth          9.44     0.749   9.77     0.720             92.8     543.7

Wang controller, link capacity estimation error
1) Baseline                           7.51     8.19    N/A      N/A               87.8     86.6

Table 1: Summary of Simulation Results (C/S1 = Client/Server1, C/S2 = Client/Server2)

Client/Server1 --(100 Mbps, 10 ms)-- Router1 --(10 Mbps, 10 ms)-- Router2 --(100 Mbps, 10 ms)-- Client/Server2

Figure 1: Network Topology

3. EVALUATION

As previously mentioned, this paper aims to evaluate the performance effects that follow from assuming known, and fixed, link capacities in hop-by-hop congestion controls. Since one option is to omit a hop-by-hop controller altogether, we start out by demonstrating the effects of only having an end-to-end congestion control. Next, we show that Wang et al.'s hop-by-hop congestion control scheme [9] works very well in a fixed and known bandwidth regime, corroborating their original results obtained in ndnSIM [5]. Finally, we evaluate the hop-by-hop scheme when the underlying link capacity is unknown, and varying, by introducing a small error in the algorithm's link capacity estimation.

In all simulations, we used the network topology illustrated in Figure 1, and in all simulations the end nodes employed a window-based congestion control scheme that mimicked the AIMD congestion control scheme of TCP. The clients at both ends issued requests to the servers at the other ends. Four scenarios were considered: (1) a scenario with fixed interest and data packet sizes; (2) a scenario with variable interest and data packet sizes; (3) a scenario in which the ratio of the data and interest packets was not the same in both directions; and (4) an asymmetric-link scenario in which the bandwidth from Router1 to Router2 was limited to 1 Mbps while remaining 10 Mbps in the reverse direction. The interest packets were of size 24 Bytes in the fixed-size packet simulations, and uniformly distributed between 27 Bytes and 62 Bytes in the variable-sized packet simulations; the data packets were of size 1000 Bytes in the fixed-size packet simulations, and uniformly distributed between 600 Bytes and 1400 Bytes in the variable-sized packet simulations. In the third scenario, the asymmetric-size-ratio scenario, the data packets returned by Client/Server2 were 500 Bytes, while the data packets returned by Client/Server1 were 1000 Bytes; the interest packets in the third scenario were of the same size as in the fixed-size packet simulations, 24 Bytes. Finally, in the fourth scenario, the sizes of the interest and data packets were the same as in the fixed-size packet simulations. All simulations lasted for 70 s and were repeated 12 times; the session start times were randomly picked between 0 s and 5 s.
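For reference, these scenario parameters can be summarised as configuration (a sketch with our own helper names; the paper's actual OMNeT++ configuration is not shown):

```python
import random

def fixed(size_bytes):
    """Sampler for a fixed packet size."""
    return lambda: size_bytes

def uniform(lo_bytes, hi_bytes):
    """Sampler for a uniformly distributed packet size."""
    return lambda: random.uniform(lo_bytes, hi_bytes)

SCENARIOS = {
    1: {"interest": fixed(24), "data": fixed(1000)},               # baseline, fixed sizes
    2: {"interest": uniform(27, 62), "data": uniform(600, 1400)},  # randomised sizes
    3: {"interest": fixed(24), "data": fixed(1000),                # Client/Server2 returns
        "data_cs2": fixed(500)},                                   # 500-Byte data packets
    4: {"interest": fixed(24), "data": fixed(1000),                # Router1 -> Router2 limited
        "bottleneck_mbps": (1, 10)},                               # to 1 Mbps, 10 Mbps back
}

SIM_TIME_S = 70        # each run lasts 70 s
REPETITIONS = 12       # results averaged over 12 runs
START_WINDOW_S = 5.0   # session starts uniform in [0, 5] s
```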

The results from the simulations are summarized in Table 1 as averages of the 12 repetitions. They are complemented in Figure 2 with plots of bottleneck link utilization and queue length in the simulations of Scenario 1.

3.1 Without Hop-by-Hop Congestion Control

The top section in Table 1 lists the results with no hop-by-hop congestion control for Scenario 1 with fixed packet sizes. The setup thus relies only on the TCP-like end-to-end AIMD controller operated by the clients for congestion control. We had to increase the router buffer sizes from 60 to 200 packets to get decent throughput (around 8 Mbps) and link utilisation (80-90%); see Figure 2(a). This, however, came at the price of increased delay (RTT).

3.2 Known Link Capacity

The simulations with known link capacities were designed to mimic as closely as possible the baseline topology simulations by Wang et al. [9]. Particularly, the shaper queues were of size 60 packets, and the outgoing interest-packet rate ($i_o$) was shaped to 98% of the link capacity. The middle section of Table 1 summarizes the known-link-capacity simulations. As shown, our simulations confirmed that Wang et al.'s hop-by-hop congestion control scheme is able to efficiently utilize the available bandwidth on the 10 Mbps bottleneck link, provided the link capacities are fixed and known. However, the RTT measurements show that efficient utilization of the bottleneck link came with a queueing delay, especially in the third scenario, i.e., the scenario with dissimilar data-interest packet ratios in the two directions.

3.3 Unknown Link Capacity

As a simple way to investigate the effect of unknown link capacity, we introduce a link capacity estimation error in the simulation. The bottom section of Table 1 shows results when the estimation error is given by a uniform distribution in the range 0.7-1.3, that is, an estimation error of plus/minus 30%, for the baseline scenario with fixed packet sizes.

Figure 2: Bottleneck link dynamics in Scenario 1. (a) Evolution of bottleneck link utilisation (1 s average) over simulation time. (b) Evolution of bottleneck shaper queue length (packets) over simulation time. [Plots omitted; curves show: no rate controller, Wang rate controller, and Wang with link estimation error.]

We can see that Wang et al.'s scheme, as expected, does not cope well with this situation. The average throughput is at the same level as with no hop-by-hop controller, and the link utilisation (Figure 2(a)) is unstable, jumping between 50 and 100%. The queues also oscillate, regularly filling up to the maximum of 60 packets as shown in Figure 2(b), but this does not increase the total RTT; rather, it decreases slightly due to the periods of under-utilisation.
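To make the perturbation concrete, the following sketch shows how such an error can be injected into the shaper's capacity input (our own formulation for illustration; the actual OMNeT++ simulation code is not shown in the paper):

```python
import random

TRUE_CAPACITY_MBPS = 10.0   # bottleneck link in Figure 1
SHAPING_FRACTION = 0.98     # i_o shaped to 98% of capacity (Section 3.2)

def estimated_capacity_mbps():
    """Capacity as seen by the shaper: off by a uniform factor in [0.7, 1.3]."""
    return TRUE_CAPACITY_MBPS * random.uniform(0.7, 1.3)

def shaping_rate_mbps():
    # Overestimates make the shaper overshoot and fill the 60-packet queue;
    # underestimates leave the link idle -- the oscillation seen in Figure 2.
    return SHAPING_FRACTION * estimated_capacity_mbps()
```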

4. CONCLUSIONS

While ICN may solve many of the problems inherent in the current internet architecture, it still harbours a number of open issues. This paper has taken a closer look at one of these problems, congestion control. While most research on ICN congestion control has recognized the need for efficient hop-by-hop algorithms, few works have dropped the assumption of known, and fixed, link capacities. The evaluations in this paper show that even small estimation errors in the actual link capacity can have a significant impact on throughput and latency, as well as on efficient link utilization. To address this issue, future work includes the design and implementation of a hop-by-hop ICN congestion control that accommodates varying and unknown network conditions, such as link capacities.

Acknowledgments

This work was funded by The Knowledge Foundation (KKS) through the SIDUS READY project. The views expressed are solely those of the author(s).

5. REFERENCES

[1] S. Arianfar, P. Nikander, L. Eggert, and J. Ott. ConTug: A receiver-driven transport protocol for content centric networks. In Proceedings of ICNP'10 Poster Session, 2010.
[2] G. Carofiglio, M. Gallo, and L. Muscariello. ICP: Design and evaluation of an interest control protocol for content-centric networking. In IEEE INFOCOM Workshops, March 2012.
[3] N. Dukkipati. Rate Control Protocol (RCP): Congestion Control to Make Flows Complete Quickly. PhD thesis, Stanford, CA, USA, 2008. AAI3292347.
[4] M. Mahdian, S. Arianfar, J. Gibson, and D. Oran. MIRCC: Multipath-aware ICN Rate-based Congestion Control. In Proceedings of the 3rd ACM Conference on ICN, Kyoto, Japan, 2016.
[5] S. Mastorakis, A. Afanasyev, I. Moiseenko, and L. Zhang. ndnSIM 2: An updated NDN simulator for NS-3. Technical Report NDN-0028, Revision 2, NDN, November 2016.
[6] N. Rozhnova and S. Fdida. An effective hop-by-hop interest shaping mechanism for CCN communications. In 2012 Proceedings IEEE INFOCOM Workshops, March 2012.
[7] L. Saino, C. Cocora, and G. Pavlou. CCTCP: A scalable receiver-driven congestion control protocol for content centric networking. In IEEE ICC, June 2013.
[8] S. Salsano, A. Detti, M. Cancellieri, M. Pomposini, and N. Blefari-Melazzi. Receiver-driven interest control protocol for content-centric networks. In ACM SIGCOMM Workshop on Information Centric Networking (ICN), August 2012.
[9] Y. Wang, N. Rozhnova, A. Narayanan, D. Oran, and I. Rhee. An improved hop-by-hop interest shaper for congestion control in named data networking. In Proceedings of the 3rd ACM SIGCOMM Workshop on ICN, Hong Kong, 2013.
[10] C. Yi, A. Afanasyev, I. Moiseenko, L. Wang, B. Zhang, and L. Zhang. A case for stateful forwarding plane. Computer Communications, 36(7):779–791, April 2013.

