
Beneficial Limitations: Rate caps for Enhanced Branched Video Streaming Experience






Linköping University | Department of Computer and Information Science

Bachelor’s thesis, 16 ECTS | Computer Science

2019 | LIU-IDA/LITH-EX-G--19/043--SE

Beneficial Limitations: Rate caps for Enhanced Branched Video Streaming Experience

Fördelaktiga begränsningar: Hastighetsbegränsningar för förbättrad branched video upplevelse

Kristoffer Sandberg

Måns Fredriksson Franzén

Supervisor: Niklas Carlsson
Examiner: Marcus Bendtsen


Upphovsrätt

Detta dokument hålls tillgängligt på Internet - eller dess framtida ersättare - under 25 år från publiceringsdatum under förutsättning att inga extraordinära omständigheter uppstår.

Tillgång till dokumentet innebär tillstånd för var och en att läsa, ladda ner, skriva ut enstaka kopior för enskilt bruk och att använda det oförändrat för ickekommersiell forskning och för undervisning. Överföring av upphovsrätten vid en senare tidpunkt kan inte upphäva detta tillstånd. All annan användning av dokumentet kräver upphovsmannens medgivande. För att garantera äktheten, säkerheten och tillgängligheten finns lösningar av teknisk och administrativ art.

Upphovsmannens ideella rätt innefattar rätt att bli nämnd som upphovsman i den omfattning som god sed kräver vid användning av dokumentet på ovan beskrivna sätt samt skydd mot att dokumentet ändras eller presenteras i sådan form eller i sådant sammanhang som är kränkande för upphovsmannens litterära eller konstnärliga anseende eller egenart.

För ytterligare information om Linköping University Electronic Press se förlagets hemsida http://www.ep.liu.se/.

Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a period of 25 years starting from the date of publication barring exceptional circumstances.

The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/hers own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© Kristoffer Sandberg, Måns Fredriksson Franzén


Students in the five-year Information Technology program complete a semester-long software development project during their sixth semester (third year). The project is completed in mid-sized groups, and the students implement a mobile application intended to be used in a multi-actor setting, currently a search and rescue scenario. In parallel they study several topics relevant to the technical and ethical considerations in the project. The project culminates in the demonstration of a working product and a written report documenting the results of the practical development process, including requirements elicitation. During the final stage of the semester, students form small groups and specialise in one topic, resulting in a bachelor thesis. The current report presents the results obtained during this specialisation work. Hence, the thesis should be viewed as part of a larger body of work required to pass the semester, including the conditions and requirements for a bachelor thesis.


Abstract

The demand for on-demand video streaming has increased enormously, and such streaming is today the main contributor to Internet traffic. Technological developments combined with the accessibility of sufficiently powerful end-user hardware, large bandwidth capacities, and significantly reduced storage costs are major contributors to this trend. We have built a simulation environment in which multiple clients stream linear and branched video while competing over a shared bottleneck network. We examine how rate caps can be implemented to increase the overall Quality of Experience (QoE). First, we present simulation results demonstrating the impact that rate caps have on clients playing linear video, and compare and relate the results to prior work. Second, we simulate an implementation of branched video and consider how its performance is affected by rate caps. Here, we highlight and discuss the trade-off patterns between playback quality and stability observed when a cap is implemented. To derive our conclusions we consider a range of scenarios, varying different parameters with and without a rate cap, and measure (i) the requested encodings, (ii) the buffer occupancy, and (iii) the number of switches between encodings made by the clients during the playback sequence. The rate cap implementation is shown to generate fewer switches between encodings, providing enhanced stability and thus contributing to a better QoE in both the linear and the branched environment.


Acknowledgments

We would like to thank our supervisor Niklas Carlsson for his guidance and support during the project. We would also like to thank Martin Lindblom, Mimmi Cromsjö, Martin Chris-tensson and Oscar Järpehult for providing useful feedback.


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables
1 Introduction
  1.1 Motivation
  1.2 Aim
  1.3 Research questions
  1.4 Contributions
2 Background
  2.1 HTTP Adaptive Streaming (HAS)
  2.2 Quality of Experience
  2.3 Instability and unfairness
  2.4 Overestimations and available bandwidth
    2.4.1 ON-OFF periods
    2.4.2 Fair share estimation
    2.4.3 Network conditions
  2.5 Fixed rate cap
  2.6 Branched video
  2.7 HTTP/2 over QUIC
  2.8 Related work
3 Simulation Design
  3.1 Simulation setup
    3.1.1 With linear video
    3.1.2 With branched video
  3.2 Bandwidth and chunk-size
  3.3 Implemented caps
  3.4 Limitations
4 Simulation Result
  4.1 Linear video
    4.1.1 Overestimation and bandwidth variation
    4.1.2 Requested encoding
    4.1.3 Buffer occupancy
    4.1.4 Switches between encodings
    4.1.5 Client link bandwidth estimation
  4.2 Branched video
    4.2.1 Buffer occupancy
    4.2.2 Requested encoding
    4.2.3 Switches between encodings
    4.2.4 ON-OFF periods
    4.2.5 Path buffer thresholds
    4.2.6 Competing clients
    4.2.7 Branch point frequency
    4.2.8 Chunk sizes
    4.2.9 Summarized table of results
5 Discussion
  5.1 Linear video
  5.2 Branched video
    5.2.1 Advantages with cap implemented
    5.2.2 ON-OFF periods and network conditions
    5.2.3 Fair share estimation
    5.2.4 Bandwidth competing
    5.2.5 Buffer stability
  5.3 Fixed cap sweet spot
  5.4 Quality and stability tradeoff
  5.5 The work in a wider context
6 Conclusion
  6.1 Further work


List of Figures

2.1 HTTP Adaptive Streaming

2.2 Branched Video

4.1 Client 1's requested encoding, available bandwidth and estimated bandwidth, default linear scenario

4.2 Requested encoding, default linear scenario

4.3 Buffer size, default linear scenario

4.4 Number of switches from six simulations, linear scenario. Bars show average values, top whiskers the max values, bottom whiskers the min values.

4.5 Instantaneous bandwidth seen at client, default linear scenario

4.6 Buffer size, default branched video scenario

4.7 Requested encoding, default branched video scenario

4.8 Impact of alternative rate cap limits, branched video scenario

4.9 Another comparison of different rate cap limits, branched video scenario

4.10 Number of switches from six simulations, branched video scenario. Bars show average values, top whiskers the max values, bottom whiskers the min values.

4.11 Average requested encoding from six simulations, branched video scenario

4.12 Percentage of each client's time in OFF-period and times entering an OFF-period, default branched video scenario

4.13 Buffer size when the branch path buffer has a buffer threshold set to Tmax = 4 s

4.14 Number of switches from six simulations with two clients, branched video scenario. Lines show average values, top whiskers the max values, bottom whiskers the min values.

4.15 Number of switches from six simulations with six clients, branched video scenario. Lines show average values, top whiskers the max values, bottom whiskers the min values.

4.16 Number of switches from six simulations with different time between branch points. Bars show average values, top whiskers the max values, bottom whiskers the min values.


List of Tables


1 Introduction

1.1 Motivation

The most common family of protocols used for streaming is HTTP Adaptive Streaming (HAS). By adapting the quality of the video stream to the client's conditions, these protocols effectively use the available infrastructure in modern networks and improve the user's quality of experience (QoE). New designs and solutions have been developed to keep up with the pace of ever-increasing video traffic. The main reason this technology is widely deployed is its ability to adapt playback quality to the bandwidth conditions [4][16]. HAS-based solutions typically use the Transmission Control Protocol (TCP) rather than the User Datagram Protocol (UDP). That said, Google and Akamai have recently worked on the implementation of Quick UDP Internet Connections (QUIC). Motivated by QUIC's use of a single connection, and to evaluate users fairly, our simulations assume that each client obtains all its downloaded data over a single connection.

Even with this well-performing service, issues arise when one or more adaptive streaming players compete for bandwidth over a shared network bottleneck. When multiple clients attempt to adjust to an unsustainable bit rate level, the result can be significant variability in the available bandwidth, unwanted stalling, and instability. Another scenario contributing to a sub-optimal service and under-utilization of bandwidth is when the streaming service ignores client context. Earlier studies have concluded that such scenarios can lead to under-utilization of bandwidth, performance problems, and an increasing number of switches between encodings. To avoid bandwidth waste, stalls, and playback instability, Akhshabi et al. [16] suggest implementing data rate caps for clients. One way of doing this is to limit the maximum bandwidth of each video stream and adjust the maximum quality and bit rate requested by the client. An individual rate cap is a mechanism that effectively implements this solution [12]. Since delay and stalling are the factors that influence QoE the most [7][8][13], it is worth investigating how rate caps can contribute to an enhanced video experience.

A trend developing in parallel with the demand for streaming services is personalization on the Web [14]. As the Web becomes a more natural element of daily activities, users increasingly want to adjust applications to their individual preferences. From our observations of these trends, we conclude that Web personalization and streaming eventually will interlace. A fair prediction is that the user wants to choose a customized video playback sequence based on individual preferences, and content providers have started to look into the possibility of meeting this demand. To date, the traditional way of consuming streaming media is largely limited to linear playback order. Audio and video streams are examples of media objects that traditionally are linear: the objects are arranged in a strict linear order to ensure a strict playback series. For example, a typical video consists of a sequence of frames arranged along a time dimension, ensuring that all users consuming this media receive the information in the same order. The technology of tomorrow, however, suggests another possibility: media streaming in a non-linear order, also referred to as interactive branched video (or "multi-path" video). With this implementation, content providers can customize the experience through one or more interactions from the users that affect their playback experience. These interactions allow the user to choose different traversals of the plot's outcome, possibly leading to increased QoE if HAS is implemented well [10].

1.2 Aim

Given the observed trends, an interesting question is how these fields interact with each other. Our goal is to investigate how rate caps can be implemented and how they affect HAS performance for branched video with multiple clients sharing a bottleneck network link.

In this thesis, we focus on observations made from simulations. Our approach is to divide the problem into smaller parts by running multiple simulations with different varying parameters, such as chunk sizes and available bandwidth. Thereafter we assemble the results and put them in a larger context. The initial simulation is made with the intention of achieving results similar to prior work, in order to validate the simulator's reliability before advancing with new simulations. Through our simulations we aim to answer research questions that contribute to understanding whether the implementation of rate caps could contribute to a better QoE. Furthermore, the simulations are made by calculating each client's buffer size, encoding, and individual rate cap. To strengthen our understanding of how rate caps directly impact simple linear HAS for multiple video streaming clients, we start out by simulating this scenario.

1.3 Research questions

Through our simulations we aim to answer the following research questions:

• How is the buffer size affected (i) without a rate cap and (ii) with a rate cap?

• How is the requested encoding affected (i) without a rate cap and (ii) with a rate cap?

• How do rate caps affect the QoE for each client in (i) a linear video and (ii) a branched video?

• Is the implementation of rate caps in HAS for branched video, with multiple clients sharing the same network, a beneficial solution to increase each individual's total QoE?

1.4 Contributions

We present clear indications that rate caps in a linear environment with multiple clients have a strong positive effect on QoE and directly prevent previously described issues such as fluctuating playback quality. Building on these insights, we extend the simulation with branched video for the clients and analyze how rate caps affect the results in such an environment.



The contribution of this thesis is the analysis of how rate caps are beneficial for the overall QoE when multiple clients stream branched video. We present a series of simulations with different parameters and illustrate the results graphically, with descriptions for easy following.


2 Background

With streaming services contributing the majority of all Internet traffic, some of the related technology needs to be investigated further.

2.1 HTTP Adaptive Streaming (HAS)

HTTP adaptive streaming (HAS) is a dynamic application handling simultaneous download and concurrent playback of content. The video data is transmitted via HTTP, and the client stores it in an application buffer. When a sufficient amount of data has been buffered, the client starts to play the data from the buffer while the video continues to be transmitted over TCP. Figure 2.1 illustrates the data transmission within a HAS player.

A prerequisite for HAS is that the video is available in multiple bit rates, i.e., in several different playback representations of the quality. These bit rates are split into small chunks that each represent a few seconds of playback time. The client makes calculations based on its current bandwidth and buffer status, and with this information requests the next chunk at an appropriate bit rate. With this dynamic procedure, the bit rate is adapted so that empty buffers, which would interrupt playback, are avoided and stalling is effectively prevented. This method contributes to a better-utilized playback experience [7][13]. However, some issues arise in real-world scenarios: latency and instantaneous throughput both contribute to fluctuating performance. These disturbing elements typically appear in networks where control of the network infrastructure is limited and not supervised end to end. In that case, the network adopts a best-effort approach and good performance cannot always be guaranteed [13].

2.2 Quality of Experience

Figure 2.1: HTTP Adaptive Streaming

Quality of Service (QoS) is a term that describes how well a service operates, consisting of parameters such as packet loss, delay, and jitter. These parameters are considered to have a direct impact on Quality of Experience (QoE). QoE is a measurement of the overall customer experience of service performance. QoE is highly subjective, but some aspects are commonly considered to directly deteriorate the user's experience. The measurements are done on the end-to-end performance of a specific streaming service, with focus on the consumer's perspective of the overall experience. With the tremendous demand for streaming content, network traffic has become a crucial factor, since streaming relies heavily on network performance. However, as Skorin-Kapov [17] suggests, QoE-driven applications have mainly addressed and adapted to the end-user perspective. One such mechanism is the HAS player's ability to dynamically adapt to varying network conditions, which contributes to maintaining a high level of QoE. In contrast to such end-user-focused services, network providers also implement mechanisms for enhanced QoE through traffic monitoring solutions that retrieve insights; these are called QoE-driven network management mechanisms. QoE modeling, monitoring, and control in wireless networks have been widely investigated over the last couple of years. An exhaustive comparison of previous studies is given by Barakovic and Skorin-Kapov [3], who place particular focus on these insights.

Within streaming experiences, the most relevant part of QoE concerns systematic influencing factors. These factors can be categorized as content-related or media-related, such as encoding performance and resolution. The most important QoE metrics while streaming video content, regardless of content genre, are the buffering and the bit rate. With these insights, QoE is a key element when designing systems, and many different solutions have been proposed to improve the overall QoE. Solutions such as pro-active bit rate selection, rate switching, and different buffering techniques have been concluded to be tools that can counteract the issues [6].

QoE is subjective, but some factors have been shown to have a significant effect on it. These can be represented by objective metrics, such as visual quality (e.g., expressed as a bit rate), stalls, quality switching, and startup delay [12].

2.3 Instability and unfairness

Instability occurs when a player frequently switches between different encodings, creating oscillations.

Unfairness occurs when different clients share the same bottleneck bandwidth and the bandwidth is divided unevenly between them. If the bottleneck bandwidth is C and it is shared evenly by two clients, the fair share is C/2. If one client gets more than the fair share and the other less, unfairness has occurred.

2.4 Overestimations and available bandwidth

In this thesis, we examine how instability and unfairness occur when HAS players share the same bottleneck bandwidth. The main factors affecting stability and available bandwidth are ON-OFF periods and network conditions. How ON-OFF periods, overestimations, and network conditions impact stability and unfairness is explained in the following subsections.



2.4.1 ON-OFF periods

Next follows a short explanation of ON-OFF periods and how they cause instability.

A HAS player mainly operates in one of two states. In the Buffer-state, the player requests a new chunk as soon as the previous chunk has been downloaded. In the Steady-state, the player requests new chunks periodically in order to maintain a constant playback buffer size. The periodic requests in Steady-state lead to ON-OFF activity: an ON-period is when a chunk is downloading, and an OFF-period is when the player stays idle. The root problem is that a player cannot estimate the available bandwidth during OFF-periods, since it does not transfer any data during this time [1][2].

2.4.2 Fair share estimation

It is common that clients streaming with HAS compete over a shared bottleneck link. Such competition can arise when several users in the same house stream multiple videos at the same time. In such a scenario, instability and oscillations between different video qualities can appear and affect the QoE. The oscillation stems from the different HAS players overestimating the available bandwidth due to each client's ON-OFF periods [2]. Consider an example with two competing players X and Y sharing a bottleneck bandwidth C. If X and Y are both in their ON-periods, each will estimate the bandwidth as C/2, which is the fair share for each player. If only X is in its ON-period, X will instead overestimate the available bandwidth as C, twice the fair share. X may then switch to a higher encoding, since the estimated bandwidth is C. When both X and Y are again in their ON-periods, the estimated bandwidth is once more C/2, the fair share, and X will switch back to a lower encoding. This creates oscillations, since the bandwidth cannot be correctly observed [1][2].
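The example with players X and Y can be reproduced as a toy calculation. The following is a minimal Java sketch of the reasoning (the class and method names, and the simple estimation rule, are our own illustrative assumptions, not part of the thesis simulator):

```java
// Toy illustration of ON-OFF overestimation between two competing players.
public class OverestimationDemo {
    // A player estimates its bandwidth as the link capacity divided by the
    // number of players it observes downloading at the same time.
    static double estimate(double linkCapacity, int concurrentDownloaders) {
        return linkCapacity / concurrentDownloaders;
    }

    public static void main(String[] args) {
        double c = 6000.0; // shared bottleneck capacity in kbit/s

        // Both X and Y in their ON-periods: each sees the fair share C/2.
        System.out.println(estimate(c, 2)); // 3000.0

        // Y idles in an OFF-period while X downloads: X sees the whole link
        // and overestimates its share as C, twice the fair share.
        System.out.println(estimate(c, 1)); // 6000.0
    }
}
```

When the overestimate triggers a switch to a higher encoding that the fair share cannot sustain, the player is later forced back down, producing the oscillation described above.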

2.4.3 Network conditions

The available bandwidth depends heavily on the current network conditions. Network conditions can vary if a client changes location, e.g., a person walking around using WiFi or riding in a car using a mobile network. They can also vary depending on which connection is used. In this thesis, the simulations use bandwidth trace files that generate data corresponding to real network conditions.

2.5 Fixed rate cap

When using a fixed rate cap, each player competing over the same bottleneck link is assigned a fixed amount of the available link bandwidth. This rate cap limits the maximum bandwidth each player can use. Instead of the fair-share solution, where each client may overestimate the fair share of C/#clients, each client gets a rate cap. The cap can be set at the fair share or at a fixed value regardless of the bottleneck bandwidth. This is implemented in the simulation and enables further analysis of what impact such a limitation can have on stability and unfairness.
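The capping rule can be sketched in a few lines of Java (the method names are our own illustrative assumptions; only the 6000 kbit/s bottleneck, four clients, and 1500 kbit/s cap come from the thesis setup):

```java
// Sketch of a fixed per-client rate cap. With a cap in place, an ON-OFF
// overestimate can never raise the usable bandwidth above the cap.
public class RateCap {
    // Fair-share cap: bottleneck capacity divided by the number of clients.
    static double fairShareCap(double linkCapacity, int numClients) {
        return linkCapacity / numClients;
    }

    // The bandwidth a capped client may act on is clipped at its cap.
    static double usableBandwidth(double estimated, double cap) {
        return Math.min(estimated, cap);
    }

    public static void main(String[] args) {
        double cap = fairShareCap(6000.0, 4); // 1500 kbit/s, the default cap in Section 3.3
        System.out.println(usableBandwidth(6000.0, cap)); // overestimate clipped to 1500.0
        System.out.println(usableBandwidth(900.0, cap));  // estimates below the cap pass through: 900.0
    }
}
```

The point of the clip is that even a gross overestimate made during a competitor's OFF-period cannot push a capped client toward an unsustainable encoding.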

2.6 Branched video

Media streaming is traditionally limited to linear playback. This method is based on media objects, such as video, that consist of data sets arranged in a strict order: video frames in a linear sequence arranged along a dimension of time. All clients that request these files receive them in the same encoding structure. Previous work suggests that generalizing this structure to a tree or graph format enables sequences to run in parallel. As shown in Figure 2.2, chunks of data are encoded along a branch path until a branch point is reached, at which point the client has made a choice of the next branch. When the branch is applied, its specific chunks are downloaded, and the process repeats for the entire playback sequence. This allows clients to adapt to a specific selection of frames during their playback experience. Applied to streaming media, this technique extends the possible total experience by providing multi-path playback opportunities for a plot [18].

Figure 2.2: Branched Video
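The tree generalization can be represented by a small data structure. The following Java sketch is our own illustration of the idea (the class and field names are assumptions, not the thesis implementation):

```java
import java.util.List;

// Minimal sketch of branched video as a tree of playback segments.
public class BranchedVideo {
    // A stretch of linear playback that ends either at a branch point
    // (several possible next paths) or at the end of the video.
    record Segment(String name, int durationSeconds, List<Segment> nextPaths) {
        boolean isBranchPoint() {
            return nextPaths.size() > 1;
        }
    }

    public static void main(String[] args) {
        // One branch point where the viewer chooses between two endings.
        Segment endingA = new Segment("ending-A", 120, List.of());
        Segment endingB = new Segment("ending-B", 100, List.of());
        Segment intro   = new Segment("intro", 80, List.of(endingA, endingB));

        System.out.println(intro.isBranchPoint()); // true: the viewer must choose
    }
}
```

Linear video is the degenerate case in which every segment has exactly one successor, so a traversal of the tree always visits the same frames in the same order.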

2.7 HTTP/2 over QUIC

Quick UDP Internet Connections (QUIC) is a fairly new protocol designed to enhance HTTPS performance. In some fields it has been preferred over its predecessor, the more traditional HTTPS stack of HTTP/2, TLS, and TCP. One main disadvantage of TCP is that it lacks visibility when requests are made in parallel, which limits the performance gains [5]. The transport is encrypted, which strengthens security and prevents unwanted modifications as the traffic traverses networks. QUIC integrates a cryptographic function into the handshake sequence; by removing operations that could lead to redundant handshake sequences, and by remembering server credentials on repeat connections, latency can be minimized for most connections. To eliminate head-of-line delays that can lead to blocking, QUIC uses an abstract data structure called streams, which operate by multiplexing within single connections [4].

2.8 Related work

Why competing HAS players have a negative impact on stability has been investigated by Akhshabi et al. [2]. Their research examined how factors such as ON-OFF durations, the available bandwidth and its relation to the available bit rates, and the number of competing players affect stability. Their conclusion was that the root cause of instability is the behavior of a HAS player in the Steady-State phase, where periods of activity (ON-periods) are followed by periods of inactivity (OFF-periods).



To reduce instability, Akhshabi et al. [1] propose a server-based method that can reduce oscillations due to ON-OFF activity. Their solution is activated when oscillations are detected and makes adjustments so that a player can request the highest available encoding while maintaining stability.

Other work, by Krishnamoorthi et al. [10], proposes the idea of using HAS together with branched video. With HAS, the QoE can be increased since HAS makes it possible to select a suitable encoding. That paper made a basic proof-of-concept implementation; an extension of the study was later made by Krishnamoorthi et al. [11], addressing when and how chunks should be downloaded.

Studies of competing HAS players with cap-based client networking have been done by Krishnamoorthi et al. [12]. This study concluded that a fixed rate cap on multiple competing HAS players yields many positive effects: better continuity of playback quality, better buffer stability, and data savings. But there were also some drawbacks, including slower buffer fill, slower startup, and slower stall recovery.

They also introduced and evaluated a framework to mitigate these drawbacks. However, that framework is not implemented in this thesis.


3 Simulation Design

To our knowledge, no studies concerning fixed rate caps with competing HAS players and branched video have been done before. Therefore, we first run the simulation without branched video, i.e., with linear streaming only, and compare the results against those of Krishnamoorthi et al. [12] to validate them. With these validated results, we can conclude that our simulation works satisfactorily and produces reasonable results, before we advance with the implementation of fixed client rate caps for branched video in a competitive environment.

3.1 Simulation setup

For the simulation, we developed a simulator written in Java. The simulator captures each client's buffer size and encoding. In order to validate our results as easily as possible, we chose the same parameters as Krishnamoorthi et al. [12]. By default, four competing clients share a 6000 kbit/s bottleneck, and the clients' start times are staggered by 10 seconds. To estimate the available throughput, each client uses an Exponentially Weighted Moving Average (EWMA) with α = 0.4. The client then selects the highest available encoding below 80 % of this estimate. The available encodings are 144 kbit/s, 268 kbit/s, 625 kbit/s, 1124 kbit/s, and 2217 kbit/s. The fair-share bandwidth is calculated each time a client makes a new request.
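The estimation and selection rules can be sketched as follows. The exact EWMA expression is not spelled out here, so the sketch assumes the standard form newEstimate = α·sample + (1 − α)·oldEstimate; the class and method names are likewise our own assumptions:

```java
// Sketch of throughput estimation (EWMA, alpha = 0.4) and encoding selection
// (highest encoding below 80 % of the estimate), using the parameters above.
public class EncodingSelector {
    static final double ALPHA = 0.4;
    static final int[] ENCODINGS = {144, 268, 625, 1124, 2217}; // kbit/s

    // Assumed standard EWMA form; the simulator's exact expression may differ.
    static double ewma(double previousEstimate, double measuredThroughput) {
        return ALPHA * measuredThroughput + (1 - ALPHA) * previousEstimate;
    }

    // Highest available encoding below 80 % of the estimated throughput,
    // falling back to the lowest encoding when none fits.
    static int selectEncoding(double estimatedThroughput) {
        double budget = 0.8 * estimatedThroughput;
        int chosen = ENCODINGS[0];
        for (int rate : ENCODINGS) {
            if (rate < budget) chosen = rate;
        }
        return chosen;
    }

    public static void main(String[] args) {
        System.out.println(selectEncoding(1500.0)); // budget 1200 -> 1124 kbit/s
        System.out.println(selectEncoding(6000.0)); // budget 4800 -> 2217 kbit/s
    }
}
```

Note that with the default fair-share cap of 1500 kbit/s (Section 3.3), the 80 % budget of 1200 kbit/s makes 1124 kbit/s the highest reachable encoding.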

For each simulation we set the main buffer thresholds Tmin and Tmax. A client will not start playing the video until the main buffer for the first time holds at least Tmin seconds of playback time. Once the buffer reaches Tmax, the client will not request a new chunk until the playback buffer size is below Tmin (the client is in Steady-State).
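These threshold rules can be expressed as a small state sketch in Java (our own illustration; the two boolean flags and method names are assumptions, while the thresholds and their roles come from the description above):

```java
// Sketch of main-buffer gating with Tmin = 30 s and Tmax = 40 s.
public class BufferGate {
    static final double T_MIN = 30.0; // seconds of playback time
    static final double T_MAX = 40.0;

    boolean started = false;  // playback begins once the buffer first reaches T_MIN
    boolean draining = false; // reached T_MAX; wait until the buffer drops below T_MIN

    boolean mayStartPlayback(double bufferSeconds) {
        if (bufferSeconds >= T_MIN) started = true;
        return started;
    }

    boolean shouldRequestChunk(double bufferSeconds) {
        if (bufferSeconds >= T_MAX) draining = true;      // stop requesting new chunks
        else if (bufferSeconds < T_MIN) draining = false; // resume requesting (Steady-State cycle)
        return !draining;
    }

    public static void main(String[] args) {
        BufferGate gate = new BufferGate();
        System.out.println(gate.mayStartPlayback(10.0));   // false: below T_MIN
        System.out.println(gate.mayStartPlayback(31.0));   // true: playback starts
        System.out.println(gate.shouldRequestChunk(40.0)); // false: reached T_MAX
        System.out.println(gate.shouldRequestChunk(25.0)); // true: dropped below T_MIN
    }
}
```

The gap between Tmin and Tmax gives the client an ON-OFF duty cycle: requests pause at the upper threshold and resume at the lower one.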

The reason for using a main buffer is that once we implement branched video, the simulation has extra buffers for each possible path. The path buffer thresholds behave differently from the main buffer thresholds, as explained in Section 3.1.2.

Even if the simulation in theory partially implements a TCP protocol, it does not capture all the features of TCP. For example, every time a client has been in an OFF-period and a new connection is established, real TCP would initiate TCP slow start. This is one of the features our simulations do not include, and it therefore contributes to results that differ from a real TCP connection scenario.



3.1.1 With linear video

Our simulation is an extension of the simulation by Krishnamoorthi et al. [12]. The simulations are not completely identical but use the same parameters, such as encodings, buffer thresholds, and encoding estimation. We compare our results with the results from their work to make sure our simulations produce similar results.

The main buffer thresholds are set to Tmin = 30 s and Tmax = 40 s.

3.1.2 With branched video

The simulation where branched video is implemented is an extension of the simulation with linear video. The main buffer thresholds are set as Tmin= 30 s and Tmax = 40 s.

Each branch path has its own buffer and each buffer has a threshold set to Tmax = 15 s.

Tminis redundant for branch paths since it never will be used and therefore set to zero. Once

a path buffer has a size above Tmax the clients will not request new chunks for that specific

path. The Tmax is implemented to make sure no stalls occur and that no excessive buffering

occurs in the case that the path is the non-selected option by the user. When a branch point is about to be reached and the client has downloaded all the chunks until the branch point the client will start to download chunks for each path after the branch point. Once the branch point is reached, the buffer from the chosen path will be added to the main buffer.

In the default settings, there are a total of three branch points: the first at time t = 80 s, the second at t = 180 s, and the third at t = 300 s. Every branch point is followed by three different paths. A client uses QUIC, and therefore only one connection is set up; the same connection is used to download chunks for all three paths.
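The buffering rules above can be sketched as follows. This is a minimal illustration with names of our own choosing (PathBuffer, PATH_TMAX, merge_chosen_path); the simulator's actual code may differ.

```python
# Sketch of the branched-video buffering rules described above.
# All names here are our own; this is not the simulator's actual code.

MAIN_TMIN, MAIN_TMAX = 30, 40   # main buffer thresholds (seconds)
PATH_TMAX = 15                  # per-path threshold (Tmin is unused for paths)

class PathBuffer:
    def __init__(self):
        self.seconds = 0.0      # buffered playback time for this path

    def wants_chunk(self):
        # A path stops requesting chunks once it holds PATH_TMAX seconds.
        return self.seconds < PATH_TMAX

def merge_chosen_path(main_buffer_s, paths, chosen):
    # At a branch point, the chosen path's buffer is added to the main
    # buffer; data downloaded for the other paths is discarded.
    return main_buffer_s + paths[chosen].seconds
```

The per-path Tmax bounds how much bandwidth can be wasted on paths the user never selects, which is the design rationale stated above.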

3.2 Bandwidth and chunk-size

The bandwidth available for each client is based on traces from Riiser et al. [15]. The traces are not taken from situations where clients compete over the same link bandwidth, and the maximum available bandwidth is equal to the minimum of each client's individual cap. Several simulations have been run with different bandwidth traces for each client. Not all chunks are the same size, even if they have the same playback time and encoding. Chunk sizes are obtained as chunk-size sequences from YouTube videos, with one sequence for each specific encoding, and these sequences are used in the simulation. Each chunk has a playback time of four seconds. The traces and chunk sizes are the same as in the simulation by Krishnamoorthi et al. [12].
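As an illustration of how such traces could drive the simulation, the sketch below computes the time to fetch one chunk against a per-second bandwidth trace. The one-second trace granularity, the trace wrap-around, and the function name are assumptions of ours, not details taken from the simulator.

```python
# Illustrative sketch: downloading one chunk against a time-varying
# bandwidth trace. One-second granularity and trace wrap-around are
# our own assumptions; rates are assumed positive.

CHUNK_PLAYBACK_S = 4  # every chunk holds four seconds of video

def download_time(chunk_kbit, trace_kbps, start_s):
    """Seconds needed to fetch `chunk_kbit` when the available rate
    during second t is trace_kbps[t] (trace repeats if exhausted)."""
    remaining, t = float(chunk_kbit), float(start_s)
    while remaining > 0:
        rate = trace_kbps[int(t) % len(trace_kbps)]  # kbit per second
        step = min(1.0, remaining / rate)            # time spent this second
        remaining -= rate * step
        t += step
    return t - start_s
```

For example, a 1500-kbit chunk against a trace that drops from 1000 to 500 kbit/s takes two seconds, which is how time-varying traces translate into uneven chunk arrival times.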

3.3 Implemented caps

In the default settings, four clients share a 6000 kbit/s link bottleneck. When fixed caps are applied, the default limit for each client is 1500 kbit/s, which is the fair share cap. The simulations are also run without fixed caps so that results can be compared easily. Each client's available bandwidth is limited by its individual bandwidth from the traces. The sum of all clients' available bandwidth never exceeds the total link bandwidth of 6000 kbit/s.
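The limits described above compose as a simple minimum. The sketch below is our own simplification of how a client's effective rate could be derived; the simulator's exact order of limits may differ.

```python
# Our simplified composition of the rate limits described above:
# trace bandwidth, equal fair share of the link, and the fixed cap.

LINK_KBPS = 6000
FIXED_CAP_KBPS = 1500  # fair share for four clients

def effective_rate(trace_kbps, n_active, cap_kbps=FIXED_CAP_KBPS):
    fair_share = LINK_KBPS / n_active   # equal split among active clients
    return min(trace_kbps, fair_share, cap_kbps)
```

Note that the cap binds even when a client is alone on the link: with one active client the fair share is 6000 kbit/s, but the effective rate is still at most 1500 kbit/s.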

3.4 Limitations

We always assume a shared bottleneck where the bandwidth is fairly shared by active clients. In practice, some clients may be bottlenecked by other factors in the network.

The bandwidth traces are taken from 3G cellular networks. We would also have liked to run the simulation with other connection types, such as WiFi and Ethernet, but did not have traces for these options. Since the bandwidth traces are taken from a 3G cellular network with a moving client, the traces have more variation than if the client had been stationary and using WiFi or an Ethernet connection. This can also affect the oscillation in the estimation of encoding for each client. To avoid too much variation in bandwidth, and given that we use four trace files for a single simulation, some results are presented from only six simulations.

Our simulation does not capture the handshake sequence, TCP slow start, or the default congestion control algorithms used by TCP and QUIC. Instead, we assume fair sharing of the bottleneck bandwidth among active clients. Furthermore, similar to QUIC, we assume that each client downloads all chunks over a single connection. These simplifications result in somewhat higher fairness between competing clients than would be achieved in a real network.

Another aspect is that we could have used open-source media player frameworks such as OSM or dash.js to draw more thorough conclusions. Due to the limited time for this study, we chose to create a simulation instead.

Even if our simulation behaves like a real scenario of competing video streams, we cannot completely guarantee that our results are fully accurate. However, the results are sufficiently in line with previous work and are therefore judged reliable to some degree. With this observation, proceeding with the work is motivated.


4 Simulation Result

Initially, the results from simulations with linear video are presented, followed by the simulations where branched video is implemented.

4.1 Linear video

Several simulations have been run with different bandwidths for each client. However, some of the results that follow include only one of the simulations. This is because the results from all simulations show the same pattern, and we can therefore draw the same conclusions from them as from the one presented. Even though we use the same traces as Krishnamoorthi et al. [12], we compare our simulation results with their experimental results and not with the results from their simulation; therefore, some differences can also be observed.

4.1.1 Overestimation and bandwidth variation

Before the results for each client are presented, we start off by illustrating only the requested encoding for Client 1, extracted along with Client 1's available bandwidth and estimated bandwidth. This illustrates factors such as the network conditions mentioned in Section 2.4.

Figure 4.1 shows Client 1's requested encoding, available bandwidth and estimated bandwidth without a cap (a) and with a fixed cap (b). By illustrating all these parameters together graphically, it becomes easy to identify how an implemented cap impacts the results.

By observing the results between time t = 140 s and t = 170 s in Figure 4.1 (a), we can conclude that the client has overestimated the available bandwidth after an OFF-period. Before entering the OFF-period at time t = 130 s, the available bandwidth is around 5000 kbit/s. This is because the other clients are in an OFF-period, and Client 1 overestimates the fair share by 3500 kbit/s. In other words, by overestimating the fair share, the client requests a too optimistic encoding at 2217 kbit/s.

The reason for the overestimation is the client's inability to estimate the fair share and to make a correct estimation of available bandwidth during an OFF-period. These factors combined create instability. The instability would not appear as clearly if the network conditions had been more stable. Together, these three main factors indicate how instability can appear for a client. A similar scenario can be observed between time t = 215 s and t = 225 s. Each of these factors can also individually trigger switches between encodings, for example at time t = 300 s when the client requests an encoding at 144 kbit/s due to poor network conditions.
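The mechanism can be made concrete with a small sketch using the numbers above. The estimator here (throughput of the most recent download) and the 20000-kbit chunk size are our own illustrative simplifications, not the simulator's exact estimator.

```python
# Simplified estimator (our own assumption): a client judges its share
# from the throughput of its most recent chunk download.

def estimate_from_last_download(chunk_kbit, download_s):
    return chunk_kbit / download_s  # observed throughput in kbit/s

# While the other three clients sit in OFF-periods, downloads run at the
# rates seen in Figure 4.1 (a): an illustrative 20000-kbit chunk arriving
# in 4 s looks like 5000 kbit/s, although the fair share with all four
# clients ON is 6000 / 4 = 1500 kbit/s.
observed = estimate_from_last_download(20000, 4)
overestimate = observed - 6000 / 4
```

When the player resumes after an OFF-period with such a stale estimate, it selects the too-optimistic 2217 kbit/s encoding discussed above.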

Figure 4.1 (b) shows the results with the same parameters except for the additional implementation of a fixed cap at 1500 kbit/s. At time t = 300 s it is still possible to see how poor network conditions affect the requested encoding. Even though the client at one point requests an encoding at 1124 kbit/s, the client's requests during the simulation are more stable. With a fixed cap at 1500 kbit/s, the fair share is never overestimated. This results in less variation in the available and estimated bandwidth, which in turn leads to less overestimation after an OFF-period. Even when overestimations after an OFF-period do occur, they do not create the same instability as without a cap.

(a) Without cap (b) With fixed cap at 1500 kbit/s

Figure 4.1: Client 1's requested encoding, available bandwidth and estimated bandwidth, default linear scenario
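The encoding levels that recur in these results (144, 268, 625, 1124 and 2217 kbit/s) suggest the following sketch of rate-based selection over the ladder. The actual player logic may apply safety margins, so treat this as illustrative only.

```python
# Bitrate ladder observed in our results; the selection rule below is an
# illustrative simplification, not the player's exact algorithm.
ENCODINGS_KBPS = [144, 268, 625, 1124, 2217]

def select_encoding(estimated_kbps):
    # Pick the highest encoding not exceeding the bandwidth estimate,
    # falling back to the lowest level under very poor conditions.
    candidates = [e for e in ENCODINGS_KBPS if e <= estimated_kbps]
    return max(candidates) if candidates else ENCODINGS_KBPS[0]
```

With a stale 5000 kbit/s estimate this rule picks 2217 kbit/s, while a 1500 kbit/s cap bounds the choice to 1124 kbit/s, matching the behavior in Figure 4.1.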

4.1.2 Requested encoding

Figure 4.2 (a) shows the requested encodings without a cap and Figure 4.2 (b) shows the results when a fixed cap at 1500 kbit/s is added. By observing the graphs, we can see that a more uniform level of encoding requests occurs in Figure 4.2 (b). The results can be explained

(a) Without cap (b) With fixed cap at 1500 kbit/s


the same way as in Section 4.1.1. Some instability does occur with the fixed cap (Figure 4.2 (b)), but compared to Figure 4.2 (a) the behavior is more stable.

4.1.3 Buffer occupancy

Figure 4.3 (a) and (b) show the buffer size over time in a setup where multiple clients compete. In Figure 4.3 (b) a cap at 1500 kbit/s is also set. By comparing the two graphs, one can see that all clients in the uncapped environment have a less stable buffer occupancy than the clients shown in Figure 4.3 (b). Another observation is that when a client is in an ON-period the buffer size increases, and when it is in an OFF-period the buffer size decreases. A client's buffer drops below Tmin = 30 s because the client overestimates the available bandwidth after an OFF-period. If we compare Client 1's buffer size in Figure 4.3 (a) with Figure 4.1 (a), it can be observed that the client's buffer size is below Tmin = 30 s at the same times as the client overestimates the available bandwidth.

Figure 4.3 (b) illustrates that overestimations can occur even with a fixed cap for the client.

(a) Without cap (b) With fixed cap at 1500 kbit/s

Figure 4.3: Buffer size, default linear scenario

4.1.4 Switches between encodings

In Figure 4.4 the number of switches from six simulations with different bandwidth traces is illustrated. Top-whiskers and bottom-whiskers show the calculated maximum and minimum values, respectively. We can observe that the number of switches between encodings is smaller when a rate cap is implemented. However, the difference between with and without a cap may seem small. This is because all simulations used a fixed cap at 1500 kbit/s. This cap was not optimal for all simulations, because some trace files had very poor bandwidth relative to other traces; in these cases, a lower fixed cap would have been preferable. If a more suitable fixed cap had been set for each simulation, the differences in the number of switches would have been bigger.

4.1.5 Client link bandwidth estimation

Figure 4.5 (a) shows the estimated available link bandwidth for each client without a fixed rate cap, whereas Figure 4.5 (b) shows the result when a cap is set. The fair share bottleneck bandwidth is 1500 kbit/s. The results in Figure 4.5 (a) clearly show how each client multiple times overestimates its fair share bandwidth. When a client overestimates the fair share bandwidth to 6000 kbit/s, the other clients have either not yet started playing or are currently in an OFF-period. Just to make sure, since it can be difficult to determine from the graph, we


Figure 4.4: Number of switches from six simulations, linear scenario. Bars show average values, top-whiskers the max values, bottom-whiskers the min values.

calculated the total bandwidth used by all clients at each point in time to make sure it does not exceed the limit of 6000 kbit/s. Figure 4.5 (b) shows a fixed rate cap at 1500 kbit/s. Even when a client is in an OFF-period, its rate cap remains 1500 kbit/s.

(a) Without cap (b) With fixed cap at 1500 kbit/s

Figure 4.5: Instantaneous bandwidth seen at client, default linear scenario

4.2 Branched video

In this section, results are graphically illustrated and explained when branched video is added to the simulation.

4.2.1 Buffer occupancy

Figure 4.6 shows the buffer sizes over time; in Figure 4.6 (b) the fixed cap is also implemented. The pattern observed is similar to the linear video result in Figure 4.3: the buffer occupancy is more stable with a fixed cap than without. The critical part in branched video is when an overestimation occurs after a branch point while a client is in a Buffer-state. If the overestimation after a branch point is too large, there is a risk that a stall will occur. When a client is in a Buffer-state after a branch point, a more stable buffer build-up usually occurs with a fixed cap implemented. This means that the risk of a stall is lower with a cap than without. The start-up delay is usually unchanged whether a cap is set or not. Also, there is no difference in start-up delay between linear video and branched video.


4.2.2 Requested encoding

Similar to the linear video, we also analyze the requested encoding for branched video. Figure 4.7 (a) and (b) show the requested encoding; in Figure 4.7 (b) a cap is implemented. As seen in the graphs, a more consistent level of encoding requests is made with a fixed cap than without. Since the rate cap limits the available bandwidth, a lower level of encoding follows, which is shown in the graph. If the results are compared with the linear settings in Figure 4.2, both graphs show more instability without a cap.

(a) Without cap (b) With fixed cap at 1500 kbit/s

Figure 4.6: Buffer size, default branched video scenario

(a) Without cap (b) With fixed cap at 1500 kbit/s

Figure 4.7: Requested encoding, default branched video scenario

In order to examine how the simulation results are affected when the fixed rate cap is changed, we readjust the cap to different levels: 1300, 1400 and 1600 kbit/s. In Figure 4.8 (c), with the cap at 1600 kbit/s, the graph clearly shows a deterioration in the number of switches between encodings, and an obvious instability is apparent. If we set the rate cap to 1400 kbit/s, an improvement is made, which is observable in Figure 4.8 (b). We also ran a simulation with a cap set to 1300 kbit/s; in this case the result became worse again. This indicates that the sweet spot is somewhere around 1400 kbit/s for these particular bandwidth conditions. Another aspect worth mentioning is that this most suitable level of 1400 kbit/s applies only to the branched video setup and not to the linear video simulation. When the same rate caps were implemented on linear video, 1500 kbit/s was the cap level that led to the most favorable result. After changing the fixed cap and conducting multiple simulations with different trace files, one conclusion is that it is difficult to find a cap reaching the optimal result for all simulations. The main reason is that the bandwidth in the trace files varies greatly. Overall, a lower fixed cap gives a stable result, but it also means that the playback quality is limited to a lower level. Clients who could theoretically play at a higher quality are forced to play at a lower one, even though an acceptable level of stability is met when playing the higher quality option. This can be seen in Figure 4.9 (a), where Client 1 reaches a stable level but the other clients do not. In Figure 4.9 (b), all clients are at a more stable but lower level. Note that the results in Figure 4.9 are obtained with other trace files than those in Figure 4.8.

(a) With fixed cap at 1300 kbit/s (b) With fixed cap at 1400 kbit/s

(c) With fixed cap at 1600 kbit/s

Figure 4.8: Impact of alternative rate cap limits, branched video scenario

(a) With fixed cap at 1300 kbit/s (b) With fixed cap at 700 kbit/s


4.2.3 Switches between encodings

The results vary with both the different traces and the times at which branch points occur. In Figure 4.10 the number of switches from six simulations with different bandwidth traces is illustrated. Changing the times of the branch points changed when the switches occurred, but did not change the total number of switches. Since the results of changing branch point times are strongly linked to our bandwidth traces, and therefore not applicable to a real scenario, those results are not presented.

In Figure 4.10 it can be seen that a more stable result is obtained with a cap set to 1500 kbit/s compared to without a cap. The difference between the top-whisker and bottom-whisker values with the cap at 1500 kbit/s is very large; the reason is that the bandwidth in the trace files varies a lot. For trace files with poor bandwidth, a cap at 1500 kbit/s is not appropriate. We therefore also ran the simulations with a cap at 700 kbit/s. The result is considerably more stable with a cap of 700 kbit/s, as it is more inclusive and generates better results for all trace files, at least when stability is the sole consideration. On the other hand, a higher encoding is generally requested when the cap is set to 1500 kbit/s; with a cap of 700 kbit/s, clients are forced down to request encodings at 268 kbit/s.

Figure 4.11 shows the average requested encoding from six simulations. The highest average requested encoding occurs when no cap is implemented. In some simulations, like the one in Figure 4.7, the average requested encoding is above 625 kbit/s; other simulations with poor bandwidth have an average requested encoding below 625 kbit/s. In total, this gives an average of 625 kbit/s. When implementing a cap at 1500 kbit/s, the average requested encoding for some simulations was reduced, as seen in Figure 4.7 (b). However, simulations with poor bandwidth remained at a low average below 625 kbit/s, and the implemented cap did not have any effect on these simulations. Therefore the result with a cap at 1500 kbit/s jumps between encodings 268 and 625 kbit/s, as seen in Figure 4.11. When implementing a cap at 700 kbit/s, all simulations were affected and gave a stable but low average requested encoding. As mentioned earlier, this cap forced some clients to request a lower encoding.

Figure 4.10: Number of switches from six simulations, branched video scenario. Bars show average values, top-whiskers the max values, bottom-whiskers the min values.

4.2.4 ON-OFF periods

Figure 4.12 (a) illustrates how much time, as a percentage of the simulation (400 seconds in total), a client spends in an OFF-period (the rest is spent in an ON-period). Without a cap it is around 40%, and with a cap at 1500 kbit/s it is roughly 50%. We can observe that with a cap implemented, more time is spent in an OFF-period. The explanation for this


Figure 4.11: Average requested encoding from six simulations, branched video scenario

can be found by observing Client 3 in Figure 4.6. When the client has reached the first branch point while no cap is implemented, it takes approximately 31 seconds until it reaches Tmax at 40 seconds. This is because it overestimates the bandwidth after an OFF-period. With a cap, however, it takes 14 seconds to reach Tmax.

Figure 4.12 (b) illustrates how many times a client enters an OFF-period during a simulation. Since an OFF-period is normally around 10 seconds, except close to a branch point, clients generally enter an OFF-period more times with a cap than without. This is reasonable since, with a cap, a client spends more total time in OFF-periods than without a cap.

The fact that a client is in OFF-periods longer, and enters OFF-periods more often, with a cap than without is worth analyzing more deeply. Section 2.4.1 illustrated that one of the root problems behind instability is that a client in an OFF-period cannot estimate the correct bandwidth when re-entering an ON-period. The reason there is less instability with a cap, even though clients enter more OFF-periods and spend more total time in them, is that with a cap the bandwidth estimation after an OFF-period is lower, as can be seen in Figure 4.1 (b). This makes the encoding a client requests more appropriate. By selecting a more appropriate encoding, chunks can be downloaded faster, which results in less time in ON-periods and more total time in OFF-periods.
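This relation can be shown with back-of-the-envelope arithmetic of our own, not taken from the simulator: a chunk holds four seconds of video, so in steady streaming the ON share of time is roughly the ratio of the encoding rate to the download rate.

```python
# Rough illustration (our own arithmetic, not the simulator): fraction of
# time spent ON when steadily streaming encoding `enc_kbps` at download
# rate `rate_kbps`.

def on_fraction(enc_kbps, rate_kbps):
    # A 4 s chunk holds 4*enc_kbps kbit and takes 4*enc_kbps/rate_kbps
    # seconds to fetch, so the ON share of each 4 s playback slot is:
    return min(1.0, enc_kbps / rate_kbps)
```

A capped client that settles on 625 kbit/s at a 1500 kbit/s rate is ON a smaller fraction of the time than an uncapped one chasing 1124 kbit/s at the same rate, which is consistent with the larger OFF-time share under a cap seen in Figure 4.12 (a).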

(a) Time in OFF-period (b) Times entering an OFF-period

Figure 4.12: Percentage of each client's time in OFF-period and number of times entering an OFF-period, default branched video scenario


4.2.5 Path buffer thresholds

To analyze the buffer behavior over time, the results are illustrated graphically in Figure 4.13 (a) and, with the cap implemented, in Figure 4.13 (b). When the branch path buffer threshold is set to Tmax = 4 s, stalls occur for Clients 1 and 2 both without a cap (Figure 4.13 (a)) and with one (Figure 4.13 (b)). Many simulations were performed with different settings for when branch points appear and with different trace files. There are no clear signs that a fixed rate cap helps avoid stalls when the branch path buffer threshold is set low. The reason is that clients with a fixed cap also overestimate the bandwidth, so stalls still occur. With a fixed cap, however, the overestimation is not as large as without it, which means that stall periods are usually shorter with a fixed cap. When Tmax was set to 8 s, stalls no longer occurred either with or without a cap. Otherwise, the results are more stable with a cap than without.

(a) Without cap (b) With fixed cap at 1500 kbit/s

Figure 4.13: Buffer size when branch path buffer has a buffer threshold set to Tmax = 4 s

4.2.6 Competing clients

Figure 4.14 shows the average number of switches for two clients sharing 6000 kbit/s of bandwidth. Top-whiskers show the maximum values and bottom-whiskers the minimum values calculated over six simulations. The results shown are without a cap, with a cap at 1500 kbit/s, and with a cap at 700 kbit/s. We also obtained results with the cap set to the fair share of 3000 kbit/s, which is not illustrated in the graph since it gave the same result as the simulation without a cap. The smallest number of switches occurs when the cap is set to 700 kbit/s; however, this means that clients request a lower encoding of 268 kbit/s. When the cap is set to 1500 kbit/s, many requests are for the higher encoding of 625 kbit/s. It is possible to determine that a cap is advantageous also when the number of clients decreases, but the cap should be set smaller than the fair share to achieve an improved result if the shared bandwidth is large.

In Figure 4.15, the requested encoding for six competing clients is illustrated without a cap, with a cap at 1000 kbit/s (which is the fair share), and with a cap at 700 kbit/s. When the cap is set to the fair share of 1000 kbit/s, a slightly improved result is obtained. When the cap is set to 700 kbit/s, an improved result is also obtained in comparison to the no-cap setting. We also tested a simulation with 9000 kbit/s as shared bandwidth, which makes it possible to set the cap to the fair share limit of 1500 kbit/s. This gave a more stable result compared to the 1000 kbit/s cap in the figure. The reason the 1000 kbit/s cap did not generate a more stable result is that this cap is not optimal, since it forces the clients to jump between requests of 625 kbit/s and 268 kbit/s. When the cap is set to 700 kbit/s, everything is usually requested


Figure 4.14: Number of switches from six simulations with two clients, branched video scenario. Lines show average values, top-whiskers the max values, bottom-whiskers the min values.

Figure 4.15: Number of switches from six simulations with six clients, branched video scenario. Lines show average values, top-whiskers the max values, bottom-whiskers the min values.

as an encoding of 268 kbit/s. Here too, a cap should be set lower than the fair share to achieve the most stable result possible.

If the number of switches without a cap for the two-client simulations, shown in Figure 4.14, is compared with that for six clients, shown in Figure 4.15, it can be seen that the average number of switches, and the difference between top-whisker and bottom-whisker, are smaller in the six-client case. This is reasonable, as more clients can be in an ON-period when a client estimates its fair share, so the client does not overestimate its fair share as much. A trivial conclusion is that with a cap of 700 kbit/s, the client cannot overestimate its fair share; for that reason, a similar result is obtained regardless of whether 2 or 6 clients compete in the network.

4.2.7 Branch point frequency

Figure 4.16 illustrates the average number of switches when the time between branch points varies, and how the result differs with and without a cap. The first branch point is at t = 40 s; the following branch points are then spaced 10 s, 20 s and 40 s apart, respectively. The branch path buffer threshold had to be set to Tmax = 8 s to allow branch points at 10 s intervals. In order not to change several parameters at the same time, Tmax = 8 s is used in all these cases. The default result is illustrated at the far right and is the same as in Figure 4.10. That simulation contains only three branch points, as opposed to the others containing 10, and its buffer threshold is Tmax = 15 s.


The number of switches is very similar regardless of how much time passes between branch points, both with and without a cap. In all cases it is advantageous to use a fixed cap. The buffer is also more stable in all cases when the fixed cap is implemented, and stalls do not occur in any of the cases. When the cap was set to 700 kbit/s, the number of switches decreased further, but a lower encoding is then also requested. So even when branch points occur close to each other, a fixed cap is still beneficial.

Figure 4.16: Number of switches from six simulations with different times between branch points. Bars show average values, top-whiskers the max values, bottom-whiskers the min values.

4.2.8 Chunk sizes

The impact of varying chunk sizes is another parameter worth analyzing, and simulations were run where this specific variable was changed. The chunk sizes are taken from YouTube video clips, and we examine the results when the chunk sizes are increased by 10% and decreased by 10%. When the size was reduced by 10%, the number of switches without a cap did not change; however, it increased with the fixed cap at 1500 kbit/s. When the size was increased by 10%, the number of switches was the same both with and without a cap. Although the number of switches was the same, the switches occurred at different times when the sizes were changed. When the fixed cap was set to 1400 kbit/s, fewer switches occurred in all three cases (standard size, reduced size and increased size) compared to 1500 kbit/s.

The conclusion is that if the chunk sizes decrease, the fixed cap may need to decrease with them to achieve the most stable results, and vice versa if the chunks were to increase.

4.2.9 Summarized table of results

Table 4.1 summarizes the results. The table includes the different parameters used in the simulation and their respective default values. When a parameter is assigned a new value, the result from the simulation with that value can be found in a specific figure or section. Only one parameter is changed at a time, while the other parameters keep their default values.


Table 4.1: Parameter setup

Parameter               Default       New value        Result
Traces                  Standard      -                -
Fixed cap               1500 kbit/s   1300 kbit/s      Figure 4.8 (a)
                                      1400 kbit/s      Figure 4.8 (b)
                                      1600 kbit/s      Figure 4.8 (c)
Clients                 4             2                Figure 4.14
                                      6                Figure 4.15
Path buffer threshold   15 s          4 s              Figure 4.13
Chunk size              Standard      Increased 10%    Section 4.2.8
                                      Decreased 10%    Section 4.2.8
Branch point frequency  >100 s        10 s             Figure 4.16
                                      20 s             Figure 4.16
                                      40 s             Figure 4.16


5 Discussion

5.1 Linear video

As previously mentioned, we first made a simulation with linear video to ensure that our simulation behaves correctly. When comparing our results with those of Krishnamoorthi et al. [12], there are clear similarities. The small differences that occur are due to the fact that our results are based on simulations while theirs are based on real experiments. Based on this comparison, we can proceed to simulations with branched video and produce credible results.

5.2 Branched video

The same benefits of using a fixed cap seen for linear video can also be seen for branched video. Using a fixed cap contributes several advantages: the number of switches between qualities decreases, and the buffer becomes more stable. However, one conclusion is that there is some difficulty in finding a fixed cap that fits multiple clients with different network connections.

5.2.1 Advantages with cap implemented

The reasons instability and unfairness arise are overestimation and network conditions. Below we discuss how a fixed cap affects these areas.

5.2.2 ON-OFF periods and network conditions

Two of the main causes of instability are that a client cannot estimate the available bandwidth while it is in an OFF-period, and varying network conditions. Although a fixed cap does not solve these problems, it reduces their consequences. This is despite the fact that a client spends more time in OFF-periods, and enters OFF-periods more often, with a cap than without, as shown in Figure 4.12. The reason is that the bandwidth estimation is at a more stable level with a cap than without, see Figure 4.1. This allows the client to adapt more quickly to the prevailing network conditions. Although in both cases, with and without a cap, the client has to settle for a lower quality, the buffer is built up faster and becomes more stable with a cap. This is important if bad network conditions occur when a branch point is reached: without the cap, the risk of stalls would increase compared to when the cap is implemented.

5.2.3 Fair share estimation

The main advantage of using a cap is optimal fair share estimation. With a cap, a client can never overestimate its fair share. Based on our results, some of the instability occurs because of fair share overestimations, so this issue would be eliminated. This plays a greater role when the number of clients increases and when the network conditions become more stable.

5.2.4 Bandwidth competing

From Figure 4.14 it is evident that if a fixed rate cap is set too high, in this case at the fair share level, the benefits mentioned earlier disappear. This means that even though the clients share the bandwidth, a fixed cap should be set independently of the number of clients. From the same figure it can also be seen that even though the fixed cap is set to 1500 kbit/s, which is each client's fair share, it leads to instability in the form of a high number of switches. When the cap is set to 700 kbit/s, the average number of switches decreases vastly. This means that the fixed cap should be set lower than the fair share to generate more stable behavior.

When the number of clients increases to six, as shown in Figure 4.15, the number of switches between requested encodings also increases. A slightly more stable result is obtained with a fixed cap than with no cap. The most stable result was generated when the fixed rate cap was set at 700 kbit/s and not at the fair share of 1000 kbit/s; the 700 kbit/s cap was thus more stable also in the six-client environment.

5.2.5 Buffer stability

The stability of the buffer is improved by setting a rate cap. Based on our results, however, we cannot determine whether this has a major impact on the viewer's experience, since the viewer notices the quality of the video rather than the size of the buffer, as long as the buffer is not empty and the video does not pause. However, it can have an impact just after a branch point has been passed and the buffer is small. Our results in Figure 4.13 do not show that using a cap reduces the risk of stalling after a branch point. However, based on other results, we see that the risk of the video stalling after a branch point is higher without a cap than with one, because the buffer becomes more stable with a cap than without.

5.3 Fixed cap sweet spot

During the simulations, a fixed cap of 1500 kbit/s has been the default cap. This cap has given more stable results in simulations of both linear and branched video. These simulations were made with varying parameters; i.e., different trace files, buffer thresholds, branch point times, and numbers of clients. However, other, lower caps of 1400 kbit/s or 700 kbit/s have been more advantageous in certain situations. It has been difficult to find a cap that consistently gives the best result, which is also noted by Huang et al. [9]. This poses a great challenge for video streaming providers. If the cap is fixed at too high a level, the client can experience the stability issues already addressed. If instead the cap is set too low, poor playback quality follows. There is thus no perfect solution, and the result could potentially be a client choosing another video content provider [6]. However, it can be concluded that a fixed cap should not be set too high, since its advantages can then disappear. Figure 4.14 shows that if the fixed cap is set equal to the fair share and the fair share is too high, the result does not improve. This is because the oscillations are mainly caused by overestimation after an OFF-period, and network conditions affect the result more than a client overestimating its fair share.
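One way to search for such a sweet spot is to replay a trace against a set of candidate caps and count the resulting encoding switches, preferring the highest cap among equally stable ones. This is a minimal sketch; the trace, candidate caps, and encoding ladder are illustrative and not taken from the thesis simulator:

```python
ENCODINGS = [700, 1400, 1500, 2500, 4000]              # kbit/s (illustrative)

def switches_for_cap(trace, cap):
    """Count encoding switches when requests are limited by `cap`."""
    count, last = 0, None
    for bw in trace:
        enc = max([e for e in ENCODINGS if e <= min(bw, cap)],
                  default=ENCODINGS[0])
        if last is not None and enc != last:
            count += 1
        last = enc
    return count

# Bandwidth estimate oscillating around a fair share of roughly 2000 kbit/s.
trace = [1450, 2600] * 20
candidates = [700, 1400, 1500, 2500]
# Fewest switches first; among ties, prefer the higher (better-quality) cap.
best = min(candidates, key=lambda c: (switches_for_cap(trace, c), -c))
print(best)    # 1400: stable like 700, but with higher playback quality
```

Here both 700 and 1400 kbit/s eliminate all switches for this trace, so the tie-break picks 1400, mirroring the observation that the sweet spot sits somewhat below the fair share rather than at the lowest encoding.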

Setting a cap that generates stability is also considerably more difficult when the clients have different network conditions. A client with a good network condition can be stable at a cap of 1500 kbit/s while another needs a cap of 700 kbit/s to become stable. Being able to set different fixed caps individually for different clients would then be more advantageous.
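A hypothetical per-client assignment, not implemented in this thesis, could derive each client's fixed cap from its own observed bandwidth, e.g. a safety margin below the average, snapped down to the encoding ladder. The margin and traces below are illustrative assumptions:

```python
ENCODINGS = [700, 1400, 1500, 2500, 4000]          # kbit/s (illustrative)

def individual_cap(bw_trace, margin=0.85):
    """Highest encoding below `margin` times the client's mean bandwidth."""
    target = margin * sum(bw_trace) / len(bw_trace)
    return max([e for e in ENCODINGS if e <= target], default=ENCODINGS[0])

print(individual_cap([1700, 1900, 1800]))   # stronger client -> 1500
print(individual_cap([800, 900, 850]))      # weaker client   -> 700
```

A scheme like this would give the two example clients from the paragraph above their respective 1500 and 700 kbit/s caps, at the cost of the distributor having to measure per-client bandwidth.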

We could also observe that when the simulation was run with the same trace files for both linear and branched video, the sweet spots differed. For linear video the best fixed cap was 1500 kbit/s, but for branched video it was 1400 kbit/s. This means that it may be beneficial to set a slightly lower fixed cap for branched video than for linear video.

5.4 Quality and stability tradeoff

We have found that the most stable cap overall in our simulations is 700 kbit/s. Intuitively, this would suggest that the cap should always be set to 700 kbit/s. The reason for its stability, however, is that with this cap a very low encoding is requested. A difficult tradeoff discussion is therefore required: should a higher cap be set, risking more unstable results, or a low cap, giving stable results but lower quality as a consequence? Although it has been shown that instability affects QoE, it is difficult to determine how much instability is acceptable in exchange for sequences of higher quality during playback. Much of this is probably highly subjective and will be hard to determine with ease.
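This tradeoff can be made explicit by scoring a session as average bitrate minus a penalty per switch, where the penalty weight stands in for how much a viewer dislikes instability. The score, weight, and example sessions below are hypothetical; the thesis does not define such a metric:

```python
def qoe_score(encodings, switch_penalty=100):
    """Average requested bitrate (kbit/s) minus a fixed penalty per switch."""
    avg = sum(encodings) / len(encodings)
    switches = sum(1 for a, b in zip(encodings, encodings[1:]) if a != b)
    return avg - switch_penalty * switches

stable_low    = [700] * 20            # low cap: stable but low quality
unstable_high = [1500, 2500] * 10     # higher cap: better quality, oscillating

print(qoe_score(stable_low))          # 700.0 (no switches)
print(qoe_score(unstable_high))       # 100.0 (2000 avg - 19 switches * 100)
# With a smaller penalty the ranking flips: which session "wins" depends
# entirely on the subjective weight given to instability.
print(qoe_score(unstable_high, switch_penalty=50))   # 1050.0
```

That the ranking reverses when only the penalty weight changes is precisely why a single best cap is hard to pin down without user studies.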

5.5 The work in a wider context

After every branch point there will be at least two different paths. This means that, compared to linear video, which only has one path, a branched video will demand twice as much bandwidth to fill the buffer for each path, and even more if there are more paths. Today, linear video is the most common type of video stream, but if or when branched video takes its place, the demand for bandwidth will increase. Since only one path after a branch point can be selected, the bandwidth used to download chunks for the other paths is wasted. So as the usage of branched video increases, the waste of bandwidth will increase as well.
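A back-of-the-envelope calculation makes the overhead concrete, assuming chunks must be prefetched for every outgoing path before the branch point but only one path is played. The bitrate, prefetch window, and path counts are illustrative:

```python
def prefetch_demand_kbit(bitrate_kbit, paths, prefetch_sec):
    """Data needed to buffer `prefetch_sec` seconds on every outgoing path."""
    return bitrate_kbit * prefetch_sec * paths

def wasted_kbit(bitrate_kbit, paths, prefetch_sec):
    """Everything fetched for the paths not taken is wasted."""
    return bitrate_kbit * prefetch_sec * (paths - 1)

# 1500 kbit/s video, 8 s of buffer per path, two outgoing paths:
print(prefetch_demand_kbit(1500, paths=2, prefetch_sec=8))   # 24000 kbit
print(wasted_kbit(1500, paths=2, prefetch_sec=8))            # 12000 kbit
# With three paths, two thirds of the prefetched data is discarded.
print(wasted_kbit(1500, paths=3, prefetch_sec=8))            # 24000 kbit
```

In general a fraction (paths - 1) / paths of the data prefetched across a branch point is discarded, which is why waste grows with branching degree.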

In order to reach an increased QoE, an implementation of rate caps could be applied by the network distributor. Since each household has a unique set of conditions, this implementation should be automated and adaptive, and the function should also be highly scalable since the number of active users varies greatly. Because each distributor often handles a significant number of households, such a solution is likely to claim a lot of processing power and thereby a lot of energy. Another aspect is that rate caps constrain the energy distribution, because each client is assigned a fixed level of bandwidth; with regard to bandwidth consumption alone, it should thus be easier to predict what each client consumes. Also, if the network distributor needs more information about the clients, e.g. whether they stream branched or linear video, this implementation touches on integrity and privacy, which means that an ethical analysis should be carried out before proceeding with this type of solution.


6 Conclusion

This thesis has explored whether the implementation of a rate cap could be beneficial for enhanced QoE in a branched video environment where clients compete for bandwidth. We have formulated relevant research questions which aim to partially answer this. To examine the research questions we have conducted simulations using trace files. By varying relevant parameters in the simulator, we have covered a versatile set of perspectives and contributing factors. To consolidate the reliability of the simulations, we validated the results against previous work. We identified that the implementation of a rate cap on clients in a competitive bandwidth environment is beneficial if the cap level is set at a certain limit. Counter-intuitively, these client caps are not at the fair-share limit, which gives the most bandwidth. This conclusion has been drawn since the results have shown that a limit slightly lower than the fair share reduces the number of switches between encodings, hence providing a more stable playback experience than the fair share.

When applying the same rate cap within linear and branched settings, different rate caps were shown to be most suitable. This indicates that a rate cap implementation will need to be dynamically configured to find the best match for a client switching between branched and linear video streaming. Also, if clients in a household stream a mixture of linear and branched video, the rate caps need to be adapted accordingly. Another aspect contributing to the difficulty of finding the most suitable rate cap level is that it is unique to certain situations, and that parameters such as bandwidth conditions have a direct impact on the best cap. When the bandwidth conditions were changed in the traces, another rate cap gave the fewest switches and thus the best results. Since bandwidth forecasting is very difficult, so is finding the best situational rate cap. With all these considerations in mind, a rate cap is preferable for enhanced QoE while streaming in a competing network, due to the reduced number of switches. However, the tradeoff is that playback quality is limited to the encoding level allowed by the cap. If the difficulty of finding the optimal rate cap is added to the equation, it will be hard to implement a solution that takes all these constraints into consideration, especially if trying to find optimal individual caps.

6.1 Further work

This thesis has focused primarily on QoE; thus all the presented results have been weighted by their impact on QoE in general and quality switches in particular. With our simulator, however, it is also possible to present results with other parameters in focus. For example, an interesting focus would be wasted bandwidth. Another approach would be to broaden the perspective to include more QoE parameters, such as maintaining the highest possible playback quality with only a fixed number of switches. This number of switches could arguably be the number considered acceptable by the user. To find this specific number, user studies could be made where respondents are asked to watch a video of varied quality and then rate the experience. Thereafter, this could be implemented in the simulator to, at least in theory, reach a higher overall QoE than previously presented work.

Another promising direction for future work is to run simulations with open-source media player frameworks such as OSMF or Dash.js. Other techniques could also be used to avoid bandwidth underutilization as well as the problems that arise due to overestimation of bandwidth. Finally, how best to find the fixed-cap sweet spot should be investigated, so that the fixed cap can be used to full effect.

References
