
Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Computer Science

Bachelor thesis, 16 ECTS | Datateknik

2017 | LIU-IDA/LITH-EX-G--17/073--SE

HTTP/2, Server Push and Branched Video

Evaluation of using HTTP/2 Server Push in Dynamic Adaptive Streaming over HTTP with linear and non-linear prefetching algorithms.

Utvärdering av HTTP/2 Server Push vid adaptiv videoströmning. (Evaluation of HTTP/2 Server Push in adaptive video streaming.)

Summia Al-mufti

Rasmus Jönsson

Supervisor: Niklas Carlsson
Examiner: Nahid Shahmehri


Upphovsrätt

Detta dokument hålls tillgängligt på Internet – eller dess framtida ersättare – under 25 år från publiceringsdatum under förutsättning att inga extraordinära omständigheter uppstår. Tillgång till dokumentet innebär tillstånd för var och en att läsa, ladda ner, skriva ut enstaka kopior för enskilt bruk och att använda det oförändrat för ickekommersiell forskning och för undervisning. Överföring av upphovsrätten vid en senare tidpunkt kan inte upphäva detta tillstånd. All annan användning av dokumentet kräver upphovsmannens medgivande. För att garantera äktheten, säkerheten och tillgängligheten finns lösningar av teknisk och administrativ art. Upphovsmannens ideella rätt innefattar rätt att bli nämnd som upphovsman i den omfattning som god sed kräver vid användning av dokumentet på ovan beskrivna sätt samt skydd mot att dokumentet ändras eller presenteras i sådan form eller i sådant sammanhang som är kränkande för upphovsmannens litterära eller konstnärliga anseende eller egenart. För ytterligare information om Linköping University Electronic Press se förlagets hemsida http://www.ep.liu.se/.

Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/hers own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© Summia Al-mufti, Rasmus Jönsson


Students in the 5-year Information Technology program complete a semester-long software development project during their sixth semester (third year). The project is completed in mid-sized groups, and the students implement a mobile application intended to be used in a multi-actor setting, currently a search and rescue scenario. In parallel they study several topics relevant to the technical and ethical considerations in the project. The project culminates in the demonstration of a working product and a written report documenting the results of the practical development process, including requirements elicitation. During the final stage of the semester, students form small groups and specialise in one topic, resulting in a bachelor thesis. The current report represents the results obtained during this specialisation work. Hence, the thesis should be viewed as part of a larger body of work required to pass the semester, while also fulfilling the conditions and requirements for a bachelor thesis.


Abstract

The purpose of this thesis is to investigate and test the usage of HTTP/2 in dynamic adaptive video streaming, as well as to look into how it can be used to benefit prefetching algorithms for branched video. With a series of experiments, the performance gains of using HTTP/2 rather than the older standard HTTP/1.1 have been investigated. The results have shown no significant change in player quality and buffer occupancy when using HTTP/2, though our tests have shown a slight decrease in overall playback quality when using HTTP/2. When using a linear prefetch of two fragments, an average quality improvement of 4.59% has been shown; however, the result is inconclusive due to variations in average quality between different values for how many fragments to prefetch. Average buffer occupancy has shown promise, with a maximum increase of 12.58% when using linear prefetch with three fragments. The values for buffer occupancy gains are conclusive. Two implementations for non-linear prefetching have been made. The first one uses HTTP/2 server push to deliver fragments for prefetching and the second one uses client-side invoked HTTP requests to pull fragments from the server. Using HTTP/2 server push has shown a decrease of 2.5% in average total load time, while client-side pulling has shown a decrease of 34% in average total load time.


Acknowledgments

We would like to thank our supervisor Niklas Carlsson and his colleague Vengatanathan Krishnamoorthi for the guidance, help and support during the entire project. We also want to thank the team behind DASH.js and the Go programming language.


Contents

Abstract iv

Acknowledgments v

Contents vi

List of Figures vii

List of Tables viii

1 Introduction 1
1.1 Motivation . . . 1
1.2 Aim . . . 1
1.3 Research questions . . . 2
1.4 Delimitations . . . 2
2 Theory 3
2.1 HTTP/1.1 . . . 3
2.2 HTTP/2 . . . 4

2.3 Dynamic adaptive streaming over HTTP . . . 5

2.4 Branched video and prefetching . . . 7

2.5 Related works . . . 8
3 Method 10
3.1 Web server . . . 10
3.2 Web client . . . 11
3.3 Environment . . . 11
3.4 Linear prefetching . . . 11
3.5 Non-linear prefetching . . . 13
4 Results 15
4.1 Video file . . . 15
4.2 Linear prefetching . . . 16

4.3 Prefetching with branched video . . . 21

5 Discussion 22
5.1 Linear prefetching . . . 22

5.2 Non-linear prefetching . . . 23

5.3 Prefetching branched video . . . 24

5.4 Method . . . 24

5.5 The work in a wider context . . . 24

5.6 Conclusion . . . 26


List of Figures

2.1 Comparison of simplex, multiplex and server push. . . 5

2.2 A 20 second video file divided into several fragments of 5 seconds and different qualities . . . 5

2.3 Client request a fragment that is delivered by the server . . . 6

2.4 An example of a branched video with two branch points. . . 7

3.1 A request served with a K-push method. . . 12

4.1 Average quality level and buffer level when using HTTP/1.1 . . . 16

4.2 Average quality level and buffer level over time when using HTTP/2 without linear prefetching . . . 16

4.3 Average quality level and buffer level over time when using HTTP/2 and linear prefetch with k=1 . . . 17

4.4 Average quality level and buffer level over time when using HTTP/2 and linear prefetch with k=2 . . . 17

4.5 Average quality level and buffer level over time when using HTTP/2 and linear prefetch with k=3 . . . 18

4.6 Average quality level and buffer level over time when using HTTP/2 and linear prefetch with k=4 . . . 18

4.7 Results for linear prefetching with confidence intervals . . . 20


List of Tables

4.1 Quality index and corresponding bit-rates. . . 15

4.2 Comparison between the test using HTTP/1.1 and HTTP/2 . . . 19

4.3 Comparison between the test using HTTP/2 and linear prefetching . . . 19

4.4 Standard deviation for all the tests . . . 19

4.5 Variation for all the tests . . . 20


1 Introduction

1.1 Motivation

Video streaming is becoming increasingly popular and more people choose to consume media content via their Internet connection. The multinational technology conglomerate Cisco predicts that by the year 2020 video streaming will account for about 80% of Internet data traffic [3]. Video streaming is without a doubt a hot topic in the technology sector. HTTP¹, commonly used to deliver video content, was not originally designed for video streaming. Because of this, more adaptation at the application level is required to give better support for video streaming. In the search for a more effective solution, a newer version of HTTP, named HTTP/2, has been developed. This thesis covers how video streaming can benefit from features in the relatively new standard HTTP/2, which is there to complement and eventually replace the old standard HTTP/1.1. Improving video streaming can have a huge impact for the end user: reduced data usage, reduced battery usage and increased video quality. One of the new features in HTTP/2 is server push, which allows the web server to push multiple resources to the client without requiring an explicit request for those resources.

1.2 Aim

The aim of this report is to investigate whether using HTTP/2 and server push in the video streaming context has any performance gains, and how it can be implemented in branched video streaming. In this thesis we have investigated when HTTP/2 and server push is preferred over HTTP/1.1 and what issues HTTP/2 server push can solve compared to HTTP/1.1 in video streaming with the DASH (Dynamic Adaptive Streaming over HTTP) protocol. An investigation into whether HTTP/2 can help solve the problems of HTTP/1.1 when using it to stream video has also been made. These investigations have been done by implementing a video streaming mechanism using HTTP/2 server push technology.

¹ Hyper Text Transfer Protocol



1.3 Research questions

• Are there any benefits of using HTTP/2 in video streaming?

• Are there any benefits of using HTTP/2 with server push in video streaming?

• How do these benefits affect the end-user when watching video?

• How can a non-linear prefetch approach be used with HTTP/2 in branched video streaming?

1.4 Delimitations

HTTP/2 has many new features, but in this thesis the focus is on the server push functionality. To limit the results from the experiments, only quantitative results will be collected and considered, such as how much video is buffered at a given point or what quality is played back. An implementation with server push in a common web server, with customizations for video streaming over HTTP/2, has been made. The first implementation is based on a linear prefetch strategy; other strategies for using server push in video streaming, like non-linear prefetching, have also been investigated. The second implementation is an algorithm for non-linear prefetching when playing media of the branched video type.


2 Theory

This theory chapter covers how HTTP/2 differs from HTTP/1.1 and the basics of how dynamic adaptive streaming over HTTP works. It also gives a background on branched video streaming and the differences between linear and non-linear prefetching.

2.1 HTTP/1.1

HTTP/1.1 was first drafted in the RFC 2616 standard and has been around since 1999 [4]. Today most websites on the Internet still use HTTP/1.1 and the transition to HTTP/2 is an ongoing process; many large Internet companies have already adopted HTTP/2 and many others are expected to follow. Because HTTP/1.1 is still so common it is very relevant to look at how it works. HTTP in general is a request-driven protocol where a client, usually referred to as a web browser, sends a request to a server to fetch a web page with some content. The resource fetched usually contains links to other resources that also need to be fetched to display a page in full, such as images and style-sheets. HTTP/1.1 does not feature any type of multiplexing, meaning that resources requested by the client cannot be streamed asynchronously: the client has to wait for one response to finish before receiving the response to the next request. This means that if the client wants to load two resources, one fast and one slow, and the slow resource is requested first, the fast resource cannot be sent until the slow one has finished, which causes unnecessary waiting; this type of flow is illustrated in Figure 2.1. This synchronous process is not suitable for today's rich web pages, because they often contain a vast number of links to other required resources, which requires many requests to complete rendering of a web page. Because video streaming requests often share the same pipeline as the web page they are located on, they also take part in the waiting process for other resources. If another request is being responded to when the web browser tries to request a video fragment, the video request has to wait for the other request to finish, which might cause the video to stall during playback.

Another problem with HTTP/1.1 is that it in general requires one round trip to the server per request made. This becomes a big issue in high-latency networks. To address this, HTTP pipelining was introduced, where a number of requests can be served over the same TCP connection, which removes the need for additional round trips for every request. However, HTTP pipelining is not widely used and does not add any support for multiplexing. A common way to work around the lack of multiplexing in HTTP/1.1 is to use multiple TCP connections. However, this is not a formal part of the HTTP/1.1 standard and is not required. It is also usually limited to a maximum of six parallel connections and has no support for prioritizing responses.

2.2 HTTP/2

The Internet Engineering Task Force (IETF) introduced a new version of HTTP in the RFC 7540 standard, named HTTP/2 [1]. HTTP/2 is an updated version of HTTP/1.1 that includes a variety of new features like server push, stream multiplexing and minimized protocol overhead via efficient compression of HTTP header fields. The aim is to address the issues of HTTP/1.1 and in turn contribute to a faster Internet. The server push technology was originally designed to help reduce web page load latency by allowing the web server to push content that the client might need in the near future, such as external resources on a page the client is currently rendering. This means that some resources may be transmitted to the client before the client is done parsing the link to that content. The idea is that the server can maintain a persistent connection with the client and push multiple resources at the same time, without requiring a request for every resource. This means that the round trip delay that comes with the request-response model can be avoided: the server can simply push all resources the client will need, when requesting a specific page, into the web browser's cache. When the client eventually makes a request for a specific resource, it can be loaded instantly if the content is already stored in the web browser's cache.
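As an illustration of this pattern, a Go handler could push a page's assets alongside the page itself. The http.Pusher interface is Go's real server push API (available since Go 1.8, the version used in this thesis); the asset paths and the assetsFor helper are hypothetical:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// assetsFor returns the sub-resources a page is known to reference, so
// they can be pushed before the client has parsed the HTML. The paths
// are hypothetical examples.
func assetsFor(page string) []string {
	if page == "/" {
		return []string{"/app.css", "/app.js"}
	}
	return nil
}

func pageHandler(w http.ResponseWriter, r *http.Request) {
	// The ResponseWriter implements http.Pusher only on HTTP/2
	// connections; over HTTP/1.1 the type assertion fails and the
	// handler silently falls back to plain request/response.
	if pusher, ok := w.(http.Pusher); ok {
		for _, asset := range assetsFor(r.URL.Path) {
			if err := pusher.Push(asset, nil); err != nil {
				log.Printf("push of %s failed: %v", asset, err)
			}
		}
	}
	fmt.Fprint(w, "<html><!-- page referencing /app.css and /app.js --></html>")
}

func main() {
	http.HandleFunc("/", pageHandler)
	fmt.Println("would push:", assetsFor("/"))
	// Browsers only negotiate HTTP/2 over TLS, so a real server would run:
	//   log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```

Note that the push happens before the response body is written, so the pushed streams can be in flight while the client parses the HTML.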

Figure 2.1a shows a typical request/response pattern when requesting a website from an HTTP/1.1 server without multiplexing. All resources requested are loaded synchronously. The inclination of the response arrows indicates the size of the resource being transferred; a very steep slope indicates a large file, as it takes longer for it to be transferred. Figure 2.1b shows a typical request/response pattern when using HTTP/2. Responses and requests can be multiplexed and transferred asynchronously. Figure 2.1c shows a typical request/response pattern when using HTTP/2 with server push enabled. Responses are loaded asynchronously, and the responses for resources 2 and 3 are pushed by the server instantly rather than requested.


2.3. Dynamic adaptive streaming over HTTP

[Figure 2.1: client–server timelines for (a) Simplex, where resources are loaded in sequence; (b) Multiplex, where resources are loaded in parallel; and (c) Server push & multiplexing, where resources are pushed and loaded in parallel.]

Figure 2.1: Comparison of simplex, multiplex and server push.

In the context of video streaming, some clear benefits of using HTTP/2 can be identified: multiplexing, which keeps requests from blocking the ability of video content to reach the user in time to be played, and server push, which allows video content to be pushed to the user instead of being transmitted upon request.

2.3 Dynamic adaptive streaming over HTTP

DASH is a protocol used to deliver video content to the user in an adaptive mode. Adaptive mode means that the bit-rate of the video quality selected for playback should not exceed the available bandwidth. To allow for this adaptation process the original video is split into different fragments of a pre-defined length; these fragments are then encoded into different quality levels at different bit-rates, making them different in size. Figure 2.2 shows an illustration of how a video could be split into different fragments.

[Figure 2.2: a video file split into four 5-second fragments, each encoded at quality index 1 (570.0 kbps) and quality index 2 (1200.0 kbps).]

Figure 2.2: A 20 second video file divided into several fragments of 5 seconds and different qualities

Because the video is split into these fragments of different qualities, the client may now freely select which of these fragments to download, as illustrated in Figure 2.3. If the selected video quality exceeds the available bandwidth, the client will not be able to download video fragments in time for them to be played back; this causes the video to stall, which causes buffering². This type of behaviour is unwanted. The available bandwidth is calculated by the client as it downloads video fragments, and it is up to the client to choose which fragments to request and download from the server. This client-driven interaction is very suitable for the distributed structure of many video streaming services, as there is no need for servers to keep track of different states or do any bandwidth estimations, since this is managed by the client.

Figure 2.3: Client requests a fragment that is delivered by the server

DASH uses an MPD³ file which is fetched first by the client when a video streaming process is about to begin. The MPD file has a variety of options and tells the client where to find the different video fragments as well as what video encoding has been used to encode the fragments. The MPD file also contains options for minimum and/or maximum buffer. The minimum buffer is a value for how many seconds of video data the client needs to have in its buffer before starting playback; if the buffer content falls below this threshold while playing, the video continues to play. If the client buffer is entirely empty, playback will not continue until enough fragments have been fetched to reach the minimum buffer threshold. This option helps keep the client from constantly freezing in bad bandwidth conditions. The maximum buffer option can be used to restrict the client from storing too much content in its buffer. This helps the client not store too much video data of too low a quality if the bandwidth conditions improve over time. By selecting these factors with care, the client can be given instructions that help balance great video quality against the risk of stalling to buffer more video content.
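The min/max buffer rules above can be sketched as a small state machine. The type, thresholds and function names below are our own illustration, not taken from dash.js or the MPD format:

```go
package main

import "fmt"

// player holds the buffer thresholds described above.
type player struct {
	minBuffer float64 // seconds required before (re)starting playback
	maxBuffer float64 // seconds beyond which no more fragments are fetched
	buffered  float64 // seconds of video currently in the buffer
	playing   bool
}

// step decides whether playback continues and whether more video
// should be fetched, following the min/max buffer rules.
func (p *player) step() (play, fetch bool) {
	switch {
	case p.buffered <= 0:
		// Buffer empty: stall until minBuffer is reached again.
		p.playing = false
	case !p.playing && p.buffered >= p.minBuffer:
		// Enough video buffered: (re)start playback.
		p.playing = true
	}
	// Fetch more only while below the maximum buffer threshold.
	return p.playing, p.buffered < p.maxBuffer
}

func main() {
	p := &player{minBuffer: 4, maxBuffer: 30, buffered: 2}
	fmt.Println(p.step()) // prints: false true (still stalled, but fetching)
}
```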

The goal of this adaptive streaming mode is to allow users to consume video with smooth playback even when bandwidth conditions are bad. The downside of adaptive streaming is that users with unstable connections might experience a lot of shifts in video quality, but the gains of adaptive streaming are considered to outweigh this factor.

It should also be stated that different implementations of the DASH protocol may have different characteristics such as different buffer thresholds.

Dash.js

A commonly used implementation of DASH for websites is dash.js, written in JavaScript. This implementation is especially suitable for usage in web browsers, as most browsers have support for JavaScript. It works by including a JavaScript file in the web page and then defining an HTML5 video element with a link pointing to the MPD file. This implementation is the one used in the experiments conducted and was selected due to its common usage and active development.

Bandwidth estimation in dash.js

One of the factors that has to be taken into consideration when using a specific implementation of DASH is how the bandwidth is estimated. Dash.js estimates bandwidth by using the time to the first received response byte to measure latency and the time to the final byte to measure bandwidth. When using this estimation for adaptive bit-rate selection, the measured bandwidth is averaged over a sliding window. Special consideration has been taken in the experiments so as not to affect the ability to correctly estimate bandwidth.

² The process of downloading content to be played
³ Media Presentation Description

2.4 Branched video and prefetching

Linear media is media arranged in a pre-defined order with no end-user interaction; every user receives the same flow of information. Non-linear media is the opposite: every user receives a different flow of information depending on their interactions. A common usage for non-linear video streaming is 3D graphics, where the user can navigate around a 3D scene, in this case streaming the video of the user's current viewing angle [5].

Figure 2.4: An example of a branched video with two branch points.

Branched video is a type of non-linear media in which users can select different playback paths through the video without noticing interruptions in the video quality [8][10]. A playback path is in most cases selected by the consumer. A path could represent a sub-story or an alternative chain of events that the consumer can choose to watch. Because of the interactive nature of branched video, the wait time between selecting a branch and the switch being done needs to be fairly low, so as not to affect the consumer's experience. Figure 2.4 shows how a branched video might be divided into different paths.

Branched video is similar to the concept of alternative videos or recommended videos, where the user can select from a range of different videos to continue watching after the current video has finished. This is common among video streaming providers. Although branched video puts even stricter requirements on low switch times when branching, similar techniques can be applied to both concepts.

Linear prefetching

Linear prefetching is when the prefetching algorithm downloads fragments of a single video stream, most often at the same quality as the one being played back. This is most suitable for linear, single-stream media, as it usually only consists of one stream.

Non-linear prefetching

Non-linear prefetching is when the prefetching algorithm downloads fragments from different video streams. These fragments could either be from the video currently being consumed or from alternative videos that will be presented to the user when the current video finishes. The method is most suitable for non-linear media.

To achieve seamless switching between different paths in non-linear media, prefetching and buffer management are often used [8][10]. The player prefetches fragments for each possible path into a prefetch buffer and manages the content of the video [8]. Prefetching is also useful to provide attractive recommended video choices to users, who often switch quickly between different videos. Studies show that today's users of online video on-demand (VoD) often switch the video being streamed within a few minutes of viewing [11]. These recommended videos need to be prefetched and buffer-managed in parallel with the video being played to achieve instant playback when a user selects one of them [9].
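As a sketch of this idea, assuming the branch structure of Figure 2.4 and a hypothetical fragment URL scheme, the set of fragments to prefetch ahead of a branch point could be computed as:

```go
package main

import "fmt"

// branches maps a video segment to the segments reachable from it,
// mirroring the branch structure of Figure 2.4 (1 -> 2 or 3; 3 -> 4 or 5).
var branches = map[int][]int{
	1: {2, 3},
	3: {4, 5},
}

// prefetchTargets returns the first fragment of every path reachable
// from the segment currently being played, so that whichever branch
// the viewer picks can start playing instantly.
func prefetchTargets(current int) []string {
	var urls []string
	for _, next := range branches[current] {
		urls = append(urls, fmt.Sprintf("/video/segment%d/frag0.m4s", next))
	}
	return urls
}

func main() {
	fmt.Println(prefetchTargets(1)) // prints: [/video/segment2/frag0.m4s /video/segment3/frag0.m4s]
}
```

A real prefetcher would additionally rank these candidates by likelihood of selection and by deadline, as the policies below discuss.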



Branched video

Krishnamoorthi et al. [10] define branched video as a traditional linear HAS (HTTP Adaptive Streaming) video that allows the video designer to define arbitrary playback sequences through the underlying linear video and allows the user to choose between alternative playback sequences. The paper covers the design of optimized prefetching policies and buffer management schemes. The goal is to allow seamless playback even if the user switches from one branch path to another at the last possible moment, giving uninterrupted playback and maximizing the video quality.

Alternative videos

Krishnamoorthi et al. [9] cover the design, implementation and evaluation of an HAS framework that provides prefetching and buffer management for alternative videos. The prefetching design and performance were based on three policy classes called best-effort, token-based and deadline-based. In best-effort policies, chunks from the alternative videos are prefetched only when the buffer occupancy reaches the maximum value Tmax. Token-based policies prefetch chunks in the same way as best-effort policies, but the two differ in how they decide which video to prefetch from next. The token-based method uses a constant rate to determine when to prefetch fragments of a video. This policy provides greater control over when an alternative video may first be prefetched. Deadline-based policies, as opposed to token-based policies, provide specific deadlines by which the alternative videos need to be done prefetching, rather than specifying when to start prefetching them. This is done to ensure that the alternative videos will be prefetched in time to be played and that the video is of the highest quality. These policies are important for users that prefer smooth switching between the alternative videos.
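Two of the decisions above can be sketched as small predicates. The names, signatures and the simplified deadline formula are our own illustration, not code from [9]:

```go
package main

import "fmt"

// bestEffortPrefetch reports whether an alternative-video chunk may be
// fetched under a best-effort policy: only once the main video's
// buffer has reached Tmax, so prefetching never competes with the
// primary stream.
func bestEffortPrefetch(bufferSeconds, tmax float64) bool {
	return bufferSeconds >= tmax
}

// deadlineOK reports whether, at the estimated bandwidth, a chunk of
// the given size can still be downloaded before its deadline. This is
// a simplified version of a deadline-based feasibility check.
func deadlineOK(chunkBits, bandwidthBps, secondsLeft float64) bool {
	return chunkBits/bandwidthBps <= secondsLeft
}

func main() {
	fmt.Println(bestEffortPrefetch(12, 30)) // prints: false (buffer not yet at Tmax)
	fmt.Println(deadlineOK(4e6, 2e6, 3))    // prints: true (a 2 s download fits a 3 s deadline)
}
```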

Multi-video stream bundles

Carlsson et al. [2] introduce and present a system design of a general multi-video stream bundle framework. A multi-video stream bundle consists of multiple "parallel" video streams which are synchronized in time. Each of these video streams provides the video from a different camera capturing the same shot or movie. The idea is, for example, to allow producers to give users the freedom to switch between different perspectives at different times. The framework builds on the quality-adaptive features and time-based chunking of HAS, while also including adaptation in both rate and content.

Predictive prefetching

Many factors add delays to resources served over HTTP, such as the disk latency of the server or the time it takes for the web client to process the received data. The use of predictive prefetching can address some of the latency issues on the web. Predictive prefetching basically means having resources that are likely to be accessed later prefetched by the client. This in turn limits the load time for these resources [12].

2.5 Related works

There are some papers that cover implementations of DASH featuring HTTP/2. Wei et al. [14] have focused on investigating the energy efficiency benefits server push could have for mobile clients in adaptive streaming. Their focus has been on testing whether using push could allow the mobile device's radio unit to go into sleep mode between bulks of pushed video fragments. They describe the HTTP/2 server push functionality as "[...] an elegant way of changing the HTTP request schedule in video streaming without compromising the scalability of HTTP streaming or making changes to the HTTP resources.". A method they referred to as the K-push strategy was introduced, where the server pushes the k subsequent video fragments for the given bit-rate when the client sends a request for a video fragment of that bit-rate. This strategy limits the number of requests the client has to send to the server, as well as transmitting a set of fragments in bulk rather than spreading them out over a given interval. Sending fragments in bulk means that for a given period of time, when the sent fragments have been received and are waiting to be played, the radio unit can enter sleep mode until another request is sent for the next k fragments. One of the problems covered in the report is selecting a suitable value for k that allows the radio unit to enter sleep mode. This value can be derived from the power consumption model of the mobile device to reach an optimal value.
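A minimal sketch of the K-push idea in Go (the language used for our server) might look as follows. The URL scheme, parsing and handler wiring are our own illustration, while http.Pusher is the real Go 1.8+ push API:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strconv"
	"strings"
)

const k = 3 // how many subsequent fragments to push per request

// nextFragments lists the k fragments that follow the requested one,
// at the same bit-rate.
func nextFragments(bitrate string, index, k int) []string {
	urls := make([]string, 0, k)
	for i := index + 1; i <= index+k; i++ {
		urls = append(urls, fmt.Sprintf("/video/%s/frag%d.m4s", bitrate, i))
	}
	return urls
}

// fragmentHandler serves /video/<bitrate>/frag<N>.m4s and pushes the
// k following fragments of the same bit-rate over the same HTTP/2
// connection.
func fragmentHandler(w http.ResponseWriter, r *http.Request) {
	parts := strings.Split(strings.TrimPrefix(r.URL.Path, "/video/"), "/")
	if len(parts) != 2 {
		http.NotFound(w, r)
		return
	}
	bitrate := parts[0]
	n, err := strconv.Atoi(strings.TrimSuffix(strings.TrimPrefix(parts[1], "frag"), ".m4s"))
	if err != nil {
		http.NotFound(w, r)
		return
	}
	if pusher, ok := w.(http.Pusher); ok { // HTTP/2 connections only
		for _, url := range nextFragments(bitrate, n, k) {
			if err := pusher.Push(url, nil); err != nil {
				log.Printf("push failed: %v", err)
				break
			}
		}
	}
	w.Write([]byte("fragment data")) // placeholder for the real fragment bytes
}

func main() {
	http.HandleFunc("/video/", fragmentHandler)
	// A real deployment would continue with:
	//   log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
	fmt.Println(nextFragments("1200kbps", 4, k))
}
```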

Van der Hooft et al. [6] discuss the merits of an HTTP server push approach. The paper is a study based on first performing measurements of the available bandwidth in real 4G/LTE networks within the city of Ghent in Belgium, and secondly analyzing the induced bit-rate overhead for video fragments with a sub-second duration. They determined that the fragment duration should not be lower than 500 ms to limit the overhead to 9.2%. They have done the experiments using the MiniNet framework with a single client, streaming the encoded video from a Jetty server. The client is implemented on the libdash library, which is the official reference software of the MPEG-DASH standard. Their results showed higher video quality (+7.5%) and a lower freeze time (-50.4%) compared to solutions over HTTP/1.1.

Huysegems et al. [7] present ten HTTP/2-based methods to improve the QoE (Quality of Experience) of HAS. The paper covers how to improve the QoE of HAS by reducing the number of video freezes caused by rebuffering, reducing the number of quality level changes, reducing the latency for live streaming, and reducing the interactivity delay using HTTP/2 features. The main focus is to design and implement an HTTP/2 push approach for live streaming. The push-based strategy uses HTTP/2 server push to push very short fragments from server to client. Their results show that with an RTT of 300 ms the average server-to-display delay can be reduced by 90.1% and the average start-up delay by 40.1%.


3 Method

To test the viability of using HTTP/2 and server push in a video streaming context, we have carefully measured the impact of using HTTP/2 compared to HTTP/1.1. We have done this with two different types of prefetching: one linear prefetching algorithm and one non-linear prefetching algorithm. For these two types of prefetching we have conducted a series of tests. By doing this we have been able to determine whether or not HTTP/2 has any performance gains when using server push. We have investigated two factors identified as related to the overall user experience: buffer occupancy and video quality. The buffer occupancy is measured as the average seconds of video the player has buffered during playback. The overall player quality is the average quality level at which the player is requesting and playing fragments. To make it possible to focus on the main problem, we have used an existing DASH implementation and a web server that supports HTTP/2. However, to enable the HTTP/2 server push feature we have implemented server side logic using the programming language Go/1.8.1. Because it is also important to use a web client that supports HTTP/2, Opera has been selected to perform the tests.

3.1 Web server

When implementing DASH traditionally, most web servers are sufficient: no special web server support or server-side logic is needed to make DASH function beyond supporting the HTTP standard. Using HTTP/2 does not require any server-side logic either; a web server supporting HTTP/2 is sufficient. However, when implementing server push, some server-side logic is required to decide what data to push to the user.

When selecting the web server we wanted good support for HTTP/2; a conformance of over 90% is considered acceptable as long as there is full support for server push. Because HTTP/2 is such a wide, and relatively new, standard, 100% conformance is not expected of almost any modern web server, and our tests should not be affected by a small deviation as long as the deviation does not relate to a core HTTP/2 feature. To test this we used an RFC 7540 [1] and RFC 7541 [13] compliant conformance testing tool called h2spec. h2spec provides a test suite that can be run against any web server to check what features of HTTP/2 are supported.


Because we needed to implement some server-side logic to get our prefetching algorithms to work properly, we wanted a web server implementation that allowed us to customize what happens when a request is served. We therefore turned to the programming language Go, which has a built-in HTTP/2 package and support for server push. When we ran a conformance test using the h2spec testing tool, the HTTP/2 package in Go/1.8.1 reached a conformance of 92%. The tests showed that Go/1.8.1 had full support for server push. Because of this, Go/1.8.1 was selected as the web server implementation.

In the web server we have implemented bandwidth throttling, which limits the available bandwidth by delaying the outgoing data chunks according to a specified limit; this limit is given in Mbit/s and can be changed through a simple API. The bandwidth throttling is built with a steady function, meaning that the end result is stable: no deviations or overshoot in the supplied bandwidth occur. The throttling feature uses a timed loop to copy the response bytes at the desired rate. This rate is also divided by the number of concurrent connections to simulate one throttled connection. We also tested Wondershaper, a program to limit bandwidth, but it gave overshoot, which we did not want in our tests. The effect of the throttling code has been verified by downloading a large file and using the web client's speed indicator as reference.

3.2 Web client

The web client used in the tests had no specific requirements other than full support for HTTP/2. Opera/43.0.2442.1165 was used in the tests. An HTML web page was created with a dash.js client added. The page also features a live preview graph that gives instant insight into the test results and a simple set of selectors for adjusting the bandwidth throttling. These settings persist across test resets. The web page also features a timer; when the timer runs out, the test results are automatically stored to a file and the test is reset. This allows for running many consecutive tests without any interaction except for monitoring the overall health of the test.

3.3 Environment

The environment in which the tests ran was set up with the web server and the web browser with the DASH client running on the same virtual machine. The machine used for the tests with the linear prefetching algorithm was running Windows/10.0.14393 and had no software other than the browser and the server installed. For the tests with the non-linear prefetching algorithm we ran MacOS/10.12.3.

3.4 Linear prefetching

The linear prefetch strategy tested in this report is similar to the strategy explained in Section 2.5: the web server pushes the k next video fragments of the same bit-rate when a request for a specific fragment is received, as illustrated in Figure 3.1. The value of the parameter k, i.e. how many fragments to prefetch, has been varied across the range 0-4 to observe any trends in the overall performance impact for the end-user.


Figure 3.1: A request served with a K-push method.

We wrote some server-side logic to read the video fragment requests coming from the client in order to identify what fragments to push when using the linear prefetch method. When a fragment is requested, the Go web server reads the request and responds with the requested fragment; it also pushes the k next fragments of the same quality. To allow for easy changes of the value k, we added an API endpoint on the server that allows dynamic changes from the web client.
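A minimal sketch of this server-side logic is shown below. The /video/&lt;quality&gt;/seg&lt;n&gt;.m4s fragment layout and the function names are assumptions for illustration; the actual segment naming depends on the DASH packaging used.

```go
package main

import (
	"fmt"
	"net/http"
)

// k is the number of fragments to push after each request; in the thesis
// setup it is adjustable at runtime through a small API endpoint.
var k = 2

// nextFragments returns the URLs of the k fragments that follow segment seg
// at the same quality level (hypothetical URL scheme).
func nextFragments(quality, seg, k int) []string {
	urls := make([]string, 0, k)
	for i := 1; i <= k; i++ {
		urls = append(urls, fmt.Sprintf("/video/%d/seg%d.m4s", quality, seg+i))
	}
	return urls
}

// handleFragment serves the requested fragment and, when the connection is
// HTTP/2, pushes the k following fragments of the same quality. Over
// HTTP/1.1 the http.Pusher type assertion fails and no push happens.
func handleFragment(w http.ResponseWriter, r *http.Request, quality, seg int) {
	if pusher, ok := w.(http.Pusher); ok {
		for _, url := range nextFragments(quality, seg, k) {
			_ = pusher.Push(url, nil) // best effort; push errors are ignored
		}
	}
	http.ServeFile(w, r, fmt.Sprintf("video/%d/seg%d.m4s", quality, seg))
}

func main() {
	fmt.Println(nextFragments(8, 10, 2))
}
```

Note that Go's net/http enables HTTP/2, and thereby `http.Pusher`, only when serving over TLS.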

To fully investigate the gains of using HTTP/2 with server push when consuming linear media, six different test configurations were used; two of these, the configurations with HTTP/1.1 and HTTP/2 without server push, serve as baselines. One test produces one data point per second, that is, 180 data points for average quality and 180 data points for average buffer occupancy. The test using HTTP/2 without server push is compared to the test with HTTP/1.1. The tests using server push, with a linear prefetch of 1-4 fragments, are compared to the HTTP/2 baseline test. For each of the six test configurations we have run 10 tests. The results of these 10 tests per configuration have then been averaged across every second of the test, so that, after the first 60 seconds have been discarded, the 10 tests of a given configuration are reduced to one data set of 120 data points for quality and 120 data points for buffer occupancy, representing the total average results for that configuration.

With HTTP/1.1

A simple video streaming test with dash.js using HTTP/1.1 was conducted and serves as the HTTP/1.1 baseline for the following two tests. The test was conducted by adding a dash.js video player to a simple web page and rendering it in the browser without HTTP/2 enabled on the server. The video then started and played for 180 seconds, after which the results were automatically stored and the test restarted; this procedure was repeated 10 times. Network throttling was enabled to limit the available bandwidth to 2.5 Mbit/s.

With HTTP/2

An HTTP/2 baseline test was also run, to compare with the next test. The playback process and bandwidth throttling were the same as in the previous test. This baseline ensures that no HTTP/2 feature other than server push, such as multiplexing, is misinterpreted as a performance gain from using server push.

With HTTP/2 and server push

In this test the linear prefetch strategy was implemented and enabled. The video playback process and bandwidth throttling were the same as in the previous tests. A few different values of the parameter k were tested, the results of which can be found in Chapter 4.

Extracting data

Player quality and buffer occupancy were logged in JavaScript once a second during video playback. At the end of the test, at 180 seconds, the data collected in JavaScript was posted


to the web server. The data was dumped to a file named according to the settings used and the time at which the test finished. When 10 tests had been run for every configuration we combined the results and averaged them. A separate program built in Go reads the relevant collection of files depending on the settings used during the test and synchronizes the time-stamps. It then averages the quality and buffer occupancy for every time-stamp, removes the first 60 seconds of the tests and dumps the data to a single file. From this data a plot is generated in gnuplot, which is presented in Chapter 4.
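The per-timestamp averaging step can be sketched as follows. The function and variable names are our own; the real program additionally parses the result files and synchronizes time-stamps before averaging.

```go
package main

import "fmt"

// averageRuns averages several runs of per-second samples index by index and
// drops the first skip seconds. All runs are assumed to be time-synchronized
// and of equal length, as in the test setup described above.
func averageRuns(runs [][]float64, skip int) []float64 {
	if len(runs) == 0 {
		return nil
	}
	out := make([]float64, 0, len(runs[0])-skip)
	for t := skip; t < len(runs[0]); t++ {
		sum := 0.0
		for _, run := range runs {
			sum += run[t]
		}
		out = append(out, sum/float64(len(runs)))
	}
	return out
}

func main() {
	// Two toy runs of three seconds each; keep seconds 1 and 2.
	runs := [][]float64{{8, 9, 10}, {10, 9, 8}}
	fmt.Println(averageRuns(runs, 1))
}
```

In our setup skip would be 60, reducing 180 samples per run to the 120 averaged data points per configuration.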

Comparing the results

The first test performed measured the buffer occupancy and quality level with HTTP/2 and no linear prefetch. We then compared this result to the tests run over HTTP/2 with different numbers of linearly prefetched fragments. The last test again measured buffer occupancy and quality level, but with HTTP/2 disabled and no linear prefetch. To remove interference and irregularities caused by the start-up time of the video stream, the first 60 seconds have been cut out. We averaged the 10 tests for each configuration to account for deviations in the results.

Quality and buffer occupancy variation

To investigate how the playback quality and buffer occupancy vary, we have selected two measurements: the standard deviation and an equation inspired by a report by Yin et al. [15]. Using equations (3.1) and (3.2), inspired by Yin et al. [15], we can measure how the quality and buffer vary over time.

\frac{1}{K-1} \sum_{k=a}^{K} |q_{k-1} - q_k| \qquad (3.1)

\frac{1}{K-1} \sum_{k=a}^{K} |b_{k-1} - b_k| \qquad (3.2)

A high value indicates "choppiness" in the values and a low value indicates fewer changes in the data. K is defined as the total time the test ran, a is the point at which to start considering variations (in our case 60 seconds), q_k is the quality index at time k and b_k is the buffer occupancy at time k.
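Equations (3.1) and (3.2) can be computed with one small routine, sketched here under the assumption that samples are taken once per second so slice indices correspond to seconds; the function name is our own.

```go
package main

import (
	"fmt"
	"math"
)

// variation implements the metric of equations (3.1)/(3.2): the sum of
// absolute second-to-second changes from sample index a onwards, normalized
// by K-1, where K is the total number of samples.
func variation(v []float64, a int) float64 {
	sum := 0.0
	for k := a; k < len(v); k++ {
		sum += math.Abs(v[k-1] - v[k])
	}
	return sum / float64(len(v)-1)
}

func main() {
	q := []float64{1, 1, 2, 2, 3}
	fmt.Println(variation(q, 1)) // (0+1+0+1)/4 = 0.5
}
```

The same function is applied to the quality series for (3.1) and to the buffer occupancy series for (3.2), with a = 60 in our tests.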

3.5 Non-linear prefetching

We have also created an implementation that prefetches for branched media with the goal of reducing the initial load time when branching a video. The initial load time is calculated from the point the page has loaded until the video starts playing. By automatically branching the video after 60 seconds and sampling the initial load time with and without prefetching we have established some results. We have made two different tests with non-linear prefetching: the first with HTTP/2 server push and the second with client-side invoked HTTP requests, which we call HTTP pull. The difference between the two methods is that the HTTP pull method uses HTTP requests invoked at the client side to prefetch fragments of the upcoming branches, while the server push method uses server-side invoked pushes to push the fragments of the upcoming branches. We have made 15 tests in total to average the results: 5 tests without prefetching, 5 tests prefetching with server push and 5 tests prefetching with HTTP pull. A test is conducted by first playing a video, called stream 1. We sample the initial load time


for stream 1. After 60 seconds we branch to a second video, stream 2, and sample its initial load time. We repeat this procedure 5 times and average the initial load times for stream 1 and stream 2. We then enable our prefetching algorithm and repeat the tests. Our prefetching algorithm uses HTTP/2 server push to push the first 5 fragments of stream 2 as fragments for stream 1 are requested. The fragments pushed for stream 2 are of a pre-defined quality level, which is also the quality level enforced on stream 2 during the first 10 seconds of playback. The reason a pre-defined quality level has been selected is that dash.js would otherwise select a higher quality than can be played back, because of the near instant response time of the pushed fragments. We sample the initial load time in the same way as in the tests without prefetching. By comparing these results we can observe what impact our prefetching algorithm has on the initial load time of stream 2.
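The push side of this algorithm can be sketched as below. The /&lt;branch&gt;/&lt;quality&gt;/seg&lt;i&gt;.m4s naming scheme and function name are illustrative assumptions, not the thesis code.

```go
package main

import "fmt"

// branchPushURLs lists the first n fragments of each upcoming branch at the
// pre-defined quality level, i.e. the set of URLs the server would push
// alongside responses for the currently playing stream.
func branchPushURLs(branches []string, n, quality int) []string {
	var urls []string
	for _, b := range branches {
		for i := 1; i <= n; i++ {
			urls = append(urls, fmt.Sprintf("/%s/%d/seg%d.m4s", b, quality, i))
		}
	}
	return urls
}

func main() {
	// Push the first 5 fragments of stream 2 while stream 1 is playing.
	for _, u := range branchPushURLs([]string{"stream2"}, 5, 8) {
		fmt.Println(u)
	}
}
```

With HTTP/2 server push these URLs would be passed to `http.Pusher.Push`; with the HTTP pull variant the client requests the same URLs itself.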


4 Results

This chapter covers the results of the tests performed over HTTP/1.1 and HTTP/2 with and without linear prefetching and the tests with non-linear prefetching.

4.1 Video file

Quality indexes vary from 0-18; Table 4.1 shows the quality indexes with corresponding bit-rates for the video used in the tests. Although there are 19 available qualities, the results have been focused around quality indexes 8-11. It is noticeable that the bit-rate of the average quality is lower than the 2.5 Mbit/s we throttled to in our tests. This might be an indication that the throttling is unstable or incorrect. It may also be an indication of the connection being slower for other reasons, or of the video player selecting a lower quality than what could be played back given the available bandwidth.

Table 4.1: Quality index and corresponding bit-rates.

Quality  Bit-rate     Quality  Bit-rate     Quality  Bit-rate     Quality  Bit-rate
0        45.0 kbps    5        256.0 kbps   10       783.0 kbps   15       2.4 Mbps
1        89.0 kbps    6        323.0 kbps   11       1.0 Mbps     16       2.9 Mbps
2        129.0 kbps   7        378.0 kbps   12       1.2 Mbps     17       3.3 Mbps
3        177.0 kbps   8        509.0 kbps   13       1.5 Mbps     18       3.6 Mbps


4.2 Linear prefetching

HTTP/1.1

Figure 4.1 shows the results of the test run with HTTP/2 disabled, i.e. using HTTP/1.1. The plot shows the average buffer occupancy and quality level across all 10 tests for each second. We can observe that the quality level is rather stable and does not incrementally decrease or increase. The buffer occupancy level moves up and down as video is played (removed from the buffer) and downloaded (added to the buffer). Note that the first 60 seconds of the tests have been discarded to stabilize the results and remove irregularities in initial buffer time, as mentioned in Chapter 3.

Figure 4.1: Average quality level and buffer level when using HTTP/1.1

HTTP/2

Figure 4.2 shows a section of the average values for the tests conducted without linear prefetch. The quality line shows that the quality level is unstable and incrementally decreases throughout the playback.


Figure 4.2: Average quality level and buffer level over time when using HTTP/2 without linear prefetching


HTTP/2 - Server push enabled

Linear prefetch with one fragment

Figure 4.3 shows the average result of the tests conducted with linear prefetch of one fragment. Here too we can see that the quality level is unstable and incrementally decreases at the beginning of the tests, in the same way as in Figure 4.2. The decreasing quality could be an indication of an error when combining HTTP/2 server push and bandwidth estimation, although we have not been able to find the reason for this behaviour.


Figure 4.3: Average quality level and buffer level over time when using HTTP/2 and linear prefetch with k=1

Linear prefetch with two fragments

Figure 4.4 shows the average result of all 10 tests over HTTP/2 with linear prefetch of two fragments. The quality level line shows that the quality level is unstable, as in Figure 4.2 and Figure 4.3. This means there are no big differences in how the quality level behaves between these three prefetching settings.


Figure 4.4: Average quality level and buffer level over time when using HTTP/2 and linear prefetch with k=2


Linear prefetch with three fragments

The average buffer occupancy and quality level of all 10 tests with three prefetched fragments is shown in Figure 4.5 below. As seen in the plot, the quality level decreases throughout the playback.


Figure 4.5: Average quality level and buffer level over time when using HTTP/2 and linear prefetch with k=3

Linear prefetch with four fragments

The average result of linear prefetching with four fragments is shown in Figure 4.6. The average quality level incrementally decreases throughout the playback. We can notice some improvement in the quality level compared to linear prefetching of three fragments, seen in Figure 4.5.

Figure 4.6: Average quality level and buffer level over time when using HTTP/2 and linear prefetch with k=4


Comparison

Table 4.2 compares the results using HTTP/1.1 and HTTP/2. Note that the tests with HTTP/1.1 ran without TLS encryption. Our tests show that HTTP/2 has no major advantage or disadvantage over HTTP/1.1.

Table 4.2: Comparison between the test using HTTP/1.1 and HTTP/2

Push fragment   Avg. buffer level   Avg. quality level   Buffer gain   Quality gain
HTTP/1.1        10.93s              9.10                 (baseline)    (baseline)
HTTP/2 (k=0)    11.13s              8.49                 1.82%         -6.7%

A decrease of 6.7% in average quality is not a very significant change. Because the test totals 120 seconds, 6.7% only accounts for about 8 seconds of decreased quality on average for HTTP/2 compared to HTTP/1.1.

Table 4.3 shows the collected results of the tests we made with 2.5 Mbit/s bandwidth throttling. The gain indicates the change in relation to the HTTP/2 baseline. An overall positive gain in buffer level is shown in all the tests with server push enabled. However, the impact on the average player quality varies between the different values of k, showing a slight decrease for k=3 and a slight increase for k=1, 2 and 4. The maximum increase, at k=2, only accounts for about 6 seconds on average spent at a higher quality, meaning that the impact on the end-user is minimal. Table 4.3 also shows the confidence intervals for the quality level.

Table 4.3: Comparison between the tests using HTTP/2 and linear prefetching

Push fragment   Avg. buffer level   Avg. quality level   Buffer gain   Quality gain
K=0 (no push)   11.13s              8.49                 (baseline)    (baseline)
K=1             12.05s              8.70 ± 0.037         8.27%         2.47%
K=2             12.45s              8.88 ± 0.041         11.86%        4.59%
K=3             12.53s              8.18 ± 0.051         12.58%        -3.65%
K=4             11.63s              8.57 ± 0.07          4.49%         0.94%

Table 4.4 shows the standard deviation σ over the data points collected for each of the five tests with HTTP/2, as well as for HTTP/1.1. The variation has been measured on all data points, not the averaged results. The standard deviation indicates the stability of the collected results. It shows that the standard deviations for buffer and quality do not differ greatly between the k values, which means that the tests are stable.

Table 4.4: Standard deviation for all the tests

Test       σ for buffer   σ for quality
HTTP/1.1   0.12           0.74
K=0        0.30           0.67
K=1        0.32           0.66
K=2        0.49           0.73
K=3        0.48           0.90
K=4        0.31           1.24

These results show that it is viable to benefit from HTTP/2 features without compromising the stability of video streaming quality or buffer occupancy on the client side. However, the standard deviation only tells us how much the average quality and buffer occupancy vary, not how they vary.


Table 4.5 shows that the quality variation decreases slightly the more fragments we push. However, the variation in quality when HTTP/2 is enabled is still higher compared to HTTP/1.1. The tests run over HTTP/1.1 were more stable compared to HTTP/2, which may explain why the quality index varies more with HTTP/2.

Table 4.5: Variation for all the tests

Test       Variation of quality   Variation of buffer   Buffer change   Quality change
HTTP/1.1   0.05                   0.29                  (baseline)      (baseline)
k=0        0.14                   1.05                  1.66%           2.57%
k=1        0.11                   0.06                  1.02%           -0.80%
k=2        0.11                   0.65                  1.13%           1.22%
k=3        0.10                   0.26                  0.94%           -0.13%
k=4        0.08                   1.02                  0.57%           2.46%

Figure 4.7 shows the collected results for the linear prefetcher, including marked confidence intervals. The confidence of the results is fairly high.

Figure 4.7: Average quality change [%] against the number of prefetched fragments (k=1-4).


Figure 4.8 shows the variations compared to each other. We can notice that the variations in buffer occupancy are scattered, which means those results are less stable. Because of this it is harder to draw conclusions based on the average buffer occupancy measurement.


Figure 4.8: Variations for quality and buffer occupancy compared

4.3 Prefetching with branched video

Table 4.6 shows the results for the average load time in the tests with branched video. The load times are roughly the same when no prefetching is used, which is expected. When using server push to prefetch video data for stream 2, the total load time is 2.5% lower than without prefetching. The load time for stream 1 is higher, most likely because data for stream 2 is loaded simultaneously with stream 1. When using the HTTP pull method, the total load time is lowered by 34% compared to not using any prefetching. The load time for stream 1 is also increased, but not as much as in the tests with server push. Both the HTTP pull and the HTTP/2 server push methods load stream 2 simultaneously with stream 1 when prefetching; why the load times for stream 1 differ between these two methods is unknown.

Table 4.6: Load times with and without prefetching and comparison on total load time.

Prefetching      Load time stream 1   Load time stream 2   Total load time   Change
No prefetching   2836.33ms            2294ms               5130.33ms         (baseline)
Server push      4757.6ms             243.4ms              5001ms            -2.5%


5 Discussion

5.1 Linear prefetching

The effect of the slightly increased average quality that we observed when using server push is that the video player spends slightly more time playing at a higher quality. A decrease in average player quality is an indication that more time has been spent playing at a lower quality. It is generally desirable to have high player quality for as long as possible while still having enough buffer so that the player does not run out of video to play, resulting in stalls. However, the observed differences are rather small and do not indicate any significant differences between our tests. Because of this, other factors should also be considered when choosing whether or not to use HTTP/2 with server push based on linear prefetching.

Overall, in our tests, HTTP/2 has been slightly worse than HTTP/1.1. Because of this, the introduction of HTTP/2 in video streaming could cause some issues. It is, however, important to note that this conclusion is based entirely on the results shown in this thesis. HTTP/2 also has other features to benefit from that have not been investigated here. The fact that TLS encryption was used only for HTTP/2 must also be taken into consideration; HTTP/1.1 not using TLS could explain why it had slightly better results than HTTP/2.

The stability of the tests has been measured by looking at the variations of quality and buffer occupancy, as presented in Chapter 4. When using HTTP/2 without server push the variations were at their largest. When using server push the variations decreased slightly. HTTP/1.1 had the most stable results. Having a stable buffer occupancy is not all that relevant to the user's direct experience; the buffer occupancy only becomes relevant if it falls to 0 and the video stalls. We have not had any stalls in any of our tests. A stable quality is more noticeable to the user: if the quality varies too much too often, the user might get distracted from the video.

In all of the tests with HTTP/2 we can see a slight decrease in quality throughout the tests. We do not have a conclusive reason for this. It could relate to the server, the browser, the video streaming implementation or the underlying network infrastructure. One factor that can come into play is the fact that HTTP/2 uses a single long-lived TCP connection between the client and the server. The tests have also been run with no competing traffic on the network; it is possible that the introduction of competing traffic might change the results shown in our tests.


The observed gains when using server push compared to not using server push with HTTP/2 show a maximum quality increase of 4.59% at k=2. This is in line with what has been observed by Van der Hooft et al. [6], described in Chapter 2.

5.2 Non-linear prefetching

In Section 5.1 we discussed the viability of using HTTP/2 in video streaming based on the results from our tests. We have shown that using HTTP/2 to deliver video content has only a slight disadvantage. We have also shown that HTTP/2 can be used in non-linear prefetching to improve loading times for branched video.

For the non-linear prefetching method best-effort, presented in Chapter 2, video fragments of the branches available from the current video should be prefetched once the client buffer occupancy of the current video reaches a given value T_max. However, because server push relies on strictly server-side logic, the best-effort model is less suitable for use with server push, unless the client can signal when the value T_max has been reached, as the client alone knows the value of the buffer occupancy.

For token-based prefetching, the fragments of available branches are prefetched at a given rate s from a given start time t. If t = 0, meaning that the prefetching process begins instantly when the video is requested, server push can be used to periodically push fragments of the available branches until the client-server connection is closed. However, if t > 0 it is harder for the server to determine the actual value of t, since the client alone knows what point of the video is currently playing. Similarly to best-effort, this would require the client to signal the server to start pushing fragments for prefetching.

For deadline-based prefetching, the prefetching process is started at a point in time t = a − b so that every fragment to be prefetched has finished downloading at a given point in time t = a, where b is the time it would take to download all of the fragments to be prefetched. To determine b, the prefetching initializer needs to be aware of the file sizes and the currently available bandwidth. Because of this, the server would have to estimate the average upload bandwidth towards the client when using server push to prefetch.

The simplest solution for using server push when prefetching non-linear media, among those mentioned in this report, should be the token-based model, as it requires the least amount of server-side logic. However, using HTTP/2 server push in a long session is risky, as the persistence of the underlying TCP connection cannot be ensured. Therefore it is better to use server push at the start of a session rather than pushing some time after the initialization has begun. In our experiments we experienced these problems with the long-lived TCP connection. One reason could be a low TCP keep-alive setting that evicts the TCP connection prematurely while we are still pushing fragments, causing the server-client connection to stall. Because of this we chose to implement our prefetching for non-linear media at the start of a session. This has other benefits, such as the fact that users do not always wait until the end before switching video stream.

With the introduction of predictive methods, such as machine learning, to decide what fragments to prefetch, we see a much bigger benefit of using HTTP/2 server push. Because the predictive models often reside on the server side, the server can make automatic decisions on what fragments to push instead of requiring the client to actively fetch them. A simple example is a machine learning model that is trained with data about a user's context, such as time of day, location and age, whenever a user makes a choice of a branched video path. Based on that data, the server could predict the branches chosen by similar users and push the start of the branch that the user is most likely to take.


5.3 Prefetching branched video

When we implemented a simple prefetching algorithm for branched video, where the first 5 fragments of the available branches are pushed at the initialization of a video, we saw a significant decrease in the initial load time for the branches. This algorithm is based on the token-based model with t = 0, meaning that fragments start being pushed at the initialization of the first video. This means that the transition from one stream to the other is smoother even on connections with low bandwidth. More extensive testing could be made using this method by varying the number of fragments to prefetch and the number of branches available.

One improvement to the prefetching algorithm is to more intelligently select which quality to prefetch for the branches and, if there are multiple branches, which branches to actually prefetch. We suggest taking the average quality of the current video during the first x seconds of playback and using that quality when prefetching. This would match the prefetched quality with a quality suitable for the client's available bandwidth. Another thing to consider is the ability to program server-side logic that uses prediction to determine what branches to push to the user. This would allow the server to customize what branches are prefetched for a specific user based on the branch the user is most likely to proceed to. A benefit of placing the task of prefetching on the server side is that the server can make decisions based on observations from other users. A problem with doing this on the server side only is that the client video player cannot detect whether a fragment has been pushed using HTTP/2 server push. This could lead to the client missing the opportunity to play a prefetched fragment and in turn wasting data traffic.
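The suggested quality selection could be as simple as averaging the quality indices observed during the first seconds of playback; the following is a hypothetical helper, not part of the thesis implementation.

```go
package main

import "fmt"

// prefetchQuality picks the quality level to prefetch for a branch as the
// average (rounded down) of the quality indices played so far in the
// current video.
func prefetchQuality(history []int) int {
	if len(history) == 0 {
		return 0 // no playback data yet: fall back to the lowest quality
	}
	sum := 0
	for _, q := range history {
		sum += q
	}
	return sum / len(history)
}

func main() {
	// Quality indices observed during the first seconds of playback.
	fmt.Println(prefetchQuality([]int{8, 9, 10, 9})) // 9
}
```

Rounding down is a deliberately conservative choice: prefetching at or slightly below the recently played quality reduces the risk of pushing fragments the client cannot sustain.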

5.4 Method

To test the viability of using HTTP/2 with server push for video streaming we have implemented a simple linear prefetching algorithm. Due to some technical limitations and problems we have been unable to complete a full non-linear prefetching algorithm. This is mostly due to limitations in the standard library of Go/1.8.1 that won't allow us to initialize a push after a request has finished. This could of course be worked around by initializing a request from the client every time a push is wanted; however, this is somewhat contradictory to the point of eliminating unnecessary client requests by using server push in the first place. So as of now, a correct non-linear prefetching approach would be rather difficult to implement without changing the underlying software. More time to fine-tune the implementation would have given a higher chance of making a successful solution with a non-linear prefetching algorithm. We have also noted some stability problems when using server push over long-lived TCP connections, where the connection between server and client sometimes stalls. Possible improvements to the method could be to increase the stability of the network setup by fine-tuning the TCP keep-alive parameters. Another improvement is to increase the number of tests made and to run the tests under varied bandwidth conditions, which would be a closer representation of a real-world scenario. Also, testing other non-linear prefetching policies such as deadline-based and best-effort to compare with our prefetching algorithm might show other results, which could help to explain the relatively small gains observed when using HTTP/2 server push for prefetching non-linear media.

5.5 The work in a wider context

There are many advantages of prefetching for both linear and non-linear media. With the current development of media such as 360° video, virtual reality and branched video there is definitely a need to improve and adapt video streaming. Not only do users want to consume higher quality content but also more content overall, as mentioned in the introduction. Also,

(33)

5.5. The work in a wider context

users who live in areas with worse bandwidth conditions or users who pay per usage are es-pecially affected by small optimization’s. A wide introduction of HTTP/2 in video streaming could also have a negative effect, given the results from our investigations, as we have shown in an overall decreased quality level when using HTTP/2.


5.6 Conclusion

In our experiments we have tested HTTP/2 when using Dynamic Adaptive Streaming over HTTP and shown that HTTP/2 does not cause any large performance drops compared to HTTP/1.1. The experiments show that using HTTP/2 gives a minor drop in end-user video quality of 6.7%. Buffer occupancy is increased by 1.82% when using HTTP/2.

When using the HTTP/2 feature server push to implement a linear prefetch strategy, some performance gains have been shown. Average buffer occupancy is increased by up to 12.58% when using a linear prefetch of three fragments. Average video quality is increased by up to 4.59% when using a linear prefetch of two fragments. These gains are similar to those observed in other related papers. These results show the viability of using HTTP/2 server push to implement linear prefetching in DASH. Our tests have also shown a constant decrease in video quality when using HTTP/2, but a conclusive reason behind this has not been determined.

Non-linear prefetching for branched video has also been discussed and has been shown to be viable, especially when combined with a predictive prefetching pattern where resources are prefetched based on server-side predictions. In this report three different approaches to non-linear prefetching have been investigated: best-effort, token-based and deadline-based. We have also made a simple implementation of a non-linear prefetcher based on the token-based model. Out of the three models mentioned, token-based has been identified as the simplest method to implement in practice.

We also implemented two non-linear prefetching algorithms, one using HTTP/2 server push and one using HTTP pull. Of these, the HTTP pull algorithm has shown the greatest overall improvement to user experience, with a lowered total load time of 34%.

The end result of improving video streaming is a faster and more stable experience for the end user. Improvements also benefit users who have limited access to bandwidth. This is important because video streaming is becoming increasingly popular.


