
Master of Science Thesis in Electrical Engineering

Department of Electrical Engineering, Linköping University, 2018

Measuring one-way Packet Delay in a Radio Network

Daniel Fahlborg


Master of Science Thesis in Electrical Engineering

Measuring one-way Packet Delay in a Radio Network

Daniel Fahlborg

LiTH-ISY-EX–18/5144–SE

Supervisor: Staffan Wiklund
  Ericsson, BID Test Tools
Özgecan Özdogan
  ISY, Linköping University

Examiner: Danyo Danev
  ISY, Linköping University

Communication Systems (KS)
Department of Electrical Engineering
Linköping University
SE-581 83 Linköping, Sweden

Copyright © 2018 Daniel Fahlborg


Abstract

Radio networks are expanding, becoming more advanced, and pushing the limits of what is possible. Services utilizing the radio networks are also being developed in order to provide new functionality to end-users worldwide. When discussing 5G radio networks, concepts such as driverless vehicles, drones and near-zero communication delay are recurrent. However, measures of delay are needed in order to verify that such services can be provided, and measuring this is an extensive task. Ericsson has developed a platform for simulating the radio environment surrounding a radio base station. Using this simulator, this project involved measuring one-way packet delay in a radio network and performing a Quality of Service evaluation of a radio network with a number of network applications in mind. Application data corresponding to video streams or Voice over IP conversations was simulated, and packet delay measurements were used to calculate and evaluate the Quality of Service provided by a radio network. One of the main conclusions of this project was that packet delay variations are asymmetric in uplink, which suggests the usage of non-conventional jitter measurement techniques.


Contents

Notation

1 Introduction
  1.1 Motivation
  1.2 Purpose
  1.3 Problem formulation
  1.4 Limitations

2 Background
  2.1 LTE Overview
    2.1.1 The Packet Data Convergence Protocol - PDCP
    2.1.2 The Radio Link Control protocol - RLC
    2.1.3 The Medium Access Control protocol - MAC
    2.1.4 Physical layer
  2.2 CSIM-FsUE – System Description
    2.2.1 Central Processing Module - CPM
    2.2.2 Data generator
    2.2.3 Lightweight Processing Platform
  2.3 Quality of Service
    2.3.1 Measuring Quality of Service
    2.3.2 Quality of Service, statistic measures
  2.4 Quality of service Class Identifier - QCI

3 Implementation
  3.1 Implementation Strategy
  3.2 CPM signaling
  3.3 Defining test cases
  3.4 Calculating Quality of Service Measurements
    3.4.1 Comparing with Quality of Service requirements

4 Result
  4.1 Implementation
  4.2 Quality of Service measurements
  4.3 Test cases
  4.4 Packet delay measurements

5 Discussion
  5.1 Result
  5.2 Method
    5.2.1 Implementation strategy
    5.2.2 Quality of Service measurements
  5.3 Future work
  5.4 Conclusions


Notation

Terms

Quality of Service: General, standardized set of measures used for evaluation of e.g. network services.

Jitter: Quality of Service measure, describing how packet delay measurements vary. Should not be mistaken for measures of signal variations when reading this report.

eNodeB: Base station used in a radio network.

Voice over IP: A network service allowing for voice conversations.


Abbreviations

CN: Core Network
CM: Common Memory
CPM: Central Processing Module
CSIM-FsUE: Component based Simulator - Full stack User equipment
DSP: Digital Signal Processor
EM: External Memory
EMCA: Ericsson Multi Core Architecture
GBR: Guaranteed Bit Rate
GTE: Generic Test Environment
ipdv: IP Packet Delay Variation
LDM: Local Direct Memory
LTE: Long Term Evolution
LPP: Lightweight Processing Platform
MAC: Medium Access Control (protocol)
MME: Mobility Management Entity
MTU: Maximum Transmission Unit
NAS: Non Access Stratum (protocol)
PDCP: Packet Data Convergence Protocol
PHY: Physical layer
RLC: Radio Link Control (protocol)
RRC: Radio Resource Control (protocol)
SGW: Serving Gateway
UE: User Equipment


1

Introduction

Ericsson is competing to become the market-leading supplier of 5G radio technology, developing new base stations and tools for system validation and evaluation. New applications, relating to concepts such as Internet of Things and Voice over IP, are pushing the requirements on base station performance in handling more connected devices with higher data transfer rates and higher demands on reliability. Performance, or the quality of the developed technology, needs to be evaluated, compared and presented in order for Ericsson to become market leading. It is essential to verify that new base stations can withstand the demands from upcoming scenarios and new radio network applications.

In an LTE (Long Term Evolution) radio network, data rates may exceed 300 Mb/s in downlink and 75 Mb/s in uplink, as stated in the article by Larmo et al. [9]. To implement high-quality packet-based networks there is a need for measuring and optimizing the network with respect to Quality of Service. One measure of quality is packet delay, defined as the time elapsed from when a packet is sent until it is received. A high packet delay means, in some applications, a low experienced quality seen from a user point of view. Larmo et al. also state that "the one-way latency target is set to be less than 5 ms between terminal and base station". For applications such as Voice over IP, another quality measure of great importance is the variation of packet delay as a function of time, as such applications rely on a steady stream of data. Users of applications such as Voice over IP or video streaming expect a continuous flow of information, where buffers are used to compensate for packet delay variations. However, if variations are larger than the buffering capacity, this will cause disruptions to be experienced by the user.

This project focused on measuring the Quality of Service provided by a radio network during simulated test cases. Measuring quality involves observing different sets of measures, depending on the simulated application. During this project, Quality of Service measurements are considered quantitative measures, differentiated from Quality of Experience measurements, which involve evaluation of the actual end-user experience. CSIM-FsUE, the Component based Simulator, is a tool for system verification of radio base stations, developed by Ericsson. This tool provides an environment where implementation of measures reflecting Quality of Service is possible to a unique extent. This project offers the opportunity to investigate how the parameters describing a simulation affect the Quality of Service, and to accurately measure one-way packet latency in both uplink and downlink.

1.1

Motivation

The CSIM-FsUE simulates the radio environment surrounding a base station, including User Equipment (UE) and Core Network (CN). Since the CSIM-FsUE simulates both UE and CN, the one-way propagation delay of a data packet can be calculated as the time difference between the time of transmission and the time of reception. These simulated test cases correspond realistically to real-time usage of a radio network under perfect radio conditions, where the physical radio channel has zero probability of packet loss.

Many networking systems are evaluated by measuring the round-trip time, meaning the time taken from packet transmission to reception of a corresponding reply. In some applications the round-trip time corresponds exactly to the delay experienced by a user. However, this method of measuring packet delay does not provide any information on how delays differ from User Equipment to Core Network (uplink) and from Core Network to User Equipment (downlink). Additionally, such measures provide little or no information regarding packet delays for a stream of packets. Depending on the application, delays in uplink and downlink may have different impact on the experienced quality seen from a user point of view, hence motivating one-way packet delay measurements in both uplink and downlink directions.

1.2

Purpose

The main purpose of measuring packet delays, as seen from Ericsson's point of view, is to gain the ability to measure and evaluate Quality of Service in simulated test cases. The simulator CSIM-FsUE is a verification tool used to test and verify the performance of a base station. Besides covering more aspects of verification and comparison of base stations, packet delay measurements would also allow for Quality of Service evaluation of all protocols involved in a radio network.


1.3

Problem formulation

• How do simulation parameters affect Quality of Service?

• Is the system behaving as expected in terms of packet delay?

• Can packet delay measurements be implemented without affecting performance of the simulator at high data load?

• Are Quality of Service measurements efficiently descriptive?

1.4

Limitations

Technical details regarding base stations and results from simulations are considered sensitive information and thus cannot be revealed in this report. This includes measures of packet delay and Quality of Service.

Implementation is restricted to the CSIM-FsUE, and no changes are to be made in the software of base stations used in simulations. At an early stage, an idea was to implement packet latency measures on the lowest layer in the protocol stack, in order to measure delay in base stations only and minimize influence from the simulator. Packets are fragmented, modulated and encrypted when received by the lowest protocol layers, making it hard to identify packets and compare them to packets received or sent in the Core Network. This problem was deemed too hard to solve given the time restrictions.

The hardware used for simulating network application data has a limited memory capacity, which has an impact on the implementation strategy. Added functionality cannot be allowed to affect the overall performance of the simulation process and needs to be restrictive in its memory usage.


2

Background

In order to understand what causes packet delay, it is essential to understand the radio network itself. A radio network is a large and complex system involving end-users, charging operators, interchange of information between base stations, load balancing, and a changing radio environment. Radio networks can be configured in numerous ways in order to provide benefits in terms of scalability. This chapter will present the most fundamental functionality provided by a radio network, in preparation for later discussions on how it may affect Quality of Service. This chapter also describes the simulation process of CSIM-FsUE, how data is generated, and how the radio environment surrounding a base station is simulated. The final sections of this chapter discuss the concept of Quality of Service and how measurements should be taken and interpreted in certain scenarios.

2.1

LTE Overview

The article by Okubo et al. [13] provides an overview of a complete LTE network and is used as a primary source of information for this section. LTE base stations, referred to as eNodeBs, are connected to other nearby eNodeBs and to the Evolved Packet Core (EPC) core network. The EPC accommodates the LTE and separates the handling of application data (user plane data) and control plane data. Control plane data is used for authentication and for communicating parameters and instructions to all underlying protocols on how application data is to be transferred. All control plane data is managed by a Mobility Management Entity (MME) and all user plane data is managed and forwarded by a Serving Gateway (SGW) in order to reach the Internet.

Each eNodeB has the responsibility of Radio Resource Management, such as call admission control, handover, bearer management, scheduling of shared data channels, and all other forms of radio control. These management tasks are implemented in separate radio network protocols. The protocols add header information to each data packet, to be interpreted by the receiving counterpart of each protocol. The following is a listing with a description of relevant features in these protocols. A radio network implements complex automatic routines for managing users in a large number of scenarios. However, such routines and control plane data were not considered for measurements during this project. Thus, this report will not account for any deeper explanation of the extensive functionalities provided by the control plane. Figure 2.1 illustrates relations between some of the key components of an LTE network.

Figure 2.1: Radio overview - A radio network consists of multiple complex components, utilizing special internal protocols for data transfer and control.

Radio networks implement a series of communication protocols in a similar manner as for internet communication, divided into three groups. Layer 3 is the highest of the protocol layers and corresponds to the control plane, implementing the protocols Radio Resource Control (RRC) and Non Access Stratum (NAS). Layer 2 refers to the Packet Data Convergence Protocol (PDCP), the Radio Link Control protocol (RLC) and the Medium Access Control protocol (MAC). Layer 1 refers to the Physical layer (PHY). These protocols are part of a standard for how to transfer data and communicate within a radio network and are implemented in base stations and in all UEs that are able to connect to the radio network.

2.1.1

The Packet Data Convergence Protocol - PDCP

At connection setup between a UE and an eNodeB, critical and sensitive data regarding authorization is exchanged in order for the MME to grant access to the radio network. Application data can also contain sensitive or private information, and one of the main tasks of the PDCP is to encrypt this data using advanced ciphering/deciphering keys and algorithms. The PDCP also implements Robust Header Compression (ROHC), meaning that IP packet headers are compressed and decompressed. Fields in the IP header that do not change between packets are stored and removed before transmission over the radio link channel, to be reassembled in the receiving PDCP before forwarding. ROHC has a significant impact on data load when the size of the actual data in packets is small in comparison to the IP header, e.g. in applications such as Voice over IP, where many small packets are transmitted at constant rates with tough demands on delay. ROHC operates in several modes, all thoroughly described in [1]. PDCP entities have one configuration for each radio bearer, specified by the MME.

Transmitted data packets are fitted with a PDCP header, integrity protected, ciphered, compressed and transferred to the underlying RLC protocol. Received data packets are taken from the RLC protocol, deciphered, decompressed and transferred to an application, as for UEs, or to the SGW, as for eNodeBs.
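A rough back-of-the-envelope sketch illustrates why ROHC matters for small-packet services such as Voice over IP. The 40-byte IPv4/UDP/RTP header total, the 32-byte voice payload and the 3-byte compressed header are illustrative assumptions, not figures from this thesis:

```python
# Illustrative numbers (assumed): IPv4 + UDP + RTP headers total 40 bytes;
# a typical compressed voice frame is around 32 bytes of payload.
header_bytes, payload_bytes = 40, 32
uncompressed = header_bytes + payload_bytes

# ROHC can compress the static header fields down to a few bytes.
rohc_header_bytes = 3
compressed = rohc_header_bytes + payload_bytes

print(f"header overhead without ROHC: {header_bytes / uncompressed:.0%}")
print(f"bytes saved per packet: {uncompressed - compressed}")
```

With these assumed numbers, more than half of every uncompressed packet is header, which is why the impact on data load is significant precisely when payloads are small.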

2.1.2

The Radio Link Control protocol - RLC

As for PDCP, the MME specifies configurations for each RLC entity and radio bearer. The RLC protocol has three modes: Acknowledged Mode (AM), Unacknowledged Mode (UM) and Transparent Mode (TM), in which packets simply pass through the protocol without any changes. In RLC-AM all transmitted packets are acknowledged by the receiving RLC, and retransmitted via the Automatic Repeat Request (ARQ) mechanism if a packet was lost.

Segmentation and concatenation of packets is one of the main tasks of the RLC protocol (AM and UM). Packets are divided into lengths suitable to fit in transport blocks. Note that no segmentation or concatenation is performed when the RLC protocol operates in TM. At the receiving side, RLC also performs reordering of packets and duplicate removal.

Transmitted data packets are segmented, fitted with an RLC header and transferred to the underlying MAC protocol. Received data packets are taken from the MAC protocol, acknowledged (RLC-AM), reordered, concatenated and transferred to the higher protocol, PDCP.

2.1.3

The Medium Access Control protocol - MAC

This protocol implements scheduling, which is a key component in a radio network, involving distribution of shared channel resources. Multiple UEs are likely to be connected to each base station and the number of available channels is limited. The scheduling algorithm controls how data is transferred over each channel, based on channel quality and available resources. In downlink, the eNodeB scheduler determines how to multiplex packets on a transport block, and decides upon which radio bearer and UE to transmit the transport block to. In uplink, the eNodeB scheduler distributes and allocates resources to connected UEs, which in turn multiplex packets on a transport block to be transmitted over a radio bearer. The MME may affect scheduling by prioritizing certain packets in order to provide e.g. guaranteed bit rates. Scheduling involves providing data transfer on a channel at certain Quality of Service levels, specified by a Quality of service Class Identifier (QCI) level. Scheduling also involves decisions on modulation and coding scheme, and whether or not to use MIMO (Multiple Input, Multiple Output) or beamforming [9].

The MAC protocol also implements, in addition to the ARQ recovery of lost packets used in the RLC protocol, a Hybrid ARQ (HARQ) with fast retransmission of transport blocks. Together, these two ARQ-based packet loss recovery mechanisms can provide a low probability of packet loss in an efficient manner.

2.1.4

Physical layer

The Physical layer, also referred to as layer 1, provides the most fundamental functionality of a radio network: transmission and reception of data between the UE and the base station. Information bits are modulated into analog signals to be carried at frequencies specific for each radio channel. Quadrature Amplitude Modulation (QAM), described in detail in a book by Hanzo et al. [7], can be used in different modes to carry data of different sizes. E.g. a 16QAM modulation implies that data is modulated into signals, each carrying symbols with a value range of 0-15, or in other words 4 bits. 16QAM, 64QAM, 128QAM and 256QAM are in common usage, where the choice of modulation scheme depends on channel quality. A modulation scheme with a higher value range provides a higher bit rate, but is more sensitive to channel distortions. The Physical layer also implements a 24-bit Cyclic Redundancy Check (CRC) checksum, allowing for detection of bit errors and rejection of distorted packets. A detailed CRC description is provided by McDaniel [12].
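The relation between modulation order and bits per symbol described above is simply log2(M) for M-QAM; a minimal sketch:

```python
import math

def bits_per_symbol(modulation_order: int) -> int:
    """Bits carried by one symbol of an M-QAM constellation: log2(M)."""
    return int(math.log2(modulation_order))

# A 16QAM symbol carries values in the range 0-15, i.e. 4 bits.
for m in (16, 64, 128, 256):
    print(f"{m}QAM carries {bits_per_symbol(m)} bits per symbol")
```

This makes the bit-rate trade-off concrete: moving from 16QAM to 256QAM doubles the bits per symbol, at the cost of a constellation that is more sensitive to channel distortion.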

2.2

CSIM-FsUE – System Description

The Component based Simulator - Full stack User equipment (CSIM-FsUE) simulates a radio network surrounding a base station, including all radio communication protocols used in production. A base station is connected to the simulated core network via a network interface and to the simulated UEs via a radio interface. Data is transferred over a physical cable rather than an air interface, meaning that the CSIM-FsUE always operates in a perfect radio environment with low or no packet loss.

An Ericsson Multi Core Architecture (EMCA) is a hardware unit commonly used for data processing in Ericsson radio products, utilizing many DSP cores (Digital Signal Processors). The multi-core architecture of the EMCAs allows for processing of many instructions in parallel threads. An EMCA utilizes three memory types: Local Direct Memory (LDM), Common Memory (CM) and External Memory (EM), where each DSP has direct and instant access to one LDM. CM and EM are common for all DSPs within an EMCA but differ in storage capacity and access time.

The CSIM-FsUE utilizes a Generic Test Environment (GTE) server for control applications such as MSUE, MME and SGW, executing in an Erlang environment. This component manages test cases and UE and CN control plane data. The GTE also implements a Radio Resource Control (RRC), informing the base station about the current radio conditions of the simulated UEs. A Central Processing Module (CPM) component implements Erlang interface logic between the GTE and applications running on the EMCAs.

Figure 2.2: CSIM-FsUE overview - An illustration of the GTE Server, CPM and a number of EMCAs, in relation to the eNodeB base station.

2.2.1

Central Processing Module - CPM

The simulator utilizes a main processor, mostly referred to as the Central Processing Module (CPM), providing an interface between the GTE and the simulation process. The CPM implements separate controller modules for the radio communication protocols, data generator, serving gateway and MME, all implemented in the programming language Erlang. Test cases can be defined on top of the controller modules in the CPM by an operator, handling states and defining the simulation process via various parameters and scripts. The CPM communicates simulation instructions to the EMCAs using predefined signals.

All data traffic is generated, sent and received on the EMCAs, according to instructions provided by the overlying GTE, via the CPM. Data traffic in uplink (from UE to CN) and downlink (from CN to UE) is handled independently on separate EMCAs. The communication protocols are distributed over the EMCAs in order to facilitate load balancing, as high throughput and massive data load may be intended for verification of base station performance.


2.2.2

Data generator

The data generator is a software module running within an EMCA, used to create, send and receive IP data packets. The data generator is used in simulations to create and transmit data in both uplink and downlink, and is thus the software simulating the UEs and the CN. The test cases in the GTE server provide instructions on how data shall be generated, characterized by the parameters QCI (Quality of service Class Identifier), Number of UEs, Simulation time, Packet size, Number of packets per burst, and Data interval.

The QCI is further described in Section 2.4 and provides certain Quality of Service guarantees. The Packet size determines the size of all generated packets. Number of packets per burst allows for sending several packets at once, and the Data interval parameter determines the time interval between each burst of packets. The number of packets per burst is, in general, increased when the amount of data transmitted in a burst exceeds the Maximum Transmission Unit (MTU), described in the IP protocol. The MTU often limits packets to a maximum size of 1500 bytes.

2.2.3

Lightweight Processing Platform

Lightweight Processing Platform (LPP) is a software platform running within each EMCA, providing an extension of the C library for programming instructions. LPP provides an Application Programming Interface (API) for signaling, memory management, parallel processes, and handling semaphores. The LPP API also implements timer functionality, essential for measuring packet delay, generating timestamps in BFN (eNodeB Frame Number) that can be translated to milliseconds or microseconds. Hardware components communicate via predefined signals, with unique signal numbers and a fixed structure for the data carried within.

2.3

Quality of Service

Crosby states in an article [3] that quality is an elusive and indistinct construct, often mistaken for adjectives like "goodness, or luxury, or shininess or weight". It is also stated that "Quality is zero defects - doing it right the first time". This suggests that measuring quality involves measuring and weighting mistakes or problems in a service. This section will clarify what is necessary to measure in order to justify claims of high or poor quality in some applications of special interest. It is important to consider the application when measuring Quality of Service, as different measures have more or less importance for the experienced quality, depending on the application.

Quality of Service measurements are preferably used to gain prior knowledge about Quality of Experience. García-de-Blas et al. [4] describe a method for measuring Quality of Experience, considering both data provided by end users rating the quality of a service application to form a mean opinion score, and advanced measuring techniques such as Perceptual Evaluation of Video Quality, considering visual defects in video image frames, and DiversifEye, considering the flow of data in a video stream. In [4] it is also stated that waiting time is less relevant for the purposes of perceived quality of a video streaming service. For video streaming applications, these methods of measuring Quality of Experience suggest that Quality of Service measurements should reflect the amount of transferred data per time unit, in order to present a high-quality image to the user. Here, Quality of Service measurements should also reflect the data flow provided by the service, and waiting time is considered less relevant.

For Voice over IP applications, Markopoulou et al. [11] state that quality is reflected by delay and signal distortion. When users communicate using Voice over IP, delay will affect conversations in an unnatural manner. Users expect an immediate response from the counterpart, or else the Quality of Experience will be poorly rated. Distortions appear where IP packets are lost or distorted, or when sound is aggressively compressed. When measuring Quality of Service for Voice over IP applications, measurements need to reflect packet delay, packet loss and the amount of transferred data per time unit.

For the application of TCP file download, involving retransmission of lost packets and a guaranteed successful file transfer, it is essential to have a low bandwidth-delay product, as stated by Larmo et al. [9]. This implies tough demands on packet delay, especially at high data rates, in order to achieve good Quality of Experience ratings. Additionally, TCP file download applications are sensitive to packet loss in the sense of Quality of Experience, as files may not be delivered and presented until all packets are successfully received. When measuring Quality of Service for TCP file download applications, measurements need to reflect packet delay and packet loss.

2.3.1

Measuring Quality of Service

Demichelis et al. [5] contributed an extensive methodology for measuring an IP Packet Delay Variation (ipdv) Metric for IP Performance Metrics (IPPM). This involves measuring one-way packet delay and calculating one-way packet delay variation in a way that reflects Quality of Service. This metric is sometimes referred to as packet "Jitter" and is commonly used in Quality of Service evaluations. The suggested algorithm requires registered timestamps of each transferred packet at transmission and reception. The one-way delay of packet i is then defined as

d_i = t_{i,reception} - t_{i,transmission}    (2.1)

from which an ipdv metric can be calculated. The ipdv metric is defined as

dd_k = d_i - d_j,    i, j ∈ I_k    (2.2)

for an interval of measured one-way packet delays in a stream of packets, where i and j are packet identifiers and I_k specifies an interval of packets in the stream. A selection function is used to select two representative packets from the interval, from which the ipdv value is calculated. Multiple selection functions are suggested in [5]. For instance, the simplest selection function is

dd_k = d_i - d_{i+1}    (2.3)

where I_k is every in-order pair of packet delay measurements. Another suggested selection function is the maximum ipdv selection function

dd_{k,max} = max_{i,j ∈ I_k} (d_i - d_j)    (2.4)

resulting in the maximum ipdv in each subinterval of a packet stream. Choosing the maximum delay variation selection function is motivated for streams of packets, as in e.g. video streaming applications, where buffers need to compensate for the largest unexpected delay variations in order to reconstruct a video without disruptions. Note that equation (2.3) is a special case of (2.4).

Another measure of Quality of Service is throughput, defined as the amount of data received per time unit. Throughput is defined over a time interval and calculated as

R = (1 / (t_end - t_start)) * Σ_{t ∈ I} |p_t|,    I = [t_start, t_end]    (2.5)

where |p_t| is the length of packet p_t in bytes.
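Equations (2.1) through (2.5) can be computed directly from paired timestamps. A minimal sketch (the timestamp values below are invented for illustration):

```python
def one_way_delays(tx, rx):
    """Eq. (2.1): d_i = t_i,reception - t_i,transmission."""
    return [r - t for t, r in zip(tx, rx)]

def ipdv_consecutive(delays):
    """Eq. (2.3): dd_k = d_i - d_{i+1} for consecutive packets."""
    return [delays[i] - delays[i + 1] for i in range(len(delays) - 1)]

def ipdv_max(delays):
    """Eq. (2.4): largest pairwise delay difference in the interval."""
    return max(delays) - min(delays)

def throughput(packet_lengths, t_start, t_end):
    """Eq. (2.5): bytes received per time unit over [t_start, t_end]."""
    return sum(packet_lengths) / (t_end - t_start)

tx = [0.0, 20.0, 40.0, 60.0]   # transmission timestamps, ms (invented)
rx = [5.0, 26.0, 44.0, 67.0]   # reception timestamps, ms (invented)
d = one_way_delays(tx, rx)     # [5.0, 6.0, 4.0, 7.0]
print(ipdv_consecutive(d))     # [-1.0, 2.0, -3.0]
print(ipdv_max(d))             # 3.0
print(throughput([1500] * 4, tx[0], rx[-1]))
```

Note that the maximum pairwise difference over an interval reduces to max minus min of the delays, which is how the sketch implements equation (2.4).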

2.3.2

Quality of Service, statistic measures

The method of measuring Quality of Service, as described in Section 2.3.1, provides a set of measurements to be presented and evaluated. Statistical methods are used in order to attain one descriptive value for each set of measurements. An average packet delay measure is interesting in the sense that it describes delay over the entire simulation session. Also, the statistical standard deviation of the packet delay measures may be considered a measure describing how packet delay varies, which in turn can be compared to the ipdv measures.
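The two summary statistics mentioned here are available in Python's standard library; a minimal sketch (the delay values are invented):

```python
import statistics

delays_ms = [5.0, 6.0, 4.0, 7.0]  # one-way packet delay measurements, ms

mean_delay = statistics.mean(delays_ms)   # average delay over the session
delay_std = statistics.stdev(delays_ms)   # sample standard deviation

print(f"mean delay: {mean_delay} ms")         # mean delay: 5.5 ms
print(f"standard deviation: {delay_std:.3f} ms")
```

The standard deviation summarizes variation over the whole set at once, whereas the ipdv measures of Section 2.3.1 track variation between specific packets, which is why the two are worth comparing.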

Jacobson et al. [8] suggest the usage of an exponential filter, with parameter 1/16, when constructing a statistic value for ipdv measures. In [8] the selection function (2.3) is used when calculating ipdv measurements. The packet Jitter is calculated from the recursive formula

J_0 = dd_0,    J_{k+1} = J_k + (1/16)(dd_k - J_k),    k = 1, ..., N - 1    (2.6)

where J_k is updated for each ipdv value dd_k and J_N is defined as the packet Jitter.

This is one of the measurements often used to reflect Quality of Service. The term Jitter is sometimes used when measuring signal quality and signal variations, but be aware that in this project Jitter always refers to a measure of packet delay variations.
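The recursive filter of equation (2.6) is straightforward to implement. A sketch following the equation as written in this report (the ipdv input values are invented):

```python
def packet_jitter(ipdv_values):
    """Exponential filter with gain 1/16, as in equation (2.6).
    J is updated once per ipdv value; the final J is the packet Jitter."""
    jitter = ipdv_values[0]             # J_0 = dd_0
    for dd in ipdv_values[1:]:
        jitter += (dd - jitter) / 16.0  # J <- J + (dd - J)/16
    return jitter

# A short stream of ipdv values, in ms (invented for illustration).
print(packet_jitter([1.0, 2.0, 3.0, 1.5]))
```

The small gain 1/16 makes the estimate react slowly to individual ipdv spikes, giving a smoothed value that summarizes delay variation over the stream.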


2.4

Quality of service Class Identifier - QCI

QCI corresponds to different configurations of radio network parameters in order to provide guarantees on certain Quality of Service aspects. A QCI can guarantee a bit rate, provide high priority, a low packet loss rate or a low transmission time, depending on which Quality of Service aspect is relevant to the application. Forconi et al. [6] present the QCIs and the guarantees provided, illustrated in Table 2.1. The QCI value is reported to the MME, which in turn decides upon configuration parameters for the radio protocols in order to provide the desired quality aspects.

QCI | Resource | Priority | Delay budget | Error loss rate | Example services
1   | GBR      | 2        | 100 ms       | 10^-2           | Conversational voice
2   | GBR      | 4        | 150 ms       | 10^-3           | Conversational video (live streaming)
3   | GBR      | 3        | 50 ms        | 10^-3           | Real-time gaming
4   | GBR      | 5        | 300 ms       | 10^-6           | Non-conversational video (buffered streaming)
5   | Non-GBR  | 1        | 100 ms       | 10^-6           | IMS signaling
6   | Non-GBR  | 6        | 300 ms       | 10^-6           | Video (buffered streaming), TCP
7   | Non-GBR  | 7        | 100 ms       | 10^-3           | Voice, live video, interactive gaming
8   | Non-GBR  | 8        | 300 ms       | 10^-6           | Video (buffered streaming), TCP
9   | Non-GBR  | 9        | 300 ms       | 10^-6           | Video (buffered streaming), TCP

Table 2.1: QCI table, with the guaranteed Quality of Service aspect and suggested applications for each class.
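The delay budgets of Table 2.1 lend themselves to a simple compliance check on measured delays. A sketch (the lookup table below only encodes the delay-budget column of Table 2.1; the helper itself is hypothetical):

```python
# Packet delay budgets (ms) per QCI, taken from Table 2.1.
QCI_DELAY_BUDGET_MS = {1: 100, 2: 150, 3: 50, 4: 300,
                       5: 100, 6: 300, 7: 100, 8: 300, 9: 300}

def meets_delay_budget(qci: int, measured_delay_ms: float) -> bool:
    """True if a measured one-way delay stays within the QCI delay budget."""
    return measured_delay_ms <= QCI_DELAY_BUDGET_MS[qci]

print(meets_delay_budget(3, 42.0))   # True: 42 ms is within the 50 ms budget
print(meets_delay_budget(1, 120.0))  # False: 120 ms exceeds the 100 ms budget
```

A check of this kind is one natural way to compare measured packet delays against the Quality of Service requirements associated with each class.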


3

Implementation

Numerous implementation strategies were suggested and in need of thorough investigation in terms of scalability, performance and feasibility, considering the time restrictions. It is of utmost importance that packet delay measurements do not affect the performance of the simulator. This was a challenging task, in need of a considered implementation strategy, as the simulator is meant to generate high data traffic loads in order to verify that base stations are not overloaded when used in real radio networks. Similar tasks and implementations, involving collecting statistics from test cases, were thoroughly investigated and adapted to the task of collecting packet delay statistics. However, most such implementations were deemed unsuitable for transfer of the large amount of measurement data expected from test cases with a high data traffic load. Due to storage limitations in the EMCAs, it is preferred to keep data in motion and minimize buffer sizes. This chapter will describe the chosen implementation strategy, as well as the methods used for gathering packet delay measurements and calculating Quality of Service measures.

3.1 Implementation Strategy

When measuring packet delay, a timestamp is taken at both packet transmission and reception, and the packet transfer delay is calculated as the difference between the two timestamps. The first task was to decide exactly what to measure and where to implement the timestamping of packet reception and transmission. All protocols were candidates for implementation, but the data generator was found most interesting and feasible. The data generator creates, sends, and receives all packets used in simulation, in both uplink and downlink, and therefore allows for simple implementation and measurement of packet delay. Such a delay measurement implementation includes processing delay contributions from all protocols used in a radio network and should be proportional to packet delays experienced in real mobile radio units, at least during transmissions with high channel quality and low probability of packet loss. Another advantage of implementing measurements in the data generator is that only one clock is needed and problems with clock synchronization can be disregarded, as the data generator is implemented in one EMCA.

Figure 3.1: Timestamps are taken at transmission and reception in the simulated Serving Gateway (SGW) and User Equipment. The data generator simulates the SGW and all UEs.

It is important that measurements do not affect the performance of the simulator. An EMCA provides many DSP cores and high processing capacity but is limited in memory storage, especially in LDM and CM. In order to minimize memory usage in the data generator, each timestamp measurement is associated with a packet identifier and transferred to the CPM immediately at transmission and reception of packets. A mailbox client is implemented to receive the data indications containing the timestamp information. Further, all data indications received by the CPM are stored in a file and processed after simulation. This implementation strategy not only allows for calculation of packet delay, but also for calculation of throughput in any subinterval and for concluding whether any of the sent packets were lost. Storing data in a file provides a convenient way to test and evaluate different analysis methods, including analysis by external tools.

3.2 CPM signaling

Measured data needs to be transferred from the data generator, implemented on an EMCA, to the CPM in order to be stored. These communicate using predefined signals, transferred over a bus, and mailbox clients are implemented with a state machine and interpretation of signals. Figure 3.2 illustrates the signal definitions involved in packet information transfer. Each signal is identified by a unique signal number and specified to carry data defined as variables. The signals described in Tables 3.2c–3.2f are used to start and stop the transfer of data from the data generator; these signals do not need to carry any additional information. Table 3.2a and Table 3.2b represent the signals used to carry measurement data. The reason for having two signal types here is to reduce the amount of data that needs to be transferred. When a packet is transmitted, a data indication signal (3.2a) is sent to the CPM, containing data variables describing the current state of the simulator. The state of the simulator is less interesting when a packet is received, hence the smaller data indication (3.2b) is used. This means that each packet delay measurement is associated with measures describing the simulation state when the packet was transmitted. Data indications contain packet identifiers:

sequence number, radio bearer index, and a boolean uplink indicating uplink or downlink direction.

It might be essential to consider the characteristics of the data used in simulation when analyzing measurements. Therefore, each measurement is associated with parameters describing the data used in simulation and the current state of the simulator. The parameters packetSize, noOfPktsPerBurst, dataRate, and noOfUsers describe the data used for simulation, and the parameters noOfPktsInProcUl and noOfPktsInProcDl describe the state of the simulator. The number of packets in process is calculated, using internal counters in the data generator, as the number of packets sent in uplink or downlink minus the number of packets received. These two latter parameters provide information on the state of the simulator, but also provide the ability to evaluate the packet delay implementation. After a simulation, the measured number of lost packets should correspond to these two parameters; otherwise the packet delay measurement implementation is likely to have failed.
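The consistency check above can be sketched as follows. The record field names (`seqNumber`, `rbIndex`, `uplink`, `isTransmission`) are assumptions for illustration, not the actual parsed signal layout:

```python
def check_loss_consistency(records, final_in_proc_ul, final_in_proc_dl):
    """Sanity check: the number of packets transmitted but never received,
    derived from the measurement records, should match the data generator's
    final packets-in-process counters in each direction."""
    transmitted, received = set(), set()
    for rec in records:
        key = (rec["seqNumber"], rec["rbIndex"], rec["uplink"])
        (transmitted if rec["isTransmission"] else received).add(key)
    lost = transmitted - received
    lost_ul = sum(1 for key in lost if key[2])  # key[2] is the uplink flag
    lost_dl = len(lost) - lost_ul
    return lost_ul == final_in_proc_ul and lost_dl == final_in_proc_dl
```

If this check fails after a simulation, either data indications were dropped on the way to the CPM or the pairing of records is broken.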

Figure 3.3 describes the state machine implemented on the receiving CPM, and Figure 3.4 describes the state machine implemented in the data generator. The implementation of the CPM mailbox client exports functions allowing test cases to start and stop the transfer of packet delay measurements during simulations. These functions change the state of the CPM and data generator mailboxes by transmitting the data start or stop request signals. When the data generator receives the start request signal, the sender of the request is registered as the receiver of the corresponding confirm signal and of the following data indication signals. When the data generator receives the stop request signal, the data generator state is updated and the stop request is confirmed. When the CPM receives data indication signals, all data parameters are extracted and written to a file, to be processed after simulation.


(a) Data indication signal: DataGenPacketLatencyInd

Parameter         Data type  Explanation
signalNumber      uint32     Signal identification
seqNumber         uint32     Packet identification, increased for each packet
uplink            bool       Uplink or downlink direction
rbIndex           uint32     Radio bearer index, unique for each UE
timestamp         uint32     Timestamp in microseconds
packetSize        uint32     Packet size in bytes
noOfPktsPerBurst  uint32     Number of packets sent in a burst
dataRate          uint32     Time interval between bursts of packets
noOfUsers         uint32     Number of active UEs, not used
noOfPktsInProcUl  uint32     Number of packets sent but not yet received in uplink
noOfPktsInProcDl  uint32     Number of packets sent but not yet received in downlink

(b) Short data indication signal: DataGenPacketLatencyShortInd

Parameter     Data type
signalNumber  uint32
seqNumber     uint32
uplink        bool
rbIndex       uint32
timestamp     uint32

(c) Start request signal: DataGenPacketLatencyStartReq

Parameter     Data type
signalNumber  uint32

(d) Start confirm signal: DataGenPacketLatencyStartCfm

Parameter     Data type
signalNumber  uint32

(e) Stop request signal: DataGenPacketLatencyStopReq

Parameter     Data type
signalNumber  uint32

(f) Stop confirm signal: DataGenPacketLatencyStopCfm

Parameter     Data type
signalNumber  uint32

Figure 3.2: Definitions of signals used for transferring packet latency data from the data generator on an EMCA to the CPM.

Figure 3.3: State machine description of the CPM mailbox client. (Diagram: two states, "Data indications stopped" and "Store data indications", with transitions on the start/stop function calls, the confirm signals, and incoming data indications.)

Figure 3.4: State machine description of the data generator mailbox client. (Diagram: two states, "Data indications stopped" and "Send data indications", with transitions on the start/stop request signals and on packets being sent or received.)


3.3 Defining test cases

Test cases are defined in the GTE server, in an Erlang environment, using the APIs provided by the CPM for controlling the simulations. Among many parameters, test cases are specified with the size of packets, the number of packets per burst, and the data interval, which together specify a throughput as a load on the system, for each radio bearer. A QCI is also specified for each UE, or radio bearer, providing guarantees in certain Quality of Service aspects as described in Table 2.1. Before generating any data, test cases make calls to the start data indications and stop data indications functions, exported from the mailbox client implemented on the CPM.

The three parameters (size of packets, number of packets per burst, and data interval) are characteristic for different applications and are used to simulate different user scenarios. In this project, Voice over IP and video applications are considered, and the three parameters are chosen to correspond to recommendations provided by application developers. As an example, Mumble is a Voice over IP chat service, primarily used while gaming online. For high-quality Mumble it is suggested to use 72 kbit/s of bandwidth and a 10 ms delay between packets, meaning that each packet contains 10 ms of speech data. This quality setting translates into the three parameters as: data interval = 10 ms, number of packets per burst = 1, and size of packets = 90 bytes; a QCI value such as 1 or 7 (see Table 2.1) is recommended. Recommendations for other applications are translated into these three parameters in the same manner.

3.4 Calculating Quality of Service Measurements

The previous sections describe how timestamp data, for packets being sent and received, is collected and stored in a .csv data file on the CPM. This data allows for calculation of several Quality of Service measurements. A file parser was created, using Python, to extract lines from the file. Each packet corresponds to one line with transmission data and one line with reception data. In order to find the two lines corresponding to one packet, the fields sequence number, the boolean uplink, and radio bearer index are used as identifiers. If a packet does not have a line with a time of reception, the packet is deemed lost. As timestamps are given as 32-bit unsigned integers, wraparound needs to be considered and compensated for. Such compensation is implemented by counting the number of wraparounds and subtracting the number of wraparounds times the timer's maximum value from each timestamp. The file parser extracts packet delay measurements to be stored in a list, together with simulation parameters. From here, Quality of Service measurements can be calculated according to the methodology described in Section 2.3.1, and packet delay measurements are plotted in a figure.

When calculating ipdv measurements, an interval of packets is considered and measures are taken using the maximum selection function, as described in Section 2.3.1. This results in a set of ipdv measurements from which the packet jitter measurement can be calculated. If small intervals are chosen for the ipdv measurements, more data is considered when calculating the packet jitter measurement, hence smoothing out outliers. If large intervals are chosen, the largest ipdv measurements will dominate the packet jitter measurement. Depending on the application, both small and large intervals may be of interest, which motivates a discussion.

Another Quality of Service measure is throughput, defined as the amount of data received per time unit. High throughput is easily achieved in the simulator and is therefore not very interesting as a Quality of Service measure in this project, but even for a constant stream of data, throughput may vary between subintervals. Such variations may be an interesting Quality of Service aspect; the variations in throughput are expected to be low and may be compared with the packet jitter measurements. Luan et al. [10] state that throughput variations have a significant impact on network services, such as Voice over IP and online gaming, that rely on a continuous stream of data. Note that packet delay measurements are not considered when calculating throughput. Also note that if subintervals are chosen smaller than the data interval simulation parameter, some intervals will not contain any received packets and the throughput variation metric becomes misleading.

3.4.1 Comparing with Quality of Service requirements

Quality of Service measurements can be compared with the requirements specified in the article by Chen et al. [2], where both human perception of network-based applications and technological aspects are considered. The article specifies and classifies a number of applications to be considered when stating Quality of Service requirements. The Quality of Service measures considered in the article are delay, jitter, data rate, bandwidth, loss rate, and error rate. Table 3.1, found in [2], provides delay recommendations for telecommunication.

One-way transmission time  Effect of delay on human voice perception
0 to 150 ms                Acceptable for most users
150 to 400 ms              Acceptable, but has impact
400 ms and above           Unacceptable

Table 3.1: Recommendations of one-way end-to-end delay for high-quality real-time traffic in telecommunication.

It is also stated that for Voice over IP applications, delay under 100 ms and jitter under 400 ms are tolerated. For video broadcasting applications, delay under 150 ms and jitter under 50–150 ms are tolerated, depending on technical aspects such as coding standards. For more detailed reading regarding requirements, see [2], where more application classes are considered.


4 Result

Ericsson is restrictive in sharing results due to the sensitivity of the data collected during this project. Therefore, readers who wish to see the actual observations are referred to a confidential appendix belonging to Ericsson. The results presented in this chapter only regard theoretical findings gained from this project, to be used as motivation in the following discussions.

4.1 Implementation

The implementation was successful, and packet delay measurements were collected at microsecond precision during simulated test cases. Even though a large number of simulations were executed, no test case has proven to have a negative influence on the simulation process, and all test cases came out with a successful result when measuring packet delay. The number of lost packets stated by the new packet delay measurement implementation matched the internal counters of the data generator, suggesting that no information was lost when transferring data to the CPM.

4.2 Quality of Service measurements

The implementation of packet delay measurements allows for Quality of Service evaluation in terms of

• graphs of packet delay measurements
• average packet delay
• packet delay standard deviation
• packet jitter
• throughput standard deviation
• number of packets lost

in both uplink and downlink. These aspects were considered in the evaluation of all test cases.

4.3 Test cases

All data rates are simulated as constant and equal in uplink and downlink. Parameters are chosen to be similar to realistic application data, based on the recommendations in [2].

The parameters shown in the following tables were consistent throughout each simulation, and all UEs were simulated with the same data traffic. A comparison of how Quality of Service is affected when simulating more, or fewer, UEs was executed, using the parameters shown in Table 4.1. Simulations where only one parameter varies are compared according to Tables 4.2–4.4 in order to understand how each parameter affects Quality of Service.

QCI  Number of UEs  Packet size [bytes]  Number of packets per burst  Data interval [ms]  Example service
3    10             200                  1                            50                  Voice over IP
3    20             200                  1                            50                  Voice over IP

Table 4.1: Simulation parameters when comparing how the number of UEs affects Quality of Service for Voice over IP applications.

QCI  Number of UEs  Packet size [bytes]  Number of packets per burst  Data interval [ms]  Example service
3    1              90                   1                            10                  Voice over IP
3    1              200                  1                            10                  Voice over IP

Table 4.2: Simulation parameters when comparing how packet size affects Quality of Service for Voice over IP applications.


QCI  Number of UEs  Packet size [bytes]  Number of packets per burst  Data interval [ms]  Example service
3    1              200                  1                            400                 Low quality Voice over IP
3    1              200                  1                            10                  Voice over IP

Table 4.3: Simulation parameters when comparing how data intervals affect Quality of Service for Voice over IP applications.

QCI  Number of UEs  Packet size [bytes]  Number of packets per burst  Data interval [ms]  Example service
3    1              1500                 20                           100                 Video on Demand
3    1              1500                 50                           100                 Video on Demand

Table 4.4: Simulation parameters when comparing how the number of packets per burst affects Quality of Service, using interactive video on demand for multimedia on the web.

QCI   Number of UEs  Packet size [bytes]  Number of packets per burst  Data interval [ms]  Example service
1, 3  2              200                  1                            100                 Voice over IP
2, 3  2              200                  1                            100                 Voice over IP
7, 3  2              200                  1                            100                 Voice over IP

Table 4.5: Simulation parameters when comparing how QCI values affect Quality of Service, with two UEs using Voice over IP applications with different QCI values.

4.4 Packet delay measurements

The user scenarios specified in Section 4.3 were simulated; the results were illustrated in figures and Quality of Service measurements were calculated. All results regarding these simulations are found in the confidential appendix belonging to Ericsson. In the appendix, e.g. in Figure 2.8, an example of packet delay characteristics is visualized. Note how the packet delay variance follows different patterns in uplink and downlink. This observation shows that measurements in uplink are neither uniformly nor symmetrically distributed around the average value. The simulation in this case might be compared to a high-quality Voice over IP conversation.


5 Discussion

This project provided deep insight into the radio communication systems developed by Ericsson. Knowledge regarding the systems was gained from literature, internal documents, and various discussions with professionals, in order to explain observations and draw conclusions. The following is a discussion with my own thoughts on what has been observed, learned, and accomplished during this project.

5.1 Result

The result in appendix Figure 2.8 suggests that packet delay variation characteristics differ between uplink and downlink. One probable cause of this observation is how the scheduler distributes channel resources and determines when UEs are permitted to transfer data in uplink. This kind of scheduling does not apply in downlink.

The parameters for the test cases in Section 4.3 were chosen to be comparable to real applications, with specifications as recommended by Chen et al. [2]. The intention of these tests was not to explore the limits of high data load, but rather to gain an initial understanding of how Quality of Service is affected by these parameters.

5.2 Method

As seen in appendix Figure 2.8, there are differences in the characteristics of packet delay variance in uplink and downlink. It was also observed that packet delays in uplink and downlink differ. These observations confirm the motivation for measuring one-way packet delay in both uplink and downlink, rather than e.g. measuring the round-trip time.

5.2.1 Implementation strategy

The initial request from Ericsson was to find a method for evaluating the Quality of Service of a base station, using the CSIM-FsUE for implementation and data generation. In order to minimize the influence of the simulator itself, it was suggested to implement one timestamp measurement for packets passing through the lowest protocol layers, closest to the base station, and one timestamp measurement for packets received or transmitted by the data generator. This requires identification of packets in different protocols. This problem was deemed too hard to solve for the intended test cases, and no suggested solution was satisfactory. Packets that are distinguishable in the data generator are ciphered in the PDCP and segmented in the RLC protocol, making it hard in general to compare packets received in the MAC or physical protocol layers to packets created in the data generator. When applying the performance requirements of the packet delay measurement implementation, this problem was deemed too hard and extensive for this project.

However, packet delay measurements over the full stack of protocols are suggested to be comparable to usage in real radio networks. These measurements evaluate the protocol standards and the base station implementing these protocols. For the intended usage, of evaluating the Quality of Service of a base station, these measurements are still useful and can be used in verification and to compare the Quality of Service provided by different base station implementations.

5.2.2 Quality of Service measurements

The article by Chen et al. [2], stating Quality of Service requirements, was written as early as 2004. As technology develops, so do expectations on the technology, and it is conceivable that these requirements are not applicable today. However, it is useful to have reference values to compare measurements with, and this was the intention when using the results presented in [2], while keeping in mind that today's requirements are likely to be stricter.

A quote from the article by Chen et al. [2] regarding user perception of jitter:

Jitter is an essential performance parameter of a network intended to support real time sound and image media. Of all information types, real time sound is the most sensitive to network jitter. The solution to overcoming jitter in real time sound transmissions is for the receiving system to wait a sufficient length of time, called a delay offset, before playing received sounds. Incoming blocks are stored in a temporary memory location, called a buffer. After an adequate delay, these sound blocks can be played smoothly. This process is called delay equalization or delay compensation.

As buffers add more delay to the service, it is clearly stated here that jitter causes more delay in the system. More jitter calls for a higher delay offset in order to smooth the data flow and present a continuous flow of data to the users. The proposed method of calculating jitter accounts for all variance in packet transmission, and packets transferred with an unexpectedly long or short delay count negatively in terms of jitter. However, if some packets are transmitted with a short delay, shorter than the average, these packets might actually benefit the buffering capacity in compensating for packets with longer delay.

As seen in the appendix of results from measuring uplink packet delays, in Figure 2.8, many packets have a shorter delay than the average packet delay. I would argue that these shorter delays do not have the same negative effect on Quality of Service as the packets with a longer delay than average. The maximum selection function used in this implementation calculates the delay difference between the packets with the shortest and longest delay in each interval. Consider an example where an ipdv measure is calculated from a set of measured packet delays. In this set, d_max is the longest measured delay and d_min is the shortest, and the ipdv measure is the difference between the two. Now consider if d_min were even shorter: the difference between d_max and d_min increases, resulting in a directly corresponding increase of the ipdv metric and a negative influence on the Quality of Service measure.

For Quality of Service evaluation of these applications, implementing an offset delay buffer, I suggest a new selection function that considers packets with a delay longer than the average delay. An example of such a selection function is

    d_k^max = max_{i ∈ I_k} (d_i − d̄)    (5.1)

where d̄ is the average of the packet delay measures within the considered interval I_k. This function could be used instead of the selection functions (2.3) and (2.4). It is less restrictive than the maximum selection function but covers what is relevant in Quality of Service evaluations of the applications considered in this project. If packet delay measurements are symmetrically distributed, the maximum selection function is applicable, but observations suggest that the data are asymmetrically distributed.

Section 3.4 mentions the interval over which each ipdv measurement is calculated with a selection function, such as the maximum selection function. If a large interval is chosen, there will be fewer maximum ipdv values from which to calculate the jitter value. If a small interval is chosen, there will be more ipdv values, with the result that the largest observed ipdv measurements are diminished when averaging and calculating the jitter value. The quote by Chen et al. [2] suggests that the jitter value needs to reflect the buffering capacity needed to smooth out delay variations. I suggest that choosing larger intervals is motivated, as the outliers, or the largest ipdv values, are the ones with the most negative influence on the perceived quality in these applications.

5.3 Future work

As an extension to this work, it is suggested to modify the implementation and the Quality of Service measures to be suitable for automatic test verification execution, such as Jenkins jobs. This might involve immediate calculation and presentation of Quality of Service metrics, rather than storing measures in a file for analysis after simulations.

It would also be interesting to see more thorough comparisons of how simulation parameters affect the radio network, and to use these results to provide recommendations on how to optimize data traffic with respect to Quality of Service measurements. For example, for a video streaming application it may be preferable to send many packets with longer data intervals, or vice versa, fewer packets with shorter intervals. All packet delay measurements are associated with information regarding the state of the simulation when the packet was sent. It would also be interesting to explore how this state correlates with Quality of Service.

It was suggested to measure the protocol processing time by sending one packet at a time and measuring the time elapsed from transmission at the data generator to transmission from the physical layer. Such measurements could be used to gain a better understanding of how much the packet delay measurements used in this project are affected by the simulator.

Quality of Experience evaluations are of great value to projects like this one. Ratings of human perception are a key aspect in evaluating the efficiency of Quality of Service measurements. It would be interesting to redo the study executed by Chen et al. [2] and test my hypothesis that expectations on network applications have become more restrictive. If expectations prove to be the same, one might argue that these results are valid for a long period of time. If expectations have changed, one might argue that such studies need to be held on a regular basis.

The simulator is currently being developed to simulate 5G networks. The Quality of Service evaluations found and suggested during this project can be compared with the new technology, which will also be in need of thorough evaluation and verification as Ericsson strives to be the market-leading supplier of 5G technology.

5.4 Conclusions

Initial problem formulations:

• How are simulation parameters affecting Quality of Service?
• Is the system behaving as expected in terms of packet delay?
• Can packet delay measurements be implemented without affecting the performance of the simulator at high data load?
• Are Quality of Service measurements efficiently descriptive?

Conclusions regarding how parameters affect Quality of Service cannot be revealed due to the sensitivity of the simulation data. Readers interested in these conclusions are therefore once again referred to the confidential appendix belonging to Ericsson.

Packet delay measurements were successfully implemented and tested using high data load. The fact that no test case run so far has overloaded the implementation suggests stability, but no performance guarantee.

After considering the characteristics of the measured packet delay variance, suggestions have been made on how to improve Quality of Service measurements. However, there may be a need for comparison with up-to-date user perception evaluations in order to state whether these measures are of good or poor efficiency.


Bibliography

[1] Carsten Bormann, Carsten Burmeister, Mikael Degermark, Hideaki Fukushima, Hans Hannu, Lars-Erik Jonsson, Rolf Hakenberg, Tmima Koren, Khiem Le, Zhigang Liu, et al. RObust Header Compression (ROHC): Framework and four profiles: RTP, UDP, ESP, and uncompressed. Technical report, 2001.

[2] Yan Chen, Toni Farley, and Nong Ye. QoS requirements of network applications on the internet. Information Knowledge Systems Management, 4(1):55–76, 2004.

[3] Philip B. Crosby. Quality is Free: The Art of Making Quality Certain. New York: New American Library, 1979.

[4] Gerardo Garcia De Blas, Adrian Maeso Martin-Carnerero, Pablo Montes Moreno, and Francisco Javier Ramon Salguero. Method to measure quality of experience of a video service, November 13, 2014. US Patent App. 14/357,890.

[5] Carlo Demichelis and Philip Chimento. IP packet delay variation metric for IP performance metrics (IPPM). 2002.

[6] Sonia Forconi and Alessandro Vizzarri. Review of studies on end-to-end QoS in LTE networks. Pages 1–6, October 2013.

[7] Lajos Hanzo, Soon Xin Ng, W. T. Webb, and T. Keller. Quadrature Amplitude Modulation: From Basics to Adaptive Trellis-Coded, Turbo-Equalised and Space-Time Coded OFDM, CDMA and MC-CDMA Systems. IEEE Press / John Wiley, 2004.

[8] Van Jacobson, Ron Frederick, Steve Casner, and H. Schulzrinne. RTP: A transport protocol for real-time applications. 2003.

[9] Anna Larmo, Magnus Lindström, Michael Meyer, Ghyslain Pelletier, Johan Torsner, and Henning Wiemann. The LTE link-layer design. IEEE Communications Magazine, 47(4), 2009.

[10] Tom H. Luan, Xinhua Ling, and Xuemin Sherman Shen. Provisioning QoS controlled media access in vehicular to infrastructure communications. Ad Hoc Networks, 10(2):231–242, 2012.

[11] Athina P. Markopoulou, Fouad A. Tobagi, and Mansour J. Karam. Assessment of VoIP quality over internet backbones. In INFOCOM 2002: Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies, volume 1, pages 150–159. IEEE, 2002.

[12] Bill McDaniel. An algorithm for error correcting cyclic redundancy checks. C/C++ Users Journal, 21(6):6–17, 2003.

[13] Naoto Okubo, Anil Umesh, Mikio Iwamura, and Hiroyuki Atarashi. Overview of LTE radio interface and radio network architecture for high speed, high capacity and low latency. Special Articles on "Xi" (Crossy) LTE Services: Toward Smart Innovation. Technology Reports, 2011.
