
Institutionen för systemteknik
Department of Electrical Engineering

Examensarbete (Master's thesis)

Providing Quality of Service for Streaming Applications in Evolved 3G Networks

Master's thesis in Communication Systems carried out at the Institute of Technology, Linköping University
by
Jonas Eriksson

LiTH-ISY-EX-3493-2004
Linköping 2004

Department of Electrical Engineering, Linköpings universitet, SE-581 83 Linköping, Sweden

Providing Quality of Service for Streaming Applications in Evolved 3G Networks

Master's thesis in Communication Systems carried out at the Institute of Technology, Linköping University
by
Jonas Eriksson

LiTH-ISY-EX-3493-2004

Supervisors: Frida Gunnarsson, ISY, Linköpings universitet
             Gunnar Bark, Ericsson Research
Examiner: Fredrik Gunnarsson, ISY, Linköpings universitet

Avdelning, Institution (Division, Department): Institutionen för systemteknik, 581 83 Linköping
Datum (Date): 2004-01-29
Språk (Language): Engelska/English
Rapporttyp (Report category): Examensarbete
URL för elektronisk version: http://www.ep.liu.se/exjobb/isy/2004/3493/
ISRN: LiTH-ISY-EX-3493-2004
Titel (Title): Tillgodose tjänstekvalité för strömmande media i vidareutvecklade 3G-system
(Providing Quality of Service for Streaming Applications in Evolved 3G Networks)
Författare (Author): Jonas Eriksson


Abstract

The third generation, 3G, mobile telephone systems are designed for multimedia communication and will offer us services similar to those on our stationary computers. This will involve large traffic loads, especially in the downlink direction, i.e. from base station to terminal. To improve the downlink capacity for packet data services, a new concept is included in evolved 3G networks. The concept is called High Speed Downlink Packet Access, HSDPA, and provides peak bit rates of 14 Mbps. HSDPA uses a so-called best effort channel, i.e. it is developed for services that do not require guaranteed bit rates. The channel is divided in time between the users and a scheduling algorithm is used to allocate the channel among them. Streaming is a common technology for video transmission over the Internet and with 3G it is expected to become popular also in our mobiles. Streaming generates a lot of data traffic in the downlink direction and it would thus be desirable to make use of the high bit rates HSDPA provides. The problem is that streaming requires reasonably stable bit rates, which is not guaranteed using HSDPA. The aim of this study is to modify the scheduling algorithms to prioritise streaming over web users and provide streaming Quality of Service, QoS. QoS is the ability to guarantee certain transmission characteristics.

The results of the study show that it is hard to improve the streaming capacity by modifications of the scheduling alone. A consequence of prioritising streaming is that the web user throughput is decreased, and to avoid this, new users have to be rejected by the admission control. The solution is to prioritise the streaming users both in the scheduling algorithm and in the admission control, i.e. when the system is nearly full new web users are rejected. By doing so the results are significantly improved.


Acknowledgements

I have performed my Master Thesis work at Ericsson Research in Linköping. It has been a great time in several aspects. Professionally, I have had the opportunity to learn a lot from people working with the latest technology within telecommunications. Additionally I have met great people and there have been lots of interesting discussions during the lunches. I have also been invited to the LinLab championships and I am very proud to have my name on the trophy. Thanks to all of you for a great time.

Then, I would like to thank my supervisors at Ericsson, Gunnar Bark and Niclas Wiberg. Special thanks to Niclas for help with the simulator and for valuable comments during the work. I would also like to thank my examiner, Fredrik Gunnarsson, and my supervisor at the University, Frida Gunnarsson, for comments and support.

Finally, I would like to thank Kristina Zetterberg, parallel Master Thesis student, for good company and helpful discussions.

Jonas Eriksson

Linköping, February 2004


Abbreviations

16QAM: 16 Quadrature Amplitude Modulation. Modulation with 16 symbols.

3G: Third Generation mobile telephone systems.

3GPP: Third Generation Partnership Project. Partnership project setting standards for the third generation mobile systems.

CDMA: Code Division Multiple Access. Air interface using spread spectrum communication.

CN: Core Network. Link between the radio network and external networks such as the ordinary telephone system and the Internet.

CQI: Channel Quality Index. Information sent from the terminal to the base station containing information about the current channel condition for that specific terminal.

ETSI: European Telecommunications Standards Institute. One of the partners in 3GPP.

FDD: Frequency Division Duplex. Method used to separate the downlink and uplink traffic in WCDMA.

FDMA: Frequency Division Multiple Access. Air interface where users transmit in different frequency bands.

GPRS: General Packet Radio Service. Step between GSM and UMTS for faster data transmission via the GSM network.

GSM: Global System for Mobile communication. The second generation mobile system used in Europe.

HARQ: Hybrid Automatic Retransmission Request. Strategy for reliable transmission using a combination of error correction and retransmissions.

HSDPA: High Speed Downlink Packet Access. Concept for fast downlink transmission included in WCDMA release 5. Provides peak data rates of 14 Mbps.

HS-DPCCH: High Speed Dedicated Physical Control Channel. Uplink signalling channel in HSDPA.

HS-DSCH: High Speed Downlink Shared Channel. Data carrier channel in HSDPA.

HS-SCCH: High Speed Shared Control Channel. Downlink control channel in HSDPA.

IP: Internet Protocol. Protocol for packet data transmission over the Internet.

ITU: International Telecommunication Union. An organisation where governments and private sectors coordinate telecom networks and services.

Node B: Notation for base station.

QoS: Quality of Service. Performance properties of a network service and the ability of the network to deliver predictable results.

QPSK: Quadrature Phase Shift Keying. Modulation with four symbols.

RNC: Radio Network Controller. Equipment connecting the base stations to the core network. Part of the radio network.

RTP: Real-time Transport Protocol. Internet-standard protocol for the transport of real-time data, including audio and video.

TDMA: Time Division Multiple Access. Air interface where users transmit in different time slots.

UMTS: Universal Mobile Telecommunication Services. The European system for 3G communication.

UTRAN: UMTS Terrestrial Radio Access Network. Part of the communication system consisting of the Node Bs and the RNCs.

WCDMA: Wideband Code Division Multiple Access. Air interface in the 3G networks utilising spread spectrum communication.


Contents

1 Introduction
  1.1 Background
  1.2 Purpose of Thesis
  1.3 Previous Studies
  1.4 Thesis Outline

2 UMTS Overview
  2.1 Introduction
  2.2 UMTS Network Architecture
  2.3 Wideband Code Division Multiple Access
    2.3.1 Separation of Uplink and Downlink
  2.4 Radio Channels
  2.5 High Speed Downlink Packet Access, HSDPA
    2.5.1 High Speed Downlink Shared Channel, HS-DSCH
    2.5.2 High Speed Shared Control Channel, HS-SCCH
    2.5.3 High Speed Dedicated Physical Control Channel, HS-DPCCH

3 Streaming
  3.1 Introduction
  3.2 Codecs

4 Scheduling
  4.1 Theoretical Formulation
  4.2 Maxrate
  4.3 Round Robin, RR
  4.4 Proportionally Fair, PF
  4.5 Streamprio
  4.6 Barrier Function, BF
  4.7 Buffer Level, BL

5 Simulation Models
  5.1 Propagation Model
  5.2 Physical Layer Model
  5.3 HSDPA Model
  5.4 Traffic Models
    5.4.1 Streaming
    5.4.2 Web Browsing

6 Simulations and Results
  6.1 Performance Measures
  6.2 Introduction to the PF Algorithm
  6.3 New Schedulers
    6.3.1 Streamprio
    6.3.2 Barrier Function
    6.3.3 Buffer Level
  6.4 Admission Priority

7 Summary
  7.1 Conclusions
  7.2 Future Work

Bibliography

A Simulation Parameters


Chapter 1

Introduction

1.1 Background

In 1992 the World Administrative Radio Conference of the International Telecommunication Union identified the frequencies around 2 GHz to be used for the third generation mobile systems. That was the starting point for the development of a new communication system. In Europe the 3rd generation, 3G, system is referred to as Universal Mobile Telecommunication Services, UMTS.

Almost ten years later, in October 2001, the Japanese mobile operator NTT DoCoMo launched the first commercial 3G system. Since then the technology has entered the market in many countries in Europe and the rest of the world. Because of economic problems, many of the major operators have waited to introduce the new systems, but within a year it will be possible to use the new technology and the new services it provides in most countries in Western Europe.

The most important feature of UMTS is the opportunity for services demanding high bit rates. In the future we are all supposed to use our mobile terminal for other things than "just talking". We have already seen digital cameras and music applications become part of the cell phone, this in the second generation, GSM/GPRS, terminals. But if we want to take pictures and send them to our friends, and if we want pictures of the same quality as today's ordinary digital cameras offer, we need a system with higher capacity. The same goes if we want to download music or movies to the terminal directly over the mobile system. With higher bit rates we can use the cell phone as a mobile computer, which means browsing the Internet, watching video clips and so on.

The new kinds of traffic will need high capacity especially in the downlink direction, i.e. from the base station to the user equipment. A new technology for downlink traffic, called High Speed Downlink Packet Access, HSDPA, has therefore been included in UMTS. It supports higher capacity, reduced delay and peak data rates of 14 Mbps. HSDPA is not yet included in the commercial systems, but within a couple of years it will be.

1.2 Purpose of Thesis

Services in UMTS are normally categorised in different Quality of Service, QoS, classes. QoS is the performance property of a network service and the goal is to provide guarantees on the ability of the network to deliver predictable results. There are four QoS classes, conversational, streaming, interactive and background. The delay sensitivity separates the classes. Examples of services the classes provide are, respectively, speech, real-time multimedia, web browsing and email.

HSDPA uses a so-called best effort channel and is therefore developed for services that do not need guaranteed bit rates. This means that mainly interactive and background services will use the technology. But it would be desirable if QoS applications such as streaming could take advantage of the high bit rates HSDPA provides. Streaming is a kind of video application that will be used for watching video clips in the mobile terminal, and it generates large amounts of downlink traffic. The aim of this study is to investigate if HSDPA could be used to improve the capacity and quality for streaming applications. The technology of streaming is further described in chapter 3.

As mentioned above, the aim of this study is to investigate the possibility of running streaming services over HSDPA. HSDPA uses a channel that is shared between users in the time domain. For each time interval a decision is made on which user will get the chance to transmit. Note that the term transmit is used for transmission in the direction from the base station to the user equipment. The decision can be based on different scheduling parameters, and a couple of ways to choose these parameters are investigated. This includes both ordinary methods, which are developed for optimal system throughput or maximum fairness, as well as new methods that prioritise the streaming users.

To perform the comparison between different scheduling methods a simulator called Rasmus is used. Rasmus is a simulator developed at Ericsson Research and it is implemented in Java and C++. Some modifications of the simulator are made and they are discussed later.

1.3 Previous Studies

Scheduling, in general, is a well-known area investigated in many reports. In reference [8] there are comparisons between different algorithms for web and file transfer users. Reference [3] also covers this area and to some extent it also covers QoS handled by HSDPA. There, different services are given different priority levels in the scheduling algorithm. The conclusion drawn from the study is that differentiation of services to provide different QoS levels is possible. In reference [6], QoS for HSDPA is discussed in terms of fixed bit rates. Fixed rates are interesting because streaming applications need quite stable average bit rates, and this will be investigated further in this report.

Studies of streaming using HSDPA have also been performed, especially in comparisons with other downlink traffic technologies. The results, usually illustrated as the fraction of satisfied users versus traffic load, show an improvement when HSDPA is deployed.

1.4 Thesis Outline

This report is written for people who have studied technology at university level, preferably within telecommunications. Chapters 2 and 3 contain introductions to, respectively, UMTS and streaming. To start with, the basics of WCDMA, Wideband Code Division Multiple Access, the air interface used, are explained, and then the main features of HSDPA are covered. Readers familiar with radio communication systems and/or streaming can skip these chapters. Chapter 4 contains the theoretical part of the thesis: the different scheduling algorithms, which will be investigated in the simulations, are introduced and discussed. Before the simulation results are presented in chapter 6, the simulator Rasmus and the simulation models are described in chapter 5. Finally the report is concluded with a summary and suggestions for further studies.


Chapter 2

UMTS Overview

2.1 Introduction

The analogue cellular systems that were used in the 80s are referred to as the first generation mobile telephone systems. In the 90s the digital, second generation systems, such as GSM, were introduced. The quality of speech services increased and we also got some new features, for example text messaging. Now, in the beginning of the 21st century, the third generation of mobile communication, 3G, is being launched around the world. The new systems are designed for multimedia communication and will offer services similar to those on our stationary computers. In Europe the 3rd generation systems are referred to as Universal Mobile Telecommunication Services, UMTS, which uses WCDMA, Wideband Code Division Multiple Access, as the air interface. This was decided by ETSI, the European Telecommunications Standards Institute. ETSI is a part of 3GPP, the 3rd Generation Partnership Project, a partnership project setting standards within this area. This chapter starts with a brief overview of UMTS and WCDMA. For a more detailed description see reference [4].

2.2 UMTS Network Architecture

The UMTS network consists of three main parts.

• User Equipment

• Radio Access Network, UTRAN
• Core Network

Figure 2.1 shows a simple model of the network. UTRAN is the link between the mobile terminals and the Core Network. The Core Network is the connection to external networks such as the ordinary telephone system and the Internet.


Figure 2.1. UMTS system architecture.

UTRAN consists of the Radio Network Controllers, RNCs, and the base stations, Node Bs. Each Node B consists of a number of cells covering a geographical area. Mobile terminals within a cell area are connected to a specific Node B. The cell areas normally intersect close to the cell borders and terminals in these regions can be connected to more than one Node B.

2.3 Wideband Code Division Multiple Access

There are several technologies for handling multiple users in a radio environment. Using TDMA, Time Division Multiple Access, each user is assigned a time slot in which it is allowed to transmit in the entire frequency band. The opposite method is FDMA, Frequency Division Multiple Access, where the frequency band is divided into smaller sub-bands. Each user is then assigned a sub-band for the communication. GSM uses a combination of these two methods.

The technology to be used in the third generation systems differs a lot from ordinary TDMA/FDMA. It is called Wideband Code Division Multiple Access, WCDMA [4], and it uses direct sequence spread spectrum communication. The main idea is to spread the data over a larger frequency band by multiplying it with a pseudo-random sequence of 1s and -1s, the spreading code. The bit rate of this spreading code is much higher than the data sequence rate, which implies a much larger bandwidth. The number of chips per data symbol is defined as the spreading factor, SF; the lower the spreading factor, the higher the data rate. Figure 2.2 illustrates spreading in direct sequence spread spectrum communication; in the example SF = 4. At the receiver side the signal is multiplied with the same sequence to despread the signal. Different spreading codes are used to separate different users. This allows multiple users to transmit at the same frequencies simultaneously. To reduce interference between users, the spreading codes are orthogonal. In the example, with SF = 4, the code for one symbol can be represented by the 1 × 4 vectors s_1 and s_2 for two different users. Orthogonality between the codes means s_1 s_2^T = 0. For SF = 4 there are four orthogonal codes. A description of direct sequence spread spectrum communication in the frequency domain is illustrated in figure 2.3. It shows how the received signal, after despreading, will contain the original data plus some noise from the other users.

Figure 2.2. Spreading in direct sequence spread spectrum communication, SF = 4 (Spread signal = Data × Spreading code).

Figure 2.3. Spreading and despreading in the frequency domain. The received signal in the second figure consists of data from four different users. The third figure shows the data after despreading: it contains all the original data plus some noise from the other users.

The spread spectrum technology is used in several systems, for example in the CDMA2000 and IS-95 specifications of Code Division Multiple Access, CDMA. These systems, which do not provide as high capacity as WCDMA, use a bandwidth of just above 1 MHz, compared to 5 MHz for WCDMA.
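To make the spreading and despreading operation concrete, here is a small Python sketch (not from the thesis) using two orthogonal length-4 Walsh codes, i.e. SF = 4; the specific codes and data symbols are illustrative assumptions.

```python
import numpy as np

# Two orthogonal spreading codes of length SF = 4 (rows of a Walsh-Hadamard
# matrix); their inner product is zero, as required in section 2.3.
SF = 4
s1 = np.array([+1, +1, +1, +1])
s2 = np.array([+1, -1, +1, -1])
assert s1 @ s2 == 0  # orthogonality: s1 * s2^T = 0

# Data symbols (+1/-1) for the two users.
data1 = np.array([+1, -1, +1])
data2 = np.array([-1, -1, +1])

# Spreading: each data symbol is multiplied by the whole code (SF chips/symbol).
spread1 = np.kron(data1, s1)          # chip-rate signal of user 1
spread2 = np.kron(data2, s2)          # chip-rate signal of user 2
received = spread1 + spread2          # both users share the same band

# Despreading: correlate the received chips with each user's own code.
chips = received.reshape(-1, SF)
recovered1 = np.sign(chips @ s1)      # user 2's contribution cancels out
recovered2 = np.sign(chips @ s2)

print(recovered1, recovered2)         # [ 1 -1  1] [-1 -1  1]
```

Because the codes are orthogonal, each correlation cancels the other user's contribution exactly; with non-orthogonal codes the residual would appear as the noise shown in figure 2.3.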


2.3.1 Separation of Uplink and Downlink

The spread spectrum technology is used for separating different users from each other. But also the traffic to a certain user, in the different traffic directions, has to be separated. The traffic directions are referred to as the uplink, from terminal to base station, and the downlink, from base station to terminal. In the WCDMA standard, two ways to perform this separation are included, either in the time domain or in the frequency domain. In Europe, FDD, Frequency Division Duplex, is the most common method. Downlink and uplink are then operating in different frequency bands.

2.4 Radio Channels

Radio communication differs from ordinary wired transmission in terms of channel stability, i.e. the variation in time of the radio channel bit rate is comparatively large. This comes from the varying radio channel conditions. The attenuation of the radio signal on its way from the transmitter to the receiver is often referred to as fading. The main components of fading can be divided into three parts, distance attenuation, shadow fading and multipath fading.

Distance Attenuation

The distance attenuation describes how the radio signal is attenuated due to the distance between the transmitter and the receiver. It varies slowly and increases with the distance raised to a constant exponent that depends on the environment.

Shadow Fading

Obstacles between the transmitter and the receiver cause shadow, or slow, fading. The mobile is almost always shielded from the base station, but due to reflections and diffraction, rays will still reach the receiver. Movements of the obstacles, transmitter and receiver cause a slow variation of the received signal. This fluctuation is, as the name implies, relatively slow compared to the multipath fading.

Multipath Fading

The received signal is the sum of several rays that have travelled different paths from the transmitter. The rays are reflected at different obstacles on their way. Every reflection causes a phase shift of the signal, and when the rays are summed at the receiver they add either constructively or destructively. Because of movements of reflecting obstacles, transmitter and receiver, the phase of the received rays changes very quickly. This implies a fast variation of the radio signal, and therefore this type of fading is also often referred to as fast fading.

Fading is the most important phenomenon for the varying radio channel and it results in varying bit rate capacity. In this study methods to compensate for the varying bit rates will be investigated.


2.5 High Speed Downlink Packet Access, HSDPA

In WCDMA release 5, a development called HSDPA, High Speed Downlink Packet Access, is included [5]. It mainly supports interactive and background services in the downlink direction, but to some extent also streaming applications. These are services that generate large traffic loads in the downlink. HSDPA makes use of a new transport channel called the High Speed Downlink Shared Channel, HS-DSCH. This channel supports higher capacity, reduced delay and a higher peak data rate, 14 Mbps. Here follows an overview of the HS-DSCH and the corresponding downlink/uplink control signalling. The channels are illustrated in figure 2.4.

2.5.1 High Speed Downlink Shared Channel, HS-DSCH

HS-DSCH is the data carrier channel used in the HSDPA concept. Here the most important channel specific features will be discussed.

Higher order modulation

In good conditions higher order modulation, to provide higher bit rates, may be assigned to the channel. The higher order modulation used is 16QAM in addition to the existing QPSK for ordinary radio conditions.

Fast link adaptation

This implies that the transmission parameters are modified with the variation of the channel quality. In good conditions higher order modulation and higher code rates are used compared to channels with less favourable quality.

Fast scheduling

For each cell and time interval a decision is made for which user the HS-DSCH should be allocated. Different scheduling algorithms, discussed in chapter 4, take care of this.

Fast Type 2 Hybrid Automatic Repeat Request, HARQ

Type 2 HARQ uses soft combining: information from the original transmission is combined with the retransmissions. This reduces the number of retransmissions and the time between them. For more information in this area see [2].

To support the above-mentioned features a new sub layer in the network architecture is introduced. The new layer is located in the Node B to reduce retransmission delays for the HARQ process and to get up-to-date estimates of the channel quality. The use of shorter TTIs, 2 ms, allows fast channel dependent scheduling and time multiplexing of users. TTI, Transmission Time Interval, is the time between two subsequent transmissions.

The spreading factor to be used is fixed to SF_HS-DSCH = 16, but at least one code is reserved for mandatory signalling, so the maximum number of codes used for the HS-DSCH is 15. However, the code reservation typically consists of fewer than 15 codes. Code multiplexing between users is a possibility but it will not be used in this study; only one user gets access to the HS-DSCH each TTI. The amount of power allocated to the HS-DSCH equals the power remaining when the common and dedicated channels have been served. It is controlled every TTI and thus varies.

Figure 2.4. Timing relationship between corresponding TTIs in HS-DSCH, HS-SCCH and HS-DPCCH.

Every user utilises a downlink control channel, section 2.5.2, which is allocated power. When there are many users in the system, the power for the control channels increases, leaving fewer resources for the data transmission. To avoid this, admission control is used to reject new users when the system is overloaded. When the total amount of power, for data transmission and control channels, exceeds a predetermined value, new users are rejected from using the HS-DSCH.
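As an illustration of this power-threshold admission control, the following sketch admits a new user only while the projected total power stays below a limit; the function name, per-channel power figures and threshold are hypothetical, not taken from the standard or the simulator.

```python
# Minimal sketch of HSDPA admission control: a new user is admitted only if
# the total power for data transmission and associated control channels stays
# below a predetermined threshold. All names and numbers are assumptions.

TOTAL_POWER_THRESHOLD_W = 15.0   # hypothetical limit on total transmit power

def admit_new_user(data_power_w, control_power_per_user_w, n_active_users):
    """Return True if one more user can be admitted to the HS-DSCH."""
    # Power already spent on control channels for the active users.
    control_power_w = control_power_per_user_w * n_active_users
    # Adding a user adds one more associated control channel.
    projected_total = data_power_w + control_power_w + control_power_per_user_w
    return projected_total <= TOTAL_POWER_THRESHOLD_W

# Example: 10 active users, 0.5 W per control channel, 8 W of data power.
print(admit_new_user(data_power_w=8.0, control_power_per_user_w=0.5,
                     n_active_users=10))   # True: 8 + 5.5 = 13.5 W <= 15 W
```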

2.5.2 High Speed Shared Control Channel, HS-SCCH

To carry key information for the HS-DSCH, the High Speed Shared Control Channel is included in WCDMA. Each terminal needs to consider a maximum of four HS-SCCHs at a given time, and the network signals which control channels are to be considered. The duration of an HS-SCCH block is 2 ms, divided into three slots. The first part of the block, whose duration is one slot, contains time-critical information, i.e. information needed to start the demodulation process. The second part, two slots long, carries less time-critical parameters such as information about the HARQ process. The HS-SCCH uses convolutional coding with the two parts coded separately. To maximise the efficiency, the HS-SCCH block starts two slots prior to the start of the corresponding HS-DSCH block. The time relationship between HS-DSCH and HS-SCCH is illustrated in figure 2.4.


2.5.3 High Speed Dedicated Physical Control Channel, HS-DPCCH

The HS-DPCCH, High Speed Dedicated Physical Control Channel, is the uplink HS-DSCH-related signalling channel. It consists of two parts. The first part, one slot, carries the acknowledgements for the HARQ process and the second part, two slots, contains the CQI value. CQI, Channel Quality Index, is a term used for the information about the current channel conditions a specific terminal experiences. A spreading factor such that the 2 ms TTI consists of 30 bits is used. The HARQ information uses the first 10 bits for repetition coding of positive, 1, or negative, 0, acknowledgements. This information is transmitted from the user equipment approximately 7.5 slots, or 5 ms, after the end of the corresponding HS-DSCH TTI. The CQI value consists of 5 bits that are block coded into 20 channel bits. Figure 2.4 illustrates the relation between all three discussed channels.


Chapter 3

Streaming

Streaming is a common technology for multimedia transmission, and the aim of this thesis work is to evaluate the possibility of QoS streaming over the HS-DSCH. The concept of streaming is used for both audio and video, and in principle the technologies are the same. However, in the rest of the report streaming refers to video streaming. Here follows a short introduction to the concept of streaming.

3.1 Introduction

Streaming has become a popular application for watching video clips on the Internet; typical examples are a goal in a hockey game or a weather report. Now these kinds of applications are expected to be important in the new third generation mobile telephone systems. The basic principle of streaming is that data packets are transmitted while, at the same time, earlier packets belonging to the same video clip are consumed by the client application. So, the download does not need to be completed before the play out of the video clip can start. The scenarios mentioned above, the hockey clip and the weather report, are typical: a server in a fixed network streams video to a terminal. A feature that is important, especially if the terminal is mobile, is that streaming applications do not necessarily require a fixed data rate or strict delay bounds. The receiving terminal uses a buffer to avoid interruptions if short dips in the bit rate occur. The size of the buffer varies and depends on the chosen initial buffer delay. Typically the buffered data corresponds to a few seconds of video to be played. It is, though, important that the packets are transmitted at the same average bit rate as they are consumed in the mobile terminal. The relation between the transmission rate and the play out rate for a streaming application is illustrated in figure 3.1. The distance between the curves corresponds to the amount of data in the client buffer, and if the play out curve reaches the transmission curve the play out will be interrupted.


Figure 3.1. The transmission curve and the play out curve for a streaming application. The distance between the curves corresponds to the amount of data in the client buffer, and if the play out curve reaches the transmission curve the play out will be interrupted.

3.2 Codecs

The bit rate of the video depends on the codec used to compress the data. Two codecs have been adopted by 3GPP for video transmission, H.263 and MPEG-4 [1]. These two methods use similar technologies to compress the video data. Basically there are four steps in the compression process: motion compensation, transformation, quantisation and encoding. These four steps are discussed below and a block diagram for video coding is illustrated in figure 3.2.

Motion Compensation

To start with, the previously transmitted frame is subtracted from the current frame. The difference will in most cases contain much less data because large areas do not change from frame to frame, for example in the background. The next step is to estimate where areas of the previous frame have moved to in the current frame and compensate for the movements. The movement data is transmitted to the receiver in a motion vector.

Transformation

The new frames, which now contain less data, are transformed. The transform used is called DCT, the Discrete Cosine Transform. It operates on 2-dimensional subframes of pixels and transforms them into a number of matrix coefficients. The output from the DCT block, see figure 3.2, is the matrix Y calculated as in equation 3.1. The input subframe is denoted X, and C is the N × N transformation matrix from equation 3.2.

$$Y = C X C^T \qquad (3.1)$$

$$[C]_{i,j} = \begin{cases} \sqrt{\frac{1}{N}} \cos\frac{(2j+1)i\pi}{2N} & i = 0,\; j = 0, 1, \ldots, N-1 \\ \sqrt{\frac{2}{N}} \cos\frac{(2j+1)i\pi}{2N} & i = 1, 2, \ldots, N-1,\; j = 0, 1, \ldots, N-1 \end{cases} \qquad (3.2)$$

Quantisation

For a typical block of pixels most of the coefficients after the transformation are close to zero. In the quantisation step these values are set to zero and the significant non-zero elements are quantised to a small number of coefficients. It is important to realise that some of the information is lost in this step.

Encoding

An entropy encoder is used to compress the quantised DCT coefficients, i.e. frequently occurring values are replaced by short binary codes and infrequent values by longer codes. The data is then transmitted together with some additional information such as the motion vectors. The motion vectors are used to reconstruct the motion compensation at the receiver.
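As a numerical illustration of equations 3.1 and 3.2 (a sketch under the assumption of 8 × 8 blocks, not code from the thesis), the following Python snippet builds the DCT matrix C and applies Y = C X C^T to a nearly uniform block; most coefficients come out close to zero, which is what the quantisation step exploits.

```python
import numpy as np

def dct_matrix(n):
    """Build the N x N DCT transformation matrix C of equation 3.2."""
    c = np.zeros((n, n))
    for i in range(n):
        scale = np.sqrt(1.0 / n) if i == 0 else np.sqrt(2.0 / n)
        for j in range(n):
            c[i, j] = scale * np.cos((2 * j + 1) * i * np.pi / (2 * n))
    return c

N = 8                              # typical block size for H.263/MPEG-4
C = dct_matrix(N)

# A nearly uniform 8x8 block (e.g. a patch of background after motion
# compensation): the transform concentrates its energy in Y[0, 0].
X = 100.0 + np.random.default_rng(0).normal(0.0, 2.0, size=(N, N))
Y = C @ X @ C.T                    # equation 3.1: Y = C X C^T

print(np.round(Y[:3, :3], 1))      # large DC coefficient, small AC coefficients
X_rec = C.T @ Y @ C                # C is orthogonal, so C^T inverts it
print(np.allclose(X, X_rec))       # True
```

The inverse transform in the last two lines corresponds to the decoder step described next.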

At the receiver side there is a decoder that uses inverse functions in the opposite direction to reconstruct the frames. For more information about video compression see reference [11].

Many streaming applications use adaptive streaming, i.e. the bit rate and the quality of the video clip vary with the transmission conditions. However, adaptive streaming will not be used in this study because of the difficulty of comparing results in terms of satisfied streaming users. In this work a traffic model based on the H.263 codec will be used, see section 5.4.1. The model randomly generates frames that correspond to the codec's frame generation. For details about the H.263 codec, see reference [11].

Figure 3.2. Block diagram for video coding: motion compensation, DCT, quantisation and encoding.


Chapter 4

Scheduling

One of the important new features used in HSDPA is fast scheduling, which is used to share the resources in an efficient and sufficiently fair way. The scheduling is especially important when there are many users in the system. If we use an algorithm that utilises the channel conditions as the scheduling parameter we get higher system throughput and the capacity increases. Scheduling is an essential part of this thesis work, so here follows an introduction to scheduling and to the algorithms that will be investigated. At the end of the chapter new algorithms that prioritise streaming users are discussed. Previous studies of scheduling for HSDPA have been performed and can be read about in references [3] and [8].

4.1 Theoretical Formulation

Here follows a mathematical formulation of the scheduling problem provided by Kelly [9]. The main idea is that each user is represented by a utility function. This function expresses the user satisfaction in terms of a chosen criterion that can be bit rate, queuing delay etc. The question how to allocate the channel resources is then an optimisation problem. Here a gradient ascent method, further explained in [6], is used. The resource allocation problem can be formulated as follows:

$$\text{maximise} \quad F(\vec{r}) \equiv \sum_{i=1}^{k} U_i(r_i) \qquad (4.1)$$

$$\text{subject to} \quad \sum_{i=1}^{k} r_i < C, \qquad r_i > 0, \qquad 1 \le i \le k$$


where k is the number of users competing for the channel, r_i is the average bit rate for user i, C is the channel capacity, and U_i(r_i) is the utility function.

If the utility function for each user is assumed to be strictly concave and differentiable, then the same holds for F. Since the feasible region is compact, an optimal solution exists. The solution is unique and can be found by Lagrangian methods. Both the number of users, k, and the channel capacity, C, vary with time, so the optimal solution is also time varying. The best one can do in each scheduling decision is then to move towards the momentarily optimal solution. Therefore the user whose service results in movement along the maximum gradient of the objective function, F, is served. To make this calculation, the average throughput of each user, r_i, must be computed. An exponentially smoothed filter is used to calculate the throughput:

$$r_i(n+1) = \begin{cases} \left(1 - \frac{1}{\tau}\right) r_i(n) + \frac{d_i(n)}{\tau} & \text{if user } i \text{ is served in slot } n \\ \left(1 - \frac{1}{\tau}\right) r_i(n) & \text{otherwise} \end{cases} \qquad (4.2)$$

In the calculations, d_i(n) is the current estimated bit rate for user i in slot n and τ = 800 is the filter constant. Now the optimisation problem reduces to finding the maximum gradient direction, i.e. maximising F'_j(r(n)), the gradient of F in the direction of serving user j. This can be done by parameterising the movement along the ray corresponding to serving user j. Parameterising by α and using equation 4.2 gives:

$$F_j(\alpha) = \sum_{i=1}^{k} U_i\Big(r_i(n) + \alpha\big(r_i(n+1) - r_i(n)\big)\Big) = \sum_{i=1,\, i \ne j}^{k} U_i\Big(r_i(n) + \alpha\big((1 - \tfrac{1}{\tau})r_i(n) - r_i(n)\big)\Big) + U_j\Big(r_j(n) + \alpha\big((1 - \tfrac{1}{\tau})r_j(n) + \tfrac{d_j(n)}{\tau} - r_j(n)\big)\Big)$$

Taking the derivative with respect to α and evaluating at α = 0 yields:

$$F'_j = -\sum_{i=1,\, i \ne j}^{k} U'_i(r_i(n)) \frac{r_i(n)}{\tau} + U'_j(r_j(n))\left(-\frac{r_j(n)}{\tau} + \frac{d_j(n)}{\tau}\right) = U'_j(r_j(n)) \frac{d_j(n)}{\tau} - \sum_{i=1}^{k} U'_i(r_i(n)) \frac{r_i(n)}{\tau}$$

We want to choose the user j who results in movement along the maximum gradient direction. The summation term is common for all users and can be removed. Finally, we choose the user for which:

$$\arg\max_j \; F'_j(\vec{r}(n)) = \arg\max_j \; d_j(n)\, U'_j(r_j(n)) \qquad (4.3)$$

Different scheduling algorithms can then be computed using different utility functions. Here follows a theoretical description of the scheduling algorithms to be investigated in this study.
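The gradient rule (4.3) and the throughput filter (4.2) translate into only a few lines of code. The sketch below is an illustrative reimplementation, not code from the Rasmus simulator; the example bit rates are made up, and the utility derivative is passed in as a function so that the schedulers of the following sections appear as special cases.

```python
# Sketch of the gradient scheduler of section 4.1: each slot, serve the user j
# maximising d_j(n) * U'_j(r_j(n)), then update the smoothed throughputs r_i
# with the filter of equation 4.2.

TAU = 800.0  # filter constant from section 4.1

def schedule_slot(d, r, utility_derivative):
    """Pick the user index maximising d_j * U'_j(r_j) (equation 4.3)."""
    scores = [d_j * utility_derivative(r_j) for d_j, r_j in zip(d, r)]
    return max(range(len(d)), key=lambda j: scores[j])

def update_throughput(r, d, served):
    """Exponentially smoothed throughput update of equation 4.2."""
    return [(1.0 - 1.0 / TAU) * r_i + (d[i] / TAU if i == served else 0.0)
            for i, r_i in enumerate(r)]

# Example: three users, current estimated bit rates d (kbps) and averages r.
d = [500.0, 1200.0, 300.0]
r = [400.0, 900.0, 100.0]
pf_derivative = lambda x: 1.0 / max(x, 1e-9)   # U(r) = ln(r) => U'(r) = 1/r
served = schedule_slot(d, r, pf_derivative)
r = update_throughput(r, d, served)
print(served, [round(x, 2) for x in r])        # user 2 is served this slot
```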

4.2 Maxrate

The Maxrate algorithm uses the utility function U_j(r) = βr_j. Here the function expresses satisfaction in terms of the number of bits delivered to a user. The scheduling algorithm chooses the user for which:

$$\arg\max_j \; d_j(n) \qquad (4.4)$$

The estimated current bit rate, d_j(n), is calculated from the Channel Quality Index, CQI, that is reported from the mobiles.

This is the most efficient way to use the channel; the total throughput will be maximised. The drawback is that the system is unfair. Users with good channel conditions are served all the time at the expense of users with less favourable channels. As we will see, there is always a trade-off between system throughput and fairness.

4.3 Round Robin, RR

Fairness can be measured in different ways. In this section two channel-independent scheduling methods are discussed. Algorithm (4.5) uses fairness in terms of queue time, q_j(n), the time since the last transmission for user j, currently in slot n. The available resources are then shared equally. The second method (4.6) schedules the user with the lowest average bit rate; the algorithm is then fair in terms of average bit rate.

$$\arg\max_j \; q_j(n) \qquad (4.5)$$

$$\arg\max_j \; \frac{1}{r_j(n)} \qquad (4.6)$$

Many studies have been performed of the algorithms mentioned so far. The Round Robin algorithms have turned out to be very inefficient compared to the Maxrate scheduler and the fairness improvements are limited [3][8].


4.4 Proportionally Fair, PF

As mentioned above, there is a trade-off between system throughput and fairness. The Proportionally Fair algorithm takes both these aspects into account. The utility function used in this case is logarithmic, U_j(r) = ln(r), and it gives us the following scheduling algorithm [7]:

$$\arg\max_j \; \frac{d_j(n)}{r_j(n)} \qquad (4.7)$$

Here users with comparatively good channel conditions are scheduled: the momentary estimated bit rate is compared to the average bit rate for each user. The first task of this thesis work is to implement the PF algorithm in the simulator and compare it to the RR and Maxrate methods.
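For comparison, the basic scheduling rules (4.4)-(4.7) can be written as per-user priority functions, as in the sketch below; the interface and the example numbers are assumptions made for illustration only.

```python
# Per-user priorities for the basic schedulers: the user with the highest
# priority value is served in the current TTI.

def maxrate_priority(d_j, r_j, q_j):
    # Equation 4.4: serve the user with the best estimated bit rate.
    return d_j

def round_robin_priority(d_j, r_j, q_j):
    # Equation 4.5: serve the user that has waited longest since last service.
    return q_j

def fair_rate_priority(d_j, r_j, q_j):
    # Equation 4.6: serve the user with the lowest average bit rate.
    return 1.0 / max(r_j, 1e-9)

def proportionally_fair_priority(d_j, r_j, q_j):
    # Equation 4.7: estimated bit rate relative to the user's own average.
    return d_j / max(r_j, 1e-9)

def pick_user(users, priority):
    """users: list of (d_j, r_j, q_j) tuples; returns the served index."""
    return max(range(len(users)), key=lambda j: priority(*users[j]))

users = [(500.0, 400.0, 4), (1200.0, 900.0, 1), (300.0, 100.0, 10)]
for rule in (maxrate_priority, round_robin_priority,
             fair_rate_priority, proportionally_fair_priority):
    print(rule.__name__, pick_user(users, rule))
```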

4.5 Streamprio

The aim of this thesis work is to run streaming users on the HS-DSCH, which basically is a best effort channel. This should be achieved by modifications of the scheduling algorithms. To do this, the scheduling process, located in the Node B, has to get information about what service each user is utilising. In the HSDPA standard this information does not exist in the Node B. However, a priority parameter is included in the standard, and this parameter could perhaps be used to separate the service classes in the Node B. The parameter is a value, with 16 levels, set in the RNC and transmitted to the Node B. A solution would thus be to allocate one level for streaming user identification, but the service class information would then still need to be transmitted to the RNC. In this work the priority parameter is not utilised; instead, the simulator is modified such that the service class is known in the Node B.

The first new method, Streamprio, is very simple. If there are streaming users that have data to transmit, they are served. Otherwise ordinary data traffic services can use the channel. To select among users in the same service class different methods can be used. Here, the PF algorithm is applied.

Streamprio is a method that prioritises all streaming users whether they need it or not. The following two methods, sections 4.6 and 4.7, prioritise streaming users only when they require extra resources to remain satisfied.

4.6 Barrier Function, BF

This method is introduced to prioritise streaming users in a critical state, i.e. when an interruption in the video clip is getting close. Barrier functions are utilised to reduce movements outside a feasible region, and here this region is specified in terms of average bit rate. Consider the following utility function, where the requested minimum bit rate, r_min, is known:

$$U(r) = r + \left(1 - e^{-\beta(r - r_{\min})}\right)$$

When the bit rate drops below the requested minimum bit rate the function will decrease rapidly. For large values of the bit rate the function is approximately the same as for the Maxrate scheduler. Taking the derivative according to equation 4.3 gives us:

$$U'(r) = 1 + \beta e^{-\beta(r - r_{\min})}$$

To give the algorithm greater flexibility, β is split into two separate constants: α in front of the exponential and β in the exponent. The final scheduling algorithm is then:

$$\arg\max_j \begin{cases} d_j(n)\left(1 + \alpha e^{-\beta(r_j(n) - r_{\min})}\right) & \text{for streaming users} \\ d_j(n) & \text{otherwise} \end{cases} \qquad (4.8)$$

This algorithm is based on the Maxrate scheduler, and the web users will thus suffer from unfairness as discussed earlier. An alternative is to use PF scheduling as the basic algorithm.

$$\arg\max_j \begin{cases} \frac{d_j(n)}{r_j(n)}\left(1 + \alpha e^{-\beta(r_j(n) - r_{\min})}\right) & \text{for streaming users} \\ \frac{d_j(n)}{r_j(n)} & \text{otherwise} \end{cases} \qquad (4.9)$$

Both these algorithms will be implemented and tested in the simulator.
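A sketch of the PFbarrier priority of equation 4.9 is given below; the values chosen for r_min, α and β are placeholders for illustration (the thesis tunes such constants by testing, see appendix A), not the settings actually used.

```python
import math

def pf_barrier_priority(d_j, r_j, is_streaming, r_min=55.0, alpha=5.0, beta=0.1):
    """Equation 4.9: PF priority with a barrier term for streaming users.

    The barrier term grows quickly once the average bit rate r_j drops below
    the requested minimum r_min, so struggling streaming users win the slot.
    r_min, alpha and beta are illustrative values, not the thesis settings.
    """
    pf = d_j / max(r_j, 1e-9)
    if is_streaming:
        return pf * (1.0 + alpha * math.exp(-beta * (r_j - r_min)))
    return pf

# A streaming user slightly below its target rate beats a web user with the
# same instantaneous channel and average, because of the barrier boost.
print(pf_barrier_priority(400.0, 50.0, is_streaming=True))   # boosted
print(pf_barrier_priority(400.0, 50.0, is_streaming=False))  # plain PF
```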

4.7 Buffer Level, BL

Considering a streaming user, the mean bit rate from the server is roughly constant, i.e. data packets arrive at a roughly constant rate at the transmission system. Thus, if the rate from the base station to the client terminal is too low, packets will be buffered in the system. In this method the prioritisation is based on the packet buffer level, b_j(n), the sum of packets in the Node B and the RNC. The main advantage of using the buffer level as the scheduling parameter is that it is a directly observed quantity, compared to the average bit rate, which has to be calculated.

$$\arg\max_j \begin{cases} \frac{d_j(n)}{r_j(n)}\left(1 + \lambda e^{\mu b_j(n)}\right) & \text{for streaming users} \\ \frac{d_j(n)}{r_j(n)} & \text{otherwise} \end{cases} \qquad (4.10)$$

A problem with this algorithm is that at the end of the film there will be few packets left in the system, so these users will not be prioritised even though they have not received any data for a long time. This is severe because much capacity has already been spent on these users, yet they end up unsatisfied because they do not receive the last frames of the film. The scheduler has no way of knowing which packets should be regarded as end-of-film packets, i.e. packets belonging to the end of the film. One simple solution to the problem is to combine the original Buffer Level algorithm with the Barrier Function scheduler.

$$\arg\max_j \begin{cases} \frac{d_j(n)}{r_j(n)}\left(1 + \lambda e^{\mu b_j(n)}\right)\left(1 + \alpha e^{-\beta(r_j(n) - r_{\min})}\right) & \text{for streaming users} \\ \frac{d_j(n)}{r_j(n)} & \text{otherwise} \end{cases} \qquad (4.11)$$

Both these methods will be investigated to see if the problem with the first one appears as expected. In equations 4.10 and 4.11 the PF algorithm is used as the basic function, but simulations using Maxrate, as was done for the BF scheduler, will also be performed.
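Similarly, the combined Buffer Level/Barrier Function priority of equation 4.11 can be sketched as follows; λ, µ and the other constants are again illustrative placeholders, not the tuned values from the thesis.

```python
import math

def bl_bf_priority(d_j, r_j, b_j, is_streaming,
                   lam=1.0, mu=0.05, alpha=5.0, beta=0.1, r_min=55.0):
    """Equation 4.11: PF priority scaled by a buffer-level and a barrier term.

    b_j is the number of buffered packets for user j in the Node B and RNC;
    a large backlog or a too-low average bit rate both raise the priority.
    All constants are illustrative, not the values from the thesis.
    """
    pf = d_j / max(r_j, 1e-9)
    if not is_streaming:
        return pf
    buffer_term = 1.0 + lam * math.exp(mu * b_j)
    barrier_term = 1.0 + alpha * math.exp(-beta * (r_j - r_min))
    return pf * buffer_term * barrier_term

# End-of-film case: few packets left (small b_j) but a low average bit rate,
# so the barrier term still prioritises the user (the fix discussed above).
print(bl_bf_priority(400.0, 40.0, b_j=2, is_streaming=True))
print(bl_bf_priority(400.0, 40.0, b_j=2, is_streaming=False))
```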


Chapter 5

Simulation Models

To perform the comparisons of the different scheduling algorithms a simulator called Rasmus is used. Rasmus is developed at Ericsson Research and is implemented in C++ and Java. In this chapter the most important simulator models are discussed.

5.1 Propagation Model

The propagation model describes how much the radio signal is attenuated on its way to the receiver. As mentioned in section 2.4 the signal variations can be split into a fast and a slow part. The propagation model used in Rasmus is also generated in these two steps.

The first step models the slow variations as distance attenuation and shadow fading. A set of maps is used to decide the current path loss: the position of each terminal is compared to the map and the path loss value is returned. In the simulations a wrapped 21-cell map that closes the area is used. The hexagonal cells are uniformly distributed and each base station covers three cells to simulate three-sector antennas, figure 5.1.

The fast model simulates the multipath fading and table lookup is used to decide the current fast varying path loss. The table is chosen according to the environment we want to simulate.
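A minimal sketch of how such a two-step propagation model can be composed is shown below: distance attenuation with a path-loss exponent, lognormally distributed shadow fading, and a fast-fading contribution drawn from a table. The exponent, standard deviation and fading table are invented placeholders, not the maps and tables used in Rasmus.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters; the thesis takes these from maps and lookup tables.
PATHLOSS_EXPONENT = 3.5        # distance attenuation: loss ~ d^exponent
SHADOW_STD_DB = 8.0            # lognormal shadow fading standard deviation
FAST_FADING_TABLE_DB = rng.normal(0.0, 5.0, size=1000)  # stand-in for a table

def path_gain_db(distance_m):
    """Total path gain in dB: distance attenuation + slow + fast fading."""
    distance_loss_db = 10.0 * PATHLOSS_EXPONENT * np.log10(max(distance_m, 1.0))
    shadow_db = rng.normal(0.0, SHADOW_STD_DB)          # slow (shadow) fading
    fast_db = rng.choice(FAST_FADING_TABLE_DB)          # table lookup
    return -(distance_loss_db + shadow_db + fast_db)

print(round(path_gain_db(500.0), 1))   # one random draw for a 500 m terminal
```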

5.2 Physical Layer Model

The physical layer model estimates the received signal quality, which is used to calculate the block error probabilities. A random number generator is then used to decide if the blocks are correctly received. The CQI value is also calculated from the estimated signal quality, but it is multiplied by a lognormally distributed variable to simulate errors occurring during the transmission from the terminal. The propagation model is used as input to the physical layer model.

Figure 5.1. Wrapped 21-cell map that closes the area. Three-sector antennas are used.

5.3 HSDPA Model

In the simulations there will be streaming and web traffic using the HS-DSCH. The channel allocates 10 codes, and to prevent too many users from entering the system, admission control is used. Without admission control, the associated channels will use too much power and at heavy traffic load there will be no resources left for the data transmission. The HSDPA specific features discussed in section 2.5 are modelled as follows.

Fast link adaptation

Modulation and code rate are modified based on the current channel condition as described in section 2.5.

Fast scheduling

For every 2 ms TTI there is a decision on which user to serve. Different scheduling algorithms that are both channel and service dependent will be used, see chapter 4. In some of the discussed algorithms the streaming users will be prioritised. Service dependent scheduling is implemented in the simulator for this study.


Type 2 HARQ

A maximum of four retransmissions will be used. If the data is still not correctly received the packet will instead be retransmitted from the RNC.

5.4 Traffic Models

In the simulations there will be an equal number of web and streaming users. The users are randomly and uniformly distributed over the simulation area and the session duration is 60 seconds for both services. The two traffic models used are streaming and web browsing.

5.4.1 Streaming

The streaming model [10] used in Rasmus is based on the H.263 coder. In this study no adaptive streaming is used, so the target bit rate is fixed to 55 kbps. This value corresponds to a 64 kbps carrier bit rate. Frame sizes are adapted to the chosen bit rate and vary with the scenes in the film. In the simulator, frame sizes are modelled according to a Markov chain with several states, where each state represents a type of scene in the film. There are states for fast scenes and slow scenes, generating respectively large and small frames. Each state holds the following information:

• Expected (mean) frame size. • Standard deviation of noise. • Minimum frame size. • Maximum frame size.

The frames are then put into RTP, Real-time Transport Protocol, packets. Large frames are split into a number of packets while small frames use only one RTP packet. The packets are sent to the client via IP, Internet Protocol for packet transmission.
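The frame-size generation can be sketched as a small Markov chain as below; the two states, transition probabilities and per-state statistics are invented for illustration, whereas the actual model in [10] has several calibrated states.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-state model: slow scenes -> small frames, fast scenes ->
# large frames. Each state carries mean/std/min/max frame size in bytes.
STATES = {
    "slow": dict(mean=300.0, std=60.0, fmin=100.0, fmax=600.0),
    "fast": dict(mean=900.0, std=200.0, fmin=400.0, fmax=2000.0),
}
TRANSITIONS = {"slow": {"slow": 0.9, "fast": 0.1},
               "fast": {"slow": 0.2, "fast": 0.8}}

def generate_frames(n_frames, state="slow"):
    """Generate n_frames frame sizes (bytes) from the Markov chain."""
    sizes = []
    for _ in range(n_frames):
        p = STATES[state]
        size = rng.normal(p["mean"], p["std"])          # mean + Gaussian noise
        sizes.append(float(np.clip(size, p["fmin"], p["fmax"])))
        nxt = TRANSITIONS[state]
        state = rng.choice(list(nxt), p=list(nxt.values()))
    return sizes

frames = generate_frames(10)
print([round(s) for s in frames])
```

Each generated frame would then be packetised into one or more RTP packets, as described above.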

In some situations the download data rate is too low for running streaming applications. This can happen if there are too many users in the system or if a user experiences bad channel conditions. The streaming users are then categorised as unsatisfied. There are several ways to measure to what extent a user is satisfied. The worst that can happen is probably that the film is interrupted in the middle; then it is better not to be able to start watching it at all. It would thus be possible to introduce some kind of satisfaction points to grade the satisfaction. But in this study only two levels are used, satisfied or not satisfied, and once a user is regarded as unsatisfied it remains so for the whole session. The conditions to fulfil for the user to remain satisfied are:


• No rejection by the admission control.

• The initial buffering time should be less than a predetermined time T_pbs, where T_pbs > T_b. T_b is the length of film that initially needs to be buffered before the play out starts.

• No waiting for rebuffering caused by a starved receive buffer, i.e. the play out curve never reaches the transmission curve, illustrated in figure 3.1.

If the streaming application runs even worse the user will shut the session down. This seems reasonable in a real scenario and is favourable for the system. Session shutdown takes place if the initial buffering time exceeds T_pbh or the total rebuffering time exceeds T_rbh.

5.4.2 Web Browsing

In this study a web browsing model with read time is used. Every web page is downloaded as a complete object. When a page download is finished the user reads it and then clicks again to get a new page. The reading time is exponentially distributed with mean m_reads. With faster downloads there will be more frequent clicking. The home page sizes, p, are lognormally distributed and truncated at p_max.

File sizes in bytes are generated as follows:

$$p = \min\left(p_{\max},\; 10^{\sigma X + \mu} + p_{\min}\right)$$

where X is a Gaussian random variable, µ is the lognormal expectation value, σ is the lognormal standard deviation, p_max is the maximum page size, and p_min is the minimum page size.
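A sketch of this truncated lognormal page-size generator follows; the values of µ, σ, p_min and p_max are placeholders, the real settings being listed in appendix A (not part of this excerpt).

```python
import numpy as np

rng = np.random.default_rng(3)

def page_size_bytes(mu=4.0, sigma=0.35, p_min=1000.0, p_max=2_000_000.0):
    """Draw a web page size: p = min(p_max, 10**(sigma*X + mu) + p_min)."""
    x = rng.standard_normal()            # X, Gaussian random variable
    return min(p_max, 10.0 ** (sigma * x + mu) + p_min)

sizes = [page_size_bytes() for _ in range(5)]
print([round(s) for s in sizes])
```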

If a web user is currently downloading a web page when the session time is over, the download of that page finishes before the session is shut down.

The constants are chosen such that the traffic loads generated by the two services, web and streaming, are of the same order of magnitude. All relevant parameter settings are included in appendix A.


Chapter 6

Simulations and Results

In this chapter the evaluation of the discussed scheduling algorithms is performed. To start with, the three basic algorithms, Round Robin, Proportionally Fair and Maxrate, are discussed in terms of system throughput, user fairness and streaming capacity. Then the algorithms for prioritising streaming users are evaluated. Important simulation parameters can be found in appendix A. The scheduling parameters are determined by testing; the chosen values are the best found but they are not necessarily the optimal choice. Especially when there are several parameters, the chosen values may correspond to a local optimum. But first of all the performance measures to be used are presented.

6.1 Performance Measures

In this section, the most important measures that will be used are defined.

Traffic load

The unit used is Erlang per cell, which means the average number of simultaneous streaming users in a cell. In the simulations presented in figures 6.2-6.11 there are equal numbers of web and streaming users, so the total number of users is doubled.

Throughput

Throughput is the total number of successfully delivered bytes. This is most interesting for the web users.

Object bit rate

The bit rate for a delivered web object in kbps. It equals the object size divided by the time from the web page request until successful delivery.

CDF

The Cumulative Distribution Function, CDF, will be used to present the object bit rate. The CDF is defined as the probability P(X ≤ x), for all values x, where X is a random variable drawn from the distribution f_X(x).


5th percentile

The statistical term percentile is defined as the value x for which, given p, equation 6.1 holds. For the 5th percentile, p = 0.05. Usually, the web users are defined as satisfied if the value x, the object bit rate, exceeds a predetermined value.

$$P(X \le x) = p \qquad (6.1)$$
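These two measures can be computed directly from a list of delivered object bit rates, as in the sketch below (the sample values are made up).

```python
import numpy as np

def empirical_cdf(samples):
    """Return sorted samples and P(X <= x) evaluated at each of them."""
    x = np.sort(np.asarray(samples, dtype=float))
    p = np.arange(1, len(x) + 1) / len(x)
    return x, p

def percentile_5th(samples):
    """The value x with P(X <= x) = 0.05, cf. equation 6.1."""
    return float(np.percentile(samples, 5))

# Made-up object bit rates in kbps for a batch of delivered web objects.
object_bitrates = [320, 150, 90, 450, 600, 75, 220, 510, 380, 130]
x, p = empirical_cdf(object_bitrates)
print(percentile_5th(object_bitrates))   # ~81.75 kbps for this sample
```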

6.2 Introduction to the PF Algorithm

The first task of this thesis work is to implement a PF algorithm in the simulator. As mentioned in section 4.4 this algorithm is a trade-off between system capacity and user fairness. The parameter used for the scheduling decision is, in addition to the current estimated bit rate, also the average bit rate, r_i(n). The calculation of the average bit rate is similar to equation 4.2, but here R_i(n), the bit rate for a successful transmission in slot n, is used. τ = 800 is the filter constant.

$$r_i(n+1) = \begin{cases} \left(1 - \frac{1}{\tau}\right) r_i(n) + \frac{R_i(n)}{\tau} & \text{if user } i \text{ is served in slot } n \\ \left(1 - \frac{1}{\tau}\right) r_i(n) & \text{otherwise} \end{cases} \qquad (6.2)$$

For each user the initial average bit rate has to be set. Suppose the initial value is chosen to be zero. For the PF scheduler this results in strong prioritisation in the beginning of the transmission, due to division by small numbers. Small objects will then be favoured because their calculated average bit rate never reaches the stable level. The stable level depends on the number of users in the system. It is also important that the average bit rate is calculated individually in each Node B. When a user changes cell the value has to be recalculated from zero. The consequence is that users near the cell borders, frequently switching cells, will be prioritised. But these users suffer from bad channel conditions and the handovers are time-consuming, so in terms of user fairness the prioritisation of users near the cell borders is favourable because it compensates for the bad channel conditions. This has also been confirmed by simulations, not presented here, and therefore the initial average bit rate is set to zero.

In the first simulations, illustrated in figure 6.1, there are only web users. The traffic load is 20 users per cell and the differences between the three basic schedulers, Round Robin (eq 4.5), Proportionally Fair (eq 4.7) and Maxrate (eq 4.4), are shown. As expected, the Maxrate scheduler provides the highest system throughput but the CDF plot shows unfairness between the users. The CDF plot also shows that the PF scheduler is fair and provides comparatively high system capacity. Note that complete fairness would imply a step in the CDF plot.

In figure 6.2, left plot, where there are equal number of streaming and web users, the performance in terms of satisfied streaming users is illustrated. Here the PF scheduler turns out to be much more effective than the other two algorithms. As already discovered the RR scheduler provides low system throughput and there-fore the streaming applications will not receive enough data. The problem with

Figure 6.1. Total throughput and CDF for the three basic schedulers. Only web users, 20/cell.

Figure 6.2. Left: Fraction of satisfied streaming users versus traffic load for the three basic schedulers. Right: System throughput for the web users.

The problem with Maxrate scheduling is that it is too unfair for streaming applications. Large web objects requested by users in good channel conditions will allocate too much of the resources at the expense of streaming users in less favourable conditions. As expected, the web user throughput is significantly higher for the Maxrate scheduler, figure 6.2 right plot. Based on these results the PF algorithm will be used as comparison to the new schedulers that will be investigated in this report.

6.3 New schedulers

In this section the simulation results of the streaming prioritising schedulers, introduced in chapter 4, will be presented. Proportionally Fair scheduling is used as comparison.


6.3.1 Streamprio

The Streamprio scheduler always prioritises streaming users, whether they need it or not. Results of the simulations can be seen in figure 6.3. The left plot shows the fraction of satisfied streaming users versus traffic load, and the improvements compared to the PF scheduler are only marginal. Furthermore, figure 6.4 illustrates that the web user capacity is significantly decreased: the left plot shows the total system throughput and the right plot the 5th percentile object bit rate. The reason for the only marginal improvement for the streaming users is explained by figure 6.3, right plot, which illustrates why streaming users do not remain satisfied. The unsatisfied users either suffer from starved buffer, which includes both rebuffering and too long prebuffering time, or they are rejected in the admission control. Apparently, streaming users become unsatisfied because of admission rejection to a much larger extent with Streamprio than with PF scheduling. On the other hand, users let into the system become satisfied to a larger extent with the Streamprio scheduler. The explanation is that, with Streamprio scheduling, web users get stuck in the system. When the web session time is up, a user stays until the download of the current page is finished. If there are many streaming users in the system, they are scheduled all the time while the web users wait for a free TTI, so these sessions do not shut down. The total number of users in the system, and the power for their associated channels, then increases, which in turn implies more frequent rejection of new users in the admission control. This observation, and how it can be utilised to improve the results, is further discussed in section 6.4.
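A minimal sketch of the strict-priority idea behind Streamprio is given below: streaming users with queued data are always scheduled ahead of web users, with the PF metric as tie-breaker within each group. The exact formulation in chapter 4 may differ, and the user attributes assumed here (is_streaming, has_data, pf_metric) are hypothetical.

```python
def schedule_tti(users):
    """Pick the user to serve in the next TTI (illustrative sketch only).

    Each user object is assumed to provide:
      is_streaming  -- True for streaming users
      has_data()    -- True if data is queued for the user in the Node B
      pf_metric()   -- estimated bit rate divided by filtered average bit rate
    """
    candidates = [u for u in users if u.has_data()]
    if not candidates:
        return None
    streaming = [u for u in candidates if u.is_streaming]
    # Streaming users always win the TTI whenever any of them has queued
    # data; web users are served only when no streaming user needs the slot.
    group = streaming if streaming else candidates
    return max(group, key=lambda u: u.pf_metric())
```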

6.3.2 Barrier Function

Barrier Function is the first of the schedulers in this study that takes the streaming users' state into account. It gives higher priority to users in a critical state, using the average bit rate as the prioritising parameter, i.e. when the bit rate is too low the user gets higher priority. The algorithms, called PFbarrier (eq 4.9) and barrier (eq 4.8), are based on the PF and the Maxrate schedulers respectively. The results are plotted together with the PF scheduler as comparison in figures 6.5 and 6.6. Figure 6.5, left plot, shows an improvement in terms of streaming user satisfaction for the BF schedulers: the traffic load can be raised by about 20 percent compared to the PF scheduler. The difference between the two BF algorithms is in this respect negligible, but for the web users a significant difference between them can be observed, figure 6.6. The barrier algorithm provides higher system throughput, left plot, but performs worse for the 5th percentile web objects, right plot, compared to PFbarrier scheduling. The difference between the two BF schedulers is expected given the results previously observed for the original PF and Maxrate schedulers. For heavy loads the original PF algorithm provides the highest web throughput, which is natural since it does not need to prioritise the streaming users. In terms of streaming user satisfaction both BF schedulers work well up to 12 Erlangs per cell.
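The sketch below only illustrates the general shape of such a scheduler: the base metric (PF or Maxrate) is scaled by a weight that grows steeply once a streaming user's filtered bit rate approaches its target. The actual barrier functions are those of equations 4.8 and 4.9 in chapter 4; the reciprocal weight and the constants alpha and beta used here are assumed stand-ins, and the user attributes are hypothetical.

```python
def barrier_weight(avg_rate_kbps, target_rate_kbps, alpha=1.0, beta=0.1):
    """Assumed barrier shape (NOT the exact eq. 4.8/4.9): roughly 1 while the
    filtered bit rate is well above the target, growing steeply as it
    approaches or falls below the target."""
    margin = max(avg_rate_kbps - beta * target_rate_kbps, 1e-6)
    return 1.0 + alpha * target_rate_kbps / margin


def bf_metric(user, use_pf=True):
    """'PFbarrier'-style metric if use_pf is True, 'barrier' (Maxrate-based)
    otherwise. Web users keep the plain base metric."""
    base = user.pf_metric() if use_pf else user.estimated_rate()
    if not user.is_streaming:
        return base
    return base * barrier_weight(user.avg_rate, user.target_rate)
```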

Figure 6.3. Streamprio. Left: Fraction of satisfied streaming users. Right: Fraction of satisfied streaming users, where the unsatisfied users are either rejected in the admission control or suffer from starved buffer.

Figure 6.4. Streamprio. Left: System throughput for the web users. Right: 5th percentile web object bit rate.

Figure 6.5. Barrier Functions. Left: Fraction of satisfied streaming users. Right: Fraction of satisfied streaming users, where the unsatisfied users are either rejected in the admission control or suffer from starved buffer.

Figure 6.6. Barrier Functions. Left: System throughput for the web users. Right: 5th percentile web object bit rate.

The 5 percent slowest web page downloads are then about 17 kbps and 43 kbps for the barrier and PFbarrier schedulers respectively. So, if we want to support a guaranteed bit rate for the web users, the PFbarrier scheduler is preferable.

Figure 6.5, right plot, shows why users become unsatisfied. Similar to the results from the Streamprio simulations, the fraction of rejected users increases compared to the PF scheduler. But also here, the probability of remaining satisfied is comparatively large for users let into the system when one of the two Barrier Function schedulers is used. This is discussed further in section 6.4.

6.3.3 Buffer Level

The idea of Buffer Level scheduling is to prioritise the streaming users when the number of packets in the system increases.


The main advantage of using the buffer level as the scheduling parameter is that it is a true, directly observed variable, whereas the average bit rate is a calculated estimate. Especially in handovers, when the average bit rate is recalculated from zero, an error is introduced that affects the BF scheduler: the resulting average bit rate is an underestimate of the correct bit rate, and a larger number of handovers means a larger difference between the calculated and the correct value. However, as discussed in section 6.2, it is preferable to start at zero compared to other initial values. Another advantage of Buffer Level scheduling is that the algorithm can handle temporarily varying bit rates from the streaming server.

Using the number of packets in the system as a true scheduling parameter would thus be desirable, but also here handovers seem to be a problem. When switching cells, the packets in the Node B are lost and the buffer level decreases without any data being transmitted. However, the decreased buffer level is only temporary: the lost packets are retransmitted from higher layers and the buffer level is soon adjusted to its correct value.

As mentioned in section 4.7 there is also a severe problem at the end of the streaming session, when the buffer level does not increase even though no packets are transmitted. As expected, initial simulations using the scheduler based on equation 4.10 show very poor results. The reason is that much effort is spent on satisfying the users, yet they end up unsatisfied because the prioritisation does not work for the last packets and the film is interrupted at the very end. To solve this problem the Buffer Level algorithm is combined with the Barrier Function scheduler. This is a simple solution, and finding alternative methods is suggested as future work, section 7.2. The constants for the BF part, α and β in equation 4.11, are chosen such that it does not prioritise as hard as the original Barrier Function scheduler, in order to make the effect of the Buffer Level scheduling visible. The difficulty of choosing the constants is a drawback with this combined solution: there are four scheduling constants to optimise and finding the optimal values is very hard. Additionally, it is difficult to draw any conclusions concerning the effects of using the buffer level as the scheduling parameter, because the two parts of the scheduling, Barrier Function and Buffer Level, cannot be separated. The results, illustrated in figures 6.7 and 6.8, are similar to the results for the BF scheduler. Also here the fraction of rejected users is large compared to the PF scheduler; more about this follows in the next section.
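One possible reading of this combination is sketched below: the PF base metric is scaled both by a term that grows with the amount of data queued for the user in the Node B and by a mild barrier term driven by the filtered bit rate. The exact forms are given by equations 4.10 and 4.11 in chapter 4; all constants and user attributes in the sketch are assumptions for illustration only.

```python
def buffer_level_metric(user, gamma=0.5, ref_bytes=20000, alpha=0.3):
    """Illustrative combination of Buffer Level and Barrier Function
    scheduling; gamma, ref_bytes and alpha are made-up constants,
    not the thesis values."""
    base = user.pf_metric()
    if not user.is_streaming:
        return base
    # Buffer Level term: more data queued for the user in the Node B means
    # the user is falling behind, so the priority is raised.
    queue_term = 1.0 + gamma * user.queued_bytes() / ref_bytes
    # Mild barrier term on the filtered bit rate; it covers the end of the
    # session, when the queue no longer grows, and is kept weaker than in
    # the pure BF scheduler so the buffer-level effect remains visible.
    barrier_term = 1.0 + alpha * user.target_rate / max(user.avg_rate, 1e-6)
    return base * queue_term * barrier_term
```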

6.4 Admission Priority

Evaluations of the streaming prioritising schedulers show interesting results. It seems that hard prioritisation causes an increasing number of users in the system. This is because there are web users that get limited chances to transmit, which in turn implies very low bit rates; nevertheless they remain in the system, allocating power for their control channels, until the current download is finished. With an increasing number of users, the total amount of power for the control channels increases, and the final result is more frequent rejection of new users in the admission control.

Figure 6.7. Buffer Level. Left: Fraction of satisfied streaming users. Right: Fraction of satisfied streaming users, where the unsatisfied users are either rejected in the admission control or suffer from starved buffer.

Figure 6.8. Buffer Level. Left: System throughput for the web users. Right: 5th percentile web object bit rate.

Figure 6.9. Fraction of satisfied streaming users. The unsatisfied users are either rejected in the admission control or they suffer from starved buffer.

Hard prioritisation makes users allowed into the system satisfied to a larger extent, but it implies more admission rejection. Figure 6.9 illustrates how the fraction of rejected users increases with the new schedulers compared to PF scheduling. It also shows that users let into the system usually remain satisfied, i.e. the fraction of users unsatisfied because of rebuffering or too long prebuffering time is significantly decreased. The results are the same as those presented in the right plots of figures 6.3, 6.5 and 6.7, but here summarised in one plot. The conclusion is that prioritisation of the streaming users results in an increased number of users and more frequent admission rejection, but the outcome for users allowed into the system is improved.

A way to get around the problem is to drop web users that are in the system but do not get the chance to transmit, or at least to not allow more web users into the system. This is not in the main scope of this thesis work, but some simple simulations with rejection of new users have been performed. In the simulations so far, the power threshold is 15 W: when the total power for the High Speed data channel and the associated control channels exceeds this value, new users are rejected. In the following simulation the streaming users are prioritised also in the admission control. If the total downlink power exceeds dlp, new users are denied.

dlp = \begin{cases} 15\ \text{W} & \text{for streaming users} \\ 11\ \text{W} & \text{otherwise} \end{cases}
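A minimal sketch of this admission rule is given below; the 15 W and 11 W thresholds come from the text above, while the function and constant names are illustrative.

```python
STREAMING_POWER_LIMIT_W = 15.0  # threshold for new streaming users (from the text)
WEB_POWER_LIMIT_W = 11.0        # stricter threshold for new web users (from the text)


def admit_new_user(total_downlink_power_w, is_streaming):
    """Static admission priority: new web users are turned away earlier than
    new streaming users, based on the current total downlink power for the
    High Speed data channel and its associated control channels."""
    limit = STREAMING_POWER_LIMIT_W if is_streaming else WEB_POWER_LIMIT_W
    return total_downlink_power_w <= limit


# Example: at a total downlink power of 12.3 W a new streaming user is still
# admitted, while a new web user is rejected.
print(admit_new_user(12.3, is_streaming=True))   # True
print(admit_new_user(12.3, is_streaming=False))  # False
```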

The results, shown in figure 6.10, are interesting. This simple static prioritisation implies significantly improved results. The Buffer Level scheduler now manages up to 16 Erlangs per cell, compared to 10 Erlangs for the original PF scheduler.
