
Department of Computer and Information Science

Master's thesis

Test Data Post-Processing and

Analysis of Link Adaptation

by

Paul Nedstrand & Razmus Lindgren

LIU-IDA/LITH-EX-A--15/037--SE

2015-06-25




The publishers will keep this document online on the Internet - or its possible

replacement - for a considerable time from the date of publication barring

exceptional circumstances.

The online availability of the document implies a permanent permission for

anyone to read, to download, to print out single copies for your own use and to

use it unchanged for any non-commercial research and educational purpose.

Subsequent transfers of copyright cannot revoke this permission. All other uses

of the document are conditional on the consent of the copyright owner. The

publisher has taken technical and administrative measures to assure authenticity,

security and accessibility.

According to intellectual property law the author has the right to be

mentioned when his/her work is accessed as described above and to be protected

against infringement.

For additional information about the Linköping University Electronic Press

and its procedures for publication and for assurance of document integrity,

please refer to its WWW home page:

http://www.ep.liu.se/


Master's thesis

Test Data Post-Processing and

Analysis of Link Adaptation

by

Paul Nedstrand & Razmus Lindgren

LIU-IDA/LITH-EX-A--15/037--SE

2015-06-25

Supervisor: Ola Leifler


Analysing the performance of cell phones and other devices wirelessly connected to mobile networks is key when validating whether the standard of the system is achieved. This justifies having testing tools that can produce a good overview of the data exchanged between base stations and cell phones to see how the cell phone performs. This master thesis involves developing a tool that produces graphs with statistics from the traffic data in the communication link between a connected mobile device and a base station. The statistics are correlations between two parameters in the traffic data in the channel (e.g. throughput over the channel condition). The tool is oriented towards analysis of link adaptation, and with the produced graphs the testing personnel at Ericsson will be able to analyse the performance of one or several pieces of mobile equipment. We performed our own analysis of link adaptation using the tool to show that this type of analysis is possible with it. To show that the tool is useful for Ericsson, we let test personnel answer a survey on its usability and user friendliness.


We want to express our gratitude to the following people for making this thesis possible. Rickard Wahlgren and Jonas Wiorek for creating the idea for this thesis and for helping us with the specifications of our work. Our supervisor Sonia Sangari and Roland Sevegran for believing we were the right people to do this thesis. Extra big thanks goes to Sonia for helping us to find study material and contacts and for discussing ideas with us. The testing team at Ericsson IoDT in Linköping for the help they gave us with setting up simulations and for providing us with useful information about the LTE system. Johan Bergström and Abdullah Almamun for answering our evaluation survey of the tool. Our supervisor Ola Leifler at IDA, Linköping University, for providing us with lots of feedback on both our work and the report.


1 Introduction 6

1.1 Background . . . 6

1.2 Problem Formulation . . . 7

1.3 Thesis Outline . . . 8

2 Theoretical background 9

2.1 System overview of IoDT data collection . . . 9

2.2 Description of Logtool & the BbFilter . . . 10

2.3 Brief overview of LTE and eUTRAN . . . 11

2.4 eNodeB . . . 11

2.5 Layers in LTE . . . 12

2.6 Quadrature modulation . . . 12

2.7 Channel coding, Code Rate and FEC . . . 13

2.8 AWGN . . . 13

2.9 SINR . . . 13

2.10 CQI . . . 14

2.11 MCS . . . 14

2.12 BLER and BLER target . . . 14

2.13 Link Adaptation . . . 15

2.13.1 How Link Adaptation works . . . 15

2.13.2 Analysis of link adaptation . . . 15

2.14 HARQ Algorithm . . . 16

2.15 TBS . . . 16

2.16 PRB . . . 16

3 Methodology 18

3.1 Interviews and demonstrations . . . 18

3.1.1 Compilation of answers from interviews . . . 19

3.2 Testing and evaluating BbVisualizer . . . 19

3.2.1 Survey . . . 19

3.2.2 Analysis on link adaptation . . . 19

4 Interviews and discussions 20

4.1 Results . . . 20


5.2 Description of BbVisualizer . . . 22

5.2.1 BasicView . . . 24

5.2.2 Multiple Graph View . . . 26

5.2.3 AdvancedView . . . 27

5.3 Data validity in log files . . . 29

5.4 Survey on BbVisualizer’s usability . . . 32

6 Analysis of link adaptation and BLER target 34

6.1 Motivation . . . 34

6.2 Explanation of the BLER analysis . . . 34

6.3 The simulated channel model . . . 35

6.4 How data was collected . . . 36

6.5 How data was read . . . 36

6.6 Data parameters used in the analysis . . . 37

6.6.1 Throughput over time . . . 37

6.6.2 Throughput over SINR . . . 37

6.7 Parameters for data validation . . . 38

6.7.1 SINR over time . . . 38

6.7.2 BLER and MCS over time . . . 38

6.7.3 BLER over SINR . . . 38

6.7.4 MCS over SINR . . . 39

6.8 Result of the expected graphs . . . 40

6.8.1 BLER and MCS over time . . . 40

6.8.2 SINR over time . . . 41

6.8.3 MCS over SINR . . . 42

6.8.4 BLER over SINR . . . 43

6.9 Simulation results . . . 44

6.9.1 Throughput over time . . . 44

6.9.2 Throughput over SINR . . . 46

6.10 Validation of result . . . 48

6.11 Conclusion . . . 48

7 Future work 50

7.1 Unimplemented Work . . . 50

7.1.1 Potential new features . . . 50

8 Discussion 52

8.1 Result . . . 52

8.2 Method . . . 52

8.3 The thesis work in a wider context . . . 53

9 Conclusion 54


2.1 Overview of how IoDT collects traffic data . . . 9

2.2 Example of the content in bb-filtered data . . . 10

2.3 Picture of eUTRAN [8, p. 23] . . . 11

2.4 Picture of the different QAM schemes (from QPSK to 256QAM) [19, p. 456] . . . 12

2.5 MCS table for downlink and uplink (downlink left, uplink right) [7] . . . 14

2.6 Picture of PRB [28] . . . 17

5.1 Overview of how IoDT collects traffic data . . . 23

5.2 Picture of the basic graphs view . . . 24

5.3 Picture of the combined graphs view . . . 26

5.4 Picture of content in AdvancedView . . . 27

5.5 List of loaded files . . . 28

5.6 Available axis parameters in the custom axis DDL . . . 28

5.7 An example of how the console prints information about corrupted timestamps when reading data from a file . . . 29

5.8 Changing the granularity of a graph . . . 30

5.9 Graph with granularity value set to default(100) . . . 31

5.10 Same graph with granularity value set to 2000 . . . 31

6.1 picture of the content in a raw log file . . . 36

6.2 picture of the content of the bb-filter data . . . 37

6.3 picture of MCS and BLER over time . . . 40

6.4 picture of SINR over time . . . 41

6.5 picture of MCS over SINR . . . 42

6.6 picture of BLER over SINR . . . 43

6.7 picture of throughput over time . . . 44

6.8 picture of throughput over SINR . . . 46

6.9 picture of throughput over SINR . . . 47

B.1 Zoomed picture of SINR over time . . . 65

B.2 Zoomed picture of BLER and MCS over time . . . 66

B.3 Zoomed picture of BLER and MCS over time . . . 67


B.7 Zoomed picture of throughput over time . . . 71

B.8 Zoomed picture of throughput over time . . . 72


CQI . . . Channel Quality Indicator
SINR . . . Signal to Interference plus Noise Ratio
eNodeB . . . eUTRAN Node B: the LTE network base transceiver
UE . . . User Equipment
LTE . . . Long Term Evolution
PRB . . . Physical Resource Block
MCS . . . Modulation and Coding Scheme
LA . . . Link Adaptation
3GPP . . . Third Generation Partnership Project
FEC . . . Forward Error Correction
OFDMA . . . Orthogonal Frequency Division Multiple Access
SC-FDMA . . . Single Carrier Frequency Division Multiple Access
DL . . . DownLink
UL . . . UpLink
BLER . . . BLock Error Rate
IoDT . . . Interoperability Design Test
bb-filter . . . baseband-filter


Introduction

1.1

Background

Ericsson is a company that provides communication networks, telecom services and support solutions to customers. One of the important parts of the telecom services Ericsson works with is testing the performance of user equipment (UE), e.g. a cell phone. Interoperability Design Test (IoDT) is a department at Ericsson that creates test specifications for UEs. They do not develop any testing tools themselves but rely on other sections within Ericsson to do that for them. One service IoDT offers UE vendors is to test how their UEs behave in different types of simulated environments. Ericsson has internal testing tools for studying a UE's traffic data in a simulated environment. However, these tools are not suitable or created for performing in-depth analysis of the data. The way testers study a UE's performance is to print its traffic data (in real time) in a console window or to save the same data to a file and study it afterwards. Such traces can generate files with large amounts of data that are hard to analyse.


1.2

Problem Formulation

IoDT does not have any tool or other software for making an in-depth analysis of the traffic data in the link between the UE and the base station (eNodeB). When they study the performance of the data in the link, they look in baseband-filtered log files that contain traffic data from a simulation. These log files can be quite large and are hard to analyse to see how the UE is performing. They can also study this traffic data live in a console window, in the same format as the baseband-filtered log files. Using this method for analysing the traffic is still hard for the testers since the data is updated very frequently. It is difficult to get a good overview of the UE's performance by studying the data in these ways.

Link adaptation is an algorithm that enhances the throughput performance of the link between the eNodeB and the UE when the channel condition varies. Analysing how the UE adapts under changing channel conditions is important to see whether link adaptation is enhancing the throughput as intended. The link adaptation algorithm is implemented in the LTE network. Both the UE and the eNodeB can contain different hardware, which can create differences in the performance of the UEs in the link. To compare these differences, it is necessary to be able to compare traffic data between different UEs.

The questions this thesis seeks to answer are:

General problem

Q1: How can we help Ericsson to do in-depth data analysis oriented on link adaptation that helps them to analyse the performance of a UE?

Q2: How can we provide Ericsson with a better alternative to compare the performances between different UEs (or same UE with different configurations) instead of by manually comparing the traffic data in log files or in real time?

Ericsson requested a new tool that could present the traffic data in the link as graphs, such that they could perform analyses of link adaptation. We wanted to develop the tool so that it could read several normal-sized trace data files and compare the data against each other. A normal trace is approximately 2 minutes long and the size of the log file that contains the traffic data is about 1 GB. From this, an analysis of link adaptation shall be possible, which gave us the following questions:

Validation problem

QV1: Does the graph data produced by the tool represent the data in the link?


QV2: Can the tool handle several sets of log files?

QV3: Can the tool handle normal-sized log files?

QV4: Can an analysis oriented on link adaptation be made with the tool?

We wanted the tool to be useful for the test engineers who are supposed to use it. This gave us the following questions:

Usability problem

QU1: Can a test engineer use our tool the first time he/she encounters it?

QU2: Is the tool better than their current methods?

QU3: Does the tool contain enough functionality for its intended use?

1.3

Thesis Outline

This thesis is divided into the following chapters:

• Chapter 2, Theoretical background: In this chapter, all theoretical background needed to understand the content of this thesis is explained. It covers theory about LTE, link adaptation and the physical layer.

• Chapter 3, Methodology: This chapter describes how the given problems were solved. It includes how data was collected and how the evaluation of the tool was formulated.

• Chapter 4, Interviews and discussions: This chapter presents the results from the interviews.

• Chapter 5, The Analysis Tool - BbVisualizer: This chapter describes how the tool works and the motivation behind its functionality.

• Chapter 6, Analysis of link adaptation and BLER target: This chapter describes how the analysis of the BLER target was performed. It includes a description of the analysis, how it was carried out and its results.

• Chapter 7, Future work: This chapter describes how the tool can be developed further. It includes functionality there was no time to implement and functionality that could improve the tool.

• Chapter 8, Discussion: This chapter discusses the results and methodologies.

• Chapter 9, Conclusion: This chapter contains a summary of and comments on the thesis.


Theoretical background

This chapter describes the theoretical background needed to fully understand the content of the thesis.

2.1

System overview of IoDT data collection

This section describes how IoDT collects the data and how they analyse it.


There are two ways to view traffic data. The first is to view it in real time in a console window and the second is to store all traffic data to a file and view the content afterwards. In the first case, when the UE starts to send data to the eNodeB (or the other way around), a console window is opened and, through a ”viewer” command with a flag specifying the IP address of the eNodeB responsible for transmitting/receiving the data, the data can be viewed in real time in the console window. In the second case, traffic data is first collected in a raw format in a lab at IoDT by using a command window to pipe the traffic data stream to a file. After the stream is closed, the raw log file can be opened in a tool called Logtool. Logtool uses a plug-in called BbFilter that baseband-filters (bb-filters) the raw log into a readable format. The bb-filtered data is presented in a table format so the user can scroll through the data.

2.2

Description of Logtool & the BbFilter

Logtool is a graphical tool for capturing and decoding traces, developed by Ericsson for internal use. It is written in Java and uses the Eclipse framework for its graphical interface. Logtool's functionality lies in its ”analyzers”, where each analyzer is a separate plug-in to the project containing new functionality for decoding or visualizing trace data. Adding each new analyzer as a plug-in keeps the tool flexible.

BbFilter is a plug-in in Logtool which visualizes the data in a log file by showing the bb-filtered data of the file in a table. An excerpt from that table can be seen in figure 2.2.

Figure 2.2: Example of the content in bb-filtered data

Each line in the bb-filtered data is one scheduled block of traffic data, which is sent every 1 ms between e.g. a cell phone and a base station. The columns represent different parameters in the data, such as the channel condition or the bandwidth for that scheduled block. The parameters in the data are generally represented as integers but can in some cases be strings or doubles. The amount of data BbFilter generates varies depending on how many parameters the user wants to study and how long the trace file is. An example:

If a raw log file contains 1 minute of traffic data and is bb-filtered with 30 parameters of data, the size of it would be 4 · 30 · 60 · 1000 = 7200000 bytes = 7.2 Mbytes (if all parameters are integers and an integer is represented as 4 bytes).
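As a sketch of this size estimate (assuming one row per 1 ms scheduled block and 4 bytes per parameter, as in the example above), the calculation can be written as:

```python
def estimated_bbfilter_size_bytes(trace_seconds, n_parameters, bytes_per_value=4):
    """Rough size of bb-filtered data: one row per 1 ms scheduled block,
    n_parameters values per row, bytes_per_value bytes per value."""
    rows = trace_seconds * 1000          # one scheduled block every millisecond
    return rows * n_parameters * bytes_per_value

# 1 minute of traffic data, 30 integer parameters -> 7 200 000 bytes (7.2 Mbytes)
print(estimated_bbfilter_size_bytes(60, 30))
```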


2.3

Brief overview of LTE and eUTRAN

Figure 2.3: Picture of eUTRAN [8, p. 23]

LTE stands for Long Term Evolution and is a radio access technology specified by 3GPP [21, p. 277], which is a group of telecommunications standard development organizations [1]. The requirements for LTE are high peak data rate, high spectral efficiency and flexible frequency and bandwidth. The high data rate is achieved with Orthogonal Frequency Division Multiple Access (OFDMA), high-order modulation schemes, large bandwidths (up to 20 MHz) and by sending data from several antennas at the same time (spatial multiplexing). This can achieve data rates as high as 300 Mbit/s in downlink and 75 Mbit/s in uplink [2]. LTE is also known as 4G but does not fulfill all the requirements to call itself that; instead it is often referred to as 3.9G [21, p. 589].

eUTRAN stands for Evolved Universal Mobile Telecommunications System Terrestrial Radio Access Network, or simply the Evolved UMTS Terrestrial Radio Access Network, and is a network of interconnected base stations called eNodeBs [8, p. 21].

2.4

eNodeB

eNodeB stands for eUTRAN Node B and is the hardware in the eUTRAN network that both transmits data to and receives data from the UE [8]. The eNodeB is an evolved version of the Node B, which is the base station in the 3G network.


2.5

Layers in LTE

LTE consists of 3 layers where the data is processed. The layer studied in this thesis is called the physical layer, also known as layer 1 [28]. This layer deals with operations that make the data transmittable through the air: scheduling, modulation, coding, interleaving, multiplexing, etc. [21, p. 132].

2.6

Quadrature modulation

When sending digital data over a wireless channel, modulation is used. It is a way to map digital bits to an analogue wave that can be sent over the channel. A sequence of digital bits is mapped to a sine wave called a carrier. A sine wave has the parameters frequency, amplitude and phase, and these parameters are used to represent the digital bits in the carrier [29, p. 8]. There exist a number of modulation schemes. The type used in LTE is quadrature amplitude modulation (QAM) and the constellations are QPSK (4QAM), 16QAM and 64QAM in downlink and QPSK and 16QAM in uplink [6, p. 17]. In QAM, there are two sinusoidal carriers with different amplitudes depending on which bit sequence was sent, where one carrier's phase is shifted +π/2 radians [19, p. 47]. The higher the modulation order, the more bits can be sent: QPSK, 16QAM and 64QAM carry two, four and six bits per symbol respectively [19, p. 47].

Figure 2.4: Picture of the different QAM schemes (from QPSK to 256QAM) [19, p. 456]

Figure 2.4 shows what the different QAM constellations look like. Each point represents a group of digital bits, called a symbol [23, p. 7]. The value on each axis represents the amplitude of each carrier. For QPSK in figure 2.4 the signal amplitude is constant, but the phase at the beginning of each symbol is either π/4, 3π/4, 5π/4 or 7π/4 [12, p. 216]. In this case each symbol carries 2 bits. The possible bit combinations a symbol can contain in the QPSK case are 00, 01, 10 and 11. In 16QAM and 64QAM each symbol contains 4 and 6 bits respectively [18, p. 47].

2.7

Channel coding, Code Rate and FEC

Channel coding is a way to create redundancy in the data that is sent over the channel so that errors in it can be detected and corrected [25, p. 1]. The data will consist of real data bits and coded bits. This way it is possible to correct bits that are incorrectly received. The ability to detect and/or correct incorrectly received bits in the receiver is called Forward Error Correction (FEC) [16, p. 30]. The way it works is that redundant (coded) bits, which carry no information, are added to the sent message. The more coded bits, the more errors can be detected and corrected [16, p. 30]. But this also affects the throughput: the more coded bits, the lower the data rate [17, Chapter 10.1]. The code rate states how large a share of the bits in a message are information bits. The code rate is defined as k/n, where k is the number of information bits (real bits) in the message and n is the length of the whole message (information bits + coded bits) [14, Chapter 1.4.2.1].

An example: If the code rate is 0.73, then there are 73% information bits in the message and 27% coded bits.
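Expressed as a small sketch of the k/n definition above:

```python
def code_rate(information_bits, total_bits):
    """Code rate k/n: number of information bits over total message length."""
    return information_bits / total_bits

# e.g. 73 information bits in a 100-bit message -> code rate 0.73,
# i.e. 73% information bits and 27% redundant (coded) bits
print(code_rate(73, 100))
```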

2.8

AWGN

One of the most common channel models is AWGN [11, p. 282]. This is a model where the received signal can be expressed as r(t) = x(t) + n(t), where x(t) is the sent signal and n(t) is an AWGN process [11, p. 282]. AWGN stands for Additive White Gaussian Noise: additive means the noise is added to the signal, white means it is uniformly distributed over all frequencies, and Gaussian means it has a normal distribution in the time domain with an average of zero.
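A minimal NumPy sketch of this channel model, with the noise power as a free parameter:

```python
import numpy as np

def awgn_channel(x, noise_power):
    """Received signal r(t) = x(t) + n(t), where n is zero-mean white
    Gaussian noise with the given average power (variance)."""
    n = np.random.normal(loc=0.0, scale=np.sqrt(noise_power), size=x.shape)
    return x + n

# Example: a unit-power +/-1 symbol sequence sent through an AWGN channel
x = np.sign(np.random.randn(1000))
r = awgn_channel(x, noise_power=0.1)
```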

2.9

SINR

SINR stands for Signal to Interference plus Noise Ratio. It is defined as

SINR = power_signal / (power_interference + power_noise)    (2.1)

SINR can be used to measure the channel quality. The higher the SINR, the better the channel condition, and vice versa.
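Expressed in code, the same ratio can be computed as follows (a small sketch; the dB conversion is added for convenience and is not part of definition (2.1)):

```python
import math

def sinr_linear(p_signal, p_interference, p_noise):
    """SINR as defined in (2.1), with all powers in linear units."""
    return p_signal / (p_interference + p_noise)

def sinr_db(p_signal, p_interference, p_noise):
    """Same ratio expressed in decibels."""
    return 10 * math.log10(sinr_linear(p_signal, p_interference, p_noise))

# Example: signal 1.0 W, interference 0.05 W, noise 0.05 W -> 10 dB
print(sinr_db(1.0, 0.05, 0.05))
```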


2.10

CQI

CQI stands for Channel Quality Indicator and is a measurement of how high a spectral efficiency the channel can support [22, p. 97]. It is similar to SINR but can only take 16 different values (0-15), where 15 indicates a very good channel quality and 0 a very poor one.

2.11

MCS

Modulation and Coding Scheme (MCS) is a key parameter in link adaptation; it describes how the data is modulated and coded [17, Chapter 10.1]. There are 29 different MCSs (0-28) in the downlink and uplink [7, p. 133], where each MCS is used in a different channel condition. In the best channel conditions the data is sent with MCS 28 and in the worst MCS 0 is used. The modulation schemes used are 64QAM, 16QAM and QPSK, but not all UEs support 64QAM. A list of how the data is modulated and coded is shown below. Modulation order 2, 4 and 6 corresponds to QPSK, 16QAM and 64QAM respectively. The code rate can be calculated when the transport block size is known [7, p. 48], which in turn can be calculated from the number of allocated physical resource blocks (PRB) and the TBS index I_TBS.

Figure 2.5: MCS table for downlink and uplink (downlink left, uplink right) [7]

2.12

BLER and BLER target

BLER stands for Block Error Rate and is the percentage of wrongly received blocks in the receiver (UE or eNodeB). It is defined as

BLER = #wrongly received blocks / (#wrongly received blocks + #correctly received blocks)    (2.2)

where an erroneous block is defined as a transport block with corrupted data [5, p. 1684]. The eNodeB has a so-called BLER target; it determines what percentage of blocks on average should be wrongly received in the link [5, p. 1684]. A higher BLER target assigned in the receiver leads to a higher MCS, which in turn leads to more sent bits per time unit but also more retransmissions, and vice versa.

2.13

Link Adaptation

Link adaptation (LA) is a way to enhance the performance of wireless systems where the channel condition varies over time [9]. Depending on the channel condition, the signals are modulated and coded in different ways to increase the throughput [9]. The better the channel condition, the higher the modulation scheme and code rate used; the worse the channel condition, the lower the modulation scheme and code rate [19, p. 47]. This way the signals can be sent with good throughput in both bad and excellent channel conditions. The purpose of LA is to use the best MCS for the corresponding channel condition.

2.13.1

How Link Adaptation works

When data is sent from the base station to a UE (downlink), the UE reports a channel quality indicator (CQI) value to the eNodeB. From the CQI, the eNodeB decides an MCS which indicates how many bits can be modulated in each symbol [17, p. 217]. MCS can take values between 0 and 28 in both uplink and downlink [7, p. 133], where each value represents a modulation scheme and code rate. MCS = 0 represents the worst channel condition and has the lowest code rate and lowest modulation order (QPSK). MCS = 28 represents the best channel condition and has the highest code rate and highest modulation order (64QAM). LTE uses fast link adaptation, which means that both CQI and BLER are taken into consideration when deciding the MCS [10]. In downlink, the eNodeB sets an MCS from the CQI value. If the data holds the BLER target assigned in the eNodeB it keeps that MCS; if the BLER is lower or higher, the MCS is adjusted so that the BLER target is achieved. In the uplink the same principles are used, but instead of looking at CQI, the MCS is determined directly from the SINR. The higher the SINR the channel experiences, the higher MCS the data will hold.
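The exact adaptation logic in the eNodeB is not described further here; the following sketch only illustrates the general idea of fast link adaptation as described above, where the cqi_to_mcs mapping and the step size are placeholder assumptions rather than the real algorithm:

```python
def cqi_to_mcs(cqi):
    """Placeholder mapping from CQI (0-15) to an initial MCS (0-28).
    The real eNodeB uses standardized tables; this is only illustrative."""
    return min(28, cqi * 2)

def adjust_mcs(mcs, observed_bler, bler_target, step=1):
    """Outer-loop style adjustment: lower the MCS when the block error
    rate is above target, raise it when there is margin."""
    if observed_bler > bler_target:
        return max(0, mcs - step)
    elif observed_bler < bler_target:
        return min(28, mcs + step)
    return mcs

# Example: start from CQI 12, then react to a too-high BLER
mcs = cqi_to_mcs(12)                                        # initial MCS from the channel report
mcs = adjust_mcs(mcs, observed_bler=0.25, bler_target=0.10)  # BLER above target -> lower MCS
```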

2.13.2

Analysis of link adaptation

There is more than one reason why link adaptation should be analysed. The main reason is that there are many parameters in the algorithm that are affected by the channel condition, which in turn affect the performance of the UE. There are also different types of link adaptation algorithms, which affect the performance in different ways.


Analysing LA is therefore important if one wants to see how the different algorithms affect the performance of the system.

2.14

HARQ Algorithm

HARQ stands for Hybrid Automatic Repeat reQuest and is an algorithm that handles correctly and wrongly received data [27, p. 221]. When data is sent from transmitter to receiver, the data may be correct, incorrect but correctable, or incorrect and uncorrectable. If the data is correct at the receiver, the receiver sends an ACK bit to the transmitter. ACK stands for acknowledge and means the received data was understood and the transmitter can send new data. If the data is incorrect but correctable, the wrongly received bits are corrected, the transmitter receives an ACK and it can send new data. If the data is incorrect and uncorrectable, the receiver sends a NACK (Not ACKnowledged) and the data is retransmitted. It is from the ACKs and NACKs that the BLER is calculated [4].
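A small sketch of how BLER (equation 2.2) could be computed from a stream of HARQ feedback, assuming the feedback is available as a simple list of 'ACK'/'NACK' entries:

```python
def bler_from_harq_feedback(feedback):
    """BLER = NACKed blocks / all blocks, following equation (2.2).
    `feedback` is a sequence of 'ACK'/'NACK' entries, one per transport block."""
    nacks = sum(1 for f in feedback if f == "NACK")
    total = len(feedback)
    return nacks / total if total else 0.0

# Example: 1 NACK out of 10 transport blocks -> BLER 0.1 (10%)
print(bler_from_harq_feedback(["ACK"] * 9 + ["NACK"]))
```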

2.15

TBS

The data in the physical layer is sent in transport blocks, and the transport block size (TBS) is measured in bits [15, p. 92]. The transmission time interval (TTI) tells how often a transport block is sent [15, p. 92] and is 1 ms in LTE [2].

2.16

PRB

PRB stands for Physical Resource Block and is the allocation space the subcarriers are inserted in [28]. One PRB consists of 7 OFDM symbols and 12 subcarriers [28]. This means that one PRB consists of 84 resource elements, where one resource element is one symbol. The PRB is allocated in time and frequency. One PRB takes up 15 000 · 12 = 180 000 Hz = 180 kHz in frequency and one slot of 0.5 ms in time (7 OFDM symbols of Ts ≈ 66.67 µs each plus their cyclic prefixes), where Ts is the OFDM symbol duration.


Methodology

For the purpose of clarification and ease, the functionality (the tool) we have created in this thesis is henceforth referred to as ”BbVisualizer”.

Ericsson wanted us to create a tool (BbVisualizer) that plots graphs of the traffic data between a UE and an eNodeB. BbVisualizer shall be able to handle several input parameters from the traffic data and produce graphs from this data. These parameters shall be related to link adaptation and could for example be throughput, MCS, SINR, BLER, etc. From these graphs, post-analysis of link adaptation data shall be possible.

Another requirement was to add functionality so users can compare different UEs (or the same UE with different settings).

3.1

Interviews and demonstrations

Two people from Ericsson were available for interviews regarding what to implement in our tool. The personnel we interviewed had little time to spare, and many of the questions we needed answered could generate topical trajectories that might contain useful information. For these reasons we used semi-structured interviews as the layout of our interviews. Those we interviewed were Jonas Wiorek and Rickard Wahlgren. We chose them because they were the ones who came up with this thesis idea and they have a lot of knowledge about LTE; therefore they should have suitable knowledge about which functionality should be implemented in the tool. We also talked informally with other staff who were interested in BbVisualizer about what they wanted from it, how graphs could be presented, which types of traffic data parameters should be included and which quantities of data BbVisualizer should be able to handle. After the discussions, the desired functionality was implemented and the personnel from IoDT were asked for feedback on the implementation. This was repeated until Ericsson was satisfied with BbVisualizer. We also held larger demonstrations for IoDT employees where we could get feedback on how to extend it further.


3.1.1

Compilation of answers from interviews

The sound from the interviews was recorded and the answers were written down on paper.

3.2

Testing and evaluating BbVisualizer

3.2.1

Survey

To ensure that testers at Ericsson could use BbVisualizer, that it performed as intended and that it had the functionality Ericsson needed, two testers were given a task to perform with the tool. The task was to follow a specified test case Ericsson has for testing link adaptation, but using BbVisualizer to do it. They were given help documentation on how to use BbVisualizer since they did not have any experience with it yet. Afterwards they were given a survey regarding their experience with BbVisualizer. The survey was focused on learnability, which means how easy it is for a user to perform a task the first time he/she encounters it [3]. This is one component when measuring the usability of a system. The survey also seeks to answer which functionality is useful and/or whether some functionality is missing. This way the usability questions in the problem formulation should be answerable.

3.2.2

Analysis on link adaptation

To answer the questions in the Validation problem in section 1.2 we performed an analysis oriented on link adaptation. With this analysis we will show that the graphs are drawn correctly and that the tool can handle several normal-sized log files. Handle means the tool shall be able to load one or several log files and produce a graph of their data without crashing or freezing. This answers questions QV1, QV2 and QV3 in the Validation problem. To answer question QV4, we will do an analysis where we will try to analyse the following:

Analysis questions

• Is there a difference in the throughput when the BLER target is changed?

• Does the BLER target at 10% give the best throughput or are the differences insignificant?

• Are there any differences between the throughput over different SINRs with the different BLER targets?


Interviews and discussions

Interviews and discussions with personnel from Ericsson were the main source of information regarding how to implement BbVisualizer. For the interviews we had contact with two people, Rickard Wahlgren and Jonas Wiorek, who have deep knowledge in the area of our studies. We also had informal discussions with testing personnel from IoDT.

4.1

Results

From the interviews and discussions we learned the following about how to develop the tool:

How can we provide Ericsson with a better alternative for looking at and comparing the performance of different UEs (or the same UE with different setups) instead of manually studying the traffic data in log files or in real time?

By creating functionality that allows the user to plot data from two (or more) log files in the same graph

What kind of data shall BbVisualizer handle and produce graphs from?

Throughput/time, throughput/SINR, throughput/CQI, TBS/time, PRB/CQI, PRB/SINR, BLER/CQI, BLER/SINR, UL MCS/SINR, DL MCS/CQI, rank indicator/CQI and spectral efficiency/time.

What quantities of data shall the program be able to handle?

According to the testing personnel in the IoDT lab in Linköping, a normal trace is approximately 2 minutes long and the size of the log file that contains the traffic data is about 1 GB.

What types of statistics are necessary? (max, min, average, median etc.)


The Analysis Tool - BbVisualizer

The purpose of BbVisualizer is to make it easier for Ericsson to perform post-analysis on the data traffic between the UE and the eNodeB. BbVisualizer mainly uses a static set of graphs to show how certain data varies over different channel conditions, but it can also show graphs with any type of data. BbVisualizer allows the user to compare traces in a smooth way such that multiple UEs can easily be compared to each other. BbVisualizer is built on top of an already existing internal Ericsson program called Logtool and is written in the Java programming language. It extends the functionality described in figure 2.1 by using the bb-filtered data produced by BbFilter to produce the graphs.

5.1

Motivation for choosing Logtool

In the beginning of the project we had to decide how we would create our analysis tool.

We needed to develop a program that:

• Reads trace data from log files.

• Plots graphs with data from the analysis.

We felt the best way was to build it upon an already existing project or in an environment that supported the functionality we needed.

We started to analyse available tools by asking personnel from Ericsson if they had any preferences or any tools they already used. We found a tool developed by Ericsson called Logtool, which the lab testers use for handling trace data. It is written in Java and built upon the Eclipse framework, all of which we had experience with from earlier. The team in charge of Logtool is located in Linköping, so using Logtool would enable us to get trace data in a good format without writing code to extract traffic data from the raw log file. This would also not require any form of extra integration for Ericsson, and employees would gain easy access to our project. Therefore we decided to develop our program as a plug-in to Logtool. The answer to Q1 in the problem formulation is therefore: to add the functionality Ericsson requires on top of Logtool.

5.2

Description of BbVisualizer

Ericsson asked us to build an analysis tool that would plot graphs in a basic interface. We included this and extra functionality, with a total of 3 different views (each a separate window in Eclipse) with different functionality. The first view contains a set of graphs which the user can save to a file. The second view uses saved graph data to plot in the same layout as the first view. It can load multiple datasets at the same time, which allows comparisons between different trace files. The third view has more dynamic functionality that allows the user to load both csv files and data from BbFilter and create any form of graph from the existing parameters in the data. It can handle several files at the same time so the testers can compare multiple UEs. It also has the capability to detect some errors in the files. These 3 views were added on top of BbFilter and use (in some cases) the same data produced by the normal procedure of data collection in IoDT (see figure 5.1).


5.2.1

BasicView

Figure 5.2: Picture of the basic graphs view

The first view contains a fixed set of graphs that show data useful to the testers for quickly getting a good overview of how the data behaves. Most data shown in these graphs are different parameters plotted over SINR and CQI (CQI for downlink and SINR for uplink). We got these values as suggestions from Rickard Wahlgren and Jonas Wiorek (two Ericsson employees at Kista) who have good insight into LTE. The reason for plotting values over SINR in uplink and CQI in downlink is that the UE does not calculate SINR; it estimates a CQI, which also represents the channel condition.


The graphs available in BasicView are:

• DL Throughput/time
• UL Throughput/time
• DL Throughput/CQI
• UL Throughput/SINR
• DL PRB/CQI
• UL PRB/SINR
• DL spectral efficiency/CQI
• UL spectral efficiency/SINR
• DL BLER/CQI
• UL BLER/SINR
• DL RI/CQI
• DL PMI/CQI
• DL MCS/CQI
• UL MCS/SINR

These data are presented as two-dimensional graphs as shown in figure 5.2, where the data is on the Y-axis (vertical axis) and SINR/CQI is on the X-axis (horizontal axis).

The graphs are all calculated as the average value over the corresponding SINR or CQI. We thought it might be good to show max and min curves as well, but we skipped this since we realized the graphs would be hard to distinguish when several sets of data are loaded. Ericsson also felt the average was enough.

The user can save the graphs to a file (as a .graphdata file) which only contains the points in the graphs and not the whole bb-filtered data used to produce them. This makes the file smaller than if the bb-filtered data were saved to a file, which saves storage space and makes it quicker to load.


5.2.2

Multiple Graph View

Figure 5.3: Picture of the combined graphs view

The second view contains functionality to load graphs that have been saved in BasicView. The user can load multiple graph datasets he/she wishes to compare. Each loaded dataset can be toggled on/off to allow easy control over which datasets are shown in the graph. This view was created to offer supporting functionality to BasicView rather than being a whole new view with new functionality. The ability to load and store data in a graph format makes it possible to compare the performance of different data sets, which answers question Q2 in the problem formulation.

1 - Load graph files

Saved graph files in the form .graphdata can be loaded into this view (see ring marked 1 in the image above).

2 - Loaded files

When a file has been loaded it is added as an object in the loaded files list. That object can be checked/unchecked to decide if the data from the file should be added/removed from the graphs. This allows control over which data is presented in the view.


5.2.3

AdvancedView

Figure 5.4: Picture of content in AdvancedView

The third view is called AdvancedView. In this view it is possible to look at the data in any form of graph: the user chooses which parameters to put on the X and Y-axes and plots the ”Y-values” over the ”X-values”. This idea came from a tester in the lab and was implemented because we believed the tool could be used in areas other than link adaptation. Only having graphs over SINR and CQI might be too narrow and would reduce the scalability of the tool.

1 - Load / Save

If the user has bb-filtered data in BbVisualizer and opens the AdvancedView, the data can be viewed there and later saved as a csv file. Data can be loaded from multiple csv files, which allows for comparison between different UEs.


2 - Loaded files

When a file has been loaded it is added as an object in the loaded files list, just like MultipleGraphView. That object can be checked/unchecked to decide if data should be collected from that file when generating a new graph.

Figure 5.5: List of loaded files

3 - Axis Parameters

There are two DDLs (Drop Down Lists) in AdvancedView which contain the parameters stored in the first loaded csv file. The user selects a parameter of choice for both the X and Y-axis.

Figure 5.6: Available axis parameters in the custom axis DDL

4 - Graph Generator

When the user selects 2 axis parameters in the DDLs, the process of calculating the graph data and plotting the data can be started.


5 - Graph Clearer

When the user has generated a graph, the data stays in that graph until the user manually clears the graph. This is because the user might want to have 2 different kinds of graph data in the same view.

6 - Console window

The console window provides the user with information. It will state, for example, if a file has been loaded successfully or if it could not be loaded. It will also state if there are errors in the bb-filtered data and in which row the error occurs. For example, if the timestamp parameter is not increasing as it should, a warning message is printed in the console.

5.3

Data validity in log files

The log files BbVisualizer uses to present statistics are sometimes prone to erroneous behaviour. One reason for this is that the baseband filtering of log files can produce data that contains strange time lapses. Another reason is that the router which handles all traffic data can sometimes be overloaded, which corrupts the log file. Since there are ways for the data to become corrupted, BbVisualizer needs to try to discover corrupt data. One piece of functionality BbVisualizer has is to discover timestamps in the file where the difference in time between consecutive rows of data is greater than or less than 1 ms. The reason for this is that each packet of traffic data is supposed to be sent every millisecond, but the timestamp data field does not always reflect that.

Figure 5.7: An example of how the console prints information about corrupted timestamps when reading data from a file

As there might be a lot of errors, as in the image above, only some of the errors are printed so that reading the data does not become too time consuming.
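A minimal sketch of this timestamp check (assuming the timestamps are available as a list of millisecond values; the real bb-filtered format may differ):

```python
def find_timestamp_gaps(timestamps_ms, expected_step_ms=1, max_reported=10):
    """Return rows where consecutive timestamps do not differ by exactly
    expected_step_ms, reporting at most max_reported of them."""
    gaps = []
    for row, (prev, curr) in enumerate(zip(timestamps_ms, timestamps_ms[1:]), start=1):
        if curr - prev != expected_step_ms:
            gaps.append((row, curr - prev))
            if len(gaps) >= max_reported:
                break
    return gaps

# Example: the 3 ms jump from timestamp 2 to 5 is reported as a corrupted timestamp
print(find_timestamp_gaps([0, 1, 2, 5, 6]))
```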


Granularity

We implemented a function that controls the granularity of the data in the graph when a parameter is plotted over time. Graphs that are plotted over time can contain more than 100 000 data points, making the graph blurry and hard to analyse. The function takes a granularity value, which is the number of timestamps the user wants to average over. The user puts a value in a text field (the default is 100 = 100 ms) and the average value is calculated over this timespan. For example, if there are 140 000 values in the graph and one chooses a granularity value of 100, the new graph will contain 1 400 data points instead of 140 000, which is easier to read.
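A minimal sketch of the averaging behind the granularity setting, assuming one data point per millisecond:

```python
def downsample_by_granularity(values, granularity=100):
    """Average consecutive groups of `granularity` data points (1 point per ms),
    so 140 000 points with granularity 100 become 1 400 points."""
    return [
        sum(values[i:i + granularity]) / len(values[i:i + granularity])
        for i in range(0, len(values), granularity)
    ]

# Example: 140 000 samples reduced to 1 400 averaged points
samples = [0.1] * 140_000
print(len(downsample_by_granularity(samples, granularity=100)))  # 1400
```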

Figure 5.8: Changing the granularity of a graph

On the next page is an example of how the granularity value is used to make data more readable.


Below is a graph where BLER is plotted over time

Figure 5.9: Graph with granularity value set to the default (100)

Below is the same graph but with the granularity value set to 2000


5.4

Survey on BbVisualizer’s usability

The survey seeks to answer the questions in the Usability problem in the problem formulation and contains the following questions and answers. The answers marked with a star (⋆) are the answers of Johan Bergström and the answers marked with a diamond (⋄) are from Abdullah Almamun:

• What was your general opinion of the tool, was it easy or complicated to use?

⋆ Easy to use. Some undocumented tricks needed to be performed to speed up the loading of the log files.

⋄Easy.

• Did the help document contain sufficient explanations of how to use the tool? Explain.

⋆ Good enough. ⋄yes.

• Which functionality was useful in this tool for this test?

⋆ It is useful to be able to show a UE behaviour graphically instead of a logfile filled with numbers.

⋄I tested CQI MCS it was helpful for me.

• Which functionality is missing to be able to do this test? ⋆ N/A.

⋄I have not done the specific test, but if we can do a real time logging and plot that will be wonderful, because the Link Adaptation test cases are to put attenuation by steps and observe Layer 2 parameters like CQI MCS, and its next to impossible to save log file as its huge. If we use bbfilter and this tool real time and the tool plots the thp-cqi, cqi-mcs real time then it will be good to save a small log file from this tool with the graph instead of not saving the complete huge log file from eNB.

• Which tests were you unable to do completely or partially and what was it that you could not test / validate? (write down test number and what you could not test)

⋆ N/A. ⋄N/A.

• Could this tool be used in another testing area other than link adap-tation and HARQ? Give an example.

⋆ Maybe.

⋄For troubleshooting it can be used, if we see variation of thp or cqi and then it can be sampled or zoomed to find out the sfn, sf . . . info and look in to the log file or discuss with UE partners that this is the sfns we are seeing issue.


• Did you feel that you got more exact result with this tool than by only looking at bb-filter? Explain.

⋆ The tool makes it more visible. Not more exact. ⋄It gives a visual plot from bbfilter.

• Is it better to do these tests when the data is represented as graphs? If yes, explain why

⋆ Tool can be used for comparing UEs and presenting behaviour to UE vendors.

⋄ Yes, instead of saving the complete enb log which is very big if its run through this tool and then the graph can be saved.

• Did you feel that there were some parameters missing which you might want to look at in the tool? (By parameters we mean e.g. throughput, CQI SINR etc.) If yes, write which one.

⋆ Nope. ⋄No.

• Do you believe that you are able to find errors with this tool that you could not find without it? Explain.

⋆ The tool speeds up the process of finding an error. If a graph is not showing what is expected it is easy to spot at which time/SNR for example the problem lies

⋄Maybe.

• Do you believe that this tool creates the opportunity to improve / create new tests and test cases? Explain why / why not?

⋆ Don’t know yet. ⋄Not sure.

• What improvement or extra functionality could be useful to implement in this tool to make it better or easier to use?

⋆ Adding title to advanced graphs, multiple x and y-axis on same graph

⋄N/A.

From the surveys we can conclude that the test personnel can use our tool and perform tasks with it the first time they encounter it, which answers question QU1 in the problem formulation. The participants felt it was better to represent the log file data as graphs instead of having big files with numbers, and one tester believes a potential error could be found faster. From these answers we believe we can answer yes to QU2 in the problem formulation. The testers feel that there is enough functionality in the tool and that no functionality is missing, which answers question QU3 in the problem formulation. The original surveys with Abdullah's and Johan's answers are available in appendix A; the survey presented here has both their answers in the same survey with some spelling corrections.


Analysis of link adaptation

and BLER target

In this chapter the motivation for the analysis is formulated, the results are presented and a conclusion is drawn.

6.1

Motivation

The reason for doing this analysis was to show that it is possible to perform a relevant analysis of link adaptation using BbVisualizer. We believe this analysis will validate that BbVisualizer contains enough functionality to do an analysis of link adaptation. The analysis will verify that BbVisualizer can handle several normal-sized baseband traces at once and that they are plotted correctly. In addition, the analysis will put BbVisualizer in a scenario where we believe it can be used in the future.

6.2

Explanation of the BLER analysis

The analysis involved studying the throughput performance for different BLER targets in uplink (for the same UE). This kind of analysis has also been done in a 2002 study by Ahn and Sasase [13] and in a 2009 study by Cui et al. [20]. This type of analysis was chosen because the BLER target is a parameter in link adaptation [9]. In the eNodeB there is a BLER target system constant set for the data in downlink and uplink. It means that the data should keep this BLER (BLock Error Rate) on average regardless of whether the channel condition is good or bad. The modulation and coding scheme (MCS) is changed until the BLER target is achieved. How the MCS affects BLER and throughput is explained in chapter 2. The BLER target affects the throughput in the following way:


• The higher BLER target assigned, the higher MCS on each SINR, the higher number of bits can be sent in each transport block, but more retransmissions on average will occur.

• The lower BLER target assigned, the lower MCS on each SINR, the lower number of bits can be sent in each transport block, but fewer retransmissions on average will occur.

This analysis was focused on finding the BLER target that gives the highest throughput under different channel conditions (SINR). In the eNodeBs at Ericsson's test lab, the BLER target is 10% by default. To analyse the best target, several baseband traces were extracted with the same UE in the same channel model under the same channel conditions but with different BLER targets. A script was written (in the language Expect) that switched the SINR stepwise in the channel, from high to low, with a constant time interval. This makes the throughput high in the beginning and decreasing over time until the channel conditions are so poor that the UE detaches and cannot send any data. This script was used both to achieve an identical environment in the channel and to be able to compare the throughput over time. The SINR was changed from high to low to see whether there were any differences in the throughput between the traces for different channel conditions.

What we analysed was the following:

Analysis questions

• Are there any differences in throughput when you change the BLER target?

• Does the BLER target at 10% give the best throughput or are the differences between the BLER targets insignificant?

• Are there any differences between the throughputs over different SINRs with the different BLER targets?

These questions will help us answer question QV4.

6.3

The simulated channel model

The channel model used in this simulation is called 2x2 1x2 mimo static. It means that the eNodeB has two transmitting antennas and two receiving antennas, while the UE has two receiving antennas and one transmitting antenna. Static means that the cell phone is not moving. The channel noise is of the type AWGN. This model was used because it simulates a common environment [24].

The data direction was uplink because in uplink the SINR is calculated, which it is not in downlink. As stated earlier, the CQI can only take 16 different values, which makes the SINR a more exact measure of the channel condition.


6.4

How data was collected

The data was collected by connecting a UE (a cell phone) to an eNodeB in a test lab at Ericsson. The data traffic was started along with the script. The script changes the SINR stepwise in 1 dB steps in the channel, from high to low, with a four second interval. The initial SINR was 20 dB and the end SINR was -15 dB. These values represent a very good channel condition, i.e. max throughput, and a very bad one, i.e. the UE has a very low throughput or cannot send any data at all.

The traffic data was recorded and saved as a raw log file. Both the recording and the script were started and stopped at the same time. This way the traces have the same length in time, which in this analysis was 144 seconds (4 seconds · (20 dB − (−15 dB) + 1)). This length was chosen because it covers the size of a normal baseband trace. This procedure was done with different BLER target settings in the eNodeB. The BLER targets used in this analysis were 1%, 5%, 7%, 9%, 11%, 13%, 20% and 35%. The traces were saved as log files which could later be baseband-filtered by BbFilter in Logtool.
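The original script was written in Expect and is not reproduced here; the sketch below only illustrates the stepping schedule and the resulting trace length, with set_channel_sinr standing in for whatever lab command actually controls the channel:

```python
import time

START_SINR_DB = 20
END_SINR_DB = -15
STEP_DB = 1
STEP_SECONDS = 4

def set_channel_sinr(sinr_db):
    """Placeholder for the lab command that sets the channel attenuation."""
    print(f"setting channel SINR to {sinr_db} dB")

def run_sinr_sweep():
    """Step the SINR from 20 dB down to -15 dB in 1 dB steps, 4 s per step."""
    for sinr in range(START_SINR_DB, END_SINR_DB - 1, -STEP_DB):
        set_channel_sinr(sinr)
        time.sleep(STEP_SECONDS)

# 36 SINR steps (20 dB down to -15 dB inclusive) * 4 s each = 144 s trace length
print((START_SINR_DB - END_SINR_DB + 1) * STEP_SECONDS)
```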

Figure 6.1: picture of the content in a raw log file

6.5

How data was read

The log files were read with the BbFilter analyzer in Logtool. The produced data was then saved as csv files which were loaded into the AdvancedView in BbVisualizer. This way any graph data needed for this analysis could be produced.


Figure 6.2: picture of the content of the bb-filter data

6.6

Data parameters used in the analysis

In this section it is stated which data parameters were analysed and what our hypotheses were before the analysis.

6.6.1

Throughput over time

The first analysis is to see which BLER target gives the overall best throughput. The hypothesis is that the trace with the highest throughput is the one with the BLER target at 10%, simply because this is the value used in the eNodeB. The throughput is studied in the time interval 41 s to 111 s, since it is in this time interval that all the data holds its assigned BLER target (this is explained in 6.8.1).

6.6.2

Throughput over SINR

An analysis will be performed on how, or if, the throughput varies over different SINR values. Here it will be studied whether there is any gain or loss in switching BLER target at a certain SINR value, which is studied in [13]. How the throughput behaves under different channel conditions compared to the 10% BLER target will be studied in this section. This way one can clearly see which BLER target (if any) gives the best throughput under different SINRs.
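As an illustration of the kind of post-processing behind such a graph, the sketch below averages throughput per rounded SINR value from a bb-filtered csv file; the column names 'sinr' and 'throughput' are assumptions for this sketch and may differ from the actual BbFilter output:

```python
import csv
from collections import defaultdict

def average_throughput_per_sinr(csv_path):
    """Average throughput for each (rounded) SINR value in a bb-filtered csv.
    The column names 'sinr' and 'throughput' are assumptions for this sketch."""
    buckets = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            sinr = round(float(row["sinr"]))
            buckets[sinr].append(float(row["throughput"]))
    return {sinr: sum(v) / len(v) for sinr, v in sorted(buckets.items())}

# Example usage (hypothetical file names): compare two BLER-target traces over SINR
# for target, path in [("10%", "bler10.csv"), ("20%", "bler20.csv")]:
#     print(target, average_throughput_per_sinr(path))
```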


6.7

Parameters for data validation

This section introduces the parameters whose behaviour was already known, i.e. what their respective graph should look like. This was done to validate that the traces contain the data they should and that the graphs are drawn correctly. These parameters must behave correctly if the analysis is to be possible. It will also show over which SINR and time interval the throughput will be studied.

6.7.1

SINR over time

Since the SINR is decreased over time, this graph should show this. The SINR should on average drop 1 dB per 4 seconds and it should decrease linearly, since this is how the script was coded. At the start, the SINR should be at 20 dB and at the end it should be around -15 dB. It is important that there are no big differences between the traces' SINR over time, otherwise the data will not experience the same environment and this analysis cannot be done.

6.7.2

BLER and MCS over time

In the beginning of the simulation the BLER should be around 0% because the channel conditions are very good. This means the MCS should be 24 (the highest value in uplink on this UE). Each trace's BLER should after some time go up to its assigned target and stay around that target. In the time interval when the data holds its assigned BLER target, the MCS will drop stepwise from 24 down to 0 over time. When MCS = 0 is reached the BLER will go up towards 100%. This is because the UE cannot adjust the MCS lower than 0 and thus cannot hold its BLER target.

The trace with the lowest BLER target assigned (BLER target 1%) should have its MCS decreased first. The other curves will follow it with a time delay and BLER 35% should have its MCS decreased last. This is because the SINR over time is exactly the same for all traces and a higher BLER target leads to a higher MCS.

6.7.3

BLER over SINR

The expectation for the BLER over SINR graph is that the BLER should be around 0% at high SINR. When the SINR decreases the BLER should go up to its target. When the SINR decreases further the BLER should increase towards 100%. This graph will show which SINR interval will be studied when analysing throughput over SINR. Only the SINR interval where the BLER can hold its assigned target is interesting when analysing throughput over SINR.


6.7.4

MCS over SINR

At high SINR the data should show a high MCS and it should decrease when the SINR decreases. The traces with higher BLER targets should have a higher MCS at each SINR value than the traces with lower BLER targets. This is because, as stated earlier, a higher BLER target leads to a higher MCS. At high SINR the MCS should be around 24 and at low SINR it should be 0.


6.8 Result of the expected graphs

To be able to do the analysis we had to verify that the baseband traces contain the right data and that the graphs present that data correctly. The following graphs were analysed to see if their behaviour was as stated in 6.7.1 to 6.7.4. If these graphs do not show the right data, the analysis cannot be made.

6.8.1 BLER and MCS over time

Figure 6.3: picture of MCS and BLER over time

Figure 6.3 shows the simulated BLER and MCS over time. When time increases the MCS should drop, and at that time the BLER should reach its target. It can be seen that the data does precisely that, which indicates that the graphs are presenting the right data, under the assumption that the data is correct. The BLER should also hold its assigned target during some time interval. It does so except for BLER targets 35%, 20% and 1%; the reason for this is unclear. We investigated these csv files manually in Excel and saw that the assigned BLER target cannot be held in these traces. The reason for this is unknown, but the graphs represent the data in the csv files correctly. In our view the analysis can be done anyway; the difference is that the traces assigned BLER targets of 1%, 20% and 35% will instead have 2%, 17% and 28% respectively. When a trace's MCS goes from MCS 24 and starts to decrease, that trace's BLER reaches its assigned target, which is as expected. When the MCS goes towards zero, the BLER goes up towards 100%, which is also expected. One can see that the BLER target is held between 41 and 111 seconds. This will be the time interval where the overall throughput will be analysed. A more detailed image is available in appendix B.

6.8.2 SINR over time

Figure 6.4: picture of SINR over time

Figure 6.4 shows SINR over time. One can see that the SINR is decreasing over time, and that it decreases by 1 dB every 4 seconds as it should. In the beginning, the traces have around 25 dB, which they should not have. This, however, will not have any impact on the analysis, because the BLER is around 0% in this time interval. Most important is that there are no significant differences between the SINR values in the traces. At 113 seconds the SINR stops decreasing linearly. This is not good, but it does not matter either, because the analysis is not done on data after 111 seconds. The values were investigated by checking the bb-filtered csv file, and we saw that after 113 seconds the SINR value becomes very irregular.


6.8.3 MCS over SINR

Figure 6.5: picture of MCS over SINR

Figure 6.5 shows the MCS over SINR. At the start it looks as expected: the MCS values for the traces with a lower BLER target drop faster than those with a higher BLER target. At SINR -13 dB the MCS starts to increase. This was not expected and looks wrong. After some investigating we found that the SINR in the high region sometimes drops from around 20 dB to around -20 dB and back to 20 dB within a time span of two ms; the reason for this is unclear. These values contribute when calculating the average of these SINRs. One could argue that throughput over SINR could not be analysed, since the low SINR values would then be credited with throughput achieved under good channel conditions. Fortunately, all the blocks that were sent during these periods are NACKed, so this does not change the throughput over SINR. Also, these SINRs drop to around -20 dB, which is not in the SINR interval we are analysing.


6.8.4 BLER over SINR

Figure 6.6: picture of BLER over SINR

Figure 6.6 shows the BLER over SINR. The graph shows that for the highest SINR the BLER is around 0%, then it goes up to each respective BLER target at around 14 dB. This value is held until the SINR reaches 0 dB, where the BLER rises towards 100%, which it should do. It is in this interval that the throughput over SINR will be analysed. At the lowest SINR values the BLER is decreasing again; the reason for this is the same as in 6.8.3.


6.9 Simulation results

The following section shows the simulated results to be analysed.

6.9.1 Throughput over time

Figure 6.7: picture of throughput over time

Table 6.1: total throughput gain relative to BLER target 10%

BLER target 1%     1.0145
BLER target 5%     1.0189
BLER target 7%     0.9930
BLER target 9%     1.0059
BLER target 11%    1.0001
BLER target 13%    0.9801
BLER target 20%    0.9327
BLER target 35%    0.8714


Figure 6.7 shows that the throughput is decreasing over time as expected. What is interesting is that after every 35 second interval, i.e. at 35 s, 70 s and 105 s, there is a huge dip in the graph. This looked strange, so the csv files (the files the data is plotted from) were examined for erroneous behaviour. We saw that in all files there was a jump in time after every 35 second interval. This means that either the cell phone stopped sending data in this time interval, the eNodeB was unable to receive data, or the baseband-filtered data was generated in a wrong way. This jump in time is the reason for the throughput drop.
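As an illustration of how such a jump could be located programmatically, the sketch below scans a baseband-filtered csv file for gaps in the timestamps. The column name time_ms is an assumption made for illustration and is not necessarily the actual column name in the trace files.

import pandas as pd

def find_time_gaps(csv_path, max_gap_ms=10):
    # Report every place where consecutive samples are further apart than max_gap_ms.
    trace = pd.read_csv(csv_path)
    gaps = trace["time_ms"].diff()
    jumps = trace[gaps > max_gap_ms]
    for idx in jumps.index:
        print(f"gap of {gaps[idx]:.0f} ms ending at t = {trace['time_ms'][idx] / 1000:.1f} s")
    return jumps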

Otherwise there seems to be little difference between the traces' throughput. This is shown in table 6.1, where the exact difference in throughput for each trace, relative to the throughput with BLER target 10% between 41 s and 111 s, is listed. The values in the table were calculated with the following formula:

$$\text{bpsgain}_{bler_x} = \frac{\sum_{s=41}^{111} RB_{bler_x,s}}{\sum_{s=41}^{111} RB_{bler_{10},s}} \qquad (6.1)$$

where RB stands for received bits. The traces with the lower BLER targets 1%, 5% and 7% did not experience any loss in throughput, which was not expected; instead there seems to be a small gain for BLER 5% and BLER 1%. We can also see that there is barely any difference between BLER 9% and 11%, which shows that small differences in the BLER target have no real impact on the throughput. The small difference that can be seen is probably stochastic. We can also see that the higher the BLER target that is assigned, the larger the negative impact on the throughput. At BLER 13% there is a 2% loss relative to BLER 10%, and this loss increases with higher BLER targets. The worst BLER target is 35%, as can be seen in table 6.1. The best BLER target in these simulations is 5%, although the difference is very small (only a 1.89% gain), so this gain might also be stochastic.
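A direct translation of formula 6.1 into Python could look like the sketch below, which compares one trace against the BLER target 10% trace over the 41 s to 111 s window. The column names time_s and received_bits, and the file names in the example, are assumptions made for illustration.

import pandas as pd

def throughput_gain(trace_csv, reference_csv, t_start=41, t_end=111):
    # Sum the received bits in the analysis window and divide by the reference sum,
    # as in formula 6.1.
    def received_bits(path):
        df = pd.read_csv(path)
        window = df[(df["time_s"] >= t_start) & (df["time_s"] <= t_end)]
        return window["received_bits"].sum()
    return received_bits(trace_csv) / received_bits(reference_csv)

# Example: throughput_gain("bler5.csv", "bler10.csv") should give roughly 1.0189
# according to table 6.1.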


6.9.2 Throughput over SINR

Figure 6.8: picture of throughput over SINR

Figure 6.8 shows the throughput over SINR (in the graphs it is called puschSINR). What can be seen is that the throughput decreases when the SINR decreases, which is expected. What looks strange is that the throughput at SINR 13 dB is worse than at SINR 12 dB for all traces except the ones with BLER target 5% and 1%. This was looked into further. The throughput over a specific $SINR_x$ is calculated as:

$$\text{bpsPerSINR}_x = \frac{\sum_{line=SINR_x} TBS_{line} \cdot HARQ_{line}}{\sum_{line=SINR_x} 1} \cdot 1000 \qquad (6.2)$$

where HARQ is either ACK = 1 or NACK = 0, TBS is the number of uncoded bits (real data bits) that are sent during one ms, and $line = SINR_x$ refers to the lines in the trace file where $SINR = SINR_x$. The multiplication by 1000 is to get the unit as Mbits/s.

These values were checked using Excel. The baseband-filtered data was copied into Excel and formula 6.2 was used to calculate the throughput for SINR 12 dB and SINR 13 dB respectively. The calculation showed that the throughput is indeed lower at SINR 13 dB than at 12 dB, so the graph data was correct in these points.
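The same per-SINR average as in formula 6.2 can be computed with a short Python sketch like the one below. The column names puschSINR, tbs and harq (1 for ACK, 0 for NACK) are assumptions made for illustration.

import pandas as pd

def throughput_per_sinr(csv_path):
    # Group the trace by rounded SINR and average TBS * HARQ over the lines in
    # each group, scaled by 1000 as in formula 6.2.
    trace = pd.read_csv(csv_path)
    trace["acked_bits"] = trace["tbs"] * trace["harq"]
    trace["sinr_bin"] = trace["puschSINR"].round().astype(int)
    return trace.groupby("sinr_bin")["acked_bits"].mean() * 1000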

Figure 6.9: picture of throughput gain over SINR relative to BLER target 10%

Figure 6.9 shows the throughput gain for each BLER target in relation to BLER target 10%. In this graph only the SINR values between 14 and 0 dB are shown, which is the SINR interval where the BLER target is held as stated in section 6.8.4.

We can see that BLER 1% seems to have the best throughput at high SINRs but drops slightly at lower SINRs. BLER targets 1% and 35% seem to be the only ones that are affected by the changing SINR: compared to the throughput of BLER target 10%, 1% is decreasing and 35% is increasing with decreasing SINR. The other traces hold a relatively constant relation to BLER 10%. The difference between the targets 11% and 9% is very small. BLER target 7% takes a big dip around 6 dB just like BLER 1%, but recovers after that; this can also be seen in graph 6.8, although not as clearly.

One can also see that between 14 dB and 9 dB, the lower the BLER target that is assigned, the better the throughput that is achieved. The overall best throughput is achieved with BLER 5%, which is almost constantly better than BLER 10% but loses its gain with decreasing SINR. The best BLER target is 1% between 14 dB and 8 dB, and 5% between 8 dB and 0 dB.

6.10 Validation of result

The differences between the simulations shown in figure 6.7 in section 6.9.1 and in figure 6.8 in section 6.9.2 were quite small. To show whether the throughput really depends on the BLER target, and is not a stochastic difference, we did an ANOVA type I test. To do this, a linear regression was made of the data series throughput over time with the BLER target of 10%. The other series were then normalised to this function. The result from the ANOVA gave a p-value of $1.11298 \cdot 10^{-8}$. This value means that the probability that the difference in throughput is stochastic is $1.11298 \cdot 10^{-8}$, i.e. there is most probably a dependency between throughput and the BLER target. To show whether the BLER target of 5% is better than 10%, we did a t-test between these two. A t-test calculates the probability that the difference between two data sets is stochastic or not [26]. In this test we found that the null hypothesis was not rejected. This means that it cannot be excluded that the difference is stochastic. The p-value is 0.23, which means that there is a 23% chance that this difference is stochastic, which is not enough to say that the difference is significant.
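The sketch below outlines how this validation could be reproduced with SciPy, assuming each trace is available as a per-second throughput series sampled at the same time points. It follows the procedure described above and is not the exact implementation used in this work.

import numpy as np
from scipy import stats

def validate(throughput_by_target, times):
    # throughput_by_target: dict mapping BLER target (%) to a numpy array of
    # per-second throughput values; times: numpy array of the sample times.
    slope, intercept, *_ = stats.linregress(times, throughput_by_target[10])
    trend = slope * times + intercept

    # Normalise every series to the linear trend of the BLER 10% series.
    normalised = {target: y / trend for target, y in throughput_by_target.items()}

    # One-way ANOVA over all normalised series, then a t-test for 5% vs 10%.
    _, anova_p = stats.f_oneway(*normalised.values())
    _, ttest_p = stats.ttest_ind(normalised[5], normalised[10])
    return anova_p, ttest_p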

6.11 Conclusion

This analysis of link adaptation has been done by analysing how the throughput behaves for a UE when different BLER targets are assigned to it. We also studied how the throughput is affected by different channel conditions for different BLER targets. This analysis was also done to test whether BbVisualizer's graphs represent the data in the link and whether the tool can handle several normal sized log files at the same time. Section 6.8 shows that the graph data is plotted correctly. This answers question QV1 in the problem formulation.

The result shows that BbVisualizer is able to load 9 different log files of 144 seconds each and compare them to each other without crashing. This gives a total allocation of 9 · 144 = 1296 seconds, which answers QV2 and QV3. From section 6.9 we can draw the conclusion that the different BLER targets have an impact on the throughput. This was also shown by an ANOVA type I test, which answers the first question in the Analysis questions.

The results also show that the best BLER target (5%), according to this study, would increase the uplink throughput performance by 1.89%. We believe this value is too small to actually say that it is better than BLER target 10%. We verified whether this difference was stochastic or not with a t-test. The result shows that the chance that the difference is stochastic is 23%, which is too high to draw the conclusion that BLER 5% is better than 10%. Thus there is only an insignificantly small difference between them. This answers the second question in the Analysis questions. At higher SINRs it seems advantageous to have a low BLER target, but this gain decreases with lower SINRs. The higher the SINR, the higher the throughput gain a lower BLER target will achieve. Thus we can say that there is a dependency between the BLER target, the SINR and the throughput, which answers the third question in the Analysis questions.

With the above answers we can conclude that this tool enables analysis on link adaptation, which answers QV4 in the problem formulation.
