
Mapping of User Quality-of-Experience to Application Perceived Performance for Web Application

Ashfaq Ahmad Shinwary

School of Computing
Blekinge Institute of Technology
371 79 Karlskrona, Sweden

Karlskrona, February 2010

MEE09:86


Abstract

Web browsing holds a major share of the activities on the Internet. This heavy usage makes Web Quality of Experience (QoE) one of the critical factors in deciding the overall success of network services. Among other factors, Web QoE can be affected by network delays that result in higher application download times.

In this thesis work, an effort has been made to map application-level download times to Quality of Experience. A subjective analysis of how users perceive the domain of web browsing has been carried out. For this purpose, a testbed was developed at Blekinge Institute of Technology on which different users were tested. Specific sequences of delays were introduced on the network, resulting in the desired application download times. Regression analysis was performed and a mapping between user QoE and application download times was derived. The conclusions drawn from these results are presented in this thesis report.

Keywords

Application Perceived Performance, Computer Network Measurements, QoE.


Acknowledgements

First, I would like to thank Dr. Patrik Arlos for giving me the opportunity to be part of this interesting research work, and for all the important discussions we had. My sincere appreciation goes to Junaid Shaikh for his constant guidance and valuable feedback.

My sincere thanks to my family for their unlimited support and prayers that made everything possible for me.

Ashfaq Ahmad Shinwary

Karlskrona, February 2010


Contents

Abstract
Acknowledgements
1 Introduction
  1.1 Significance of User-Centric Approach
  1.2 QoE and QoS
  1.3 Document Outline
2 Related Work
  2.1 User Experience
  2.2 QoE in context of Web
3 Experiment Setup
  3.1 Experiment Process
    3.1.1 User Testing
  3.2 Testbed Setup
    3.2.1 Experiment Controller
    3.2.2 Traffic Shaper
    3.2.3 Consumer
    3.2.4 Measurement Point
    3.2.5 Clients
4 Results
  4.1 User QoE Behavior
  4.2 Average Ratings
  4.3 Individual User Behavior
  4.4 Relationship between QoE and Download Time
5 Conclusions and Future Work
  5.1 Future work
Bibliography
A Test Bed Setup
  A.1 Experiment Setup Topology
B Experiment Data: Individual User Behavior (Plots; Page wise)
C Experiment Data: Individual User Plots: Average Download Times and QoE Ratings
D NetEm Delay Behavior


List of Figures

2.1 Forlizzi and Ford definition of UX.
3.1 Experiment Process: Web Sessions.
3.2 Sessions: Graphs.
3.3 Snapshot of designed Web Page for the experiment.
3.4 Block Diagram for Experiment Setup.
4.1 Average download times and user QoE ratings for the complete browsing session.
4.2 Individual page sizes for the complete browsing session.
4.3 Individual User Behavior. Download Times for QoE Ratings YES.
4.4 Individual User Behavior. Download Times for QoE Ratings MAYBE.
4.5 Individual User Behavior. Download Times for QoE Ratings NO.
4.6 Average download times for the QoE ratings.
4.7 Average download times for the QoE ratings.
A.1 Experiment Setup Topology.
B.1 User 1.
B.2 User 2.
B.3 User 3.
B.4 User 4.
B.5 User 5.
B.6 User 6.
B.7 User 7.
B.8 User 8.
B.9 User 9.
B.10 User 10.
B.11 User 11.
B.12 User 12.
B.13 User 13.
B.14 User 14.
B.15 User 15.
B.16 User 16.
B.17 User 17.
B.18 User 19.
B.19 User 20.
B.20 User 22.
B.21 User 23.
B.22 User 24.
B.23 User 25.
B.24 User 26.
B.25 User 27.
C.1 User 1.
C.2 User 2.
C.3 User 3.
C.4 User 4.
C.5 User 5.
C.6 User 6.
C.7 User 7.
C.8 User 8.
C.9 User 9.
C.10 User 10.
C.11 User 11.
C.12 User 12.
C.13 User 13.
C.14 User 14.
C.15 User 15.
C.16 User 16.
C.17 User 17.
C.18 User 18.
C.19 User 19.
C.20 User 20.
C.21 User 21.
C.22 User 22.
C.23 User 23.
C.24 User 24.
C.25 User 25.
C.26 User 26.
C.27 User 27.
D.1 For 0 ms delay.
D.2 For 2 seconds delay.
D.3 For 4 seconds delay.
D.4 For 6 seconds delay.
D.5 For 8 seconds delay.


List of Tables

2.1 Bouch et al. results.
3.1 Sessions.
4.1 QoE Analysis.
4.2 Rating Evaluation.
4.3 Averages Per Page.
4.4 Individual User Behavior.
4.5 Application Download Times.
4.6 Relationships.


Chapter 1

Introduction

User-perceived network performance depends on basically two things: end-to-end network performance and application behavior, the latter in terms of the application's dependency on the network and the intensity of user-application interaction [1]. This thesis work investigates the mapping of user perception to the quality of service at the application level.

For a successful provision of Quality of Experience (QoE), we have to understand all the aspects of the network and application as perceived by the user.

The user expects the service to be reliable, available when wanted, scalable in speed, accurate and efficient [2]. After all, it is the user who will judge the service and express that judgment in human terms, not in any specific metrics.

The aim of this study is to carry out a subjective analysis of how users rate different application download times. For this purpose we devised a testbed on which users were tested in different web browsing sessions. Most of the previous studies [3, 4, 5] and user surveys [6, 7] showed that download times play a crucial role in the user's perception. A sequence of delays was introduced in the experiment scenarios, resulting in different application download times. The users rated the browsing sessions on the basis of how they perceived these download times. The application on which the users were tested was a web browser. By mapping user QoE to application-level metrics, we tried to derive a model that can help correlate subjective QoE with the objective Quality of Service (QoS). Such findings can also benefit commercial service providers by giving them efficient guidelines to keep their subscribers satisfied, hence optimizing their service quality.

The first step towards fully understanding the mappings between QoE and QoS was to carefully review the studies made so far in the concerned field. The findings in this study are backed up by active experiments that were conducted specifically for it. This study gives an insight into how a user perceives web QoE.

The rest of this chapter is organized as follows: Section 1.1 describes the basic motivation behind this study, Section 1.2 explains the terms QoS and QoE, and Section 1.3 defines the layout of the rest of this document.

1.1 Significance of User-Centric Approach

With the advancement of technologies, the Internet is growing rapidly. The Internet of today is not limited to academic or military use. With the advent of 3G technologies, UMTS or WiMAX to name a few, the use of the Internet has gone beyond static desktops to mobile handhelds and portables, which has ultimately resulted in an increase in consumers. The network operators have to keep up with this pace in order to survive in a competitive market [8]. Due to the fierce competition for customers, the network operators have to provide reliable services to maintain customer satisfaction [9].

The user is now an important entity consuming the services offered by the network operators. The operators have every reason to pursue user satisfaction by providing reliable services in order to have an edge over competitors [10]. Now that the user is such an important asset, it is the user who has the control.

A study by Accenture [11] gives an insight into the importance of user satisfaction. The study reveals that dissatisfied users can begin a chain reaction by telling other people about their experience of the service, thus degrading the operator's profile in the market. Furthermore, due to the competition in the market, a dissatisfied user might not think twice before switching to another service, without even bothering to complain about the service. This highlights how fragile and important the user entity has become for an operator.

1.2 QoE and QoS

In the past decade, satisfaction with a service was expressed in Quality of Service (QoS) metrics [12]. There are many formal definitions of QoS, but some generalized definitions are:

“The set of those quantitative and qualitative characteristics of a distributed multimedia system, which are necessary in order to achieve the required functionality of an application.” [13]

“Quality of Service (QoS) refers to the capability of a network to provide better service to selected network traffic over various technologies, including Frame Relay, Asynchronous Transfer Mode (ATM), Ethernet and 802.1 networks, SONET, and IP-routed networks that may use any or all of these underlying technologies.” [14]

“A set of quality requirements on the collective behavior of one or more objects.” [15]


QoS can be described with the help of parameters like latency, jitter or throughput, which can have varying effects on data services. The deployment of QoS mechanisms ensures that traffic can be prioritized by providing guarantees such as dedicated bandwidth or controlled delay. These guarantees can be provided in advance to different users depending on their requirements, which can result in improved performance. QoS is itself a detailed subject and is beyond the scope of this thesis work, hence no further details on QoS are provided.

QoS is of a largely objective nature. Users, on the other hand, might not have any knowledge of how networks work and where these technical terms come into play; their knowledge of networks may be at an abstract level. This is where end-to-end QoS, more generally referred to as Quality of Experience (QoE), comes in. The term QoE was coined in the mid-nineties; it is of a much more subjective nature and addresses the requirements of a normal user. QoE can be differentiated from QoS as:

“Quality of experience (QoE) is a subjective measure of performance in a system. QoE relies on human opinion and differs from Quality of Service (QoS), which can be precisely measured.” [16]

“Quality of Experience (QoE) has been defined as an extension of the traditional quality of service (QoS) in the sense that QoE provides information regarding the delivered services from an end-user point of view.” [17]

QoS metrics are of an objective nature and cannot directly capture the user's subjective view of a specific service. QoE in terms of telecommunications or data services can be defined as a subjective measure, from the user's point of view, of the service offered to them [18]. This involves the user's previous experiences with and future expectations of the service. These aspects of QoE are discussed further in Chapter 2 in the section on User Experience.

One of the many aspects that user perception depends upon is the service type. As this study is mainly focused on the web application, the findings are confined to that area only. ITU-T has categorized web applications as responsive and error-intolerant: a delay of 4 seconds is acceptable, but information loss must be zero [19]. Web browsing applications are of an interactive nature. Keeping this in mind, the main factor that might affect the user's perception is the time it takes for a user request to be completed.

1.3 Document Outline

The rest of the document is organized as follows: Chapter 2 discusses related work in the concerned area, Chapter 3 discusses the experiment process and setup details, and Chapter 4 presents the results and analysis from this thesis work. The document concludes with Chapter 5, which summarizes the basic findings of this study and mentions how this work can be extended in the future.


Chapter 2

Related Work

To fully understand the scope of this thesis work, let us start with an insight into the two main focus areas related to this study, i.e. User Experience and User Perception of Web Applications, both of which lead to a better understanding of Quality of Experience (QoE). This chapter first discusses User Experience in general and then sheds some light on User Experience in the domain of web browsing.

2.1 User Experience

This section discusses the area of User Experience (UX) to develop an understanding of human perception that can in turn support a qualitative assessment of web browsing.

User Experience has been reviewed by many researchers in the fields of Human-Computer Interaction (HCI) and psychology, as well as in academia and industry, and they all seem to agree on one basic theme: UX is “dynamic, context-dependent and subjective” [20]. UX is defined in the ISO 9241 draft 210 [21] as

“A person's perceptions and responses that result from the use or anticipated use of a product, system or service.”

The study by Forlizzi and Ford [22] tried to discriminate the term “Experience” from “User Experience”. They believed that the latter term involves some kind of product and direct, real-life dealing with it in order to be termed UX. They came to the conclusion that UX is influenced by the user-product interaction and whatever surrounds it, as shown in Figure 2.1. They are of the belief that the user's prior experience also plays a role in driving his/her perception of the product. This idea was also confirmed by the study of Makela [23], in which the authors defined UX as

“a result of a motivated action in a certain context. The user’s pre- vious experiences and expectations influence the present experience, and the present experience leads to more experiences and modified expectations.”


Figure 2.1: Forlizzi and Ford (2000) [22]. (Diagram: the USER, with emotions, values and prior experience, interacts with the PRODUCT, with form language, features, aesthetic qualities and usefulness, within a context of use shaped by social and cultural factors.)

These studies agree on the fact that the current UX is built from earlier experiences and expectations. How users experienced the system in the past will deeply affect their current usage of that very system. Similarly, their future expectations will have deep consequences for how they perceive the current system.

The factors that can affect UX were derived by Arhippainen and Tahti [24]. In their study they narrowed down five main factors affecting UX: the user, social factors, cultural factors, context of use and the product itself. These aspects can be further categorized with the help of a study by Hassenzahl and Tractinsky [3], in which they discussed different aspects of the UX definition. They defined UX

“as a consequence of user’s internal state, characteristics of the de- signed system and the context within which the interaction occurs.”

This work was carried further and elaborated for a specific case by Roto [25], who narrows the earlier studies down to three components that can affect UX: User, System and Context. That study again takes into consideration that present experiences are based on earlier acquaintances with the system plus how the user perceives it. With the help of explanations from the study, let us elaborate on these components.

System: This component comprises the entities upon which the system is dependent. The rule for these entities is that they must interact fully with the system; otherwise they would be part of the Context.

Context: Those entities which are not part of the system but can still affect the UX lie in the Context.

User: Users themselves are a component; as concluded from the previous studies, the user's past experience and future expectations decisively affect the current UX.

2.2 QoE in context of Web

Web browsing applications are responsive in nature. Studies in the past have tried to link objective QoS parameters to the user's perception. This section explains the factors that affect web browsing.

Studies by Bouch et al [4] and Ramsay et al [26] reveal that end-to-end response time has a deep effect on the user's perception. The end-to-end response time, or download time, is the time from when a client requests something from a server until the client gets the response back. Users do not like to wait long for a page to download completely.

Nielsen [27] concluded from his studies in the year 2000 that a delivery time of 10 seconds is within the range of user satisfaction, before the user gets bored and his/her perception is affected. The same year, Bouch et al [4] came up with similar guidelines, shown in Table 2.1, which lists delays and their respective ratings when the web browser loaded pages at once and incrementally. Now that the Internet has grown and high-speed communication systems are available, user requirements have changed as well. Such a conclusion is also drawn in this study itself, as discussed later in Chapter 4.

Table 2.1: Ratings and delays (t, in seconds) with different page loading methods.

Rating  | At once    | Incremental
High    | t < 5      | t ≤ 39
Average | 5 < t ≤ 11 | 39 < t ≤ 56
Low     | t > 11     | t > 56

Nielsen [5] pointed out that factors like the server's throughput, the server's speed, the user's connection speed and browser optimization itself all affect page download speeds. The user surveys of the Georgia Institute of Technology, Atlanta, showed in their last study [6] that speed is by far the most prominent problem for users. Shubin [28] also pointed out that users' tolerance for delay decreases when they are expecting high quality. This was confirmed in a survey by Jupiter Research [7], in which 33% of broadband users were not willing to tolerate a delay of more than four seconds. We derived similar conclusions from this study, which leads us to conclude that the 10-second delay standard from the previous studies is not applicable anymore. For these reasons we have carried out this study considering download time as the deciding factor that can sway user perception.


Web content is also a factor that can affect user perception of web browsing. Stuffing enormous amounts of information into a web page will not only affect the speed of that page but will also have psychological effects on the user's perception. Web pages heavily loaded with content like Flash animations or heavy graphics ultimately take more time to load. Shubin [28] revealed that such practice can lead to user dissatisfaction, because the user might be distracted from the original task he/she was performing, sometimes also causing a lack of interest. This may often lead to unsuccessful completion of tasks as well.

Nielsen [5] and Shubin [28] stated that network conditions have their own role to play. Bottlenecks in the network lead to deteriorating performance, thus affecting the user's perception. Latency in the network can lead to long page delays, making the user frustrated with web applications [29]. Hence, besides the design of the web pages and web servers, network technicalities have their own role to play. One other issue worth considering is the download success probability. Sometimes users are eager to get their information and set the delay concern aside; this is often the case in e-commerce scenarios. Studies reveal that slow downloads also lead to user dissatisfaction, resulting in abandonment of a service, which in turn can have grave consequences for revenues as well. Such a trend was observed in the studies conducted by Kohavi et al (reported in A. King [38]): slowing search results by up to two seconds led to fewer queries and fewer ad clicks from the users. Hence download success probability plays a crucial role in the user's perception of the service.


Chapter 3

Experiment Setup

This chapter provides a detailed explanation of how the experiments were carried out to investigate the user experience in web browsing. It begins with an insight into the experiment process and then proceeds to a detailed explanation of the experiment setup.

3.1 Experiment Process

In order to investigate user perception in web browsing, we created an experimental testbed on which users were tested under specific scenarios. These scenarios were the same for all the users that we tested. This section presents the details of the whole process.

We devised the experiment in such a way that the users were presented with a sequence of web pages. These web pages contained different pictures that were loaded, and the users had to rate how they perceived the whole service. All the pictures included in the experiment were of sceneries and landscapes, so that the content of the pictures would not affect user perception and would instead have a neutral effect on the users. As one of the major parameters affecting user perception is the time it takes for a page to load (as mentioned in Section 2.2), our study is based on the correlation of page download times to user ratings.

The browsing experiment was divided into four main sessions comprising 94 web pages that the users had to browse. Each session was subdivided into subsessions of 6 pages each, except for Session 1, which had a total of 22 pages. The sessions are differentiated by the type of sequence in which delays were introduced. Session 1 implemented no delay at all, making it our reference session. Session 2 implemented delays in increasing order and Session 3 in decreasing order, while Session 4 implemented an alternating delay sequence. The purpose of these sequences was to see how user perception deviated as the sessions progressed.

The users were not made aware of these sequences. A schematic of the complete web browsing session is shown in Figure 3.1. When the experiment is started, Session 1 is initialized and runs for a total of 22 pages.


Figure 3.1: Experiment Process: Web Sessions. (Flow diagram: from the index page, user information is collected and Session 1 is initialized; each page stores the user rating in the database and advances the page counter, 22 pages in Session 1 and 6 pages per subsession thereafter; when a session's pages are exhausted, the session counter is incremented and its delay settings are fetched from the database, until all four sessions complete or the user aborts.)

The user ratings for these pages are written to the database. Once the 22 pages have been browsed, the session counter is incremented and the page counter is reset to 1. Session delay settings are read from the database as well; the delay is then implemented by the network shaper. After every 6 pages the session counter is incremented again and new delay settings are read from the database. This process continues until the experiment has completed all four sessions. The respective ratings for all the pages are written to the database.

These delay sequences were made possible with the help of a network traffic shaper, which is discussed later in Section 3.2.2. The delay introduced in the network resulted in different download times at the application level. The delay sequences used in the experiment were decided keeping in mind previous work of the same nature [4]. Figure 3.2 illustrates the sequence of application-level download times throughout the experiment. Table 3.1 shows the delays introduced by the shaper (D) and the desired application-level download times (t̃_d). As the delays were introduced by the shaper for multiple users simultaneously, the download times varied on the scale of ±300 ms for low-intensity delays and ±1000 ms for higher-intensity delays during the experiment.


Table 3.1: Details about Sessions.

Session | D [ms] | t̃_d [s]
1.1     |   0    | 0.3
2.1     |   0    | 0.3
2.2     |  35    | 2.0
2.3     |  85    | 4.0
2.4     | 185    | 8.0
3.1     | 135    | 6.0
3.2     |  85    | 4.0
3.3     |  35    | 2.0
3.4     |   0    | 0.3
4.1     |  35    | 2.0
4.2     |  85    | 4.0
4.3     |  35    | 2.0
4.4     | 135    | 6.0
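For illustration, the schedule of Table 3.1 can be represented as a simple lookup structure. The sketch below is hypothetical: the thesis stored these settings in the experiment controller's database, and its tooling was written in Perl and Matlab rather than Python.

```python
# Shaper delay D [ms] and desired application-level download time [s]
# per (sub)session, transcribed from Table 3.1.
SESSIONS = {
    "1.1": (0, 0.3),
    "2.1": (0, 0.3), "2.2": (35, 2.0), "2.3": (85, 4.0), "2.4": (185, 8.0),
    "3.1": (135, 6.0), "3.2": (85, 4.0), "3.3": (35, 2.0), "3.4": (0, 0.3),
    "4.1": (35, 2.0), "4.2": (85, 4.0), "4.3": (35, 2.0), "4.4": (135, 6.0),
}

delay_ms, target_td = SESSIONS["2.3"]
print(delay_ms, target_td)  # 85 4.0
```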

Figure 3.2: Sessions behavior depicted in the graph. (Download time, 0-8 s, versus page number, 1-94, with the subsession boundaries 1, 2.1-2.4, 3.1-3.4 and 4.1-4.4 marked along the step-wise delay profile.)

3.1.1 User Testing

A total of 27 users were tested in the experiment. All subjects were male, aged between 24 and 33 years, and avid users of the Internet. All of them were students of telecommunication systems at graduate or doctorate level, of different nationalities but mostly from South Asia. The users were tested in two sets: 13 users in the first set and 14 in the second, each set tested simultaneously. Each user was assigned a desktop computer and had to go through the sequence of web pages in each session, providing a rating for each page as it was loaded. A snapshot of the web page designed for the experiment is shown in Figure 3.3. Each page contained a picture and a link to the next page. Furthermore, underneath the loaded picture the user was asked the question “Would you like to continue using this service?” The users had to answer this question by selecting one of the three options provided to them, namely:

YES: They were completely satisfied with the service.

MAYBE: They were not sure about the service.

NO: They were dissatisfied with the service.

The users were asked not to rate the content of the web page or the picture, but specifically to rate the perceived performance of the service offered to them. As the users browsed through the sequence of pages, each page invoked its respective shaper setting.

3.2 Testbed Setup

This section discusses the components of the testbed on which the tests were performed. The experiment setup is illustrated with a block diagram in Figure 3.4. The network traffic traces were collected using the Distributed Passive Measurement Infrastructure (DPMI) [40]. The main components included in the testbed setup are the following.

3.2.1 Experiment Controller

The experiment controller is a Linux machine that acts as the main server in the whole setup, responsible for controlling the flow of the experiment. It triggers all the components involved in the setup. Furthermore, it acts as the web server hosting the web pages accessed by the users, and as the database server for the setup; all relevant user information and logs are stored on it.
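As an illustration of the kind of log the controller keeps, the following sketch uses sqlite3 as a stand-in for the controller's database; the thesis does not name the database engine, and the table layout here is hypothetical:

```python
# One row per user, session, page and QoE answer, as described in
# Sections 3.1 and 3.1.1.
import sqlite3

conn = sqlite3.connect("experiment.db")
conn.execute("""CREATE TABLE IF NOT EXISTS ratings (
    user_id INTEGER, session TEXT, page INTEGER,
    rating TEXT CHECK (rating IN ('YES', 'MAYBE', 'NO')))""")
conn.execute("INSERT INTO ratings VALUES (?, ?, ?, ?)",
             (1, "2.2", 29, "MAYBE"))
conn.commit()
```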

3.2.2 Traffic Shaper

The traffic shaper is a Linux machine that introduces the desired delays in the network, based on the parameters given as input by the experiment controller. It uses NetEm [41], a network emulator that applies delays to packets according to a Gaussian distribution. Histogram plots for the different delays implemented by the network shaper are shown in Appendix D. Our main focus was to yield the download times listed in Table 3.1. The delays were implemented on the downlink, in the direction from the web server to the client; as a result, they affected the application-level download times at the user end.
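On Linux, NetEm is typically configured through the tc utility. The sketch below shows one plausible way the controller could have driven the shaper; the interface name and the jitter value are assumptions, not taken from the thesis:

```python
import subprocess

def set_netem_delay(interface: str, delay_ms: int) -> None:
    """(Re)configure the egress qdisc with a normally distributed delay."""
    if delay_ms == 0:
        # Remove any existing shaping for the zero-delay sessions.
        subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"],
                       check=False)
        return
    jitter_ms = max(1, delay_ms // 10)  # assumed jitter, not from the thesis
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", interface, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "distribution", "normal"],
        check=True)

set_netem_delay("eth1", 85)  # e.g. sessions 2.3 and 3.2 in Table 3.1
```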

3.2.3 Consumer

The consumer is a Linux machine used for storage of the network traffic for later offline analysis. As the scope of this thesis is limited to the application level, this part is not discussed in detail.


Figure 3.3: Snapshot of the designed web page for the experiment.


Figure 3.4: Block diagram for the experiment setup. (Clients 1-16 connect through switches, via the traffic shaper, to the experiment controller/web server, which hosts the database; measurement points tap the data flow and feed the consumer. Data flow and control flow are shown separately.)

3.2.4 Measurement Point

The measurement point is a Linux machine equipped with Endace DAG network cards [42] version 3.5, responsible for capturing the network traffic. The traces are captured with the help of a predefined set of filters, e.g. filtering on IP addresses or port numbers.

3.2.5 Clients

The clients are desktop computers running Windows XP SP2, on which the users performed their web browsing tasks. Mozilla Firefox [43] version 2.0 with the modified Fasterfox [44] utility was used as the web browser on these computers. Logs of the browsing activity, with the URL and download time of each web page, are stored on the respective clients. The system clocks are synchronized with the help of the Network Time Protocol [45].


Chapter 4

Results

This chapter presents the results derived from the data logs gathered in the experiment. Special tools were developed in Perl and Matlab to derive results from the experimental data. First, an overview of the distribution of QoE ratings is given. It is followed by an investigation into how the download times affected the user QoE ratings during the browsing sessions. Then the individual user behavior is discussed, and the last section provides a mapping between user QoE ratings and application download times.

4.1 User QoE Behavior

During the experiments, the user responses to the different scenarios were recorded as ratings of how they perceived the service. If they were fully satisfied with the service, they rated it YES. In case of doubt, they had the option to rate it MAYBE. In case of total dissatisfaction they rated the service NO.

Table 4.1 relates the implemented delay to QoE by showing the density of user responses for the corresponding sessions and their delays. The first and second columns show the sessions and the delay (D) implemented within them. The third column shows the desired application download times (t̃_d). The next three columns show the number of user responses recorded as YES (TY), MAYBE (TM) and NO (TN) during the complete browsing sessions, and the last column shows the sum of these three QoE ratings (Tresp). The table reveals that the user responses are quite positive when no delays are implemented (Sessions 1 and 3.4). In contrast, the user ratings are inevitably poor in the sessions where high delays are implemented (Session 2.4 (8 seconds), 3.1 (6 seconds) and 4.4 (6 seconds)).

As evident from the last column in Table 4.1, the user responses within each session tally up to a total of 162, except for Session 1. This is because Session 1 had a total of 22 pages for the users to browse; a total of 27 users were tested in the experiment, making a total of 567 responses in Session 1. The remaining sessions each had four subsessions of 6 pages for the users to browse, giving a total of 162 responses per session from the 27 users tested.


The QoE responses received from the users within each session are shown in Table 4.1. The table gives an indication of the deviation in user perception of the application download times experienced in the browsing sessions. As discussed in Section 3.1, the sole purpose of the delay sequences differentiating the browsing sessions was to see how the users reacted to the resulting download times at the application end. The density of the positive rating YES is, as expected, higher in Sessions 1 and 2.1, with almost 80% of the users satisfied, as no delays were introduced in the network during these sessions. A decrease in the density of QoE rating YES is seen in the subsequent sessions as the delay factor is increased up to Session 2.4. Then, as the delay factor is decreased, positive responses from the users increase as well. It is, however, quite interesting to note one behavior on the user side. Sessions 2.2 and 3.3 implement the same network delay, but the user response in the latter session is more positive (≈30% increase in QoE rating YES and ≈20% decrease in QoE rating NO in Session 3.3 compared with Session 2.2). Then Session 3.4, in which again no delay is introduced from the network side, results in ≈97% user satisfaction, higher than the satisfaction percentages of Sessions 1 and 2.1.

The user responses give an indication of how user perception is affected by changes in download times at the application end. When the users were exposed to a series of pages with minimal delays at the start of their browsing activity, they were quite unsatisfied once they were introduced to pages implementing some delay, and the QoE ratings decreased. But as they were then exposed to sessions implementing delays in decreasing sequence, a positive trend was found in their QoE ratings: the very application download times that the users had experienced in the early sessions and rated low, they now rated high. This shows how experience affects their perception of application download times.

However, the same cannot be said about Session 4, which implemented an alternating sequence of delays, if we look at the densities of QoE rating YES within this session. The densities of QoE rating NO do, however, give an idea of user dissatisfaction: Sessions 4.2 and 4.4, which implement the high delays within Session 4, resulted in greater user dissatisfaction percentages.

4.2 Average Ratings

One important aspect of this thesis work is to find out how application download times affected user QoE ratings. The evaluation of the user responses was carried out on the 3-point scale shown in Table 4.2. As mentioned in Section 3.1.1, the users had to answer the QoE evaluation question by selecting YES, MAYBE or NO. For statistical analysis, YES is evaluated with a rating of 3, MAYBE with a rating of 2 and NO with a rating of 1.
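As a minimal sketch of the per-page statistics behind Figure 4.1 and Table 4.3 (the record layout is hypothetical; the thesis used Perl and Matlab tools for this step):

```python
import statistics

SCORE = {"YES": 3, "MAYBE": 2, "NO": 1}  # 3-point scale of Table 4.2

def page_stats(records):
    """records: iterable of (page_number, rating_label) over all users.
    Returns {page: (mean rating, standard deviation)}."""
    per_page = {}
    for page, label in records:
        per_page.setdefault(page, []).append(SCORE[label])
    return {page: (statistics.mean(s), statistics.pstdev(s))
            for page, s in per_page.items()}

print(page_stats([(1, "YES"), (1, "YES"), (1, "MAYBE")]))
# -> {1: (2.666..., 0.471...)}
```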

Figure 4.1 depicts the average download times throughout the whole experiment and how the users responded to them. Figure 4.1-A plots the average download times against the web pages, and Figure 4.1-B plots the QoE ratings against the web pages.


Table 4.1: QoE Ratings Responses: Overview.

Session ID | D [ms] | t̃_d [s] | TY  | TM | TN  | Tresp
1          |   0    | 0.3     | 473 | 71 |  23 | 567
2.1        |   0    | 0.3     | 126 | 32 |   4 | 162
2.2        |  35    | 2.0     |  48 | 72 |  42 | 162
2.3        |  85    | 4.0     |  12 | 68 |  82 | 162
2.4        | 185    | 8.0     |  11 | 25 | 126 | 162
3.1        | 135    | 6.0     |  16 | 26 | 120 | 162
3.2        |  85    | 4.0     |  30 | 58 |  74 | 162
3.3        |  35    | 2.0     |  91 | 57 |  14 | 162
3.4        |   0    | 0.3     | 157 |  4 |   1 | 162
4.1        |  35    | 2.0     |  42 | 86 |  34 | 162
4.2        |  85    | 4.0     |   6 | 54 | 102 | 162
4.3        |  35    | 2.0     |  38 | 90 |  34 | 162
4.4        | 135    | 6.0     |  12 | 29 | 121 | 162

Figure 4.1: Average download times and user QoE ratings for the complete browsing session. (Panel A: maximum, average and minimum download time [s] per page number, 1-94. Panel B: average user rating, 1-3, per page number, with ± one standard deviation.)


Table 4.2: Rating Evaluation.

Rating | Evaluation
Yes    | 3
Maybe  | 2
No     | 1

High QoE ratings from the users were recorded in the sessions implementing low delay thresholds. This can be noted in pages 1 to 28 and then pages 64 to 70. As also pointed out in Section 4.1, it is interesting to note that the later pages yielded higher QoE ratings than pages 1 to 28: after being presented with some higher delays, the users were relieved to get minimal delays in their browsing activity, resulting in a positive attitude. Similarly, pages resulting in higher download times ended with relatively low QoE ratings. Following Figure 4.1-A, a spike in the average download time is observed at page 8, which is due to one sample with a very high download time (User 19: Figure C.18, Appendix B), resulting in a higher average.

The data on which Figure 4.1 is based are presented in detail in Table 4.3. The first column shows the page number, followed by the session number and the desired download time (t̃_d) for the respective page. The fourth column shows the page size in MB noted for each page. The fifth and sixth columns show the average download time (td) and the standard deviation of the download time (Std(td)) noted for each page, followed by the average user QoE rating (QoE) and the standard deviation of the user QoE rating (Std(QoE)). Following the QoE ratings given by the users, another interesting observation is that delays of higher intensity were easily detected by the users, and their ratings were also more consistent. This can be concluded by looking at the standard deviations of the QoE ratings in Table 4.3: the standard deviation for pages implementing high delays (Sessions 2.3, 2.4, 3.1, 3.2, 4.2 and 4.4) is comparatively smaller than for the pages implementing low delays. The greater standard deviation of the QoE ratings on pages implementing lower delays suggests that at small delays, different users perceive the same download time differently. This point is elaborated further in the next section. Plots similar to Figure 4.1 for individual users are provided in Appendix B.


Table 4.3: Averages Per Page.

Page | Session | t̃_d [s] | Page Size [MB] | td [s] | Std(td) [s] | QoE  | Std(QoE)
1    | 1       | 0       | 0.93           | 0.48   | 0.15        | 2.96 | 0.30
2    | 1       | 0       | 1.00           | 0.39   | 0.11        | 2.81 | 0.29
3    | 1       | 0       | 1.02           | 0.35   | 0.11        | 2.85 | 0.30
4    | 1       | 0       | 0.95           | 0.35   | 0.10        | 2.81 | 0.29
5    | 1       | 0       | 0.98           | 0.35   | 0.13        | 2.81 | 0.29
6    | 1       | 0       | 0.93           | 0.34   | 0.10        | 2.81 | 0.29
7    | 1       | 0       | 1.11           | 0.38   | 0.13        | 2.78 | 0.29
8    | 1       | 0       | 1.00           | 1.13   | 3.98        | 2.85 | 0.30
9    | 1       | 0       | 1.04           | 0.36   | 0.11        | 2.74 | 0.29
10   | 1       | 0       | 1.01           | 0.35   | 0.11        | 2.93 | 0.30
11   | 1       | 0       | 1.02           | 0.37   | 0.15        | 2.89 | 0.30
12   | 1       | 0       | 1.08           | 0.37   | 0.13        | 2.78 | 0.29
13   | 1       | 0       | 0.98           | 0.36   | 0.12        | 2.78 | 0.29
14   | 1       | 0       | 1.02           | 0.36   | 0.12        | 2.81 | 0.30
15   | 1       | 0       | 1.01           | 0.35   | 0.11        | 2.70 | 0.29
16   | 1       | 0       | 1.04           | 0.35   | 0.09        | 2.74 | 0.29
17   | 1       | 0       | 1.02           | 0.35   | 0.10        | 2.63 | 0.28
18   | 1       | 0       | 1.14           | 0.36   | 0.12        | 2.81 | 0.30
19   | 1       | 0       | 0.99           | 0.34   | 0.12        | 2.78 | 0.29
20   | 1       | 0       | 1.05           | 0.38   | 0.14        | 2.81 | 0.29
21   | 1       | 0       | 1.05           | 0.36   | 0.13        | 2.89 | 0.30
22   | 1       | 0       | 0.95           | 0.42   | 0.28        | 2.63 | 0.28
23   | 2.1     | 0       | 1.11           | 0.50   | 0.13        | 2.63 | 0.28
24   | 2.1     | 0       | 0.99           | 0.35   | 0.10        | 2.70 | 0.28
25   | 2.1     | 0       | 0.98           | 0.36   | 0.10        | 2.74 | 0.29
26   | 2.1     | 0       | 1.05           | 0.37   | 0.11        | 2.78 | 0.29
27   | 2.1     | 0       | 1.03           | 0.35   | 0.11        | 2.81 | 0.29
28   | 2.1     | 0       | 1.07           | 0.37   | 0.13        | 2.85 | 0.30
29   | 2.2     | 2       | 1.01           | 1.88   | 0.38        | 2.07 | 0.23
30   | 2.2     | 2       | 0.94           | 1.89   | 0.36        | 2.07 | 0.23
31   | 2.2     | 2       | 1.06           | 1.95   | 0.40        | 2.00 | 0.22
32   | 2.2     | 2       | 0.93           | 1.80   | 0.40        | 2.11 | 0.23
33   | 2.2     | 2       | 1.00           | 2.02   | 0.48        | 2.00 | 0.22
34   | 2.2     | 2       | 1.02           | 1.73   | 0.48        | 1.96 | 0.22
35   | 2.3     | 4       | 0.95           | 4.03   | 1.23        | 1.59 | 0.18
36   | 2.3     | 4       | 0.98           | 3.96   | 0.87        | 1.56 | 0.17
37   | 2.3     | 4       | 0.93           | 3.81   | 0.87        | 1.59 | 0.18
38   | 2.3     | 4       | 1.11           | 4.59   | 1.28        | 1.52 | 0.17
39   | 2.3     | 4       | 1.00           | 3.81   | 0.96        | 1.59 | 0.18
40   | 2.3     | 4       | 1.04           | 3.73   | 0.80        | 1.56 | 0.17
41   | 2.4     | 8       | 1.01           | 8.17   | 2.35        | 1.30 | 0.14
42   | 2.4     | 8       | 1.02           | 7.78   | 2.00        | 1.30 | 0.15
43   | 2.4     | 8       | 1.08           | 8.16   | 2.52        | 1.30 | 0.15
44   | 2.4     | 8       | 0.98           | 7.75   | 2.67        | 1.33 | 0.15
45   | 2.4     | 8       | 1.02           | 7.54   | 2.52        | 1.30 | 0.15
46   | 2.4     | 8       | 1.01           | 7.47   | 2.40        | 1.22 | 0.14
47   | 3.1     | 6       | 1.04           | 6.05   | 1.62        | 1.30 | 0.15
48   | 3.1     | 6       | 1.02           | 5.87   | 1.51        | 1.26 | 0.14
49   | 3.1     | 6       | 1.14           | 6.61   | 1.85        | 1.30 | 0.15
50   | 3.1     | 6       | 0.99           | 5.91   | 2.22        | 1.41 | 0.16
51   | 3.1     | 6       | 1.05           | 5.68   | 1.75        | 1.52 | 0.17
52   | 3.1     | 6       | 1.05           | 6.01   | 1.51        | 1.37 | 0.16
53   | 3.2     | 4       | 0.95           | 4.10   | 1.19        | 1.59 | 0.18
54   | 3.2     | 4       | 1.11           | 3.99   | 0.88        | 1.63 | 0.19
55   | 3.2     | 4       | 0.99           | 3.77   | 0.94        | 1.81 | 0.20
56   | 3.2     | 4       | 0.98           | 3.76   | 1.01        | 1.74 | 0.20
57   | 3.2     | 4       | 1.05           | 3.79   | 1.02        | 1.89 | 0.21
58   | 3.2     | 4       | 1.03           | 3.69   | 0.94        | 1.70 | 0.19
59   | 3.3     | 2       | 1.07           | 1.88   | 0.32        | 2.37 | 0.25
60   | 3.3     | 2       | 1.01           | 1.73   | 0.29        | 2.41 | 0.26
61   | 3.3     | 2       | 0.94           | 1.72   | 0.41        | 2.44 | 0.26
62   | 3.3     | 2       | 1.06           | 1.88   | 0.38        | 2.56 | 0.27
63   | 3.3     | 2       | 0.93           | 1.68   | 0.35        | 2.59 | 0.28
64   | 3.3     | 2       | 1.00           | 1.75   | 0.34        | 2.48 | 0.26
65   | 3.4     | 0       | 1.02           | 0.47   | 0.10        | 2.81 | 0.29
66   | 3.4     | 0       | 0.95           | 0.35   | 0.10        | 3.00 | 0.31
67   | 3.4     | 0       | 0.98           | 0.35   | 0.11        | 3.00 | 0.31
68   | 3.4     | 0       | 0.93           | 0.35   | 0.11        | 3.00 | 0.31
69   | 3.4     | 0       | 1.11           | 0.37   | 0.12        | 2.96 | 0.31
70   | 3.4     | 0       | 1.00           | 0.36   | 0.11        | 3.00 | 0.31
71   | 4.1     | 2       | 1.04           | 1.94   | 0.39        | 1.96 | 0.21
72   | 4.1     | 2       | 1.01           | 1.82   | 0.34        | 2.04 | 0.22
73   | 4.1     | 2       | 1.02           | 1.87   | 0.36        | 2.19 | 0.23
74   | 4.1     | 2       | 1.08           | 1.84   | 0.36        | 2.00 | 0.22
75   | 4.1     | 2       | 0.98           | 1.70   | 0.40        | 2.07 | 0.23
76   | 4.1     | 2       | 1.02           | 1.81   | 0.36        | 2.04 | 0.22
77   | 4.2     | 4       | 1.01           | 3.77   | 0.82        | 1.48 | 0.16
78   | 4.2     | 4       | 1.04           | 4.24   | 1.15        | 1.41 | 0.16
79   | 4.2     | 4       | 1.02           | 3.83   | 1.07        | 1.44 | 0.16
80   | 4.2     | 4       | 1.14           | 3.92   | 1.16        | 1.30 | 0.14
81   | 4.2     | 4       | 0.99           | 3.79   | 1.29        | 1.41 | 0.16
82   | 4.2     | 4       | 1.05           | 4.04   | 1.13        | 1.41 | 0.16
83   | 4.3     | 2       | 1.05           | 2.01   | 0.54        | 1.96 | 0.21
84   | 4.3     | 2       | 0.95           | 1.73   | 0.52        | 2.00 | 0.22
85   | 4.3     | 2       | 1.11           | 2.03   | 0.59        | 2.04 | 0.22
86   | 4.3     | 2       | 0.99           | 2.03   | 0.55        | 2.00 | 0.22
87   | 4.3     | 2       | 0.98           | 1.87   | 0.53        | 2.07 | 0.22
88   | 4.3     | 2       | 1.05           | 1.92   | 0.41        | 2.07 | 0.22
89   | 4.4     | 6       | 1.03           | 5.48   | 1.49        | 1.30 | 0.15
90   | 4.4     | 6       | 1.07           | 5.83   | 2.28        | 1.15 | 0.13
91   | 4.4     | 6       | 1.01           | 5.59   | 1.95        | 1.44 | 0.16
92   | 4.4     | 6       | 0.94           | 6.12   | 1.95        | 1.41 | 0.16
93   | 4.4     | 6       | 1.06           | 5.81   | 2.01        | 1.22 | 0.14
94   | 4.4     | 6       | 0.93           | 5.11   | 1.58        | 1.44 | 0.16


Figure 4.2: Individual page sizes for the complete browsing session. (Page size [MB], ranging roughly from 0.9 to 1.15, per page number, 1-94.)

In order to further investigate the variation in download times across the different web pages, the web page sizes are also considered here. The sizes are given in Table 4.3 and ranged between 0.93 MB and 1.14 MB; Figure 4.2 shows the respective page sizes.

It was found that the web page size had some effect on the time it took for a page to download. In some particular cases, however, a smaller page took considerably longer to download than a larger one. To investigate this further, regression analysis was performed for different relationships (linear, exponential, logarithmic and power) between the page download times for individual sessions and their respective page sizes, to establish the nature of the relationship between these two factors. To confirm this, an analysis was also carried out for an individual sample user. The analysis showed that the relationships across the different sessions were not consistent: apart from a linear relationship, most of them resulted in power, exponential or logarithmic relationships. No common pattern was observed, which leads to the conclusion that factors other than page size also affect the download times. Although the variation between the web page sizes and the download times was quite low, the question of what other factors affected the download times of these web pages still merits further investigation.

4.3 Individual User Behavior

During the analysis it was quite interesting to note the differences in user perception. These differences also made it difficult to draw a general conclusion about the relationship between QoE and download times: for instance, one user might find a certain threshold unacceptable while another user would rate that very threshold as acceptable. This section elaborates on the individual user behavior with respect to the different QoE ratings with which the users evaluated the pages.

Table 4.4 shows, for each individual user, the average download times (td) within each QoE rating. Figure 4.3 plots, for the individual users, what they considered to be acceptable (QoE rating YES) download times.


Table 4.4: Individual User Behavior.

User   td YES [s]   Std.   td MAYBE [s]   Std.   td NO [s]   Std.

1 0.66 2.69 0.73 2.69 2.59 2.15

2 2.07 3.65 3.95 3.80 3.78 3.81

3 0.32 4.02 0.84 4.02 4.46 2.98

4 1.00 4.09 4.01 4.02 8.09 3.89

5 0.36 4.13 1.56 4.13 5.08 3.26

6 1.34 4.16 3.66 4.22 6.90 3.90

7 1.38 4.06 4.81 3.99 7.96 3.97

8 0.56 2.26 2.33 2.10 3.72 2.17

9 0.76 3.74 3.74 3.59 6.34 3.55

10 0.42 3.92 1.68 3.91 5.19 3.17

11 2.26 3.58 2.51 3.53 3.80 3.51

12 0.51 3.98 2.12 3.96 5.68 3.30

13 0.36 2.83 1.58 2.80 3.76 2.33

14 0.52 2.81 1.12 2.78 3.21 2.21

15 0.33 3.66 1.90 3.64 4.91 2.97

16 0.52 3.94 2.27 3.88 6.75 3.52

17 0.31 4.04 2.13 4.00 5.81 3.46

18 2.44 2.41 6.81 3.39 1.67 3.39

19 1.39 3.63 2.44 3.50 3.38 3.52

20 0.67 2.99 3.15 2.63 5.23 2.96

21 0.33 3.55 1.71 3.46 5.00 3.12

22 0.49 3.72 2.59 3.65 5.79 3.33

23 0.51 3.60 1.99 3.55 5.57 3.10

24 0.76 3.88 4.64 3.21 6.07 3.90

25 0.94 3.71 3.18 3.74 5.54 3.34

26 1.12 3.39 4.86 3.33 6.01 3.37

27 0.31 3.74 1.88 3.73 5.03 3.04


Figure 4.3: Individual User Behavior. Download Times for QoE Ratings YES. (Maximum, average and minimum download time [s] per user, 1-27.)

Figure 4.4: Individual User Behavior. Download Times for QoE Ratings MAYBE. (Maximum, average and minimum download time [s] per user, 1-27.)

In Figure 4.3, each data point represents the average download time (on the y-axis) for a QoE rating of YES for the different users (on the x-axis), along with the highest and lowest download time recorded for each user within this rating. The corresponding average download time plots for QoE ratings MAYBE and NO are shown in Figure 4.4 and Figure 4.5, respectively. The variation in perception from user to user is evident from these plots.


Figure 4.5: Individual User Behavior. Download Times for QoE Ratings NO. (Maximum, average and minimum download time [s] per user, 1-27.)

4.4 Relationship between QoE and Download Time

One of the main aspects of this thesis work is to map a web application metric to QoE. For this purpose, an investigation was carried out into how the users rated the different sessions as affected by their respective download times. This section presents a mapping between the QoE ratings and the download times.

Figure 4.6 plots the QoE ratings against the average download times for each rating, based on the data collected from the 27 users. Evidently, low download times result in high QoE ratings, forming an inverse relationship between the two. This might not, however, hold for every user tested in the experiment. Plots similar to Figure 4.6 for individual users are included in Appendix C.

Regression analysis was performed to investigate which relationship best fits the mapping between average download times and QoE ratings. The regression analysis was performed on the average download times for the different QoE ratings; Table 4.5 shows the average application download times (td) within each QoE rating, derived from the experiments. Correlation coefficients were derived for linear, logarithmic, exponential and power relationships, and the fits were compared on the basis of the correlation coefficient and the residual sum of squares. These regressions are shown in Table 4.6, where the variable y represents the user QoE rating and the variable x represents the download time.

Table 4.5: Application Download Times.

QoE Rating | td [s]
Yes        | 0.99
Maybe      | 2.60
No         | 4.87


Figure 4.6: Average download times for the QoE ratings. (User rating, 1-3, versus download time [s], 0-10, with the average download time per rating plotted ± one standard deviation.)

The first column of Table 4.6 names the regression, followed by the derived equation for it, while the third and fourth columns show the correlation coefficient (r) and the residual sum of squares (rss), respectively.

A negative correlation is found in all four derived relationships, showing that QoE ratings and download times vary together in opposite directions. Furthermore, of all these regressions, the exponential regression is the strongest fit between QoE ratings and download times, with a coefficient of determination of about 99.6%, indicating the variance the two variables have in common; it also yields the lowest residual sum of squares. Figure 4.7 plots the different regressions based on the equations derived in Table 4.6.

Table 4.6: Relationship between QoE ratings and download times.

Regression  | Equation                | r             | rss
Linear      | y = -0.51x + 3.44       | -9.95 × 10^-1 | 1.94 × 10^-2
Logarithmic | y = -1.24 log(x) + 3.05 | -9.92 × 10^-1 | 2.88 × 10^-2
Exponential | y = 4.06 exp(-0.28x)    | -9.98 × 10^-1 | 5.94 × 10^-3
Power       | y = 3.20 x^(-0.67)      | -9.63 × 10^-1 | 1.53 × 10^-1
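The fits in Table 4.6 can be reproduced from the three data points of Table 4.5 with ordinary least squares on suitably transformed variables. The following sketch (in Python, rather than the thesis's Perl/Matlab tooling) shows one way to do it:

```python
# Fit the four model families of Table 4.6 to the averages of Table 4.5.
import numpy as np

td = np.array([0.99, 2.60, 4.87])  # average download time [s] per rating
qoe = np.array([3.0, 2.0, 1.0])    # YES=3, MAYBE=2, NO=1 (Table 4.2)

fits = {
    "linear":      np.polyfit(td, qoe, 1),                 # y = a*x + b
    "logarithmic": np.polyfit(np.log(td), qoe, 1),         # y = a*ln(x) + b
    "exponential": np.polyfit(td, np.log(qoe), 1),         # ln y = a*x + ln b
    "power":       np.polyfit(np.log(td), np.log(qoe), 1), # ln y = a*ln(x) + ln b
}
for name, (a, b) in fits.items():
    print(f"{name:12s} a = {a:+.2f}, b = {b:+.2f}")
# The exponential fit gives a ≈ -0.28 and exp(b) ≈ 4.06, i.e.
# y = 4.06 * exp(-0.28 * x), matching the equation reported in Table 4.6.
```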


Figure 4.7: Average download times for the QoE ratings. (User rating, 1-3, versus download time [s], 0-5, together with the fitted power, logarithmic, exponential and linear regression curves.)

Hence we can conclude that there is an exponential relationship between application download times and QoE ratings, with a strong negative correlation between the two: low application download times result in high QoE ratings.


Chapter 5

Conclusions and Future Work

In this thesis work we have analyzed the effect of application download times on user QoE ratings, and a mapping has been carried out between these two parameters. An experimental testbed was developed and users were tested on it in order to carry out the user QoE evaluation. The analysis shows that pages with higher download times (8 seconds and 6 seconds) were easily detected by the users, resulting in low QoE ratings with a small standard deviation among these ratings. In addition, it was found that during the browsing sessions, previously experienced download times on the web pages influenced the users' present ratings: as shown in Sections 4.1 and 4.2, after a series of high download times, users rated the same download time higher than the first time they encountered it during the same experiment. Furthermore, differences in user perception of download times were found; Section 4.3 shows how one user would find a certain download time satisfactory while another user would reject it.

One of the important aspects of this thesis work was to carry out a mapping between user QoE and application download times. After analyzing the data logs and forming graphical relationships, it was concluded that a strong exponential relationship exists between user QoE and application download times: the two parameters vary together in opposite directions, and low application download times result in high QoE ratings. Important guidelines for ISPs and telecom operators can be put forward from the results presented in this thesis work. It is concluded that an end-to-end service with a download time above 3 seconds would leave the user in a state of frustration and may result in abandonment of the service. Thus, in order to fulfill their users' requirements, ISPs and telecom operators have to make sure that their end-to-end services do not exhibit download times of more than 3 seconds.


5.1 Future work

Correlating user QoE with application perceived performance is a challenging task. To fully address such mappings, one of the foremost issues that should be addressed in future research is the design and implementation of an efficient questionnaire for the users. Such mappings should also involve factors such as the users' experience with the application and their expectations of it. Furthermore, users should be tested in larger groups, differentiated with respect to geographical region, professional background, gender and age.

This analysis was based on the download time of a web application. Future work can consider other important parameters, such as the size and content of a web page, and how they affect the user experience. As pointed out in this thesis work, factors other than the size of a web page can affect the download time; this matter needs further investigation. The behavior of the network shaper NetEm was also not stable during the experiment, and high standard deviations were observed for delays of higher intensity, which also merits further investigation. In addition, in our experiment the web pages comprised single objects; in future experiments, users can be tested with multi-object web pages to see how these affect download times and user perception.

As this thesis work is limited to the domain of web applications, future studies can evaluate user experience in video streaming applications, which are very popular with users today. Such studies would help not only to evaluate user satisfaction criteria but also to provide efficient guidelines for commercial service providers.


References

[1] ITU-T G.1030: Estimating end-to-end in IP networks for data applications.

[2] Nokia White Paper: QoE of mobile services. Can it be mea- sured and improved. [Online; Verified June, 2009] Available:

www.nokia.com/NOKIA_COM_1/About_Nokia/Press/White_Papers/pdf_

files/whitepaper_qoe_net.pdf

[3] Hassenzahl, M., Tractinsky, N. (2006), User Experience a Research Agenda.

Behavior and Information Technology, Vol. 25, No. 2, March-April 2006, pp.

91-97.

[4] Bouch, A., Kuchinsky, and A., Bhatti, N. (2000), Quality is in the eye of the beholder: Meeting users requirements for internet quality of service. Pro- ceedings of the ACM CHI 2000 conference on human factors in computing systems. Hague, Netherlands.

[5] J. Nielsen (1997), The Need for Speed. [Online; Verified July 2009] Available:

http://www.useit.com/alertbox/9703a.html

[6] Georgia Institute of Technology, Atlanta US. GVW’S 10th WWW User Sur- vey 1998. [Online; Verified July, 2009] Available: http://www.cc.gatech.

edu/gvu/user_surveys/survey-1998-10/

[7] Jupiter Research (2006), RETAIL WEB SITE PERFORMANCE; Con- sumer Reaction to a Poor Online Shopping Experience. [Online; Veri- fied July, 2009] Avaliable: http://www.akamai.com/dl/reports/Site_

Abandonment_Final_Report.pdf

[8] PeerApp White Paper (2008) : Why QoE is important to service providers.

[9] Astellia Benchmarks Handsets Based on Users QoE (2008) [Online; Verified June, 2009] Available: http://www.ossnewsreview.com/telecom-oss/

astellia-benchmarks-handsets-based-on-users-qoe/

[10] QoS: An Important Differentiator for Mobile Operators (2006). [On- line; Verified June, 2009] Available: http://www1.alcatel-lucent.com/

serviceproviders/news/QoS.htm

[11] Accenture 2008 Customer Satisfaction Survey: High Performance in the Age of Customer Centricity. [Online; Verified June, 2009]

Available: http://www.accenture.com/Global/Consulting/Customer_

Relationship_Mgmt/R_and_I/Accenture2008Survey.htm

(44)

REFERENCES

[12] George A. Rovithakis, Argyris D. Matamis, Michael Zervakis (2000): Con- trolling Qos at the Application Level for Multimedia Applications using Ar- tificial Neural Networks: Experimental Results.

[13] Andreas Vogel, Brigitte Kerherve, Gregor von Bochmann, Jan Gecsei (1995):

Distributed Multimedia and QoS: A survey.

[14] CISCO Internetworking Technology Handbook: Chapter 49 QoS Network- ing. [Online; Verified June, 2009] Available: http://www.cisco.com/en/

US/docs/internetworking/technology/handbook/QoS.html

[15] ITU-T X.902: Information technology Open distributed processing Refer- ence Model

[16] QoE Definition. [Online; Verified June, 2009] Available: http://www.pcmag.com/encyclopedia_term/0,2542,t=QoE&i=57607,00.asp

[17] Lopez, D., Gonzalez, F., Bellido, L., Alonso, A. (2006): Adaptive multimedia streaming over IP based on customer-oriented metrics. International Symposium on Computer Networks.

[18] Kalevi Kilkki (2007), Next Generation Internet and Quality of Experience. [Online; Verified June, 2009] Available: kilkki.net/files/50ajatelmaa.ajatukset.fi/.../kilkki_santander_v1.0.ppt

[19] ITU-T G.1010: End-user multimedia QoS categories.

[20] Effie L-C. Law, Virpi Roto, Marc Hassenzahl, Arnold P.O.S. Vermeeren and Joke Kort (2009), Understanding, Scoping and Defining User eXperience: A Survey Approach. Proceedings of CHI 2009, April 4-9, 2009, Boston, MA, USA.

[21] ISO DIS 9241-210 (2008): Ergonomics of human-system interaction - Part 210: Human-centered design for interactive systems. ISO, Switzerland.

[22] Forlizzi, J., Ford, S. (2000), The Building Blocks of Experience: An Early Framework for Interaction Designers. Proceedings of Designing Interactive Systems 2000. New York City, USA.

[23] Mäkelä, A., Fulton Suri, J. (2001), Supporting Users' Creativity: Design to Induce Pleasurable Experiences. Proceedings of the International Conference on Affective Human Factors Design, pp. 387-394.

[24] Arhippainen, L., Tähti, M. (2003), Empirical Evaluation of User Experience in Two Adaptive Mobile Application Prototypes. Proceedings of the 2nd International Conference on Mobile and Ubiquitous Multimedia, Norrköping, Sweden.

[25] Virpi Roto (2006), User Experience Building Blocks. In conjunction with the NordiCHI '06 conference.

[26] J. Ramsay, A. Barbesi and J. Preece (1998), A psychological investigation of long retrieval times on the World Wide Web. Interacting with Computers.

[27] J. Nielsen (2000), Designing Web Usability. ISBN 1-56205-810-X.


[28] H. Shubin and M. Meehan (1997), Navigation in web applications. ACM Interactions 4, pp. 13-17.

[29] B. Krishnamurthy and C. Wills (2000), Analysing factors that influence end-to-end web performance. Computer Networks Journal 33, pp. 17-32.

[30] N. Bhatti, A. Bouch and A. Kuchinsky (2000), Integrating user-perceived quality into web server design. Computer Networks Journal 33, pp. 1-16.

[31] Myers, B. A. (1985), The Importance of Percent-Done Progress Indicators for Computer-Human Interfaces. Proceedings of CHI '85, San Francisco, CA, April 1985.

[32] S. Khirman and P. Henriksen (2002), Relationship between quality of service and quality of experience for public Internet service. The 3rd Workshop on Passive and Active Measurement. Fort Collins, Colorado, USA.

[33] Geoffrey S. Hubona and Elizabeth Kennick (1996), The Influence of External Variables on Information Technology Usage Behavior. Proceedings of the 29th Annual Hawaii International Conference on System Sciences.

[34] Greene, S.L., Gomez, L.M., and Devlin, S.J. (1986), A cognitive analysis of database query production. Proceedings of the Human Factors Society, pp. 9-13.

[35] Davis, L.D. and Davis, F.D. (1990), The effect of training techniques and personal characteristics on training end users of information systems. Journal of Management Information Systems, 7(2), pp. 93-110.

[36] Egan, D.E., and Gomez, L.M. (1985), Assaying, isolating and accommodating individual differences in learning a complex skill. Individual Differences in Cognition, Vol. 2, Ed. R. Dillon. New York: Academic Press.

[37] IBBT i-City Living Lab. [Online; Verified June 2009] Available: http://www.openlivinglabs.eu/pdfs/ibbt-icity.pdf

[38] Ron Kohavi, Roger Longbotham, Dan Sommerfield and Randal M. Henne (2008), Controlled Experiments on the Web: Survey and Practical Guide. Springer Science & Business Media, LLC 2008.

[39] Andrew King (2008), Website Optimization. O'Reilly Media, ISBN 9780596515089.

[40] Patrik Arlos, Markus Fiedler, and Arne A. Nilsson (2005), A distributed passive measurement infrastructure. In Proceedings of the Passive and Active Measurement Workshop, pages 215-227.

[41] Net:Netem. [Online; Verified July 2009] Available: http://www.linuxfoundation.org/en/Net:Netem

[42] DAG Network Monitoring Cards. [Online; Verified July 2009] Available: http://www.endace.com/dag-network-monitoring-cards.html

[43] Mozilla Firefox Homepage. [Online; Verified July 2009] Available: http://www.mozilla.com/en-US/firefox/firefox.html


[44] FasterFox Homepage. [Online; Verified October 2009] Available: http://fasterfox.mozdev.org/

[45] Network Time Protocol Homepage. [Online; Verified October 2009] Available: http://www.ntp.org/


Appendix A

Test Bed Setup

A.1 Experiment Setup Topology

Figure A.1 shows the detailed topology of the experiment setup.


[Topology diagram: the 16 user clients (192.168.161.XXX) and the Experiment Controller, Traffic Shaper, Consumer and Measurement Point, interconnected through a Cisco switch, an HP ProCurve switch and a NetGear switch; the shaped path between the Traffic Shaper (10.0.0.1 / 10.0.1.1) and the Consumer (10.0.0.224 / 10.0.1.205) runs over the 10.0.0.x and 10.0.1.x subnets and is tapped on both sides by the Measurement Point.]

Figure A.1: Experiment Setup Topology.


Appendix B

Experiment Data; Individual User Behavior (Plots; Page wise)

This appendix shows the plots of page-wise download times and the corresponding ratings given by each individual user.
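
Each figure pairs two panels over a common x-axis: the download time per page on top and the user's rating per page below. As a minimal sketch, a plot with this layout could be produced as follows, assuming each user's results were exported as comma-separated (page number, download time in seconds, rating) triples; the file name and column order are hypothetical, not the actual log format of the testbed.

    # Sketch: reproduce the two-panel per-user plot from a hypothetical CSV log.
    import numpy as np
    import matplotlib.pyplot as plt

    data = np.loadtxt("user1.csv", delimiter=",")   # hypothetical log file
    pages, times, ratings = data[:, 0], data[:, 1], data[:, 2]

    fig, (ax_time, ax_rating) = plt.subplots(2, 1, sharex=True)
    ax_time.plot(pages, times, ".")
    ax_time.set_ylabel("Download Time [s]")
    ax_rating.plot(pages, ratings, ".")
    ax_rating.set_yticks([1, 2, 3])                 # three-level rating scale
    ax_rating.set_ylabel("User Rating")
    ax_rating.set_xlabel("Page Number")
    fig.suptitle("User 1")
    fig.savefig("user1.png")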


Figure B.1: User 1. [Plot: download time [s] and user rating versus page number.]


Figure B.2: User 2. [Plot: download time [s] and user rating versus page number.]


Figure B.3: User 3. [Plot: download time [s] and user rating versus page number.]


Figure B.4: User 4. [Plot: download time [s] and user rating versus page number.]


Figure B.5: User 5. [Plot: download time [s] and user rating versus page number.]


Figure B.6: User 6. [Plot: download time [s] and user rating versus page number.]


Figure B.7: User 7. [Plot: download time [s] and user rating versus page number.]


Figure B.8: User 8. [Plot: download time [s] and user rating versus page number.]


Figure B.9: User 9. [Plot: download time [s] and user rating versus page number.]
