Measurement of user-related performance problems of live video streaming in the user interface


Master Thesis
Electrical Engineering
Dec 2012

School of Computing
Blekinge Institute of Technology
371 79 Karlskrona

Measurement of user-related performance problems of live video streaming in the user interface

Anoop Vuppala

Lakshmi Narayana Sriram

School of Computing


This thesis is submitted to the School of Computing at Blekinge Institute of Technology in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering. The thesis is equivalent to 20 weeks of full-time studies.

Contact Information:

Authors:

Anoop Vuppala

E-mail: anoopvuppala@gmail.com

Lakshmi Narayana Sriram

E-mail: ssrinad@gmail.com

University advisor:
Dr. Markus Fiedler
School of Computing


ABSTRACT

Video streaming has gained public interest for video conferencing, telecasting, surfing and communicating. The video player plays a vital role in presenting the streamed video, which is of utmost importance for content providers, since it has a huge impact on user experience. Wireless networks are vulnerable to noise and interference and have bandwidth limitations. Due to the intrinsic vulnerability of the communication channel and the large number of variables involved, simulations alone are not enough to evaluate the performance of wireless networks; the performance of the video in the player must also be considered. When there are disturbances or capacity shortages in the network, a network disconnection or a sender failure, the video player stops. This research focuses on the performance of the video in the player while streaming over wireless networks. The contribution of this work is to instrument a video player so as to log the disturbances of a streaming video. The disturbances are gathered in terms of timestamps, which were recorded in log files by repeating several experiments on different videos. Furthermore, the packet count and the memory allocation and utilization were calculated using PHP scripts. These calculations are used to plot graphs as well as to analyze the performance of the video in the player. The major findings of this research are the emptying of the audio and video buffers, the rebuilding of video frames and the triggering of audio re-sampling, together with the player's response to these events while playing the video, analyzed for different types of videos.


Acknowledgements

First, we would like to express our heartfelt gratitude to Dr. Markus Fiedler for giving us such a great opportunity to be a part of his research. We are thankful to Selim Ickin for his precious time and valuable assistance during the thesis work. Their knowledge and experience helped to enrich the results.

We are thankful to Jean-Baptiste Kempf (site administrator of VLC) for his help in solving problems via the VLC forums. We are thankful to Anders Nelsson for arranging Dr. Markus Fiedler as our supervisor.

We are thankful to Microsoft for "Microsoft Visual Studio", which helped in understanding the source code, and finally very thankful to BTH for providing all the facilities necessary to complete this work.


CONTENTS

ABSTRACT
CONTENTS
1 INTRODUCTION
1.1 STREAMING
1.2 RELATED WORK
1.3 OUR CONTRIBUTION
1.4 RESEARCH QUESTIONS
1.5 RESEARCH METHODOLOGY
1.6 OUTLINE
2 BACKGROUND
2.1 VLC MEDIA PLAYER AND ITS FUNCTIONS
2.1.1 Functions
2.1.2 Works performed by VLC player
2.2 VLC PLAYER MODULES
3 METHODOLOGY
3.1 EXPERIMENTATION
3.2 VLC MEDIA PLAYER INSTALLATION
3.2.1 Configure
3.2.2 Compile
3.3 SOURCE CODE MODIFICATIONS
3.3.1 Number of received RTP packets
3.3.2 Audio buffer allocation and utilization
3.3.3 Buffer utilization for video
3.3.4 Processing of RTP packet
3.3.5 Main loop execution
3.4 VALIDATION
4 RESULTS
4.1 PERFORMANCE OF VIDEO 1
4.1.1 Traffic flow during streaming
4.1.2 Audio buffer allocation and utilization
4.1.3 Video buffer allocation and utilization
4.1.4 Number of RTP packets processed
4.1.5 Working of VLC player
4.2 PERFORMANCE OF VIDEO 2
4.2.1 Traffic flow during streaming
4.2.2 Audio buffer allocation and utilization
4.2.3 Video buffer allocation and utilization
4.2.4 Number of RTP packets processed
4.2.5 Working of VLC player
4.3 PERFORMANCE OF 1080 PIXELS VIDEO
4.3.1 Traffic flow during streaming
4.3.2 Audio buffer allocation and utilization
4.3.3 Video buffer allocation and utilization
4.3.4 Number of RTP packets processed
4.3.5 Working of VLC player
5 DISCUSSION
7 APPENDIX
7.1 GUI STATISTICS
7.2 RECEIVING RTP PACKETS
7.3 BUFFER UTILIZATION FOR AUDIO
7.4 BUFFER UTILIZATION FOR VIDEO
7.5 PROCESSING OF RTP PACKETS
7.6 VLC RUNTIME
7.6.1 PHP Script
7.7 OUTPUT
7.7.1 gets a datagram from network(rtp).txt


List of Figures

4.1.1 Number of RTP packets received
4.1.2 Audio buffer utilization
4.1.3 Video buffer utilization
4.1.4 Number of RTP packets processed
4.1.5 Execution of main loop
4.2.1 Number of RTP packets received
4.2.2 Audio buffer utilization
4.2.3 Video buffer utilization
4.2.4 Number of RTP packets processed
4.2.5 Execution of main loop
4.3.1 Number of RTP packets received
4.3.2 Audio buffer utilization
4.3.3 Video buffer utilization
4.3.4 Number of RTP packets processed
4.3.5 Execution of main loop


1 INTRODUCTION

The Internet has become a popular way of distributing media and is currently used in digital broadcasting, especially for streaming. Rapidly advancing technology depends hugely on Internet services for surfing, chatting, communicating, watching and many other activities. People now watch television over the Internet, so-called IPTV, owing to the availability of inexpensive Internet services. As technology advances, people are less interested in watching downloaded videos; this is where live streaming becomes popular and advantageous over downloaded videos. Live streaming has the ability to reach multiple people with the same content at the same time; it refers to the sending of audio and video signals in real time over the Internet. Live streaming is mainly used for broadcasting news, connecting friends and relatives in online chat rooms, conducting online face-to-face business conferences or meetings, and selling products and services. Meanwhile, the number of households with high-speed Internet connections increased from 194 million in 2005 to 413 million in 2010 [18].

To date, much research has been carried out mainly on network types, topologies and amounts of traffic. The performance of the player can be estimated based on three aspects: first, selecting a suitable codec; second, receiving the data; and finally, playing the data. All the memory available at the receiver serves as a buffer for the player. Among such services, live streaming is gaining enormous popularity and is spiraling up consistently. The VLC player is used in this project for gathering the information needed to analyze the video in the user interface. The motivation behind choosing the VLC player is that it is the fourth most popular player in the world [28], it is an open-source video player, and streaming can be initiated easily using its graphical user interface (GUI). It was developed by students of Ecole Centrale Paris [19].

1.1 Streaming


During streaming, the video is delivered to the user as a continuous stream of data and played as it arrives. The user needs a player that uncompresses the received data and sends the video data to the display and the audio to the speakers. The player may be an integral part of a browser or downloaded from a software vendor. The streaming procedure saves a lot of time and memory compared to downloading the complete video.

In general, video is streamed either from prerecorded files or from live feeds distributed as a live broadcast. In a broadcast, the video signal is compressed and transmitted by a special web server, which transmits the same data to multiple users at the same time (multicast).

1.2 Related work

There is an enormous increase in multimedia applications over the Internet; video has become the most prominent application transferring its data via the Internet. On the Internet, video is transmitted in two ways: video streaming and video on demand (VoD) [20]. In this study, we use video streaming to measure the performance of the player. Video on demand, by contrast, is a procedure in which the video is downloaded to the computer and then played, which requires a lot of space and time on the user's system. The performance of video streaming depends upon the state of the network and the encoding configuration parameters of the video stream, such as the type of video content, the type of encoding and the transmission type. Each video frame is transmitted in a burst of packets that is potentially queued at the access point. The "burstiness" of the video is due to the frame-based nature of the encoded video and exhibits saw-tooth-like delay [19]. Video streaming can use several protocols to transmit multimedia data over the Internet, of which UDP, RTP, RTSP and TCP are commonly used. Each protocol has its own advantages and disadvantages. Among these protocols we use RTP, which operates at the application layer [16]. In RTP, video streaming is done by considering the relevance of a frame, the state of the channel and the bit-rate constraints of the protection bitstream. The most suitable frame is selected and protected by forward-error-protection techniques [17]. RTP encapsulation benefits from working at frame level without further processing of the RTP header, which makes the system straightforward and fast, perfectly suitable for inclusion in video streaming servers.


data to receivers. The above procedure adopts a store-carry-and-forward approach to transmit video data in a partitioned network environment [3]. Efficient topology organization, enhanced UDP transmission and intelligent content distribution help to achieve low operation cost and high user quality [4]. Designers' effort provides a technique to evaluate a human interface before the software is developed, and feedback is provided for usability tests conducted using the effort-based evaluation technique [5]. An API was developed for the YouTube player to detect re-buffering events during playback of HD-720p and HD-1080p videos [15]. Several methods have been proposed to obtain subjective indices of the QoS received by users from objective data measures. There is a relation between unicast and multicast streams that is accounted for in the design and capacity planning of VoD and live video streaming [6]. To evaluate the video quality of television services, the ITU-T has developed recommendations that have been in use for 20 years. When a heavy packet loss ratio (PLR) is experienced, the video quality worsens significantly; on the contrary, if low PLR values are measured, the final quality falls into an acceptable range [8]. The increase of effective neighbors and uploading traffic severely affects the playback performance of live video streaming in wireless networks [9]. For real-time video streaming applications, not only PSNR but also end-to-end delay and jitter are very important quality metrics, since high delay and jitter result in longer buffering times, which degrades user QoE [14]. Packet loss is one of the most important QoS parameters, as it can directly and significantly impact the user experience: a single packet loss in an I-frame or P-frame can significantly degrade the perceptual quality. [18] states that the occurrence of impairment events can make the short-term video quality unacceptable.

1.3 Our contribution

This work started by choosing an open-source video player, which was then developed further according to the requirements of this project. The player is used for streaming and gathers the required information while streaming. The behavior of the play-out can be observed after collecting the logs. These statistics help in measuring the user-related problems of live video streaming at the user interface under different conditions, such as a weak signal, network disconnection and server failure. The number of packets received per second, the buffer allocation for audio and video, the number of non-empty frames that occurred, and the number of processed RTP packets per second are also measured and logged.

1.4 Research Questions


2) What are the advantages of an instrumented video player?

3) How can timestamps related to the initiation of a video, re-buffering and loss within a video be measured?

4) What is the impact of the bit rate on the frequency of freezes in live video streaming?

1.5 Research methodology

The research methodology fulfills the research aim by observing the behavior of the video player at the user interface, thus producing numerical data with which to analyze the performance of the video in the user interface.

This study is carried out in the following way.

1) Video streaming is observed. Problems are registered, described, classified and assigned to the parts of the player (in terms of its code) and to the parameters that affect the quality of the video while streaming.

2) The full potential of instrumenting the state of the player only becomes evident after the code has been studied; a thorough study of the player's implementation reveals such opportunities.

3) An open-source video player is chosen, which helps in understanding the source code and its functioning. The player is developed further so that it collects timestamps of different parameters during video streaming and logs the corresponding events.

4) Timestamps are recorded during live streaming, and the parameters are subsequently analyzed for different videos.

1.6 Outline

The document is organized as follows. Chapter 2 gives a short introduction to the VLC player's functionality, parameters and modules. Chapter 3 discusses the implementation and development of the source code for obtaining the statistics, as well as the compilation and use of the VLC player for streaming. The analysis is carried out in chapter 4, whereas chapter 5 presents conclusions and discusses future work.


2 BACKGROUND

2.1 VLC media player and its functions

The VideoLAN Client (VLC) media player is an open-source media player released under the GNU General Public License and developed by the VideoLAN project. VLC is free software and can be used without legal restriction. VLC started in 1996 as an academic project by students of Ecole Centrale Paris, whose main purpose was to stream videos between clients and servers across a campus network. It was developed mainly for research purposes rather than entertainment [25]. It can run or be installed directly from a flash drive and can be extended using the Lua scripting language. The GUI is based on Qt4 for Windows and Linux; Qt4 is classified as a widget toolkit and is the default, plain graphical interface to VLC, made using the Qt library to streamline the creation of applications for Windows, Mac OS X and Linux. Today, the VideoLAN project is developed by contributors all around the world and coordinated by the VideoLAN non-profit organization. VLC 1.1.0 has more than 380 modules.

2.1.1 Functions

VLC is a packet-based media player that has the ability to play damaged, incomplete and unfinished video content. It supports several file formats, including AVI, ASF, AAC, OGG, AIFF, MPEG, FFMPEG and REAL. VLC has the advantage of launching one or more instances, i.e. it can play several audio or video files at the same time. It has a modular design, allowing modules for new file formats, codecs or streaming methods to be included. The VLC player can show media and codec information, from which one can learn the frame rate, bit rate, total number of frames in the file, etc. One can also take snapshots of a video in VLC.


to accommodate several file formats, trans-coding techniques, and different protocols.

2.1.2 Works performed by VLC player:

The VLC media player can stream video using several protocols, such as UDP, RTP, HTTP and DCCP. It is capable of playing Moving Picture Experts Group (MPEG) Transport Streams (TS), and these streams were used in this work to gather the required information from the player. It has filters that can distort, rotate, split, de-interlace and mirror videos. VLC can stream live from cable boxes to a computer using a FireWire connection, to a monitor or an HDTV. In this thesis we use the MPEG-4 codec with the RTP protocol. MPEG-4 is a method of compressing audio and visual data, designating the audio and video coding formats agreed upon by the ISO/IEC Moving Picture Experts Group. The codec is efficient without resorting to frame skips or compromising on quality [26]. The advantages of H.264 are error resiliency and adaptability to various networks; its disadvantage is increased complexity and computational load [27].

2.2 VLC player modules

VLC uses a modular system, which makes it easy to add new functions and formats. The following gives an introduction to some VLC modules. There are more than 380 modules in VLC 1.1.0; they play a very important role in streaming, converting, sharing, etc.

Access modules: VLC can be used for streaming; it provides access to many protocols, and for every protocol a module with that name is defined. VLC reads a stream from different sources using these modules, for example HTTP, UDP, RTP, FTP and RTSP. The CDDA module is used for audio CD input. The DVB module allows reading DVB-T, DVB-S and DVB-C (satellite, terrestrial or cable) cards. The DVD input module has been superseded by DVDPLAY. The VCD module is used for reading Video CD input.

Demuxers: In video streaming, both audio and video are received in a container format. The demuxers extract the video and audio content from a stream and pass them to the decoders. These modules allow the player to read different kinds of formats according to their type, e.g. AVI, ASF, AAC, OGG, AIFF, AU, WAV, MP4, MKV, FFMPEG, REAL.

Video outputs: The video output modules allow VLC to display video on the screen. Based on the type of system, VLC chooses the most suitable video module; a specific module can be forced from the command line using "--vout modulename". The directx module displays video using the Microsoft DirectX libraries and is recommended for the Win32 port. The x11 module is the basic video output module for X11, while xvideo targets GNU/Linux operating systems and requires an xvideo-compliant graphics card.


Video filters: These modules can modify the rendered video (adjusting, cropping, etc.). They are available from the command-line controls and not from the GUI. The adjust module allows altering the image contrast, saturation and brightness. Interlaced frames display correctly on CRT monitors but poorly on modern displays; the deinterlace filter module converts interlaced frames into progressive ones and is useful with streams coming from digital satellite channels. The crop module allows cropping part of the image. The transform module allows rotating the image by 90, 180 or 270 degrees, as well as horizontal flip (hflip) and vertical flip (vflip). The distort filter module adds distortion to the video, and the invert filter module reverses all the colors of the video.

Audio outputs: These modules choose the type of output for the audio system. VLC picks the most appropriate module, or a specific module can be forced from the command line using "--aout modulename". The OSS module implements the Open Sound System, which is used in the Linux environment. The wave output module is used for the Windows port; other modules include CORE AUDIO, DIRECT, OSS, ALSA, ESD and ARTS.

Interface modules: These are either graphical or control interfaces. The dummy interface is used when no interface is needed (i.e. to display video without any control keys). The gesture module allows the user to control VLC using mouse gestures. The HTTP interface module allows the user to control VLC remotely via a web browser. RC is a remote-control interface module with which the user can control VLC using command-line controls, for example play and stop. The skins module allows the user to create themes for the VLC media player using XML files.


3 METHODOLOGY

This chapter explains the implementation, the procedure for compiling the VLC player, and the modifications made to the source code to gather the information required for the analysis.

3.1 Experimentation

Figure 3.1 Experimental Set-up

Figure 3.1 illustrates the setup for the experiments. The sender and receiver are located in the same network and are connected to the Internet through Wi-Fi. The sender runs the Windows operating system and the receiver runs Ubuntu 10.10, i.e. the receiver is in a Linux environment. The compilation of VLC is explained in the following steps.

3.2 VLC media player installation

The VLC player is compiled on Ubuntu 10.10. The following steps are involved in the compilation of the VLC player. VLC uses programming tools such as GCC and autoconf and friends to assist with the source code, ensuring portability to different Unix systems during compilation.


bootstrapping. If the bootstrapping is successful, the VLC compilation can be done on the Linux operating system.

3.2.1 Configure

In order to configure the VLC player, the third-party libraries have to be added. These libraries provide additional functionality such as playing different file formats, streaming and opening a network stream. If the third-party libraries are not added, the application will still configure, but the resulting build essentially cannot perform any useful action. The third-party libraries consist of codecs, demuxers and muxers, access, images, tools and others. In the case of Ubuntu there is a single-line installation command, as follows.

$ sudo apt-get -y install libvorbis-dev libogg-dev libtheora-dev speex libspeex-dev flac libflac-dev \

x264 libx264-dev a52-0.7.4 liba52-0.7.4-dev mpeg2dec libmpeg2-4-dev faad libfaad-dev faac libfaac-dev \

lame libmp3lame-dev ffmpeg libavdevice-dev libmad0 libmad0-dev dirac libdirac-dev liboil-dev libschroedinger-dev \

libdca-dev twolame libtwolame-dev libmpcdec-dev libvorbisidec1 libvorbisidec-dev libass-dev libass4 libebml2 \

libebml-dev libmatroska2 libmatroska-dev libdvbpsi6 libdvbpsi-dev libmodplug1 libmodplug-dev libshout3 libshout3-dev \

libdvdread4 libdvdnav4 libdvdnav-dev livemedia-utils liblivemedia-dev libcddb2 libcddb2-dev libcdio10 libcdio-dev \

libcdio-utils vcdimager libvcdinfo0 libvcdinfo-dev libgpg-error0 libgpg-error-dev libgcrypt11 libgcrypt11-dev \

gnutls-bin libgnutls26 libgnutls-dev libdap10 libdap-bin libdap-dev libxml2 libxml2-dev libpng12-0 libpng12-dev \

libjpeg8 libtiff4 libsdl1.2-dev libsdl-image1.2 libsdl-image1.2-dev libc-bin gettext libfreetype6 libfreetype6-dev \

libfribidi-dev libfribidi0 zlib1g zlib1g-dev libtag1-dev libcaca0 libcaca-dev caca-utils libqt4-core libqt4-dev \

libportaudio2 libportaudio-dev libupnp-dev libupnp4 libupnp3 libexpat1 libexpat1-dev yasm libxcb-xv0 libxcb-xv0-dev \

libx11-xcb1 libx11-xcb-dev


configured. PKG-CONFIG tells the VLC configuration how FFMPEG's libraries were compiled. The final configuration is done by specifying a prefix folder into which the player will be installed.

Ex 3.2:

mkdir build && cd build && \
../configure --prefix=/usr \
  --enable-snapshot --enable-debug \
  --enable-dbus-control --enable-musicbrainz \
  --enable-shared-libvlc --enable-mozilla \
  --enable-lirc \
  --enable-live555 --with-live555-tree=../extras/live \
  --enable-x264 --with-x264-tree=../extras/x264-trunk \
  --enable-shout --enable-taglib \
  --enable-v4l --enable-cddax \
  --enable-dvb --enable-vcdx \
  --enable-realrtsp --enable-xvmc \
  --enable-svg --enable-dvdread \
  --enable-dc1394 --enable-dv \
  --enable-theora --enable-faad \
  --enable-twolame --enable-real \
  --enable-flac --enable-tremor \
  --with-ffmpeg-mp3lame --with-ffmpeg-faac \
  --enable-quicktime --enable-dirac \
  --enable-skins2 --enable-qt4 \
  --enable-ncurses \
  --enable-aa --enable-caca \
  --enable-esd --enable-portaudio \
  --enable-jack --enable-xosd \
  --enable-galaktos --enable-goom \
  --enable-ggi \
  --disable-cddax --disable-vcdx

3.2.2 Compile

The compilation is done using three commands in the root directory:

 make
 make install
 make clean


The "make" command builds all the files that are required for installation. The "make install" command installs the VLC player into the directory defined during the final configuration (see Ex. 3.2). The "make clean" command removes the unnecessary files generated in the source code directory during compilation of the VLC player.

Though the VLC player can also be compiled in a Windows environment [17], it is complicated, as it requires installing many external tools such as Cygwin, MSYS and MinGW (build environments for compiling Unix projects on Microsoft Windows).

Source code development:

The following are the necessary modifications made before compiling the VLC player to gather the required information, with reference to appendices 7.1, 7.2, 7.3, 7.4, 7.5 and 7.6 respectively. The source code is modified as stated in the appendix.

 Statistics of the video player (7.1)
 Number of received RTP packets (7.2)
 Audio buffer allocation and utilization (7.3)
 Video buffer allocation and utilization (7.4)
 Number of missing frames
 Number of processed RTP packets (7.5)
 VLC run time (7.6)

3.3 Source code modifications

3.3.1 Number of Received RTP packets

The VLC player is opened for streaming using the RTP protocol. When it receives an RTP packet, a pre-defined file pointer writes the timestamp and size of the packet to the file "gets a datagram from network (rtp).txt", which helps in measuring the jitter in the network. For an RTP session, the starting and ending times of the session are recorded to a text file (.txt). When the RTP session is closed, the buffer is cleaned and the timestamp of the closing event is recorded; refer to appendix 7.2.
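The actual edits live in VLC's RTP access code (appendix 7.2, not reproduced here). As a rough sketch of the logging step only, with hypothetical function and file names, the instrumentation can be pictured like this:

```c
/* Hypothetical sketch of the per-packet logging described above; this is
 * not VLC's actual code. Each received RTP datagram is appended to a log
 * file as one "seconds.microseconds size" line. */
#include <assert.h>
#include <stdio.h>
#include <time.h>

/* Appends one "timestamp size" line; returns 0 on success, -1 on error. */
static int log_rtp_receive(const char *log_path, size_t bytes)
{
    FILE *f = fopen(log_path, "a");
    if (f == NULL)
        return -1;

    struct timespec ts;
    timespec_get(&ts, TIME_UTC);             /* C11 wall-clock timestamp */
    fprintf(f, "%lld.%06ld %zu\n",
            (long long)ts.tv_sec, ts.tv_nsec / 1000, bytes);

    fclose(f);
    return 0;
}
```

In the instrumented player, a call of this kind would sit where the datagram is read from the socket; the PHP scripts described later parse the resulting file.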

3.3.2 Audio buffer allocation and utilization


When no audio content is left in the buffer, the audio is starved. To gather the timestamps of these events, modifications were made to the source code in dec.c, located at src/audio_output; refer to appendix 7.3.
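As a toy illustration of this bookkeeping (the names and structure are ours, not dec.c's): the demuxer credits fixed-size blocks of audio to the buffer, the decoder debits them, and a debit against an empty buffer is the kind of starvation event that the instrumented player timestamps.

```c
/* Illustrative model only, not VLC source. Tracks buffered audio bits and
 * counts starvation events (the decoder finding the buffer empty). */
#include <assert.h>

#define AOUT_BLOCK_BITS 1053L  /* block size reported later in the thesis */

typedef struct {
    long bits_buffered;  /* audio bits currently available to the decoder */
    long starvations;    /* times the decoder found the buffer empty */
} audio_buf;

/* Demuxer side: credit `blocks` fixed-size blocks to the buffer. */
static void audio_buf_fill(audio_buf *b, int blocks)
{
    b->bits_buffered += blocks * AOUT_BLOCK_BITS;
}

/* Decoder side: returns 1 if a block was consumed; 0 on starvation,
 * where the real instrumentation would log a timestamp. */
static int audio_buf_consume(audio_buf *b)
{
    if (b->bits_buffered < AOUT_BLOCK_BITS) {
        b->starvations++;
        return 0;
    }
    b->bits_buffered -= AOUT_BLOCK_BITS;
    return 1;
}
```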

3.3.3 Buffer utilization for video

In a similar way, the video content is held in memory allocated by the demuxer, which serves as a buffer. The video decoder consumes the video content and drives the video to the display. When the frame buffer becomes empty, the video player has nothing left to display and the video decoder keeps presenting the last video frame available in the buffer until a new video frame is received; this is freezing, and the decoder writes the timestamps of these events. If a video frame is delayed by more than a certain amount of time (approx. 1 s), the receiver discards it and the timestamp of the discarded frame is recorded immediately. These modifications were done in video_output.c, located in src/video_output; see appendix 7.4.
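The discard rule reduces to a single comparison. A hedged sketch follows; the threshold constant and names are illustrative, and VLC's actual logic in video_output.c is more involved:

```c
/* Illustrative late-frame test, not VLC's actual code. A frame whose
 * presentation timestamp lags the playback clock by more than roughly
 * one second is discarded (and its timestamp logged) instead of shown. */
#include <assert.h>
#include <stdint.h>

#define LATE_THRESHOLD_US INT64_C(1000000)  /* approx. 1 s, as in the text */

/* Returns 1 if the frame should be dropped, 0 if it can still be shown. */
static int frame_is_late(int64_t frame_pts_us, int64_t now_us)
{
    return (now_us - frame_pts_us) > LATE_THRESHOLD_US;
}
```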

3.3.4 Processing of RTP packet

Every received RTP packet is processed. Processing an RTP packet means extracting the data from the received packet and allocating memory: the demuxer extracts both the audio and the video content from the packet and allocates memory for each. The audio and video decoders use this allocated memory as buffers for playing audio and displaying video; refer to appendix 7.5 for the modifications.

3.3.5 Main loop execution

Whenever VLC is launched (for streaming, playing or converting), the main loop in src/input/input.c is executed. If the receiving player crashes due to network inconsistency, the main loop is terminated and the termination timestamp is recorded.

The timestamps were collected into log files for both UDP and RTP streams during live streaming. A PHP script is used for counting the number of received RTP packets, which is in the range of 100-200 packets per second for the 360p video, 75-320 for the 720p video and 250-1350 for the 1080p video. This counting is done based on the timestamps and helps in observing the jitter in the network.

Another PHP script was developed to find out at what times the player did not receive audio and video data during playback. This tells us exactly when the audio is starved. Using this data we can find the available audio and video buffer size left to play, i.e. when the streaming stops. The number of empty-frame timestamps is helpful for observing the video performance. When video data is missing, frame skipping appears; when the video data is unavailable or the buffer becomes empty, there is nothing left to play and the corresponding event is logged.


the audio and video for every second, and the occurrences of events are explained through the use of graphs. A PHP script for measuring VLC runtime timestamps was also developed. The VLC runtime is used to indicate the performance variation of the VLC player during playback; this helps us to know the behavioral variations of the player as seen by the user.

3.4 Validation

Figure 3.2 Experimental set-up for validation

The compiled VLC player has to be validated before using it for experiments. For this, the sender and receiver are connected to the Internet through Ethernet cables, as shown in fig 3.2. The working performance of the VLC player at the receiver is compared with that of the VLC player at the sender, which shows whether the developed VLC player is working properly or not.

Sender:

Fig 3.3 Sender image


the "stream" button. A new window named "stream output" opens; the destination and stream type are added and the "next" button in that window is clicked.

The destination settings were performed as follows. The protocol type for streaming is chosen; in this scenario RTP/MPEG Transport Stream is selected, and the "Display locally" option is ticked to watch the video at the sender. In the profile settings, Video - H.264 + AAC (TS) is chosen and the "add" button is clicked. Then the IP address of the destination (receiver) is added with the default port number, and finally the "stream" button is selected to start streaming.

Receiver:

On the receiving side, the user can watch the video by opening a network stream, located under "Media" in the GUI, using "rtp://@:5004". Here the caching memory, serving as a buffer, has to be set: a larger cache increases the delay between sender and receiver, whereas a lower value increases the number of skipped frames. After several experiments, the caching memory was set to 300 ms. This opens an RTP network stream. The video looks as shown in Figure 3.2, and the user cannot move the cursor (forward or backward) during playback: the video frame is at 1:09 minutes but the cursor still appears in its initial position, because the receiver cannot seek forward or backward in the video during streaming.


4 RESULTS

The performance analysis is carried out using three videos: 360 pixels (video 1), 720 pixels HD (video 2) and 1080 pixels (video 3). Three different videos in AVI format were used for streaming; using different videos helps the user to distinguish the observations in the GUI, which would be difficult if the same video were used. A one-minute interruption is introduced in each video to observe and analyze the behavior of the player during that session. The results were plotted from the log files obtained during streaming. Samples of the log files can be found in the appendix (7.7.1, 7.7.2, 7.7.3, 7.7.4, 7.7.5, 7.7.6, 7.7.7 and 7.7.8).

Type and duration of videos:

Video     Quality   Play time   Duration
Video 1   360p      03:32       04:32
Video 2   720p      03:59       04:59
Video 3   1080p     03:55       04:55

4.1 Performance of video 1

4.1.1 Traffic flow during streaming

The VLC media player is used for streaming video 1. The streaming is carried out using the RTP protocol. The following results were obtained during streaming.

Fig4.1.1 number of RTP packets received



When an RTP packet is received, a timestamp is recorded as shown in appendix 7.7.1. The results were taken while streaming a live video for 4 min 32 sec; the video file is in AVI format. To observe the performance difference of the player, the streaming is stopped for roughly one minute, between 2 min 3 sec and 3 min 1 sec for video 1, so no RTP packets were sent from source to destination during this time. From Fig 4.1.1, the maximum number of packets received during one second of streaming is 191 (at the 19th second) and the minimum is zero. During undisturbed streaming, the number of packets received per second varies from 92 to 170, i.e., jitter can be observed in the received packets, but no effect of this jitter is visible in the player's display. The average number of packets received is 129/s, the standard deviation is 22, and the coefficient of variation is 0.202.

4.1.2 Audio buffer allocation and utilization

Whenever an RTP packet is received, audio and video content are separated from the packet.

Fig4.1.2 audio buffer allocation and utilization

Demuxers extract the content from the packets and send it to the respective decoders. The audio decoder allocates 1053 bits at a time, and this is repeated 38-39 times per second to play the audio smoothly. In this experiment, when streaming is stopped for one minute, the buffer allocated for audio becomes empty, which can be observed in graph 4.1.2 from 0:02:03 to 0:03:00. The audio is starved at 0:03:03 and 0:03:04: it has received only 2 blocks (2 x 1053 bits) of audio content, which is not sufficient to play. At this point the decoder re-samples the buffer, which lets the audio device play audio for a few more seconds. When the decoder is starved it records the timestamp of that event (ref. appendix 7.7.8) and triggers re-sampling.
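The starvation and re-sampling behaviour just described can be summarized as a small decision rule. The following is an illustrative sketch: the function and its half-rate threshold are assumptions chosen for clarity, not VLC's actual code.

```c
#include <assert.h>
#include <stddef.h>

enum audio_action { PLAY_NORMALLY, TRIGGER_RESAMPLING, REPORT_STARVED };

/* Illustrative model: the decoder needs ~38 blocks of 1053 bits per
 * second. With no blocks it reports starvation (and logs a timestamp);
 * with far too few it triggers re-sampling, stretching the remaining
 * audio over more time. The half-rate threshold is an assumption. */
static enum audio_action audio_decision(size_t blocks_buffered,
                                        size_t blocks_needed_per_sec) {
    if (blocks_buffered == 0)
        return REPORT_STARVED;
    if (blocks_buffered < blocks_needed_per_sec / 2)
        return TRIGGER_RESAMPLING;
    return PLAY_NORMALLY;
}
```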

4.1.3 Video buffer allocation and utilization

Fig4.1.3 Video buffer allocation and utilization

As mentioned earlier, the video content is available in memory when a packet is received. The demuxer extracts the video data and allocates some memory, which acts as a frame buffer. The decoder drives the video content to the video device for display. In this experiment the video streaming is paused from 0:02:03 to 0:03:02. During this period the buffer becomes empty and the empty frames are reported to the log file named “skipped (video).txt”. The freezing is due to the last video frame available in the buffer being displayed repeatedly. During this time the memory usage is doubled by duplicating the last video frame in memory; the duplicated frame is used as the next frame.

The number of empty frames represents the emptiness of the buffer, but as soon as something arrives in the buffer the player resumes presenting the video, which can be observed in the graph. After 0:02:59, any further delayed frames that are received are skipped, since displaying them would affect the video. The number of skipped or delayed frames can grow when the network is bad or the sender is unable to send properly (ref. graph 4.1.3). In the graph, the first run of empty frames is due to the sender pausing the streaming (from 0:02:02 to 0:03:02), and the second run of empty frames is due to network disconnection (from 0:04:32).
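The freeze mechanism, re-presenting the last decoded frame while counting empty frames, can be sketched as follows. The frame type and interface are hypothetical, chosen only to mirror the behaviour described above:

```c
#include <assert.h>

/* Hypothetical frame: a small pixel payload stands in for a picture. */
typedef struct { unsigned char px[16]; } frame_t;

/* Return the frame to display. On an empty buffer the last shown frame
 * is re-presented (the "freeze") and the empty-frame counter, which
 * ends up in "skipped(video).txt", is incremented. */
static frame_t display_next(const frame_t *buffered, int have_new_frame,
                            frame_t *last_shown, int *empty_frames) {
    if (have_new_frame)
        *last_shown = *buffered;   /* normal play-out path */
    else
        (*empty_frames)++;         /* freeze: duplicate of last frame */
    return *last_shown;
}
```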



4.1.4 Number of RTP packets processed

Fig4.1.4 Number of RTP packets processed

Processing an RTP packet means extracting the content from the received packet, which is performed by the demuxers. The extracted content is fed into the audio and frame buffers, from which the audio and video decoders present audio to the speakers and video to the display. If no RTP packets are received, nothing is extracted by the demuxers: from 0:02:03 to 0:03:02 no packets arrive, so no processing of RTP packets takes place and the demuxers are idle, which can be observed in graph 4.1.4. The maximum number of RTP packets processed in one second is 191, at 0:00:17. In this streaming session the audio and video played smoothly irrespective of the fluctuation in received RTP packets.
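The processing step can be pictured as a router from packets to the two buffers. A sketch with hypothetical types; VLC's real demuxer is considerably more involved:

```c
#include <assert.h>
#include <stddef.h>

enum payload_kind { PAYLOAD_AUDIO, PAYLOAD_VIDEO };

struct buffers {
    size_t audio_bytes;   /* fill level of the audio buffer */
    size_t video_bytes;   /* fill level of the frame buffer */
    size_t processed;     /* "processed RTP packets" counter */
};

/* Demux one received packet: route its payload to the matching buffer.
 * With no incoming packets this function is simply never called, which
 * is why the processed count drops to zero during the outage. */
static void process_packet(struct buffers *b, enum payload_kind kind,
                           size_t payload_len) {
    if (kind == PAYLOAD_AUDIO)
        b->audio_bytes += payload_len;
    else
        b->video_bytes += payload_len;
    b->processed++;
}
```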

4.1.5 Status of VLC player

The following results help in testing the VLC player. The main loop is executed five times per second. Any disturbance or malfunctioning of the player can be observed in the graph as a deviation of the main-loop frequency from its nominal value of 5/s. Such disturbances are generally observed during streaming, playing, converting, etc. A variation can be observed at 0:03:03 and once again at 0:04:33. The scale on the Y-axis represents the number of main-loop executions per second.



Fig4.1.5 Execution of main loop in VLC

The results below are obtained when the main loop of the VLC player is executed. This loop runs five times per second and keeps running until the buffer becomes empty or EOF (end of file) is reached. The variations in the VLC runtime graph are due to the re-beginning of the streaming; only small variations are observed in the graph.
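The 5/s health check can be reproduced from the timestamps in “main loop (input.c).txt”: count the loop iterations that fall within each wall-clock second and flag any second whose count deviates from the nominal value. A sketch, with timestamps reduced to whole seconds:

```c
#include <assert.h>
#include <stddef.h>

/* Count how many logged iterations fall within the given second. */
static int count_in_second(const long *ts, size_t n, long second) {
    int c = 0;
    for (size_t i = 0; i < n; i++)
        if (ts[i] == second) c++;
    return c;
}

/* A second is flagged as disturbed when the loop did not run exactly
 * the nominal number of times (5/s for VLC's input main loop). */
static int is_disturbed(const long *ts, size_t n, long second, int nominal) {
    return count_in_second(ts, n, second) != nominal;
}
```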

4.2 Performance of video 2

4.2.1 Traffic flow during streaming

Fig4.2.1 number of RTP packets received

The results were taken while streaming the 720 pixel live video for 0:03:58. The maximum number of packets received is 312, at 0:04:31. The number of received packets varies from 60 to 287. The video is paused from 0:02:02 to 0:03:02 for performance analysis. The average number of packets received is 167/s and the standard deviation is 43, giving a coefficient of variation noticeably larger than for video 1.


4.2.2 Audio buffer allocation and utilization

Fig 4.2.2 Audio buffer utilization

Graph 4.2.2 shows that the audio performance of video 2 is roughly the same as that of video 1.

4.2.3 Video buffer allocation and utilization

Fig 4.2.3 video buffer utilization

From graph 4.2.3, the performance of video 2 in the player is the same as that of video 1. In both scenarios, when the buffer becomes empty the player displays the last frame available in the buffer until a new one arrives. The first set of empty frames is due to the video pause from 0:02:00 to 0:03:02 and the second set of empty frames is due to network disconnection. The scale on the Y-axis represents the number of blocks used by the video decoder; each block represents a video frame.



4.2.4 Number of RTP packets processed

Fig4.2.4 number of RTP packets processed

The processing of RTP packets takes place upon reception of each RTP packet. The maximum number of RTP packets processed is 363, at 0:00:15, for video 2, which is considerably larger than for video 1. The number of RTP packets processed per second varies from 75 to 287, also a large range compared with video 1. From these results we can say that as the quality of the video increases, more data is received by the video player and the variations grow.

4.2.5 Working of VLC player

The working of the video player can be observed in the graph below. Due to a small variation in the performance of the player at 0:03:56, the main loop is executed six times.


4.3 Performance of video 3 (1080 pixels)

4.3.1 Traffic flow during streaming

Fig 4.3.1 Number of RTP packets received

The results were taken while streaming an HD 1080 pixel video for 0:03:58. The maximum number of RTP packets received is 1331/s, at 0:01:12. The number of RTP packets received during streaming varies from 169 to 1272 per second. For analysis purposes the video is paused from 0:02:07 to 0:03:01; the number of packets received during this period is zero. The comparison of graphs 4.2.1 and 4.3.1 shows that, as the quality of the video increases, more data is received by the video player. The average number of packets received is 767/s, the standard deviation is 266, and the coefficient of variation is 0.346.

4.3.2 Audio buffer allocation and utilization

Fig 4.3.2 Audio buffer utilization



Comparing graph 4.3.2 with the results for videos 1 and 2 shows that the variations for videos 1 and 2 are very small relative to video 3. The audio performance of video 3 is clearly poorer than that of videos 1 and 2: the audio triggers frequent re-sampling during streaming because of insufficient network performance.

4.3.3 Video buffer allocation and utilization

Graph 4.3.3 compares the performance of the video player displaying video 3 with videos 1 and 2. The video play is not smooth compared with the previous videos. This is due to receiving a large amount of data over a shared network (Wi-Fi). The performance of the video player is poor, as the audio buffer and frame buffer become empty very often. This could be overcome by using a dedicated Internet link.

Fig 4.3.3 video buffer utilization

4.3.4 Number of RTP packets processed

From graph 4.3.4, the number of packets processed increases with the quality of the video. For an HD video (1080p), a large amount of data flows during a small interval of time through the wireless network. The number of RTP packets processed varies from 1332 down to 0, quite a large difference. If no RTP packets are processed, the audio and frame buffers will become empty. As explained earlier, the processing of RTP packets depends on the received packets: if packets are not received, nothing is processed.


Fig 4.3.4 number of RTP packets processed

4.3.5 Working of VLC player

Fig 4.3.5 execution of main loop

There is hardly any performance variation in the working of the VLC player, with the exception of 0:01:59, 0:03:37 and 0:03:45, when the main loop is executed four to six times, which represents a small variation in the performance of the player. The player tries to recover from the problem by itself, which can be observed from the loss of synchronization. That event is due to the absence of received data, and the player bridges it by presenting the last video frame available in the buffer repeatedly. When it receives data (packets) again after one minute, the player requires more effort to re-begin the stream.

From graphs 4.3.1 and 4.3.5 of the number of received packets, the number of RTP packets received during streaming varies with time, but the effect of this variation on the player is zero. In the case of video 1, it receives a minimum of 92 and a maximum of 172 packets per second; when the video buffer utilization graph is observed, the effect of this variation is negligible. The number of packets received increases with the quality of the video.


5 DISCUSSION

The lessons learnt through this experiment are as follows:

The experiment is implemented in real networks instead of using a simulator. The results collected from this experiment are more credible, as simulators lack real-world evidence and reliability. The experiment is carried out in two stages. Initially, the sender is located outside the campus and the receiver is located at BTH. Later, in the second stage, the sender and receiver are located in the same network, and those results were considered. Whenever an RTP packet is received, the timestamp of every packet is recorded, and it is observed that the number of packets received per second varies (graphs 4.1.1, 4.2.1 and 4.3.1).

When the receiver receives any information, some memory is allocated for it, which serves as a buffer. To analyse what happens in the case of a traffic outage, the video streaming is stopped for one minute. During this period the audio and video content available in the buffer is played out; once it is exhausted, the audio decoder plays no audio. For the video content the scenario is completely different: when the buffer becomes empty, the player keeps presenting the last available frame in the buffer until a new frame arrives, and reports an empty frame each time it finds the frame buffer empty.

In the case of video, frames received later than their scheduled playout time are skipped, which in turn affects the quality of the video. The late arrival of frames is due to congestion in the network or problems at the receiver.
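The skipping rule amounts to a deadline comparison. A minimal sketch, with times in microseconds; the function itself is illustrative, not VLC's code:

```c
#include <assert.h>

/* A frame is skipped when its scheduled play-out time (PTS) has
 * already passed at the moment it becomes available for display. */
static int should_skip(long long frame_pts_us, long long now_us) {
    return frame_pts_us < now_us;
}
```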

As we did not use a dedicated link for streaming, the losses may vary because several applications use the same network at the same time. For a more robust analysis, we collected the results three times for every video and compared them. The runs look similar for each video, and the graphs were plotted from the third run. The results were taken for three different types of video (360p, 720p and 1080p). The interesting observation is that the loss of audio and video content, as well as the frequency of freezes on the receiving side, increases with the quality of the video. From graphs 4.1.1 and 4.2.1, jitter can be observed in the received packets; however, the effect of this jitter on the audio and video of videos 1 and 2 is zero.


6 CONCLUSION

It is important to measure user-related performance problems in the user interface during streaming. This thesis has estimated and analyzed the importance of the VLC player's functions and modules by collecting the necessary timestamps of frame playout towards the user.

6.1 Synthesis

RQ1: What problems are experienced in the user interface while streaming a live video?

The freezing of video and starving of audio due to emptiness of the buffer, the re-building of video frames by re-using the most recent one, the re-sampling of audio due to low buffer content, disturbances in the player due to a bad network, and the skipping of late received frames are observed at the user interface while streaming a video.

RQ2: What are the advantages of an instrumented video player?

An open source video player is instrumented according to the requirements, which made it possible to gather the necessary timestamps of received packets, the memory allocated for the decoded packets, and the demuxers' decoding times for both audio and video during the streaming process, as illustrated in appendices 7.7.7 and 7.7.8. This is not possible without instrumentation of the player.

RQ3: How can the timestamps related to the initiation of a video, re-buffering, and losses within a video be measured?

When the developed video player is used for streaming, data are collected into log files such as “buffer utilization (audio).txt”, “buffer utilization (video).txt”, “gets a datagram from network (rtp).txt”, “main loop (input.c).txt” and “processes rtp packet.txt” (ref. appendices 7.7.1 - 7.7.8). In each log file a timestamp, the reason for the event and the size of the allocated memory are recorded, and these values are plotted in the figures of chapter 4. Based on these graphs the performance of the player is determined.

RQ4: What are the impacts of the bit rate on live video streaming on frequency of freezes?


Future work:

The technique implemented in this study is quite useful for analyzing the performance of the player. In recent years, smart phones have become popular because of their robust connectivity to the Internet via 3G or Wi-Fi, and applications are developed especially for smart phones such as the iPhone and Android devices. Potential follow-up work could include the implementation of an instrumented player on a smart phone and the study of user-perceived streaming behavior via 3G or Wi-Fi networks.


7 APPENDIX

7.1 GUI statistics

The VLC source code was downloaded [16]; the following modifications were made to the source code to gather the necessary information for analysis.

The statistics shown in the GUI are collected into a log file. To gather them, the source code is modified as follows. The following lines were added in info_panels.cpp at line 507, located in modules\gui\qt4\components\info_pannels.cpp:

{
    FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/statistics.txt", "a");
    char buffer[30];
    struct timeval tv;
    time_t curtime;
    gettimeofday(&tv, NULL);
    curtime = tv.tv_sec;
    strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
    fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);

    int ade = p_item->p_stats->i_decoded_audio;      fprintf(fp, "%d ", ade);
    int apl = p_item->p_stats->i_played_abuffers;    fprintf(fp, "%d ", apl);
    int alo = p_item->p_stats->i_lost_abuffers;      fprintf(fp, "%d ", alo);
    int vde = p_item->p_stats->i_decoded_video;      fprintf(fp, "%d ", vde);
    int vdi = p_item->p_stats->i_displayed_pictures; fprintf(fp, "%d ", vdi);
    int vlo = p_item->p_stats->i_lost_pictures;      fprintf(fp, "%d ", vlo);
    int reme = (float)(p_item->p_stats->i_read_bytes) / 1024;       fprintf(fp, "%d ", reme);
    int inbi = (float)(p_item->p_stats->f_input_bitrate * 8000);    fprintf(fp, "%d ", inbi);
    int dere = (float)(p_item->p_stats->i_demux_read_bytes) / 1024; fprintf(fp, "%d ", dere);
    int debi = (float)(p_item->p_stats->f_demux_bitrate * 8000);    fprintf(fp, "%d ", debi);
    int disc = p_item->p_stats->i_demux_discontinuity;              fprintf(fp, "%d ", disc);

    fprintf(fp, "\n");
    fclose(fp);
}

A file pointer is also to be declared in the header file info_panels.hpp at line 150, located in modules\gui\qt4\components\info_pannels.hpp, in the info panel class:

FILE *fp;
time_t now;

7.2 Receiving RTP packets

When an RTP packet is received at the receiver, a block of memory is allocated; the variables accessing this buffer are block->i_buffer and block->p_buffer. The following lines are added to input.c in modules/access/rtp at line 63. Whenever the player receives an RTP packet, some information is recorded along with a timestamp in the “gets a datagram from network(rtp).txt” log file.

FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/gets a datagram from network(rtp).txt", "a");
fprintf(fp, "%d ", block->i_buffer);
fprintf(fp, "%s ", "i_buffer");
fprintf(fp, "%d ", block->p_buffer);
fprintf(fp, "%s ", "p_buffer,");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

Session starting and ending

The following lines are added in session.c, located in modules/access/rtp, at line 68 for session start and at line 87 for session end:

FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/rtp.txt", "a");
fprintf(fp, "%d ", session);
fprintf(fp, "%s ", "session started time,");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

In session.c (modules/access/rtp) at line 87:

FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/rtp.txt", "a");
fprintf(fp, "%s ", "session ended time");
fprintf(fp, "%d ", *session);
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

RTP flow stopped

Due to various problems in the network, or a problem at the sender or receiver, the RTP flow may stop. To know when the flow stopped, the following lines are added to input.c, located in modules/access/rtp, at line 142:

FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/rtp.txt", "a");
fprintf(fp, "%s ", block);
fprintf(fp, "%s ", "rtp flow stopped,");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

7.3 Buffer utilization for audio

The audio content is available in the memory allocated by the decoder when a packet is received. The demuxer reads the audio content and sends it on to the audio output; the timestamps are recorded into the “buffer utilization (audio).txt” log file. The following lines are added to dec.c, located in src/audio_output, at line 270.

FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/buffer utilization(audio).txt", "a");
fprintf(fp, "%d ", p_buffer->i_buffer);
fprintf(fp, "%s ", " buffer utilization for audio,");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

Re-sampling:

During streaming, some memory is allotted for audio content, which serves as a buffer. Due to various problems in the network (the sender stops, network failure, etc.), the available audio content in the memory can become too low, which triggers re-sampling; the corresponding timestamps are recorded in “skipped picture(video).txt”. The following lines are added to input.c, located in src/audio_output, at line 679. If no audio content is available at all, a message that the audio is starved is written, and the corresponding timestamps are recorded in the same file.

FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/skipped picture(video).txt", "a");
fprintf(fp, "%d ", p_input->i_resampling_type);
fprintf(fp, "%s ", "audio buffer is low triggering re-sampling in ln 338,");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

These lines are added to output.c (src/audio_output, at line 334) to record that the audio is starving:

FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/skipped picture(video).txt", "a");
fprintf(fp, "%d ", p_aout->output.b_starving);
fprintf(fp, "%s ", "audio starved in ln 338,");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

7.4 Buffer utilization for video

The video content is available in the memory allocated by the decoder when a packet is received. The demuxer reads the video content and sends it on for display; the timestamps are recorded into the “buffer utilization (video).txt” log file. The following lines are added to video_output.c, located in src/video_output, at line 1005.

FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/buffer utilization(video).txt", "a");
fprintf(fp, "%d ", p_last);
fprintf(fp, "%s ", "allocates buffer for video,");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);


When the video buffer runs low, the timestamps are recorded in “skipped picture(video).txt”; the corresponding lines are added to video_output.c, located in src/video_output, at line 679. If no video content is available, a message that the video frame is empty is written and the timestamps are recorded in the “skipped(video).txt” log file. The following lines were added to video_output.c, located in src/video_output, at line 1102.

FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/skipped(video).txt", "a");
fprintf(fp, "%d ", display_date);
fprintf(fp, "%s ", "frame is empty,");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

If frames are received late, the late frames are skipped. The timestamps are recorded into the “skipped picture(video).txt” log file. The lines below are added to video_output.c, located in src/video_output, at line 1039.

FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/skipped picture(video).txt", "a");
fprintf(fp, "%d ", p_vout->p->statistic);
fprintf(fp, "%s ", "late video frame skipped in video_output.c ln 1038,");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

7.5 Processing of RTP packets


FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/processes rtp packet.txt", "a");
fprintf(fp, "%d ", data);
fprintf(fp, "%s ", "data");
fprintf(fp, "%d ", *demux);
fprintf(fp, "%s ", "demux,");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

UDP streaming

If the video streaming is done using UDP instead of RTP, the timestamps are collected when a UDP packet is received. The lines below were added to udp.c, located in modules/access, at line 237:

FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/udpblock.txt", "a");
fprintf(fp, "%d ", *p_access);
fprintf(fp, "%s ", "receives an udp packet,");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

7.6 VLC runtime


FILE *fp = fopen("/media/9C1CC0671CC03DD0/files/main loop(input.c).txt", "a");
fprintf(fp, "%d ", b_buffering);
fprintf(fp, "%s ", "main loop running");
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
fprintf(fp, "%s%ld\n", buffer, tv.tv_usec);
fclose(fp);

7.6.1 PHP Script

A PHP script was developed to count the timestamps that fall within each second during streaming. The same script is reused for the different measurements by changing the name of the input file generated during streaming. The file name in the script is replaced by one of the following names:

Statistics.txt
buffer utilization(audio).txt
buffer utilization(video).txt
gets a datagram from network(rtp).txt
main loop(input.c).txt
processes rtp packet.txt
skipped(video).txt

The log file in the script is replaced by the following names:

buffer utilization(audio) count.txt
buffer utilization(video) count.txt
gets a datagram from network(rtp) count.txt
main loop(input.c) count
processes rtp packet count.txt
skipped(video) count.txt

<?php
$myFile = "E:/files/lady gaga/processes rtp packet.txt";
$fh = fopen($myFile, 'r');   // log file produced by the player
// The opening of the count file was lost at a page break in the original;
// the name follows the "... count.txt" pattern listed above.
$f = fopen("E:/files/lady gaga/processes rtp packet count.txt", 'a');
$a = 'anu';                  // second currently being counted (dummy start value)
$i = 0;                      // events counted so far in that second
if ($fh) {
    while (!feof($fh)) {                    // loop till end of file
        $buffer = fgets($fh, 4096);         // read a line
        $timestamp = explode(',', $buffer); // split the line at the comma
        $b = $timestamp[1];
        $time = explode(' ', $b);
        $c = $time[1];
        $time = explode('.', $c);
        $c = $time[0];                      // timestamp truncated to whole seconds
        if (strcmp($c, $a) != 0) {          // a new second starts
            $a = $time[0];
            $a = str_replace("\n", '', $a);
            $e = $a . "\t" . $i . "\n";     // second label and accumulated count
            echo $e;
            fwrite($f, $e);
            $i = 0;
        }
        $i++;
    }
    fclose($fh);   // close the files
    fclose($f);
}
?>

7.7 Output

LOG file:

The log files that are generated during streaming are as follows.

7.7.1 gets a datagram from network(rtp).txt:
