UPTEC IT 20010

Degree project, 30 credits

October 2020

Automating regression testing on fat clients

Emil Österberg


Faculty of Science and Technology, UTH unit
Visiting address: Ångströmlaboratoriet, Lägerhyddsvägen 1, Hus 4, Plan 0
Postal address: Box 536, 751 21 Uppsala
Telephone: 018 – 471 30 03
Fax: 018 – 471 30 00
Web page: http://www.teknat.uu.se/student

Abstract

Automating regression testing on fat clients

Emil Österberg

Regression testing is important but time consuming. Automating the testing has many benefits. It saves companies money, because they do not have to pay testers to manually test their applications. It produces better software with fewer bugs, as testing can be done more frequently and bugs are therefore found faster. This thesis has compared two techniques used to automate regression testing: the more traditional component level record and replay tools, and visual GUI testing tools that use image recognition. Eight tools in total were tested and compared, four of each technique. The system under test for this project was a fat client application used by Trafikverket. After automating a test suite using all tools, it could be concluded that component level record and replay has some advantages over visual GUI testing tools, especially when it comes to verifying the state of the system under test. The benefits of visual GUI testing tools come from their independence from the system under test and from the fact that the technique more closely mimics how a real user interacts with the GUI.

Examiner: Lars-Åke Nordén

Subject reader: Lars-Henrik Eriksson
Supervisor: Ingemar Wararak


Sammanfattning

Regression testing is an important but time-consuming part of software development. Automating the testing has several advantages. It saves companies money, since they do not need to pay testers to run the tests manually. It results in better software with fewer bugs, since testing can be done more often and bugs can therefore be found earlier. This project has examined and compared two techniques that can be used to automate regression testing, and tools that use these techniques: on the one hand the traditional tools that identify objects at component level, and on the other hand tools that instead use image recognition to identify objects. In total, eight tools were tested and evaluated, four of each type. The system tested during the project is a desktop application used by Trafikverket.

After automating a test sequence with each tool, it could be concluded that the tools that identify objects at component level have several advantages over tools that rely solely on image recognition. This applies above all to verifying the state of the system. The greatest advantage of the image recognition tools turned out to be their independence from the system under test, and that the technique more closely mimics a real user.


Contents

1 Introduction
1.1 Problem
1.2 Purpose
1.3 Goal
1.4 Delimitations
2 Testing Methodologies
2.1 Functional Testing
2.2 Non-Functional Testing
2.3 Regression Testing
3 Testing techniques
3.1 Record and Replay
3.1.1 Coordinate based R&R
3.1.2 Component level R&R
3.2 Visual GUI Testing
4 Related work
4.1 Visual GUI Testing
5 Method
5.1 Selection of tools
5.1.1 SikuliX
5.1.2 EyeAutomate
5.1.3 Eggplant
5.1.4 TestingWhiz
5.1.5 Unified Functional Testing (UFT)
5.1.6 Visual Studio Coded UI (VS Coded UI)
5.1.7 Squish
5.1.8 TestComplete
6 Evaluation
6.1 Testing scenario
6.2 Evaluation criteria
7 Results
7.1 SikuliX
7.2 EyeAutomate
7.3 Eggplant
7.4 UFT
7.5 TestComplete
7.6 Squish
7.7 Visual Studio Coded UI
7.8 Result tables
8 Discussion
8.1 Reflection
9 Conclusions
10 Future work
Bibliography

1 Introduction

After updating a software system you want to test it to ensure that the new features have not broken old features [1]. This is called regression testing, and bugs found during these tests are called regression bugs. When testing fat clients/desktop applications, the source code is often not available to the testers. This means that, in order to test the system, the testers need to interact with the system's GUI. Testing is a huge part of software development and can often take 30-50% of a project's resources [2]. Automating testing is therefore highly interesting for companies, as there is a potential to save a lot of money or use those resources to produce more value. Automation of tests has long been, and still is, a hot topic [3]. Replacing manual testing with automatic tests has many benefits. It allows for more, faster and more precise testing [4][5], which in turn should result in better software with fewer faults. One advantage of automating testing is not having to have testers run the tests manually. This frees up time for the testers to focus on more complicated tasks that may not be possible to automate. It is also a lot faster to run a test automatically than to have a tester do the same task, making deployment of new software faster [6]. Automation also makes it possible to do more testing, thus increasing the overall quality of the software.

This project is done in collaboration with the Swedish Transport Administration, Trafikverket. Trafikverket develops and operates a great number of systems. A large number of these systems are so called "fat clients". A fat client is a desktop application that does most of the calculations locally, in contrast to a thin client, where most of the calculations are performed on a remote server. Trafikverket is interested in finding methods to automate the testing of these systems.

Numerous tools for testing exist on the market today. For unit tests, automatic testing is standard. Automatic tests of web interfaces are widely used and documented, but for fat clients automatic tests are less widely used and documented, so there is a need to test and document this. The team I collaborated with at Trafikverket uses Selenium [7] and Eggplant [8] for high level testing, Selenium primarily for web interfaces and Eggplant for desktop applications. These tools work well for some applications, but there are also tests that cannot be automated to a satisfactory level with these tools.

1.1 Problem

Trafikverket has had problems automating the testing of some clients. One of those clients is ProjectWise, which will be used as the system under test (SUT) for this project [9]. In ProjectWise the user can browse files and folders, displayed in a tree structure, much like Windows File Explorer. The files and folders can be viewed in two windows in the client, one showing the whole tree and one showing only the content of the innermost open folder. See figure 1.

Trafikverket believes that the reason testing tools have trouble automating ProjectWise is that, when the ProjectWise client is started and the file and folder objects are generated and drawn, they all receive a random ID-number from the operating system. If a testing tool uses this ID-number as a reference for the object, a script generated by that tool will not be reusable after restarting the system, as the object will then have received a new ID-number. One type of tools uses image recognition as a way to automate tests. These tools are described in section 3.2. They would not have problems with the random ID-numbers. They could instead have trouble with the mirrored file structure in the client, as an image representing an object could match multiple coordinates on the screen.

Automating the regression testing is necessary if Trafikverket is to use a continuous delivery approach. Manual testing takes too much time and is too expensive. Automating the testing also helps to eliminate some bugs that could otherwise be missed due to human error.

Figure 1: The mirrored file and folder structure in ProjectWise causes problems for automation tools.

1.2 Purpose

The purpose of this project is to explore the possibilities of automating the regression testing of Trafikverket's desktop applications, in particular the application ProjectWise. In addition to solving this specific problem, the project also aims to compare the two test automation techniques Component Level Record and Replay and Visual GUI Testing (VGT).

1.3 Goal

The goal is to do a detailed evaluation of a selection of the test automation tools that exist on the market today. If a suitable tool is found, the possibilities of including the tool in Trafikverket's Application Lifecycle Management (ALM) should be explored. The goal is also to compare and find pros and cons of the automation techniques Component Level Record and Replay and Visual GUI Testing. The following are the goals of this thesis.

• G1. Explore ways to automate Trafikverket’s application ProjectWise.

• G2. Compare the test automation techniques Component level Record and Replay and Visual GUI Testing.

• G3. Compare tools on the market based on criteria produced together with Trafikverket.

• G4. If a suitable tool is found, explore the possibilities of including it in Trafikverket's Application Lifecycle Management.

1.4 Delimitations

In order to be able to evaluate the tools, a test suite is implemented using each of the tools. This test suite only tests a single desktop application. The tools and the desktop application, ProjectWise, will all be run on Microsoft Windows 10. Different results may occur on other operating systems, depending on how the operating system draws the GUI objects. This is a small basis that does not highlight every strength and weakness of every tool. Instead it primarily attempts to find the optimal tool for Trafikverket to use to test this type of application.

2 Testing Methodologies

2.1 Functional Testing

Functional testing aims to test the functionality of the System Under Test (SUT), i.e. that the system fulfills all functional requirements of the specification. This can be done at different granularity, from low level unit testing to high level system testing.

• Unit Testing is the process of testing the correctness of small components of the SUT. The SUT is divided into components consisting of one or a few functions. For each component, the expected output for some predefined input values is asserted in the tests (a minimal example is shown after this list). This work is time consuming, and changes in the functions often mean the tests need to be rewritten. [10]

• Integration Testing is the process of testing that two or more units work as intended together. While the intention of unit testing is to isolate each unit to test its functionality, in integration testing the intention is to test the units in their intended environment [11].

• System Testing means testing the functionality of the whole system and verifying that it meets its specifications [12]. This is usually done by running a set of predefined test scenarios/test suites and verifying the results. Non-functional system testing is also done. These types of testing will be described in more detail in the next section.

• Acceptance Testing is done by validating that the developed system meets the requirements set by the customer or intended users of the system.
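The thesis itself contains no code, but a minimal sketch may make the unit testing idea concrete. The function under test and its expected values below are hypothetical; Python is used throughout for such sketches:

```python
import unittest

def state_label(code):
    # Hypothetical unit under test: map a state code to a display label.
    return {0: "Draft", 1: "Review", 2: "Approved"}.get(code, "Unknown")

class StateLabelTest(unittest.TestCase):
    def test_known_codes(self):
        # Assert the expected output for predefined input values.
        self.assertEqual(state_label(0), "Draft")
        self.assertEqual(state_label(2), "Approved")

    def test_unknown_code(self):
        self.assertEqual(state_label(99), "Unknown")

if __name__ == "__main__":
    unittest.main()
```

If the mapping in state_label changes, both tests have to be revisited, which illustrates why unit tests are time consuming to maintain.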

2.2 Non-Functional Testing

Non-functional testing tests things not strictly connected to the function of the SUT. It instead focuses on aspects such as performance, security, usability and compatibility. These aspects will be described in more detail below.

• Performance Testing is the act of testing how well the SUT performs its tasks relative to its specifications. Often multiple types of performance tests are made. These can be stress tests, where the SUT is run with a high workload in order to find its limits. Other types of performance tests can test the responsiveness, reliability, throughput, scalability etcetera of the SUT. [13]


The goal of performance testing can be defined as testing how effective the system is. It is often used to find the limits of the system. For example how many users can the system handle at once? What is the maximum data throughput of the system? What are the minimum system requirements to run the software?

• Security Testing aims to test the security of the SUT. The three basic concepts of security are confidentiality, integrity and availability. Confidentiality is the concept that no unauthorized source should have access to information stored in or communicated by the system. Integrity is the concept that data stored in or communicated by the system is not corrupted; data can be modified either intentionally by an unauthorized party or unintentionally as the result of a bug. Availability means that authorized sources should have access to the information they are supposed to have. [14]

• Usability Testing is the process of testing the SUT by letting the end users use the system. What is tested here is that the intended end users can use the system to perform the work it was designed for. Aspects measured include whether the users can do what they want to do, the time needed to learn how to use the system and how efficiently the users can use the system after they have learned it. [15]

• Compatibility Testing is the act of testing that the developed software is compatible with the environment in which it is supposed to be used. That includes, but is not limited to, the hardware components, operating system and web browsers [16]. A challenge here is that components often come in multiple different versions. With many components that should work together, the total number of configurations can be huge, and testing all of them may not be possible [17].

2.3 Regression Testing

Sometimes, after adding new features to a system, old features stop working correctly. This is called a regression bug. In order to avoid regression bugs, the whole system is tested after each major update, so called regression testing. Usually it is not possible to test every part of the system for every update. Instead a number of test suites that are considered to have good coverage are run. When testing individual units of the system these tests are easy to automate, but when testing the system on a higher level, for example testing a GUI, it becomes harder to automate the tests. Because of this, these high level regression tests are often run manually. Testing manually is time consuming, expensive and error prone [5].

3 Testing techniques

Tools for high level test automation usually use one or both of two techniques: Component Level Record and Replay, or image recognition, also called visual GUI testing. This section describes these two techniques.

3.1 Record and Replay

Record and replay (R&R) is a technique where user interactions with the system under test (SUT), such as mouse movements, clicks and keyboard inputs, are recorded and saved by a tool. The recording can then be replayed automatically in order to test the system. There are two major ways to record the inputs to the system: either on a bitmap level or on a component level.

3.1.1 Coordinate based R&R

In coordinate based R&R, screen coordinates are used to reference the objects interacted with. For example, when clicking on a button, the coordinates of that click can be saved and used to automate clicks on the same button. This is illustrated in figure 2, where a user clicks on button 5 in the Windows 10 calculator and the coordinates of that click are recorded and saved. Typically the test is generated in a recording session where the coordinates of every interaction with the SUT are saved to a script. The script can later be used to replicate the recorded scenario. This method is very vulnerable to changes in the GUI; moving objects just a little bit can cause the whole test suite to fail [5][18]. Because of this, a lot of maintenance is required to keep the scripts up to date with the latest GUI layout. This maintenance is time consuming and therefore costly, so the method is not used much in practice.
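As a minimal sketch of what a replayed coordinate based recording amounts to, here using the third-party pyautogui library (not one of the tools evaluated in this thesis) and hypothetical coordinates:

```python
import pyautogui

# Replay of a recorded session: every step is an absolute screen position.
pyautogui.click(512, 384)   # where button "5" was during recording
pyautogui.click(601, 430)   # where "+" was
pyautogui.click(512, 384)   # "5" again
pyautogui.click(650, 478)   # "="
# If the calculator window is moved or the layout changes slightly,
# every one of these steps silently clicks the wrong thing.
```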

3.1.2 Component level R&R

In component level R&R, references to the objects interacted with are saved through direct access to the GUI components. Figure 3 displays how button five in the Windows 10 calculator application is identified on component level. The button is identified as an object of type button with the name 'Five'. In order to record on a component level, the tester needs access to the source code to be able to use GUI libraries and toolkits [6]. Working on this level has many benefits over coordinate based R&R. Using references to the components in the GUI instead of coordinates on the screen makes the test suite scripts more robust to changes in the GUI layout [18].

A feature available in some component level testing tools is GUI-ripping. GUI-ripping is a process that extracts all GUI components from the SUT and creates a map of the system. This map can then be used to automatically create test suites for the SUT. Theoretically such a map can be used to create a full coverage test suite where all possible combinations of component interactions are tested. In reality full coverage is not feasible, as it would take too much time; instead a subset of the possible paths is used. A problem with working on component level is that the script does not simulate human interaction as well. For example, a button that is hidden behind something else in the GUI is not accessible for a human to interact with, but it might still be possible for the script to click the button through its saved reference. This means a faulty GUI might not be caught by a component level script. In the same way that coordinate based R&R is vulnerable to moving components around in the GUI, component level R&R is vulnerable to changes in the GUI API [18].
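For contrast with the coordinate sketch above, the following is a minimal component level sketch, assuming the third-party pywinauto library and a running Windows 10 Calculator; the evaluated tools use their own engines, but the principle is the same:

```python
from pywinauto import Application

# Attach to the running calculator and address the button through its
# component properties (accessibility name and control type), not its position.
app = Application(backend="uia").connect(title="Calculator")
window = app.window(title="Calculator")
window.child_window(title="Five", control_type="Button").click_input()
# Moving the button within the window does not break this reference,
# but changing its name or type in the GUI API would.
```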

3.2 Visual GUI Testing

Emil Alégroth introduces, in his PhD thesis, the term Visual GUI Testing (VGT) [6]. VGT is a technique that uses image recognition in order to find and interact with GUI objects. In order to identify the same calculator button as in the examples above, the user needs to capture an image of the button and then use a tool to match that image with an area on the screen, see figure 4. The technique is script based, meaning that the tester writes or generates a script that references the various GUI objects using images. Compared to R&R, VGT is more robust to non-visual changes in the GUI layout [6]. VGT is however vulnerable to visual changes in the GUI: changes in size or colour might cause the image recognition tool to no longer recognise the object [19].

Figure 4: Button 5 identified using image recognition.
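The image recognition at the core of VGT is essentially template matching: searching a screenshot for the region most similar to a captured image. A minimal sketch with the third-party OpenCV library and hypothetical file names (the evaluated tools hide this machinery behind their own APIs):

```python
import cv2

screen = cv2.imread("screenshot.png")      # full-screen capture
button = cv2.imread("button_five.png")     # image of the target object
result = cv2.matchTemplate(screen, button, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_pos = cv2.minMaxLoc(result)

if best_score > 0.9:                       # similarity threshold
    h, w = button.shape[:2]
    center = (best_pos[0] + w // 2, best_pos[1] + h // 2)
    print("Click at", center)              # a VGT tool would click here
else:
    print("Object not found")              # e.g. after a visual GUI change
```

The threshold makes the trade-off visible: a strict value rejects objects after small visual changes, while a loose value risks matching the wrong, similar-looking object.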

4 Related work

In a collaboration between Chalmers University of Technology and Saab AB, visual GUI testing was used to test one of Saab AB's systems. Alégroth et al. use and evaluate different VGT tools [19]. Their research shows that VGT can improve the speed of testing by up to five times compared to manual testing. In their tests they mostly used the open source tool Sikuli (described further in section 5.1.1), after having concluded that it performs on par with, and often even better than, commercial tools.

Gao et al. [20] present in their paper an approach using workflow based test cases automatically derived from the system under test, together with a test oracle. They describe a framework called testPatch that can be used to fully automate regression testing. It does so by deriving test cases based on paths that can be taken in the GUI. The test cases are executed automatically on different versions of the software under test, and any mismatch detected between the versions is noted as a potential regression bug. In the paper they discuss the problem of false positives, that is, detected bugs that are not actually bugs. Because of the potential presence of false positives, all mismatches need to be manually checked in order to decide whether they are regression bugs or not.

4.1 Visual GUI Testing

As described in section 3.2, the term visual GUI testing is introduced by Emil Alégroth in his PhD thesis. The prime focus of the thesis is VGT's applicability in industry. In cooperation with the Swedish company Saab, Alégroth et al. test and evaluate different GUI testing tools that use image recognition as their way to identify GUI objects on the screen.

In a paper, Börjesson et al. compare two VGT tools on how well they can perform testing that would otherwise be done manually [6]. 50 scenario based test cases, used to test a subsystem of an air traffic control system, were used in this study. These test cases were used by Saab to manually test the system. In the study the 50 test cases were analysed and categorized. This analysis showed that the open source tool Sikuli could fully script 81%, partly script 17% and not script 2% of the test cases. The tested commercial tool could fully script 95%, partly script 3% and not script 2% of the test cases. The major difference in the result between the tools can largely be explained by the fact that Sikuli cannot analyse audio output. The two percent of the cases (one case) that neither of the tools could script was a hardware-related test case.

From the 50 test cases, five cases were selected to be representative of the entire test suite. These five cases were scripted with the two tools, and after this the researchers analysed how efficiently the tools could handle the tasks. The result showed that Sikuli had a lower development time, 15 hours and 55 minutes, compared to the commercial tool's 17 hours and 40 minutes. The execution time for the scripts was almost identical for the tools, 18.00 minutes for Sikuli and 17.93 minutes for the commercial tool. This gives an estimated execution time for the whole test suite of about three and a half hours. In comparison, an experienced tester could execute the test suite manually in about 16 hours, so the automated tests give a gain of 78 percent. The development time for the whole test suite is estimated at about 18 business days for Sikuli and 21 business days for the commercial tool. This is of the same order as the time Saab spends on testing during one development cycle. Therefore the authors conclude that the automation should be cost efficient after only one development cycle.

In another study, Alégroth et al. study long term use of VGT in industry [21]. The study is done in collaboration with the Swedish music streaming company Spotify. Spotify is one of the very few companies that have used VGT in industry over several years, so their experience with the technique is valuable when evaluating it. The VGT tool used by Spotify is Sikuli. A carefully selected subset of the employees was interviewed and their answers were analyzed. The answers were divided into categories concerning the adoption, benefits and challenges of using VGT. It was concluded that the benefits of using VGT are:

1. Test scripts are robust both in terms of execution and number of false test results
2. Feasible script maintenance
3. Test script logic can be reused between different variants of an application
4. Test scripts can be used to test the actual product as well as incorporate external applications
5. Test scripts find regression defects that otherwise require manual regression testing

The challenges of using VGT are:

1. Test scripts have limited use for applications with dynamic/non-deterministic output
2. Test scripts require significant amounts of image maintenance
3. Sikuli scripts have limited applicability for mobile applications
4. Sikuli locks up the user's computer during test execution

5 Method

In order to find the optimal tool for Trafikverket to use, a selection of the tools on the market was tested and evaluated. They were evaluated, using the evaluation criteria defined in section 6.2, based on how well they could execute a predefined test suite defined in section 6.1. The two main techniques used by the tools, component level R&R and VGT, were compared. The results from the evaluation were compared with results from similar projects described in related work.

I had no previous experience with any of the tools, so besides implementing the scenario, the tools were also evaluated based on how easy they were to learn and on the availability of documentation and tutorials. If a tool was not free to use, it was tested using an evaluation license.

5.1 Selection of tools

The tools used in this project were selected based on two factors. First, they had to be somewhat popular on the market. Without any reliable data on market share, a list of tools was created based on which tools appeared most frequently in discussions on forums such as stackoverflow.com and in articles related to test automation. The second factor was that the tools had to provide an evaluation version that could be used for free. After consultation with Emil Alégroth, who was considered an expert in the field, the following eight were finally chosen.

The tools have different types of licensing models. Some are completely free to use, while others require the user to purchase a license. A group license allows the usage of a certain number of simultaneously running instances of the software. Node-locked licenses are locked to a single computer.

5.1.1 SikuliX

SikuliX is a VGT tool and the only open source tool evaluated in this project. It can be used either with a purpose built IDE or through its Java API. For this project only the IDE has been used. SikuliX scripts are written in Jython, an implementation of Python made to run on the Java virtual machine.
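To give a flavour of the scripting, a minimal SikuliX sketch; each .png is an image previously captured by the user, and the names here are hypothetical:

```python
# SikuliX IDE script (Jython). Each .png is a user-captured image.
click("open_button.png")            # find the image on screen and click it
wait("main_window.png", 10)         # wait up to 10 seconds for a match
type("report")                      # send keyboard input to the focused field
type(Key.ENTER)
if exists("success_banner.png"):    # image based verification
    print("step verified")
```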


5.1.2 EyeAutomate

EyeAutomate is a VGT tool developed by the Swedish company Auqtus AB. EyeAutomate is free to use together with EyeStudio, a purpose built IDE. EyeAutomate uses a custom made scripting language. When editing the scripts using EyeStudio, images can be seen directly in the code. Auqtus calls this Visual Script.

5.1.3 Eggplant

Eggplant is another pure VGT tool. It differs from SikuliX and EyeAutomate by providing a recording functionality. Instead of manually capturing each image and inserting it in a script, Eggplant allows the user to click through a scenario and records an image for each click. Eggplant differs from all other tested tools by requiring the testing script to be run on a different system than the SUT. To execute a script, Eggplant connects to the SUT via VNC.

Eggplant is a commercial tool and has both group licenses and node-locked licenses. Licenses can be either development licenses, which allow access to script creation and debugging through the GUI, or execution licenses, which only allow execution of scripts.

5.1.4 TestingWhiz

TestingWhiz is the fourth and final pure VGT tool. Like Eggplant, it provides a recording functionality which speeds up script generation. TestingWhiz is used with its purpose built IDE.

TestingWhiz comes in two versions: a free to use community edition with limited functionality and a licensed enterprise edition. For this project the enterprise version was evaluated. The license model for TestingWhiz is a monthly fee of about $150 per user.

5.1.5 Unified Functional Testing (UFT)

Unified Functional Testing (UFT), previously QuickTest Professional (QTP), is an automated functional and regression testing tool developed by Micro Focus. It provides a wide variety of functionality, but for this project I have focused mainly on its VGT and component level R&R functionality. Like the other tested tools, it comes with its own purpose built IDE.

UFT provides many different license options for companies. The annual license cost is about $4500 for a floating license that can be used by any user, and $3200 for a node-locked license that can only be used by one specific user [22].

5.1.6 Visual Studio Coded UI (VS Coded UI)

Visual Studio Coded UI is an automation testing tool developed by Microsoft. It is used as a plugin for Visual Studio. It is a pure component level R&R tool.

Visual Studio Coded UI is included in the Visual Studio Enterprise edition and does not require any additional license. The license cost for Visual Studio Enterprise is $2999 per year or $250 per month [23].

5.1.7 Squish

Squish is a functional and regression test automation tool developed by Froglogic. It provides both VGT and component level R&R functionality. Coding and recording are done in the provided purpose built IDE.

The price for a license varies depending on whether it is a group license or a node-locked license, which can only be used by one specific user, and on which editions of Squish are needed. A node-locked license costs from about $2700 per license and year. The price for a group license varies depending on how many individuals should have access to the program and how many users should be able to use the program concurrently, from $1300 to about $2100 [24].

5.1.8 TestComplete

TestComplete is developed by Smartbear and is, like UFT and Squish, a functional and regression test automation tool with both VGT and component level R&R functionality. It too provides a purpose built IDE for recording, coding and running the tests. The license can be customized as needed depending on whether TestComplete should be used to test desktop, mobile or web applications. The cost for a node-locked license is about $1067 plus about $1000-$1300 per module type (desktop, web and mobile) annually. A floating license that can be used by any user costs twice as much [25].

6 Evaluation

Eight tools were evaluated in this project: four tools that rely purely on the VGT technique to identify objects, and four tools that identify objects on component level or use a mix of the two techniques.

A testing scenario was developed together with testers from Trafikverket.

6.1 Testing scenario

The testing scenario used in the evaluation is one that has proven difficult for Trafikverket to automate with their current tools. The test suite consists of performing a number of tasks in the client ProjectWise [9].

When Trafikverket has used traditional component level R&R tools to automate tasks in ProjectWise, the tools have not been able to identify the objects in the client after restarting the SUT. The reason for this is that the tools use an ID-number, given to the object by the operating system, to uniquely identify each object. After restarting the operating system the object is given a new random ID and is no longer recognized as the same object.

In order to get around this, Trafikverket has used the VGT tool Eggplant to identify the objects using image recognition instead. This resulted in a new problem: ProjectWise displays the same folder structure in two locations in the client, which leads to images matching multiple locations on the screen.

The following steps should be performed in the test scenario (a sketch of how they might look as a script is shown after the list):

1. Log in on the folder PDB Investera (TRV)
2. Navigate the file tree by clicking folders in both windows in ProjectWise
3. Click on an item not visible on the screen that requires scrolling in order to be seen
4. Verify the value of the parameter State of a file
5. Download a file to the desktop
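A hedged sketch of the scenario in SikuliX-style script form; all image names are hypothetical, and each evaluated tool expresses these steps in its own way:

```python
# Hypothetical SikuliX-style outline of the ProjectWise test scenario.
doubleClick("pdb_investera_trv.png")       # 1. log in on the folder
click("subfolder_in_tree.png")             # 2. navigate in the tree window...
click("subfolder_in_list.png")             #    ...and in the list window
for _ in range(50):                        # 3. scroll until the item is visible
    if exists("target_item.png", 0):
        break
    type(Key.DOWN)
click("target_item.png")
assert exists("state_expected.png")        # 4. verify the State parameter
rightClick("target_item.png")              # 5. download the file to the desktop
click("export_menu_entry.png")
```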

6.2 Evaluation criteria

The tools will be evaluated based on the following criteria:

1. Execute the whole scenario
2. Installing the tool
3. Dependence on other software
4. Documentation
5. Verification facilities
6. Report functions
7. Complexity
8. Same looking objects on the screen
9. Different screen resolutions
10. Find objects not visible on the screen

7 Results

In this section I will, for each tool, first describe how the tool performed, what difficulties appeared and how they were solved. After that I will describe how well the tool performed on each of the evaluation criteria.

7.1 SikuliX

SikuliX was able to complete the whole test scenario. The hardest task to get to work was finding objects not visible on the screen, which required implementing a way to scroll on the screen. Another problem all VGT tools have in common is how to handle same or similar looking objects on the screen. In this case it could be handled by capturing a large enough image to include some of the surrounding area, making the image match only one area of the screen.

• Execute the whole scenario

SikuliX was able to execute the whole scenario.

• Installing the tool

No installation is required. An executable .jar file can be downloaded from the project's web page and run directly.

• Dependence on other software

Java 8 or later must be installed before SikuliX can run.

• Documentation

There is good online documentation that had all the information I needed. The official documentation site also has tutorials for getting started with SikuliX.

• Verification facilities

Verification has to be done by comparing a captured target image with the actual result on screen (see the sketch after this list). This worked well for verifying images and larger text fields. Verifying smaller details, like a number in a table, was still possible, but required more thought about how the comparison image was captured in order to be reliable.

• Report functions

Report functions are lacking in SikuliX. The only type of report generated after executing a script is a message saying whether the execution succeeded or failed and, in case it failed, which row in the script failed.


• Complexity

The tool is simple, especially if the user knows Python. There are buttons in the interface for the most common actions, like clicking and drag and drop. Automation of simple linear tasks should be possible without any programming knowledge. For more complex, non-linear scenarios, at least basic knowledge of Python is needed.

• Same looking objects on the screen

This could be handled by capturing a large enough image to include some of the surrounding area, thus making the image match only one area of the screen.

• Different screen resolutions

Images must be captured at the same resolution as is used during execution.

• Find objects not visible on the screen

SikuliX cannot see outside the screen. In order to interact with objects not visible on the screen, scrolling needed to be automated. For the most part it was sufficient to click on a specific area of the screen and then simulate pressing the arrow keys to scroll.
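A minimal sketch of the scroll-and-search and image comparison approaches described in the bullets above, in SikuliX Jython with hypothetical image names:

```python
def scroll_until_found(image, max_steps=50):
    # Click the window that should scroll, then press the down arrow
    # until the target image appears (or give up after max_steps presses).
    click("file_list_anchor.png")
    for _ in range(max_steps):
        if exists(image, 0):       # 0-second timeout: check once, no waiting
            return find(image)
        type(Key.DOWN)
    raise Exception("never scrolled into view: " + image)

item = scroll_until_found("target_file.png")
# Verification by image comparison: a previously captured image of the
# expected value must match the screen near the found item.
assert item.nearby(100).exists("expected_state_value.png")
```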

7.2 EyeAutomate

EyeAutomate as a tool is very similar to SikuliX. It had the same difficulties as SikuliX: it required the user to implement a way to scroll on the screen to find hidden objects, as well as the need to capture larger images in order to correctly identify identical or similar looking objects. An additional problem was that EyeAutomate was inconsistent in finding image matches. A script that worked one time might not work when run again.

• Execute the whole scenario

EyeAutomate was able to complete the whole test scenario.

• Installing the tool

Installation is trivial.

• Dependence on other software

Java 8 or later must be installed before EyeAutomate can be installed.

• Documentation

EyeAutomate has good documentation. In addition to the documentation website, EyeAutomate also provides a PDF document that teaches the user how and when to use EyeAutomate.


• Verification facilities

Just as with SikuliX, verification has to be done by comparing a captured target image with the actual result on screen. EyeAutomate also provides the functionality to search for text instead of images. To search for text, the user must first tell EyeAutomate what font and size the text is in.

• Report functions

EyeAutomate generates reports as an HTML file that can be viewed in a web browser. The report displays which steps succeeded and which steps failed to execute, and it shows a screenshot of how the screen looked at the time of failure.

The tool is simple and intuitive to use. It uses a custom scripting language that is easy to learn. The limited number of commands in the language makes it difficult to script more complex scenarios.

• Same looking objects on the screen

Just as for SikuliX, this could be handled by capturing a large enough image to include some of the surrounding area, making the image match only once on the screen.

• Different screen resolutions

Can switch between resolutions without much problem.

• Find objects not visible on the screen

Could be solved the same way as for SikuliX: scrolling needed to be automated, either by clicking on the window to be scrolled and then simulating arrow key or page up/page down presses, or by finding and clicking on the scroll bar.

7.3 Eggplant

I had major problems with Eggplant crashing, and after restarting the software it would often not recognize my trial license. Due to this I was not able to fully test and evaluate the tool before the trial period ended.

Eggplant uses an R&R approach to VGT, meaning images are captured as you click through the scenario. This speeds up the process of creating scripts. Eggplant still suffers from the same difficulties as the two previous VGT tools: it cannot find objects not visible on the screen without the user implementing scroll-and-search functionality, and it has difficulty separating identical or similar looking objects.


• Execute the whole scenario

The test scenario could not be fully automated before the trial license for the software ended.

• Installing the tool

Eggplant has to be installed on a different computer than the SUT. Eggplant then connects to the SUT via VNC.

• Dependence on other software

Eggplant requires VNC software in order to connect to and interact with the SUT.

• Documentation

Eggplant has good online documentation.

• Verification facilities

Just like the other image based tools, verification must be done by comparing a captured image with the actual results on the screen. Eggplant also provides text recognition; this functionality was not tested during this project.

• Report functions

By default, Eggplant provides a report of whether the script executed successfully and, if not, where it failed. Using asserts in the script, the report can be customized regarding how it should log and report the results of verifications.

• Complexity

Eggplant is more complex to get started with than the other VGT tools, mainly because of the need to use VNC to interact with the SUT. After the initial setup is complete, Eggplant is intuitive to use. The recording functionality makes it faster and easier to work with than both SikuliX and EyeAutomate.

• Same looking objects on the screen

This had to be handled in the same way as for the other VGT tools, by capturing large enough images to be able to uniquely identify objects.

• Different screen resolutions

Can switch between resolutions without much problem.

• Find objects not visible on the screen

Just like for the other VGT tools, scrolling needed to be automated in order to search for objects not directly visible on the screen.

7.4 UFT

UFT uses a combination of VGT and component level R&R. The script generated by the R&R functionality could not identify many of the objects, and no way to adjust which properties were used to identify the objects was found. Therefore image recognition had to be used in order to identify those objects. Using these different ways to identify and interact with objects, the whole scenario could be completed.

• Execute the whole scenario

UFT was able to complete the whole test scenario using a combination of component level identification and image recognition.

• Installing the tool

Trivial installation.

• Dependence on other software

UFT does not require any additional installed software.

• Documentation

UFT has good online documentation. Micro Focus also provides an extensive customer support program that can assist companies using UFT.

• Verification facilities

UFT provides checkpoints that can be used to verify the state of the tested software. They work by comparing the current state or characteristics of an object with the expected value. Results of succeeded and failed checkpoints are reported in the run results.

• Report functions

After the execution has terminated, UFT provides run results. The results can be browsed from within the IDE or exported as an HTML file. They display detailed information for each step in the execution and also provide a screenshot for each step.

• Complexity

UFT is a very complex tool, and within the scope of this project it has not been possible to test all parts of it. The record and replay functionality is intuitive to get started with and generates a mostly functional script. The script can be viewed in two modes: keyword view and editor view. The keyword view provides an overview of the test steps in a more easy-to-read way, while the editor view displays the test as VBScript code. The keyword view makes it possible to use the tool without extensive programming knowledge.

• Same looking objects on the screen

As long as the object could be identified on component level this was no issue. For the objects that needed to be identified visually, a large enough image had to be captured so it only matched once.

• Different screen resolutions

Can switch between resolutions without problems.

• Find objects not visible on the screen

As long as the object could be identified on component level this was no issue. For the objects that needed to be identified visually, scrolling had to be automated.

7.5 TestComplete

TestComplete mostly uses component level R&R. The recorded script worked when replayed immediately, but after restarting the operating system TestComplete could no longer recognize many of the objects in ProjectWise. This is due to Microsoft Windows assigning each object a unique ID number when it is created by the ProjectWise client. This ID number is recorded by TestComplete and used to identify the objects. In order to fix this problem, I had to change which properties were used to identify those objects. Because this was possible to do in TestComplete, the VGT functionality of the tool never had to be used.

• Execute the whole scenario

TestComplete was able to execute the whole scenario using only component level object identification.

• Installing the tool

Trivial installation.

• Dependence on other software

TestComplete does not require any additional installed software.

• Documentation

Smartbear provides good online documentation for TestComplete.

• Verification facilities

TestComplete uses checkpoints to verify the state of GUI objects, data tables, images, files and databases.


• Report functions

After the execution has terminated, TestComplete provides a result log for the run. In the log, details can be seen for each step in the test. For each step, two screenshots can be viewed: one taken during recording and one taken during execution. The log can be exported as MHT, HTML, XML, PDF or tcLogX.

• Complexity

TestComplete is a complex tool, and within the scope of this project it has not been possible to test all parts of it. Just like UFT, TestComplete provides two ways to view the test; in TestComplete they are called keyword tests and script tests. The keyword view provides a way for users with little programming knowledge to use the tool.

• Same looking objects on the screen

This was no issue, as all objects could be identified on component level.

• Different screen resolutions

Can switch between resolutions without problems.

• Find objects not visible on the screen

This was no issue as all objects could be identified on component level.

7.6 Squish

Squish was the only tool that could fully execute the whole scenario directly after recording, even after restarting the operating system. This should mean that Squish uses other properties than the ID given by the operating system to identify objects.

• Execute the whole scenario

Squish was able to execute the whole scenario using only component level object identification.

• Installing the tool

Trivial installation.

• Dependence on other software

Squish does not require any additional installed software.

• Documentation

Good online documentation.

• Verification facilities

Verification points can be used to verify properties of objects and data in tables, or to compare images.

• Report functions

After the execution has terminated, Squish provides a log that reports execution time, which steps succeeded and failed, and which verification points succeeded and failed. Compared to UFT and TestComplete, the result log is not as extensive.

• Complexity

Squish supports the use of Behaviour Driven Development (BDD) [26] by allowing the user to first write a Gherkin [26] file. This file is then used as a guide when recording the test scenario. This approach makes Squish easy to use without any programming knowledge. For users who want to fine-tune the automation, it is possible to access the implementation code of every automated step.

• Same looking objects on the screen

This was no issue, as all objects could be identified on component level.

• Different screen resolutions

Can switch between resolutions without problems.

• Find objects not visible on the screen

This was no issue as all objects could be identified on component level.

7.7 Visual Studio Coded UI

Visual Studio Coded UI also uses component level R&R. Compared to the other tools, the code generated by Visual Studio Coded UI is far larger. Instead of generating one script file, VS Coded UI divides the code into multiple files. The generated code could not execute the test scenario as it was; the tool often failed to identify the objects interacted with. After editing the code to correctly identify the objects, the whole test scenario could be completed.

• Execute the whole scenario

Visual Studio Coded UI was able to complete the whole test scenario using only component level object identification.

• Installing the tool

Coded UI is part of the Visual Studio package. Visual Studio is easy to install, but it must be installed on the same disk as the operating system.


• Dependence on other software

Except for Visual Studio, no additional software needs to be installed.

• Documentation

Good online documentation.

• Verification facilities

Assertions can be added during recording. These assertions can be used to verify properties of any object Coded UI can identify.

• Report functions

By default, the reports in Coded UI are simple and only provide basic information about whether the test succeeded or failed and which line in the code caused the failure. More informative reports can be generated, but that requires the user to manually edit configuration files.

• Complexity

Visual Studio Coded UI is the most complex of the tested tools. The tool provides no easy-to-read view. Considerable programming knowledge is required to understand and edit the generated code. Editing the code was in most cases necessary in order to get the test to execute.

• Same looking objects on the screen

This was no issue, as all objects could be identified on component level.

• Different screen resolutions

Can switch between resolutions without problems.

• Find objects not visible on the screen

This was no issue as all objects could be identified on component level.

7.8 Result tables

Table 1: Results for the pure VGT tools. Results for the tool TestingWhiz are marked as N/A, as it was not further evaluated after it could not perform the first steps of the scenario.

1. Execute the whole scenario
SikuliX: Yes. EyeAutomate: Yes. Eggplant: No, due to problems with the trial license. TestingWhiz: No.

2. Installing the tool
SikuliX: Easy. EyeAutomate: Easy. Eggplant: Needs to be installed on a separate system from the SUT. TestingWhiz: Easy.

3. Dependence on other software
SikuliX: Java 8. EyeAutomate: Java 8. Eggplant: VNC software. TestingWhiz: N/A.

4. Documentation
SikuliX: Good documentation and tutorials. EyeAutomate: Good documentation and tutorials. Eggplant: Good online documentation. TestingWhiz: N/A.

5. Verification facilities
SikuliX: Limited; must be done by image comparison. EyeAutomate: Limited; must be done by image comparison; provides a tool for text recognition. Eggplant: Limited; must be done by image comparison; provides a tool for text recognition. TestingWhiz: N/A.

6. Report functions
SikuliX: Limited. EyeAutomate: Good step by step report with screen captures. Eggplant: Can, after customization, provide a report based on asserts. TestingWhiz: N/A.

7. Complexity
SikuliX: Simple; requires Python knowledge for more advanced things. EyeAutomate: Easy to learn; limited possibilities for writing advanced scripts. Eggplant: Complex setup; after setup the recording is easy to use. TestingWhiz: N/A.

8. Same looking objects on the screen
SikuliX, EyeAutomate and Eggplant: Requires capturing a larger image that includes more of the surroundings. TestingWhiz: N/A.

9. Different screen resolutions
SikuliX: The same resolution must be used for capturing images and executing. EyeAutomate: Can switch between resolutions without much problem. Eggplant: Can switch between resolutions without much problem. TestingWhiz: N/A.

10. Find objects not visible on the screen
SikuliX, EyeAutomate and Eggplant: Must implement scrolling. TestingWhiz: N/A.

Table 2: Results for the component level R&R tools.

1. Execute the whole scenario
UFT: Yes. VS Coded UI: Yes. Squish: Yes. TestComplete: Yes.

2. Installing the tool
UFT: Easy. VS Coded UI: Must be installed on the C drive on Windows. Squish: Easy. TestComplete: Easy.

3. Dependence on other software
UFT: None. VS Coded UI: Visual Studio Professional. Squish: None. TestComplete: None.

4. Documentation
UFT: Good documentation and extensive customer support program. VS Coded UI: Good online documentation. Squish: Good online documentation. TestComplete: Good online documentation.

5. Verification facilities
UFT: Checkpoints are used to compare the desired and actual state of the system. VS Coded UI: Assertions are added during recording and used to verify properties of objects. Squish: Verification points can be used to verify properties of objects and data in tables, or to compare images. TestComplete: Checkpoints are used to verify the state of objects.

6. Report functions
UFT: Detailed report with screenshots for each step in the scenario; can be viewed in the IDE or exported as HTML. VS Coded UI: By default only information about whether a test failed or succeeded is shown; a more detailed report can be accessed by editing a configuration file. Squish: The report shows details of which steps and verification points succeeded and failed; the log is not as detailed as some of the other tools'. TestComplete: Detailed log with screenshots for each step taken both during recording and execution; the log can be exported as MHT, HTML, XML, PDF or tcLogX.

7. Complexity
UFT: Very complex tool; intuitive record and replay functionality; editing scripts requires experience with UFT. VS Coded UI: The most complex of the tools; requires extensive programming knowledge and familiarity with Visual Studio. Squish: The use of BDD makes the tool possible to use without previous programming knowledge. TestComplete: Complex tool similar to UFT; keyword view can be used by users with little programming knowledge and script view by more experienced programmers.

8. Same looking objects on the screen
UFT: Most objects could be identified on component level, and for them appearance was not an issue; other objects could be identified using VGT by capturing a large enough image. VS Coded UI: No issue, as all objects could be identified on component level. Squish: No issue, as all objects could be identified on component level. TestComplete: No issue, as all objects could be identified on component level.

9. Different screen resolutions
UFT, VS Coded UI, Squish and TestComplete: Can switch between resolutions without problems.

10. Find objects not visible on the screen
UFT: No issue as long as the object could be identified on component level; for the objects that needed to be identified visually, scrolling had to be automated. VS Coded UI: No issue, as all objects could be identified on component level. Squish: No issue, as all objects could be identified on component level. TestComplete: No issue, as all objects could be identified on component level.

8 Discussion

After testing and evaluating all the tools, my strongest impression is that the image recognition tools are useful in situations where access to the objects on component level is impossible, but for the most part they only over-complicate the automation task. I experienced that the execution time for the image recognition tools was longer. This could be an important factor, for example in automatic deployment, when you want to run a full regression test suite when deploying new code; if testing takes too long, this could be a hindrance. In other cases it may not be an important factor, for example when running overnight tests that will finish before the morning anyway.

Comparing the different component level R&R tools, all of them have a recording functionality to automatically generate a script. These generated scripts varied a lot in usefulness. Squish was able to generate a fully functional script that could execute the whole test scenario. All other tools needed some form of adjustments to the scripts before the scenario could be completed. The ease of understanding and adjusting the scripts also varied a lot between the tools. The script generated by VS Coded UI was by far the largest and required programming knowledge and at least some familiarity with the Visual Studio IDE to understand.

A complex tool that takes time to learn is not necessarily something bad. If the tool is to be adopted by a company to be used for a long time forward, the users will have time to learn the tool. Automation of tests is not something that should be seen as a one time task, but rather as a continuous job and a part of managing a system.

Something that is important to consider when automating tests is how changeable the system is. The VGT tools are very sensitive to change, something also mentioned by the Spotify staff in Alégroth et al.'s study presented in section 4.1. Adding a new row to a table can cause the image recognition tool to no longer find a match, even if the correct data is still there. Tools working on component level would still be able to find the correct data.

A test scenario was developed and could be fully automated with six of the tools, using both component level R&R and VGT, completing G1. The techniques Component Level Record and Replay and Visual GUI Testing have been compared, both by testing tools using the techniques and by studying related work and comparing their results to mine, completing G2. Eight tools have been tested and evaluated based on ten criteria, completing G3. Due to time constraints, the possibility of including any of the tools in Trafikverket's ALM has not been explored, so G4 is not completed.

8.1 Reflection

In hindsight, the method I used to analyze the test automation tools had many problems. Using evaluation versions of licensed software was an easy way to get access to a large number of tools, but it also made it impossible to go back and redo an experiment where I felt I needed more data. Planning the testing phase more thoroughly before starting could probably have mitigated the problems. I have learned that this type of research is hard to do in a strictly sequential order, first planning, then testing, then analyzing, without the option to go back and redo a step.

Only measuring the tools with binary values (whether they can or cannot do something) makes it hard to compare two tools to each other. In my evaluation, many tools were able to complete the scenario, had good documentation, and had ways to verify data and report results. Without a method to quantify these properties, and while missing data such as time spent coding and execution time, it was hard to find a way to compare the tools.

9 Conclusions

Comparing visual GUI testing to traditional component level R&R has shown that there is no cookie-cutter tool; the different techniques have different applications. Tools that operate on component level are more efficient to work with and will work best in most scenarios. In scenarios where objects cannot be identified on component level, VGT tools are a good option. The VGT tools are slower to work with and require more manual work to verify results, but there are few tasks they cannot do. They also have the advantage over component level tools of interacting with the SUT the same way a human user would.

The evaluation work done within the scope of this project was not enough to give an answer to which individual tool performed best. In order to do so, a lot more quantifiable data would be needed. What this project did manage to do was to give an idea of the strengths and weaknesses of the two types of techniques.

10 Future work

Measuring the execution times for the different tools would be a valuable addition to the study. During the test runs the component level tools appeared faster, but more runs focusing on execution time would be needed in order to draw any conclusions. It would also be valuable to study how efficient the tools are to work with for experienced users. As this project was done by a single person with little to no experience with the tools and languages, my experience may be significantly different from that of someone who is more experienced with the tools.

Comparing the long term implications of using the tools would probably be the most interesting and valuable angle on the subject. There it would be possible to compare how much time is needed to develop and maintain tests with the different techniques. Finally, testing more applications with different architectures would help give a better answer to which tool to use in which situation.

Bibliography

[1] A. Butterfield and G. E. Ngondi, "regression testing." [Online]. Available: http://www.oxfordreference.com/view/10.1093/acref/9780199688975.001.0001/acref-9780199688975-e-4422

[2] M. Ellims, J. Bridges, and D. C. Ince, "The economics of unit testing," Empirical Software Engineering, vol. 11, no. 1, pp. 5–31, Mar 2006. [Online]. Available: https://doi.org/10.1007/s10664-006-5964-9

[3] V. Garousi and F. Elberzhager, "Test automation: Not just for test execution," IEEE Software, vol. 34, no. 2, pp. 90–96, 2017.

[4] D. M. Rafi, K. R. K. Moses, K. Petersen, and M. V. Mäntylä, "Benefits and limitations of automated software testing: Systematic literature review and practitioner survey," in 2012 7th International Workshop on Automation of Software Test (AST). IEEE, 2012, pp. 36–42.

[5] E. Alégroth, "Visual gui testing: Automating high-level software testing in industrial practice," Ph.D. dissertation, Chalmers University of Technology, 9 2015.

[6] E. Borjesson and R. Feldt, "Automated system testing using visual gui testing tools: A comparative study in industry," in 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation, April 2012, pp. 350–359.

[7] "Selenium [Computer software]," http://www.seleniumhq.org/, Mar 2018.

[8] "Eggplant [Computer software]," https://www.testplant.com/, Mar 2018.

[9] Trafikverket, "ProjectWise/IDA i Trafikverket," February 2018. [Online]. Available: https://www.trafikverket.se/tjanster/system-och-verktyg/projekthantering/ProjectWise-i-Trafikverket/

[10] Y. Cheon and G. T. Leavens, "A simple and practical approach to unit testing: The jml and junit way," in ECOOP 2002 — Object-Oriented Programming, B. Magnusson, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002, pp. 231–255.

[11] M. E. Delamaro, J. C. Maldonado, and A. P. Mathur, "Interface mutation: an approach for integration testing," IEEE Transactions on Software Engineering, vol. 27, no. 3, pp. 228–247, Mar 2001.

[12] L. Briand and Y. Labiche, "A uml-based approach to system testing," Software and Systems Modeling, vol. 1, no. 1, pp. 10–42, Sep 2002. [Online]. Available: https://doi.org/10.1007/s10270-002-0004-8


[13] B. Erinle and I. ebrary, Performance testing with JMeter 2.9, 1st ed. Birmingham, UK: Packt Pub, 2013.

[14] US-CERT, "Introduction to information security," Nov 2019. [Online]. Available: https://www.us-cert.gov/sites/default/files/publications/infosecuritybasics.pdf

[15] J. Rubin and D. Chisnell, Handbook of usability testing: how to plan, design, and conduct effective tests, 2nd ed. Indianapolis, IN: Wiley Pub, 2008.

[16] I. Yoon, A. Sussman, A. Memon, and A. Porter, “Testing component compatibility in evolving configurations,” Information and Software Technology, vol. 55, no. 2, pp. 445–458, 2013.

[17] L. Pobereznik, “A method for selecting environments for software compatibility testing.” Polish Information Processing Society, 2013, pp. 1355–1360.

[18] M. Jovic, A. Adamoli, D. Zaparanuks, and M. Hauswirth, "Automating performance testing of interactive java applications," in Proceedings of the 5th Workshop on Automation of Software Test, ser. AST '10. New York, NY, USA: ACM, 2010, pp. 8–15. [Online]. Available: http://doi.acm.org.ezproxy.its.uu.se/10.1145/1808266.1808268

[19] E. Alégroth, R. Feldt, and L. Ryrholm, "Visual gui testing in practice: challenges, problems and limitations," Journal of Empirical Software Engineering, vol. 20, no. 3, pp. 694–744, 2015.

[20] Z. Gao, C. Fang, and A. M. Memon, “Pushing the limits on automation in gui regression testing.” IEEE, 2015, pp. 565–575.

[21] E. Alégroth and R. Feldt, "On the long-term use of visual gui testing in industrial practice: a case study," Empirical Software Engineering, vol. 22, no. 6, pp. 2937–2971, Dec 2017. [Online]. Available: https://doi.org/10.1007/s10664-016-9497-6

[22] "Unified functional testing (uft) pricing," Sep 2018. [Online]. Available: https://software.microfocus.com/en-us/products/unified-functional-automated-testing/pricing

[23] "Visual studio pricing," Sep 2018. [Online]. Available: https://visualstudio.microsoft.com/vs/pricing/

[24] "Squish licensing and prices," Sep 2018. [Online]. Available: https://www.froglogic.com/squish/prices/


[26] M. Wynne, A. Hellesøy, and S. Tooke, The cucumber book: behaviour-driven development for testers and developers, 2nd ed. Raleigh, North Carolina: Pragmatic Bookshelf, 2017.
