Traveling Into The Future: The Development of a Currency Exchange Application

Academic year: 2021



Abstract

This thesis examines the experiences and results regarding the development of the web application Cash-on-Arrival, an online currency exchange service where the customer can order their currency exchange ahead of their travel to save money compared to exchanging their money at the time of arrival to their destination. The thesis mainly focuses on answering the research questions raised concerning how a secure and usable web application can be developed as well as how it can save money for travelers.

In order to establish what design and features customers prefered, a market research in the form of a survey and market plan was done in the pre-study phase. Furthermore, the thesis continues with presenting how the implementation of features and solutions were performed. The result of the development is the web application Cash-on-Arrival where focus was on the security and usability of the application. Conclusions drawn were the need for a secure site, with a quick and efficient buying process, to encourage users to utilize the service. The implementation of the currency exchange optimization has not been fully answered.


Contents

1 Introduction
  1.1 Aim
  1.2 Research question
  1.3 Delimitations
2 Theoretical framework
  2.1 Technical aspects
    2.1.1 Prediction
    2.1.2 Security
    2.1.3 Database
    2.1.4 Session and cookies
    2.1.5 Single page application (SPA)
  2.2 Usability
    2.2.1 Graphic design
    2.2.2 User interface design
  2.3 Development process
    2.3.1 Version control
    2.3.2 Code refactoring
    2.3.3 Testing
  2.4 Surveys
    2.4.1 Market survey
    2.4.2 User experience survey
  2.5 Prototyping
3 Method
  3.1 Pre-study
    3.1.1 Market research
    3.1.2 Project plan
    3.1.3 Prototype
  3.2 Implementation
    3.2.1 Technical
    3.2.2 Usability
    3.2.3 Development process
  3.3 Evaluation of results
4 Results
  4.1 Pre-study
    4.1.1 Market research
    4.1.2 Prototype
  4.2 Implementation
    4.2.1 Technical
    4.2.2 Usability
    4.2.3 Development process
  4.3 Evaluation user experience
5 Discussion
  5.1 Results
    5.1.1 Security
    5.1.2 Usability
    5.1.3 Development process
    5.1.4 Prediction
  5.2 Method
  5.3 The work in a wider context
    5.3.1 Environmental
    5.3.2 Personal integrity
    5.3.3 Profitability
  5.4 Source criticism
6 Conclusion
  6.1 Prediction
  6.2 Security
  6.3 Usability
  6.4 Development Process
7 Future Work
References
A Risk analysis
B Marketing plan
C Marketing research survey results
D User experience survey results
E Group contract
F Project plan


List of Figures

3.1 Sprint 1 backlog in Trello
3.2 Product backlog in Excel
3.3 Branching Strategy
4.1 Prototype Design
4.2 The service’s home page
4.3 Timeline of loading the web application
4.4 Password Strength
4.5 Email Verification
4.6 Admin Page
4.7 Model of the database used in the project
4.8 Buying process
4.9 JavaScript before refactoring
4.10 JavaScript after refactoring
4.11 UX survey answer to: The website was clear and easy to navigate
4.12 UX survey answer to: The main page gave me a clear idea about the service
4.13 UX survey answer to: Making a purchase was easy
4.14 UX survey answer to: I would use this service
4.15 UX survey answer to: The website feels reliable and safe


Chapter 1

Introduction

Once a luxury for a select few, international travel has today become both affordable and a common part of regular consumers' lives (see further in appendix B). International flights have become cheaper and can today cost as little as 10 EUR, about the same price as a dinner for one. Traveling internationally often requires travelers to exchange currency. The difference between making this exchange in the airport's arrival hall and making it a couple of weeks in advance can prove costly for the traveler, as the example below shows. Since the exchange market is volatile and therefore hard to predict accurately, most travelers find it difficult to decide on what date to exchange currencies in order to maximize savings. Can a web application help international travelers save money by exchanging their money at an optimized time and date?

Cash-on-Arrival

Cash-on-Arrival is a single-page web application where the user can exchange currencies. The customer selects a date, destination and airport together with the amount that they want to convert. When the exchange takes place is decided by the output of a prediction model. The prediction model originates from a machine learning algorithm that uses historical data on the development of the currency's exchange rates. When using the web application, the customer transfers money to Clementine, and the service ensures that the exchange takes place at the right time to save as much money as possible for the customer. A future aim for the service is to offer agreements with a selected number of partner banks where the customer can retrieve the exchanged money. The customer can cancel the transaction up until the date on which the exchange is made; after that, a customer who wants to cancel will have to pay the partner bank for the re-conversion.

The business idea is to take a percentage of the profits obtained from exchanging the money at an optimal rate in comparison with the customer doing so on the day of travel. Future partnerships with banks across the world would make it possible to outsource the cash management to these banks and focus solely on the development of the prediction algorithm and the implementation of the web application.

Benefits of the service

An illustrative example of the service follows. Consider a Swedish tourist traveling to Russia in a month. Typically the person would have three options to exchange money: exchanging at a bank in advance, exchanging at the airport on the day of departure/arrival, or withdrawing money with a credit card upon arrival. Notably, the first two options include relatively large transaction fees, due to the costs incurred by the banks handling foreign currencies. Using the card for withdrawal will also incur surcharges. More importantly, the customer is forced to buy currency at the exchange rate at the time of traveling. If the exchange rate is favorable at an earlier date, the person will miss this opportunity to save money. The idea is to provide the customer with a service that exchanges money at an optimal exchange rate, while avoiding the costs incurred by handling foreign currency and minimizing transaction fees by exchanging large amounts of currency at the same time. When customers arrive at the destination airport, they can easily collect their cash, exchanged at an optimal rate and with minimal transaction fees.


1.1 Aim

The objective of this thesis is to develop a secure and usable web application that saves money for travelers exchanging currency.

1.2 Research question

The following research questions support the aim and make it measurable:

• How can a web application for currency exchange save money for travelers?

• How can a secure and usable web application be developed?

1.3 Delimitations

The thesis covers the development and implementation of the web application; it concerns neither the actual implementation of the business plan nor the development of the prediction algorithm. The prediction model will be integrated if time constraints allow.

The web application is a mock service to illustrate the idea; users will therefore not be able to use it to actually exchange money. The functionality of the application focuses on usability and security. The security aspect concentrates mainly on secure handling of users' passwords. Usability concentrates mainly on navigation, color and design. Academic theory as well as user surveys have been used to ensure a high standard in these aspects.

Another important part of this work is how the development process was carried out. The development process follows an agile work method and is also described and discussed in the thesis.


Chapter 2

Theoretical framework

In this chapter the theoretical background needed for the thesis is presented. The theory is divided into technical aspects, usability, development process, surveys and prototyping.

2.1 Technical aspects

For readers not acquainted with web development, a quick summary of the foundational technologies follows. The web pages of the World Wide Web (WWW) are based on three techniques: HyperText Markup Language (HTML), Cascading Style Sheets (CSS) and JavaScript. HTML is used for structuring the content of web pages, CSS for the layout and design of that content, and JavaScript for making sites dynamic and interactive. These are called front-end techniques. They are processed by the client and directly affect the user's experience of the website.

The client is served the HTML, CSS and JavaScript code from a server. While the entire WWW conforms to the same front-end techniques, there are multiple different back-end techniques used on the server to handle client requests. Programming languages commonly used for the back-end are PHP, ASP, Java and Python. Frameworks are code libraries used with these languages to automate common server functions so that developers can focus on the unique functions of their web application. This is done by writing the specific functions required and called by the frameworks to handle certain events.

In addition to the above techniques, template engines are used to dynamically manipulate the HTML code served to the client by the server, and Asynchronous JavaScript And XML (AJAX) is a technique used by the client to send and receive data from the server without reloading an entirely new HTML, CSS and JavaScript page.

2.1.1 Prediction

This work focuses on the development of a web application, hence the somewhat concise explanation of the prediction theory and model, aimed at readers already familiar with the topic. Predictions using artificial neural networks (ANN) for currency exchange have been made before (Bekiros & Marcellino, 2013). The focus is on ANN because it has proven to be more accurate than previous methods (Khashei & Bijari, 2010). Studies argue that using a wavelet design of the input signal used in the ANN enhances the ability to detect noise and produces a more accurate prediction (Bekiros & Marcellino, 2013; Dahlén & Åström, 2016).

Artificial neural networks (ANN)

The neural network model is a computational technology whose foundations were laid in 1958 by Frank Rosenblatt (Rosenblatt, 1958). The model was inspired by the brain's neurons and uses their basic concept to, for instance, visualize data and recognize objects (Rosenblatt, 1958). ANN provides a way of examining large amounts of data and making predictions about certain data based on historical and predicted data. This method is extensively used for examining different economic and financial models and parameters (Zhang & Berardi, 2001; Shachmurove, 2005). Neural networks are categorized into feedback and feedforward networks. Back propagation is one of the most popular neural network training algorithms, mainly because it is simple, computationally efficient and often works (Rumelhart et al., 1988).
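To make the feedforward-network-with-backpropagation idea concrete, a deliberately tiny one-hidden-layer network can be trained in pure Python to predict the next value of a toy series from its two previous values. The series, network size and learning rate are all illustrative; the thesis's actual model (wavelet decomposition plus ANN) is far more elaborate.

```python
import math
import random

random.seed(0)

# Toy "exchange rate" series: a smooth sine wave (illustrative only).
series = [0.5 + 0.3 * math.sin(0.3 * t) for t in range(60)]

# Supervised pairs: the two previous observations predict the next one.
data = [((series[t - 1], series[t]), series[t + 1])
        for t in range(1, len(series) - 1)]

H, lr = 4, 0.5  # hidden units, learning rate
w1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]  # input->hidden (+bias)
w2 = [random.uniform(-0.5, 0.5) for _ in range(H + 1)]                  # hidden->output (+bias)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    return h, sum(w2[j] * h[j] for j in range(H)) + w2[H]  # linear output unit

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
for _ in range(200):                              # epochs of plain SGD
    for x, target in data:
        h, y = forward(x)
        err = y - target                          # gradient of 0.5*(y-t)^2 w.r.t. y
        # Backpropagate the error to the hidden layer (using the current output weights).
        deltas = [err * w2[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):                        # output-layer update
            w2[j] -= lr * err * h[j]
        w2[H] -= lr * err
        for j in range(H):                        # hidden-layer update
            w1[j][0] -= lr * deltas[j] * x[0]
            w1[j][1] -= lr * deltas[j] * x[1]
            w1[j][2] -= lr * deltas[j]
loss_after = mse()
```

Repeatedly applying the chain rule backwards through the network, as above, is all "back propagation" means; real forecasting models differ in scale and input design, not in this basic mechanism.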

Machine learning

The machine learning domain can be further classified into smaller subgroups; some of the more notable examples are decision tree learning, clustering, Bayesian networks, support vector machines and artificial neural networks. A prediction using ANN can be helpful when predicting exchange rates thanks to ANN's capability to fit nonlinear functions (Almeida, 2002). The aim of the ANN prediction model is to obtain a better result than benchmark models, such as the random walk, in economic time series (Kristjanpoller et al., 2014). The ANN model is not universally better than previous methods, but many recent papers investigate its benefits and applications. When applied to international stock return volatility, the ANN seems to be the best fit (Donaldson & Kamstra, 1997).

Prediction Method

The model used in this paper for forecasting the currency continues the previous work done by Bekiros and Marcellino (2013). Moreover, the model follows the wavelet design used in the same paper, where they decompose the input signal, which in this case is the exchange rate, into different wavelets using SIDWT (Shift Invariant Discrete Wavelet Transform). In short, the model is separated into two steps: the first step decomposes the input signal, and the second feeds the decomposed signal to the ANN. For more extensive details about the wavelet model see Bekiros and Marcellino (2013) and Dahlén and Åström (2016), and for more details about the model as a whole see Dahlén and Åström (2016).

2.1.2 Security

According to Antunes and Vieira (2012) the two most common security risks for a web application are SQL injection and cross-site scripting, while OWASP (2013) names injection, cross-site scripting, and broken authentication and session management as the top three threats. The security risk with injection is that an attacker can alter SQL queries sent to the database, making it possible to retrieve unauthorized data. Huang et al. (2003) mention that security can be increased through high-level input validation. According to Vogt et al. (2007) the security risk with cross-site scripting is that it lets attackers run their own scripts on the client side of the application, enabling attackers to retrieve information or perform malicious tasks. Prevention of cross-site scripting can be implemented on the client side by ensuring that all user input is properly escaped and that input validation is used (OWASP, 2013). The risk with broken authentication and session management is that authentication credentials are not well protected while stored (OWASP, 2013). How to protect credentials is presented below in the sections “Choosing a secure password” and “Password storage”. Theory about session management is found in section 2.1.4.
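The standard defence against the injection risk described above is to never build SQL by string concatenation and instead pass user input as bound parameters, which the database driver escapes. A minimal sketch using the standard library's sqlite3 module (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com', 'x')")

attack = "' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the WHERE
# clause so it matches every row in the table.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE email = '" + attack + "'").fetchall()

# Safe: as a bound parameter the payload is treated as a literal string,
# which matches no row.
safe = conn.execute(
    "SELECT * FROM users WHERE email = ?", (attack,)).fetchall()

print(len(vulnerable), len(safe))  # 1 0
```

The same parameter-binding idea applies to any SQL driver or ORM, and it complements (rather than replaces) the input validation mentioned above.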

Another security risk, which is less common, is the man-in-the-middle attack (Callegati et al., 2009; Karlof et al., 2012). The risk is that an attacker intercepts information being sent between two parties; for example, when an email is sent during a registration process the attacker can intercept the email and alter its content (Karlof et al., 2012).

Choosing a secure password

Choosing a strong password for any online account is an important security aspect, preventing attackers from brute-forcing the password. To set a secure password, commonly used passwords should be avoided. Common passwords include words found in dictionaries, names and birth dates. Stronger passwords are longer and more complex. For example, the minimum password requirements at leading technology companies demand at least 6 characters, including symbols, digits and punctuation (Bonneau et al., 2015; Sriramya & Karthika, 2015; Furnell, 2014; Zhao & Yue, 2014). Already in 1979, Morris and Thompson (1979) discussed the importance of password strength: using only lower-case letters results in 26^n possible passwords of length n, whilst a password drawn from all types of “writable characters” increases the search space to 95^n.
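The 26^n versus 95^n arithmetic above can be turned into a rough strength heuristic: estimate the alphabet actually used and raise it to the password length. This is only a back-of-the-envelope sketch (real strength meters also check dictionaries and patterns); note that Python's `string.punctuation` has 32 symbols, so the full printable set here is 94 characters (95 minus the space).

```python
import string

def search_space(password: str) -> int:
    """Rough brute-force search space implied by the character classes used."""
    alphabet = 0
    if any(c in string.ascii_lowercase for c in password):
        alphabet += 26
    if any(c in string.ascii_uppercase for c in password):
        alphabet += 26
    if any(c in string.digits for c in password):
        alphabet += 10
    if any(c in string.punctuation for c in password):
        alphabet += len(string.punctuation)  # 32 symbols
    return alphabet ** len(password)

weak = search_space("password")    # lower-case only: 26**8
strong = search_space("pA5sw0r!")  # all four classes: (26+26+10+32)**8 = 94**8
```

At equal length the mixed-class password's search space is several thousand times larger, which is exactly the effect Morris and Thompson describe.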



To aid a user in choosing a strong password, a website can provide password guidance and in some cases enforce it to ensure the guidance is met. Password guidance and enforcement can be done through visual effects during or after typing the password, often through text that explains what characters the site requires. Moreover, once logged in, some websites require the user to first type in the old password before changing it to something new. (Furnell, 2014)

Password storage

Password storage is common for websites that require their users to register an account. Safely storing passwords is important since attacks are frequently made to access the databases where users' passwords are stored (Zhao & Yue, 2014; Sriramya & Karthika, 2015). Passwords are stored in many different ways, with different levels of safety. The simple and less safe ways to store a password are plain text or basic encryption functions. A more secure way of storing a password is salted password hashing (Sriramya & Karthika, 2015). Encrypting a complex password with salt was recommended as early as 1979 (Morris & Thompson, 1979).

Password hashing, as mentioned above, is a one-way function, meaning that it is considered close to impossible to revert. Another benefit is that hashing produces an output of fixed length that can be stored in a database, making it harder for an attacker to guess the length of the unhashed password. One of the properties of a hash function is that if the input is changed slightly, the output will be completely different. An issue with hash functions is that the same input produces the same output, meaning that users with the same password would have the same hash. The consequence is that an attacker, just by breaking one of these passwords, would gain access to all of the accounts. Also, most password hashing algorithms are designed so that brute forcing takes a long time, making it time consuming for an attacker to guess all the passwords. (Sriramya & Karthika, 2015)

To tackle the problems that arise from password hashing, specific steps, discussed below, are taken and implemented in certain algorithms, like bcrypt or PBKDF2 (Mirante & Cappos, 2013). The first step is to make every input into the hash function unique; this matters because equal inputs to a hash function produce equal outputs. To achieve this, the password is salted before hashing. Password salting means that a long, randomly generated string is added to the user's password; since the added string is so long, it is highly improbable that two inputs to the hashing algorithm are equal, and therefore unlikely that passwords that were initially the same end up with the same hash output. The next step is key stretching through iterated hashing: the more iterations the algorithm performs, the longer it takes to complete, scaling linearly with the iteration count. The desired result is that the computation becomes slow enough to increase the time it takes for an attacker to brute force a password. It is important, though, that this is not done at the expense of user experience, since it also takes the system longer to authenticate the password (Zhao & Yue, 2014; Sriramya & Karthika, 2015).
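The salt-plus-key-stretching scheme described above can be sketched with the standard library's PBKDF2 implementation. The parameter values (salt length, iteration count) are illustrative, not a recommendation; real deployments should follow current guidance.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    salt = os.urandom(16)  # long random salt: makes every hash input unique
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest    # both are stored alongside the user record

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 100_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse")
assert verify_password("correct horse", salt, digest)
assert not verify_password("wrong guess", salt, digest)

# Equal passwords no longer collide: a second user with the same password
# gets a different salt and therefore a different stored digest.
salt2, digest2 = hash_password("correct horse")
assert digest2 != digest
```

The `iterations` parameter is the key-stretching knob: each extra iteration slows both the attacker's brute force and the server's own verification by the same linear factor, which is the user-experience trade-off noted above.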

Password security on major websites

During 2013, many of the big websites did not apply the best security practices, which are salting and hashing of passwords. According to Mirante and Cappos (2013) many websites do not follow best security practice. For example, when Yahoo was hacked in July of 2012, and the UN in November of 2011, they stored the passwords in plain text, making it very easy for hackers to gain access to the users' passwords. When LinkedIn was hacked in 2012 they used an unsalted SHA1 algorithm, which made it a little harder for hackers to retrieve the passwords than in the Yahoo and UN breaches, but it still only took a few hours for the hackers to crack the passwords. Citing Mirante and Cappos (2013) on how password handling is managed on many major companies' websites: ”While absolutely no information on how the passwords were stored could be found in 26.5 percent of the cases, we found 11.8 percent reported passwords were “Hashed and Salted”, 5.9 percent used salted MD5, 14.7 percent used unsalted MD5, 11.8 percent used salted SHA1, while unsalted SHA1, SHA256 salted, crypt(3) salted, and bcrypt each accounted for 2.9 percent. Plaintext use was noted in 17.6 percent of the site breaches”. Having the best security measures on a website is uncommon, and even if the best security practices are applied, a website is never a hundred percent safe from attacks. (Mirante & Cappos, 2013)


2.1.3 Database

Almost every web application needs a database to store information. A multitude of different database management systems exist, the main modern architectural types being relational database management systems (RDBMS), object-oriented database management systems (OODBMS) and object-relational database management systems (ORDBMS) – hierarchical and network models being obsolete (Kifer et al., 2005). RDBMS is the most common database management system of today, with technologies such as Oracle Database, Microsoft SQL Server, MySQL and PostgreSQL being among the dominant ones (Jukic et al., 2014).

The use of modern object-oriented programming languages together with relational database management systems causes a number of issues referred to as the object-relational impedance mismatch, and requires object-relational mapping (ORM) to translate the objects of the object-oriented programming language to relational data in the RDBMS and back. Another way to solve this is the OODBMS, which was created for the specific task of handling objects. OODBMS is more suitable for dealing with very complex data. In these systems data is represented in the form of objects rather than the relational tables used in RDBMS. OODBMS have not gained any serious ground compared to RDBMS, except in niche areas where the ability to handle very complex data is important. Finally, ORDBMS is a compromise between RDBMS and OODBMS, adding support for object-oriented features to an RDBMS foundation. PostgreSQL is such a system. (Jukic et al., 2014)
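The translation work an ORM automates can be illustrated with a minimal hand-rolled mapping: a Python object on one side, a relational row on the other. Libraries such as those named below generate this kind of code (and much more) from model declarations; the `Order` class and its fields here are purely illustrative.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Order:
    id: int
    currency: str
    amount: float

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, currency TEXT, amount REAL)")

def save(order: Order) -> None:
    """Object -> relational row."""
    conn.execute("INSERT INTO orders VALUES (?, ?, ?)",
                 (order.id, order.currency, order.amount))

def load(order_id: int) -> Order:
    """Relational row -> object."""
    row = conn.execute(
        "SELECT id, currency, amount FROM orders WHERE id = ?",
        (order_id,)).fetchone()
    return Order(*row)

save(Order(1, "RUB", 2500.0))
assert load(1) == Order(1, "RUB", 2500.0)
```

The impedance mismatch shows up as soon as objects reference other objects (relationships, inheritance, identity), which is precisely the bookkeeping a full ORM takes over.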

As for ORMs for Python, there are for example the Django ORM, SQLAlchemy and Peewee. The first is built into the Django web framework, while the second is in standard use together with Flask. Peewee, finally, is a minimal and customizable Python ORM. (Object-relational Mappers, 2016)

Database abstraction layers are used to unify and simplify the communication between a database and the applications using it. There are three levels of abstraction: the physical level, the conceptual or logical level, and the external or view level. The reason for database abstraction levels is the same as for abstraction in general programming: to remove the need for users to understand the actual inner workings of the application and simply provide an interface for using it (Glass et al., 2004). The level of concern for web application development is the external level, as the others concern the implementation of the database system. It is at the external level of the database management system that developers can implement their own interface for the operations their application needs.

Since a database tends to store a great deal of data, it is important to store the data efficiently. Data redundancy increases disk space usage without adding any advantages. In addition, data redundancy can cause issues when the database is updated, as redundant data needs to be updated everywhere in the database, and failure to do so will leave the database inconsistent. The way to prevent data redundancy in RDBMS is normalization. Normalization is based on functional dependencies between elements in the relation. Different normal forms, based on requirements on the functional dependencies in the relations, have been developed. The most widespread are the three fundamental normal forms defined by Edgar F. Codd and the Boyce-Codd normal form, an addition to the third normal form proposed by Codd. (Ryan & Smith, 1995)

The first normal form (1NF) was defined as none of the relation's data domains having sets as elements, meaning that each attribute contains single, atomic values. A table in the second normal form (2NF), in addition to fulfilling the definition of 1NF, must have no non-prime attribute that depends on a proper subset of a candidate key. The third normal form (3NF) adds that every non-prime attribute must be non-transitively dependent on the candidate keys. The Boyce-Codd normal form (BCNF) slightly strengthens 3NF by demanding that for every dependency X → Y, either the dependency is trivial (Y ⊆ X) or X is a superkey (Kifer et al., 2005).
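The redundancy problem that normalization removes can be seen in a toy schema run through sqlite3: storing an airport's city in every order row repeats the same fact many times, whereas splitting out the transitive dependency order → airport → city stores it once. The table names and data are illustrative, not the thesis's schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unnormalized: the airport's city is repeated in every order row, so the
# fact "ARN is in Stockholm" exists in several places and can drift apart.
conn.execute(
    "CREATE TABLE orders_flat (order_id INTEGER, airport TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO orders_flat VALUES (?, ?, ?)",
    [(1, "ARN", "Stockholm"), (2, "ARN", "Stockholm"), (3, "SVO", "Moscow")])

# Normalized (towards 3NF): each airport/city fact is stored exactly once.
conn.execute("CREATE TABLE airports (code TEXT PRIMARY KEY, city TEXT)")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER, airport TEXT REFERENCES airports)")
conn.executemany("INSERT INTO airports VALUES (?, ?)",
                 [("ARN", "Stockholm"), ("SVO", "Moscow")])
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "ARN"), (2, "ARN"), (3, "SVO")])

# An update now touches a single row instead of every duplicated copy.
conn.execute("UPDATE airports SET city = 'Moskva' WHERE code = 'SVO'")
rows = conn.execute("""SELECT o.order_id, a.city FROM orders o
                       JOIN airports a ON o.airport = a.code
                       WHERE o.order_id = 3""").fetchall()
print(rows)  # [(3, 'Moskva')]
```

In the flat table the same update would need a WHERE clause over every order row, and missing one copy would leave the database contradicting itself.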

2.1.4 Session and cookies

According to Bortz et al. (2011) web applications are still built on old, primitive web architecture protocols; in particular, messages between the client and the server are sent back and forth using HTTP. This protocol is stateless, so the state specific to each user must be saved in so-called sessions. This is done so that the user state can be accessed and modified throughout the user's interaction with the web application.

One of the techniques for saving session state is the use of cookies. Cookies are data stored in the client browser that are later sent to the server with every request so that they can be authenticated and modified (Bortz et al., 2011). According to Lin and Loui (1998) there are two types of cookies, transient and persistent cookies. Transient cookies are only saved in the web browser and disappear when the browser is closed, and according to Lin and Loui (1998) thus pose no large privacy concern to users. Persistent cookies, on the other hand, are saved on the client's hard drive and exist indefinitely, until removed by the client.
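Since session cookies travel with every request, the server must be able to tell a genuine cookie from a forged one. A common approach, sketched here with the standard library's hmac module, is to sign the session identifier with a server-side secret and verify the signature on each request; web frameworks with signed sessions (Flask, for example) do this automatically, and the key and cookie format below are illustrative.

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # kept on the server only, never sent out

def make_cookie(session_id: str) -> str:
    """Attach an HMAC signature so the client cannot alter the identifier."""
    sig = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id + "." + sig

def verify_cookie(cookie: str):
    """Return the session id if the signature checks out, otherwise None."""
    session_id, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(sig, expected) else None

cookie = make_cookie("user-42")
assert verify_cookie(cookie) == "user-42"
assert verify_cookie("user-1." + "0" * 64) is None  # forged cookie is rejected
```

Signing only guarantees integrity, not secrecy: anything placed in the cookie is still readable on the client, which is why only an opaque identifier (not the user state itself) normally goes into it.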

HTML5 brings with it a number of new possibilities in regard to client data storage. Through the use of session storage, which is similar to a session cookie, data leaking is prevented because the session storage is kept per window rather than per browser. The data is only kept for the top-level browsing context and is discarded when the window in question is closed. Therefore it should only be used for single-session visits, for example shopping cart transactions. Otherwise there is a risk that a transaction is processed multiple times if several windows are open at the same time, as they share cookies. (West & Pulimood, 2012)

Additionally, according to West and Pulimood (2012), HTML5 web storage addresses a number of security issues with storing data on the client. They point out that with HTML5 web storage, a different domain than the one that stored the item cannot access the saved data. There are also several inherent advantages to saving data on the client side, one being that a security breach in the web application itself does not put the user data at risk, since the only point of access to the client-stored data is the user's machine itself.

2.1.5 Single page application (SPA)

A single page application is an application that uses a single web page. Mikowski and Powell (2013) argue that there are many benefits to a well-written single page application, as it can offer both the ”immediacy of a desktop application and the portability and accessibility of a web site”. In practice, a well designed single page application minimizes the loading of data from the server and transfers the processing onto the client, or browser. Moreover, a single page application is delivered as a fat client to the browser and does not reload during use (Mikowski & Powell, 2013). The application loads new information from the server dynamically to increase responsiveness. Benefits of using a single page application are:

• The need to refresh the page for every action is removed
• An SPA can incorporate more logic, which can make the client faster instead of waiting for the server to respond
• An SPA has states which the user can be notified of
• An SPA is cross-platform, like a website

2.2 Usability

According to Gehrke and Turban (1999) the most essential attributes of a website are page loading speed, content, navigation efficiency, security and customer focus. Since a potential customer might leave the website if a page takes too long to load, loading speed is an essential factor to take into account while designing a website. Furthermore, the authors argue that it is important to focus on customers and security while aiming to create a fast, simple and effective web application. The maximum time for a web page to load should never be more than 4 seconds (Performance, 2006). Efficiency and security are essential for a web application, but to attract more customers with an attractive website, Lindgaard et al. (2006) argue that web developers should put their main effort into the first impression. Some studies show that the time available to make a good first impression can be as little as 50 milliseconds (Lindgaard et al., 2006). The same report states that, out of the seven visual characteristics listed in the paper, five are strongly correlated with the overall judgment of the visual appeal of a web application: interesting-boring, good design-bad design, good color-bad color, good layout-bad layout and imaginative-unimaginative. The two characteristics with low or no correlation were simple-complex and clear-confusing. This means that the first five characteristics are the ones to focus on when establishing a good first impression of a web application.

Another article by the same author states that the first impression affects the overall judgement of other aspects of a website. Even though the participants of the experiment had no opportunity to interact with the website beyond the initial impression, they made judgements about the reliability and usability of the site. This implies that the first impression a website makes on a user is highly important. (Lindgaard et al., 2011)

2.2.1 Graphic design

An important part of the graphical design is the choice and combination of colors. According to Yee et al. (2012) one should avoid using too-bright colors, such as apple green, yellow and red, since these colors might distract and strain the user's eyes. On top of this, a mixture of bright and soft colors can be tiresome for the eye and should be avoided if possible. Furthermore, red and green do not work well together since they look alike to the colorblind. In the same paper, experimental results indicate that users prefer soft colors in the background, that blue is a good color for getting the user's attention, and that users prefer black text. (Yee et al., 2012)

Color psychology

Color psychology concerns human behavior and perception in response to colors. Every color has associations in the form of feelings, sensations and perceptions (Birren, 1961). Accordingly, in order to convey the correct message and achieve the desired image as perceived by the customer, the choice of color is crucial. Birren (1961) defines orange as a color of warmth, together with red, purple, yellow, green, blue and white. These colors are often also perceived as exciting, mystic, jovial, cheerful, peaceful, melancholy and youthful. A website using any of these colors as a primary color will, according to Boyle (2001), be perceived as a lively and energetic website.

Lane (n.d.) discusses the effects of orange from a sales perspective and concludes that orange is often associated with something cheap, but also with good value for money. Smith (2014) discusses the fact that many sports teams use orange in their logos since orange is associated with physical activity and confidence. Furthermore, they argue that the frequent occurrence of orange on websites catches the attention of the user and makes messages more noticeable, an example being Amazon, which uses orange on banners showing offers.

2.2.2 User interface design

The most fundamental thing to state about user interfaces is that if the user does not have to spend a lot of time getting to know how the UI works, it is an effective UI. It is also important that the user can use the UI intuitively and does not have to spend time trying to understand how it works. Obtaining the ideal user interface is easier said than done, and in the end the user will be the one to say whether it was successful or not (Oppermann, 2002). There are more and less successful ways to design a user interface. One recognized theory describes 10 heuristics as the key to a successful UI. These are meant to be general standards or rules which the programmer should keep in mind when designing a website. The 10 heuristics according to Nielsen (1995) are:

• Visibility of system status

The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

• Match between system and the real world

The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

• User control and freedom

Users often choose system functions by mistake and will need a clearly marked ”emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

• Consistency and standards

Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.

• Error prevention

Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

• Recognition rather than recall

Minimize the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.


• Flexibility and efficiency of use

Accelerators - unseen by the novice user - may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.

• Aesthetic and minimalist design

Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

• Help users recognize, diagnose, and recover from errors

Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

• Help and documentation

Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out and not be too large.

The article by Molich and Nielsen (1990) was something new that inspired more researchers to define their own principles, for example Martinez (2011), who argues that there are four main principles to consider in order to obtain a good user interface:

• Focus on the users instead of the technologies that are used.
• First consider the functions and then the presentation.

• Use a simple GUI.

• Focus on delivering information and make it easy for the user to learn how to use the GUI.

2.3 Development process

Scrum is an agile work methodology with few and simple rules that can be executed differently depending on the group and situation. The scrum methodology is commonly used in the software industry according to Schwaber and Beedle (2002) due to the changing and challenging demands developers face. This is because scrum works in short iterations with short-term goals, thus allowing for a quick change in project direction or focus if needed; this is an advantage in the fast-paced software industry (Cervone, 2014).

It has been shown that scrum, when well deployed, can increase team productivity, employee satisfaction and creativity. The increase in productivity compared to other ways of operating can decrease project cycle time by more than 75 percent. This leads to less time wasted in meetings and decreases the chances of excessive documentation. It is also important to have eager participants since the team is self-managed. (Schwaber & Beedle, 2002; Rigby et al., 2016)

User stories

During the sprints, short-term features are to be completed to move the project forward (Cervone, 2014; Rigby et al., 2016). The features are usually derived from a set of user stories that contain a description of what the system shall be able to do. User stories often follow the general template: ”As a (role) I want (goal) so that (benefit)”. (Cohn, 2004)

Group size

The size of a group in scrum ranges from three to nine people, and the group members are collectively supposed to have the skills required to complete the given features (Rising & Janoff, 2000). The group is self-managed and is, therefore, fully responsible for the work it produces. A key role in the group is the product owner, who in the end is responsible for ensuring that the product is of value to the customer. The product owner is to divide his or her time between coordinating with the stakeholders of the product and with the group. Knowing what gives value to the customer and having coordinated with stakeholders, the product owner is responsible for creating and ranking the features in the product backlog; this is done to prioritize which features are to be done first during each sprint. What is important to note is that the product owner does not decide who is to complete the features or how long completion should take. The group makes a road map deciding how many features they can do during a sprint - breaking the features down into smaller components called user stories - and defines what it means for a feature to be done.

Definition of done

The definition of done is an important part of the agile software development process, as it defines when a feature or task on the product backlog is finished. More simply put, it is a checklist or list of activities decided upon by the team at the start of each sprint, and it stays the same throughout the whole sprint. However, it is not fully static, as it is possible to modify the definition in between sprints. The definition of done is in fact something that should be continuously reviewed and updated as the scrum team acquires a higher level of competence and can add further criteria to the definition before marking a task as ”done”. (Alliance, 2013)

Scrum master

Another key role in the group is the scrum master, who is there to assist the group with problems relating to the work process and also to coordinate scrum meetings. Scrum meetings are held frequently and last about 20 minutes, during which each person in the group answers the following three questions:

1. What the person has done since the last scrum meeting.
2. What the person is to do until the next scrum meeting.
3. Any roadblocks that might have appeared.

The aim of the meeting is to help keep the group updated on the progress and to, directly after the scrum meeting, solve any problems if there are any.

Sprint retrospective

At the end of each sprint, a sprint retrospective is held. On this occasion the whole group sits down together to evaluate the work process during the sprint. Feedback is brought up and improvements are discussed for the sprints ahead. (Rigby et al., 2016)

2.3.1 Version control

Version control is a part of Software Configuration Management (SCM) and is a method of keeping track of changes. It is very useful when multiple developers are working on the same project and in the same files. In a basic sense, it makes it possible to track the changes, the time of each change and the person making it. It makes simultaneous work easier and reduces the risk of losing work, as it is easy to recreate a previous version. It is both possible and encouraged for developers to create branches to ease working in parallel. This enables the developers to try out different solutions or to keep a fully functional branch, as branches can easily be merged. When merging, the version control software will search for conflicts in all files and, if there are any, it will offer the possibility to fix them manually. (Git, 2016)

2.3.2 Code refactoring

Refactoring code is the process of altering the software system's internal structure without changing the external behavior of the system (Fowler, 2009). This process is applied after the code has been written to improve the design of the code. With refactoring, one can turn a chaotic code structure into a readable and maintainable design. The first step of refactoring is to build tests which can determine the external behavior of the software (Fowler, 2009). Then one can try different extractions of the code to separate it into smaller, more readable pieces. The refactoring can be applied to simple attributes or whole methods. Frequently used functions can beneficially be refactored to reduce code duplication (Fowler, 2009). The process of refactoring continues throughout development.
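As a hedged illustration of the extract-method style of refactoring described above, the sketch below splits a hypothetical order-summary function (the function names and fee logic are illustrative, not the project's actual code) into smaller pieces without changing its external behavior:

```python
# Before refactoring: one function mixes conversion, fee logic and formatting.
def order_summary(amount, rate, fee_rate):
    converted = amount * rate
    fee = converted * fee_rate
    return f"{converted - fee:.2f}"

# After refactoring: the fee calculation is extracted into its own function,
# making it reusable and testable on its own. The external behavior is
# unchanged, which the tests written before refactoring should confirm.
def exchange_fee(converted, fee_rate):
    return converted * fee_rate

def order_summary_refactored(amount, rate, fee_rate):
    converted = amount * rate
    return f"{converted - exchange_fee(converted, fee_rate):.2f}"
```

A test asserting that both versions agree on the same inputs is exactly the kind of behavior-preserving check that the first step of refactoring provides.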


2.3.3 Testing

Perhaps one of the most famous quotes on the topic of software testing belongs to the computer scientist and mathematician Edsger Wybe Dijkstra, who said that ”Testing shows the presence, not the absence of bugs”. The four common layers of testing can be divided into:

• Unit testing
• Module and integration testing
• System testing
• Acceptance and user testing

Unit testing is done continuously by each programmer after something has been added or altered in the code. The module or integration test is done after several features have been finished and the developers want to test the features together as a module. Common testing strategies at this level are:

• Big-bang
  – When all features are tested simultaneously
• Bottom-up
  – When development is done bottom-up
  – When the features are complex in nature or error-prone
• Top-down
  – When well-defined test cases exist for the functional requirements
  – When flaws can be identified early on in the design
• Sandwich
  – Combines the bottom-up and top-down strategies

At step three, modules are tested together to form a system. A way to test the system is to run smoke tests, which help make sure that the development is on the right track. The last step before release or final merge is the acceptance test. It is a form of user testing where everyone can contribute, even people with limited or no previous programming experience. At this step the whole system can be tested against use cases or user stories. If this test is passed, the system is considered to be ”accepted” and everything shall work as defined in the functional requirements. (Sandahl, 2015)
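The unit-testing layer can be sketched with Python's built-in unittest module; the convert function below is a hypothetical stand-in for a unit of the application, not the project's actual code:

```python
import unittest

def convert(amount, rate):
    """Convert an amount using a given exchange rate (illustrative unit)."""
    if amount < 0 or rate <= 0:
        raise ValueError("amount must be >= 0 and rate > 0")
    return round(amount * rate, 2)

class TestConvert(unittest.TestCase):
    def test_basic_conversion(self):
        self.assertEqual(convert(100, 9.5), 950.0)

    def test_rejects_negative_amount(self):
        # Error handling is part of the unit's contract and is tested too.
        with self.assertRaises(ValueError):
            convert(-1, 9.5)
```

Such tests can be run with `python -m unittest` after each change, matching the continuous unit-testing step described above.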

2.4 Surveys

Below, the theories behind the use of the market and user experience surveys are presented.

2.4.1 Market survey

When doing a market survey there are mainly three different ways to gather data: personal interviews, telephone interviews and questionnaires (Kerlinger, 1973).

Qualitative market research through personal or telephone interviews has a number of strengths and weaknesses. It digs deeper than quantitative research methods, as it enables more complex and subtle information gathering (Hart, 1987). According to Chisnall (2004) qualitative data can discover the underlying factors that influence a certain behavior or liking. On the other hand, there are a number of downsides to qualitative data gathering, as it is difficult to determine whether it is reliable and objective. This is due to the subjective nature of an interview, where the interviewer has to interpret the answers correctly. Additionally, it is time-intensive and good interviewing skills are required to perform it to a satisfactory degree (Hart, 1987).

Quantitative data gathering with a questionnaire, on the other hand, can reach a larger and more representative population. It also does not have any interviewer bias, it is cost-effective, it is a speedy form of data gathering and it allows for statistical compilation and analysis of the answers. However, there are downsides to using a questionnaire. It is difficult to know the reasons behind respondent omissions, and it is not possible to probe deeper into the underlying factors that influence the answers. (Hart, 1987)

When creating a questionnaire, a number of different things have to be taken into consideration. The questions have to be formulated in a way that makes them non-leading and ensures that they can only be interpreted in the way intended by the inquirer. One should be aware that the reliability of the survey decreases when posing questions with a rating scale, because the respondent is led toward giving a certain answer. However, using rating scales makes it easier for the respondent, as it may be difficult for them to choose an answer freely. To ensure that the survey has good validity, it should go through a pilot test on a smaller sample group. (Lekwal & Wahlbin, 2007)

2.4.2 User experience survey

When developing software, an important part of the design and development process is the feedback provided by the users. The user feedback provides guidance for the team and helps determine the focal points of the development. A survey is a good means of collecting this feedback. Data is provided after the users have had the chance to utilize the web application and test it by clicking around and trying its different functionality. (Rubin & Chisnell, 2008)

2.5 Prototyping

There are two main ways of prototyping a design. Lo-fi prototyping entails making the prototype on paper. Building a lo-fi prototype is easy, as it does not take very long to write everything down on paper, and thus it is very time-efficient. It is easy to change the prototype before committing to code, allowing developers to iterate on the design many times in order to continuously improve it. (Rettig, 1994)

Hi-fi prototypes, on the other hand, are made through the use of multimedia tools, demos and high-level programming languages. They have their uses in presenting an idea to, for example, prospective customers, but they take a very long time to make. A slick prototype that looks good is more prone to criticism of less important parts of the early design, such as font size and colors. Subsequently, because it takes such a long time to make a hi-fi prototype, there is more resistance to making big changes in layout and navigation that affect major parts of the design; instead, developers become more prone to fixating on the details of the design. (Rettig, 1994)


Chapter 3

Method

In this chapter a description of the methods used during this project to complete the web application is presented. This chapter is divided into pre-study, implementation, usability, development process and evaluation of user experience.

3.1 Pre-study

A pre-study was conducted to investigate the business idea and produce a prototype that followed the guidelines from the potential customers.

3.1.1 Market research

In order to establish what design scheme the web application should follow, a market plan and a survey, from which the consumer segment could be determined, were needed. The market plan consisted of an external analysis, which made clear what factors affected the service, and a marketing mix, which determined the features of the service, its marketing and its price. The external analysis of the service was done through a PESTEL analysis, which examines macro factors, followed by an analysis of how competitors, suppliers and customers affect the service. This analysis became the basis of a SWOT analysis, which clarifies the relative strengths, weaknesses, opportunities and threats that may help or stand in the way of further expansion. The data for the market plan was extracted from secondary sources and through a market survey done by the group.

Survey

When deciding what method of data gathering to use for the market survey, there were a number of factors the group took into consideration. Interviews are often very time-consuming and introduce subjectivity through the interviewer's interpretation of the answers. In order to avoid these problems, which occur unless the interviewer is exceptionally skilled, the idea of doing interviews was discarded in favor of a questionnaire. A questionnaire was created to facilitate a larger number of respondents and thus more statistically reliable data on which to base further decisions.

Questionnaire

Interest in a remote currency exchange service that is cheaper than the competition was examined in the questionnaire. The design of the website and e-shop was also examined. The questionnaire was created on a digital platform and was distributed to potential respondents through different social networking sites and through personal contacts. The sample of respondents chosen to answer the survey was selected through what is, according to Lekwal and Wahlbin (2007), a convenience sample, which in the authors' case meant that they chose a sample from acquaintances who are more likely to answer in a short amount of time. In order to avoid the problem of omitted answers, the questionnaire required the respondents to pick an answer to all questions before handing it in. The questions were formulated as statements that the respondent could agree or disagree with on a scale from one to five, where a one meant that they wholeheartedly disagreed and a five that they wholeheartedly agreed.

After the survey was completed, the answers were compiled and a number of different variables were compared to each other to explore what parameters affect the willingness to pay for the service and what respondents find important when using a web application of this nature. This was later used in the marketing plan.

3.1.2 Project plan

A project plan was created based upon discussions within the group. The project plan was made to ensure that the scrum method was followed thoroughly. The structure of the project plan was the following:

1. A background containing the aim of the project and a small sales pitch that could be used to attract future investors

2. A document containing the organizational structure and roles of the team

3. A primary time plan that contained the start and end dates for each sprint as well as the associated deliverables associated with each sprint

4. A primary risk analysis where the most apparent risks to the project were identified
5. A section describing how to work according to the scrum methodology

6. A prototype

By creating a project plan, a more favorable organizational climate was created. According to Wheelan (2014), the most effective groups are the ones that are capable of creating clear common goals and of setting clear expectations for the group's output, quality, timing and pacing. By creating the project plan, these issues were addressed at an early stage of the project.

3.1.3 Prototype

A prototype was created based upon the results of the market survey as well as the project plan. Using the data gathered from the survey results, the most important features of the service were defined and discussed by the project group. Pictures describing each of the fundamental features were then created and presented in the project plan. The purpose of this was to aid the development process by making it clear how the visual design would be implemented and which functions needed to be developed.

The process of this work was:

1. Pre-study
   (a) Market survey
   (b) Market plan
   (c) Prototype
2. Implementation
3. Evaluation

By using this process, the work followed a structured path with user feedback at the beginning and end. The prototype constituted the important step from the market research, which gave the guidelines, to the implementation, where the development process was simplified by the prototype design.

Lo-fi prototypes were used instead of hi-fi prototypes due to their flexibility and the project's time constraints. Nine initial prototypes were created and then iterated on by selecting the best features and designs, as Rettig (1994) suggests one should do when designing a prototype.

3.2 Implementation


3.2.1 Technical

A number of requirements for the web application were decided. Some requirements have also been the result of user feedback and group discussions. These requirements are presented below.

Functional requirements

The functional requirements describe the functions of the software product and can be tested like a mathematical function, where a certain input X shall render a certain output Y. Good requirements are, according to the IEEE1, numbered, inspected, prioritized, unambiguous, testable, consistent and feasible, among other things. In this project, the major functional requirements have been:

1. All users must be able to register on the web page by clicking the register button
2. The user must receive a confirmation e-mail before becoming an active member
3. The customer must be able to study his/her order history
4. The customer must be able to remove a placed order
5. The customer must be able to add an order to the cart and continue shopping
   • Users must be able to make several orders simultaneously
6. The admin shall be able to edit products online
   • Even non-technical users must be able to understand and modify the product range
7. The customer must be able to read about the company and the service in a FAQ
8. The web page must contain customer reviews

Non-functional requirements

Non-functional requirements are requirements that are not directly related to the input/output behavior of the system. According to Glinz (2007), quality, constraints, and performance are usually described in the non-functional requirements. Unlike for the functional requirements, there is however no consensus on how to define non-functional requirements, as they are generally more vague.

One of the chief technical constraints of the project is that the web application must function as a single page application, which means that the website only loads once and thereafter is only updated when a specific operation is undertaken, such as clicking on something on the page. Other equally important non-functional requirements have been:

1. The web application shall be built using HTML, jQuery, Bootstrap, JavaScript, Jinja2, Python, CSS and Flask
2. The data provided by the customers shall be stored in a database
   • The database shall be created in SQLAlchemy for Flask2
3. The web application shall be set up and hosted on OpenShift via the servers of IDA
4. Version control and management must be used for the development
   • Gitlab accounts provided by IDA for version control

Front-end

The web application's front-end uses the following techniques:

• HTML
• Bootstrap
• jQuery
• AJAX
• Sessions
• Hashing for passwords

1 Institute of Electrical and Electronics Engineers
2 A database package for Python


HTML describes how the structure and information are presented on the web page. In HTML, the text and information on the page are marked with tags to define elements. The web application uses cookies saved on the user's device for handling login sessions, storing relevant data and remembering which products a user has when returning to the site (Park & Sandhu, 2000). jQuery, a JavaScript library (jQuery Foundation, 2016), provides the dynamic functionality that enables the web page to load in a faster and more dynamic way. jQuery supports calls such as AJAX4, which enables the client to make asynchronous calls to the server. This enables the web page to load specific information without refreshing the entire page (Garret, 2005).

Back-end

The following technologies are used in the back-end:

• Python
• Flask
• SQL
• OpenShift
• A variety of packages

Python was the tool used to construct the web application server. Python is an object-oriented, open-source language (Lutz, 2006). Flask is a framework that was used when developing the web application. The reason for using these technologies was to enable easy routing of calls and information with Flask, with the processing done with the aid of Python. Python also dynamically handles the database. The packages used are Flask packages for login, sessions and admin. The use of packages speeds up development and also improves the readability of the code due to higher-level coding. OpenShift is the platform that the application runs on. OpenShift also handles the database and connects the application to it. Using Git together with OpenShift works seamlessly, and pushing to OpenShift restarts and configures the server.
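The routing of calls with Flask can be sketched as follows. The /api/rate endpoint and the in-memory rate table are hypothetical; the real application stores its rates in the database:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative in-memory rate table; the real application reads rates
# from the PostgreSQL database and updates them dynamically.
RATES = {"EUR": 9.51, "USD": 8.73}

@app.route("/api/rate/<code>")
def rate(code):
    # Flask routes the call here; Python does the processing, and the
    # response is returned as JSON for the front-end's AJAX calls.
    if code not in RATES:
        return jsonify(error="unknown currency"), 404
    return jsonify(currency=code, rate=RATES[code])
```

Returning JSON from routes like this is what lets the single page application update specific parts of the page without a full reload.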

Prediction

The prediction has been implemented as a separate Python function according to the following requirements: it must give the best possible date for the exchange, and the rate at which the exchange will take place, at least 14 days ahead. It must be able to calculate an approximate profit of making the exchange. To make it easier to implement other predictions, the method which returns the best possible date and price must be interchangeable. When an order is made, the prediction will be called and the best possible date, price and profit will be returned. The idea is that the predicted values for each currency will be computed during less busy hours of the web application, and the orders made each day can use these predictions.

The parts of the prediction model described in the theory have been separated as follows:

Algorithm 1: Predicting exchange rates

for number of prediction steps do
    Decompose signal
    Fit ANN to signal
    Shift signal and set signal to predicted value
end for
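A hedged Python sketch of this rolling one-step-ahead loop is shown below. The function names, the window parameter and the simple moving average standing in for the signal decomposition and ANN fitting are all our illustrative choices, not the project's implementation:

```python
def predict_rates(history, steps=14, window=5):
    """Roll the prediction forward `steps` days, one step at a time."""
    signal = list(history)
    predictions = []
    for _ in range(steps):
        # Placeholder for "decompose signal and fit ANN": here the next
        # value is simply the mean of the most recent `window` points.
        next_value = sum(signal[-window:]) / window
        predictions.append(next_value)
        # Shift the signal: treat the prediction as the newest observation.
        signal.append(next_value)
    return predictions

def best_exchange(history, steps=14):
    """Return (days ahead, predicted rate) for the best predicted rate."""
    preds = predict_rates(history, steps)
    best = max(range(len(preds)), key=lambda i: preds[i])
    return best + 1, preds[best]
```

Because the prediction function is separate, swapping the moving average for the decomposition-plus-ANN model only requires replacing the body of the placeholder step, which matches the interchangeability requirement above.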

Security

The theory section of this paper states the importance of reinforcing input sections for the overall security of the system. Regular expressions (regex) have been used in the input sections of this project to ensure that input is valid. Also according to the theory section, Bcrypt and PBKDF2 are both legitimate and secure ways to hash passwords. Considering that both are seen as secure ways to handle passwords, Bcrypt was used since it was easier to implement.
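The two measures can be sketched as follows. The currency-code pattern and function names are illustrative, and standard-library PBKDF2 stands in for the Bcrypt package the project actually used, since the theory section treats both as acceptable:

```python
import hashlib
import hmac
import os
import re

# Input reinforcement: accept only three-letter uppercase currency codes
# (an illustrative pattern, not the project's exact validation rules).
CURRENCY_RE = re.compile(r"^[A-Z]{3}$")

def valid_currency(code):
    return bool(CURRENCY_RE.match(code))

def hash_password(password, salt=None, iterations=200_000):
    """Hash a password with a random salt; never store it in plain text."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)
```

Only the salt and digest are stored; at login the candidate password is re-hashed with the stored salt and compared.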

4Asynchronous JavaScript And XML


Database

PostgreSQL was used as the database management system for this project. PostgreSQL is an RDBMS with added object-oriented features, making it an ORDBMS. Although these features were not used in the project - and might not need to be used in the future - it was considered an advantage, with no negative consequences, to have this support in case future work might take advantage of it. As the database is accessed from a Python application and used as an RDBMS, an ORM was needed to map the Python objects to PostgreSQL data. SQLAlchemy (Flask-SQLAlchemy) was used for this, as it is the standard ORM used with the Flask Python framework.
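The ORM mapping can be sketched with a plain SQLAlchemy declarative model (Flask-SQLAlchemy wraps the same machinery). The Order table and its columns are hypothetical, and an in-memory SQLite database stands in for PostgreSQL here, since the ORM hides the backend:

```python
from sqlalchemy import Column, Float, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

# Hypothetical order table; the project's real schema differs.
class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    currency = Column(String(3), nullable=False)
    amount = Column(Float, nullable=False)

# In production the connection string points at PostgreSQL on OpenShift;
# SQLite in memory is enough to demonstrate the object-to-table mapping.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
```

The application code then works with Order objects and sessions, while the ORM translates those operations into SQL for whichever backend is configured.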

Session and cookies

Sessions, which use cookies as an identifier, have been used in two parts of the application: login authentication and the shopping cart. To manage the state of the user and determine whether a user is logged in or not, the Flask package flask-login was used.
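A minimal sketch of the cookie-backed session mechanism, here applied to the shopping cart (the route and key names are illustrative; login state is handled on top of the same mechanism through flask-login):

```python
from flask import Flask, jsonify, session

app = Flask(__name__)
# The secret key signs the session cookie so the client cannot tamper with it.
app.secret_key = "replace-with-a-long-random-secret"

@app.route("/cart/add/<item>")
def add_to_cart(item):
    # The cart lives in the session, which Flask stores in a signed cookie
    # on the user's device, so it persists between requests.
    cart = session.get("cart", [])
    cart.append(item)
    session["cart"] = cart
    return jsonify(cart=cart)
```

Because the cookie travels with every request, the cart survives page navigation without any server-side lookup in this sketch.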

3.2.2 Usability

The design of the web application was implemented with inspiration from Lane (n.d.) and Birren (1961), who give recommendations on what colors to use and what to consider in order to make the website more intuitive. The name and logo of the web application are both designed in orange, and the application is named Clementine as a reference to the orange fruit of the same name. The color of the logo, and the fact that orange is a good color for catching attention, determined the choice of orange as the primary color (Birren, 1961). This means that important objects, such as clickable objects, were designed in orange.

With this knowledge, a pre-defined template6 was chosen and implemented as a base for the design of the web application. Since this template uses a mixture of orange and dark colors in its design, it was considered a good starting point for this web application. In order to customize the design for this service, the pre-defined background image was replaced with an image more appropriate for the purpose of this business idea. According to Lindgaard et al. (2006) it is important to give the user a good first impression, hence stressing this when designing the first page was important.

In order to make the design of the web application as usable as possible, the ten heuristics defined by Nielsen (1995) were considered when designing it. They were all taken into consideration when developing the web application, stressing ”Aesthetic and minimalist design” when it comes to the first impression.

3.2.3 Development process

No roles were appointed apart from that of the scrum master, who was responsible for setting up all types of meetings in the group, and two product owners, who were responsible for the product backlog. This structure was chosen in order to create a feeling of personal and equal responsibility for the completion of the product. Furthermore, the choice of structure was intended to make team members develop skills in new areas.

Meetings

Different types of meetings have been used during the course of this project:

• Scrum meeting
• Scrum master meeting7
• Mentor meeting
• Group meetings, consisting of:
  1. Working meeting
  2. Workshop and informative meeting
  3. Sprint planning meeting
  4. Sprint retrospective meeting

6 The template used can be found at: http://startbootstrap.com/template-overviews/creative/
7 Scrum meeting with all the scrum masters from all the groups conducting the bachelor thesis

The scrum meetings were conducted according to the basic principles of scrum and meant that every person went through what they had done since the last scrum meeting, what they were going to do and what obstacles lay in their path. Those who were not able to attend the scrum meeting had the option to ”scrum from a distance” via the Slack channel created early on in the project.

Once per week the group attended a meeting with a mentor for an hour to discuss group progress. Every Wednesday the scrum master attended a scrum master meeting where all the scrum masters met and had the chance to discuss current issues and how they could be solved. At the beginning of each sprint, a sprint planning meeting was arranged to give structure to and plan the upcoming sprint. At the end of each sprint, a sprint retrospective was held to assess the group's performance during the sprint.

Product and sprint backlog

The product backlog was seen as a to-do list where all the tasks needed for the product were listed. The list was used and managed by the product owners to set the agenda for the development process. For each feature added to the product backlog, there was a user story describing the feature in a more natural language. The backlog was set up in Google Drive to be able to more efficiently rank the user stories, and the user stories were then added to Trello for maneuverability, as shown in figure 3.1 and figure 3.2. During the sprint start meetings, the team decided on a set of the tasks contained in the product backlog and reprioritized these for the sprint to come.


Figure 3.2: Product backlog in Excel

Definition of done

With a well-outlined definition of done, communication between team members is facilitated, as everyone understands what "being done" with a task means. In this project, the group has used the following list as the definition of done:

1. Create the code for the user story

2. Have the code peer-reviewed and tested by another team member

3. Acceptance test the feature

4. Commit and push the code to the Development branch

5. Inform the rest of the group

6. Have the code accepted by the team

7. Make a merge request with the Master branch

8. Merge the branch and upload it to the server

9. Done!
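Steps 1, 4 and 7 of this list map onto concrete git operations. The sketch below illustrates them against a throwaway local bare repository standing in for the GitLab remote; all branch and file names are invented for the example and do not come from the project repository.

```shell
# Illustrative only: a temporary bare repository plays the role of the
# GitLab remote, and all branch/file names are made up for the example.
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"
git clone -q "$work/origin.git" "$work/clone"
cd "$work/clone"
git symbolic-ref HEAD refs/heads/development
git config user.email dev@example.com
git config user.name Dev

# Step 1: create the code for the user story (a stand-in file here).
echo "currency rate table" > rates.txt
git add rates.txt
git commit -q -m "Add currency rate table"

# Step 4: commit and push the code to the Development branch.
git push -q origin development

# Step 7: the merge request toward master is opened in the GitLab web UI;
# the command-line prerequisite is simply that the pushed branch is
# available on the remote:
git ls-remote --heads origin development
```

Steps 2, 3, 5 and 6 are human activities (review, testing, communication) with no git counterpart, which is why the list mixes tool commands and team agreements.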

Version control

This project used Git for version control, with GitLab as the repository hosting service.

The branching strategy was based on a master and a development branch. The goal was to have the master branch as an always functional, fully tested and potentially live version. The development branch was also to be functional, but was not expected to be fully tested and bug-free. New features were developed in feature branches which originated from the development branch. When a developer felt that the feature developed in the feature branch was done, the branch was merged with the development branch by a person in the group who was appointed "merge-responsible". Once all the features planned for a sprint had been merged into development, a testing branch was created. In this test branch the website was subjected to more robust tests to fix bugs and prepare for the development branch to be merged with the master branch. The branching strategy is visualized in figure 3.3.

Merging with any branch was to be preceded by a message in order to make sure that only two branches were merged at once. If multiple developers wanted to merge with the same branch at the same time they would


have to decide amongst themselves on the order and give each other a clear message as they finished merging.

One person in the group was appointed “merge-responsible” and handled all merging. When a group member had finished their feature, they sent a merge request and the “merge-responsible” would then test the feature, do the merge and finally test the feature again after the merge.

Figure 3.3: Branching Strategy
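The branching strategy described above can be reproduced in a throwaway repository as follows. This is a minimal sketch: the feature name, file contents and commit messages are invented for the example, not taken from the project.

```shell
# Self-contained demonstration of the master/development/feature-branch
# strategy in a temporary directory; all names are illustrative only.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "Initial commit on master"

# The development branch originates from master.
git checkout -q -b development

# New features are developed in feature branches off development.
git checkout -q -b feature/order-form
echo "order form" > order-form.txt
git add order-form.txt
git commit -q -m "Add order form"

# A finished feature is merged back into development by the
# merge-responsible; --no-ff keeps the feature branch visible in history.
git checkout -q development
git merge -q --no-ff -m "Merge feature/order-form into development" feature/order-form

# After sprint testing, development is merged into master.
git checkout -q master
git merge -q --no-ff -m "Merge development into master" development
```

The `--no-ff` merges preserve the branch topology that figure 3.3 visualizes, so the history itself documents which commits belonged to which feature.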

Code refactoring

Code refactoring was done continuously by the developers during development, as individual developers noticed code smell⁸. It was always to be done in feature branches.

If code smell was noticed by a developer, they notified the other group members as quickly as possible. If the group agreed that the code needed refactoring, the developer who noticed it was asked whether they wanted to refactor it or not. If they decided to do so, they could do it within their current feature branch or create a new feature branch.

If the developer that noticed the code smell declined to refactor the code, anyone who was interested in refactoring it could say so. If no one was interested in refactoring and the group decided it was necessary, the developer that was responsible for the bad code was tasked with refactoring it. This was to be done in a new feature branch and the group needed to approve the refactoring before the feature branch was to be merged with development.

Testing

Initially, most testing was done as the code was being produced: each developer was continuously in charge of unit and integration testing. As the work progressed and it was time for system testing, the developer had to solve any merge conflicts with the development branch before being able to send a merge request on GitLab. After the merge request, acceptance tests could be done by the merge-responsible and the developer who initially sent the merge request.

The features that were completed and marked as done were moved from "todo" → "in progress" → "testing" → "done" in the sprint product backlog on Trello.

⁸ Code smell refers to indications that the code is not well designed and structured. This does not, however, mean that the code is not functional.


Following this structure, when the merging was done after testing was completed, the live version of the application was updated and acceptance tests could be conducted by external parties such as friends or other laymen. They were asked to test the release branch version (considered a beta test) as well as the master version that was up on OpenShift. Feedback on the website and the service in general was discussed in the group and, if considered important, formulated into a user story and developed in feature branches.

3.3 Evaluation of results

In this section, the way the group evaluated the results obtained during the project is presented.

After most of the development was done, a user experience survey was conducted to measure what users thought of the web application. By only making the survey available to people who had actually used the web application, the survey's reliability increased. The answers have later been used throughout this paper as a basis for results, discussion and future research.

The survey was sent out to a test group consisting of respondents from the market survey: 30 names were chosen at random from the pre-study, all of whom received a link to the web application. The respondents were then asked to navigate the website for as long as they wanted and were afterwards directed to the online survey.

The survey was divided into two parts. The first half consisted of statements graded on a scale from 1 to 7, where the respondents selected the number that best corresponded to how much they agreed with the statement. The second half consisted of four open-ended questions in which the respondents had the possibility to write feedback on a selected number of topics. More about the survey, the questions asked and the answers obtained is presented in appendix D.

In total, 16 respondents between the ages of 18 and 52 answered the survey. Two thirds of the respondents were male and one third were female. The low number of respondents can be viewed as a potential threat to the reliability of the survey. Notwithstanding, this is something the project group has taken into consideration when drawing any conclusions from the results.


Chapter 4

Results

The results of this thesis are presented starting with the results from the pre-study, followed by the results from the implementation of the web application, and finishing with the results from the evaluation.

4.1 Pre-study

The outcome of the pre-study consisted of results from the market research and a prototype.

4.1.1 Market research

With the help of the market survey, a number of different insights were gained as to how attractive the web application would be to a prospective consumer segment. When asked if they would be willing to pay for a service which could save the traveler money when exchanging currency before traveling abroad, 75% responded with a 3 or higher on a scale of 1 to 5, indicating agreement with paying for such a service.

To a question regarding which design approach would be best, with three different options (minimalistic, information-rich and retro), 72% indicated that they want a minimalistic design when using a web application of the kind examined in this thesis.

Additionally, the respondents of the questionnaire indicated that they find easy navigability, and being able to use the service quickly, highly important. Over 97% responded with a 3 or higher on whether they agree that easy navigation is important to them, and over 94% did the same regarding whether easy registration is important.


4.1.2 Prototype

Figure 4.1: Prototype Design

Figure 4.1 illustrates the prototype of the first page, which was used as a base when implementing the web application Clementine.¹

The prototype was built on the outcome of the market survey, which resulted in some design guidelines: the web application should be minimalistic, easy to navigate and make purchasing easy. This inspired the design of the prototype. The first page, as seen in figure 4.1, is meant to give returning customers direct access to the service, with hints of where new customers can read more about the service.

To ease navigation, scrolling and a simple navigation bar were designed. A short buying process was also implemented to ease the purchase, which the market research had concluded was an important aspect.

4.2 Implementation

Presentation of the website

A large part of this project concerned user experience, and the result of the project as it is experienced by a site user is presented below. The home page, which a user is first presented with when navigating to the page, is seen in figure 4.2.
