
Engaging Customers with Recommendations - A Study about How Customers Can Be Engaged Using a Recommender System


Academic year: 2021



LiU-ITN-TEK-A--20/023--SE

Engaging Customers with Recommendations - A Study about How Customers Can Be Engaged Using a Recommender System

Malin Niska
2020-06-10

Department of Science and Technology
Linköping University
SE-601 74 Norrköping, Sweden

Engaging Customers with Recommendations - A Study about How Customers Can Be Engaged Using a Recommender System

Master's thesis in Media Technology at the Institute of Technology, Linköping University

Malin Niska
Supervisor: Camilla Forsell
Examiner: Niklas Rönnberg

Norrköping, 2020-06-10

Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for their own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law, the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its home page: http://www.ep.liu.se/

© Malin Niska

Linköping University | Department of Science and Technology
Master's thesis, 30 ECTS | Media technology and engineering
2020 | LIU-ITN/LITH-EX-A--2020/001--SE

Engaging customers with recommendations
A study about how customers can be engaged using a recommender system

Malin Niska
Supervisor: Camilla Forsell
Examiner: Niklas Rönnberg
External supervisor: Åsa Detterfelt

Linköpings universitet
SE-581 83 Linköping
+46 13 28 10 00, www.liu.se


Abstract

Recommender systems today help users find what they are looking for in an online store or online service. Companies want the recommendations to create engagement in the form of interactions. The aim of this thesis is to investigate how to create such engagement from the users. For this, the concept of a system will be made. The concept builds on knowledge and data gathered from interviews with owners of online stores or online services and from a survey, where the respondents are customers. The results show that different types of information and recommendations fit different types of situations, and that the content of emails needs to stand out in order to create engagement. If the product or service is something that the customers often search for themselves, it is a good option to advertise it through the company's own channels, such as a web page or app. If the product is more general, an ad on social media can be one way to advertise it in order to create engagement from the customers.

Acknowledgments

I would like to thank my external supervisor Åsa Detterfelt from MindRoad, my supervisor Camilla Forsell and my examiner Niklas Rönnberg from Linköping University, Niklas Ekvall from Comordo, and everyone who let me interview them or answered my survey. Without you this thesis would not have been possible!

Malin Niska, Linköping 2020

Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables

1 Introduction
  1.1 Background
  1.2 Aim
  1.3 Research questions
  1.4 Delimitations

2 Concept
  2.1 System
  2.2 Content providers
  2.3 End users

3 Theory
  3.1 Recommender System
  3.2 Explaining recommendations
  3.3 Recommendations and sales
  3.4 Method

4 Method
  4.1 Interviews
  4.2 Survey
  4.3 Impact mapping

5 Results
  5.1 Survey
  5.2 Interview
  5.3 Impact mapping

6 Discussion
  6.1 Survey
  6.2 Interviews
  6.3 Impact mapping

7 Conclusion
  7.1 Research questions

  7.2 Future work

Bibliography

A Survey
  A.1 Coffeemaker
  A.2 Shoes
  A.3 Children's book
  A.4 Harry Potter book

List of Figures

3.1 An example of movie ratings.
3.2 Overview of the impact map.

5.1 Chart over how often the respondents get emails regarding marketing.
5.2 Chart over how often each age group gets emails regarding marketing.
5.3 Chart over the attitude for opening emails containing offers and/or recommendations.
5.4 Chart over the attitude for opening emails containing offers and/or recommendations for each age group.
5.5 Chart over how long all the respondents look at an email on average.
5.6 Chart over how long all the age groups look at an email on average.
5.7 Chart over how all the respondents feel about the relevance of the emails sent to them.
5.8 Chart over how each age group feels about the relevance of the emails sent to them.
5.9 Chart over which marketing each age group sees on social media.
5.10 Chart over which digital marketing the respondents prefer.
5.11 Chart over which digital marketing each age group prefers.
5.12 Chart over how the respondents are reached in an effective way.
5.13 Chart over how each age group is reached in an effective way.
5.14 Chart over how the respondents feel like they can control their feed.
5.15 Chart over how each age group feels like they can control their feed.
5.16 Chart over how relevant the respondents think their feed is.
5.17 Chart over how relevant each age group thinks their feed is.
5.18 Chart over which channels the respondents look at marketing for different subjects.
5.19 Chart over which channels different age groups look at marketing regarding fashion.
5.20 Chart over which channels different age groups look at marketing regarding electronic devices.
5.21 Chart over which channels different age groups look at marketing regarding restaurants.
5.22 Chart over which channels different age groups look at marketing regarding groceries.
5.23 Chart over which channels different age groups look at marketing regarding events.
5.24 Chart over which channels different age groups look at marketing regarding offline services.
5.25 Chart over which channels different age groups look at marketing regarding online services.
5.26 How likely it is for the respondents to engage in different ways.
5.27 How likely it is for the respondents to read a text a company has written about a product.
5.28 How likely it is for the respondents to look at an advertising video.
5.29 How likely it is for the respondents to click on an ad on social media.
5.30 How likely it is for the respondents to click on a link/image/button in an advertising email.
5.31 Chart over which filtering method fits a coffeemaker.
5.32 Chart over which filtering method fits a shoe.
5.33 Chart over which filtering method fits a children's book.
5.34 Chart over which filtering method fits a Harry Potter book.
5.35 Impact map without tasks.
5.36 Impact map for online service.
5.37 Impact map for E-commerce.

A.1 Image of the coffeemaker that the respondents had to choose recommendations for.
A.2 Image of the recommendations using collaborative filtering for the coffeemaker.
A.3 Image of the recommendations using content-based filtering for the coffeemaker.
A.4 Image of the shoes that the respondents had to choose recommendations for.
A.5 Image of the recommendations using collaborative filtering for the shoes.
A.6 Image of the recommendations using content-based filtering for the shoes.
A.7 Image of the children's book that the respondents had to choose recommendations for.
A.8 Image of the recommendations using collaborative filtering for the children's book.
A.9 Image of the recommendations using content-based filtering for the children's book.
A.10 Image of the recommendations using collaborative filtering for the Harry Potter book.
A.11 Image of the recommendations using content-based filtering for the Harry Potter book.

List of Tables

3.1 Aims for explaining recommendations

1. Introduction

Today, recommender systems have become a standard for e-commerce businesses like Amazon and streaming services like Netflix. That companies are willing to spend time and money to develop these systems shows that they see value in the effect of the recommendations given. Therefore, this thesis will investigate how such recommendations can be used effectively. The goal of this thesis is to engage the end-users to interact more with the online store or service. To create and monitor the engagement from the customers, a concept of a system will be made in this thesis. The aim is to engage as many end-users as possible, where the end-user is the consumer of the store or service. The engagement should be targeted both at the customers that are already active and the customers that are churning. In business, churn rate is a measure of how many of the existing customers are about to leave the store or service.

The project that this thesis covers consists of data gathering and data analysis. The data gathered are the attitudes the end-users have towards marketing and how the people behind the online service or online store work with recommendations today. To gather this data, semi-structured interviews, emails and a survey will be conducted. This data will form the base of an impact map, which constitutes the result of this project and the pre-study of the system concept created for this project.

1.1 Background

Before this project, Comordo, a company based in Linköping, Sweden, created a recommender and churn prediction system. The Comordo system has the goal to engage as many consumers as possible, the same goal as the system that this project will develop conceptually. The system works with three different channels: email, social ad and on-demand. On-demand means exposing the recommendations through the company's own channels, such as a website or an app. Since Comordo's system has both a recommender engine and a churn predictor, it covers two use cases. The first use case is to recommend a certain product to a group of people that might already be active customers. The second use case is to engage a churning customer with recommendations of products, services or content that might help to engage the customer before they leave the service or store.
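
Churn rate, as described above, is the fraction of existing customers who leave during a period. A minimal sketch of the calculation (the function name and figures are illustrative, not taken from Comordo's system):

```python
def churn_rate(customers_at_start, customers_lost):
    """Fraction of the customers active at the start of a period who left during it."""
    if customers_at_start == 0:
        return 0.0
    return customers_lost / customers_at_start

# Example: 1,000 active customers at the start of the month, 50 of them left.
rate = churn_rate(1000, 50)
print(f"Monthly churn rate: {rate:.1%}")  # Monthly churn rate: 5.0%
```

A churn *predictor*, by contrast, tries to estimate this quantity per customer before they leave, so that targeted recommendations can be sent in time.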

1.2. Aim

The purpose of this thesis is to help the owner of an online store or online service to engage their customers, both the active customers and the customers that are churning. The aim is to reach the customers in a way that creates the most engagement and retention. To achieve this, a system that helps such an owner engage their customers will be developed conceptually. At the end of the project, there should be an impact map that helps visualize the system and the effects the system can accomplish. The impact map can then become the document that helps the project management and project participants keep an overview of the project: why certain parts of the system are important, and how to prioritize them.

1.3. Research questions

The research questions for this thesis are:

1. What type of recommendation information from the recommender system fits in which type of situation?
2. How do the consumers want to be reached in order for them to experience engagement and retention?

With the first research question, the aim is to investigate which type of recommendation information fits which situation best. This means looking at a specific situation and seeing how much engagement (for example clicks, reviews, purchases and views) one type of information gives compared to another type of information. The second research question aims to investigate how consumers want to be reached, meaning through which channels, such as email, on-demand and/or social ads, the consumers want to be contacted in order for them to experience engagement and retention.

1.4. Delimitations

Comordo has a recommender system which is also a churn predictor and churn reducer. This system has already been developed, and the correctness of its recommendations has therefore already been validated. This thesis will therefore assume that the data from the system is valid and correct. This means that the process of creating the data that the system outputs will not be taken into consideration.

Another limitation is that the work will only cover the use case of recommendations and recommender systems. This means that some marketing strategies are excluded. For example, influencer marketing is excluded because it is a use case that does not include recommendations: the influencer markets a specific product regardless of whether the followers are interested in the product or similar products. This type of marketing is not directed towards a specific person but rather to a big group of people. This also means that the use case of expanding the company's clientele is not covered in this project.

For all the owners of online stores or online services, the system presumes that the companies already have an existing base of customers. This means that transactional data (such as order history, clicks, etc.) already exists, as well as a list of the consumers' email addresses. The companies also know some demographic data about the customers, such as age, gender and interests. Since this data is very important for the system to function and give good suggestions, the marketing strategies will only be online strategies, meaning that advertisement through printed content will not be covered or suggested by the system.

2. Concept

For this thesis, the concept of a system will be created. This system has two main target groups: content providers and end users. In this chapter, the concept for the system and how the two target groups are connected to the system will be explained.

A content provider is a person who works with either an e-commerce store or an online service. Examples of online services, in this case, could be Netflix, Spotify or The New York Times' online newspaper. Content providers provide or create content for the end-users to see, hear, buy or in other ways take part in. This means that, for example, an online banking solution would not count as a content provider, because a bank does not create any content for the end-user to consume but rather provides information for the end-user. An end-user is the person who subscribes to, uses, buys or takes part in something that the content providers provide; the end-user consumes the content provided by the content provider.

The concept for the system will be presented as an impact map where the target group will only be content providers. This is because the end-user only shapes the system but does not actually use it, and the impact map's target group represents those who use the system. This is further described in section 3.4. First, in this chapter, the system as a whole will be explained; thereafter, how the content providers interact with the system; and lastly, how the end-users help to shape the system.

2.1. System

The main purpose of this concept is to help the content providers engage the end-users. Engaging here means getting the end-users to interact with the content of the content providers, which could mean watching a video or buying a product. There are several aspects that will help the content provider to engage the end-user. The main aspect is to recommend relevant content to the end-user via marketing campaigns. By using the system, the content provider can get insights into how the marketing campaigns are currently performing, and the system helps the content provider create new marketing campaigns. Insights mean that the system will provide, for example, information about the company's key performance indicators (KPIs), usage patterns of the end-users, downloads, etc.

For marketing campaigns, this system has two use cases. The first is that the content provider wants to market some specific content or product. The other is that the content provider wants to engage a specific type of user, which could for example be a churning user. Independent of which use case the content provider wants to use, the content provider has to input several parameters in order to get the result. The result contains which marketing strategy fits the specific case of a specific product or service. These input parameters could be the budget, the target audience, and what type of product or service will be marketed.

In the first use case, where the content provider has a specific product that should be marketed, the system can help the content provider both to find which users the product would fit, so it can be recommended to them, and to pick one or several strategies. Examples of marketing strategies could be email, posting an ad on social media, or marketing the product through the company's website or app. Budget matters when creating campaigns because some strategies are cheaper than others; budget is therefore one of the parameters that the system requires as input when helping the content provider pick a marketing strategy.

In the second use case, where the content provider wants to engage a specific group of end-users, the system will help the content provider navigate to these customers and their interests. This is done using historical data of their engagement with earlier marketing campaigns. The marketing strategy can also depend on what type of content should be marketed to reach the end-user.

The system could also be used to look at the data from the recommender system and to collect feedback from the end customer. Different data should be visible in the system to help the content providers measure the engagement from the customers. The thought is that the system should both help the content providers engage the end-users and track the progress of engagement from the end-users. The hope is that through this system customer loyalty will increase as a side effect of the increased engagement from the end-users.

2.2. Content providers

The content providers are the target audience that will interact with the system; the end-user will only see the results of the system and not the system itself. The content providers are divided into two subsets of the target audience: e-commerce stores and online services, because depending on which type of content provider a company is, it has different needs and purposes. For the e-commerce company, the main purpose is to sell as much as possible, and for the online service, it is to have the end-users use the service as much as possible. This also means that they have different KPIs and different information that is important to the company, so the information page will be used differently.

2.3. End users

The end users do not see the system but are the ones that help shape it. The end users both start the circle and close the circle. For the end-users, the system is a black box that they cannot change directly; they can only change it indirectly by providing different data. If the marketing campaigns are successful and the end-users like them, the system will know that and continue to use the same strategy, because the marketing works and starts to engage the end-users. If the end-users do not like the campaigns, the system will also know that and change based on the engagement and the feedback given by the end-users. This means that the content of the system will depend entirely on the end-users and their usage patterns. The end-users will also have tags in the system. These tags can, for example, be interests, or what type of user it is, such as whether they engage often or rarely with the store or service. The tags can also record how they like to be contacted. For example, if a content provider wants to send an email to the end-user, the content provider requires consent from the end-user.
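
To make the campaign inputs and outputs above concrete, here is a minimal sketch of what the parameter structure and strategy selection could look like. All names, thresholds and rules are hypothetical illustrations of the concept, not part of any implemented system:

```python
from dataclasses import dataclass

@dataclass
class CampaignRequest:
    """Input parameters the content provider supplies to the system."""
    use_case: str         # "promote_content" or "engage_users"
    budget: float         # campaign budget, e.g. in SEK (illustrative unit)
    target_audience: str  # e.g. "churning", "active", "all"
    product_type: str     # e.g. "book", "subscription"

def suggest_channels(req: CampaignRequest) -> list:
    """Toy strategy picker over the three channels named in the text."""
    channels = ["email", "on-demand", "social ad"]
    if req.budget < 5000:  # social ads assumed to be the most expensive channel
        channels.remove("social ad")
    if req.target_audience == "churning":
        # Churning users are assumed to require consent-based, direct contact first.
        channels = ["email"] + [c for c in channels if c != "email"]
    return channels

req = CampaignRequest("engage_users", 3000.0, "churning", "subscription")
print(suggest_channels(req))  # ['email', 'on-demand']
```

A real system would of course derive these rules from the historical engagement data described above rather than from hard-coded thresholds.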

3. Theory

To deepen the understanding of recommender systems and the effect that recommendations have, research about the topic has been done. In the following chapter, some of the most common filtering techniques for recommender systems will be explained, as well as how to explain and evaluate the recommendations given to the users, and the impact recommendations have on sales. Lastly, interview techniques, surveys, impact mapping and S.M.A.R.T goals will be explained.

3.1. Recommender System

A recommender system is a software tool and technique that gives the user suggestions of items that might be useful for them. The basic idea is to personalize the suggestions given to the user to make it easier for them to select an item. An item can for example be a product, a movie, a book, etc. [18]. Recommender systems have become a popular feature on websites like Amazon and Netflix. In 2006, Netflix started an open competition for the best collaborative filtering algorithm that could be used in Netflix's recommender system. The prize was called The Netflix Prize, and the winners, in 2009, won a million dollars for their algorithm [4].

Collaborative filtering

There are four different approaches to recommender system techniques, of which the most common is collaborative filtering. Collaborative filtering builds on the assumption that if two users' opinions are similar, then their future opinions will also be similar. For collaborative filtering, there are two types of methods: user-based and item-based. The user-based method finds similarities between users. It builds upon user IDs and user profiles. If one user's preferences and behavior are similar to another user's, then they are considered to be neighbors. In a neighborhood, the ratings and interests are similar, and therefore recommendations to a user can be made from the preferences of a neighbor. User preferences and behavior can be interpreted from search history, ratings and purchase information. The item-based method builds upon item profiles. When using an item-based method, the system searches for similarity and for items that are often preferred together. If products are often bought together, then they are neighbors and are therefore recommended to the user [10].
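
The user-based method can be sketched in a few lines: find the most similar neighbor by comparing rating vectors, then recommend items the neighbor rated highly that the current user has not seen. The ratings below are made-up toy data, and a single-neighbor lookup is a deliberate simplification of real neighborhood methods:

```python
import math

# Toy ratings matrix: user -> {item: rating on a 1-5 scale}.
ratings = {
    "alice": {"matrix": 5, "titanic": 1, "inception": 4},
    "bob":   {"matrix": 4, "titanic": 2, "inception": 5, "up": 4},
    "carol": {"matrix": 1, "titanic": 5, "up": 2},
}

def cosine_similarity(u, v):
    """Similarity between two users over the items both have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in common))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def recommend(user, k=1):
    """Recommend up to k unseen items, taken from the closest neighbor's ratings."""
    others = {name: r for name, r in ratings.items() if name != user}
    neighbor = max(others, key=lambda n: cosine_similarity(ratings[user], ratings[n]))
    unseen = {i: r for i, r in ratings[neighbor].items() if i not in ratings[user]}
    return sorted(unseen, key=unseen.get, reverse=True)[:k]

print(recommend("alice"))  # ['up'] -- bob is alice's closest neighbor
```

An item-based variant would transpose the idea: compute similarities between item columns instead of user rows, which scales better when there are far more users than items.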

Content-based filtering

Another approach for recommender systems is content-based filtering. This method recommends items that are similar to items the user has liked before. The difference between content-based filtering and collaborative filtering is that content-based filtering does not take into account what other users have liked; it only looks at the current user and finds similarities among items. This means that content-based filtering is a good option when there is a new item with only a small number of ratings, or when the content is new and thus has no user data connected to it [2].

Knowledge-based recommender

When developing a recommender system with either collaborative or content-based filtering, the system uses data about transaction history and ratings. If this data does not exist, is poor, or does not cover all of the information for all items, the recommendations will not be complete or give a correct image of the content. This problem is called the cold-start problem [14]. Both collaborative and content-based filtering are bad at giving recommendations for customized items, such as cars, real estate and tourism requests. These are areas where users do not purchase items often, and when users do buy items like this, it is also difficult to rely on other users' recommendations. For example, when looking for a new house, the user might want a specific number of bedrooms, house area, location, etc. These are examples of when a knowledge-based recommender system works best. A knowledge-based recommender system has two primary types: constraint-based and case-based. If the recommender system is constraint-based, the user puts in constraints or requirements for the item via the user interface; these can be lower or upper limits. For case-based recommender systems, specific cases are presented and specified by the users. These cases become targets or anchor points which form a similarity metric. The similarity metric forms the domain knowledge and returns the results [3].

Hybrid approaches

Another approach to a recommender system is a hybrid of different methods. Two or more approaches can be combined to get more accurate recommendations of the items [10]. For example, Turnip et al. combined collaborative and content-based filtering when developing a recommendation system for e-learning [22]. Comordo's recommender system uses a hybrid approach of both collaborative and content-based filtering: some parts of the system use collaborative filtering, some parts use content-based filtering, and other parts use a hybrid of both mixed together.

3.2. Explaining recommendations

Tintarev and Masthoff wrote a review of explanations in recommender systems [20]. The article aims to evaluate the explanations of the recommendations given to the user. An explanation can for example be "You watched movie X, so you might like movie Y as well". A good explanation helps the user know why a certain recommendation has been given. It can also help the user make a quicker and easier decision. The aims presented in the article are listed in table 3.1 [20]. The different aims can all be evaluated and are important aspects of recommendations. If the users feel like they can trust and rely on the recommendations coming from the system, they are more likely to use it. Measuring how understandable the explanations are can leave the users with a good experience of the system; an understandable explanation can for example lead to user trust, transparency and user satisfaction.
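
A common way to realize such a hybrid is a weighted blend: score each candidate item with a content-based similarity and a collaborative score, then combine them. The sketch below illustrates this idea only; the feature vectors, collaborative scores and weight are invented toy values, and this is not a description of Comordo's actual method:

```python
# Toy item feature vectors (genre weights), used for the content-based part.
features = {
    "matrix":    {"scifi": 1.0, "action": 0.8},
    "inception": {"scifi": 0.9, "action": 0.6},
    "titanic":   {"drama": 1.0, "romance": 0.9},
}

liked = "matrix"  # an item the current user has already liked

def dot(a, b):
    return sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))

def content_score(item):
    """Content-based: feature similarity between the item and a liked item."""
    return dot(features[liked], features[item])

# Collaborative scores would come from neighbors' ratings (see 'Collaborative
# filtering' above); here they are simply assumed numbers for illustration.
collab_score = {"inception": 0.7, "titanic": 0.4}

def hybrid_score(item, w=0.5):
    """Weighted hybrid: blend the two methods with weight w on content."""
    return w * content_score(item) + (1 - w) * collab_score.get(item, 0.0)

candidates = ["inception", "titanic"]
best = max(candidates, key=hybrid_score)
print(best)  # inception: similar content and a high collaborative score
```

Shifting the weight w toward the content side is one simple way to soften the cold-start problem described above, since content similarity needs no rating history.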

Table 3.1: Aims for explaining recommendations

Aim             Definition
Transparency    Explain how the system works
Scrutability    Allow users to tell the system it is wrong
Trust           Increase users' confidence in the system
Effectiveness   Help users make good decisions
Persuasiveness  Convince users to try or buy
Efficiency      Help users make decisions faster
Satisfaction    Increase the ease of usability or enjoyment

To evaluate transparency, it is possible to ask the users if they understand the system and why they are recommended certain products. In previous work, trust is often evaluated together with scrutability. Scrutability is hard to measure, because quantitative measures, such as the time to complete a task, can be deceptive: they may reflect issues with the interface rather than the scrutability of the system [20]. A study showed that users appreciated having the opportunity to scrutinize and change their user profiles to update the personalization. Even if it is uncommon for users to correct the system, when it happens the users see it as something positive [8]. Trust can be evaluated using questionnaires about the degree of trust users place in a system. It can also be measured indirectly via user loyalty, user logins and increased sales [20]. A study showed that user trust increased when the users could choose themselves whether to rate an item or not [15]: when a user did not have to review an item, the reviews for that item were considered more trustworthy than if the user had been forced to review it. Bilgic and Mooney evaluated the effectiveness of an online bookstore. In their study, the users had to rate a book twice: the first time when they received an explanation, and the second time after reading the book.
The goal was for the two ratings to be as similar as possible, because similar ratings show that the explanation was effective [5].

Persuasiveness is evaluated via user studies where the user navigates different systems that give different explanations and user ratings. A study showed that a histogram of the neighbors' movie ratings got the best response from the users. The histogram displayed how many of the neighbors rated the movie 1 or 2 out of 5, 3 out of 5, and 4 or 5 out of 5. An example of how such a histogram could look can be seen in figure 3.1 [12].

Figure 3.1: An example of movie ratings.

Efficiency is often evaluated in walk-through user tests, where the user continually interacts with the system to change the personalization of the recommendations. The test measures the time it takes for the user to complete a task as a variable of efficiency. One study looked at the time it takes for a user to find a restaurant in a personalized restaurant recommendation system [19]. To improve efficiency, it is possible to weigh the different options or variables against each other. In a study made with cameras, the users got different variables and had to weigh them against each other to find the right product. The variables could be "Less Memory and Lower Resolution and Cheaper", which showed the user what to change to get the best camera they could.

Satisfaction can be evaluated simply by asking the users whether they like the system or not. As mentioned above, satisfaction can also be measured by how understandable a recommendation is, and indirectly via user loyalty and increased sales. When measuring the satisfaction of a recommendation it is important to differentiate between the recommendation process with its explanation on the one hand, and the recommended product itself on the other [20].

3.3. Recommendations and sales

One big field of application for recommendations is sales. Recommendations are often used as a way for e-commerce stores to expose their products to their customers, and to expose the right type of products to each customer. One measurement connecting sales and recommendations is sales diversity. Sales diversity is meant "to reflect the concentration of consumer purchases conditional on firms' assortment decisions", meaning that it is not the number of products offered that matters, but the distribution of sales over the different products as a whole [9]. Whether a company wants a high sales diversity depends on the situation and the aims of the company. Fleder and Hosanagar investigated whether recommendations had any effect on sales diversity [9]. They found that sales diversity increased at the individual level with a recommender system, but not in aggregate. Fleder and Hosanagar attributed this to the filtering technique: common recommenders such as collaborative filtering base the recommendations on ratings and sales.
This means that a rich-get-richer effect occurs for popular products, and the opposite for unpopular products. Even if sales diversity increases at the individual level, the recommender system tends to recommend the same products to different users, because these products have better ratings and more historical data. Even if an individual user explores more and different products, it tends to be the same products that get recommended to most users: the popular products, or what some stores call the "top selling list" [9]. There are situations when a company might want a high sales diversity, and the wrong type of filtering can filter out the items that the company is interested in selling. For example, if the goal of the recommendations is to help the customers explore more products, the aim would be to expose as many items as possible. Similarly, if the company wants to sell its "back catalog", a filtering method such as collaborative filtering might not fit because
it exposes the items that other users have looked at or bought. On the other hand, if the goal of the recommendations is to sell as much as possible, collaborative filtering works, since it helps to increase sales diversity at the individual level [13]. For niche products, a good strategy is to use co-purchase networks. This means that a product can get sold or exposed through the connections between the product that the company wants to expose and the products that the users already look at. For example, if a publisher wants to sell a new book, they might put a discount on a different, popular book that is related to the new one. When the popular book draws attention, the new book gets more exposure to the consumers [13].

Another study, by Pei-Yu Chen et al., investigated the impact of recommendations on the sales of books on Amazon [7]. The results showed that recommendations did have a positive impact on sales: the books that were recommended also sold more. It was also discovered that consumer ratings did not have an impact on sales. Here, rating means that the customer could rate the product from 1 to 5, where 1 was the worst and 5 was the best. Even if a product had a good rating, for example 4.5 out of 5, this did not affect sales. What could be seen in the data is that recommendations are positively associated with higher sales. It was also discovered that for the more popular books, which in this data set were books 1–9999 of the bestselling books, recommendations were not associated with higher sales; for the popular books the recommendations did not make a significant difference. Pei-Yu Chen et al. also found that newer books were more popular than older ones, and that the popular books had a higher price than the unpopular ones [7].

3.4. Method
For this thesis, interviews and a survey will be conducted and an impact map will be made. This section presents theory about these methods.

Semi-structured interviews

A semi-structured interview contains both closed- and open-ended questions. Rather than following an exact list of questions, the interview is about understanding the topic, the agenda and/or the opinions of the respondents. Follow-up questions like "why" and "how" are common. A semi-structured interview should not exceed one hour, to minimize fatigue [1, 23]. The downside of semi-structured interviews is that they are time-consuming: the preparation, carrying out the interview, analyzing and compiling the results, and presenting them require more time than it appears at first glance. The results can also be hard to analyze if the questions diverge too much from interview to interview. The upside of semi-structured interviews is that they give in-depth understanding and direct communication with the interviewee [1, 23]. Semi-structured interviews can also uncover problems that were not known before and help unfold other concerns and issues [23]. When planning and preparing the interview there are many aspects to consider, such as how long the interview should be and when to schedule it. It is important not to schedule too little time, because the respondents might have to leave before the interview is finished, which would make the results deceptive. It is also good to remember that close-ended questions are a good entry point for open-ended ones. For example, after asking "In your judgment, was this program change a major improvement, minor improvement, or not an improvement?" the follow-up question could be "Why do you think that is?". This also gives an easy way of presenting the results with reference points, for example "X out of Y thought the program gave a 'major improvement' and stated these reasons..." [1].
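The "X out of Y" reporting style described above can be sketched as a small tally over coded close-ended answers. The answer strings below are invented examples, not data from this thesis.

```python
from collections import Counter

# Hypothetical coded answers to a close-ended interview question.
answers = [
    "major improvement", "minor improvement", "major improvement",
    "not an improvement", "major improvement",
]

# Tally each answer and report "n out of total (percent)".
counts = Counter(answers)
total = len(answers)
for answer, n in counts.most_common():
    print(f"{n} out of {total} ({100 * n / total:.0f}%) said '{answer}'")
```

The same tally also backs the percentage form ("32% of the respondents said X") mentioned for open-ended questions, once the free-text answers have been coded into categories.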

A good way to start the interview is with some easier questions, or even some small talk, to make the respondent feel at ease. It is also good to begin by explaining what the interview and research are about, and then continue with the more in-depth questions. It is also good to start with the positive questions, such as "What do you like about X?", because people who find it hard to voice criticism often find it easier to speak out after first acknowledging the good parts. Another reason is that people often find it hard to say something good after having taken a critical tone; generally, it is easier to first say the positive and then the negative rather than the other way around. After the positive questions, the more in-depth questions can be asked, and the interview should end on an easy and positive note so that the respondents leave with a good impression [1]. After the interviews, it is time to analyze the results. Because semi-structured interviews can involve both open- and close-ended questions, the close-ended results can be presented in tables and diagrams. Open-ended questions can be presented with percentages and/or with illustrative quotations. An example with percentages is "32% of the respondents said X"; the percentage is obtained by counting how many respondents gave a specific answer to a question. Illustrative quotation means that if several respondents share the same experience, this experience can be illustrated by quoting comments made by the respondents in order to make the experience more vivid [1].

Structured interviews

A structured interview has a specific set of questions asked in a specific order. The questions can be either close- or open-ended. The interview can be conducted over the phone, in person, through chat or by email.
The strengths of structured interviews are that they are easier to conduct than semi-structured and unstructured interviews, and that the responses are easier to compare with each other than for semi-structured interviews. This also makes the data analysis easier than for unstructured interviews. The weakness of structured interviews is that if the background work behind the questions and the respondents is not solid, the questions asked might be the wrong questions. If a question is wrong, then the answers given are precise answers to the wrong question, and they would not give much information for the study. A structured interview can also put the participants in a more passive role than semi-structured or unstructured interviews, which can give the impression that the interviewer already has an answer that they want and consider important [24].

Survey

Surveys are a way to gather information from a large number of people faster than interviewing them, and they make it possible to reach a wider set of respondents. When creating a survey there are a few general things to consider in order to get the most out of the answers. First of all, it is important to only ask one question at a time. For example, if the question is "Do you like ice cream and cake?" and the answer is yes, does that mean that the respondent likes both ice cream and cake, just cake, or just ice cream? Two separate questions would give much clearer answers [21]. It is also important to choose the right words when writing the survey, because unlike in an interview the respondents cannot ask for clarification if they do not understand. Using the same language, that is the same type of terms, as the respondents is important in order not to get misleading answers due to miscommunication [21].
When asking a question in a survey it is also good to avoid negations, because they often confuse the respondents. For example, "Is it good to have many questions in a questionnaire?" is a better way of asking than phrasing the same question with a negation. Avoiding negations is one way to make the questions simpler for the respondents [21]. After the answers to the survey have been received, they need to be analyzed and presented. When presenting the data, it is important to remember that the results should be presented in such a way that it is easy for the target audience to understand them. One of the most common ways of presenting the results is to use diagrams or tables; diagrams that are often used are pie charts, histograms and bar charts [21].

Impact mapping

An impact map is a method used to get an overview of the effects or impacts that a project is going to produce. It answers the questions: why do we want to create this product, what do we want to accomplish, what does the project need, and how will the project be done? The method is often applied when a new IT system, or an update of an old one, is made, but not always, as it can be used in many different areas. The impact map is made for the leaders and developers so that they have an overview of why a project is done: what impacts or effects the system will provide, who will create these effects, what these people need in order to create the effects, and how the project will be done. This is illustrated in figure 3.2, from [17]. The impact map consists of four parts: purpose, target group, goals of usage and task/measure. The purpose (number one in figure 3.2) is usually just one sentence that summarizes what the project wants to accomplish when completed. The sentence is the goal and has complementary measure points describing how the goal should be measured. These goals should all be measurable and concrete [17]. The second part of the impact map is the target group (number two in figure 3.2).
An impact map can have many target groups, because the map should include all the different types of people that will be in contact with the system: for example the developer who maintains the system, the novice user, or the expert user. It is important to divide the users according to the interests and effects they want from the system. A user who just wants to browse the content of the system to explore it and a user who wants to search for a specific type of information would be in different target groups, because they have different goals of use and therefore want different effects from the system [17].

Figure 3.2: Overview of the impact map.

The third part (number three in figure 3.2) is the goals of usage. This is where it is stated what the target groups need in order to reach the goal effects of the system. What the target groups need means what is required to reach the wanted effects, not what the target groups want in order to get there. The main question that a goal of usage answers is "How shall the system work in order to meet the users' needs?". There will be at least as many goals of usage as there are target groups, but since two target groups can have the same goal of usage, the numbers of goals and target groups do not have to be equal. It is also important that the goals of usage are measurable [17]. The fourth and last part (number four in figure 3.2) is task/measure. The task/measure gives a suggestion on how the goal of usage can be reached. Depending on the project, the task/measure can either be an adjustment, change or update of the business/system, or an implementation of a new system. The tasks/measures can have a tree structure where the root is the logical reasoning and the leaves are the more concrete tasks.

S.M.A.R.T goals

For an impact map it is important to have good goals, but even more important to have great measure points. A way to make the goals clearer and more reachable is to use the S.M.A.R.T goals method [16]. S.M.A.R.T goals are used to focus a team or individual on reaching a goal; a reasonable goal helps a team stay focused and motivated. S.M.A.R.T is an acronym for Specific, Measurable, Achievable, Realistic/Relevant and Time-based, which are the properties a goal should have in order to be a S.M.A.R.T goal. The first letter, S, stands for Specific. A goal needs to be specific in order to be reachable and effective.
The more concrete a goal is, the better. For example, "I want people to answer my survey" is not specific enough, but "I want more than 20 answers to the survey" is. This leads to the second letter, M, which stands for Measurable. If a goal is not measurable, how can you know when it is reached? Making a goal measurable also helps to keep motivation up, because progress can be tracked throughout the project. The third letter, A, stands for Achievable. It is not good to set a goal that is not achievable; for example, it is not realistically achievable to get one million answers to the survey. This is also addressed by the next letter, R, which stands for Realistic or Relevant. It is important that a goal is realistic and not just achievable: it should be something that can be done and that makes sense for the project. For example, certain tasks might require skills that do not exist in the team and might not be relevant for the project either. The last letter, T, stands for Time-based. A goal needs to be time-based in order for it to become a priority; a time plan also helps to avoid being distracted by other, less important goals. For this example the goal could be "I want more than 20 answers to the survey within the next three weeks" [6, 16].
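The four-part impact map structure described in this section can be sketched as a small data structure, which makes the relationship between purpose, target groups, goals of usage and tasks explicit. The example content below is invented for illustration and is not Comordo's actual impact map.

```python
from dataclasses import dataclass, field

# Minimal sketch of the impact map parts described above:
# purpose -> target groups -> goals of usage -> tasks/measures.
@dataclass
class GoalOfUsage:
    description: str              # "How shall the system work to meet the need?"
    measure_point: str            # must be measurable (cf. S.M.A.R.T goals)
    tasks: list = field(default_factory=list)  # concrete tasks/measures

@dataclass
class TargetGroup:
    name: str
    goals_of_usage: list

@dataclass
class ImpactMap:
    purpose: str                  # one measurable sentence
    target_groups: list

# Hypothetical example content:
example = ImpactMap(
    purpose="Increase end-user engagement with recommendations",
    target_groups=[
        TargetGroup(
            name="End-user",
            goals_of_usage=[
                GoalOfUsage(
                    description="Receive relevant recommendations",
                    measure_point="More than 20 survey answers within three weeks",
                    tasks=["Send personalized recommendation emails"],
                )
            ],
        )
    ],
)

print(example.target_groups[0].goals_of_usage[0].measure_point)
```

Note that the example measure point is phrased as a S.M.A.R.T goal: specific, measurable, achievable, relevant and time-based.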

4. Method

For this project, three main activities were carried out: interviews with the content providers, a survey for the end-users, and the creation of an impact map that shows the results of the interviews and the survey. The impact map gives the project management an overview of what is included in the project and how the different tasks can be done. Interviews with the content providers were carried out because an understanding of their process was needed, and the level of understanding required could not have been reached using a survey. For the end-users, more answers were required than there was time to gather through interviews. Therefore, a survey was made to discover the attitudes the end-users had towards marketing and recommendations.

4.1. Interviews

Five interviews were conducted for this thesis. Two of the interviewees had a background within e-commerce and three had a background within online services. Three of the interviews were conducted using a structured methodology via email and two using a semi-structured methodology via telephone. The interviewees were found via connections from MindRoad and Comordo. All the interviewees were contacted via email, where the project was explained together with a request for an interview estimated to take around 30 minutes. Of the three email interviewees, one had a background in e-commerce and two within online services. The reason these interviews were conducted by email was mainly that the interviewees did not have time for an interview via phone or video. Even though email interviews have disadvantages, such as no opportunity to observe the interviewee, there are also advantages. One advantage is that the interviewees could answer whenever the time suited them, since email interviews happen asynchronously.
The interviewees also had time to think about their answers before responding, which can make the answers more complete compared to a face-to-face interview where the interviewee is expected to answer right away. Another advantage is that the transcript, or the summary of the interview, is already done and can easily be stored, compared to a phone interview [11]. The questions asked in the emails concerned in which situations they use recommendations, where they see the biggest difference from before they started using recommendations, in which situations they do not see a difference from the recommendations, when they started to use recommendations and why, what they value in the recommender system, and how they communicate with their end-users.

The other two interviews were conducted using a semi-structured interview methodology. One of the interviewees had a background within an online service and one within e-commerce. The two interviews took approximately 30 minutes each. Since the two interviewees had different backgrounds, the questions differed, because they would have different use cases for a system that aims to engage the end-users. The interviewee with a background within e-commerce worked at a company that did not have a recommender system today but wanted one. The questions in that interview were about why the company wanted a recommender system and how they thought it would benefit the company. Questions were also asked about how the company communicates with its customers today, what needs they see among the company's customers, how they work with marketing today, and why they choose to market their products that way. The interviewee with a background within online services works at a company that creates a white-label app that is used by end-users but controlled by different companies. This means that this interviewee had the perspective of working business-to-business, instead of the business-to-client perspective of the recommender system. The questions in that interview were about what the businesses they work with need and value from them as a white-label company and cooperation partner, how they help the businesses market their app, the needs they see that they fill for the businesses, how they measure engagement from the end-users, and what data they help provide for their cooperation partners.
After the phone interviews were done, they were summarized, based on both the notes taken during the interview and the recording of the interview.

4.2. Survey

The survey was made using Google Forms. It contained 13 questions, 3 of which had follow-up questions. Of the 13 questions, 11 were answered using radio buttons (which give the respondents the possibility to pick only one choice out of many) and 2 using check-boxes (which give the respondents the possibility to pick several options). All follow-up questions used free-text answers. After each question, the respondents had the opportunity to write down comments and thoughts about the question if there was something they wanted to state. The survey asked the respondents about their attitudes regarding marketing, with a focus on digital marketing and directed marketing, because recommendations are an example of directed marketing: the recommendations are specific to an end-user. The first three questions asked the respondents about their attitudes regarding email and email marketing: how often they get emails containing marketing and how long they spend, on average, on each of these emails. Afterwards, questions about digital marketing were asked, concerning the relevance of the digital marketing that the respondents see online, both which types they see and how relevant it is. A question about which type of digital marketing the respondents prefer, and why, was also asked. After that, questions were asked about the respondents' attitudes regarding social media and their personal feed on social media: which medium (marketing channel) they devote the most attention to, how much they feel they can control their feed on social media, and why.
Afterwards, the respondents had to fill in through which channel they look at marketing regarding fashion, electronic devices, restaurants, groceries, events, online services and offline services (such as a hairdresser). Then the respondents had to answer how likely they are to read a text a company has written, look at a marketing video, click on an ad on social media, and click on a link/image/button in an email. Then the respondents were given four scenarios where they had to choose between two different sets of recommendations for a specific product. The products were two books, a shoe and a coffee maker. These objects were chosen because they are all gender-neutral (the shoe was a sneaker in a unisex model) and could be bought by people of different ages. The recommendations for these objects were grouped so that one group of recommendations reflected a collaborative filtering method and the other group reflected a content-based filtering method. Lastly, the respondents were asked which age group they belonged to: under 18, 18–24, 25–34, 35–44, 45–54, 55–64 and over 65 years. Age was the only demographic data that was collected, in order to keep the survey anonymous. All the answers to the survey were collected online via Google Forms. The respondents were given a link that took them to the survey. The survey was a one-page survey and no question was compulsory; all questions were voluntary. The link to the survey was sent out via Facebook to two different forums, one for students studying media technology at Linköping University and the other for women who work within technology. The survey was also sent out to employees at MindRoad in Linköping. The answers were collected in a spreadsheet. The data was first compiled into pie charts, bar charts and column charts for all the answers from all age groups. Afterwards, the data was split into the different age groups and the result was visualized in column charts. For all the answers given with radio buttons, the visualizations count each answer as one unit, which later got divided by the number of all answers from the same age group.
For the questions where check-boxes were used, the number of checks was counted to get percentages that sum to 100%. If the number of respondents had been counted instead, the percentages would have summed to more than 100%, since one respondent can give several checks. To get 100%, one check was divided by the total number of checks given for a certain question within the specific age group. For example, for one question 28 respondents answered with 48 checks in total (13 answered with one check, 11 with two checks, 3 with three checks and 1 with four checks). This means that for the answers to sum to 100%, the number of checks for one specific option needs to be divided by 48 instead of 28. If divided by 28, the result would instead be the percentage of respondents who thought an option was a good choice; when divided by 48, one check becomes as valuable as any other check, which is the desired outcome.

4.3. Impact mapping

When the interviews had been made and the responses from the survey had been received, the creation of the impact map started. First, the goal and the target groups were decided. Then the measure points of the goal were decided, based on how the content providers measure the goal today. After the measure points were made, the goals of usage were decided based on the interviews with the content providers. The prioritization of the goals of usage was based on the interviews and the interviewees' views on how important certain aspects of the work were. Lastly, the tasks were decided based on how the goals of usage could be achieved. When the content of the impact map had been created, the visualization of it began. The visualization was made in Adobe Photoshop. The colors and shapes followed the graphical profile of Comordo, with yellow, gray, white and black.
The quotes from the target audience were taken and rewritten from the interviews; they were rewritten to fit the space of the impact map.
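The check-box normalization described in section 4.2 can be sketched in a few lines, reproducing the worked example of 28 respondents who gave 48 checks in total. The distribution below is the one stated in the text; the helper function name is chosen here for illustration.

```python
# Sketch of the check-box normalization from section 4.2: every individual
# check counts as one unit, so percentages are computed against the total
# number of checks (48), not the number of respondents (28).

# 13 respondents gave 1 check, 11 gave 2, 3 gave 3 and 1 gave 4:
checks_per_respondent = [1] * 13 + [2] * 11 + [3] * 3 + [4] * 1

total_respondents = len(checks_per_respondent)   # 28 respondents
total_checks = sum(checks_per_respondent)        # 48 checks

def share_of_checks(n_checks, total):
    """Percentage weight of n_checks out of all checks given."""
    return 100 * n_checks / total

print(total_respondents, total_checks)              # 28 48
print(round(share_of_checks(1, total_checks), 1))   # one check is about 2.1%
```

Dividing by 48 instead of 28 makes every check equally valuable and guarantees that the option percentages sum to 100%.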

5. Results

The results presented here are the answers from the survey and the interviews made during this project. These results then lay the basis for the impact map, which is presented at the end of this chapter.

5.1. Survey

The survey got 97 responses: 1 respondent was under 18 years old, 29 respondents were 18–24, 35 were 25–34, 11 were 35–44, 14 were 45–54, 6 were 55–64 and 1 respondent was over 65 years old. Since the number of responses differs a lot between the age groups, it is not possible to say that the results represent each whole age group's attitude. Therefore, the results can only give an indication of what the result might be, rather than a certain answer. Since there is only one respondent in the age group under 18 years and one in the age group over 65 years, these answers will not be taken into account, as one respondent cannot represent a whole age group. The first questions were about how often the respondents got emails. Most respondents answered that they got emails containing marketing more than once per day. The result can be seen in figure 5.1. Of all respondents, 53.7% got more than 1 email per day and 14.8% got an email less than 4 times a month. One reason that some of the respondents get fewer emails than others is that some of them wrote in the free-text fields that they actively try to unsubscribe from, or mark as spam, as many emails as possible that they find irrelevant, while others just consider the emails spam and do not do anything about them. The result of dividing the answers by age group for the first question, how often the respondents get emails containing advertisement, can be seen in figure 5.2. As shown in figure 5.2, the trend seems to be the same for all age groups.
For all the age groups more than 54,5% of the respondents got at least one email per day that contained advertisement. This result also points out that the younger age groups are the ones that do unsubscribe to the emails more than the respondents in the older age groups. There was one respondent in the age group 55–64 that expressed in the free text option that they do try to unsubscribe from emails.. 16.

Figure 5.1: Chart over how often the respondents get emails regarding marketing.

Figure 5.2: Chart over how often each age group gets emails regarding marketing.

Figure 5.3: Chart over the attitude towards opening emails containing offers and/or recommendations.

Figure 5.4: Chart over the attitude towards opening emails containing offers and/or recommendations for each age group.

For the next question, regarding whether the respondents open these emails, 68.1% of the respondents answered that they usually do not open these emails and look at the content, which can be seen in figure 5.3. It is also shown that 18.1% of the respondents do usually open emails containing offers and/or recommendations. Figure 5.4 shows that the result might be the same for all age groups. In the free-text section, most of the answers state that the respondents read the email if its headline is appealing. This might show that some people do not look at the email itself but do read the headlines.

When the respondents do open the email, 77.9% of them do not look at the content for more than 30 seconds, which can be seen in figure 5.5. This aligns with the free-text answers: most people do not feel that they have time to read all the emails and therefore decide not to read them. All the respondents who answered that they never open emails containing offers and/or recommendations also answered that they spend less than 30 seconds on average, which can also explain why so many respondents chose this option. For the different age groups, it looks like the age groups 35–44 and 55–64 are the two that spend the most time reading emails, which can be seen in figure 5.6. The age group that seems to spend the least amount of time looking at their emails is 45–54; only 7.1% said that they spend more than 30 seconds on one email on average. It is also the only age group without any answer of spending more than one minute on average looking at one email. In the free-text answers, the respondents said that they do not feel that they have time to read all of the emails and that they mostly just read the headlines unless something catches their interest.

Figure 5.5: Chart over how long all the respondents look at an email on average.

Figure 5.6: Chart over how long all the age groups look at an email on average.

When looking at the emails that are sent to the respondents, 35.5% either fully agree or agree that the recommendations sent to them are relevant. Almost the same number of respondents, 33.4%, feel that the recommendations are not, and 31.2% of the respondents said that they are neutral on the question. These results can be seen in figure 5.7. Some of the free-text answers said that they are neutral because they do not look at these emails and therefore do not know whether the recommendations are relevant; others say that the recommendations are sometimes relevant and sometimes not. A common free-text answer was also that the respondents who feel that the emails are relevant do so because they have signed up for the emails, which therefore come from stores that they usually shop from. The respondents who disagree wrote that when they have already bought a product and afterwards get recommendations for it, these recommendations are not seen as relevant.

When looking at how the responses are distributed among the different age groups, figure 5.8 shows that the distribution is similar. It is also seen that the only age group with responses of fully agreeing is 18–24. The age group that disagrees the most seems to be 35–44; this is also the age group that seems to think that the recommendations sent to them are the least relevant compared to all the other age groups. The respondents answering 'Neutral' also seem to be distributed evenly throughout the age groups, which is expected since the answers to those questions also seem to be evenly distributed. The age group 45–54 is one of the age groups that disagree the least. This group is also the age group that seems to spend the least amount of time looking at the content of the emails, but at the same time the age group that seems to open the emails the most. This means that they mostly feel that the recommendations sent to them are relevant and do open them sometimes, but they do not spend any time looking at the content of the email.

Figure 5.7: Chart over how all the respondents feel about the relevance of the emails sent to them.

Figure 5.8: Chart over how each age group feels about the relevance of the emails sent to them.

For the question "Which forms of marketing do you see on social media?" the respondents had to check the check-boxes for the forms of marketing they see, which is why there are many answers. The responses to this question can be seen in figure 5.9. The most common answer, and thereby the most seen form of marketing, is sponsored ads. The answers 'Companies post a picture/video' and 'Influencers advertise a product/service' seem to reach the same number of respondents, meaning that they seem to be equally common.

When asked which digital marketing the respondents preferred, 59.6% of the respondents answered 'Through the companies' channels', which can be seen in figure 5.10. The second most preferred type of digital marketing is email, with 18.1% of the answers. The sub-question was "Why do you prefer that type of digital marketing?", and most of the answers were that the respondents feel that they can choose this option themselves. Marketing through the companies' own channels, such as a web page or an app, is something the respondents look up themselves. The same argument was given for the email option, where the respondents feel that they can choose which companies may send emails to them and when they open these emails. For sponsored ads, the argument was that it is clear that these ads are marketing, and they are therefore easier to ignore when the marketed product is something that does not interest the respondent.

Figure 5.9: Chart over which marketing each age group sees on social media.

Figure 5.10: Chart over which digital marketing the respondents prefer.

For the different age groups the result looks a little different, which is seen in figure 5.11. Even though the option 'Through the companies' channels' seems to be the most popular for all age groups, the attitude towards email seems to differ between them: the age groups 35–44 and 45–54 seem to prefer email more than the rest of the age groups.

The next question concerned the most effective way to reach the respondents as customers. The result is seen in figure 5.12. In alignment with which forms of marketing the respondents see on social media, the number one answer is 'Ads on social media'. But compared to the preferred type of marketing, which was through the companies' channels, the most effective way to reach the respondents as customers is ads instead. For the different age groups, the result seems to differ, which is shown in figure 5.13: the younger age groups seem to be easily reached via ads on social media and the older age groups through the companies' channels. The age groups 35–44 and 45–54 seem to be reached through email more than the ages of 18–34, and ads on external web pages seem to be more popular among the age groups 25–34 and 55–64.

Figure 5.11: Chart over which digital marketing each age group prefers.

Figure 5.12: Chart over how the respondents are reached in an effective way.

When asked whether the respondents feel that they can control their feed on social media, more respondents felt that they could not than felt that they could. This can be seen in figure 5.14. 39.6% of the respondents say that they either disagree or fully disagree that they can control their feed on social media, and 38.5% say that they agree or fully agree. Here the points of view seem to differ among the respondents. Most respondents say that they can control which accounts they follow, but the opinions differ when it comes to the ads. Some respondents feel that they cannot control the ads: once the ads are in the feed they are there, meaning that the respondents do not feel that they can control the number of ads in the feed. Other respondents say that they can control it, because if they search for a type of product, that type of product ends up in the ads in their social media feed, meaning that they can in that way control what is shown in the ads. The first group of respondents instead means that if they search for a type of product, it ends up in their feed whether they like it or not. A few of the respondents who agreed state that they feel that they can control the feed because they can report the ads that they do not want to see. For the different age groups, the results do seem to differ a little bit in the amount agreeing or disagreeing, which is seen in figure 5.15. For the age group 45–54 it seems like
