
Degree Thesis
HALMSTAD UNIVERSITY
Computer engineering, 180 credits

Chatbot
The future of customer feedback

Degree thesis, 15 credits

Kevin Hoang Dinh


Acknowledgements

I would like to express my deepest appreciation to Magnus Olander from Quicksearch and Ludvig Linse from Narratory for providing me with materials and helping me with technical issues. Without their help, this study would not have been possible.

I also thank my advisor, Eric Järpe, for helping me with writing my thesis and giving me a lecture on hypothesis testing.

In addition, thanks to my friend Carl Thomsen for proofreading, fixing my spelling mistakes, and supporting me through this period.

Last but not least, I would like to thank my family and friends for their mental support during this corona period. It has been a lot of stress for me in these tough times, and I almost gave up halfway through this study. Thanks to their support I managed to push through and finish it.


Abstract

This is a study about how to convert a survey into a chatbot and distribute it to various communication channels to collect feedback that organizations can use to improve themselves. What would be the most convenient way to gather feedback? Our daily lives are becoming more and more dependent on digital devices, and the rise in digital devices leads to a wider range of communication channels. Is it not a good opportunity to use these channels for several purposes? This study focuses on chatbots, survey systems, communication channels, and their ability to gather feedback from respondents and use it to increase the quality of goods, services, and perhaps life. By using the chatbot's language knowledge, people can engage with the bot in a conversation and answer survey questions in a different way. By using a RESTful API, the chatbot can extract quantitative information to be analyzed for development. Although the chatbot is not well-made and still requires a lot of adjustments, the work has proven to have many opportunities in surveys, gathering feedback, and analyzing it. This could be an improvement for future research on chatbots or a new way to make surveys better.


Sammanfattning

[Swedish summary, translated:] This is a study about how to convert a survey into a chatbot and spread it to different communication channels to gather feedback for self-improvement. What would be the most convenient way to gather feedback? Our daily lives are becoming more and more dependent on digital devices every day. The increase in digital devices leads to a larger range of communication channels. Is it not then a good opportunity to use these channels for several purposes? This work focuses on chatbots, survey systems, and their ability to gather feedback from respondents and use it to increase the quality of goods, services, and perhaps life. By using the chatbot's language knowledge, people can engage with the bot in a conversation and answer survey questions in a different way. By using something called a RESTful API, quantitative information can be extracted and analyzed for the purpose of improving products and services. Although the chatbot is not well-made and still requires many adjustments, the work has proven to have many opportunities in surveys, gathering feedback, and analyzing it. This can be an improvement for future research on chatbots or a new way to improve surveys.


Table of contents

1 Introduction
 1.1 Objective
 1.2 Research questions
 1.3 Limitations

2 Background
 2.1 Chatbot
 2.2 Natural Language Processing
 2.3 Extract, Transform and Load
 2.4 REST API
 2.5 Related Work

3 Method
 3.1 A chatbot that can interact with the respondent
 3.2 Survey/Answer file and Application

4 Result
 4.1 A chatbot that can interact with respondents
 4.2 An application to analyze the respondent's answer and update the chatbot dialog

5 Discussion

6 Conclusion

References

Appendix A


Chapter 1 Introduction

Societies, organizations, and residents are becoming more dependent on digital devices.

People use computers and mobile phones to contact each other more frequently, which leads to increased traffic on these communication devices. As the number of chat channels continuously increases and they become more common in our daily lives, it also becomes more relevant to use them for different purposes. An interesting idea connected with those purposes is Quality Control, whose most commonly used instrument is the survey. Quality Control is a process to ensure that the quality of a product or a service keeps up with people's demands. Would you want to use an ineffective product? Would you pay for a poor service? Or take a medicine that barely cures your sickness? This is why Quality Control is very important. Without it, the quality of the output is put at risk.

The survey gives us the possibility to gather quantifiable answers from structured questions, which makes Quality Control easier to handle for different customers and collaborators.

Many application areas can be investigated by means of surveys used for Quality Control: error inspection [1], crowdsourcing [2], and neurochemical dementia diagnostics [3].

However, the problem with traditional polls is survey fatigue, also known as respondent burden [4]: participation in a survey can be time-consuming and require effort, which makes the respondent less likely to answer. This leads to a lower response rate, making the poll's data unreliable. Improving survey quality is more important than people think. How will a product improve without consumers' opinions? How can a business grow without listening to its customers? The things people use every day have been fitted to their needs largely thanks to surveys; without them, our society could hardly progress and would stagnate. But how can surveys be improved further? According to Angela Sinickas, the key to reducing survey fatigue is making sure that people feel their opinions are being listened to [5]. For example, shortening the survey can increase the response rate significantly, but at a great cost in quality. One study shows that incentives increase both response rate and quality, but at a monetary cost [6]. Online surveys are less time-consuming and more cost-effective, yet their sampling raises issues and concerns [7].

1.1 Objective

There are many ways to improve surveys, but most come with a cost. The primary purpose of this project is to create a unique way, with the help of a chatbot and chat channels, to improve the response rate, quality, and cost-effectiveness of surveys. After that, people can deploy the chatbot in their applications, services, and research for their own purposes.

1.2 Research questions

In order to reach the objective the project requires:

- A chatbot that can interact with the respondent. Can the chatbot understand the respondent’s answer? What will the bot do if it does not understand?

- An application which is able to generate analyzable information from the interactions with the chatbot, the surveys, and the respondent's answers. How does the application send out the survey to the bot and gather answers from the conversations? Can the application also update the chatbot dialog (answers and questions) if there is a new poll or a change?

1.3 Limitations

There are many types of surveys, such as market research, employee satisfaction, and training evaluation, but this project focuses mostly on the use of a chatbot in a survey system and how the respondent feels about using the chatbot compared to a poll/survey. It would require a lot of time, research, and resources to go through all survey types. Additionally, the project will focus on Messenger due to the limited workforce. The project also excludes the chatbot's speech interface for similar reasons.


Chapter 2 Background

Answering the research questions requires certain knowledge and methods. This chapter goes deeper into the theories and materials used to achieve the objective.

2.1 Chatbot

Chatbots, also known as conversational agents, exploit Natural Language Processing (NLP) to interact with users through an interactive text- and speech-based dialog [8]. Implemented on many different platforms (e.g., applications, web services, chat channels), they respond to customers' inquiries and questions. The uses of chatbots are endless, as they are becoming a part of people's daily lives. Apple's Siri, Amazon's Alexa, and Google Assistant are agents well known for their multiple uses, e.g. training, education, booking trips, transactions, customer service, or route planning [9]. Developers evolve complex agents non-stop, adding more abilities and functions. Besides these complex multi-use agents, there are agents committed to a single task that can be composed through a cloud communications platform as a service (PaaS). Popular platforms that specialize in conversational chatbots are Dialogflow, Twilio, and Azure Bot Service.

The development of chatbots is on the rise. From simple code patterns to hardware products and embedded deep-learning systems, chatbots have been created for multiple purposes and have proven their usefulness in many different cases [10]. Even though every chatbot has different objectives, they all build on the same field: conversational knowledge. The oldest chatbot, ELIZA, generates its answers by matching keywords from the user's statement [11]. This simple implementation became the most referenced in today's conversational techniques [12]. By exploiting these techniques (e.g., NLP, keyword extraction, deep learning, or machine learning), chatbots can understand the user's input and reply with an appropriate answer, making the conversation more human-like, which encourages participants to continue using the service [13].

2.2 Natural Language Processing

Natural Language Processing (NLP) is a field of research on computers' understanding of human language, including both text and speech. The essential part of all chatbots is to interpret the user's input and give back the desired output. NLP is an accumulation of many application areas such as machine translation, natural language text processing and summarization, user interfaces, multilingual and cross-language information retrieval (CLIR), speech recognition, artificial intelligence, and expert systems [14]. NLP involves three major issues of Natural Language Understanding (NLU) that need to be solved for computers to accomplish human-like language processing: the first is the human thought process, the second is the comprehension of the semantic input, and the last is world knowledge, the knowledge outside the default program. So as not to confuse "processing" with "understanding": NLP focuses on the process of natural language interpretation, while NLU focuses on the interpretation itself.

2.3 Extract, Transform and Load

The project’s point is to compare the normal survey to the chatbot regarding the quality, effectiveness, and response rate. In order to compare data from different sources using different formats, it needs to be extracted and transformed into the correct one. Extract, Transform and Load (ETL) is the name of the process when integrating data from multiple sources and put them together into one system. The process is as follows:

• Extract data from different channels, platforms or applications.

• Transform the data into the correct format for the targeted system.

• Load the transformed data into the system.
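As a schematic illustration of the three steps, here is a minimal sketch in TypeScript. The record shapes, channel names, and function names are invented for this example, not taken from the project's code:

```typescript
// Minimal ETL sketch. Record shapes, channel names, and the in-memory
// "warehouse" are invented for this illustration.
type RawAnswer = { channel: string; questionId: string; value: string };
type SurveyRow = { questionId: string; answer: number };

// Extract: gather raw answers from several channels into one list.
function extract(sources: RawAnswer[][]): RawAnswer[] {
  return sources.reduce((all, s) => all.concat(s), [] as RawAnswer[]);
}

// Transform: convert raw answers into the target format, keeping only
// valid 1-5 ratings.
function transform(raw: RawAnswer[]): SurveyRow[] {
  return raw
    .map((r) => ({ questionId: r.questionId, answer: Number(r.value) }))
    .filter((row) => Number.isInteger(row.answer) && row.answer >= 1 && row.answer <= 5);
}

// Load: hand the transformed rows to the target system (here an array).
function load(rows: SurveyRow[], target: SurveyRow[]): void {
  target.push(...rows);
}

const warehouse: SurveyRow[] = [];
load(
  transform(
    extract([
      [{ channel: "messenger", questionId: "q1", value: "4" }],
      [{ channel: "slack", questionId: "q1", value: "not sure" }],
    ])
  ),
  warehouse
);
```

Here the free-text answer that is not a valid rating is dropped during the transform step, so only the well-formed row reaches the target system.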

2.4 REST API

Representational State Transfer (REST) is an architectural style for building distributed hypermedia systems. HTTP (HyperText Transfer Protocol) is the main protocol in most common REST implementations. A RESTful API works around resources, which can be any kind of data, object, or service that can be accessed through an identifier, a URL (a web address). Because REST APIs use HTTP as a protocol, they can perform the common HTTP methods. Using a resource identifier together with HTTP methods allows the client to perform common operations on resources:

• GET: retrieves a representation of a resource.

• POST: creates a new resource. The client must provide the data for the new resource.

• DELETE: removes a resource. The client must provide an identifier to remove a specific resource.

This project will use a REST API to build a web service that receives a POST request through a webhook.
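A minimal sketch of such a web service, using only Node's built-in http module. The route name, payload fields, and validation rules are assumptions for illustration, not the project's actual implementation:

```typescript
// Minimal webhook receiver sketch. The /webhook route and the
// questionId/answerId fields are hypothetical stand-ins for the real
// data the webhook would POST.
import * as http from "node:http";

// Pure request handler, separated out so the logic can be exercised
// without a running server.
function handleWebhook(body: string): { status: number; reply: string } {
  try {
    const payload = JSON.parse(body); // e.g. { questionId: "...", answerId: "..." }
    if (typeof payload.questionId !== "string" || typeof payload.answerId !== "string") {
      return { status: 400, reply: JSON.stringify({ error: "missing fields" }) };
    }
    return { status: 200, reply: JSON.stringify({ ok: true }) };
  } catch {
    return { status: 400, reply: JSON.stringify({ error: "invalid JSON" }) };
  }
}

// Wiring the handler into an HTTP server; server.listen(8080) would start it.
const server = http.createServer((req, res) => {
  if (req.method !== "POST") {
    res.writeHead(405).end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const { status, reply } = handleWebhook(body);
    res.writeHead(status, { "Content-Type": "application/json" }).end(reply);
  });
});
```

Keeping the validation in a pure function makes the POST logic testable independently of the network layer.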

2.5 Related Work

The project’s objective is to engage consumers in a poll through a conversational chatbot.

Similar work falls into two areas, as detailed below.

Agents for information

Many developers have built agents to interact with their customers to retrieve information.

Knowing the information, agents can find relevant data and perform the correct task, such as education support [15], booking travel, customer service [16] or recommendations [17]. The agents will not be able to give you education support if it does not know about which education a student has. Booking travel requires a traveler’s name, credit card, and private information.

Recommendations without knowing the costumer’s references would be irrelevant and time- wasting.

There are studies about agents for gathering information but not many of them situated in a survey form. There is no consideration of the quality of information, unlike a survey. The objective of this project might be similar to other research above. But it goes further to the next step and discovers more findings to improve the survey quality.

Improving survey quality

As stated above, the main purpose of this project is to find a method to improve survey quality.

Not only this project; many other studies have put a lot of effort into improving the response rate. In 2006, a study showed that personalization can increase the response rate by 4.4 percentage points, from 50.3 to 54.7 percent [18]. Other research on interactive feedback has likewise shown improvements in web surveys [19]. Another study uses probing to produce better-quality answers from the respondent [20]. Learning from previous studies, the project can apply these techniques to conversational agents to improve survey quality further.


Chapter 3 Method

Fig. 3.1 Project Plan

When the chatbot communicates with the respondents, it will let the server know what the respondent has answered with each interaction. The project wants to collect as many answers as possible, even if the respondent has not finished the whole conversation. The chatbot will send the data to an application that is suitable for handling it. The application will then convert the data into an appropriate format for a system to process. Additionally, a second application is required to take a survey and convert it into an appropriate script to update the chatbot's dialog. As shown in Fig. 3.1, the project requires a chosen platform to create an appropriate chatbot. The platform should be able to connect with different communication channels, e.g. Facebook Messenger, Slack, and Microsoft Teams. Additionally, the chatbot should be programmable, since the dialog must be updated should there be a change in the survey. To evaluate the platforms, the project will set up a simple survey with branches and different kinds of answers from the bot.


Fig. 3.2 A simple survey’s flow

There are many suitable platforms for the chatbot, e.g. Twilio, Azure Bot Service, Dialogflow, and Manychat. Twilio's Autopilot is a conversational Artificial Intelligence (AI) for creating agents and training them with NLU and machine learning¹. Similarly, Azure Bot Service's Virtual Assistant uses a machine learning-based service, the Language Understanding Service (LUIS), to improve its language experience². Azure, Dialogflow, and Twilio can integrate with many different platforms and even support speech. Manychat's flowchart allows users to create more flexible chatbots. These platforms have been used in various research before: Azure to create a chatbot application for Slack [21], Twilio to create a chatbot for healthcare [22], and Dialogflow for placement activity at a college [23].

However, this project will use Narratory. Narratory is a platform that uses Dialogflow as its core and codes the agent in the TypeScript programming language. Twilio cannot create the chat flow effectively, and the collected data is saved directly in JSON, which is impossible to use as context for the next dialogue. Azure requires a lot of programming knowledge to create a survey chatbot; with the limited time and workforce, Azure is excessively complex for this purpose. Manychat creates chatbots fast and easily, but it is not programmable: click and drag is what Manychat is all about. If a survey is changed or updated, the chatbot itself has to be updated manually. Narratory can perform the task the project asks for; it is easier to use and able to integrate with many different channels through Dialogflow.

¹ Twilio Autopilot. Accessed: 2020-05-08. https://www.twilio.com/docs/autopilot
² Azure Bot. Accessed: 2020-05-08. https://azure.microsoft.com/sv-se/services/bot-service/


3.1 A chatbot that can interact with the respondent

Fig. 3.3 Integration between Narratory and Facebook

After picking the platform, the project determines whether the chatbot can comprehend human language. Narratory is an independent platform for building chatbots; it focuses on dialog-system research and takes a dialog script to model conversations³. As mentioned above, Narratory uses Dialogflow as its core, which means Narratory's NLP is entirely dependent on Dialogflow. Dialogflow is a development suite for producing conversational interfaces⁴. It makes the chatbot's conversation more human-like with the help of two concepts: intents and entities. To simplify, an intent refers to a user's intention when the user writes or says something; Dialogflow matches the user's input to the best intent in an agent (an example of creating intents in Narratory is shown in A.4). An entity, to simplify, is a list of data that intents use to determine the user's intention in a specific phrase. If an intent captures users' attempts to order a meal, then the entities should be types of food and drink (see A.3 for creating an entity in Narratory and A.5 for an intent using it). Dialogflow can integrate with many chat channels such as Slack, Messenger, Skype, and Kik, but the project will focus on Messenger, as stated in the limitations. Integrating with Messenger requires a Facebook page and application.
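To make the intent and entity concepts concrete, here is a schematic TypeScript sketch. This is not actual Narratory or Dialogflow syntax (the project's real definitions are in Appendix A.3-A.5), and the toy matcher only illustrates the idea of entity values anchoring an intent:

```typescript
// Schematic sketch of the intent/entity idea. NOT Narratory or
// Dialogflow syntax; the real definitions are in Appendix A.3-A.5.
type Entity = { name: string; values: string[] };
type Intent = { examples: string[]; entities: Entity[] };

const rating: Entity = { name: "rating", values: ["1", "2", "3", "4", "5"] };

const giveRating: Intent = {
  // Example phrasings; "_rating" marks where an entity value appears.
  examples: ["I would say _rating", "_rating I guess", "maybe a _rating"],
  entities: [rating],
};

// Toy matcher: an utterance "matches" if it contains one of the entity's
// values. Real NLU (Dialogflow) is far more sophisticated than this.
function matchRating(utterance: string): string | null {
  for (const v of rating.values) {
    if (utterance.split(/\s+/).includes(v)) return v;
  }
  return null;
}
```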

Facebook pages have settings that allow a Facebook application to communicate with users through Messenger. This application can set a webhook to Dialogflow to capture the answer when Facebook users converse with the Facebook page. A webhook, a.k.a. a web callback, is a way to send data when a certain event triggers, which in this instance is when the user answers the bot. Narratory has a webhook as well, and it can be customized; the customization depends on how the data will be used later, which is explained in section 3.2.

³ Narratory. Accessed: 2020-05-08. https://narratory.io/docs/intro
⁴ Dialogflow. Accessed: 2020-05-08. https://dialogflow.com/docs

Testing: A deployed chatbot requires a certain level of language understanding to converse with the respondent. In order to test its understanding capability, the project performs hypothesis testing. Hypothesis testing is a statistical method used to assess the plausibility of a hypothesis using sample data. There will be two tests: a test of proportion and a test of dependency.

For the first test, at least 100 sample interactions with the chatbot are required. The project sets a simple answer option from 1 to 5 as an entity, together with examples for the intents, so the chatbot can understand to a significant level. The examples are short texts written before or after the answer. The project assumes the respondent writes a short text plus an answer to the chatbot. The answer is guaranteed to be correct (otherwise the bot could never understand), but the short text is unknown. Given the examples in the intents, the chatbot can understand the context about half of the time, which is taken to mean the bot has a 25 percent chance of misunderstanding the respondent's answer. This is the test's null hypothesis. The alternative hypothesis is that the chatbot's misunderstanding rate is below 25 percent. The test uses the most common significance level, five percent; the significance level indicates the risk of rejecting a true null hypothesis. The test function in this case is

U = \frac{\sqrt{n}\,(p - \pi_0)}{\sqrt{\pi_0(1 - \pi_0)}}    (3.1)

where

n = total number of observations,
p = observed proportion among the observations,
\pi_0 = proportion under the null hypothesis.

Using the test function (3.1) it can be checked to what extent the data supports the alternative hypothesis as opposed to the null hypothesis. To this end, the test function is asymptotically standard normally distributed. If the value u of the test function exceeds the standard normal percentile λ_α, it is concluded that the null hypothesis should be rejected at the level α of significance and that the alternative should be accepted as true.
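The test statistic (3.1) is straightforward to compute; the following TypeScript sketch implements it, with the numeric example using the counts later observed in chapter 4:

```typescript
// Test of proportion, equation (3.1):
//   U = sqrt(n) * (p - pi0) / sqrt(pi0 * (1 - pi0))
function proportionTestStatistic(n: number, p: number, pi0: number): number {
  return (Math.sqrt(n) * (p - pi0)) / Math.sqrt(pi0 * (1 - pi0));
}

// 14 misunderstandings out of 100 observations, null proportion 0.25
// (the counts reported in chapter 4):
const u = proportionTestStatistic(100, 0.14, 0.25);
// u is about -2.54, below -1.64 (the lower 5% normal percentile),
// so the null hypothesis would be rejected.
```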

The chatbot’s NLU capability is based mostly on intents, entities. Intents are a must

for the chatbot to function properly but not entities. The project wants to know if entities

affect the chatbot language capability much. The test will include 20 tests without entity

(20)

3.2 Survey/Answer file and Application 11

and 10 tests with it. The null hypothesis: The chatbot’s NLU does not require an entity. The significance level is five percent. The alternative hypothesis: The chatbot’s NLU requires an entity.

The outcomes are cross-tabulated as follows:

                      Include Entity
                      Yes       No
 Understand   Yes     N_{11}    N_{12}    R_1
              No      N_{21}    N_{22}    R_2
                      C_1       C_2       N

The test function in this case is

U = \frac{(N_{11} C_2 - N_{12} C_1)\sqrt{N}}{\sqrt{C_1 C_2 R_1 R_2}}    (3.2)

where

N_{11}, N_{12}, N_{21}, N_{22} = the sample data divided into cases,
C_1 = N_{11} + N_{21},  C_2 = N_{12} + N_{22},
R_1 = N_{11} + N_{12},  R_2 = N_{21} + N_{22},
N = total number of observations.

Using the table and the test function (3.2), the value is calculated and the hypotheses above are verified against the standard normal distribution table. If |u| > λ_{α/2}, it is concluded that the null hypothesis should be rejected at the level α of significance and that the alternative should be accepted as true.
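A sketch of the statistic (3.2) in TypeScript, computing the margins from the four cell counts; the example counts are the ones reported in chapter 4:

```typescript
// 2x2 dependency test, equation (3.2):
//   U = (N11*C2 - N12*C1) * sqrt(N) / sqrt(C1*C2*R1*R2)
// Column sums, row sums, and the total are derived from the cells.
function contingencyTestStatistic(n11: number, n12: number, n21: number, n22: number): number {
  const c1 = n11 + n21;
  const c2 = n12 + n22;
  const r1 = n11 + n12;
  const r2 = n21 + n22;
  const n = n11 + n12 + n21 + n22;
  return ((n11 * c2 - n12 * c1) * Math.sqrt(n)) / Math.sqrt(c1 * c2 * r1 * r2);
}

// Cell counts reported in chapter 4 (understood yes/no, entity yes/no):
const u = contingencyTestStatistic(9, 15, 1, 5);
// u is about 0.97, below the two-sided 5% percentile 1.96,
// so the null hypothesis cannot be rejected.
```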

3.2 Survey/Answer file and Application

Data extraction

The project uses a premade system from a company. For the premade system to use the answers from the respondents, the data first needs to be received through an application, integrated with the system, and converted into a format that is analyzable in the system.

The application needs to be a RESTful API to receive a POST request from Narratory's webhook, and it will be coded in C# because the premade system uses C# as well. Before building the RESTful API, it is necessary to know how many parameters are needed to customize Narratory's webhook. A.2 is an answer file generated by the premade system. The answer file shows that a class of type SurveyResult has four data fields:

• PublicationID: a GUID (globally unique identifier) to identify the survey

• PersonID: a GUID to identify a respondent. Identification ensures the respondent can only answer a survey once.

• Platform: platform of the premade system. Self-generated by the system.

• QuestionResult: a class that includes QuestionID, Timestamp (which is self-generated when creating the object), and an AnswerID.

Following the file format, the REST API needs at least four parameters: PublicationID, PersonID, QuestionID, and an AnswerID. The premade system is built with Microsoft Visual Studio, so the application does the same to avoid conflicts. The system has mainly two class libraries, called Rundlg2 and Rundlg2.Integration. Rundlg2 consists of many classes:
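For illustration, the answer-file structure described above can be sketched as TypeScript interfaces. The real classes are C#; field names and types here are assumptions based on the description:

```typescript
// Sketch of the SurveyResult structure as TypeScript interfaces.
// The real classes are C#; the field types are assumptions.
interface QuestionResult {
  questionId: string; // GUID of the question
  timestamp: string;  // self-generated when the object is created
  answerId: string;   // GUID of the chosen answer option
}

interface SurveyResult {
  publicationId: string; // GUID identifying the survey
  personId: string;      // GUID identifying the respondent
  platform: string;      // self-generated by the premade system
  questionResults: QuestionResult[];
}

// The webhook must therefore carry at least these four parameters:
function toQueryParams(r: SurveyResult, q: QuestionResult): string {
  return new URLSearchParams({
    publicationId: r.publicationId,
    personId: r.personId,
    questionId: q.questionId,
    answerId: q.answerId,
  }).toString();
}

const example: SurveyResult = {
  publicationId: "pub-1",
  personId: "person-1",
  platform: "system",
  questionResults: [{ questionId: "q-1", timestamp: "2020-05-08T12:00:00Z", answerId: "a-1" }],
};
const params = toQueryParams(example, example.questionResults[0]);
```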

• Repositories: classes providing a path to store/get data.

• Domain: classes with a constructor to create a new object with input parameters such as publicationID, personID, questionID...

The Rundlg2.Integration library provides classes that take the Domain classes from Rundlg2, serialize them into XML format, and write the output file to the path provided by the Repositories classes. The application uses these two libraries as references to convert the data from Narratory into the correct format; setting the configuration is necessary for the application to use the class libraries. The project uses the ASP.NET Web Application (.NET Framework) template in Visual Studio to create a Web API application with a controller for the POST method that receives data from Narratory's webhook.

Testing: The project uses a website called Hookbin to create a temporary endpoint to test the webhook and receive the incoming data from the POST request. After receiving the data, it is forwarded to the REST API through Postman, a program that can perform HTTP requests.

Converting a survey into a chatbot dialog

To convert a survey into a chatbot dialog, an understanding of the survey format and Narratory's structure is required. Appendix A.1 is a survey example from the company, which can be read through the premade system using Parser classes. The system loads the survey file into Survey classes, which consist of the following information:

• PublicationID: an ID to identify the survey.


• LanguageID: an ID to identify the language used in the survey.

• Question class: a class consisting of the question ID, type, and answer options.

The Narratory’s chatbot is built on two main files called narrative and nlu. Narrative is a script to program the conversation flow and how the bot reacts to user expression while nlu is a script to create a list of intents and entities. Knowing the structure of the platform and system, the application can be simplified as a flow diagram:

Fig. 3.4 Application’s conversion

Narratory can set a customized webhook, as mentioned above, so it is necessary to have the PublicationID to identify the survey. The Question class requires a LanguageID to get the text for the chatbot dialog. Since there are different types of questions, the application requires a function for each type to generate a correct script.
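A hypothetical sketch of the per-question-type conversion; the output strings imitate the idea of a generated dialog script but are not real Narratory syntax:

```typescript
// Hypothetical per-question-type conversion. The emitted strings imitate
// the idea of a generated dialog script; they are not Narratory syntax.
type Question =
  | { id: string; type: "rating"; text: string }
  | { id: string; type: "freeText"; text: string };

function toDialogScript(q: Question): string {
  switch (q.type) {
    case "rating":
      // Rating questions get an entity holding the answer options.
      return `bot: ${q.text}\nexpect: entity(rating_${q.id})`;
    case "freeText":
      // Free-text questions accept any user utterance.
      return `bot: ${q.text}\nexpect: anyText`;
  }
}

const script = toDialogScript({ id: "q1", type: "rating", text: "How satisfied are you?" });
```

Dispatching on the question type keeps each conversion rule in one place, so a new question type only requires one new branch.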

Testing: The survey example mentioned above is used to test the application. After converting, the bot is updated and tested through Facebook Messenger or any other communication channel.


Chapter 4 Result

Following the steps explained in the method, the project created a Facebook page along with a Facebook app, connected them, and put them up for testing the different chatbot platforms. Narratory is able to connect to Messenger and converse with the respondent according to the test case in Figure 3.2.

Fig. 4.1 Testing different chatbot platforms on Messenger

After successfully integrating the chatbot with Messenger, this research can now conduct experiments and observations to achieve the results shown below.

4.1 A chatbot that can interact with respondents

Can the chatbot understand the respondent’s answer?

In order to check whether it can be proved that the proportion of misunderstandings falls below 25 percent, 100 observations of answers to questions to the chatbot were made. It turned out that 86 of the answers were correctly understood and the rest were not. From this we may calculate the test function value

u = \frac{\sqrt{100}\,(0.14 - 0.25)}{\sqrt{0.25(1 - 0.25)}} = -2.54

Comparison with the normal-percentile table at significance level α = 0.05: since u = −2.54 < −1.64 = −λ_{0.05}, the null hypothesis is rejected and the alternative hypothesis is accepted. Continuing, the p-value is found with the help of the standard normal distribution table (Z-table):

p = Φ(u) = Φ(−2.54) = 1 − Φ(2.54) = 1 − 0.9945 = 0.0055.

The p-value indicates that the lowest significance level at which the null hypothesis would be rejected is around 0.55 percent, which means the risk of rejecting a true null hypothesis is very low. The calculation supports that the chatbot misunderstands respondents less than 25 percent of the time.

The second test verifies whether entities significantly affect the chatbot's NLU capability.

                      Include Entity
                      Yes    No
 Understand   Yes       9    15    24
              No        1     5     6
                       10    20    30

Calculation:

u = \frac{(9 \cdot 20 - 15 \cdot 10)\sqrt{30}}{\sqrt{10 \cdot 20 \cdot 24 \cdot 6}} = 0.97

Comparison with the normal-percentile table: |u| = 0.97 ≯ 1.96 = λ_{0.05/2}, so the null hypothesis cannot be rejected, and thus a possible significant effect of entities on NLU cannot be proved in such a small study as this.

In conclusion, Dialogflow's NLU capability is able to understand human language more than 75 percent of the time with a simple implementation, and it does not necessarily require an entity to understand the respondent.

What will the bot do if it does not understand?

In Dialogflow, each intent match has a score called the confidence score, ranging from 0.0 (completely uncertain) to 1.0 (completely certain). The programmer can set a threshold for the score; if no intent reaches the threshold, the chatbot falls back to a fallback intent or does not respond at all. A fallback intent is a way for a chatbot to express itself when it does not understand the respondent's intention. The default fallback is a question asking the respondent to repeat the answer: "I didn't get that. Can you say it again?".
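The fallback mechanism can be illustrated with a toy TypeScript sketch; the threshold value and intent scores are invented for the example (in Dialogflow the confidence score comes from the NLU engine):

```typescript
// Toy confidence-threshold fallback. The 0.5 threshold and the scores
// are invented; in Dialogflow the score comes from the NLU engine.
const FALLBACK_REPLY = "I didn't get that. Can you say it again?";

type ScoredIntent = { intent: string; reply: string; confidence: number };

function pickReply(scored: ScoredIntent[], threshold = 0.5): string {
  if (scored.length === 0) return FALLBACK_REPLY;
  // Pick the highest-scoring intent; fall back if it is below threshold.
  const best = scored.reduce((a, b) => (b.confidence > a.confidence ? b : a));
  return best.confidence >= threshold ? best.reply : FALLBACK_REPLY;
}
```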

4.2 An application to analyze the respondent’s answer and update the chatbot dialog

Data extraction

Due to the time limit and technical problems, the project could not set up a proper server to get the answers from the respondents. Instead, the project uses a website called Hookbin to create a temporary URL for Narratory's webhook. The extracted data is shown in Appendix A.6; it consists of SessionID, PublicationID, QuestionID, AnswerID, and FacebookID.

Narratory's webhook includes a SessionID by default, but the project does not use it. Narratory has an ID for each user in the different communication channels, in this case a FacebookID. A FacebookID consists of numbers that are not suitable for use as a personID in the system, which led to a new class, fbPersonID, that binds a FacebookID to a personID (Appendix A.10). After building the RESTful API, the project tests it by using Postman to make a POST request with the content from the webhook (A.6). The application finds the survey file according to the PublicationID and tries to validate the QuestionID and AnswerID against the survey. If the validation succeeds, the application generates an output answer file as shown in Appendix A.7. This output file is updated every time the respondent answers a question. For example, the project sends another POST request with the same PublicationID but a different QuestionID and AnswerID (A.8); the output is updated with the new QuestionID and AnswerID if they are valid, as shown in Appendix A.9.

Converting a survey into a chatbot’s dialog

After following the steps explained in the Method chapter, the conversion created the bot dialog, entities, and intents as expected, with no errors. The bot was tested through Facebook and works as follows: it is able to converse with the respondent as long as the respondent's expression matches the intents created from the answer options.

Fig. 4.2 Testing a chatbot on Messenger after converting survey into dialog

The questions contain an extra "<p> </p>" wrapper that was not trimmed during the conversion, and since the test survey is in Swedish, some Swedish characters, such as "å ä ö", could not be converted correctly. These small issues can be fixed, but there are some concerns which will be discussed later on. During the project period, Narratory released a new function called suggestions, which was not mentioned before. This function lists all the answer options as buttons in Messenger that the respondent can click on to answer. The conversion includes this function, as shown:

Fig. 4.3 Chatbot gives suggestion to the respondent


The customized webhook is included in the conversion and has been tested through Postman and the Rest API. The project tried to use the answer as context in a few dialogues (mentioned in test case 3.2), but this did not work out, as shown in Fig. 4.2. Because the answer is matched against intents and entities, the context uses the entity's name instead of the respondent's answer. In this case, the bot's entities are named after answerIDs, so they are not usable as context, as shown in the dialog "79614c46... is it then".

In summary, the project has successfully created applications for extracting and converting data, with a few issues. The applications use the inbuilt class library from a premade system, which means it should also be possible to implement the chatbot in another similar system.
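The conversion described in this section can be sketched as follows. The survey shape and field names below are assumptions for illustration; the real application reads the premade system's data classes and emits a Narratory TypeScript file rather than plain objects:

```typescript
// Hypothetical shape of a survey question after the XML has been read
// into data classes; the real classes come from the premade system.
interface SurveyAnswer { id: string; text: string; }
interface SurveyQuestion { id: string; text: string; answers: SurveyAnswer[]; }

// Convert one question into a dialog turn, Messenger suggestions, and
// one intent per answer option, as plain objects.
function convertQuestion(q: SurveyQuestion) {
  return {
    say: q.text,
    // Suggestions become clickable answer buttons in Messenger.
    suggestions: q.answers.map(a => a.text),
    // Intents are named after answerIDs, which is why the context later
    // shows an ID like "79614c46..." instead of the answer text.
    intents: q.answers.map(a => ({ name: a.id, examples: [a.text] })),
  };
}

const turn = convertQuestion({
  id: "q1",
  text: "Hur nöjd är du med vår service?",
  answers: [{ id: "79614c46", text: "Mycket nöjd" }],
});
console.log(turn.intents[0].name); // "79614c46"
```

Naming the intents after the answerIDs is what makes the extracted webhook data directly validatable against the survey file, at the cost of the context problem shown in Fig. 4.2.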


Chapter 5 Discussion

The result is better than expected. Even though the chatbot has only a simple implementation, it comprehends the respondent's answers in more than 75 percent of cases. Usually, one would expect the respondent to converse with a chatbot using short, simple answers that the chatbot can easily understand, but the observations deliberately add extra text to see whether the chatbot gets confused and misunderstands the respondent. For example, when the chatbot presents the respondent with five answer options to choose from, the respondent would normally write the same text as the suggestion; the 100 observations instead use answers with extra text before and after the expected answer, yet the chatbot is still able to understand them. Dialogflow has proven to be a better chatbot platform than the project anticipated. However, there is a flaw in this hypothesis test: the project assumes the chatbot has a 25 percent chance of misunderstanding the respondent, and this value is justified only by reasoning, not by any real source. This means there are better ways to apply hypothesis testing. Another method is to compare one chatbot to another, which would make it possible to choose the chatbot with the better language understanding; however, the project does not focus only on language understanding but on many other aspects as well, as explained in the Method chapter. On the other hand, the bot is still able to interpret whole sentences even though it lacks entities, which is a surprising factor. As explained in the Method chapter, entities support intents in comprehending human language, so the project expected entities to have a much bigger role in the chatbot's NLU, yet they did not. Perhaps some other mechanism built into Dialogflow lets the chatbot understand more easily, or perhaps more test cases are needed to verify the entities' value. The result might also differ if more people took part in the observations. Additionally, the project focuses mostly on the respondent's answers, because the goal is for the chatbot to work in symbiosis with a survey system. One aspect the project overlooks is the respondent initiating the conversation, that is, the respondent asking the chatbot about something they do not understand. This can be another great idea for future research: anticipating the respondent's intentions. Another way to improve the chatbot is to create fallback intents that tell the chatbot what to do when it misunderstands the respondent, e.g. giving answer suggestions or contacting live support. Furthermore, Narratory has an option to use external intents and entities by submitting a GET request to a REST API. Most chatbots nowadays also implement machine learning to further increase their understanding capability, but that is not the goal of this project.
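The hypothesis test discussed above can be made concrete as a one-sided binomial test. Here n = 100 observations and p = 0.25 follow the assumed setup; the observed count of 15 misunderstandings is a made-up illustration, not a result from the project:

```typescript
// One-sided binomial test for the setup described above.
// H0: the chatbot misunderstands with probability p = 0.25. With n = 100
// observations and k observed misunderstandings, a small lower-tail
// probability P(X <= k) is evidence against H0 in favour of a lower rate.
function binomialCoefficient(n: number, k: number): number {
  let c = 1;
  for (let i = 0; i < k; i++) c = (c * (n - i)) / (i + 1);
  return c;
}

// P(X <= k) for X ~ Binomial(n, p).
function binomialCdf(k: number, n: number, p: number): number {
  let sum = 0;
  for (let i = 0; i <= k; i++) {
    sum += binomialCoefficient(n, i) * Math.pow(p, i) * Math.pow(1 - p, n - i);
  }
  return sum;
}

// Example: if only 15 of 100 answers were misunderstood, the p-value is
// well below 0.05, so H0 (a 25 percent misunderstanding rate) is rejected.
const pValue = binomialCdf(15, 100, 0.25);
console.log(pValue < 0.05); // true
```

The flaw noted above remains visible in the code: the entire test hinges on the assumed p = 0.25, which has no empirical source.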

For testing the data extraction, the project intended to use the Azure REST API, but it ran into technical problems with the settings, which cost much time. In the end, the project used Hookbin as an alternative target for Narratory's webhook. Despite the many problems that occurred, the data extraction was successfully implemented and works as intended. It might still have some small errors somewhere, since we have not created many test cases for different situations, but the main function works fine. The application proved itself by listening on a port on localhost and receiving HTTP requests made through Postman. On the other hand, there are other aspects that could be considered for testing, such as parallel processing, errors, exceptions, security, or safety.

When converting the survey file to a dialog, the project focused on producing all the basic data the bot needs to work: entities, intents, and the chatbot dialog. Using the inbuilt class library, it managed to convert the basic data, but the text does not trim out the last part. This could be solved by adjusting the system, but that is something this project tries to avoid: changing the system can cause severe errors and, in the worst case, make the system go haywire. Another way to solve it is to edit the text in the application, but it is still uncertain whether all survey files have the same conversion errors. The project has so far only tested the conversion on one survey file, due to limitations set by the company. The project therefore keeps the extra "<p> </p>" wrapper without trimming it, because there are still many uncertain elements.
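If the trimming were instead done in the application, a minimal sketch could look like the following, assuming the leftover wrapper is a literal pair of <p> tags (which, as noted, is uncertain since only one survey file has been tested):

```typescript
// Strip the leftover "<p>"/"</p>" wrappers from a converted question text.
// Assumes the wrapper is a literal pair of <p> tags; other survey files
// might contain different conversion artifacts.
function stripParagraphTags(text: string): string {
  return text.replace(/<\/?p>/g, "").trim();
}

console.log(stripParagraphTags("<p>Hur nöjd är du?</p>")); // "Hur nöjd är du?"
```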

As for listing the answer options, Narratory is able to give suggestions that the respondent can choose between in Messenger. Suggestions show the answer options as buttons that the respondent can click to answer quickly, easily, and correctly; this also prevents the respondent from writing a wrong answer. The problem is that suggestions only support single-choice answers: for a multiple-choice question, the suggestions might make the respondent mistake it for a single-choice question. Another way to list the answer options is to use the List class in Narratory. However, Narratory's support mentioned that List has a problem with Messenger, so the project does not use it.

The project was intended as a stand-alone project in the beginning, meaning it would not use any private material, but half-way through, it shifted to using a premade system with inbuilt class libraries. If the project can use these library classes for the ETL process, it should be possible to implement the same approach in any other survey system, as long as the system itself is not changed. During the project, we encountered many problems, e.g. application settings, choosing the proper platform, and ETL processing. Because a premade system was used, the application required a proper configuration.

The project had a problem with the setup because there were no clear instructions for integrating with the premade system. To overcome this, we asked the company for applications/programs that also use the premade system. By studying, researching, and analyzing those programs' configurations, we were able to set up a proper configuration after many tests. Other problems the project encountered were platform selection and the ETL process. There are many great, well-made platforms for creating chatbots, but we had to choose the one most suitable for this project. As explained in the Method chapter, the project created a small conversation flow to try on different platforms.

Because each platform had different settings and programming languages, this test took a lot of time, even though it does not seem like much in the explanation above. At first, the project tried the Twilio platform, but with the ETL process in mind, the project needed to see what the extracted data consisted of. After checking, Twilio's webhook has more than 10 different parameters and a lot of data unnecessary to the project. This would make the ETL process harder to tackle with the limited time and workforce, so the project had to look for another platform with leaner extracted data to ease the ETL process. By contacting the support service of each platform, we gained a better understanding of each and could decide on a platform for the project. The last problem was the research question: due to the time limit, the project could not set up the REST API publicly to test the chatbot with the company's customers. The project wanted to gather data from the respondents and produce statistics on the answers, but this was not possible within the time limit. To answer the research question, the project instead used hypothesis testing, which can be done locally and does not require much time, making it a perfect option for the project. After finishing the project, it has proven possible to implement a chatbot inside an already existing system without causing problems. Additionally, Narratory is still under development, so it looks promising for the future. According to Narratory, probing will be implemented in the future, which might increase the response rate to the chatbot.

Besides Narratory, there are many other promising platforms mentioned above, such as Twilio and Azure; due to their complexity, the project chose Narratory instead, but other platforms might prove more useful and flexible than Narratory. As the project has proved that a chatbot can be implemented in a survey system, continued research combining chatbots and online surveys is not impossible either. Compared to other related work, there are some differences.

There are studies that use a chatbot for different services, as mentioned in Related work, but most of them treat the chatbot as a means to an end: they use a chatbot to achieve a certain result, but few consider using a chatbot to improve the system itself. From the survey's perspective, there are studies that improve the response rate, answer quality, and so forth, yet few consider other ways to develop the survey system further, and most were conducted through a survey or an online survey. This project wants to create a unique approach for the survey system, where a chatbot gathers analyzable answers/feedback from the respondents and, in turn, improves the system. The project has taken the first step in creating a new way to improve the survey system, and the approach should prove useful in other chatbot cases as well.


Chapter 6 Conclusion

To conclude this project, the applications are reviewed against the project's goal. The conversion application uses the class libraries of a premade system to read a structured survey in XML format into the correct data classes. Using these data classes, the application converts the survey into a structured TypeScript file that Narratory can read and use to update the chatbot. The conversion creates a dialog, entities, intents, a customized webhook, and suggestions for the chatbot. The application works better than the goal set by the project: it was only expected to create a scripted dialog and let the user set up the customized webhook themselves. As for the Rest API, it receives data through a POST request from Narratory's webhook, then uses the same class libraries to process the data and transform it into the correct class, which is saved as an answer in a correctly structured XML file. Thanks to Narratory's customized webhook, the data transformation goes smoothly and requires little effort in removing unnecessary information. The application was successfully built and works as the objective intended. To prove this point, the project tested the application and compared the output to the original example file; the result has the same structure as the example and updates the answer as the respondent continues with the survey, as intended. Through this research I have learnt a lot about chatbots, Rest APIs, technical techniques and issues, and different testing methods. There were also unexpected discoveries about the chatbot's natural language understanding: with a simple implementation, the chatbot can understand the respondent in more than 75 percent of cases, as long as the respondent's answer matches the implementation's intents. For future development, the conversion can be improved by using other functions in Narratory, such as fallback intents and user initialization. The Rest API can be set up and used for gathering data to analyze, and for comparing a normal survey with a survey chatbot. At the same time, the project was able to implement these applications in a premade system, which means other studies and research projects can implement a chatbot in their systems as well. To summarize, the project has successfully created a way to use a chatbot in a survey system, and the chatbot can be used for gathering the answers as well. There is still a long way to go, but this project has taken the first step and created the possibility of using chatbots in survey systems. Combining the survey system with artificial intelligence technology looks very promising.


Appendix A

Fig. A.1 Survey format from Quicksearch


Fig. A.2 Answer format from Quicksearch

Fig. A.3 Entity in Narratory

Fig. A.4 Intent in Narratory


Fig. A.5 Intent with Entity

Fig. A.6 Extracted data through Narratory’s webhook


Fig. A.7 Answer file generated from extracted data

Fig. A.8 Second extracted data through Narratory’s webhook


Fig. A.9 Updated answer file

Fig. A.10 FacebookID bind with personID



Kevin Hoang Dinh, student at Halmstad University. I'm a motivated person, always eager to learn and try out new experiences, and not afraid of making mistakes. Nothing is perfect; mistakes will appear along the way, but what you do with them is your decision.
