
Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Computer and Information Science

Master’s thesis, 30 ECTS | Datateknik

2021 | LIU-IDA/LITH-EX-A--2021/029--SE

Developing and Analysing a Web Application made for Teachers to Evaluate Students' Performance

Utveckling och analys av en webbapplikation för examinatorers analys av elevers lärande

Andreas Hultqvist
Tobias Hultqvist

Supervisor: Anders Fröberg
Examiner: Erik Berglund


Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a period of 25 years starting from the date of publication, barring exceptional circumstances.

The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law, the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its home page: http://www.ep.liu.se/.


Abstract

The need to learn programming increases as more jobs require basic programming skills and computer knowledge. Compulsory school is adding programming to the curriculum, which leads to challenges since both teachers and students are new to the subject. Even at the university level, some students come into contact with programming for the first time in their lives.

This thesis aims to develop a web application that can be used by teachers as a reliable and informative tool for evaluating the learning process of their students, by combining data collected through user interactions while solving programming-related puzzles in Python with answers from periodic self-evaluation surveys.

The study shows that the web application can be seen as a valid tool when evaluating the students’ learning process.


Acknowledgments

We would like to thank our examiner Erik Berglund for supporting us through the entire thesis, and give special thanks to Daniel Johnsson for always answering our questions regarding the old version of Coder Quiz. We also want to thank our friends and family for believing in us through all these years, with special thanks to our mom and dad who drove us to ice hockey practice back in the day.


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables
1 Introduction
    1.1 Motivation
    1.2 Aim
    1.3 Research question
    1.4 Delimitations
2 Background
    2.1 Coder Quiz
3 Theory
    3.1 Learning programming
    3.2 The User Experience
    3.3 Web Analytics
    3.4 Self-evaluation
    3.5 Related Work
4 Method
    4.1 Pre-study
    4.2 Implementation
    4.3 Analysis
5 Results
    5.1 Pre-study
    5.2 Implementation
    5.3 Analysis
6 Discussion
    6.1 Results
    6.2 Method
    6.3 The work in a wider context
7 Conclusion
    7.1 Future work
Bibliography
A Interview guide: First interview
B Interview guide: Second interview


List of Figures

2.1 Landing page of previous version
2.2 Quiz overview of previous version with chapter selection
2.3 Missing Word-puzzle of previous version
4.1 Application flow
5.1 Database structure
5.2 Landing page
5.3 Quiz overview with chapter selection
5.4 Chapter theory
5.5 Missing Word-puzzle
5.6 Quiz overview
5.7 Chapter overview
5.8 Chapter evaluation
5.9 Puzzle overview
5.10 Puzzle details
5.11 Puzzle attempts
5.12 Event trace
5.13 Application flow when the user runs code
5.14 Self-evaluation modal
5.15 Completed puzzles per chapter
5.16 Students that completed each chapter
5.17 Overview of Primes puzzle

List of Tables

5.1 Course activities and chapter associations
5.2 Quiz statistics
5.3 Self-evaluation means of each chapter

1 Introduction

In March 2017, the Swedish government revised the curriculum for compulsory and upper secondary school, and programming was added to the syllabus [16]. The purpose is to strengthen students' digital skills and encourage them to solve problems with digital tools. By adding programming to the syllabus, both teachers and students face new challenges in having to learn and teach a subject that they are not familiar with. Fortunately for students, there are many applications available on the internet with programming exercises and tasks to help them get started with programming. On the other hand, it can be difficult for a teacher to keep the education at a consistent level if students in the same study group use different learning applications, and even more difficult to keep track of the students' learning progress.

This thesis will therefore investigate how a web application can be developed to make it easy for teachers to create customized exercises depending on the course content and how teachers can evaluate the students’ learning progress during the course.

1.1 Motivation

Learning computer programming can be a difficult task for students with no prior knowledge [1]. It can be a time-consuming and complicated process to set up a development environment on the user's local machine, and having an application on the web instead makes it simpler for the user to get started, since no downloads or installations are required.

Even when the development environment is all set up, it can be difficult for a beginner to get started with actual coding. It takes time for a beginner to think like a programmer, and it can be a frustrating experience. Studies have shown that beginner programmers often lose motivation at an early stage because the tasks are too difficult [5]. It has also been shown that students need interactive visualizations and code examples when learning programming [10]. In addition to simplifying the process of getting started with learning programming, an application on the web also opens up possibilities regarding key features and benefits of cloud services that can be useful when analyzing the users' progress. This might be used by teachers to analyze and evaluate the students' learning in more depth.

1.2 Aim

The underlying purpose of this thesis is to investigate how a web application can be beneficial for students attending an introductory programming course, both with regard to the intended learning outcomes and to encourage the students to gain programming knowledge. In order to investigate this, a web application will be developed in a way that makes it easy for the teacher to customize exercises that are relevant to the course syllabus. The thesis will also investigate how a teacher might benefit from cloud services to simplify the evaluation of the students' learning.

1.3 Research question

With the underlying purpose described in Section 1.2, this thesis will seek to answer the following research question:

1. How to design the data collection model and statistics visualisation in a web application for programming quizzes with a focus on teachers evaluating the students’ knowledge and ability?

In order to answer the research question above, a web application will be developed based on the needs of a university examiner in an introductory Python course. The system will be tested by the students of the course and evaluated by the examiner to ensure the validity of the solution.

1.4 Delimitations

The application will be tested by first-year students at Linköping University taking an introductory programming course. It is assumed that the students in the target group do not have programming or computer science as the primary subject of their education; they are therefore regarded as beginners in programming, and the programming tasks are designed accordingly.

There are no restrictions limiting use of the application to the test group; it is therefore possible for other users to sign in and complete it, but it is assumed that these outliers will not affect the result in any major way.

2 Background

This section describes the background of the web application and the work done in previous research. The aim is to provide the reader with a clear understanding of how the application was developed before, and of the features that are kept in our research in order to answer the research question.

2.1 Coder Quiz

Coder Quiz is a web application that has been under development for several iterations over the past few years by different developers. The latest iteration was developed in 2020 with the purpose of simplifying the learning process for novice programmers [8]. This was accomplished with code puzzles that did not require the user to write any code; instead, the user had to solve gamified programming problems using only the mouse or keyboard shortcuts. The following sections describe the implementations that will be used in our research.

Structure

The structure of the web application is divided into three levels. When entering the web application, the user arrives at the landing page, which displays all quizzes in a grid, see Figure 2.1.

The user needs to be signed in for further interactions with the application. If the user is signed in and clicks on a quiz, a list of all chapters associated with that quiz is displayed, which can be seen in Figure 2.2. The user can then click on a chapter, and the list is expanded with all its associated puzzles.

Lastly, the user can click on a specific puzzle to enter it, see Figure 2.3. In a puzzle, the user can navigate to the previous and next puzzle or chapter by clicking arrows. The user can also access the chapter theory or the puzzle description by expanding each row. In order to complete a chapter the user needs to finish all its puzzles, and all chapters have to be finished to complete the entire quiz.


Figure 2.1: Landing page of previous version

Figure 2.2: Quiz overview of previous version with chapter selection

Puzzle types

The puzzles are all based on the programming language Python, and their solutions are real code examples which are executed by the application in the user's web browser to validate whether the user successfully completed the puzzle. The puzzle types, with short descriptions, are provided below; a sketch of a possible data model follows the list:

• Indentation - All indentation is removed; the user needs to restore it to complete the puzzle.

• Line Order - The code lines are out of order; the user needs to rearrange the rows to complete the puzzle.


Figure 2.3: Missing Word-puzzle of previous version

• Comment Out - The code has too many lines and some are wrong; the user needs to comment out the incorrect rows to complete the puzzle.

• Missing Words - Some words or phrases in the code are removed; the user needs to figure out the missing words to complete the puzzle.
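The report does not show how puzzles are represented internally; the following is a minimal sketch of one possible model, in TypeScript with hypothetical names, where a puzzle is validated by comparing the output of the user's arrangement against the output of the original example:

```typescript
// Minimal sketch of a puzzle model; all names are hypothetical and not
// taken from the Coder Quiz source code.
type PuzzleType = "indentation" | "line-order" | "comment-out" | "missing-words";

interface Puzzle {
  id: string;
  type: PuzzleType;
  solutionCode: string;   // the real Python example the puzzle is built from
  expectedOutput: string; // the output produced by running solutionCode
}

// A puzzle counts as solved when the user's current arrangement of the code
// produces the same output as the original example.
function isSolved(puzzle: Puzzle, actualOutput: string): boolean {
  return actualOutput.trim() === puzzle.expectedOutput.trim();
}
```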

Quiz editor

A user can have the rights of an editor in the application. If so, a pencil icon is visible in the top right corner of each quiz on the landing page. By pressing it, the user is redirected to an edit page for that quiz, from which the user can maintain the quiz. Chapters can be created, and puzzles can be added to the chapters, by navigating an expandable list. This editor role is application-wide and not specific to a quiz.

3 Theory

3.1 Learning programming

Learning to program is difficult for students with no prior knowledge. Introductory programming courses are commonly seen as difficult by students and have high dropout rates [5]. Various factors can be associated with the difficulties. Beyond programming-specific difficulties, such as loops, logical conditions, debugging and recursion [10], authors state that students struggle with the abstract nature of programming [5]. This includes the lack of correspondence to everyday life, and demands on problem-solving and mathematical skills [5].

In a psychological/educational study of programming, Robins et al. try to identify the differences between novice and expert programmers related to learning and teaching, and outline the process of becoming an expert [17]. Robins et al. also emphasise the importance of individual learning when it comes to novice programmers. The range of backgrounds, abilities and motivation to learn can vary depending on the student group, and a teacher needs to keep this in mind. The conclusion provides the reader with guidance on how to structure an introductory course. The main takeaways are that the course should be simple in the beginning and be expanded systematically as the students gain more experience. It is also essential that students develop basic tracing and debugging skills so they are able to step through and correct their programs. As Lahtinen et al. also claimed, Robins et al. highlight the importance of fully understanding loops, conditionals, arrays and recursion, since these concepts are a common problem for novice programmers [10] [17]. The final takeaway is that novice programmers can be identified as movers, stoppers and tinkerers, and by understanding a student's characteristics, the efficiency of the assistance can be optimized [17].

3.2 The User Experience

User experience (UX) is defined by the International Organization for Standardization (ISO) as a user's "perception and responses resulting from the use and/or anticipated use of a product, system or service" [7], and it is central to any Human-Computer Interaction (HCI). The qualities of the ISO definition are hard to measure, and as such they are hard to use as a basis for requirement gathering. According to a survey done by Law et al. [11], 98% of the respondents considered usability a necessary precondition for a good UX. Usability is easier to measure, and is defined by Nielsen [14] and interpreted by Matera et al. [13] as:

• Learnability: the ease of learning the functionality and the behavior of the system.
• Efficiency: the level of attainable productivity, once the user has learned the system.
• Memorability: the ease of remembering the system functionality, so that the casual user can return to the system after a period of non-use without needing to learn it again.
• Few errors: the capability of the system to feature a low error rate, to support users in making few errors during use, and, in case they do make errors, to help them recover easily.
• User's satisfaction: the measure in which the user finds the system pleasant to use.

Matera et al. [13] further write that the three most common evaluation methods for usability are:

• User testing: the study of real users using the system.
• Usability inspection: the system is tested by specialists.
• Usage analysis: the study of statistics from usage logs.

These methods can be used both to evaluate finished systems and as part of an iterative design and development process.

3.3 Web Analytics

Web analytics is a term that focuses on understanding the online experience and how it can be improved for optimal web usage [4]. It can be used in different ways, such as better understanding the interaction between the visitor and the website, and evaluating the website's performance and the visitor's experience [15]. Companies use web analytics to better understand their visitors by logging details about each visit. Examples of details that are commonly used when analysing a website are:

• Number of unique visitors
• Pages visited
• When/why visitors leave a page
• Buttons clicked

Web analytics can also be used to analyze web server performance in different scenarios and the impact the server can have on the visitor experience. For example, if a web page takes a long time to load, it can have a negative impact on how the visitor experiences the website [15].

3.4 Self-evaluation

Studies have shown that self-evaluation increases learning and engagement, and it is confirmed as a reliable teaching method [19]. Using the self-evaluation instrument in an introductory programming course has been tried by previous researchers, with valuable findings such as a correlation similar to traditional performance measures [18].

A common partition of self-evaluation, made in other research [3, 23, 21], is into the following four motives:


• Self-assessment
• Self-enhancement
• Self-verification
• Self-improvement

Individually, the motives cover specific cognitive areas of self-evaluation, and they are defined in this thesis as follows. Self-assessment is a term that has different meanings in different contexts, but in education it is defined as the reflection of a person's self-knowledge and its accuracy compared to traditional metrics [2]. Self-enhancement, on the other hand, is defined by the performance feedback given to a user after a certain task, to provide the user with some kind of satisfaction and positive feedback when solving a task [9]. Self-verification and self-improvement are, in the context of self-evaluation, the foundation of the motivation to engage in the learning process [3]. By combining new information gained during learning with previous knowledge, self-verification is an important factor when evaluating newly gained knowledge.

3.5 Related Work

In this section, articles regarding the effectiveness of online courses are summarized and the main takeaways highlighted, as they may have an impact on this study.

The Effects of Internet-Based Instruction on Student Learning

Wegner et al. performed a study over a two-semester period where students at master's level were allowed to choose either to attend the course in a traditional classroom or to attend remotely. At the end of the course, both groups had an on-campus exam, and the test scores of the two student groups did not show any significant difference. The course evaluation also showed that the students in the distance course overall had a more positive feeling about the course than the group taking it in class, but felt a lack of guidance through the course. This is a crucial factor when designing an online course, since online courses complicate the natural interaction between teacher and students that is experienced in a traditional classroom. The importance of this kind of interaction needs to be considered when designing an online course [24].

It is important in this project to develop the web application so that users can interact with it independently and do not need to interact with the teacher throughout the course. As the article states, it is common that students feel a lack of guidance, which makes it important to develop the web application with a clear, straightforward path.

Evaluating effectiveness of e-learning

In a similar study, performed by Leung, a comparison was made between two student groups taking a master's degree course in e-commerce. One group took the course online and the other took the course with a traditional classroom approach. Overall, the content of both courses was the same, with the same teaching material; the only difference was that the web-based group learned from online material while the conventional group had face-to-face lectures.

The goal of this study was to find any differences in learning outcome by comparing the study results of the web-based group and the conventional group. The key finding is that there was no significant difference between the study groups after the course [12].

This is beneficial for our work, as it indicates that such applications can be a useful tool for students.


Effectiveness of Reflection on Programming Problem Solving Self-Assessments

In an ongoing study, M. Alzaid and I.H. Hsiao introduce the concept of learning through reflection during an introductory programming course. The students were encouraged to participate in a self-assessment process during the course, which used an online educational system. When a student answered a question correctly, they got the possibility to answer multiple-choice and free-text questions regarding the usefulness of that particular question. The main finding from this study is that the students who participated in the self-assessment process increased their learning and performed better than the other students. It is also notable that the students who gave constructive feedback in the free-text questions increased their engagement even further.

By implementing a tool that enables users to reflect on their own learning process, as was done in the article, the teacher is also provided with information about both the students' performance and the students' own thoughts.

4 Method

4.1 Pre-study

The pre-study consisted of two separate parts: a literature review and a semi-structured interview. Our intention with this approach was to determine the challenges and possibilities of developing an application with our motives. The findings from the literature review, combined with the needs expressed by a university examiner, form the foundation of this report.

Literature Review

The first part of the pre-study is a literature review with the purpose of finding relevant research on how online classes can be designed to achieve the same share of passing students as a traditional class, and which tools teachers use to confirm that the students have reached the intended learning outcomes. The literature review also covers findings in research regarding self-evaluation and how it can be used as a valid method to confirm students' learning.

Semi-structured Interview

To ensure that the application will be found useful by examiners, it is essential to investigate the examiners' needs. When deciding on a data collection method for examiner feedback, surveys are seen as beneficial for investigating the relationship between a predefined group of respondents, examiners in this study, and the survey questions, but the understanding of these relationships is often missed [6].

Conducting a semi-structured interview with an examiner of an introductory programming course in Python was found most suitable for this study, with the aim of getting a deeper understanding of an examiner's needs. To ensure that the interview would cover the topics investigated in the study, an interview guide was constructed [20], see Appendix A.

4.2 Implementation

The development process began with deciding which features from the previous version of Coder Quiz were necessary to retain in this implementation, to avoid reinventing existing functionality.

Back end

The back end needs of Coder Quiz consist of user authentication and a database to store quizzes, user progress and statistics. These were previously implemented in Firebase, a PaaS (Platform as a Service) maintained by Google, which enables rapid development of applications. Most of the back end was to be kept as is, but the database schema needed to be redesigned to decouple quizzes for a more flexible progress and statistics system.

User Interface

Usability of the application was a priority so as not to discourage users, as the acceptability of an application by its users relies on its usability [13]. It was decided to redesign many of the previous features following Material Design, a widely used, adaptable set of guidelines and components developed by Google, to give the user a feeling of recognition and hence improve usability [13][14]. Material-UI, a Material Design component library for React, was used to ease this process.

The flow of the application was kept as is: the user chooses a quiz, then a chapter, and then proceeds to complete puzzles, with the addition of chapter theory before the puzzles; Figure 4.1 displays this flow. The user interface for controlling this flow was redesigned as part of the process of implementing Material Design.

Figure 4.1: Application flow

It was decided to not change the implementation of puzzles, other than the visual design changes that were a part of implementing Material Design.

Creator Interface

In previous versions of Coder Quiz a user could have the role of editor, giving them access to the quiz creation tool and the possibility to edit any quiz in the application. This was to be changed so that only the creator of a quiz could edit it.

The creator of a quiz should also be able to access extra features such as quiz sharing and statistics tools. The interface should be easy to use and present the statistics of the quiz in an easy-to-understand manner. Furthermore, the editing tools should be integrated into the normal user interface in a way that does not require the creator to navigate through a different route, as was required in the previous version of Coder Quiz.

Access to the creator interface was to be handled via the Firebase Authentication and Firestore rules system, eliminating the need to develop any custom server-side functions.
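The report only states that access control is enforced through Firebase Authentication and Firestore rules; as an illustration, a client-side convenience check (Firebase v9 modular SDK; the creatorUid field name is an assumption) for deciding whether to render editing controls could look like this:

```typescript
import { getAuth } from "firebase/auth";
import { doc, getDoc, getFirestore } from "firebase/firestore";

// Client-side convenience check used to decide whether to render editing
// controls; actual enforcement would live in the Firestore security rules.
// The "creatorUid" field name is an assumption, not taken from the thesis.
async function canEditQuiz(quizId: string): Promise<boolean> {
  const user = getAuth().currentUser;
  if (!user) return false;
  const quiz = await getDoc(doc(getFirestore(), "quizzes", quizId));
  return quiz.exists() && quiz.data()?.creatorUid === user.uid;
}
```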

Data Collection

To give creators feedback on how users were performing on their quizzes, the quiz system was designed to automatically and anonymously collect data from users and summarize it for the creator. To get an understanding of what statistics could be interesting for a quiz creator to see, a list of possible measurements was produced during the pre-study interview.

To complement the automatic data collection, it was decided that users should perform a self-evaluation continuously during a quiz, as it provides the creator with valuable feedback regarding the students' learning process. It is important that the evaluation is not too complex and that the user can quickly resume puzzle solving, so as not to lose the user due to a lack of interest. Therefore, the questions should be few and chosen with regard to previous studies, and they should relate to self-assessment, self-verification, self-enhancement and self-improvement.

4.3 Analysis

To validate that the system is usable and shares valuable information with quiz creators, the system was used as an optional training resource in a university-level introductory coding course. The data generated by this test was also used to further understand what information could be gathered and analyzed.

After the course, the examiner of the course was granted access to the statistics of the course material, and another semi-structured interview was conducted to get an understanding of what the examiner found useful or not.

5 Results

5.1 Pre-study

Literature review

The users of the application were to perform a self-evaluation continuously during a quiz. This was chosen to be done after each completed chapter, and with regard to the findings of the literature review, three questions were chosen:

• How difficult was this chapter?
• How well did you perform?
• Did you learn anything?

The full findings of the literature study can be found in the theory chapter of this thesis.

Semi-structured Interview

As a part of the pre-study, a semi-structured interview with a university examiner was conducted. The interview was held online over Microsoft Teams and was divided into two parts. Initially, we presented the previous version of the web application, and the interviewee answered questions regarding the user experience of the application. A simple usability test was also applied, using the thinking-aloud principles [14]. The second part of the interview covered questions on how the application could be further developed to make it usable for examiners of introductory programming courses.

Along with insights on what an examiner might need in a system like Coder Quiz, a list of interesting measurements of student performance was produced:

• Number of attempts on each puzzle
• Successful attempts on each puzzle
• Time to finish each puzzle
• How users complete each puzzle

These were used to form the implementation of the new system.

5.2 Implementation

After deciding which features should be added or re-implemented, the implementation phase of the project started.

Back End

User authentication was kept as it was, with the addition of OAuth sign-in strategies via various social media, as this was considered a step towards using university authentication strategies.

The database was re-implemented using Firestore, a NoSQL database component of the Firebase platform, which stores data as documents organized in collections and subcollections. Previously, each quiz was stored as a single document consisting of chapter and puzzle data, and a document for each user tracked their progress on all quizzes. This was changed to store quizzes, chapters and puzzles separately, with documents mapping progress on each of these to users. This implementation increased the number of documents, and thus reads, in the database, but reduced the size of each one. It also allows quizzes to share common chapters and chapters to share common puzzles while still retaining user progress for each unique document, e.g. the chapter called Introduction to Coder Quiz, which is used in several quizzes to introduce the application to the user. Statistics and progress were implemented as user-specific documents in subcollections of quizzes, chapters and puzzles. See Figure 5.1 for a visualisation of the database structure.

Figure 5.1: Database structure
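To make the decoupled structure concrete, here is a hedged sketch of the document paths implied by Figure 5.1; the exact collection and field names are assumptions, not taken from the source code:

```typescript
// Hypothetical Firestore layout mirroring Figure 5.1; collection and field
// names are assumptions. Chapters and puzzles live in their own top-level
// collections so that several quizzes can share one chapter and several
// chapters can share one puzzle, while per-user progress and statistics sit
// in subcollections of the document they belong to.
const firestoreLayout = {
  quiz: "quizzes/{quizId}",                           // ordered list of chapter ids
  chapter: "chapters/{chapterId}",                    // ordered list of puzzle ids
  puzzle: "puzzles/{puzzleId}",
  quizProgress: "quizzes/{quizId}/progress/{userId}",
  chapterProgress: "chapters/{chapterId}/progress/{userId}",
  puzzleStatistics: "puzzles/{puzzleId}/statistics/{userId}",
};
```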

User Interface

While the application flow was mostly kept as in the previous version, the user interface was redesigned using the Material Design guidelines.

Landing page

The landing page, see Figure 5.2, is a central part of the application, since it is where all further interaction begins. It was designed to make it easy for a new user to understand how to navigate to a quiz. Two progress bars were implemented for each quiz: one linear, which shows the progress as the percentage of chapters completed, and one non-linear, which shows which chapters are completed.

A section of the landing page was dedicated to functions not yet implemented, such as user groups and quiz sharing, to future-proof the design.


Figure 5.2: Landing page

Quiz overview

The quizzes are divided into separate chapters, each intended to cover a specific subtopic, see Figure 5.3. The icon of a chapter changes as it is completed, and a non-linear progress bar was implemented per chapter to provide the user with a sense of progression.

Figure 5.3: Quiz overview with chapter selection

Chapter

As mentioned in the theory chapter, a critical aspect of novice programmers using an online teaching tool is the lack of interaction with teachers. Therefore, when a user starts a chapter from the quiz overview, they are shown a theory page specific to the chapter, see Figure 5.4. The quiz creator is given the opportunity to share text, code examples, images and videos to help users understand the chapter contents.

The user can navigate between the puzzles of the chapter using the pagination provided at the bottom of the page. At any time during a puzzle, the chapter theory is available to the user as a popup modal to help with questions that may arise during the puzzle solving process.

Figure 5.4: Chapter theory

Puzzle

Each puzzle is based on the programming language Python, and the code is executable in the user's browser. This allows the user to test the code of the puzzle by clicking a "Run" button and get a result in real time.
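The report does not name the runtime that executes Python in the browser; purely as an illustration, a Run-button handler built on Pyodide (an assumption, not confirmed by the thesis) could capture the printed output like this:

```typescript
import { loadPyodide } from "pyodide";

// Sketch of a Run-button handler that executes the user's Python code and
// captures everything it prints. Pyodide is an assumption; the thesis does
// not say which in-browser Python runtime Coder Quiz uses.
async function runUserCode(code: string): Promise<string> {
  const pyodide = await loadPyodide();
  let output = "";
  pyodide.setStdout({ batched: (line: string) => { output += line + "\n"; } });
  try {
    await pyodide.runPythonAsync(code);
  } catch (err) {
    output += String(err); // surface the Python traceback to the user
  }
  return output;
}
```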

The previous implementation of puzzles was left as it was, with a few tweaks to the design elements. The focus of the tweaks was mainly to give the user a clear view of the important elements, e.g. the run code button, and to keep the interface similar between different kinds of puzzles.

The expected result of a puzzle was added as its own element to help the user understand what to accomplish, and it is kept in the same place for all puzzles, as it was previously optional to include it as part of the puzzle description. This adds to the learnability and memorability of the application, as the user does not need to look for the information, which is good for usability [13]. An example of the new implementation of a Missing Words puzzle can be seen in Figure 5.5.

Figure 5.5: Missing Word-puzzle

Creator interface

In the previous implementation of Coder Quiz, all users with the application-wide role editor could create new quizzes and edit all quizzes in the system. This was redesigned to use a single application-wide role, renamed creator, which grants the ability to create new quizzes but not to edit other users' quizzes. The creator of a quiz is automatically assigned editing rights to that quiz, and can grant this right to other users.

The create tool is accessed from the landing page, and the editing is done within the quizzes themselves. If an element of a quiz, chapter or puzzle is editable by a user, an edit button is shown next to it, and by clicking it the user can edit it in place in a "what you see is what you get" manner.

Statistics of a quiz are accessed through a button on the quiz overview, shown only to users with elevated access to that quiz. An expandable tree menu enables navigation through chapters and puzzles, allowing the user to see statistics on every level (quiz, chapter and puzzle) of the quiz. The collected data is visualized in different ways to provide the user with clear and relevant information, and to give an easy-to-understand overview of the users' progression in the quiz. The statistics are shown both as aggregated measurements of the group and as individual performances.

The quiz statistics show an overview of the number of users that have completed the whole quiz and the number of chapters completed in total by the users, see Figure 5.6. A radar chart is also displayed to provide an overview of how the users have completed the chapters.

Figure 5.6: Quiz overview

The chapter statistics are divided into two tabs. Similar to the quiz statistics, the creator is provided with an overview of general statistics for the chapter: the number of users that have completed the chapter and the number of puzzles completed in total by the users, re-runs included. The number of self-evaluations submitted for the chapter is also displayed, along with the average answer for each self-evaluation question, see Figure 5.7. A radar chart similar to the one in the quiz statistics is also displayed, showing how the users have completed each puzzle in the chapter.

Figure 5.7: Chapter overview

The evaluation tab shows a grouped bar chart of the number of answers for each option of the corresponding question, see Figure 5.8. The colors of the bars represent each question, and it is possible to hide the answers for a question by pressing the label of that particular question. If any free-text feedback was given in the evaluation, it is listed below the chart.

Figure 5.8: Chapter evaluation

The puzzle statistics are divided into three tabs. The overview displays different statistics regarding the users' attempts to solve the puzzle, such as the number of attempts, users completed, success rate, failed attempts and number of re-runs, see Figure 5.9. The number of code variations which the students tried is also displayed; details about these attempts are described later in this section. The time to solve the puzzle is also displayed, with the fastest time, slowest time and average time for the user group.
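As an illustration of how the overview numbers in Figure 5.9 could be derived from the logged attempts, here is a sketch with a hypothetical Attempt shape (not taken from the source code):

```typescript
// Sketch of aggregating logged attempts into the overview numbers shown in
// Figure 5.9; the Attempt shape is hypothetical.
interface Attempt {
  uid: string;
  success: boolean;
  isRerun: boolean;       // the user had already solved the puzzle before
  timeToSolveSec: number; // derived from the event trace timestamps
  code: string;           // snapshot of the code when Run was pressed
}

function summarize(attempts: Attempt[]) {
  const successes = attempts.filter((a) => a.success);
  const times = successes.map((a) => a.timeToSolveSec);
  const avg = times.reduce((s, t) => s + t, 0) / (times.length || 1);
  return {
    attempts: attempts.length,
    usersCompleted: new Set(successes.map((a) => a.uid)).size,
    failedAttempts: attempts.length - successes.length,
    successRate: attempts.length ? successes.length / attempts.length : 0,
    reRuns: attempts.filter((a) => a.isRerun).length,
    codeVariations: new Set(attempts.map((a) => a.code.trim())).size,
    fastestTimeSec: times.length ? Math.min(...times) : null,
    slowestTimeSec: times.length ? Math.max(...times) : null,
    averageTimeSec: times.length ? avg : null,
  };
}
```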

Figure 5.9: Puzzle overview

The user details tab, which can be seen in Figure 5.10, shows a scatter plot with time in seconds on the x-axis and the number of user interactions on the y-axis. Successful runs are displayed with green points in the plot and failed attempts with red points. It is possible to view only successful or failed attempts by pressing the label of the points to hide.

Figure 5.10: Puzzle details

The attempts tab, see Figure 5.11, shows the correct code snippet for the puzzle, with its output, at the top. Below the correct code snippet, all attempts by the users are displayed with their output. The number of users who tried each attempt is also displayed.

Data collection

For each puzzle, the system needed to collect data on how the user was performing.

An event trace was implemented, logging each user interaction with the system while solving puzzles. This allowed the system to calculate how long a user spent on a puzzle and how they interacted with the system to solve it.

Along with the event trace, the system also logged the response the user was given when running code that the user considered correct. For each attempt, it was also logged whether the user had already solved that particular puzzle in the past. This was used to calculate the number of attempts made prior to solving the puzzle, and the number of successful attempts. See Figure 5.12 for all logs.
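A hedged sketch of what writing one such log entry could look like with the Firebase v9 SDK; the event names and the collection path are hypothetical, chosen only to mirror the entries visible in Figure 5.12:

```typescript
import { addDoc, collection, getFirestore, serverTimestamp } from "firebase/firestore";

// Sketch of the event trace writer; the event names and the collection path
// are hypothetical, not taken from the Coder Quiz source code.
async function logPuzzleEvent(
  puzzleId: string,
  uid: string,
  event: string, // e.g. "row-moved", "run-code", "puzzle-completed"
  payload: Record<string, unknown> = {}
): Promise<void> {
  await addDoc(collection(getFirestore(), "puzzles", puzzleId, "events"), {
    uid,
    event,
    ...payload,
    at: serverTimestamp(), // timestamps let the creator reconstruct timing
  });
}
```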

Figure 5.11: Puzzle attempts

Figure 5.12: Event trace

When the user completes a puzzle, an event is propagated to the current chapter, which stores it as chapter progress. When the last puzzle of a chapter is completed, the chapter is considered completed and a similar progress event is propagated to the current quiz. This gives the user feedback on how they are doing and a feeling of being in control, which adds to the usability of the application [14].

The system also saved the state of the puzzle, i.e. the code the user considered correct, when the user hit the Run button. This was used by the system to show which code the users most often considered correct, giving the quiz creator an understanding of the users' knowledge. The flow when a user runs code can be seen in Figure 5.13.

Figure 5.13: Application flow when the user runs code

When the user completes a chapter, a self-evaluation dialog appears which prompts them to answer the questions presented under the results of the literature review, see Figure 5.14. To make it easier and more efficient for the user, all questions are answered on a scale from 1 to 5, represented as stars in the user interface. An additional free-text field was added as an optional input, in case a user wanted to communicate more specific feedback.

Figure 5.14: Self-evaluation modal

It was not made possible to continue without answering the three self-evaluation questions, as this input was considered important for the study.
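For illustration, one submitted self-evaluation could be stored in a shape like the following; the three questions and the 1-5 star scale come from the text, while the field names are assumptions:

```typescript
// Hypothetical shape of one stored self-evaluation; the three questions and
// the 1-5 star scale are from the text, the field names are assumptions.
interface SelfEvaluation {
  chapterId: string;
  difficulty: 1 | 2 | 3 | 4 | 5;  // "How difficult was this chapter?"
  performance: 1 | 2 | 3 | 4 | 5; // "How well did you perform?"
  learning: 1 | 2 | 3 | 4 | 5;    // "Did you learn anything?"
  feedback?: string;              // the optional free-text field
}
```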

5.3 Analysis

The system was made available to a university-level introductory Python course as an optional training tool over the course of one semester. The quiz presented to the students was specially made to match their course plan, in an attempt to motivate more students to use the system. The first chapter of the quiz was the Introduction to CoderQuiz chapter, and the other nine were batched three-by-three to associate with three course activities, see Table 5.1.

Course activity | Chapters
-               | 1 Introduction to CoderQuiz
1               | 2 Functions; 3 Lists; 4 Strings
2               | 5 Statements; 6 Loops; 7 Strings- and lists
3               | 8 Dictionaries; 9 Nested structures; 10 Recursion

Table 5.1: Course activities and chapter associations

As the quiz was not mandatory for completing the course, a drop in usage over time was expected by the examiner. This prediction proved correct, as shown by Figure 5.15 and Figure 5.16. One should note that there were a couple of weeks between the activities in the original course plan, which might have had an impact on the usage of the quiz.

Figure 5.15: Completed puzzles per chapter (x-axis: chapter; y-axis: number of puzzles)

Some overall statistics are shown in Table 5.2. Some examples of interesting findings in the statistics are shown and discussed under Key Findings later in this section.

When the course was finished, a semi-structured interview was conducted with the examiner of the course, following the interview guide in Appendix B. He was granted access to the creator interface and introduced to the statistics that the system had gathered. Overall, the examiner found the system useful and said it gave him insights into how the students understood the concepts of the course. A full list of answers and insights from the interview is not shown in this report, due to secrecy regarding future product releases.

Figure 5.16: Students that completed each chapter (x-axis: chapter; y-axis: number of students)

Measure | Value
Students registered on the course | approx. 130
Students with progress in the quiz | 75
Number of chapters | 10
Number of puzzles | 34
Number of completed chapters | 80
Number of completed puzzles | 389
Number of evaluations | 73
Self-evaluation question 1 (How difficult was this chapter?) | 2.77
Self-evaluation question 2 (How well did you perform?) | 4.18
Self-evaluation question 3 (Did you learn anything?) | 3.56

Table 5.2: Quiz statistics

Key Findings

As described earlier in the results chapter, it is possible for the quiz creator and selected users to access the statistics of a quiz. In this section, a demonstration is given of how the statistics can be used to learn about the users' progress and learning from the collected data. By observing the answers from the self-evaluation for each chapter, see Table 5.3, Loops was the chapter that the students found most difficult and where they performed most poorly. By observing the chapter statistics overview, it is noticeable that 8 users completed the second-to-last puzzle, but only four completed the last puzzle of Loops.

Chapter | Question 1 | Question 2 | Question 3
Introduction to CoderQuiz | 1.6 | 4.8 | 2.7
Functions | 2.4 | 4.2 | 3.9
Lists | 2.6 | 4.3 | 4.1
Strings | 3.2 | 4.5 | 4.6
Statements | 2.0 | 5.0 | 2.4
Loops | 3.8 | 3.5 | 3.5
Strings- and lists | 2.3 | 4.3 | 3.3
Dictionaries | 3.3 | 3.7 | 3.7
Nested structures | 3.3 | 4.0 | 4.0
Recursion | 3.2 | 3.4 | 3.4
(Question 1: How difficult was this chapter? Question 2: How well did you perform? Question 3: Did you learn anything?)

Table 5.3: Self-evaluation means of each chapter

By observing the puzzle overview of the last puzzle in Loops, called Primes, see Figure 5.17, a lot of interesting information regarding the students' progress is provided. Just by observing the times, it is noticeable that the fastest time is 60 seconds, by far the longest fastest time of any puzzle. The users also submitted 38 different code variations, which is likewise the highest number of all puzzles. The users are clearly struggling with this puzzle. The puzzle is of type Line Order, see Chapter 2 for descriptions of the puzzle types, and it contains 13 rows, which means that there are 13! = 6,227,020,800 possible orderings. Without drawing any major conclusions, the complexity of this puzzle can be seen as too high in comparison with the other puzzles in the chapter.

Figure 5.17: Overview of Primes puzzle

The puzzle Print Red, also in the Loops chapter, shows another interesting finding that might be valuable for a teacher. The puzzle is of type Indentation, which means that the indentation of the code is missing and the user needs to correct it to complete the puzzle. The puzzle contains only 4 rows and would probably be considered easy by an experienced programmer. Still, 12 different code variations were submitted over 31 total attempts, and only 9 of the attempts were submitted with the correct code, see Figure 5.18, with the correct code at the top of Figure 5.18a.

This is a good example of something a teacher might not consider difficult or in need of further explanation, while the students in reality are struggling to understand the concept.

Figure 5.18: Code submitted for the Print Red puzzle, panels (a)-(d), with the correct code at the top of (a)

6 Discussion

6.1 Results

Below are discussions relating to the results of the study, with a focus on problems and realizations made after the study was conducted.

Implementation

During the implementation phase, the focus was on creating a sensible user interface promoting high usability. A few realizations about the choices made are discussed further in this section, as they are interesting for applications of a similar nature.

Back End

When the study began, Coder Quiz was already using Firebase as its back-end solution with Firestore as its database manager. With a few alterations this was kept, due to the intuitive schema design mapping each quiz, chapter and puzzle to its own document.

The use of a document-based database proved useful for the core of the application, which only provides users with data about the quizzes, chapters and puzzles and stores user progress in a simple way. However, when summaries of user progress were to be generated, the system needed to download all documents containing the relevant user progress and then aggregate them in the web browser to calculate statistics. As the number of users grows, this approach is not sustainable and should be replaced with either a relational database that can aggregate the data server-side, or another schema for saving the statistics. A combination of different database management systems might be another solution to investigate, where the core system is implemented in a document-based database and the statistics in a relational one.
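One hedged sketch of the "another schema" alternative hinted at above: keep a pre-aggregated summary document up to date server-side with a Cloud Function, so the statistics page reads a single document instead of downloading every progress document (Cloud Functions for Firebase, 2nd generation; all paths and field names are assumptions):

```typescript
import { onDocumentCreated } from "firebase-functions/v2/firestore";
import { FieldValue, getFirestore } from "firebase-admin/firestore";

// Sketch: maintain a pre-aggregated summary per puzzle server-side, so the
// statistics page reads one document instead of every attempt.
// All paths and field names are assumptions, not from the thesis.
export const onAttemptCreated = onDocumentCreated(
  "puzzles/{puzzleId}/attempts/{attemptId}",
  async (event) => {
    const attempt = event.data?.data();
    if (!attempt) return;
    await getFirestore()
      .doc(`puzzles/${event.params.puzzleId}/aggregates/summary`)
      .set(
        {
          attempts: FieldValue.increment(1),
          successes: FieldValue.increment(attempt.success ? 1 : 0),
        },
        { merge: true }
      );
  }
);
```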

Getting stuck

A noticeable problem with online teaching tools is that when a student gets stuck, it is difficult to get the right help for solving the task. The chapter theory is currently accessible at all times, but might not be enough help for a novice programmer. To improve the support from the system while still keeping the puzzles challenging, hints towards the solution of each puzzle when the user answers incorrectly can be considered, to minimize the number of students dropping off. To get a more classroom-like experience, it could be beneficial to implement a forum for frequently asked questions or to make it possible for users to ask each other for help.

Self-evaluation

It is only possible for the users who complete a chapter to self-evaluate, since the self-evaluation dialog only appears when a chapter is completed. This implies that the result from the self-evaluation does not cover the entire group, only the users who completed the chapter, which decreases its reliability. For example, the question How difficult was this chapter? would most likely be rated as a 5 by those who did not manage to complete the chapter. The same logic applies to the question How well did you perform?, which would probably be rated as a 1 by unsuccessful users. Implementing a way to catch the users who did not finish a chapter would provide more reliable statistics for the self-evaluation.

It is also noticeable that, out of 73 self-evaluations in total, the optional feedback field was answered only once.

Data collection

The event-based data collection implemented in this thesis provides in most cases a reliable picture of students' performance, but it assumes that the student is actively solving the puzzle. All interactions are logged with timestamps from the beginning of each quiz, but the time does not pause if the user is not actively solving puzzles. This means that the time between interactions increases even when the user might be on a break, or browsing in another tab.

There are a few technical possibilities to help mitigate this issue. One is to add an event listener which pauses the time when the window is not active, but with this implementation it would be possible to solve the puzzle in a different tab while the timer is paused. Another is to track cursor movement in the application, but this does not ensure that the user isn't simply thinking without moving the mouse.
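A sketch of the first option, counting only the time the tab is visible via the browser's Page Visibility API (illustrative only; as noted above, it cannot detect a user solving the puzzle in another tab):

```typescript
// Sketch of the first option: count only the time the tab is visible, using
// the browser's Page Visibility API. As noted above, this alone cannot
// detect a user solving the puzzle in another tab.
class ActiveTimer {
  private activeMs = 0;
  private startedAt: number | null = Date.now();

  constructor() {
    document.addEventListener("visibilitychange", () => {
      if (document.hidden) {
        this.pause();
      } else {
        this.startedAt = Date.now(); // resume when the tab becomes visible
      }
    });
  }

  private pause(): void {
    if (this.startedAt !== null) {
      this.activeMs += Date.now() - this.startedAt;
      this.startedAt = null;
    }
  }

  // Total seconds the user has had the puzzle visible.
  elapsedSeconds(): number {
    const running = this.startedAt !== null ? Date.now() - this.startedAt : 0;
    return (this.activeMs + running) / 1000;
  }
}
```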

A possibly effective solution is to make users aware of the puzzle-solving time, which they currently are not, and to incentivize them not to let their time run too long. For instance, awarding points and/or keeping a high score for each puzzle could make users more eager not to waste time, while also encouraging them to solve the same puzzles again. Gamification of this kind could prove very efficient, but could also discourage users with less competitive personalities.

Analysis

Having the system as an optional training resource led to a falloff of users over time, as briefly discussed in the results. One could argue about whether the users that completed all or most of the puzzles were users that found it easy and completed it for their own satisfaction, users that actually needed the extra training, or a combination of both. Looking at the self-evaluation means in Table 5.3, users did not find any chapter more difficult than an average of 3.8, implying that the users weren't novices, that the chapters were too easy, or that the users got better as the chapters got harder. The answers to the third question, Did you learn anything?, imply that users actually got better, as the means are quite high.

The statistics gathered for a puzzle should be used with caution when evaluating students' learning and progress. As shown with the two example puzzles under Key Findings, puzzles might be difficult to solve either because of the puzzle itself or because of the coding concept behind the puzzle. Primes is probably too difficult because of the puzzle type, having too big a solution space, while Print Red is probably too difficult because of the students' understanding of the code concept. When evaluating students' ability, the latter of the two puzzles is probably the better one to analyze.


The users are obviously facing difficulties with the indentation of conditionals in loops, as pointed out in Figure 5.18. It can be valuable for a teacher to receive this kind of feedback in order to be aware of the students' difficulties and adjust the education to their knowledge. It is also no surprise that Loops is the most difficult chapter according to the self-evaluation, since the literature review, see Chapter 3, found that loops are one of the code structures that novice programmers have most difficulty understanding. The literature review also found that recursion, arrays and conditionals are difficult for novice programmers to understand, and the results presented in Table 5.3 confirm these claims, except for conditionals/statements. Although this chapter has the lowest rating for difficulty, everybody rated their performance at the maximum, and the chapter rated lowest on learning of all chapters in the quiz. An implication is that the puzzles in this chapter were too easy and not challenging enough.

6.2 Method

Pre-study

The literature review was mostly based on peer-reviewed scientific publications in the main area of computer science education.

The semi-structured interview promoted a wider understanding of what an examiner really desires. However, with just a single interview, the information gathered might only suit that examiner's way of educating and examining. By conducting several more interviews, the findings would be more reliable. As mentioned in Section 4.1, conducting a semi-structured interview was found more suitable than data collection through a survey. However, one does not have to exclude the other, and using both opens up the possibility of providing a broader picture of the studied unit by combining qualitative and quantitative research methods [22].

Analysis

By having the system as an optional training resource for an introductory programming course, valuable data about student behaviour when using the web application was collected. This data was the foundation of the analysis and validation of the system, and led to some conclusions about the usefulness of the application.

To analyse the system on a deeper level, more data is needed and the falloff needs to be reduced. One way to do this is to make Coder Quiz a mandatory activity in the course. This puts a lot of other requirements on the system, as the student has to be provided with the right course material, theory and support. The teacher would also need a way to validate which students completed the quiz, which would require personal data to be handled differently.

6.3 The work in a wider context

Since the application is built on data collected from user interactions, the necessity of tracking user data is inevitable. Tracking user data is without doubt an ethical matter, and the raw data collected is used only for the purposes of this thesis. Some data is, however, extracted to provide the statistics for the quiz creator, but it cannot be traced back to a specific user.

To ensure that the users of Coder Quiz give their consent, it was mandatory to accept a privacy notice before signing in. It covered both academic use and anonymity, as well as the privacy policy regarding the General Data Protection Regulation. The group of students in the introductory course also received more detailed information about the purpose of this study, and that it was optional and in no way examining.

7 Conclusion

The main aim of this study was to develop a web application that helps a teacher evaluate students' learning through puzzle solving and the students' self-evaluation process. Students' progress was collected through user interactions when solving puzzles in the application, and combined with their own self-evaluations it is possible to answer the thesis research question:

How to design the data collection model and statistics visualisation in a web application for programming quizzes with a focus on teachers evaluating the students’ knowledge and ability?

With the re-implementation of the back end, a more flexible progress and statistics extraction simplified the data visualisation for the quiz statistics. The decoupling of quizzes made the response time shorter when requesting quiz data, since only the necessary documents are loaded. Since the implementation collects both user interactions and user feedback in the form of self-evaluations, this data triangulation provides the teacher with useful data for evaluating students' knowledge and ability. Even though the collected data is anonymous and grouped before being visualized on the statistics page, it was still found useful for the teacher when evaluating the study group. The visual overviews provide the teacher with valuable information on the students' progress through the quiz, it is simple to determine whether the students have common difficulties with different subjects, and the teacher can consequently adapt the education using the knowledge gained from the application.


7.1 Future work

User Community

In the development process of the web application, architectural design decisions were made carefully to provide the next development iteration of Coder Quiz with a structure that enables classic social community functions such as user groups, invitations, and user creation and sharing of quizzes. A study of the effects of such a coding quiz community on student performance would be of high interest, as it may engage students in ways that are not possible with conventional education.
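A minimal data model sketch of what such community functions could look like; all type and field names here are hypothetical and not part of the current Coder Quiz schema.

// Hypothetical community data model for a future iteration.
interface UserGroup {
  id: string;
  name: string;
  memberIds: string[];
}

interface Invitation {
  groupId: string;
  invitedUserId: string;
  accepted: boolean;
}

interface SharedQuiz {
  quizId: string;
  ownerId: string;              // the user who created the quiz
  sharedWithGroupIds: string[]; // groups allowed to take the quiz
}

// Accepting an invitation adds the invited user to the group.
function acceptInvitation(group: UserGroup, invitation: Invitation): void {
  if (invitation.groupId !== group.id || invitation.accepted) return;
  invitation.accepted = true;
  if (!group.memberIds.includes(invitation.invitedUserId)) {
    group.memberIds.push(invitation.invitedUserId);
  }
}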

Puzzle Types

A study could be conducted on what types of puzzles are most beneficial for students' learning. If a puzzle is too difficult due to the nature of the puzzle type rather than the code itself, it may not be a good type of puzzle. Conversely, a puzzle type might be too easy to brute-force, not requiring the user to read the code, and therefore not teaching anything.

Gamification

To interest and incentivise users to keep solving puzzles, more gamification could be added to the system. This could be high score lists, personal ratings, rewards or other game elements that give the user a feeling of success and progression.
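As one example of such a game element, a high score list could be computed from the interaction data already collected; the UserScore shape below is a hypothetical sketch, not part of the current implementation.

// Hypothetical sketch: rank users by solved puzzles, breaking ties
// by fewer total tries, and keep the top N entries.
interface UserScore { userName: string; solvedPuzzles: number; totalTries: number; }

function highScoreList(scores: UserScore[], topN: number = 10): UserScore[] {
  return [...scores]
    .sort((a, b) => b.solvedPuzzles - a.solvedPuzzles || a.totalTries - b.totalTries)
    .slice(0, topN);
}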

Quiz Progression

Currently, all chapters of a quiz are available to a user from the start. This might not be desirable, as students may skip chapters they believe to be boring or easy but in reality miss important information. A system where users unlock chapters as they complete previous ones might be interesting, as it both stops the user from skipping a chapter and acts as a gamification reward for completing a chapter.
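A sketch of how such sequential unlocking could be checked; the isChapterUnlocked helper below is hypothetical rather than anything in the current code base.

// Hypothetical sketch: a chapter is available only when all previous
// chapters in the quiz have been completed by the user.
function isChapterUnlocked(
  chapterIds: string[],   // chapters in their intended order
  completed: Set<string>, // ids of chapters this user has completed
  chapterId: string
): boolean {
  const index = chapterIds.indexOf(chapterId);
  if (index === -1) return false; // unknown chapter
  // The first chapter is always open; later ones require all predecessors.
  return chapterIds.slice(0, index).every(id => completed.has(id));
}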


Bibliography

[1] Yorah Bosse and Marco Aurélio Gerosa. “Why is programming so difficult to learn? Patterns of Difficulties Related to Programming Learning Mid-Stage”. In: ACM SIGSOFT Software Engineering Notes (Jan. 5, 2017), pp. 1–6.

[2] David Boud. Enhancing learning through self assessment. Kogan Page, 1995. ISBN: 0749413689. URL: https://login.e.bibl.liu.se/login?url=https://search.ebscohost.com/login.aspx?direct=true&AuthType=ip,uid&db=cat00115a&AN=lkp.310849&lang=sv&site=eds-live&scope=site.

[3] Constantine Sedikides. “Assessment, enhancement, and verification determinants of the self-evaluation process.” In: 65.2 (1993), pp. 317–338.

[4] WAA Standards Committee. Analytics Definitions. Washington DC: Web Analytics Association (2008). http://94.126.173.33/ad2006/adminsc1/app/marketingtecnologico/uploads/Manuais/waa-standards-analytics-definitions-volume-i-20070816.pdf. (Accessed on 12/02/2020).

[5] Anabela Jesus Gomes, Alvaro Nuno Santos, and António José Mendes. “A study on students’ behaviours and attitudes towards learning to program”. In: Proceedings of the 17th ACM annual conference on Innovation and technology in computer science education. Association for Computing Machinery, July 2012, pp. 132–137.

[6] Kim Goodwin. Designing for the digital age: how to create human-centered products and services. Wiley Pub., 2009. ISBN: 9780470229101.

[7] ISO. ISO 9241-210:2010. Ergonomics of human-system interaction. Human-centred design for interactive systems. Tech. rep. 2010, p. 32.

[8] Daniel Johnsson. “Creating and Evaluating a Useful Web Application for Introduction to Programming; Utveckling och utvärdering av en användbar webbapplikation för introduktion till programmering.” In: (2020).

[9] Alexander H. Jordan and Pino G. Audia. “Self-enhancement and learning from performance feedback.” In: 37.2 (2012), pp. 211–231.

[10] Essi Lahtinen, Kirsti Ala-Mutka, and Hannu-Matti Järvinen. “A study of the difficulties of novice programmers”. In: Proceedings of the 10th annual SIGCSE conference on Innovation and technology in computer science education. Association for Computing Machinery, June 2005, pp. 14–18.


[11] Effie Lai-Chong Law, Virpi Roto, Marc Hassenzahl, Arnold P.O.S. Vermeeren, and Joke Kort. “Understanding, scoping and defining user experience”. In: Proceedings of the 27th international conference on Human factors in computing systems - CHI 09 (2009), p. 719. ISBN: 1605582468. DOI: 10.1145/1518701.1518813. URL: http://dl.acm.org/citation.cfm?doid=1518701.1518813.

[12] Hareton K. N. Leung. “Evaluating the Effectiveness of e-Learning”. In: Computer Science Education 13.2 (June 1, 2003), pp. 123–136.

[13] Maristella Matera, Francesca Rizzo, and Giovanni Toffetti Carughi. “Web usability: principles and evaluation methods”. In: (2006).

[14] Jakob Nielsen. Usability engineering. 1994.

[15] Eric T. Peterson and Joseph Carrabis. Web Analytics Demystified: A Marketer’s Guide to Understanding how Your Web Site Affects Your Business. Celilo Group Media, 2004, pp. 5–7.

[16] Regeringskansliet. Stärkt digital kompetens i skolans styrdokument. Accessed: 2020-12-02. Mar. 9, 2017. URL: https://www.regeringen.se/493c41/contentassets/acd9a3987a8e4619bd6ed95c26ada236/informationsmaterial-starkt-digital-kompetens-i-skolans-styrdokument.pdf.

[17] Anthony Robins, Janet Rountree, and Nathan Rountree. “Learning and Teaching Programming: A Review and Discussion”. In: Computer Science Education 13.2 (2003), pp. 137–172.

[18] Rodrigo Duran, Jan-Mikael Rybicki, Juha Sorva, and Arto Hellas. “Exploring the Value of Student Self-Evaluation in Introductory Programming.” In: (2019), pp. 121–130.

[19] John A. Ross. “The Reliability, Validity, and Utility of Self-Assessment”. In: 11.10 (2006), p. 14.

[20] Per Runeson and Martin Höst. “Guidelines for conducting and reporting case study research in software engineering”. In: 14.2 (Dec. 19, 2008), p. 131.

[21] Constantine Sedikides and Michael J. Strube. “Self evaluation: To thine own self be good, to thine own self be sure, to thine own self be true, and to thine own self be better.” In: Advances in experimental social psychology, Vol. 29. San Diego, CA, US: Academic Press, pp. 209–269.

[22] Robert E. Stake. The Art of Case Study Research. SAGE, Apr. 5, 1995.

[23] Shelley E. Taylor, Efrat Neter, and Heidi A. Wayment. “Self-Evaluation Processes”. In: 21.12 (1995), pp. 1278–1287.

[24] Scott B. Wegner, Ken C. Holloway, and Edwin M. Garton. “The Effects of Internet-Based Instruction on Student Learning”. In: Online Learning 2 (Mar. 19, 1999).


A Interview guide: First interview

General formalities

• Presentation of the thesis.
– Purpose
– Objectives

• Explain the structure of the interview (semi-structured).
• Explain the value of this interview for the thesis outline.

Part 1

• Visit the website www.coderquiz.com.
• Describe your first thoughts.

• Try to sign in and describe your experience.

• What are your thoughts regarding the user interface?

• Enter a quiz and then try to enter a puzzle. Explain your thoughts about the application flow.

• Try to solve a puzzle and explain any difficulties.
• How was the overall experience?

Part 2

• Describe your role as an examiner in a programming course.

• What are the most common mistakes that novice programmers make?

• What would be the most important requirements for you as an examiner if an application like this was used during a course?


• What do you think are the major aspects to consider when developing a web application like this one?

• What kind of statistics would be interesting for you to view about your students?
• Do you think that your students would find this application useful?


B Interview guide: Second interview

General formalities

• Explain the structure of the interview (semi-structured).
• Explain the value of this interview for the thesis outline.

Interview

• Visit the website www.coderquiz.com.
• Describe your first thoughts.

• Try to sign in and describe your experience.

• What are your thoughts regarding the user interface?

• Enter a quiz and then try to enter a puzzle. Explain your thoughts about the application flow.

• Enter the statistics page. What are your main thoughts?

• Do you feel that the provided information is valuable for you as an examiner?
• What could be added/removed?

• Do you think that this could be useful when evaluating students’ learning?

• What are your thoughts regarding the user interface compared to the previous version?
• How was the overall experience?
