
Figure 22: The results of the Statement 11 evaluation

were clickable. This applied to the page number range, the progress point text, and the page icon; only the title text of each chapter and subchapter was clickable.

These elements can be viewed in Figure 4. Furthermore, it was noted that no hover highlight appeared when hovering over a subchapter in the sidemenu; only the cursor styling changed to indicate that something was clickable. Since there was already visual hinting when hovering over the three available pages in the navigation bar, its absence was pointed out as something missing in the sidebar. The evaluators judged that these issues went against the Provide Shortcuts and Be Consistent heuristics. The suggested solution was to add a change in color (a highlight) to every element relating to the same subchapter when hovering over it in the chapter sidebar, as well as to make the icons and the rest of the text clickable.
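As an illustration of the suggested solution, the sketch below shows one way the hover highlight and full-row clickability could be wired up on the client side. It assumes the sidemenu renders one container element per subchapter with a “sidebar-subchapter” class and a “data-target-page” attribute pointing to the subchapter’s page; all names are hypothetical and not taken from the actual OpenCourse implementation.

```typescript
// Hypothetical sketch: highlight the whole subchapter row on hover and make
// every element in it (icon, title, page range, progress text) clickable.
// The class names and data attribute are illustrative only.
document.querySelectorAll<HTMLElement>(".sidebar-subchapter").forEach((row) => {
  // Mirror the hover hint already used for the pages in the navigation bar.
  row.addEventListener("mouseenter", () => row.classList.add("sidebar-hover"));
  row.addEventListener("mouseleave", () => row.classList.remove("sidebar-hover"));

  // Navigate when any part of the row is clicked, not only the title text.
  row.addEventListener("click", () => {
    const target = row.dataset.targetPage; // hypothetical data attribute
    if (target) {
      window.location.href = target;
    }
  });
});
```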

The tasks page, like the previous page, contains a button inconsistency, as the “Back” button that appears when entering a task does not share the appearance of the rest of the application’s buttons. The button is also grey, which makes it blend into the background of the page. This breaks the Provide Clearly Marked Exits and Be Consistent heuristics. When inside one of these tasks, the user is able to mark answers before submitting the questions. The user can still navigate outside the page, for example to read the theory page, but when the user returns to the task the marked answers are removed without warning. This means that the user needs to remember to mark the answers again before submitting the task, or risk submitting it without answers and losing points. This violates the Minimize the User’s Memory Load and Provide Feedback heuristics. Another minor flaw is the lack of a “Filter” label next to the filter buttons, which may cause the user to miss the filter function. This violates the Provide Feedback heuristic.
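One possible way of addressing the lost answers would be to keep a draft of the marked answers in the browser’s sessionStorage and restore it when the user returns to the task. The sketch below assumes each task has a stable identifier and that the selected answers can be serialized as a simple object; the function and key names are illustrative and not part of the existing OpenCourse code.

```typescript
// Hypothetical sketch: persist marked answers across in-app navigation so the
// user does not have to remember to re-mark them before submitting.
function saveDraftAnswers(taskId: string, answers: Record<string, string>): void {
  // sessionStorage survives navigation within the tab but is cleared on close.
  sessionStorage.setItem(`draft-answers-${taskId}`, JSON.stringify(answers));
}

function loadDraftAnswers(taskId: string): Record<string, string> | null {
  const raw = sessionStorage.getItem(`draft-answers-${taskId}`);
  return raw ? (JSON.parse(raw) as Record<string, string>) : null;
}

function clearDraftAnswers(taskId: string): void {
  // Called once the task has been submitted, so stale drafts do not linger.
  sessionStorage.removeItem(`draft-answers-${taskId}`);
}
```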

Next, the score page shows all the completed and uncompleted tasks, with the same images as the tasks page. This may make the user believe that clicking them would take them to the corresponding tasks, but it does not. Adding this feature could be a good way of adding more shortcuts to the application, which would follow the Provide Shortcuts heuristic. This page also contains the leaderboard element, the mechanics of which are not clearly explained. The user is able to retry questions, and the score on the leaderboard only updates if they beat their previous score on that question, but neither of these functions is explained, which goes against the Provide Feedback heuristic. Adding a small explanation window, shown either before or after registering, would work towards solving this problem. Finally, an area marked “Awards” looks like the other buttons in the application, but it is not one and can therefore confuse the user. This violates the Be Consistent heuristic, and can be fixed by changing the design of this area away from the typical button design.
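A small sketch of the suggested shortcut is given below: each task image on the score page is made to navigate to its task when clicked. It assumes the images carry a hypothetical “data-task-id” attribute and that tasks are reachable under a “/tasks/<id>” route; neither detail comes from the actual implementation.

```typescript
// Hypothetical sketch: turn the task images on the score page into shortcuts
// that open the corresponding task, following the Provide Shortcuts heuristic.
document.querySelectorAll<HTMLElement>(".score-task-image").forEach((img) => {
  img.style.cursor = "pointer"; // hint that the image is clickable
  img.addEventListener("click", () => {
    const taskId = img.dataset.taskId; // hypothetical data attribute
    if (taskId) {
      window.location.href = `/tasks/${taskId}`;
    }
  });
});
```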

7 Discussion

Chapter 7 covers the discussion about the results of the project, including the concept evaluation, usability aspects, limitations, and potential continuations of the concept.

7.1 Proof-of-Concept and Usability Evaluation

The main purpose of the project was to assess the practicality of the proposed concept, OpenCourse: can students in computer science use this application to interface with course content in an engaging and motivating way? Finding a method and a metric that accurately measure the dependent variables, engagement and motivation, proved to be the biggest challenge of the experiment. Although the sample size of the questionnaire was quite small, at only 11 respondents, some relevant information can still be extracted from the trends in the results. However, it must be noted that the results themselves might be misleading because of this and other sources of error. If the experiments had been performed on a larger scale, a more accurate and reliable result could have been achieved. This could also yield higher external validity, because the results could more easily be generalized to larger and different types of populations. Furthermore, the results could have been drastically different if the participants had been exposed to OpenCourse for longer, perhaps by using it alongside an entire university course. This would also enable a deeper investigation into the overall effects of gamification on students, for example the effect on exam scores.

The results of the experiments show a slight trend that the participants prefer using the OpenCourse application over the regular pen-and-paper method. For example, Statement 1 in Table 1 refers to the participants’ perceived engagement when using the two methods, and approximately 72 percent answered that they agreed with the statement when using OpenCourse. In contrast, the same answer alternatives for the pen-and-paper method resulted in approximately 36 percent agreeing, with about 9 percent answering “Strongly Disagree”. This trend holds for three out of the four comparative statements, possibly indicating that the application format (the independent variable) has a relationship with motivation and engagement (the dependent variables), and that the relationship implies an increase in these variables. The other two statements that showed this trend were Statements 2 and 4, relating to how fun the methods were perceived to be and to the perceived motivation to learn while performing the tasks, respectively. Because of this, there is a strong indication that the alternative hypothesis is true. As mentioned, however, several flaws in the approach used to evaluate the concept might have influenced these answers. On the other hand, Statement 3, relating to each method’s ease of use and practicality, showed a slight trend that the pen-and-paper method was more practical and easier to use.

A possible point of failure in the method of performing the examinations was that the surveyee was observed while filling out their answers. The reason for this was that one of the project members provided introductory guidance to the participants and also handled the switching between the application and the survey. However, the surveyee should not have been observed when eventually answering. Observing the participants could affect their answers, which relates back to the Hawthorne effect, and possibly skew the results, perhaps in favor of the application. Although this is true for observing the participants while they filled out the survey, it would also be true for observing them at any other point during the experiment; observing their progress while answering the task questions could trigger the Hawthorne effect as well. Instead, the experiments should have been streamlined and used very precise instructions, so that no external help was needed to complete the whole experiment. This would leave the experiment taker fully free from the influence of others and possibly negate the Hawthorne effect. It should be noted that if the experiment had been performed in this manner, one could argue that the external validity of the experiments would improve.

Another fault in the experiment process could be the choice of reward that the participants received for participating. Upon finishing the experiment, they were rewarded with a specific brand of candy bar, and there were no alternative rewards. It is therefore possible that this method only attracted those who liked that specific type of candy bar, while repelling those who did not, which introduces a selection bias into the experiment method. If the experiment had offered a wider range of rewards, more people might have been willing to participate.

Statement 7 of Table 2 asks whether the participants believe that they would learn efficiently with the pen-and-paper method, while Statement 8 asks them to evaluate OpenCourse in the same way. As seen in Figure 18, over half of the participants, approximately 55 percent, were neutral to the first statement, while about 18 percent agreed and about 9 percent strongly agreed. Meanwhile, approximately 18 percent of the participants were neutral to the second statement, while about 73 percent agreed. The disparity between these answers can be explained by the differences between the two methods, as well as by maturation. In this case, maturation means that students have likely often used a studying method similar to the pen-and-paper method. Because of this, that method may be viewed as more of a standard, which might explain the high proportion of neutral answers to Statement 7. The relatively high proportion of agreeing answers to Statement 8 may therefore partially be due to how different OpenCourse is from the pen-and-paper method, as the application may be seen as a more interesting way of learning because of its digital nature. An alternative would have been to compare two similar applications, one containing gamification elements and one without. This would let the method evaluate only the gamification elements, without the impact of other factors such as maturation.
