
Debugging in a World Full of Bugs: Designing an educational game to teach debugging and error detection with the help of a teachable agent


Academic year: 2021


Linköping University | Department of Computer and Information Science

Master’s thesis, 30 ECTS | Cognitive Science

2020 | LIU-IDA/KOGVET-A--20/017--SE

Debugging in a World Full of Bugs

Designing an educational game to teach debugging and error

detection with the help of a teachable agent

Hur man designar ett digitalt spel för att introducera felsökning

med hjälp av en digital lärkompis

Isabella Koniakowski

Supervisor: Agneta Gulz
Examiner: Arne Jönsson


Upphovsrätt

Detta dokument hålls tillgängligt på Internet - eller dess framtida ersättare - under 25 år från publiceringsdatum under förutsättning att inga extraordinära omständigheter uppstår.

Tillgång till dokumentet innebär tillstånd för var och en att läsa, ladda ner, skriva ut enstaka kopior för enskilt bruk och att använda det oförändrat för ickekommersiell forskning och för undervisning. Överföring av upphovsrätten vid en senare tidpunkt kan inte upphäva detta tillstånd. All annan användning av dokumentet kräver upphovsmannens medgivande. För att garantera äktheten, säkerheten och tillgängligheten finns lösningar av teknisk och administrativ art.

Upphovsmannens ideella rätt innefattar rätt att bli nämnd som upphovsman i den omfattning som god sed kräver vid användning av dokumentet på ovan beskrivna sätt samt skydd mot att dokumentet ändras eller presenteras i sådan form eller i sådant sammanhang som är kränkande för upphovsmannens litterära eller konstnärliga anseende eller egenart.

För ytterligare information om Linköping University Electronic Press se förlagets hemsida http://www.ep.liu.se/.

Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a period of 25 years starting from the date of publication barring exceptional circumstances.

The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.


Abstract

This study used the Magical Garden software and earlier research into computational thinking as a point of departure to explore what metaphors could be used, and how a teachable agent could be utilised, to introduce debugging and error detection to preschool children between four and six years old. A research through design methodology allowed the researcher to work iteratively, divergently and convergently, through sketching, creating a Pugh matrix, conducting six formative interviews, and finally creating two hybrid-concepts as paths to teaching debugging. Many metaphors discovered in the design process and in preschool teachers’ daily practices were judged possible for teaching debugging and error detection. The analysis of these resulted in four recommendations for choosing a suitable metaphor when teaching debugging: it should have clear rights and wrongs, it should allow for variation, it should have an easily understandable sequentiality to it, and it should be appropriate for the age group. Furthermore, six recommendations were formulated for utilising a teachable agent: have explicitly stated learning goals, and review them and explore new ones as you go; have a diverse design space exploration; make the learning objective task complex, not the game in general; reflect on whether using a TA is the best solution; make use of the correct terminology; and keep the graphical elements simple. These recommendations, together with the hybrid-concepts created, provide researchers and teachers with knowledge of how to choose appropriate metaphors and utilise teachable agents when aiming to teach debugging and error detection to children between four and six years old.


Acknowledgements

The biggest thanks goes to my supervisor, Agneta Gulz, for her input, interesting discussions, thoughts and help. When I started this project, Agneta jokingly said ”You’ll have the chance to ruin a generation”, to which I laughed nervously. Now, I hope that what I have done might be a step in the right direction towards helping this new generation understand the increasingly digital world around them. Thank you for giving me the opportunity.

I would also like to thank my friends and classmates, without our KRIS and standup-meetings, this thesis would never have been finished. Finally, I would like to thank the preschool teachers who participated in the study, without your knowledge I would have been lost in the dark.


Contents

Abstract
Acknowledgements
Contents
List of Figures
List of Tables
1 Introduction
1.1 Motivation and Background
1.2 Aim
1.3 Research questions
1.4 Delimitations
2 Theory
2.1 Digital Competence in European and Swedish Education
2.2 Learning by Teaching and Teachable Agents
2.3 Computational Thinking
2.4 Teaching and Learning CT
3 Method
3.1 Research Through Design
3.2 Data Collection
3.3 Ethics
4 Design Work
4.1 Initial Ideation
4.2 Narrowing Down the Number of Concepts
4.3 Exploring Possibilities Through Formative Concept Interviews
4.4 Two Paths to Teaching Debugging
5 Results
5.1 Possible Metaphors for Teaching CT
5.2 Teaching Debugging Through a TA
6 Discussion
6.1 Results
References


List of Figures

2.1 The Magical Garden software’s garden (left) and the location of the party (right). Screenshot by author.
2.2 A Magical Garden mini-game view when judging the solution of the TA (left) and when correcting the TA (right). Screenshot by author.
3.1 A visualisation of the design phases of the project, with the figure height expanding in the exploratory phases and shrinking in the defining phases.
4.1 A selection of sketches created during the idea generation.
4.2 A sketch of the dance concept.
4.3 A sketch of the cake decoration concept.
4.4 A sketch of the hamburger concept.
4.5 Sketched example of how to indicate a current action.
4.6 A sketch of the debugging in focus concept.
4.7 Sketched procedure for the third difficulty.
4.8 Sketched explorations of the TA suggesting a solution for a task.
4.9 Sketches of the procedure of the Learning Companion concept, starting in the upper left corner and ending in the lower right corner.
4.9 A sketch of the Learning Companion concept.
4.10 Sketches of the procedure of the Learning Companion concept, starting in the upper left corner and ending in the lower right corner.
5.1 Sketches of the final concepts Debugging in focus (left) and Learning Companion (right).


List of Tables

2.1 Classifications of CT recreated from Hsu et al. (2018).
4.1 Pugh matrix rating concepts.


1 Introduction

The introduction gives an overview of the academic landscape of Computational Thinking (CT) and its role in Swedish preschool education. It also briefly describes teachable agents, and their potential for teaching CT. Finally, it illustrates the lack of research in the intersection between computational thinking and teachable agents that this study aims to illuminate.

1.1 Motivation and Background

The three R’s of education (reading, writing, and arithmetic) are seen as fundamental skills for operating in society and have a natural place in education all over the world. But as computers and digital solutions become increasingly prevalent, and research has shown that there is no such thing as a digital native (European Commission, 2014; Prensky, 2001), researchers and governments worldwide are calling for, and adding computer science to, formal, compulsory education (Heintz, Mannila, Nordén, Parnes, & Regnell, 2017).

Computational Thinking and Programming

Within the research community, the concept of computational thinking (CT) as re-popularised by Wing (2006) has gained traction as a theoretical framework to conceptualise some aspects of digital competence and computer science. Wing defines CT as the way computational scientists think to allow them to abstract and reformulate difficult problems in ways that make the problems possible to solve by a computer (Wing, 2008). Hsu, Chang, and Hung (2018) further define 19 thinking steps of CT, such as abstraction, pattern generalisation, and debug and error detection (Table 2.1). The step debugging and error detection consists of both discovering that there is a mistake in a solution and finding specifically where the mistake lies.
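To make the two components concrete, the following is a minimal illustrative sketch, invented for this text (it is not part of Hsu et al.'s framework or of any software discussed here), separating detecting that a mistake exists from locating where it lies:

```python
# Illustrative toy example: a "program" is just an ordered list of steps.

def make_juice(steps):
    """Apply a sequence of named steps to an (initially empty) glass."""
    glass = []
    for step in steps:
        glass.append(step)
    return glass

# A sequence taught with a mistake: step 2 should be "add water".
taught = ["add syrup", "add ketchup", "stir"]
expected = ["add syrup", "add water", "stir"]

# 1. Error detection: discovering THAT a mistake exists.
has_error = make_juice(taught) != expected

# 2. Debugging: finding WHERE the mistake lies (first differing step).
error_at = next(i for i, (a, b) in enumerate(zip(taught, expected)) if a != b)

print(has_error, error_at)  # True 1
```

The point of the split is pedagogical: a child can succeed at the first step (noticing something went wrong) before mastering the second (pinpointing which step caused it).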

Hsu et al. (2018) state that CT training should be designed for specific ages and use appropriate strategies that map to age-related cognitive abilities. One of the most prevalent ways to teach CT is by using programming as a mediator, and educators have argued that it is indeed the most appropriate way to teach CT (Wing, 2006, 2008). Learning to program does, however, require the ability to both read and write, which makes it hard to teach to preliterate children. As a solution to this, several programming languages for children have been developed, demanding different levels of reading competencies. Examples of languages used are LOGO (Papert, 1980), Scratch (https://scratch.mit.edu/), and existing programming languages such as Java or Python. Several studies have been conducted on Scratch (e.g. Maloney, Peppler, Kafai, Resnick, and Rusk (2008); Maloney, Resnick, Rusk, Silverman, and Eastmond (2010); Resnick et al. (2009)), which is a block-programming language developed at MIT where children can drag and drop blocks of code in different combinations to create programs and learn



For Swedish children, there is an additional barrier: programming languages are generally based on English. In addition to learning to read and write, children also have to learn English to be able to program. As a solution to these problems, games like ScratchJr (https://www.scratchjr.org/), Blue-Bot, and LightBot (https://lightbot.com/) are being used in preschool today. ScratchJr was developed as a version of Scratch aimed at younger children (5-7 years old) who have not yet learned to read. ScratchJr allows children to program characters to move in different directions, as well as turn, wait, and make sounds. These actions are initiated by dragging blocks with signs on them to a programming area (Papadakis, Kalogiannakis, & Zaranis, 2016). Similar applications are LightBot (where the child controls a robot) and Blue-Bot (which has a physical component as well).

All of these languages rely on the metaphor of spatial movements to explain and convey the concepts of programming and CT. As proposed by Marton (2015), there is a need for variation in teaching, to provide opportunities to discover necessary and non-necessary aspects of a learning object. There is an apparent lack of interventions teaching CT without the use of programming and the spatial metaphor.

Educational Games and Teachable Agents

Gulz and Haake (2019) have researched the benefits of letting a child act as a teacher to a teachable agent (TA). They have shown that the concept of learning by teaching is transferable to a digital context and demonstrated benefits of having a child teach a digital agent rather than a peer. These benefits include the possibility for every child to act as a teacher, no matter their individual knowledge level, and without the risk of them teaching a peer incorrectly. Their educational software Magical Garden aims to teach children between 4 and 6 years old basic number sense and early maths without singling out children who are over- or under-performing. The TA learns from the child and then performs the tasks by itself, asking the child if it has understood correctly. There is reason to believe the method has the potential to be used to teach CT as well.

1.2 Aim

This study aims to understand how debugging and error detection can be taught through the use of a Teachable Agent in an educational game. It also explores possible alternative metaphors to the spatial metaphor for teaching debugging and error detection as well as what makes them suitable for the purpose.

1.3 Research questions

1. What makes metaphors suitable to explore and teach debugging and error detection to children in preschool education?

2. How can a teachable agent be utilised to introduce debugging and error detection to children in preschool education?

1.4 Delimitations

The study is focused on teaching preschool children (between 4 and 6 years old) aspects of computational thinking and does not aim to generalise this to other age groups. It is also limited to exploring possibilities to teach one aspect of computational thinking, debug and error detection, and to do this within the thematic constraints of the Magical Garden software.


2 Theory

This chapter presents the current landscape of digital competence education throughout Europe and Sweden, as well as earlier relevant research within the areas of educational games, Teachable Agents, and computational thinking. The chapter also goes into greater detail about research conducted within the Magical Garden project and research about debugging and error detection.

2.1 Digital Competence in European and Swedish Education

The European Union defines digital competence as the confident, critical and responsible use of, and engagement with, digital technologies for learning, work, and participation in society, and as a key competence for lifelong learning. Countries differ in whether they have incorporated digital competence as a cross-curricular theme (e.g. Italy, Norway), integrated it into other learning areas (e.g. Spain, Slovenia), made it an individual subject (e.g. Bulgaria, Montenegro), or used a combination of the three (e.g. Lithuania, Malta). In primary school, most countries classify it as a cross-curricular subject, while in lower secondary education it is classified as an individual subject. The countries also differ in the earliest targeted age, ranging from pre-primary education to upper secondary education (European Commission/EACEA/Eurydice, 2019).

The Swedish National Agency for Education holds forth that children should learn adequate digital literacy. This term is broken down into four aspects: being able to understand how digitisation affects both society and the individual, being able to use and understand digital tools and media, having a critical and responsible approach to digital technology, and being able to solve problems and turn ideas into actions in a creative way using digital technology. The Swedish curriculum for preschool furthermore describes how this should be implemented in preschool education. It states that children should be given the possibility to build a foundation for a critical and responsible approach towards digital technology (Skolverket, 2018).

In Swedish preschool, there is a long tradition of learning through play. The Swedish preschool curriculum (Skolverket, 2018) states that play is the foundation for development, learning and well-being. Simultaneously, there is a trend of gamifying existing educational programs and developing educational games. The aim of these is to combine fun and educational value to increase motivation and continued use and learning. They provide motivational mechanics from the game-world with additional possibilities of interactivity, adaptation, and personalisation. Moreno-Ger, Burgos, Martínez-Ortiz, Sierra, and Fernández-Manjón (2008) describe three ways to create and develop educational games: translating educational content to fit a game-like environment, re-purposing existing games for education, and designing games specifically for learning. Out of these three, they describe the last one



2.2 Learning by Teaching and Teachable Agents

Learning by teaching has been identified as an effective way to learn across different subject domains (Cohen, Kulik, & Kulik, 1982; Davis et al., 2003; Roscoe & Chi, 2007). This effect also applies to students with lower proficiency levels (Robinson, Schofield, & Steers-Wentzell, 2005). Bargh and Schul (1980) showed that by preparing to teach others what they learn, people learn better. When preparing to teach someone else, the teacher may need to structure content, engage in deeper reflection, and take responsibility for the teaching situation, all of which contribute to learning and motivation (Leelawong & Biswas, 2008). Acting as the teacher or tutor can help verbalise knowledge, improve motivation, and even improve the self-efficacy of the teacher (Gulz & Haake, 2019). In these cases, the tutor needs to have at least the same knowledge level as the tutee, which could cause problems for lower-achieving children.

A solution to this problem is a digital agent that takes the role of the tutee, often referred to as a Teachable Agent (TA). Through artificial intelligence, the TA learns from the student and is therefore on the same knowledge level as the student. This allows the student to teach and, at the same time, learn at the appropriate level (Pareto, Haake, Lindström, Sjödén, & Gulz, 2012). If the student teaches the TA incorrect knowledge, no real person comes to harm.

Effects from utilising TAs are considered to stem from additional motivation and effort as well as from additional metacognitive reasoning. In Chase, Chin, Oppezzo, and Schwartz’s (2009) study, students studied either for a future test or to teach a TA. They were found to make greater effort to learn with the aim of teaching their TAs than for themselves. This has been called the protégé effect. The protégé effect is believed to work through: a) an ego-protective buffer - the student gains distance from the performance (and possible negative effects of failure) by attributing it to the TA; b) responsibility - the student takes social responsibility for the TA’s learning, which motivates them to help the TA better and in novel ways; and c) incrementalist theory - by seeing the TA learn and improve, the students are led to accept the belief that one can improve and learn by being taught, rather than the belief that each individual’s knowledge is determined and unchangeable (Chase et al., 2009).

Implementing a TA has also been shown to lead to metacognitive behaviours that increase content learning (Schwartz et al., 2009). Learning by teaching is an activity high in interactive metacognition, which is metacognition directed outwards towards another agent. When teaching, a tutor anticipates, monitors, regulates, and interacts with their tutees’ cognition. The TA serves to allow the same effects. The TA can also work as a means to alleviate cognitive load. When engaging in self-directed metacognition, a person needs to both think problem-solving thoughts and engage in metacognitive thoughts, as well as monitor and regulate their problem-solving thoughts. By changing the task into teaching the TA, the child only has to engage in metacognitive behaviour about the TA’s problem-solving reasoning. When the cognitive load is alleviated like this, the child can focus on practising and learning metacognition related to the subject, which will help them when they are solving the problems themselves (Okita, 2008).

Magical Garden

The Magical Garden software (Figure 2.1 and 2.2) is developed by Lund and Linköping Universities in Sweden, in collaboration with the School of Education at Stanford University and Stockholm University (http://www.magicalgarden.se). Gulz and Haake (2019) describe how it allows preschool children to encounter and practice early mathematical skills and basic number sense. The game involves mathematical concepts such as higher/lower, larger/smaller, longer/shorter, more/less, as well as different representations of numbers. From a research perspective, Magical Garden works as a research platform, allowing projects investigating early mathematics education, how to develop educational software,


Figure 2.1: The Magical Garden software’s garden (left) and the location of the party (right). Screenshot by author.

developmental issues relating to children’s mentalising abilities, their understanding of the tutor-tutee relationship and their ability to ascribe responsibility for errors, mistakes and success to different agents.

The game has three pillars: learning by teaching, feedback, and individual adaptation and inclusion. A child playing the game starts by choosing a TA (introduced as a friend): either Panders the panda, Mille the mouse, or Igis the hedgehog. The chosen friend becomes the child’s TA, while the other animals show up as side characters in the game (Husain, Gulz, & Haake, 2015). The child and their chosen TA help different animals in the forest with tasks in the form of mini-games and gain water drops to water their garden as compensation. When making a mistake, children are provided with meaningful feedback, with information about why their answer is wrong and hints at how it can be corrected. The difficulty of the game is regulated through artificial intelligence, balancing which representation (unorganised dots, fingers, lines, organised dots, numbers, or a mix of the prior), which mathematical operator (counting, step-by-step, addition, subtraction, or addition and subtraction), and which number range (1-4, 1-6 or 1-9) is used to make the game adaptive while rotating through the same mini-games. When progressing in the game, children are rewarded through their magical garden blossoming (to the left in Figure 2.1). Through this reward mechanic, no child is singled out for under- or over-performing, as there is no way for the children to see how far they or other children have progressed. This makes the game an inclusive game.
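The adaptivity described above can be read as a three-dimensional difficulty space. As a hedged sketch (the enumeration below is invented for illustration; the actual Magical Garden implementation is only described at the level of detail given in the text), the combinations the adaptive mechanism can choose among look like this in Python:

```python
# Sketch of the difficulty space: the game rotates the same mini-games
# while varying three parameters described in the text.
from itertools import product

representations = ["unorganised dots", "fingers", "lines",
                   "organised dots", "numbers", "mixed"]
operators = ["counting", "step-by-step", "addition",
             "subtraction", "addition and subtraction"]
number_ranges = [(1, 4), (1, 6), (1, 9)]

# Every combination is a candidate difficulty setting.
difficulty_space = list(product(representations, operators, number_ranges))
print(len(difficulty_space))  # 90
```

Even this small parameterisation yields 90 distinct settings, which illustrates how the game can adapt per child without changing the mini-games themselves.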

Each mini-game is split into three modes. The child first solves the problems on their own, then shows the TA how to solve the problems, and finally helps the TA know if it is acting correctly by correcting it if it makes a mistake. Ternblad, Haake, Anderberg, and Gulz (2018) describe this as three subsequent modes. The child first learns and practices on their own by completing the tasks of the game (Mode 1). The TA then arrives in the game and the child shows the TA how to play (Mode 2). Finally, the child monitors as their friend tries to complete the tasks and is provided opportunities to correct the TA if needed (Mode 3, seen in Figure 2.2). The TA learns from the child and thus makes the same mistakes as the child would, matching the child’s knowledge level. Through these mechanisms, the child is allowed to be the tutor, teaching the TA on a level that is appropriate for the child.

As part of the game, there is a party to celebrate the TA’s birthday with accompanying party-themed mini-games. This is the setting for this study’s exploration of teaching CT, limiting the design space exploration in one dimension (Westerlund, 2005).



Figure 2.2: A Magical Garden mini-game view when judging the solution of the TA (left) and when correcting the TA (right). Screenshot by author.

2.3 Computational Thinking

Wing (2006) defines CT as the way computational scientists think to allow them to abstract and reformulate difficult problems in ways that make the problems possible to solve by a computer (where a computer can be a human). She goes on to define (2008) that CT is: a conceptualisation rather than the development process of a programming language; a logical process rather than repetitive, mechanical behaviour; a way of human thinking utilising the creativity of humans and used to solve human problems; a combination of mathematical and engineering thinking; something that allows us to solve problems in our daily life and interact with others; and a basic life skill. Although CT originates in computer science, it can be applied to other problem-solving contexts (Ching, Hsu, & Baldwin, 2018) such as mathematics, physics, biology or language (see Hsu et al. (2018) for examples). This is also mirrored when CT is taught, as it is often taught as an integrated part of other subjects (European Commission/EACEA/Eurydice, 2019).

Wing (2006) further classifies CT into 11 thinking processes, and Hsu et al. (2018) add eight more CT thinking steps in their review study (presented in Table 2.1). Not all of these are suitable for preschool children to learn, but there is potential to teach or begin teaching several of the thinking processes connected to CT. The practice of debugging and error detection consists of discovering that there is a mistake and finding where this mistake lies. The process is described as pertaining to your own mistakes, but studies on TAs have shown that children find it easier to correct someone else, rather than acknowledging their own mistakes (Chase et al., 2009).

2.4 Teaching and Learning CT

In their literature review, Hsu et al. (2018) make suggestions on how to teach CT. They defined 16 learning strategies for learning CT, where problem-based learning, project-based learning, collaborative learning, and game-based learning were the most prevalent. When designing educational games, Polsani (2003) recommends following the learning objectives model - having small and focused games covering specific topics. This is in line with what Magical Garden has already implemented and supports this thesis’ decision to break down the CT concept into smaller aspects that can be targeted through individual games. Hsu et al.’s literature study found programming to be the most common subject, followed by computer science, mathematics, biology and robotics. Out of the 93 CT studies reviewed by Hsu et al., only 3 papers were aimed at preschoolers.

One common way to develop CT is through programming (Hsu et al., 2018), which in turn is typically learnt through learning a programming language such as Java, Python or C. For younger learners, several block programming interventions have been developed, such as Scratch (Maloney et al., 2008) and LOGO (Papert, 1980). These allow for graphical interfaces that reduce the amount of reading necessary. However, for preliterate children, reducing



Table 2.1: Classifications of CT recreated from Hsu et al. (2018).

1. Abstraction: Identifying and extracting relevant information to define main ideas.
2. Algorithm Design: Creating an ordered series of instructions for solving similar problems or for performing a task.
3. Automation: Having computers or machines do repetitive tasks.
4. Data Analysis: Making sense of data by finding patterns or developing insights.
5. Data Collection: Gathering information.
6. Data Representation: Depicting and organising data in appropriate graphs, charts, words, or images.
7. Decomposition: Breaking down data, processes, or problems into smaller, manageable parts.
8. Parallelisation: Simultaneous processing of smaller tasks from a larger task to more efficiently reach a common goal.
9. Pattern Generalisation: Creating models, rules, principles, or theories of observed patterns to test predicted outcomes.
10. Pattern Recognition: Observing patterns, trends, and regularities in data.
11. Simulation: Developing a model to imitate real-world processes.
12. Transformation: Conversion of collected information.
13. Conditional logic: Finding the associated pattern between different events.
14. Connection to other fields: Finding the relationships between information.
15. Visualisation: Visual content is easier to understand.
16. Debug and error detection: Find your own mistakes and fix them.
17. Efficiency and performance: Analyse the efficiency of the final results in order to achieve a more perfect goal.
18. Modelling: Solve the current problems through the model architecture or develop a new system.
19. Problem Solving: The final step of logical thinking.

the amount of reading necessary is not enough. Therefore, a lot of the research on pre-kindergarten to elementary level CT education utilises robots, block-based programming, electronic blocks, storybooks and devices controlled by buttons (Ching et al., 2018) to make the programming concepts more accessible.

Among other games, ScratchJr has been designed and developed as an answer to this problem (Flannery et al., 2013). The game allows younger, preliterate children to learn programming through dragging and dropping instructions for characters to a scripting area. It is a game based on tinkerability, allowing the children to explore programming without a predetermined goal, and provides opportunities to learn about CT aspects such as conditional logic and automation. It allows children to program characters to move, speak and interact with each other. Instead of block-programming or other written language, ScratchJr uses visual blocks with signs on them, such as arrows and speech bubbles, to communicate meaning.



Teaching and Learning Debugging

Debugging has been known to account for more than 50% of the time and effort spent in the development of a computer program and is especially important for novice programmers, who tend to make more errors than expert programmers (Lee & Wu, 1999). As debugging is a natural component of programming, a lot of the literature on debugging is found in studies of novice programming (McCauley et al., 2008). Ahmadzadeh, Elliman, and Higgins (2005) showed that most good debuggers are often also good programmers, but that good programmers do not necessarily make good debuggers, and there are more examples of this phenomenon (see McCauley et al. (2008) for more examples). Furthermore, there are also studies that show that domain knowledge can improve debugging practices and that many bugs are rooted in a superbug - the belief that there is a hidden mind in programming languages which has intelligent interpretative powers (Pea, 1986). McCauley et al. (2008) conclude that debugging must be taught, rather than assumed to be a by-product of learning to program. Chmiel and Loui (2004) also showed that training in debugging decreased the time spent on debugging.

A bug occurs when a programmer experiences a breakdown between a goal and a plan. Ko and Myers (2005) describe that these breakdowns occur in skill, rules or knowledge due to cognitive limitations in a programming system or external environment. Breakdowns in skill stem from inattention to a routine action (resulting in necessary code not being written) or over-attention to a routine action (resulting in assumptions that necessary code has already been written when it has not, or assumptions that necessary code has not been written when it has). Breakdowns in rules occur when learned programming rules are implemented incorrectly, or when a rule is applied in the wrong setting. Finally, knowledge breakdowns occur because the problem space is too large to explore (causing sub-optimal strategies and biased knowledge based on salient or readily available information) or when the programmer has a faulty model of this complex problem space (causing the wrong solutions to be implemented) (Ko & Myers, 2005).
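The three breakdown types can be illustrated with small, invented Python examples (these are not Ko and Myers' own examples, only a sketch of each category):

```python
# Skill breakdown: inattention to a routine action. The routine
# re-initialisation of `count` was never written, so state leaks
# between calls.
count = 0
def tally(items):
    global count
    # missing: count = 0  (the routine initialisation was skipped)
    for _ in items:
        count += 1
    return count

first = tally([1, 2])   # 2, looks correct
second = tally([1])     # 3, wrong: should be 1

# Rule breakdown: a rule applied in the wrong setting. `is` tests
# object identity, but the programmer wanted value equality (`==`).
def same_name(a, b):
    return a is b

a = "hello"
b = "".join(["hell", "o"])       # equal value, distinct object
identity_bug = same_name(a, b)   # False, although a == b is True

# Knowledge breakdown: a faulty model of the problem space, here the
# assumption that month names sort chronologically.
months = sorted(["March", "January", "February"])  # alphabetical order

print(first, second, identity_bug, months)
```

In each case the program runs without crashing; the breakdown is between what the programmer intended (the goal) and what the code expresses (the plan), which is exactly what makes such bugs hard for novices to detect.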

Bugs are identified using different techniques, aided by the use of artefacts such as program output and knowledge about the domain. The first step to finding a bug usually involves either examining the output of the program or re-reading the code. When re-reading code, experts tend to read in the order of execution, while novices read in the order that the code was written. In Chmiel and Loui’s (2004) study, training included debugging exercises, debugging logs, development logs, reflective memos, and collaborative assignments. The debugging exercises consisted of learners being provided with code containing bugs, and then being asked to correct the code. This seems to be a common way to teach debugging. Similarly, having students predict the output before the code is executed can allow them to work on self-correcting their own misconceptions (McCauley et al., 2008).
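A debugging exercise of the kind described above can be sketched in a few lines. The example below is hypothetical and not taken from any of the cited studies: the learner is shown buggy code, predicts its output, runs it, and uses the mismatch between prediction and result to locate and correct the bug.

```python
# Hypothetical debugging exercise: predict the output, run the code,
# and use the mismatch between prediction and result to find the bug.

def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / (len(numbers) - 1)  # Bug: the divisor is off by one

# Prediction: average([2, 4, 6]) should be 4.0
# Actual result: 6.0, which reveals the faulty divisor.

def average_fixed(numbers):
    return sum(numbers) / len(numbers)  # Corrected version
```

Comparing the predicted 4.0 with the actual 6.0 mirrors the "predict the output" strategy described by McCauley et al. (2008).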

Variation in CT Educational Games

In its simplest form, a metaphor is another way to think of something. Within design, metaphors are used to make the affordances of an interface perceivable (Norman, 2013). Lovgren (1994) defines a metaphor as a partial map between two concepts, and Lakoff and Johnson (1980) write that ”the essence of metaphor is understanding and experiencing one kind of thing in terms of another”. It is therefore no surprise that metaphors play a central role in learning, which, in turn, makes the choice of metaphor used in teaching a critical decision. Metaphors describe one experience in terms of another, and therefore they also influence and constrain how we think about the concept being taught. This affects the importance, meaning and actions that are perceived as available to what is being taught and how the concept relates to and fits with other knowledge and experiences (Lawley & Tompkins, 2000).

Related to this, variation theory (Marton, 2015) offers a way to understand learning, founded in phenomenography. It describes that for a learner to perceive the object of learning, they have to be able to distinguish it and its internal characteristics from other objects in their environment. These internal characteristics are critical aspects of the object of learning. The learner must learn to discern these critical aspects of the object of learning and then discern which aspects of the object of learning are necessary aspects as well as which aspects are non-necessary aspects. The former allows the learner to solve the problem at hand, while the latter allows the learner to generalise the knowledge to other contexts. Teaching, therefore, should stem from the objects of learning and, for every object of learning, teach the learner to discern the critical aspects of the object.

Variation theory can be seen at a micro level and a macro level. At the macro level, the learner needs to practice concepts of CT in different settings, using different metaphors, to learn what CT can be applied to. At the micro level, the learner needs to meet each metaphor of CT with smaller variations, to learn how specific CT concepts/aspects of CT work and can be used. Designing for variation thus requires designing both for variation in the overarching metaphor, as well as within the concept itself. When teaching computational thinking (or programming), some metaphors are more common than others. In teaching programming to smaller children, there is a tradition of utilising the spatial metaphor, moving a physical or digital agent by programming (e.g. Blue-Bot, ScratchJr). This allows the learner to draw on their spatial capabilities and apply what they know about moving themselves to programming a robot. However, in line with variation theory, the learner needs to learn about CT aspects across different settings or contexts, to perceive and understand the learning objective.

In the following chapters, I will explore how TAs can be utilised to teach debugging and error detection, and some alternatives to the spatial metaphor.


3

Method

This chapter describes the methodological background of the study and presents the methods used during the design work. The study concerns the initial stages of design, focusing on early work to ensure that the right design is created. As Pugh (1981, as cited in Pugh (1991)) stated:

”The wrong choice of concept in a given design situation can rarely, if ever, be recouped by brilliant detail design”

3.1

Research Through Design

One way to conduct design research is through the Research Through Design (RtD) model, which includes making as a method to address wicked problems (Zimmerman, Forlizzi, & Evenson, 2007). Zimmerman et al. (2007) describe that the researcher focuses on making the right thing, and then analyses this designed research artefact to find the knowledge it carries about design. The intent of RtD is not the creation of the artefact in itself, but to create knowledge for the design research and practice communities. The final result can be pre-patterns or recommendations for future research or future design processes that can benefit designers working or conducting research within the same area.

Zimmerman et al. (2007) describe four criteria for evaluating RtD: process, invention, relevance and extensibility. The process of the research must be supported by a design rationale and provide adequate details so that the process can be reproduced. The research must also be a significant invention, integrating various subject matters to address a situation in a novel way. The research must also be relevant, motivating the preferred state that the research is trying to achieve and detailing the current situation’s need for change or innovation. This criterion replaces the validity criterion, as two designers faced with the same problem and methodology are not expected to arrive at the same solution. The final criterion dictates that it should be possible to use either the process or the knowledge created in the resulting artefacts in future research and practice.

Educational design research (EDR) has a similar goal of developing practical and theoretical insights simultaneously, creating ”usable knowledge” that is relevant for practice (McKenney & Reeves, 2012). The aim of EDR is to develop usable knowledge (Lagemann, 2000): scientifically-backed knowledge that is relevant for and usable in educational practice. McKenney and Reeves stress the fact that EDR needs to be situated in the context where the learning takes place. This is again conceptually close to RtD, as EDR methodology ensures that the right thing is designed. In addition to this, McKenney and Reeves argue that theoretical and practical knowledge generated through EDR has to be transferable to contexts other than the specific context that has been the subject of research. Much in the same way that RtD tries to generalise knowledge from specific design instances, EDR also utilises iterations, replications and testing to develop knowledge in several stages, studies and contexts to find generalisable aspects.


This study is inspired by the Research Through Design and educational design research methodologies, aiming to abstract knowledge from the development of a designed research artefact and with the overarching aim to create knowledge that is usable for preschool teachers, as well as educational game designers. This is illustrated by how the study adopts design methods and adheres to Zimmerman et al.’s RtD criteria.

Process - To ensure that the process criterion is fulfilled, the research process is documented in Chapter 4 with commentary and annotated sketches acting as design rationale to ensure that the process can be replicated.

Invention - The study integrates the research fields of computational thinking education for preschoolers and teachable agents, using design as a method to explore the design space. As described by Hsu et al. (2018), there is a lack of research on how to teach CT to preschoolers and, as illustrated in Chapter 2, there is also a lack of research exploring how to utilise TAs to teach CT.

Relevance - The relevance of creating an educational game teaching CT is motivated in Chapter 2. Programming is considered a 21st-century skill, and preparing preschool children for their future programming education is relevant.

Extensibility - The artefacts created in this study contain knowledge that can be used in educational practice and research. The artefacts are analysed in Chapter 4.

3.2

Data Collection

This section describes the methods used in the study, switching between controlled concept generation and controlled convergence (Pugh, 1991). Moreno-Ger et al. (2008) define how to produce and maintain educational games. They describe the main phases of identifying pedagogical requirements, designing and implementing. This study focuses on the first two phases of creating an educational game: identifying pedagogical requirements and designing the educational game.

The literature review presented in Chapter 2 as well as the research questions and earlier studies within the Magical Garden project together serve as a basis for developing the pedagogical requirements of the game. Based on these pedagogical requirements, the researcher conducted divergent sketching to explore possible metaphors, game mechanics, and interactions for the game. The sketches created were rated in a Pugh matrix, based on criteria related to the pedagogical requirements. Three concepts were chosen to develop further and presented to preschool teachers in a series of formative concept interviews. Insights from the interviews were summarised and used to develop a hybrid concept. Finally, knowledge generated throughout the process was analysed and generalised into recommendations that can benefit others within the research field. The design phases are shown in Figure 3.1. The figure also shows which steps are considered diverging (sketching, conducting formative concept interviews) and which are considered converging (creating a Pugh matrix, creating the hybrid prototype).


Figure 3.1: A visualisation of the design phases of the project, where the figure height expands in the exploratory phases and shrinks in the defining phases.

Pedagogical Objectives and Project Goals

Variation theory stresses the need for deciding educational goals. The main object of learning in this study is debugging and error detection. The final design, therefore, needs to keep aspects of debugging static, while varying other aspects of the design, to achieve learning. As the study also aims to explore possible metaphors to introduce and teach aspects of computational thinking, this is also considered a project goal.

The Magical Garden software offers some limitations to the design exploration:

• The design needs to utilise the TA through the same procedure that Magical Garden does.

• The designed metaphor needs to fit conceptually into the setting of a birthday party.

• The design needs to be tablet compatible, as this is the platform intended for Magical Garden.

When designing educational software, there are additional criteria to consider. Husain et al. (2015) describe 7 design criteria for designing high-quality software for early maths. The first two criteria are specific to designing maths software, while criteria 3, 4, 5, 6 and 7 have relevance for designing CT educational software. Criterion 3 states that the software should provide informative and meaningful feedback and create feedback loops. Feedback can be provided at different levels, varying from no feedback or right/wrong feedback to feedback pointing out specific discrepancies, feedback elaborating on what is incorrect and why, and finally, too much feedback (Schwartz, Tsang, & Blair, 2016). Husain et al. call for feedback that is more meaningful than simple right/wrong feedback, giving the child direction on how or why an answer is correct or incorrect. This needs to be implemented throughout the mini-games.

Criterion 4 states the need to consider what is motivating for the age group. Husain et al. state that this can be achieved through meaningful feedback loops, implementing surprises throughout the game, and providing a captivating narrative. This is implemented in Magical Garden through the garden, which surprises the child with different, unusual flowers, and through the overarching narrative. However, this can also be a design feature of the mini-games, designing sub-themes and narratives that are engaging for the children, as well as providing motivational feedback. Criterion 5 states that the software needs to provide individually adapted support as well as challenge the child. This entails adapting feedback, tasks, and level of difficulty to the individual child’s progress, contributing to learning, flow, and intrinsic motivation.


Criteria 6 and 7 respectively call for including a reporting function in the software and utilising inclusive pedagogy. The reporting function allows teachers to keep track of individual children’s progress, which is implemented in Magical Garden in the form of logging children’s play and presenting it in a follow-up tool that allows teachers to follow the children’s progress. Inclusive pedagogy is defined by Husain et al. as not singling out children who are struggling or over-performing as ’different’. This is implemented by providing different difficulties within the same mini-game and not showing any differences in the appearance of the child’s garden based on learning progress.

Sketching

During the sketching phase, the designer’s goal is to create as many ideas as possible. The first ideas that are sketched will be common ones, ideas that most people would arrive at when confronted with the problem, but by continuing, the designer can find ideas that are unique and innovative, solving the problem in a new way. This is also a major contributor to getting the right design. Creating a visual representation of an idea allows the designer to see relationships between parts of the design and the whole (Arvola, 2014). The sketches can and should be annotated continuously, with notes about questions that have popped up, possible problems, possible alternatives, positives and negatives, and design decisions. These annotations serve as the design rationale, clarifying the reasoning behind design decisions and further explorations during the sketching phase (Buxton, 2007).

Pugh Matrices

One way to evaluate concepts is through Pugh matrices (Pugh, 1991). These are by nature a convergent part of the process, but Pugh stresses that the method should positively stimulate the emergence of new concepts. Criteria for evaluation are chosen with the design requirements or use qualities in mind and should be agreed upon before evaluation starts. By creating a matrix, all concepts can be evaluated based on each criterion and how it is fulfilled compared to a chosen baseline concept. The ratings of ”better than”, ”worse than” and ”same” are written into the matrix and summarised to a score for each concept. At this stage, Pugh states that one should then try to change the negatives into positives by modifying the concepts, expanding the matrix with the new, modified concepts. Arvola (2014) describes that the last step of the evaluation matrix process is to decide which concepts to continue developing.
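As a minimal illustration, the scoring step of a Pugh matrix can be expressed in a few lines of code. The concept names, number of criteria and ratings below are invented for the example and are not the ones used in this study.

```python
# Sketch of Pugh matrix scoring: each concept is rated per criterion as
# "+" (better than the datum), "-" (worse) or "0" (same).
# The concepts and ratings below are invented for illustration.

ratings = {
    "Concept A": ["+", "0", "-", "+"],
    "Concept B": ["0", "0", "+", "-"],
}

def summarise(ratings):
    """Return (positives, negatives, neutrals) per concept."""
    return {
        concept: (marks.count("+"), marks.count("-"), marks.count("0"))
        for concept, marks in ratings.items()
    }

scores = summarise(ratings)  # e.g. "Concept A" -> (2, 1, 1)
```

The concepts can then be compared on their summed scores before deciding which ones to modify or carry forward.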

Formative Concept Interviews

Users can be involved at different points of a design process and in different ways. They can be consulted early in the process or later, and they can be evaluators or creators. The involvement can also be formal or informal, and the purpose formative or summative, all depending on what the goals of the user involvement are. Designers benefit from involving users early in the process when they want answers to high-level questions about the design. From this kind of exploratory test, one can find what users conceive and think about using the product, how the product’s functionality has value to the user and how easily the user can navigate the design (Rubin & Chisnell, 2008). These early user tests are informal and collaborative, allowing the moderator to probe as well as discuss with the users to explore high-level concepts and thought processes. The data obtained is qualitative, and there are often several concepts to compare, which, through questions investigating why a concept is preferred over another, provides knowledge about which aspects of each concept are favourable or unfavourable. The result of an exploratory study is seldom one winning concept, but rather


structured organisation where a list of questions or areas that should be touched upon is listed. In the case of exploratory tests, there is an additional element of a prototype, providing the user with a focal point to base their reflections on. The same questions are asked for each concept, but questions can be expanded or skipped depending on what the interviewee spontaneously brings up during the interview. Counterbalancing is used to account for potential effects of presenting one concept before another in a within-subjects design (Rubin & Chisnell, 2008).
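Counterbalancing of presentation orders can be sketched with a simple rotated Latin square, where each concept appears once in every position. This is a generic illustration with made-up concept labels, not the exact ordering scheme used in this study.

```python
# A rotated Latin square for counterbalancing: row i presents the
# concepts starting from position i, so every concept occupies every
# position exactly once across the rows.
def latin_square_orders(concepts):
    n = len(concepts)
    return [[concepts[(row + col) % n] for col in range(n)] for row in range(n)]

orders = latin_square_orders(["Concept 1", "Concept 2", "Concept 3"])
# Participants are then assigned to the rows in turn, so that each
# presentation order is used equally often.
```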

In qualitative analysis, transcribing is often used to understand the data better (Howitt, 2010). Emerson, Fretz, and Shaw (2011) describe a procedure to analyse field data. They suggest reviewing recordings and notes while coding them openly and writing down memos. The open coding consists of keywords that summarise data sections while still being unique and specific. Memos are defined as thoughts and reflections that occur during the open coding. After the open coding, Emerson et al. write that codes can be collected into groups of interest and themes to be better understood.

3.3

Ethics

The study was conducted according to the ethical guidelines by the Swedish Research Council (2017). Participants who took part in this study did so voluntarily. They were informed beforehand how their personal information would be handled and that they, at any point during the interviews, could end their participation without repercussions. They were also informed that they could contact the researcher to withdraw their consent in retrospect and have all their non-anonymised data deleted, in accordance with European GDPR law. This information was given to participants verbally and in text, and consent was verbally collected before interviews started. Participants were not compensated for their participation, and all data presented in this study have been anonymised.


4

Design Work

In this chapter, the procedure of the design work is described and the material created during the process is presented.

4.1

Initial Ideation

During the ideation, the researcher sketched as described in Section 3.2 to explore the design space. This resulted in 10 different concepts, of which annotated sketches of 6 are shown in Figure 4.1. These annotations serve as design rationale for the concepts.

The concepts all utilised the TA in the same way, allowing it to be the one making mistakes that the child then had to correct. The child first creates sequences of actions themselves (e.g. describing in which order to add decorations to a banner), completing the task several times. This is followed by the TA arriving and asking if they can watch and learn from the child. Finally, the TA asks to try the task themselves, checking with the child when they are finished if the child believes that the actions are in the right order. If not, the TA goes on to ask about the correctness of each planned action, allowing the child to find the bug, and then correct it. As an additional level of difficulty, the scaffolding of asking about each individual action can be removed. The concepts are described briefly below:

• Dance: The child is asked to copy a dance performed by an animal so that everyone can dance in unison (Sketch in Figure 4.2).

• Cookies: The child is asked to bake cookies according to each party-goer’s desires.

• Party hats: The child makes party hats according to each party-goer’s desires (Sketch in Figure 4.1d).

• Cake decorations: The child decorates cakes for Panders, matching the patterns they imagine (Sketch in Figure 4.3).

• Fishing game: The child manages a popular Swedish birthday party game where party-goers fish for some specific candy that the party-goer likes (Sketch in Figure 4.1a).

• Hide and seek: The child is provided with a bird’s-eye view of the forest and asked to instruct an animal on how to find where the others are hiding.

• Find the way: The child is asked to help party-goers find their way to the party by giving them instructions (Sketch in Figure 4.1c).

• Banner: The child creates the party-banner shown in Figure 2.1 (Sketch in Figure 4.1b).

(a) A sketch of the concept ”Fishing game”. (b) A sketch of the concept ”Banner”.

(c) A sketch of the concept ”Find the way”. (d) A sketch of the concept ”Party hats”.

(e) A sketch for the concept ”Pancake cake”. (f) An alternative design of the concept ”Pancake cake”.

Figure 4.1: A selection of sketches created during the idea generation.
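The shared TA procedure that all the concepts build on can be summarised as a simple dialogue loop. The sketch below is a hypothetical, simplified rendering of that procedure; the function and variable names are my own and are not taken from the Magical Garden implementation.

```python
# Simplified sketch of the TA correction stage: the TA proposes a
# sequence of actions, some of them wrong, and the child reviews and
# corrects each step in turn.
def ta_correction_round(correct_sequence, ta_sequence, child_judges, child_fixes):
    """child_judges(i, action) -> bool: does the child accept step i?
    child_fixes(i) -> action: the child's replacement for a rejected step."""
    result = list(ta_sequence)
    for i, action in enumerate(result):
        if not child_judges(i, action):   # "Is the first one correct?" etc.
            result[i] = child_fixes(i)    # the child corrects the bug
    return result == correct_sequence     # did the debugging succeed?

# Example: an attentive child comparing each step against the target.
target = ["spin", "jump", "clap"]
proposed = ["spin", "clap", "clap"]       # the TA made one mistake
succeeded = ta_correction_round(
    target, proposed,
    child_judges=lambda i, a: a == target[i],
    child_fixes=lambda i: target[i],
)
```

In the higher difficulty described above, the step-by-step scaffolding (the `child_judges` questions per step) would be removed, leaving the child to locate the erroneous step unaided.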

4.2

Narrowing Down the Number of Concepts

The sketched ideas were rated using a Pugh matrix with 10 criteria based on the pedagogical objectives and the project goals and limitations as well as Husain et al.’s (2015) criteria for educational software.


1. Clear rights and wrongs: To properly teach debugging and error detection, it is important that the game is not ambiguous in what is right and what is wrong. This criterion estimates how clear the rights and the wrongs are in the concepts.

2. Supports variation: In line with variation theory, the game needs to provide variation in play. This criterion estimates how much variation a given concept can provide to the user.

3. Additional activities: One aim of the Magical Garden software is to provide additional activities connected to the themes of the game that preschool teachers can bring into their daily operations. This criterion estimates how easily a concept can be brought into the physical preschool world.

4. Meaningful feedback: Criterion 3 from Husain et al. (2015) states that feedback should be informative and meaningful. This criterion estimates how meaningful the feedback in a concept can be, considering its metaphor.

5. Tablet compatible: As the intended platform for the game is tablets, this criterion estimates how well a concept will fare with the interactions available and the limitations present when using the tablet.

6. ”New” metaphor: As the aim of this study is to find alternative metaphors to use when teaching CT, this criterion estimates how far a concept’s metaphor is from the spatial metaphor.

7. Graphically simple: Criterion 4 from Husain et al. (2015) concerns itself with age-appropriate design. Therefore, this criterion estimates how visually simple a concept is so that the design is not too distracting for a child between 4 and 6.

8. Utilises TA: As per the aim of the study, this criterion estimates how well a concept utilises the teachable agent.

9. Relatable metaphor: Criterion 4 from Husain et al. (2015) also states the importance of motivating the age group, e.g. through a captivating narrative. This criterion, therefore, estimates how relatable the proposed metaphor is to a child in the target age group.

10. Additional CT possibilities: Beyond the pedagogical goal of introducing debugging and error detection, this criterion estimates the possibilities of a concept to teach other aspects of computational thinking as defined by Hsu et al. (2018).

The dance concept was chosen as the datum and used as a reference point to compare how well criteria were fulfilled. The low-rating concepts’ negative ratings were examined to find potential for development. One of the low-rating concepts, Pancake cake, was found to provide opportunities for change in ways that would create a novel, better-ranking concept. As an effect of this, the hamburger concept was created, scoring higher than pancake cake in criteria 2, 3, 5, and 7.

Three high-rating concepts were chosen to present during the formative concept interviews: dancing, cake decorations and hamburgers. These were chosen both due to their high score, but also due to them providing differences that could drive discussions during the subsequent interviews. The three concepts all shared the general procedure for utilising TAs but differed mainly in how the task was presented and which metaphor was used. The concepts are described in detail below.


Table 4.1: Pugh matrix rating concepts. Each concept was rated on the ten criteria above as better than (+), worse than (-) or the same as (0) the Dance datum, giving the following sums:

Concept              Sum negatives  Sum positives  Sum neutrals
Dance (Datum)              0              0             10
Cookies                    2              2              6
Party hats                 1              2              7
Cake decorations           1              5              4
Fishing game               4              2              4
Hide and seek              2              3              5
Find the way               4              3              3
Pancake cake               3              4              3
Candy bag                  3              1              6
Banner                     2              0              8
Hamburger¹                 0              3              7

¹Concept adapted from pancake cake


Concept 1: Dancing

Figure 4.2: A sketch of the dance concept.

In the dancing concept (Figure 4.2), the child helps teach a choreography to animals going to the party. The child first meets an earthworm who describes that the mouse Mille has a choreography to teach everyone who is going to the party, and asks the child if they can construct instructions by looking at the dance while Mille dances it. Mille exclaims ”Come on! Dance like I do!” and starts dancing, allowing the child to drag dance moves (presented in squares to the left) to the instruction area (represented by the piece of paper). The child can re-watch the dance by pressing the note-button next to Mille. When the child thinks that the choreography is finished, the child presses a button (not depicted in Figure 4.2). A new animal arrives at the location and uses the instructions to dance like Mille. If they dance the same, the earthworm exclaims ”Yaay! They’re dancing the same choreography!”. If they do not dance the same, the earthworm instead exclaims ”Oh no, they’re not dancing the same!”. After a few choreographies have been created, the TA arrives, asking if they can watch and learn. The child continues completing the task by dragging dance moves to the instruction area. Finally, the TA asks if they can try, and the child watches as the TA drags correct and incorrect dance moves to the instruction area. They then ask if what they have instructed is correct, and the child answers through the choice of smiley (described in Section 2.2). The TA then for each planned action asks if the action is correct (e.g. ”Is the first one correct?”, ”Is the second one correct?”, ”Is the last one correct?”) and the child answers in the same fashion. If the child indicates an action as incorrect, they are provided with the opportunity to correct it. There are several choreographies for each of the three stages: completing the action themselves, showing the TA, and correcting the TA. The dancing concept was chosen as the datum for the Pugh matrix.


Concept 2: Cake decorations

Figure 4.3: A sketch of the cake decoration concept.

In the cake decoration concept (Figure 4.3), the child helps different animals to get cakes with their desired decorations in a desired order. The child first meets an octopus who verbally asks for help in figuring out in which order to add the decorations. An animal then arrives and visualises how they want their cake to be decorated in a thought bubble. The child drags different fruit (presented to the left) to the instruction area (represented by the piece of paper). When the child is done, they press the execute-button (located in the right part of the instruction area). If the cake is decorated correctly, the octopus exclaims ”Good job! Everything is in the order they wanted!”. If it is not, the octopus instead exclaims ”Oh no, that was not the order they wanted.”. New animals arrive at the location and visualise how they want their cake to be decorated in a thought bubble. After a few cakes have been created, the TA arrives, asking if they can watch and learn. The child continues completing the task by dragging decorations to the instruction area. Finally, the TA asks if they can try, and the child watches as the TA drags correct and incorrect decorations to the instruction area. They then ask if what they have instructed is correct, and the child answers through the choice of smiley (described in Section 2.2). The TA then for each planned decoration asks if the decoration is correct (e.g. ”Is the first one correct?”, ”Is the second one correct?”, ”Is the last one correct?”) and the child answers in the same fashion. If the child indicates a decoration as incorrect, they are provided with the opportunity to correct it. There are several cakes to be decorated for each of the three stages: completing the task themselves, showing the TA, and correcting the TA.

The cake decoration concept scored negatively on criterion 3 and positively on criteria 1, 2, 4, 7 and 10.


Concept 3: Hamburgers

Figure 4.4: A sketch of the hamburger concept.

The hamburger concept was developed in the Pugh matrix process as a development of the Pancake cake concept. The pancake cake concept got negative scores in the Pugh matrix on criteria 2, 3, 5 and 7. As it is conceptually a pancake cake, every other ingredient is logically a pancake, providing an obstacle to providing variation (criterion 2). The pancake cake concept was also judged to have fewer possibilities for additional activities (criterion 3) than the dance concept, as it needs pancakes and other foodstuffs high in sugar and not likely to be in stock at preschools. Finally, the concept scored negatively on the criteria of tablet compatibility (criterion 5) and graphical simplicity (criterion 7), as it was judged hard for children to see and manipulate the different ingredients on the finished cake (pancakes and jams being very thin and therefore hard to see). The re-design of the pancake cake concept resulted in all of these negative scores being turned into neutrals.

In the hamburger concept (Figure 4.4), the child helps different animals to make hamburgers with ingredients in a desired order. The child first meets an earthworm who verbally asks for help making hamburgers according to the wishes of the different animals. An animal then arrives and verbally describes what ingredients they want in their hamburger. The child drags different ingredients (presented to the left) to the instruction area (represented by the piece of paper). When the child is done, they press the execute-button (located in the right part of the instruction area). If the hamburger is made correctly, the earthworm exclaims ”Good job! Everything is in the order they wanted!”. If it is not, the earthworm instead exclaims ”Oh no, that was not the order they wanted.”. New animals arrive at the location and describe what ingredients they want in their hamburger, and in which order. After a few hamburgers have been made, the TA arrives, asking if they can watch and learn. The child continues completing the task by dragging ingredients to the instruction area. Finally, the TA asks if they can try, and the child watches as the TA drags correct and incorrect ingredients to the instruction area. They then ask if the recipe they have created is correct, and the child answers through the choice of smiley (described in Section 2.2). If the child indicates that everything is not correct, the TA asks about each ingredient in order (e.g. ”Is the first one correct?”, ”Is the second one correct?”, ”Is the last one correct?”) and the child answers. If the child indicates an ingredient as incorrect, they are provided with the opportunity to correct it. Several hamburgers are made in each of the three stages: completing the task themselves, showing the TA, and correcting the TA.

4.3

Exploring Possibilities Through Formative Concept Interviews

The formative concept interviews allowed the researcher to get feedback on the three concepts chosen through the Pugh matrix. A pilot interview was conducted to review the interview script, and six main interviews were conducted to evaluate the concepts.

Pilot

A pilot interview was conducted with a preschool teacher who had previously worked with children between 4 and 6 years old. After the interview, a script was written for the explanatory part of the interview, explaining Magical Garden, computational thinking, and debugging and error detection. A few of the questions were also reformulated, and one question was added.

Interviews

The concepts were evaluated through formative concept interviews with preschool teachers. The interviews were conducted in Swedish with 6 preschool teachers (5 women, 1 man) between the ages of 24 and 50 (M = 42.8, SD = 9.6) currently working with children between 4 and 6 years old. The interviews were on average 48.5 minutes long (SD = 17.5), and the preschool teachers had worked for an average of 14.8 years (SD = 8.9). The interviews were conducted remotely using Zoom software (https://zoom.us/) installed on the participants’ personal computers (and in one case an iPad), allowing video to be recorded as well as screen-sharing of the concept sketches. Participants were located in their own homes or at their place of work during the interviews and were asked to find a space where they would not be disturbed.

Before the interviews, participants were asked to read a consent form, and consent was collected verbally before the start of the recordings. The teachers were first interviewed about their background and prior experience with teaching programming, and then introduced to computational thinking theory and debugging. They were shown the concepts one at a time and asked to reflect on how each would work in their educational setting and with the children in their current class. Concept orders were varied according to a Latin square design. Finally, the teachers were asked to compare the different concepts and reflect on aspects of each that they thought would be better or worse in regard to teaching CT to children. A translated version of the interview guide can be found in Appendix A.
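The Latin square counterbalancing of concept order can be sketched as below: with three concepts and six participants, each row of a cyclic 3×3 square is simply used twice. The concept labels are placeholders, and this cyclic construction is one possible square, not necessarily the exact ordering scheme used in the study.

```python
def latin_square(conditions):
    """Cyclic Latin square: row r is the condition list rotated left by r,
    so every condition appears exactly once per row and per column."""
    n = len(conditions)
    return [[conditions[(r + c) % n] for c in range(n)] for r in range(n)]


concepts = ["concept 1", "concept 2", "concept 3"]  # placeholder labels
square = latin_square(concepts)

# Six participants: cycle through the three row orders twice, so each
# concept appears equally often in each presentation position.
presentation_orders = [square[p % len(concepts)] for p in range(6)]
```

Assigning rows cyclically like this balances order effects across positions, although it does not balance carry-over effects between specific concept pairs.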

To analyse the interviews, recordings were reviewed, transcribed and openly coded: finding short keywords that describe the content of passages while not summarising too much. Memos were continuously added in the margins: thoughts, insights, and connections to other research (Emerson et al., 2011), as well as questions and possible ways of developing the concepts. The codes were formalised so that they matched across the dataset and then sorted into areas of interest. Finally, themes that described the areas were generated.


Interview Findings

The interviews provided indications of how the concepts would work in practice and raised questions and possibilities regarding the final concept. In general, the teachers were positive about the concepts and felt that the children would enjoy playing the games. They were also confident that the designs could be adapted into physical activities to teach debugging and error detection. Most teachers related unprompted to their current child group when reflecting on the concepts, but several stated the need for testing with children to know how the game would be received. Some also raised that they would be careful about expressing rights and wrongs, describing these as problematic and loaded in a way that could affect children negatively. The analysis of the interviews resulted in three themes and a reflection on how to proceed. These are presented below.

Current work. Debugging practices were mainly found in routine-transitions, in analogue activities with patterns or movement, and when using programming robots. In routine-transitions, teachers found that there was often a sequential aspect to these kinds of activities. One teacher described how, after eating lunch, the children needed to remove their plate, wash up and then wash their hands before continuing with other activities. They described how some children forgot one step in the process and how they then worked through the different steps to find what the child had missed. Analogue activities described included programming dance moves, with children calling out if someone did not follow the agreed-upon instructions, and constructing patterns to be followed when creating bracelets. With the programming robots, several teachers described creating courses and programming the robot to navigate them, and one teacher described how, when the robot took a wrong turn, they discussed with the children why the problem occurred and found a solution to the bug. They also described how they had heard children discussing amongst themselves why the robot took a wrong turn, debugging their code.

Notably, all teachers stated that their preschool worked with programming in some way, and after hearing the introduction to debugging and error detection, they all identified routine-transitions where error detection was conducted. In most cases of error detection in routine-transitions, the existence of an error was pointed out by the teacher, and the child was then asked to identify it. However, these situations were not necessarily accompanied by any discussion of error detection or debugging and were not held with the aim of teaching debugging, limiting its transfer to other domains. Some teachers described situations where error detection had a central role (such as when programming robots), and the majority of these identified activities were founded on the spatial metaphor.

Beliefs about children. The interviews gave insights into the teachers’ beliefs about children’s use of digital and educational software. These are summarised as statements below:

• Children have a hard time focusing.
• Children have a hard time understanding complicated instructions.
• Children need short instructions.
• Children think visual instructions are easier than verbal.
• Children with parents who do not speak Swedish have difficulties with the language.
• Children learn how technology works quickly.
