
MSI Report 07125

Växjö University
SE-351 95 VÄXJÖ

ISSN 1650-2647
ISRN VXU/MSI/IV/E/--07125/--SE

Exploring the use of contextual metadata

collected during ubiquitous learning

activities

Martin Svensson
Oskar Pettersson

School of Mathematics and Systems Engineering
Reports from MSI – Rapporter från MSI


Abstract

Recent developments in modern computing have led to a more diverse use of devices within the field of mobility. Many mobile devices of today can, for instance, surf the web and connect to wireless networks, thus gradually merging the wired Internet with the mobile Internet. As mobile devices by design usually have built-in means for creating rich media content, along with the ability to upload it to the Internet, these devices are potential contributors to the already overwhelming content collection residing on the World Wide Web. While interesting initiatives for structuring and filtering content on the World Wide Web exist – often based on various forms of metadata – a unified understanding of individual content is more or less restricted to technical metadata values, such as file size and file format. These kinds of metadata make it impossible to incorporate the purpose of the content when designing applications. Answers to questions such as “why was this content created?” or “in which context was the content created?” would allow for more specific content filtering tailored to fit the end-user's cause. In the opinion of the authors, this kind of understanding would be ideal for content created with mobile devices, which are purposely brought into various environments. This is why we in this thesis have investigated in which ways descriptions of contexts could be caught, structured and expressed as machine-readable semantics.

In order to limit the scope of our work we developed a system which mirrored the context of ubiquitous learning activities to a database. Whenever rich media content was created within these activities, the system associated that particular content with its context. The system was tested during live trials in order to gather reliable and “real” contextual data, leading to the transition to semantics by generating Resource Description Framework (RDF) documents from the contents of the database. The outcome of our efforts was a fully functional system able to capture the contexts of pre-defined ubiquitous learning activities and transform these into machine-readable semantics. We would like to believe that our contribution has some innovative aspects – one being that the system can output the contexts of activities as semantics in real time, allowing monitoring of activities as they are performed.


Table of Contents

1 Introduction...1
1.1 Background...1
1.2 Motivation...4
1.3 Purpose...5
1.4 Limitations...6
1.5 Intended audience...7
1.6 Expected results...7
1.7 Disposition...7
2 Method...9
2.1 Unified Process...9
2.2 Scenario-based design...10

2.3 Why UP and scenario-based design?...12

2.4 Reliability and validity...13

3 Theoretical foundations...14

3.1 Context...14

3.2 Knowledge representation...15

3.2.1 Semantic networks...15

3.2.2 Ontologies...16

3.3 Markup languages for representing knowledge...17

3.3.1 SGML...17
3.3.2 XML...17
3.3.3 Topic maps...18
3.3.3.1 XML Topic Maps...18
3.3.4 RDF...19
3.4 Ubiquitous computing...20

3.4.1 Global Navigation Satellite System...21

3.4.2 Visual markers...21

3.5 Current efforts in the scope of this thesis...23

3.5.1 MOBIlearn...23

3.5.2 ENLACE...24

3.5.3 Ambient wood...24

3.5.4 The Knowledge Management Research Group...25

3.5.4.1 Server side Solution for Conceptual Browsing...25

4. The bigger picture...26

4.1 CeLeKT activity model...26

4.1.1 Learning Activity System...27

4.1.1.1 CCS – Collect Convert and Send...27

4.1.2 Outdoor Activities...28

4.1.3 Indoor Activities...28

4.1.4 Sensors & Actuators...28

4.2 Where does our contribution fit in?...28

4.3 In which ways does our approach differ from others?...29

5 System development...32

5.1 UP instantiation...32

5.2 Motivation of development decisions...33

5.2.1 Selecting an appropriate definition of context...33

5.2.2 Selecting an appropriate technique for generating semantically related metadata and values...34

5.3 Test workflow environment...35

6. System development: Cycle 1...36

6.1 Project: AMULETS Stortorget...36


6.2.2 Actors...38

6.2.3 Use case diagram...39

6.2.4 Use case specifications...39

6.3 Analysis...40

6.3.1 Analysis class descriptions...40

6.3.1.1 Actor...40
6.3.1.2 Activity...41
6.3.1.3 Task...41
6.3.1.4 SortOrder...41
6.3.1.5 ActorInActivity...41
6.3.1.6 TaskStatus...41
6.3.1.7 Setting...42
6.3.1.8 Content...42

6.3.2 Analysis class diagram...42

6.4 Design...42

6.4.1 Database design...43

6.4.1.1 Motivation...43

6.4.2 Design class diagram...44

6.5 Implementation...44

6.6 Test...44

6.6.1 Preparing the trial...44

6.6.2 Performing the trial...45

6.6.3 Reviewing the outcome of the trial...47

7 System development: Cycle 2...50

7.1 Project: AMULETS Biology...50

7.2 Requirements gathering...51

7.3 Analysis...51

7.3.1 Analysis class descriptions...52

7.3.1.1 ActorInActivity...52

7.3.1.2 Snapshot...52

7.3.1.3 Metadata...52

7.3.1.4 MetadataValue...52

7.3.2 Analysis class diagram...52

7.4 Design...53

7.5 Implementation...53

7.6 Test...53

7.6.1 Preparing the trial...53

7.6.2 Performing the trial...54

7.6.3 Reviewing the trial...55

8 Generating semantics in RDF...58

8.1 Finding and defining suitable classes...58

8.2 Finding and defining suitable properties...63

8.3 Create RDF Schema...64

8.4 Generating RDF document...65

8.5 Visualizing semantics: Content browsing application...66

9 Conclusion...71

9.1 Conclusion in relation to thesis purpose...71

10 Discussion and reflection...73

11 Future work...76


Figure Index

Figure 1: The vision of a context enhanced World Wide Web...3

Figure 2: Extended metadata collection...5

Figure 3: Workflow – Bachelor thesis to Master thesis...6

Figure 4: Unified Process...9

Figure 5: Challenges and approaches in scenario-based design...11

Figure 6: A model of context...15

Figure 7: RDF drawing example...20

Figure 8: Example of RDF...20

Figure 9: CeLeKT activity model...26

Figure 10: ACS and RDF development timeline...32

Figure 11: Use case diagram...39

Figure 12: Abstracted conceptual idea...40

Figure 13: Conceptual class diagram...42

Figure 14: Class hierarchy...44

Figure 15: Participant scanning a semacode...46

Figure 16: Sequence diagram based on UC1...47

Figure 17: Photograph put in context...48

Figure 18: Map displaying where the photo was taken...48

Figure 19: Abstract conceptual idea...51

Figure 20: Conceptual class diagram...52

Figure 21: Involved devices and their interaction during Amulets Biologi...54

Figure 22: Camera interface...55

Figure 23: Activity data associated with group Gamma...56

Figure 24: Content put in context...57

Figure 25: Class meta model (stage 1)...59

Figure 26: Class meta model (stage 2)...60

Figure 27: Class meta model (Complete)...62

Figure 28: Meta model (Completed)...64

Figure 29: Screenshot of using the RDF Schema in RDFAuthor...65

Figure 30: Content browser main interface...67

Figure 31: Content browsing interface...68

Figure 32: Single content interface...69

Figure 33: Single content interface - map view...70


Index of Tables

Table 1: Activity type definition...58

Table 2: Actor type definition...59

Table 3: Task type definition...59

Table 4: ActivityStatus type definition...59

Table 5: TaskStatus type definition...60

Table 6: Snapshot type definition...61

Table 7: Metadata type definition...61

Table 8: Content type definition...61

Table 9: Triples of the classes derived from Figure 27...62


1 Introduction

Our thesis aims to describe the development of a system which captures contextual data during learning activities, with the key functionality being the capability of representing contextual data as machine-readable semantics with the help of Knowledge Representation (KR) technologies. The first section in this chapter will give a short introduction to this subject area, leading to a section where we specify the purpose of our work. Throughout the following sections the purpose will be narrowed and formed towards a tangible problem. The chapter will end with a description of the thesis disposition. We will begin with an introduction relevant to the problem domain.

1.1 Background

The World Wide Web was in its early days very much a static experience: all sources of information were HTML documents, manually updated in regular text editors or in crude and rudimentary HTML editors such as the early versions of FrontPage, which produced large and completely unreadable “HTML-like” documents. Many people still insisted on making their own home pages with these crude tools, and given the state HTML was in during the 90's, making stylish pages was, as a look in web.archive.org's archive shows, no easy task. This shows us that a lot of people are not happy with being just passive content receivers but want to actively participate in the World Wide Web and the content being created there (Kooser, 2007). This era in the development of the World Wide Web can perhaps be referred to as “Web 1.0”: static HTML pages ruled the domain, and the term web designer meant someone who could open FrontPage and publish self-generated content. The need for more versatile web applications soon arose, however, as running, for example, a news site like cnn.com as static HTML pages soon spins out of control. Therefore efforts began to make dynamically built pages, first with small Perl and Python scripts, which required quite a lot of knowledge and server tinkering to get running, and later through the creation of PHP, which is perhaps the most used language for creating dynamic web pages today1. The creation of the dynamic programming languages2 led the web in a new and exciting direction: users could now participate in the life of a web page in a whole different way, in the form of guest books and dynamic web forms. Even the possibility to implement IRC (Internet Relay Chat) or other types of real-time chat rooms soon emerged on cutting-edge web sites.

1 http://en.wikipedia.org/wiki/PHP 05/25/2007

What marked the transition into “Web 2.0” is still being debated, and people have not defined exactly what “Web 2.0” is. Some even question whether the term is relevant at all but, much as with earlier buzzwords such as “Advanced Content Management Systems” and “Intelligent Agents”, most people still seem to agree upon a few points which mark the transition (O'Reilly, 2007). For starters, the view of Web 2.0 applications is generally more service-oriented and user-driven than that of Web 1.0. To exemplify the difference, Google and Wikipedia are Web 2.0 variants of Netscape and Britannica Online respectively (Lankshear & Knobel, 2006). While being similar in purpose they differ vastly in performance. Furthermore, the blog is a big part of “Web 2.0”. A blog is the modern version of having your own web site and can be used for anything from keeping a personal diary to presenting your new project's latest conquests. Another thing which marked the transition is Ajax (O'Reilly, 2007), a technology which allows for asynchronous information retrieval in web applications without reloading the page. Ajax was constructed to bridge the gap between applications on the web and applications run in your operating system, which made constructing, for example, the Google application suite (Google Maps, for instance) possible (Hirschorn, 2007).

Web 2.0 marks the current state of the web, and the logical progression would be to head for Web 3.0 – which is just where the World Wide Web, according to the current “buzz”, is heading right now. While Web 2.0 was only a logical evolution of people's willingness to participate in the web rather than passively receive information, Web 3.0 will be something more aimed at the back end of the Internet, namely the context of the content. The thoughts around this subject are hardly new, however, and talks of the semantic web3, which is a way to enable machines to understand the information on the web, have been around since the beginning of the World Wide Web. Even the HTML revision from 1993 to some extent supported a minor implementation of these ideas. This Semantic Web was the brain-child of Tim Berners-Lee, who in 1999 wrote (Berners-Lee & Fischetti, 1999, chap. 12):

“I have a dream for the Web in which computers become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.”

Tim Berners-Lee has been following a similar vision since the early days of the web. However, in 1999 this vision was far from being realized.

The web was, as is the case with some technologies, not ready for this step in the evolution, so the technology has continued to live on only in the minds of cutting-edge researchers and the labs of supporting corporations. As Figure 1 aims to illustrate, the value of semantics is evident once you allow the computer to add meaning to the data for you (McCN, 2001). The white pages in Figure 1 represent regular HTML pages, while the top layer represents the added context made possible with the use of these technologies.

The current state of the web does not make much use of these thoughts but, as McCN states, the World Wide Web was originally designed with this functionality in mind. However, implementing these features is no easy task and, as the development of the semantic web shows, there are many pitfalls to work through. There is, however, much to be gained from knowing the context of the content that you are viewing and, especially, from your computer knowing what information you are viewing. Although humans can easily pair together information from different sources, a computer has a very hard time with this as it only recognizes the actual text and not the context of the content.


If history has taught us anything, it is that there is no such thing as a magical solution to a complex problem such as this. Many, however, believe that adding computer-understandable context to the Internet is the next big step in the evolution towards Web 3.0. Efforts in this direction are currently being made by some of the big players in the area, such as the inventors of Kazaa4 and Skype5, whose new project Joost6 is entirely built upon RDF (Resource Description Framework), a general-purpose language for representing information on the Web. Another example is the Yahoo Food site7, which makes use of RDF and similar semantic web technologies (Lassila & Hendler, 2007), and there have been reports of many companies developing projects surrounding these technologies in the coming year.
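To give a concrete feel for RDF's model, the following sketch represents statements as plain subject–predicate–object triples in Python. The namespace, resource names and the small helper below are invented purely for illustration; a real application would use an RDF library and proper serialization.

```python
# A minimal sketch of RDF's subject-predicate-object model using plain
# Python tuples. All names below are hypothetical, chosen only for
# illustration; a real implementation would use an RDF library.
EX = "http://example.org/activity#"

def ex(name):
    """Build a URI in our hypothetical example namespace."""
    return EX + name

photo = "http://example.org/content/photo42"

# Each RDF statement is one (subject, predicate, object) triple.
triples = {
    (photo, ex("type"), ex("Content")),
    (photo, ex("createdDuring"), ex("TreeIdentificationTask")),
    (photo, ex("createdBy"), ex("GroupGamma")),
    (photo, ex("caption"), "Oak leaf close-up"),
}

# A machine can now answer "in which context was this content created?"
context = [o for (s, p, o) in triples
           if s == photo and p == ex("createdDuring")]
print(context)  # ['http://example.org/activity#TreeIdentificationTask']
```

The point of the triple model is that the relation itself (`createdDuring`) is machine-readable data, not just human-readable text.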

1.2 Motivation

The predicted step of the web towards semantics gave us an idea while conducting the work on our bachelor thesis. That thesis focused on what possible benefits an extended metadata collection – containing tags and values describing the reality that existed when content was created – could have in terms of content relevance accuracy for the end-user (Svensson & Pettersson, 2006). The reality – or context – in which content was created was fictional and narrated using scenario-based design techniques. The setting in which the scenarios took place was based on a ubiquitous learning activity where pupils at a local elementary school had to perform and solve various tasks to learn about trees and nature. While remaining on the conceptual level, the work more or less consisted of modeling the different concepts existing in the contextual domain, such as “group” and “point”, into semantic structures. The idea we had at that time is illustrated in Figure 2.


The output from our bachelor thesis was not only a proposed meta model of how contextual data could be structured; it also granted the authors knowledge within the domain of semantics. As the potential benefits of an extended understanding of user-created content have already been examined, we are now ready to pick up the torch where we left it and investigate the possibility of creating a system catching contexts and translating these into machine-readable semantics. By “catching contexts” we mean the process of mirroring real-life contexts into digital representations. The reason for translating the context into semantics is – based on the discussion in the previous section – that we believe the knowledge characteristic of semantics will bring interesting means of structuring and filtering user-created content. To summarize, the upcoming section will formalize the goal of this thesis in one sentence.

1.3 Purpose

Within the scope of this thesis we aim to investigate in what ways the context of ubiquitous learning activities can be caught, structured and expressed as machine-readable semantics. In order to dissect and isolate the work the above sentence generates we have compiled a list of supporting questions:

• What kinds of data need to be collected and structured in order to catch context?
• Which technique for expressing semantics would most effectively support the purpose of this thesis?

The above purpose tells us that we need a way to catch the context of ubiquitous learning activities. To manage this, we intend to create a computational mechanism with the purpose of controlling activities as they are performed, which simultaneously populates a database with data gathered from these activities. The contents of the database will then be the source for generating semantics describing the context of the monitored activities. To make this discussion clearer, Figure 3 attempts to illustrate the connection between our two theses. In the following section we limit the scope of our work.
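The mechanism described above – mirroring an activity into a database and associating created content with its context – can be sketched in a few lines. This is a stdlib-only illustration; the table and column names are our own invention, not the schema developed later in the thesis.

```python
# Hypothetical sketch of mirroring an activity's context into a database
# and associating newly created content with it. Table and column names
# are invented for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE activity (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE task     (id INTEGER PRIMARY KEY, activity_id INTEGER,
                           description TEXT);
    CREATE TABLE content  (id INTEGER PRIMARY KEY, task_id INTEGER,
                           actor TEXT, uri TEXT);
""")

# Mirror the ongoing activity and its current task into the database...
conn.execute("INSERT INTO activity VALUES (1, 'AMULETS Biology')")
conn.execute("INSERT INTO task VALUES (1, 1, 'Identify the oak tree')")

# ...and associate content with the task it was created in, as it arrives.
conn.execute("INSERT INTO content VALUES (1, 1, 'Group Gamma', 'photo42.jpg')")

# The database can now answer: in which context was photo42.jpg created?
row = conn.execute("""
    SELECT a.name, t.description FROM content c
    JOIN task t ON c.task_id = t.id
    JOIN activity a ON t.activity_id = a.id
    WHERE c.uri = 'photo42.jpg'
""").fetchone()
print(row)  # ('AMULETS Biology', 'Identify the oak tree')
```

Such a database then serves as the source from which semantic descriptions of the monitored activities can be generated.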

1.4 Limitations

In the scope of this master thesis it would be impossible to adhere to several definitions of the word “context”. Therefore we will look closer at some explanations of the word angled towards educational settings and select the one we find most appropriate and suitable for the purpose of our work. The same reason applies to semantics – we will present a selection of possible techniques for representing machine-readable semantics, weigh benefits and drawbacks and select one to use for our implementation.


1.5 Intended audience

The intended audience for this thesis is people with knowledge in the fields of informatics and systems science.

1.6 Expected results

We expect the outcome of this thesis to be a system coordinating activities and populating a database with contextual data, which can be used to generate semantic structures describing the context of the activities. This will be narrated throughout the thesis, as we aim to perform several live experiments to try out the computational mechanism in action, and summarized in its own chapter presenting our findings. It is our belief that the semantic structures we generate from the database will prove valuable for organizing content and improving content relevance accuracy for end users. By “improved content relevance accuracy” we refer to the ability of machines to semantically “understand” content and to incorporate this understanding to, for instance, suggest other resources of interest that are semantically related to the current resource.
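As a rough illustration of what “semantically related” could mean in practice, suppose each content item carries a set of contextual tags; resources sharing tags with the current one can then be suggested. The data and the `suggest` helper below are entirely hypothetical, not part of the system developed in the thesis.

```python
# Hypothetical sketch: suggest content that shares contextual tags with
# the resource currently being viewed. The data is invented; real tags
# would come from the generated semantic structures.
contexts = {
    "photo42.jpg": {"AMULETS Biology", "oak tree", "Group Gamma"},
    "photo43.jpg": {"AMULETS Biology", "oak tree", "Group Alpha"},
    "clip07.mp4":  {"AMULETS Stortorget", "Group Gamma"},
    "photo99.jpg": {"unrelated activity"},
}

def suggest(current, contexts):
    """Rank other resources by how many context tags they share
    with the current resource; unrelated items are excluded."""
    shared = {
        uri: len(contexts[current] & tags)
        for uri, tags in contexts.items()
        if uri != current and contexts[current] & tags
    }
    return sorted(shared, key=shared.get, reverse=True)

print(suggest("photo42.jpg", contexts))  # ['photo43.jpg', 'clip07.mp4']
```

The filtering here is crude set overlap; semantics expressed in RDF would additionally let a machine reason over the *kind* of relation shared, not just its presence.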

1.7 Disposition

The initial part of this thesis consists of a literature review where relevant topics and interesting initiatives within the scope of this thesis are presented in order to establish a theoretical foundation. The following chapter places our contribution in a bigger picture and compares our approach with similar efforts that have been conducted. With this clarified, the chapters explaining the characteristics of the system development follow, along with a discussion of the different definitions of context stated in the literature. This is necessary in order to select one suitable definition to use as input for system analysis and design. A similar argumentation for selecting an appropriate technique for generating semantics is also performed. With these steps motivated, we describe the actual development of the computational mechanism and database, which were developed and improved during two development cycles. The two leftmost blocks in the master thesis section of Figure 3 illustrate the two artifacts.


The final part of the thesis discusses the use and value of the generated semantic structures. The disposition of the thesis can be summarized in four distinct phases closely related to the blocks illustrating the work in the master thesis section of Figure 3. These are:

• Literature review.
• Investigation of what others have done and state what our contribution is.
• System development process and putting the computational mechanism populating the database with contextual data to the test.
• Generate semantic structures from the contents of the database and investigate their possible use.


2 Method

This section introduces the methodologies we have used to pursue the purpose of the thesis. Planning the work to be performed before rushing on to drawing models is crucial both in terms of thesis quality and validity. We begin with an overview of the iterative development model used to steer our efforts when developing the computational mechanism and the database structure, followed by an introduction to scenario-based design. The final section of this chapter motivates why these methodologies were selected to support our work.

2.1 Unified Process

Before giving a rough overview of the Unified Process it is crucial to first introduce the Unified Modeling Language (UML). UML is a general-purpose visual modeling language most often used for, but not limited to, modeling object-oriented software systems. The language is truly general in the sense that it is not tied to one certain methodology – it only provides a visual syntax for creating models. One methodology which visually relies on UML is the Unified Process (UP), introduced by the original authors of UML (Arlow & Neustadt, 2002). UP is an iterative and incremental system engineering process in which each cycle generates a partially completed system until the final system is finished. Each cycle is made up of five sequential core workflows: Requirements, Analysis, Design, Implementation and Test. As UP is meant to be tailored to suit the current development process, additional workflows may be inserted in the sequence. The amount of effort put into each workflow varies depending on the focus and the phase of the current cycle (Figure 4).


The first workflow – Requirements – includes the allocation of functional and non-functional requirements of the system as well as use case modeling. Use case modeling involves finding the system boundary, the actors, and the use cases these actors can initiate in the system. Analysis has the first and foremost aim of pinpointing what to actually do, creating a conceptual model of the system as a UML class diagram. Furthermore, the analysis phase may involve use case realizations by modeling activity diagrams or sequence diagrams.

Following the analysis phase is the Design phase. This workflow moves the focus from what, as this should have been settled, to how to actually do what is to be done. This includes the creation of design models based on the conceptual models created in the analysis phase. Design models are made up of design classes, interfaces, use case realizations and a deployment diagram, where most emphasis is put on modeling a design class diagram. The Implementation phase consists of the actual implementation of the system, which generates two types of diagrams: component diagrams and deployment diagrams. The final workflow, Test, verifies that the implementation works as desired. Additionally, each cycle in UP belongs to one of four phases labeled Inception, Elaboration, Construction and Transition. These phases include goals and milestones which steer the development and help keep it on track.

UP and UML will not be described in further detail as we expect the intended audience of this thesis to have some knowledge of system modeling. Instead we move on to the following section in which we introduce an interesting methodology for extracting requirements.

2.2 Scenario-based design


the course of the narrated story. Scenarios also include actions actors may perform which drive the narrative forward. These may cause actors to fulfill their objectives – or even change the main objective of the entire scenario. Scenarios can be represented in several different ways, such as written text, storyboards or videos (Carroll, 2000). Figure 5 visualizes five challenges associated with system design as well as the corresponding responses of scenario-based design. In order to display the value of scenario-based design we will now present these five general challenges for system designers, their solutions through scenario-based design, and why this methodology is considered a good choice for our upcoming work.

When developing a system with a group of professionals, the reflective side of a step in the design process is tricky to catch. All individuals take some pride in what they do, which may keep reflective thoughts out of a discussion for fear of being questioned competence-wise. The opinion of one individual may be trivial and irrelevant, but bringing it to the surface may, on the other hand, make the end result a lot better. Scenario-based design provides for reflection, as it describes an implementation of decisions already made. This is visualized in Figure 5 as the section Action versus Reflection.

The next challenge we will discuss is represented in the area labelled Design Problem Fluidity. Requirements always change, especially when using and/or developing cutting-edge technologies. The time-frame for a development project may be altered, developers may quit and new designers may join the development group. If these changes are not traced, the result may be a project which succeeds in conforming to the initial requirements – but not to the requirements adjusted “along the way”. Scenario-based design is by nature a flexible way of capturing requirements: if a scenario is changed, the list of requirements can be analyzed and altered according to the changes made in the scenario.

Design Moves Have Many Effects in Figure 5 claims that changes of any extent in the design process will most probably affect other elements involved in the design. As a scenario is in itself a story, its author may decide the level of complexity and detail in which the story is written. This allows for the creation of several views of the same scenario that, when adjusted to fit the changes accordingly, may shed light on consequences that otherwise might have been left unattended.

In some cases present scientific solutions lag behind design, either because they make the solution of the problem at hand more complicated than it needs to be, or because there is no technology that solves the problem at hand. This is represented as Scientific Knowledge Lags Design Application in Figure 5. By using scenarios the dilemma can be abstracted and categorized, which provides for a clearer overview when applying the combined knowledge of the development group.

The usability aspect is easily neglected when defining the requirements of a system. Font sizes, system feedback et cetera are as vital as any other requirements in order to create a good system. This fifth challenge – External Factors Constrain Design – is addressed because scenarios are design objects that describe the system by telling how users try to use parts of it. This provides for reflective thoughts concerning the front-end of the system.

2.3 Why UP and scenario-based design?


neglected. In our UP instantiation we chose to use scenario-based design as the main methodology for gathering requirements. While we realize that use-case realizations are important artifacts within UP for deriving requirements, we still believe our choice of replacing written use case realizations according to UP with more discussion-based “brainstorm” generated scenarios made sense for our system development. With the limited amount of time, together with the loose boundaries the development team faced when designing the location-based activities, we believed that the scenario-based design methodology would provide a rapid way to derive requirements to be used in the development cycles.

2.4 Reliability and validity

This section aims to discuss the reliability and validity of our work. Reliability concerns whether the research and findings are correctly executed (Thurén, 2003) and, in quantitative research, whether the outcome could be reproduced regardless of who is conducting the survey or experiment. We have had to think a little differently in terms of reliability, as the purpose of this thesis steers our work towards construction engineering research8 rather than conventional scientific research. Construction research involves the design and creation of one or several artifacts which, for instance, could be physical devices, computer programs or techniques. Artifacts are usually designed within a body of theory, and once created they are often subject to tests and measurements in order to see if aims and goals have been achieved. Our work of designing, implementing and testing the computational mechanism, and of transferring the gathered contextual data into semantics, can of course be reproduced – the artifacts consist of conceptual structures as well as concrete, working computer applications.

Validity aims at assuring that the research and efforts put in the thesis pursue and attempt to satisfy the purpose of the thesis. We have striven to keep the validity high throughout our work by reflecting on the main purpose of the thesis and the supporting questions as we go along. Along with the methodologies chosen to support our work, we believe that we have had a solid and clear path to follow. Additionally, in order to understand the value of our contribution it is important to survey and investigate current initiatives within the scope of this thesis. Therefore the upcoming section will introduce important theoretical foundations and relevant initiatives in-line with our work.


3 Theoretical foundations

This section aims to provide enough information for the reader to be able to discuss key concepts freely during the result sections of this thesis. Additionally, it aims to give an overview of current initiatives in line with our work. The first subsections introduce the reader to various techniques of knowledge representation, beginning with a general description of markup languages and ending with more specific instances of knowledge representation such as XML Topic Maps and RDF. These are important to understand as we will make an active choice among them for generating semantic structures from the collected contextual data. Following this is a section describing ubiquitous technologies, relevant for understanding how the contextual data was gathered during the activities and how participants interacted with the system. Last but not least, we have dedicated a section to current efforts similar in scope to this thesis. But before we begin to talk about markup languages we will define the key concept of this thesis – context.

3.1 Context


3.2 Knowledge representation

The following subsections summarize relevant techniques for representing semantics. The next section, 3.3, focuses on the same aspect but from the angle of markup languages. We will begin with semantic networks.

3.2.1 Semantic networks

Semantic networks are a standardized way to represent knowledge in the form of a directed graph (Sowa, 1992). They have quite a few similarities with UML class diagrams, so a skilled UML modeler should have no big problems working with the concept. The vertices of the graph, much like topics in a topic map, represent concepts, while the edges represent the semantic relations between the concepts. Semantic networks have many uses. Some even argue that a regular mind-map is a very loose form of semantic network, using colors and symbols to further trigger human imagination and creativity. A big difference between mind-maps and semantic networks, though, is that a mind-map is hierarchical, with a big node in the center and nodes branching out from it, while in a semantic network any node can be connected to any other node, thus creating a non-hierarchical structure.

Common to all semantic networks is that they have a declarative graphic representation that can be used either to represent knowledge or to support automated systems for reasoning about knowledge. Some versions are very informal, being only crude drawings and figures, while others are formally defined systems of logic. According to Sowa (1992) there are six common kinds of semantic networks:

● Definitional networks emphasize the subtype, or is-a, relation between a concept type and a newly defined subtype. The resulting network is called a generalization hierarchy and supports the rule of inheritance.

● Assertional networks are designed to assert propositions.

● Implicational networks, as the name suggests, use implication as their primary relation. They are mainly used for mapping patterns of beliefs.

● Executable networks include mechanisms such as marker passing or attached procedures.

● Learning networks build their models from knowledge acquired from examples.

● Hybrid networks, finally, combine two or more of the previous techniques into a single network, or into separate but closely connected networks.

Some of these were constructed to model human cognitive mechanisms, while others were designed for use in computer science. Another way to represent knowledge and the relations between different nodes is ontologies, which have a more philosophical background.
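As a toy illustration (our own, not taken from Sowa), a definitional network with the is-a relation and the rule of inheritance can be sketched in a few lines of Python:

```python
# A minimal sketch of a definitional semantic network: nodes are concepts,
# labelled edges are relations, and the "is-a" relation supports the rule
# of inheritance described above. The concepts here are invented examples.

# (concept, relation, concept) edges of a small directed graph
edges = [
    ("cat", "is-a", "mammal"),
    ("mammal", "is-a", "animal"),
    ("mammal", "has", "fur"),
    ("animal", "can", "move"),
]

def supertypes(concept):
    """Follow 'is-a' edges upwards, collecting all ancestors."""
    found = []
    frontier = [concept]
    while frontier:
        current = frontier.pop()
        for src, rel, dst in edges:
            if src == current and rel == "is-a":
                found.append(dst)
                frontier.append(dst)
    return found

def inherited_properties(concept):
    """Properties stated on the concept itself or inherited from a supertype."""
    scope = [concept] + supertypes(concept)
    return {(rel, dst) for src, rel, dst in edges
            if src in scope and rel != "is-a"}

print(supertypes("cat"))            # ['mammal', 'animal']
print(inherited_properties("cat"))  # {('has', 'fur'), ('can', 'move')}
```

The point of the sketch is that "cat" never states directly that it has fur or can move; both facts are derived by walking the is-a edges, which is exactly the inheritance rule of a definitional network.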

3.2.2 Ontologies

The term ontology originates in philosophy, where it is a branch of metaphysics that describes various types of existence, often focusing on the relations between different entities. In computer science, however, it has a slightly different meaning. Tom Gruber at Stanford University describes an ontology as “[...] a specification of a conceptualization”, and goes on to say that ontologies are often confused with epistemology, which is about knowledge and knowing (Gruber, 1993).


or relations. There are a few different languages for expressing ontologies, for example OWL, KIF and CycL. OWL, however, is targeted at the World Wide Web and is therefore perhaps the most interesting if you work in that field.

3.3 Markup languages for representing knowledge

The Generalized Markup Language (GML) is the mother of all markup languages. It was developed by IBM in the 1960s to be used with their text formatter SCRIPT, which was the heart of their Document Composition Facility (DCF). The whole idea of GML was to use macros, or tags as we know them today, to describe what type of data a document contained: for example, :h1. described a header, :p. a paragraph and :ol. the start of a list. GML was a big inspiration for SGML (see below), which came out decades later.

3.3.1 SGML

The Standard Generalized Markup Language (SGML) is a metalanguage used by people all around the world to define markup languages for their data and documents. SGML was originally designed to enable the sharing of machine-readable documents in large corporations and government projects, which required documents to remain readable for a very long time after their creation – and while time is relative, in the IT industry even 20 years is a very long time to support something (Goldfarb & Rubinsky, 1990). As SGML is a fairly complex standard, it lacked real-world implementations for a long time, until the Oxford English Dictionary used it for its second edition, which was – and still is – fully marked up in SGML. Because of the complexity of a full-blown SGML implementation, some smaller derivatives have evolved over the years. One of these is XML; another is HTML.

3.3.2 XML


conforms to SGML by design9.

The structure of an XML document is fairly straightforward, and it usually begins with an <?xml ... ?> declaration telling the parser which version of XML the document uses. After that, the creator is basically on his or her own. A document always has a single root node, which can be called anything, and inside this root node all other nodes and their children are placed in a hierarchical structure. This is where the core, and the real strength, of XML comes in: it can be used to describe anything the creator desires in a standardized way. The hard part is that the data needs a context to actually be worth anything. One way to describe context is topic maps; another is RDF, both of which are described in the following subsections.
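As a minimal illustration of this root-node hierarchy, the following sketch builds and re-parses a small XML document with Python's standard library; all element names are invented for the example:

```python
# Building and reading back a small XML document to illustrate the
# root-node / child-node hierarchy described above. Element names are
# invented for the example.
import xml.etree.ElementTree as ET

root = ET.Element("activity")                 # the single root node
task = ET.SubElement(root, "task", id="t1")   # a child with an attribute
ET.SubElement(task, "name").text = "Scan the marker"

document = ET.tostring(root, encoding="unicode")
print(document)

# Parsing the string back restores the same hierarchy.
parsed = ET.fromstring(document)
print(parsed.find("task").get("id"))          # t1
print(parsed.find("task/name").text)          # Scan the marker
```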

3.3.3 Topic maps

Topic maps started out, like many other Internet-related methodologies, in the beginning of the 1990s at what later became the Davenport Group, which around that time also produced one of the most widely used DTDs (Document Type Definitions) today, DocBook. At Davenport there was much discussion around the problem of merging indexes of different sets of documentation, and what became Topic Maps ten years later was built on much of this knowledge. This work with indexes led the Davenport people to come up with the TAO (Topics, Associations and Occurrences), on which Topic Maps are all based today (Pepper, 2000).

Topics, in their most general sense, can be anything: a car, a boat, a tree or a person. A topic might be a physical thing, a special characteristic or even a feeling. Topics may also be linked to a number of information resources that the creator feels are relevant to the topic; these resources are called occurrences. A topic map also has associations, which tie different topics together and describe the relationships between them, for example “The car was made in Italy” or “The apple was grown on a tree”. There are various methods of putting topic maps into physical form, and one of them uses XML.

3.3.3.1 XML Topic Maps


XTM. The whole design process of XTM focused on making it very simple and easy to use, without any unnecessary features or complicated syntax. The developers were very keen on getting XTM done and ready for the public as quickly as possible, which might be one of the reasons for the simplicity of the standard.

The syntax used in XTM documents is ordinary-looking XML, and anyone familiar with XML can quickly write their own XTM. For example, a topic is defined as <topic id="apple"> (Biezunski, 2003). To summarize, XML Topic Maps are an XML implementation of a way to describe reality. There are, however, other methods for doing this in XML-like formats; one of the most promising is the Resource Description Framework.
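To give a feel for the syntax, here is a hand-written fragment in the spirit of XTM, parsed with Python's standard library. It is only illustrative: it is not a complete, validated XTM 1.0 document, and the resource URL is invented.

```python
# An XTM-flavoured fragment showing the TAO triad: a topic, an occurrence
# (a linked information resource) and an association between two topics.
# This is illustrative only, not a validated XTM 1.0 document.
import xml.etree.ElementTree as ET

xtm = """
<topicMap>
  <topic id="apple">
    <occurrence>
      <resourceRef href="http://example.org/apples.html"/>
    </occurrence>
  </topic>
  <topic id="tree"/>
  <association>
    <member><topicRef href="#apple"/></member>
    <member><topicRef href="#tree"/></member>
  </association>
</topicMap>
"""

root = ET.fromstring(xtm)
topic_ids = [t.get("id") for t in root.findall("topic")]
print(topic_ids)  # ['apple', 'tree']
```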

3.3.4 RDF

The history of metadata at the W3C (World Wide Web Consortium) began in 1995 with PICS, the Platform for Internet Content Selection, a mechanism for communicating ratings of web pages. These ratings contained information about the content of a web page and were supposed to allow different organizations and governments to filter pages according to their beliefs and values. This came as a reaction to the situation at the time and to pending legislation concerning Internet Service Providers' obligation to filter out “inappropriate” material, such as foul language, pornography and so on. These laws, however, never materialized. After PICS was done, W3C began work on a new version called PICS-NG (Next Generation), but it soon became clear that the infrastructure could be applied in many different applications, and these applications were consolidated into RDF10.

The Resource Description Framework is a set of W3C specifications originally meant as a metadata model, but which turned out to be a good way of modeling all kinds of information in a generalized way, using different syntaxes. The RDF metadata model is based on a concept called triples, which means that you make statements about the resources at hand in the form subject-predicate-object. Put simply, the subject is the resource, the predicate describes the relation between the subject and the object, and the object is the thing being concerned11. For example, in the sentence “The cat has a tail”, the subject is “the cat”, the predicate is “has” and the object is in this case the “tail” of the cat.
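The triple idea can be sketched in a few lines of Python, using the toy statement from the text plus a small wildcard query in the style commonly offered by triple stores (our own illustration, not an RDF implementation):

```python
# Statements as plain (subject, predicate, object) tuples, with a small
# pattern-matching query where None acts as a wildcard. The data is just
# the toy example from the text.
triples = [
    ("the cat", "has", "tail"),
    ("the cat", "has", "whiskers"),
    ("the dog", "has", "tail"),
]

def match(subject=None, predicate=None, obj=None):
    """Return every triple matching the given pattern (None = any value)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(match(subject="the cat"))  # everything stated about the cat
print(match(obj="tail"))         # every subject that has a tail
```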


Another benefit of RDF is that it can be used to effectively exchange and reuse metadata, something that is not common today. The fact that RDF uses the very popular XML standard to structure its documents is another positive aspect, as many people nowadays are familiar with how XML works. RDF also supports the use of different namespaces, enabling the creator to use different RDF schemas within one document.

Figure 7 is an example of what a simple RDF graph can look like when presented as an image. It uses two namespaces: NS1, which is a custom namespace, and the RDF namespace. The figure describes that the subject “Cat” has a predicate “hasName” which points at the object “Fido”. This relation would be syntactically described as viewed in Figure 8 below.
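As an illustration of this kind of relation, the sketch below parses one possible RDF/XML serialization of it. The NS1 namespace URI is invented for the example, and this is not necessarily the exact syntax shown in Figure 8; only the rdf namespace URI is the standard one.

```python
# One possible RDF/XML serialization of the relation "Cat hasName Fido".
# The ns1 namespace URI and resource URL are invented for the example.
import xml.etree.ElementTree as ET

rdf_xml = """
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ns1="http://example.org/ns1#">
  <rdf:Description rdf:about="http://example.org/Cat">
    <ns1:hasName>Fido</ns1:hasName>
  </rdf:Description>
</rdf:RDF>
"""

# ElementTree expands prefixes into {namespace-uri}localname form.
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
NS1 = "{http://example.org/ns1#}"

root = ET.fromstring(rdf_xml)
description = root.find(RDF + "Description")
print(description.get(RDF + "about"))          # the subject
print(description.find(NS1 + "hasName").text)  # the object, "Fido"
```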

RDF is an emerging technology and there are still relatively few applications based on it, but there are indications that RDF and ubiquitous technologies combined might be a promising development in the near future.

3.4 Ubiquitous computing

Ubiquitous computing has become a highly relevant research area in human-computer interaction (HCI) during the past decade, driven by rapid technological development. Devices and applications within ubiquitous computing aim to gain understanding of the environment around the user and to become contextually aware of the user. With this awareness, devices and applications can alter their behavior depending on the current situation. The ultimate goal of ubiquitous computing is to reach a level of interaction with the environment where the computing disappears into the fabric of the environment (Greenfield, 2006). In this vision, elements in the environment (for instance shoes, trees or flowers) would be granted the ability to compute and process information. This section describes a few categories of technologies that help facilitate ubiquitous computing: positioning systems and visual markers.

3.4.1 Global Navigation Satellite System

Global Navigation Satellite System (GNSS) is an umbrella name for the various technologies that can tell you where in the world you are via a device of one kind or another. This is achieved by satellites orbiting the planet sending out time signals, allowing you to determine your latitude, longitude and altitude to within, in the best cases, a few meters (Getting, 1993). The only current implementation of such a system is the American GPS (Global Positioning System), but there are efforts in various countries to come up with matching, or hopefully better, systems. For example, China's Beidou system currently only works locally in China but is proposed to be expanded to cover the whole globe. Another is the European Union's Galileo project, which is aimed to launch in 2010 and has the backing of, amongst others, China and India.

A GNSS is a fundamental part of ubiquitous computing, because one of the most important parts of achieving ubiquitousness is knowing where the user is currently located and adapting programs to the environment the user is currently in. If a user is, for example, on a football field, chances are that the user is attending a football game and will require a very high ring volume on his cellphone to be able to hear it ringing, as the noise level at football games generally is very high. Or maybe the user is a football player and wants the cellphone to give an away message every time it is within the football arena. These are possible scenarios where positioning technologies help facilitate ubiquitous computing. Aside from GNSS, there are other methods of finding out the location of a user; perhaps not as effective, but if used right, visual markers can be a very good substitute.
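A sketch of such location-aware behavior is given below: the haversine formula yields the distance between the user's coordinates and a known venue, and a phone profile is chosen accordingly. The coordinates and the 200 m threshold are made up for the example.

```python
# Location-aware profile switching, as in the football scenario above.
# The arena coordinates and the 200 m radius are invented for the example.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude pairs."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

STADIUM = (56.8777, 14.8091)  # hypothetical arena position

def pick_profile(lat, lon):
    """Loud profile within ~200 m of the arena, normal otherwise."""
    distance = haversine_m(lat, lon, *STADIUM)
    return "loud" if distance < 200 else "normal"

print(pick_profile(56.8778, 14.8092))  # a few metres from the arena
print(pick_profile(56.9000, 14.8091))  # roughly 2.5 km away
```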

3.4.2 Visual markers


actually within the company and where it was headed. Traditional barcodes could only store numbers, but as time progressed they were developed further and can now hold a relatively large string in a matrix of dots or squares. This has led to new areas of use for the barcode as a visual marker.

In the 1990s, various efforts were made by barcode manufacturers to digitalize the scanning process using digital cameras instead of the laser usually used to scan the bars, and in many cases the result turned out better than lasers. This research spilled over into mobile phones once these were equipped with decent enough cameras, and there are now several techniques that allow you to create a matrix barcode that can, for example, open a URL (Uniform Resource Locator) when scanned with a special application on the phone. This process can be a bit tricky, and recent research shows that users find it unreliable and hard to use in its current form. There are, however, other ways than barcodes to achieve visual markers. One of these is RFID (Broll et al., 2007).
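A minimal sketch of how a scanned payload could be turned into such an action; the payload strings and URLs here are invented for the example:

```python
# Dispatching a decoded matrix-barcode payload: raw URLs are opened
# directly, known tag names are looked up in a table, anything else is
# flagged. Payloads and URLs are invented for the example.
def handle_scan(payload):
    """Map a scanned payload to an (action, argument) pair."""
    known_tags = {
        "station-3": "http://example.org/activities/station-3",
    }
    if payload.startswith("http://") or payload.startswith("https://"):
        return ("open-url", payload)
    if payload in known_tags:
        return ("open-url", known_tags[payload])
    return ("unknown", payload)

print(handle_scan("http://example.org/task/7"))
print(handle_scan("station-3"))
print(handle_scan("garbage"))
```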

RFID (Radio Frequency Identification) comes from a technology used to identify airplanes during the Second World War and basically works like an echo: when a radio wave is sent towards the device, the device sends its content back to the initiating source. There are a few different ways to achieve this but, without getting too technical, there are two categories of devices: active and passive. Passive is the less costly of the two and works by harvesting power from the radio wave it receives as its only power source, which makes it more suitable for ubiquitous implementations as no extra power source, such as a battery, is needed. The drawback is that passive RFID has a very short range, which makes it unsuitable for traffic applications and the like. It is, however, nowadays thin enough to be embedded in a piece of paper. The active version of RFID has an internal power supply, which makes it more reliable and gives it a longer range. An active RFID tag is, however, bigger and several times more expensive than its passive counterpart.


date. The Nokia implementation of RFID contains a technique called Near Field Communication, which aims to ease commerce over the mobile phone, but also to help set up WiFi networks when available, or simply to give general information about what the RFID tag is attached to (Broll et al., 2007).

These technologies can be used to track, for example, contestants in a game: as you know beforehand where the markers are placed, you can conclude where the contestants are based on which marker they activated last. This can, for example, be used to automatically attach location metadata to a marker-activated task or activity, and is far less complicated to make use of than satellite tracking technologies with coordinates. Both of these technologies are important parts of ongoing efforts in the research community, and in the next section we describe a few current efforts in the same area as this thesis.
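The idea of stamping a marker activation with location metadata could be sketched as follows; the marker ids, place names and coordinates are invented for the example:

```python
# Marker-based positioning: marker positions are known beforehand, so a
# marker activation can be stamped with location metadata directly.
# Marker ids, names and coordinates are invented for the example.
from datetime import datetime, timezone

MARKER_LOCATIONS = {
    "marker-01": {"name": "Old oak", "lat": 56.8801, "lon": 14.8102},
    "marker-02": {"name": "Boat house", "lat": 56.8810, "lon": 14.8155},
}

def record_activation(group, marker_id):
    """Attach location metadata to a marker-activated task event."""
    location = MARKER_LOCATIONS[marker_id]
    return {
        "group": group,
        "marker": marker_id,
        "location": location["name"],
        "lat": location["lat"],
        "lon": location["lon"],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

event = record_activation("group-a", "marker-02")
print(event["location"], event["lat"], event["lon"])
```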

3.5 Current efforts in the scope of this thesis

The following sections describe various initiatives in line with the purpose of this thesis. By describing them here, we aim to clarify where the efforts of this thesis fit in, why they are important and how they differ from others in the field, which we discuss further in chapter 4. The efforts are spread across academic institutions around the world, and we feel that they are representative of the work done in the field globally.

3.5.1 MOBIlearn


3.5.2 ENLACE

Interesting work related to our efforts is being conducted within the Spanish ENLACE project. The project explores the design and implementation of a technological infrastructure for a pervasive framework for activities both inside and outside the classroom. In one particular paper the system infrastructure is presented, which we can compare with our implementation ideas. The backbone of this infrastructure is a learning object repository (LOR), which enables them to handle the storage of a vast amount of learning objects reflecting result artifacts and by-products produced within the performed activities. Many other tools have been developed alongside the LOR, for example a voting tool.

Their system architecture is built around intelligent agents, where an agent is really just a piece of software with the supposed ability to learn and adapt. Celorrio and Verdejo found that distributed artificial intelligence is a very intuitive way to tackle development in pervasive environments (Celorrio & Verdejo, 2007).

3.5.3 Ambient Wood

The Ambient Wood project was conducted in Bristol, England, as a part of EQUATOR, by a research team at the University of Bristol. The aim of the project was to create a ubiquitous learning experience for children aged 11 – 12 and to explore the many ways that information can be presented in the physical world, including narrative audio, ambient audio, movies, text and pictures. The research focuses on the design, delivery and interaction of digital information when learning about ecology outdoors.


3.5.4 The Knowledge Management Research Group

At the Royal Institute of Technology (KTH) in Stockholm, Sweden, there is an ongoing research vision around the evolution of the semantic web, which they call the “conceptual web”. The aim is to produce semantic relations that are not only understood by a computer but are also presented in an appealing way to the end user. This is achieved using technologies and techniques such as UML, RDF and a conceptual browser, for instance their own Conzilla. Quite a lot of papers have been produced here over the years, but there is usually a common thread among them, and most point towards the semantic web and its visualization (Naeve et al., 2001).

3.5.4.1 Server side Solution for Conceptual Browsing

This master thesis in computer science is one of the latest to come out of the KMR group at KTH, and it describes a few concepts for building a form of proxy server that serves a mobile version of their Conzilla application, a concept browser for RDF amongst other technologies, with filtered data better suited for a mobile environment. The author realized that a lot of the information in Conzilla was unnecessary if you only wanted an overview of the context of something, and that it could therefore be filtered out.


4 The bigger picture

Throughout the previous sections we have described what we intend to produce within the boundaries of this thesis. But before entering the first cycle of development it is important to understand the bigger picture of our efforts. Our intention was to contribute to and realize an essential part of the CeLeKT activity model (Figure 9). The model itself demands an explanation before we can discuss our contribution.

4.1 CeLeKT activity model

The CeLeKT activity model intends to describe the components, and their interaction, which in the opinion of CeLeKT make up ubiquitous learning activities. It is important to see the connection between this model and the contextual model in section 3.1 (Figure 6), as there is a clear traceable relationship between them. Components 2 and 3 in Figure 9 can, for instance, be traced to the Person/Interperson block in Figure 6. The following sections explain the four components in Figure 9 in detail. For easier referencing we have added numbers to the different components in the model.

4.1.1 Learning Activity System

The Learning Activity System, referenced as number 1 in Figure 9, can be considered the main component in the model, making it possible to perform and control ubiquitous learning activities. Note that the component has a traceable relationship with the Learning Activity System block in Figure 6. The responsibilities of the component are further decomposed into three subcomponents: Presentation Engine, Activity Generator and Collaboration Tools. The Presentation Engine includes tools for visually representing performed activities with, for instance, digital maps or web interfaces. The Activity Generator is the core component of the Learning Activity System; it is responsible for designing, controlling, monitoring and visualizing activities and user progress within these. The leftmost abbreviated symbol, CCS, is a repository system developed by CeLeKT, described in section 4.1.1.1, intended for storage and web service functionality. ACS, or Activity Controller System, is the heart of the model and can be considered the engine of the system. The third and last subcomponent, Collaboration Tools, comprises the methods of communication and collaboration made available by the system to the activity participants. This could, for instance, be support for text messaging or collaboration through content exchange. The Learning Activity System and its subcomponents have interfaces for incoming and outgoing communication with all other components of the model.

4.1.1.1 CCS – Collect Convert and Send

The CCS started out as an in-house project at the research group CeLeKT at Växjö University in the project MUSIS (Multicasting Services and Information in Sweden, www.musis.se). One of the goals of the MUSIS project was to bridge the gap between


To achieve this you need to know things about the file the CCS receives: you need the context, you need metadata. The CCS looks at where the file came from, what file format it has and when it was received, and it even has a built-in list of possible conversions for the file type in question. If the file type is new to the system, it will try to generate a new set of metadata for it. The CCS thus acts as a repository for files of many different kinds and their related metadata. The ability to store content that passes through the CCS is crucial for reusing content, and using the CCS as a library of files and information about them proved to be a great asset in the MUSIS project and in other efforts that CeLeKT has undertaken (Gustavsson & Alserin, 2006).
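The following is not the actual CCS implementation, just a sketch of collecting the kinds of technical metadata mentioned above for an incoming file; the conversion table, file name and sender id are invented:

```python
# Sketch of CCS-style technical metadata for a received file: where it
# came from, what format it has, when it arrived and which conversions
# are known for that format. All names and values are invented.
import mimetypes
from datetime import datetime, timezone

# Hypothetical table of known conversions per media type.
KNOWN_CONVERSIONS = {
    "image/jpeg": ["image/png", "image/gif"],
    "audio/mpeg": ["audio/x-wav"],
}

def describe_upload(filename, sender):
    """Build a metadata record for an incoming file."""
    media_type, _ = mimetypes.guess_type(filename)
    return {
        "filename": filename,
        "sender": sender,                 # where the file came from
        "media_type": media_type,         # what format it has
        "received": datetime.now(timezone.utc).isoformat(),
        "conversions": KNOWN_CONVERSIONS.get(media_type, []),
    }

meta = describe_upload("station3.jpg", "phone-17")
print(meta["media_type"])   # image/jpeg
print(meta["conversions"])  # ['image/png', 'image/gif']
```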

4.1.2 Outdoor Activities

Outdoor Activities refers to participants performing activities out in the field. This component is characterized by the primary use of mobile ubiquitous devices, such as PDAs or smartphones, as the participants are located outdoors. This component, along with the Indoor Activities component, can be traced to the Person / Interperson block in Figure 6.

4.1.3 Indoor Activities

Component 3 in Figure 9, Indoor Activities, refers to participants located indoors during an activity. Devices that exist “naturally” in these environments are desktop computers, laptops and tablet computers.

4.1.4 Sensors & Actuators

Sensors & Actuators are special means of triggering events and collecting data for advancing and monitoring participants' activity progress. GPS is closely associated with the Outdoor Activities component, as GPS does not function indoors. Semacodes, mentioned in section 3.4, are optical barcodes which are scanned using a semacode reader application and can be used by participants both indoors and outdoors.

4.2 Where does our contribution fit in?


use for handling content storage. As our main purpose was to investigate the value of an extended metadata base, we realized that the top subcomponent of component 1, the Presentation Engine, would have to be incorporated in some way. Exactly what kind of visualization support we added is described throughout sections 6, 7 and 8.

Now that we understand the bigger picture of our work, it is time to proceed to the following section, which describes the first cycle of the development process. From now on we will refer to the computational mechanism and its database simply as the “ACS”.

4.3 In which ways does our approach differ from others?

This section points out what makes our work stand out from the other initiatives within the scope of this thesis. The MOBIlearn project, presented in section 3.5.1, has some similarities to our approach, one being that core contextual elements, such as task rules and constraints, have to be defined before actual trials can be performed (stated in section 1.6). Beyond this, the angle of our approach and that of the MOBIlearn trial differ quite a lot. Our view is more centered on the server side: recording the context that exists in activities in order to better understand and categorize generated content, while the MOBIlearn trial focused on the user experience when handling context-aware devices. Furthermore, we will not use a “Wizard of Oz” methodology for evaluating our system – we will collect contextual data during real-life activities and store it as the activity is being performed. In this way we can instantly associate content created during an activity with the current context stored in the system.


Then we have the Ambient Wood project, which did not really have anything like the ACS at all; it used more of an ad-hoc structure with predefined content triggers in the environment and a Wizard of Oz who sent out content depending on what the participants were doing and where in the forest they were. They did, as noted in the description of the project in chapter 3, have a very interesting way of tracking their participants using FM radio waves and receivers called “pingers”. In our activities we chose to do the positioning with visual markers instead: as we only had task-based activities, the FM system would not have made any sense since it takes much longer to implement, and neither did GPS, as the distance between the stations was never greater than a few hundred meters.

The Ambient Wood also featured some innovative narrative features. They did, however, not have any competitive element in their activity, nor any method for answering questions on the fly, and were more focused on raising awareness of and reflection about the forest. The ACS, in contrast, can run many groups simultaneously and receive questions on the fly from all of them. The ACS can also store the semantic relations between everything that is put into it, which enables us to do post-activities in a whole different sense (Rogers et al., 2004).

To further clarify how our contribution differs from the rest of the field, we here present a bullet list of points where our contribution differs from what has been done before in this area.

• Ambient Wood is more of a multimedia book than anything else; it contains a lot of interesting approaches to spreading media, but it has no game-like features and offers no way of semantically relating the information collected, if any.

• ENLACE is done on a completely different scale than the ACS, with a rather different approach. ENLACE is built with the help of so-called agents and is a general framework for pervasive learning, whilst our approach leans towards small and rapidly tailored solutions for every case, with a common engine called the ACS.

• The work at the KMR group is more focused on frameworks and general design

• MOBIlearn uses a similar approach but ended up in the complete opposite corner. They centered on the user experience while using context-aware devices and had no supporting technology behind it, thus using a “Wizard of Oz” methodology. We did the opposite and focused more on the server side and on capturing the semantic relations in activities, gathering our experiences from those.


5 System development

The purpose of this chapter is to give an overview of the system development of the ACS. As discussed in chapter 2, we used a tailored version of UP to support our work, along with scenario-based design for requirements gathering, during a total of two development cycles. A timeline visualizing the sequence of our work can be seen in Figure 10. The first cycle, detailed in chapter 6, aimed to get the system up and running in order to get some impressions of our idea and to see how well the context of ubiquitous learning activities was mirrored in the ACS database structure. Chapter 6 ends with a summary of our findings, which served as input requirements for the second development cycle. The aim of the second cycle, described in chapter 7, was to correct possible design flaws from the first cycle, add new functionality to the system and perform a more comprehensive and complex system test. During this test the ACS database was populated with the contextual data necessary for generating RDF data. The work of creating a meta model for the RDF structure, mining the ACS database and exemplifying the usage of a generated RDF document containing activity semantics is covered in chapter 8. The two development cycles were instantiated and tailored to fit our needs; the following section gives an overview of the characteristics of our development instance.

5.1 UP instantiation

Because of the relatively short cycles in the system development of the ACS, our instantiation of UP was tailored to fit our cause. The decision to leave out the concept of phases during the development process (discussed in section 2.1) was taken early on, primarily because the entire development process consisted of only two development cycles. The Requirements workflow in our UP instantiation puts less focus on use cases for requirements gathering and relies more on the scenario-based design methodology described in section 2.2. The Analysis workflow consisted of isolating the purpose of the ACS by modeling an analysis class diagram, while the Design workflow focused on how that purpose was going to be fulfilled, by means of design class diagrams. The Implementation workflow consisted of implementation details, while the Test workflow involved testing the ACS in a sharp location-based learning activity. All of these workflows are described for each of the two development cycles in chapters 6 and 7, together with a summary section compiling our findings. But first, in the next section, we will motivate some of the initial development decisions we made.

5.2 Motivation of development decisions

The upcoming sections motivate early design decisions made before entering the first cycle of the development of the ACS. First the choice of context definition is explained, followed by a motivation of which technique to use when structuring the collected contextual metadata and its values into a semantically related structure supporting the cause of this thesis.

5.2.1 Selecting an appropriate definition of context


Figure 6 visualizes the model of context as defined by CeLeKT, with clearly delimited blocks which we believe can be adapted to a database model. While not claiming to be a learning system, we believe that the computational mechanism we built fits somewhere in the center of the conceptual model, within the Learning Activity System. The database populated by the computational mechanism, on the other hand, aims to capture all three outer blocks – Task/Activity, Person/Interperson and Environment/Location – and the interactions between them.
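To make this mapping concrete, the following sketch shows how the three outer blocks of the context model could be adapted to a relational schema, with a linking table capturing the interactions between them. All table and column names here are our hypothetical illustrations, not the actual ACS schema:

```python
import sqlite3

# In-memory database; table and column names are hypothetical illustrations
# of how the context model's blocks could map to a relational schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE activity (          -- Task/Activity block
    id INTEGER PRIMARY KEY,
    name TEXT,
    description TEXT
);
CREATE TABLE person (            -- Person/Interperson block
    id INTEGER PRIMARY KEY,
    name TEXT
);
CREATE TABLE location (          -- Environment/Location block
    id INTEGER PRIMARY KEY,
    latitude REAL,
    longitude REAL,
    description TEXT
);
CREATE TABLE participation (     -- interactions between the blocks
    activity_id INTEGER REFERENCES activity(id),
    person_id INTEGER REFERENCES person(id),
    location_id INTEGER REFERENCES location(id),
    timestamp TEXT
);
""")
conn.execute("INSERT INTO activity VALUES (1, 'Stortorget quiz', 'Location-based task')")
conn.execute("INSERT INTO person VALUES (1, 'Student A')")
conn.execute("INSERT INTO location VALUES (1, 56.877, 14.809, 'Stortorget, Vaxjo')")
conn.execute("INSERT INTO participation VALUES (1, 1, 1, '2007-05-24T10:00:00')")

# Joining the linking table reconstructs one contextual observation:
# who did what, and where.
row = conn.execute("""
    SELECT a.name, p.name, l.description
    FROM participation pa
    JOIN activity a ON a.id = pa.activity_id
    JOIN person p  ON p.id = pa.person_id
    JOIN location l ON l.id = pa.location_id
""").fetchone()
print(row)
```

The point of the linking table is that each row ties together one element from each block, mirroring the interactions between the blocks in the conceptual model.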

5.2.2 Selecting an appropriate technique for generating semantically related metadata and values

This section aims to motivate our decision regarding which technique to use for generating semantics from the metadata and values collected during activities. As there are numerous approaches for this – which most likely could be a thesis subject of its own – we have limited our selection to the techniques discussed in section 3.3, namely XML Topic Maps, RDF and SGML/XML.

As we were already familiar with XML, we decided to spend most of the scheduled time investigating the areas of XML Topic Maps and RDF. The idea of formatting the metadata and values according to a standardized way of structuring data was something we initially regarded as perfect for our work. We started off by creating a few mock-up XML Topic Maps in order to try out tools for visualizing and querying them, using applications such as the Omnigator12 and TM4L Viewer13. While these applications did their job, we realized that XML Topic Maps would not contribute enough to the purpose of our work to make the effort worthwhile. A custom XML structure suited to our cause would make the structuring phase of the thesis less complex and grant us more time to investigate the value of the contextual metadata and their corresponding values, but generating a custom XML structure would limit its usage to applications tailored and programmed to understand that structure. This is the main reason why we in the end chose RDF for structuring the data. We considered RDF sophisticated enough to support our cause while not being too complex, and after browsing the web we found a few usable applications which could aid us in designing and validating the document structure. The exact structure of the RDF document was determined once the computational mechanism and its database had been designed, developed and matured during the two following development cycles.

12 http://www.ontopia.net/omnigator/models/index.jsp , 05/24/2007
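To illustrate the kind of RDF output we aimed for, the following sketch serializes a single activity description as RDF/XML using only standard-library XML tools. The vocabulary (the `act` namespace and its property names) is a hypothetical placeholder, not the final ACS structure, which is detailed in chapter 8:

```python
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
# Hypothetical vocabulary for illustration only -- not the actual ACS namespace.
ACT_NS = "http://example.org/amulets/activity#"

ET.register_namespace("rdf", RDF_NS)
ET.register_namespace("act", ACT_NS)

# One rdf:Description per activity; the subject URI identifies the activity.
rdf = ET.Element(f"{{{RDF_NS}}}RDF")
desc = ET.SubElement(
    rdf,
    f"{{{RDF_NS}}}Description",
    {f"{{{RDF_NS}}}about": "http://example.org/amulets/activity/1"},
)
ET.SubElement(desc, f"{{{ACT_NS}}}name").text = "Stortorget quiz"
ET.SubElement(desc, f"{{{ACT_NS}}}location").text = "Stortorget, Vaxjo"
ET.SubElement(desc, f"{{{ACT_NS}}}performedBy").text = "Student A"

xml_doc = ET.tostring(rdf, encoding="unicode")
print(xml_doc)
```

Because RDF is a standardized model, a document like this can be consumed by any RDF-aware tool, which is exactly the advantage over a custom XML structure discussed above.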

This concludes the motivation of initial development decisions. In the upcoming section we will describe the context in which the test workflow of the development processes took place.

5.3 Test workflow environment

While we were authoring this thesis, CeLeKT planned several location-based learning activities in which we had the possibility to test ACS. These activities were performed within the framework of Ung Kommunikation14 – a five-year development project supported by KK-stiftelsen and run by the University of Växjö, the University of Kalmar and Blekinge Institute of Technology. Ung Kommunikation aims to investigate how recent communication technologies used by the youth of today can support learning and enhance methods of teaching. The CeLeKT theme group within this project, in which the activities relevant for this thesis were performed, is labeled Advanced Mobile and Ubiquitous Learning Environments for Teachers and Students. Throughout the thesis we will use the acronym AMULETS, and details regarding individual activities within AMULETS will be further explained in the development cycle chapters. With the prerequisites settled, the upcoming chapter will introduce the first development cycle – AMULETS Stortorget.
