
FACULTY OF EDUCATION AND BUSINESS STUDIES

Department of Humanities

Digital EFL reading versus traditional EFL reading in upper secondary school.

A study of reading comprehension in digital and print text.

Max Sjölander

2021

Student thesis, Bachelor's degree, 30 HE, Education

Upper Secondary Teacher Education Programme, English 91-120

Supervisors: Jessika Nilsson & Henrik Kaatari
Examiner: Marko Modiano


Abstract

This study tested differences in reading comprehension between printed text and digital platforms. Two groups of upper secondary EFL students took eight tests aimed at assessing their reading comprehension. The students took four tests in which they read traditionally and four in which they read digitally. The results of those tests were then compared using the percentage of correct answers, mean scores, and a t-test. The results showed slightly higher, but statistically non-significant, scores in favor of the traditional test-takers. One test showed a statistically significant difference in favor of the traditional test-takers. The results were then discussed in the light of relevant previous and related research. The difference in performance is, arguably, due to test mechanics.

Keywords: Comprehension, reading, digital, traditional, EFL


Table of Contents

1. Introduction
1.1. Aim and research question
2. Background
2.1. What is traditional and digital reading?
2.2. The differences in comprehension
2.3. Previous research
3. Method
3.1. Participants
3.2. Materials and data collection procedure
3.3. The Tests
3.4. Reliability and Validity
4. Results
5. Discussion
5.1. Earlier studies compared to this study
5.2. Why is there a difference?
5.3. The future
6. Conclusion
References


1. Introduction

This study originated in an interest in investigating whether the digitalization of the classroom affects student learning. Digital aids are usually introduced as tools to simplify and ease the workload for both teachers and students, and they have their applications in education. However, some argue that digital tools might be more of a disturbance than an aid for students. The average upper secondary school student in Sweden has not read many books in their lifetime but has spent a vast amount of time in front of a screen, meaning that they have spent much time reading digitally. Society in general is slowly turning away from traditional ways of consuming text and other media and turning digital. Therefore, the school system should be open to evolving with the rest of society, but not without caution. If the future of the classroom is a digital one, then students' performance in digital landscapes should be researched and compared to their performance in a traditional pen-and-paper landscape. Reading is a complex function in a second or foreign language, and students' comprehension of what they read should be researched and considered before entirely switching to digital reading education and testing. If proper research concerning the switch of platforms and how to examine digital reading comprehension is not done, our students and their results might be negatively affected, leading to a poor EFL climate in Swedish upper secondary schools. The research in this study is concerned with how well students perform on reading comprehension tests across traditional and digital platforms.

1.1. Aim and research question

This study aims to investigate EFL reading digitally and traditionally to determine which form of text leads to higher levels of comprehension. It investigates whether students perform equally well on reading comprehension tests when reading digitally and when reading traditionally. Hopefully, this study serves as an aid to future teachers and as an evaluation of the current digitalization process in Swedish upper secondary schools. The research question of this study is: Do upper secondary school Swedish learners of English score higher on comprehension tests when reading digitally or when reading traditionally?

2. Background

The reason for carrying out the study is that Skolverket has in recent years tried to steer education in Swedish schools in a more digital direction. The basis for that change in direction is mainly attributed to a study conducted by Utbildningsutskottet (2016). The study highlights the positives and the negatives of digital aids in an educational environment. It reports higher motivation and engagement in students and that students have performed better overall using digital aids (Utbildningsutskottet, 2016, p. 24). Reading, and especially reading comprehension, is, however, a very complex function. Students who study ESL or EFL in school are sensitive to this complexity, and because they are not yet familiar with the language, digitalization might complicate it even further.

2.1. What is traditional and digital reading?

Firstly, the definitions of traditional and digital reading need clarifying. Reading a printed text on a piece of paper is in some cases called traditional reading, reading in print, or analog reading. However, the term "traditional reading" is commonly used in the related field of research and is more versatile than print reading or analog reading. Therefore, this study uses the term traditional reading when referring to the instances in the study where the students read printed text on paper without any technical aids.

When reading traditionally, the human brain is more likely to use a strategy of deep reading (Ferguson, 2019, p. 64), since traditional reading mostly consists of reading newspapers, books, and reports. In deep reading, readers register all of what they read and are open to information. When reading traditionally, a person is also more likely to focus on what they are reading, since they do not have the opportunity to multi-task as they do when reading digitally on a computer (Baron, 2017, p. 15). A much higher percentage of people multi-task when reading digitally, with some studies showing numbers as high as 85% as opposed to 26% when reading traditionally (Baron, 2017, p. 16).

However, digital reading is slightly more complex and has more layers to it than traditional reading. A simple way of defining digital texts in contrast to traditional ones is through digitizing, which, according to Hlynka (2010, p. 62), means "to copy print material (or other) into a digital form". Reading digitally does, however, hold more complex layers than simply a text that has been digitized. Reading digitally is, in many cases, based on skim-reading (Ferguson, 2019, p. 64). Digital reading is usually connected with everyday tasks on a personal computer or a smartphone, where the reader does not need to deep read the whole text but instead looks for key pieces of information in it. Some reports indicate that reading comprehension tends to decline when the reader must scroll through the text (Baron, 2017, p. 16).

2.2. The differences in comprehension

Reading comprehension can, according to Garrett-Rucks et al. (2015, p. 27), be defined as "constructing a mental representation from the content of the text for the purpose of understanding the message". It can also, according to Pardede (2019, p. 79), be defined as "an interactive process involving features of the reader, the texts, and tasks". Regardless of which interpretation is used, most researchers agree that comprehending a text is a matter of using one's pre-understanding of the subject and one's vocabulary to puzzle together the pieces of the text in order to understand it. The goal might be to enjoy the text, as when reading a story, or it might be to answer a particular set of questions or solve a task. The task of mentally constructing an image of the happenings in the text and solving the task is even more challenging in a foreign language due to a reader's limited vocabulary (Garrett-Rucks et al., 2015, p. 28). However, vocabulary and pre-understanding do not differ between the platforms of reading and can therefore not explain any difference between them.

Comprehension is therefore not as dependent on the platform of reading as it might seem. The main difference lies in the way the reader approaches the text in order to comprehend it. The reading comprehension of traditional reading is deeply connected to deep reading, where a person carefully reads a text and tries to understand it that way. Digital reading is, on the other hand, more connected to skim-based reading, meaning quick actions like texts, messages, or searching for information (Baron, 2017, p. 18). However, these definitions are generalizations of highly complicated neurological functions. Reading comprehension, and reading in general, are constantly evolving and adapting to the readers' perceived reality. The reading strategies students use today are very different from the strategies used ten years ago. The mediums of reading are constantly evolving; therefore, reading is constantly changing.

Ferguson (2019) argues that because of the information society we live in today, where we are constantly faced with mass amounts of information, the average way a person reads has changed and adapted. Ferguson (2019, p. 64) argues that a person who adapts to skim-reading, without attention to deep reading, "may lose the ability to follow a complex written argument, engage in deep and thoughtful analysis, and reflect on what they have read". This adaptation is most significant in the behavior of young people, since they have grown up in the computer age, and Ferguson's quote clearly shows the difference in reading comprehension between the two platforms. Students who have adapted to skim-based reading are used to looking for specific pieces of information and to reading under time pressure. Computer games, digital messages, or googling for basic information do not require the ability to follow complex written arguments or engage in deep and thoughtful analysis.

Rowsell et al. (2009) write about reading paths in texts, which also builds on Ferguson's theory. The reading path of a traditional text is mainly a set path with a linear trajectory, whereas the path of a digital text is usually waiting to be constructed by the reader (Rowsell et al., 2009, p. 107). This means that when reading online, or in any other digital setting, readers are not used to deep reading a text or even reading it from beginning to end. Readers of online texts are used to quickly skimming through a text to look for the single piece of information they need. A digital reading path is often built by navigating through a game, a series of websites, or messages, which significantly differs from the traditional way of reading a text. Rowsell et al. (2009, p. 115) argue that a different skill set is required for reading digital texts, but usually, the text also "carries a different set of assumptions and epistemological framings based on how a text is designed and produced".

This way of quickly reading and skimming through text is commonly referred to as hyper reading and is described by Hayles (2012, p. 12) as a way "to conserve attention by quickly identifying relevant information so that only relatively few portions of a given text are actually read". Digital reading might need to be viewed as a new literacy and might not be comparable with the "old" literacy, i.e., reading and comprehending in print. That would not be far-fetched, seeing that an effective digital reader requires an entirely different skill set than a traditional reader (Leu et al., 2011, p. 6).

In summary, there are differences between digital and traditional reading comprehension. Many of the differences are based on expectations and preset assumptions in the readers' minds. A reader who has adapted to skim-based hyper reading and the "to-be-constructed" reading path of the digital world might have a hard time applying those habits to print texts, or completing tasks that require deep, focused reading regardless of platform. However, the tests the students take in this study are not online-based, meaning that they do not have interfaces that require the students to navigate websites or search for information online. The reading behavior while reading on a computer will nevertheless likely be affected by the digital reading assumptions stated above, and the mechanics of reading on a computer will probably also affect the students. The hypothesis based on this information is that the traditional test-takers should perform better than the digital test-takers.

2.3. Previous research

The previous research on this subject is not extensive. Many studies relate to this study in the sense that they examine reading comprehension, reading digitally, reading patterns while reading digitally, or the effect digital reading has on the reader's brain. However, not many studies examine and compare how different reading platforms affect students' test performances. That is, arguably, a reason why this study is important. For that reason, however, some closely related, if not directly comparable, studies need to be included in the previous research section to make sense of the subject.

In a study by Usó-Juan et al. (2009), the purpose was to examine whether reading on a digital platform affects learners' reading comprehension and to analyze whether the reading strategies learners use when reading digitally differ from those used when reading in print (Usó-Juan et al., 2009, p. 63). In the study, 50 female tourism students were chosen based on their gender to lessen the effects of extraneous variables. They were divided into two groups based on which year they were in. The second-year students made up the group who read traditionally, and the first-year students made up the group who read digitally. The students took the same test, an EAP reading test containing a reading passage from the area of tourism, which took place over four hours with a 15-minute break. The students completed two tasks during the test, one with true/false questions regarding the text and one with open-ended questions. Students were also given questionnaires and a class about reading strategies to ensure that the lack of a reading strategy would not affect the results. The results showed that the differences in performance between the two groups were statistically non-significant, but the digital test-takers scored slightly higher. The questionnaire showed a slight tendency for the students to prefer reading digitally rather than traditionally: 36 percent of the digital test-takers would have preferred to read the text in print, and 64 percent would not. In conclusion, the results showed that digital reading does not affect learners' reading comprehension (Usó-Juan et al., 2009, p. 75).

In a study by Mangen et al. (2009), the objective was to explore the effects of the technological interface on reading comprehension in a Norwegian school context. The participants in the study were tenth graders from two separate primary schools, randomized into two groups. The students took a pretest consisting of one narrative and one expository text followed by multiple-choice questions and short-answer constructed-response questions. Students also took pretests in word reading ability and vocabulary using a word-chain test and a traditional single-word-item semantic vocabulary pretest. Four weeks later, the students took the main test in their groups, where group one read digitally and group two read traditionally. However, both groups answered the questions in the test digitally. This test included a narrative and an expository text designed at the National Centre for Reading Education and Research, and the students had one hour to complete it. The tests showed that the difference in results between the two groups was slight over the entire testing spectrum. However, on the reading comprehension test, the students who read traditionally scored higher. The authors pose navigation as one possible explanation: navigating a digital text includes scrolling, which could affect the spatial mental representation of the text. Another possible explanation for the results might be differences on a metacognitive level. Mangen et al. (2009) conclude that those who read a linear narrative or expository text digitally show poorer comprehension than those who read the same text on paper (Mangen et al., 2009, p. 68).

A study by Kretzschmar et al. (2013) aimed to examine whether digital reading requires more effort and causes more strain than reading in print. In doing so, they questioned the subjectively negative reputation digital reading had (Kretzschmar et al., 2013, p. 2). In the study, the researchers tested whether reading digitally required higher cognitive effort than reading in print and whether either of the two platforms affected reading comprehension. The study participants were 36 younger adults, mostly university students, and 21 older adults, primarily senior citizens. All participants were unaware of the purpose of the study. The participants were tasked with reading nine texts of different purposes (three scientific, three non-fiction, three fiction) with lengths of between 167 and 266 words. They read three texts on each of three platforms, i.e., one of each type on every platform. The platforms were a tablet computer, an e-reader, and printed paper. While reading, scalp electrodes were used to record the participants' EEG (electroencephalogram, which measures neurological activity), and a tower-mounted eye tracker recorded the participants' eye movements. After recording a participant's resting EEG, the test started. When the participants had read all three texts on a device, they were presented with two comprehension questions per text before switching devices. The comprehension questions used in Kretzschmar et al.'s (2013) study "probed contents from different pages of a text (literal, sentence-level messages in all cases) and required either a yes or a no answer" (p. 4). After the test, the participants completed a short questionnaire. The total test time was between 2.5 and 3 hours. The data from the test were analyzed through the EEG voltage density in the brain's theta frequency band (waves and movements in the brain which indicate certain activities such as cognition) and through the eye tracker's measurements of fixation time. The results showed no apparent difference in error rate between the reading mediums when it comes to the comprehension questions.

The summary of Kretzschmar et al.'s (2013) findings regarding the theta frequencies and eye fixation was that if the eye focused on a certain point in the text for a longer time, the theta voltage density increased, i.e., the activity in the brain increased to a measurable point. The increase in voltage indicates higher strain on the reader. The increase was observable across both participant groups but slightly more pronounced in young adults, and it was detectable in the EEG of readers on the different platforms. Therefore, the findings in the study show no evidence suggesting that digital reading requires more effort than traditional reading (Kretzschmar et al., 2013, pp. 7-10).

In Hou et al. (2016), the authors used a paper book, a digital equivalent, and a digitally disrupted view to examine whether any of these would show differences in a reader's comprehension of the text. They examined this through the scope of two reading mechanisms: the Cognitive Map Mechanism and the Medium Materiality Mechanism. The cognitive map mechanism argues that the human brain tends to form what it thinks into mental objects in order to understand it. Hou et al. (2016) argue that the human brain has, over time, developed modules that are "designed to guide specific adaptive behaviors, such as hunting for food, escaping danger, and selecting a mate" (p. 85). However, since reading is such a new "function" in the human brain, it does not have a module of its own. Instead, it relies on existing modules such as "vision, speech, motor coordination, and visual object recognition" (p. 85), which means that the function of reading mainly comes down to object recognition and the translation of a text into the tangible physical world. Throughout the text, the reader creates a world, or a mental map, by turning the objects into mental pictures and building their surroundings. When asked questions about what they have read, it is therefore much more manageable for traditional readers to find their way back in the text, since the text has a fixed layout. The study states that it is generally agreed that traditional reading is preferable for the creation of a cognitive map. A linear and set text makes it easier for learners to use placeholders in texts, and, by that, they are not interrupted in the creation of a coherent cognitive map (Hou et al., 2016, p. 85).

According to the medium materiality mechanism, digital reading poses considerably more strain on the reader's mind than traditional reading does. While reading digitally, readers are more easily fatigued because of eye strain, among other factors. However, in recent studies, the differences in fatigue are tiny due to the technical development of screens. Therefore, the role this theory plays in their study is one of reading haptics (a way of recognizing or relating to an object through physical touch). Many readers report that they like reading traditionally because they like the feeling or sound of turning pages or the smell of an old book. A book can be felt and related to on a different level than a laptop or a tablet. The text in the book is forever printed there, whereas the text on the tablet is only temporary (Hou et al., 2016, p. 86). Therefore, according to Hou et al. (2016), "where paper text is tangible and touchable, text on a screen is intangible and detached or mediated" (p. 86). Since "people learn best when information is presented across multiple sensory modalities" (p. 86), readers should benefit from reading traditionally. Creating a mental map and haptic interaction have been argued to strongly affect reading comprehension (Hou et al., 2016, p. 87).

The experiment in this study included a total of 45 undergraduate students separated into three groups. The task was to read a comic book. The groups were randomly assigned one of the reading mediums: a paper book, a digital equivalent, or a digitally disrupted view. In the disrupted view, the full page of the text was not visible at once; instead, the individual frames of the comic were visible one by one. All the participants were asked to read two comics, and they were aware that they were to answer questions about them afterward, but not that they were being timed. Once they had finished the task, the participants were given two questionnaires assessing comprehension, fatigue, and immersion. The participants who read the paper book or the digital equivalent performed significantly better on the comprehension questions and reported less fatigue and higher immersion than the disrupted-view readers.

The results of the study showed non-significant differences between reading digitally and in print when the digital document was adapted to the platform, i.e., looked the same as it would have in its physical form. Hou et al. (2016) concluded that reading on a screen can render the same results, given that the representation of the electronic document is similar to that of a printed book or other physical document (Hou et al., 2016, p. 92).

In a study by Singer et al. (2016), 90 undergraduate students read four texts on topics regarding childhood ailments. All texts were of approximately the same length (450 words) and of the same readability level (8.5). All the participants read two texts traditionally and two texts digitally. They all read the texts in a random order to negate any possible order effects. After reading the texts, they were assessed on their topic knowledge and comprehension, took a survey on medium preference, and answered a questionnaire on medium usage. The study results showed that most students thought they comprehended the most using digital platforms, but the print readers outscored the digital readers on all of the measured comprehension factors (main idea, key points, other information). In the discussion, the authors argue that both digital and print reading have benefits; the answer to whether it is best for students to read traditionally or digitally is, therefore, unclear. Singer et al. (2016) argue that

"Evidently, there is still much to be learned about the nature of reading and comprehending when the medium is digital or print, not solely in terms of the cognitive processing that transpires, but also with regard to any motivational, sociocultural, or visual-motor factors that are implicated. Yet, in light of the pervasiveness of multiple mediums in the lives of mature readers, these complexities must be more richly examined and better articulated if the goal of enhancing student learning and academic development is to be fostered" (p. 13).

3. Method

This study is based on eight different reading comprehension tests. The research in this study is quantitative, which Sukamolson (2007, p. 2), quoting Creswell, defines as a type of research that involves "explaining phenomena by collecting numerical data that are analyzed using mathematically based methods (in particular statistics)". The method in this study is based on the similar studies by Singer et al. (2016), Usó-Juan et al. (2009), and Mangen et al. (2009) introduced in Section 2.3. These studies use a method similar to the one adopted in this essay, in that they divide students into two groups and test them by having one group read digitally and one group read traditionally, and in that they use data collected from standardized tests to compare the two groups and draw conclusions from those results.

3.1. Participants

In this study, two groups of students are compared. The students chosen are from the same upper secondary school program; they are of the same age group, attend the same school, and have had the same teacher in English throughout the entirety of upper secondary school. The students are my own students from English classes I teach. The groups consist of the two classes, i.e., this study does not separate the students from their classes into new groups; the classes are the groups. The first group contains 21 students (20 males and one female) and the second group contains 24 students (21 males and three females). The benefit of keeping them in their regular classes is that they will hopefully not feel as if they are being studied and researched but will instead experience the situation as normal. Therefore, they should perform as close to normal as possible. Due to ethical considerations, the students are anonymized in this study, and their names are replaced with codes.

Usó-Juan et al. (2009) also had two groups. However, they chose to examine only female students to account for extraneous variables in the study. The female students were separated into two groups: group one included only second-year students, who read the text in print, and group two included only first-year students, who read digital texts (Usó-Juan et al., 2009, pp. 63-64). Regarding group separation, this study has two separate classes, and they represent groups one and two. The reason for this is to create an accurate representation of a general upper secondary class. Furthermore, this study does not aim to draw any larger conclusions based on gender or socioeconomic background and does not narrow the groups down to a homogenous, specific one.

In Usó-Juan et al. (2009), the groups might have reached different results based on the division into groups of students from different years who always read the same type of text. In this study, a decision was therefore made to carefully control which program, which teacher, and what year of school the students were in. This means that all students included in the study have the same teacher, are enrolled in the same program, and are in the same year of school. It was also decided to alternate back and forth between the groups to eliminate the factor of one group being stronger than the other and that affecting the results.

3.2. Materials and data collection procedure

The instruments for collecting data in this study are reading comprehension tests. The tests are chosen from trustworthy and established sources, namely Skolverket and an EFL textbook adapted to the level of English the students are currently studying. Skolverket's national tests and the textbook tests were chosen for reasons of reliability and validity, since they are constructed by experts and are standardized testing methods throughout Swedish schools.

Firstly, a pilot study is conducted on ten students, randomly picked from the two main study groups, to ensure that the questions in the tests are not misinterpreted and that the answers to the questions can answer the research question of this study. The pilot study also aims to test the test mechanics and situational factors such as the setting and the procedure of handing out, instructing, and collecting the tests. The test in the pilot study is one of Skolverket's national tests. The participants take it simultaneously in the same room, half of them digitally and half of them traditionally. The pilot study indicates no issues with using Skolverket's national tests to answer the research question in this study.

Secondly, the main study consists of two larger upper secondary school groups. Group one contains 21 students in total, and group two contains 24 students in total. The tests take place on eight separate, hour-long occasions, and the students take one test per occasion. Each test occasion starts with a short introduction, including handing out the tests and giving instructions. The students take the tests separately, either on their computers or on paper. They can hand in their tests once 15 minutes have passed and are not allowed to hand them in after the full hour has passed. In the main study, the groups alternate between taking tests digitally and traditionally. Group one takes tests one, three, five, and seven digitally, and group two takes tests two, four, six, and eight digitally. The groups alternate the traditional tests the same way, i.e., the two groups never use the same platform on a given test. The tests include a text and several questions assessing comprehension. The students have access to both the text and the questions during the entire test, which means that they can go back in the text and look for the answers while reading the questions. In tests one to six, one point per question is awarded, since they only include close-ended questions. In tests seven and eight, two points are available per question, since the questions are open-ended. One point is awarded for showing a brief grasp of the question, and two points are awarded for fully answering the question correctly. To secure validity and reliability in assessing these questions, two additional teachers also assess the results of these tests, without knowing what scores the other assessors give. There were only two separate questions on which one assessor disagreed. The initial plan for handling disagreements was to calculate the mean score from all the assessments, but since there were only two instances of disagreement and no further discrepancies, the majority decided the score in those instances, as a mean calculation would have overcomplicated the process.
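To make this scoring protocol concrete, the following minimal sketch resolves three assessors' independent scores for a single open-ended answer by majority vote, falling back to the mean that was originally planned. The function and its name are illustrative assumptions, not tooling actually used in the study.

```python
from collections import Counter

def resolve_score(scores: list[int]) -> float:
    """Resolve three assessors' independent 0-2 point scores for one
    open-ended answer. A majority decides; a three-way split (which did
    not occur in this study) falls back to the mean of the assessments,
    the originally planned approach."""
    value, count = Counter(scores).most_common(1)[0]
    return float(value) if count >= 2 else sum(scores) / len(scores)

print(resolve_score([2, 2, 1]))  # two assessors agree -> 2.0
print(resolve_score([0, 1, 2]))  # three-way split -> 1.0 (mean)
```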

In previous similar studies, most of the testing has only been done once or twice for each group. In Mangen et al. (2009), the groups included in the study did a pretest four weeks before the main test, which they took only once; the first group read the text in the main test digitally, and the second group read the text traditionally (Mangen et al., 2009, pp. 63-64). In Usó-Juan et al. (2009, pp. 64-65), the groups did the EAP reading test only once. This study aims to build on previous studies by examining the same groups several times but for a shorter time per test. The benefit of examining the same thing several times is the diminishing risk of underperformance by students due to external factors such as hunger, fatigue, or stress. If tests are only conducted once, it is possible that such external factors primarily affect one group but not the other, causing unreliable results. If the same thing is tested several times, more reliable results will be achieved, and external factors are more likely to be similar across the two groups.

This study also aims to build on previous studies by alternating which group reads digitally and which reads traditionally. Regardless of preparation and careful selection of equally skilled students for the groups, there is still a risk that one group is stronger than the other. However, if both groups read both traditionally and digitally, they can be compared both to each other and to themselves. Alternating diminishes the otherwise misleading effect that groups of uneven skill levels would have on the results. Therefore, doing the tests only once or twice without alternating is less reliable than doing the tests eight times and alternating between the groups. This study analyzes the results of the tests by calculating each group's percentage of correct answers on each test, the mean score on each test, and the statistical significance of each test based on the p- and t-values calculated using a t-test, as sketched below.
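As a concrete illustration of this analysis, the sketch below computes all three measures for a single test from two lists of raw scores. It is a minimal sketch under stated assumptions: the example scores are hypothetical, and SciPy's independent two-sample (Student's) t-test stands in for whatever statistics tool was actually used.

```python
from statistics import mean
from scipy import stats

def analyze_test(traditional, digital, max_points, alpha=0.05):
    """Compare one test across the two groups using the three measures
    reported in this study: percentage of correct answers, mean scores,
    and a t-test for statistical significance."""
    pct_t = 100 * sum(traditional) / (len(traditional) * max_points)
    pct_d = 100 * sum(digital) / (len(digital) * max_points)
    t, p = stats.ttest_ind(traditional, digital)  # independent two-sample t-test
    return {
        "percent_correct": (round(pct_t, 1), round(pct_d, 1)),
        "mean_scores": (round(mean(traditional), 2), round(mean(digital), 2)),
        "t": round(t, 2),
        "p": round(p, 2),
        "significant": p < alpha,  # the cut-off used in this study
    }

# Hypothetical scores on an 8-point test (not the study's actual data):
print(analyze_test([7, 6, 8, 5, 7, 6, 8], [5, 4, 6, 3, 5, 4, 7], max_points=8))
```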

3.3. The Tests

The tests are taken either traditionally or digitally. In the case of this study, traditionally means that the students receive the test printed on paper, together with a blank piece of paper and a pencil to complete the test. Digitally means that the students take the test on their laptops, on which they have access to the test and answer in a Microsoft Word document, which they turn in through an LMS platform. The first six tests are excerpts from previous national tests written by Skolverket. The last two tests are from the book "Blueprint A", which is the textbook the students currently use. The time limit for all the tests is one hour.

The first test is called "Getting Around Atlanta" and is based on a travel guide. In the test, the students are faced with 12 statements regarding what a tourist might like to do in Atlanta. On the next page, there are nine short informational texts about tourist destinations in Atlanta. The goal of the test is to pair the texts with the statements. The maximum number of points is 12 (Skolverket, a).

Test two is called "Can You Figure It Out?" and tests the students' ability to understand explanations of words. There are 12 explanations of words, and the task is to pair them up with a selection of 22 words. The maximum number of points is 12 (Skolverket, b).

The third test is called "The Royal River Thames" and is based on a travel guide. In the test, the students are faced with 12 statements regarding what a tourist might visit along the river Thames. There are ten short informational texts about tourist destinations along the river. The goal of the test is to pair the texts with the statements. The maximum number of points is 12 (Skolverket, c).

Test four is called "One Word Gap". The students are faced with ten sentences with gaps in them. The task is to figure out which word fills each gap by reading and interpreting the situation in the text and analyzing the sentence's grammatical parts. The maximum number of points is 10 (Skolverket, d).

The fifth test is called “The charms of Whitby”. The students are presented with a text containing fifteen one-word gaps. The students are given four alternatives for each of the gaps, and the task is to figure out which of them is correct by reading and understanding the text. The maximum number of points is 15 (Skolverket, e).

Test six is called "Bits of News II". The students are presented with six shorter texts, and after each text there are questions testing whether or not the students have understood what they read. The maximum number of points is seven (Skolverket, f).

The seventh test is called "Come Get Me" and is based on an article in USA Today. The students read the article and are afterward faced with five questions regarding the text. The maximum number of points is ten (Lundfall et al., 2010, pp. 17-19).

Test eight is called "Diving In", and the text is originally from the book Diving In by Kate Cann. The students read an excerpt from the book and are afterward faced with four questions regarding the text. The maximum number of points is eight (Lundfall et al., 2010, pp. 60-65).

3.4. Reliability and Validity

According to Celce-Murcia (2001, p. 525), "The reliability of a test concerns its precision as a measuring instrument. Reliability asks whether a test given to the same respondents a second time would yield the same results". When it comes to the test factors, the tests have been chosen to, following Celce-Murcia's (2001, p. 525) definition, "contribute to the likelihood that performance on one item on a test will be consistent with performance on another item", which serves to contribute to consistency in the study as well. By picking six tests written by Skolverket and two tests from the textbook the students use, consistency is aimed for. The tests made by Skolverket serve as national benchmarks and are constructed by professionals to examine the proficiency level throughout the country. The tests from the textbook have been chosen for the same reason: professionals construct them to measure knowledge in a particular course. However, Baron (2016) argues that asking students to read a text and then giving them comprehension questions is not ideal; it is simplistic and "tells us little about any deeper level of understanding" (Measuring learning, para. 4). However, the national tests and the textbooks are standardized measuring tools in Swedish schools, and even though this might not be the optimal way of testing reading comprehension, it is the tool used in the setting that is examined.

The situational factors of testing are, according to Celce-Murcia (2001, p. 525), "the manner in which the examiner presents the instructions, the characteristics of the room (comfort, lighting, acoustics), outside noises, and other factors" that "can have a bearing on how well the respondents perform on the test". The situational factors of this study have not been ideal. Ideally, the testing would have taken place in the same room, under the same conditions. However, due to the current pandemic, that has not been possible. Most of the tests took place in the same classroom, which is well lit and soundproof, but some of the tests had to be taken through distance classrooms or in the students' own homes. Extensive measures were taken to ensure that the tests were not affected in any other way, such as giving instructions in an online classroom instead of a physical one and mimicking the physical classroom setting as much as possible by requiring the subjects to have their cameras on and the teacher always having their camera on.

The individual factors of this study relate to the participants in the study. According to Celce-Murcia (2001, p. 525), "These include (a) transient factors, such as the physical and psychological state of mind of the respondent (motivation, rapport with examiner), and (b) stable factors, such as mechanical skill, IQ, ability to use English, and experience with such tests". This study has chosen, as stated earlier, two groups of students that are actual classes from an actual school, rather than handpicking homogenous students and making groups out of them. This could be argued to be a flaw in this study, as factors such as gender, proficiency, and native language can affect the results and should be accounted for. However, a sub-aim of this study is to represent and mimic real-life teacher-student situations, which is why the participants are kept in their regular classes, making the study more applicable to real-life situations. Another argument that could be made against the participants in this study relates to the "ability to use English" and the possibility that their proficiency levels are too varied. Certainly, in a regular class, students vary in proficiency, as they do in this study. However, firstly, both groups vary in proficiency but are similar in results when compared. Secondly, as mentioned earlier, this study has a sub-aim to mimic real-life situations. Thirdly, this is one of the reasons for the extensive testing: by doing eight tests, far more than in similar studies, it is possible to compare the students between the groups, individually, and between platforms, making the results more extensive and enabling all variables to be scrutinized, thereby making the study more reliable and negating the "issue" of proficiency. A third argument that could be made against the reliability of this study is that the pilot study included students from the main study groups. It could be argued that those students got a head start and were aware of the concept before the rest of the students. That might be the case; however, the students in the pilot study were randomly selected, five from each group, which means that both groups were affected equally and the reliability of the results should not be severely harmed.


Validity is, according to Celce-Murcia (2001, p. 525), defined as follows: "Validity refers to whether the test actually measures what it purports to measure. Thus, the test must be reliable before it can be valid". Internal validity, according to Andrade (2018, p. 498), examines "whether the manner in which a study was designed, conducted, and analyzed allows trustworthy answers to the research questions in the study". There are several factors to internal validity, some of which speak for this study and some against it. Firstly, alternating which group takes the tests on which platform means that the amount of testing that has been done should give an accurate and valid result of the students' actual knowledge. However, what speaks against it is the fact that the groups do not take the same tests both digitally and traditionally. Ideally, a group would take the same test first traditionally and then digitally; that would make for the most valid comparison. However, that is not feasible, since the students would naturally perform better on the test they took last, having already done it once and being prepared for it. Therefore, alternating the tests and doing many of them is the most valid, if not ideal, way of conducting the testing.

Regarding the validity of the actual tests, they have been selected due to their highly regarded standing in the educational system. Professionals wrote the first six tests for the sole purpose of testing reading comprehension, and the last two are taken directly from the textbook, which has been handpicked and trusted for several years by the school where the study took place. The tests are therefore to be viewed as valid. It is also possible that the order in which the tests were taken should optimally have been randomized for each student to negate any possible order effects, as seen in Singer et al.'s study (2016, p. 6). However, that was not possible due to the already strained testing conditions the pandemic restrictions caused. Arguably, the ratio of tests to groups could be a problem in the design of this study. Ideally, this study would have benefited from having several groups to cross-reference the results between, or from having one group per test to give more reliable results due to larger groups. However, due to the timeframe of this study, which was one month, and, again, due to restrictions, two groups were all that was possible.

External validity, according to Andrade (2018, p. 498), examines "whether the findings of a study can be generalized to other contexts". This study has a design that represents realistic testing situations and uses standardized measuring tools to do so. However, what harms the external validity of this study is the small number of test subjects. Since the students are from the same part of Sweden, study the same program, and are quite few, this study cannot be generalized to larger contexts; it is not extensive enough to be generalized beyond this student population. However, the design of the study is realistic and true to real-life testing, which means it could serve as an indicator for other teachers. Any generalizations and conclusions drawn from this study should nevertheless consider its small scale.

4. Results

The results of the tests are presented across the two groups. Figure 1 shows the percentage of correct answers for each test. Table 1 shows the mean score for each test, and Table 2 shows a Student's t-test and a p-value calculation of the results. Figure 2 compares some of the students' individual results. The full results are available in Appendix A.

Figure 1 shows varied results. In all the tests that included close-ended questions, i.e., tests one to six, the results differed by a small margin. However, in the two last tests, which included only open-ended questions, the traditional test-takers outperformed the digital test-takers by quite a large margin. The first six tests, i.e., the close-ended ones, show no clear pattern, but the effect of the open-ended ones is much more pronounced. In only two instances, tests two and five, did the digital test-takers score higher than the traditional test-takers. However, those margins are too small to be statistically significant.

Figure 1. Percentage of correct answers compared across all tests.

Table 1 shows the differences in overall scores by presenting them as mean scores rather than percentages. The difference in the last test, which includes open-ended questions, is as large as two points on an eight-point test. This mean-score comparison of the platforms highlights that the most significant differences are related to whether the test questions were open- or close-ended, with open-ended questions causing larger differences.

Table 1. Mean scores compared across all tests (Traditional/Digital).

                 T1 (12)  T2 (12)  T3 (12)  T4 (10)  T5 (15)  T6 (7)  T7 (10)  T8 (8)
Mean scores (T)  9.3      8.9      8.6      7.67     9        5.11    7.24     6.43
Mean scores (D)  8.7      9.9      8.1      7.6      9.3      4.86    6        4.36

Table 2 shows the statistical significance of the results, calculated by a t-test. The cut-off for significance is set at p < 0.05. Even though there are differences in all the tests, only test 8 shows a difference that can be deemed statistically significant. Table 2 strengthens the numbers presented in Figure 1 and Table 1: the tests containing open-ended questions showed the largest differences in performance, in favor of the traditional test-takers. Even though test 7 shows a statistically non-significant result, it was still the test with the second-lowest p-value, indicating that open-ended questions played an important role in these results.

Table 2. T- and p-values across all the tests.

         t      p-value
Test 1   -0.58  0.57
Test 2    0.93  0.36
Test 3    0.51  0.61
Test 4    0.16  0.88
Test 5   -0.29  0.77
Test 6   -0.23  0.82
Test 7    1.17  0.25
Test 8   -2.71  0.01
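Assuming the standard pooled-variance (Student's) independent-samples t-test was used (the study reports a t-test but not the variant), each t-value in Table 2 would follow the usual formula:

$$t = \frac{\bar{x}_1 - \bar{x}_2}{s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}, \qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$$

Here each group's mean score, score variance, and number of test-takers on a given test enter as x-bar, s-squared, and n; the sign of t depends only on which group is entered first, and the p-value is read from the t-distribution with n1 + n2 - 2 degrees of freedom.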

Figure 2 shows the results of all the students except students EE, EK, FI, and FQ, who took too few tests to be compared. These results show that far from all students performed as expected or in accordance with the overall result. Many students vary only slightly in their performance between platforms; however, some students' performance varies by as much as 20 percent or even more. Except for those who varied considerably, the two lines follow each other quite closely, indicating that most students showed a similar variation with regard to platform even though they scored differently.

Figure 2. The total performance of each student.

5. Discussion

The results show only slight differences between the traditional test-takers and the digital test-takers. With some anomalies, however slight, the results indicate that the students perform better when reading traditionally.

Figure 1 shows, for the most part, an advantage for the traditional test-takers. Tests two and five are in favor of the digital test-takers. Excluding test eight, the tests are so close in mean score that the differences are considered statistically non-significant. What is interesting is test eight, and to an extent test seven: the only tests to differ notably and the only tests containing open-ended questions. This might, however, not have anything to do with the actual reading of the text but rather with the process of answering. The students who took the test traditionally could place the text and the paper they answered on next to each other, which gave them an entirely different overview compared to the digital test-takers, who either had to scroll in their documents or switch windows to answer, which could have diverted their focus from the text. Therefore, this large gap between the test-takers is most likely attributable not to reading comprehension but to test mechanics.

Much as the study by Hou et al. (2016) shows, a disruption of view can greatly affect the comprehension of a text. The studies are not comparable in the sense that they test the same thing or even use the same type of text, but they are comparable when speaking about test mechanics. The tests the students in this study took were not digital equivalents of a physical test; for that, the documents would have had to be adapted to the screen of a school laptop, which is hard to do and doubtfully something the average teacher does in everyday testing. The students had to scroll and navigate their way through the document; therefore, the students' view was disrupted. One possible explanation for the results, in accordance with this theory, is that the tests which had close-ended questions were "simple" enough, and the texts short enough, that answering the questions was not affected by the disrupted view. The students could still find their way back in the text and pick one of the set alternatives to answer the question. However, when the students were asked to answer open-ended questions in tests seven and eight, the digitally disrupted view of the text was enough of a disturbance to seriously affect their results. In summary, considering the study by Hou et al. (2016), the students in this study were probably affected by the disrupted view in the digital testing of tests seven and eight. The impaired view disrupted the building of a mental map, and answering open-ended questions proved much harder while taking a test digitally.

However, as shown in Figure 2, there were students who, on average, performed better on the traditional tests, and there were students who performed better on the digital tests. The reason for these differences is not something this study can measure; however, it indicates that the results in this study are not only due to test mechanics or study design. The overall results point in a direction favoring traditional test-takers, but individual variation can be seen in the data. One possible explanation for this phenomenon is that the students have different experiences of reading and are used to different platforms. Following the arguments of Ferguson (2019) and Baron (2017), a reader who is used to digital reading and has started to adapt to it is much more prone to multi-tasking and skim-reading, compared to the typical traditional reader, who is less prone to multi-tasking. Therefore, it is possible that the students who perform better when reading digitally are more used to that platform and are less affected by, for example, scrolling in a document, which for an inexperienced digital reader would be an easy way to lose placeholders and get lost in the text.

The difference in performance could also be an issue of haptic preference. As argued in Hou et al. (2016), reading traditionally stimulates more senses, which invokes positive memories and feelings in the reader's mind and is, therefore, haptically ideal for reading. However, the students in this study are young and might not have any positive connotations, or any connotations at all, connected to books: the flipping of pages, the smell of a book, or the feeling of paper. They might, however, have positive connotations connected with the sound of a keyboard or the light from a screen. Therefore, depending on how much or how little the students have read digitally and traditionally before this study, they might have a haptic preference, giving them a slight edge to perform a couple of percent better on either platform. Both the haptic argument and the argument involving how familiar the readers are with either platform are likely explanations for the results shown in Figure 2.

5.1. Earlier studies compared to this study

Earlier studies show that traditional reading yields higher test results than digital reading, even if the differences are slight and mostly statistically non-significant. However, this study shows that test performance varies and indicates that the issue of reading comprehension holds more layers than simply which platform is used. Earlier studies' limitation to only one or two testing opportunities is too little to draw accurate conclusions from. As argued in this study, the individual taking the test, and the test mechanics, are just as much a factor as the platforms themselves. If tests are only taken once or twice, and if the groups do not alternate, there are too many factors that could interfere with the results. That is possibly why earlier studies like Usó-Juan et al. (2009), Singer et al. (2016), and Mangen et al. (2009) show results that indicate only a slight difference between the platforms. Apart from testing several times and alternating between the groups, the testing in those studies is similar to this study, and none of them discovered any statistically significant differences. Regardless, all studies have their strong points and weaknesses, including this one. What is to be taken from this comparison is that none of the studies mentioned in this paragraph, including this one, have included enough testing factors to reach an accurate result or to find the solution to the question. However, Mangen et al. (2009) also conclude that the slight differences in their results are attributable to test mechanics and the scrolling through documents. That might be one of the factors that has to be examined further, but it cannot be the only factor affecting the results in similar studies, since it would not explain the individual variation that can be seen in Figure 2 in this study. Certainly, there will be individual differences when conducting any research, but the point to be made is that individual factors, such as haptic preference and how used someone is to using a computer, are also important. Future research must closely examine the individual and why performance differs between individuals. In summary, more factors must be included to draw accurate conclusions on which platform students perform best regarding reading comprehension. Future studies should conduct several tests over a more extensive period, include more test subjects divided into more groups, and include more testing factors.

5.2. Why is there a difference?

Why there is a difference is not something this study measures. There is, however, a slight difference in the comprehension of texts between the platforms, and it is most often in favor of traditional reading. It would seem unlikely that students who are used to reading digitally, and do so more often than they read in print, would perform worse in comparison. As shown in the previous research, especially in the study conducted by Kretzschmar et al. (2013), there is no neurological evidence suggesting that there should be a difference. One of the hypotheses of this study, which would be an interesting subject for further studies, is that the key lies partially in the testing itself and not in the platforms. The standardized tests in Swedish schools examine reading on digital and traditional platforms but not the kind of comprehension specific to each platform, i.e., they do not test digital reading comprehension; they test traditional reading comprehension on digital platforms. However, testing digital reading comprehension is relatively complicated, since the test could not simply be a digitized document. An actual digital reading comprehension test would have to include an interface focused on the key parts of digital reading comprehension, i.e., finding key pieces of information quickly and skim-reading. Said interface would also have to include, following Rowsell et al.'s (2009) argument regarding reading paths, a reading path with a nonlinear trajectory. However, since the reading path could not be set beforehand, it would have to be created by the test-taker. Designing a test like this would be immensely complicated.

As shown in Rowsell and Burke (2009) and Ferguson (2019), traditional reading comprehension and digital reading comprehension are not the same. Traditional reading comprehension and its testing are best suited for traditional platforms, since the system is built around them. They do not suit digital platforms, which include the temptation to multi-task and the nuisance of having to scroll through texts. This is, as earlier stated, not something the results of this study can clearly show. Nevertheless, all evidence points in the direction of the difference being due to test mechanics. To test digital reading comprehension, a researcher would have to construct a test suited to digital platforms and their traits. However, the problem is that a test suited for digital platforms would be poorly suited for traditional reading. This matter would be a case for future research, where several groups should be studied over a more extended period: some groups would study traditional reading comprehension and some digital reading comprehension. The researcher could then track their development over that longer period to establish which approach is more suitable for the future of Swedish schools.

5.3. The future

As Singer and Alexander (2016) state,

“Evidently, there is still much to be learned about the nature of reading and comprehending when the medium is digital or print, not solely in terms of the cognitive processing that transpires, but also with regard to any motivational, sociocultural, or visual-motor factors that are implicated. Yet, in light of the pervasiveness of multiple mediums in the lives of mature readers, these complexities must be more richly examined and better articulated if the goal of enhancing student learning and academic development is to be fostered.” (p. 13)

The subject of digital versus traditional reading comprehension will require much more research before an answer is found, and since technology develops rapidly, the question might not even be relevant by the time an answer arrives. The aim of all teachers should be to foster academic development, and therefore we should all be aware of what examining students by using digital aids means and how it affects their learning and results. This study does not indicate that digital aids or digital reading are negative for students' performance; it indicates that the standardized tests we use are not suitable for digital testing and that teachers must keep up with the technology. It was not evident to me before this study, but after the testing it became apparent that one cannot test traditional reading comprehension on a digital platform; most digital platforms are not created for that purpose. Teachers might assume that digital reading comprehension will develop naturally in students, since they were born into and live in a digital age. However, if schools do not test digital reading comprehension but rather traditional reading comprehension on digital platforms, they are doing the students a great disservice.


6. Conclusion

To answer the research question posed in this study, students comprehend more when reading traditionally, though mostly to a statistically insignificant extent. The main difference in the results was found in the tests where students were asked open-ended questions; one of those tests showed a statistically significant difference in favor of the traditional test takers. However, the test design of this study, and thereby the design that is standardized in upper secondary schools in Sweden, is most likely the reason why the students in this study, on average, performed better when reading traditionally. The standardized tests and the standardized testing of reading comprehension are not suited for digital reading comprehension. According to the previous research presented in this study, there is no evidence suggesting any significant differences between the two platforms at a neurological level. Therefore, the problem likely lies in the testing itself. If students must work with a disrupted view of the text, they will, in accordance with previous research, perform worse on reading comprehension tests. The standardized tests and testing methods are simply not suited for digital platforms and do not test digital reading comprehension, which is something teachers and researchers must address further so as not to risk having to act retrospectively to solve future problems.

Considering its small scale and mostly statistically insignificant results, this study may only serve as a pointer for teachers and future research. It cannot be used to draw any larger conclusions or be generalized into a larger context. Following the results and the discussion in this study, further research on this subject is needed, and it would have to include more factors to find an answer to the question of which platform to use when testing students' comprehension.


References

Andrade, C. (2018). Internal, External, and Ecological Validity in Research Design, Conduct, and Evaluation. Indian Journal of Psychological Medicine, vol 40 (5), 498-499.

Baron, N. (2016). Do students lose depth in digital reading? The Conversation. Available at: https://theconversation.com/do-students-lose-depth-in-digital-reading-61897.

Baron, N. (2017). Reading in a digital age. The Phi Delta Kappan, vol 99 (2), 15-20.

Celce-Murcia, M. (2001). Teaching English as a second or foreign language. Boston: Heinle and Heinle.

Ferguson, M. (2019). Preparing students’ reading brains for the digital age. The Phi Delta Kappan, vol 100 (4), 64-65.

Garrett-Rucks, P., Howles, L., & Lake, W. (2015). Enhancing L2 Reading Comprehension with Hypermedia Texts: Student Perceptions. CALICO Journal, vol 32 (1), 26-51.

Hayles, K. (2012). How we think: Digital media and contemporary technogenesis. Chicago: University of Chicago Press.

Hlynka, D. (2010). Deconstructing “Digital”. Educational Technology, vol 50 (6), 62-63.

Hou, J., Rashid, J., & Min Lee, K. (2016). Cognitive map or medium materiality? Reading on paper and screen. Computers in Human Behavior, vol 67 (2017), 84-94.

Kretzschmar, F., Pleimling, D., Hosemann, J., Füssel, S., Bornkessel-Schlesewsky, I., & Schlesewsky, M. (2013). Subjective Impressions Do Not Mirror Online Reading Effort: Concurrent EEG-Eyetracking Evidence from the Reading of Books and Digital Media. PLoS One, vol 8 (2), e56178.

Leu, D., McVerry, G., O’Byrne, I., Kiili, C., Zawilinski, L., Everett-Cacopardo, H., Kennedy, C., & Forzani, E. (2011). The New Literacies of Online Reading Comprehension: Expanding the Literacy and Learning Curriculum. Journal of Adolescent & Adult Literacy, vol 55 (1), 5-14.

Mangen, A., Walgermo, B., & Brønnick, K. (2013). Reading linear texts on paper versus computer screen: Effects on reading comprehension. International Journal of Educational Research, vol 58, 61-68.

Pardede, P. (2019). Print vs Digital Reading Comprehension in EFL. Journal of English Teaching, vol 5 (2), 77-90.

Rowsell, J., & Burke, A. (2009). Reading by Design: Two Case Studies of Digital Reading Practices. Journal of Adolescent & Adult Literacy, vol 53 (2), 106-118.

Singer, L., & Alexander, P. (2016). Reading Across Mediums: Effects of Reading Digital and Print Texts on Comprehension and Calibration. The Journal of Experimental Education, vol 0 (0), 1-18.

Sukamolson, S. (2007). Fundamentals of quantitative research. Chulalongkorn: Chulalongkorn University.

Uso-Juan, E., & Ruiz-Madrid, N. (2009). Reading Printed versus Online Texts: A Study of EFL Learners’ Strategic Reading Behavior. International Journal of English Studies, vol 9 (2), 59-79.

Utbildningsutskottet. (2016). Digitaliseringen i skolan – dess påverkan på kvalitet, likvärdighet och resultat i utbildningen. Riksdagstryckeriet.

The tests

Skolverket (a). (n.d.). Getting around Atlanta. Available at: https://www.gu.se/sites/default/files/2020-03/En5_Getting_around_Atlanta.pdf

Skolverket (b). (n.d.). Can You Figure It Out. Available at: https://www.gu.se/sites/default/files/2020-03/En5_Can_You_Figure_It_Out.pdf

Skolverket (c). (n.d.). The Royal River Thames. Available at: https://www.gu.se/sites/default/files/2020-03/En5_Royal_River_Thames.pdf

Skolverket (d). (n.d.). One Word Gap. Available at: https://www.gu.se/sites/default/files/2020-03/En5_OWG.pdf

Skolverket (e). (n.d.). The Charms of Whitby. Available at: https://www.gu.se/sites/default/files/2020-03/En5_Whitby.pdf

Skolverket (f). (n.d.). Bits of News II. Available at: https://www.gu.se/sites/default/files/2020-03/En5_Bits_of_News_I%2BII.pdf

Lundfall, C., Nyström, R., & Clayton, J. (2010). Blueprint A. Stockholm: Liber AB.
