Topography and morphology analysis of marine nanoparticles and a pedagogical study of representations used for improving

a high school experiment

A thesis centered around a scanning electron microscope, utilizing it in two vastly different ways

Robin Bramsäter

Degree project (Examensarbete) in the programme Civilingenjör och lärare (Master of Science and Education), in the field of technology and learning. Stockholm 2012

Examiner: Lars Blomberg

Head Supervisor: Nickolay Ivchenko Assistant Supervisor: Margareta Enghag External Supervisor: Caroline Leck


Abstract

The Arctic Summer Cloud Ocean Study expedition took place during the autumn of 2008 and brought back water and air samples. One theory was that marine particles were shot into the atmosphere by bursting bubbles and, once airborne, acted as cloud condensation nuclei. Particles collected from the subsurface water, the surface microlayer and just above the surface had their topography and morphology analyzed using a scanning electron microscope. Because no EDS analysis was performed, it is impossible to say for certain that the particles found were the same as those found in previous studies, only that it is highly likely. No evidence was found against the marine particles being able to act as cloud condensation nuclei, but the data gathered was not sufficient to strengthen the theory either.

The scanning electron microscope was also used in a pedagogical study, analyzing how operators with different knowledge and prior experience interact with the microscope's images. These results, together with knowledge gained from literature studies, were used to improve a high school experiment on centripetal acceleration. The main issue with the experiment was that what the students learned by performing it was not the same as the theoretical models the teachers wished the students to learn. The reason was that the experimental equipment lacked the centripetal model's external representations, such as force arrows. If a simulator were incorporated into the lab, representations of centripetal acceleration could be visualized and a clearer connection between experiment and theory could be made.


Content

1 Introduction ... 9

2 Objectives ... 11

2.1 The Duality of the Thesis ... 11

2.2 Question Formulations ... 11

2.3 Method ... 11

2.4 Deliverables ... 11

3 Technical Background ... 13

3.1 The ASCOS (Arctic Summer Cloud Ocean Study) Project ... 13

3.2 Marine Particles ... 14

3.3 Aerosol Particles ... 15

3.4 Size Distributions ... 15

4 ASCOS Analysis ... 17

4.1 Method ... 17

4.1.1 The Samples... 17

4.1.2 Scanning Procedure ... 17

4.1.3 Categorization ... 19

4.1.4 Preparation for the Statistical Analysis ... 19

4.2 The Scanning Electron Microscope ... 20

4.2.1 General Concept ... 20

4.2.2 The Electron Gun ... 21

4.2.3 Magnetic Lenses ... 22

4.2.4 Sample Loading ... 23

4.2.5 Imaging ... 23

4.2.6 Focusing ... 24

4.2.7 Charge Effects ... 26

4.3 Results ... 27

4.3.1 The Water Samples ... 27

4.3.2 Subsurface Water Specific Particles ... 29

4.3.3 Surface Microlayer Specific Particles ... 30

4.3.4 The Spray Samples ... 31

5 The Scanning Electron Microscope and Its Representations ... 35

5.1 The Pedagogical Study ... 35

5.1.1 The Goal of the Study ... 35


5.1.2 The SEM Experiment ... 35

5.1.3 The SEM Task ... 35

5.1.4 Focus of the Experimental Study ... 36

5.1.5 The Interview Questions ... 36

5.2 Theoretical Framework ... 37

5.2.1 The Cognitive Load Theory ... 37

5.2.2 Problem Solving Strategies ... 37

5.2.3 External Representations ... 38

5.2.4 Zone of Proximal Development (ZPD) ... 40

5.2.5 Stimulated Recall ... 40

5.3 Interview Results ... 41

5.3.1 Method of Analysis and Instrument Developed From Abductive Reasoning ... 41

5.3.2 Participants ... 41

5.3.3 Dictionary for "My" SEM Terms ... 42

5.3.4 Context for the SEM Operator... 42

5.3.5 The Results ... 45

5.4 Interview Analysis ... 54

5.4.1 The Expert ... 54

5.4.2 The Intermediate ... 54

5.4.3 The Novice ... 55

5.4.4 Comparing the Three ... 55

5.4.5 Conclusion ... 56

6 Improving the Physics Experiment ... 59

6.1 The Experiment ... 59

6.1.1 Background ... 59

6.1.2 Common Misinterpretations ... 60

6.2 Improvements ... 62

6.2.1 Comparison With the SEM Results ... 62

6.2.2 Clarifying the Theory Before the Experiment ... 63

6.2.3 Solving the Experiment Task ... 64

6.2.4 Including a Simulator ... 66

6.2.5 Summary... 67

7 Discussion ... 69

7.1 The Marine Particles ... 69


7.1.1 Comparison With Previous Studies ... 69

7.1.2 Marine Gels ... 70

7.1.3 Particle Patterns ... 71

7.1.4 Particle Size Distribution ... 71

7.1.5 Reflections Regarding the Method ... 71

7.2 The Pedagogical Study ... 72

7.2.1 Reflections Regarding the Method ... 72

7.2.2 Different Representations ... 73

7.2.3 Reflections Regarding the Improvements of the Experiment ... 73

8 Conclusions ... 75

8.1 The Marine Particles ... 75

8.1.1 Future Work ... 75

8.1.2 CCN Coming From the Water ... 75

8.2 The Pedagogical Study ... 75

8.2.1 Things to Consider When Creating a High School Experiment ... 75

8.2.2 Educational Experiments ... 75

References ... 77

Literature ... 77

Pictures ... 78

Appendix 1 - The Interview Guide ... 81

Appendix 2 - The Interview Transcriptions ... 83

The Expert ... 83

The Intermediate ... 93

The Novice ... 100


1 Introduction

The program Master of Science and Education was started in 2002 as an effort to create engineers with a pedagogical edge as well as teachers with the in-depth knowledge of an engineer. Consequently, the master thesis of this program requires research and work in both fields, which means this report consists of two main parts: one technical and one pedagogical.

Much of the work in this master thesis centered on the use of a scanning electron microscope (SEM) for analyzing marine particles collected during the ASCOS expedition in 2008. The main goals of the thesis were to analyze these particles, to analyze the external representations used in the SEM, and to apply this knowledge of representations to high school education.

The first half of this report focuses on the ASCOS analysis, where Chapter 3 contains theoretical background and Chapter 4 is about the study which was carried out during this thesis. The second half is the pedagogical part where Chapter 5 contains pedagogical theories and the SEM experiment and Chapter 6 focuses on the high school experiment and how to improve it. The very last part discusses the results from the technical and pedagogical studies and draws some conclusions.


2 Objectives

2.1 The Duality of the Thesis

This thesis has a technical and pedagogical part, both relating to a scanning electron microscope (SEM). The technical part analyzes marine particles which were collected in the ASCOS expedition.

The pedagogical part is split into two halves. The first is strictly theoretical: a study of the SEM's representations, based on knowledge gained from literature on cognitive load theory and representations, as well as a representations experiment involving the SEM and three different users. The second half is to improve a high school experiment on centripetal acceleration.

2.2 Question Formulations

The questions are split up into three categories: technical, theoretical pedagogy and practical pedagogy.

What can be found in the marine samples collected during the ASCOS project 2008?

- Which different particles can be found?

- What are their size distributions?

- What shapes do they have?

How effective are the representations in the JEOL JSM-7401F's interface?

- Which representations are there and what impact do they have in the interaction between the user and the scanning electron microscope?

- Which differences are there between expert and novice regarding which representations are being used?

How can a high school experiment regarding centripetal acceleration be performed, and can representations be incorporated?

2.3 Method

The marine samples were analyzed using a scanning electron microscope of the type JEOL JSM-7401F, taking pictures of the samples at 10 000x and 40 000x magnification. These pictures created the foundation for the discussion (see Chapter 7).

To obtain information regarding the user-SEM interaction, several different operators were given a simple task to carry out alone while the operating screen was being videotaped. Shortly afterwards, a stimulated recall interview was conducted. This knowledge, combined with knowledge gained from literature studies, was used to improve a high school experiment.

2.4 Deliverables

- A few hundred pictures of the aerosol samples at 10 000x and 40 000x magnification
- Concrete examples to implement and test in the high school physics experiment


3 Technical Background

3.1 The ASCOS (Arctic Summer Cloud Ocean Study) Project

In recent years, climate issues such as pollution and global warming have become major subjects of discussion, and scientific interest in the climate has increased. A big part of understanding the climate is to fully understand small airborne particles: aerosols. The ASCOS project contributed valuable scientific data with its expedition to the Arctic, where it gathered both aerosol and water samples. The ASCOS expedition took place in early August 2008, its goal being to drift with the pack ice in the central Arctic while studying the processes which determine the creation and life cycle of clouds.

One of the theories was that tiny organic particles (microcolloids), which originally had only been observed in the uppermost water layer, could also be found in the air. When these particles are up in the atmosphere they are able to act as cloud condensation nuclei (CCN) and thus play a crucial role in cloud formation. When the sun's radiation heats up the water, tiny bubbles are created. These bubbles reach the water surface and burst, shooting the microcolloids out of the water and into the air. Since the clouds reflect sunlight, fewer bubbles will be created and consequently fewer CCN will be able to form clouds. This is called climate feedback (see Figure 3.1) and is an entirely natural process, since human pollution and interference in the Arctic is negligible (ASCOS Preliminary Report).

Figure 3.1: The climate feedback process (From ASCOS Preliminary Report)

Several samples from different places were collected during the expedition. Surface microlayer (SML) samples were collected from open leads with a rotating drum, where the water adhered to the surface of the drum and subsequently dripped into a container. Subsurface water (SSW) was collected at the same location at a depth of 0.5 meters. Multi-stage Berner cascade impactors were used at a height of 23 meters above sea level for collecting aerosol particles (Gao, 2012).

Artificial bubbling was conducted both inside a lab and in situ. Inside the lab a glass tower was used, where bubbles were generated by air being blown into the water from the bottom. The particles from bubble bursting were collected over the water surface with a collection time of one hour. For the in situ collection a bubble source was positioned 15 cm below the water surface and the particles were collected from the bubble bursting 10 cm above the surface (Gao). Figure 3.2 depicts the bubble bursting process.

Figure 3.2: When a bubble reaches the water surface it bursts. Particles on the bubble surface are shot into the atmosphere (c-d) and jet drops follow as the surrounding water fills the gap left by the bubble (e-f) (From College of Environmental Science and Forestry).

Newly formed fresh sea ice as well as algal assemblages loosely attached to the bottom of ice floes were also collected (Gao).

3.2 Marine Particles

There is a wide variety of oceanic particles such as sea salts and micro-organisms (e.g. bacteria and virus-like particles) (Leck & Bigg, 2005). The ocean is also an important factor in biogeochemical carbon cycling as it contains a large amount of reduced organic carbon which is mostly in the form of dissolved organic carbon (DOC) polymers (Verdugo & Santschi, 2010).

Verdugo (2011) mentions four different processes: assembly, annealing, dispersion and fragmentation. Assembly and annealing are processes in which nanoparticles entwine with each other and create larger particles. Dispersion and fragmentation are the reverse processes (see Figure 3.3).

The DOC polymers are embedded in a solvent which hinders a collapse of the polymer network; these stable polymer networks are what are called marine gels. Typically the polymers are interconnected by tangles and/or low-energy bonds, where the stability of these polymer networks relies on a large number of both tangles and energy bonds (Verdugo). This means that the assembly and dispersion of the tangled polymers mainly depend on the polymers' length: the longer the polymers, the greater the possibility of them entwining and forming nanogels. These nanogels can anneal with each other and form even larger particles: microgels (Ibid.). Assembling and annealing are both reversible, however, and depend on the stability of the gels, which in turn depends on factors such as chain length, charge density and topology.


Figure 3.3: DOC polymers are able to self-assemble into nanogels which are stabilized by entanglements and calcium bonds. The nanogels can in turn anneal with each other, creating microgels. Both of these processes are reversible. (From Verdugo, 2011)

3.3 Aerosol Particles

Aerosols are suspensions of solid and/or liquid particles in the air, and are on a global scale mainly derived from natural sources such as volcanic eruptions and sea salt. Sea salts and dust particles mainly reside in the troposphere, whereas volcanic ashes are transported by upward winds to higher altitudes and ultimately end up in the stratosphere. There are also external aerosols such as meteoric smoke, which is created when meteors evaporate in the mesosphere. Besides these naturally occurring aerosols, manmade ones are created mainly by combustion and can be found in industrialized and heavily populated areas. These are just a few examples of the particles which go under the label "atmospheric aerosol", all having different compositions, shapes, optical depths and sizes, where the size of an aerosol ranges from a few nanometers to several tens of micrometers (Chin, 2009).

Some aerosols affect human health while others act as cloud condensation nuclei (CCN). A CCN is a particle, typically around 200 nm in diameter (Eastern Illinois University), which captures water droplets and, on a large scale together with other nuclei, ultimately forms clouds (Hamill, Jensen, Russell & Bauman, 1997). Some of these aerosols reflect and scatter sunlight, acting as a coolant for the Earth, while others reflect sunlight coming from the Earth back again in a manner similar to greenhouse gases. In the industrialized and more heavily populated areas the aerosols create health issues and cause lung disease among the areas' inhabitants, both humans and animals (Chin).

3.4 Size Distributions

The size distribution, for both the aerosols and the marine particles, can be represented by a curve which normally has four distinct peaks. An example of such a curve can be seen in Figure 3.4.

The first peak is called the nucleation mode and consists of particles less than 0.01 µm in diameter.

Due to the very small sizes of these particles it is rather hard to get a detailed picture of them (College of Environmental Science and Forestry, 2012).

The second peak is called the Aitken mode and normally consists of organically derived virus-like particles (Gao). These particles are in the size range of 0.01-0.1 µm.

The third peak is called the accumulation mode; the particles here are in the size range of 0.1-1 µm. These particles are created by coagulation of smaller particles (New Media Studio, 2012) and condensation of organic vapors onto microcolloid aggregates (Gao).

The coarse mode is the last peak and consists of particles larger than 1 µm, which typically are sea salts, bacteria and particles of multiple-source origin (Ibid.).

Figure 3.4: The particle size distribution curve, with pictures of particles which are typically found in the different modes. (From College of Environmental Science and Forestry)
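The four mode boundaries described above can be summarized as a simple classification rule. The sketch below only illustrates the cut-offs quoted in this section (exact boundaries vary between sources), and the function name is mine, not from any analysis software:

```python
def size_mode(diameter_um):
    """Classify a particle into one of the four size-distribution
    modes by its diameter in micrometers, using the boundaries
    quoted in the text: <0.01 nucleation, 0.01-0.1 Aitken,
    0.1-1 accumulation, >1 coarse."""
    if diameter_um < 0.01:
        return "nucleation"
    elif diameter_um < 0.1:
        return "Aitken"
    elif diameter_um < 1.0:
        return "accumulation"
    else:
        return "coarse"

# A typical CCN (~200 nm = 0.2 um) falls in the accumulation mode:
print(size_mode(0.2))  # accumulation
```

Note that a typical CCN diameter of around 200 nm (Section 3.3) lands in the accumulation mode.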


4 ASCOS Analysis

4.1 Method

4.1.1 The Samples

The particles were collected on TEM grids (Figure 4.1), i.e. grids normally used for transmission electron microscopy. These are circular copper rings with a checkered copper grid, about 3 mm in diameter.

Figure 4.1: The shape of the TEM grids being used (From Radboud Universiteit Nijmegen)

The objectives were to gather statistical data as well as data regarding the particles' geometry and chemical composition. The statistical data was obtained by taking plenty of pictures at 40 000x (40k) magnification; the geometrical data was obtained by photographing interesting areas at 40k magnification as well as by looking at the more general picture gained from all of the 10 000x (10k) pictures. After the pictures had been taken and the observed particles categorized, the samples were to be coated in platinum and the chemical composition analyzed using EDS.

Four samples were studied: one from the SSW, one from the SML and two from bubble bursting in situ (the bubble bursting in situ will be referred to as spray samples).

4.1.2 Scanning Procedure

The checkered surface of the TEM grid makes orientation very easy. In order to achieve an effective and thorough study, each grid was divided into four quadrants, of which the best-looking (i.e. least damaged) quadrant was analyzed. Beginning from the middle and going radially outwards along the quadrant's diagonal (i.e. 1,1; 2,2; 3,3, etc.), each square was carefully studied at higher magnifications.

Figure 4.2: The map over the TEM grid, the notation being: quadrant, horizontal coordinate, vertical coordinate (e.g. a,1,1). The coordinates for the empty squares follow the same pattern.

Before taking any pictures at all, the grid was rotated so it matched the map in Figure 4.2. Since the TEM grids had an asymmetrical object in the middle, it was easy to rotate the grid so the object was in the same position for each study. A working distance of 3.5 mm was used with a sample height of 1.9 mm. The acceleration voltage was 3.00 kV with a -2.00 kV voltage over the sample (i.e. 1.00 kV Gentle Beam), using the SEI (secondary electron imaging) detector and an electron current of 20 µA.

The picture was first focused as well as possible by looking at the sharpness of the grid lines. After this, a small hole was located and the beam alignment was corrected by wobbling the beam and adjusting accordingly. When adjusting the astigmatism the goal was to make the hole as circular as possible, easily spotting over- and underfocus when the hole began turning oval. All three steps (focus, beam alignment and astigmatism) were repeated several times until the picture was of desirable quality.

For each sample (i.e. for each TEM grid), 7 squares along a diagonal were studied. For each square, 20-25 pictures were taken at a magnification of 10k and a single picture at 1000x (1k). Around 100 pictures at a magnification of 40k were spread evenly over the 7 squares. Since areas previously photographed are damaged by the microscope, it was easy to spot where pictures had previously been taken. The damage is not as severe as it sounds, but a mere darkening of a previously studied area (see Figure 4.3 and Figure 4.14). By taking the 1k photograph after all of the 10k pictures it was possible to get a map of the square with each 10k picture marked out. By using a consistent pattern when taking the 10k photographs, each individual picture was traceable using the map.

Figure 4.3: An overview picture of a TEM square. The red numbers are not part of the original picture but have been added afterwards to demonstrate the pattern used when taking the 10 000x pictures.

The 10k pictures were taken to get a general picture of the grid, while the 40k pictures were used for determining the form and geometry of the particles as well as for the statistical analysis. Because of the qualitative purpose of the 40k pictures, areas of interest detected in the 10k pictures were revisited and photographed at the larger magnification.

4.1.3 Categorization

The particles found were categorized with respect to their shape, and no consideration was given to earlier scientific studies. This was done to avoid drawing hasty conclusions and to keep the categorization as objective as possible. Subjectivity still sneaks its way in, however, since it was personal judgment that determined whether two particles were of the same kind or distinguishable as two different particle types. The comparison between this report's categories and those of the scientific reports, as well as the analysis, can be found in Chapter 7.1.

4.1.4 Preparation for the Statistical Analysis

The statistical analysis was carried out by another person involved in the ASCOS project, but I will include a brief description of how it works. A program is used which is able to distinguish distinct particles in a picture. By telling the program at what magnification the pictures were taken, it will calculate how many distinct particles there are and their sizes. In order for a picture to be usable, the particles need to be easily distinguishable and have sharp edges. If the picture has any astigmatism or charge effects, the particles tend to get blurry, which results in the program calculating them as larger than they actually are. A picture of good quality is not enough, however: it needs to be in binary form (i.e. two colors only), not in the grayscale a picture normally comes in. There are options in the program which allow the user to determine which parts of the grayscale shall be converted to which of the binary colors. In Figure 4.4 we see an example of a binary conversion, where a grayscale picture (Figure 4.4 left) of three particles is converted into a black and white picture (Figure 4.4 middle), ultimately letting the program analyze where the particles are (Figure 4.4 right).

Figure 4.4: A SEM picture to the left, converted to binary form in the middle, then having the program analyze the particles (outlined to the right). The program includes a table of the particle areas in the picture being analyzed. The freeware ImageJ has been used for this example, but that is not the same program as the ASCOS researcher uses.

The purpose of the statistical analysis is not to see how large a portion of the grids is covered in particles, but rather to see the size distribution of the observed particles. This size distribution is plotted and should, with a sufficient amount of data, be able to reveal the Aitken and accumulation modes.
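The two steps of the analysis (grayscale to binary, then counting distinct particles and their areas) can be illustrated with a minimal Python sketch. This is my own toy code, not the program used by the ASCOS researcher and not ImageJ:

```python
def threshold(gray, cutoff):
    """Convert a grayscale image (a list of rows of 0-255 values)
    into binary form: 1 where a pixel is at least as bright as the
    cutoff, 0 elsewhere."""
    return [[1 if px >= cutoff else 0 for px in row] for row in gray]

def particle_areas(binary):
    """Find 4-connected components of 1-pixels (the "particles")
    and return their areas in pixels, in scan order."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # Flood-fill one connected component with a stack.
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas

# Two bright "particles" on a dark background:
gray = [
    [10, 200, 210, 10,  10],
    [10, 220, 230, 10, 240],
    [10,  10,  10, 10, 250],
]
print(particle_areas(threshold(gray, 128)))  # [4, 2]
```

Knowing the magnification, each pixel area can then be converted into a physical particle area, which is what makes the size-distribution plot possible.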


4.2 The Scanning Electron Microscope

The instrument used for the electron microscopy was the JEOL JSM-7401F, and the following text is based on the web literature from Northern Arizona University's course on microprobing.

4.2.1 General Concept

Very small objects and their structures have piqued the interest of scientists for many centuries. For a very long time the magnification was limited to that of the light microscope. The limiting factor is the de Broglie wavelength

λ = h/p = h/(mv)

where h is Planck's constant, p is the momentum, m the mass and v the velocity. Take the following example to get an impression of the light microscope's resolution. A photon with an energy of 1 eV has a wavelength of 1240 nm, which means the highest achievable resolution is 1.24 µm in a light microscope using photons of this energy. However, the wavelength decreases as the mass of the particle increases, which is why electrons are used instead of photons. An electron with a kinetic energy of 1 eV has a de Broglie wavelength of 1.23 nm, i.e. more than 1000 times better resolution than that of a classic light microscope.
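The two wavelengths in this example can be checked with a short calculation. The sketch below uses CODATA constant values and the non-relativistic de Broglie relation; the helper function names are mine:

```python
import math

# CODATA constant values (SI units)
h = 6.62607015e-34       # Planck's constant, J*s
c = 2.99792458e8         # speed of light, m/s
m_e = 9.1093837015e-31   # electron rest mass, kg
eV = 1.602176634e-19     # one electronvolt in joules

def photon_wavelength_nm(energy_eV):
    """Photon: lambda = h*c / E."""
    return h * c / (energy_eV * eV) * 1e9

def electron_wavelength_nm(kinetic_eV):
    """Electron (non-relativistic): lambda = h / p = h / sqrt(2*m*E)."""
    return h / math.sqrt(2 * m_e * kinetic_eV * eV) * 1e9

print(round(photon_wavelength_nm(1.0)))       # 1240 (nm)
print(round(electron_wavelength_nm(1.0), 2))  # 1.23 (nm)
```

Both results agree with the figures quoted in the text.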

In a scanning electron microscope (SEM), the electrons are accelerated in a vacuum to a desired kinetic energy, focused with magnetic lenses then shot at the prepared sample one wishes to study.

Figure 4.5 shows a simplified model of the SEM. The electrons need a very long mean free path in order to minimize interference, which is achieved by having a very high vacuum inside the microscope.

Figure 4.5: A very simple model of the SEM

When an electron hits the sample, it either scatters elastically or inelastically. In elastic scattering, only the electron's trajectory changes while the kinetic energy remains constant. During inelastic scattering, the incident electron will sometimes collide with and displace an electron from one of the sample's atoms. If a displacement occurs, an electron from the atom's outer shell will fill the gap, resulting in an emitted photon, i.e. an X-ray.

The electron microscope consists of an electron gun which creates the electron beam. The beam is focused by several condenser lenses in a vacuum. When the beam finally reaches the sample, there are detectors which detect primary, secondary and backscattered electrons as well as X-rays from excited atoms. When an electron hits the sample, its direction changes with each collision, which can be modeled using the Monte Carlo method. Most of the incoming electrons are absorbed into the sample and very few are able to escape the surface and reach the detectors. The depth and width, i.e. the spread, of the electrons' paths inside the sample is known as the interaction volume, and the smaller the volume, the greater the quality of the topography picture. Figure 4.6 shows an example of a Monte Carlo simulation of the interaction volume.

Figure 4.6: A Monte Carlo simulation of the interaction volume, simulating the paths of several different incident electrons. Electrons absorbed in the sample are marked blue and those which reached the surface are marked red. The sample's surface is located at the top. (From Northern Arizona University)
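The Monte Carlo idea behind simulations like Figure 4.6 can be illustrated with a toy random walk. The sketch below is only a caricature (it tracks just the depth coordinate of an isotropic walk with a fixed step length), not the physical model used to produce the figure:

```python
import math
import random

def simulate_electron(rng, steps=200, mean_free_path=1.0):
    """Toy Monte Carlo of one incident electron: it enters
    perpendicular to the surface, then re-scatters isotropically
    with a fixed step length; only the depth component of each
    step is tracked. Returns True if the electron re-crosses the
    surface (a "red", escaping path), False if it is still inside
    after the step budget (a "blue", absorbed path)."""
    depth = 0.0
    angle = math.pi / 2  # first step: straight into the sample
    for _ in range(steps):
        depth += mean_free_path * math.sin(angle)
        if depth < 0:
            return True   # escaped through the surface
        angle = rng.uniform(0.0, 2.0 * math.pi)  # isotropic re-scatter
    return False          # absorbed in the sample

rng = random.Random(42)
n = 10_000
escaped = sum(simulate_electron(rng) for _ in range(n))
print(f"{escaped}/{n} simulated electrons escaped the surface")
```

Repeating this for many electrons, as in the figure, gives the characteristic cloud of absorbed and escaping trajectories whose extent is the interaction volume.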

4.2.2 The Electron Gun

The electron gun (Figure 4.7) creates the intense electron beam required for the scanning, and for this project a cold field emission gun was used. The electrons are gathered in a V-shaped tungsten filament which is used as a cathode, with the tip of the V pointing towards the anode. The tungsten filament tip is held at a negative potential of several kilovolts relative to the nearby anode, which creates an electron point source at the tip of the V.


Figure 4.7: A simple model of the electron gun (From Northern Arizona University)

Due to the tunneling effect, electrons from the electron source are able to overcome the potential barrier required to escape the surface if the electrostatic field is strong enough. Once in the vacuum, the electrons are accelerated by the electrostatic field and their kinetic energy is adjusted to give a suitable wavelength for acquiring the desired information from the sample. Low-energy electrons produce a lot of secondary electrons, which gives more topographic information. High-energy electrons penetrate the sample more easily and create more backscattered electrons and X-rays, giving information about the sample's chemical composition. The anode is used to change the voltage, thereby adjusting the energy of the electron beam.

4.2.3 Magnetic Lenses

Similar to a light microscope, the electron microscope uses a lens system. These lenses are not optical but magnetic, and focus the electron beam. A magnetic lens (Figure 4.8) is a coil carrying a current, which in turn creates a magnetic field. This magnetic field varies in magnitude and direction depending on the position in the coil. When an electron passes through the coil it is pushed around by the varying magnetic forces, and by adjusting these lenses the SEM operator is able to adjust the beam. In a SEM there are typically four lenses: first two condenser lenses, then a scanning coil and lastly an objective lens (which is a kind of condenser lens).

Figure 4.8: The general concept for a magnetic lens (From Wikipedia)

4.2.3.1 Condenser Lenses

The condenser lenses decrease the spot size of the electron beam. When the current in the lens is increased, more electrons are diverted out of the beam, making it narrower and narrower. This means that the resolution increases with higher current, but the number of secondary electrons created decreases (since more are diverted by the strong magnetic field).

4.2.3.2 Scanning Coils

There are two sets of scanning coils, one for raster and one for deflection. The coils move the beam over the sample surface in two perpendicular directions, creating a square scanned area.

4.2.3.3 Objective Lens

The objective lens (sometimes called the final condenser lens) focuses the beam even more, though without any electron losses.

4.2.4 Sample Loading

Due to the vacuum inside the SEM, the sample first needs to be loaded into an airlock. The samples are very small, 3 mm in diameter, and are placed in special holders which secure them. Once inside the airlock, the pressure drops until it reaches the same pressure as inside the SEM. When equal pressure is reached, a hatch to the microscope opens and the sample is pushed into place by the operator. Once the sample is at the sample stage it is possible both to tilt the sample and to adjust its height in order to get a good focus. If the sample is not conductive it might be necessary to coat it with gold in order to get a good image. When analyzing organic samples with EDS (Energy-dispersive X-ray Spectroscopy, a high-energy beam used to obtain knowledge of the chemical composition) it is necessary to coat the sample in platinum. This is required since the EDS beam would evaporate the parts being examined without a protective coating.

4.2.5 Imaging

4.2.5.1 Secondary Electrons

There are two types of detected electrons: one type indicates the topography and the other indicates the atomic mass distribution of the sample. The ones indicating the topography are the secondary electrons: electrons from the sample which have been excited by a collision with an incident electron from the beam. These secondary electrons undergo elastic and inelastic scattering while they move around in the sample (see Figure 4.6). Eventually some of the electrons reach the specimen surface, where they escape if they have a sufficient amount of kinetic energy. Secondary electrons have very low energy, which means only the ones closest to the surface are able to escape and be detected.

The number of electrons detected from different parts of the sample creates brighter and darker areas which reveal its topography. This is known as the edge effect and is present because the detector registers more electrons if the surface is tilted towards the detector, giving a higher secondary electron yield. Figure 4.9 shows a visual example of this phenomenon.


Figure 4.9: A higher SE yield creates the edge effect (From Northern Arizona University)

4.2.5.2 Backscatter Electrons

The second kind are the backscatter electrons. These are high energy electrons which are elastically scattered, i.e. reflected, by the specimen's atomic nuclei. The higher the average atomic number, the stronger the backscattering and the brighter the image. The backscatter electrons are thus used to differentiate parts of the specimen that have different average atomic number.
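The growth of backscattering with atomic number can be made concrete with an empirical fit; the polynomial below is one commonly quoted fit (attributed to Reuter) for the backscatter coefficient at normal incidence, included here only as an illustrative sketch rather than an exact physical law:

```python
def backscatter_coefficient(Z):
    """Approximate backscatter coefficient at normal incidence as a function
    of atomic number Z (Reuter's empirical polynomial fit)."""
    return -0.0254 + 0.016 * Z - 1.86e-4 * Z**2 + 8.3e-7 * Z**3

# Heavier elements backscatter more and therefore appear brighter:
carbon = backscatter_coefficient(6)   # roughly 0.06
gold = backscatter_coefficient(79)    # roughly 0.49
```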

4.2.5.3 Excitation and De-excitation of Atoms

As was just mentioned, during inelastic scattering the incident electron sometimes excites an atom. When the atom after some time de-excites, it releases the excess energy, which can take the form of an X-ray, cathodoluminescence or Auger electrons. Each element has very characteristic relaxation energies, which thus reveal the chemical composition of the specimen.

4.2.5.4 Gentle Beam

The image quality ultimately depends on the probe current (i.e. the number of electrons hitting the sample) and on the diameter of the beam at the sample, known as the spot size. The narrower the spot size, the better the resolution.

The spot size can be narrowed by increasing the acceleration voltage, but this in turn creates a large interaction volume, which is bad for the topographic quality. By using a function in the JEOL JSM-7401F called "Gentle Beam", it is possible to apply a negative voltage to the sample which reduces the velocity of the incoming electrons. An example would be to hold the sample at -2.00 kV while using an acceleration voltage of 3.00 kV. The interaction volume would be small (the same size as with a 1.00 kV acceleration voltage) while the spot size would also be small, since the actual acceleration voltage is 3.00 kV.

Another reason to use the gentle beam function is that non-conductive samples tend to get damaged at higher voltages. By using the gentle beam, the image quality is increased and sample damage is reduced.
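The effect of the retarding bias reduces to a one-line relation: the effective landing energy is the acceleration voltage minus the magnitude of the negative bias applied to the sample. A minimal sketch using the 3.00 kV / -2.00 kV example from the text:

```python
def landing_energy_kv(acceleration_kv, sample_bias_kv):
    """Effective landing energy (in kV-equivalent) of the beam electrons
    when a retarding bias of magnitude sample_bias_kv is applied to the
    sample stage, as in the Gentle Beam mode described above."""
    return acceleration_kv - sample_bias_kv

# 3.00 kV acceleration with the sample at -2.00 kV gives the interaction
# volume of a 1.00 kV beam while keeping the small 3.00 kV spot size.
effective = landing_energy_kv(3.00, 2.00)
```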

4.2.6 Focusing

When focusing the beam, there are basically three different steps which are repeated over and over until the operator is satisfied with the image quality. These three steps are adjusting the focal length (also known as working distance), aligning the apertures (condensation lenses and objective lens) and correcting astigmatism.
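The three-step cycle described above can be pictured as a simple loop. The sketch below uses a toy stand-in "instrument" object, with hypothetical method names that do not correspond to the real JEOL interface, purely to show the iterative structure of the procedure:

```python
class ToySEM:
    """Toy stand-in instrument: each adjustment nudges a 0-10 sharpness score."""
    def __init__(self):
        self.sharpness = 0

    def adjust_working_distance(self):
        self.sharpness += 1

    def align_apertures(self):
        self.sharpness += 1

    def correct_astigmatism(self):
        self.sharpness += 1


def focus_until_satisfied(sem, good_enough=9):
    """Repeat the three focusing steps until the operator is satisfied;
    returns the number of passes through the cycle."""
    passes = 0
    while sem.sharpness < good_enough:
        sem.adjust_working_distance()
        sem.align_apertures()
        sem.correct_astigmatism()
        passes += 1
    return passes
```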

4.2.6.1 Working Distance

Just like in light optics, the lens system of the SEM has a focal point. The sample stage, where the sample is located, can be moved up and down by a knob on the side of the microscope. This is used for very rough focusing and is only done at the very beginning. It is mainly to get a picture at all since at the start of studying a new sample the image screen only shows black and white static. This is normally done at low magnification (~500x).

When one is satisfied with the focus (i.e. when it is impossible to get a better focus by just adjusting the height) it is time to adjust the beam's focal point. In comparison with the previous stage, this adjusts the beam while the sample is stationary, whereas in the first step the sample stage was moved while the beam's focal length was fixed. The beam's focal length is called the working distance and can be changed either by rotating a knob on the operating panel or by sliding a bar in the computer interface. This is done throughout the entire focusing process and will be referred to as "focusing".

4.2.6.2 Aperture Alignment

When focusing at higher magnification one might notice that the sample moves around. This is because the electron beam is not aligned correctly which also affects the image quality. By turning on the wobbler function the sample is being brought in and out of focus, making the picture jump around quite a lot (see Figure 4.10). The goal here is to shift alignment in the X- and Y-axis until the sample is not moving. A good strategy is to start with one axis, making the sample just move either horizontally or vertically and thereafter correct the second axis.

Figure 4.10: The beam paths and the image during the wobbling (From Protrain)

4.2.6.3 Astigmatism

Astigmatism arises when the condenser lenses magnify one direction more than another, creating a stretched-out image. The operator should always bring the image into focus before attempting to correct the astigmatism. There are two stigmators, X and Y, which are used to correct the astigmatism, and the operator might have to alternate between these and re-focusing if the astigmatism is severe. The image will be stretched out one way during overfocus and the other way during underfocus (see Figure 4.11 and Figure 4.12). Going in and out of focus is a good way of telling whether the astigmatism corrections have been successful: if the image only gets more blurry the further out of focus it is, the astigmatism is corrected; if it gets stretched out, the astigmatism still needs work. Normally these corrections are made at a magnification greater than the working magnification, so that the images taken at the working magnification will not be spoiled if the operator does not make a perfect correction.

Figure 4.11: The image stretching observed during astigmatism and also when it is corrected (From Protrain)

Figure 4.12: An example of what the stretching looks like during under- and overfocus (From Protrain)

4.2.7 Charge Effects

The charge effect has a big impact on the image quality and is due to an excess buildup of electrons on the surface of the sample. This effect gets more common the less conductive the sample is. A charged up area will create an electric field which deflects incoming electrons, appearing as bright white on the image.

This is common when first adjusting the focus, since a small part of the sample will be exposed to the electron beam for a long time. A good way around this problem is to do the initial focusing at a place which will not be studied later, leaving the more interesting parts still uncharged. If the parts which are to be photographed still get charged there are some solutions: decrease the acceleration voltage, reduce the spot size, lower the vacuum and let the gas absorb some electrons, or lastly, coat the sample in a conductive layer.

4.3 Results

The results will be split into four parts: particles found in both the SSW and SML, particles found only in the SSW, particles found only in the SML and particles found in the spray samples, including a description of the particles found.

4.3.1 The Water Samples

These samples are from the subsurface water (SSW) and the surface microlayer (SML). Many of the particles appear in both the SSW and the SML, which is why I chose to combine the two categories into one. The SML typically had more very small particles, some areas looking like a galaxy full of stars, whereas the SSW had a more barren character with slightly larger particles. The following categories each include an example from the SSW to the left and from the SML to the right.

4.3.1.1 Small Grains

Typically in the size range of 100-200 nm, these grains were commonly found across the sample, increasing in abundance closer to the middle. They had a distinct three dimensional topography, often with a slightly knobby surface and asymmetrical shapes.

Figure 4.13 The small grains from the SSW (left, 40k magnification) tended to be slightly larger than the ones from the SML (right, 43k magnification).

4.3.1.2 Diffuse Strings

Ranging from several micrometers to tens of micrometers, these strings appeared as diffuse white shapes across the TEM grids. Some squares had very long strings while others were quite devoid of them, with just a short stump somewhere. The strings did not show any three-dimensional structure and sometimes contained other particles inside or on top of them.

Figure 4.14 A diffuse string from the SSW to the left in 10k magnification and an overview of a square from the SML to the right (1k magnification) showing the spread some of the strings had.

4.3.1.3 Nanoparticles

These are very small particles, only a few nanometers to some tens of nanometers in diameter, sometimes lying together in larger groups and sometimes as lone particles.

Figure 4.15 Several nanoparticles from the SSW sample and a lone particle from the SML sample. Both pictures are taken in 40k magnification.

4.3.1.4 Mucus

Areas of two-dimensional mucus were frequently spotted. They had a smooth surface but were fairly diffuse when trying to focus on them. Their sizes varied greatly, from some hundreds of nanometers to several micrometers.

Figure 4.16 A small area of mucus to the left (40k magnification) and a large area of mucus to the right (16k magnification).

4.3.2 Subsurface Water Specific Particles

Two different kinds of particle formations were found only in the SSW samples: small clusters and halos.

4.3.2.1 Small Clusters

These appear to be nanoparticles clumped up in small chains or clusters. They were seen together with other small clusters or with small grains in the near vicinity.

Figure 4.17 A few small clusters and a bit of mucus to the left and to the right we see a few small clusters, some scattered nanoparticles, a bit of mucus and a few small grains. Both pictures are taken at 40k magnification.

4.3.2.2 Halo

These consist of a larger core which either seemed to be one large particle (Figure 4.18, left part) or several small particles held together in some electron-transparent material (Figure 4.18, right part). Halos were rarely spotted; only 2-3 were observed in all of the SSW samples.

Figure 4.18 Two kinds of halos, both taken in 10k magnification. The left particle has a much smaller halo than the right one. Small particles inside of the right core are visible. In the left picture we see some scattered nanoparticles.

4.3.3 Surface Microlayer Specific Particles

Crystals and huge gatherings of nanoparticles (what I chose to call shimmer) were only found in the SML samples.

4.3.3.1 Shimmer

The shimmer mostly consisted of large areas, often some micrometers in width and length, containing lots of very small particles (some nanometers in diameter). These areas often had a very well defined edge, almost like an outline around the shimmer, though not always; on some occasions the shimmer faded out gradually. The visibility of the particles also varied quite a lot, from barely visible to looking like a galaxy of very bright stars (as seen in Figure 4.19).

Figure 4.19 Very bright shimmer (10k magnification) and a closer look at the mucus to the top left (40k magnification).

4.3.3.2 Crystals

A few crystalline particles were found throughout the SML samples, though these were quite rare. Their edges were very distinct and sharp, unlike the soft edges of the small grains.

Figure 4.20 Different forms of crystals, both pictures taken at 40k magnification. To the left we see some mucus as well.

4.3.4 The Spray Samples

The spray samples were a lot more barren and devoid of small particles compared to the water samples. What these samples did have, however, were very large particles, several micrometers in diameter. The samples also contained a substance which was sensitive to the electron beam and vaporized under prolonged exposure.

4.3.4.1 Assemblies

These particles were commonly found, varying in shape and size, all seeming to be assembled from small lumps. They were normally found in sizes ranging between some hundreds of nanometers and several micrometers. There were, however, some very large ones as well, up to ten micrometers in size.

Figure 4.21 The assemblies in different shapes. Left in 10k magnification and right in 40k magnification


Figure 4.22 A very large assembly with a particle from Figure 4.21 edited in to give perspective to the size. The picture is taken at 11k magnification and the small particle has been resized accordingly.

4.3.4.2 Fragile Gel

This mucus-like substance was found occasionally and was vaporized by an intense electron beam (achieved at high magnification) or by long exposure. At high magnification the magnified area would disappear completely, while during long exposure the gel would slowly recede, starting from the outer edges and disappearing inwards.

Figure 4.23 To the left we see fragile gel damaged by very high magnification (picture taken at 30k magnification). To the right we see a picture of fragile gel taken by freezing the image, exposing the gel for a long time and making it recede from the outer edge towards the middle.

4.3.4.3 Nanoparticles

These were similar to those found in the surface microlayer samples but far less frequent. Rather than appearing in clusters (see left part of Figure 4.15), they were often found alone in a very barren area.

Figure 4.24 One nanoparticle seen in 10k magnification (left) and two others seen in 40k magnification (right).

4.3.4.4 Small Clusters

Similar to those found in the SSW samples, these clusters also seemed to be composed of nanoparticles. They sometimes formed even more compact structures, however, almost looking like solid flakes. The clusters were far less frequent than the SSW ones.

Figure 4.25 A flake (left, 40k magnification) and clusters (right, 40k magnification) resembling those in Figure 4.17.


5 The Scanning Electron Microscope and Its Representations

5.1 The Pedagogical Study

5.1.1 The Goal of the Study

The pedagogical part of this master thesis was to study the interaction between a user and a JEOL electron microscope, to see how their reasoning changed depending on the feedback received from the operating screen. In order to do so, an interview was necessary so the users could explain their thoughts while operating the instrument. To minimize disturbances while operating the SEM, the interviewees were videotaped and left undisturbed while performing a simple task using the microscope, and were later interviewed while watching selected parts of the video.

This method is called stimulated recall and is explained in Chapter 5.2.5.

These interview results were used, in combination with a literature study regarding external representations and cognitive load theory, to further develop high school lab instructions for a physics lab. I thus needed to know very specifically what I wanted to obtain from the interviews and the SEM study.

5.1.2 The SEM Experiment

First of all, why use a SEM for this part? The obvious downsides are that it is expensive to use and requires training before a person is allowed to operate it (thus limiting suitable interviewees).

However, when working with the SEM the operator tries to manipulate the instrument to get the best resolution and image quality possible. This essentially means that the operator is constantly working with an image, planning his next move depending on the feedback he has gotten from the image he is working on. Operating experience helps, but due to the complexity of the instrument, solid physics knowledge of how the lens system and the rest of the instrument work is also necessary.

This means, all in all, that the person operating a SEM has to plan ahead by using prior knowledge and experience, then change the plans according to the feedback received from the image and adjust the image by using knowledge regarding how the SEM works.

Three people volunteered to operate the instrument and to be interviewed afterwards. Their experience varied greatly: one person had worked with the SEM for 30 years, the second had used the SEM for several years in a specific field but had never tried samples of the kind used for this study, and the last person was new to the SEM, having used it for only 3-4 weeks.

5.1.3 The SEM Task

All three people got the same instructions: a brief description of the sample and the task. The sample was a TEM grid coated with a carbon fiber film with gold sputtered across it. It is a sample which new operators get to practice on since it is highly conductive and easy to focus on. Detailed pictures of what the sample looks like when viewed in the SEM are shown in Chapter 5.3.4.

The task can be split up into four parts and I will write them down as four separate steps to make the whole task easier to overview.

1. The first part of the task was to find the smallest particle (or structure) the participant deemed possible.

2. When a particle or structure had been found, the participant was to focus the beam as best he or she could and get (according to him or her) the best possible quality.

3. This particle or structure was then to be measured using a measuring tool which is built into the SEM interface.

4. Ultimately the participant was to take a picture of what he or she had achieved.

They were allowed to use any settings they wanted and had complete freedom with no time constraints. Each person started with the same settings at the same position and magnification.

5.1.4 Focus of the Experimental Study

Before conducting the experiment I needed to figure out exactly what feedback I wanted to obtain from this research, in order to create the task for the test persons. I therefore wrote an interview guide in which I summarized my main questions and then broke them down into subcategories. These questions were:

• What part of the interface helps the user to intuitively progress in his problem solving process?

• What parts are excessive or confusing?

• How does the user's prior knowledge affect his performance?

The finished guide can be found in Appendix 1. From this I started creating more specific, more easily observable questions. A few examples would be whether the person had performed a similar task earlier, what kind of plan the person had before taking on the task, and what the person did when the feedback received did not correspond to his expectations. This guide was then used when creating the actual, individually tailored interview questions which I would ask the interviewees.

5.1.5 The Interview Questions

Since there would be quite a lot of excess material on the video tapes, I knew I had to select certain points of interest in each film to make the interview as effective as possible. When all the films had been produced I looked through them, writing down time stamps for each of the questions I wanted to ask.

The questions for the three persons were all fairly similar, and all shared two identical questions. The first question they received was what their plan was after first hearing their task. The last question was why they were satisfied with the picture they took and why they wanted to stop when they did.

Since each person used the instrument in a slightly different manner, it was impossible to ask exactly the same questions or to weight the interviews in a particular way. Instead I selected moments in the films when they performed an action they had not done before and asked the reason for it. The persons were of course allowed to comment on things in the film on their own during the interview if they wanted to. After the interviews I transcribed the audio files and started categorizing the answers. This categorization and a compilation of the interviews can be found in Chapter 5.3.


5.2 Theoretical Framework

Representations are all around us all the time. They allow us to interpret our surroundings and structure our cognitive processes when solving problems. There are mainly two types of representations: internal and external. Internal representations can be compared to cognitive schemata, while external ones are text, pictures, movies, music etc. With proper usage of different external representations it is possible to decrease the load on the cognitive processes, which increases task focus and improves schema acquisition (van Bruggen, Kirschner & Jochems, 2002).

5.2.1 The Cognitive Load Theory

In cognitive load theory, knowledge is modeled as thought patterns, called schemata, stored in the long term memory. When a person engages in a task and possesses the proper schemata to solve it, the schemata will be activated and the relevant procedures to complete the problem at hand will be carried out. These processes implemented by the schemata are automated and thus require less working memory (Ibid.).

If a schema, the way we interpret and interact with the world, does not correspond with what we see, we try to adjust in one of two ways. One way is to change our old schemata or create new ones to better suit our new experience. This process is called accommodation; an example would be a person learning how to eat with chopsticks. The person knows how to eat with Western cutlery but needs to adjust his hands in order to eat with the chopsticks (Piaget, 2008).

The other way we adjust to a situation is through a process called assimilation, in which we adjust our new experiences according to our previous schemata. The same person from the previous example is served a new food which he has never seen before, but notices he can eat it the same way he has always eaten. The experience was new but old schemata could be used (Ibid.).

Sweller et al. (1988) introduced three different kinds of cognitive load. The intrinsic load is related to the difficulty of a concept: for example, reading upside down is a lot harder and requires more mental effort than reading the normal way. Extraneous load is unnecessary cognitive load caused by poor instructions. This can be contrasted with the germane load, which is the cognitive load that creates useful schemata. Sweller et al. describe the difference between the two as

"Although both can be altered by instructional interventions, extraneous cognitive load reflects the effort required to process poorly designed instruction, whereas germane cognitive load reflects the effort that contributes to the construction of schemas. Appropriate instructional designs decrease extraneous cognitive load but increase germane cognitive load." (Sweller et al., 1988)

5.2.2 Problem Solving Strategies

There are several different approaches to solving a problem. One is the means-end analysis, where the problem solver gets a problem with a clear goal and starts working backwards. Since a novice does not have the schemata required to tell him which path to take in order to reach the goal, he instead looks at the goal itself to try to figure out the necessary steps. It is an effective method but imposes a great cognitive load, since the problem solver needs to remember many things at the same time: the problem, the goal, the relation between the two, problem solving operations and, if sub goals have been used, a goal stack to keep track of which sub goals have been finished and which are next. While this strategy has been proven effective for solving a problem, it has not been effective for educational purposes, since the problem solver tends not to obtain desirable schemata. He might learn how to solve problems of a similar kind but fails to understand the underlying theories as to why the different steps he took were successful (Sweller, 1988).

The other approach for a novice problem solver is to explore different paths and see what information he can get. Even though the goal might be clear, he begins at the starting point and experiments his way forward, seeing which variables are possible to derive or what consequences different decisions have. Having a clear goal tends to encourage people towards a means-end strategy, which means that in an educational scenario open questions are preferable - questions which ask the problem solver to derive as much information as possible from a given start (Ibid.).

The third strategy applies when the problem solver is an expert. An expert has the required schemata and knows exactly which steps to take and which sub goals are required to obtain the desired results, and needs neither to explore different angles of the problem nor to work his way backwards (Ibid.).

These different strategies also appear in a study conducted by Kohl and Finkelstein (2008) in which experts and novices got to solve different physics problems. There were two physics problems with a heavy focus on representations. The first problem was solved solely by analyzing different representations, whereas the second was a calculation problem which was significantly easier to solve by incorporating some self-made representations.

What Kohl and Finkelstein noticed was that the novices and experts used about an equal amount of different representations during their problem solving process, with the experts being more successful and more effective. When working with the representations the experts, when compared to the novices, spent a greater fraction of their time pursuing specific goals or subgoals, even when not knowing exactly how to proceed (Kohl & Finkelstein, 2008). Novices also spent some time using this technique but they were more likely to explore without a clear purpose with hopes of discovering something which would help in the problem solving process (Ibid.).

5.2.3 External Representations

There are many kinds of external representations and, according to Alty (1991), audio is good for stimulating creative thinking, movies for delivering information regarding a process or an action, text for emphasizing details and diagrams for explaining a theory. An example would be to first stimulate creativity with some music, then present a diagram of the theory the person is about to learn, including some explanatory text (Ibid.).

A creator of educational material has to be careful when including external representations. A misguiding or confusing representation will make schema acquisition harder or, in the worst case, erroneous, leading to assimilation of "bad" schemata (Scaife & Rogers, 1996). Scaife and Rogers use three characteristics when analyzing external representations: computational offloading, re-representation and graphical constraining.

" [Computational offloading] refers to the extent to which differential external representations reduce the amount of cognitive effort required to solve informationally equivalent problems." (Ibid). Larkin and Simon illustrate this very well with a geometric example. Compare the geometry task seen in Figure 5.1 in its purely sentential form and when accompanied by a diagram depicting the scenario.

The diagram offers easily obtained information regarding the location of different objects and how they relate to each other, which reduces the need to search and recognize. The geometry problem in its sentential form requires a greater computational load to keep track of how the problem is progressing. Possible states, consequences and object relations have to be explicitly formulated and computed mentally in order to solve the problem in its purely sentential form (Scaife & Rogers), which, of course, leads to a greater cognitive effort.

Figure 5.1: Verbal and diagrammatic representation of a geometry problem (From Larkin & Simon, 1987)

By using external representations in the form of diagrams, pictures, movies etc., the answer can more easily be read directly off the representation, rather than having to be formulated mentally before the answer is acquired. This is a very clear example where the latter instruction reduces the extraneous load and increases the germane load.

This leads us into re-representation, which "[...] refers to how different external representations, that have the same abstract structure, make problem-solving easier or more difficult." (Scaife & Rogers). The previous geometric problem is still a good example, showing the different difficulties between its sentential and half-graphical, half-sentential forms. Another example is that a person today would have an easier time interpreting and solving the mathematical problem 67*10 in Arabic numerals rather than the Roman equivalent LXVII*X (with the participant obviously being informed that the latter is in Roman numerals).
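The Roman numeral example makes the cost of a poor re-representation concrete: before 67*10 can even be attempted in the form LXVII*X, the solver must first translate the notation, a step which a small parser makes explicit. The sketch below is purely illustrative of this translation work:

```python
def roman_to_int(numeral):
    """Convert a Roman numeral string to an integer."""
    values = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
    total = 0
    for i, ch in enumerate(numeral):
        # Subtractive notation: a smaller symbol before a larger one is subtracted.
        if i + 1 < len(numeral) and values[numeral[i + 1]] > values[ch]:
            total -= values[ch]
        else:
            total += values[ch]
    return total

# LXVII*X only becomes an easy problem once re-represented as 67*10.
product = roman_to_int('LXVII') * roman_to_int('X')
```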

Graphical constraining refers to how the external representation limits and constrains different inferences from a given problem or theory. The general idea is to force the interpreter of an external representation to reach a certain conclusion, avoiding erroneous interpretations. Computational offloading is increased by having good re-representations and graphical constraining (Ibid.). Another way to look at good graphical constraining is that a representation should be fairly simple and should not involve many rules. The fewer rules a representation has, and the less abstract it is, the easier it is for the problem solver to work with it, and consequently the easier it is to solve the original task at hand. This can obviously backfire with a poorly designed diagram which forces the problem solver to reach faulty conclusions.

5.2.4 Zone of Proximal Development (ZPD)

The Zone of Proximal Development is a theoretical development zone introduced by Lev Vygotskij. He explains the zone as

"[...]the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance, or in collaboration with more capable peers" (Vygotskij, 1978)

What he means by this is that the ZPD is the zone defined by the difference between what a learner can do without help and what he or she could do with help. An example would be a person who wants to bake a cake but lacks the schemata required to do so. If the person gets help from something which contains the required information (a teacher, a parent, a book, a guide on the internet etc.) the person will be able to bake the cake. This is not to be confused with the person actually learning how to bake; he could simply have followed the instructions from a cookbook and forgotten what he did the moment he did it. Obtaining the correct schemata might take time depending on the task, and relies on the pedagogical quality of the help and the effort of the learner.

5.2.5 Stimulated Recall

Stimulated recall is an interview technique in which a person is video or audio recorded during a scenario where the person is active (solving a task, talking to people etc.). The person is later interviewed while, together with the researcher, looking at the recorded material. This stimulates and helps the interviewed person to recall thought processes from the time of the recording (Haglund, 2003).

There are some recommendations for dealing with issues regarding memory, retrieval and instructions (Mackey & Gass, 2005). These include, among others, giving clear guidelines to each participant (Schepens, Aelterman, & Van Keer, 2007), carrying out the stimulated recall interview as soon as possible after the recorded scenario (Ibid.), and training participants just enough to enable them to carry out the procedure without cueing them to extra and unnecessary knowledge (Mackey & Gass).

The advantage of using this technique is that people can carry on with their work, undisturbed by questions from the researcher. The method has, however, been criticized with arguments questioning whether the interviewed person recalls the actual thought processes and feelings or creates them subconsciously while looking at the recorded material. Further criticism argues that the person might be influenced by external stimuli, which makes accurate recall even harder (Haglund).

The reason I chose stimulated recall as the interview technique is that the criticism regarding disturbances created by irrelevant external stimuli was negligible in this study. The test person operating the scanning electron microscope was in a room with no other people in it besides me and the operator himself. The only things which made any sound or could catch someone's attention were the image screens, which means any excess external stimuli were very low. Another reason was that I did not want to create more cognitive load for the test persons by taking away some of their focus to answer questions during their time of operation.

References
