
MATHEMATICAL MODELING FOR PRESERVICE TEACHERS:

A PROBLEM FROM ANESTHESIOLOGY

ABSTRACT. The study reported in this article deals with the observed actions of prospective Swedish mathematics teachers as they were working with a modeling situation.

These prospective teachers were preparing to teach in Grades 4 to 9 or in the gymnasium (Grades 10 to 12) and were students in a course in mathematical modeling. The larger study of which this study was a part focused on these students’ understanding of modeling and how they related mathematical models to the real world. This article also attempts to illustrate how mathematics is intertwined with many other subjects, in this case medicine.

KEY WORDS: assessment, mathematical modelling, teacher education

The heart of applied mathematics is the injunction “Here is a situation; think about it”. The heart of our usual mathematics teaching, on the other hand, is: “Here is a problem; solve it” or “Here is a theorem; prove it”. We have very rarely, in mathematics, allowed the student to explore a situation for himself and find out what the right theorem to prove or the right problem to solve might be.

Henry Pollak (1970)

Model and modeling are common expressions with many seemingly different meanings. We are introduced to new car models that we are supposed to feel attracted to, to picture ourselves in possession of the new car. Architects use models of a landscape or a house to illustrate a product they want to sell. In the fashion industry, a model is a person who wears clothes that other people watching can imagine themselves wearing.

Fashion models are selected because they possess certain idealized human characteristics, which change from time to time but always refer to ideals such as thinness, height, skin color, and attitude. Children use many models of reality in their toy cars, dolls, trains, and so forth. All modeling activities have at least two aspects in common: they use a model in order to think about or introduce the related reality, and the model is something more or less idealized or simplified.

The process of mathematical modeling also has a variety of definitions.

As used in secondary mathematics, it ordinarily entails taking a situation, usually one from the real world, and using variables and one or more . . .


. . . out, of helping locate a trail, and of making his way cross-country with only intuition and a compass as a guide. "Cross-country" mathematics is a necessary ingredient of a good education (p. 329).

Prospective teachers need to understand a great variety of topics and approaches in mathematics. Today these topics include concepts, principles, methods, and procedures that were not traditionally part of school or college mathematics but that many secondary school students may now address very well through the use of computers and graphing calculators.

Applied mathematics as a field and the process of mathematical modeling in particular are one part of the mathematical curriculum that may be broadened and enhanced through the use of technology. The presence of technology in today's classrooms may assist teachers in implementing Pollak's vision of cross-country mathematics, since tedious, routine calculations can be done by the technology and a much greater number of realistic, open-ended situations can be modeled. In Sweden, the United States, and many other countries, the availability of the graphing calculator, with its built-in regression analysis capability for comparing a number of mathematical models, has changed the school mathematics curriculum. Today, secondary school students can handle problems that were not even possible in college mathematics only a decade ago.

The Swedish school should, in its teaching of mathematics, strive to ensure that students, in projects and in group discussions, develop their conceptual capacity and learn how to formulate and justify different methods for solving mathematical problems.

They should also develop their ability to give shape to, refine, and use mathematical models, together with a critical assessment of the models' conditions, possibilities, and limitations. (Skolverket, 2000, p. 2, my translation).

A national statement like this addresses many questions, including how to prepare prospective secondary school mathematics teachers to function in an environment where teaching and learning are characterized by processes and activities. The statement also reflects new trends in how to view teaching and learning in mathematics. What


mathematics educators mean by terms such as quality in mathematical knowledge and assessment has to be in focus. From a researcher's view, it is important to try to clarify and describe different approaches to teaching and learning in mathematics. For example, Sfard (1997) emphasizes that our thinking on learning may be rooted simultaneously in two different conceptual domains:

On the one hand, there is the view of learning as an acquisition of some private property;

and on the other hand there is the idea of learning as becoming a participant in a certain practice or discourse. (Sfard, 1997, p. 120).

My research, briefly discussed in this article, arose from my experience in teaching mathematics to prospective mathematics teachers at the University of Gothenburg and was stimulated in part by the ongoing evolution of technology. During the past 3 decades, personal computational technology has evolved from four-function calculators in the 1970s through scientific calculators in the 1980s to graphing and symbolic calculators in the 1990s. Today, most students who study high school or college mathematics also have easy access to computers equipped with a variety of mathematical tool systems. The evolution in technology has affected the content of some courses in mathematics for teachers and many times also the way those courses are taught.

During the last 5 years or so, there has been a distinct change in some of the courses in the program for prospective mathematics teachers at Gothenburg. In the mid-1990s, technology was introduced as an isolated part of the program, often through a visit to the computer laboratory. Today, the program includes courses in which the technology is an integral part of the syllabus, including the assessment. In my case, the course in mathematical modeling I teach every semester changed dramatically during that time. The new software tools that became available, in addition to the perspectives I brought with me from the University of Georgia, encouraged me to restructure the course together with a colleague and focus much more on mathematical modeling. As a consequence, I started to look more closely at the students' conceptions of mathematical modeling. It should be explained at this point that the mathematical modeling that I studied is the kind in which students work with data drawn from real life. Although the data are usually somewhat simplified, the problems are more open and less constrained than, for example, standard mathematical word problems.


In Figure 1, the left-hand column represents the real world, the right-hand column represents the mathematical world, and the middle column represents the connection between the two. In the middle column, the problem is simplified and formalized, and then the mathematical results obtained are translated back into terms meaningful in the original real-world situation. In a straightforward modeling process, one might be able to go through Stages 1 through 7 in sequence.

Figure 1. Main stages in modeling (adapted from Mason, 1988, p. 209).

But mathematical modeling is not always straightforward, especially when realistic results are expected. There often is a tradeoff between a model sufficiently simple that a mathematical solution is feasible and one sufficiently complex that it faithfully mirrors the real-world situation. If the model originally defined is too simple to be realistic, the mathematical results may not translate into valid real-world results. In that case, one might have to return from Stage 6 to Stage 2 and repeat the process using a more sophisticated model. In many cases, particularly in the social sciences, it is difficult to carry out the Stage 6 validation step at all, and one might simply proceed directly from Stage 5 to Stage 7. In other cases, when the mathematical model is so sophisticated that the mathematics is


intractable, one might have to return to Stage 2 and simplify the model in order to make a mathematical solution feasible. But then the validation step of Stage 6 might indicate that the model is now too simple to yield correct real-world results. There is an inevitable tradeoff, therefore, between what is physically realistic and what is mathematically possible.

The construction of a model that adequately bridges this gap between realism and feasibility is the most crucial and delicate step in the process.

Skovsmose (1994) distinguishes between two types of mathematical modeling; namely, pointed modeling and extended modeling. When we perform pointed modeling, the problem we are dealing with is transformed into a formal language, in terms of which we try to solve the original problem. Pointed modeling is the type whose stages were just discussed.

But extended modeling is different. In this case, mathematical modeling is used not to describe a specific problem situation but to provide a general foundation for a technological process. Mathematics becomes part of the conceptual framework we use to interpret the reality of our modern world. Through that framework our daily lives are structured mathematically – how we measure distance, space, time, and so forth. A pointed mathematical model must be based on some sort of specific interpretation of reality.

Having said all this about mathematical modeling, I should mention that I see pure and applied mathematics as part of problem-solving, and I see mathematical modeling as part of applied mathematics. Nevertheless, it is obvious that quite a few of the activities in the modeling process can be characterized as problem-solving, so it is both hard and fruitless to try to find a strict order or hierarchy between problem-solving and mathematical modeling.

A MATHEMATICAL MODELING SITUATION FROM ANESTHESIOLOGY

The students' responses to one of the modeling problems in an examination in the modeling course in Gothenburg illustrate the findings of my research. The 25 students in the class had been given an option to choose a subject area from which they would create a model during the course by developing the necessary mathematical theory. Subjects like biology, chemistry, economics, medicine, and physics were proposed, and the students decided that they wanted to see how mathematics might be used in medicine. Finally, they chose the concept of cardiac output.

As an introduction to the richness of mathematics that exists in medical measurement, I started by presenting the mathematical treatment of cardiac output. In constructing a mathematical model of the circulatory system in a human body, we consider it to be a closed loop and assume that the blood flowing around this loop is incompressible. Consequently, the total volume V of blood (measured in liters) in the system is constant. The rate at which this blood flows around the circulatory loop is critical. We can (in principle) measure the flow rate (in liters/minute) past any given point in the system.

Attention ordinarily is focused on the heart itself, and the cardiac output CO is the rate at which blood is pumped out of the heart. The cardiac output of the heart is the product of

• the stroke volume SV – the volume of blood pumped per beat – and

• the heart rate HR – number of beats per minute.

Typical values for a 70-kg man are

SV = 70 to 80 cm³/beat = 0.070 to 0.080 liters/beat
HR = 70 to 80 beats/minute
CO = 5 to 6.5 liters/minute

To permit the comparison of patients with different body sizes, cardiac output often is considered relative to body surface area BSA (in square meters). The cardiac index is the ratio CI = CO/BSA, measured in liters per minute per square meter. A typical value of CI for a 70-kg man with a body surface area of about 2 m² is 2.5 to 3.5 liters/minute/m². However, it should be noted that a normal person has a large "reserve capacity" that allows the cardiac output to increase to as much as 25 to 30 liters/minute during strenuous exercise.
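For readers who want to check these figures, here is a minimal sketch in Python (not part of the original article, which used Excel and graphing calculators) of the two defining relations, CO = SV × HR and CI = CO/BSA, evaluated at the typical values quoted above.

```python
# A sketch (not from the article) of the definitions above: CO = SV * HR and CI = CO / BSA,
# evaluated at the "typical 70-kg man" values quoted in the text.

def cardiac_output(stroke_volume_liters, heart_rate_bpm):
    """Cardiac output in liters/minute from stroke volume (liters/beat) and heart rate (beats/minute)."""
    return stroke_volume_liters * heart_rate_bpm

def cardiac_index(co_liters_per_min, bsa_m2):
    """Cardiac index in liters/minute per square meter of body surface area."""
    return co_liters_per_min / bsa_m2

if __name__ == "__main__":
    for sv, hr in [(0.070, 70), (0.080, 80)]:
        co = cardiac_output(sv, hr)
        ci = cardiac_index(co, 2.0)  # body surface area of about 2 square meters
        # prints CO = 4.90 and 6.40 liters/min, CI = 2.45 and 3.20 liters/min/m^2
        print(f"SV = {sv} l/beat, HR = {hr} bpm -> CO = {co:.2f} l/min, CI = {ci:.2f} l/min/m^2")
```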

Measuring Cardiac Output

Cardiac output is often monitored during and after surgery (especially in the case of heart surgery). Serial measurements are used to assess the general status of the circulation and to determine the appropriate hemodynamic therapy and estimate its efficacy. Several other useful variables – such as the stroke volume, the left ventricular stroke work index, the


systemic vascular resistance, and the stroke index – can be determined once the cardiac output is known.

Cardiac output can be measured by several techniques, all based on the same idea for measuring the flow rate in a fluid loop. A measurable indicator is injected into the fluid, and its subsequent concentrations at various points in the flow loop are measured. Such a method was first proposed in 1870 by the German physiologist Adolph Fick, who described a means of determining blood flow by measuring overall oxygen intake and content in the blood.

One determines how much oxygen an animal takes out of the air in a given time . . . . During the experiment one obtains a sample of arterial and a sample of venous blood. In both the content of oxygen is to be determined. The difference in oxygen content tells us how much oxygen each cubic centimeter of blood takes up in its course through the lungs, and since one knows the total quantity of oxygen absorbed in a given time, one can calculate how many cubic centimeters of blood passed through the lungs in this time (Miller, 1982, p. 1058).

The indicator dilution method is a variant of Fick's technique in which a known amount I of an indicator substance is injected into the blood stream and its concentration C(t) (in kilograms per liter) is measured as a function of time t at a single downstream location. This dilution method was first introduced by the British physician Stewart, who, together with a colleague, Hamilton, developed the dye solution method and the Stewart-Hamilton formula:

$$CO = \frac{I}{\int_0^{\infty} C(t)\,dt}$$

which gives the corresponding cardiac output CO (see Glantz, 1979, p. 38; Miller, 1982, p. 1059).

Derivation of the Stewart-Hamilton Formula

Suppose that we inject the indicator at a point through which all the blood passes and place a sensor downstream from the injection site, also at a point through which all the blood must pass. In practice, this is done by an injection of, for example, indocyanine green into a vein or via a catheter into the right ventricle. A few seconds after the injection, the dye begins to appear in the arterial blood. The dye concentration gradually increases until it reaches a maximum; then it begins to decline until a second rise in concentration occurs as a result of recirculation (see Figure 2; Figures 2 to 5 are taken from Davis, Parbrook and Kenny, 1995).

The Stewart-Hamilton formula says simply that what goes in (at the injection site) must eventually be measured at the downstream sensor site.


Figure 2. The indicator dilution method.

The formula itself results from a simple “mass balance” at the sensor site.

Assuming the indicator is injected into the circulatory loop at time t = 0, let Q(t) denote the mass (in kg) of indicator that has passed the sensor site by time t > 0. Then the additional mass dQ of indicator that passes the sensor site during the very short time interval from time t to time t + dt is given by

$$dQ\ (\mathrm{kg}) \approx CO\ \left(\frac{\mathrm{liters}}{\mathrm{min}}\right) \times C(t)\ \left(\frac{\mathrm{kg}}{\mathrm{liter}}\right) \times dt\ (\mathrm{min})$$

(recalling that the flow rate or cardiac output CO is assumed constant).

(Note the cancellation of units on the right, which, if it did not yield the correct units on the left, would indicate a mistake in the analysis.) Dividing by dt and then considering the limit as dt → 0, we see that

$$Q'(t) = \frac{dQ}{dt} = CO \cdot C(t)$$

If all of the injected indicator passed the sensor site precisely once, then we would get the total amount I of indicator by summing the infinitesimal amounts Q'(t) dt from time t = 0 to t = ∞,

$$I = \int_0^{\infty} Q'(t)\,dt = \int_0^{\infty} CO \cdot C(t)\,dt = CO \cdot \int_0^{\infty} C(t)\,dt.$$

If the amount I of indicator injected is known and measurements at the sensor site permit the computation of the integral $\int_0^{\infty} C(t)\,dt$, then a simple division yields the desired cardiac output formula:

$$CO = \frac{I}{\int_0^{\infty} C(t)\,dt}.$$


Because a definite integral of a positive-valued function gives the area under its graph, the Stewart-Hamilton formula says that the cardiac output equals the quantity of indicator injected divided by the area under the concentration-versus-time curve C = C(t) in the tC-plane. At this point in the analysis, the (constant) parameters CO and I, the variables t and C, and the Stewart-Hamilton formula relating them, constitute a simple mathematical model for the process of circulation and dilution that ensues upon the injection of indicator into the circulatory system.
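As a quick plausibility check on this relation (not something from the article), the following Python sketch builds a synthetic concentration curve C(t) = A·t·e^(−kt), scales it so that its area equals I/CO for a chosen "true" cardiac output, and then recovers that output from the Stewart-Hamilton formula; the curve shape and the parameter k = 0.5 are arbitrary assumptions made only for illustration.

```python
# A plausibility check (not from the article): build a synthetic concentration curve
# C(t) = A * t * exp(-k*t), scale it so its area equals I / CO_true for a chosen
# "true" cardiac output, and recover that output with CO = I / integral(C dt).
# The curve shape and k = 0.5 per minute are arbitrary assumptions for illustration.

import math

def recovered_cardiac_output(co_true, injected_mg, k=0.5, t_max=60.0, dt=0.001):
    target_area = injected_mg / co_true          # mg*min/liter, what the integral must equal
    A = target_area * k ** 2                     # since the exact area of A*t*exp(-k*t) is A/k^2
    area = sum(A * (i * dt) * math.exp(-k * i * dt) * dt for i in range(int(t_max / dt)))
    return injected_mg / area                    # Stewart-Hamilton

if __name__ == "__main__":
    print(f"recovered CO = {recovered_cardiac_output(5.0, 5.0):.3f} liters/min (true value 5.0)")
```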

Complications in the Model

It should be noted that today the dye solution method has been almost entirely replaced by variants of a thermodilution method. Originally (around 1954), the thermodilution method used an iced or room-temperature solution of salt or dextrose in water. Today, the method uses a small heating thermistor on the Swan-Ganz catheter (a lung artery catheter) (see Figure 3). The temperature T_B(t) of the blood at the sensor is measured (rather than the injectate concentration), and the simple Stewart-Hamilton formula discussed above is replaced with the formula (Miller, 1982, p. 1059):

$$CO = \frac{K\, I\, (T_B - T_I)}{\int_0^{\infty} T_B(t)\,dt}$$

where

T_B − T_I = the initial blood-injectate temperature difference (T_B = T_B(0)), and

K = an empirical constant depending on the catheter size, the specific heat and volume of the injectate, and the rate of injection.

In the course, the students' attention was drawn to the original Stewart-Hamilton formula and method. But even here a significant complication results from the fact that the circulatory system is a closed loop. Before the indicator concentration curve returns to zero (i.e., when the entire indicator has passed the sensor), the indicator concentration exhibits a secondary peak due to recirculation. There are two ways to evaluate cardiac output in the presence of recirculation. One could develop theoretical equations to account explicitly for recirculation, but this approach would require detailed analysis of the indicator washout curve, rather than simply finding the area under the curve. A simpler and perhaps more effective method to account for recirculation is to remove its effect from the observed indicator "washout curve".


Figure 3. The thermodilution method.

Figure 4. Mud removal from a bathtub.

The way the circulatory system washes out drugs and anesthetics from the tissues by means of the blood flow is quite similar to the way a muddy bathtub can clear itself (see Figure 4). The mud can be cleaned out by running water in from the tap and draining water out of the tub simul- taneously at the same rate of flow. For simplicity, let us assume that bath water in the tub is constantly stirred so that the concentration of the mud is always uniform. If Q(t) denotes the amount (kg) of mud in the tank at time t, then the concentration at time t is given by c(t) = Q(t)/V (kg/liter), where V is the (constant) volume of bath water in the tub. Hence the change dQ in Q during the short time interval dt is given by

$$dQ = -r\,c(t)\,dt = -r\,\frac{Q}{V}\,dt = -kQ\,dt.$$

Thus Q(t) satisfies the simple differential equation

$$\frac{dQ}{dt} = -kQ$$

with the familiar exponential decay solution $Q(t) = Q_0 e^{-kt}$, where k = r/V and $Q_0$ is the initial amount of mud in the tank.
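For completeness, the "familiar" solution follows by separating variables (a standard step spelled out here, not taken from the article):

$$\int \frac{dQ}{Q} = -\int k\,dt \;\Longrightarrow\; \ln Q = -kt + C \;\Longrightarrow\; Q(t) = Q_0\,e^{-kt}.$$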

For the purpose of a preliminary analysis of the indicator dilution curve, we assume that the initial decreasing part of the indicator dilution curve – before recirculation sets in – is similarly exponential in character. Then suppose this dilution curve is plotted on semi-log paper – paper on which the vertical (Q) scale is logarithmic and the horizontal (t) scale is linear.

This exponentially decreasing part of the curve then looks like

$$\ln Q = \ln Q_0 - kt$$

Thus the initial “downstroke” is a straight line on this semi-log plot (Figure 5). With the help of suitable software or graphing calculators, we may even use measured values of the concentration to fit an exponential curve to the washout part (to the right of the peak) of the curve in Figure 5.

Figure 5. Plot of a washout curve on a semi-logarithmic paper.
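The same semi-log idea is easy to try in a scripting language. The sketch below (Python with NumPy; an illustration, not the students' Excel/CurveExpert workflow) fits a least-squares line to ln C over the stretch t = 7 to 12 seconds of the Table I data, a stretch assumed here to lie after the peak and before the recirculation bump; the negative of the slope estimates k.

```python
# A sketch (Python/NumPy, not the students' Excel/CurveExpert workflow) of the semi-log idea:
# on the washout part an exponential C(t) = C0*exp(-k*t) becomes the line ln C = ln C0 - k*t,
# so an ordinary least-squares line through (t, ln C) recovers k and C0.
# The stretch t = 7..12 s is assumed to lie after the peak and before recirculation.

import numpy as np

t = np.array([7.0, 8, 9, 10, 11, 12])
deflection_mm = np.array([118.0, 100, 80, 66, 53, 41])   # Table I values for t = 7..12
c = deflection_mm * 5.0 / 55.0                           # 55 mm of deflection = 5 mg/liter

slope, intercept = np.polyfit(t, np.log(c), 1)           # straight line on the semi-log plot
k, c0 = -slope, np.exp(intercept)
print(f"decay rate k = {k:.3f} per second, extrapolated C0 = {c0:.1f} mg/liter")
```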

An Examination Problem

During the course, the mathematics described above was discussed and introduced step by step together with an explanation of the medical jargon involved. The final examination was given as a take-home exam, handed out on a Monday and due the following Friday. One of the problems was the following modeling situation:


Figure 6. Change in dye concentration for a cardiac patient injected with indicator dye.

The cardiac output, as monitoring devices present it, is normally traced out on a paper slip like that shown in Figure 6. The paper shows the change in dye concentration as a deflection from zero. Normally the measured CO is also printed on the paper.

Using the theory of cardiac output discussed in class, calculate the cardiac output for the patient whose measured data are presented in Table I.

The dye solution injection was 5.68 mg. Observe that 55 mm of deflection equals a change in dye concentration of 5 mg/liter.

SOLUTION STRATEGIES

Disregarding Recirculation

The students, who at this point in the course were experienced users of graphing calculators and computer tools like Excel (Microsoft, 1995) and CurveExpert (Hyams, 1996), started in different directions, but a common approach was to visualize the change in dye concentration. By entering the data in, for instance, Excel, it is quite easy to do the following:

(a) Obtain a scatter plot and thereby a view of the modeling situation;

(b) Transform the deflection in mm on the paper slip to the "real" concentration in mg per liter; and

(c) Make a semi-log plot by taking natural or base-10 logarithms of the measured data points, and thereby make a visual estimate of where the curve would cross the horizontal axis were it not for the recirculation phenomenon (compare with Figure 5).

The results of using Excel to illustrate the reasoning in (a), (b), and (c) are given in Figures 7, 8, and 9.
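Steps (b) and (c) amount to one multiplication and one logarithm per data point. A minimal Python sketch of the same conversion on the Table I data follows (the students used Excel; the script is only an illustration). Note that the point (0, 0) has no logarithm, which is why, as mentioned later, some students simply removed it.

```python
# A sketch of steps (b) and (c) on the Table I data (the students used Excel; this is only
# an illustration): convert deflection in mm to concentration in mg/liter using
# 55 mm = 5 mg/liter, then take logarithms for the semi-log view. The point (0, 0) has
# no logarithm, so it is reported as None here.

import math

time_s = list(range(26))
deflection_mm = [0, 5, 20, 50, 88, 115, 122, 118, 100, 80, 66, 53, 41, 35,
                 29, 24, 20, 17, 15, 13, 12, 13, 14, 15, 16, 18]

concentration = [d * 5.0 / 55.0 for d in deflection_mm]                        # step (b)
log_concentration = [math.log(c) if c > 0 else None for c in concentration]    # step (c)

for t, c, lc in zip(time_s, concentration, log_concentration):
    print(t, round(c, 3), None if lc is None else round(lc, 3))
```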


TABLE I

A Patient's Cardiac Output Over Time Measured in mm of Deflection

Time (sec)    Deflection (mm)
 0              0
 1              5
 2             20
 3             50
 4             88
 5            115
 6            122
 7            118
 8            100
 9             80
10             66
11             53
12             41
13             35
14             29
15             24
16             20
17             17
18             15
19             13
20             12
21             13
22             14
23             15
24             16
25             18

From the graph in Figure 9, we see that the data points for t = 13, 14, 15, . . . , 23, 24, 25 should be “lowered” so that they fit the extrapolated straight line. This is quite easy to do in Excel, since the scatter plot changes whenever one changes the value of any data point in the graph. Thereby it is quite easy to transform the data into a straight line (see Figure 10).

Figure 7. Scatter plot of deflection in mm over time.

Figure 8. Scatter plot of dye concentration in mg/liter over time.

Figure 9. Natural logarithm of dye concentration over time.

Figure 10. Natural logarithm of dye concentration over time, revised to eliminate recirculation.

Finally, exponentiating the natural logarithms to recover the actual concentrations, we obtain a new, revised set of data points that represent what the cardiac output would presumably have looked like if not for the recirculation phenomenon. The new set of data points, from t = 0 to t = 25, is displayed in Table II. Thus we have a new, presumably more accurate, set of data points, and we can see the look of the new, more "true", scatter plot shown in Figure 11.

Figure 11. Scatterplot showing true cardiac output curve.
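A scripted version of this correction might look as follows (a sketch, not the students' Excel procedure): fit a line to ln C over a stretch assumed free of recirculation (t = 7 to 12 s), extrapolate it for t ≥ 13, and exponentiate back. Because the article's revised points were adjusted by eye in Excel, the values printed here come close to, but do not exactly match, Table II.

```python
# A sketch of the recirculation correction (not the students' Excel procedure): fit a line
# to ln C over a stretch assumed free of recirculation (t = 7..12 s), extrapolate it for
# t >= 13, and exponentiate back. The article's revised points were adjusted by eye, so the
# values printed here are close to, but not identical with, Table II.

import numpy as np

deflection_mm = np.array([0, 5, 20, 50, 88, 115, 122, 118, 100, 80, 66, 53, 41, 35,
                          29, 24, 20, 17, 15, 13, 12, 13, 14, 15, 16, 18], dtype=float)
t = np.arange(26, dtype=float)
c = deflection_mm * 5.0 / 55.0                           # concentration in mg/liter

fit = (t >= 7) & (t <= 12)                               # washout stretch used for the fit
slope, intercept = np.polyfit(t[fit], np.log(c[fit]), 1)

c_revised = c.copy()
tail = t >= 13
c_revised[tail] = np.exp(intercept + slope * t[tail])    # replace the recirculation bump

for ti, ci in zip(t[tail].astype(int), c_revised[tail]):
    print(ti, round(float(ci), 3))
```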

An Approach by Summing Rectangles

A first estimate of the area under this curve is easily obtained by summing rectangles whose height is the value of each data point and whose width is 1 unit (see Figure 12). Excel calculates the sum of the rectangles to be 92.32 mg/liter · sec. Since the injection was 5.68 mg and cardiac output is measured per minute, we get CO = 5.68 · 60 / 92.32 = 3.69 liters/min.
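The same rectangle sum is easy to reproduce outside Excel. The Python sketch below combines the unchanged concentrations for t = 0 to 12 (Table I deflections times 5/55) with the revised Table II values for t = 13 to 25 and applies CO = I · 60 / area; it returns the 92.32 mg·sec/liter and 3.69 liters/min quoted above.

```python
# A sketch of the rectangle sum outside Excel: unchanged concentrations for t = 0..12
# (Table I deflections times 5/55) plus the revised Table II values for t = 13..25,
# each treated as the height of a 1-second-wide rectangle, then CO = I * 60 / area.

deflection_mm_0_to_12 = [0, 5, 20, 50, 88, 115, 122, 118, 100, 80, 66, 53, 41]
revised_conc_13_to_25 = [3.0, 2.40909, 1.95454, 1.54545, 1.25454, 1.0, 0.8,
                         0.63636, 0.50909, 0.40909, 0.32727, 0.26363, 0.20909]

concentration = [d * 5.0 / 55.0 for d in deflection_mm_0_to_12] + revised_conc_13_to_25

area = sum(concentration) * 1.0          # rectangle width = 1 second -> mg*sec/liter
injected_mg = 5.68
co = injected_mg * 60.0 / area           # liters per minute
print(f"area = {area:.2f} mg*sec/liter, CO = {co:.2f} liters/min")   # 92.32 and 3.69
```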


TABLE II

Time (sec)    Deflection (mm)    Concentration (mg/liter)
 5            115                10.45454
 6            122                11.09090
 7            118                10.72727
 8            100                 9.09090
 9             80                 7.27272
10             66                 6.00000
11             53                 4.81818
12             41                 3.72727
13             33                 3.00000
14             26.5               2.40909
15             21.5               1.95454
16             17                 1.54545
17             13.8               1.25454
18             11                 1.00000
19              8.8               0.80000
20              7                 0.63636
21              5.6               0.50909
22              4.5               0.40909
23              3.6               0.32727
24              2.9               0.26363
25              2.3               0.20909

Figure 12. Histogram showing true cardiac output curve.

A More Analytical Approach

The curve can obviously be divided into two functions, f(x) and g(x): f(x) can be integrated between 0 and k, and g(x) can be integrated between k and 25. How do we find the constant k? The most natural thing is, of course, to take the time at which the maximum value occurs, that is, t = 6 seconds. A regression model for the first six values (dye concentration in mg per liter) is easily given by CurveExpert, a tool that was presented to the students earlier in the course. The tool Curve Finder in CurveExpert suggests a fourth-degree polynomial model (see Figure 13):

$$f(x) = a + bx + cx^2 + dx^3 + ex^4$$

with the coefficients a = 0.021645022, b = −0.11399711, c = 0.35606061, d = 0.13131313, e = −0.022727273.

Figure 13. Fourth-degree polynomial curve for first six values of deflection in mg/liter.

A corresponding exponential regression on the last 19 values (see Figure 14) gives the function

$$g(x) = a e^{bx}$$

with the coefficients a = 38.787194 and b = −0.19225869.

Figure 14. Exponential function curve for last 19 values of concentration in mg/liter.

CurveExpert calculates the area under this curve as 62.0044, while Derive measures it as 62.0040. Using the CurveExpert figures, we get a total area of 93.695 mg/liter · sec. We thereby get a cardiac output figure of CO = 5.68 · 60 / 92.9145 = 3.67 liters/min. If we instead use the figures from Derive, we also get CO = 3.67 liters/min (rounded off to the two decimal places that are significant here).
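The same two-piece construction can be scripted. The sketch below (Python with NumPy and SciPy, standing in for CurveExpert and Derive) fits a quartic to the early revised concentrations (t = 0 to 6) and an exponential to the last 19 values (t = 7 to 25), then integrates the pieces over 0–6 and 6–25; the fitted coefficients and areas come out close to, but not digit-for-digit identical with, the figures quoted above.

```python
# A sketch of the two-piece approach with SciPy standing in for CurveExpert/Derive:
# fit a quartic f(x) to the revised concentrations for t = 0..6 and an exponential
# g(x) = a*exp(b*x) to the last 19 values (t = 7..25), then integrate f over 0..6 and
# g over 6..25. Coefficients and areas come out close to, but not identical with,
# the figures quoted in the text.

import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

t = np.arange(26, dtype=float)
c = np.array([0, 5, 20, 50, 88, 115, 122, 118, 100, 80, 66, 53, 41], dtype=float) * 5 / 55
c = np.concatenate([c, [3.0, 2.40909, 1.95454, 1.54545, 1.25454, 1.0, 0.8,
                        0.63636, 0.50909, 0.40909, 0.32727, 0.26363, 0.20909]])

poly = np.polyfit(t[:7], c[:7], 4)                        # quartic through t = 0..6

def g(x, a, b):
    return a * np.exp(b * x)

(a, b), _ = curve_fit(g, t[7:], c[7:], p0=(40.0, -0.2))   # exponential through t = 7..25

area_f, _ = quad(lambda x: np.polyval(poly, x), 0, 6)
area_g, _ = quad(lambda x: g(x, a, b), 6, 25)

co = 5.68 * 60.0 / (area_f + area_g)
print(f"a = {a:.3f}, b = {b:.5f}, total area = {area_f + area_g:.2f}, CO = {co:.2f} liters/min")
```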


A Straightforward Numerical Approach

In numerical mathematics, a well-known formula for numerical integration is Simpson's rule. The method is rather easy to adapt to this problem, since the step h is fixed at 1. A simple model in Excel might look like the one in Figure 15. We see that the area is calculated to be 91.98485 mg/liter · sec, which yields a cardiac output equal to 3.64 liters/min.

Figure 15. Excel model for Simpson’s rule.
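In a scripting language the same Simpson's-rule estimate is one library call. The sketch below uses scipy.integrate.simpson on the revised concentrations with step h = 1 second; because the tail values here are the reconstructed Table II numbers, the area lands near, but not exactly at, the 91.98 mg/liter · sec reported above.

```python
# A sketch of the Simpson's-rule estimate (the Excel model of Figure 15) in Python:
# scipy.integrate.simpson on the revised concentrations with step h = 1 second.
# With the reconstructed Table II tail values the area lands near, but not exactly at,
# the 91.98 mg/liter*sec reported in the text.

import numpy as np
from scipy.integrate import simpson

c = np.array([0, 5, 20, 50, 88, 115, 122, 118, 100, 80, 66, 53, 41], dtype=float) * 5 / 55
c = np.concatenate([c, [3.0, 2.40909, 1.95454, 1.54545, 1.25454, 1.0, 0.8,
                        0.63636, 0.50909, 0.40909, 0.32727, 0.26363, 0.20909]])

area = simpson(c, dx=1.0)                # mg*sec/liter
co = 5.68 * 60.0 / area                  # liters per minute
print(f"Simpson area = {area:.2f} mg*sec/liter, CO = {co:.2f} liters/min")
```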

A More Sophisticated Approach

One reason to study a variety of graphs of functions is to acquire an intuitive feeling for what kind of function corresponds to a given type of graph.

In Figure 12 we “see” a graph y = f(x) such that

• f(0) = 0;

• f(x) is positive valued with a single (global) maximum value for x > 0;

• f(x) approaches 0 as x → ∞.


An example of fitting such a model using both Mathematica and MATLAB can be found at http://ma-serv.did.gu.se/matematik/datafit/datafit.htm
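One concrete family with all three properties is the Hoerl-type curve h(x) = a·x^c·e^(−kx) with positive parameters (the Table III Hoerl form a·b^x·x^c, rewritten with k = −ln b). The sketch below fits such a curve to the full revised data set with SciPy; it is an illustration only, not the Mathematica/MATLAB treatment the article points to at the URL above.

```python
# A sketch of one function with all three properties listed above: a Hoerl-type curve
# h(x) = a * x**c * exp(-k*x) with a, c, k > 0 (the Table III Hoerl form a*b**x*x**c with
# k = -ln b). Fitting it with SciPy is an illustration only, not the Mathematica/MATLAB
# treatment the article points to.

import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def h(x, a, c, k):
    return a * x ** c * np.exp(-k * x)   # h(0) = 0, one interior maximum, h -> 0 at infinity

t = np.arange(26, dtype=float)
conc = np.array([0, 5, 20, 50, 88, 115, 122, 118, 100, 80, 66, 53, 41], dtype=float) * 5 / 55
conc = np.concatenate([conc, [3.0, 2.40909, 1.95454, 1.54545, 1.25454, 1.0, 0.8,
                              0.63636, 0.50909, 0.40909, 0.32727, 0.26363, 0.20909]])

params, _ = curve_fit(h, t, conc, p0=(1.0, 3.0, 0.5), bounds=(0, np.inf))
area, _ = quad(lambda x: h(x, *params), 0, np.inf)
print(f"fitted (a, c, k) = {params}, area = {area:.2f}, CO = {5.68 * 60 / area:.2f} liters/min")
```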

STUDENTS’ SOLUTIONS AND REACTIONS

The teaching and learning of mathematics at all levels is naturally closely related to assessment of student achievement. The more complicated and open a problem is, the more complicated it is to assess the solution of it, and if one adds the component of existing technology, assessment becomes even more complicated. So what do pre-service teachers learn when engaging in a modeling process like the one described here? Assessing mathematical modeling is not easy to accomplish.

We are prepared to risk our skin by claiming that assessment of applications and modeling is easy. As mentioned earlier, assessment is not easy if we (have to) stick to conventional modes and practices. In that case, sound assessment is rather very difficult if not impossible (Niss, 1993, p. 48).

We are obviously unable to investigate the learning process in a direct way; we can just try to observe the learning outcome in terms of external characteristics such as students' behaviors, attitudes, and skills. Nevertheless, mathematics educators are expected to collect, examine and grade learning outcomes from students even in complicated educational situations. It is important to stress that the instructors' views of learning and assessment converged in the way the solutions of the modeling situation were examined. Each of the students' papers was examined by the two instructors, and most of the papers were then discussed with the students.

This discussion took place with the students in groups or in pairs and underlined the fact that the examination was integrated with the process of learning. Some of the students had to rewrite their papers to pass the exam, some had to take a new exam and some of them gave a short summary of how they had solved the problem.


The majority of the students in the class seemed to enjoy their work with the problem, although some of them expressed negative opinions about the complexity of the problem and were concerned about their own capability to tie it all up. The overall result, when it comes to their mathematical performance, was that all students seemingly learned a lot of mathematics when engaging in the modeling situation, not least when deriving the theory of measuring the cardiac output. Yet most of the students were more attracted to the technological tools than to doing sophisticated analytical investigations or, for instance, "old-fashioned" Simpson's rule calculations. Three of the students actually did use Simpson's rule, but only on the function presented to them by Curve Finder, and not on the data set. Therefore, they could only compare within one model, not between models, to validate one of them.

In order to use some of the models, the students had to remove the point (0, 0), something they gladly did, assuming that the difference would be negligible. Some of the students used splines or higher-order polynomials in order to get a nice, smooth curve through the data points. The fact that Curve Finder actually cannot provide any explicit expression for a spline through the data points did not worry them, since the program does provide an analysis, including the integration between any a, b within the data set interval. The result of the different models' impact upon the CO is shown in Table III. None of the models was introduced or "explained" to the students, in order to avoid guiding them toward a particular way of solving the problem.

TABLE III

Different Models of the Cardiac Output and Their Resulting Value of the CO

Model                    Function                   r        Area (CurveExpert)   Area (Derive or Maple)   CO (rounded)   Frequency (n = 25 students)
Rational function        (a + bx)/(1 + cx + dx²)    0.9885   97.2773              97.2813                  3.50           4
18th-degree polynomial   a + bx + cx² + …           0.9998   92.388               91.8427                  3.70           4
Linear spline            –                          –        92.2011              –                        3.70           5
Cubic spline             –                          –        92.2376              –                        3.70           1
Hoerl model              a·bˣ·xᶜ                    0.9946   88.90                88.8766                  3.83           6
Vapor pressure           e^(a + b/x + c·ln x)       0.9927   91.7632              91.7770                  3.71           10


Most of the students fitted a single model to the entire data set. In doing so, many of them used the Hoerl model or the Vapor Pressure model (without actually knowing anything about these models); none of them tried to fit two different models to the curve. Yet they all knew that the right part of the curve was a washout curve, an exponentially decreasing function.

When returning the graded papers to the students, the instructors discussed the solutions to the cardiac output problem with them. Among other questions, the students were asked the following: "Why didn't you try two models when you knew that the second part of the curve was an exponentially decreasing function, that is, g(x) = a · e^(bx)?"

− S: "Well", one student said, "we did not think it could be so complicated . . . ."

− I: “What do you mean?”

− S: “Well, we aren’t used to getting problems that take a day or two to solve, you know . . . we didn’t want it to be so complicated . . . I never understood the logarithmic transformation and all that . . .”

− I: “But you had the computer and all the software there to assist you.

Couldn’t you have explored the problem further?”

− S: “At the end, we were not sure if it was we or the computer who did the work, you know. It just got too complicated . . .”

Another type of response was based upon a trusting faith in the software:

I just thought that CurveExpert would do the job, you know . . . . I mean, if there is a perfect function, then it must be among all those models in CurveExpert, mustn’t it? I just picked what CurveExpert ranked first.

For those few students who calculated the magnitude of the integral badly and got a rather high or low cardiac output, it was interesting to ask why they didn't check their values against known or intuitive values of cardiac output for men and women. As a matter of fact, the discussion about measurement of cardiac output and other significant medical concepts, several weeks before, had started with intuitive, realistic, and nonrealistic values of an average person's cardiac output, the weight of a heart, and so forth. When one of the students, who got a relatively high value when she calculated the integral and consequently a relatively low cardiac output value, was asked if she bothered to do any validity check on the outcome, she said:

− S: “I just integrated . . .”

− I: “Yes, you did. Did you get a valid result and a corresponding grade on your paper?”

− S: "No. I was so happy when I found the best model in CurveExpert that I just picked the best r and then I used that model".

− I: “But there’s a contradiction between what you wrote about normal values of cardiac output for healthy and sick people and what you presented as your result, isn’t there?”

− S: “Sure. I just wanted it over with, and I forgot about my ideas and my knowledge about cardiac output. My mother’s a doctor, you know, and we discussed the problem the whole week even though she couldn’t help me with the mathematics . . . Well, it’s so embarrassing.

I answered with an amount that’s only a fifth of what is average”.

Although it obviously requires a lot of strength and persistence for students to work with a complex mathematical modeling problem, most of the students were afterwards positive about experiencing the mathematical modeling process, including writing a report in mathematics over an extended period of time. They appreciated the opportunity to see how mathematics appears in a specific branch of knowledge that they did not know much about and found it interesting to learn about a mathematical problem that for most people is hidden inside that branch. It is also important to observe that the instructors in the course considered it more important to work together with the students in order to support the modeling process than just to "tell them how to do it". In a complex modeling process, the traditional teaching approach seems to be less adequate than a collaborative approach in which the teacher serves more as a well-informed assistant in the learning process.

It is especially important that pre-service teachers, soon to be involved in the same kind of teaching themselves, focus as much as possible on qualitative reasoning and less on reproduction of facts and basic routines. The fact that the students are allowed to use graphing calculators and mathematical software in their examination further stresses the importance of selecting a modeling situation that is relevant in the presence of these aids. At the same time that the modeling situation should remain nontrivial in the presence of the tools, the use of the technology should not . . .


Once presented with CurveExpert, none of the 25 students could resist employing it on problems, whether in class or on an examination, and many of them were severely disappointed that the software could not plot a semi-log graph. As a matter of fact, CurveExpert does allow you to choose either the Y-axis or the X-axis as logarithmic (or both), but most students did not find this option. Thus the software somewhat took control over the modeling process, and since CurveExpert was "so good", the model it suggested was considered to be good too. In a sense, the software led the students through the process, instead of the other way around.

Several authors have reported on the difficulties that tertiary students have in understanding both the mathematical modeling process and the results it generates when they use powerful calculators or sophisticated computer software (Lanier, 1999; Lingefjärd, 2000; Lingefjärd and Holmquist, 2001; Lingefjärd and Kilpatrick, 1998; Searcy, 1997; Zbiek, 1993).

Errors in mathematical modeling and misinterpretations of results are not new phenomena. Usiskin (1979) gave several examples of problems that might very well mislead students into false assumptions and conclusions. For example, in the Evolution of the Mile Record (pp. 434–437), Usiskin showed how a supposedly correct straight line fitted to the data points representing the world records for the British Mile from 1875 to 1975 would actually yield a record of 0 seconds in the year 2550.

In his article "Problem-Solving Derailers: The Influence of Misconceptions on Problem-Solving Performance", Shaughnessy (1985) identified several sources of errors that are similar to those we find among students solving modeling problems: lack of appropriate knowledge structure or organization, algorithmic bugs, lack of problem-solving strategies, relinquished executive control, belief system, folklore paradigms, inadequate problem representation, and ill-chosen schema. In the mathematical modeling process, unfamiliarity with or uncertainty about the phenomenon to be modeled is obviously a great hindrance.


It is clear that the progress of computing technology is far from over. We can expect the calculator of tomorrow to do at least as much as, and maybe more than, what the computer software of today does. And courses in mathematical modeling are important for prospective mathematics teachers as well as for other students who study mathematics. To reveal, and even better to avoid, the phenomena I found, teachers of courses on mathematical modeling must pay great attention to the way they set up, conduct, and grade their assessments. With technology, it is sometimes easy, far too easy, for students to provide the correct answer without really understanding what the problem is about. Without assessment situations that make use of the technology and involve the students in critical thinking about what the technology offers in terms of possibilities and solutions, we may very well create students who are dependent on technology rather than critical and insightful users of it.

From the observations I made in my research, I conclude that there is a risk that technology will create a new sort of authority. Calculators have been known for some years to make students “slaves” in the sense that even a simple multiplication like 5 times 8 will be carried out on the machine if available. Anyone who has been to a shop in the Western world has probably seen clerks using calculators in this unthinking fashion.

As computers become increasingly common in most classrooms and are used to help students and teachers with many tedious tasks, teachers need to pay careful attention to the kind of problems they give students.

I am convinced that assignments like those used in the modeling course can function as a tool to promote the shift away from facts and standard procedures to conceptual reasoning. They can also reveal qualities in the students’ beliefs about concepts and mathematical structures.

When students are forced to explain and argue for their models, they disclose inaccuracies and misunderstandings in a way that would otherwise remain hidden. If teachers, for instance, ask students to calculate an integral expression, how do they know whether subsequent errors arise from the routine or the conceptual part of the solution process? With today's technology, the routine part of solving integrals is only a question of pressing the right button or giving the correct command.

Without implementing different kinds of assignments and assessments, many students may pass through the education system without encountering any real challenge to what they actually know in mathematics.

Studies of how students handle modeling situations in the presence of technology suggest that teachers at all levels need to be cautious about what students understand and the interpretations they make during the modeling process. I have illustrated how easy it seems to be for students to "get lost"


in a branch like medicine. I must confess that I never ever thought about that before. The way we studied and discussed the mathematics necessary to understand the measurement was also a very satisfactory way to learn mathematics”.

REFERENCES

Davis, P.D., Parbrook, G.D. and Kenny, G.N.C. (1995). Basic Physics and Measurement in Anaesthesia. Oxford, England: Butterworth-Heinemann.

de Lange, J. (1996). Using and applying mathematics in education. In A.J. Bishop et al. (Eds), International Handbook of Mathematics Education (pp. 49–97). Dordrecht, The Netherlands: Kluwer.

Glantz, S.A. (1979). Mathematics for Biomedical Applications. Berkeley, CA: University of California Press.

Hyams, D. (1996). CurveExpert: A Curve Fitting System for Windows. Clemson, SC: Clemson University.

Lanier, S.M. (1999). Students' Understanding of Linear Modeling in a College Mathematical Modeling Course. Unpublished doctoral dissertation, University of Georgia.

Lingefjärd, T. (2000). Mathematical Modeling by Prospective Teachers. Electronically published doctoral dissertation, University of Georgia. Can be downloaded from http://ma-serv.did.gu.se/matematik/thomas.htm

Lingefjärd, T. and Holmquist, M. (2001). Mathematical modeling and technology in teacher education – Visions and reality. In J. Matos, W. Blum, K. Houston and S. Carreira (Eds), Modelling and Mathematics Education ICTMA 9: Applications in Science and Technology (pp. 205–215). Chichester: Horwood.

Lingefjärd, T. and Kilpatrick, J. (1998). Authority and responsibility when learning mathematics in a technology-enhanced environment. In D. Johnson and D. Tinsley (Eds), Secondary School Mathematics in the World of Communication Technologies: Learning, Teaching and the Curriculum (pp. 233–236). London: Chapman & Hall.

Mason, J. (1988). Modelling: What do we really want pupils to learn? In D. Pimm (Ed.), Mathematics, Teachers and Children. London: Hodder and Stoughton.

Microsoft Co. (1997). Microsoft Excel. Stockholm, Sweden: Microsoft Corporation.

Miller, R.D. (Ed.) (1982). Anesthesia. New York, NY: Churchill Livingstone.

Niss, M. (1993). Assessment of mathematical applications and modelling in mathematics teaching. In J. de Lange, C. Keitel, I. Huntley and M. Niss (Eds), Innovation in Mathematics Education by Modelling and Applications (pp. 41–51). Chichester: Ellis Horwood.

Pollak, H.O. (1970). Applications of mathematics. In E. Begle (Ed.), The Sixty-ninth Yearbook of the National Society for the Study of Education (pp. 311–334). Chicago: University of Chicago Press.

Searcy, M.B. (1997). Mathematical thinking in an introductory applied college algebra course (Doctoral dissertation, University of Georgia, 1997). Dissertation Abstracts International, 58, 3056A.

Sfard, A. (1997). From acquisitionist to participationist framework: Putting discourse at the heart of research on learning mathematics. In T. Lingefjärd and G. Dahland (Eds), Research in Mathematics Education (pp. 109–136). A report from a follow-up conference after PME 1997. Report 1998:02, Department of Subject Matter Didactics, Gothenburg University.

Shaughnessy, J.M. (1985). Problem-solving derailers: The influence of misconceptions on problem-solving performance. In E. Silver (Ed.), Teaching and Learning Mathematical Problem Solving: Multiple Research Perspectives (pp. 399–415). Hillsdale, NJ: Lawrence Erlbaum.

Skolverket. (2000). Kursplaner och betygskriterier för kurser i ämnet matematik i gymnasieskolan. SKOLFS 2000:5. [The Swedish secondary school curriculum and syllabus for mathematics.] Electronically published document, Skolverket. Can be downloaded from http://www.skolverket.se/kursplaner/gymnasieskola/index.html

Usiskin, Z. (Ed.) (1979). Algebra through Applications with Probability and Statistics (Part 2). Reston, VA: National Council of Teachers of Mathematics.

Zbiek, R.M. (1993). Understanding of function, proof and mathematical modeling in the presence of mathematical computing tools: Prospective secondary school teachers and their strategies and connections (Doctoral dissertation, Pennsylvania State University, 1992). Dissertation Abstracts International, 53, 2284A–2285A.

Department of Mathematics

Chalmers University of Technology and Göteborg University
SE 412 96 Göteborg

Sweden

