3.3.5. False positives versus false negatives

Scientists do not like to be wrong. In the scientific world, being wrong is, in general, worse than not being right. This means not only that scientists prefer to postpone judgement until they have more evidence; it also means that they are biased to err on the side of false negatives rather than false positives. It is worse for a scientist’s career to be exposed as having claimed something that turns out not to be the case (a false positive) than to be exposed as having denied something that turns out to be the case (a false negative).295

Birgitte Wandall calls the bias towards false negatives the “conservative burden of proof”, since it places the burden of proof on those who make a positive claim.296 She also points out that the reason for this tendency is probably that one of the main values guiding science is to keep the scientific corpus (the body of statements accepted by science) as free as possible from false statements.297 This is the scientific community’s own version of “erring on the side of caution”, and it is undoubtedly a good reason to trust science: If something is claimed by the scientific community to be true, it probably is true. This also means, however, that if the scientific community does not want to proclaim something as true, it does not necessarily mean that it is false. To believe that it does seems to be an all too common mistake that in some situations can cause a good deal of harm.298 It is, after all, not obvious that the goal of avoiding false positives is always a superordinate goal in society at large. In many cases where other values are at stake (values, like human health, that may not be basic epistemic values but are important values in anthropocentric instrumentalism as well as in other moral theories), false negatives can have at least as severe effects as false positives. The effects of not regulating or banning something that is dangerous can be at least as bad from a moral point of view as the effects of regulating or banning something that is harmless. If we accept the intuition from sub-section 3.3.3 that human health needs to be assigned a higher value than has traditionally been the case, it is probably in many cases more important to avoid false negatives than to avoid false positives.299 We therefore have a case that is parallel to the intuition discussed above regarding the value of acting in time.

295 Gee 2006, Gee & Greenberg 2001 p.60, Grandjean 2004 pp.217, 384, Harremoës et al 2001 p.184, Mattsson 2005 p.9, Wandall 2004 p.267 note 6

296 Wandall 2004 pp.267, 269, Wandall 2005

297 Wandall 2004 pp.267, 269

298 Gee 2006, Whiteside 2006 p.58

299 Mattsson 2005 p.9, Wandall 2004 pp.269f, Wandall 2005

The conclusion must also be the same: We need a decision rule that can compensate for the difference in goals between science and practical decision making,300 and the precautionary principle seems to be precisely cut out for that job. The cost of false negatives for a host of human values, including human health, seems, just like the cost of time loss, to be a strong argument in favour of the precautionary principle: Just as it is sometimes more important to act in time than to be exactly right, it is sometimes more important to avoid false negatives than to avoid false positives – depending on the values at stake.

300 For a discussion on the goals of science, see Wandall 2004 p.267

It is therefore reasonable to handle this intuition in a similar way: When we make decisions in matters where some important value is at stake (e.g. human health), and when we suspect that a certain decision may result in serious damage to this value, and when we suspect that a false negative is a more substantial threat to the protection or promotion of this value than a false positive, then we should move our priorities from being biased towards avoiding false positives in the direction of avoiding false negatives.
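One simple way to make this proportional shift explicit (offered here only as an illustration, not as a formalisation found in the sources) is the expected-cost threshold of elementary decision theory. If p is the probability that the suspected danger is real, C_FN the cost of a false negative (failing to act against a real danger), and C_FP the cost of a false positive (acting against a harmless practice), then minimising expected cost means acting whenever

\[
  p \cdot C_{\mathrm{FN}} > (1 - p) \cdot C_{\mathrm{FP}},
  \quad\text{i.e.}\quad
  p > \frac{C_{\mathrm{FP}}}{C_{\mathrm{FP}} + C_{\mathrm{FN}}}.
\]

The greater the cost of a false negative relative to a false positive (as when human health is at stake), the lower the level of evidence p at which action is warranted – exactly the kind of graded shift of priorities described above.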

It is important to see that this is not a matter of going from a system that is totally immune to false positives to one that is totally immune to false negatives. A system immune to false positives would not produce any statements about the world at all (only analytical statements would pass the test), while a system immune to false negatives would not be able to exclude anything other than pure contradictions. Everything would be considered possible, and no possibility could ever be excluded from our considerations.
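The space between these two extremes is familiar from statistical hypothesis testing, where the significance level bounds the false-positive rate and the power determines the false-negative rate. The sketch below (a minimal illustration with invented numbers, not an example from the sources) assumes a one-sided z-test for a fixed effect size and shows that pushing the false-positive rate towards zero pushes the false-negative rate towards one:

```python
# Minimal illustration (invented numbers): trade-off between false
# positives (alpha) and false negatives (beta) in a one-sided z-test.
from scipy.stats import norm

def false_negative_rate(alpha: float, effect: float = 0.5, n: int = 30) -> float:
    """Beta of a one-sided z-test at significance level alpha."""
    z_crit = norm.ppf(1 - alpha)     # rejection threshold under "no effect"
    shift = effect * n ** 0.5        # mean of the test statistic if the effect is real
    return norm.cdf(z_crit - shift)  # probability of missing a real effect

for alpha in (0.10, 0.05, 0.01, 0.001):
    print(f"alpha = {alpha:.3f} -> beta = {false_negative_rate(alpha):.3f}")
# As alpha shrinks towards zero the test asserts less and less, and beta
# grows: near-immunity to false positives is bought with false negatives.
```

Total immunity to false positives corresponds to never asserting anything (alpha = 0), and total immunity to false negatives to never excluding anything (alpha = 1), mirroring the two degenerate systems just described.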

Shifting priorities from being exactly right towards acting in time can be done simply by taking action sooner instead of waiting for more reliable information. But how do we, in practice, move our priorities from avoiding false positives towards avoiding false negatives?

One way would be to transfer the burden of proof from those who claim that the practice or substance is dangerous to those who claim that it is safe in relation to the values in question. Instead of asking “is this dangerous?”, we ask “is this safe?”. This is the solution Wandall suggests.301 Analogously to her categorisation of the scientific urge to avoid false positives as a “conservative burden of proof”, we might call this a “precautionary burden of proof”.
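One natural statistical reading of this proposal is that shifting the burden of proof amounts to swapping the null hypothesis. The sketch below is purely illustrative (the observed counts, the 8% background rate and the 20% harm threshold are all invented), but it shows that when the evidence is inconclusive, the chosen burden of proof alone decides the default action:

```python
# Purely illustrative (all numbers invented): the same inconclusive data
# evaluated under the conservative and the precautionary burden of proof.
from scipy.stats import binomtest

adverse, exposed = 7, 50  # 14% adverse effects observed in a hypothetical study
background = 0.08         # assumed background rate ("safe" means no excess)
harm_threshold = 0.20     # assumed rate at which the practice counts as dangerous

# Conservative burden: null hypothesis "safe"; act only if danger is proven.
dangerous = binomtest(adverse, exposed, background, alternative="greater")

# Precautionary burden: null hypothesis "dangerous"; permit only if safety is proven.
safe = binomtest(adverse, exposed, harm_threshold, alternative="less")

print(f"'Is it dangerous?' p = {dangerous.pvalue:.2f} -> not proven; default: permit")
print(f"'Is it safe?'      p = {safe.pvalue:.2f} -> not proven; default: restrict")
# Neither null is rejected at conventional levels, so the same data yield
# opposite default decisions depending on where the burden of proof lies.
```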

The idea of shifting the burden of proof can be interpreted as an “either/or” solution: either we place the burden of proof on one side, or we place it on the other. I think it would be more fruitful to go for a gradual solution – as we have done in the previous three subsections. Just as something can be more or less valuable, a threat can be more or less severe, and timing can be more or less important, avoiding false negatives can be more or less important. What we need is a method that allows us to change focus in proportion to the importance of avoiding false negatives relative to the importance of avoiding false positives.

We need to be able to increase or decrease the burden of proof gradually on the different sides. One way of doing this could be to shift the confidence level.

This will by no means capture the whole problem, which is quite complex and involves much more than just choosing the confidence level. Shifting the confidence level must therefore not be seen as the whole solution. It is, however, a relatively simple method to start with. The scientist can, for instance, supply the decision makers with a set of answers based on different confidence levels. This would allow the decision makers to choose a confidence level that fits the distribution of the burden of proof that is appropriate given the importance of avoiding false negatives in relation to false positives. At the same time, the scientific community can choose to include only the answers based on the most conservative confidence level in the scientific corpus. This would also make the procedure more transparent and reduce the power that scientists have to decide, on behalf of society as a whole, the relative importance of avoiding false positives versus avoiding false negatives.
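As a concrete illustration of such a set of answers (a sketch with invented numbers, not a procedure taken from the sources), suppose a study estimates an excess risk of 1.5 percentage points with a standard error of 0.8. Reporting the confidence interval at several levels gives the decision makers a menu of scientifically derived answers to choose from:

```python
# Sketch with invented numbers: one estimate, several confidence levels.
from scipy.stats import norm

estimate, stderr = 1.5, 0.8  # hypothetical excess risk and its standard error

for level in (0.90, 0.95, 0.99):
    z = norm.ppf(1 - (1 - level) / 2)  # two-sided critical value
    low, high = estimate - z * stderr, estimate + z * stderr
    verdict = "harm indicated" if low > 0 else "harm not established"
    print(f"{level:.0%} CI: [{low:+.2f}, {high:+.2f}] -> {verdict}")
# At 90% the interval excludes zero (harm indicated); at 95% and 99% it does
# not. A precautionary decision maker can act on the 90% answer while the
# scientific corpus records only the most conservative, 99% answer.
```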

A weakness in this suggestion is that people lacking insight into how science works and what it is about could point to the discrepancy in confidence level between the assertion incorporated in the scientific corpus and the assertion on which policy or legislation is based, and claim that the latter is not based on sound science, or at least does not fulfil the most rigorous scientific demands. It is my humble hope, however, that it is possible to explain the process to these people.

We need to point out the distinction between the scientific method and the choice of confidence level, and we also need to point out the difference in goals between science and society, i.e. keeping the scientific corpus clean on the one hand and protecting/promoting a host of other important values on the other. It would thus hopefully be possible for the public to understand that making decisions based on a confidence level that is less biased in favour of avoiding false positives is not the same as making decisions based on a less scientific method. Instead, it is a matter of making the scientific results more useful in relation to the different, but just as legitimate, goals of society in general.

301 Wandall 2004 p.270