
On a fuzzy scientific language

2020-06-17

Author: Olle Dahlstedt
Supervisor: Sebastian Lutz
Thesis project, STS – Philosophy
Department of Philosophy, Uppsala University


Introduction

Thesis statement

Part 1

The justification for a scientific language of fuzzy logic

On ‘observation’, ‘measurement’ and ‘vagueness’

On fuzzy logic

On fuzzy logic and language

On social science and statistics

On sets and fuzzy sets in social sciences

Part 2

Constructing a fuzzy scientific language

Language of empirical theories

The pragmatics of language

The semantics of language

The logical syntax of language

On the axiomatization of language

Primitive signs in the predicate calculus

Axiomatization of a fuzzy language

Individual variables

Primitive connectives

Truth constants

Definable connectives

Axioms and inference rule

Truth conditions

Fuzzy truth evaluations

Conclusion

References


Introduction

There is a classic view of science as a continuous process: clever people constantly invent new theories about the world, and rational skeptics construct elaborate experiments to test those theories. But on what logical basis do we validate the results of scientific research? Typically, when making philosophical statements about scientific results, we do not wish to adjudicate the truth claims of those results solely on the subjective basis of the social merits of scientists. Inevitably, this rules out the idea that we could accept scientific results on the basis of “because the scientists say so”.

A popular approach, favored by philosophers such as Carnap (1939), Przelecki (1969) and Quine (1957), is to discuss the philosophy of science from the perspective of scientific language.1 If scientific results could be analyzed like sentences in a language, it is possible to imagine that other philosophical concepts, such as deductions or accounts of explanation, that work well for logical sentences could similarly be applied to scientific results. However, one of the major issues with the approach of treating science as a formal language is that there is no simple way of sewing together the pragmatic aspects of ordinary everyday language with the syntax and semantics of a formal language. A formal language, in this sense, is a language within which we can effectively judge which sentences are logically true. This essay will present a more complete description of a formal language later. However, it is only through including the pragmatic aspects of language that we can extend the notion of truth to also include what is factually true, i.e. what can be validated empirically.

Beyond the practical concerns of simply finding the appropriate logical system for our language of scientific theories, there are notable difficulties with accurately judging scientific results, even within a formal language system. Even if we could translate a scientific finding with precision, there would still be problems, such as the grue paradox, that interfere with the validity of logical inferences.2 It is possible that there are some things within logical systems that the logical system itself is unable to resolve.3 In spite of this, scientists still use logical inferences, like abduction, to justify the results of their observations as valid with respect to the theory they are testing.4 So it does not follow from the fact that formal languages of science are not perfect that they would therefore not be useful.

1 Carnap, R. (1939) Foundations of Logic and Mathematics; Przelecki, M. (1969) The Logic of Empirical Theories; Quine, W. V. O. (1957) The Scope and Language of Science

2 Goodman, N. (1946) A Query on Confirmation

It is within the scope of the idea that formal languages can be useful to scientists that this essay presents a formal language for science, such that it may be useful. By useful, the preferred interpretation is “philosophically interesting”, since the purpose of a scientific language is to have a language within which we can adjudicate truth claims within science. This is therefore not necessarily of any practical concern to scientists, but should be of real interest to philosophers of science. The quality of our language cannot be promised: for a language to be considered useful or interesting, the scope of all philosophy of science would need to be addressed, which is plainly impossible in this short essay. However, there are some indicators that this could be productive. Part 1 of this essay will discuss the arguments justifying the introduction of a new type of scientific language. Specifically, we examine the arguments put forth by social scientists regarding the validity of empirical observations within social science. We also try to connect these arguments to deeper problems within the philosophy of science. It is possible to consider this part of the essay an extension of the introduction; in particular, it is not necessary to read part 1 if the reader is already convinced of the reasons why a fuzzy language of science could be useful.

In part 2 of the essay, we briefly discuss what a formal language entails, and we introduce this kind of formal language, which is based upon fuzzy logic. Fuzzy set theory is a type of set theory based on degrees of set membership, usually within the real-valued interval [0, 1].5 Fuzzy logic extends fuzzy set theory to the context of making truth claims. Beyond the arguments put forth in part 1, the case for introducing a “new” type of logic to evaluate truth claims might seem unjustified. As we will discuss in part 1, however, fuzzy logic is already a suggested methodology within the social sciences. This means that any philosophical language of science that is supposed to deal with the truth claims of fuzzy scientific results needs to be formalized in the same logic. Therefore, we already have a fairly strong reason to admit the necessity of a fuzzy logic language of science. Moreover, a scientific language will inevitably contain sentences which are conventional, such as the proper way to perform a certain kind of experiment, which theory to use when explaining some result, or how one should label the axes of graphs. Importantly, even which type of logical system to use is in itself a conventional statement. Conventional statements are normative, in the sense that they reflect the values of the speaker; in this case, the values of scientists. The purpose of this essay is therefore normative, in the sense that when we provide the logical system of a scientific language, we are attempting to describe how instances of “what is” actually ought to be described.

3 While not strictly linked to the new riddle of induction, Gödel’s incompleteness theorems (1931) and Tarski’s undefinability theorem (1936) discuss the limitations of formal systems

4 Johansson, L. (2016) Philosophy of Science for Scientists, §6.1–6.3, pp. 103–109

5 Cintula, P., Fermüller, C., and Noguera, C. (2017) Fuzzy Logic, Stanford Encyclopedia of Philosophy, §1

The essay is concluded with a brief discussion of the results of part two, as well as considering further arguments for and against this type of language.

Thesis statement

The first aim of this essay is to provide a justification for introducing vagueness into scientific language. The second aim is to provide a specific alternative for doing so: namely, a language based on fuzzy logic, designed to cope with vague truth values, and to show that it is possible to construct the syntax and evaluate the semantics of such a language.


Part 1

The justification for a scientific language of fuzzy logic

On ‘observation’, ‘measurement’ and ‘vagueness’

It should be stated clearly, somewhere in this essay, that an analysis of observational evidence in scientific language is something different from the physical act of observation. We should acknowledge that observation in science is a rather more involved process than opening one’s eyes and just taking a look. It is an important and careful activity whose epistemic value depends on what is observed and in what way.

To clarify, we should explain the difference between the words measurement and observation. In this essay, we will use the word observation to refer to unaided perceptions by humans. A measurement, meanwhile, can be considered the extension of an observation to include additional content, such as parameters that are not perceivable by humans, arranged in systems of units and ordered in magnitudes with the help of devices. “The weather seems warm outside” is an observation, whereas “According to my thermometer, the temperature just outside my window is 30 degrees Celsius” is a measurement. Typically, speakers of natural language and philosophers refer to examples of either kind, measurements or not, as ‘observations’, and to the languages in which these examples are formulated as ‘observational languages’. Later, in the second part of this essay, for the sake of brevity, we will consider a language in which we should be able to describe observations, and in turn use the word observation generally, with reference to any kind of measurement or observation. In this part of the essay, when discussing the subtleties of experimental results, we use the word measurement. However, it should be noted that both measurements and observations require fuzzy truth values.

Following Rosenberg (1975), we propose that vagueness is an intrinsic property of observational language, as he reasons that vagueness cannot be avoided in scientific language in general.6 He points out that observations are by their nature inexact;7 we can convince ourselves of this quite easily. We propose that any observation is inexact because it excludes relevant information. There is no reason we couldn’t decide that there is some causal link between a butterfly flapping its wings and, eventually, the movement patterns of storm clouds two thousand miles away. But this seems like an absurd extension to include in every meteorological account of observation. If we try to avoid inexactness in observations by requiring that observers employ maximal specificity, then we are requiring them to describe anything at all that could be influencing the results. In this case, the limits of the observer’s available knowledge act as an asymptotic barrier guaranteeing the inexactness of any real observation. Consequently, requiring maximal specificity is not really reasonable, since it would lead us to require complete descriptions of everything in order to achieve complete descriptions of anything. This raises questions about whether we could ever conclude that we know everything there is to know about everything. Likewise, if an observation is only required to be minimally specified, such that its content is sufficiently descriptive, then there is the obvious problem of who gets to decide what sufficiently means. Based on this argument, in this essay we will accept that observations are inexact; then, by the definition of partial interpretation, the theoretical terms which are semantically connected to these observations are also vague, at least to the extent of the inexact nature of the observations.8 There is considerable philosophical discussion on partial or indirect interpretation; see for instance Andreas (2012), Achinstein (1963) or Friedman (2011).9

This essay does not comment on their arguments, but if some account of partial interpretation is accepted, then we can argue that a language for science which is capable of taking vagueness into account would be useful for philosophers and scientists alike. Unfortunately, the standard semantics of classical logic does not allow for vague truth values: a sentence is either true or false. However, since much of the philosophy of scientific language borrows from set theory, and because there is a set theory with the stated purpose of being able to interpret vagueness, namely fuzzy set theory, this essay will present a formal scientific language based on fuzzy logic.

6 Rosenberg, A. (1975) Virtues of Vagueness in Science

7 Virtues of Vagueness in Science, pp. 1–2

8 Virtues of Vagueness in Science, p. 6. The definition presented by Rosenberg is that partial interpretation is given by (available) semantic rules for a term, but that the rules do not provide individually necessary or jointly sufficient observational conditions for their correct definition.

9 Andreas, H. (2012) Semantic Holism in Scientific Language; Achinstein, P. (1963) Theoretical Terms and Partial Interpretation; Friedman, M. (2011) Carnap on theoretical terms

On fuzzy logic

In this section, we will briefly describe fuzzy logic with respect to how it differs from classical logic. The concept of fuzzy logic was introduced in Zadeh (1965) with the proposal of fuzzy set theory.10 Fuzzy logic is based on the concept of set membership for fuzzy sets, where membership is determined by membership functions that assign a degree of membership to the set. It is a sub-type of many-valued logic, in which sentences and other formulae of the form 𝜑(𝑥) have a truth value in the real-valued interval [0.0, 1.0]. This corresponds to the degree of membership that the object 𝑥 has in the fuzzy set 𝜑. In plain English, a sentence like “The temperature outside is warm” will be assigned a truth value based on its degree of membership in some set of warm objects, and that membership degree is assigned by some membership function. The membership function is arbitrary, and we assign it ourselves depending on what behavior we would prefer; the important part is that the membership function maps any input value to the interval [0.0, 1.0]. In contrast, in classical logic the only possible truth values are the integer values {0, 1}. In classical logic, a connective like conjunction, ∧, can be seen as a function from {0, 1} × {0, 1} to {0, 1}. A fuzzy connective is accordingly a function from [0, 1] × [0, 1] to [0, 1]. Fuzzy truth functions are further assumed to behave like classical truth functions when the truth values assigned are 0 or 1.
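As a concrete illustration of connectives as functions on [0, 1] that reproduce classical behavior at the endpoints, here is a minimal sketch using the min/max connectives from Zadeh's original proposal. Note that these are only one common choice; other fuzzy logics (for instance Łukasiewicz or product logic) define the connectives differently.

```python
# Minimal sketch of Zadeh-style fuzzy connectives.
# Truth values are reals in [0, 1]; on the endpoints {0, 1} these
# functions reproduce the classical truth tables.

def fuzzy_not(a: float) -> float:
    return 1.0 - a

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)

# Classical behavior is recovered at the endpoints:
assert fuzzy_and(1.0, 0.0) == 0.0
assert fuzzy_or(1.0, 0.0) == 1.0

# Intermediate degrees: "the temperature is warm" (0.7) and
# "the sky is clear" (0.4), both values hypothetical:
print(fuzzy_and(0.7, 0.4))  # 0.4
print(fuzzy_or(0.7, 0.4))   # 0.7
```

The min/max pair is attractive because it is idempotent (the conjunction of a sentence with itself keeps its truth value), which mirrors classical logic; t-norm-based alternatives trade this away for other algebraic properties.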

It is important to distinguish fuzzy logic from Bayesian probability. Fundamentally, fuzzy logic and probability address different forms of uncertainty. While both fuzzy logic and probability theory can represent degrees of certain kinds of subjective belief, fuzzy set theory uses the concept of fuzzy set membership as a mathematical model of vagueness, while Bayesian probability is a mathematical model of ignorance. A key aspect of this difference is that fuzzy logic is truth functional, while Bayesian probability is not: the fuzzy expression 𝐴 ∨ 𝐵 has a semantic value which is determinable from the values of 𝐴 and 𝐵 respectively. In Bayesian probability, this is not always true. In this essay, we deal with fuzzy logic and the possibility of a fuzzy language of science.

10 Zadeh, L.A. (1965) "Fuzzy sets"

However, probability and fuzzy logic are not irreconcilable, as we will discuss later.
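The truth-functionality contrast can be made concrete with a short sketch (the numbers are purely illustrative): the fuzzy value of a disjunction is fixed by the component values alone, while the probability of a disjunction also depends on how the two events are related.

```python
# Sketch: fuzzy logic is truth functional, Bayesian probability is not.
# (Illustrative numbers only; max is the standard fuzzy disjunction.)

fuzzy_or = max

a, b = 0.5, 0.5
# The fuzzy value of (A or B) is determined by the values of A and B alone:
print(fuzzy_or(a, b))  # 0.5, whatever the relation between A and B

# For probabilities, P(A or B) = P(A) + P(B) - P(A and B), and the joint
# term P(A and B) is NOT determined by P(A) and P(B) alone:
p_a, p_b = 0.5, 0.5
print(p_a + p_b - p_a * p_b)  # 0.75 if A and B are independent
print(p_a + p_b - 0.0)        # 1.0  if A and B are mutually exclusive
```

Two probability assignments can agree on every marginal value and still disagree on the value of the disjunction; no fuzzy truth evaluation can do that.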

On fuzzy logic and language

This essay does not take a position on whether fuzzy logic accurately represents the way humans speak about observations in general. When a ‘fuzzy language’ is referred to in this essay, it is meant as a formal language, usable only within the narrow margins of formal accounts of observations. In fact, there may be good reason to avoid employing fuzzy logic if we want to describe vagueness in general, so we should elaborate, just to make this clear. Sauerland (2011) argues that fuzzy logic cannot provide a complete model for vagueness in general.11 His argument rests on the observation that, while speakers have no problem assigning intermediate truth values to sentences, the truth values they assign are not systematic in the way that a fuzzy logic would predict.12

What this means in particular is that when we speak vaguely in a general setting, we do not instinctively assign the semantics of fuzzy logic to our sentences. In saying, for instance, “This cup of coffee is too hot”, I am certainly specifying that there is some threshold of temperature that I would like my coffee to remain below. But I am not explicitly stating what kind of threshold, or at which temperature that threshold is reached, which in fuzzy logic would correspond to the membership function for some fuzzy set. If I say “The weather is somewhat cold”, I am not specifying the rules of ‘somewhat’; in conversation, we implicitly agree on a reasonable mutual definition of somewhat, and we usually do not need to explain ourselves. Just as in classical logic, for fuzzy logic to make sense, sentences need to have been assigned truth values. We assume that the truth values are assigned through degrees of membership in some fuzzy set, so that the corresponding membership functions systematically predict the truth values of vague sentences. Fuzzy logic is really a way of ascribing, in a precise and clear way, degrees of vagueness to our statements.

11 Sauerland, U. (2011) Vagueness in Language: The Case Against Fuzzy Logic Revisited

12 Vagueness in Language: The Case Against Fuzzy Logic Revisited, p. 13

And it seems somewhat natural to assume that this kind of precise specification of fuzzy membership functions to fuzzy sets representing abstract concepts is not something that people generally do when they speak vaguely.

Therefore, it may be true that fuzzy logic cannot provide a general model that accounts for vagueness in ordinary everyday language. This is because it does not make much sense to use fuzzy logic to describe a systematic way of predicting universally acceptable truth values for ordinary vague sentences.

However, it does not follow from this that fuzzy logic cannot provide an adequate model for formal vagueness. In fact, Sauerland’s arguments, and the arguments above, are only problematic if we are trying to describe vague speech in general, which is not necessarily a problem for this essay. While we would prefer that scientists are systematic in how they assign membership functions, the fact that they assign the truth values themselves is precisely what we want: the idea is that scientists should assign the truth values of sentences through the corresponding membership functions.13 In other words, it does not matter, with respect to the purpose of this essay, whether we can assign vague truth values to ordinary-language sentences in a systematic and general way. Likewise, we do not wish to describe some universal fuzzy logic language to be used for all kinds of social science. It would be counterproductive to assume that the same kind of vagueness applies to all statements in all social scientific theories; the argument we make is that fuzzy truth values should be assigned specifically to each fuzzy set by scientists, with regard to specific theories. An important part of building a hypothesis, from the philosophical perspective, is to try to describe the semantics of observations, for example by connecting them to theories and postulates. This corresponds directly to assigning fuzzy membership functions to some set. In this sense, fuzzy logic is an excellent tool for science, despite not offering many solutions with regard to vague speech in general.

13 As pointed out by my supervisor, Sebastian Lutz.

On social science and statistics

This is not a complete account of any specific issue, nor a brief account of all issues regarding truth and truth claims within the philosophy of statistics. It is a slightly compressed account of some practical issues of statistics within social science, as well as some general problems with empiricism in philosophy. At least in some respects, the toolbox of a modern social scientist is similar to the toolbox of a statistician. Taagepera (2008) presents the dramatic case that excessive dependence on statistical data analysis is destroying the scientific methodology of the social and political sciences.14 He notes that descriptive statistical data analysis has crowded out explanatory approaches in the stages of research where logical thinking should be implemented. His argument is that scientific research rests on two legs: one comprised of empirical analysis, and one of systematic inquiry into explanations for the data.15 He contends that social science lacks its ‘second leg’, the logical basis upon which descriptive questions of “what is?” should be evaluated.16 In particular, he points out that the logical question of “why?” is dangerously ignored by statistical modelers who call their statistical data fits “empirical models”.17 Taagepera does agree that statistics and statistical analysis serve an important role in scientific research, but argues that they take up too much space and serve as crutches to rely upon, rather than requiring scientists to come up with logical models that can serve as explanations.18 In particular, he notes that the predictive ability of empirical models is often very low, amounting to little more than interesting indicators of correlation.19 Taagepera wants the social sciences to add quantitatively predictive logical modeling to complement the already existing descriptive findings.20

14 Taagepera, R. (2008) Making Social Sciences More Scientific, Preface

15 Making Social Sciences More Scientific, p. 5

16 Making Social Sciences More Scientific, p. 6

17 Making Social Sciences More Scientific, p. 6

18 Making Social Sciences More Scientific, p. 6

19 Making Social Sciences More Scientific, Preface

20 Making Social Sciences More Scientific, p. 11


The arguments against statistics in general, and against specific applications of statistics in social science, are many, and a popular topic of discussion within the field itself. For example, the statistician Andrew Gelman frequently criticizes the statistical misadventures of other social scientists on his blog.21 The case presented in this essay is that this is a problem that philosophers of science should be paying attention to. The premise of that case is that the kind of logical model-building urged by Taagepera, and many others, needs to rest on a firm logical basis, where truth claims can be evaluated properly, which in turn falls within the domain of philosophy.22

While statistical methods and probabilistic causal interpretation do provide insight into experimental data, machine learning algorithms can typically find patterns in data sets of any shape and size, and the number of parameters, together with the noise inherent in measuring any parameter, can construct patterns that are not present in the research problem.23,24 To be more specific: measurement noise is certainly natural and typically hard to mitigate completely. However, noise in measurements needs to be accounted for when constructing causal explanations between parameters.25 From a philosophical perspective, we can say that if some kind of noise affects some measurement, and that noise is not clearly defined or accounted for as a background effect, then any scientific argument based on the evidence of that measurement is inherently less valid. In this sense, the lack of logical models for explanation, models which can themselves take noise within measurements into account, makes the social sciences especially vulnerable to this type of problem. If predictive ability is to be a valuable property in social scientific research, then empirical modeling is not enough.

There are even deeper philosophical concerns with the over-reliance on empirical models: the underdetermination problem stalks any truth claim based only on observational evidence.26 One of the more interesting arguments for scientific underdetermination is the empirical equivalence argument, put forth by van Fraassen (1980), which holds that for every theory there is an empirically equivalent theory that is equally likely to be true as the theory being tested.27 Ultimately, statistical methods cannot be the only adjudicators of the value of some specific piece of scientific evidence.

So, what is the alternative?

21 See Gaydar and the fallacy of objective measurement (2018) and Measurement error and the replication crisis (2017) as examples.

22 Making Social Sciences More Scientific, p. 4. In particular, Taagepera refers to Folk and Luce (1987), McGregor (1993), Sørensen (1998), etc.

23 See the bias-variance tradeoff: Kohavi, Ron; Wolpert, David H. (1996) "Bias Plus Variance Decomposition for Zero-One Loss Functions"

24 See Freedman’s paradox: Freedman, D. (1983) "A Note on Screening Regression Equations"

25 Scheines, R., Ramsey, J. (2016) Measurement Error and Causal Discovery

On sets and fuzzy sets in social sciences

Ragin (2008) emphasizes that claims in social science theory are set-theoretic.28 A statement like “small farmers are risk averse” is a statement about sets and set relations.29 The set of small farmers is a subset of the set of risk-averse individuals. Ragin argues that these types of statements are too often transformed by social scientists into statistical arguments about hypotheses regarding correlations between attributes or entities, and it is his contention that theory formulated in terms of set relations should be evaluated as statements about set relations.30 Whereas both Taagepera and Ragin share a distaste for the over-reliance on statistical modelling and correlational analysis techniques, the main difference in their criticism lies in their approach. Where Taagepera wishes social science to include more predictive model-building, Ragin wants social scientists to consider their hypotheses about correlations instead as statements about set relations. Ragin further recognizes that one of the reasons why social scientists typically dislike set-theoretic analysis is that classical sets are limited to crisp values.31 To counter this problem, Ragin proposes making use of fuzzy sets.32
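As a hypothetical illustration of evaluating such a claim as a set relation rather than a correlation, one inclusion measure found in the fuzzy-set literature, sum(min(x, y)) / sum(x), scores the degree to which one fuzzy set is a subset of another. The membership degrees below are invented, and this particular formula is offered as one common option, not necessarily Ragin's own measure.

```python
# Hypothetical sketch: scoring "small farmers are risk averse" as a
# fuzzy subset relation. Each person has a membership degree in the
# set of small farmers and in the set of risk-averse individuals.

small_farmer = [0.9, 0.8, 0.2, 1.0]   # invented membership degrees
risk_averse  = [1.0, 0.7, 0.9, 0.9]

def inclusion(xs, ys):
    """Degree to which fuzzy set xs is included in fuzzy set ys.

    Equals 1.0 exactly when every x-membership is matched or
    exceeded by the corresponding y-membership.
    """
    return sum(min(x, y) for x, y in zip(xs, ys)) / sum(xs)

print(round(inclusion(small_farmer, risk_averse), 2))  # 0.93
```

A score near 1.0 supports the subset claim even though no correlation coefficient has been computed; the claim is evaluated as a statement about set relations, which is Ragin's point.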

26 Stanford, K. (2017), Underdetermination of Scientific Theory, Stanford Encyclopedia of Philosophy

27 van Fraassen, B. (1980) The Scientific Image.

28 Ragin, C. (2008) Redesigning Social Inquiry: Fuzzy Sets and Beyond. p. 13

29 ibid.

30 ibid.

31 Redesigning Social Inquiry: Fuzzy Sets and Beyond. p.26

32 Redesigning Social Inquiry: Fuzzy Sets and Beyond. p.29


The reasoning behind fuzzy sets and set relations is that many of the topics that social scientists study are vague, in the sense that they vary by level or degree.33 Membership of a country in the set of democratic countries is not necessarily crisp; membership of a product in the set of sustainably produced products is even less so.34 Recall that fuzzy sets are multi-valued sets, meaning that membership in any set is a degree, or score, specified as a real value within the interval [0.0, 1.0]. Ragin considers the possibility of adding “qualitative anchors” at specific breakpoints, determined by some membership score, as a way of distinguishing relevant from irrelevant variation.35 A fairly straightforward example of this could be to use the logistic function, f(x) = e^x / (1 + e^x), as a membership function for a fuzzy set, for instance if we wanted to evaluate individual happiness as a function of individual wealth. The logistic function is a fairly arbitrary choice of membership function, but provided that there is some theoretical assumption connecting individual wealth and individual happiness, it makes sense to place the breakpoints that would distinguish the subsets of “wealthy” or “poor” individuals further away from the membership score of 0.5, where the rate of change is highest. If we select breakpoints that indicate “unambiguously poor” and “unambiguously wealthy”, then we can make statements about how individual wealth relates to individual happiness, based on the subset of people who are neither poor nor wealthy.
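This idea can be sketched in a few lines, using the logistic function as the (arbitrary) membership function and two invented breakpoints as the qualitative anchors; the wealth values and anchor positions are assumptions for illustration only.

```python
import math

# Sketch of "qualitative anchors" on a logistic membership function
# f(x) = e^x / (1 + e^x). Wealth values are hypothetical, centred so
# that a membership score of 0.5 falls at x = 0.

def membership(x: float) -> float:
    return math.exp(x) / (1.0 + math.exp(x))

# Breakpoints placed well away from 0.5, where change is fastest:
POOR_ANCHOR, WEALTHY_ANCHOR = 0.05, 0.95

def classify(x: float) -> str:
    m = membership(x)
    if m <= POOR_ANCHOR:
        return "unambiguously poor"
    if m >= WEALTHY_ANCHOR:
        return "unambiguously wealthy"
    return "neither"

for wealth in (-5.0, 0.0, 5.0):
    print(wealth, round(membership(wealth), 3), classify(wealth))
```

Samples classified as "neither" form the subset on which claims about wealth and happiness would then be evaluated, in line with the passage above.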

Heylen & Nachtgael (2012) similarly point to Ragin, and attempt to incorporate fuzzy set theory into statistical analyses.36 Their view is that the combination of fuzzy set theory with conventional statistics is underestimated, and that it is not necessary to limit methodological approaches either to set-theoretical relations or to statistical correlations.37 Rather, they set out to show that these two kinds of relations can be combined, in what they call ‘configurational statistics’.38 In order to calculate statistics on fuzzy sets, they propose two options. The first is to derive some meaningful breakpoint, such that the fuzzy set becomes a crisp set with membership values of 0 and 1 above or below that breakpoint.39 Then, only the samples that score above or below the breakpoint are considered part of the statistical domain.40 The other option, which is their suggestion, is to use the fuzzy membership degree as a weight factor in the analysis, such that samples with a high degree of membership are given higher weight.41 Their argument is that, if a sample only displays some partial degree of membership in some property, then it is not reasonable to give it full weight in the analysis. However, if its weight in the analysis is determined by its degree of set membership, then its impact on the results is moderated. This, they propose, ensures the exclusion of irrelevant conclusions while at the same time minimizing the exclusion of relevant variation.42 They also propose that configurational statistics opens up a new approach to falsification in the social sciences.43 While it is not the intent of this essay to delve too deeply into the nature of fuzzy set-theoretical statistics, the possibility of fuzzy logic and probability theory being compatible with each other is a good reason to suggest that a fuzzy language of science can be useful within the social sciences, both in theory and in practice. Heylen & Nachtgael can also serve as a metaphorical bridge connecting the arguments of Taagepera and Ragin, showing that their methodologies are not incompatible.

33 ibid.

34 The first example is Ragin’s, the second is mine.

35 Redesigning Social Inquiry: Fuzzy Sets and Beyond, p. 30

36 Heylen, B., Nachtgael, M. (2012) The integration of fuzzy sets and statistics

37 The integration of fuzzy sets and statistics, p. 1

38 The integration of fuzzy sets and statistics, p. 2
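Both options can be sketched in a few lines; the sample membership degrees, the outcome values, and the breakpoint below are invented for illustration.

```python
# Sketch of the two options for combining fuzzy sets with statistics
# described above (all sample values hypothetical).

members = [0.9, 0.8, 0.6, 0.3, 0.1]   # degrees of membership
outcome = [10.0, 12.0, 9.0, 30.0, 40.0]  # measured outcome per sample

# Option 1: dichotomise at a breakpoint; keep only clear members and
# compute an ordinary (unweighted) statistic over them.
BREAKPOINT = 0.5
crisp = [y for m, y in zip(members, outcome) if m >= BREAKPOINT]
print(sum(crisp) / len(crisp))  # plain mean over samples with m >= 0.5

# Option 2: weight each sample by its membership degree, so partial
# members still contribute, but proportionally less.
weighted_mean = sum(m * y for m, y in zip(members, outcome)) / sum(members)
print(weighted_mean)
```

In this toy data, the two weakly-member samples have extreme outcomes; option 1 drops them entirely, while option 2 moderates their influence without discarding them, which is exactly the trade-off between excluding irrelevant conclusions and preserving relevant variation.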

39 The integration of fuzzy sets and statistics, p.10

40 The integration of fuzzy sets and statistics, p.11

41 ibid.

42 ibid.

43 The integration of fuzzy sets and statistics, p.11


Part 2

Constructing a fuzzy scientific language

Language of empirical theories

In order to apply the concept of fuzzy logic to an observational language, we first need to discuss the foundations of scientific language. It is important to distinguish between theory and language: a theory can be thought of as a set of statements (interpreted sentences), which are formulated in a language. This essay is not meant to be a treatise on scientific languages and their surrounding discussion. In the introduction, we explicitly clarified that if we accept the assumption that scientific languages can be useful, then the case we are arguing is that a fuzzy scientific language could be useful. It is not obvious why we should accept the first premise, however. The idea of scientific languages is closely associated with logical positivism, or logical empiricism. Among other things, the connection lies in the necessity of integrating mathematics with the empirical sciences, and determining the logical rules we would need in order to unify empiricism with logic. There has been plenty of criticism of logical empiricism, most notably by Kuhn (1962), Putnam (1974) and even Quine himself (1951).44 Most of the issues with positivism are ontological, or at the very least problematic on a deeper level than this essay is operating at. Likewise, with respect to the question posed in the opening section of the introduction, “But on what logical basis do we validate the results of scientific research?”, the alternatives to logical empiricism, like the paradigms of Kuhn, do not offer less sociological alternatives. As has been stated before, this essay is not a descriptive treatise on how scientists do science; it is a normative project on how philosophers should evaluate the truth claims of scientists who use fuzzy logic, with some added remarks on why social scientists should use fuzzy logic.

44 Kuhn, T. “The Structure of Scientific Revolutions”, Putnam, H. “Problems with the observational/theoretical distinction”, Quine, W V O "Two Dogmas of Empiricism"


Much of this section uses the definitions and examples presented by Carnap (1939).

Carnap presents the case that the chief theoretical procedures in science, which include testing theories, giving explanations for known facts, and predicting unknown facts, involve the application of logic and mathematics.45 Carnap also contends that logic and mathematics supply rules for the transformation of factual sentences by means of non-factual terms. By non-factual, we refer to terms like signs of operations (‘+’, ‘∗’) or the signs of identity and equality (‘≡’, ‘=’). Indeed, Carnap proposes that the logical and mathematical terms or sentences of a scientific language do not contain any factual content.46 What this means can be explicated clearly if we consider how their truth is to be asserted. The truth values of these parts of the sentences are not validated by facts, in the way that a theorem of mathematics is not tested like a theorem of physics. Even more clearly, we can argue that the hypothesis 𝐹 = 𝑚𝑎 contains a factual claim that can be validated by data, but that the law of identity and the law of commutativity involved in the equation itself constitute a non-factual part of the scientific sentence.

Since the practice of science, as Carnap argues and as we can see above, invariably contains logical and mathematical theorems, the basis of their validity must be assured.

Carnap wants to answer these questions by examining how theorems of logic and mathematics are used in the context of empirical science.47 Much of scientific practice involves reports of observations, scientific laws, and theories, as well as predictions;

these are formulations in language which contain descriptions of facts. Since this is a normative project, our analysis of the language of science intentionally aims to suggest a meaning of the terms of science that are logical and mathematical, i.e. those terms that are not validated by empirical facts.

First, some straightforward linguistics is needed. Whenever we investigate a language, regardless of whether it’s supposed to be a formal or practical language, we call the investigated language the ‘object-language’ and the language in which the results of

45 Carnap, R. (1939) Foundations of Logic and Mathematics

46 Foundations of Logic and Mathematics, p. 2

47 Foundations of Logic and Mathematics, p. 2


the investigation are formulated is called the ‘metalanguage’.48 The theory concerning the object-language which is formulated in the metalanguage is sometimes called the ‘metatheory’.49 The metatheory of a language typically has three branches, which are the pragmatics, the semantics, and the syntax of the object-language in question.

Carnap focuses his analysis on these three branches in the study of language, and we will do likewise:

The pragmatics of language

As stated in the introduction, the purpose of this essay is not to describe the pragmatic language of scientists. However, it is still worthwhile to describe what the pragmatics of a language comprises. The basic elements of language systems are signs, and a sequence of signs is an expression.50 Typically, the pragmatic study of a language consists of a set of investigations regarding the designata and mode of use of all the words and expressions of that language.51 Pragmatic investigations should elucidate both the cause and effect of uttering an expression.52 Pragmatics is therefore contextual, as the mode of use specifically involves the different social groups, age groups, and geographical groups in the choice of expressions.53

Importantly, pragmatics does not determine whether the use of a certain expression is right or wrong but only how often it occurs and how often it leads to the effect intended.54 A question of right or wrong must always refer to a system of rules.

However, given the complex nature of human speech, the rules which are laid down are not the rules of the factually given language, i.e. the ‘pragmatic language’.55 The

48 Foundations of Logic and Mathematics, p. 4

49 Foundations of Logic and Mathematics, p. 5

50 Foundations of Logic and Mathematics, p. 5

51 Foundations of Logic and Mathematics, p. 6

52 Foundations of Logic and Mathematics, p. 6

53 ibid.

54 Foundations of Logic and Mathematics, p. 6-7

55 Foundations of Logic and Mathematics, p. 7


rules rather constitute a language system corresponding to that language, which is typically called the ‘semantic system’.56 This is an important distinction.

The semantics of language

In a semantic study of a language, the investigation only concerns the relations between the expressions of the language and their designata, which assign some meaning to the expressions.57 On the basis of facts about the language given by pragmatics, semantics sets out to lay down a system of rules establishing truth relations, which are usually called ‘semantic rules’.58 As indicated previously, the semantic system is not factual but constructed. Nevertheless, we construct semantic systems not arbitrarily but with regard to the facts about the language.59 Typically, we want to divide the signs of our system into two classes: descriptive and logical signs.60 Descriptive signs designate things and properties of things, whereas logical signs designate connections between descriptive signs that are not themselves descriptions of things or properties of things.61 A sequence like ‘A is not B’ contains the descriptive signs A and B and the logical signs ‘is’ and ‘not’, for instance.

Crucially, the semantic system of a language should determine truth conditions for the sentences of the language.62 These truth conditions should clearly make sense of whether the sentences are logically true or not, but we do not need to define whether the sentences are actually pragmatically useful. For instance, the sentence “Germany is a country or Germany is not a country” is a perfectly acceptable (and true) semantic sequence, but it doesn’t make a lot of pragmatic sense. Carnap puts it like this: we understand a language system, or any sign, expression, or sentence in the language system if we know the semantic rules of that system.63 We can also say that the

56 Foundations of Logic and Mathematics, p. 7

57 Foundations of Logic and Mathematics, p. 7

58 ibid.

59 ibid.

60 ibid.

61 ibid.

62 Foundations of Logic and Mathematics, p. 8

63 Foundations of Logic and Mathematics, p. 10-11


semantic rules give an interpretation of the language system.64 On the level of semantics, the language is a system of rules, and so given the rules, we can interpret what might be the logically true, or false, sequences of the language. This does not presume anything about the factually true or false properties of any sentence in the language.

The logical syntax of language

The next logical step, if we want to abstract further from the pragmatics of the language, is to consider the syntax. The syntax of any language only takes the expressions themselves into consideration and disregards the objects and their properties, as well as any potential states of affairs of the speakers, discussed in the sections above.65 Even the relations of the designations previously discussed are left behind. To distinguish our discussion from the term ‘syntax’ as it may be used in the field of linguistics, which is not necessarily restricted to logical terms, we will use the term ‘logical syntax’ from now on.66 We can compress Carnap’s discussion of the topic slightly, and state that the logical syntax of a language can be defined briefly in this way: it is the theory of any object language that concerns itself with terms that are formal; a term is called formal if it only refers to the different kinds of signs of the object language and the order in which they appear, with no reference to any extra-linguistic objects or to the designata of the signs.67 Another word that is commonly used to describe such a formal system of syntax is ‘calculus’.68

The possibility for such a system to completely formalize logical deduction is the key ingredient that allows us to build a formal language of our own that can be used for scientific purposes.69 The components any calculus requires in order to serve such a

64 Foundations of Logic and Mathematics, p. 11

65 Foundations of Logic and Mathematics, p. 16

66 ibid.

67 Foundations of Logic and Mathematics, p. 16-17

68 Foundations of Logic and Mathematics, p. 17

69 ibid.


purpose are some primitive sentences, like axioms, and some rules of inference.70 If it is then provable that the system is sound and complete, then all logically true or false sentences can be derived from within the formal system itself.71 With regard to the relation between calculus and a semantic system, Carnap invokes the term ‘interpretation’ again to mean that a semantical system 𝑆 interprets a calculus 𝐶 if the rules in 𝑆 determine truth criteria for all sentences of 𝐶.72 This means simply that for every possible formula of 𝐶 there is a corresponding proposition of 𝑆.

On the axiomatization of language

Fundamentally, the separation between pragmatics, semantics, and logical syntax, with respect to the sentences of a scientific language, is based on levels of abstraction. If we take into consideration the action, state, and environment of someone who speaks or hears a word, then we are considering the pragmatics of the language.73 Say then, that we abstract from the conditions and circumstances of the speaker and simply take the word and consider it as an element of the language. The relation between the expression which contains the word and the word itself happens on this level of abstraction, and it is here that we are considering the semantics of the language. If we abstract even from the designata of the word and restrict the investigation to formal properties of the expressions and relationships among them, then we are considering the logical syntax of the language. The logical syntax of the language can also be considered the axiomatization of a language since it is similarly the foundation of the rules between the logical signs of the language.

Przelecki (1969) also treats formal languages; he takes a formalized language to be something that is usually defined by enumerating its primitive signs, which consist of its individual variables, its logical constants, and its descriptive constants.74 Since Przelecki gives a fairly descriptive account, we will follow it here, and later as we

70 Foundations of Logic and Mathematics, p. 17

71 Foundations of Logic and Mathematics, p. 20

72 Foundations of Logic and Mathematics, p. 21

73 Foundations of Logic and Mathematics, p. 4

74 Przelecki, M. (1969) The Logic of Empirical Theories, p.6-7


describe a fuzzy language. After its primitive signs are enumerated, the language is extended by laying down rules of formation, which tell us how its compound expressions, like sentences, are to be constructed out of the simpler ones.75 Przelecki argues that the formalization of a language 𝐿 makes it possible to codify the system of logic presupposed by a theory 𝑇.76 Przelecki further notes that such a procedure consists of selecting a set of logical axioms and laying down a suitable set of rules of transformation or inference.77 Below, we will discuss how to construct such a formalized language through axiomatization.

To Przelecki, axiomatization provides a means of characterizing a theory precisely.78 A theory always comprises all of its logical consequences: 𝐶𝑛(𝑇) ⊆ 𝑇.79 We can call this a logical system. Being a logical system, a theory is always an infinite set of statements.80 In one case, there exists an effective procedure enabling anyone to decide, in a finite number of predetermined steps, whether or not any given sentence of 𝐿 is a theorem of the theory 𝑇. In the other case there is no such procedure. The notion of a theorem of 𝑇 is effective in the first case and ineffective in the second.81 A theory for which an effective decision procedure is available is said to be decidable; a theory that does not satisfy this condition is called an undecidable theory.82 According to Przelecki, all actual empirical theories belong to undecidable systems.83

A theory 𝑇 is said to be axiomatizable if all its theorems follow from a decidable subset of them, that is, there is a decidable set 𝐴, called the set of axioms, such that 𝑇 = 𝐶𝑛(𝐴).84 If a set of axioms is infinite, the procedure of defining them instead becomes the formulation of schemata for the axioms.85 If 𝐴 is not only decidable but also finite, 𝑇

75 The Logic of Empirical Theories, p.8

76 The Logic of Empirical Theories, p.8

77 ibid.

78 The Logic of Empirical Theories, p.10

79 ibid.

80 ibid.

81 ibid.

82 ibid.

83 The Logic of Empirical Theories, p.11

84 The Logic of Empirical Theories, p.11

85 ibid.


is said to be finitely axiomatizable. While undecidable, empirical theories are certainly axiomatizable and, for the most part, finitely axiomatizable.86 They can then be presented as axiomatic systems.

To do this, we need to specify a set of axioms 𝐴.87 If 𝐴 is finite, the axioms may be explicitly enumerated; if 𝐴 is infinite, it is usually specified by formulating general schemata for the axioms.88 An axiomatizable theory may admit an infinite number of different sets of axioms; any specifically given axiomatization of a theory constitutes only one of its possible representations.89 Przelecki uses the first-order predicate calculus with identity as the logical basis of his language.90 The formalized language can then be characterized as follows:

Primitive signs in the predicate calculus91

1. individual variables: {𝑥𝑖: 𝑖 ∈ 𝑁};

2. logical constants: (a) sentential connectives: ¬ (negation), ∧ (conjunction), ∨ (disjunction), → (implication), ↔ (equivalence); (b) quantifiers: ∀ (general), ∃ (existential); (c) the sign of identity: =;

3. descriptive constants: {𝑃𝑛: 𝑛 ∈ 𝑁} (where each 𝑃𝑛 is a 𝑘𝑛-place predicate)

The next step in formalizing a language is to establish the formulas. In this language, the simplest well-formed formulas are of the form 𝑃𝑖(𝑥1, . . . , 𝑥𝑘) or 𝑥1 = 𝑥2; all others are formed from them by means of sentential connectives and quantifiers.92 In other words, if 𝑓 is a formula and 𝑥 is a variable, then (∀𝑥)𝑓 and (∃𝑥)𝑓 are formulas as well. Among all well-formed formulas of our language, the ones which contain no free variables we

86 ibid.

87 ibid.

88 ibid.

89 ibid.

90 The Logic of Empirical Theories, p.7

91 ibid.

92 The Logic of Empirical Theories, p.7


call the sentences.93 The sentences are the most important type of expression.94 Moreover, the notion of a sentence is an effective concept.95 There exists an effective procedure for deciding, for an arbitrary expression, whether it is a sentence of the language.96 With regard to the creation of our language, the next step will be to set up our axioms; at this point, we switch over to defining our fuzzy logic system.
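The effectiveness of sentence-hood can be illustrated with a small sketch, offered purely as an aid and not as part of the formal apparatus; the tuple encoding of formulas and all function names are our own assumptions, not Przelecki’s notation. It computes the free variables of a formula recursively and decides, in finitely many steps, whether the formula is a sentence.

```python
# Illustrative sketch (our own encoding, not Przelecki's notation):
# computing free variables and deciding sentence-hood effectively.

def free_vars(f):
    """Return the set of free variables of a formula."""
    kind = f[0]
    if kind == "pred":                       # ("pred", "P1", ("x1", "x2"))
        return set(f[2])
    if kind == "eq":                         # ("eq", "x1", "x2")
        return {f[1], f[2]}
    if kind == "not":                        # ("not", subformula)
        return free_vars(f[1])
    if kind in ("and", "or", "imp", "iff"):  # ("and", left, right)
        return free_vars(f[1]) | free_vars(f[2])
    if kind in ("forall", "exists"):         # ("forall", "x1", subformula)
        return free_vars(f[2]) - {f[1]}
    raise ValueError(f"unknown formula kind: {kind}")

def is_sentence(f):
    """A sentence is a well-formed formula with no free variables."""
    return len(free_vars(f)) == 0

# (forall x1) P1(x1, x2) leaves x2 free, so it is not a sentence;
# (forall x1)(exists x2) P1(x1, x2) is a sentence.
open_formula = ("forall", "x1", ("pred", "P1", ("x1", "x2")))
closed_formula = ("forall", "x1",
                  ("exists", "x2", ("pred", "P1", ("x1", "x2"))))
```

Because the recursion terminates on every finite expression, the sketch mirrors the effective decision procedure mentioned above.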

Axiomatization of a fuzzy language

We will make use of one of the more fundamental fuzzy logic systems, namely the Monoidal T-norm based Logic, or MTL henceforth, proposed by Esteva & Godo (2001).97 Because designating predicates for fuzzy logic sentences is a somewhat involved process, we will simply start by assuming that we have defined the notions of a formula and a sentence. We are then restricted to sentential logic. To be clear, in order to develop an entire fuzzy language, we would need to define the formulas, but this is beyond the scope of this essay. Readers can verify that predicate fuzzy logic is developed in Esteva & Godo (2001) as well as in Cintula & Hájek (2009).98 99

This is not an argument that MTL is the only adequate system of fuzzy logic for our purpose, nor that it is necessarily the most adequate. The argument we are proposing is that there exists a fuzzy logic system adequate for the purpose of an observational language, and we will use MTL to show that this is in fact possible. MTL is an extension of Monoidal logic.100 A triangular norm, or ‘t-norm’, is a function 𝑇: [0,1] × [0,1] → [0,1] which satisfies the properties of commutativity, monotonicity and associativity, and for which the number 1 acts as the identity element.101 The logic MTL is defined as a consequence relation over the

93 ibid.

94 ibid.

95 ibid.

96 ibid.

97 Esteva, F., Godo, L. (2001) Monoidal T-Norm Based Logic

98 Monoidal T-Norm Based Logic, ch. 4

99 Cintula, P., Hajek, P. (2009) Triangular norm based predicate fuzzy logics

100 Monoidal T-Norm Based Logic, p.1

101 Definition. 2.1.1. Hajek, P. (2001) Metamathematics of Fuzzy Logic, p. 28


semantics given by any left-continuous triangular norm. In fact, because of this, MTL logic is the logic of all left-continuous t-norms.102

We will assume here that our t-norm is multiplication of real numbers, the so-called product t-norm. The primary reason for using the product t-norm is simply that the semantics becomes much easier; see, for example, Vetterlein (2008) on the difficulties of defining a weaker left-continuous t-norm.103 In order to use it, we will have to add two axioms, but predicates still work for product logic.104 Given a particular continuous t-norm ∗, a truth evaluation 𝑒 maps propositional variables to [0,1] and is extended to all formulas by the interpretation of the primitive connectives, as below. The MTL system is defined as a quantified propositional calculus, and so we introduce, as in the case of the predicate calculus defined by Przelecki above, three distinct categories of primitive signs.
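As an illustrative aside, not part of the formal apparatus, the four t-norm conditions can be spot-checked for the product on a sample grid; the sketch below uses exact rational arithmetic, and its function names are our own.

```python
from fractions import Fraction

# Illustrative spot-check (names our own): the product of reals
# satisfies the four t-norm conditions on a sample grid in [0, 1].

def t_product(x, y):
    return x * y

grid = [Fraction(i, 8) for i in range(9)]

def is_tnorm(t):
    for x in grid:
        if t(x, Fraction(1)) != x:                  # 1 is the identity
            return False
        for y in grid:
            if t(x, y) != t(y, x):                  # commutativity
                return False
            for z in grid:
                if t(t(x, y), z) != t(x, t(y, z)):  # associativity
                    return False
                if y <= z and t(x, y) > t(x, z):    # monotonicity
                    return False
    return True
```

A grid check of course proves nothing in general, but for the product all four conditions in fact hold on all of [0,1].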

Individual variables105

1. {𝑝𝑖: 𝑖 ∈ 𝑁} and {𝑥𝑖: 𝑖 ∈ 𝑁} are countable sets of free 𝑝-variables and bound 𝑥-variables, respectively.

Primitive connectives106

Let us define the connectives in more detail. MTL has the primitive sentential connectives &, ∧, →, and the definable sentential connectives ∨, ¬, ↔.

1. ∧: 𝑥 ∧ 𝑦 = 𝑚𝑖𝑛{𝑥, 𝑦} for all 𝑥, 𝑦 ∈ [0,1] is another associative and commutative conjunction. ∧ is also an idempotent conjunction. By idempotence, we refer to the following property of ∧:

- 𝜑 ∧ (𝜑 ∧ 𝜒) = 𝜑 ∧ 𝜒

Because fuzzy truth values range between 0 and 1, it makes sense that applying conjunction twice may not necessarily result in the same truth value as applying it

102 Monoidal T-Norm Based Logic, p.4

103 Vetterlein, T. (2008) Regular left-continuous t-norms

104 See axioms 10 and 11, respectively. Also see Hajek, P. (2001) for product logic predicates

105 Monoidal T-Norm Based Logic, p.3

106 Monoidal T-Norm Based Logic, p.3


once.107 Therefore, it is typical to introduce a second conjunction, which is non-idempotent:

2. &: 𝑥 & 𝑦 = 𝑥 ∗ 𝑦 for all 𝑥, 𝑦 ∈ [0,1] is the non-idempotent conjunction: it is defined by the left-continuous t-norm ∗. As we stated, the t-norm is in our case defined as the product 𝑥 ⋅ 𝑦. Note that the conjunction & is stronger than ∧ in MTL, in the sense that the formula (𝜑 & 𝜓) → (𝜑 ∧ 𝜓) is a tautology. Further justification for the non-idempotence follows when we look at the truth evaluations of fuzzy statements below.

3. →: the implication is interpreted by the residuum ⇒ of the t-norm, where 𝑥 ⇒ 𝑦 = 1 if 𝑥 ≤ 𝑦, and 𝑥 ⇒ 𝑦 = 𝑦/𝑥 otherwise. Algebraically, t-norms define a residuated implication function, which is defined as the unique function satisfying the condition 𝑥 ⋅ 𝑦 ≤ 𝑧 if and only if 𝑥 ≤ (𝑦 ⇒ 𝑧).108 The residuum can be characterized as the weakest function that makes the fuzzy modus ponens for product logic valid, which makes it a suitable truth function for implication.109
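As an illustrative aside, the three primitive truth functions can be sketched and the residuation condition spot-checked exactly on a sample grid. We write the residuum with its 𝑥 ≤ 𝑦 case explicit (the residuum equals 1 when 𝑥 ≤ 𝑦); all function names are our own.

```python
from fractions import Fraction

# Illustrative sketch (names our own) of the primitive truth functions
# under the product t-norm, with an exact check of residuation:
# x * y <= z  iff  x <= (y => z).

def weak_and(x, y):      # the idempotent conjunction: min
    return min(x, y)

def strong_and(x, y):    # the non-idempotent conjunction: product
    return x * y

def residuum(x, y):      # x => y: 1 if x <= y, else y / x
    return Fraction(1) if x <= y else y / x

grid = [Fraction(i, 8) for i in range(9)]
residuation_ok = all(
    (strong_and(x, y) <= z) == (x <= residuum(y, z))
    for x in grid for y in grid for z in grid
)
```

Exact rationals are used so that the equivalence in the residuation condition is tested without floating-point noise.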

Truth constants110

In MTL we have two truth constants: the primitive 0, denoting total falsity, and the derivable truth constant 1, which is defined as 1 = ¬0 (that is, 0 → 0).

Definable connectives111

The definable connectives of MTL are, as the name implies, defined through derivation from its primitive connectives.

1. ∨: disjunction is defined as 𝜑 ∨ 𝜓 = ((𝜑 → 𝜓) → 𝜓) ∧ ((𝜓 → 𝜑) → 𝜑)

2. ¬: negation is defined as ¬𝜑 = 𝜑 → 0

3. ↔: equivalence is defined as 𝜑 ↔ 𝜓 = (𝜑 → 𝜓) ∧ (𝜓 → 𝜑)
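These derivations can likewise be spot-checked as an illustrative aside (names our own): the sketch below defines the connectives from → and ∧ under product semantics and verifies, on a sample grid, the standard fact that the defined disjunction coincides with max.

```python
from fractions import Fraction

# Illustrative sketch (names our own): the definable connectives,
# derived from the primitives, under product semantics.

def IMP(x, y):   # product residuum: 1 if x <= y, else y / x
    return Fraction(1) if x <= y else y / x

def OR(x, y):    # ((phi -> psi) -> psi) /\ ((psi -> phi) -> phi)
    return min(IMP(IMP(x, y), y), IMP(IMP(y, x), x))

def NOT(x):      # phi -> 0
    return IMP(x, Fraction(0))

def IFF(x, y):   # (phi -> psi) /\ (psi -> phi)
    return min(IMP(x, y), IMP(y, x))

# Under product semantics the defined disjunction coincides with max.
grid = [Fraction(i, 8) for i in range(9)]
or_is_max = all(OR(x, y) == max(x, y) for x in grid for y in grid)
```

Note also that the derived negation is crisp under the product t-norm: it maps 0 to 1 and every positive value to 0.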

107 §2, Fuzzy Logic, Stanford Encyclopedia of Philosophy

108 Fuzzy Logic, Stanford Encyclopedia of Philosophy, §1

109 Theorem 8. Metamathematics of Fuzzy Logic, p. 30

110 Monoidal T-Norm Based Logic, p.3

111 Monoidal T-Norm Based Logic, p.3


Axioms and inference rule112

The axioms of MTL, as defined by Esteva and Godo, are the following:

1. (𝜑 → 𝜓) → ((𝜓 → 𝜒) → (𝜑 → 𝜒)),
2. (𝜑 & 𝜓) → 𝜑, and
3. (𝜑 & 𝜓) → (𝜓 & 𝜑), and correspondingly,
4. (𝜑 ∧ 𝜓) → 𝜑, and
5. (𝜑 ∧ 𝜓) → (𝜓 ∧ 𝜑),

such that it is made clear that & and ∧ are distinctly defined.

6. (𝜑 & (𝜑 → 𝜓)) → (𝜑 ∧ 𝜓),
7. a) (𝜑 → (𝜓 → 𝜒)) → ((𝜑 & 𝜓) → 𝜒) and b) ((𝜑 & 𝜓) → 𝜒) → (𝜑 → (𝜓 → 𝜒)),
8. ((𝜑 → 𝜓) → 𝜒) → (((𝜓 → 𝜑) → 𝜒) → 𝜒),
9. 0 → 𝜑,

and because we are using the product t-norm,113 the following axioms are added:

10. (𝜑 & (𝜑 → 𝜓)) → (𝜓 & (𝜓 → 𝜑)),
11. ¬𝜑 ∨ ((𝜑 → 𝜑 & 𝜓) → 𝜓)

The rule of inference of MTL is modus ponens: from 𝜑 → 𝜓 and 𝜑, infer 𝜓 (𝜑 → 𝜓, 𝜑 ⊢ 𝜓).114
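The soundness of this axiomatization with respect to product semantics can be illustrated by evaluating the axiom schemas on a grid of rational truth values: every instance should take the value 1, and modus ponens should preserve the value 1. The sketch below (names our own) checks a representative selection of the axioms, reading disjunction as max, which, as noted above, is its value under product semantics.

```python
from fractions import Fraction

# Illustrative soundness spot-check (names our own): selected axioms
# evaluated under product semantics on a grid of rational truth values.

ONE, ZERO = Fraction(1), Fraction(0)

def AND(x, y):   # strong conjunction &: the product t-norm
    return x * y

def IMP(x, y):   # product residuum: 1 if x <= y, else y / x
    return ONE if x <= y else y / x

def NOT(x):      # negation: x -> 0
    return IMP(x, ZERO)

grid = [Fraction(i, 4) for i in range(5)]

def all_valid():
    for a in grid:
        for b in grid:
            for c in grid:
                checks = [
                    IMP(IMP(a, b), IMP(IMP(b, c), IMP(a, c))),   # transitivity
                    IMP(AND(a, b), a),                           # weakening
                    IMP(AND(a, b), AND(b, a)),                   # commutativity
                    IMP(IMP(a, IMP(b, c)), IMP(AND(a, b), c)),   # currying, one way
                    IMP(IMP(AND(a, b), c), IMP(a, IMP(b, c))),   # currying, other way
                    IMP(IMP(IMP(a, b), c),
                        IMP(IMP(IMP(b, a), c), c)),              # prelinearity-style
                    IMP(ZERO, a),                                # ex falso
                    IMP(AND(a, IMP(a, b)), AND(b, IMP(b, a))),   # divisibility (10)
                    max(NOT(a), IMP(IMP(a, AND(a, b)), b)),      # product axiom (11)
                ]
                if any(v != ONE for v in checks):
                    return False
    return True

# Modus ponens preserves the value 1:
# if e(phi) = 1 and e(phi -> psi) = 1, then e(psi) = 1.
mp_sound = all(b == ONE
               for a in grid for b in grid
               if a == ONE and IMP(a, b) == ONE)
```

Again, a finite grid does not constitute a proof, but every schema checked here is in fact valid on all of [0,1] under product semantics.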

Truth conditions

In order for the semantic system of a language to be consistent, all that is necessary is that the truth value of each sentence can be evaluated.115 The truth evaluations 𝑒 in product logic are mappings that assign each propositional variable a truth value belonging to the real unit interval [0,1], and they are uniquely extended to arbitrary formulas through the connectives defined above, i.e. through:

1. 𝑒(𝜑 & 𝜓) = 𝑒(𝜑) ∗ 𝑒(𝜓) = 𝑒(𝜑) ⋅ 𝑒(𝜓),
2. 𝑒(𝜑 → 𝜓) = 𝑒(𝜑) ⇒ 𝑒(𝜓), which equals 1 if 𝑒(𝜑) ≤ 𝑒(𝜓) and 𝑒(𝜓)/𝑒(𝜑) otherwise,

112 Monoidal T-Norm Based Logic, p.3

113 Fuzzy Logic, Stanford Encyclopedia of Philosophy, §5

114 Monoidal T-Norm Based Logic, p.3

115 See the section ‘Semantics of language’ above


3. 𝑒(𝜑 ∧ 𝜓) = 𝑚𝑖𝑛(𝑒(𝜑), 𝑒(𝜓))
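The compositional character of this extension can be sketched as a small recursive evaluator; the tuple encoding of formulas and the names are our own illustrative choices, and the residuum clause is written with its case split explicit.

```python
from fractions import Fraction

# Illustrative sketch: extending a truth evaluation e from propositional
# variables to compound formulas by the clauses above. The tuple
# encoding of formulas is our own assumption.

def evaluate(f, e):
    if isinstance(f, str):                # a propositional variable
        return e[f]
    op, left, right = f
    if op == "&":                         # strong conjunction: product
        return evaluate(left, e) * evaluate(right, e)
    if op == "and":                       # weak conjunction: min
        return min(evaluate(left, e), evaluate(right, e))
    if op == "->":                        # product residuum
        x, y = evaluate(left, e), evaluate(right, e)
        return Fraction(1) if x <= y else y / x
    raise ValueError(f"unknown connective: {op}")

# An evaluation assigning p the value 3/4 and q the value 1/2.
e = {"p": Fraction(3, 4), "q": Fraction(1, 2)}
```

For instance, under this evaluation p & q receives 3/8, p ∧ q receives 1/2, and p → q receives 2/3.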

Fuzzy truth evaluations

In order for us to determine whether the truth conditions above represent a useful semantics, we will evaluate them and provide intuitive arguments for their usefulness.

We will begin with the ‘spiky and’, i.e. the weak conjunction ∧:

Let us propose that we have some observational sentence 𝐴 which has some fuzzy truth value 𝑒(𝐴). Note that any fuzzy truth value 𝑒 is determined by the degree of membership in some fuzzy set. Since we are not suggesting fuzzy membership functions for practical use here, the values are simply referred to as 𝑒. If we have some other observational sentence 𝐵 with a corresponding fuzzy truth value 𝑒(𝐵), then from the truth condition it follows that we can evaluate the truth of the proposition 𝐴 ∧ 𝐵 as such:

𝑒(𝐴 ∧ 𝐵) = 𝑚𝑖𝑛(𝑒(𝐴), 𝑒(𝐵)).

An interpretation of this can be given as follows: if 𝐴 is the proposition that Bob is an old person and 𝐵 is the proposition that Bob has gray hair, then the proposition Bob is an old person ∧ Bob has gray hair is evaluated as the minimum of the truth values of the two statements. An intuitive argument for this is that ∧ generalizes classical conjunction: in classical logic, 𝐴 ∧ 𝐵 cannot be true if either 𝐴 or 𝐵 is false, and both classical conjunction and ∧ take the minimal value of the two conjuncts. With regard to contradictory statements, like 𝐴 ∧ ¬𝐴, these would be evaluated as 𝑚𝑖𝑛(𝑒(𝐴), 𝑒(¬𝐴)) = 𝑚𝑖𝑛(𝑒(𝐴), 0) = 0 for all 𝑒(𝐴) > 0, and as 𝑚𝑖𝑛(𝑒(𝐴), 1) = 𝑒(𝐴) = 0 for 𝑒(𝐴) = 0.116

So a complete contradiction is completely false, which is what we would intuitively expect. Let us also consider & and →.

116 This is based on product logic negation, see Lemma 2.1.13 in Metamathematics of Fuzzy Logic, p. 31


With regard to &, since we have defined it as the product of real numbers, the truth evaluation is 𝑒(𝐴 & 𝐵) = 𝑒(𝐴) ⋅ 𝑒(𝐵). The argument we made before is that it should make sense that applying the same truth function twice might not yield the same result. In general, this is because we want to be able to be less certain of 𝐴 and 𝐵 together than we are of 𝐴 or 𝐵 individually. This kind of reasoning is mirrored in probability theory, where for independent events 𝑃(𝐴 ∧ 𝐵) = 𝑃(𝐴) ⋅ 𝑃(𝐵), which is at most 𝑚𝑖𝑛(𝑃(𝐴), 𝑃(𝐵)). Indeed, Bayesian probability can be seen as a mathematical model of ignorance. The intuitive bridge between ignorance and vagueness can be found within the non-idempotent conjunction, if we consider that if something is fuzzy, then it is not perfectly defined, and if it is not perfectly defined, this is because it is not completely known. So it makes sense to consider that the formalism should be transferable between probability and fuzzy logic, given that there is a cognitive link between the concepts of ignorance and vagueness.

Clearly, the non-idempotent property does not hold if either 𝑒(𝐴) or 𝑒(𝐵) is 0 or 1, but if they are not, then iterative applications of & will yield different truth values. And if either of 𝑒(𝐴) or 𝑒(𝐵) is 0 or 1, then in any case the assumption is already that the fuzzy connectives will behave like classical connectives.117 We will use the propositions 𝐴, that Bob is an old person, and 𝐵, that Bob has gray hair, again, and reason like this: the sentence Bob is an old person & Bob has gray hair is evaluated as the product 𝑒(𝐴) ⋅ 𝑒(𝐵). So, if by some set membership function the truth values have been assigned as 𝑒(𝐴) = 0.75 and 𝑒(𝐵) = 0.78, then the evaluation is 𝑒(𝐴 & 𝐵) = 0.75 ⋅ 0.78 = 0.585.

If we compare this to the weaker conjunction ∧, we would get 𝑒(𝐴 ∧ 𝐵) = 𝑚𝑖𝑛(0.75, 0.78) = 0.75. In fact, the weaker conjunction giving a higher truth value holds generally whenever both truth values lie strictly between 0 and 1. We give no formal proof, but it can easily be seen: since 𝑎 ⋅ 𝑏 = 𝑚𝑖𝑛{𝑎, 𝑏} ⋅ 𝑚𝑎𝑥{𝑎, 𝑏} and 𝑚𝑎𝑥{𝑎, 𝑏} ≤ 1, we always have 𝑎 ⋅ 𝑏 ≤ 𝑚𝑖𝑛{𝑎, 𝑏}, with strict inequality when both values are strictly between 0 and 1. The stronger conjunction & therefore gives us a lower truth value, i.e. a logically stronger evaluation, which is reasonable.
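As a numeric aside, the worked example and the general dominance of the weak conjunction over the strong one can be checked directly (variable names are our own):

```python
# Numeric check: the worked example above, and the claim that
# min(a, b) >= a * b on [0, 1], strict when both conjuncts lie
# strictly between 0 and 1. Variable names are our own.

e_A, e_B = 0.75, 0.78

strong = e_A * e_B      # e(A & B): product conjunction
weak = min(e_A, e_B)    # e(A /\ B): min conjunction

grid = [i / 20 for i in range(21)]
dominates = all(min(a, b) >= a * b for a in grid for b in grid)
strictly = all(min(a, b) > a * b
               for a in grid for b in grid
               if 0 < a < 1 and 0 < b < 1)
```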

117 See section on ‘Fuzzy logic’ in part 1

References
