The raven paradox, also known as Hempel's paradox, Hempel's ravens, or rarely the paradox of indoor ornithology,[1] is a paradox arising from the question of what constitutes evidence for the truth of a statement. Observing objects that are neither black nor ravens may formally increase the likelihood that all ravens are black even though, intuitively, these observations are unrelated.
This problem was proposed by the logician Carl Gustav Hempel in the 1940s to illustrate a contradiction between inductive logic and intuition.[2]
Hempel describes the paradox in terms of the hypothesis:[3][4]
(1) All ravens are black. In the form of an implication, this can be expressed as: If something is a raven, then it is black.
Via contraposition, this statement is equivalent to:
(2) If something is not black, then it is not a raven.
In all circumstances where (2) is true, (1) is also true—and likewise, in all circumstances where (2) is false (i.e., if a world is imagined in which something that was not black, yet was a raven, existed), (1) is also false.
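This equivalence can be checked mechanically: for every combination of "is a raven" and "is black", the material conditionals "if it is a raven, it is black" and "if it is not black, it is not a raven" take the same truth value. A minimal sketch in Python (the helper function is purely illustrative):

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Check every possible combination of "x is a raven" and "x is black".
for is_raven, is_black in product([True, False], repeat=2):
    statement_1 = implies(is_raven, is_black)          # (1) if x is a raven, x is black
    statement_2 = implies(not is_black, not is_raven)  # (2) if x is not black, x is not a raven
    assert statement_1 == statement_2
print("(1) and (2) agree for every possible object")
```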
Given a general statement such as all ravens are black, a form of the same statement that refers to a specific observable instance of the general class would typically be considered to constitute evidence for that general statement. For example,
(3) Nero is a raven and is black
is evidence supporting the hypothesis that all ravens are black.
The paradox arises when this same process is applied to statement (2). On sighting a green apple, one can observe:
(4) This green (and thus not black) apple is not a raven.
By the same reasoning, this statement is evidence that (2) if something is not black then it is not a raven. But since (as above) this statement is logically equivalent to (1) all ravens are black, it follows that the sight of a green apple is evidence supporting the notion that all ravens are black. This conclusion seems paradoxical because it implies that information has been gained about ravens by looking at an apple.
Nicod's criterion says that only observations of ravens should affect one's view as to whether all ravens are black. Observing more instances of black ravens should support the view, observing white or coloured ravens should contradict it, and observations of non-ravens should not have any influence.[5]
Hempel's equivalence condition states that when a proposition, X, provides evidence in favor of another proposition Y, then X also provides evidence in favor of any proposition that is logically equivalent to Y.[6]
The paradox shows that Nicod's criterion and Hempel's equivalence condition are not mutually consistent. A resolution to the paradox must reject at least one out of:[7]
- negative instances having no influence (!PC),
- the equivalence condition (EC), or
- validation by positive instances (NC).
A satisfactory resolution should also explain why there naively appears to be a paradox. Solutions that accept the paradoxical conclusion can do this by presenting a proposition that we intuitively know to be false but that is easily confused with (PC), while solutions that reject (EC) or (NC) should present a proposition that we intuitively know to be true but that is easily confused with (EC) or (NC).
Although this conclusion of the paradox seems counter-intuitive, some approaches accept that observations of (coloured) non-ravens can in fact constitute valid evidence in support for hypotheses about (the universal blackness of) ravens.
Hempel himself accepted the paradoxical conclusion, arguing that the reason the result appears paradoxical is that we possess prior information without which the observation of a non-black non-raven would indeed provide evidence that all ravens are black.
He illustrates this with the example of the generalization "All sodium salts burn yellow," and asks us to consider the observation made when somebody holds a piece of pure ice in a colorless flame and the flame does not turn yellow:[3]:19–20
This result would confirm the assertion, "Whatever does not burn yellow is not sodium salt," and consequently, by virtue of the equivalence condition, it would confirm the original formulation. Why does this impress us as paradoxical? The reason becomes clear when we compare the previous situation with the case of an experiment where an object whose chemical constitution is as yet unknown to us is held into a flame and fails to turn it yellow, and where subsequent analysis reveals it to contain no sodium salt. This outcome, we should no doubt agree, is what was to be expected on the basis of the hypothesis ... thus the data here obtained constitute confirming evidence for the hypothesis. ...
In the seemingly paradoxical cases of confirmation, we are often not actually judging the relation of the given evidence, E alone to the hypothesis H ... we tacitly introduce a comparison of H with a body of evidence which consists of E in conjunction with an additional amount of information which we happen to have at our disposal; in our illustration, this information includes the knowledge (1) that the substance used in the experiment is ice, and (2) that ice contains no sodium salt. If we assume this additional information as given, then, of course, the outcome of the experiment can add no strength to the hypothesis under consideration. But if we are careful to avoid this tacit reference to additional knowledge ... the paradoxes vanish.
One of the most popular proposed resolutions is to accept the conclusion that the observation of a green apple provides evidence that all ravens are black but to argue that the amount of confirmation provided is very small, due to the large discrepancy between the number of ravens and the number of non-black objects. According to this resolution, the conclusion appears paradoxical because we intuitively estimate the amount of evidence provided by the observation of a green apple to be zero, when it is in fact non-zero but extremely small.
I. J. Good's presentation of this argument in 1960[8] is perhaps the best known, and variations of the argument have been popular ever since,[9] although it had been presented in 1958[10] and early forms of the argument appeared as early as 1940.[11]
Good's argument involves calculating the weight of evidence provided by the observation of a black raven or a white shoe in favor of the hypothesis that all the ravens in a collection of objects are black. The weight of evidence is the logarithm of the Bayes factor, which in this case is simply the factor by which the odds of the hypothesis change when the observation is made. Because non-black objects vastly outnumber ravens, the Bayes factor for a white shoe turns out to be only barely greater than one, while the Bayes factor for a black raven is appreciably larger.
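A minimal sketch of this kind of calculation follows. The population counts, and the choice of a simple alternative hypothesis (exactly one non-black raven), are illustrative assumptions made here; they are not Good's figures.

```python
import math

# Illustrative (assumed) population: N objects, r ravens, b black objects.
N, r, b = 1_000_000, 100, 500_000

# H:  all ravens are black.
# H1: exactly one raven is not black (a simple illustrative alternative).
# One object is drawn uniformly at random and fully observed.
p_black_raven_H  = r / N             # under H, all r ravens are black
p_black_raven_H1 = (r - 1) / N       # under H1, one raven is not black
p_white_shoe_H   = (N - b) / N       # under H, every non-black object is a non-raven
p_white_shoe_H1  = (N - b - 1) / N   # under H1, one non-black object is a raven

# Bayes factor = factor by which the odds of H change; weight of evidence = its logarithm.
w_black_raven = math.log(p_black_raven_H / p_black_raven_H1)  # log(100/99) ~ 0.01
w_white_shoe  = math.log(p_white_shoe_H / p_white_shoe_H1)    # ~ 0.000002

print(w_black_raven, w_white_shoe)
# Both weights are positive (both observations confirm H), but the black raven
# provides several orders of magnitude more evidence than the white shoe.
```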
Many of the proponents of this resolution and variants of it have been advocates of Bayesian probability, and it is now commonly called the Bayesian Solution, although, as Chihara[13] observes, "there is no such thing as the Bayesian solution. There are many different 'solutions' that Bayesians have put forward using Bayesian techniques." Noteworthy approaches using Bayesian techniques (some of which accept !PC and instead reject NC) include Earman,[14] Eells,[15] Gibson,[16] Hosiasson-Lindenbaum,[11] Howson and Urbach,[17] Mackie,[18] and Hintikka,[19] who claims that his approach is "more Bayesian than the so-called 'Bayesian solution' of the same paradox". Bayesian approaches that make use of Carnap's theory of inductive inference include Humburg,[20] Maher,[7] and Fitelson & Hawthorne.[9] Vranas[21] introduced the term "Standard Bayesian Solution" to avoid confusion.
Maher[7] accepts the paradoxical conclusion, and refines it:
A non-raven (of whatever color) confirms that all ravens are black because
- (i) the information that this object is not a raven removes the possibility that this object is a counterexample to the generalization, and
- (ii) it reduces the probability that unobserved objects are ravens, thereby reducing the probability that they are counterexamples to the generalization.
To reach (ii), he appeals to Carnap's theory of inductive probability, which is (from the Bayesian point of view) a way of assigning prior probabilities that naturally implements induction. According to Carnap's theory, the posterior probability, [math]\displaystyle{ P(Fa|E) }[/math], that an object, [math]\displaystyle{ a }[/math], will have a predicate, [math]\displaystyle{ F }[/math], after the evidence [math]\displaystyle{ E }[/math] has been observed, is:
[math]\displaystyle{ P(Fa|E) = \frac{n_F + \lambda P(Fa)}{n + \lambda} }[/math]
where [math]\displaystyle{ P(Fa) }[/math] is the initial probability that [math]\displaystyle{ a }[/math] has the predicate [math]\displaystyle{ F }[/math]; [math]\displaystyle{ n }[/math] is the number of objects that have been examined (according to the available evidence [math]\displaystyle{ E }[/math]); [math]\displaystyle{ n_F }[/math] is the number of examined objects that turned out to have the predicate [math]\displaystyle{ F }[/math], and [math]\displaystyle{ \lambda }[/math] is a constant that measures resistance to generalization.
If [math]\displaystyle{ \lambda }[/math] is close to zero, [math]\displaystyle{ P(Fa|E) }[/math] will be very close to one after a single observation of an object that turned out to have the predicate [math]\displaystyle{ F }[/math], while if [math]\displaystyle{ \lambda }[/math] is much larger than [math]\displaystyle{ n }[/math], [math]\displaystyle{ P(Fa|E) }[/math] will be very close to [math]\displaystyle{ P(Fa) }[/math] regardless of the fraction of observed objects that had the predicate [math]\displaystyle{ F }[/math].
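A minimal sketch of this update rule (the function name and the example numbers are illustrative, not taken from Maher or Carnap):

```python
def carnap_posterior(prior, n, n_F, lam):
    # Carnap-style estimate that the next object has predicate F, after
    # n objects have been examined and n_F of them turned out to have F.
    return (n_F + lam * prior) / (n + lam)

# Lambda close to zero: a single positive instance almost settles the matter.
print(carnap_posterior(prior=0.5, n=1, n_F=1, lam=0.01))    # ~0.995
# Lambda much larger than n: the estimate stays close to the prior.
print(carnap_posterior(prior=0.5, n=1, n_F=1, lam=1000.0))  # ~0.5005
```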
Using this Carnapian approach, Maher identifies a proposition we intuitively (and correctly) know is false, but easily confuse with the paradoxical conclusion. The proposition in question is that observing non-ravens tells us about the color of ravens. While this is intuitively false and is also false according to Carnap's theory of induction, observing non-ravens (according to that same theory) causes us to reduce our estimate of the total number of ravens, and thereby reduces the estimated number of possible counterexamples to the rule that all ravens are black.
Hence, from the Bayesian-Carnapian point of view, the observation of a non-raven does not tell us anything about the color of ravens, but it tells us about the prevalence of ravens, and supports "All ravens are black" by reducing our estimate of the number of ravens that might not be black.
Much of the discussion of the paradox in general and the Bayesian approach in particular has centred on the relevance of background knowledge. Surprisingly, Maher[7] shows that, for a large class of possible configurations of background knowledge, the observation of a non-black non-raven provides exactly the same amount of confirmation as the observation of a black raven. The configurations of background knowledge that he considers are those that are provided by a sample proposition, namely a proposition that is a conjunction of atomic propositions, each of which ascribes a single predicate to a single individual, with no two atomic propositions involving the same individual. Thus, a proposition of the form "A is a black raven and B is a white shoe" can be considered a sample proposition by taking "black raven" and "white shoe" to be predicates.
Maher's proof appears to contradict the result of the Bayesian argument, which was that the observation of a non-black non-raven provides much less evidence than the observation of a black raven. The reason is that the background knowledge that Good and others use can not be expressed in the form of a sample proposition – in particular, variants of the standard Bayesian approach often suppose (as Good did in the argument quoted above) that the total numbers of ravens, non-black objects and/or the total number of objects, are known quantities. Maher comments that, "The reason we think there are more non-black things than ravens is because that has been true of the things we have observed to date. Evidence of this kind can be represented by a sample proposition. But ... given any sample proposition as background evidence, a non-black non-raven confirms A just as strongly as a black raven does ... Thus my analysis suggests that this response to the paradox [i.e. the Standard Bayesian one] cannot be correct."
Fitelson & Hawthorne[9] examined the conditions under which the observation of a non-black non-raven provides less evidence than the observation of a black raven. They show that, if [math]\displaystyle{ a }[/math] is an object selected at random, [math]\displaystyle{ Ba }[/math] is the proposition that the object is black, and [math]\displaystyle{ Ra }[/math] is the proposition that the object is a raven, then the condition:
is sufficient for the observation of a non-black non-raven to provide less evidence than the observation of a black raven. Here, a line over a proposition indicates the logical negation of that proposition.
This condition does not tell us how large the difference in the evidence provided is, but a later calculation in the same paper shows that the weight of evidence provided by a black raven exceeds that provided by a non-black non-raven by about [math]\displaystyle{ -\log P(Ba|Ra\overline{H}) }[/math]. This is equal to the amount of additional information (in bits, if the base of the logarithm is 2) that is provided when a raven of unknown color is discovered to be black, given the hypothesis that not all ravens are black.
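For illustration (the values of [math]\displaystyle{ P(Ba|Ra\overline{H}) }[/math] below are assumed, not taken from the paper), the difference in weight of evidence is modest unless a raven would be quite unlikely to be black under the alternative hypothesis:

```python
import math

# Difference in weight of evidence (in bits) between a black raven and a
# non-black non-raven, per the approximation -log2 P(Ba | Ra, not-H).
for p in (0.5, 0.9, 0.99):     # assumed values of P(Ba | Ra, not-H)
    print(p, -math.log2(p))    # 1.0 bit, ~0.15 bits, ~0.014 bits
```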
Fitelson & Hawthorne[9] explain that:
The authors point out that their analysis is completely consistent with the supposition that a non-black non-raven provides an extremely small amount of evidence although they do not attempt to prove it; they merely calculate the difference between the amount of evidence that a black raven provides and the amount of evidence that a non-black non-raven provides.
Some approaches for resolving the paradox focus on the inductive step. They dispute whether observation of a particular instance (such as one black raven) is the kind of evidence that necessarily increases confidence in the general hypothesis (such as that ravens are always black).
Good[22] gives an example of background knowledge with respect to which the observation of a black raven decreases the probability that all ravens are black:
Good concludes that the white shoe is a "red herring": Sometimes even a black raven can constitute evidence against the hypothesis that all ravens are black, so the fact that the observation of a white shoe can support it is not surprising and not worth attention. Nicod's criterion is false, according to Good, and so the paradoxical conclusion does not follow.
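The kind of two-world background knowledge Good describes can be sketched numerically as follows; the specific counts are illustrative assumptions, not a quotation from Good.

```python
# Two possible worlds; the hypothesis H ("all ravens are black") holds only in world 1.
# World 1: 100 black ravens, no other ravens, 1,000,000 other birds.
# World 2: 1,000 black ravens, 1 white raven, 1,000,000 other birds.
w1_black_ravens, w1_total = 100, 100 + 1_000_000
w2_black_ravens, w2_total = 1_000, 1_000 + 1 + 1_000_000

prior_H = 0.5  # assumed equal prior probability for each world

# A bird is selected at random and turns out to be a black raven.
p_obs_given_H    = w1_black_ravens / w1_total   # ~1.0e-4
p_obs_given_notH = w2_black_ravens / w2_total   # ~1.0e-3

posterior_H = (p_obs_given_H * prior_H) / (
    p_obs_given_H * prior_H + p_obs_given_notH * (1 - prior_H))
print(posterior_H)  # ~0.09: the black raven has made H *less* probable
```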
Hempel rejected this as a solution to the paradox, insisting that the proposition 'c is a raven and is black' must be considered "by itself and without reference to any other information", and pointing out that it "... was emphasized in section 5.2(b) of my article in Mind ... that the very appearance of paradoxicality in cases like that of the white shoe results in part from a failure to observe this maxim."[23]
The question that then arises is whether the paradox is to be understood in the context of absolutely no background information (as Hempel suggests), or in the context of the background information that we actually possess regarding ravens and black objects, or with regard to all possible configurations of background information.
Good had shown that, for some configurations of background knowledge, Nicod's criterion is false (provided that we are willing to equate "inductively support" with "increase the probability of" – see below). The possibility remained that, with respect to our actual configuration of knowledge, which is very different from Good's example, Nicod's criterion might still be true and so we could still reach the paradoxical conclusion. Hempel, on the other hand, insists our background knowledge itself is the red herring, and that we should consider induction with respect to a condition of perfect ignorance.
In his proposed resolution, Maher implicitly made use of the fact that the proposition "All ravens are black" is highly probable when it is highly probable that there are no ravens. Good had used this fact before to respond to Hempel's insistence that Nicod's criterion was to be understood to hold in the absence of background information:[24]
This, according to Good, is as close as one can reasonably expect to get to a condition of perfect ignorance, and it appears that Nicod's condition is still false. Maher made Good's argument more precise by using Carnap's theory of induction to formalize the notion that if there is one raven, then it is likely that there are many.[25]
Maher's argument considers a universe of exactly two objects, each of which is very unlikely to be a raven (a one in a thousand chance) and reasonably unlikely to be black (a one in ten chance). Using Carnap's formula for induction, he finds that the probability that all ravens are black decreases from 0.9985 to 0.8995 when it is discovered that one of the two objects is a black raven.
Maher concludes that not only is the paradoxical conclusion true, but that Nicod's criterion is false in the absence of background knowledge (except for the knowledge that the number of objects in the universe is two and that ravens are less likely than black things).
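The direction of this effect can be sketched by applying a Carnap-style update (as in the formula above) separately to the predicates "raven" and "black", treating them as independent and picking an arbitrary [math]\displaystyle{ \lambda }[/math]. This is a rough illustration of the mechanism only; it is not Maher's calculation and does not reproduce his figures.

```python
def carnap_posterior(prior, n, n_F, lam):
    # Carnap-style estimate (as above) that the next object has predicate F.
    return (n_F + lam * prior) / (n + lam)

p_raven, p_black, lam = 0.001, 0.1, 1.0   # assumed priors and an arbitrary lambda

# Prior chance that a given object is a counterexample (a non-black raven),
# treating "raven" and "black" as independent predicates.
p_counterexample_before = p_raven * (1 - p_black)   # 0.0009

# One of the two objects is examined and found to be a black raven;
# update the estimates for the remaining object.
p_raven_after    = carnap_posterior(p_raven, n=1, n_F=1, lam=lam)      # ravens now look more common
p_nonblack_after = carnap_posterior(1 - p_black, n=1, n_F=0, lam=lam)  # non-black things look rarer
p_counterexample_after = p_raven_after * p_nonblack_after

print(p_counterexample_before, p_counterexample_after)
# The estimated chance that the remaining object is a non-black raven has risen,
# so "all ravens are black" has become less probable after seeing a black raven.
```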
Quine[26] argued that the solution to the paradox lies in the recognition that certain predicates, which he called natural kinds, have a distinguished status with respect to induction. This can be illustrated with Nelson Goodman's example of the predicate grue. An object is grue if it is blue before (say) 2024 and green afterwards. Clearly, we expect objects that were blue before 2024 to remain blue afterwards, but we do not expect the objects that were found to be grue before 2024 to be blue after 2024, since after 2024 they would be green. Quine's explanation is that "blue" is a natural kind, a privileged predicate we can use for induction, while "grue" is not a natural kind, and using induction with it leads to error.
This suggests a resolution to the paradox – Nicod's criterion is true for natural kinds, such as "blue" and "black", but is false for artificially contrived predicates, such as "grue" or "non-raven". The paradox arises, according to this resolution, because we implicitly interpret Nicod's criterion as applying to all predicates when in fact it only applies to natural kinds.
Another approach, which favours specific predicates over others, was taken by Hintikka.[19] Hintikka was motivated to find a Bayesian approach to the paradox that did not make use of knowledge about the relative frequencies of ravens and black things. Arguments concerning relative frequencies, he contends, cannot always account for the perceived irrelevance of evidence consisting of observations of objects of type A for the purposes of learning about objects of type not-A.
His argument can be illustrated by rephrasing the paradox using predicates other than "raven" and "black". For example, "All men are tall" is equivalent to "All short people are women", and so observing that a randomly selected person is a short woman should provide evidence that all men are tall. Despite the fact that we lack background knowledge to indicate that there are dramatically fewer men than short people, we still find ourselves inclined to reject the conclusion. Hintikka's example is: "... a generalization like 'no material bodies are infinitely divisible' seems to be completely unaffected by questions concerning immaterial entities, independently of what one thinks of the relative frequencies of material and immaterial entities in one's universe of discourse."[19]
His solution is to introduce an order into the set of predicates. When the logical system is equipped with this order, it is possible to restrict the scope of a generalization such as "All ravens are black" so that it applies to ravens only and not to non-black things, since the order privileges ravens over non-black things. As he puts it:
Some approaches for the resolution of the paradox reject Hempel's equivalence condition. That is, they may not consider evidence supporting the statement all non-black objects are non-ravens to necessarily support logically-equivalent statements such as all ravens are black.
Scheffler and Goodman[27] took an approach to the paradox that incorporates Karl Popper's view that scientific hypotheses are never really confirmed, only falsified.
The approach begins by noting that the observation of a black raven does not prove that "All ravens are black" but it falsifies the contrary hypothesis, "No ravens are black". A non-black non-raven, on the other hand, is consistent with both "All ravens are black" and with "No ravens are black". As the authors put it:
Selective confirmation violates the equivalence condition: a black raven selectively confirms "All ravens are black" (it falsifies the contrary, "No ravens are black"), but it does not selectively confirm "All non-black things are non-ravens", since it does not falsify that statement's contrary, "All non-black things are ravens".
Scheffler and Goodman's concept of selective confirmation is an example of an interpretation of "provides evidence in favor of..." which does not coincide with "increase the probability of..." This must be a general feature of all resolutions that reject the equivalence condition, since logically equivalent propositions must always have the same probability.
It is impossible for the observation of a black raven to increase the probability of the proposition "All ravens are black" without causing exactly the same change to the probability that "All non-black things are non-ravens". If an observation inductively supports the former but not the latter, then "inductively support" must refer to something other than changes in the probabilities of propositions. A possible loophole is to interpret "All" as "Nearly all" – "Nearly all ravens are black" is not equivalent to "Nearly all non-black things are non-ravens", and these propositions can have very different probabilities.[28]
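A small worked example (with toy counts chosen purely for illustration) shows how the two "nearly all" statements can come apart:

```python
# Toy population (illustrative counts): 10 ravens, of which only 5 are black,
# plus 1,000,000 non-black non-ravens (and any number of black non-ravens).
ravens, black_ravens = 10, 5
nonblack_nonravens = 1_000_000
nonblack_things = nonblack_nonravens + (ravens - black_ravens)

frac_ravens_black       = black_ravens / ravens                 # 0.5
frac_nonblack_nonravens = nonblack_nonravens / nonblack_things  # ~0.999995

print(frac_ravens_black, frac_nonblack_nonravens)
# "Nearly all ravens are black" is false in this population, while
# "nearly all non-black things are non-ravens" is true.
```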
This raises the broader question of the relation of probability theory to inductive reasoning. Karl Popper argued that probability theory alone cannot account for induction. His argument involves splitting a hypothesis, [math]\displaystyle{ H }[/math], into a part that is deductively entailed by the evidence, [math]\displaystyle{ E }[/math], and another part. This can be done in two ways.
First, consider the splitting:[29]
[math]\displaystyle{ H = A\ and\ B, \qquad E = B\ and\ C }[/math]
where [math]\displaystyle{ A }[/math], [math]\displaystyle{ B }[/math] and [math]\displaystyle{ C }[/math] are probabilistically independent: [math]\displaystyle{ P(A\ and\ B)=P(A)P(B) }[/math] and so on. The condition that is necessary for such a splitting of H and E to be possible is [math]\displaystyle{ P(H|E)\gt P(H) }[/math], that is, that [math]\displaystyle{ H }[/math] is probabilistically supported by [math]\displaystyle{ E }[/math].
Popper's observation is that the part, [math]\displaystyle{ B }[/math], of [math]\displaystyle{ H }[/math] that receives support from [math]\displaystyle{ E }[/math] actually follows deductively from [math]\displaystyle{ E }[/math], while the part of [math]\displaystyle{ H }[/math] that does not follow deductively from [math]\displaystyle{ E }[/math] receives no support at all from [math]\displaystyle{ E }[/math] – that is, [math]\displaystyle{ P(A|E)=P(A) }[/math].
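A quick numerical check of this observation, using assumed probabilities for the independent propositions [math]\displaystyle{ A }[/math], [math]\displaystyle{ B }[/math] and [math]\displaystyle{ C }[/math]:

```python
# Assumed probabilities for the independent propositions A, B, C.
pA, pB, pC = 0.3, 0.6, 0.8

# H = A and B,  E = B and C  (the splitting described above).
pH       = pA * pB        # 0.18
pE       = pB * pC        # 0.48
pH_and_E = pA * pB * pC   # 0.144

pH_given_E = pH_and_E / pE        # = pA = 0.3 > P(H): E supports H as a whole
pB_given_E = 1.0                  # E deductively entails B
pA_given_E = (pA * pB * pC) / pE  # = pA: the part A receives no support from E

print(pH, pH_given_E, pA, pA_given_E)
```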
Second, the splitting:[30]
[math]\displaystyle{ H = (H\ or\ E)\ and\ (H\ or\ \overline{E}) }[/math]
separates [math]\displaystyle{ H }[/math] into [math]\displaystyle{ (H\ or\ E) }[/math], which as Popper says, "is the logically strongest part of [math]\displaystyle{ H }[/math] (or of the content of [math]\displaystyle{ H }[/math]) that follows [deductively] from [math]\displaystyle{ E }[/math]", and [math]\displaystyle{ (H\ or\ \overline{E}) }[/math], which, he says, "contains all of [math]\displaystyle{ H }[/math] that goes beyond [math]\displaystyle{ E }[/math]". He continues:
The orthodox Neyman–Pearson theory of hypothesis testing considers how to decide whether to accept or reject a hypothesis, rather than what probability to assign to the hypothesis. From this point of view, the hypothesis that "All ravens are black" is not accepted gradually, as its probability increases towards one when more and more observations are made, but is accepted in a single action as the result of evaluating the data that has already been collected. As Neyman and Pearson put it:
According to this approach, it is not necessary to assign any value to the probability of a hypothesis, although one must certainly take into account the probability of the data given the hypothesis, or given a competing hypothesis, when deciding whether to accept or to reject. The acceptance or rejection of a hypothesis carries with it the risk of error.
This contrasts with the Bayesian approach, which requires that the hypothesis be assigned a prior probability, which is revised in the light of the observed data to obtain the final probability of the hypothesis. Within the Bayesian framework there is no risk of error since hypotheses are not accepted or rejected; instead they are assigned probabilities.
An analysis of the paradox from the orthodox point of view has been performed, and leads to, among other insights, a rejection of the equivalence condition:
The following propositions all imply one another: "Every object is either black or not a raven", "Every raven is black", and "Every non-black object is a non-raven." They are therefore, by definition, logically equivalent. However, the three propositions have different domains: the first proposition says something about "every object", while the second says something about "every raven".
The first proposition is the only one whose domain of quantification is unrestricted ("all objects"), so this is the only one that can be expressed in first-order logic. It is logically equivalent to:
[math]\displaystyle{ \forall x\, (Rx \rightarrow Bx) }[/math]
and also to
[math]\displaystyle{ \forall x\, (\overline{Bx} \rightarrow \overline{Rx}) }[/math]
where [math]\displaystyle{ \rightarrow }[/math] indicates the material conditional, according to which "If [math]\displaystyle{ A }[/math] then [math]\displaystyle{ B }[/math]" can be understood to mean "[math]\displaystyle{ B }[/math] or [math]\displaystyle{ \overline{A} }[/math]".
It has been argued by several authors that material implication does not fully capture the meaning of "If [math]\displaystyle{ A }[/math] then [math]\displaystyle{ B }[/math]" (see the paradoxes of material implication). "For every object, [math]\displaystyle{ x }[/math], [math]\displaystyle{ x }[/math] is either black or not a raven" is true when there are no ravens. It is because of this that "All ravens are black" is regarded as true when there are no ravens. Furthermore, the arguments that Good and Maher used to criticize Nicod's criterion (see § Good's baby, above) relied on this fact – that "All ravens are black" is highly probable when it is highly probable that there are no ravens.
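This vacuous-truth convention mirrors how universal quantification over an empty domain behaves in programming languages; for example, Python's built-in all returns True for an empty collection:

```python
ravens = []  # a world with no ravens
print(all(raven == "black" for raven in ravens))  # True: vacuously satisfied
print(all(raven == "white" for raven in ravens))  # also True, for the same reason
```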
To say that all ravens are black in the absence of any ravens is an empty statement. It refers to nothing. "All ravens are white" is equally relevant and true, if this statement is considered to have any truth or relevance.
Some approaches to the paradox have sought to find other ways of interpreting "If [math]\displaystyle{ A }[/math] then [math]\displaystyle{ B }[/math]" and "All [math]\displaystyle{ A }[/math] are [math]\displaystyle{ B }[/math]," which would eliminate the perceived equivalence between "All ravens are black" and "All non-black things are non-ravens."
One such approach involves introducing a many-valued logic according to which "If [math]\displaystyle{ A }[/math] then [math]\displaystyle{ B }[/math]" has the truth value [math]\displaystyle{ I }[/math], meaning "Indeterminate" or "Inappropriate" when [math]\displaystyle{ A }[/math] is false.[33] In such a system, contraposition is not automatically allowed: "If [math]\displaystyle{ A }[/math] then [math]\displaystyle{ B }[/math]" is not equivalent to "If [math]\displaystyle{ \overline{B} }[/math] then [math]\displaystyle{ \overline{A} }[/math]". Consequently, "All ravens are black" is not equivalent to "All non-black things are non-ravens".
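A minimal sketch of such a three-valued conditional (the encoding below is an assumption made for illustration; only the rule that the conditional takes the value [math]\displaystyle{ I }[/math] when its antecedent is false is taken from the description above):

```python
I = "I"  # the third truth value: "Indeterminate" or "Inappropriate"

def cond(a, b):
    # "If a then b": takes the value I whenever the antecedent is false.
    if a is False:
        return I
    return b

# An object that is neither a raven nor black:
is_raven, is_black = False, False
print(cond(is_raven, is_black))          # I    ("If it is a raven, then it is black")
print(cond(not is_black, not is_raven))  # True ("If it is not black, then it is not a raven")
# The two conditionals receive different values, so contraposition fails here.
```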
In this system, when contraposition occurs, the modality of the conditional involved changes from the indicative ("If that piece of butter has been heated to 32 °C then it has melted") to the counterfactual ("If that piece of butter had been heated to 32 °C then it would have melted"). According to this argument, this removes the alleged equivalence that is necessary to conclude that yellow cows can inform us about ravens:
Several commentators have observed that the propositions "All ravens are black" and "All non-black things are non-ravens" suggest different procedures for testing the hypotheses. E.g. Good writes:[8]
More recently, it has been suggested that "All ravens are black" and "All non-black things are non-ravens" can have different effects when accepted.[34] The argument considers situations in which the total numbers or prevalences of ravens and black objects are unknown, but estimated. When the hypothesis "All ravens are black" is accepted, according to the argument, the estimated number of black objects increases, while the estimated number of ravens does not change.
It can be illustrated by considering the situation of two people who have identical information regarding ravens and black objects, and who have identical estimates of the numbers of ravens and black objects. For concreteness, suppose that there are 100 objects overall, and, according to the information available to the people involved, each object is just as likely to be a non-raven as it is to be a raven, and just as likely to be black as it is to be non-black:
[math]\displaystyle{ P(Ra) = P(Ba) = 0.5, \qquad P(Ra\ and\ Ba) = P(Ra)P(Ba) = 0.25 }[/math]
and the propositions [math]\displaystyle{ Ra,\ Rb }[/math] are independent for different objects [math]\displaystyle{ a }[/math], [math]\displaystyle{ b }[/math] and so on. Then the estimated number of ravens is 50; the estimated number of black things is 50; the estimated number of black ravens is 25, and the estimated number of non-black ravens (counterexamples to the hypotheses) is 25.
One of the people performs a statistical test (e.g. a Neyman-Pearson test or the comparison of the accumulated weight of evidence to a threshold) of the hypothesis that "All ravens are black", while the other tests the hypothesis that "All non-black objects are non-ravens". For simplicity, suppose that the evidence used for the test has nothing to do with the collection of 100 objects dealt with here. If the first person accepts the hypothesis that "All ravens are black" then, according to the argument, about 50 objects whose colors were previously in doubt (the ravens) are now thought to be black, while nothing different is thought about the remaining objects (the non-ravens). Consequently, he should estimate the number of black ravens at 50, the number of black non-ravens at 25 and the number of non-black non-ravens at 25. By specifying these changes, this argument explicitly restricts the domain of "All ravens are black" to ravens.
On the other hand, if the second person accepts the hypothesis that "All non-black objects are non-ravens", then the approximately 50 non-black objects about which it was uncertain whether each was a raven, will be thought to be non-ravens. At the same time, nothing different will be thought about the approximately 50 remaining objects (the black objects). Consequently, he should estimate the number of black ravens at 25, the number of black non-ravens at 25 and the number of non-black non-ravens at 50. According to this argument, since the two people disagree about their estimates after they have accepted the different hypotheses, accepting "All ravens are black" is not equivalent to accepting "All non-black things are non-ravens"; accepting the former means estimating more things to be black, while accepting the latter involves estimating more things to be non-ravens. Correspondingly, the argument goes, the former requires as evidence ravens that turn out to be black and the latter requires non-black things that turn out to be non-ravens.[34]
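The bookkeeping in this argument can be sketched as follows; reading "accepting" a hypothesis as moving all of the expected counterexamples into the nearest consistent category is an illustrative simplification of the argument above.

```python
# Expected counts (out of 100 objects) before either hypothesis is accepted:
before = {"black raven": 25, "non-black raven": 25,
          "black non-raven": 25, "non-black non-raven": 25}

# Person 1 accepts "all ravens are black": objects previously expected to be
# non-black ravens are now expected to be black ravens; nothing else changes.
person1 = {**before,
           "black raven": before["black raven"] + before["non-black raven"],
           "non-black raven": 0}

# Person 2 accepts "all non-black things are non-ravens": objects previously
# expected to be non-black ravens are now expected to be non-black non-ravens.
person2 = {**before,
           "non-black non-raven": before["non-black non-raven"] + before["non-black raven"],
           "non-black raven": 0}

print(person1)  # 50 black ravens, 25 black non-ravens, 25 non-black non-ravens
print(person2)  # 25 black ravens, 25 black non-ravens, 50 non-black non-ravens
# Person 1 now expects 75 black objects and 50 non-ravens;
# person 2 expects 50 black objects and 75 non-ravens.
```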
A number of authors have argued that propositions of the form "All [math]\displaystyle{ A }[/math] are [math]\displaystyle{ B }[/math]" presuppose that there are objects that are [math]\displaystyle{ A }[/math].[35] This analysis has been applied to the raven paradox:[36]
A modified logic can take account of existential presuppositions using the presuppositional operator, '*'. For example,
[math]\displaystyle{ \forall x\,(Rx^* \rightarrow Bx) }[/math]
can denote "All ravens are black" while indicating that it is ravens and not non-black objects which are presupposed to exist in this example.
Original source: https://en.wikipedia.org/wiki/Raven_paradox.