Inductive reasoning

Induction or inductive reasoning, sometimes called inductive logic, is the process of reasoning in which the premises of an argument are believed to support the conclusion but do not ensure it. It is used to ascribe properties or relations to types based on tokens (i.e., on one or a small number of observations or experiences); or to formulate laws based on limited observations of recurring phenomenal patterns. Induction is employed, for example, in using specific propositions such as:

This ice is cold.
A billiard ball moves when struck with a cue.

...to infer general propositions such as:

All ice is cold.
All billiard balls struck with a cue move.

Inductive reasoning has been challenged repeatedly. Historically, David Hume denied its logical admissibility. During the 20th century, Karl Popper and David Miller, most notably, disputed the existence, necessity and validity of any form of inductive reasoning, including probabilistic (Bayesian) forms.[1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34]

Strong and weak induction

Strong induction

All observed crows are black.
therefore,
All crows are black.

This exemplifies the nature of induction: inducing the universal from the particular. However, the conclusion is not certain.

Unless we can systematically rule out the possibility of crows of another color, the conclusion may actually be false.

For example, one could examine the bird's genome and learn whether it's capable of producing a differently colored bird without mutation or a long set of breeding changes. In doing so, we could discover that albinism is possible, resulting in light-colored crows.

Even if you change the definition of "crow" to require blackness, the original question of the color possibilities for a bird of that species would stand, only semantically hidden.

A strong induction is thus an argument in which the truth of the premises would make the truth of the conclusion probable, but not definite.

Weak induction

I always hang pictures on nails.
therefore
All pictures hang from nails.

Assuming the premise to be true, this example moves from the particular claim "I always hang pictures on nails" to the generalization "All pictures hang from nails". However, the link between the premise and the inductive conclusion is weak. There is no reason to believe that, just because one person hangs pictures on nails, there are no other ways for pictures to be hung, or that other people cannot do other things with pictures. Indeed, not all pictures are hung from nails; moreover, not all pictures are hung. The conclusion cannot be strongly inductively drawn from the premise, and using other knowledge we can easily see that it is false. Conclusions drawn in this manner are usually overgeneralizations.

Many speeding tickets are given to teenagers.
therefore
All teenagers drive fast.

In this example, the premise rests on an observation, but not one that leads to the conclusion. Not every teenager observed has been given a speeding ticket. In other words, unlike "The sun rises every morning", there are already plenty of examples of teenagers not being given speeding tickets. The conclusion drawn may therefore just as easily be false as true (perhaps more easily false than true in this case), and the inductive logic does not give us a strong conclusion. In both of these examples of weak induction, the logical connection between premise and conclusion (the "therefore") is faulty and does not yield a strongly reasoned inductive statement.

Validity

Formal logic, as most people learn it, is deductive rather than inductive. Some philosophers claim to have created systems of inductive logic, but it is controversial whether a logic of induction is even possible. In contrast to deductive reasoning, conclusions arrived at by inductive reasoning do not necessarily have the same degree of certainty as the initial premises. For example, the conclusion that all swans are white is false, but it may have been thought true in Europe until the settlement of Australia, when black swans were discovered.

Inductive arguments are never binding, but they may be cogent. Inductive reasoning is deductively invalid. (An argument in formal logic is valid if and only if it is not possible for the premises of the argument to be true while the conclusion is false.) In induction there are always many conclusions that can reasonably be related to certain premises; inductions are open, whereas deductions are closed.

It is, however, possible to use inductive reasoning to work out what would have to hold for a known conclusion to be true. If the known conclusion can be true only when some unstated further condition is true, then by substituting that condition for the conclusion one can identify the evidence needed for the induction to succeed. For example, suppose a window opens one way but not the other, and suppose the only way for that to happen is that the hinges are faulty; one can then infer that fixing the window requires treating the hinges, for instance by oiling them. The argument holds only while the unstated condition holds; if that condition is false, which can only be shown by deductive reasoning, the whole argument by induction collapses. On this view, ultimately, pure inductive reasoning does not exist.

The classic philosophical treatment of the problem of induction, meaning the search for a justification for inductive reasoning, was by the Scottish philosopher David Hume. Hume highlighted the fact that our everyday reasoning depends on patterns of repeated experience rather than deductively valid arguments. For example, we believe that bread will nourish us because it has done so in the past, but this is not a guarantee that it will always do so. As Hume said, someone who insisted on sound deductive justifications for everything would starve to death.

Instead of approaching everything with unproductive skepticism, Hume advocated a practical skepticism based on common sense, where the inevitability of induction is accepted.

Induction is sometimes framed as reasoning about the future from the past, but in its broadest sense it involves reaching conclusions about unobserved things on the basis of what has been observed. Inferences about the past from present evidence, as in archaeology, count as induction. Induction can also range across space rather than time, for instance in physical cosmology, where conclusions about the whole universe are drawn from the limited portion we are able to observe (see cosmic variance), or in economics, where national economic policy is derived from local economic performance.

Twentieth-century philosophy has approached induction very differently. Rather than a choice about what predictions to make about the future, induction can be seen as a choice of what concepts to fit to observations or of how to graph or represent a set of observed data. Nelson Goodman posed a "new riddle of induction" by inventing the property "grue" to which induction does not apply.

Types of inductive reasoning


Generalization

A generalization (more accurately, an inductive generalization) proceeds from a premise about a sample to a conclusion about the population.

The proportion Q of the sample has attribute A.
therefore
The proportion Q of the population has attribute A.

How much support the premises provide for the conclusion depends on (a) the number of individuals in the sample group compared to the number in the population, and (b) the randomness of the sample. The hasty generalization and the biased sample are fallacies related to generalization.
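
For example (with hypothetical figures): if 120 of 200 randomly sampled voters, or 60%, favour a proposal, the inductive generalization is that roughly 60% of the whole electorate favours it. With only 200 voters sampled out of millions, the support is modest; and if the 200 were all polled at a rally in favour of the proposal, the sample is biased and the support is weaker still.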

Statistical syllogism

A statistical syllogism proceeds from a generalization to a conclusion about an individual.

A proportion Q of population P has attribute A.
An individual I is a member of P.
therefore
There is a probability which corresponds to Q that I has A.

The proportion in the first premise would be something like "3/5ths of", "all", "few", etc. Two dicto simpliciter fallacies can occur in statistical syllogisms: "accident" and "converse accident".
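
For instance (an illustrative figure): 90% of the students at this school are right-handed; Sasha is a student at this school; therefore, absent any further information about Sasha, there is a 90% probability that Sasha is right-handed.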

Simple induction

Simple induction proceeds from a premise about a sample group to a conclusion about another individual.

Proportion Q of the known instances of population P has attribute A.
Individual I is another member of P.
therefore
There is a probability corresponding to Q that I has A.

This is a combination of a generalization and a statistical syllogism, where the conclusion of the generalization is also the first premise of the statistical syllogism.
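
For example: nearly all of the swans observed on this lake have been white, and the bird now landing is another swan from the same lake, so it is very probably white. The generalization ("nearly all swans on this lake are white") is formed first and then applied, as a statistical syllogism, to the new individual.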

Argument from analogy

An (inductive) analogy proceeds from known similarities between two things to a conclusion about an additional attribute common to both things.

P is similar to Q.
P has attribute A.
therefore
Q has attribute A.

An analogy relies on the inference that the properties known to be shared (the similarities) imply that A is also a shared property. The support which the premises provide for the conclusion is dependent upon the relevance and number of the similarities between P and Q. The fallacy related to this process is false analogy.
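
A classic illustration: Mars resembles Earth in having an atmosphere, seasons and polar ice caps; Earth supports life; therefore Mars may support life. The inference is only as strong as the relevance of the shared features to the attribute being projected.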

Causal inference

A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.
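
For example, observing that households drawing their water from one particular well suffer far more cholera than their neighbours suggests a causal link between the well and the disease; but the inference is secure only once alternative explanations, such as shared diet or other sources of contamination, have been ruled out.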

Prediction

A prediction draws a conclusion about a future individual from a past sample.

Proportion Q of observed members of group G have had attribute A.
therefore
There is a probability corresponding to Q that other members of group G will have attribute A when next observed.
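
For example: every emerald observed so far has been green; therefore the next emerald observed will, with correspondingly high probability, be green. (Goodman's "grue" puzzle, mentioned above, shows how much such predictions depend on the choice of predicate.)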

Argument from authority

An argument from authority draws a conclusion about the truth of a statement based on the proportion of true propositions provided by an authoritative source. It has the same form as a prediction.

Proportion Q of the claims of authority A have been true.
therefore
There is a probability corresponding to Q that this claim of A is true.

For instance:

All observed claims from websites about logic are true.
Information X came from a website about logic.
therefore
Information X is likely to be true.

Bayesian inference

Of the candidate systems for an inductive logic, the most influential is Bayesianism. This uses probability theory as the framework for induction. Given new evidence, Bayes' theorem is used to evaluate how much the strength of a belief in a hypothesis should change.
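
As a minimal illustration of a single such update, here is a short sketch (in Python, with hypothetical numbers and a made-up helper name) of Bayes' theorem, P(H | E) = P(E | H) P(H) / P(E), where P(E) = P(E | H) P(H) + P(E | not-H) P(not-H):

  # Bayes' theorem for one hypothesis H and one piece of evidence E.
  def bayes_update(prior, likelihood_if_true, likelihood_if_false):
      """Return P(H | E) given P(H), P(E | H) and P(E | not-H)."""
      evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
      return likelihood_if_true * prior / evidence

  # Hypothetical numbers: belief in H starts at 1%; the evidence is 90% likely
  # if H is true and only 5% likely if H is false.
  posterior = bayes_update(0.01, 0.90, 0.05)
  print(round(posterior, 3))  # about 0.154: the evidence strengthens the belief without making it certain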

There is debate around what informs the original degree of belief. Objective Bayesians seek an objective value for the degree of probability of a hypothesis being correct and so do not avoid the philosophical criticisms of objectivism. Subjective Bayesians hold that prior probabilities represent subjective degrees of belief, but that the repeated application of Bayes' theorem leads to a high degree of agreement on the posterior probability. They therefore fail to provide an objective standard for choosing between conflicting hypotheses. The theorem can be used to produce a rational justification for a belief in some hypothesis, but at the expense of rejecting objectivism. Such a scheme cannot be used, for instance, to decide objectively between conflicting scientific paradigms.

Edwin Jaynes, an outspoken physicist and Bayesian, argued that "subjective" elements are present in all inference, for instance in choosing axioms for deductive inference; in choosing initial degrees of belief or prior probabilities; or in choosing likelihoods. He thus sought principles for assigning probabilities from qualitative knowledge. Maximum entropy – a generalization of the principle of indifference – and transformation groups are the two tools he produced. Both attempt to alleviate the subjectivity of probability assignment in specific situations by converting knowledge of features such as a situation's symmetry into unambiguous choices for probability distributions.
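
As a simple illustration of the maximum-entropy idea: if all that is known about a six-sided die is that it has six faces, the maximum-entropy assignment gives each face probability 1/6, reproducing the principle of indifference; if, in addition, the long-run average of the throws is known to differ from 3.5, maximum entropy instead selects the least committal distribution consistent with that constraint.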

Cox's theorem, which derives probability from a set of logical constraints on a system of inductive reasoning, prompts Bayesians to call their system an inductive logic.

Other Information

Aristotle appears to have been the first to establish induction as a category of reasoning, and this classification has only recently been challenged. Reasoning is used in determining the validity of inductive concepts, but it is not necessarily involved in generalization.

Bibliography

  • John H. Holland, Keith J. Holyoak, Richard E. Nisbett, and Paul R. Thagard (1989): Induction: Processes of Inference, Learning, and Discovery. Cambridge, MA: The MIT Press, ISBN-10: 0262580969

References

  1. Karl R. Popper, David W. Miller: A proof of the impossibility of inductive probability. Nature 302 (1983), 687–688;
  2. Karl Popper: Logic of Scientific Discovery, new appendix *XIX;
  3. R. C. Jeffrey: Letter concerning Popper and Miller;
  4. I. Levi: The impossibility of inductive probability;
  5. I. J. Good: The impossibility of inductive probability;
  6. Karl R. Popper, David W. Miller: The impossibility of inductive probability. Nature 310 (1984), 433–434;
  7. G. Blandino: Critical Remarks on an Argumentation by K. Popper and D. Miller. Discussion about Induction. Epistemologia 7 (1984), 183–206;
  8. I. Levi: Probabilistic Pettifoggery. Erkenntnis 25 (1986), 133–140;
  9. J. Wise, P. T. Landsberg: Has inductive probability been proved impossible?
  10. Karl R. Popper, David W. Miller: Has inductive probability been proved impossible? Nature 315 (1985), 461;
  11. J. Wise, P. T. Landsberg: On the possibility of inductive probability. Nature 316 (1985), 22;
  12. M. L. G. Redhead: On the Impossibility of Inductive Probability. The British Journal for the Philosophy of Science 36 (1985), 185–191;
  13. I. J. Good: Probabilistic Induction Is Inevitable. Journal of Statistical Computation and Simulation 20 (1985), 323–324, C216.
  14. H. Gaifman: On Inductive Support and Some Recent Tricks. Erkenntnis 22 (1985), 5–21.
  15. D. Gillies: In Defense of the Popper-Miller Argument. Philosophy of Science 53 (1986), 110–113;
  16. J. M. Dunn, G. Hellman: Dualling: A Critique of an Argument of Popper and Miller. The British Journal for the Philosophy of Science 37 (1986), 220–223.
  17. Karl R. Popper, David W. Miller: Why probabilistic support is not inductive. Philosophical Transactions of the Royal Society 321A (1987), 569–591;
  18. A. Rivadulla: On Popper-Miller's Proof of the Impossibility of Inductive Probability. Erkenntnis 27 (1987), 353–357;
  19. I. J. Good: A Restatement, in Response to Gillies, of Redhead's Argument in Support of Induction. Philosophy of Science 54 (1987), 470–472;
  20. E. Eells: On the alleged impossibility of inductive probability. British Journal for the Philosophy of Science 39 (1988), 111–116;
  21. N. C. A. da Costa, S. French: Pragmatic Probability, Logical Omniscience and the Popper-Miller Argument. Fundamenta Scientiae 9 (1988), 43–53.
  22. C. S. Chihara, D. A. Gillies: An Interchange on the Popper-Miller Argument. Philosophical Studies 54 (1988), 1–8.
  23. C. Howson: On a Recent Objection to Popper and Miller’s 'Disproof' of Probabilistic Induction. Philosophy of Science 56 (1989), 675-680.
  24. D. Zwirn, H. Zwirn: L'argument de Popper et Miller contre la justification probabiliste de l'induction, L'âge de la science 2 (Paris: Éditions Odile Jacob, 1989), 59–81;
  25. C. Howson: Some Further Reflections on the Popper-Miller Disproof of Probabilistic Induction. Australasian Journal of Philosophy 68 (1990), 221–228.
  26. I. J. Good: Discussion: A Suspicious Feature of the Popper/Miller Argument. Philosophy of Science 57 (1990): 535–536.
  27. David W. Miller: Reply to Zwirn & Zwirn. Cahiers du CREA 14 (1990), 149–153;
  28. A. Mura: When Probabilistic Support is Inductive. Philosophy of Science 57 (1990), 278–289.
  29. A. Boyer: Une logique inductive probabiliste est-elle seulement possible? Cahiers du CREA 14 (1990): 123-145
  30. G. Dorn: Popper's Laws of the Excess of the Probability of the Conditional over the Conditional Probability. Conceptus 26 (1992/1993): 3-61.
  31. Andrew Elby: Contentious contents: For inductive probability. British Journal for the Philosophy of Science 45 (1994), 193–200.
  32. G. Dorn: Inductive Countersupport. Journal for General Philosophy of Science 26 (1995), 187–189.
  33. J. Cussens: Deduction, Induction and Probabilistic Support. Synthese 108 (1996): 1–10.
  34. E. Eells: Popper and Miller, and Induction and Deduction. Proceedings of the Seventh Asian Logic Conference (1999)



Licensed under CC BY-SA 3.0 | Source: https://www.wikidoc.org/index.php/Inductive_reasoning