An Introduction to Probability and Inductive Logic

Table of contents (partial): The gambler's fallacy; Elementary probability; Conditional probability; Basic laws of probability; Bayes' rule; Part III. How to Combine Probabilities and Utilities: Expected value; Maximizing expected value; Decision under uncertainty; Part IV. Kinds of Probability: What do you mean?; Theories about probability; Part V. Probability as a Measure of Belief: Personal probabilities; Coherence; Learning from experience; Part VI. Probability as Frequency: Stability; Normal approximations; Significance; Confidence and inductive behaviour; Part VII. Probability Applied to Philosophy: The philosophical problem of induction; Learning from experience as an evasion of the problem; Inductive behaviour as an evasion of the problem.

The probabilistic approach to inductive logic employs conditional probability functions to represent measures of the degree to which evidence statements support hypotheses.

Presumably, hypotheses should be empirically evaluated based on what they say or imply about the likelihood that evidence claims will be true.

Thus, this approach to the logic of evidential support is often called a Bayesian Inductive Logic or a Bayesian Confirmation Theory. This article will first provide a detailed explication of a Bayesian approach to inductive logic.

It will then examine the extent to which this logic may pass muster as an adequate logic of evidential support for hypotheses. In particular, we will see how such a logic may be shown to satisfy the Criterion of Adequacy (CoA): as evidence accumulates, the evidential support for false hypotheses should approach 0, while the support for the true hypothesis approaches 1.

Sections 1 through 3 present all of the main ideas underlying the Bayesian probabilistic logic of evidential support. These three sections should suffice to provide an adequate understanding of the subject.

Section 5 extends this account to cases where the implications of hypotheses about evidence claims (called likelihoods) are vague or imprecise. After reading Sections 1 through 3, the reader may safely skip directly to Section 5, bypassing the rather technical account in Section 4 of how the CoA is satisfied. Section 4 is for the more advanced reader who wants an understanding of how this logic may bring about convergence to the true hypothesis as evidence accumulates.

This result shows that the Criterion of Adequacy is indeed satisfied: as evidence accumulates, false hypotheses will very probably come to have evidential support values (as measured by their posterior probabilities) that approach 0; and as this happens, a true hypothesis will very probably acquire evidential support values (as measured by its posterior probability) that approach 1.
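To make this convergence claim concrete, here is a minimal simulation sketch. The two-hypothesis coin setup, the bias values, and the starting prior are illustrative assumptions of ours, not the article's; the point is only that repeated Bayesian updating drives the posterior probability of the data-generating hypothesis toward 1.

```python
import random

# Toy convergence check: data are generated by "h_true"; Bayesian updating
# should drive its posterior toward 1 and its rival's toward 0.
random.seed(0)

LIKELIHOOD_HEADS = {"h_true": 0.7, "h_false": 0.5}  # what each hypothesis says about the data
posterior = {"h_true": 0.1, "h_false": 0.9}         # the true hypothesis starts out implausible

for _ in range(1000):
    heads = random.random() < LIKELIHOOD_HEADS["h_true"]  # sample from the true hypothesis
    for h in posterior:  # Bayes' theorem: posterior is proportional to likelihood times prior
        p = LIKELIHOOD_HEADS[h]
        posterior[h] *= p if heads else (1 - p)
    total = sum(posterior.values())  # renormalize at each step
    for h in posterior:
        posterior[h] /= total

print(posterior)  # posterior of h_true ends up very near 1, h_false near 0
```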

Let us begin by considering some common kinds of examples of inductive arguments. Consider the following two arguments. Example 1. Every raven in a random sample of ravens is black. This strongly supports the following conclusion: All ravens are black. Example 2.

In a random sample of registered voters, some proportion r of those polled say they will vote for Bush for President in the Presidential election. This supports, with a probability of at least p, the conclusion that the proportion of all registered voters who will vote for Bush lies within a small margin of error q of r. This kind of argument is often called an induction by enumeration. It is closely related to the technique of statistical estimation. We may represent the logical form of such arguments semi-formally as follows. Premise: In random sample S consisting of n members of population B, the proportion of members that have attribute A is r. Conclusion: The proportion of all members of B that have attribute A is between r − q and r + q.

The premise breaks down into three separate statements [1]. Any inductive logic that treats such arguments should address two challenges.

In particular, it should tell us how to determine the appropriate degree p to which such premises inductively support the conclusion, for a given margin of error q.

That is, it should be provable as a metatheorem that if a conclusion expressing the approximate proportion for an attribute in a population is true, then it is very likely that sufficiently numerous random samples of the population will provide true premises for good inductive arguments that confer degrees of support p approaching 1 for that true conclusion—where, on pain of triviality, these sufficiently numerous samples are only a tiny fraction of a large population.
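A small simulation can illustrate what this metatheorem claims. The population proportion (0.8), margin of error (0.05), and sample sizes below are invented choices of ours; what matters is the pattern that the fraction of random samples whose observed proportion lies within the margin of error climbs toward 1 as sample size grows.

```python
import random

# How often does a random sample's observed proportion r fall within the
# margin of error q of the true population proportion?
random.seed(1)
true_prop, q, trials = 0.8, 0.05, 2000

for n in (50, 200, 1000):  # increasing sample sizes
    hits = sum(
        abs(sum(random.random() < true_prop for _ in range(n)) / n - true_prop) <= q
        for _ in range(trials)
    )
    print(n, hits / trials)  # the support degree p climbs toward 1 as n grows
```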

The supplement on Enumerative Inductions: Bayesian Estimation and Convergence shows precisely how a Bayesian account of enumerative induction may meet these two challenges.

Enumerative induction is, however, rather limited in scope. This form of induction is only applicable to the support of claims involving simple universal conditionals (i.e., claims of the form "All As are Bs"). But many important empirical hypotheses are not reducible to this simple form, and the evidence for these hypotheses is not composed of an enumeration of such instances. Consider, for example, the Newtonian Theory of Mechanics. All objects remain at rest or in uniform motion unless acted upon by some external force.

An object's acceleration is proportional to the force exerted upon it and inversely proportional to its mass. If an object exerts a force on another object, the second object exerts an equal amount of force on the first object, but in the opposite direction to the force exerted by the first object. The evidence for and against this theory is not gotten by examining a randomly selected subset of objects and the forces acting upon them. Rather, the theory is tested by calculating what this theory says or implies about observable phenomena in a wide variety of specific situations, e.g., the orbits of the planets and the trajectories of projectiles.

This approach to testing hypotheses and theories is ubiquitous, and should be captured by an adequate inductive logic. More generally, for a wide range of cases where inductive reasoning is important, enumerative induction is inadequate. Rather, the kind of evidential reasoning that judges the likely truth of hypotheses on the basis of what they say or imply about the evidence is more appropriate.

Consider the kinds of inferences jury members are supposed to make, based on the evidence presented at a murder trial. The inference to probable guilt or innocence is based on a patchwork of evidence of various kinds. It almost never involves consideration of a randomly selected sequence of past situations in which people like the accused committed similar murders.

Or, consider how a doctor diagnoses her patient on the basis of his symptoms. Although the frequency of occurrence of various diseases when similar symptoms have been present may play a role, this is clearly not the whole story.

Diagnosticians commonly employ a form of hypothesis evaluation, asking, e.g., how likely the patient's symptoms would be if he had a given disease, as compared with how likely they would be on alternative diagnoses. Thus, a fully adequate account of inductive logic should explicate the logic of hypothesis evaluation, through which a hypothesis or theory may be tested on the basis of what it says or "predicts" about observable phenomena. In Section 3 we will see how a kind of probabilistic inductive logic called "Bayesian Inference" or "Bayesian Confirmation Theory" captures such reasoning. The full logical structure of such arguments will be spelled out in that section.
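As a minimal sketch of this evaluative pattern, here is Bayes' theorem applied to a toy diagnostic case. The base rate and likelihoods are invented numbers; the calculation shows how the posterior weighs what the disease hypothesis says about the symptom against what its denial says.

```python
# Invented toy numbers for a hypothetical disease and symptom.
prior_disease = 0.01              # base rate of the disease
p_symptom_given_disease = 0.90    # likelihood of the symptom if the disease is present
p_symptom_given_no_disease = 0.10 # likelihood of the symptom otherwise

# Total probability of observing the symptom.
p_symptom = (p_symptom_given_disease * prior_disease
             + p_symptom_given_no_disease * (1 - prior_disease))

# Bayes' theorem: posterior = likelihood * prior / total probability of the evidence.
posterior_disease = p_symptom_given_disease * prior_disease / p_symptom
print(round(posterior_disease, 3))  # ~0.083: the symptom raises but hardly settles the diagnosis
```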

Perhaps the oldest and best understood way of representing partial belief, uncertain inference, and inductive support is in terms of probability and the equivalent notion of odds. Mathematicians have studied probability for over three centuries, but the concept is certainly much older.

In recent times a number of other, related representations of partial belief and uncertain inference have emerged. Some of these approaches have found useful application in computer-based artificial intelligence systems that perform inductive inferences in expert domains such as medical diagnosis.

Nevertheless, probabilistic representations have predominated in such application domains. So, in this article we will focus exclusively on probabilistic representations of inductive support. A brief comparative description of some of the most prominent alternative representations of uncertainty and support-strength can be found in the supplement Some Prominent Approaches to the Representation of Uncertain Inference.

The mathematical study of probability originated with Blaise Pascal and Pierre de Fermat in the mid-17th century. From that time through the early 19th century, as the mathematical theory continued to develop, probability theory was primarily applied to the assessment of risk in games of chance and to drawing simple statistical inferences about characteristics of large populations, e.g., estimating mortality rates for the pricing of annuities and insurance.

In the early 19th century Pierre-Simon de Laplace made further theoretical advances and showed how to apply probabilistic reasoning to a much wider range of scientific and practical problems. Since that time probability has become an indispensable tool in the sciences, business, and many other areas of modern life.

Throughout the development of probability theory various researchers appear to have thought of it as a kind of logic; George Boole's The Laws of Thought (1854) treated it explicitly as such. John Venn followed two decades later with an alternative, empirical frequentist account of probability in The Logic of Chance. Not long after that the whole discipline of logic was transformed by new developments in deductive logic.

In the late 19th and early 20th century Frege, followed by Russell and Whitehead, showed how deductive logic may be represented in the kind of rigorous formal system we now call quantified predicate logic.

For the first time logicians had a fully formal deductive logic powerful enough to represent all valid deductive arguments that arise in mathematics and the sciences. In this logic the validity of deductive arguments depends only on the logical structure of the sentences involved. This development in deductive logic spurred some logicians to attempt to apply a similar approach to inductive reasoning. The idea was to extend the deductive entailment relation to a notion of probabilistic entailment for cases where premises provide less than conclusive support for conclusions.

Attempts to develop such a logic vary somewhat with regard to the ways in which they attempt to emulate the paradigm of formal deductive logic. Some inductive logicians have tried to follow the deductive paradigm by attempting to specify inductive support probabilities solely in terms of the syntactic structures of premise and conclusion sentences.

In deductive logic the syntactic structure of the sentences involved completely determines whether premises logically entail a conclusion. So these inductive logicians have attempted to follow suit. In such a system each sentence confers a syntactically specified degree of support on each of the other sentences of the language. Thus, the inductive probabilities in such a system are logical in the sense that they depend on syntactic structure alone.

This kind of conception was articulated to some extent by John Maynard Keynes in his Treatise on Probability (1921). Rudolf Carnap pursued this idea with greater rigor in his Logical Foundations of Probability (1950) and in several subsequent works (e.g., Carnap 1952).

So, such approaches might well be called Bayesian logicist inductive logics. Other prominent Bayesian logicist attempts to develop a probabilistic inductive logic include the works of Jeffreys, Jaynes, and Rosenkrantz. It is now widely held that the core idea of this syntactic approach to Bayesian logicism is fatally flawed: syntactic logical structure cannot be the sole determiner of the degree to which premises inductively support conclusions.

A crucial facet of the problem faced by syntactic Bayesian logicism involves how the logic is supposed to apply in scientific contexts where the conclusion sentence is some scientific hypothesis or theory, and the premises are evidence claims. The difficulty is that in any probabilistic logic that satisfies the usual axioms for probabilities, the inductive support for a hypothesis must depend in part on its prior probability.

This prior probability arguably represents how plausible the hypothesis is taken to be on the basis of considerations other than the observational and experimental evidence (e.g., due to relevant plausibility arguments). A syntactic Bayesian logicist must tell us how to assign values to these pre-evidential prior probabilities of hypotheses in a way that relies only on the syntactic logical structure of the hypothesis, perhaps based on some measure of syntactic simplicity.

There are severe problems with getting this idea to work. Various kinds of examples seem to show that such an approach must assign intuitively quite unreasonable prior probabilities to hypotheses in specific cases see the footnote cited near the end of Section 3.

Furthermore, for this idea to apply to the evidential support of real scientific theories, scientists would have to formalize theories in a way that makes their relevant syntactic structures apparent, and then evaluate theories solely on that syntactic basis together with their syntactic relationships to evidence statements. Are we to evaluate alternative theories of gravitation, and alternative quantum theories, this way? This seems an extremely dubious approach to the evaluation of real scientific hypotheses and theories.

Thus, it seems that logical structure alone may not suffice for the inductive evaluation of scientific hypotheses. At about the time that the syntactic Bayesian logicist idea was developing, an alternative conception of probabilistic inductive reasoning was also emerging. This approach is now generally referred to as the Bayesian subjectivist or personalist approach to inductive reasoning (see, e.g., Ramsey 1926; de Finetti 1937; Savage 1954). This approach was originally developed as part of a larger normative theory of belief and action known as Bayesian decision theory.

Bayesian subjectivists provide a logic of decision that captures this idea, and they attempt to justify this logic by showing that in principle it leads to optimal decisions about which of various risky alternatives should be pursued.
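As a sketch of what such a logic of decision recommends, here is the standard expected-utility comparison run on an invented toy problem; the acts, the belief value, and the utilities are all our own assumptions, not anything from the article.

```python
# Toy decision problem: choose the act with the higher expected utility
# given one's degree of belief that it will rain.
belief_rain = 0.3
utilities = {
    "take umbrella": {"rain": 0, "dry": -1},   # mild nuisance either way
    "leave it":      {"rain": -10, "dry": 0},  # bad if caught in the rain
}

for act, u in utilities.items():
    # Expected utility: belief-weighted average of the outcome utilities.
    eu = belief_rain * u["rain"] + (1 - belief_rain) * u["dry"]
    print(act, eu)  # take umbrella: -0.7; leave it: -3.0, so take the umbrella
```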

On the Bayesian subjectivist or personalist account of inductive probability, inductive probability functions represent the subjective or personal belief-strengths of ideally rational agents, the kind of belief strengths that figure into rational decision making. See the section on subjective probability in the entry on interpretations of the probability calculus, in this Encyclopedia.

Elements of a logicist conception of inductive logic live on today as part of the general approach called Bayesian inductive logic. In this article the probabilistic inductive logic we will examine is a Bayesian inductive logic in this broader sense. This logic will not presuppose the subjectivist Bayesian theory of belief and decision, and will avoid the objectionable features of the syntactic version of Bayesian logicism.

We will see that there are good reasons to distinguish inductive probabilities from degree-of-belief probabilities and from purely syntactic logical probabilities. So, the probabilistic logic articulated in this article will be presented in a way that depends on neither of these conceptions of what the probability functions are.

However, this version of the logic will be general enough that it may be fitted to a Bayesian subjectivist or Bayesian syntactic-logicist program, if one desires to do that. All logics derive from the meanings of terms in sentences. What we now recognize as formal deductive logic rests on the meanings (i.e., the truth-functional and quantificational roles) of the logical terms.

These logical terms, and the symbols we will employ to represent them, are as follows: "not" (¬), "and" (∧), "or" (∨), "if…then" (⊃), "if and only if" (≡), together with the quantifiers "all" (∀) and "some" (∃) and the identity relation (=). The validity of deductive arguments depends only on these logical terms. That is, the logical validity of deductive arguments depends neither on the meanings of the name and predicate and relation terms, nor on the truth-values of sentences containing them. It merely supposes that these non-logical terms are meaningful, and that sentences containing them have truth-values.

Deductive logic then tells us that the logical structures of some sets of sentences, i.e., the syntactic arrangements of their logical terms, preclude their all being true together. This is the notion of logical inconsistency.

The notion of logical entailment is inter-definable with it. A collection of premise sentences logically entails a conclusion sentence just when the negation of the conclusion is logically inconsistent with those premises.
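For propositional sentences, this inter-definability can be checked mechanically with truth tables. In the following sketch (our own illustration, with sentences modeled as Python predicates on truth-value assignments), the entailment test and the inconsistency test deliver the same verdict.

```python
from itertools import product

# Sentences are functions from truth-value assignments to booleans.
ATOMS = ("A", "B")

def assignments():
    """Yield every truth-value assignment to the atoms."""
    for values in product((True, False), repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

def entails(premises, conclusion):
    """Premises entail the conclusion: it holds wherever they all hold."""
    return all(conclusion(v) for v in assignments() if all(p(v) for p in premises))

def inconsistent(sentences):
    """Sentences are inconsistent: no assignment makes them all true."""
    return not any(all(s(v) for s in sentences) for v in assignments())

A = lambda v: v["A"]
B = lambda v: v["B"]
if_A_then_B = lambda v: (not v["A"]) or v["B"]
not_B = lambda v: not v["B"]

premises = [if_A_then_B, A]
print(entails(premises, B))              # True: the premises entail B
print(inconsistent(premises + [not_B]))  # True: same verdict via inconsistency
```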

An inductive logic must, it seems, deviate from the paradigm provided by deductive logic in several significant ways. For one thing, logical entailment is an absolute, all-or-nothing relationship between sentences, whereas inductive support comes in degrees-of-strength. For another, although the notion of inductive support is analogous to the deductive notion of logical entailment , and is arguably an extension of it, there seems to be no inductive logic extension of the notion of logical inconsistency —at least none that is inter-definable with inductive support in the way that logical inconsistency is inter-definable with logical entailment.

Another notable difference is that when B logically entails A, adding a premise C cannot undermine the logical entailment, i.e., (C ∧ B) must logically entail A as well. This property of logical entailment is called monotonicity. But inductive support is nonmonotonic: adding a premise can drastically lower (or raise) the degree to which the original premise supports a conclusion.
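A toy numerical illustration of nonmonotonicity, with invented state counts: B alone strongly supports A, yet conjoining the further premise C reverses the verdict.

```python
# Among 100 equally probable B-states, A holds in 90; but among the 10 of
# those states where C also holds, A holds in only 1.
states_B, states_A_and_B = 100, 90
states_B_and_C, states_A_and_B_and_C = 10, 1

print(states_A_and_B / states_B)                  # P[A|B] = 0.9: strong support
print(states_A_and_B_and_C / states_B_and_C)      # P[A|B∧C] = 0.1: support collapses
```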

In a formal treatment of probabilistic inductive logic, inductive support is represented by conditional probability functions defined on sentences of a formal language L. These conditional probability functions are constrained by certain rules or axioms that are sensitive to the meanings of the logical terms (i.e., "not", "and", "or", etc.). The axioms apply without regard for what the other terms of the language may mean. Although each support function satisfies these same axioms, the further issue of which among them provides an appropriate measure of inductive support is not settled by the axioms alone.

That may depend on additional factors, such as the meanings of the non-logical terms (i.e., the names and predicate expressions) of the language. A good way to specify the axioms of the logic of inductive support functions is as follows. These axioms are apparently weaker than the usual axioms for conditional probabilities.

For instance, the usual axioms assume that conditional probability values are restricted to real numbers between 0 and 1. The following axioms do not assume this, but only that support functions assign some real numbers as values for support strengths. However, it turns out that the following axioms suffice to derive all the usual axioms for conditional probabilities including the usual restriction to values between 0 and 1.
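For reference, two of these "usual" constraints, stated in standard conditional-probability notation (a textbook formulation, not the article's own axiom list), are the following.

```latex
% Derived rather than assumed: support values are bounded, and logically
% entailed conclusions receive maximal support.
\[
  0 \le P[A \mid B] \le 1,
  \qquad
  P[A \mid B] = 1 \ \text{whenever}\ B \models A.
\]
```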

We draw on these weaker axioms only to forestall some concerns about whether the support function axioms may assume too much, or may be overly restrictive. This axiomatization takes conditional probability as basic, as seems appropriate for evidential support functions.

Notice that conditional probability functions apply only to pairs of sentences, a conclusion sentence and a premise sentence. So, in probabilistic inductive logic we represent finite collections of premises by conjoining them into a single sentence.

Rather than saying that premises B1, B2, …, Bn support conclusion A, we say that the single conjunctive premise (B1 ∧ B2 ∧ … ∧ Bn) supports A. The above axioms are quite weak. For instance, they do not say that logically equivalent sentences are supported by all other sentences to the same degree; rather, that result is derivable from these axioms (see result 6 below).

Nor do these axioms say that logically equivalent sentences support all other sentences to the same degree; rather, that result is also derivable (see result 8 below). Indeed, from these axioms all of the usual theorems of probability theory may be derived.

The following results are particularly useful in probabilistic logic. Their derivations from these axioms are provided in note 2. Let us now briefly consider each axiom, to see how plausible it is as a constraint on a quantitative measure of inductive support and how it extends the notion of deductive entailment. It turns out that all support values must lie between 0 and 1, but this follows from the axioms, rather than being assumed by them.

The scaling of inductive support via the real numbers is surely a reasonable way to go. Axiom 1 is a non-triviality requirement. It says that the support values cannot be the same for all sentence pairs. This axiom merely rules out the trivial support function that assigns the same amount of support to each sentence by every sentence. One might replace this axiom with a somewhat stronger non-triviality rule, but any such alternative turns out to be derivable from axiom 1 together with the other axioms.

Axiom 2 asserts that when B logically entails A, the support of A by B is as strong as support can possibly be. This comports with the idea that an inductive support function is a generalization of the deductive entailment relation, where the premises of deductive entailments provide the strongest possible support for their conclusions. Axiom 3 is an especially weak axiom.

But taken together with the other axioms, it suffices to entail that logically equivalent sentences support all sentences to precisely the same degree. Axiom 4 says that inductive support adds up in a plausible way: when C logically entails the incompatibility of A and B, i.e., when C logically entails ¬(A ∧ B), the support C gives to the disjunction (A ∨ B) is the sum of the support it gives to A and to B separately. The only exception is in those cases where C acts like a logical contradiction and supports all sentences to the maximum possible degree (in deductive logic a logical contradiction logically entails every sentence).
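In symbols, on a standard formulation of such an additivity rule (the article's own notation may differ slightly):

```latex
% Additivity for premises that rule out joint truth, assuming C is not
% itself inconsistent (in which case it would support everything maximally).
\[
  \text{if } C \models \lnot(A \land B), \text{ then }
  P[(A \lor B) \mid C] = P[A \mid C] + P[B \mid C].
\]
```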

Support functions may also be given a proportion reading, on which P[A | C] measures the proportion of the C-states in which A holds. Read this way, axiom 5 then says the following. Suppose B is true in proportion q of all the states of affairs where C is true, and suppose A is true in fraction r of those states where B and C are true together. Then A and B are true together in proportion r × q of all the states where C is true: P[(A ∧ B) | C] = P[A | (B ∧ C)] × P[B | C] = r × q.

The degree to which a sentence B supports a sentence A may well depend on what these sentences mean. In particular it will usually depend on the meanings we associate with the non-logical terms (those terms other than the logical terms "not", "and", "or", etc.).

For example, given the usual meanings of the terms, "b is a bachelor" should maximally support "b is unmarried". However, evidential support functions should not presuppose meaning assignments in the sense of so-called secondary intensions, e.g., the a posteriori identification of water with H2O.

Thus, the meanings of terms we associate with a support function should only be their primary intensions, not their secondary intensions. In the context of inductive logic it makes good sense to supplement the above axioms with two additional axioms.

Here is the first of them, axiom 6. From axiom 6, together with results 7, 5, and 4, it follows that analytically true sentences are maximally supported by every premise. The idea behind axiom 6 is that inductive logic is about evidential support for contingent claims.

Nothing can count as empirical evidence for or against non-contingent truths. In particular, analytic truths should be maximally supported by all premises C. One important respect in which inductive logic should follow the deductive paradigm is that the logic should not presuppose the truth of contingent statements. If a statement C is contingent, then some other statements should be able to count as evidence against C. A support function that instead assigned C probability 1 given every possible premise would place C beyond the reach of evidence; this is no way for an inductive logic to behave.

The whole idea of inductive logic is to provide a measure of the extent to which premise statements indicate the likely truth-values of contingent conclusion statements. Such probability assignments would make the inductive logic enthymematic by hiding significant premises in inductive support relationships. It would be analogous to permitting deductive arguments to count as valid in cases where the explicitly stated premises are insufficient to logically entail the conclusion, but where the validity of the argument is permitted to depend on additional unstated premises.

This is not how a rigorous approach to deductive logic should work, and it should not be a common practice in a rigorous approach to inductive logic. Nevertheless, it is common practice for probabilistic logicians to sweep provisionally accepted contingent claims under the rug by assigning them probability 1 regardless of the fact that no explicit evidence for them is provided.

Although this convention is useful, such probability functions should be considered mere abbreviations for proper, logically explicit, non-enthymematic, inductive support relations.

Some Bayesian logicists have proposed that an inductive logic might be made to depend solely on the logical form of sentences, as is the case for deductive logic. The idea is, effectively, to supplement axioms 1—7 with additional axioms that depend only on the logical structures of sentences, and to introduce enough such axioms to reduce the number of possible support functions to a single uniquely best support function.

It is now widely agreed that this project cannot be carried out in a plausible way. Perhaps support functions should obey some rules in addition to axioms 1—7. But it is doubtful that any plausible collection of additional rules can suffice to determine a single, uniquely qualified support function.

Later, in Section 3, we will briefly return to this issue, after we develop a more detailed account of how inductive probabilities capture the relationship between hypotheses and evidence. Axioms 1–7 for conditional probability functions merely place formal constraints on what may properly count as a degree of support function. The issue of which of the possible truth-value assignments to a language represents the actual truth or falsehood of its sentences depends on more than this.

It depends on the meanings of the non-logical terms and on the state of the actual world. Similarly, the degree to which some sentences actually support others in a fully meaningful language must rely on something more than the mere satisfaction of the axioms for support functions.

It must, at least, rely on what the sentences of the language mean, and perhaps on much more besides. But, what more? Perhaps a better understanding of what inductive probability is may provide some help by filling out our conception of what inductive support is about.

One natural thought is that the degree to which B supports A goes by how large a share the A-states occupy among the possible states of affairs in which B holds. There will not generally be a single privileged way to define such a measure on possible states of affairs. This idea needs more fleshing out, of course.

The next section will provide some indication of how that might go. Subjectivist Bayesians offer an alternative reading of the support functions. Subjectivist Bayesians usually tie such belief strengths to how much money or how many units of utility the agent would be willing to bet on A turning out to be true. Roughly, the idea is this. These relationships between belief-strengths and the desirability of outcomes e. Subjectivist Bayesians usually take inductive probability to just be this notion of probabilistic belief-strength.

Undoubtedly real agents do believe some claims more strongly than others. And, arguably, the belief strengths of real agents can be measured on a probabilistic scale between 0 and 1, at least approximately. In any case, some account of what support functions are supposed to represent is clearly needed.

The belief function account and the logicist account in terms of measures on possible states of affairs are two attempts to provide this account.


