When an argument is inductively strong, the premises are said to confirm the conclusion. For example, suppose that you are interested in whether Ann (whom you do not know very well) is a student at UMD. If H is the proposition "Ann is a student at UMD", then your probability of H is around 0.5. Now suppose that you observe Ann in the Skinner Building carrying a backpack. This evidence confirms H in the sense that it boosts your probability of H, even though it does not guarantee that H is true. On the other hand, observing Ann carrying a backpack on the National Mall would probably not raise your probability of H, and so does not confirm H.
As we have noted when discussing the definition of inductively strong arguments, it is important to distinguish between absolute confirmation and incremental confirmation:
X evidentially supports Y (Pr(Y∣X)>0.5) represents absolute confirmation, where learning that X is true raises the probability of the hypothesis Y above some fixed threshold (typically the threshold is 0.5).
X is positively relevant to Y (Pr(Y∣X)>Pr(Y)) represents incremental confirmation. This is the "boost" that the conclusion receives upon learning that the premise is true.
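The two conditions above can be sketched as simple numeric checks. The probabilities below (Pr(H) = 0.5 and a hypothetical Pr(H∣E) = 0.9 for the Ann example) are illustrative only; the function names are ours, not standard terminology.

```python
# Sketch: distinguishing absolute from incremental confirmation.
# The numbers used below are hypothetical, chosen only for illustration.

def absolute_confirmation(pr_y_given_x, threshold=0.5):
    """X evidentially supports Y: Pr(Y|X) exceeds the threshold."""
    return pr_y_given_x > threshold

def incremental_confirmation(pr_y_given_x, pr_y):
    """X is positively relevant to Y: Pr(Y|X) > Pr(Y)."""
    return pr_y_given_x > pr_y

# H = "Ann is a student at UMD"; E = "Ann is in the Skinner Building
# carrying a backpack".  Suppose Pr(H) = 0.5 and Pr(H|E) = 0.9.
print(absolute_confirmation(0.9))          # True
print(incremental_confirmation(0.9, 0.5))  # True
```

Note that the two conditions can come apart: with Pr(Y) = 0.8 and Pr(Y∣X) = 0.6, X evidentially supports Y in the absolute sense but is negatively relevant to Y.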
So far in this book, we have focused on evaluating a single argument. To evaluate an argument X⇒Y, we first ask:
Is the argument valid? That is, is Y true in all situations in which X is true?
If the answer is no to the above question, then we ask the following:
Should you believe that the conclusion Y is true under the supposition that the premise X is true?
Is the premise X relevant to the truth of the conclusion Y?
Of course, to fully evaluate the above argument, it is also important to determine whether the premise X is true.
In this section, we are going to introduce ways to compare two different arguments in terms of both absolute and incremental confirmation. For example, consider the following stochastic truth table:
In both of the following arguments, the premise evidentially supports the conclusion:
P⇒Q
R⇒Q
But there is more we can say about these two arguments. Since Pr(Q∣P)>Pr(Q∣R), P provides stronger support for the conclusion Q than R does. So, in the absolute sense of confirmation, P confirms Q more than R confirms Q.
Note that Pr(Q)=0.6. This means that P is positively relevant to Q and R is positively relevant to Q. So, both arguments are inductively strong. Is there some way to measure which premise is more positively relevant to Q? There are different ways to measure how positively relevant a premise is to a conclusion. One natural thought is to measure positive relevance in terms of the difference between the conditional and unconditional probabilities.
Confirmation Measure: Difference
In any stochastic truth table, for any formulas X and Y, let d(X,Y) be defined as follows:
d(X,Y)=Pr(Y∣X)−Pr(Y).
In the above stochastic truth table, we have:
d(P,Q)=Pr(Q∣P)−Pr(Q)=0.667−0.6=0.067; and
d(R,Q)=Pr(Q∣R)−Pr(Q)=0.636−0.6=0.036.
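The two calculations above can be checked with a short sketch. The function below simply implements the definition of d; the inputs are the (rounded) conditional and unconditional probabilities from the table.

```python
# A minimal sketch of the difference measure d(X, Y) = Pr(Y|X) - Pr(Y),
# applied to the rounded probabilities from the stochastic truth table.

def d(pr_y_given_x, pr_y):
    """Difference measure of incremental confirmation."""
    return pr_y_given_x - pr_y

print(round(d(0.667, 0.6), 3))  # 0.067  -- d(P, Q)
print(round(d(0.636, 0.6), 3))  # 0.036  -- d(R, Q)
```

Since d(P,Q) > d(R,Q), the difference measure agrees with the verdict that P gives Q a bigger boost than R does.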
For any stochastic truth table, and any formulas X and Y, the number d(X,Y) is a measure of the boost in probability that Y receives from X. There are many other ways to measure how much X confirms Y; it is beyond the scope of this course to survey all the different confirmation measures. One problem with d(X,Y) as a measure of incremental confirmation is that if Y is already very probable (i.e., its probability is close to 1), then d(X,Y) will be very small regardless of how strong the evidence X is for Y. The next way of measuring positive relevance does not suffer from this problem:
Confirmation Measure: Likelihood Ratios
In any stochastic truth table, for any formulas X and Y, let ℓ(X,Y) be defined as follows:

ℓ(X,Y)=(Pr(X∣Y)−Pr(X∣¬Y))/(Pr(X∣Y)+Pr(X∣¬Y)).
Both d(X,Y) and ℓ(X,Y) measure the incremental confirmation that X gives Y. One potential source of confusion with the definition of ℓ is that the equation for ℓ(X,Y) uses Pr(X∣Y)---the probability of X conditional on Y---while d(X,Y) uses Pr(Y∣X). If X is the evidence and Y is the hypothesis, then Pr(X∣Y) is the likelihood of observing X supposing that the hypothesis Y is true, and Pr(X∣¬Y) is the likelihood of observing X supposing that the hypothesis Y is false. So, the numerator Pr(X∣Y)−Pr(X∣¬Y) is the difference between the probability of observing X assuming Y is true and the probability of observing X assuming Y is false. To calculate ℓ(P,Q) and ℓ(R,Q) in the above stochastic truth table, we need to calculate the following probabilities:
Although d(X,Y) and ℓ(X,Y) are often different numbers (as in the above example), they are related in the following way:
Observation
In any stochastic truth table, for all formulas X and Y:
X is positively relevant to Y if and only if d(X,Y)>0 if and only if ℓ(X,Y)>0
X and Y are independent if and only if d(X,Y)=0 if and only if ℓ(X,Y)=0
X is negatively relevant to Y if and only if d(X,Y)<0 if and only if ℓ(X,Y)<0
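The observation can be checked numerically. The sketch below assumes the definition ℓ(X,Y)=(Pr(X∣Y)−Pr(X∣¬Y))/(Pr(X∣Y)+Pr(X∣¬Y)) and computes both measures from a joint distribution over the four truth-value combinations of X and Y; the random distributions are illustrative, not part of any example in the text.

```python
import random

def measures(p_xy, p_xny, p_nxy, p_nxny):
    """Compute d(X,Y) and l(X,Y) from a joint distribution over the four
    truth-value combinations (X&Y, X&~Y, ~X&Y, ~X&~Y), summing to 1."""
    pr_x = p_xy + p_xny
    pr_y = p_xy + p_nxy
    pr_y_given_x = p_xy / pr_x
    pr_x_given_y = p_xy / pr_y
    pr_x_given_not_y = p_xny / (1 - pr_y)
    d = pr_y_given_x - pr_y
    l = (pr_x_given_y - pr_x_given_not_y) / (pr_x_given_y + pr_x_given_not_y)
    return d, l

# Independence: with a uniform joint distribution, both measures are 0.
print(measures(0.25, 0.25, 0.25, 0.25))  # (0.0, 0.0)

# On random joint distributions, d and l agree in sign.
random.seed(1)
for _ in range(1000):
    ps = [random.random() + 0.001 for _ in range(4)]
    total = sum(ps)
    d_val, l_val = measures(*(p / total for p in ps))
    assert (d_val > 0) == (l_val > 0)
```

The sign agreement is no accident: d(X,Y)>0 holds exactly when Pr(X∧Y)>Pr(X)Pr(Y), which holds exactly when Pr(X∣Y)>Pr(X∣¬Y), i.e., when the numerator of ℓ(X,Y) is positive.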
There are many ways to compare and contrast these two measures of incremental confirmation. One reason to prefer ℓ is illustrated by computing d(P,P∨Q) and ℓ(P,P∨Q) in the following stochastic truth table:
Intuitively, since P⊨P∨Q non-trivially (in the sense that the premise is not a contradiction and the conclusion is not a tautology), the premise P should give the greatest confirmation to the conclusion P∨Q. While ℓ(P,P∨Q) is assigned the maximum value of 1, the measure d(P,P∨Q) is only 0.4. In fact, we have the following key observation:
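The contrast can be sketched numerically. Since P entails P∨Q, Pr(P∣¬(P∨Q))=0, so ℓ(P,P∨Q)=1 no matter what positive value Pr(P∣P∨Q) takes (the 0.5 below is illustrative), while d(P,P∨Q)=1−Pr(P∨Q) shrinks as the conclusion gets more probable. This assumes the ratio form of ℓ with numerator Pr(X∣Y)−Pr(X∣¬Y).

```python
# Sketch: when X logically entails Y, Pr(X|not-Y) = 0, so l(X, Y)
# hits its maximum value of 1, while d(X, Y) = 1 - Pr(Y) can be
# small when Y is already probable.

def d(pr_y_given_x, pr_y):
    return pr_y_given_x - pr_y

def l(pr_x_given_y, pr_x_given_not_y):
    return (pr_x_given_y - pr_x_given_not_y) / (pr_x_given_y + pr_x_given_not_y)

# X = P, Y = P or Q, with Pr(P or Q) = 0.6 as in the example:
print(l(0.5, 0.0))              # 1.0 -- maximum, since Pr(X|not-Y) = 0
print(round(d(1.0, 0.6), 3))    # 0.4 -- only 1 - Pr(Y)
```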
Observation
For any formulas X and Y, in any stochastic truth table: