Huemer (pp. 221–223) describes a likelihood justification of parsimony in which the hypotheses considered are models with adjustable parameters. He says that this justification is “the most promising” of the three he considers. Huemer isn’t talking about AIC here; rather, he is suggesting a Bayesian rationale for valuing simpler models. He considers a simple model S and its more complex competitor C, and says that “the likelihood account argues that S typically has the higher likelihood P(E|S). Since S is compatible with a smaller range of data, it assigns a higher average probability (or probability density) to those possible data which it allows. C spreads its probability over a larger range of possibilities, consequently assigning a lower probability (density), on average, to the possibilities which it allows” (italics mine). However, on the next page Huemer notes that “even when the simpler of two theories fits a narrower range of data than the more complex theory, the simpler theory need not have a higher likelihood in relation to every possible datum which both accommodate: rather, the simpler theory must have a higher average likelihood within the range of data which it accommodates than the complex theory has within the range which the complex theory accommodates.” Huemer offers no argument for the claim that simpler models “typically” have the higher likelihood, given the data at hand. It is consistent with Huemer’s point about “averages” that, in each situation in which S assigns positive probability densities to a smaller range of values than C does, S has a lower likelihood than C across almost all of that value range.

The desired result would be obtained if both models were required to impose flat probability density distributions on their parameters, a possibility that Huemer represents in a figure. Bayesians often embrace this stipulation; frequentists demur. Another limitation of Huemer’s argument is that he is discussing models in which each parameter is assigned a specified finite value range. This is often not the case, as in his example of LIN and PAR.

Bayesianism rejects holism as it applies to incremental confirmation, but there is another part of Bayesianism that may seem to be on Quine’s side in his disagreement with Carnap. If a theory T entails the postulate (P) that physical objects exist, then Pr(P | X) ≥ Pr(T | X), for any proposition X, provided that the two probabilities are well-defined. This means that if your evidence tells you that T has a high probability, that same evidence tells you that P’s probability is at least as high. This point about proposition P holds for every other proposition that T entails. It doesn’t matter whether X is empirical evidence for theory T; X could be a tautology, or evidence against T, or an empirical statement that is evidentially irrelevant to T. This sounds like epistemological holism on steroids, but consider this: Carnap (1950b) famously distinguished two senses of confirmation, incremental confirmation and absolute confirmation. X incrementally confirms T when X raises T’s probability; X absolutely confirms T when Pr(T | X) is high. The inequality just stated concerns absolute probability, not incremental confirmation; X can raise T’s probability without raising P’s. These points indicate that the inequality does nothing to show that evidence for a theory thereby provides evidence for the theory’s consequences. Carnapians can grant the inequality and still insist that conventions are different in kind from empirical statements. A convention (C) in the Carnapian sense resembles a tautology in that Pr(C | O) = Pr(C | ¬O), where O is an observation statement (again assuming that both probabilities are well-defined).
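The entailment inequality and its limits can be checked concretely. The following sketch is my own illustration, not from the text: the die-roll probability space and the particular events T, P, and X are invented for the example. It verifies that Pr(P | X) ≥ Pr(T | X) when T entails P, while X nonetheless incrementally confirms T and incrementally disconfirms P.

```python
from fractions import Fraction

# Illustrative probability space (my invention): a fair six-sided die,
# with events represented as sets of outcomes.
OUTCOMES = {1, 2, 3, 4, 5, 6}

def pr(event, given=None):
    """Pr(event) or Pr(event | given), as an exact fraction."""
    space = set(given) if given is not None else OUTCOMES
    return Fraction(len(set(event) & space), len(space))

T = {1}           # a "theory"
P = {1, 2, 3}     # a consequence: T is a subset of P, so T entails P
X = {1, 4, 5, 6}  # an evidence statement

# The inequality: whenever T entails P, Pr(P | X) >= Pr(T | X).
assert pr(P, given=X) >= pr(T, given=X)

# But the inequality concerns absolute probability, not incremental
# confirmation: here X raises T's probability while lowering P's.
assert pr(T, given=X) > pr(T)   # 1/4 > 1/6: X incrementally confirms T
assert pr(P, given=X) < pr(P)   # 1/4 < 1/2: X incrementally disconfirms P
```

So evidence that incrementally confirms a theory need not incrementally confirm each of the theory's consequences, even though the absolute-probability inequality always holds.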
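The point about “averages” can also be made numerically. The sketch below is my own illustration, not Huemer's or the author's: the particular densities and support lengths are invented. It shows a simple model S that satisfies Huemer's higher-average-likelihood condition while the more complex model C has the higher likelihood at almost every data value S allows.

```python
# S allows data only in [0, 1], but packs mass 0.901 into the sliver
# [0, 0.01] and is thin (density 0.1) elsewhere. Total mass:
# 90.1 * 0.01 + 0.1 * 0.99 = 1.0.
def density_S(x):
    if 0.0 <= x <= 0.01:
        return 90.1
    if 0.01 < x <= 1.0:
        return 0.1
    return 0.0

# C allows data in the wider range [0, 5], spread uniformly (0.2 * 5 = 1.0).
def density_C(x):
    return 0.2 if 0.0 <= x <= 5.0 else 0.0

# Average likelihood over each model's own range: total mass / range length.
avg_S = (90.1 * 0.01 + 0.1 * 0.99) / 1.0   # = 1.0
avg_C = (0.2 * 5.0) / 5.0                  # = 0.2

# Huemer's condition holds: the narrower model wins on average ...
assert avg_S > avg_C

# ... yet C has the strictly higher likelihood on 99% of S's range,
# i.e. at every point of (0.01, 1]; S wins only on the sliver [0, 0.01].
sample = [i / 1000 for i in range(11, 1001)]   # points in (0.01, 1]
assert all(density_C(x) > density_S(x) for x in sample)
```

The design choice is the one the text identifies: concentrating S's mass on a tiny subinterval forces S's average density over its support to be high, without making S's density high at most of the points it allows.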