Limits of Bayesianism

Many proponents of Bayesianism point to Cox’s theorem as the justification for arguing that there is only one coherent method for representing uncertainty. Cox’s theorem states that any representation of uncertainty satisfying certain assumptions is isomorphic to classical probability theory. As I have long argued, this claim depends upon the law of the excluded middle (LEM).
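For readers who have not seen the theorem spelled out, here is a rough sketch of its assumptions, with the technical regularity conditions omitted; Pl denotes a plausibility assignment, and S and F are simply my labels for the otherwise unspecified functions in the usual presentation.

```latex
% A rough sketch of Cox's assumptions (regularity conditions omitted).
% Pl is a plausibility assignment; S and F are otherwise unspecified functions.
\begin{align*}
  \text{(C1)}\quad & \mathrm{Pl}(A \mid C) \in \mathbb{R}
      && \text{(belief is a single real number)} \\
  \text{(C2)}\quad & \mathrm{Pl}(\lnot A \mid C) = S\bigl(\mathrm{Pl}(A \mid C)\bigr)
      && \text{(negation)} \\
  \text{(C3)}\quad & \mathrm{Pl}(A \land B \mid C)
      = F\bigl(\mathrm{Pl}(A \mid B \land C),\ \mathrm{Pl}(B \mid C)\bigr)
      && \text{(conjunction)}
\end{align*}
% Conclusion: Pl can be monotonically rescaled to a function p obeying
% the sum and product rules of probability:
\[
  p(A \mid C) + p(\lnot A \mid C) = 1,
  \qquad
  p(A \land B \mid C) = p(A \mid B \land C)\, p(B \mid C).
\]
```

On my reading, (C2), together with the Boolean algebra of propositions it presupposes, is where the excluded middle does its work: it takes for granted that A and not-A between them exhaust the possibilities, so that one's belief in not-A is fully determined by one's belief in A.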
Mark Colyvan, an Australian philosopher of mathematics, published a 2004 paper that examined the philosophical and logical assumptions of Cox’s theorem (assumptions usually left implicit by its proponents), and argued that these are inappropriate for many (perhaps even most) domains involving uncertainty.
M. Colyvan [2004]: The philosophical significance of Cox’s theorem. International Journal of Approximate Reasoning, 37: 71-85.
Colyvan’s work complements Glenn Shafer’s attack on the theorem, which noted that the theorem assumes belief should be represented by a real-valued function.
G. A. Shafer [2004]: Comments on “Constructing a logic of plausible inference: a guide to Cox’s theorem” by Kevin S. Van Horn. International Journal of Approximate Reasoning, 35: 97-105.
Although these papers are several years old, I mention them here for the record – and because I still encounter invocations of Cox’s theorem.
In my experience, most statisticians, like most economists, have little historical sense, so they will not appreciate a nice irony: the person responsible for axiomatizing classical probability theory, Andrei Kolmogorov, is also one of the people responsible for axiomatizing intuitionistic logic, which differs from classical logic in dispensing with the law of the excluded middle. One such axiomatization is known as BHK Logic, in recognition of Brouwer, Heyting and Kolmogorov.
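To make the contrast concrete, here is a standard illustration (not specific to any of the papers above); the turnstiles subscripted C and I are my shorthand for provability in classical and intuitionistic propositional logic, respectively.

```latex
% The excluded middle separates classical from intuitionistic propositional logic.
\begin{align*}
  & \vdash_{\mathrm{C}} \ p \lor \lnot p
      && \text{(classical logic proves the excluded middle)} \\
  & \vdash_{\mathrm{I}} \ \lnot\lnot\,(p \lor \lnot p)
      && \text{(intuitionistic logic proves only its double negation)} \\
  & \nvdash_{\mathrm{I}} \ p \lor \lnot p
      && \text{(the excluded middle itself is not an intuitionistic theorem)}
\end{align*}
```

The middle line is an instance of Glivenko’s theorem: a propositional formula is classically provable exactly when its double negation is intuitionistically provable.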
