# Learning Theory and Belief Revision

Nowadays, most normative theories of learning are framed in terms of measures--usually probabilities--and revisions of measures of credence. An older, alternative tradition in philosophy--as old as Plato's Meno--thinks of learners as making a sequence of conjectures, and investigates the conditions under which there are methods for converging on the true answer, whatever the truth may be. The Platonic perspective was briefly revived in this century by von Mises and by Reichenbach. In the 1960s, Hilary Putnam and E. Mark Gold independently combined it with bits of recursion theory to create the subject of computational learning theory. Philosophical logicians and computer scientists have in recent years given increasing attention to what should be done when data contradict accepted beliefs--and so a conventionally coherent, but not strictly coherent, Bayesian finds himself in an impossible position. These inquiries into belief revision share with learning theory an absence of measures with which to propound or analyze norms.
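The sequence-of-conjectures picture can be made concrete with a small sketch in the style of Gold's "identification in the limit". The hypothesis class and function names here are purely illustrative, not drawn from the source: the learner observes successive data points from an unknown function in a known countable class, and after each datum conjectures the first hypothesis in an enumeration that is consistent with everything seen so far. If the class contains the truth, the conjectures eventually stabilize on it, even though the learner never knows *when* it has converged.

```python
# Toy "identification by enumeration": learn which f_k(x) = k * x
# generated the data, by conjecturing the first consistent index.
# (The class f_k and the bound max_k are illustrative assumptions.)

def hypothesis(k):
    """The k-th hypothesis in a fixed enumeration: f_k(x) = k * x."""
    return lambda x: k * x

def learn(stream, max_k=100):
    """Yield a conjecture (an index k) after each observed pair (x, y)."""
    data = []
    for x, y in stream:
        data.append((x, y))
        # Conjecture the first enumerated hypothesis consistent so far.
        for k in range(max_k):
            if all(hypothesis(k)(xi) == yi for xi, yi in data):
                yield k
                break

# The unknown truth is f_3; feed the learner its graph in order.
truth = hypothesis(3)
stream = [(x, truth(x)) for x in range(5)]
conjectures = list(learn(stream))
print(conjectures)  # → [0, 3, 3, 3, 3]: stabilizes on 3 once x = 1 is seen
```

The first datum (0, 0) is consistent with every hypothesis, so the learner's initial conjecture (the first index, 0) is wrong; one more datum forces convergence to the truth, and the conjecture never changes again. This convergence-without-a-halting-signal is exactly the feature the learning-theoretic tradition analyzes.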

Kelly was among the first to portray learning theory as a fundamental tool in epistemology, the basis for an alternative, sceptical philosophy of induction. He developed "ideal" (non-computational) learning theory within a topological perspective, which enabled him to draw surprising parallels between the problem of induction and uncomputability. Still closer to philosophy, Kelly portrayed "transcendental deductions" as recursion-theoretic completeness theorems. He gave a comprehensive analysis of the computable testability of empirical theories whose predictions are uncomputable, leading to a proof that some theories with infinitely uncomputable predictions can nonetheless be tested reliably by computational means. Kelly also reconstructed hypothetico-deductive methodology as a computation-theoretic reduction of discovery to hypothesis testing, leading to a formal refutation of Popper's claim that the hypothetico-deductive method is the best such reduction.

In perhaps the most striking encounter of post-modern relativism and modern logic, Kelly, Glymour and their former student Cory Juhl analyzed the problem of converging to the truth when the truth is a function of the scientist's acts or beliefs. Kelly's current project is to provide a systematic learning theoretic analysis of recently proposed methods for "belief revision". The results reveal previously unsuspected relationships between belief revision theory and learning theory that serve to enrich both subjects.

Glymour applied the learning theoretic framework to characterize hypotheses about "cognitive architecture" that can be learned from observing combinations of cognitive deficits in people with brain injuries. He is now extending the analysis to resolve a longstanding debate in cognitive neuropsychology over the value of studies that compare aggregate features of distinct groups versus studies that collect records of individual deficits.