Bayesian models of cognition: What's built in after all?

Abstract

This article explores some of the philosophical implications of the Bayesian modeling paradigm. In particular, it focuses on the ramifications of the fact that Bayesian models pre-specify an inbuilt hypothesis space. To what extent does this pre-specification correspond to simply "building the solution in"? I argue that any learner (whether computer or human) must have a built-in hypothesis space in precisely the same sense that Bayesian models have one. This has implications for the nature of learning, Fodor's puzzle of concept acquisition, and the role of modeling in cognitive science.
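To make the notion of a pre-specified hypothesis space concrete, here is a minimal Python sketch (my own illustration, not from the article). The particular hypothesis space (three candidate coin biases), the uniform prior, and the coin-flip likelihood are all arbitrary choices for the example; the point is only that inference selects among hypotheses the modeler built in.

```python
# Minimal sketch (illustrative, not from the article): Bayesian inference
# over an explicitly pre-specified hypothesis space.

# Built-in hypothesis space: three candidate biases for a coin
# (an arbitrary choice for this example).
hypotheses = [0.25, 0.50, 0.75]

# Prior: uniform over the built-in hypothesis space.
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

def likelihood(data, h):
    """P(data | h) for a sequence of coin flips ('H' or 'T')."""
    p = 1.0
    for flip in data:
        p *= h if flip == "H" else (1.0 - h)
    return p

def posterior(data):
    """Bayes' rule: P(h | data) is proportional to P(data | h) * P(h)."""
    unnormalized = {h: likelihood(data, h) * prior[h] for h in hypotheses}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

if __name__ == "__main__":
    print(posterior("HHTHH"))
    # Whatever the data, the model can only ever favor 0.25, 0.50, or
    # 0.75 -- its conclusions are constrained by the built-in space.
```

Note that the learner here cannot infer a bias of 0.8, no matter the evidence: in this sense the hypothesis space is "built in", which is the sense at issue in the article.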

Publication
Philosophy Compass 7: 127-138
Andrew Perfors
