What do our sampling assumptions affect: how we encode data or how we reason from it?

In describing how people generalize from observed samples of data to novel cases, theories of inductive inference have emphasized the learner's reliance on the contents of the sample. More recently, a growing body of literature suggests that …

Social meta-inference and the evidentiary value of consensus

In a Twitter-like experimental environment, we show that people are more influenced by the number of distinct posts than by the number of distinct people, and hardly at all by the diversity of points made.

Do additional features help or hurt category learning? The curse of dimensionality in human learners

Shows that people are afflicted by the curse of dimensionality only for rule-based categories, not family-resemblance ones.

Leaping to conclusions: Why premise relevance affects argument strength

Demonstrates that premise non-monotonicity can be explained by people's assumptions about how data are sampled, and can be captured by a Bayesian model of generalization.
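A minimal sketch of how such a Bayesian model can produce non-monotonic generalization, assuming strong sampling and the size principle (smaller hypotheses are favored as consistent observations accumulate). The toy hypothesis space of nested integer ranges and all function names are illustrative assumptions, not the model reported in the paper:

```python
# Hedged sketch: Bayesian generalization under strong sampling.
# The hypothesis space and parameters below are illustrative only.

def likelihood(data, h):
    """P(data | h) under strong sampling: each observation is drawn
    uniformly from h's extension, so smaller hypotheses score higher."""
    if all(x in h for x in data):
        return (1.0 / len(h)) ** len(data)
    return 0.0

def p_generalize(y, data, hypotheses):
    """P(y in concept | data): posterior mass of hypotheses containing y,
    under a uniform prior over the hypothesis space."""
    posterior = {h: likelihood(data, h) for h in hypotheses}
    z = sum(posterior.values())
    return sum(p for h, p in posterior.items() if y in h) / z

# Toy hypothesis space: nested ranges {1..10}, {1..20}, {1..50}, {1..100}.
hyps = [frozenset(range(1, n + 1)) for n in (10, 20, 50, 100)]

# Non-monotonicity: adding more premises from the same small region
# concentrates the posterior on small hypotheses, so generalization
# to a distant item (40) goes DOWN as consistent evidence goes up.
few = p_generalize(40, [5, 8], hyps)
many = p_generalize(40, [5, 8, 3, 7, 6], hyps)
```

Running this, `many` comes out smaller than `few`: the extra premises strengthen the case for a narrow concept, weakening generalization to the distant item — the qualitative signature of premise non-monotonicity.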

How do people learn from negative evidence? Non-monotonic generalizations and sampling assumptions in inductive reasoning

Induction in language learning

Induction, overhypotheses, and the shape bias: Some arguments and evidence for rational constructivism