induction

What do our sampling assumptions affect: How we encode data or how we reason from it?

People need to know how the data were generated at the time they encode them; they cannot revise that encoding later if their sampling assumptions turn out to be wrong.

Human-like property induction is a challenge for large language models

The impressive recent performance of large language models such as GPT-3 has led many to wonder to what extent they can serve as models of general intelligence or resemble human cognition. We address this issue by applying GPT-3 to a classic …

Social meta-inference and the evidentiary value of consensus

In a Twitter-like experimental environment, we show that people are more influenced by the number of distinct posts than by the number of distinct people, and hardly at all by the diversity of the points made.

Do additional features help or hurt category learning? The curse of dimensionality in human learners

Shows that people are afflicted by the curse of dimensionality only for rule-based categories, not family-resemblance ones.

Leaping to conclusions: Why premise relevance affects argument strength

Demonstrates that premise non-monotonicity can be explained by people's assumptions about how data are sampled, and can be captured by a Bayesian model of generalisation.
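
As a rough illustration of the idea behind that summary (not the model from the paper; the hypothesis space, prior, and stimuli below are invented), the sketch shows how a Bayesian model of generalisation behaves under strong versus weak sampling assumptions: with strong sampling, adding further premises close to the originals can lower generalisation to a more distant item, the non-monotonic pattern described above.

```python
# Minimal, illustrative sketch of Bayesian generalisation over a toy 1-D
# stimulus space. Strong sampling: examples are drawn from the true category,
# so the size principle favours narrow hypotheses. Weak sampling: examples are
# merely observed to be consistent, so the likelihood is flat.
# Hypotheses, prior, and stimuli are invented for illustration only.

# Hypotheses: all contiguous intervals over items 1..10, with a uniform prior.
HYPOTHESES = [set(range(a, b + 1)) for a in range(1, 11) for b in range(a, 11)]

def posterior(examples, sampling="strong"):
    """Normalised posterior over hypotheses given positive examples."""
    weights = []
    for h in HYPOTHESES:
        if not set(examples) <= h:                           # inconsistent: zero weight
            weights.append(0.0)
        elif sampling == "strong":
            weights.append((1.0 / len(h)) ** len(examples))  # size principle
        else:                                                # weak sampling: flat likelihood
            weights.append(1.0)
    total = sum(weights)
    return [w / total for w in weights]

def p_generalise(item, examples, sampling="strong"):
    """Probability that a new item falls in the same category as the examples."""
    return sum(p for h, p in zip(HYPOTHESES, posterior(examples, sampling)) if item in h)

# Adding nearby positive examples *lowers* generalisation to a distant item (8)
# under strong sampling, but not under weak sampling: the non-monotonicity
# that sampling assumptions are meant to explain.
for sampling in ("strong", "weak"):
    print(sampling,
          round(p_generalise(8, [5], sampling), 2),
          round(p_generalise(8, [4, 5, 6], sampling), 2))
```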

How do people learn from negative evidence? Non-monotonic generalizations and sampling assumptions in inductive reasoning

Induction in language learning

Induction, overhypotheses, and the shape bias: Some arguments and evidence for rational constructivism