When people use samples of evidence to make inferences, they consider both the sample contents and how the sample was generated (“sampling assumptions”). The current studies examined whether people can update their sampling assumptions – whether they …
GPT-4 is similar to humans on category-based induction tasks unless they involve sampling assumptions
People need to know how data were generated at the time they encode them; they cannot revise their inferences later if their sampling assumptions were wrong.
The impressive recent performance of large language models such as GPT-3 has led many to wonder to what extent they can serve as models of general intelligence, or how similar they are to human cognition. We address this issue by applying GPT-3 to a classic …
In a Twitter-like experimental environment, we show that people are more influenced by the number of distinct posts than by the number of distinct people, and hardly at all by the diversity of the points made.
Shows that people are afflicted by the curse of dimensionality only for rule-based categories, not for family-resemblance ones.
Demonstrates that premise non-monotonicity can be explained by people's assumptions about how data are sampled, and can be captured by a Bayesian model of generalisation.