When people use samples of evidence to make inferences, they consider both the sample contents and how the sample was generated (“sampling assumptions”). The current studies examined whether people can update their sampling assumptions – whether they …
The impressive recent performance of large language models such as GPT-3 has led many to wonder to what extent they can serve as models of general intelligence or are similar to human cognition. We address this issue by applying GPT-3 to a classic …
In a Twitter-like experimental environment, we show that people are more influenced by the number of distinct posts than by the number of distinct people, and hardly at all by the diversity of points made.
Demonstrates that premise non-monotonicity can be explained by people's assumptions about how data are sampled, and captured by a Bayesian model of generalisation.