Different Algorithms for Associative and Relational Learning

Wednesday 16 May 2018 at 13:00
2D1 Priory Road Complex, 12a Priory Road

John Hummel, University of Illinois

Abstract

Associative (i.e., featural, statistical) algorithms have dominated thinking about learning in cognitive science for centuries, from philosophy to experimental psychology and, most recently, artificial intelligence. Our associative habit is hard to break because associative algorithms provide a powerful account of learning in non-human animals, and because statistical approaches to learning are, by some criteria, normative. Such algorithms are also straightforward to realize using standard (vector-based) statistical techniques, making them both familiar and computationally convenient. But in spite of our love affair with associative learning, there have long been reasons to believe it cannot provide a complete account of human learning. I will present several experiments that approach the associative/non-associative learning debate by demonstrating that human learning of feature-based concepts (which can be learned associatively) appears to be accomplished by a qualitatively different algorithm than the learning of relational concepts (which are formally too complex to be learned associatively). Specifically, whereas featural categories are easily learned even when they have a probabilistic structure (i.e., one in which no single feature predicts category membership 100% of the time), relational learning fails catastrophically with such structures. Other experiments demonstrate that, perhaps as a result of these different learning algorithms, the resulting featural and relational concepts are structured in qualitatively different ways: whereas feature-based categories show classic “prototype effects”, relational concepts tend instead to be structured around an abstract, relational “ideal”.
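The claim that associative learning is straightforward to realize with standard vector-based statistical techniques can be made concrete with a toy sketch. The Python snippet below is illustrative only and is not the experimental paradigm from the talk; the category structure, feature encoding, and parameter values are all invented for this example. It constructs a probabilistic featural category, in which no single feature predicts membership 100% of the time, and "learns" it by simple feature averaging, the kind of statistical summary that gives rise to classic prototype effects.

```python
"""Minimal sketch (assumptions noted above): an associative, vector-based
learner on a probabilistic featural category."""
import numpy as np

rng = np.random.default_rng(0)

def sample_exemplar(prototype, flip_p=0.2):
    # Each binary feature matches the prototype with probability 1 - flip_p,
    # so no individual feature is a perfect predictor of category membership.
    flips = rng.random(prototype.shape) < flip_p
    return np.where(flips, 1 - prototype, prototype)

# Hypothetical category prototypes over six binary features.
proto_a = np.array([1, 1, 1, 1, 0, 0])
proto_b = np.array([0, 0, 0, 0, 1, 1])

train_a = np.array([sample_exemplar(proto_a) for _ in range(100)])
train_b = np.array([sample_exemplar(proto_b) for _ in range(100)])

# "Learning" is just accumulating feature statistics: the mean exemplar of
# each category serves as its learned prototype.
mean_a, mean_b = train_a.mean(axis=0), train_b.mean(axis=0)

def classify(x):
    # Assign the item to the nearer learned prototype (Euclidean distance).
    return "A" if np.linalg.norm(x - mean_a) < np.linalg.norm(x - mean_b) else "B"

test = [sample_exemplar(proto_a) for _ in range(500)]
acc = np.mean([classify(x) == "A" for x in test])
print(f"accuracy on probabilistic category A: {acc:.2f}")  # well above chance
```

Note what this learner cannot represent: a relational concept such as "the first feature exceeds the second" is not any fixed vector of feature values, so no amount of feature averaging captures it. That gap is, informally, the sense in which relational concepts exceed what an associative learner of this kind can acquire.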