Levels of Representation in a Deep Learning Model of Categorisation

Bradley Love
Professor of Cognitive and Decision Sciences, Experimental Psychology, UCL, and a Fellow of The Alan Turing Institute for data science.


Venue | Senior Common Room, Level 2 (2D17), Priory Road Complex
Date | Thursday 23 May 2019
Time | 13:00

Abstract

Deep convolutional neural networks (DCNNs) rival humans in object recognition. The layers (or levels of representation) in DCNNs have been successfully aligned with processing stages along the ventral stream for visual processing. Here, we propose a model of concept learning that uses visual representations from these networks to build memory representations of novel categories, a process which may rely on the medial temporal lobe (MTL) and medial prefrontal cortex (mPFC). Our approach opens up two possibilities: a) formal investigations can involve photographic stimuli as opposed to stimuli handcrafted and coded by the experimenter; b) model comparison can determine which level of representation within a DCNN a learner is using during categorisation decisions. Pursuing the latter point, DCNN-based analyses suggest that the shape bias in children relies on representations at more advanced network layers, whereas a learner relying on lower network layers would display a colour bias. These results confirm the role of natural statistics in the shape bias (i.e., shape is predictive of category membership) while highlighting that the type of statistics matters, i.e., whether they come from lower or higher levels of representation. We use the same approach to provide evidence that pigeons performing seemingly sophisticated categorisation of complex imagery may in fact be relying on very low-level (i.e., retinotopic) representations. Although complex features, such as shape, predominate at more advanced network layers, even simple features, such as spatial frequency and orientation, are better represented at the more advanced layers, contrary to a standard hierarchical view. The work described above relied on supervised training of the DCNN. I will end by discussing new work exploring whether representation learning can be achieved through unsupervised means.
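To make the model-comparison idea concrete, below is a minimal Python sketch, not the speaker's actual model. It assumes PyTorch/torchvision and a pretrained VGG-16 (illustrative choices only): activations from a chosen layer serve as the stimulus representation, and a simple exemplar-based (GCM-style) choice rule, itself an assumption here, turns those representations into categorisation decisions.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Illustrative pretrained network; the talk does not specify which DCNN is used.
net = models.vgg16(weights="DEFAULT").eval()

preprocess = T.Compose([
    T.Resize(224),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def layer_activation(image_path, layer_idx):
    # Run the image through the convolutional stack and return the
    # flattened activation at one chosen layer (level of representation).
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        for i, module in enumerate(net.features):
            x = module(x)
            if i == layer_idx:
                return x.flatten()
    raise IndexError("layer_idx is beyond the convolutional stack")

def prob_category_a(probe, exemplars, labels, c=1.0):
    # GCM-style exemplar rule (an assumption, not the talk's model):
    # similarity to each stored exemplar decays exponentially with
    # distance, and the response is a similarity-weighted vote.
    sims = torch.stack([torch.exp(-c * torch.dist(probe, e)) for e in exemplars])
    return (sims[labels == 0].sum() / sims.sum()).item()

Fitting the sensitivity parameter c separately for each candidate layer and comparing the resulting fits to behavioural choices is the sense in which model comparison can reveal which level of representation a learner, whether child or pigeon, is relying on.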

All Welcome | Tea, coffee and cakes will be available after the seminar.