Selectivity Metrics can Overestimate the Selectivity of Units: A Case Study on AlexNet
Ella Gale | Research Associate | Psychological Science | University of Bristol
Venue | Senior Common Room, Level 2 (2D17), Priory Road Complex
Understanding the internal representations learned by a neural network is essential to understanding its operation, and various measures of unit selectivity have been developed for this purpose. Here we compare four such measures on the well-studied network AlexNet. In contrast to work on recurrent neural networks (RNNs), we fail to find any 100% selective 'localist units' in the hidden layers of AlexNet, and we show that previous assessments have suggested a higher level of selectivity than is warranted: even the most selective units respond most strongly to only a small minority of images from within a category. We also generated images that maximally activate individual units and found that under 5% of units in fc6 and conv5 produced images of interpretable objects that humans consistently labeled, whereas fc8 produced over 50% interpretable images. We consider why different degrees of selectivity are observed with RNNs and AlexNet, and suggest that visualizing activations with jitterplots, aside from being comparable to neuroscience techniques, is a good first step in assessing unit selectivity.
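For readers unfamiliar with selectivity measures, the sketch below shows one widely used example: class-conditional mean activity selectivity (CCMAS), which compares a unit's mean activation for its preferred class against its mean activation for all other classes. This is a minimal illustration on synthetic data, not the talk's actual code, and the four measures compared in the talk may differ.

```python
import numpy as np

def ccmas(activations, labels):
    """Class-conditional mean activity selectivity (CCMAS).

    Computes (mu_max - mu_notmax) / (mu_max + mu_notmax), where mu_max is
    the unit's mean activation over images of its preferred class and
    mu_notmax is its mean activation averaged over all other classes.
    Returns 1.0 for a perfectly selective ('localist') unit and 0.0 for a
    unit that responds equally to every class.
    """
    activations = np.asarray(activations, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # Mean activation per class
    class_means = np.array([activations[labels == c].mean() for c in classes])
    best = class_means.argmax()
    mu_max = class_means[best]
    mu_notmax = class_means[np.arange(len(classes)) != best].mean()
    return (mu_max - mu_notmax) / (mu_max + mu_notmax)

# Synthetic unit that fires almost exclusively for class 0
acts = np.array([1.0, 0.9, 0.0, 0.1, 0.0, 0.05])
labs = np.array([0, 0, 1, 1, 2, 2])
print(round(ccmas(acts, labs), 3))  # → 0.924
```

A key point of the talk is that a high score on a measure like this can still coexist with the unit responding strongly to only a few images in its preferred class, which is why inspecting the full distribution (e.g. with a jitterplot) is recommended.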
All Welcome | Tea, coffee and biscuits will be available after the seminar.