On Alignment Between Human and Neural Network Visual Representations

23 February 2023 – 14:00 GMT (UK time) Online

Simon Kornblith, Google Brain, Toronto, Ontario, Canada

Both brains and artificial neural networks learn layered representations from massive amounts of data. Is this shared learning setup sufficient to make them converge on similar representations? Do neural networks that achieve higher accuracy on machine learning benchmarks also learn more human-like representation spaces? In this talk, I’ll first discuss the results of a large-scale investigation of how different factors affect alignment between representations from computer vision models and human semantic similarity judgments. This investigation reveals that model architecture and scale have essentially no effect on alignment with human behavioural responses, whereas the training dataset and objective function have a much larger impact. In the second part of the talk, I’ll speculate on why brains and artificial neural networks might learn similar semantic spaces despite relying on different image features.
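Alignment of the kind the abstract describes is often quantified with a representational similarity analysis: compute the model's pairwise similarities over a set of images and correlate them with human similarity judgments for the same pairs. The following is a minimal illustrative sketch, not the talk's actual method; the function names, the choice of cosine similarity, and the random stand-in data are all assumptions.

```python
import numpy as np
from scipy.stats import spearmanr


def pairwise_cosine(embeddings):
    """Cosine similarity between every pair of row vectors."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T


def alignment_score(model_embeddings, human_similarity):
    """Spearman correlation between model and human pairwise similarities,
    taken over the upper triangle (each image pair counted once)."""
    model_sim = pairwise_cosine(model_embeddings)
    iu = np.triu_indices_from(model_sim, k=1)
    rho, _ = spearmanr(model_sim[iu], human_similarity[iu])
    return rho


# Toy example with random data (illustrative only).
rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 64))  # 10 images, 64-d model features
human = pairwise_cosine(rng.normal(size=(10, 64)))  # stand-in for human ratings
print(alignment_score(emb, human))
```

A rank correlation is a common choice here because human similarity ratings are ordinal; only the ordering of pairs, not the raw similarity values, is compared.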
Bio: Simon Kornblith is a Senior Research Scientist at Google Brain in Toronto. His primary research focus is understanding and improving representation learning with neural networks. Before joining Google, he received his PhD in Brain and Cognitive Sciences at MIT, where he studied the neural basis of multiple-item working memory with Earl Miller. He was also one of the original developers of Zotero and a developer of the Julia programming language.