Abstract

The (un?)importance of Generalisation in NLP

13 April 2023 – 13:00 BST

Dieuwke Hupkes, Fundamental AI Research (FAIR), Paris, France

The ability to generalise well has always been seen as one of the most important properties of an AI model, but what is “good” generalisation? Traditionally, the generalisation capabilities of machine learning models are evaluated using random train/test splits. In the field of NLP, we moved from evaluating generalisation with random (i.i.d.) train–test splits, to realising that i.i.d. performance may not be as indicative of a model's generalisation power as one might hope, to using gigantic uncontrolled training corpora that may or may not contain large parts of some of the datasets used for evaluation. What are the implications of that? In this talk, I first discuss a newly proposed taxonomy for characterising and understanding generalisation in NLP, and use it to analyse over 400 papers in the ACL Anthology. Then, I briefly discuss the challenges of generalisation in the latest LLMs, and what generalisation evaluation may hold for the future.
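To make the contrast the abstract draws concrete, here is a minimal illustrative sketch (not taken from the talk; the toy data and all names are invented) comparing an i.i.d. random split with a length-based split, where good test performance requires generalising to inputs longer than any seen in training:

```python
# Hypothetical sketch: i.i.d. split vs. a non-i.i.d. "length
# generalisation" split. All data and names are invented for
# illustration.
import random

random.seed(0)

# Toy corpus: (sentence, label) pairs of varying length.
sentences = [("word " * n).strip() for n in range(1, 21)] * 10
examples = [(s, len(s.split()) % 2) for s in sentences]

# i.i.d. split: train and test are drawn from the same distribution.
random.shuffle(examples)
iid_train, iid_test = examples[:150], examples[150:]

# Length-based (non-i.i.d.) split: train on short inputs only and
# test on strictly longer ones, so high test accuracy requires
# generalising beyond the training distribution.
ood_train = [ex for ex in examples if len(ex[0].split()) <= 10]
ood_test = [ex for ex in examples if len(ex[0].split()) > 10]

print(f"i.i.d. split: {len(iid_train)} train / {len(iid_test)} test")
print(f"length split: {len(ood_train)} train / {len(ood_test)} test")
```

Under the i.i.d. split, train and test examples look alike; under the length split, a model can only succeed by generalising, which is the distinction the talk's taxonomy sets out to characterise.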


Biography: Dieuwke Hupkes is a research scientist at FAIR (Fundamental AI Research). She has a broad interest in language, intelligence and the brain, and what machine intelligence might teach us about them. In recent years, at the University of Amsterdam and then at Meta, she has worked on evaluating and interpreting many aspects of models of language processing, always aiming to ground and inform this research with knowledge from linguistics, philosophy and the cognitive sciences.