Abstract

Understanding and Improving Model-Based Deep Reinforcement Learning

15 June 2023 – 13:00 BST (Online)

Jessica Hamrick, DeepMind, London, UK

Model-based planning is often thought to be necessary for deep, careful reasoning and generalization in artificial agents. While recent successes of model-based reinforcement learning (MBRL) with deep function approximation have strengthened this hypothesis, the resulting diversity of model-based methods has also made it difficult to track which components drive success and why. In this talk, I will discuss a line of research from the past few years that has aimed to better understand, and subsequently improve, model-based learning and generalization. First, planning is highly useful during an agent's training, where it supports improved data collection and a more powerful learning signal; counter to our (and many others') intuitions, however, it is useful for decisions made in the moment only under certain circumstances. Second, we can substantially improve the procedural generalization of model-based agents by incorporating self-supervised learning into the agent's architecture. Finally, we can also improve transfer to novel tasks by leveraging an initial unsupervised exploration phase, which allows transferable knowledge to be learned in both the policy and the world model.
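
To make the first point concrete, below is a minimal, hypothetical sketch (not code from the talk) of how planning with a learned model can strengthen training: a one-step lookahead scores actions under the model and produces an improved policy target, which can drive both action selection (better data collection) and policy distillation (a richer learning signal), in the spirit of MuZero-style agents. The names `model`, `value`, and `plan_target`, and the stub dynamics, are illustrative assumptions.

```python
# Hypothetical sketch: planning at training time as both a behavior
# policy and a source of learning targets. Stubs stand in for learned
# networks; a real agent would use trained function approximators.

import numpy as np

NUM_ACTIONS = 4

def model(state: np.ndarray, action: int) -> tuple[np.ndarray, float]:
    """Learned dynamics model: predicts next state and reward (stub)."""
    next_state = state + np.eye(len(state))[action % len(state)]
    reward = float(next_state.sum() % 1.0)
    return next_state, reward

def value(state: np.ndarray) -> float:
    """Learned value estimate for a state (stub)."""
    return float(np.tanh(state.sum()))

def plan_target(state: np.ndarray, discount: float = 0.99) -> np.ndarray:
    """One-step lookahead with the model: score each action by predicted
    reward plus discounted value, then return a softmax policy target.
    Deeper search (e.g. MCTS) plays the same role in practice."""
    q = np.array([r + discount * value(s)
                  for s, r in (model(state, a) for a in range(NUM_ACTIONS))])
    exp_q = np.exp(q - q.max())
    return exp_q / exp_q.sum()

# During training, the planner's output serves two purposes:
state = np.zeros(NUM_ACTIONS)
target = plan_target(state)
action = int(np.random.choice(NUM_ACTIONS, p=target))  # improved data collection
# loss = cross_entropy(policy(state), target)           # improved learning signal
```

In a full agent the stubs would be trained networks and the lookahead a deeper search; the abstract's point is that this training-time role of planning accounts for much of its benefit, whereas planning at decision time helps only under certain circumstances.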

Short Biography: Dr. Jessica Hamrick is a Staff Research Scientist at DeepMind, where she studies how to build machines that can flexibly construct and deploy models of the world. Her work combines insights from cognitive science with structured relational architectures, model-based deep reinforcement learning, and planning. In addition to her work in AI, Dr. Hamrick has contributed to various open-source scientific computing projects, including Jupyter and psiTurk. She received her PhD in Psychology from the University of California, Berkeley, in 2017, and her BS and MEng in Computer Science from the Massachusetts Institute of Technology in 2012.