What is disentangling and does intelligence do it?
Despite the advances of modern deep learning, we are still quite far from the generality, robustness and data efficiency of biological intelligence. In this talk I will suggest that this gap may be narrowed by re-focusing from the implicit representation learning prevalent in end-to-end deep learning approaches to explicit unsupervised representation learning. In particular, I will discuss the value of disentangled visual representations acquired in an unsupervised manner loosely inspired by biological intelligence. The talk will connect disentangling with the idea of symmetry transformations from physics to argue that disentangled representations reflect important world structure. I will then present a few first demonstrations of how such representations can be useful in practice: for continual learning, for acquiring reinforcement learning (RL) policies that are more robust in transfer scenarios than those of standard RL approaches, and for building abstract compositional visual concepts that make possible the imagination of meaningful and diverse samples beyond the training data distribution.