EADS Talk by Kim Stachenfeld on 'How to shrink your MDPs'

Talk by Kim Stachenfeld, Google DeepMind + Princeton Dept. of Neuroscience

Title: How to shrink your MDPs

Abstract

A core problem in both neuroscience and machine learning is how to learn and plan in complicated tasks. Often the task can be modeled graphically as a Markov Decision Process (MDP). Nodes in the graph correspond to possible positions in a task, and edges signify possible transitions between states. The agent is then tasked with finding a policy for navigating this graph so as to reach maximally rewarding states. The agent can achieve this, essentially, by exploring the graph and collecting statistics about its structure and about how reward is distributed over it.
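To make the graph picture concrete, here is a minimal sketch of an MDP as a transition graph, solved by value iteration. The 4-state chain layout, rewards, and action set are illustrative assumptions, not taken from the talk:

```python
import numpy as np

# Hypothetical 4-state chain MDP: states 0 -> 1 -> 2 -> 3, reward at state 3.
# P[s, a, s'] holds transition probabilities (the graph's weighted edges).
n_states, n_actions = 4, 2
P = np.zeros((n_states, n_actions, n_states))
for s in range(n_states - 1):
    P[s, 0, s + 1] = 1.0          # action 0: step "right" along the chain
P[3, 0, 3] = 1.0                  # state 3 is absorbing
for s in range(n_states):
    P[s, 1, s] = 1.0              # action 1: stay put
R = np.array([0.0, 0.0, 0.0, 1.0])  # reward collected on entering each state
gamma = 0.9                          # discount factor

# Value iteration: V(s) <- max_a sum_s' P[s,a,s'] * (R[s'] + gamma * V(s'))
V = np.zeros(n_states)
for _ in range(100):
    V = np.max(P @ (R + gamma * V), axis=1)
policy = np.argmax(P @ (R + gamma * V), axis=1)  # greedy policy w.r.t. V
```

The resulting greedy policy walks every non-absorbing state toward the rewarding node, which is the "navigate the graph to maximally rewarding states" behavior described above.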

Most methods for doing so are data-inefficient and require extensive exploration time. These methods scale much better when applied to a dimensionally reduced version of the graph, in which the agent can learn simultaneously about clusters of similar states, so finding good ways to compress the MDP graph is an important step toward fast, flexible learning. We will survey some of the existing methods for graph compression employed in machine learning, and discuss some of the neural mechanisms that underlie how humans and animals address this problem. In addition, we will discuss how ideas from graph theory could be useful for developing even better methods for performing this task.
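One common graph-theoretic route to the kind of compression described above is spectral state aggregation: the leading eigenvectors of a random-walk transition matrix vary smoothly within well-connected clusters and change sign across weak links, so thresholding them groups similar states. The sketch below is an illustrative assumption about one such method (not necessarily the one covered in the talk), on a toy graph of two 4-state communities joined by a single bridge edge:

```python
import numpy as np

# Toy undirected state graph: two dense 4-node clusters plus one weak bridge.
A = np.zeros((8, 8))
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3),   # cluster A
         (4, 5), (5, 6), (6, 7), (4, 6), (5, 7),   # cluster B
         (3, 4)]                                   # bridge between clusters
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Random-walk transition matrix: row-normalize the adjacency matrix.
T = A / A.sum(axis=1, keepdims=True)

# The top eigenvector is constant; the second-largest eigenvector
# separates the two communities by sign.
evals, evecs = np.linalg.eig(T)
order = np.argsort(-evals.real)
second = evecs[:, order[1]].real
labels = (second > 0).astype(int)   # cluster assignment per state
```

Planning in the 2-node aggregated graph (one node per cluster) lets the agent generalize whatever it learns about one state to the whole cluster, which is the data-efficiency gain the abstract points to.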