Abstract

In this work, we tackle the problem of learning symbolic representations of low-level, continuous environments. We present a framework for autonomously learning portable hierarchies that are suitable for planning.
Such abstractions can be immediately transferred between tasks that share the same types of objects, resulting in agents that require fewer samples to learn a model of a new task. We show how to ground these representations with problem-specific information to construct a sound representation suitable for planning. We demonstrate our approach in a series of video game tasks, where an agent learns such representations directly from transition data. The resulting learned representations enable the use of off-the-shelf classical planners, producing an agent capable of forming complex, long-term plans consisting of hundreds of low-level actions.
Bio

Steve James is an associate lecturer and PhD candidate at the University of the Witwatersrand, advised by Prof. George Konidaris and Prof. Benjamin Rosman. As a member of the RAIL Lab at Wits and the IRL Lab at Brown, his research focuses on the intersection of reinforcement learning and classical planning. He has published at several international conferences and is the recipient of a Google PhD Fellowship in machine learning.