
Supersizing Robot Learning through Hybrid Imitation


Principal Investigators

Silvio Savarese and Fei-Fei Li

TRI Liaison:

Jeremy Ma

Project Summary

Data-driven methods such as reinforcement learning circumvent hand-tuned feature engineering, but they lack guarantees and often incur massive computational expense. This work will address three key technical challenges in robot learning: (a) reward function specification, (b) long-term reasoning, and (c) safe and efficient exploration in new environments. We will approach these through (a) large-scale imitation learning, (b) combining symbolic planning with neural architectures for long-term planning, and (c) 3D scene understanding for model-based reinforcement learning. Enabling robotic systems to solve real-world tasks such as general-purpose pick-and-place is of fundamental importance in many applications, including assistive robotics, smart hospitals, manufacturing, and transportation. In line with TRI's efforts in home robotics and shared autonomy, this work will be critical in a broad range of new scenarios in which robots can augment human effort and reduce cognitive load, such as personal care robots in old-age homes.
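The large-scale imitation learning component can be illustrated with behavioral cloning, which sidesteps reward function specification by reducing imitation to supervised learning on expert demonstrations. The sketch below is purely illustrative and is not the project's actual method: the demonstration data is synthetic, and a linear least-squares policy stands in for whatever model the project would train.

```python
import numpy as np

# Hypothetical demonstration data: 4-D state features paired with
# scalar expert actions, standing in for crowdsourced demonstrations.
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 4))
expert_actions = states @ np.array([[1.0], [-0.5], [0.2], [0.0]])

# Behavioral cloning: fit a policy mapping states to demonstrated
# actions via ordinary least squares (a neural network would be used
# in practice; the supervised-learning framing is the same).
weights, *_ = np.linalg.lstsq(states, expert_actions, rcond=None)

def policy(state):
    """Predict an action for a new state using the cloned policy."""
    return state @ weights
```

Because no reward function appears anywhere in this formulation, the quality of the learned policy depends entirely on the coverage and scale of the demonstration set, which is why crowdsourcing demonstrations matters for this approach.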

Research Goals

Large-Scale Imitation Learning with Crowdsourced Demonstrations

Scaling Imitation: A Manipulation Benchmark Dataset for Generalized Pick-and-Place Tasks

Task Structure Representation and Learning for Generalization in Long-Term Planning

Exploration through Hybrid Imitation Learning