Closing the Visual-Motor Loop with Deep Reinforcement Learning
Principal Investigator:
TRI Liaison: Max Bajracharya
Project Summary
Autonomous mobile platforms, such as smart cars and robots, must navigate the world effectively to accomplish their tasks. This requires long-range path planning, avoidance of dynamic obstacles and collision risks, robust decision-making under uncertainty, and actuation of the motors that move, steer, or stop the platform. In this proposal, our goal is to close the visual-motor loop by learning end-to-end decision-making deep networks that map perception directly to decisions.
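As a concrete illustration of what mapping perception directly to decisions can look like, the sketch below shows an end-to-end visuomotor policy: a convolutional network that takes a raw camera image and outputs continuous control commands. The architecture, input resolution, and action space (steer, throttle, brake) are illustrative assumptions, not the project's actual design.

```python
# Minimal sketch of an end-to-end visuomotor policy (assumed architecture).
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, num_actions: int = 3):
        super().__init__()
        # Convolutional encoder: raw pixels -> compact visual features
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Fully connected head: visual features -> control commands
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions), nn.Tanh(),  # actions bounded in [-1, 1]
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, 84, 84) normalized RGB frame
        return self.head(self.encoder(image))

policy = VisuomotorPolicy()
action = policy(torch.zeros(1, 3, 84, 84))  # e.g. [steer, throttle, brake]
```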
Research Goals
Our comprehensive approach combines cutting-edge machine learning techniques with both simulation and real-world data.
- Navigation in multi-agent settings: We will explore end-to-end sensorimotor learning in a real-world, multi-agent navigation task.
- Bootstrapping with Supervised Learning: We will move from the simpler task of navigation with pedestrian avoidance to more complex tasks such as driving a simulated autonomous vehicle.
- Refinement with Active Reinforcement Learning: We will relax the fully supervised assumption by initializing the system with supervised learning and then fine-tuning it end to end with reinforcement learning algorithms, as sketched after this list.
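The following sketch illustrates this two-stage recipe under simplified assumptions: a policy network is first bootstrapped with supervised behavior cloning on expert (observation, action) pairs, then fine-tuned with a REINFORCE-style policy gradient. The small MLP, observation and action dimensions, learning rate, and fixed exploration noise are hypothetical placeholders standing in for the project's actual design.

```python
# Two-stage training sketch: supervised bootstrapping, then RL fine-tuning
# of the same policy network (all shapes and hyperparameters are assumptions).
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                       nn.Linear(128, 2), nn.Tanh())
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def clone_step(obs: torch.Tensor, expert_action: torch.Tensor) -> float:
    """Stage 1: behavior cloning on demonstrations (supervised regression)."""
    loss = nn.functional.mse_loss(policy(obs), expert_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def reinforce_step(obs: torch.Tensor, actions: torch.Tensor,
                   rewards: torch.Tensor) -> float:
    """Stage 2: REINFORCE fine-tuning on one trajectory rolled out by the
    current policy (obs, actions, rewards given in time order)."""
    dist = torch.distributions.Normal(policy(obs), 0.1)  # Gaussian policy
    returns = rewards.flip(0).cumsum(0).flip(0)          # reward-to-go per step
    log_probs = dist.log_prob(actions).sum(dim=-1)
    loss = -(log_probs * returns).mean()                 # policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Sharing a single network and optimizer across both stages is what lets the supervised initialization serve directly as the starting point for reinforcement learning; in practice the fine-tuning stage would likely use a more sample-efficient algorithm (for example, an actor-critic method with a learned baseline).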