
Task-Driven Visual Perception


Principal Investigators:

Silvio Savarese & Leonidas Guibas

TRI Liaison:

Krishna Shankar & Rares Ambrus

Project Summary

We develop a novel visual perception model that operates within a larger robotic framework. The perception model is designed to specifically support, and adapt to, the downstream robotic task. It includes transfer learning mechanisms for efficiently handling new tasks, as well as the ability to adapt and improve during execution. We adopt organizing an untidy room as our demonstration scenario and the HSR (Toyota's Human Support Robot) as our agent.
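To make the transfer-learning idea concrete, below is a minimal, purely illustrative sketch (not the project's actual model or code): a pretrained visual backbone is frozen and a small task-specific head is trained for a new downstream task, so new perceptual skills can be added cheaply. The toy encoder, head size, and dummy data are all placeholders.

```python
# Illustrative only: generic transfer learning in PyTorch, not the project's model.
import torch
import torch.nn as nn

encoder = nn.Sequential(            # stand-in for a pretrained visual backbone
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in encoder.parameters():      # freeze the shared representation
    p.requires_grad = False

task_head = nn.Linear(64, 10)       # lightweight head for the new downstream task
optimizer = torch.optim.Adam(task_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)  # dummy batch standing in for camera frames
labels = torch.randint(0, 10, (8,)) # dummy task labels

optimizer.zero_grad()
logits = task_head(encoder(images))
loss = loss_fn(logits, labels)
loss.backward()                      # gradients flow only into the task head
optimizer.step()
```

Because only the small head is updated, extending the perception module to a new task requires far less data and compute than retraining the full model.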

The ideas described here target perception, one of the biggest unsolved pieces of autonomous intelligent systems, and form a core around which we can build agents capable of extending their capabilities to related but novel tasks, as well as of specializing their skills to a particular environment or to the preferences of their owners. Our agents and demonstration scenarios are chosen based on TRI needs.

Research Goals

  • Integrating visual perception as an adaptable and evolving module within a larger robotic framework.
  • Developing a perception model that can extend its skill set to specifically support the downstream task of a robot.
  • Equipping the perception model with:
    • Generalization mechanisms for solving novel perceptual tasks encountered at execution time, driven by the downstream robotic goal.
    • Adaptation mechanisms to improve and specialize during execution (a minimal sketch follows this list).
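The following hedged sketch illustrates one generic form such execution-time adaptation could take; it is an assumption for illustration, not the project's mechanism. A deployed task head takes a small gradient step whenever the robot's downstream task yields a feedback signal (e.g., whether an action informed by the perception output succeeded). The names `adapt_step`, the head size, and the simulated feedback are all hypothetical.

```python
# Hypothetical sketch of online adaptation during execution; not the project's code.
import torch
import torch.nn as nn

head = nn.Linear(64, 2)                        # deployed task head (e.g., success / failure)
opt = torch.optim.SGD(head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def adapt_step(features: torch.Tensor, outcome: int) -> float:
    """One online update from a single execution outcome."""
    opt.zero_grad()
    loss = loss_fn(head(features), torch.tensor([outcome]))
    loss.backward()
    opt.step()
    return loss.item()

# Simulated execution loop: features would come from the frozen visual encoder.
for _ in range(5):
    feats = torch.randn(1, 64)                 # dummy per-observation features
    outcome = int(torch.randint(0, 2, (1,)))   # feedback signal from the task
    adapt_step(feats, outcome)
```

The design point is that adaptation touches only a small, task-facing part of the model, so the perception module can specialize to its environment during execution without destabilizing the shared representation.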