
Learning to Interact with Articulated Objects

Stanford Investigators 

TRI Investigators

Project Summary

In this project, we aim to tightly integrate new perception, planning, and control algorithms to enable the manipulation of a wide variety of articulated objects. Assistive robots in the home will need to open and close room and cabinet doors, operate appliances via buttons or levers, and so on. To execute such tasks reliably on objects with variable geometry and appearance, advances in perception, planning, and control for manipulation are needed. The robot has to detect the movable parts of objects and the handles to which a manipulation action can be applied. It then has to estimate the joint types and joint limits of the articulated object, which together define its kinematics. This information enables the robot to continuously estimate the current object state during manipulation, and that estimate provides the feedback a motion controller needs to reach a goal state. Beyond the continuous object state, a robot also benefits from understanding discrete, functional object states, e.g., whether a drawer is open or closed. This information provides the interface for task planners that can generate a sequence of manipulation actions for more complex goal states.
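
To make the object model described above concrete, the following is a minimal sketch in Python of one possible representation of a movable part: a joint type, axis, and limits defining the kinematics, a continuous joint state, and a derived discrete functional state. The class, field names, and the open/closed threshold are illustrative assumptions, not the project's actual representation.

```python
from dataclasses import dataclass
from enum import Enum

import numpy as np


class JointType(Enum):
    REVOLUTE = "revolute"    # e.g., a cabinet door hinge
    PRISMATIC = "prismatic"  # e.g., a drawer slide


@dataclass
class ArticulatedJoint:
    """Illustrative kinematic model of one movable part (assumed representation)."""
    joint_type: JointType
    axis: np.ndarray        # joint axis in the object frame (unit vector)
    origin: np.ndarray      # a point on the axis (revolute) or on the part (prismatic)
    lower_limit: float      # radians for revolute joints, meters for prismatic joints
    upper_limit: float
    state: float = 0.0      # current continuous joint value, estimated during manipulation

    def normalized_state(self) -> float:
        """Map the continuous joint state into [0, 1] using the estimated joint limits."""
        span = self.upper_limit - self.lower_limit
        return float(np.clip((self.state - self.lower_limit) / span, 0.0, 1.0))

    def functional_state(self, open_threshold: float = 0.1) -> str:
        """Derive a discrete, task-planner-facing state from the continuous one."""
        return "open" if self.normalized_state() > open_threshold else "closed"


# Example: a drawer modeled as a prismatic joint with 0.4 m of travel.
drawer = ArticulatedJoint(
    joint_type=JointType.PRISMATIC,
    axis=np.array([1.0, 0.0, 0.0]),
    origin=np.zeros(3),
    lower_limit=0.0,
    upper_limit=0.4,
    state=0.25,
)
print(drawer.normalized_state())   # 0.625
print(drawer.functional_state())   # "open"
```

A representation of this kind serves both loops mentioned above: the continuous state feeds the motion controller, while the derived discrete state is what a task planner reasons over.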

Research Goals

(i) Robust perception of articulated objects through canonicalization, leading to actionable information,

(ii) Planning and control for manipulating these objects using differentiable filters (see the sketch after this list), and

(iii) Real-time operation with built-in recovery.
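
The differentiable-filter idea in goal (ii) can be pictured with a toy example: a scalar Kalman filter tracking one joint state, in which every operation is smooth, so the same update written in an autodiff framework would let gradients flow through the filter to learned noise models or perception modules. The state model, noise values, and drawer scenario below are illustrative assumptions, not the project's implementation.

```python
import numpy as np


def kalman_step(mean, var, u, z, process_var, obs_var):
    """One predict-update step of a scalar Kalman filter on a joint state.

    Every operation here is differentiable, which is what allows a filter
    like this to be embedded in a learned, end-to-end trainable pipeline.
    """
    # Predict: the commanded joint motion u moves the state estimate forward.
    mean_pred = mean + u
    var_pred = var + process_var

    # Update: fuse the noisy perceived joint state z.
    gain = var_pred / (var_pred + obs_var)
    mean_new = mean_pred + gain * (z - mean_pred)
    var_new = (1.0 - gain) * var_pred
    return mean_new, var_new


# Example: track a drawer being pulled open from noisy joint-state observations.
rng = np.random.default_rng(0)
true_state, mean, var = 0.0, 0.0, 1e-2
for _ in range(30):
    u = 0.01                                 # planned 1 cm pull per step
    true_state += u
    z = true_state + rng.normal(0.0, 0.02)   # noisy perception of the joint state
    mean, var = kalman_step(mean, var, u, z, process_var=1e-4, obs_var=4e-4)
print(round(mean, 3), round(true_state, 3))  # estimate tracks the true state (~0.3 m)
```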