Figure: Grid of multiple grasping examples generated by AO-Grasp on articulated objects.

Abstract

We introduce AO-Grasp, a grasp proposal method that generates 6 DoF grasps that enable robots to interact with articulated objects, such as opening and closing cabinets and appliances. AO-Grasp consists of two main contributions: the AO-Grasp Model and the AO-Grasp Dataset. Given a segmented partial point cloud of a single articulated object, the AO-Grasp Model predicts the best grasp points on the object with an Actionable Grasp Point Predictor, then finds corresponding grasp orientations for each of these points, resulting in stable and actionable grasp proposals. We train the AO-Grasp Model on our new AO-Grasp Dataset, which contains 78K actionable parallel-jaw grasps on synthetic articulated objects. In simulation, AO-Grasp achieves a 45.0% grasp success rate, whereas the highest-performing baseline achieves a 35.0% success rate. Additionally, we evaluate AO-Grasp on 120 real-world scenes of objects with varied geometries, articulation axes, and joint states, where AO-Grasp produces successful grasps on 67.5% of scenes, while the baseline only produces successful grasps on 33.3% of scenes. To the best of our knowledge, AO-Grasp is the first method for generating 6 DoF grasps on articulated objects directly from partial point clouds without requiring part detection or hand-designed grasp heuristics.
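
Below is a minimal Python sketch of the two-stage pipeline described above. The helpers predict_scores and propose_orientation are illustrative placeholders standing in for the learned Actionable Grasp Point Predictor and the orientation-recovery step; they are assumptions for illustration, not the actual AO-Grasp implementation.

import numpy as np

def predict_scores(points):
    # Placeholder for the learned Actionable Grasp Point Predictor:
    # scores every point; here we simply use height (z) as a dummy heuristic.
    return points[:, 2]

def propose_orientation(points, idx):
    # Placeholder for per-point orientation recovery; the real model
    # predicts a full grasp orientation, here we return the identity rotation.
    return np.eye(3)

def ao_grasp_propose(points, k=10):
    # Stage 1: score all points and keep the k most "actionable" ones.
    scores = predict_scores(points)
    top = np.argsort(scores)[-k:][::-1]  # best first
    # Stage 2: attach an orientation to each selected grasp point,
    # yielding full 6 DoF grasp proposals (3-DoF position + 3x3 rotation).
    return [
        {
            "position": points[i],
            "rotation": propose_orientation(points, i),
            "score": float(scores[i]),
        }
        for i in top
    ]

# Usage on a random stand-in for a segmented partial point cloud:
cloud = np.random.rand(2048, 3).astype(np.float32)
grasps = ao_grasp_propose(cloud, k=5)
print(len(grasps), grasps[0]["score"])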

AO-Grasp Dataset

We introduce the AO-Grasp Dataset, a dataset of simulated, actionable grasps on articulated objects. It contains 78K 6 DoF grasps for 84 instances from 7 common household furniture/appliance categories (Box, Dishwasher, Microwave, Safe, TrashCan, Oven, and StorageFurniture) from the PartNet-Mobility dataset.
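
To make the dataset contents concrete, here is a hedged sketch of how a single grasp label might be represented in code. The field names and example values are illustrative assumptions, not the dataset's actual schema.

from dataclasses import dataclass
import numpy as np

@dataclass
class GraspLabel:
    object_id: str        # PartNet-Mobility instance ID (hypothetical field)
    category: str         # e.g., "Microwave"
    joint_state: float    # articulation joint position at collection time
    position: np.ndarray  # (3,) grasp point on the object surface
    rotation: np.ndarray  # (3, 3) gripper orientation, completing the 6 DoF pose
    actionable: bool      # True if the grasp moved the articulated part in simulation

# Example entry (illustrative values only):
g = GraspLabel("7310", "Microwave", 0.3, np.zeros(3), np.eye(3), True)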

Results

We conduct a quantitative evaluation of AO-Grasp and Contact-GraspNet (CGN) on 120 scenes of real-world objects with varied local geometries and articulation axes, in different joint states, and captured from different viewpoints. AO-Grasp produces successful grasps on 67.5% of scenes, while CGN only produces successful grasps on 33.3% of scenes.
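
For reference, the reported rates correspond to roughly 81 and 40 successful scenes out of 120 (counts implied by the percentages, not stated directly); the short sanity check below recomputes the percentages from those counts.

ao_grasp_successes, cgn_successes, scenes = 81, 40, 120  # counts implied by reported rates
print(f"AO-Grasp: {100 * ao_grasp_successes / scenes:.1f}%")  # 67.5%
print(f"CGN:      {100 * cgn_successes / scenes:.1f}%")       # 33.3%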

Table: Real-world grasp success rates for AO-Grasp and Contact-GraspNet across the 120 scenes.

BibTeX

@article{morlans2023aograsp,
    title={AO-Grasp: Articulated Object Grasp Generation},
    author={Carlota Parés Morlans and Claire Chen and Yijia Weng and Michelle Yi and Yuying Huang and Nick Heppert and Linqi Zhou and Leonidas Guibas and Jeannette Bohg},
    year={2023},
    eprint={2310.15928},
    archivePrefix={arXiv},
    primaryClass={cs.RO}
}

Contact

If you have any questions, please contact us at aograsp[at]gmail[dot]com.