G3Flow: Generative 3D Semantic Flow for Pose-aware and Generalizable Object Manipulation


Abstract

Recent advances in imitation learning for 3D robotic manipulation have shown promising results with diffusion-based policies. However, achieving human-level dexterity requires seamless integration of geometric precision and semantic understanding. We present G3Flow, a novel framework that leverages foundation models to construct a real-time semantic flow: a dynamic, object-centric 3D semantic representation. Our approach uniquely combines 3D generative models for digital twin creation, vision foundation models for semantic feature extraction, and robust pose tracking for continuous semantic flow updates. This integration enables complete semantic understanding even under occlusion while eliminating manual annotation requirements. By incorporating semantic flow into diffusion policies, we demonstrate significant improvements in both terminal-constrained manipulation and cross-object generalization. Extensive experiments across five simulation tasks show that G3Flow consistently outperforms existing approaches, achieving average success rates of up to 68.3% on terminal-constrained manipulation tasks and 50.1% on cross-object generalization tasks. Our results demonstrate the effectiveness of G3Flow in enhancing real-time dynamic semantic understanding for robotic manipulation policies.

Pipeline of G3Flow


Pipeline of G3Flow. Our framework consists of (top) an initialization phase that generates a comprehensive 3D representation (surface normals, wireframe, and geometry) through object-centric exploration and digital twin generation, enabling rich semantic field extraction, and (bottom) a control execution phase in which real-time pose tracking maintains a dynamic semantic field that guides diffusion-based action generation for pose-aware and generalizable manipulation.
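At control time, the semantic flow update reduces to a rigid transform: the canonical per-point feature field extracted once at initialization is re-posed by the tracked object pose at every step, so the features themselves never need to be recomputed. The numpy sketch below illustrates that step; the function name, array shapes, and DINOv2-sized feature dimension are our own illustrative assumptions, not the authors' API.

import numpy as np

def update_semantic_flow(canonical_pts, canonical_feats, pose):
    """Re-pose the canonical semantic field into the current frame.

    canonical_pts:   (N, 3) points sampled once from the digital twin
    canonical_feats: (N, D) per-point semantic features, extracted once
                     at initialization (e.g., back-projected features
                     from a vision foundation model)
    pose:            (4, 4) current object pose from the tracker
    """
    R, t = pose[:3, :3], pose[:3, 3]
    pts_now = canonical_pts @ R.T + t   # rigid transform of the geometry
    return pts_now, canonical_feats     # features ride along unchanged

# Toy usage: a two-point field moved by a 90-degree yaw and a translation.
pts = np.array([[0.10, 0.00, 0.00],
                [0.00, 0.10, 0.00]])
feats = np.random.randn(2, 384)         # DINOv2-sized features (assumption)
pose = np.eye(4)
pose[:3, :3] = [[0.0, -1.0, 0.0],
                [1.0,  0.0, 0.0],
                [0.0,  0.0, 1.0]]
pose[:3, 3] = [0.50, 0.00, 0.20]
pts_now, feats_now = update_semantic_flow(pts, feats, pose)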

Visualization of G3Flow

We present a video compilation demonstrating the four types of observations (G3Flow combined with the original point cloud, the original RGB, G3Flow alone, and the original point cloud alone) for each of the five tasks:



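Of the four observation types above, the combined one pairs each observed point with a semantic feature from the flow. One plausible way to assemble such an input is a brute-force nearest-neighbor lookup from the observed cloud into the flow, sketched below; this is purely illustrative and not necessarily the paper's exact fusion scheme.

import numpy as np

def combined_observation(pc_xyz, flow_pts, flow_feats):
    """Attach semantic-flow features to a raw point cloud via
    nearest-neighbor lookup (hypothetical fusion, for illustration).

    pc_xyz:     (M, 3) observed point cloud
    flow_pts:   (N, 3) semantic-flow points in the same frame
    flow_feats: (N, D) per-point semantic features
    returns:    (M, 3 + D) points with features appended
    """
    # Squared distances from every observed point to every flow point,
    # then pick the nearest flow point for each observation.
    d2 = ((pc_xyz[:, None, :] - flow_pts[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    return np.concatenate([pc_xyz, flow_feats[nearest]], axis=1)

obs = combined_observation(np.random.rand(2048, 3),
                           np.random.rand(512, 3),
                           np.random.randn(512, 64))
print(obs.shape)  # (2048, 67)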
Experiment

We conduct extensive experiments to evaluate G3Flow's effectiveness in enhancing policy performance across two key aspects: terminal constraint satisfaction and cross-object generalization.




Ablation Study

To explore the advantages of our complete, dynamic, object-level semantic flow representation, we conduct an ablation study against conventional scene-level feature clouds. We select the Shoe Place and Dual Shoes Place (T) tasks for comparison because they require adjusting the shoe's orientation throughout the entire trajectory and therefore rely more heavily on long-term semantic understanding. In addition, object occlusions are built into these tasks, posing a greater challenge for semantic comprehension.
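The occlusion argument can be made concrete with a toy comparison: a scene-level feature cloud carries features only for the points visible in the current frame, whereas the object-level semantic flow keeps the complete canonical field and merely re-poses it (see the update sketch above). The snippet below is entirely illustrative and is not the paper's code.

import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))               # full digital-twin geometry
feats = rng.standard_normal((1000, 64))   # per-point semantic features
visible = rng.random(1000) > 0.5          # simulate ~50% self-occlusion

# Scene-level feature cloud: features are recomputed from the current
# view, so only the visible points carry semantics this frame.
scene_pts, scene_feats = pts[visible], feats[visible]

# Object-level semantic flow: the complete canonical field persists and
# is merely re-posed, so occluded regions keep their semantics.
flow_pts, flow_feats = pts, feats

print(scene_feats.shape[0], "featured points (scene-level) vs",
      flow_feats.shape[0], "(object-level flow)")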

Bibtex

@article{chen2024g3flow,
  title={G3Flow: Generative 3D Semantic Flow for Pose-aware and Generalizable Object Manipulation},
  author={Chen, Tianxing and Mu, Yao and Liang, Zhixuan and Chen, Zanxin and Peng, Shijia and Chen, Qiangyu and Xu, Mingkun and Hu, Ruizhen and Zhang, Hongyuan and Li, Xuelong and others},
  journal={arXiv preprint arXiv:2411.18369},
  year={2024}
}

Acknowledgements

We thank D-robotics for providing the cloud computing resources that supported this research, and Deeoms for providing essential model support, which was pivotal to the completion of this study.