RoboGround: Robotic Manipulation with Grounded Vision-Language Priors

Haifeng Huang1,2*, Xinyi Chen2*, Yilun Chen2, Hao Li2, Xiaoshen Han2, Zehan Wang1, Tai Wang2, Jiangmiao Pang2, Zhou Zhao1,2†
1Zhejiang University, 2Shanghai AI Laboratory
CVPR 2025

*Indicates Equal Contribution, †Indicates Corresponding Author

Abstract

Recent advancements in robotic manipulation have highlighted the potential of intermediate representations for improving policy generalization. In this work, we explore grounding masks as an effective intermediate representation, balancing two key advantages: (1) effective spatial guidance that specifies target objects and placement areas while also conveying information about object shape and size, and (2) broad generalization potential driven by large-scale vision-language models pretrained on diverse grounding datasets. We introduce RoboGround, a grounding-aware robotic manipulation policy that leverages grounding masks as an intermediate representation to guide policy networks in object manipulation tasks. To further explore and enhance generalization, we propose an automated pipeline for generating large-scale, simulated data with a diverse set of objects and instructions. Extensive experiments show the value of our dataset and the effectiveness of grounding masks as intermediate guidance, significantly enhancing the generalization abilities of robot policies.

Contributions

  • We introduce RoboGround, a grounding-aware robot policy that seamlessly incorporates grounded vision-language priors into the policy network, enabling robust generalization to diverse instructions, unseen objects, and novel categories.
  • We develop a large-scale, automatically generated simulation dataset aimed at enhancing instruction diversity. This dataset captures a wide range of object appearances, spatial relationships, and commonsense knowledge, enriching the model's ability to interpret and execute varied task instructions.
  • Extensive experiments demonstrate the effectiveness of our dataset and the use of grounding masks as intermediate guidance, which significantly improves the generalization capability of robot policies.

Data

The pipeline is composed of three key stages: (a) first, we extract informative object attributes in both keyword and descriptive-phrase formats; (b) next, appearance-based instructions are generated from these attributes, where keywords filter candidate objects and descriptive phrases are used to compute appearance similarity; (c) finally, spatial and commonsense instructions are generated through rule-based methods and GPT-based generation, respectively.

Data Generation Pipeline
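
Below is a minimal, runnable sketch of stage (b) of this pipeline. The SceneObject schema, the build_sample helper, and the word-overlap appearance_similarity function are illustrative assumptions (the released pipeline could, for instance, score similarity with vision-language text embeddings); the sketch only shows how keyword tags can filter candidate objects while descriptive phrases rank visually similar distractors.

from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    keywords: frozenset     # attribute tags, e.g. frozenset({"mug", "red"})
    description: str        # descriptive phrase, e.g. "a red ceramic mug"

def appearance_similarity(a, b):
    """Stand-in for an embedding-based similarity over descriptive phrases."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def build_sample(target, library, num_distractors=2):
    # Keyword filtering: drop objects whose attribute tags collide with the
    # target's, so the generated instruction still refers to a unique object.
    pool = [o for o in library if o is not target and not (o.keywords & target.keywords)]
    # Descriptive phrases rank the remaining pool by appearance similarity,
    # yielding visually confusable distractors for harder scenes.
    pool.sort(key=lambda o: appearance_similarity(o.description, target.description),
              reverse=True)
    return f"pick up {target.description}", pool[:num_distractors]

library = [
    SceneObject("mug_red",  frozenset({"mug", "red"}),    "a red ceramic mug"),
    SceneObject("cup_blue", frozenset({"cup", "blue"}),   "a blue ceramic cup"),
    SceneObject("bowl",     frozenset({"bowl", "white"}), "a white plastic bowl"),
]
instruction, distractors = build_sample(library[0], library)
print(instruction, [o.name for o in distractors])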

Method

To enhance policy generalization, we leverage grounding masks as intermediate representations for spatial guidance. Specifically, (a) the grounded vision-language model processes the instruction and image observation to generate target masks, and (b) the grounded policy network integrates mask guidance by concatenating the masks with the image input and directing attention within the grounded perceiver.
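
The sketch below gives one plausible, simplified reading of step (b) in PyTorch: the predicted mask is concatenated with the RGB observation as an extra channel, and per-patch mask coverage is converted into an additive attention bias so the perceiver's latent queries focus on the grounded region. The GroundedPerceiver class, channel layout, token sizes, and bias formulation are assumptions for illustration, not the exact RoboGround architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GroundedPerceiver(nn.Module):
    def __init__(self, dim=256, num_latents=32, num_heads=8, patch=16):
        super().__init__()
        self.patch = patch
        self.num_heads = num_heads
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        # RGB (3 channels) + binary grounding mask (1 channel), concatenated channel-wise.
        self.patchify = nn.Conv2d(3 + 1, dim, kernel_size=patch, stride=patch)
        self.attn = nn.MultiheadAttention(dim, num_heads=num_heads, batch_first=True)

    def forward(self, rgb, mask):
        # rgb: (B, 3, H, W); mask: (B, 1, H, W), e.g. predicted by the grounded VLM.
        x = self.patchify(torch.cat([rgb, mask], dim=1))   # (B, dim, h, w)
        tokens = x.flatten(2).transpose(1, 2)              # (B, h*w, dim)
        # Per-patch mask coverage -> additive attention bias: patches outside the
        # grounded region receive a large negative bias, steering latent queries
        # toward the target object / placement area.
        coverage = F.avg_pool2d(mask, kernel_size=self.patch).flatten(1)  # (B, h*w)
        bias = torch.log(coverage.clamp_min(1e-4)).unsqueeze(1)           # (B, 1, h*w)
        bias = bias.repeat_interleave(self.num_heads, dim=0)              # (B*heads, 1, h*w)
        q = self.latents.unsqueeze(0).expand(rgb.size(0), -1, -1)         # (B, L, dim)
        out, _ = self.attn(q, tokens, tokens,
                           attn_mask=bias.expand(-1, q.size(1), -1))
        return out                                          # (B, num_latents, dim)

perceiver = GroundedPerceiver()
rgb = torch.rand(2, 3, 224, 224)
mask = (torch.rand(2, 1, 224, 224) > 0.5).float()
print(perceiver(rgb, mask).shape)   # torch.Size([2, 32, 256])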


BibTeX

@misc{huang2025robogroundroboticmanipulationgrounded,
      title={RoboGround: Robotic Manipulation with Grounded Vision-Language Priors},
      author={Haifeng Huang and Xinyi Chen and Yilun Chen and Hao Li and Xiaoshen Han and Zehan Wang and Tai Wang and Jiangmiao Pang and Zhou Zhao},
      year={2025},
      eprint={2504.21530},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2504.21530},
}