Guoliang Zhu1, Wanjun Jia1, Caoyang Shao1, Yuheng Zhang1, Zhiyong Li1,2, Kailun Yang1,2,†
1School of Artificial Intelligence and Robotics, Hunan University, China
2National Engineering Research Center of Robot Visual Perception and Control Technology, Hunan University, China
†Corresponding author: kailun.yang@hnu.edu.cn
This work initiates the study of Holistic Affordance Grounding in 360° Indoor Environments. Embodied agents require global awareness of their 360° action space, yet current affordance research remains limited to object-centric, perspective views. To bridge this gap, we introduce the new task of holistic affordance grounding in 360° indoor environments, shifting the paradigm from isolated object-level understanding to holistic scene-level reasoning, and propose PanoAffordanceNet as a strong baseline for scene-level perception in embodied intelligence.
- Release the 360-AGD dataset.
- Release PanoAffordanceNet model architecture and training code.
For inquiries or potential collaborations, please open an issue or contact: zhuzhuxia@hnu.edu.cn
