- Code for affordance extraction from egocentric videos is in the folder `ego2aff`
- Code for affordance model learning is in the folder `affordance-learning`
- Data: `Data_for_Aff-Grasp`
- Model and Log: `Model_for_Aff-Grasp`
```bibtex
@article{li2025affgrasp,
  title   = {Learning Precise Affordances from Egocentric Videos for Robotic Manipulation},
  author  = {Li, Gen and Tsagkas, Nikolaos and Song, Jifei and Mon-Williams, Ruaridh and Vijayakumar, Sethu and Shao, Kun and Sevilla-Lara, Laura},
  journal = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year    = {2025},
}
```

Part of the code is derived from hand_object_detector, hoi-forecast, GroundedSAM, and ViT-Adapter. Thanks for their great work!
