GarmentPile:
Point-Level Visual Affordance Guided Retrieval and Adaptation for Cluttered Garments Manipulation

CVPR 2025


(teaser figure)

Garment-Pile Simulation Scene


Get Started

1. Install Isaac Sim 2023.1.1

Our project is built on Isaac Sim 2023.1.1. Please refer to the official guideline to download it.

After downloading, please move the folder into '~/.local/share/ov/pkg/' and rename it to 'isaac-sim-2023.1.1' to match the path configuration of this repo.
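As a sketch, the move-and-rename step might look like the following (the source path of your download is an assumption; use whatever folder name your download actually produced):

```shell
# Target location expected by the repo's path configuration.
ISAAC_DIR="$HOME/.local/share/ov/pkg/isaac-sim-2023.1.1"
mkdir -p "$HOME/.local/share/ov/pkg"
# Replace the source path below with wherever your download landed
# (folder name here is hypothetical):
# mv ~/Downloads/isaac-sim-2023.1.1 "$ISAAC_DIR"
echo "Isaac Sim is expected at: $ISAAC_DIR"
```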

Some modifications need to be made in Isaac Sim's meta-file. Please refer to this document.

2. Repo Preparation

  • Clone the repo first.

```
git clone https://github.com/AlwaySleepy/Garment-Pile.git
```
  • Download Garment Assets

Here we use garment assets from GarmentLab. Please refer to the Google_Drive_link to download the Garment folder and unzip it into 'Assets/'.

3. Environment Preparation

  • Isaac Sim Env Preparation

For convenience, we recommend creating an alias for the python.sh file in Isaac Sim 2023.1.1.

```
# 1. open the .bashrc file
vim ~/.bashrc
# 2. add the following line to the end of the file
alias isaac_pile=~/.local/share/ov/pkg/isaac-sim-2023.1.1/python.sh
# 3. save the file and exit.
# 4. reload the file for the configuration to take effect.
source ~/.bashrc
```

Install necessary packages into Isaac Sim Env.

```
isaac_pile -m pip install termcolor plyfile
```
  • Model Training Env Preparation

Create a new conda environment:

```
conda create -n garmentpile python=3.10
```

Install necessary packages into Model Training Env.

```
conda activate garmentpile
# CUDA version should be 11.8 or lower, not 12.x
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
```

4. Repo Structure Explanation

πŸ“‚ ProjectRoot # VS Code Configuration Files β”œβ”€β”€ πŸ“ .vscode # Assets used in Isaac Sim β”œβ”€β”€ πŸ“ Assets # Isaac Sim Env Configuration, including Camera, Robot, Garment, etc. β”œβ”€β”€ πŸ“ Env_Config # Used for train_data collection β”œβ”€β”€ πŸ“ Env_Data_Collection # standlone environment with pre-trained model β”œβ”€β”€ πŸ“ Env_Eval # Used for fintuning model β”œβ”€β”€ πŸ“ Env_Finetune # Model training code β”œβ”€β”€ πŸ“ Model_Train # repo images β”œβ”€β”€ πŸ“ Repo_Image 

Standalone Env

In our project, we provide three garment-pile scenes: washingmachine, sofa, basket.

You can directly run the three environments using the files in the 'Env_Eval' folder.

The retrieve, pick, and place procedures all rely on pre-trained models.

[ATTENTION!] If assets fail to load in simulation, please check the asset loading paths in 'Env_Config/Config/xx_config.py'.
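As a quick sanity check before launching the simulation, you can verify that the garment assets landed where step 2 put them. This is only a sketch: it assumes the assets were unzipped to 'Assets/Garment'; the exact paths each xx_config.py expects may differ.

```shell
# Prints "ok" if the Garment assets folder exists under the given root,
# "missing" otherwise.
check_assets() {
    if [ -d "$1/Garment" ]; then
        echo "ok"
    else
        echo "missing"
    fi
}

check_assets "Assets"
```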

```
# washmachine
isaac_pile Env_Eval/washmachine.py
# sofa
isaac_pile Env_Eval/sofa.py
# basket
isaac_pile Env_Eval/basket.py
```

Data Collection

Run the following command to generate retrieval data:

```
# washmachine
bash Env_Data_Collection/auto_washmachine_retrieve.sh
# sofa
bash Env_Data_Collection/auto_sofa_retrieve.sh
# basket
bash Env_Data_Collection/auto_basket_retrieve.sh
```

Run the following command to generate stir data:

```
# washmachine
bash Env_Data_Collection/auto_washmachine_stir.sh
# sofa
bash Env_Data_Collection/auto_sofa_stir.sh
# basket
bash Env_Data_Collection/auto_basket_stir.sh
```

There are some flags (such as rgb_flag, random_flag, etc.) that you can set manually in the .sh files. Please check the .sh files for more information.
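For illustration only, a flag section inside one of these scripts might look like the sketch below. The flag names rgb_flag and random_flag come from the repo's description, but the values and the comments are assumptions; consult the actual .sh files for the real definitions.

```shell
# Hypothetical flag block; the real scripts define their own values.
rgb_flag=true       # e.g. whether to record RGB observations
random_flag=false   # e.g. whether to randomize the garment pile

echo "rgb_flag=$rgb_flag, random_flag=$random_flag"
```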

Model Training

Training data are all collected into the 'Data' folder.

```
# activate the conda env
conda activate garmentpile
# run any .py file in the 'Model_Train' folder. remember to log in to wandb first.
# e.g.
python Model_Train/WM_Model_train.py
```

Finetune

We provide the washmachine place-model finetuning code as an example in the 'Env_Finetune' folder.

You can run the .sh file directly to see the finetuning procedure.

Citation and Reference

If you find this paper useful, please consider starring 🌟 this repo and citing 📑 our paper:

```
@InProceedings{Wu_2025_CVPR,
    author    = {Wu, Ruihai and Zhu, Ziyu and Wang, Yuran and Chen, Yue and Wang, Jiarui and Dong, Hao},
    title     = {Point-Level Visual Affordance Guided Retrieval and Adaptation for Cluttered Garments Manipulation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2025},
}
```
