This is the official implementation of the ECCV 2024 paper "Plug and Play: A Representation Enhanced Domain Adapter for Collaborative Perception", by Tianyou Luo, Quan Yuan, Guiyang Luo, Yuchen Xia, Yujia Yang, and Jinglin Li.

This repo is mainly based on the cooperative detection framework OpenCOOD, so the installation steps are the same.
```
# Clone repo
git clone https://github.com/luotianyou349/PnPDA.git
cd PnPDA

# Setup conda environment
conda create -y --name pnpda python=3.7
conda activate pnpda

# pytorch >= 1.8.1; the newest version works well
conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=11.6 -c pytorch -c conda-forge

# Install spconv 2.0; choose the correct cuda version for your system
pip install spconv-cu116

# Install dependencies
pip install -r requirements.txt

# Build the bbx nms calculation (cuda version)
python opencood/utils/setup.py build_ext --inplace

# Install opencood into the environment
python setup.py develop
```

The V2XSet data can be found at the google url. Since the data for train/validate/test is very large, we split each data set into small chunks, which can be found in the directories ending with `_chunks`, such as `train_chunks`. After downloading, please run the following command on each set to merge those chunks together:
```
cat train.zip.parta* > train.zip
unzip train.zip
```

After downloading finishes, please structure the files as follows:
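The merge step can also be scripted for all splits at once. Below is a minimal Python sketch equivalent to running `cat <split>.zip.parta* > <split>.zip` for each set; only the chunk naming pattern from the command above is assumed.

```python
# Sketch: merge downloaded zip chunks back into a single archive,
# equivalent to `cat train.zip.parta* > train.zip` run per split.
import glob
import shutil


def merge_chunks(target):
    """Concatenate target.parta* (in sorted order) into a single file."""
    parts = sorted(glob.glob(glob.escape(target) + ".parta*"))
    with open(target, "wb") as out:
        for part in parts:
            with open(part, "rb") as chunk:
                shutil.copyfileobj(chunk, out)
```

Run it in the directory holding the downloaded chunks, e.g. `merge_chunks("train.zip")` for each of `train`, `validate`, and `test`, then `unzip` each archive as shown above.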
```
PnPDA                 # root of your v2xvit
├── v2xset            # the downloaded v2xset data
│   ├── train
│   ├── validate
│   ├── test
├── opencood          # the core codebase
```

Our data label format is very similar to the one in OPV2V.
We first train the agent's encoder and detection head in a homogeneous scenario, so you can use either early fusion or feature fusion training. For details, refer to the OpenCOOD training process.
```
python opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir ${CHECKPOINT_FOLDER} --half]
```

Arguments explanation:
- `hypes_yaml`: the path of the training configuration file, e.g. `opencood/hypes_yaml/point_pillar_v2xvit.yaml`, which specifies the model you want to train.
- `model_dir` (optional): the path of the checkpoint folder, used to fine-tune trained models. When `model_dir` is given, the trainer discards `hypes_yaml` and loads the `config.yaml` in the checkpoint folder.
- `half` (optional): if specified, mixed-precision training is used to reduce memory usage.
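The precedence between `model_dir` and `hypes_yaml` can be sketched as follows (illustrative only; `resolve_config` is not a function in this repo):

```python
# Illustrative sketch of the config-resolution rule described above:
# a given checkpoint folder takes precedence, and its saved config.yaml
# is loaded instead of the file passed via --hypes_yaml.
import os


def resolve_config(hypes_yaml, model_dir=None):
    if model_dir:  # resuming / fine-tuning: reuse the saved config
        return os.path.join(model_dir, "config.yaml")
    return hypes_yaml  # fresh training: use the supplied yaml
```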
After training, you can find the trained model in `opencood/logs`; it includes the encoder and detection head. You need to copy the model to `opencood/pre_train_modules`:
```
mkdir opencood/pre_train_modules
cp opencood/logs/your_model/last.pth opencood/pre_train_modules
mv opencood/pre_train_modules/last.pth opencood/pre_train_modules/new_model_name.pth
```

Before pre-training, you need to select the correct configuration file, such as `opencood/hypes_yaml/aa_bb_ssl.yaml`, where `aa` is the encoder model of the ego agent and `bb` is the encoder model of the neighboring agent. Then you need to modify several places in the configuration file:
- Modify `root_dir` and `validate_dir` to the paths of your dataset.
- Modify `model/encoder_q/saved_pth` and `model/encoder_k/saved_pth` to the path of your trained model, usually `opencood/pre_train_modules/xxx.pth`.

Please make sure that `saved_pth` and the model name correspond.
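Taken together, the edits above touch a config fragment shaped roughly like this (key names follow the list above; the path values are placeholders you must replace with your own):

```yaml
# Illustrative fragment only -- paths are placeholders.
root_dir: /path/to/v2xset/train
validate_dir: /path/to/v2xset/validate
model:
  encoder_q:
    saved_pth: opencood/pre_train_modules/xxx.pth
  encoder_k:
    saved_pth: opencood/pre_train_modules/xxx.pth
```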
Then execute the following command to start training:
```
python opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir ${CHECKPOINT_FOLDER} --half]
```

After training, as in step 1, copy the trained model to `opencood/pre_train_modules`. At this point, you have the adapter.
For fine-tuning, please select a configuration file such as `opencood/hypes_yaml/aa_bb_cpm_dd.yaml`, where `aa` is the encoder model of the ego agent, `bb` is the encoder model of the neighboring agent, and `dd` is the feature fusion algorithm.

In the special case `opencood/hypes_yaml/aa_bb_cc_cpm_dd.yaml`, `aa` is the encoder model of the ego agent, `bb` is the standard encoder model, `cc` is the encoder model of the neighboring agent, and `dd` is the feature fusion algorithm. You will use this file when conducting backward compatibility experiments. Please note that we provide different configuration files for different feature fusion algorithms.
Then you need to modify several places in the configuration file:

- Modify `root_dir` and `validate_dir` to the paths of your dataset.
- Modify `model/encoder_q/saved_pth` and `model/encoder_k/saved_pth` to the paths of your trained encoders, usually `opencood/pre_train_modules/xxx.pth`.
- Modify `model/pre_train_modules` to the path of your pre-trained adapter.
- For the backward compatibility experiment, modify `model/args/nig2base` and `model/args/base2ego` to the corresponding adapters.

Please make sure that `saved_pth` and the model name correspond.
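For the backward compatibility case, the relevant part of the config looks roughly like this (key paths follow the list above; the adapter file names are hypothetical placeholders):

```yaml
# Illustrative fragment only -- file names are placeholders.
model:
  pre_train_modules: opencood/pre_train_modules/your_adapter.pth
  args:
    nig2base: opencood/pre_train_modules/your_nig2base_adapter.pth
    base2ego: opencood/pre_train_modules/your_base2ego_adapter.pth
```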
Then execute the following command to start training:
```
python opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir ${CHECKPOINT_FOLDER} --half]
```

Execute the following command to start inference:
```
python opencood/tools/inference.py --model_dir ${CHECKPOINT_FOLDER}
```

PnPDA is built upon OpenCOOD, the first open cooperative detection framework for autonomous driving.
V2XSet is collected using OpenCDA, the first open co-simulation-based research/engineering framework integrating prototype cooperative driving automation pipelines as well as regular automated driving components (e.g., perception, localization, planning, and control).
