This is the official GitHub repository of the CamShift dataset and benchmark from the paper:
Neural Rendering for Sensor Adaptation in 3D Object Detection
Felix Embacher, David Holtz, Jonas Uhrig, Marius Cordts, Markus Enzweiler
Please visit our CamShift Project Page for more information.
You can download a preview of the CamShift dataset here.
For complete dataset access, please reach out to camshift@gmx.de.
| Category | # Instances | Ratio [%] | Instances per Scene (Total) | Instances per Scene (Train) | Instances per Scene (Val) |
|---|---|---|---|---|---|
| Ambulance | 226 | 0.9 | 0.3 | 0.3 | 0.3 |
| Bicycle | 946 | 3.7 | 1.1 | 1.1 | 1.0 |
| Bus | 294 | 1.1 | 0.3 | 0.3 | 0.4 |
| Car | 15762 | 60.9 | 18.5 | 20.0 | 12.1 |
| Human | 6114 | 23.6 | 7.2 | 7.0 | 8.1 |
| Motorcycle | 1397 | 5.4 | 1.6 | 1.7 | 1.3 |
| Truck | 1143 | 4.4 | 1.3 | 1.4 | 1.0 |
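The Ratio column follows directly from the instance counts. A quick sketch of that arithmetic, using the counts from the table above:

```python
# Instance counts per category, copied from the class distribution table.
counts = {
    "Ambulance": 226, "Bicycle": 946, "Bus": 294, "Car": 15762,
    "Human": 6114, "Motorcycle": 1397, "Truck": 1143,
}

total = sum(counts.values())  # overall number of annotated instances

# Each ratio is the category's share of all instances, in percent,
# rounded to one decimal as in the table.
ratios = {name: round(100 * n / total, 1) for name, n in counts.items()}

print(total)           # 25882
print(ratios["Car"])   # 60.9
```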
The CamShift dataset is designed for plug-and-play compatibility with the nuScenes dataset. Therefore, each dataset split (sim-SUV, sim-SUB, nerf-SUV, and nerf-SUB) can be used as a drop-in replacement at the nuScenes root `./data/nuscenes`.
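Because each split mirrors the nuScenes layout, swapping splits amounts to re-pointing the dataset root. A minimal sketch of that idea (the paths below are placeholders for illustration, not prescribed install locations):

```python
import tempfile
from pathlib import Path

# Sandbox standing in for a machine with CamShift extracted; the
# directory names are hypothetical, not prescribed locations.
root = Path(tempfile.mkdtemp())
split = root / "camshift" / "nerf-suv"
split.mkdir(parents=True)

# Point the nuScenes root at the chosen CamShift split, exactly as a
# detector expecting ./data/nuscenes would consume it.
nusc_root = root / "data" / "nuscenes"
nusc_root.parent.mkdir(parents=True)
nusc_root.symlink_to(split, target_is_directory=True)

print(nusc_root.resolve() == split.resolve())  # True
```

Switching to another split is then a matter of replacing the single symlink, with no changes to the detector configs.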
- Clone the repository and navigate to the repository root.
- Initialize all submodules with `git submodule update --init`.
- Build the Docker image or venv based on the instructions of each submodule.
- Clone the custom nuscenes-devkit using HTTPS (`git clone -b camshift https://github.com/iis-esslingen/nuscenes-devkit.git`) or SSH (`git clone -b camshift git@github.com:iis-esslingen/nuscenes-devkit.git`).
- Navigate to `nuscenes-devkit/setup` and install the custom nuscenes-devkit with `pip install --no-deps .`.
- Symlink one of the four CamShift datasets (sim-suv, sim-sub, nerf-suv, or nerf-sub) to `src/mmdet_projects/<your_project>/data/nuscenes`, e.g. `ln -s /data/camshift/sim-suv ./src/mmdet_projects/<your_project>/data/nuscenes`.
- If `mmdet3d` is not located inside `src/mmdet_projects/<your_project>`, symlink it with `ln -s <path_to_mmdetection> ./src/mmdet_projects/<your_project>`.
- Navigate to `src/mmdet_projects/<your_project>`, then prepare the data, start training, and run testing with the respective commands from the following table.
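After symlinking, a quick sanity check that the expected nuScenes-style folders are present can catch path mistakes before data preparation. A sketch, assuming the standard nuScenes root layout (`samples`, `sweeps`, `maps`, and a version folder):

```python
import tempfile
from pathlib import Path

def check_nuscenes_root(root: Path, version: str = "v1.0-trainval") -> list[str]:
    """Return the standard nuScenes subfolders missing under `root`."""
    expected = ["samples", "sweeps", "maps", version]
    return [d for d in expected if not (root / d).is_dir()]

# Demo on a sandbox root with only part of the layout in place.
root = Path(tempfile.mkdtemp())
(root / "samples").mkdir()
(root / "sweeps").mkdir()

print(check_nuscenes_root(root))  # ['maps', 'v1.0-trainval']
```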
| Model | Usage | Command |
|---|---|---|
| DETR3D | Prep | `python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag detr` |
| DETR3D | Train | `tools/dist_train.sh projects/configs/detr3d/detr3d_r50.py 8` |
| DETR3D | Val | `./tools/dist_test.sh projects/configs/detr3d/detr3d_r50.py <checkpoint_file> 1 --eval bbox` |
| PETR | Prep | `python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag petr` |
| PETR | Train | `tools/dist_train.sh projects/configs/petr/petr_r50.py 8` |
| PETR | Val | `./tools/dist_test.sh projects/configs/petr/petr_r50.py <checkpoint_file> 1 --eval bbox` |
| StreamPETR | Prep | `python tools/create_data_nusc.py --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes2d --version v1.0` |
| StreamPETR | Train | `./tools/dist_train.sh ./projects/configs/StreamPETR/stream_petr_r50.py 8` |
| StreamPETR | Val | `./tools/dist_test.sh ./projects/configs/StreamPETR/stream_petr_r50.py <checkpoint_file> 1 --eval bbox` |
| BEVDet | Prep | `python tools/create_data_bevdet.py` |
| BEVDet | Train | `tools/dist_train.sh configs/bevdet/bevdet_r50.py 8` |
| BEVDet | Val | `./tools/dist_test.sh configs/bevdet/bevdet_r50.py <checkpoint_file> 1 --eval mAP` |
| BEVFormer‑S | Prep | `python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --version v1.0 --canbus ./data/nuscenes` |
| BEVFormer‑S | Train | `./tools/dist_train.sh ./projects/configs/bevformer/bevformer_r50_static.py 8` |
| BEVFormer‑S | Val | `./tools/dist_test.sh ./projects/configs/bevformer/bevformer_r50_static.py <checkpoint_file> 1` |
| BEVFormer | Prep | `python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --version v1.0 --canbus ./data/nuscenes` |
| BEVFormer | Train | `./tools/dist_train.sh ./projects/configs/bevformer/bevformer_r50.py 8` |
| BEVFormer | Val | `./tools/dist_test.sh ./projects/configs/bevformer/bevformer_r50.py <checkpoint_file> 1` |
**mAP [%]**

| Training | Validation | DETR3D† | PETR‡ | StreamPETR‡ | BEVDet‡ | BEVFormer‑S† | BEVFormer† |
|---|---|---|---|---|---|---|---|
| sim‑SUV | sim‑SUV | 51.6 | 46.8 | 57.5 | 44.2 | 59.5 | 63.3 |
| sim‑SUB | sim‑SUB | 46.1 | 43.6 | 52.7 | 41.9 | 56.6 | 61.1 |
| nerf‑SUV | nerf‑SUV | 48.1 | 41.1 | 54.3 | 38.3 | 54.8 | 58.4 |
| nerf‑SUB | nerf‑SUB | 44.0 | 36.6 | 48.3 | 38.4 | 49.3 | 51.9 |
| sim‑SUV | sim‑SUB | 29.7 (−16.4) | 14.9 (−28.7) | 17.3 (−35.4) | 29.4 (−12.5) | 48.8 (−7.8) | 50.8 (−10.3) |
| sim‑SUB | sim‑SUV | 32.0 | 10.2 | 18.3 | 25.6 | 55.1 | 57.2 |
| nerf‑SUV | sim‑SUV | 47.7 | 41.2 | 53.0 | 24.6 | 55.4 | 58.4 |
| nerf‑SUB | sim‑SUB | 43.1 (+13.4) | 35.1 (+20.2) | 44.8 (+27.5) | 29.5 (+0.1) | 50.6 (+1.8) | 52.1 (+1.3) |

† [1600 × 900] input resolution. ‡ [1408 × 512] input resolution.
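The parenthesized deltas can be reproduced from the table itself: the sim‑SUV → sim‑SUB deltas compare against training in-domain on sim‑SUB, while the nerf‑SUB → sim‑SUB deltas compare against the purely simulated sim‑SUV → sim‑SUB transfer. A sketch of that arithmetic for the DETR3D column (values copied from the table):

```python
# DETR3D mAP values copied from the results table.
in_domain_sim_sub = 46.1   # trained and evaluated on sim-SUB
sim_transfer = 29.7        # trained on sim-SUV, evaluated on sim-SUB
nerf_transfer = 43.1       # trained on nerf-SUB, evaluated on sim-SUB

# Drop of the sim-to-sim transfer relative to the in-domain baseline.
drop = round(sim_transfer - in_domain_sim_sub, 1)
# Gain of the NeRF-based transfer over the purely simulated transfer.
gain = round(nerf_transfer - sim_transfer, 1)

print(drop, gain)  # -16.4 13.4
```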
The CamShift dataset is licensed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.
The repository code is licensed under the terms of the MIT license.
Please note that each submodule additionally follows its own license and has its own dependencies.
If you use the CamShift dataset, please cite:
```bibtex
@INPROCEEDINGS{11097434,
  author={Embacher, Felix and Holtz, David and Uhrig, Jonas and Cordts, Marius and Enzweiler, Markus},
  booktitle={2025 IEEE Intelligent Vehicles Symposium (IV)},
  title={Neural Rendering for Sensor Adaptation in 3D Object Detection},
  year={2025},
  pages={1400-1407}}
```

We want to thank the authors behind CARLA for their simulator and nuScenes for their leading autonomous driving dataset, both of which were crucial for creating our virtual CamShift dataset and conducting our sensor adaptation investigations.
Special thanks to the team behind MMDetection3D for providing a comprehensive framework for 3D object detection. We also want to thank the authors of DETR3D, PETR, StreamPETR, BEVDet, and BEVFormer for their pioneering work and their open-source implementations.

