This repository provides a SwAV (Swapping Assignments between Views) implementation for self-supervised pretraining on medical images, built with PyTorch Lightning and MONAI. It focuses on technical ablation studies and feature extraction for thymus grading.

## Features
- SwAV Pretraining: Self-supervised learning using Swapping Assignments between Views.
- ResNet Backbone: Uses MONAI's ResNet50 adaptation for 3D/2D medical imaging.
- Lighter Integration: Configurable training pipeline using Lighter.
- Technical Ablations: Includes notebooks and scripts for technical ablation studies.
- YAML Configuration: Centralized configuration for training and feature extraction.
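The core SwAV mechanics listed above can be sketched in PyTorch. This is an illustrative sketch, not the repository's actual `models/swav.py` or `losses/swav_loss.py`: the class and function names, dimensions, and hyperparameters below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwAVHead(nn.Module):
    """Projection head + prototype layer of the kind a SwAV model attaches
    to a backbone's pooled features (all dimensions are illustrative)."""

    def __init__(self, in_dim=2048, hidden_dim=2048, out_dim=128, n_prototypes=300):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )
        # Bias-free prototype layer; with normalized weights and features,
        # its outputs are cosine similarities to each prototype.
        self.prototypes = nn.Linear(out_dim, n_prototypes, bias=False)

    def forward(self, features):
        z = F.normalize(self.projector(features), dim=1)
        with torch.no_grad():
            self.prototypes.weight.data = F.normalize(self.prototypes.weight.data, dim=1)
        return self.prototypes(z)  # (B, n_prototypes) scores


@torch.no_grad()
def sinkhorn(scores, eps=0.05, n_iters=3):
    """Sinkhorn-Knopp normalization: turn (B, K) prototype scores into
    soft assignments that are balanced across prototypes."""
    q = torch.exp(scores / eps).t()  # (K, B)
    q /= q.sum()
    K, B = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True)  # balance across prototypes (rows)
        q /= K
        q /= q.sum(dim=0, keepdim=True)  # normalize each sample (columns)
        q /= B
    return (q * B).t()  # (B, K); each row sums to 1


def swav_loss(scores_a, scores_b, temperature=0.1):
    """Swapped prediction: each view predicts the other view's assignments."""
    q_a, q_b = sinkhorn(scores_a), sinkhorn(scores_b)
    log_p_a = F.log_softmax(scores_a / temperature, dim=1)
    log_p_b = F.log_softmax(scores_b / temperature, dim=1)
    return -0.5 * ((q_b * log_p_a).sum(1).mean() + (q_a * log_p_b).sum(1).mean())
```

In the full method, `features` would come from the MONAI ResNet50 backbone applied to two augmented views of the same image, and the two swapped cross-entropy terms are minimized jointly.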
## Project Structure

```
.
├── train.yaml            # Main SwAV training configuration
├── get_features.yaml     # Feature extraction configuration
├── models/
│   └── swav.py           # SwAV model implementation
├── losses/
│   └── swav_loss.py      # SwAV loss function
├── transforms/           # Data augmentations
├── datasets/             # Data loading wrappers
├── analysis/             # Notebooks for technical ablations
├── README.md
└── pyproject.toml        # Project dependencies
```

## Installation

This project uses uv for dependency management.
- Clone the repository:

  ```bash
  git clone <repo-url>
  cd <repo-root>
  ```

- Install dependencies:

  ```bash
  pip install .
  # OR using uv
  uv sync
  ```
## Training

Run the training using the root configuration file:

```bash
lighter fit train.yaml
```

You can override parameters via the CLI:
```bash
lighter fit train.yaml trainer::max_epochs=100 trainer::devices=1
```

## Feature Extraction

Extract features from a trained model using the feature extraction config:
```bash
lighter predict get_features.yaml
```

## References

- Unsupervised Learning of Visual Features by Contrasting Cluster Assignments (SwAV)
- MONAI (Medical Open Network for AI)
- PyTorch Lightning
- Lighter
This project is intended for research purposes only.