This repository provides the official implementation of UV3-TeD (UV-free Texture Generation with Denoising and Geodesic Heat Diffusions):
Simone Foti, Stefanos Zafeiriou, Tolga Birdal
Imperial College London
We suggest creating a mamba environment, but conda can be used as well by simply replacing mamba with conda.
To create the environment, open a terminal and type:
```
mamba create -n uv3-ted
```
Then activate the environment with:
```
mamba activate uv3-ted
```
Then run the following commands to install the necessary dependencies:
```
mamba install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
mamba install pyg -c pyg
mamba install pytorch-scatter pytorch-cluster pytorch-sparse -c pyg
pip install diffusers["torch"]
pip install mitsuba
pip install trimesh Pillow rtree
pip install "pyglet<2"
pip install scipy robust_laplacian polyscope pandas point-cloud-utils
pip install func_timeout tb-nightly npyvista
```
If you want to evaluate the performance of the model, also run the following:
```
pip install clean-fid lpips
```
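Once installed, clean-fid can be used to compare two folders of rendered images. The snippet below is only a minimal sketch: the directory paths are placeholders and are not produced by this repository.

```python
# Minimal sketch: compute FID/KID between two folders of rendered images with clean-fid.
# The directory paths are placeholders and must point to your own renders.
from cleanfid import fid

real_dir = "renders/real"        # placeholder: renders of ground-truth textures
generated_dir = "renders/fake"   # placeholder: renders of generated textures

print("FID:", fid.compute_fid(real_dir, generated_dir))
print("KID:", fid.compute_kid(real_dir, generated_dir))
```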
Download instructions should be printed automatically when the code is launched if the data are not found or automatic download is not implemented. Permissions to download the data may be required. Please refer to the ShapeNet and Amazon Berkeley Objects (ABO) dataset websites for more information.
We provide a configuration file for each experiment. Make sure the paths in the config file are correct. In particular, you might have to change `root` according to where the data were downloaded.
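For example, the relevant entry in a config file might look like the following. This is only an illustrative sketch: the exact keys and their nesting may differ in the files shipped in configs/, so check those files directly.

```yaml
# Illustrative excerpt only -- refer to the actual files in configs/ for the real structure.
data:
  root: /path/to/your/datasets   # change this to the folder where the data were downloaded
```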
After cloning the repo, open a terminal and navigate to the project directory. Ensure that your mamba/conda environment is active.
To start the training from the project repo simply run:
```
python train.py --config=configs/<A_CONFIG_FILE>.yaml --id=<NAME_OF_YOUR_EXPERIMENT>
```
Basic tests will automatically run on the validation set at the end of training. If you wish to run experiments on the test set or to run other experiments, you can uncomment any function call at the end of test.py. If your model has already been trained or you are using our pretrained model, you can run tests without training:
```
python test.py --id=<NAME_OF_YOUR_EXPERIMENT>
```
Note that NAME_OF_YOUR_EXPERIMENT is also the name of the folder containing the pretrained model.
The following parameters can also be used:
- `--output_path=<PATH>`: path to where outputs are going to be stored.
- `--processed_dir_name=<PATH>`: relative path to where all the preprocessed files are going to be stored. This path is relative to the folder where your data are stored.
- `--resume`: resume the training (available only when launching train.py).
- `--profile`: run a few training steps to profile model performance (available only when launching train.py).
- `--batch_size=<n>`: overrides the batch size specified in the config file (available only when launching test.py).
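For instance, resuming a previously started training run while writing outputs to a custom location could look like this (the config file, experiment name, and path are placeholders, as above):

```
python train.py --config=configs/<A_CONFIG_FILE>.yaml --id=<NAME_OF_YOUR_EXPERIMENT> --output_path=<PATH> --resume
```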
LPIPS can be computed by launching a simple script from the LPIPS library. After running the tests, follow these steps:
Clone the LPIPS repo and cd into it:
```
git clone https://github.com/richzhang/PerceptualSimilarity.git
cd ./PerceptualSimilarity
```
Then run:
```
python lpips_2dirs.py -d0 <PATH_TO_A_DIR_CONTAINING_A_SET_OF_RENDERED_SHAPES> -d1 <PATH_TO_ANOTHER_DIR_CONTAINING_A_SET_OF_RENDERED_SHAPES> -o <PATH_TO_OUT_TXT_FILE> --use_gpu
```
The weights of the pretrained models are downloadable here.
```
@inproceedings{foti2024uv3ted,
  title={UV-free Texture Generation with Denoising and Geodesic Heat Diffusions},
  author={Foti, Simone and Zafeiriou, Stefanos and Birdal, Tolga},
  booktitle={Advances in Neural Information Processing Systems},
  year={2024}
}
```