weizequan/LGNet

LGNet

Image Inpainting With Local and Global Refinement (Paper)

Prerequisites

  • Python 3
  • NVIDIA GPU + CUDA cuDNN
  • PyTorch 1.3.1

Run

  1. Train the model:
python train.py --dataroot no_use --name celebahq_LGNet --model pix2pixglg --netG1 unet_256 --netG2 resnet_4blocks --netG3 unet256 --netD snpatch --gan_mode lsgan --input_nc 4 --no_dropout --direction AtoB --display_id 0 --gpu_ids 0
  2. Test the model:
python test_and_save.py --dataroot no_use --name celebahq_LGNet --model pix2pixglg --netG1 unet_256 --netG2 resnet_4blocks --netG3 unet256 --gan_mode nogan --input_nc 4 --no_dropout --direction AtoB --gpu_ids 0
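The `--input_nc 4` flag means the first generator takes a 4-channel input. A common convention in inpainting pipelines (shown here as an illustrative sketch, not necessarily this repository's exact preprocessing) is to concatenate the masked RGB image with the binary hole mask:

```python
import numpy as np

# Illustrative 4-channel input for an inpainting network:
# 3-channel masked image + 1-channel binary mask (1 = hole).
h, w = 256, 256
image = np.random.rand(3, h, w).astype(np.float32)   # placeholder RGB image
mask = np.zeros((1, h, w), dtype=np.float32)
mask[:, 96:160, 96:160] = 1.0                        # a square hole region

masked_image = image * (1.0 - mask)                  # zero out pixels inside the hole
net_input = np.concatenate([masked_image, mask], axis=0)
print(net_input.shape)  # (4, 256, 256)
```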

Download Datasets

We use the Places2, CelebA-HQ, and Paris Street-View datasets. Liu et al. provide 12k irregular masks, which we use as the testing masks.

Pretrained Models

You can download the pre-trained models for Celeba-HQ and Places2_20cat. Note that our pre-trained model on Places2 uses only 20 categories, as described in our paper. Put the downloaded models into ./checkpoints/celebahq_LGNet/.
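As a minimal sketch (the filenames inside the downloaded archives may differ), you can create the directory the training and test scripts expect for this experiment name before copying the weights in:

```python
import os

# Create the checkpoint directory matching the --name celebahq_LGNet experiment.
ckpt_dir = os.path.join(".", "checkpoints", "celebahq_LGNet")
os.makedirs(ckpt_dir, exist_ok=True)
print(os.path.isdir(ckpt_dir))  # True
```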

Citation

If you find this work useful for your research, please cite our paper with the following BibTeX entry.

@ARTICLE{9730792,
  author={Quan, Weize and Zhang, Ruisong and Zhang, Yong and Li, Zhifeng and Wang, Jue and Yan, Dong-Ming},
  journal={IEEE Transactions on Image Processing},
  title={Image Inpainting With Local and Global Refinement},
  year={2022},
  volume={31},
  pages={2405-2420}
}

Acknowledgments

This code borrows from pytorch-CycleGAN-and-pix2pix and RFR.
