# When, Why, and Which Pretrained GANs Are Useful?

This repository contains supplementary code for the ICLR'22 paper When, Why, and Which Pretrained GANs Are Useful? by Timofey Grigoryev*, Andrey Voynov*, and Artem Babenko.

## TL;DR

The paper dissects the process of GAN finetuning. The takeaways:

  • Initializing GAN training from a pretrained checkpoint primarily affects the model's coverage rather than the fidelity of individual samples;
  • Measuring recall between the source and target datasets is a good recipe for choosing an appropriate GAN checkpoint for finetuning (see the sketch after this list);
  • For most target tasks, an ImageNet-pretrained GAN, despite its poor visual quality, is an excellent starting point for finetuning.
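To make the second point concrete, here is a minimal sketch of a k-NN recall estimate in the spirit of Kynkäänniemi et al. (2019), used to rank candidate source checkpoints by how well their samples cover the target data. The feature arrays, `embed` step, and all names are illustrative assumptions, not the paper's exact implementation:

```python
# Sketch of k-NN recall between target (real) and checkpoint-sample (fake)
# feature sets. Features are assumed to be precomputed embeddings
# (e.g., from a VGG or Inception network); names are hypothetical.
import numpy as np

def knn_recall(real_feats: np.ndarray, fake_feats: np.ndarray, k: int = 3) -> float:
    """Fraction of real samples covered by the k-NN manifold of fake samples."""
    # Pairwise distances among fake features -> per-fake k-th neighbor radius.
    d_ff = np.linalg.norm(fake_feats[:, None] - fake_feats[None, :], axis=-1)
    radii = np.sort(d_ff, axis=1)[:, k]  # column 0 is the zero self-distance
    # A real point is covered if it lies within some fake point's radius.
    d_rf = np.linalg.norm(real_feats[:, None] - fake_feats[None, :], axis=-1)
    return float((d_rf <= radii[None, :]).any(axis=1).mean())

# Hypothetical usage: pick the checkpoint whose samples best cover the target set.
# target = embed(target_images)
# scores = {name: knn_recall(target, embed(sample(ckpt))) for name, ckpt in checkpoints.items()}
# best = max(scores, key=scores.get)
```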


## Code

Here we release the StyleGAN-ADA ImageNet checkpoints at different resolutions, which commonly serve as a superior initialization for finetuning. These checkpoints are compatible with the official StyleGAN-ADA repository (see the loading sketch after the list):

  • StyleGAN-ADA-128
  • StyleGAN-ADA-256
  • StyleGAN-ADA-512
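For reference, a hedged sketch of how such a checkpoint could be loaded with the official StyleGAN-ADA code; the `.pkl` filename below is a placeholder, and `dnnlib`/`legacy` come from that repository:

```python
# Minimal loading sketch, assuming the official StyleGAN-ADA repository is on
# PYTHONPATH and the pickle below is one of the released checkpoints
# (the filename is illustrative, not the actual release name).
import torch
import dnnlib
import legacy  # module from the official StyleGAN-ADA repository

device = torch.device('cuda')
with dnnlib.util.open_url('stylegan2-ada-imagenet-256.pkl') as f:  # hypothetical path
    G = legacy.load_network_pkl(f)['G_ema'].to(device)  # EMA generator weights

z = torch.randn(4, G.z_dim, device=device)
imgs = G(z, None)  # pass a label tensor instead of None for conditional models
```

For finetuning, the same pickle can typically be passed to the official training script via its `--resume` option.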

We also release the GAN-transfer playground code.

## Citation

```
@misc{www_gan_transfer_iclr22,
      title={When, Why, and Which Pretrained GANs Are Useful?},
      author={Timofey Grigoryev and Andrey Voynov and Artem Babenko},
      year={2022},
      eprint={2202.08937},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```