# Demystifying Inter-Class Disentanglement
Aviv Gabbay and Yedid Hoshen
International Conference on Learning Representations (ICLR), 2020.
PyTorch re-implementation (thanks to @dneuhof). [Official TensorFlow implementation]
[Sample disentanglement results on Cars3D, SmallNorb, KTH and CelebA]
## Requirements
- python >= 3.6
- numpy >= 1.15.4
- pytorch >= 1.3.0
- opencv >= 3.4.4
- dlib >= 19.17.0
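Assuming the standard PyPI package names (this mapping is an assumption; dlib and a CUDA-enabled PyTorch build may require platform-specific installation), the dependencies can be installed with something like:
```
pip install "numpy>=1.15.4" "torch>=1.3.0" "opencv-python>=3.4.4" "dlib>=19.17.0"
```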
## Training
Training a model for disentanglement requires several steps.
Preprocessing a local copy of one of the supported datasets can be done as follows:
```
lord.py --base-dir <output-root-dir> preprocess
    --dataset-id {mnist,smallnorb,cars3d,shapes3d,celeba,kth,rafd}
    --dataset-path <input-dataset-path>
    --data-name <output-data-filename>
```
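For example, a hypothetical invocation for a local copy of CelebA (all paths and names below are placeholders, not prescribed values):
```
lord.py --base-dir /tmp/lord preprocess \
    --dataset-id celeba \
    --dataset-path /data/celeba \
    --data-name celeba
```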
Splitting a preprocessed dataset into train and test sets can be done according to one of two configurations: holding out entire classes (split-classes) or holding out a random fraction of the samples (split-samples). A concrete example follows the two command templates below.
```
lord.py --base-dir <output-root-dir> split-classes
    --input-data-name <input-data-filename>
    --train-data-name <output-train-data-filename>
    --test-data-name <output-test-data-filename>
    --num-test-classes <number-of-random-test-classes>
```
```
lord.py --base-dir <output-root-dir> split-samples
    --input-data-name <input-data-filename>
    --train-data-name <output-train-data-filename>
    --test-data-name <output-test-data-filename>
    --test-split <ratio-of-random-test-samples>
```
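For instance, holding out 10 random classes for testing (file names are hypothetical and continue the preprocessing example above):
```
lord.py --base-dir /tmp/lord split-classes \
    --input-data-name celeba \
    --train-data-name celeba-train \
    --test-data-name celeba-test \
    --num-test-classes 10
```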
Given a preprocessed train set, training a model with latent optimization (first stage) can be done as follows:
```
lord.py --base-dir <output-root-dir> train
    --data-name <input-preprocessed-data-filename>
    --model-name <output-model-name>
```
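To make the first stage concrete, below is a minimal PyTorch sketch of the latent-optimization idea: per-sample content codes and per-class codes are free parameters, optimized jointly with the generator under a reconstruction loss, with Gaussian noise added to the content codes as a regularizer. The toy generator, dimensions, and synthetic data are illustrative assumptions, not the code this repository runs.
```python
# Minimal latent-optimization sketch (illustrative only, not the repo's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

n_samples, n_classes = 1000, 10
content_dim, class_dim, image_dim = 32, 64, 3 * 16 * 16

# Latent codes are free parameters: one content code per sample,
# one shared code per class.
content_codes = nn.Parameter(1e-3 * torch.randn(n_samples, content_dim))
class_codes = nn.Parameter(1e-3 * torch.randn(n_classes, class_dim))

# Toy generator mapping [content, class] -> flattened image.
generator = nn.Sequential(
    nn.Linear(content_dim + class_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Sigmoid(),
)

optimizer = torch.optim.Adam(
    [content_codes, class_codes, *generator.parameters()], lr=1e-3)

# Synthetic data stands in for a preprocessed dataset.
images = torch.rand(n_samples, image_dim)
labels = torch.randint(0, n_classes, (n_samples,))

for step in range(100):
    idx = torch.randint(0, n_samples, (64,))
    content = content_codes[idx]
    # Noise on the content codes discourages them from
    # absorbing class information.
    content = content + torch.randn_like(content)
    recon = generator(torch.cat([content, class_codes[labels[idx]]], dim=1))
    loss = F.mse_loss(recon, images[idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```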
Training encoders for amortized inference (second stage) can be done as follows:
```
lord.py --base-dir <output-root-dir> train-encoders
    --data-name <input-preprocessed-data-filename>
    --model-name <input-model-name>
```
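Conceptually, the second stage amortizes the first: encoders are fit to predict the codes recovered by latent optimization, so unseen images can be embedded in a single forward pass. Continuing the illustrative sketch above (reusing its images, labels, codes, and dimensions; again an assumption, not the repository's actual training loop):
```python
# Illustrative second-stage sketch: regress the stage-1 codes with encoders.
content_encoder = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(),
                                nn.Linear(256, content_dim))
class_encoder = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(),
                              nn.Linear(256, class_dim))
enc_opt = torch.optim.Adam(
    [*content_encoder.parameters(), *class_encoder.parameters()], lr=1e-3)

for step in range(100):
    idx = torch.randint(0, n_samples, (64,))
    x = images[idx]
    # The stage-1 codes are fixed regression targets here.
    loss = (F.mse_loss(content_encoder(x), content_codes[idx].detach())
            + F.mse_loss(class_encoder(x), class_codes[labels[idx]].detach()))
    enc_opt.zero_grad()
    loss.backward()
    enc_opt.step()
```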
## Citation
If you find this project useful for your research, please cite:
```
@inproceedings{gabbay2020lord,
  author    = {Aviv Gabbay and Yedid Hoshen},
  title     = {Demystifying Inter-Class Disentanglement},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2020}
}
```