A collection of resources on generative models whose generator functions map low-dimensional latent codes to high-dimensional data outputs.
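For concreteness, a minimal PyTorch sketch of such a generator (the layer sizes and dimensions here are illustrative assumptions, not taken from any listed paper):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a low-dimensional latent code z to a high-dimensional output x."""

    def __init__(self, latent_dim: int = 10, data_dim: int = 784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, data_dim),
            nn.Sigmoid(),  # e.g. pixel intensities in [0, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

g = Generator()
z = torch.randn(1, 10)  # low-dimensional latent code
x = g(z)                # high-dimensional output (a flattened 28x28 image, say)
```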
Disentangled representation learning was first introduced in [Bengio et al. Representation learning: A review and new perspectives. PAMI, 2013]. The goal is a representation in which each scalar encodes a single independent factor of variation.
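beta-VAE (the first entry below) operationalizes this by upweighting the KL term of the VAE objective. A minimal sketch of that loss, assuming PyTorch and a Bernoulli decoder (beta = 4 is an illustrative choice):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta: float = 4.0):
    """Reconstruction term + beta * KL(q(z|x) || N(0, I)).

    With beta > 1, the KL term pushes the approximate posterior toward the
    factorized standard-normal prior, pressuring each latent dimension to
    capture an independent factor of variation.
    """
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```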
- beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework [ICLR 2017] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner
- Variational Inference of Disentangled Latent Concepts from Unlabeled Observations [ICLR 2018] Abhishek Kumar, Prasanna Sattigeri, Avinash Balakrishnan
- Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations [arXiv 2018] Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
- Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement [arXiv 2021] Xuanchi Ren, Tao Yang, Yuwang Wang, Wenjun Zeng
- Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View [ICLR 2022] Xuanchi Ren, Tao Yang, Yuwang Wang, Wenjun Zeng
- Disentangled Representation Learning [arXiv 2022] Xin Wang, Hong Chen, Si'ao Tang, Zihao Wu, Wenwu Zhu
Compositional Generalization: people exhibit the capacity to understand and produce a potentially infinite number of novel combinations of known components; as Chomsky (quoting von Humboldt) put it, to make "infinite use of finite means."
https://blog.research.google/2020/03/measuring-compositional-generalization.html
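A toy illustration of a compositional split, with hypothetical factors: every shape and every color is seen during training, but some combinations are held out so the model must recombine known components in novel ways at test time.

```python
from itertools import product

# Hypothetical factors: every value of each factor appears in training...
shapes = ["square", "circle", "triangle"]
colors = ["red", "green", "blue"]

all_combos = list(product(shapes, colors))

# ...but some (shape, color) pairs are held out and only seen at test time.
test_combos = {("triangle", "blue"), ("circle", "red")}
train_combos = [c for c in all_combos if c not in test_combos]

print("train:", train_combos)
print("test (novel combinations of known parts):", sorted(test_combos))
```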
Studying disentanglement in isolation may not be sufficient; the possibility of disentanglement should instead be explored within the framework of your own model, since this should be a capability of the model itself.