Background concepts to fill in:
Variational inference
ELBO (evidence lower bound)
Typically we want to compute the posterior $p(z \mid x)$, which is usually intractable. The idea is to approximate it with a family of parameterized distributions $q_{\theta}(z)$ and pick the best member. Written as an optimization problem: $$ \theta^{*}=\underset{\theta}{\arg \min }\, \mathrm{KL}\left(q_{\theta}(z) \,\|\, p(z \mid x)\right) $$ which is equivalent to maximizing the ELBO: $$ \theta^{*}=\underset{\theta}{\arg \max }\, \mathbb{E}_{q_{\theta}}\left[\log p(x, z)-\log q_{\theta}(z)\right] $$
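The equivalence above follows from the identity $\log p(x) = \mathrm{ELBO} + \mathrm{KL}(q_\theta \,\|\, p(z \mid x))$: since $\log p(x)$ does not depend on $\theta$, maximizing the ELBO minimizes the KL. A minimal numerical check of this identity, using a toy discrete latent with hypothetical probabilities:

```python
import numpy as np

# Toy model with a discrete latent z in {0, 1, 2} (hypothetical numbers):
# p(z) is the prior, p(x|z) the likelihood of one observed x.
p_z = np.array([0.5, 0.3, 0.2])
p_x_given_z = np.array([0.1, 0.6, 0.3])

p_xz = p_z * p_x_given_z   # joint p(x, z)
p_x = p_xz.sum()           # evidence p(x)
p_z_given_x = p_xz / p_x   # exact posterior p(z|x)

# An arbitrary variational distribution q(z)
q = np.array([0.2, 0.5, 0.3])

elbo = np.sum(q * (np.log(p_xz) - np.log(q)))       # E_q[log p(x,z) - log q(z)]
kl = np.sum(q * (np.log(q) - np.log(p_z_given_x)))  # KL(q || p(z|x))

# Identity: log p(x) = ELBO + KL(q || p(z|x))
assert np.isclose(np.log(p_x), elbo + kl)
```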
Normalizing flows transform simple densities (like Gaussians) into rich complex distributions.
Change of variables implies a change of volume: transforming a density through an invertible map requires correcting by the Jacobian determinant.
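A sketch of the change-of-variables rule with the simplest possible flow, a 1-D affine map $y = ax + b$ on a standard normal base (the parameters below are hypothetical). The transformed log density is $\log q(y) = \log p(f^{-1}(y)) + \log \left|\det \frac{\partial f^{-1}}{\partial y}\right|$, which here should match $\mathcal{N}(b, a^2)$ exactly:

```python
import numpy as np

a, b = 2.0, -1.0  # hypothetical flow parameters: y = a*x + b

def base_logpdf(x):
    # Standard normal N(0, 1) log density
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def flow_logpdf(y):
    x = (y - b) / a                         # inverse transform f^{-1}(y)
    return base_logpdf(x) - np.log(abs(a))  # volume correction: log|det J|

# The flow density must equal N(b, a^2) evaluated directly.
y = 0.7
direct = -0.5 * ((y - b) / a) ** 2 - 0.5 * np.log(2 * np.pi) - np.log(a)
assert np.isclose(flow_logpdf(y), direct)
```

Real normalizing flows stack many such invertible layers and accumulate the log-det-Jacobian terms; the bookkeeping is the same.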
MAP (maximum a posteriori estimation)
NLL (negative log likelihood)
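MAP and NLL connect directly: the MAP estimate minimizes the NLL of the data plus the negative log prior. A small sketch for a Gaussian mean with a Gaussian prior (all numbers hypothetical), where the MAP estimate has a closed form we can check against a grid search:

```python
import numpy as np

# MAP for the mean of a N(mu, 1) likelihood under a N(mu0, tau2) prior.
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=50)  # synthetic data
mu0, tau2 = 0.0, 4.0               # prior mean and variance

def neg_log_posterior(mu):
    nll = 0.5 * np.sum((x - mu) ** 2)        # Gaussian NLL up to constants
    return nll + 0.5 * (mu - mu0) ** 2 / tau2  # minus log prior

# Closed-form MAP: precision-weighted average of prior mean and data.
n = len(x)
map_closed = (mu0 / tau2 + x.sum()) / (1 / tau2 + n)

grid = np.linspace(-1.0, 4.0, 100001)
vals = np.array([neg_log_posterior(m) for m in grid])
map_grid = grid[np.argmin(vals)]
assert abs(map_grid - map_closed) < 1e-3
```

With a flat prior the second term vanishes and MAP reduces to maximum likelihood, i.e. plain NLL minimization.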