I might be missing something, but according to the paper, Figure 3 suggests the AE model is required in order to train the GEN model. However, the code allows training the GEN model without first training an AE. Why is this?
The AE model should learn to encode each point in the point cloud into a latent vector of mean and variance. That latent is sampled and, together with some small noise z, used to produce the next noisy timestep t of the forward diffusion process for each point. The AE should therefore be providing the GEN with the data it needs to learn the reverse diffusion, but that isn't the case: train_gen.py doesn't even take the location of an AE_airplane.pt checkpoint as a parser argument. Could someone explain how the GEN can learn by itself? This doesn't look right.
Aha, I think I have solved the mystery. The authors let you train and test the AE separately only to prove that it works effectively as a PointNet encoder, nothing more.
Running train_gen.py alone creates either a GaussianVAE (vae_gaussian.py) or a FlowVAE (vae_flow.py) model. Each of those models has its own self.encoder, which is trained on the fly: the encoder is effectively re-trained from scratch as part of training the GEN model. Once the GEN has been trained, the separately trained AE checkpoints are no longer needed and can be discarded, and at generation time the latent (or the prior flow) is simply sampled from a standard normal.
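To make the pattern concrete, here is a minimal, self-contained sketch of the idea. This is not the repository's actual code: ToyPointNetEncoder, ToyGaussianVAE, and the simplified noise-prediction loss are hypothetical stand-ins, and the real GaussianVAE/FlowVAE classes and DiffusionPoint decoder differ in detail. The point it illustrates is that the encoder lives inside the generator model and receives gradients from the same loss as the diffusion decoder, so no pre-trained AE checkpoint has to be loaded, and sampling only needs z drawn from a normal prior.

```python
# Hypothetical sketch (not the repository's exact code) of a generator that owns
# its own point-cloud encoder and trains it jointly with the diffusion decoder.
import torch
import torch.nn as nn

class ToyPointNetEncoder(nn.Module):
    """Maps a point cloud (B, N, 3) to a latent mean and log-variance."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 256))
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.mlp(x).max(dim=1).values          # permutation-invariant pooling
        return self.fc_mu(h), self.fc_logvar(h)

class ToyGaussianVAE(nn.Module):
    """Analogue of GaussianVAE: encoder + diffusion decoder trained together."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.latent_dim = latent_dim
        self.encoder = ToyPointNetEncoder(latent_dim)   # trained on the fly
        # Stand-in for the diffusion decoder: predicts noise given (x_t, z, t).
        self.denoiser = nn.Sequential(nn.Linear(3 + latent_dim + 1, 256),
                                      nn.ReLU(), nn.Linear(256, 3))

    def get_loss(self, x, kl_weight=1e-3, num_steps=100):
        B, N, _ = x.shape
        mu, logvar = self.encoder(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        # Simplified forward diffusion on the points, conditioned on z.
        t = torch.randint(1, num_steps + 1, (B, 1, 1), device=x.device).float() / num_steps
        eps = torch.randn_like(x)
        x_t = (1 - t) * x + t * eps
        z_exp = z.unsqueeze(1).expand(B, N, -1)
        eps_pred = self.denoiser(torch.cat([x_t, z_exp, t.expand(B, N, 1)], dim=-1))
        recon = ((eps_pred - eps) ** 2).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        return recon + kl_weight * kl               # both encoder and denoiser get gradients

    @torch.no_grad()
    def sample(self, num_points=2048, batch_size=4):
        # At generation time the encoder is bypassed: z comes from the prior.
        z = torch.randn(batch_size, self.latent_dim)
        x = torch.randn(batch_size, num_points, 3)
        # (a reverse-diffusion loop would iteratively refine x here, conditioned on z)
        return x

model = ToyGaussianVAE()
points = torch.randn(8, 1024, 3)     # fake batch of point clouds
loss = model.get_loss(points)
loss.backward()                      # one backward pass updates encoder and decoder
print(float(loss))
```

The design choice this mirrors is that the "AE" trained by train_ae.py is a separate artifact for reconstruction experiments; the GEN model never loads it, because its own encoder is optimized jointly with the diffusion decoder under a single loss.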