Official Codebase of "DiffComplete: Diffusion-based Generative 3D Shape Completion"

DiffComplete: Diffusion-based Generative 3D Shape Completion [NeurIPS 2023]

🔥🔥🔥 DiffComplete is a novel diffusion-based approach that enables multimodal, realistic, and high-fidelity 3D shape completion.

*(teaser figure)*

Environments

You can easily set up and activate a conda environment for this project by using the following commands:

```shell
conda env create -f environment.yml
conda activate diffcom
```

Data Construction

For the 3D-EPN dataset, we download the original data released by 3D-EPN for both training and evaluation. To run the default setting at a resolution of 32³, download the data files shapenet_dim32_df.zip and shapenet_dim32_sdf.zip, which contain the complete and partial shapes, respectively.

To prepare the data, run data/sdf_2_npy.py to convert the files to .npy format for easier handling. Then run data/npy_2_pth.py to obtain the paired data of the eight object classes for model training.
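If you want to inspect the raw grids yourself, a minimal reader could look like the sketch below. The binary layout it assumes (three little-endian uint64 dimensions followed by float32 voxel values) is an assumption about the dense distance-field files; check data/sdf_2_npy.py for the authoritative parsing logic.

```python
import struct

import numpy as np


def read_df(path):
    """Read a dense distance-field grid into a NumPy array.

    Assumed layout (verify against data/sdf_2_npy.py): three
    little-endian uint64 dimensions, then float32 voxel values.
    """
    with open(path, "rb") as f:
        dims = struct.unpack("<3Q", f.read(24))
        grid = np.frombuffer(f.read(), dtype="<f4").reshape(dims)
    return grid


def df_to_npy(src, dst):
    """Convert one distance-field file to .npy, mirroring the script's purpose."""
    np.save(dst, read_df(src))
```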

The data structure should be organized as follows before training.

```
DiffComplete
├── data
│   ├── 3d_epn
│   │   ├── 02691156
│   │   │   ├── 10155655850468db78d106ce0a280f87__0__.pth
│   │   │   ├── ...
│   │   ├── 02933112
│   │   ├── 03001627
│   │   ├── ...
│   │   ├── splits
│   │   │   ├── train_02691156.txt
│   │   │   ├── train_02933112.txt
│   │   │   ├── ...
│   │   │   ├── test_02691156.txt
│   │   │   ├── test_02933112.txt
│   │   │   ├── ...
```
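As a sanity check on this layout, a small helper can map each name listed in a split file to the .pth path it should correspond to. This is a hypothetical helper, assuming one file stem per line in the split file; the actual split-file format used by the training code may differ.

```python
from pathlib import Path


def split_to_paths(root, class_id, split="train"):
    """Resolve shapes listed in splits/<split>_<class_id>.txt to
    .pth paths under <root>/<class_id>/.

    Hypothetical helper: assumes one file stem per line, matching
    the .pth file names shown in the tree above.
    """
    root = Path(root)
    split_file = root / "splits" / f"{split}_{class_id}.txt"
    stems = [ln.strip() for ln in split_file.read_text().splitlines() if ln.strip()]
    return [root / class_id / f"{stem}.pth" for stem in stems]
```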

Training and Inference

Our training and inference processes rely primarily on the configuration files (configs/epn_control_train.yaml and configs/epn_control_test.yaml). You can adjust the number of GPUs used by modifying exp/num_gpus in these yaml files. This setting trains a separate model for each object category; change data/class_id in the yaml file to select the category.
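For example, to train on the ShapeNet chair category with 4 GPUs, the relevant entries might look like the following sketch (the key layout is an assumption; match it to the structure of the actual yaml file):

```yaml
# Sketch of the relevant keys; mirror the structure of the real config.
exp:
  num_gpus: 4           # number of GPUs used for training
data:
  class_id: "03001627"  # ShapeNet chair category
```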

To train the diffusion model, you can run the following command:

```shell
python ddp_main.py --config-name epn_control_train.yaml
```

To test the trained model, specify the paths to the pretrained models in net/weights and net/control_weights in the yaml file, and then run the following command:

```shell
python ddp_main.py --config-name epn_control_test.yaml train.is_train=False
```
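Before testing, the weight entries in configs/epn_control_test.yaml would be filled in along these lines. The paths below are placeholders and the key layout is a sketch; substitute the actual checkpoint locations from your training run.

```yaml
# Sketch only; use your real checkpoint paths.
net:
  weights: checkpoints/epn_diffusion.pth        # placeholder path
  control_weights: checkpoints/epn_control.pth  # placeholder path
```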

Citation

If you find our work useful in your research, please consider citing:

```bibtex
@article{chu2024diffcomplete,
  title={Diffcomplete: Diffusion-based generative 3d shape completion},
  author={Chu, Ruihang and Xie, Enze and Mo, Shentong and Li, Zhenguo and Nie{\ss}ner, Matthias and Fu, Chi-Wing and Jia, Jiaya},
  journal={Advances in Neural Information Processing Systems},
  year={2023}
}
```

Acknowledgement

We would like to thank the following repos for their great work:
