MicroGen3D is a conditional latent diffusion model framework for generating high-resolution 3D multiphase microstructures with user-defined attributes such as volume fraction and tortuosity. Designed to accelerate materials discovery, it synthesizes microstructures in seconds and predicts the associated manufacturing parameters.
```bash
# 1. Clone the repo
git clone https://github.com/baskargroup/MicroGen3D.git
cd MicroGen3D

# 2. Set up the environment
python -m venv venv
source venv/bin/activate  # On Windows use: venv\Scripts\activate

# 3. Install dependencies
pip install -r requirements.txt
```
```python
# 4. Download dataset and weights (Hugging Face)
# Make sure the HF CLI is installed and you're logged in: `huggingface-cli login`
from huggingface_hub import hf_hub_download

# Download sample data
hf_hub_download(repo_id="BGLab/microgen3D", filename="sample_data.h5", repo_type="dataset", local_dir="data")

# Download model weights
hf_hub_download(repo_id="BGLab/microgen3D", filename="vae.ckpt", local_dir="models/weights/experimental")
hf_hub_download(repo_id="BGLab/microgen3D", filename="fp.ckpt", local_dir="models/weights/experimental")
hf_hub_download(repo_id="BGLab/microgen3D", filename="ddpm.ckpt", local_dir="models/weights/experimental")
```
Training configuration (`params.yaml`); an illustrative sketch follows this list.

General:
- `task`: Auto-generated if left null
- `data_path`: Path to the training dataset (`../data/sample_train.h5`)
- `model_dir`: Directory to save model weights
- `batch_size`: Batch size for training
- `image_shape`: Shape of the 3D images, `[C, D, H, W]`

VAE:
- `latent_dim_channels`: Number of latent-space channels
- `kld_loss_weight`: Weight of the KL-divergence loss
- `max_epochs`: Training epochs
- `pretrained`: Whether to use a pretrained VAE
- `pretrained_path`: Path to the pretrained VAE model

FP:
- `dropout`: Dropout rate
- `max_epochs`: Training epochs
- `pretrained`: Whether to use a pretrained FP
- `pretrained_path`: Path to the pretrained FP model

DDPM:
- `timesteps`: Number of diffusion timesteps
- `n_feat`: Number of U-Net feature channels; higher values give the model more capacity
- `learning_rate`: Learning rate
- `max_epochs`: Training epochs
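To make the layout concrete, here is a minimal, hypothetical sketch of what these training settings could look like in `params.yaml`. All values, key names, and nesting below are illustrative assumptions; consult the `params.yaml` shipped in the repo for the authoritative layout.

```yaml
# Hypothetical sketch of a training params.yaml; keys, nesting, and values are assumptions.
task: null                           # auto-generated if left null
data_path: ../data/sample_train.h5   # training dataset
model_dir: ../models/weights/        # where checkpoints are saved
batch_size: 8                        # assumed value
image_shape: [1, 64, 64, 64]         # [C, D, H, W], assumed value

vae:
  latent_dim_channels: 4             # latent-space channels
  kld_loss_weight: 1.0e-4            # KL-divergence loss weight
  max_epochs: 100
  pretrained: true
  pretrained_path: ../models/weights/experimental/vae.ckpt

fp:
  dropout: 0.1
  max_epochs: 100
  pretrained: true
  pretrained_path: ../models/weights/experimental/fp.ckpt

ddpm:
  timesteps: 1000                    # diffusion timesteps
  n_feat: 64                         # U-Net feature channels (more = more capacity)
  learning_rate: 1.0e-4
  max_epochs: 100
```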
Inference configuration (`params.yaml`); an illustrative sketch follows this list.

- `data_path`: Path to the inference/test dataset (`../data/sample_test.h5`)
- `batch_size`, `num_batches`, `num_timesteps`, `learning_rate`, `max_epochs`: Optional parameters
- `latent_dim_channels`: Number of latent-space channels
- `n_feat`: Number of U-Net feature channels
- `image_shape`: Expected input image shape
- Features/targets to predict: `ABS_f_D`, `CT_f_D_tort1`, `CT_f_A_tort1`
- `ddpm_path`: Path to the trained DDPM model
- `vae_path`: Path to the trained VAE model
- `fc_path`: Path to the trained FP model
- `output_dir`: Where inference results are stored
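Again as a hypothetical sketch only (real key names, such as `features` below, and the flat layout are assumptions; check the repo's `params.yaml`):

```yaml
# Hypothetical sketch of an inference params.yaml; key names and values are assumptions.
data_path: ../data/sample_test.h5
batch_size: 4                 # optional
num_batches: 1                # optional
num_timesteps: 1000           # optional
latent_dim_channels: 4        # must match the trained VAE
n_feat: 64                    # must match the trained DDPM U-Net
image_shape: [1, 64, 64, 64]
features:                     # 'features' is an assumed key name
  - ABS_f_D
  - CT_f_D_tort1
  - CT_f_A_tort1
ddpm_path: ../models/weights/experimental/ddpm.ckpt
vae_path: ../models/weights/experimental/vae.ckpt
fc_path: ../models/weights/experimental/fp.ckpt
output_dir: ../results/       # assumed location
```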
Navigate to the training folder and run:

```bash
cd training
python training.py
```
After training, switch to the inference folder and run:

```bash
cd ../inference
python inference.py
```
Make sure the paths in `params.yaml` are set correctly and that the pretrained models are placed in `models/weights/`.
- Sample data and pretrained models must be downloaded from the `BGLab/microgen3D` Hugging Face repo (see the download step above)
- Model outputs will be saved in the folder specified by `output_dir` in `params.yaml`
- Image shape and features must be consistent across the config files and the dataset format; see the check below
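A quick way to verify that `image_shape` and the feature names in your config match the dataset is to inspect the HDF5 file directly. This minimal sketch assumes only that the data ships as an HDF5 file (as the `.h5` extension suggests); it lists whatever datasets the file actually contains:

```python
import h5py

# Walk the HDF5 file and print each dataset's path, shape, and dtype so you
# can compare them against image_shape and the feature list in params.yaml.
with h5py.File("data/sample_data.h5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    f.visititems(show)
```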