Add additive manufacturing example - from shape deviation prediction and compensation #661

Open: wants to merge 36 commits into main
Commits (36)

0a01f34
add compensation gan network init files
dearleiii Aug 28, 2024
7ec1e7e
add config file
dearleiii Aug 28, 2024
130bb1f
add train files
dearleiii Aug 28, 2024
74a512c
clean up dataloader & add comments to dataloader
dearleiii Dec 18, 2024
fed4a47
Merge branch 'NVIDIA:main' into compensation
dearleiii Dec 20, 2024
ad3a3b5
Merge branch 'main' into compensation
dearleiii Jan 9, 2025
743f9a1
add all licensing information
dearleiii Jan 9, 2025
b3d9d2a
reformatting using black
dearleiii Jan 9, 2025
e5d0c63
Merge branch 'compensation' of https://github.com/dearleiii/modulus i…
dearleiii Jan 9, 2025
212add9
edited readme installation & debugged for cpu training
dearleiii Jan 11, 2025
fd22d48
add inference file
dearleiii Jan 11, 2025
1503d15
add documentation as checked by Iterrogate
dearleiii Jan 15, 2025
3ebf11d
updated from RUff check
dearleiii Jan 15, 2025
f669b35
updated from RUff check
dearleiii Jan 15, 2025
7743c40
updated from RUff check
dearleiii Jan 16, 2025
21df166
Merge branch 'NVIDIA:main' into compensation
dearleiii Jan 16, 2025
998d1f1
add model test file
dearleiii Jan 18, 2025
6d1bacb
clean up inference file
dearleiii Jan 18, 2025
ffcb744
clean up data preprocess file
dearleiii Jan 19, 2025
dd2c619
Merge branch 'main' into compensation
mnabian Jan 23, 2025
e7f7c30
update review request
dearleiii Jan 27, 2025
4601f30
Merge branch 'NVIDIA:main' into compensation
dearleiii Jan 27, 2025
87a5b6f
Merge branch 'compensation' of https://github.com/dearleiii/modulus i…
dearleiii Jan 27, 2025
6372db4
Merge branch 'main' into compensation
mnabian Feb 4, 2025
61b6d43
Merge branch 'main' into compensation
mnabian Feb 5, 2025
6cf792d
add images for readme
dearleiii Feb 12, 2025
3d5a563
Merge branch 'compensation' of https://github.com/dearleiii/modulus i…
dearleiii Feb 12, 2025
f6fbff1
Merge branch 'NVIDIA:main' into compensation
dearleiii Feb 12, 2025
8ddca0c
add images for readme
dearleiii Feb 12, 2025
c9f61a7
Merge branch 'compensation' of https://github.com/dearleiii/modulus i…
dearleiii Feb 12, 2025
1d66820
add images for readme
dearleiii Feb 12, 2025
cfe7868
add images for readme
dearleiii Feb 12, 2025
6111092
change disclosure for readme
dearleiii Feb 18, 2025
cba1602
update the data shared path
dearleiii Apr 24, 2025
8a99958
merge the remote and updates from compensation
dearleiii Apr 24, 2025
e86176c
Merge branch 'main' into compensation
mnabian May 1, 2025
Binary file added docs/img/GraphCompNet/bar_chamber.png

Binary file added docs/img/GraphCompNet/dl_comp_test-2.png

Binary file added docs/img/GraphCompNet/figure_dl_predict.png

Binary file added docs/img/GraphCompNet/overall_arch-2.png

Binary file added docs/img/GraphCompNet/table1_fig-2.png
132 changes: 132 additions & 0 deletions examples/additive_manufacturing/compensation/README.md
@@ -0,0 +1,132 @@


# PyTorch deformation prediction & compensation (GraphCompNet)

## Introduction

This work addresses shape deviation modeling and compensation in additive manufacturing (AM) to improve geometric accuracy for industrial-scale production. While traditional methods laid the groundwork, recent machine learning (ML) advancements offer better precision. However, challenges remain in generalizing across complex geometries and adapting to position-dependent variations in batch production. We introduce GraphCompNet, a novel framework combining graph-based neural networks with GAN-inspired training to model geometries and incorporate position-specific thermal and mechanical variations. Through a two-stage adversarial process, the framework refines compensated designs, improving accuracy by 35-65% across the print space. This approach enhances AM's real-time, scalable compensation capabilities, paving the way for high-precision, automated manufacturing systems.




[//]: # (<p align="center">)

[//]: # (<img src="../../../docs/img/GraphCompNet/bar_chamber.png" width="560" />)

[//]: # (</p>)

## Sample results

Prediction & compensation accuracy (mm) to be updated

[//]: # (<p align="center">)

[//]: # (<img src="../../../docs/img/GraphCompNet/dl_comp_test-2.png" width="500" />)

[//]: # (</p>)

[//]: # (Compensation on Molded fiber dataset:)

[//]: # ()
[//]: # (Comparison of four sample parts in one print run, the top row illustrates the difference between the design CAD file and the scanned printed part geometry before applying compensation, the bottom row shows the difference between the design CAD file and the scanned printed part geometry after applying compensation using our trained prediction and compensation engine.)

[//]: # ()
[//]: # (<p align="center">)

[//]: # (<img src="../../../docs/img/GraphCompNet/table1_fig-2.png" width="900" />)

[//]: # (</p>)

## Key requirements

1. ``Torch_Geometric 2.5.1 or above``: PyTorch-based geometric/graph neural network library (PyTorch Geometric)

    - https://pytorch-geometric.readthedocs.io/en/latest/install/installation.html#installation-via-anaconda

    - ``conda install pyg=*=*cu* -c pyg``

2. ``pip install trimesh``

3. ``pip install matplotlib``

4. ``pip install pandas``

5. ``pip install hydra-core --upgrade --pre``

6. ``PyTorch3D``: PyTorch-based 3D computer vision library

    - Check the requirements on the official install page: https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md
    - When tested, PyTorch3D required Python 3.8, 3.9, or 3.10

    - ``pip install -U iopath``

    - Install directly from source: ``pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"``

7. ``pip install torch-cluster``

To test in a custom CUDA environment, install a torch version compatible with your CUDA toolkit, e.g.

``pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu121``

Refer to:
https://pytorch.org/get-started/previous-versions/

Other dependencies for development:

- ``open3d``: ``pip install open3d`` (tested with version 0.18.0)
- ``torch-cluster``: ``conda install pytorch-cluster -c pyg``



## Dataset
- Currently available:
- Bar repository [link not working yet](https://drive.google.com/file/d/1inUN4KIg8NOtuwaJa2d1j3tssRGUxgAQ/view?usp=sharing)
- Molded-fiber repository [Download sample data](https://drive.google.com/file/d/1inUN4KIg8NOtuwaJa2d1j3tssRGUxgAQ/view?usp=sharing)

- Sample input data folder format:

    - input_data.txt: each row lists the folder of one build geometry

    - /part_folder_i:

        - cad_<part_id>.txt: 3 columns giving each point's location
        - scan_red<part_id>.csv: 3 columns giving each point's location
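A minimal sketch of reading one part's point files. The paths and separators are assumptions (whitespace-separated columns in the ``.txt`` design file, comma-separated columns and no header row in the ``.csv`` scan file); adjust them to match your export.

```python
import numpy as np
import pandas as pd

# Hypothetical paths for one part folder; substitute a real <part_id>.
cad_path = "part_folder_1/cad_1.txt"       # 3 columns: x, y, z of the design points
scan_path = "part_folder_1/scan_red1.csv"  # 3 columns: x, y, z of the scanned points

# Design points, assuming whitespace-separated columns -> array of shape (N, 3)
cad_pts = np.loadtxt(cad_path)

# Scanned points, assuming comma-separated columns and no header row -> (M, 3)
scan_pts = pd.read_csv(scan_path, sep=",", header=None).values

print(cad_pts.shape, scan_pts.shape)
```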

[//]: # (- Post-processing: )

[//]: # ( )
[//]: # ( - https://github.azc.ext.hp.com/Shape-Compensation/Shape_compensator)


## Training

- To test CPU-only runs (not recommended), set the following in ``conf/config.yaml``:

    - ``cuda: False``
    - ``use_distributed: False``
    - ``use_multigpu: False``
- GPU training: set the parameters listed above to ``True`` (the sketch below shows how these flags are read)
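Since ``hydra-core`` is listed in the requirements and the config lives in ``conf/config.yaml``, the training scripts presumably read these flags through Hydra. The following is a rough, hypothetical sketch of that pattern, not the example's actual entry point; the field names match the ``conf/config.yaml`` shown later in this PR.

```python
import hydra
import torch
from omegaconf import DictConfig


@hydra.main(version_base=None, config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # Pick the device based on the `general` section of config.yaml
    use_cuda = cfg.general.cuda and torch.cuda.is_available()
    device = torch.device("cuda" if use_cuda else "cpu")
    torch.manual_seed(cfg.general.seed)

    # Hyperparameters for the discriminator (deformation predictor) stage
    lr = cfg.train_dis_options.learning_rate
    num_epoch = cfg.train_dis_options.num_epoch
    print(f"device={device}, lr={lr}, epochs={num_epoch}")


if __name__ == "__main__":
    main()
```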

- There are two training scripts, which must be run in sequence (a minimal sketch of the two-stage flow follows this list):
  1. ``train_dis.py``: trains the discriminator, which predicts part deformation from the part's position and geometry
  2. ``train_gen.py``: trains the generator, which compensates the part geometry
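A minimal, self-contained sketch of the two-stage adversarial flow described above. The placeholder MLPs and the synthetic ``train_loader`` are hypothetical stand-ins for the example's graph-based models and dataloaders; only the overall two-stage logic (fit the predictor, then train the compensator against the frozen predictor) mirrors the scripts in this PR.

```python
import torch
import torch.nn as nn

# Placeholder MLPs standing in for the example's graph-based models (illustration only).
# Inputs: point location (3) concatenated with a position feature (3).
discriminator = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 3))  # predicts deformation
generator = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 3))      # predicts compensated points


def train_loader():
    # Synthetic stand-in for the real dataloader: (CAD points, scanned points, position feature).
    for _ in range(4):
        cad = torch.randn(1024, 3)
        scan = cad + 0.05 * torch.randn(1024, 3)
        pos = torch.randn(1, 3).expand(1024, 3)
        yield cad, scan, pos


# Stage 1 (train_dis.py): fit the deformation predictor on (CAD, scan) pairs.
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
for cad, scan, pos in train_loader():
    predicted_print = cad + discriminator(torch.cat([cad, pos], dim=1))
    loss_d = nn.functional.mse_loss(predicted_print, scan)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

# Stage 2 (train_gen.py): freeze the predictor and train the compensator against it,
# so that printing the compensated shape is predicted to reproduce the original CAD.
for p in discriminator.parameters():
    p.requires_grad_(False)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
for cad, _, pos in train_loader():
    compensated = generator(torch.cat([cad, pos], dim=1))
    predicted_print = compensated + discriminator(torch.cat([compensated, pos], dim=1))
    loss_g = nn.functional.mse_loss(predicted_print, cad)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```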

## Inference

[Download pre-trained model checkpoint](https://drive.google.com/file/d/1Htd7MLGgvjmidIGyYquDtLkZe0gSEqRu/view?usp=drive_link)

- Supported 3D formats:
    - Stereolithography (STL)
    - Wavefront file (OBJ)
- How to run:
    - ``python inference.py``
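If ``save_extra`` is enabled in ``config.yaml``, the compensated point CSV written during inference can be turned back into a printable OBJ mesh with the ``generate_mesh_eval`` helper included in this example. A hedged sketch follows; the module name and file paths are assumptions (the paths reuse the examples from the helper's docstring).

```python
# Hypothetical import path; use whichever module in this example defines generate_mesh_eval.
from data_preprocess import generate_mesh_eval

generate_mesh_eval(
    cad_path="bar_5/cad/bar_5_uptess.obj",  # original design CAD (OBJ or STL)
    comp_out_path="comp/out__05.csv",       # compensated points written during inference
    export_path="./output/test",            # folder for the exported OBJ mesh
)
```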


## References

[GraphCompNet: A Position-Aware Model for Predicting and Compensating Shape Deviations in 3D Printing](to be added)

64 changes: 64 additions & 0 deletions examples/additive_manufacturing/compensation/conf/config.yaml
@@ -0,0 +1,64 @@
# ignore_header_test
# ruff: noqa: E402

# © Copyright 2023 HP Development Company, L.P.
# SPDX-FileCopyrightText: Copyright (c) 2023 - 2024 NVIDIA CORPORATION & AFFILIATES.
# SPDX-FileCopyrightText: All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


general:
  seed: 1234
  random_sample: True
  cuda: True
  use_distributed: False
  sync_batch: True
  use_multigpu: False # Default: False for D, True for G

train_dis_options:
  model_path: './pretrained/11parts_lr-3/pred_model_0000.pth'
  log_dir: './pretrained/11parts_lr-3/'
  save_path: './pretrained/11parts_lr-3/'
  num_epoch: 15001
  num_batch: 2
  learning_rate: 0.0001
  pretrain: False
  num_points: 190000

train_gen_options:
  num_points: 190000
  pred_model_path: "./pretrained/ocardo_iso_p500k/pred_model_3000.pth" # For D
  gen_model_path: "./pretrained/ocardo_iso_p500k/pred_model_3000.pth" # For G
  log_dir: './pretrained/ocardo_iso_p500k/'
  save_path: './pretrained/ocardo_iso_p500k/'
  num_epoch: 50001
  num_batch: 1
  learning_rate: 0.001

inference_options:
  seed: 1234
  num_points: 20000 # number of points to use
  data_path: '/home/chenle/codes/DL_prediction_compensation-master/data/molded_fiber/10/cad' # for other datasets: 'input_data_bar_sample', 'molded_fiber'
  discriminator_path: './pretrained/pretrained_os/pred_model_3000.pth' # discriminator model path
  generator_path: './pretrained/pretrained_os/gen_model_46500.pth' # generator model path
  save_path: './output/test' # output save path
  save_extra: True # export prediction and additional data as CSV


data_options:
  dataset_name: "Ocardo" # choices=['Ocardo', 'Bar']
  data_path: '/home/chenle/codes/DL_prediction_compensation-master/data/molded_fiber' # for other datasets: 'input_data_bar_sample', 'molded_fiber'
  cad_format: 'txt'
  scan_format: 'csv'
@@ -0,0 +1,148 @@
# ignore_header_test
# ruff: noqa: E402

# © Copyright 2023 HP Development Company, L.P.
# SPDX-FileCopyrightText: Copyright (c) 2023 - 2024 NVIDIA CORPORATION & AFFILIATES.
# SPDX-FileCopyrightText: All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import os

import numpy as np
import open3d as o3d
import pandas as pd
import torch
import torch_geometric
import trimesh


def generate_mesh_train(
ply_path,
scan_pcd_path,
save_csv=True,
save_mesh_path=None,
part_name="bar",
part_id="3",
export_format="pth",
filter_dist=False,
):
"""
A PLY file is a computer file format for storing 3D data as a collection of polygons.
PLY stands for Polygon File Format, and it's also known as the Stanford Triangle Format.
PLY files are used to store 3D data from 3D scanners.
This function load a CAD file in PLY format, or STL format with trimesh:
i.e. <trimesh.Trimesh(vertices.shape=(point_cnt, 3), faces.shape=(cnt, 3), name=`ply_path`)>
Load the raw scan file sampled points in PCD format, then save the updated scan mesh in OBJ format.
Parameters:
- ply_path = os.path.join(root_data_path, "data_pipeline_bar/remesh98.ply")
- scan_pcd_path = os.path.join(root_data_path, "data_pipeline_bar/bar_98/scan/98_SAMPLED_POINTS_aligned.pcd")
- save_mesh_path = "test_data_pipeline"
Return:
Saved scan mesh path
"""
    # Create the output folder, including the per-part subfolder used for the exports below
    os.makedirs(os.path.join(save_mesh_path, part_id), exist_ok=True)

# Load cad mesh from PLY file
cad_mesh = trimesh.load(ply_path)

# Centralize the coordinates
cad_pts = torch.FloatTensor(np.asarray(cad_mesh.vertices)) - torch.FloatTensor(
cad_mesh.bounds.mean(0)
)

# Load raw scan file in PCD, o3d function to read PointCloud from file
scan_pts = o3d.io.read_point_cloud(scan_pcd_path)

# Centralize the coordinates
scan_pts = torch.FloatTensor(np.asarray(scan_pts.points)) - torch.FloatTensor(
cad_mesh.bounds.mean(0)
)

    # Find a one-to-one matching: for each CAD vertex, the nearest scanned point
idx1, idx2 = torch_geometric.nn.knn(scan_pts, cad_pts, 1)
new_vert = scan_pts[idx2]

if filter_dist:
dist = torch.sqrt(torch.sum(torch.pow(cad_pts - new_vert, 2), 1))
filt = dist > 1.2
new_vert[filt] = cad_pts[filt]

# Updates the scan coordinates to the original CAD mesh
scan_mesh = cad_mesh
vertices = new_vert + torch.FloatTensor(cad_mesh.bounds.mean(0))
scan_mesh.vertices = vertices

if export_format == "obj":
scan_mesh.export(os.path.join(save_mesh_path, "data_out.obj"))
elif export_format == "pth":
torch.save(vertices, os.path.join(save_mesh_path, f"{part_id}/{part_name}.pth"))
else:
print("Export format should be OBJ or PTH")
exit()

if save_csv:
        # save the original CAD points with centered coordinates
np.savetxt(
os.path.join(save_mesh_path, f"{part_id}/{part_name}_cad.csv"), cad_pts
)
        # save the matched scan points with centered coordinates
np.savetxt(
os.path.join(save_mesh_path, f"{part_id}/{part_name}_scan.csv"), new_vert
)

return os.path.join(save_mesh_path, "data_out.obj")


def generate_mesh_eval(cad_path, comp_out_path, export_path, view=False):
"""
Function to load a 3D object pair (Original design file v.s. Scanned printed / Compensated part),
- CAD design in format of OBJ or STL
- Scanned printed, or compensated part points, in CSV or TXT
Export the Scanned in mesh, OBJ format
Parameters:
- object_name = "bar"
- part_id = 5
- cad_path = "%s_%d/cad/%s_%d_uptess.obj" % (object_name, part_id, object_name, part_id)
- comp_out_path = comp/out__%02d.csv" % (part_id)
Return:
Saved scan mesh path
"""
os.makedirs(export_path, exist_ok=True)

# Sample design CAD name
cad_mesh = trimesh.load(cad_path)

    # Scanned printed file, or generated compensated file, in CSV or TXT
    # Adjust `sep` if the data was saved with a different separator, e.g. " " instead of ","
scan_pts = pd.read_csv(comp_out_path, sep=",").values

# Define the new vertices as the scanned printed points coordinates
new_vert = torch.FloatTensor(scan_pts)

# Define the mesh from the Design CAD
scan_mesh = cad_mesh

# Export new mesh
scan_mesh.vertices = new_vert
scan_mesh.export(os.path.join(export_path, "export_out.obj"))
if view:
scan_mesh.show()
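For reference, a hedged usage sketch of ``generate_mesh_train`` above, reusing the example paths from its docstring; ``root_data_path`` is a placeholder for wherever the sample data was downloaded.

```python
import os

root_data_path = "./data"  # placeholder: location of the downloaded sample data

# Match the CAD mesh against its aligned scan and export the training CSV/tensor files.
generate_mesh_train(
    ply_path=os.path.join(root_data_path, "data_pipeline_bar/remesh98.ply"),
    scan_pcd_path=os.path.join(
        root_data_path, "data_pipeline_bar/bar_98/scan/98_SAMPLED_POINTS_aligned.pcd"
    ),
    save_mesh_path="test_data_pipeline",
    part_name="bar",
    part_id="98",
    export_format="pth",
)
```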