Backprop RGB image losses to mesh shape? #840
priyasundaresan started this conversation in General
Replies: 2 comments 2 replies
-
@priyasundaresan Thanks for the post. Correct me if I am wrong, but you don't seem to be reporting a specific bug in the PyTorch3D library. I suggest you move this post to the GitHub Discussions page, where other users can weigh in on your problem.
-
@priyasundaresan Hi, do you have any updates about your issue?
-
I am trying to fit a source mesh (sphere) to a target mesh (cow) per the example from https://pytorch3d.org/tutorials/fit_textured_mesh. In particular, I would like to propagate losses taken over the rendered RGB images of the current and target mesh to the vertex positions of the current mesh being deformed.
I am able to achieve the desired results using a 50-50 weighting of L1 silhouette loss and L1 RGB loss taken over the rendered images, and the training progression across 200 iterations is shown here:
However, using only L1 RGB loss, the mesh doesn't converge to the desired shape as shown here:
I have tried using L2 RGB loss and changing the texture color, but the issue persists. Is it possible to use RGB image supervision alone, without silhouettes, and still propagate gradients to the mesh shape? I have referred to a similar issue here, but have confirmed that my rasterization settings do not have that problem.
This can be reproduced by running the following code and replacing
losses = {"rgb": {"weight": 0.5, "values": []}, "silhouette": {"weight": 0.5, "values": []}}
with
losses = {"rgb": {"weight": 1.0, "values": []}, "silhouette": {"weight": 0.0, "values": []}}
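For reference, here is a minimal NumPy sketch of how the two image terms might be combined under those weights. The function name `weighted_image_loss` and the array shapes are illustrative assumptions, not code from the tutorial; the tutorial itself computes these losses on rendered PyTorch tensors.

```python
import numpy as np

def weighted_image_loss(pred_rgb, target_rgb, pred_sil, target_sil, weights):
    """Combine per-pixel L1 losses on rendered RGB images and silhouettes.

    weights uses the same dict structure as the losses dict above,
    e.g. weights["rgb"]["weight"] is the RGB loss coefficient.
    """
    rgb_loss = np.abs(pred_rgb - target_rgb).mean()
    sil_loss = np.abs(pred_sil - target_sil).mean()
    return (weights["rgb"]["weight"] * rgb_loss
            + weights["silhouette"]["weight"] * sil_loss)

# 50-50 weighting, matching the setup that converges in the tutorial
losses = {"rgb": {"weight": 0.5, "values": []},
          "silhouette": {"weight": 0.5, "values": []}}
```

With the weights set to 1.0/0.0, only the RGB term contributes, so pixels where the predicted and target silhouettes disagree produce gradient only through whatever RGB difference the renderer reports there.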
The cow mesh data is available by running:
Thanks in advance for your help!