My idea was to try to learn some of the parameters used to generate meshes (not the vertex positions directly).
I'm not sure I understand how a loss between two differentiably rendered images can produce gradient information in the backward pass.
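Conceptually, this is the loop I have in mind (a minimal sketch following the camera-position tutorial, where channel 3 of the rendered image is the silhouette alpha and `target_silhouette` is a placeholder for my reference image):

```python
image = self.renderer(meshes_world=newmesh, R=R, T=T)       # differentiable render
loss = torch.sum((image[..., 3] - target_silhouette) ** 2)  # silhouette loss
loss.backward()  # gradients should flow back through the rasterizer to my parameter
```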
As an easy first example I just want to learn a sphere radius, but I get this error:

```
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
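My guess is that the autograd graph gets cut somewhere between `sphere_radius` and the rendered image. Something like this, with a hypothetical `unit_verts` template, would explain it:

```python
r = self.sphere_radius

# Broken: .item() turns the parameter into a plain Python float, so the
# resulting verts tensor has no grad_fn and the loss cannot backpropagate.
verts_bad = torch.tensor(unit_verts.numpy() * r.item())

# Intact: only torch ops on the parameter, so the verts stay in the graph.
verts_ok = unit_verts * r
```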
I'm starting from the tutorial for learning camera positions; in my model I use a single parameter to start:

```python
self.register_parameter(
    "sphere_radius",
    nn.Parameter(torch.tensor([0.5], dtype=torch.float32), requires_grad=True),
)
```
I render a new mesh generated from this parameter:

```python
image = self.renderer(meshes_world=newmesh, R=R, T=T)
```
My idea is to link the radius to the mesh with a pretrained neural network, so I should then be able to backpropagate my loss, right?
I could link the radius to the vertex positions with a formula, but I'm not sure how to handle more complex generation.
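For the sphere, the formula version seems simple enough. A sketch of what I mean, using `ico_sphere` from `pytorch3d.utils` as a unit template (the `level` and `device` values are just placeholders):

```python
from pytorch3d.utils import ico_sphere

src_mesh = ico_sphere(level=4, device=device)  # unit icosphere template
unit_verts = src_mesh.verts_padded()           # (1, V, 3), treated as a constant
new_verts = unit_verts * self.sphere_radius    # differentiable in the radius
newmesh = src_mesh.update_padded(new_verts)    # same topology, scaled vertices
```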
I'm also not sure what the model would look like in PyTorch, given that we can only use PyTorch operations in order to backpropagate.
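Is it essentially supposed to look like this? A minimal sketch modeled on the camera-position tutorial (the renderer construction is omitted, and `image_ref` is assumed to be my target silhouette):

```python
import torch
import torch.nn as nn
from pytorch3d.utils import ico_sphere


class SphereModel(nn.Module):
    def __init__(self, renderer, image_ref, device):
        super().__init__()
        self.renderer = renderer
        self.register_buffer("image_ref", image_ref)
        self.src_mesh = ico_sphere(level=4, device=device)
        self.register_parameter(
            "sphere_radius",
            nn.Parameter(torch.tensor([0.5], dtype=torch.float32, device=device)),
        )

    def forward(self, R, T):
        # Build the mesh from the parameter using torch ops only, so the
        # vertices carry a grad_fn that reaches sphere_radius.
        verts = self.src_mesh.verts_padded() * self.sphere_radius
        newmesh = self.src_mesh.update_padded(verts)
        image = self.renderer(meshes_world=newmesh, R=R, T=T)
        loss = torch.sum((image[..., 3] - self.image_ref) ** 2)
        return loss, image
```

An optimizer over `model.parameters()` would then only have `sphere_radius` to update.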
Thanks!