assign back to self.l2 in reset_parameters in RGATConv #10203
I can't quite explain this, and I do not (yet) have minimal code to reproduce it, but I was randomly getting all NaNs for the `l2` parameter of `RGATConv`. Randomly meaning: I would start my script several times, initialize 13 models with `RGATConv` layers, and end up with a different number of them having `l2` be all NaN on each run. This is my workaround. I don't know why this fix would remove the randomness, but while investigating it I found this bug: the result of `torch.full` in `reset_parameters` is dropped and never assigned back to `self.l2`.
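For illustration, here is a minimal sketch of the bug pattern being fixed. The shapes and fill value are placeholders, not RGATConv's actual initialization; the point is only that `torch.full` returns a new tensor rather than modifying anything in place.

```python
import torch
from torch.nn import Parameter

class Example(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder parameter standing in for RGATConv's self.l2.
        self.l2 = Parameter(torch.empty(1, 4))

    def reset_parameters(self):
        # Bug pattern: torch.full() returns a *new* tensor; without an
        # assignment the result is dropped and self.l2 keeps whatever
        # (possibly uninitialized) values it already had.
        torch.full(self.l2.size(), 0.5)

        # Fix: assign the result back, or fill the parameter in place.
        self.l2.data = torch.full(self.l2.size(), 0.5)
        # equivalently: self.l2.data.fill_(0.5)
```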