Add torch Dataset classes #120
base: divya/fix-normalization
Conversation
Codecov Report
Attention: Patch coverage is

@@                Coverage Diff                 @@
##       divya/fix-normalization   #120  +/- ##
================================================
+ Coverage              97.51%   97.61%  +0.09%
================================================
  Files                     38       39      +1
  Lines                   3781     3981    +200
================================================
+ Hits                    3687     3886    +199
- Misses                    94       95      +1

View full report in Codecov by Sentry.
Minor comments (except maybe the PAFs sim ordering), otherwise looks fantastic! Great work on the modularization and testing :)
This is the first PR for #119. It implements the torch Dataset modules for each model type. The user has the flexibility to choose the data pipeline framework, either LitData or vanilla torch Datasets, which is passed as an argument when initializing the sleap_nn.training.model_trainer.ModelTrainer class.
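As a rough illustration of the vanilla-torch path described above, here is a minimal sketch of a map-style torch Dataset and its use with a DataLoader. The class name ConfmapsDataset and the sample keys are hypothetical, chosen only for illustration; they are not the actual sleap_nn API added in this PR.

```python
# Hypothetical sketch of a per-model torch Dataset (not the real sleap_nn code).
from typing import Dict, List

import torch
from torch.utils.data import Dataset, DataLoader


class ConfmapsDataset(Dataset):
    """Map-style Dataset yielding image / confidence-map sample dicts."""

    def __init__(self, samples: List[Dict[str, torch.Tensor]]):
        # Samples are assumed to be preprocessed (e.g. normalized) tensors.
        self.samples = samples

    def __len__(self) -> int:
        return len(self.samples)

    def __getitem__(self, idx: int) -> Dict[str, torch.Tensor]:
        return self.samples[idx]


# Usage: wrap in a vanilla torch DataLoader (the non-LitData path).
samples = [
    {"image": torch.zeros(1, 64, 64), "confmaps": torch.zeros(3, 32, 32)}
    for _ in range(4)
]
loader = DataLoader(ConfmapsDataset(samples), batch_size=2)
batch = next(iter(loader))
print(tuple(batch["image"].shape))
```

The trainer could then select between this class and a LitData-backed pipeline based on the framework argument it receives at initialization.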