The primary bottleneck in our training pipeline is dataloader performance: training time per epoch is currently very high due to the IterDataPipe-based dataloader.
We should implement a PreLoader module (see SLEAP) in sleap-nn and benchmark the training time. Training time should be optimized to match or improve on current SLEAP performance.
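A PreLoader along these lines could overlap data preparation with training steps by filling a bounded buffer from a background thread. The sketch below is a minimal, hypothetical illustration of that pattern; the class name and interface are assumptions, not SLEAP's actual implementation:

```python
import queue
import threading


class PreLoader:
    """Minimal preloader sketch: a background thread pulls items from a
    (possibly slow) iterable into a bounded queue, so the consumer can
    overlap data preparation with training steps.

    Hypothetical example -- not the actual SLEAP/sleap-nn module.
    """

    _SENTINEL = object()  # marks the end of the underlying iterable

    def __init__(self, iterable, buffer_size=8):
        self._queue = queue.Queue(maxsize=buffer_size)
        self._thread = threading.Thread(
            target=self._fill, args=(iterable,), daemon=True
        )
        self._thread.start()

    def _fill(self, iterable):
        # Producer: blocks when the buffer is full, keeping memory bounded.
        for item in iterable:
            self._queue.put(item)
        self._queue.put(self._SENTINEL)

    def __iter__(self):
        # Consumer: yields prefetched items until the sentinel arrives.
        while True:
            item = self._queue.get()
            if item is self._SENTINEL:
                break
            yield item


# Usage: wrap any batch iterator; order is preserved.
batches = PreLoader(range(5), buffer_size=2)
print(list(batches))  # [0, 1, 2, 3, 4]
```

Benchmarking would then compare epoch time with and without the wrapper around the existing IterDataPipe iterator.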
We compared and benchmarked the performance of IterDataPipes against LitData, and found that LitData is much faster and more efficient for data processing. #80 lays out the plan for refactoring our current data pipeline.
Ref (current Torch issue): pytorch/data#1196