Releases: activeloopai/deeplake
1.2.0
Release Notes
- Adds support for dataset filtering (#460) (@AbhinavTuli); see the usage sketch after this list
- Greatly improves to_tensorflow performance (#481) (@AbhinavTuli)
- Benchmarks added for Hub 1.x (#486) (@benchislett)
- Fixes a bug that caused issues on Windows machines (#472) (@FayazRahman)
- Fixes a bug that caused issues with TF 2.4.0 (#478) (@DebadityaPal)
- Fixes a Docker build issue (#463) (@Darkborderman)
- Added a Chinese README (#458) (@EYH0602)
- Better automatic determination of dataset mode based on permissions (#466) (@edogrigqv2)
- CoLA dataset uploaded to Hub, upload script added to examples (#487) (@mynameisvinn)
- Fixes a bug with dataset slicing (#480) (@AbhinavTuli)
- Adds support for custom S3 endpoints (including MinIO) (#482) (@AbhinavTuli)
- Adds the ability to set a name for a dataset so it displays better in the visualizer (#468) (@AbhinavTuli)
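Below is a minimal sketch of how the filtering (#460) and to_tensorflow (#481) changes might be exercised together. The dataset tag, tensor keys, and the exact filter() signature are assumptions for illustration; it also assumes the filtered view still exposes to_tensorflow(), so the real API may differ.

```python
import hub

# Open a public dataset by tag (tag and key names are illustrative).
ds = hub.Dataset("activeloop/mnist")

# Dataset filtering (#460): keep only samples whose label equals 0.
# Assumes filter() takes a per-sample predicate and returns a lazy view;
# the exact signature may differ.
zeros = ds.filter(lambda sample: int(sample["labels"].compute()) == 0)

# to_tensorflow (#481): convert the (filtered) dataset to a tf.data.Dataset.
tf_ds = zeros.to_tensorflow().batch(32)

for batch in tf_ds.take(1):
    print({k: v.shape for k, v in batch.items()})
```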
1.1.3
Fixes an issue in to_pytorch when using a dataset that the user doesn't own.
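For context, here is a minimal sketch of the to_pytorch path that this fix touches, assuming the Hub 1.x Dataset API; the tag, mode, and field names are illustrative.

```python
import hub
import torch

# Open another user's public dataset in read-only mode (tag is illustrative).
ds = hub.Dataset("activeloop/mnist", mode="r")

# Wrap it for PyTorch and feed it to a standard DataLoader.
loader = torch.utils.data.DataLoader(ds.to_pytorch(), batch_size=32)

for batch in loader:
    # Field names depend on the dataset schema; "image" is illustrative.
    print(batch["image"].shape)
    break
```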
1.1.0
Release Notes
- Custom S3 storage layer that is 5-10x faster than S3FS
- Faster PyTorch dataset loading built on the current chunking logic
- Fixed caching by using an in-memory, per-process cache instead of LMDB
- Better exception handling for dataset loading, shape and type checks, and casting
- Added examples, tutorials, and better GitHub issue handling
- Added the ability to attach additional information to a dataset, such as a description, license, and citation
- Native support for calling .compute() mid-chain on nested tensors; see the sketch below
Contributors include: @edogrigqv2 @AbhinavTuli @mynameisvinn @Anselmoo @sparkingdark @sanchitvj @Atom-101
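A minimal sketch of the .compute() usage mentioned in the notes above, assuming a Hub 1.x dataset with image and label tensors; the tag and key names are illustrative.

```python
import hub

# Open a dataset by tag (tag and key names are illustrative).
ds = hub.Dataset("activeloop/mnist")

# .compute() materializes a lazy slice into a NumPy array; as of 1.1.0 it
# can also be called on intermediate views of nested tensors.
image = ds["image", 0].compute()
label = ds["label", 0].compute()
print(image.shape, int(label))
```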
Release v1.0.7
Private dataset support
Improved error handling and exceptions
Test coverage increased from 73% to 80%
Various bug fixes
Transforms are ~2x faster, so the from_x converters run faster as well
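For context, here is a minimal sketch of a transform pipeline of the kind this speedup affects, assuming the Hub 1.x @hub.transform decorator, hub.schema.Tensor, and the .store() call; the schema, sample data, and output path are illustrative.

```python
import numpy as np
import hub
from hub.schema import Tensor

# Illustrative output schema: a variable-length 1-D float tensor per sample.
schema = {"data": Tensor(shape=(None,), dtype="float64", max_shape=(100,))}

@hub.transform(schema=schema)
def double(sample):
    # Each call receives one input sample and returns one output sample.
    return {"data": np.asarray(sample, dtype="float64") * 2}

# Apply the transform lazily over a list of inputs, then materialize and
# store the result (the output path is illustrative).
samples = [np.arange(10, dtype="float64") for _ in range(4)]
ds = double(samples).store("./doubled_dataset")
```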
Version 1.0.6
Fixes some segmentation issues and RAM issues in transforms