Chunk download #429
Conversation
So @horheynm, just so I understand. The logic now computes the number of chunks to be downloaded, uses concurrency to download all the chunks asynchronously, and the last job is there to "concatenate" all the chunks into a single file, right?
Force-pushed from 3a359a5 to 9e69304
looks great @horheynm, very easy to follow. Would be great to add some simple unit tests to confirm the chunking is happening - e2e functionality is definitely thoroughly covered since every other pathway uses download
yes
Commits:
* chunk download, break down into 10
* lint
* threads download
* draft
* chunk download draft
* job based download and combining/deleting chunks
* delete old code
* lint
* fix num jobs if file_size is less than the chunk size
* doc string and return types
* test
* lint
Description
Download files in chunks when the endpoint supports chunked downloads.
Problem
Downloading large files tends to fail.
Solution
Add chunked download; if a chunk fails, re-download it, retrying up to N times.
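A minimal sketch of the per-chunk retry idea, assuming a requests-based client; the download_chunk helper, the MAX_RETRIES constant, and the byte-range handling are illustrative assumptions, not the PR's actual code:

import requests

MAX_RETRIES = 3  # illustrative stand-in for N

def download_chunk(url: str, start: int, end: int, dest_path: str) -> None:
    # Request only the byte range [start, end] of the file.
    headers = {"Range": f"bytes={start}-{end}"}
    for attempt in range(MAX_RETRIES):
        try:
            response = requests.get(url, headers=headers, stream=True, timeout=30)
            response.raise_for_status()
            with open(dest_path, "wb") as chunk_file:
                for data in response.iter_content(chunk_size=1024 * 1024):
                    chunk_file.write(data)
            return  # chunk downloaded successfully
        except requests.RequestException:
            if attempt == MAX_RETRIES - 1:
                raise  # give up after N failed attempts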
Design
Use threads to speed up downloads. Download speed is limited by network bandwidth and disk write speed; disk write speed varies mainly with overall system load, so per-chunk write time can be treated as roughly constant.
Any download is a job. For a chunked download, the chunk files must also be combined into the final file and then deleted, so that work is a job as well.
We use two job queues, plus an outer queue that runs them in order.
Example:
from queue import Queue

job_queue1 = Queue()
download_job1 = Job(...)  # download chunk 1
download_job2 = Job(...)  # download chunk 2
job_queue1.put(download_job1)  # Queue.put returns None, so puts cannot be chained
job_queue1.put(download_job2)

job_queue2 = Queue()
combine_job1 = Job(...)  # combine chunks into the final file, then delete them
job_queue2.put(combine_job1)

job_queues = Queue()
job_queues.put(job_queue1)
job_queues.put(job_queue2)  # run this queue only after the previous job_queue is done, to guarantee all chunks exist before combining
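A minimal sketch of how the nested queues might be drained, assuming each Job exposes a run() method; the run_job_queues function and the ThreadPoolExecutor usage are illustrative assumptions, not the PR's actual code:

from concurrent.futures import ThreadPoolExecutor, wait
from queue import Queue

def run_job_queues(job_queues: Queue, num_threads: int = 10) -> None:
    # Drain the outer queue sequentially: job_queue2 (combine/delete) only
    # starts after every job in job_queue1 (chunk downloads) has finished.
    while not job_queues.empty():
        job_queue = job_queues.get()
        with ThreadPoolExecutor(max_workers=num_threads) as executor:
            # Jobs within one queue run concurrently on the thread pool.
            futures = [
                executor.submit(job_queue.get().run)
                for _ in range(job_queue.qsize())
            ]
            wait(futures)  # block until every job in this queue completes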
Usage:
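A hypothetical usage sketch; the download_file import path, its parameter names, and the URL are assumptions for illustration and may not match the actual sparsezoo API:

# Hypothetical usage; import path and signature are assumptions.
from sparsezoo.utils import download_file

download_file(
    url_path="https://example.com/files/model.onnx",  # illustrative URL
    dest_path="./model.onnx",
)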
Testing
Basic URL mock and download count; actual download behavior is covered by the e2e tests.
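A sketch of what such a unit test might look like, mocking requests.get and counting range requests; it exercises the illustrative download_chunk helper sketched above, not the PR's actual code:

from unittest import mock

def test_chunked_download_issues_one_request_per_chunk(tmp_path):
    fake_response = mock.MagicMock()
    fake_response.raise_for_status.return_value = None
    fake_response.iter_content.return_value = [b"data"]
    with mock.patch("requests.get", return_value=fake_response) as mocked_get:
        chunk_size = 1024 * 1024
        for i in range(3):  # pretend the file splits into 3 chunks
            start = i * chunk_size
            download_chunk(
                "https://example.com/files/model.onnx",  # illustrative URL
                start,
                start + chunk_size - 1,
                str(tmp_path / f"chunk_{i}"),
            )
    assert mocked_get.call_count == 3  # one range request per chunk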