[Feature Request] Allow more concurrent requests/threads for EXPORT and "Watching for changes"? #315
Comments
Try this external tool in Python. It works fine and is much faster.
Interesting, I'll take a look, although I would ideally like to keep the functionality within the Google Drive addon for ease of use. *UPDATE - Also, this code does not make use of multiple threads, correct? Cheers,
I have 1,489 TV shows with 58,000+ episodes and 1,400+ movies in total in my Google Drive library. In some tests I did, I don't remember the exact time, but I can guarantee that it took less than 2h30m.
I have my own private fork of
It takes about 10-20 minutes to do a full export of 25,000 videos (not including TMDB scraping). I'll see about making a pull request with it here without having to change too much. My personal fork is changed a lot to cater to my specific needs.
I made your "pull request" changes and it was much faster creating just the .strm files. @0o120, thank you so much.
I'm not sure I could make that part faster, since it's actually downloading the .nfo file, and that requires its own request to get the file. Nothing is really being downloaded when the .strm files are created; they are just generated from the Google Drive file ID, so it's basically instant. The only thing I can suggest that would make it faster is having all your NFOs in 1 zip file, then making your own fork of the clouddrive addon that downloads that zip file and unpacks it after it's downloaded. Or implement threading somewhere in the clouddrive addon, which might take some work, and I'm honestly not sure if that would work out okay. That's all I can think of... Sorry.

If I get a chance, I'll look into threads and see if it's a viable solution, then add it as an option and allow the number of threads to be set. No promises though. There are issues you can run into, such as Google Drive API limits, depending on how many threads you have going. If you want to look into it yourself, the .nfo download is happening here after
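For what it's worth, a minimal sketch of that threading idea could look like the code below, with a hypothetical download_nfo() helper standing in for the addon's actual per-file request:

```python
# Sketch only: download_nfo() is a hypothetical stand-in for the
# addon's real single-file NFO download (one Drive request per file).
import concurrent.futures

def download_nfo(file_id, dest_path):
    # Placeholder: fetch the .nfo contents for file_id from Google
    # Drive and write them to dest_path.
    pass

def download_all_nfos(jobs, max_threads=3):
    # jobs: iterable of (file_id, dest_path) pairs.
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_threads) as pool:
        futures = [pool.submit(download_nfo, fid, path) for fid, path in jobs]
        for future in concurrent.futures.as_completed(futures):
            future.result()  # Surface any download error immediately.
```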
So I got threading to work and tested it with some NFOs, but any more than 3 threads didn't make a difference. Here are my test results:
3 threads seems to be the sweet spot and shaves off about half the time it normally takes to download them all. If you're interested, I could put the code up for you, but it's experimental and requires testing on your part to make sure it works as intended. If you're downloading a ton of NFOs, you could also run into some Google Drive API limits, which could cause issues when using threads.
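If anyone does push past 3 threads, the usual way to soften those API limits is exponential backoff on rate-limit responses. A rough sketch, with a hypothetical HttpRateLimitError marking an HTTP 403/429 reply from the Drive API:

```python
# Sketch only: HttpRateLimitError is a hypothetical exception raised
# when the Drive API returns an HTTP 403/429 rate-limit response.
import random
import time

class HttpRateLimitError(Exception):
    pass

def with_backoff(call, max_retries=5):
    # call: zero-argument callable that performs one Drive request.
    for attempt in range(max_retries):
        try:
            return call()
        except HttpRateLimitError:
            # Wait 1, 2, 4, 8, 16 seconds plus jitter before retrying.
            time.sleep(2 ** attempt + random.random())
    return call()  # Last attempt; let any error propagate.
```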
@0o120 @klyco If any of you are feeling daring, try my Google Drive addon and let me know how you find the STRM export performance. https://github.com/JDRIVO/gDrive
@JDRIVO - Yes, I would be honored to test it! :-) I just need a few days as I am travelling. Once I test it, I will post my thoughts.
@klyco I admire your bravery. |
Hi there Carlos!
This is Ken from the Kodi forums :-) Thanks again so much for this brilliant addon!
I currently have a TV and movie library of tens of thousands of files. When I export them, it takes a very long time (like 18-24 hours) on a Windows computer with a 12th Gen i5 and 16 GB of RAM.
Also, when Google Drive is watching for changes and there are a load of them, it is slow.
I suspect the bottleneck is the requests to Google.
Is it possible to have a setting where we can choose how many concurrent requests/threads are used for the EXPORT and for Watching Changes?
Alternatively, perhaps the fact that the TVShows and MOVIES are in the same export db slows it down? Maybe we could split it into 2 databases? Although that may be irrelevant.
Thanks in advance for your consideration!
Ken