Terribly slow for big files #6
Comments
This is one of the items I definitely plan to work on.
It just worked well on a 3.3 GB backup. Not fast, but reliable, and sometimes that matters more, especially where data is concerned, doesn't it?
I tried it on a 20 GB file; it was big, which is why I wanted to split it, and it was very slow. I think that is because the script uses sed to step through the entire file for each table it extracts, at least in my use case. Instead, it would have to capture the header/environment statements once and then write out each table as it is encountered, in a single pass through the file, which would speed things up considerably (a rough sketch of that single-pass idea is below). But that is a non-trivial reworking of the way the script processes things.
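A minimal sketch of that single-pass idea, not the script's actual code: it assumes the standard `-- Table structure for table` markers that mysqldump writes, and the `out/` directory and file names are illustrative.

```sh
#!/bin/sh
# Single-pass splitter sketch: reads a mysqldump stream on stdin and writes
# one .sql file per table into out/, plus the dump header into out/header.sql.
# Relies on the "-- Table structure for table `name`" markers that mysqldump
# emits; the marker format and output layout are assumptions.
mkdir -p out
awk '
  /^-- Table structure for table / {
    if (out != "") close(out)   # do not keep every output file open
    split($0, parts, "`")       # table name sits between backticks
    out = "out/" parts[2] ".sql"
  }
  {
    if (out == "") print > "out/header.sql"   # everything before the first table
    else           print > out
  }
'
```

Saved as, say, `split-dump.sh` (a hypothetical name), it would be run once over the whole dump with `./split-dump.sh < dump.sql`, so nothing ever rescans the file per table.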
I still haven't had a chance to change the script logic to extract all tables in one pass and write each to a file if it passes the filter. Until that happens, I guess it is best to extract all the tables and pick the ones you need. Thank you.
(mistakenly closed)
I solved this some time ago by switching to mydumper/myloader. It is much faster and writes each table to its own file. It might work for others.
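For reference, a minimal mydumper/myloader invocation might look like the following; the database name, output directory, and thread count are placeholders, and exact flags can vary between versions.

```sh
# Dump: one file per table, 4 parallel threads, compressed output (example values).
mydumper --database mydb --outputdir /backups/mydb --threads 4 --compress

# Restore from that directory, again in parallel.
myloader --directory /backups/mydb --threads 4 --overwrite-tables
```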
Worked OK, but so slow. Initially I took a full database dump, about 1 GB compressed / 8 GB uncompressed, in less than 30 minutes. Splitting the uncompressed dump into uncompressed per-database files took easily 10 hours or more using this tool.
I'm trying to use this script on a 70 GB compressed file and it's terribly slow. It decompresses the whole source file again and again for every table, and it runs sed over the whole file every time.
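To avoid repeated decompression, the archive could be decompressed exactly once and streamed through a single-pass splitter such as the sketch above; the file name and the `split-dump.sh` wrapper are placeholders, not the script's own interface.

```sh
# Decompress once and split in a single pass (split-dump.sh is the
# hypothetical stdin-reading splitter sketched earlier in this thread).
zcat dump.sql.gz | ./split-dump.sh
```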