Bug: `evaluate!` crashes when called several times in a row when acceleration is used
#788
Comments
Thanks for reporting. Interesting issue. Unfortunately I cannot yet reproduce it. The issue appears related to ProgressMeter; it looks like we are not using the same version. Can you please post the output of Also, if you know how to do this, can you pin your ProgressMeter version to 1.5 and see if you still get an error? (For example, from the REPL you could do
Thanks for your answer. The result of
@ablaom After pinning the ProgressMeter version to 1.5, the error disappeared.
@irublev Great, thanks. So the workaround is to pin ProgressMeter to version 1.5. Preliminary investigation: it looks like ProgressMeter introduced a new check that sometimes trips the code. The stack trace beginning this thread points to the relevant code in version 1.6.2 of ProgressMeter. The relevant MLJBase call is here: https://github.com/alan-turing-institute/MLJBase.jl/blob/f04698bc62dd8876b53326aec38758ab7bd373c4/src/resampling.jl#L785 . Something about our input is generating an The fact that the error only occurs sometimes smells of something not being thread-safe. @OkonSamuel It would be great if you have a chance to look at this.
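For reference, the pinning workaround can be applied from the Pkg API. This is just a sketch of the steps described above; the version bound comes from the comments in this thread:

```julia
using Pkg

# Pin ProgressMeter to the last known-good release line (1.5.x), so the
# resolver does not pull in the 1.6.x check that trips evaluate!.
Pkg.pin(PackageSpec(name="ProgressMeter", version="1.5"))

Pkg.status("ProgressMeter")  # confirm the pinned version
```

The same can be done from the Pkg REPL mode with `pin ProgressMeter@1.5`.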
For the record, this is turning up in MLJTuning as well, in a non-multi-threading context: https://github.com/alan-turing-institute/MLJTuning.jl/runs/2707686278
@irublev It seems that the issue was with Can you update your environment and, ensuring ProgressMeter is at 1.7.1, see if you can still reproduce the failure? Thanks for your patience. cc @OkonSamuel
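Updating and confirming the fixed release can be sketched as follows (assuming ProgressMeter was previously pinned as in the workaround above):

```julia
using Pkg

# Remove any earlier pin, then update and confirm the fixed release.
Pkg.free("ProgressMeter")      # only needed if ProgressMeter was pinned
Pkg.update("ProgressMeter")
Pkg.status("ProgressMeter")    # should now show v1.7.1 or later
```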
@ablaom Thanks for your help; it seems all is fixed by updating And maybe this is not the right place to ask at all, but could you please take a look at JuliaAI/MLJLinearModels.jl#98? I'd like to investigate the reason for the significantly slower performance of LogisticClassifier compared to LogisticRegression in scikit-learn in Python (I'd note that some other models, such as DecisionTreeClassifier, perform better than scikit-learn). I do not understand how to proceed: it is impossible to configure the solver I used without modifying the code (to make the comparison fair). But first of all, I do not understand where the problem may lie: in MLJLinearModels or in Optim.jl itself? All I'd like to do for now is ask for your advice: perhaps it would be better to create an issue not only in MLJLinearModels but somewhere else, just to attract more attention from the community? Thank you very much in advance.
I'm afraid I would have nothing to add beyond what has been posted at the referenced issue by the author of
Describe the bug
When `evaluate!` is called several times in a row, it crashes with the following stack trace:
To Reproduce
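The original reproduction code was not captured here. A minimal sketch of the kind of call that triggers the issue, assuming a LogisticClassifier from MLJLinearModels and thread-based acceleration (the model, dataset, and resampling settings are illustrative, not the reporter's exact setup):

```julia
using MLJ

# MLJLinearModels is assumed here; the crash concerns evaluate! itself.
LogisticClassifier = @load LogisticClassifier pkg=MLJLinearModels

X, y = @load_iris  # any small classification dataset
mach = machine(LogisticClassifier(), X, y)

# Calling evaluate! repeatedly with thread-based acceleration is what
# reportedly triggers the ProgressMeter-related crash.
for _ in 1:5
    evaluate!(mach,
              resampling=CV(nfolds=6),
              measure=log_loss,
              acceleration=CPUThreads())
end
```

Run with multiple threads (e.g. `julia --threads 12`) so that `CPUThreads()` actually parallelizes the folds.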
Expected behavior
The behaviour should be the same with and without acceleration, with no crashes.
Additional context
The code was run on Windows 10 with Julia Version 1.6.1 (2021-04-23); Julia was launched with 12 threads:
Versions
MLJ v0.16.4
MLJBase v0.18.6
MLJLinearModels v0.5.4