
[ENH] Probabilities calibration #1128

Open
annapintara opened this issue Feb 25, 2025 · 1 comment

Comments

@annapintara

annapintara commented Feb 25, 2025

Scores returned by the predict_proba method after applying undersampling or cost-sensitive learning are biased: they do not equal the true class probabilities and need calibration.

However, there is a simple transformation to calibrate such scores, derived from Bayes' rule, as described in the following article: https://ieeexplore.ieee.org/abstract/document/7376606 (over 700 citations). It would be great to implement this method.
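For reference, the cited article derives a closed-form correction from Bayes' rule. A minimal sketch, assuming `beta` is the probability that each majority-class (negative) sample is kept during undersampling (the helper name is hypothetical):

```python
import numpy as np

def calibrate_undersampled_proba(p_s, beta):
    """Map biased scores back to posterior probabilities.

    ``p_s`` are predict_proba outputs of a classifier trained after
    undersampling the negative class, where each negative sample was
    kept with probability ``beta``. Inverting Bayes' rule gives:

        p = beta * p_s / (beta * p_s - p_s + 1)
    """
    p_s = np.asarray(p_s, dtype=float)
    return beta * p_s / (beta * p_s - p_s + 1)

# e.g. a balanced resample of 100 positives / 900 negatives keeps
# each negative with probability beta = 100 / 900
print(calibrate_undersampled_proba([0.5, 0.9], beta=100 / 900))
```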

@glemaitre
Member

I assume this is the transform applied in this PR: #1077

An alternative is to apply one of the calibration methods: https://scikit-learn.org/stable/modules/calibration.html
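A rough sketch of that alternative, assuming an imbalanced-learn pipeline wrapped in scikit-learn's CalibratedClassifierCV (the dataset and estimator choices are only illustrative):

```python
from imblearn.pipeline import make_pipeline
from imblearn.under_sampling import RandomUnderSampler
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Imbalanced toy data: roughly 5% positives.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# The pipeline undersamples only during fit, so its raw
# predict_proba is biased toward the minority class.
model = make_pipeline(RandomUnderSampler(random_state=0), LogisticRegression())

# CalibratedClassifierCV fits a calibrator on held-out folds that keep
# the original class ratio, which corrects the bias.
calibrated = CalibratedClassifierCV(model, method="isotonic", cv=5)
calibrated.fit(X_train, y_train)
proba = calibrated.predict_proba(X_test)[:, 1]
```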
