Docstring update for L2 penalty in SparseLogisticRegression #281
Conversation
This reverts commit 4563f22.
Docstring update for ElasticNet in SparseLogisticRegression (completes scikit-learn-contrib#244)
Hi @floriankozikowski, you should avoid sending your PRs from your main branch; this will make your life difficult in the long run, as your main and upstream/main will diverge. For this PR it's OK, but afterwards you should delete your local main branch.
The linter fails because you have trailing whitespace. Add …
skglm/estimators.py (outdated review thread)
.. math::
    \frac{1}{n_{\text{samples}}} \sum_{i=1}^{n_{\text{samples}}}
    \log\left(1 + \exp(-y_i x_i^T w)\right)
    + \alpha \cdot \left( \text{l1_ratio} \cdot \|w\|_1
    + (1 - \text{l1_ratio}) \cdot \|w\|_2^2 \right)
Go without the \cdot for lighter formulas; also, the squared L2 term should be divided by 2 (check the Enet docstring).
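Applying both suggestions, the corrected objective would read roughly as below. This is a sketch based on the review comment; the merged diff has the authoritative wording.

```latex
% Sketch: \cdot removed and the squared L2 term divided by 2,
% as requested in the review (matching the Enet docstring convention).
\frac{1}{n_{\text{samples}}} \sum_{i=1}^{n_{\text{samples}}}
\log\left(1 + \exp(-y_i x_i^T w)\right)
+ \alpha \left( \text{l1\_ratio} \, \|w\|_1
+ \frac{1 - \text{l1\_ratio}}{2} \|w\|_2^2 \right)
```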
The pytest failure is due to the R install, not to this PR. LGTM, merging.
Context of the PR
This PR finalizes and replaces #244, which had stalled. It adds ElasticNet regularization support to skglm.SparseLogisticRegression via the penalty="l1_plus_l2" option and the l1_ratio parameter. All technical steps were completed in previous PRs; this one only updates the docstring. A usage sketch is given below.
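For illustration, here is a minimal, hypothetical usage sketch of the feature this docstring describes. The alpha and l1_ratio values are made up, and the exact constructor signature should be checked against the skglm documentation:

```python
import numpy as np
from skglm import SparseLogisticRegression

# Toy binary classification data with labels in {-1, 1},
# matching the y_i in the docstring's objective.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = np.where(rng.standard_normal(50) > 0, 1.0, -1.0)

# ElasticNet-regularized sparse logistic regression:
# l1_ratio=1.0 is pure L1, l1_ratio=0.0 pure L2, in-between mixes both.
clf = SparseLogisticRegression(alpha=0.1, l1_ratio=0.7)
clf.fit(X, y)
print(clf.coef_)  # sparse coefficient vector
```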
Contributions of the PR
Updated the class-level docstring and the l1_ratio parameter documentation of SparseLogisticRegression
Checks before merging PR