Docstring update for L2 penalty in SparseLogisticRegression #281


Merged 10 commits on Apr 2, 2025

Conversation

floriankozikowski
Contributor

Context of the PR

This PR finalizes and replaces #244, which had stalled. It adds ElasticNet regularization support to skglm.SparseLogisticRegression via the penalty="l1_plus_l2" option and the l1_ratio parameter. All technical steps were completed in previous PRs; this one is just a docstring update.

Contributions of the PR

Updated the class-level docstring and the l1_ratio parameter documentation of SparseLogisticRegression

Checks before merging PR

  • [yes] added documentation for any new feature
  • [no, already pushed in a previous PR] added unit tests
  • [yes] edited the what's new

@mathurinm
Collaborator

Hi @floriankozikowski, you should avoid sending your PRs from your main branch; this will make your life difficult in the long run, as your main and upstream/main will diverge.
Check this out:
https://github.com/mathurinm/github-assignment/?tab=readme-ov-file#summary-how-to-contribute-to-an-existing-repository

For this PR it's OK, but afterwards you should delete your local main branch and recreate it from upstream/main.

@mathurinm
Collaborator

The linter fails because you have trailing whitespace. Add `"files.trimTrailingWhitespace": true` to your user settings in VS Code.
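In VS Code this setting lives in the user `settings.json` file (a minimal fragment; any other settings you already have stay alongside it):

```json
{
    "files.trimTrailingWhitespace": true
}
```

With this enabled, VS Code strips trailing whitespace on every save, so the linter check passes without manual cleanup.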

.. math::
    \frac{1}{n_{\text{samples}}} \sum_{i=1}^{n_{\text{samples}}}
    \log\left(1 + \exp(-y_i x_i^T w)\right)
    + \alpha \cdot \left( \text{l1_ratio} \cdot \|w\|_1 +
    (1 - \text{l1_ratio}) \cdot \|w\|_2^2 \right)

Go without the `\cdot`s for lighter formulas; also, the squared L2 term should be divided by 2 (check the Enet docstring).
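For reference, the objective being documented can be sketched in pure Python. This is a hypothetical helper, not part of skglm, written with the corrected form the reviewer asks for (squared L2 term divided by 2):

```python
import math

def enet_logreg_objective(w, X, y, alpha, l1_ratio):
    """Mean logistic loss plus an ElasticNet penalty on w.

    Computes 1/n * sum(log(1 + exp(-y_i * x_i.w)))
           + alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio)/2 * ||w||_2^2)
    """
    n = len(X)
    loss = sum(
        math.log1p(math.exp(-yi * sum(xij * wj for xij, wj in zip(xi, w))))
        for xi, yi in zip(X, y)
    ) / n
    penalty = alpha * (
        l1_ratio * sum(abs(wj) for wj in w)
        + (1 - l1_ratio) / 2 * sum(wj * wj for wj in w)
    )
    return loss + penalty
```

With l1_ratio=1 this reduces to the pure L1-penalized logistic loss; with l1_ratio=0 it is the pure (halved) squared-L2 penalty, matching the convention the reviewer points to.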

@mathurinm mathurinm changed the title Docstring update for ElasticNet in SparseLogisticRegression (completes #244) Docstring update for L2 penalty in SparseLogisticRegression Apr 2, 2025
@mathurinm
Collaborator

The pytest failure is due to the R install, not to this PR. LGTM, merging.

@mathurinm mathurinm merged commit 41792a0 into scikit-learn-contrib:main Apr 2, 2025
3 of 4 checks passed