
Include ElasticNet-like regularization for SparseLogisticRegression #244

Open
hoodaty wants to merge 3 commits into main

Conversation

hoodaty commented Apr 10, 2024

Context of the PR

Adds an ElasticNet-like regularization for SparseLogisticRegression.

Contributions of the PR

Added an L1_plus_L2 penalty (see the sketch below)
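
For context, here is a minimal sketch of the math behind such a combined penalty. The function names are illustrative, not skglm's actual API; the real `L1_plus_L2` class implements more of the penalty interface.

```python
import numpy as np

# Illustrative sketch of the elastic-net ("L1 plus L2") penalty and its
# proximal operator; names here are hypothetical.

def enet_value(w, alpha, l1_ratio):
    """Penalty value: alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio)/2 * ||w||_2^2)."""
    return alpha * (l1_ratio * np.sum(np.abs(w))
                    + (1 - l1_ratio) * 0.5 * np.sum(w ** 2))

def enet_prox(x, step, alpha, l1_ratio):
    """Coordinate-wise prox: soft-thresholding (L1 part) followed by
    multiplicative shrinkage (L2 part)."""
    thresh = step * alpha * l1_ratio
    soft = np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
    return soft / (1.0 + step * alpha * (1 - l1_ratio))
```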

Checks before merging PR

  • added documentation for any new feature
  • added unit tests
  • edited the what's new (if applicable)

Closes #231

mathurinm (Collaborator)

Hi @hoodaty, to ease reviewing we try to address only one issue per PR; your latest commit is for #231 rather than for #223, so can you revert its changes and open a separate PR for the changes required by #231? Thanks a lot!

hoodaty (Author) commented Apr 10, 2024

Sorry @mathurinm for the misunderstanding. I tried doing it; is it alright now?

hoodaty changed the title from "Altered Quadratic class for weights" to "Include ElasticNet-like regularization for SparseLogisticRegression" on Apr 10, 2024
@@ -1035,7 +1036,7 @@ def fit(self, X, y):
max_iter=self.max_iter, max_pn_iter=self.max_epochs, tol=self.tol,
fit_intercept=self.fit_intercept, warm_start=self.warm_start,
verbose=self.verbose)
-        return _glm_fit(X, y, self, Logistic(), L1(self.alpha), solver)
+        return _glm_fit(X, y, self, Logistic(), L1_plus_L2(self.alpha,self.l1ratio), solver)
Collaborator

This is not PEP8-compliant; by convention you need to add a space after the comma. You can configure your editor to do it automatically: https://github.com/mathurinm/github-assignment/?tab=readme-ov-file#vscode-configuration
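
For reference, the PEP8-compliant version of that call would simply add the space after the comma:

```python
return _glm_fit(X, y, self, Logistic(), L1_plus_L2(self.alpha, self.l1ratio), solver)
```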

@@ -1003,10 +1003,11 @@ class SparseLogisticRegression(LinearClassifierMixin, SparseCoefMixin, BaseEstim
Number of subproblems solved to reach the specified tolerance.
"""

-    def __init__(self, alpha=1.0, tol=1e-4, max_iter=20, max_epochs=1_000, verbose=0,
+    def __init__(self, alpha=1.0, l1ratio=0.5, tol=1e-4, max_iter=20, max_epochs=1_000, verbose=0,
Collaborator

You need to add a description of this parameter in the docstring above; also, call it l1_ratio for compatibility with sklearn.
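
For illustration, the docstring entry could mirror scikit-learn's wording; this exact text is a suggestion, not part of the PR:

```python
l1_ratio : float, default=0.5
    The ElasticNet mixing parameter, with ``0 <= l1_ratio <= 1``;
    ``l1_ratio=1`` corresponds to an L1 penalty (lasso) and
    ``l1_ratio=0`` to an L2 penalty (ridge).
```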

mathurinm (Collaborator) commented Apr 11, 2024

Yes, much better like this, thank you!

Can you add "Closes #231" in the first message of this PR, so that the corresponding issue is matched and closed automatically when this PR is merged?

As for any new feature, you need to add a new unit test to check that the implementation is correct.
A good test would be to check that we get the same results as scikit-learn's LogisticRegression (https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) when using penalty="elasticnet".
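
A sketch of what such a test could look like, assuming the parameter is renamed to `l1_ratio` and that the datafit is scaled by 1/n_samples so that sklearn's `C` maps to `1 / (n_samples * alpha)`; names and tolerances are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

from skglm import SparseLogisticRegression


def test_sparse_logreg_elasticnet():
    rng = np.random.RandomState(0)
    X = rng.randn(50, 10)
    y = 2 * (rng.randn(50) > 0) - 1  # labels in {-1, 1}

    alpha, l1_ratio = 0.1, 0.7
    clf = SparseLogisticRegression(
        alpha=alpha, l1_ratio=l1_ratio, tol=1e-10, fit_intercept=False)
    clf.fit(X, y)

    # sklearn solves C * sum(log-loss) + elastic-net penalty, hence the
    # mapping C = 1 / (n_samples * alpha) under the assumed 1/n_samples
    # datafit scaling.
    clf_sk = LogisticRegression(
        penalty="elasticnet", solver="saga", C=1 / (X.shape[0] * alpha),
        l1_ratio=l1_ratio, tol=1e-10, fit_intercept=False, max_iter=10_000)
    clf_sk.fit(X, y)

    np.testing.assert_allclose(clf.coef_, clf_sk.coef_, rtol=1e-4)
```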

mathurinm (Collaborator)

@hoodaty do you have any update on this?

Development

Successfully merging this pull request may close these issues.

ENH add support for L1 + L2 regularization in SparseLogisticRegression