Include ElasticNet-like regularization for SparseLogisticRegression #244
base: main
Conversation
Sorry @mathurinm for the misunderstanding. I tried doing it; is it alright now?
This reverts commit 4563f22.
@@ -1035,7 +1036,7 @@ def fit(self, X, y):
             max_iter=self.max_iter, max_pn_iter=self.max_epochs, tol=self.tol,
             fit_intercept=self.fit_intercept, warm_start=self.warm_start,
             verbose=self.verbose)
-        return _glm_fit(X, y, self, Logistic(), L1(self.alpha), solver)
+        return _glm_fit(X, y, self, Logistic(), L1_plus_L2(self.alpha,self.l1ratio), solver)
This is not PEP8 compliant: by convention you need to add a space after the comma. You can configure your editor to do it automatically: https://github.com/mathurinm/github-assignment/?tab=readme-ov-file#vscode-configuration
@@ -1003,10 +1003,11 @@ class SparseLogisticRegression(LinearClassifierMixin, SparseCoefMixin, BaseEstim
         Number of subproblems solved to reach the specified tolerance.
         """

-    def __init__(self, alpha=1.0, tol=1e-4, max_iter=20, max_epochs=1_000, verbose=0,
+    def __init__(self, alpha=1.0, l1ratio=0.5, tol=1e-4, max_iter=20, max_epochs=1_000, verbose=0,
You need to add a description of this parameter in the docstring above; also, call it l1_ratio
for compatibility with sklearn.
Yes, much better like this, thank you! Can you add "Closes #231" in the first message of this PR, so that the corresponding issue is matched and closed automatically when this is merged? As for any new feature, you need to add a new unit test to check that the implementation is correct.
@hoodaty do you have any update on this?
Context of the PR
Adds an ElasticNet-like regularization option for SparseLogisticRegression
Contributions of the PR
Added an L1_plus_L2 penalty
Checks before merging PR
Closes #231
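For context, the elastic net penalty combined here is alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio) / 2 * ||w||_2^2), whose proximal operator is soft-thresholding followed by a scaling. Below is a minimal NumPy sketch of that operator; the function name and signature are illustrative only and are not skglm's actual `L1_plus_L2` API:

```python
import numpy as np

def prox_l1_plus_l2(x, stepsize, alpha, l1_ratio):
    """Prox of alpha * (l1_ratio * ||.||_1 + (1 - l1_ratio) / 2 * ||.||_2^2).

    Illustrative sketch, not skglm's implementation.
    """
    # soft-thresholding handles the L1 part...
    shrunk = np.sign(x) * np.maximum(np.abs(x) - stepsize * alpha * l1_ratio, 0.0)
    # ...then a uniform scaling handles the squared-L2 part
    return shrunk / (1.0 + stepsize * alpha * (1.0 - l1_ratio))
```

As a sanity check (and a candidate shape for the requested unit test), with l1_ratio=1 this reduces to the plain L1 soft-thresholding prox, and with l1_ratio=0 to pure ridge-style shrinkage.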