Feature request: the LLA algorithm for general folded concave penalties #185
Thanks for the proposition and the references! We mostly need a better name than AdaptiveLasso, since we are actually targeting what you describe.
We'll try to choose a more suitable name then...
Great to hear! I agree reweighted L1 is a good name, i.e. it tells you exactly what the algorithm does. That being said, the stats literature tends to use LLA (e.g. see the references above). This is relevant because of the importance of the one-step version of this algorithm, which many users may want.
Hello @idc9, thanks for the interest. As far as the naming goes, I think IterativeReweightedLasso is more explicit and corresponds to what the algorithm actually does; we can alias the class as AdaptiveLasso, which seems to be the most popular name. It'll require a bit of explanation in the docstring.
Awesome! It might be worth adding SCAD and MCP to the defaults because of their nice statistical properties (and it's hard to find these penalties implemented in Python!)
I totally agree for MCP (though I find SCAD overrated: https://hal.inria.fr/hal-01267701v2/document, section 5.2). Moreover, in terms of history, I found an even earlier work where the iterative reweighting scheme was used for non-convex sparse regularization, in the signal/image community (if you know of an older one, that would be nice): "Sparse Multinomial Logistic Regression: Fast Algorithms and Generalization Bounds". It does not help with the naming though...
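For reference, the SCAD and MCP reweighting functions discussed above are just the penalty derivatives. Below is a minimal NumPy sketch of the two; the function names and the default tuning parameters (`gamma=3.0` for MCP, `a=3.7` for SCAD, the conventional choices from the literature) are illustrative, not part of any celer API:

```python
import numpy as np

def mcp_weight(beta, lam, gamma=3.0):
    # MCP derivative: p'_lam(t) = max(lam - t/gamma, 0) for t >= 0.
    # Small coefficients get the full Lasso penalty lam; coefficients
    # above gamma*lam are unpenalized on the next step.
    t = np.abs(beta)
    return np.maximum(lam - t / gamma, 0.0)

def scad_weight(beta, lam, a=3.7):
    # SCAD derivative (Fan & Li, 2001):
    #   lam                       if t <= lam
    #   (a*lam - t)_+ / (a - 1)   if t >  lam
    t = np.abs(beta)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))
```

Both functions vanish for large coefficients, which is what yields the reduced-bias, oracle-type behavior of the resulting estimators.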
Hi all, is there anything I can help with on this, or has most of the work already been done?
I am very happy to see that someone is implementing adaptive Lasso in Python (#169)! It would be great if celer also implemented the more general LLA algorithm for any folded concave penalty; see One-step sparse estimates in nonconcave penalized likelihood models (Zou and Li, 2008) and Strong oracle optimality of folded concave penalized estimation (Fan et al., 2014). The LLA algorithm is a mild but statistically very nice generalization of AdaptiveLasso.
The main differences between the general LLA algorithm and AdaptiveLasso are that the weights come from the derivative of a folded concave penalty (e.g. SCAD or MCP) rather than a fixed power of the initial coefficients, and that the reweighting can be iterated for several steps.
The LLA algorithm should be fairly straightforward to implement, though, granted, I'm not yet very familiar with the backend of celer.
LLA algorithm sketch

User input:
- a folded concave penalty p_lam (e.g. SCAD or MCP) with derivative p'_lam
- an initial estimate beta^(0) (e.g. a Lasso solution)
- a number of steps S (S = 1 gives the one-step estimator)

for s = 1, 2, ..., S:
- compute weights w_j = p'_lam(|beta_j^(s-1)|) for each feature j
- set beta^(s) to the solution of the Lasso problem with per-feature penalty weights w_j
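To make the loop above concrete, here is a minimal NumPy sketch of multi-step LLA with the MCP penalty. This is an illustration only, not celer's API: the weighted-Lasso solver is a deliberately naive cyclic coordinate descent, and all names (`lla_mcp`, `weighted_lasso`, `mcp_derivative`) are hypothetical:

```python
import numpy as np

def mcp_derivative(beta, lam, gamma=3.0):
    # Derivative of the MCP penalty: p'_lam(t) = max(lam - t/gamma, 0).
    return np.maximum(lam - np.abs(beta) / gamma, 0.0)

def weighted_lasso(X, y, weights, n_iter=200):
    # Naive cyclic coordinate descent for
    #   argmin_b  1/(2n) ||y - X b||^2 + sum_j weights_j |b_j|.
    # A zero weight leaves that coordinate unpenalized.
    n, p = X.shape
    beta = np.zeros(p)
    residual = y.astype(float).copy()
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            residual += X[:, j] * beta[j]           # remove j's contribution
            rho = X[:, j] @ residual / n
            beta[j] = np.sign(rho) * max(abs(rho) - weights[j], 0.0) / col_sq[j]
            residual -= X[:, j] * beta[j]           # add updated contribution
    return beta

def lla_mcp(X, y, lam, n_steps=1, gamma=3.0):
    # Step 0: plain Lasso (constant weights lam) as the initial estimate.
    beta = weighted_lasso(X, y, np.full(X.shape[1], lam))
    # LLA steps: reweight the L1 penalty by the MCP derivative at the
    # previous iterate, then re-solve the weighted Lasso.
    for _ in range(n_steps):
        beta = weighted_lasso(X, y, mcp_derivative(beta, lam, gamma))
    return beta
```

With `n_steps=1` this is the one-step estimator from Zou and Li (2008); large coefficients receive zero weight after the first step, which removes the Lasso's shrinkage bias on the selected support.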