Timeline for Updating the lasso fit with new observations
Current License: CC BY-SA 3.0
21 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| Feb 17, 2016 at 15:13 | history | edited | user603 | CC BY-SA 3.0 | deleted 4 characters in body |
| Jun 12, 2014 at 15:07 | history | edited | user603 | CC BY-SA 3.0 | added 331 characters in body |
| Jun 12, 2014 at 14:55 | history | edited | user603 | CC BY-SA 3.0 | added 12 characters in body |
| Jun 12, 2014 at 13:48 | history | bounty awarded | user2763361 | | |
| Jun 12, 2014 at 13:19 | history | edited | user603 | CC BY-SA 3.0 | added 62 characters in body |
| Jun 12, 2014 at 9:30 | history | edited | user603 | CC BY-SA 3.0 | deleted 1 character in body |
| Jun 12, 2014 at 9:23 | history | edited | user603 | CC BY-SA 3.0 | added 383 characters in body |
| Jun 12, 2014 at 9:10 | history | edited | user603 | CC BY-SA 3.0 | added 383 characters in body |
| Jun 12, 2014 at 8:58 | history | edited | user603 | CC BY-SA 3.0 | added 975 characters in body |
| Jun 12, 2014 at 8:48 | history | edited | user603 | CC BY-SA 3.0 | added 975 characters in body |
| Jun 12, 2014 at 8:39 | history | edited | user603 | CC BY-SA 3.0 | added 975 characters in body |
| Jun 12, 2014 at 7:41 | comment | added | user603 | | @user2763361: I recommend COIN-OR but any interior point solver should be able to do it. I think I understand what you mean now. I will update my answer during the day. |
| Jun 12, 2014 at 7:28 | comment | added | user603 | | @Kochede: no because all the points are used in the new optimization. The idea is that you have to visit less edges of the simplex before finding the good one because you already start from one close to the good one. |
| Jun 12, 2014 at 7:22 | comment | added | Kochede | | This method doesn't seem correct. Yes, it will start with last estimates of $\beta$'s, but it will then find new $\beta$'s that are optimal only for few newly added datapoints, not for the whole sample observed so far. So if you have added 10 new points then this will give you $\beta$'s optimal for these particular 10 points which may be very different from the whole sample $\beta$'s. |
| Jun 6, 2014 at 3:19 | comment | added | user2763361 | | Any libraries you know that can do this without needing to edit the source code? |
| May 10, 2011 at 16:13 | vote | accept | NPE | | |
| Nov 20, 2010 at 16:27 | comment | added | user603 | | @aix. Yes, it all depends on the implementation you use and the facilities you have access to. For example: if you have access to a good lp solver, you can feed it with the past optimal values of $\beta$ and it'll carry the 1-2 step to the new solution very efficiently. You should add these details to your question. |
| Nov 17, 2010 at 17:48 | comment | added | NPE | | I was looking to update the entire path. However, if there's a good way to do it for a fixed penalty ($\lambda$ in the formula below), this may be a good start. Is this what you are proposing? $$ \hat{\beta}^{lasso} = \underset{\beta}{\operatorname{argmin}} \left \{ {1 \over 2} \sum_{i=1}^N(y_i-\beta_0-\sum_{j=1}^p x_{ij} \beta_j)^2 + \lambda \sum_{j=1}^p |\beta_j| \right \} $$ |
| Nov 17, 2010 at 16:57 | comment | added | user603 | | @aix: do you want to update the whole lasso path or just the solution? (i.e. is the sparsity penalty fixed?). |
| Nov 17, 2010 at 11:34 | comment | added | NPE | | Thanks, but I am afraid I don't follow. LARS produces a piecewise-linear path (with exactly $p+1$ points for the least angles and possibly more points for the lasso.) Each point has its own set of $\beta$. When we add more observations, all the betas can move (except $\beta^0$, which is always $0_p$.) Please could you expand on your answer? Thanks. |
| Nov 17, 2010 at 10:59 | history | answered | user603 | CC BY-SA 2.5 | |
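The warm-start idea discussed in the comments above (and Kochede's objection, answered by refitting on the *full* sample) can be sketched in Python. The thread proposes an LP/interior-point solver such as COIN-OR; as a stand-in, scikit-learn's coordinate-descent `Lasso` exposes the same pattern via `warm_start=True`, which reuses the previous coefficients as the starting point of the next fit. The data and parameter values here are illustrative assumptions, not from the original thread:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p = 10
X = rng.normal(size=(100, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.0, 0.5]
y = X @ beta_true + 0.1 * rng.normal(size=100)

# Initial fit; warm_start=True means later calls to fit() start
# from the current coef_ instead of from zero.
model = Lasso(alpha=0.1, warm_start=True)
model.fit(X, y)

# New observations arrive. Crucially, refit on the WHOLE sample
# (old + new points), not just the new ones; the warm start only
# speeds up convergence, it does not change the optimum.
X_new = rng.normal(size=(10, p))
y_new = X_new @ beta_true + 0.1 * rng.normal(size=10)
X_all = np.vstack([X, X_new])
y_all = np.concatenate([y, y_new])
model.fit(X_all, y_all)
```

Because the optimum for the enlarged sample is typically close to the previous one, the solver needs far fewer iterations than a cold start; the same reasoning underlies warm-starting a simplex or interior-point LP solver from the previous optimal basis.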