  • Your professor is correct that it is shrinking relevant parameters, but so what? It only shrinks them to the extent that they are not contributing significantly to reducing the error. And why focus on doing proper variable selection? Shouldn't the focus be on reducing (test) error? Commented Dec 2, 2015 at 7:27
  • For most problems, yes, I would agree. However, for some problems (e.g., cancer detection with gene expression) it is super important to find which features are the contributing factors. P.S. I've since moved on from my postdoc since he is a moron. Machine learning ftw!!! Commented Dec 20, 2016 at 23:14
  • Spike and slab happens to be the gold standard in variable selection, and I also prefer to work with LASSO. @Sachin_ruk: the spike-and-slab prior can be implemented using variational Bayes too... Commented Sep 15, 2017 at 9:55
  • @SandipanKarmakar Could you post a link referring to spike and slab with variational Bayes? Commented Sep 17, 2017 at 11:18
  • Your question merges modelling [which prior?] and implementation [variational Bayes] issues. They should be addressed separately. Commented Oct 25, 2017 at 5:17
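The shrinkage behavior described in the first comment can be sketched in a few lines. This is a minimal pure-Python illustration (the toy data and the `lasso_cd` helper are my own, not from the thread): on an orthogonal design where only the first feature explains `y`, lasso coordinate descent shrinks the irrelevant second coefficient exactly to zero while barely touching the relevant one.

```python
def soft_threshold(rho, lam):
    """Soft-thresholding operator used in lasso coordinate descent."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_passes=50):
    """Coordinate descent for the lasso objective
    (1/2n) * ||y - Xw||^2 + lam * ||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_passes):
        for j in range(p):
            # Partial residual: y minus every feature's contribution except j's.
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / z
    return w

# Toy data: y = 2 * x1 exactly; x2 is orthogonal to both x1 and y,
# so it cannot help reduce the error.
X = [[1, 1], [2, -1], [3, -1], [4, 1]]
y = [2, 4, 6, 8]
w = lasso_cd(X, y, lam=0.1)
print(w)  # relevant coefficient near 2, irrelevant one exactly 0.0
```

This matches the point made above: the penalty only trades away coefficients that are not pulling their weight in reducing the (training) error, which is why shrinkage and variable selection are related but not identical goals.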