Timeline for State-of-the-art ensemble learning algorithm in pattern recognition tasks?
Current License: CC BY-SA 3.0
16 events
| When | Event | Action | By | Comment |
|---|---|---|---|---|
| Jun 11, 2020 at 14:32 | history | edited | CommunityBot | Commonmark migration |
| Jan 12, 2017 at 15:16 | history | edited | kjetil b halvorsen♦ | edited tags |
| Sep 11, 2016 at 6:12 | history | bounty ended | Erba Aitbayev | |
| Sep 11, 2016 at 6:12 | history | notice removed | Erba Aitbayev | |
| Sep 11, 2016 at 6:01 | vote | accept | Erba Aitbayev | |
| Sep 9, 2016 at 17:25 | comment | added | horaceT | The success and failure of an ensemble model is a function of the member models of the ensemble and the nature of the data. Ensemble works because the member models yield a degree of diversity. Your question is probably unanswerable without the specifics of both those models you put into your ensemble and the dataset in question. |
| Sep 9, 2016 at 17:03 | answer | added | jeandut | timeline score: 3 |
| Sep 8, 2016 at 7:15 | history | edited | Erba Aitbayev | edited tags |
| Sep 4, 2016 at 16:31 | comment | added | Alexey Grigorev | The answer is short: the one that gives the best CV score. Usually it's stacking. |
| Sep 3, 2016 at 16:55 | history | tweeted | twitter.com/StackStats/status/772115751926915072 | |
| Sep 3, 2016 at 16:18 | answer | added | Franck Dernoncourt | timeline score: 11 |
| Sep 3, 2016 at 14:15 | comment | added | Sangwoong Yoon | What I heard recently is that people love XGBoost and it showed really impressive performance on several Kaggle competitions. |
| Sep 3, 2016 at 12:33 | history | bounty started | Erba Aitbayev | |
| Sep 3, 2016 at 12:33 | history | notice added | Erba Aitbayev | Draw attention |
| Sep 1, 2016 at 12:48 | answer | added | Rob | timeline score: 2 |
| Sep 1, 2016 at 12:20 | history | asked | Erba Aitbayev | CC BY-SA 3.0 |
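horaceT's comment above notes that an ensemble works because its member models yield a degree of diversity. A minimal sketch of that idea is a majority-vote combiner over several base models' predictions; the three base-model prediction lists below are hypothetical, purely for illustration.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine base-model predictions for one sample by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical predictions from three diverse base models on five samples.
model_a = [1, 0, 1, 1, 0]
model_b = [1, 1, 1, 0, 0]
model_c = [0, 0, 1, 1, 1]

# Each model is individually imperfect, but where their errors are
# uncorrelated the per-sample majority can correct individual mistakes.
ensemble = [majority_vote(sample) for sample in zip(model_a, model_b, model_c)]
print(ensemble)  # [1, 0, 1, 1, 0]
```

Stacking, mentioned in Alexey Grigorev's comment, replaces this fixed vote with a meta-model trained (typically on out-of-fold predictions) to learn how to weight the base models.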