
I'm not completely sure about the bias/variance behavior of boosted decision trees (LightGBM in particular), so I wonder: would we generally expect a performance boost from creating an ensemble of multiple LightGBM models, as we do with Random Forest?


1 Answer


Usually you can tune a GBM to achieve a good bias/variance tradeoff on its own. You could set the GBM's hyperparameters so that it overfits and then bag several such models, but in most situations I wouldn't expect much, if any, gain over a single well-tuned GBM.

Somewhat related: Microsoft's InterpretML package implements their version of a GA2M model, involving several layers of bagging and boosting. There the point is to produce something more interpretable, so the boosting happens on only one or two features at a time (but in a cyclic fashion, so it's not totally independent).

