By default, emmeans uses the Kenward-Roger degrees-of-freedom approximation for lme4::lmer models. I would argue that for linear mixed models this is the way to go, and therefore a sensible default.
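You can check (and change) that package-level default yourself; a minimal sketch, assuming only that emmeans is installed and using its documented lmer.df option:

```r
library(emmeans)

## Package option governing degrees of freedom for lmer models;
## on a default installation this should report "kenward-roger":
get_emm_option("lmer.df")

## It can also be changed globally, e.g.:
## emm_options(lmer.df = "satterthwaite")
```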
You can request other methods via lmer.df=; here's confirmation that ggemmeans defaults to asymptotic (Wald) intervals:
```r
## Lower asymptotic CL from emmeans matches ggemmeans' conf.low:
emmeans(model, ~ Days, at = list(Days = 0:9), lmer.df = "asymptotic") |>
  summary() |>
  _[["asymp.LCL"]] |>
  all.equal(ggem[["conf.low"]])
#> TRUE

## Works the other way around too: request another method in ggemmeans
ggemmeans(model, "Days", lmer.df = "kenward-roger") |>
  _[["conf.low"]] |>
  all.equal(summary(em)[["lower.CL"]])
#> TRUE
```
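For completeness, "asymptotic" here simply means plain Wald intervals, i.e. estimate ± qnorm(.975) × SE. A quick sketch of that, reusing model from above:

```r
## Asymptotic intervals are Wald intervals: estimate +/- 1.96 * SE
em_asym <- summary(emmeans(model, ~ Days, at = list(Days = 0:9),
                           lmer.df = "asymptotic"))
all.equal(em_asym$asymp.LCL, em_asym$emmean - qnorm(.975) * em_asym$SE)
## should be TRUE
```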
It turns out that ggpredict calculates $t$-based intervals using the "residual degrees of freedom", which doesn't really make sense in a mixed model: it effectively treats all $n=180$ observations as independent, which they clearly are not. Here's what's going on exactly:
```r
rdf  <- df.residual(model)  ## = 176
crit <- qt(.975, rdf)

## Population-level predictions (re.form = NA) with standard errors:
preds <- predict(model, data.frame(Days = 0:9, Subject = 0),
                 allow.new.levels = TRUE, se.fit = TRUE, re.form = NA)

lower <- preds$fit - crit * preds$se.fit
upper <- preds$fit + crit * preds$se.fit

all.equal(lower, ggpred$conf.low, check.attributes = FALSE)
#> TRUE
```
To be sure, these intervals are a little more conservative than the asymptotic ones, but they're not really based on relevant mixed-model theory and considerably overestimate the actual degrees of freedom. I'd still go for KR or Satterthwaite via emmeans (which is what ggemmeans also uses internally).
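To put a rough number on that, you can compare the residual df to the Kenward-Roger df that emmeans actually uses. A sketch (the exact KR values depend on the fit, but they are on the order of the number of subjects, nowhere near 176):

```r
df.residual(model)  ## 176 "residual" df
## Kenward-Roger df used by emmeans, one per estimated mean:
summary(emmeans(model, ~ Days, at = list(Days = 0:9)))$df
```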