I was fitting several logistic regression models to a dataset in which the target variables were all binary. After fitting the models, I ran some simple code that uses the ROC curve to find the best threshold for each of them, and I noticed that every threshold I obtained coincided with the mean of the corresponding target variable (e.g., for a variable $y_i$ that is $16\%$ ones and the rest zeros, the ROC curve method suggested a threshold of $0.16$, and so on for the others).
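For reference, here is a minimal sketch of what that threshold-finding code looks like (on synthetic data with roughly $16\%$ positives; my real dataset differs, and maximizing Youden's $J$ is just one common way of picking the "best" point on the ROC curve):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

# Synthetic stand-in for my data: a binary target with ~16% ones
X, y = make_classification(n_samples=5000, weights=[0.84], random_state=0)

model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X)[:, 1]

# "Best" threshold = point on the ROC curve maximizing Youden's J = TPR - FPR
fpr, tpr, thresholds = roc_curve(y, probs)
best_threshold = thresholds[np.argmax(tpr - fpr)]

print(f"mean of y = {y.mean():.3f}, suggested threshold = {best_threshold:.3f}")
```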
I also fit other models in which class imbalance can be addressed by weighting each class, such as random forests and XGBoost, and for those the thresholds suggested by the ROC curve were more varied (see the sketch after the questions for how I set the weights). So my questions are: is the ROC curve's optimal threshold always equal to the mean of the target variable in the case of logit models? If so, why does this happen? (Some intuition for why it works would be enough.) Lastly, are there any other models where this happens?
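For completeness, this is roughly how I set the class weights (a sketch continuing with `X, y` from the snippet above; `scale_pos_weight` is XGBoost's analogue of sklearn's `class_weight`):

```python
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

# X, y as in the snippet above (~16% positives)
rf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X, y)

# XGBoost has no class_weight; scale_pos_weight (negatives/positives)
# plays the same role for the positive class
neg, pos = (y == 0).sum(), (y == 1).sum()
xgb = XGBClassifier(scale_pos_weight=neg / pos).fit(X, y)
```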