You might be interested in quantile regression. When you run a quantile regression, you get to decide how much high misses and low misses are penalized, and these do not have to be equal. You could fit a high quantile (say $\tau = 0.75$) so that the model tends to aim high.
Quantile regression minimizes the following loss function $L_{\tau}$ (often called the pinball or check loss), where $\tau \in (0, 1)$ is the quantile you want to estimate.
$$ l_{\tau}(y_i, \hat y_i) = \begin{cases} \tau\vert y_i - \hat y_i\vert, & y_i - \hat y_i \ge 0 \\ (1 - \tau)\vert y_i - \hat y_i\vert, & y_i - \hat y_i < 0 \end{cases}\\ L_{\tau}(y, \hat y) = \sum_{i=1}^n l_{\tau}(y_i, \hat y_i) $$
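In code, this is just a weighted absolute error. Here is a minimal NumPy sketch of $L_{\tau}$ (the function name `pinball_loss` is my own):

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    """Sum of l_tau over all observations, as defined above."""
    residual = np.asarray(y) - np.asarray(y_hat)
    # tau * |r| when the model misses low (r >= 0),
    # (1 - tau) * |r| when it misses high (r < 0)
    return np.sum(np.where(residual >= 0,
                           tau * residual,
                           (tau - 1) * residual))
```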

When $\tau=0.5$, low and high misses are penalized equally (the loss is just half the absolute error, so you recover median regression). If $\tau>0.5$, missing low incurs a more severe penalty than missing high, incentivizing your model to miss high rather than miss low.
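A quick way to see this is to minimize the loss over a single constant prediction: the minimizer is the empirical $\tau$-quantile of the data, so raising $\tau$ drags the prediction upward. A rough self-contained check using a grid search rather than a proper solver:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)

def total_loss(c, tau):
    r = y - c
    return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))

# The loss-minimizing constant tracks the empirical tau-quantile
grid = np.linspace(-3, 3, 601)
for tau in (0.5, 0.75, 0.9):
    best = grid[np.argmin([total_loss(c, tau) for c in grid])]
    print(f"tau={tau}: best constant {best:+.3f}, "
          f"empirical quantile {np.quantile(y, tau):+.3f}")
```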
As far as Python goes, quantile random forests appear to be implemented in scikit-garden. More common (even if not what works for you) would be a linear quantile regression, which is implemented in scikit-learn (`QuantileRegressor`) and in statsmodels (`QuantReg`).
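For example, here is a sketch of both on made-up data (`QuantileRegressor` was added in scikit-learn 1.0, if I remember right):

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = 2.0 * X.ravel() + rng.normal(scale=2.0, size=500)

# scikit-learn: alpha=0.0 switches off the default L1 regularization
skl = QuantileRegressor(quantile=0.75, alpha=0.0).fit(X, y)
print("sklearn:    ", skl.intercept_, skl.coef_)

# statsmodels: add_constant supplies the intercept column
sm_fit = sm.QuantReg(y, sm.add_constant(X)).fit(q=0.75)
print("statsmodels:", sm_fit.params)
```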