I tend to prefer keeping business logic outside the model, because modeling the truth is usually easier than modeling the business rules. But I think that can be data-, model-, and business-dependent.
Aside from implementation concerns, I think the question is whether modeling y_true or y_business is easier (for you and your model). For parametric models (especially linear ones), the relationship between your features and the true target probably continues past the business floor, and truncating the target there will cause the model to over- and under-estimate in different regions as it tries to compensate:

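As a rough illustration of the linear case (a minimal sketch on synthetic data, not the notebook's code: the linear relationship y_true = 2x + noise and the floor at 0 are assumptions), fitting ordinary least squares to the floored target shrinks the slope and shifts the intercept, so the line over-estimates near the floor and under-estimates at the extremes:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=500)
y_true = 2.0 * x + rng.normal(scale=0.5, size=x.shape)  # assumed true relationship
y_business = np.maximum(y_true, 0.0)                    # business floor applied to the target

X = x.reshape(-1, 1)
fit_true = LinearRegression().fit(X, y_true)
fit_biz = LinearRegression().fit(X, y_business)

# Fitting the floored target distorts both coefficients: the slope shrinks
# (here roughly from 2 toward 1) and the intercept moves up to compensate.
print("slope / intercept on y_true:    ", fit_true.coef_[0], fit_true.intercept_)
print("slope / intercept on y_business:", fit_biz.coef_[0], fit_biz.intercept_)
```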
As you move to more expressive models (whether through feature engineering or a more flexible model class), the difference is probably less stark. For example, a tree-based model will prefer the truncated target, since it doesn't need to make further splits in branches where the target has been (mostly or entirely) truncated.


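Here's a quick sketch of that tree point (synthetic data under the same assumed setup as above; the depth and data-generating process are arbitrary choices): inspecting the fitted tree's split thresholds shows that the splits land where the target still varies, not deep in the floored region.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=2000)
y_business = np.maximum(2.0 * x + rng.normal(scale=0.5, size=x.shape), 0.0)

tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(x.reshape(-1, 1), y_business)

# Internal nodes have feature >= 0; leaves are marked with -2.
thresholds = tree.tree_.threshold[tree.tree_.feature >= 0]
print("splits deep in the floored region (x < -1):", int((thresholds < -1).sum()))
print("splits elsewhere (x >= -1):                ", int((thresholds >= -1).sum()))
```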
Splines plus a linear regression have trouble only near the kink:

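A sketch of the spline case (same assumed synthetic setup; SplineTransformer needs scikit-learn >= 1.0): comparing the fitted curve to the noise-free floored line region by region, the approximation error concentrates in the bucket containing the kink.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(-3, 3, size=1500))
y_business = np.maximum(2.0 * x + rng.normal(scale=0.1, size=x.shape), 0.0)
X = x.reshape(-1, 1)

# Cubic spline basis feeding an ordinary linear regression.
model = make_pipeline(SplineTransformer(n_knots=8, degree=3), LinearRegression())
model.fit(X, y_business)
pred = model.predict(X)

# Compare against the noise-free floored curve, region by region:
# the error should be largest in the bucket containing the kink at x = 0.
target = np.maximum(2.0 * x, 0.0)
for lo, hi in [(-3, -1), (-1, 1), (1, 3)]:
    m = (x >= lo) & (x < hi)
    print(f"mean |error| on [{lo:+d}, {hi:+d}): {np.abs(pred[m] - target[m]).mean():.3f}")
```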
You mentioned neural networks. Depending on the activation function, approximating the constant region may be easier or harder: ReLU can express a hard floor exactly (max(z, 0) is a ReLU), while smooth activations like tanh can only approximate the flat region. And I think the effect at the kink will be similar to the spline plot.
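For completeness, a small sketch of that point (the architecture, solver, and data here are all assumptions, and the numbers will vary with training details): since ReLU can represent the floor exactly, the question is mostly whether the optimizer finds it, whereas tanh has to approximate the flat region.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, size=1500)
y_business = np.maximum(2.0 * x + rng.normal(scale=0.1, size=x.shape), 0.0)
X = x.reshape(-1, 1)

# Grid over the deep-floored region, where the target is (essentially) exactly 0.
grid = np.linspace(-3, -1, 200).reshape(-1, 1)

for act in ["relu", "tanh"]:
    net = MLPRegressor(hidden_layer_sizes=(32,), activation=act,
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(X, y_business)
    flat_err = np.abs(net.predict(grid)).mean()
    print(f"{act:>4}: mean |prediction| where the target is 0: {flat_err:.3f}")
```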
So I'd advocate doing some EDA to determine how your situation compares, or just trying both (but be sure to measure their performance on fair grounds, i.e. against the same target on the same held-out data).
Colab notebook generating these plots
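On the "fair grounds" point, here's a minimal sketch of what I mean (the model class and synthetic data are just placeholders): fit one model to y_true and floor its predictions, fit another to y_business directly, and score both against the same floored target on the same held-out set.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(4)
x = rng.uniform(-3, 3, size=3000)
y_true = 2.0 * x + rng.normal(scale=0.5, size=x.shape)
floor = 0.0
y_business = np.maximum(y_true, floor)
X = x.reshape(-1, 1)

X_tr, X_te, yt_tr, yt_te, yb_tr, yb_te = train_test_split(
    X, y_true, y_business, test_size=0.3, random_state=0)

model_true = GradientBoostingRegressor(random_state=0).fit(X_tr, yt_tr)  # model the truth, floor afterwards
model_biz = GradientBoostingRegressor(random_state=0).fit(X_tr, yb_tr)   # model the floored target directly

pred_true_then_floor = np.maximum(model_true.predict(X_te), floor)
pred_biz = model_biz.predict(X_te)

# Both approaches are scored against the same thing: the floored target on held-out data.
print("fit y_true, then apply floor:", mean_absolute_error(yb_te, pred_true_then_floor))
print("fit y_business directly:     ", mean_absolute_error(yb_te, pred_biz))
```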