You can do this, and it should give you a better estimator (lower mean squared error) than simply taking the $95\%$ quantile of your data, provided your sample is large enough for that empirical quantile to be usable at all. Simulate it and see, and in particular note how much uncertainty there is with small to medium sized samples. Whether it is the *best* estimator is a different question: for very small $n$, I suspect $\overline{x} +\sqrt{\frac{n^2-1}{n^2}}\, t_{0.95}(n)\cdot s$ may be better, and there may be others which are better still.
As an illustration, here is a simulation with $\mu=77$ and $\sigma=14$, so the $95$th percentile of the normal distribution should be close to $100$. With a sample size of $n=100$, your estimator is distributed as the black empirical density below and mine as the red one (they almost exactly overlap), while the blue curve is the estimator that takes the quantile directly from the data, ignoring the fact that we know it was sampled from a normal distribution.
That used the following R code:

```r
est <- function(sampsize, p, simmean = 0, simsd = 1) {
  x  <- rnorm(sampsize, simmean, simsd)
  mx <- mean(x)
  sx <- sd(x)
  return(c(mx, sx,
           quantile(x, p),
           mx + qt(p, sampsize - 1) * sx,
           mx + qt(p, sampsize) * sx * sqrt((sampsize^2 - 1) / sampsize^2)))
}

set.seed(2025)
p        <- 0.95
sampsize <- 100
simmean  <- 77
simsd    <- 14
cases    <- 10^5

sims <- replicate(cases, est(sampsize, p, simmean, simsd))

plot(density(sims[5, ]), col = "red")
lines(density(sims[4, ]), col = "black")
lines(density(sims[3, ]), col = "blue")
abline(v = qnorm(p, simmean, simsd))
```

If this had instead used a much smaller sample size, say $n=4$, the densities of the estimators would be much more widely spread, and mine (red) would tend to perform better than yours (black).
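To put a rough number on that small-$n$ comparison, here is a self-contained sketch that reruns the same simulation at $n=4$ and reports the root mean squared error of each estimator against the true $95$th percentile. The `est` helper is the one from the code above, trimmed to return only the three quantile estimators; the exact RMSE values will vary slightly with the seed and number of simulations.

```r
# Compare the three estimators of the 95th percentile at a small sample size.
est <- function(sampsize, p, simmean = 0, simsd = 1) {
  x  <- rnorm(sampsize, simmean, simsd)
  mx <- mean(x)
  sx <- sd(x)
  c(quantile(x, p),                                                # empirical quantile
    mx + qt(p, sampsize - 1) * sx,                                 # t with n-1 df
    mx + qt(p, sampsize) * sx * sqrt((sampsize^2 - 1) / sampsize^2))  # adjusted t with n df
}

set.seed(2025)
p <- 0.95; simmean <- 77; simsd <- 14
target <- qnorm(p, simmean, simsd)          # true 95th percentile, about 100.03

sims <- replicate(10^5, est(4, p, simmean, simsd))
rmse <- sqrt(rowMeans((sims - target)^2))
names(rmse) <- c("empirical quantile", "t, n-1 df", "adjusted t, n df")
round(rmse, 2)
```

Running this and comparing the three RMSE values is a quick way to see which estimator wins at a given $n$ without eyeballing density plots.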

