A useful example to ponder is the lognormal, all of whose moments are finite.
We can (i) focus on the right tail, (ii) take $\mu=0$ without loss of generality, and (iii) work with the logs.
Consider, for $n$ i.i.d. variables $\sim N(0,\sigma^2)$, the probability $P\big(X_{(n)}<\tfrac12\sigma^2\big)$.
This is the same event as the largest order statistic of the corresponding lognormal sample lying below the lognormal's population mean, just taken to the log scale (with $\mu=0$ the lognormal mean is $e^{\sigma^2/2}$, whose log is $\tfrac12\sigma^2$). The probability that the sample range fails to include the population mean is strictly larger than this.
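Spelling out why it's larger: the range can miss the mean in two disjoint ways, with every observation below it or every observation above it, so

$$P(\text{range misses the mean}) = P\big(X_{(n)} < \tfrac12\sigma^2\big) + P\big(X_{(1)} > \tfrac12\sigma^2\big),$$

and the second term, while tiny here, is positive.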
The location of the distribution of $X_{(n)}$ grows slowly with $n$ (roughly like $\sqrt{2\log n}$), and its scale also involves $n$ (slowly shrinking rather than growing), but here we fix $n$, so we won't need to worry about its behavior in terms of $n$. Both the location and the scale of the distribution of $X_{(n)}$ are proportional to $\sigma$. Consequently, for any $\varepsilon>0$, at any given $n$, you can make the probability above larger than $1-\varepsilon$ by taking sufficiently large $\sigma$.
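Indeed the probability has a closed form, since $X_{(n)}<\tfrac12\sigma^2$ happens exactly when all $n$ observations fall below $\tfrac12\sigma^2$:

$$P\big(X_{(n)}<\tfrac12\sigma^2\big)=\Phi\!\Big(\frac{\sigma}{2}\Big)^{\!n}\;\longrightarrow\;1 \quad\text{as }\sigma\to\infty\text{ at fixed }n.$$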
A quick illustrative example in R:
nsim = 100000; sig = 9; n = 100
logmean = sig^2/2   # log of the lognormal mean, exp(sig^2/2)
mean(replicate(nsim, max(rnorm(n, 0, sig)) < logmean))
[1] 0.99969
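We can compare the simulated proportion with the exact value from the formula above:

sig = 9; n = 100
pnorm(sig/2)^n   # about 0.99966, in line with the simulation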
For that $\sigma$ and $n$, almost 100% of lognormal sample ranges don't include the lognormal's mean. The corresponding lognormal distribution is really skew and heavy-tailed*, but we're working on the log scale here, so conveniently we don't have to worry much about generating numbers that are too large.
* e.g. for large $\sigma$ the third moment-based skewness grows like $\exp(\tfrac32\sigma^2)$ (in the sense that the ratio of the skewness to this approximation tends to 1), so here the skewness would be very roughly on the order of $10^{52}$ (similarly, the kurtosis is on the order of about $10^{140}$), where 'roughly' and 'about' with 'on the order of' mean that the exponent is about the right size.
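As a quick arithmetic check on those exponents (converting $e^{x}$ to base 10 via $x/\ln 10$; the kurtosis grows like $\exp(4\sigma^2)$ for large $\sigma$, which is where $10^{140}$ comes from), with the same $\sigma = 9$ as above:

sig = 9
(3/2) * sig^2 / log(10)   # log10 of exp(1.5 * sig^2): about 52.8
4 * sig^2 / log(10)       # log10 of exp(4 * sig^2): about 140.7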