The conditional expected time remaining until an event occurs seems to grow with the time already waited. This looks either wrong or like some sort of paradox.
Let's take the Lomax distribution as an example. Assume we are waiting for an event whose waiting time $X$ has density
$$f(x)=\frac{a}{b}\left[1+\frac{x}{b}\right]^{-(a+1)}$$
with $a>2$ and $b>0$. Also assume that we have observed at time $T$ that the event has not yet occurred. The conditional density for the waiting time becomes
$$f(x\mid X\geq T)=\frac{f(x)}{P(X\geq T)} = \frac{f(x)}{1-F(T)} = \frac{\frac{a}{b} \left[1+\frac{x}{b}\right]^{-(a+1)}}{\left[1+\frac{T}{b}\right]^{-a}}, \qquad x\geq T,$$
using the Lomax survival function $1-F(T)=\left[1+\frac{T}{b}\right]^{-a}$.
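As a sanity check on this conditional density (a minimal sketch in Python; the values $a=3$, $b=2$, $T=5$ are arbitrary choices of mine satisfying $a>2$ and $b>0$), numerical integration should confirm it integrates to one over $[T,\infty)$:

```python
from scipy import integrate

a, b, T = 3.0, 2.0, 5.0  # arbitrary test values with a > 2 and b > 0

f = lambda x: (a / b) * (1 + x / b) ** (-(a + 1))  # Lomax density
survival = (1 + T / b) ** (-a)                     # P(X >= T)

# The conditional density f(x | X >= T) should have total mass 1 on [T, inf)
mass, _ = integrate.quad(lambda x: f(x) / survival, T, float('inf'))
print(mass)  # ~1.0
```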
The expected value of the waiting time, given that the event has not occurred by time $T$, is
\begin{align} & E[X\mid X\geq T]=\int_T^\infty x f(x\mid X\geq T) \, dx \\[8pt] = {} & \left[1+\frac{T}{b}\right]^a \int_T^\infty \frac{a}{b} x \left[1+\frac{x}{b}\right]^{-(a+1)} \, dx \end{align}
The integral can be evaluated using integration by parts, with $u=x$ and $dv=\frac{a}{b}\left[1+\frac{x}{b}\right]^{-(a+1)} dx$, so that $v=-\left[1+\frac{x}{b}\right]^{-a}$.
\begin{align} & \int_T^\infty \frac{a}{b} x\left[1+\frac{x}{b}\right]^{-(a+1)} \, dx \\[8pt] = {} & \left.-x\left(1+\frac{x}{b} \right)^{-a} \right\vert_T^\infty + \left. \frac{b}{1-a} \left(1+\frac{x}{b}\right)^{-(a-1)} \right\vert_T^\infty \\[8pt] = {} & T\left(1+\frac{T}{b}\right)^{-a}+\frac{b}{a-1} \left(1+\frac{T}{b}\right)^{-(a-1)} \end{align}
I have used the following bound to evaluate the first term at $\infty$:
$$x\left(1+\frac{x}{b}\right)^{-a} = x\left(\frac{b+x}{b} \right)^{-a} = x\left(\frac{b}{b+x}\right)^a = b^a \frac{x}{(b+x)^a} < b^a \frac{x}{x^a} = \frac{b^a}{x^{a-1}}$$
The upper bound goes to zero as $x\rightarrow\infty$ whenever $a>1$, and in particular for $a>2$.
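To double-check the integration by parts, here is a small sympy sketch (an assumption on my part: the concrete integer test values $a=3$, $b=2$, $T=5$, which keep the symbolic integral simple):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
a, b, T = sp.Integer(3), sp.Integer(2), sp.Integer(5)  # arbitrary test values, a > 1

f = (a / b) * (1 + x / b) ** (-(a + 1))       # Lomax density
lhs = sp.integrate(x * f, (x, T, sp.oo))      # the integral above
rhs = T * (1 + T / b) ** (-a) + (b / (a - 1)) * (1 + T / b) ** (-(a - 1))
print(sp.simplify(lhs - rhs))                 # 0
```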
Then finally the expected value is
$$\left[1+\frac{T}{b}\right]^a \left[T\left(1+\frac{T}{b}\right)^{-a} + \frac{b}{a-1} \left(1+\frac{T}{b}\right)^{-(a-1)}\right] = T+\frac{b}{a-1} \left[1+\frac{T}{b} \right] = T + \frac{b+T}{a-1}$$
(the exponents $a$ and $-(a-1)$ combine into a single power of $1$).
If we care about the waiting time remaining after the observation, as opposed to the total waiting time, that is
$$E[X-T\mid X\geq T]=\frac{b}{a-1}\left[1+\frac{T}{b}\right] = \frac{b+T}{a-1}$$
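A quick Monte Carlo check of this formula (a sketch with assumed parameters $a=3$, $b=2$; numpy's `pareto(a)` draws from the Lomax distribution with unit scale, so multiplying by $b$ gives Lomax$(a,b)$):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 3.0, 2.0                       # arbitrary test values with a > 2
x = b * rng.pareto(a, 10_000_000)     # numpy's pareto(a) samples Lomax(a, 1)

for T in [0.0, 2.0, 5.0, 20.0]:
    residual = x[x >= T] - T          # extra waiting time among survivors past T
    print(T, residual.mean(), (b + T) / (a - 1))  # empirical vs (b + T)/(a - 1)
```

The empirical residual means track $(b+T)/(a-1)$ and grow linearly in $T$.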
So the expected additional waiting time, given that the event has not occurred by time $T$, grows linearly with the time of our observation. This seems very counterintuitive to me. Typically, if you've waited a while for an event and it hasn't occurred yet, you assume it will occur soon.
I'm wondering if I've missed something or messed something up. Maybe this is a result of the Lomax distribution having a heavy tail? Is there an intuitive explanation for this result?
The same behavior seems to apply to a conditional Weibull distribution as well (I seem to recall seeing this in a numerical simulation a few years ago). Given that, why are the Lomax and Weibull distributions used in fields such as actuarial science and reliability modeling, where I'd imagine conditional waiting time is important and such a paradox would hinder the business goal?
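For reference, this is the kind of simulation I mean for the Weibull case (a sketch; the shape $k=0.5$ and unit scale are arbitrary choices of mine, and I believe the growing residual should only appear for shape below $1$, where the hazard is decreasing):

```python
import numpy as np

rng = np.random.default_rng(1)
k, scale = 0.5, 1.0                     # shape < 1: decreasing hazard
x = scale * rng.weibull(k, 10_000_000)

for T in [0.0, 1.0, 5.0, 20.0]:
    residual = x[x >= T] - T
    print(T, residual.mean())           # increases with T when k < 1
```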