
There are many examples and posts giving general rules for finding normalizing constants in extreme value theory, see here and there for instance. However, these deal with normalization toward the asymptotic distribution; my question is whether the same normalizing constants should be used when the sample size is finite. Are there known adjustments in general, or are these found on a case-by-case basis? If I have complete knowledge of my distribution and its parameters $\theta$, how do I choose the constants so that the normalized distribution is best approximated by its asymptotic distribution?

Take for example the truncated Pareto distribution, with CDF $F(x) = \mathbb 1_{[1,\ell]}(x)\,(1-\ell^{-\alpha})^{-1}(1 - x^{-\alpha})$. You can use the quantile and hazard functions, with $b_n = F^{-1}(1 - \frac{1}{n})$ and $a_n = 1/h(b_n)$, to derive the normalization $a_n x + b_n$, but the CDF $F(a_n x + b_n)^n$ converges very slowly to the reversed Weibull limit, so for all but very large $n$ it is a poor approximation. Is there a way to choose better constants $a_n$ and $b_n$?
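To make the question concrete, here is a sketch of what I mean by slow convergence. The parameter values ($\alpha = 2$, $\ell = 10$) and the particular limit I compare against (a shape-1 reversed Weibull, shifted by 1, which is my own derivation for these hazard-based constants and may be worth double-checking) are my assumptions, not established facts:

```python
import numpy as np

# Sketch under stated assumptions (not established): truncated Pareto on
# [1, ell] with CDF F(x) = (1 - x^(-alpha)) / (1 - ell^(-alpha)), and the
# textbook constants b_n = F^{-1}(1 - 1/n), a_n = 1/h(b_n) (reciprocal hazard).
alpha, ell = 2.0, 10.0
c = 1.0 - ell**(-alpha)                 # normalizing factor of the CDF

def F(x):
    x = np.clip(x, 1.0, ell)            # outside the support: F = 0 or 1
    return (1.0 - x**(-alpha)) / c

def f(x):
    return alpha * x**(-alpha - 1.0) / c

def Finv(p):
    return (1.0 - p * c)**(-1.0 / alpha)

def constants(n):
    bn = Finv(1.0 - 1.0 / n)
    an = (1.0 / n) / f(bn)              # a_n = 1/h(b_n), since 1 - F(b_n) = 1/n
    return an, bn

# Candidate limit for these constants: the support is bounded, so I take a
# shape-1 reversed Weibull shifted by 1, G(x) = exp(min(x, 1) - 1).
# This shift is my derivation and an assumption of the sketch.
def G(x):
    return np.exp(np.minimum(x - 1.0, 0.0))

x = np.linspace(-8.0, 3.0, 4001)
ks = {}
for n in (10, 100, 10_000):
    an, bn = constants(n)
    ks[n] = float(np.max(np.abs(F(an * x + bn)**n - G(x))))
    print(f"n = {n:>6}: sup-distance to limit = {ks[n]:.4f}")
```

On my grid the sup-distance shrinks only slowly as $n$ grows, which is the behavior the question is about.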

  • Please explain how you quantify the goodness of approximation, for surely that will determine the choices of the $a_n$ and $b_n$. Commented May 28 at 20:23
  • @whuber By minimizing some distance between the distributions, such as the Kolmogorov–Smirnov or Ky Fan metric, or a divergence $D(f(a_n x + b_n) \,\|\, G)$ where $G$ is the asymptotic distribution. Commented May 28 at 21:18
  • You may be more interested in a so-called penultimate approximation than in an optimal ultimate approximation, since even with the best choice in some sense the convergence can remain very slow. Commented Jun 21 at 15:30
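Following up on the distance-minimization idea from the comments, one way to search for better finite-$n$ constants is to treat $(a_n, b_n)$ as free parameters and minimize the Kolmogorov–Smirnov distance numerically. This is only a sketch: the truncated-Pareto setup, the parameter values, and the particular limit $G$ are my assumptions, and the optimizer is generic rather than anything specific to EVT:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed setup: truncated Pareto on [1, ell] with
# F(x) = (1 - x^(-alpha)) / (1 - ell^(-alpha)); fix n and minimize the
# sup-distance between F(a*x + b)^n and a candidate limit G over (a, b).
alpha, ell, n = 2.0, 10.0, 100
c = 1.0 - ell**(-alpha)

def F(x):
    x = np.clip(x, 1.0, ell)
    return (1.0 - x**(-alpha)) / c

def Finv(p):
    return (1.0 - p * c)**(-1.0 / alpha)

# Candidate limit (assumption): shape-1 reversed Weibull, shifted by 1.
def G(x):
    return np.exp(np.minimum(x - 1.0, 0.0))

x = np.linspace(-8.0, 3.0, 2001)

def ks(params):
    log_a, b = params                   # optimize log(a) to keep a > 0
    return float(np.max(np.abs(F(np.exp(log_a) * x + b)**n - G(x))))

# Start from the textbook constants b_n = F^{-1}(1 - 1/n), a_n = 1/h(b_n).
bn = Finv(1.0 - 1.0 / n)
fn = alpha * bn**(-alpha - 1.0) / c
an = (1.0 / n) / fn
start = np.array([np.log(an), bn])

res = minimize(ks, start, method="Nelder-Mead")
print("textbook KS :", round(ks(start), 4))
print("optimized KS:", round(res.fun, 4))
print("a_n, b_n    :", float(np.exp(res.x[0])), float(res.x[1]))
```

Since Nelder–Mead starts its simplex at the textbook constants, the optimized distance can only match or improve on the starting one; whether the improvement generalizes across $n$, or admits a closed-form adjustment, is exactly what I am asking.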

