Let $\epsilon_\pm \ge 1$ be real numbers. Consider the following random variable: \begin{equation} {\mathcal R}:= \frac{1-Z}{Z} \cdot \Xi \quad (i) \end{equation} where $Z \in (0,1)$ is a random variable with density $\rho_Z(z) = z^{\epsilon_+-1} (1-z)^{\epsilon_--1} /B(\epsilon_-,\epsilon_+)$ and $\Xi$ is a uniform random variable on the unit interval, i.e. $\Xi \sim U(0,1)$. The variables $Z$ and $\Xi$ are independent, and $B(\cdot,\cdot)$ is the beta function.
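To get a feel for ${\mathcal R}$, one can sample it directly. Here is a minimal Python sketch (my own, purely illustrative): note that the stated density means $Z \sim \mathrm{Beta}(\epsilon_+, \epsilon_-)$, which is what `random.betavariate` draws with that argument order; the sample mean is checked against $E[(1-Z)/Z]\cdot E[\Xi] = \frac{B(\epsilon_-+1,\epsilon_+-1)}{B(\epsilon_-,\epsilon_+)} \cdot \frac12$, a standard beta-moment computation.

```python
import math
import random

def sample_R(em, ep, rng):
    """Draw one realization of R = (1 - Z)/Z * Xi per definition (i).

    Z ~ Beta(ep, em): density z^(ep-1) (1-z)^(em-1) / B(em, ep),
    Xi ~ U(0,1), independent of Z.  (em = eps_minus, ep = eps_plus.)
    """
    z = rng.betavariate(ep, em)
    xi = rng.uniform(0.0, 1.0)
    return (1.0 - z) / z * xi

def beta_fn(a, b):
    """Euler beta function via gammas."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

rng = random.Random(12345)
em, ep = 2.0, 3.0        # illustrative parameter choices >= 1
n = 200_000
mean_mc = sum(sample_R(em, ep, rng) for _ in range(n)) / n

# E[R] = E[(1-Z)/Z] * E[Xi]; the Z-expectation is a ratio of beta functions.
mean_exact = beta_fn(em + 1.0, ep - 1.0) / beta_fn(em, ep) / 2.0
print(mean_mc, mean_exact)
```

For these parameters the exact mean is $1/2$, and the Monte Carlo average agrees to a couple of decimal places.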
We have shown that the probability density of ${\mathcal R}$ is given as follows: \begin{equation} \rho_R(x) = \frac{x^{-1-\epsilon_+}}{(1+\epsilon_+) B(\epsilon_-,\epsilon_+)} \cdot {}_2F_1\left(\epsilon_++1,\epsilon_-+\epsilon_+;\epsilon_++2;-\frac{1}{x}\right) \cdot 1_{x \ge 0} \quad (ii) \end{equation}
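Before checking normalization, the closed form $(ii)$ can be spot-checked pointwise: conditioning on $Z=z$ in $(i)$, ${\mathcal R}$ is uniform on $(0,(1-z)/z)$, which gives the representation $\rho_R(x) = \frac{1}{B(\epsilon_-,\epsilon_+)}\int_0^{1/(1+x)} z^{\epsilon_+}(1-z)^{\epsilon_--2}\,dz$. The Python sketch below (my own helpers; Simpson quadrature; ${}_2F_1$ via its Euler integral, which requires $\epsilon_- < 2$ here, so the parameters are chosen accordingly) compares the two:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def beta_fn(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def hyp2f1(a, b, c, z):
    """Gauss 2F1 via the Euler integral, valid for c > b > 0 and z < 1.
    The substitution t = 1 - u^2 absorbs the (1-t)^(c-b-1) endpoint
    singularity when 0 < c - b < 1 (here c - b = 2 - eps_minus)."""
    def g(u):
        return (2.0 * u ** (2.0 * (c - b) - 1.0)
                * (1.0 - u * u) ** (b - 1.0)
                * (1.0 - z * (1.0 - u * u)) ** (-a))
    return simpson(g, 0.0, 1.0) * math.gamma(c) / (math.gamma(b) * math.gamma(c - b))

def rho_R_closed(x, em, ep):
    """Closed form (ii)."""
    return (x ** (-1.0 - ep) / ((1.0 + ep) * beta_fn(em, ep))
            * hyp2f1(ep + 1.0, em + ep, ep + 2.0, -1.0 / x))

def rho_R_direct(x, em, ep):
    """Representation obtained by conditioning on Z in (i)."""
    s = 1.0 / (1.0 + x)
    return simpson(lambda z: z ** ep * (1.0 - z) ** (em - 2.0), 0.0, s) / beta_fn(em, ep)

em, ep = 1.5, 2.5   # need em < 2 for this Euler-integral evaluation; illustrative
for x in (0.3, 1.0, 4.0):
    print(x, rho_R_closed(x, em, ep), rho_R_direct(x, em, ep))
```

The two evaluations agree to quadrature accuracy, which is consistent with $(ii)$ being the correct density.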
Now, the natural thing is to check the normalization of the pdf above. If we use the functional identity http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/17/02/09/0002/ together with the series expansion of the hypergeometric function and integrate term by term, then after some manipulations we arrive at the following identity:
\begin{eqnarray} B_{\frac{1}{2}}(\epsilon_--1,\epsilon_++1)-B_{\frac{1}{2}}(\epsilon_-,\epsilon_+)-B_{\frac{1}{2}}(\epsilon_+,\epsilon_-)+B_{\frac{1}{2}}(\epsilon_++1,\epsilon_--1) = -\frac{(\epsilon_--\epsilon_+-1) \Gamma (\epsilon_--1) \Gamma (\epsilon_+)}{\Gamma (\epsilon_-+\epsilon_+)} \quad (iii) \end{eqnarray} where $B_z(\cdot,\cdot)$ is the incomplete beta function.
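Independently of the Mathematica run below, identity $(iii)$ can be spot-checked with nothing but stdlib Python, computing each $B_{1/2}(a,b)$ by quadrature from its defining integral (the Simpson helper and the parameter values are mine, purely illustrative):

```python
import math

def simpson(f, a, b, n=100_000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def inc_beta(x, a, b):
    """Incomplete beta B_x(a, b) = int_0^x t^(a-1) (1-t)^(b-1) dt (unregularized)."""
    return simpson(lambda t: t ** (a - 1.0) * (1.0 - t) ** (b - 1.0), 0.0, x)

em, ep = 2.5, 3.5   # illustrative eps_minus, eps_plus >= 1

lhs = (inc_beta(0.5, em - 1.0, ep + 1.0) - inc_beta(0.5, em, ep)
       - inc_beta(0.5, ep, em) + inc_beta(0.5, ep + 1.0, em - 1.0))
rhs = -(em - ep - 1.0) * math.gamma(em - 1.0) * math.gamma(ep) / math.gamma(em + ep)
print(lhs, rhs)
```

For this parameter choice both sides evaluate to about $0.04909$, matching to quadrature accuracy.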
    In[566]:= {em, ep} = RandomReal[{1, 10}, 2, WorkingPrecision -> 50];

    In[567]:= ((ep - em + 1) Gamma[em - 1] Gamma[ep])/Gamma[em + ep] + 
       NIntegrate[(t^(ep - 1) - t^(em - 2)) (1 - t) (1 + t)^(-em - ep), 
        {t, 0, 1}, WorkingPrecision -> 30]

    Out[567]= 0.*10^-32

    In[568]:= ((-1 + em - ep) Gamma[-1 + em] Gamma[ep])/Gamma[em + ep] - 
       (-Beta[1/2, em - 1, ep + 1] + Beta[1/2, em, ep] + 
         Beta[1/2, ep, em] - Beta[1/2, ep + 1, em - 1])

    Out[568]= 0.*10^-50

Now, I have two questions. The first one is simple, i.e., how do we prove the identity $(iii)$ otherwise?
The second question is related to the moments of the distribution of ${\mathcal R}$. Take a real $m \ge 0$ (with $m < \epsilon_+$, so that the beta function below is finite). Since $Z$ and $\Xi$ are independent, the definition $(i)$ gives $E[{\mathcal R}^m] = E[((1-Z)/Z)^m] \, E[\Xi^m]$, and hence: \begin{equation} E\left[ {\mathcal R}^m \right] = \frac{B(\epsilon_-+m,\epsilon_+-m)}{B(\epsilon_-,\epsilon_+)} \cdot \frac{1}{m+1} \end{equation}
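The derivation from $(i)$ amounts to one beta integral: $E[\Xi^m] = 1/(m+1)$, while $E[((1-Z)/Z)^m]$ reduces to $\int_0^1 z^{\epsilon_+-m-1}(1-z)^{\epsilon_-+m-1}dz / B(\epsilon_-,\epsilon_+)$. A stdlib-Python sketch checking this against the closed form (helpers and parameter values are mine, illustrative only):

```python
import math

def simpson(f, a, b, n=20_000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def beta_fn(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

em, ep, m = 2.5, 3.5, 1.5   # need m < ep so that B(em + m, ep - m) is finite

# E[((1-Z)/Z)^m] against the density of Z:
# ((1-z)/z)^m * z^(ep-1) (1-z)^(em-1) = z^(ep-m-1) (1-z)^(em+m-1)
ez = simpson(lambda z: z ** (ep - m - 1.0) * (1.0 - z) ** (em + m - 1.0),
             0.0, 1.0) / beta_fn(em, ep)

moment_quad = ez / (m + 1.0)                                 # E[Xi^m] = 1/(m+1)
moment_closed = beta_fn(em + m, ep - m) / beta_fn(em, ep) / (m + 1.0)
print(moment_quad, moment_closed)
```

Both evaluations agree, as they must, since the quadrature is just the beta integral computed numerically.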
Can we actually prove the same result by using the closed-form expression $(ii)$ for the pdf of ${\mathcal R}$?
