Actually, there are two different measures that are called correlations. Let us call them little $r$, the Pearson correlation coefficient, and big $R$, which is what you have: a correlation (usually reported as $R^2$) computed from a generalized residual. Now $|r|=|R|$ only when we restrict ourselves to ordinary least squares linear regression in $Y$. If, for example, we restrict our linear regression to slope only and set the intercept to zero, we would then use $R$, not $r$. Little $r$ is still the same; it just no longer describes the correlation between the new regression line and the data.
Little $r$ is a normalized covariance, i.e., $ r= \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y}=\frac{\sum ^n _{i=1}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum ^n _{i=1}(x_i - \bar{x})^2} \sqrt{\sum ^n _{i=1}(y_i - \bar{y})^2}}$. Finally, $r^2$ is called the coefficient of determination only for the linear case.
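As a quick numerical check, here is a minimal Python sketch (with made-up data, purely for illustration) that computes little $r$ directly from this formula and compares it with a library routine:

```python
import numpy as np

# Made-up data, purely for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([5.1, 6.9, 9.2, 11.1, 12.8, 15.3])

# little r as the normalized covariance, exactly as in the formula above
r = np.sum((x - x.mean()) * (y - y.mean())) / (
    np.sqrt(np.sum((x - x.mean()) ** 2)) * np.sqrt(np.sum((y - y.mean()) ** 2))
)

print(r)                        # little r from the definition
print(np.corrcoef(x, y)[0, 1])  # the same number from numpy's built-in routine
```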
Big $R$ is usually explained in terms of intermediate ANOVA quantities:
- The total sum of squares, proportional to the variance of the data: $SS_\text{tot}=\sum_i (y_i-\bar{y})^2,$
- The regression sum of squares, also called the explained sum of squares: $SS_\text{reg}=\sum_i (f_i -\bar{y})^2,$
- The sum of squares of residuals, also called the residual sum of squares: $SS_\text{res}=\sum_i (y_i - f_i)^2=\sum_i e_i^2,$

where $f_i$ is the fitted (model) value for the $i$-th observation.
The most general definition of the coefficient of determination is
$R^2 \equiv 1 - {SS_{\rm res}\over SS_{\rm tot}}.\,$
Now, what is the meaning of this $r^2$ or, more generally, $R^2$? $R^2$ is the explained fraction of the total variance, and $1-R^2$ is the unexplained fraction.
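To make the distinction concrete, here is a hedged Python sketch (continuing the made-up data above): $R^2$ computed from these sums of squares equals $r^2$ for an ordinary least-squares line with an intercept, but not for a slope-only fit through the origin:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([5.1, 6.9, 9.2, 11.1, 12.8, 15.3])

def r_squared(y, f):
    """R^2 = 1 - SS_res / SS_tot, with f the fitted values."""
    ss_res = np.sum((y - f) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Ordinary least squares with an intercept: R^2 equals little r squared
slope, intercept = np.polyfit(x, y, 1)
print(r_squared(y, slope * x + intercept), np.corrcoef(x, y)[0, 1] ** 2)

# Slope-only regression through the origin: R^2 is lower and no longer equals r^2
slope0 = np.sum(x * y) / np.sum(x * x)
print(r_squared(y, slope0 * x))
```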
What is a good coefficient and what is a bad one? That depends on who says so, and in what context. Most medical papers call a correlation strong when $|r|\geq 0.8$. Since that explains only $0.64$ of the variance, I would call $0.8$ a moderate correlation, and in most of my work $R^2\geq 0.95$, i.e., $\leq 5\%$ unexplained variance, is called good. In some of my biological work $R^2>0.999$ is required for proper results. On the other hand, in experiments with only a short range of $x$-axis data and copious noise, one is lucky to get a significant correlation at all, with circa $0.5$ being a borderline significant (non-zero) result.
Perhaps the best way to communicate how variable the answer is would be to back-calculate the critical $r$ and $r^2$ values for $p<0.05$ significance.
First, to calculate the $t$-value from an $r$-value, let us use
$t=\frac{r}{\sqrt{(1-r^2)/(n-2)}}$, where $n\geq 6$
Then $r=\frac{t}{\sqrt{n-2+t^2}}$, where $n\geq 6$, and using the $t$-significance tables
the critical two-tailed values of $r$ for significance are:
| $n$ | $r$ | $r^2$ | $n$ | $r$ | $r^2$ |
|---:|---:|---:|---:|---:|---:|
| 6 | 0.9496 | 0.9018 | 22 | 0.4352 | 0.1894 |
| 7 | 0.8541 | 0.7296 | 23 | 0.4249 | 0.1806 |
| 8 | 0.7827 | 0.6125 | 24 | 0.4152 | 0.1724 |
| 9 | 0.7267 | 0.5281 | 25 | 0.4063 | 0.1650 |
| 10 | 0.6812 | 0.4640 | 26 | 0.3978 | 0.1582 |
| 11 | 0.6434 | 0.4140 | 27 | 0.3899 | 0.1520 |
| 12 | 0.6113 | 0.3737 | 28 | 0.3824 | 0.1462 |
| 13 | 0.5836 | 0.3405 | 29 | 0.3753 | 0.1408 |
| 14 | 0.5594 | 0.3129 | 30 | 0.3685 | 0.1358 |
| 15 | 0.5377 | 0.2891 | 40 | 0.3167 | 0.1003 |
| 16 | 0.5187 | 0.2690 | 50 | 0.2821 | 0.0796 |
| 17 | 0.5013 | 0.2513 | 60 | 0.2568 | 0.0659 |
| 18 | 0.4857 | 0.2359 | 70 | 0.2371 | 0.0562 |
| 19 | 0.4715 | 0.2223 | 80 | 0.2215 | 0.0491 |
| 20 | 0.4584 | 0.2101 | 90 | 0.2086 | 0.0435 |
| 21 | 0.4463 | 0.1992 | 100 | 0.1977 | 0.0391 |
Note that the explained fraction ($r^2$) needed for a significant $r$-value varies from roughly 90% for $n=6$ down to 3.9% for $n=100$. Nor does it stop there: the larger $n$ becomes, the smaller the explained fraction needed for significance.
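A hedged Python sketch of this back-calculation, using scipy.stats for the $t$ quantiles, is below. It applies the formulas above with $n-2$ degrees of freedom, so it may not reproduce the table row for row, depending on the $t$-table and rounding conventions used there:

```python
import numpy as np
from scipy import stats

def r_to_p(r, n):
    """Two-tailed p-value for H0: rho = 0, via t = r / sqrt((1 - r^2)/(n - 2))."""
    t = r / np.sqrt((1.0 - r ** 2) / (n - 2))
    return 2.0 * stats.t.sf(abs(t), df=n - 2)

def critical_r(n, alpha=0.05):
    """Back-calculate the smallest |r| significant at level alpha, r = t / sqrt(n - 2 + t^2)."""
    t = stats.t.ppf(1.0 - alpha / 2.0, df=n - 2)
    return t / np.sqrt(n - 2 + t ** 2)

print(r_to_p(0.5, 20))   # is r = 0.5 significant with 20 points?
print(critical_r(100))   # critical r for n = 100, roughly 0.2
```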
Finally, asking what a 'good' $R^2$ is, is also a bit ambiguous. Unlike $r^2$, $R^2$ can (surprise, shock and awe) actually take on negative values; this happens whenever the model fits worse than simply predicting the mean, i.e., whenever $SS_\text{res}>SS_\text{tot}$. So, although $R^2$ is more general than $r^2$, it also has problems that never occur with $r^2$. Moreover, like $r$ (see above), $R$ is $n$-biased, and if we adjust for degrees of freedom using adjusted $R^2$, negative values become even more frequent.
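To see both effects, here is a final hedged sketch (again with made-up data) in which a slope-only fit to data that really needs an intercept yields a negative $R^2$, and the common adjusted-$R^2$ correction $1-(1-R^2)\frac{n-1}{n-p-1}$ pushes it further negative:

```python
import numpy as np

# Made-up data: y is roughly constant, so forcing a line through the origin
# fits far worse than simply predicting the mean of y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([10.2, 9.8, 10.1, 10.3, 9.9, 10.0])

slope0 = np.sum(x * y) / np.sum(x * x)   # least-squares slope through the origin
f = slope0 * x

ss_res = np.sum((y - f) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(r2)                                # strongly negative for this data

# Adjusted R^2 with n points and p = 1 predictor; even more negative than R^2
n, p = len(y), 1
print(1.0 - (1.0 - r2) * (n - 1) / (n - p - 1))
```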