In contrast to what was demonstrated in both Chux's and aka.nice's answers and mentioned in various comments, I believe we jumped to conclusions by focusing on a single computation only, i.e. sin(5). Even though the OP's answer is closer to the true value, it must be said that both the OP's algorithm and the factorial algorithm are equally bad for computing sin(5), but in the end the factorial algorithm is the better one.
This answer will not go into too much detail about floating-point arithmetic. An excellent monograph on this topic is What Every Computer Scientist Should Know About Floating-Point Arithmetic. Neither will I open the can of worms of Intermediate Floating-Point Precision.
Disclaimer: I don't want to claim that what I write here is 100% correct and I am certainly not an authority in this field. I just found the question extremely interesting and tried to get as much out of it as I could. I obviously welcome any comments!
Some initial concepts
floating point representation (FPR): in a finite floating point representation, a number is written as ±0.d1d2...dp x b^e. It has a precision p and an even base b, where all digits di are integers with 0 ≤ di < b. It is straightforward that such a representation cannot represent all real numbers. For example, if b = 10 and p = 3, the number 1/3 is approximated as 0.333 x 10^0, or if b = 2 and p = 24 one has to approximate 0.1 as 0.110011001100110011001101 x 2^-3.
absolute error (AE): if a real number x has an AE of Δx, then we know that the expected true value x̃ of x lies between x - Δx ≤ x̃ ≤ x + Δx. If x is the closest FPR to the true value x̃, then we know that its AE is ((b/2) b^(-p-1)) x b^e. For example, if b = 10 and p = 3, the number 1/3 is approximated as 0.333 x 10^0 and has an absolute error of 0.0005 x 10^0, indicating that 0.3325 x 10^0 ≤ 1/3 ≤ 0.3335 x 10^0.
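To make these two concepts concrete in code, here is a tiny sketch (my own, using only standard Fortran intrinsics) that prints the nearest binary64 FPR of 0.1 and its half-ulp, which is exactly the AE bound ((b/2) b^(-p-1)) x b^e from above:

program fpr_demo
   use, intrinsic :: iso_fortran_env, only: dp => real64
   implicit none
   real(dp) :: x
   x = 0.1_dp              ! the nearest binary64 FPR of 1/10
   print *, x              ! typically prints 0.10000000000000001 : the FPR is not exactly 1/10
   print *, spacing(x)/2   ! half-ulp = 2^-57 ~ 6.9E-18 : the AE of this FPR
end program fpr_demo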
What does this all mean for the Taylor-MacLaurin series?
For a standard IEEE binary64 (double precision) computation (b = 2 and p = 53) of the Taylor-MacLaurin series of sin(5), one quickly sees that the horse has already bolted at the 2nd and 3rd term. Here, the FPR is only accurate up to 2^-49, as can be seen in the table below (representing the nearest binary64 representation of the true fractions):
 j   term      FPR                      e  AE
               0.12345678901234567
 1   5^1/1!    5.00000000000000000E+00  3  2^-51  4.4E-16
 3  -5^3/3!   -2.08333333333333321E+01  5  2^-49  1.8E-15
 5   5^5/5!    2.60416666666666679E+01  5  2^-49  1.8E-15
 7  -5^7/7!   -1.55009920634920633E+01  4  2^-50  8.9E-16

 j   sum          FPR                      e  AE
                  0.12345678901234567
 1   5            5.00000000000000000E+00  3  2^-51  4.4E-16
 3  -95/6        -1.58333333333333339E+01  4  2^-50  8.9E-16
 5   245/24       1.02083333333333339E+01  4  2^-50  8.9E-16
 7  -5335/1008   -5.29265873015873023E+00  3  2^-51  4.4E-16
The accuracy up to 2^-49 can be understood in the following way. If you look at the term 5^3/3!, then the nearest FPR of this number is the fraction (5864062014805333 / 9007199254740992) * 2^5. As you see, we are missing a part here, namely
5^3/3! - (5864062014805333 / 9007199254740992) * 2^5 = 1/844424930131968 ~ 1.1842378929335002E-15 < 2^-49
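If you want to check this decomposition yourself, the integer significand and the exponent can be extracted with the standard intrinsics fraction, exponent, digits and scale (the same decomposition the inverse-factorial code further down relies on); a minimal sketch:

program fpr_of_term
   use, intrinsic :: iso_fortran_env, only: dp => real64, int64
   implicit none
   real(dp) :: term
   term = 125.0_dp/6.0_dp                                       ! nearest FPR of 5^3/3!
   print *, int(scale(fraction(term),digits(term)),kind=int64)  ! 5864062014805333
   print *, exponent(term)                                      ! 5
   print *, spacing(term)/2                                     ! 2^-49 ~ 1.78E-15, the AE of this term
end program fpr_of_term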
So no matter what we try to do, we always miss this part, and it is therefore not possible to compute sin(5) with an accuracy better than 2^-49, as this is the accuracy of the biggest term (in absolute value) that is added to the sum. However, the other terms also introduce errors and all these errors accumulate. So after the first 30 terms, we know that the AE for sin(5) accumulates to:
2^-51 + 2^-49 + 2^-49 + 2^-50 + ... = 5.45594...E-15
This number is, however, too idealistic as there is more loss of precision due to loss of significance.
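As a rough sanity check on that figure, one can simply add up the half-ulps (spacing(term)/2) of the first 30 terms; a small sketch, with my own variable names:

program accumulated_ae
   use, intrinsic :: iso_fortran_env, only: dp => real64
   implicit none
   real(dp) :: fact, pow, term, ae
   integer  :: i, j
   fact = 1.0_dp; pow = 5.0_dp; term = 5.0_dp
   ae = spacing(term)/2                 ! AE of the first term, 2^-51
   do i = 2,30
      j    = 2*i-1
      fact = fact*j*(j-1)
      pow  = pow*25.0_dp                ! 5^j
      term = pow/fact
      ae   = ae + spacing(term)/2       ! add the half-ulp of this term's FPR
   end do
   print *, ae                          ! roughly 5.5E-15, in line with the estimate above
end program accumulated_ae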
Which algorithm is better?
The two presented algorithms are (assuming all variables are IEEE binary64 (double precision) variables, with the exception of the integer counters i and j):
algorithm 1: This is a slightly adapted version of the algorithm presented here
fact = 1.0_dp; pow = x; sum = x; xx = x*x
do i=2,30
   j=2*i-1
   fact = fact*j*(j-1)
   pow  = -pow * xx
   term = pow/fact
   sum  = sum + term
end do
algorithm 2: This is a slightly adapted version of the algorithm presented here
term = x; sum = x; xx = x*x
do i=2,30
   j=2*i-1
   term = (-term*xx)/(j*(j-1))
   sum  = sum + term
end do
While both algorithms are mathematically the same, there is a subtle numerical difference. In floating-point arithmetic, individual divisions and multiplications are considered to be safe, that is, they always return the nearest FPR of the true result of that single operation with the given (already rounded) input. This, however, does not mean that a chain of multiple divisions and multiplications will lead to the nearest FPR of the full computation.
This is why algorithm 2 is slightly worse than algorithm 1. The computation term = (-term*xx)/(j*(j-1)) contains multiple multiplications and divisions and already makes use of an approximated version of term. This in contrast to algorithm 1, where term = pow/fact is a single operation.
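This effect is easy to reproduce for the j=5 term in isolation: one exact product followed by a single division gives a result that differs in the last bit from the chained, individually rounded operations of algorithm 2. A small sketch (the constants 6, 20 and 120 stand for 3*2, 5*4 and 5!; the quoted outputs are what one expects on an IEEE binary64 machine with round-to-nearest):

program single_vs_chained
   use, intrinsic :: iso_fortran_env, only: dp => real64
   implicit none
   real(dp) :: x, xx, term1, term2
   x  = 5.0_dp
   xx = x*x
   term1 = (x*xx*xx)/120.0_dp           ! pow/fact: one rounding     -> 2.60416666666666679E+01
   term2 = ((x*xx)/6.0_dp)*xx/20.0_dp   ! chained rounded operations -> 2.60416666666666643E+01
   print *, term1, term2
end program single_vs_chained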
The table below shows the differences in term starting from j=5:
 j   term(alg-1)                term(alg-2)
      0.12345678901234567        0.12345678901234567
 1    5.00000000000000000E+00    5.00000000000000000E+00
 3   -2.08333333333333321E+01   -2.08333333333333321E+01
 5    2.60416666666666679E+01    2.60416666666666643E+01
 7   -1.55009920634920633E+01   -1.55009920634920633E+01
 9    5.38228891093474449E+00    5.38228891093474360E+00
11   -1.22324747975789649E+00   -1.22324747975789627E+00
13    1.96033249961201361E-01    1.96033249961201306E-01
15   -2.33372916620477808E-02   -2.33372916620477738E-02
17    2.14497166011468543E-03    2.14497166011468499E-03
The size of the error created in this way is of the magnitude:
AE(term(j-2)) * xx / (j*(j-1))
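For example, at j = 5 this gives roughly 2^-49 * 25/20 ≈ 2.2E-15, which is indeed the order of magnitude of the one-ulp difference (about 3.6E-15) between the two columns of the table above.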
Can we still make improvements?
The answer is clearly yes, but we need to use some 'tricks'. The easiest way would be to use a higher precision and then convert everything back to double precision, but this is cheating!
Earlier it was mentioned that loss of significance introduces more errors, and in this comment it was mentioned that from 23! onwards, you lose accuracy in the factorial as it cannot be represented correctly anymore as a binary64 number.
We will try to improve the answer by keeping track of an error term and making use of what is known as compensated summation or Kahan summation. As was indicated earlier in this post, it is the inverse factorial that essentially leads to the loss of precision.
computing the inverse factorial: instead of computing the factorial, we will compute the inverse of the factorial in the following way. Imagine we have computed f ~ 1/(n-1)! with corresponding error term q such that f + q ~ 1/(n-1)!. We know that the FPR of f can be written as f = a * b^(e-p) with a, b, e, p integers. So it is possible to use integer division to compute 1/n! and its corresponding error term using the following set of operations:
f = (a/n) * b^(e-p)
q = (q + mod(a,n) * b^(e-p))/n
In Fortran this leads to the following code:
f = 1.0_dp; q = 0.0_dp
do i = 2,10
   ! retrieving the integer from 1/(i-1)!
   a = int(scale(fraction(f),digits(f)),kind=INT64)
   ! computing the error on f while keeping track of the
   ! previous error
   q = (q + scale(real(mod(a,i),kind=dp),exponent(f)-digits(f)))/i
   ! computing f/i resembling 1/i!
   f = scale(real(a/i ,kind=dp),exponent(f)-digits(f))
   ! rebalancing the terms
   t = f; f = f + q; q = q - (f - t)
end do
Here, the last line rebalances f with its error q, a trick stemming from Kahan summation.
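For reference, plain Kahan (compensated) summation of a sequence of terms a(1), ..., a(n) looks like this (a generic sketch with my own variable names, not yet the combined algorithm below):

s = 0.0_dp; err = 0.0_dp
do i = 1,n
   t   = a(i) - err          ! apply the compensation carried over from the previous step
   tmp = s + t               ! the low-order bits of t are lost in this addition ...
   err = (tmp - s) - t       ! ... and recovered here as the new compensation
   s   = tmp
end do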
Kahan summation with inverse factorial ... the new sin:
It is now possible to combine this trick with algorithm 2 to make it a bit better:
pow = x; sum = x; xx = x*x; err = 0.0_dp
f = 1.0_dp; q = 0.0_dp
do i=2,30
   j=2*i-1
   ! The factorial part
   a = int(scale(fraction(f),digits(f)),kind=INT64)
   q = (q + scale(real(mod(a,j*(j-1)),kind=dp),exponent(f)-digits(f)))/(j*(j-1))
   f = scale(real(a/(j*(j-1)) ,kind=dp),exponent(f)-digits(f))
   t = f; f = f + q; q = q - (f - t)
   pow = -pow*xx              ! computes x^j
   ! Kahan summation
   t = pow*f; err = err + (t - ((sum + t) - sum)); sum = sum + t
   t = pow*q; err = err + (t - ((sum + t) - sum)); sum = sum + t
   t = sum; sum = sum + err; err = err - (sum - t)
end do
This algorithm leads to remarkably good results: excellent accuracy in the first and second quadrants, slightly worse in the third, and still okay in the fourth. Near Pi and 2 Pi, however, the accuracy is horrible.
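To put numbers on these claims, the loop above can be wrapped in a function (sin_taylor below is my own naming and wrapping, not part of the original algorithms) and compared against the intrinsic sin at the quadrant boundaries:

program compare_sin
   use, intrinsic :: iso_fortran_env, only: dp => real64, int64
   implicit none
   real(dp), parameter :: pi = acos(-1.0_dp)
   real(dp) :: x
   integer  :: k
   do k = 1, 4
      x = k*pi/2                              ! pi/2, pi, 3*pi/2, 2*pi
      print *, x, sin_taylor(x), sin(x)
   end do
contains
   function sin_taylor(x) result(s)
      ! the combined inverse-factorial / Kahan loop from above, wrapped as a function
      real(dp), intent(in) :: x
      real(dp) :: s, pow, xx, err, f, q, t
      integer(int64) :: a
      integer :: i, j
      pow = x; s = x; xx = x*x; err = 0.0_dp
      f = 1.0_dp; q = 0.0_dp
      do i = 2,30
         j = 2*i-1
         a = int(scale(fraction(f),digits(f)),kind=int64)
         q = (q + scale(real(mod(a,int(j*(j-1),int64)),kind=dp),exponent(f)-digits(f)))/(j*(j-1))
         f = scale(real(a/(j*(j-1)),kind=dp),exponent(f)-digits(f))
         t = f; f = f + q; q = q - (f - t)
         pow = -pow*xx
         t = pow*f; err = err + (t - ((s + t) - s)); s = s + t
         t = pow*q; err = err + (t - ((s + t) - s)); s = s + t
         t = s; s = s + err; err = err - (s - t)
      end do
   end function sin_taylor
end program compare_sin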