I'm having trouble with the precision of constant numerics in Fortran.

Do I need to write every 0.1 as 0.1d0 to get double precision? I know gfortran has a flag, -fdefault-real-8, that addresses this kind of problem. Would that be a portable and reliable approach? And how could I check whether the flag actually affects my code?

I am using F2py to call Fortran code from Python, and it does not report an error even if I pass an unrecognized flag, which is what worries me.

    With gfortran, one should avoid using -fdefault-real-8 as it does not solve the problem, and it can in fact cause other problems. The solution is to write DOUBLE PRECISION numbers with either the D0 suffix or a kind parameter suffix. Commented Jun 18, 2018 at 19:09

3 Answers

In a Fortran program 1.0 is always a default real literal constant and 1.0d0 always a double precision literal constant.

However, "double precision" means different things in different contexts.

In Fortran contexts "double precision" refers to a particular kind of real which has greater precision than the default real kind. In more general communication "double precision" is often taken to mean a particular real kind of 64 bits which matches the IEEE floating point specification.

gfortran's compiler flag -fdefault-real-8 means that the default real takes 8 bytes and is likely to be that which the compiler would use to represent IEEE double precision.

So, 1.0 is a default real literal constant, not a double precision literal constant, but a default real may happen to be the same as an IEEE double precision.

Questions like this one show the pitfalls of relying on the precision of literal constants. To anyone asking my advice about flags like -fdefault-real-8, I would say: avoid them.
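As a quick way to see the distinction (and to check what a flag such as -fdefault-real-8 actually does to your code), one can query the kind and precision of the literals directly. This is a minimal sketch; the kind numbers mentioned in the comments assume gfortran's usual kind numbering:

```fortran
program check_kinds
    implicit none
    ! With gfortran's defaults, kind(1.0) is typically 4 and kind(1.0d0) is 8.
    ! Compiling with -fdefault-real-8 changes kind(1.0) but not kind(1.0d0).
    print *, "kind of 1.0:      ", kind(1.0)
    print *, "kind of 1.0d0:    ", kind(1.0d0)
    print *, "decimal precision:", precision(1.0), precision(1.0d0)
end program check_kinds
```

If kind(1.0) and kind(1.0d0) print the same value, the default real has been promoted, which is one way to confirm that the flag actually reached the compiler.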


Adding to @francescalus's answer above: since the meaning of double precision can change across platforms and compilers, it is, in my opinion, good practice to explicitly declare the desired kind of a constant using the standard Fortran convention, as in the following example:

    program test
        use, intrinsic :: iso_fortran_env, only: RK => real64
        implicit none
        write(*,"(*(g20.15))") "real64: ", 2._RK / 3._RK
        write(*,"(*(g20.15))") "double precision: ", 2.d0 / 3.d0
        write(*,"(*(g20.15))") "single precision: ", 2.e0 / 3.e0
    end program test

Compiling this code with gfortran gives:

    $ gfortran -std=gnu *.f95 -o main
    $ ./main
    real64:             .666666666666667
    double precision:   .666666666666667
    single precision:   .666666686534882

Here, the results in the first two lines (explicit request for 64-bit real kind, and double precision kind) are the same. However, in general, this may not be the case and the double precision result could depend on the compiler flags or the hardware, whereas the real64 kind will always conform to 64-bit real kind computation, regardless of the default real kind.

Now consider another scenario, in which a real variable is declared with the 64-bit kind but the computation on the right-hand side is performed in 32-bit precision:

    program test
        use, intrinsic :: iso_fortran_env, only: RK => real64
        implicit none
        real(RK) :: real_64
        real_64 = 2.e0 / 3.e0
        write(*,"(*(g30.15))") "32-bit accuracy is returned: ", real_64
        real_64 = 2._RK / 3._RK
        write(*,"(*(g30.15))") "64-bit accuracy is returned: ", real_64
    end program test

which gives the following output,

    $ gfortran -std=gnu *.f95 -o main
    $ ./main
    32-bit accuracy is returned:    0.666666686534882
    64-bit accuracy is returned:    0.666666666666667

Even though the variable is declared as real64, the result on the first line is still wrong, in the sense that it does not conform to the 64-bit precision that you desire. The reason is that the right-hand side is first evaluated in the precision of the literal constants (the default 32-bit real) and only then stored in the 64-bit variable real_64, hence the difference from the more accurate answer on the second line of the output.

So the bottom-line message is: It is always a good practice to explicitly declare the kind of the literal constants in Fortran using the "underscore" convention.

1 Comment

Note that real64 or real32 (a) do not need to be supported by the processor (although most compilers will do) and (b) do not need to follow the IEEE 754 standard (if you want to ensure that, you need to obtain your kinds from appropriate calls to the IEEE_SELECTED_REAL_KIND function provided by the IEEE_ARITHMETIC intrinsic module).
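The suggestion in the comment above can be illustrated with a short sketch; the kind name wp is an arbitrary choice, and the program assumes the processor supports an IEEE kind with at least 15 decimal digits of precision:

```fortran
program ieee_kind_demo
    use, intrinsic :: ieee_arithmetic, only: ieee_selected_real_kind
    implicit none
    ! Request an IEEE-conforming real kind with at least 15 decimal digits
    ! of precision and a decimal exponent range of at least 300.
    integer, parameter :: wp = ieee_selected_real_kind(p=15, r=300)
    real(wp) :: x
    x = 2.0_wp / 3.0_wp
    print *, "kind:", wp, " value:", x
end program ieee_kind_demo
```

If no such IEEE kind exists, ieee_selected_real_kind returns a negative value, so the declaration of x fails to compile, which is itself a useful diagnostic.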

The answer to your question is: yes, you do need to indicate that the constant is double precision. 0.1 is a common example of this problem, because its 4-byte and 8-byte representations differ: 0.1 is not exactly representable in binary, so the single-precision value, widened to double, is not the same as the double-precision value. Constants that are exactly representable (e.g. 0.5), where the extra precision bytes are all zero, don't have this problem.
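A minimal sketch of the 0.1-versus-0.5 behaviour described above (the variable names are illustrative):

```fortran
program tenth_vs_half
    implicit none
    double precision :: a, b
    a = 0.1      ! default-real 0.1, rounded to single precision, then widened
    b = 0.1d0    ! genuine double precision 0.1
    print *, "0.1 difference:", a - b   ! nonzero: the rounding error survives
    a = 0.5      ! 0.5 is exactly representable in binary
    b = 0.5d0
    print *, "0.5 difference:", a - b   ! exactly zero
end program tenth_vs_half
```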

This was introduced into Fortran at F90 and has caused problems in the conversion and reuse of many legacy FORTRAN codes. Prior to F90, the result of double precision a = 0.1 could have used a real 0.1 or a double 0.1 constant, although all the compilers I used provided a double precision value. This can be a common source of inconsistent results when testing legacy codes against published results. Examples are frequently reported; e.g. PI=3.141592654 appeared in code on a forum this week.

However, using 0.1 as a subroutine argument has always caused problems, as this would be transferred as a real constant.

So, given the history of how real constants have been handled, you do need to explicitly specify a double precision constant when one is required. It is not a user-friendly approach.

4 Comments

Double precision and the 1.0d0 literal were already present in FORTRAN IV and the FORTRAN 66 standard.
"Prior to F90, the result of double precision a = 0.1 could have used a real 0.1 or double 0.1 constant, although all compilers I used provided a double precision value." Really!?! I'm 99% sure this is incorrect - 0.1 was and is real, 0.1d0 was and is double precision, and the compiler is not free to convert from one to the other in the example you give. See francescalus' answer.
@Ian Bush, When converting code developed on 60 bit CDC to 64 bit minis, the Pr1me FTN compiler provided increased accuracy for "a = 0.1". Prior to F90, compilers could provide either precision for this statement, but, importantly, since then (now) they are required not to. Lahey F95 documentation also refers to this in "Fortran 77 Compatibility". This has caused problems with some legacy codes developed in that environment, that continue to be used.
The f66 compiler I learned on made constants double precision by default if written with 9 or more digits. Half precision constants were employed when exact.
