
SQL-92 says:

 16) For the <exact numeric type>s DECIMAL and NUMERIC: a) The maximum value of <precision> is implementation-defined. <precision> shall not be greater than this value.

And yet it's 38 in Oracle, MS SQL Server, PostgreSQL, Snowflake, and Teradata. Only MySQL and DB2 differ, allowing up to 65 and 31 significant digits respectively. I guess it's because a 128-bit integer can guarantee 38 decimal digits of precision, but how did almost everyone agree on using a 128-bit integer for storing the mantissa?
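
A quick way to check that guess, as a minimal Python sketch (any language with arbitrary-precision integers works the same way): a signed 128-bit integer tops out at 2^127 - 1, which is just above 1.7 × 10^38, so every 38-digit number fits but not every 39-digit one.

    import math

    max_i128 = 2**127 - 1
    print(max_i128)                          # 170141183460469231731687303715884105727 (39 digits)
    print(math.floor(math.log10(max_i128)))  # 38 -> decimal digits fully guaranteed
    print(10**38 - 1 <= max_i128)            # True:  every 38-digit number fits
    print(10**39 - 1 <= max_i128)            # False: not every 39-digit number fits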

  • That 128-bit to 38-decimal-digit correspondence makes perfect sense. Popular binary types are 8, 16, 32, 64, and 128 bits long. I don't know of practical implementations for 256 bits and up, so folks choosing the maximum for representing high-precision numbers is to be expected and hardly a coincidence. The other numbers do raise some interesting questions, though: 65 may map to 192 bits and 31 possibly to 80. Wikipedia has a convenient page related to this: en.wikipedia.org/wiki/Power_of_two . Commented Jun 6, 2023 at 11:28
  • There's decimal128, which supports 34 significant digits. I know it's much newer than most of these systems, but it allows the whole number to fit into 128 bits, not just the mantissa. Commented Jun 13, 2023 at 12:13

1 Answer


The common 38-digit maximum in most RDBMSs aligns with the storage limits and computational efficiency of a 128-bit integer representation: a signed 128-bit integer holds values up to 2^127 - 1, roughly 1.7 × 10^38, so 38 decimal digits are the most it can guarantee. Adopting 128 bits reflects a practical balance in hardware architectures, where power-of-two bit sizes (e.g., 64, 128) ensure consistent performance across operations.
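
To make the fit concrete, here is an illustrative Python sketch (the hi/lo split below is hypothetical, not any particular engine's on-disk layout): the coefficient of a 38-digit decimal needs at most 127 bits, which divides cleanly across two 64-bit machine words.

    coeff = 10**38 - 1         # largest 38-digit coefficient
    print(coeff.bit_length())  # 127 -> fits in a signed 128-bit integer

    # Hypothetical split across two 64-bit words, the usual way
    # 128-bit arithmetic is carried out on 64-bit hardware
    hi, lo = divmod(coeff, 2**64)
    assert (hi << 64) | lo == coeff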

While SQL-92 leaves the maximum precision implementation-defined, major RDBMSs like Oracle, SQL Server, and PostgreSQL (and modern cloud data warehouses such as Snowflake) have settled on 38 digits, which promotes interoperability and eases data migrations across systems. MySQL, however, supports up to 65 digits, which it achieves not with a single wide integer but with a packed binary format that stores nine decimal digits in every four bytes. DB2 limits precision to 31 digits, a figure that falls out of its legacy packed-decimal (BCD) storage, where 31 digits plus a sign nibble fill exactly 16 bytes.
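
The byte counts behind those two limits can be sketched in a few lines of Python (assuming an integer-only DECIMAL(p,0); MySQL actually packs the integer and fraction parts separately, per its documentation):

    import math

    def db2_packed_decimal_bytes(p):
        # Packed decimal (BCD): one nibble per digit plus a sign nibble
        return math.ceil((p + 1) / 2)

    def mysql_decimal_bytes(p):
        # Nine decimal digits per four bytes; leftover digits take
        # 1-4 bytes per MySQL's documented lookup table
        leftover_bytes = [0, 1, 1, 2, 2, 3, 3, 4, 4]
        groups, leftover = divmod(p, 9)
        return groups * 4 + leftover_bytes[leftover]

    print(db2_packed_decimal_bytes(31))  # 16 -> 31 digits fill exactly 16 bytes
    print(mysql_decimal_bytes(65))       # 29 -> MySQL's 65-digit maximum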

Although IEEE 754 decimal128 offers up to 34 significant digits within 128 bits, it achieves this by packing a sign and exponent into those same bits, and it is designed for floating-point arithmetic rather than fixed-precision storage. The RDBMS choice of 38 digits reflects a sensible trade-off between precision, storage efficiency, and industry compatibility within a 128-bit integer coefficient.
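
Python's decimal module can emulate a decimal128-style context, which makes the 34-digit limit easy to see (prec, Emin, and Emax below are the IEEE 754-2008 decimal128 parameters):

    from decimal import Decimal, Context

    d128 = Context(prec=34, Emin=-6143, Emax=6144)
    third = d128.divide(Decimal(1), Decimal(3))
    print(third)  # 0.3333333333333333333333333333333333 (34 significant digits)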
