The standard 38-digit precision in most RDBMSs aligns with the storage limits and computational efficiency of 128-bit integer representations: a signed 128-bit integer tops out at roughly 1.7 × 10^38, so every 38-digit value fits comfortably while 39-digit values do not. Adopting 128 bits also reflects a practical balance in hardware architectures, where power-of-two widths (e.g., 64 or 128 bits) map cleanly onto registers and keep arithmetic performance consistent across operations.
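As a quick sanity check on that digit count, here is a minimal sketch in plain Python (not tied to any database engine) that counts how many full decimal digits a signed 128-bit integer can always represent:

```python
# Count how many decimal digits a signed 128-bit integer can always hold.
INT128_MAX = 2**127 - 1  # about 1.70e38

full_digits = 0
while 10**(full_digits + 1) - 1 <= INT128_MAX:
    full_digits += 1

print(INT128_MAX)    # 170141183460469231731687303715884105727
print(full_digits)   # 38 -- every 38-digit number fits, but not every 39-digit one
```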
While SQL-92 specifies that maximum precision is implementation-defined, major RDBMSs like Oracle and SQL Server, along with modern cloud-based data warehouses such as Snowflake, have settled on 38 digits, which promotes interoperability and eases data migration across systems. MySQL, however, supports up to 65 digits, achieved with a packed storage format that encodes nine decimal digits per four bytes rather than a single power-of-two-width integer, and PostgreSQL's NUMERIC likewise allows far higher declared precision because it is stored as a variable-length value. DB2, by contrast, limits precision to 31 digits, a ceiling that likely traces back to mainframe packed-decimal (BCD) formats, in which 31 digits plus a sign nibble fill exactly 16 bytes.
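The arithmetic behind that 31-digit ceiling is easy to reproduce; the sketch below assumes the classic packed-decimal layout of one 4-bit nibble per digit plus a trailing sign nibble, which is the usual explanation for the limit rather than anything taken from DB2 itself:

```python
# Packed-decimal (BCD) sizing: one 4-bit nibble per digit plus a trailing sign nibble.
def packed_decimal_bytes(digits: int) -> int:
    nibbles = digits + 1          # all digits plus the sign nibble
    return (nibbles + 1) // 2     # two nibbles per byte, rounded up

print(packed_decimal_bytes(31))   # 16 -- fills a 16-byte packed-decimal field exactly
print(packed_decimal_bytes(38))   # 20 -- would overflow the classic 16-byte limit
```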
Although the IEEE 754 decimal128 format also fits within 128 bits, it provides only 34 significand digits and is designed for floating-point arithmetic rather than fixed-precision storage. The RDBMS choice of 38 digits therefore reflects a pragmatic trade-off between precision, storage efficiency, and industry compatibility within the 128-bit framework.
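To see the 34-digit limit in action, Python's decimal module can roughly emulate decimal128's significand by capping the working precision at 34 digits (this ignores decimal128's exponent range and is purely illustrative; it is not how any of the engines above store values):

```python
from decimal import Decimal, Context

# Emulate IEEE 754 decimal128's 34-digit significand by capping working precision at 34.
ctx = Context(prec=34)

big = Decimal("1e34")                # representable, but adding 1 needs a 35th digit
bumped = ctx.add(big, Decimal(1))    # exact result 10**34 + 1 is rounded back down

print(bumped == big)                 # True -- the added 1 is lost at 34-digit precision
```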