In computing, decimal128 is a decimal floating-point number format that occupies 16 bytes (128 bits) in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations.
Decimal128 supports 34 decimal digits of significand and an exponent range of −6143 to +6144, i.e. ±0.000000000000000000000000000000000×10^−6143 to ±9.999999999999999999999999999999999×10^6144. (Equivalently, ±0000000000000000000000000000000000×10^−6176 to ±9999999999999999999999999999999999×10^6111.) Decimal128 therefore has the greatest range of values of all the IEEE basic floating-point formats. Because the significand is not normalized, most values with fewer than 34 significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, and so on. Zero has 12288 possible representations (24576 counting both signed zeros).
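This cohort behaviour can be observed with Python's decimal module, which implements the General Decimal Arithmetic specification rather than the decimal128 bit encoding itself but models the same unnormalized significand; the sketch below is an illustration under that assumption, not part of the IEEE 754 text. It shows three equal-valued but internally distinct representations of 100, and derives the count of zero's representations from the exponent range −6176 to +6111 of the integer-coefficient form quoted above.

```python
from decimal import Decimal

# Three members of one cohort: numerically equal, internally distinct.
a = Decimal("100")    # coefficient 100, exponent 0
b = Decimal("10E+1")  # coefficient 10,  exponent 1
c = Decimal("1E+2")   # coefficient 1,   exponent 2

print(a == b == c)    # True: comparison is by value
print(a.as_tuple())   # DecimalTuple(sign=0, digits=(1, 0, 0), exponent=0)
print(b.as_tuple())   # DecimalTuple(sign=0, digits=(1, 0), exponent=1)
print(c.as_tuple())   # DecimalTuple(sign=0, digits=(1,), exponent=2)

# Zero's cohort: one representation per exponent of the integer-coefficient
# form (q from -6176 to +6111), doubled for the two signed zeros.
q_min, q_max = -6176, 6111
print(q_max - q_min + 1)        # 12288
print(2 * (q_max - q_min + 1))  # 24576
```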