IBM Floating Point Architecture


IBM System/360 computers, and subsequent machines based on that architecture (mainframes), support a hexadecimal floating-point format.

In comparison to IEEE 754 floating-point, the IBM floating-point format has a longer significand and a shorter exponent. All IBM floating-point formats have 7 bits of exponent with a bias of 64. The normalized range of representable numbers is from 16^−65 to 16^63 (approx. 5.39761 × 10^−79 to 7.237005 × 10^75).

The number is represented by the following formula: (−1)^sign × 0.significand × 16^(exponent − 64).

A single-precision hexadecimal floating-point number is stored in a 32-bit word: 1 sign bit, 7 bits of biased exponent, and 24 fraction bits.

In this format the initial bit is not suppressed (there is no implicit leading bit as in IEEE 754), and the radix point lies to the left of the significand; normalization shifts it in increments of 4 bits (one hexadecimal digit).
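
To make the layout concrete, the following minimal Python sketch decodes a 32-bit word under the field layout described above (bit 31 = sign, bits 30–24 = biased exponent, bits 23–0 = fraction); the name decode_ibm32 is illustrative, not part of any standard library.

    # Minimal decoding sketch, assuming the layout above:
    # bit 31 = sign, bits 30-24 = biased exponent, bits 23-0 = fraction.
    def decode_ibm32(word: int) -> float:
        sign = (word >> 31) & 0x1         # 1 sign bit
        exponent = (word >> 24) & 0x7F    # 7 exponent bits, bias 64
        fraction = word & 0xFFFFFF        # 24 fraction bits
        # (-1)^sign * 0.fraction * 16^(exponent - 64)
        value = (fraction / float(1 << 24)) * 16.0 ** (exponent - 64)
        return -value if sign else value

    # The example worked through below encodes -118.625 as C276A000 (hex):
    assert decode_ibm32(0xC276A000) == -118.625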

Since the base is 16, the exponent in this form covers about twice the range of the equivalent binary exponent in IEEE 754; because each power of 16 is four powers of 2, 9 exponent bits would be required to obtain a similar exponent range in binary.

Consider encoding the value −118.625 as an IBM single-precision floating-point value.

The value is negative, so the sign bit is 1.

The value 118.625₁₀ in binary is 1110110.101₂. This value is normalized by moving the radix point left four bits (one hexadecimal digit) at a time until the integer part is zero, yielding 0.01110110101₂. The remaining rightmost digits are padded with zeros, yielding a 24-bit fraction of .0111 0110 1010 0000 0000 0000₂.

The normalization moved the radix point two hexadecimal digits to the left, yielding a multiplier and exponent of 16^+2. A bias of +64 is added to the exponent (+2), yielding +66, which is 100 0010₂.

Combining the sign (1), exponent plus bias (100 0010₂), and normalized fraction (.0111 0110 1010 0000 0000 0000₂) produces the encoding 1100 0010 0111 0110 1010 0000 0000 0000₂, i.e. C276A000₁₆.
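
The steps above can be collected into a small Python encoding sketch; it is an illustration under the assumptions already stated (nonzero, normalized, in-range values; the 24-bit fraction is truncated rather than rounded), and encode_ibm32 is a hypothetical name.

    # Minimal encoding sketch following the steps above; handles only
    # nonzero, normalized, in-range values and truncates (does not
    # round) the 24-bit fraction.
    def encode_ibm32(value: float) -> int:
        sign = 1 if value < 0 else 0
        magnitude = abs(value)
        # Normalize so 1/16 <= magnitude < 1: the first hexadecimal
        # digit of the fraction becomes nonzero.
        exponent = 0
        while magnitude >= 1.0:
            magnitude /= 16.0
            exponent += 1
        while magnitude < 1.0 / 16.0:
            magnitude *= 16.0
            exponent -= 1
        fraction = int(magnitude * (1 << 24))   # truncate to 24 bits
        return (sign << 31) | ((exponent + 64) << 24) | fraction

    assert encode_ibm32(-118.625) == 0xC276A000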

The largest representable number, with encoding 7FFFFFFF₁₆, is +0.FFFFFF₁₆ × 16^(127 − 64) = (1 − 16^−6) × 16^63 ≈ +7.2370051 × 10^75.

The smallest positive normalized number, with encoding 00100000₁₆, is +0.1₁₆ × 16^(0 − 64) = 16^−1 × 16^−64 = 16^−65 ≈ +5.397605 × 10^−79.
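
Both extremes can be checked numerically with the decode_ibm32 sketch given earlier, using the bit patterns just described:

    print(decode_ibm32(0x7FFFFFFF))  # largest:  approx. 7.2370051e+75
    print(decode_ibm32(0x00100000))  # smallest: approx. 5.397605e-79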

