Binary-coded decimal


In computing and electronic systems, binary-coded decimal (BCD) is a class of binary encodings of decimal numbers where each decimal digit is represented by a fixed number of bits, usually four or eight. Special bit patterns are sometimes used for a sign or for other indications (e.g., error or overflow).

In byte-oriented systems (i.e., most modern computers), the term unpacked BCD usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two decimal digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9. The precise 4-bit encoding may vary, however, for technical reasons; see Excess-3, for instance. The ten states representing a BCD decimal digit are sometimes called tetrades (the nibble typically needed to hold them is also known as a tetrade), while the six unused don't-care states are called pseudo-tetrades or pseudo-decimal digits.
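
As a minimal sketch of the packed format (the helper names bcd_pack and bcd_unpack are illustrative, not from any standard library), the following C snippet stores two decimal digits in one byte, with the tens digit in the high nibble and the ones digit in the low nibble:

    #include <stdio.h>
    #include <stdint.h>

    /* Pack two decimal digits (each 0-9) into one byte:
       high nibble = tens digit, low nibble = ones digit. */
    uint8_t bcd_pack(unsigned tens, unsigned ones) {
        return (uint8_t)((tens << 4) | ones);
    }

    /* Unpack a packed-BCD byte back into its two digits. */
    void bcd_unpack(uint8_t bcd, unsigned *tens, unsigned *ones) {
        *tens = bcd >> 4;
        *ones = bcd & 0x0F;
    }

    int main(void) {
        uint8_t b = bcd_pack(4, 2);   /* decimal 42 -> byte 0x42 */
        unsigned t, o;
        bcd_unpack(b, &t, &o);
        printf("packed: 0x%02X, digits: %u%u\n", b, t, o);
        return 0;
    }

Note that the packed-BCD byte for decimal 42 reads 0x42 in hexadecimal, which is why the encoding is easy to inspect in memory dumps.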

BCD's main virtue is its more accurate representation and rounding of decimal quantities, as well as its ease of conversion into human-readable representations, in comparison with binary positional systems. Its principal drawbacks are a small increase in the complexity of the circuits needed to implement basic arithmetic and slightly less dense storage.
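
The conversion advantage can be made concrete with a sketch (bcd_to_ascii is a hypothetical helper, not an established API): because each nibble already holds a decimal digit, turning packed BCD into printable characters needs only an offset by '0', with none of the division and remainder steps that converting a plain binary integer requires.

    #include <stdio.h>
    #include <stdint.h>

    /* Convert a packed-BCD byte to a two-character ASCII string.
       Each nibble is already a decimal digit, so the conversion
       is a simple '0' offset -- no division or modulo needed. */
    void bcd_to_ascii(uint8_t bcd, char out[3]) {
        out[0] = (char)('0' + (bcd >> 4));
        out[1] = (char)('0' + (bcd & 0x0F));
        out[2] = '\0';
    }

    int main(void) {
        char s[3];
        bcd_to_ascii(0x42, s);   /* packed BCD for decimal 42 */
        printf("%s\n", s);       /* prints "42" */
        return 0;
    }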

BCD was used in many early decimal computers and is implemented in the instruction sets of machines such as the IBM System/360 series and its descendants and Digital's VAX. Although BCD per se is not as widely used as in the past and is no longer implemented in newer computers' instruction sets (ARM, for example, has no BCD instructions; x86 still has them, but they are no longer optimized for speed), decimal fixed-point and floating-point formats are still important and continue to be used in financial, commercial, and industrial computing, where the subtle conversion and fractional rounding errors inherent in binary floating-point representations cannot be tolerated.
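
The rounding problem alluded to above is easy to demonstrate (a sketch in C; the exact digits printed assume IEEE-754 double precision): 0.1 has no finite binary representation, so summing it repeatedly drifts away from the exact decimal result, whereas a decimal (BCD-based) format would represent each step exactly.

    #include <stdio.h>

    int main(void) {
        double sum = 0.0;
        /* Add 0.1 ten times; in binary floating point each 0.1
           is only an approximation, so error accumulates. */
        for (int i = 0; i < 10; i++)
            sum += 0.1;
        printf("sum = %.17g\n", sum);   /* typically 0.99999999999999989 */
        printf("equal to 1.0? %s\n", sum == 1.0 ? "yes" : "no");  /* no */
        return 0;
    }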

