In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are also used to measure the information contents or entropy of random variables.
The most common units are the bit, the capacity of a system that can exist in only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (binary power prefixes). Information capacity is a dimensionless quantity.
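To make the difference between the two prefix families concrete, here is a minimal Python sketch contrasting SI (decimal) and IEC (binary) multiples of the byte; the prefix values are the standard SI/IEC definitions, while the table names and loop are illustrative only:

```python
# SI (power-of-ten) vs. IEC (binary power) multiples of the byte.
SI_PREFIXES  = {"kB": 10**3, "MB": 10**6, "GB": 10**9}     # decimal
IEC_PREFIXES = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30}  # binary

for (si, dec), (iec, binv) in zip(SI_PREFIXES.items(), IEC_PREFIXES.items()):
    # The gap between the two conventions grows with each prefix step:
    # 2.4% at kilo/kibi, 4.9% at mega/mebi, 7.4% at giga/gibi.
    print(f"1 {si} = {dec:>13,} bytes   1 {iec} = {binv:>13,} bytes "
          f"(ratio {binv / dec:.4f})")
```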
In 1928, Ralph Hartley observed a fundamental storage principle, which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm log_b(N) of the number N of possible states of that system. Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c(N) = (log_c b) · log_b(N). Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.
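The change-of-base relation can be checked directly; the following Python sketch (the function name `capacity` and the choice of N = 8 are illustrative, not from the source) computes the capacity of the same system in three common units and verifies that log_c(N) = (log_c b) · log_b(N):

```python
import math

def capacity(n_states: int, base: float) -> float:
    """Information capacity of a system with n_states possible states,
    measured in the unit fixed by the logarithm base."""
    return math.log(n_states, base)

N = 8
bits  = capacity(N, 2)   # base 2: shannons/bits,  log_2(8)  = 3.0
trits = capacity(N, 3)   # base 3: trits,          log_3(8) ~= 1.893
nats  = math.log(N)      # base e: nats,           ln(8)    ~= 2.079

# Change of base: log_c(N) == log_c(b) * log_b(N), here with c = e, b = 2.
assert math.isclose(nats, math.log(2) * bits)
print(f"{bits=:.3f} {trits=:.3f} {nats=:.3f}")
```

The same system holds the same amount of information in every case; only the unit of measurement changes with the base.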
When b is 2, the unit is the shannon, equal to the information content of one "bit" (a portmanteau of binary digit). A system with 8 possible states, for example, can store up to log_2(8) = 3 bits of information. Other units that have been named include: