In computing, signedness is a property of data types representing numbers in computer programs. A numeric variable is signed if it can represent both positive and negative numbers, and unsigned if it can only represent non-negative numbers (zero or positive numbers).
Because signed numbers must also represent negative values, they give up part of the positive range that an unsigned number of the same size (in bits) can represent: half of the possible bit patterns are used for negative values. For example, if an 8-bit integer is signed, it covers −128 to 127, and the values 128 to 255 that an unsigned 8-bit integer can hold are unavailable. Unsigned variables can dedicate all possible values to the non-negative range.
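As a quick illustration of this trade-off, the following C sketch prints the ranges of the exact-width 8-bit types from <stdint.h> (these types are optional in the standard, but present on virtually all current platforms):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Both types have 256 bit patterns; the signed type spends half of them
       on negative values, while the unsigned type uses all of them for 0..255. */
    printf("signed 8-bit:   %d to %d\n", INT8_MIN, INT8_MAX);  /* -128 to 127 */
    printf("unsigned 8-bit: %d to %d\n", 0, UINT8_MAX);        /* 0 to 255    */
    return 0;
}
```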
For example, a two's complement signed 16-bit integer can hold the values −32768 to 32767 inclusive, while an unsigned 16-bit integer can hold the values 0 to 65535. In this sign representation, the leftmost bit (most significant bit) indicates whether the value is negative (0 for non-negative, 1 for negative).
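The role of the most significant bit can be seen by reading the same 16-bit pattern both ways, as in this C sketch (the conversion of an out-of-range value to a signed type is implementation-defined in C, but yields −32768 on two's complement machines):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t bits = 0x8000;        /* only the most significant bit is set */

    uint16_t u = bits;             /* read as unsigned: 32768 */
    int16_t  s = (int16_t)bits;    /* read as signed two's complement: -32768
                                      (implementation-defined conversion) */

    printf("pattern 0x%04X: unsigned = %u, signed = %d\n",
           (unsigned)bits, (unsigned)u, (int)s);
    return 0;
}
```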
For most architectures, there is no signed–unsigned type distinction in the machine language. Nevertheless, arithmetic instructions usually set different CPU flags, such as the carry flag for unsigned arithmetic and the overflow flag for signed arithmetic. These flags can be taken into account by subsequent branch or arithmetic instructions.
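CPU flags are not directly visible from C, but the following sketch shows why the same bit-level addition needs two different flags: the carry flag reports wraparound of the unsigned interpretation of the result, while the overflow flag reports wraparound of the signed one.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 0x7F + 0x01 = 0x80: no unsigned carry (127 + 1 = 128 fits in 8 bits),
       but signed overflow (127 + 1 wraps to -128). */
    uint8_t u1 = (uint8_t)(0x7F + 0x01);
    int8_t  s1 = (int8_t)(127 + 1);   /* conversion of 128 to int8_t is
                                         implementation-defined; typically -128 */

    /* 0xFF + 0x01 = 0x00: unsigned carry (255 + 1 wraps to 0),
       but no signed overflow (-1 + 1 = 0). */
    uint8_t u2 = (uint8_t)(0xFF + 0x01);
    int8_t  s2 = (int8_t)(-1 + 1);

    printf("0x7F + 0x01: unsigned %u, signed %d\n", (unsigned)u1, (int)s1);
    printf("0xFF + 0x01: unsigned %u, signed %d\n", (unsigned)u2, (int)s2);
    return 0;
}
```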
The C programming language, together with its derivatives, implements signedness for all integer data types, as well as for char. The unsigned modifier defines the type to be unsigned; integer types are signed by default, which can also be stated explicitly with the signed modifier. Integer literals can be made unsigned with the U suffix. For example, in 32-bit code, 0xFFFFFFFF gives −1 when stored in a signed int, but 0xFFFFFFFFU gives 4,294,967,295.
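A minimal sketch of these conventions, assuming a 32-bit two's complement int (the conversion of an out-of-range value to a signed type is implementation-defined):

```c
#include <stdio.h>

int main(void)
{
    signed int   a = -1;             /* "signed" is the default and may be omitted */
    unsigned int b = 0xFFFFFFFFU;    /* U suffix: unsigned literal, 4,294,967,295  */

    int          c = 0xFFFFFFFF;     /* same bit pattern stored in a signed int:
                                        -1 on a 32-bit two's complement machine
                                        (implementation-defined conversion) */

    printf("%d %u %d\n", a, b, c);   /* prints: -1 4294967295 -1 */
    return 0;
}
```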