GFLOPS

Computer performance

Name         Abbr.    FLOPS
kiloFLOPS    kFLOPS   10³
megaFLOPS    MFLOPS   10⁶
gigaFLOPS    GFLOPS   10⁹
teraFLOPS    TFLOPS   10¹²
petaFLOPS    PFLOPS   10¹⁵
exaFLOPS     EFLOPS   10¹⁸
zettaFLOPS   ZFLOPS   10²¹
yottaFLOPS   YFLOPS   10²⁴
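
As a brief, hedged illustration of these prefixes (the helper name format_flops below is ours, not from the article), the following Python sketch scales a raw operation rate to the largest fitting prefix from the table:

# A minimal sketch (illustrative only): express a raw floating-point
# operation rate using the prefixes listed in the table above.
PREFIXES = [
    ("yottaFLOPS", 1e24), ("zettaFLOPS", 1e21), ("exaFLOPS", 1e18),
    ("petaFLOPS", 1e15), ("teraFLOPS", 1e12), ("gigaFLOPS", 1e9),
    ("megaFLOPS", 1e6), ("kiloFLOPS", 1e3),
]

def format_flops(rate):
    """Return rate (operations per second) scaled to the largest fitting prefix."""
    for name, factor in PREFIXES:
        if rate >= factor:
            return f"{rate / factor:.2f} {name}"
    return f"{rate:.2f} FLOPS"

print(format_flops(9.7e12))   # prints "9.70 teraFLOPS"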

In computing, floating point operations per second (FLOPS) is a measure of computer performance, useful in fields of scientific computation that require floating-point calculations. For such cases, it is a more accurate measure than instructions per second.

The similar term FLOP is often used for a floating-point operation, for example as a unit for counting the floating-point operations carried out by an algorithm or by computer hardware.
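
To make this counting sense of FLOP concrete (a worked example added here, not from the original text), the dot product of two length-n vectors performs one multiplication and one addition per element when accumulated into a running sum, so about 2n FLOPs. A minimal Python sketch:

# A minimal sketch (illustrative, not from the article): counting the
# floating-point operations of a naive dot product of two length-n vectors.
def dot_product_with_flop_count(x, y):
    """Return (dot product, FLOP count) for two equal-length sequences."""
    total = 0.0
    flops = 0
    for a, b in zip(x, y):
        total += a * b   # one multiplication and one addition per element
        flops += 2
    return total, flops

value, flops = dot_product_with_flop_count([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(value, flops)   # 32.0 and 6 FLOPs (n = 3, so 2n = 6)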

Floating-point arithmetic is needed for very large or very small real numbers, or for computations that require a large dynamic range. Floating-point representation is similar to scientific notation, except that everything is carried out in base two rather than base ten. The encoding scheme stores the sign, the exponent (in base two for Cray and IEEE floating-point formats, or base sixteen for the IBM Floating Point Architecture), and the mantissa, or significand (the digits after the radix point). While several similar formats are in use, the most common is ANSI/IEEE Std. 754-1985. This standard defines the format for 32-bit numbers called single precision, 64-bit numbers called double precision, and longer numbers called extended precision (used for intermediate results). Floating-point representations can support a much wider range of values than fixed-point, with the ability to represent both very small and very large numbers.
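
To illustrate the single-precision layout just described (1 sign bit, 8 exponent bits, and 23 mantissa bits in IEEE 754), here is a short sketch added for illustration that unpacks a 32-bit float into those three fields using only the Python standard library:

# A minimal sketch: decomposing an IEEE 754 single-precision value into
# its sign, exponent, and mantissa fields.
import struct

def decompose_float32(x):
    """Return (sign, biased exponent, mantissa bits) of x as a 32-bit float."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))  # raw 32-bit pattern
    sign     = bits >> 31               # 1 bit
    exponent = (bits >> 23) & 0xFF      # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF          # 23 bits of the significand
    return sign, exponent, mantissa

print(decompose_float32(-6.25))   # (1, 129, 4718592): -1.5625 * 2**(129 - 127)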

The exponent inherent in floating-point representation ensures a much larger dynamic range – the span between the largest and smallest numbers that can be represented – which is especially important when processing data sets that are extremely large or whose range is unpredictable. As such, floating-point processors are ideally suited for computationally intensive applications.
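
To make the dynamic-range claim concrete, a brief sketch (assuming NumPy is available; the illustration is ours, not from the article) that prints the representable range of IEEE 754 single and double precision:

# A minimal sketch: the smallest normal and largest finite values of
# IEEE 754 single and double precision, via NumPy's finfo.
import numpy as np

for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    print(dtype.__name__, "smallest normal:", info.tiny, "largest:", info.max)

# float32 spans roughly 1.2e-38 .. 3.4e+38; float64 roughly 2.2e-308 .. 1.8e+308.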

FLOPS measures the floating-point computational performance of a computer. Floating-point operations, such as the addition, multiplication, or division of real-valued numbers, dominate scientific workloads; as such, FLOPS is a useful measure of supercomputer performance. MIPS, by contrast, measures the integer performance of a computer. Examples of integer operations include data movement (A to B) or value testing (if A = B, then C). MIPS is an adequate benchmark when a computer is used for database queries, word processing, spreadsheets, or running multiple virtual operating systems.

Frank H. McMahon, of the Lawrence Livermore National Laboratory, invented the terms FLOPS and MFLOPS (megaFLOPS) so that he could compare the so-called supercomputers of the day by the number of floating-point calculations they performed per second. This was far more informative than the then-prevalent MIPS rating, which usually had little bearing on a machine's arithmetic capability.
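
As a rough illustration of how a FLOPS figure can be estimated in practice (our sketch, not a method described in the article), the code below times a double-precision matrix multiplication, which performs about 2n³ floating-point operations, and reports the achieved GFLOPS:

# A minimal sketch: estimating achieved GFLOPS from a timed double-precision
# matrix multiplication (about 2*n**3 floating-point operations).
# Assumes NumPy is installed; results depend heavily on the BLAS backend.
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3                      # multiplications plus additions in a matmul
print(f"{flops / elapsed / 1e9:.1f} GFLOPS in {elapsed:.3f} s")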

