Floating point operations per second

This is an old revision of this page, as edited by Whkoh (talk | contribs) at 04:47, 12 May 2004 (fix typo).

FLOPS (which stands for floating point operations per second) is an approximate measure of a computer's processing speed. Because modern computers perform enormous numbers of such operations each second, the standard SI prefixes are used:

  • Megaflops means 10⁶ (one million) FLOPS
  • Gigaflops means 10⁹ (one billion) FLOPS
  • Teraflops means 10¹² FLOPS. The most powerful supercomputers have speeds measured in teraflops.
  • Petaflops means 10¹⁵ FLOPS, or a thousand teraflops.
  • Zettaflops means 10²¹ FLOPS, or a million petaflops.
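The prefix relationships above can be checked with a short script (a minimal sketch; the `PREFIXES` table and `convert` function are illustrative names, and the values are simply the SI power-of-ten definitions):

```python
# SI prefixes applied to FLOPS; each value is a power of ten.
PREFIXES = {
    "megaflops": 10**6,
    "gigaflops": 10**9,
    "teraflops": 10**12,
    "petaflops": 10**15,
    "zettaflops": 10**21,
}

def convert(value, from_unit, to_unit):
    """Convert a rate between two prefixed FLOPS units."""
    return value * PREFIXES[from_unit] / PREFIXES[to_unit]

# One zettaflops is a million petaflops, not a thousand.
print(convert(1, "zettaflops", "petaflops"))  # → 1000000.0
```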

It is interesting to note that the combined calculating power of all the computers on the planet is only several petaflops.

FLOPS, like MIPS, are arguably not very useful as a benchmark for modern computers, because computer performance depends on many factors other than raw floating-point speed, such as interprocessor communication, cache coherence, and the memory hierarchy.
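The memory-hierarchy point can be illustrated with a small experiment (a sketch, not a rigorous benchmark): two loops perform exactly the same floating-point additions, but one visits memory sequentially while the other visits it in a random order. In an interpreted language the effect is muted by interpreter overhead, but the random order is typically slower because of cache misses, even though the FLOP count is identical.

```python
import random
import time

def time_sum(indices, data):
    """Sum data[i] for each i in indices, returning (elapsed seconds, sum)."""
    start = time.perf_counter()
    total = 0.0
    for i in indices:
        total += data[i]
    return time.perf_counter() - start, total

n = 1_000_000
data = [1.0] * n
sequential = list(range(n))
shuffled = sequential[:]
random.shuffle(shuffled)

t_seq, s_seq = time_sum(sequential, data)
t_rnd, s_rnd = time_sum(shuffled, data)
# Same floating-point work either way; any timing gap comes from
# the memory access pattern, not from arithmetic speed.
print(f"sequential: {t_seq:.3f}s  random: {t_rnd:.3f}s")
```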

For ordinary (non-scientific) applications, integer operations (rather than floating point) are far more common. A FLOPS figure therefore says little about how a processor will perform on such workloads. For scientific jobs, such as data analysis, a FLOPS measurement is more meaningful.
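A FLOPS figure can in principle be estimated by timing a known number of floating-point operations, as in the naive sketch below (the function name is illustrative). Note that in an interpreted language a loop like this mostly measures interpreter overhead, so it vastly understates the hardware's peak rate; real FLOPS benchmarks use optimized numerical kernels.

```python
import time

def estimate_flops(n=10_000_000):
    """Very rough FLOPS estimate: time n multiply-add iterations,
    each counted as two floating-point operations."""
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0  # one multiply and one add
    elapsed = time.perf_counter() - start
    return 2 * n / elapsed

print(f"{estimate_flops():.3e} FLOPS (naive estimate)")
```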