Floating point operations per second

This is an old revision of this page, as edited by Nitecow (talk | contribs) at 23:37, 29 September 2004. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In computing, flops is an acronym for floating point operations per second. It is used as a metric (with the flops as its unit) of a computer's performance, especially in fields of scientific computing that make heavy use of floating-point calculations.
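The idea behind the metric can be illustrated with a crude timing loop. The sketch below is purely illustrative (real measurements use standardized benchmarks such as LINPACK, discussed later), and in an interpreted language the loop overhead dominates, so it badly underestimates the hardware's true capability:

```python
import time

def estimate_flops(n=1_000_000):
    """Roughly estimate floating-point additions per second.

    An illustrative sketch only: interpreter overhead means this
    measures far fewer flops than the processor can deliver.
    """
    x = 0.0
    start = time.perf_counter()
    for _ in range(n):
        x += 1.0          # one floating-point addition per iteration
    elapsed = time.perf_counter() - start
    return n / elapsed    # additions per second

print(f"approx. {estimate_flops():.2e} flops")
```

The quantity reported is simply operations divided by elapsed time, which is all the flops metric expresses.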

One should speak in the singular of a flops and not of a flop, although the latter is frequently encountered. The final s stands for second and does not indicate a plural.

The performance spectrum

Computing devices exhibit an enormous range of performance levels in floating-point applications. Thus it makes sense to introduce larger units than the flops; the standard SI decimal prefixes are used for this purpose. For example, a cheap but modern desktop computer can perform billions of floating point operations per second, so its performance is in the range of a few gigaflops (10^9 flops).
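The SI prefixes mentioned above can be applied mechanically. A small sketch (the prefix table and the `describe` helper are illustrative, not a standard API):

```python
# SI decimal prefixes applied to the flops unit, largest first.
PREFIXES = [
    (1e15, "petaflops"),
    (1e12, "teraflops"),
    (1e9,  "gigaflops"),
    (1e6,  "megaflops"),
    (1e3,  "kiloflops"),
    (1.0,  "flops"),
]

def describe(flops):
    """Express a raw flops figure using the largest fitting SI prefix."""
    for factor, name in PREFIXES:
        if flops >= factor:
            return f"{flops / factor:.2f} {name}"
    # Sub-flops rates (e.g. human calculators) fall into milliflops.
    return f"{flops / 1e-3:.2f} milliflops"

print(describe(3e9))       # a desktop computer -> "3.00 gigaflops"
print(describe(36.01e12))  # -> "36.01 teraflops"
```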

Today's most powerful supercomputers have speeds measured in teraflops (10^12 flops). The fastest computer in the world as of September 29, 2004 is the IBM Blue Gene, measuring 36.01 teraflops, beating the Earth Simulator, which had held the record since March 11, 2002 with a performance of 35.86 teraflops. Once fully implemented, the Blue Gene architecture may eventually reach speeds in excess of one petaflops (10^15 flops). The most successful distributed computing projects are not far behind, with both GIMPS and SETI@home running virtual computers at some 14 teraflops (as of May 2004).

Pocket calculators are at the other end of the performance spectrum. Any response time below 0.1 second is experienced as instantaneous by a human operator, so there is no practical benefit in making a calculator any faster; one may conclude that a pocket calculator performs at about 10 flops.
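The 10 flops figure follows directly from the reasoning above; the variable names below are just for illustration:

```python
# Back-of-the-envelope estimate: a calculator that answers within the
# 0.1 s "instantaneous" threshold needs roughly one operation per 0.1 s.
response_time = 0.1       # seconds, perceived as instantaneous
ops_per_response = 1      # a single floating-point operation per keypress
calculator_flops = ops_per_response / response_time
print(calculator_flops)   # -> 10.0
```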

Of course, humans are even worse floating-point processors. If it takes a person a quarter of an hour to carry out a pencil-and-paper long division with 10 significant digits, that person would be calculating in the milliflops range.
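The same arithmetic, applied to the long-division example (treating the whole division as a single floating-point operation, as the paragraph does):

```python
# One pencil-and-paper long division in a quarter of an hour.
seconds = 15 * 60              # 900 seconds
operations = 1                 # the whole division counted as one operation
human_flops = operations / seconds
print(f"{human_flops * 1000:.1f} milliflops")  # -> "1.1 milliflops"
```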

Flops as a metric

In order for flops to be useful as a metric for floating-point performance, a standard benchmark must be available on all computers of interest. An example is the LINPACK benchmark.

Flops in isolation are arguably not very useful as a benchmark for modern computers. There are many other factors in computer performance other than raw floating-point computation speed, such as interprocessor communication, cache coherence, and the memory hierarchy.

For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more common. A flops rating therefore does not accurately predict how a processor will perform on an arbitrary problem. However, for many scientific jobs, such as data analysis, a flops rating is an effective measure.