FLOPS

Flops redirects here. For the poker term, see flop (poker). For commercial failures, see list of commercial failures.

In computing, FLOPS is an abbreviation of floating-point operations per second. It is used as a measure of a computer's performance, especially in fields of scientific computing that make heavy use of floating-point calculations.

Computing devices exhibit an enormous range of performance levels in floating-point applications, so it makes sense to introduce larger units than the FLOPS. The standard SI prefixes can be used for this purpose, resulting in such units as the megaFLOPS (MFLOPS, 10⁶ FLOPS), the gigaFLOPS (GFLOPS, 10⁹ FLOPS) and the teraFLOPS (TFLOPS, 10¹² FLOPS).
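For illustration, a raw FLOPS figure can be scaled into these units programmatically. The following minimal Python sketch (the function name and threshold table are ours, chosen for illustration, and not part of any standard) picks the largest applicable prefix:

    def format_flops(flops):
        # Illustrative helper (not a standard function): scale a raw FLOPS
        # figure to the largest convenient SI-prefixed unit.
        for threshold, unit in ((1e12, "TFLOPS"), (1e9, "GFLOPS"), (1e6, "MFLOPS")):
            if flops >= threshold:
                return f"{flops / threshold:.2f} {unit}"
        return f"{flops:.2f} FLOPS"

    print(format_flops(70.72e12))  # 70.72 TFLOPS
    print(format_flops(80e6))      # 80.00 MFLOPS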

One should speak in the singular of a FLOPS and not of a FLOP, although the latter is frequently encountered: the final S stands for second and does not indicate a plural.

The performance spectrum

An inexpensive modern desktop computer can perform billions of floating-point operations per second, so its performance is in the range of a few gigaFLOPS.

The original supercomputer, the Cray-1, was installed at Los Alamos National Laboratory in 1976. The Cray-1 was capable of 80 megaFLOPS. In the less than 30 years since then, the computational speed of supercomputers has jumped a millionfold.

The fastest computer in the world as of November 5, 2004 was the IBM Blue Gene supercomputer, measured at 70.72 TFLOPS. This supercomputer was a prototype of the Blue Gene/L machine IBM is building for the Lawrence Livermore National Laboratory in California. During a speed test on March 24, 2005, it was rated at 135.5 TFLOPS. The new record was achieved by doubling the number of racks to 32. Each rack holds 1,024 processors, yet the chips are the same as those found in ordinary high-end consumer computers. The complete version will have a total of 64 racks and a theoretical peak speed of 360 teraFLOPS.

Distributed computing has allowed SETI@home, the largest such project, to compute data at more than 100 TFLOPS.

Pocket calculators sit at the other end of the performance spectrum. Each calculation request to a typical calculator requires only a single operation, so its response time rarely needs to be faster than the operator can perceive. Any response time below 0.1 second is experienced as instantaneous by a human operator, so a simple calculator could be said to operate at about 10 FLOPS (one operation per 0.1 second).

Humans are even slower floating-point processors. If it takes a person a quarter of an hour to carry out a pencil-and-paper long division to 10 significant digits, that person would be calculating at one operation per 900 seconds, or roughly 1.1 milliFLOPS.

FLOPS as a measure of performance

In order for FLOPS to be useful as a measure of floating-point performance, a standard benchmark must be available on all computers of interest. One example is the LINPACK benchmark.
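The idea behind such a benchmark can be sketched in a few lines. The following minimal Python example (assuming NumPy is available; it illustrates the principle and is not the LINPACK benchmark itself) times a dense matrix multiplication, which performs roughly 2n³ floating-point operations for n×n matrices, and divides the operation count by the elapsed time:

    import time
    import numpy as np

    # Rough illustration, not LINPACK: an n x n matrix multiplication performs
    # about 2 * n**3 floating-point operations (n multiplications and n - 1
    # additions per output element).
    n = 2048
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    a @ b
    elapsed = time.perf_counter() - start

    print(f"~{2 * n**3 / elapsed / 1e9:.1f} GFLOPS")

Real benchmarks such as LINPACK instead solve a dense system of linear equations and report a sustained rate over the whole run, which smooths over the caching and timing noise that a single multiplication like this is exposed to.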

FLOPS in isolation are arguably not very useful as a benchmark for modern computers. Computer performance depends on many factors besides raw floating-point computation speed, such as interprocessor communication, cache coherence, and the memory hierarchy.

For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more common. Measuring floating-point operation speed therefore does not accurately predict how a processor will perform on arbitrary workloads. However, for many scientific jobs, such as data analysis, a FLOPS rating is an effective measure.

The contents of this article are licensed from Wikipedia.org under the GNU Free Documentation License.