Supercomputer
A supercomputer is a computer that leads the world in processing capacity, particularly speed of calculation, at the time of its introduction. The term is therefore fluid: today's supercomputer tends to become tomorrow's also-ran, and the label applies to different machines at different times.
Supercomputers tend to be used for highly calculation intensive tasks such as weather forecasting, cryptanalysis, physical simulations (including simulation of the detonation of nuclear weapons and research into nuclear fusion), climate research (including research into global warming), and the like. Military agencies and meteorological agencies are heavy users.
As of 2002, Moore's Law is the dominant factor in supercomputer design: a single modern desktop PC is now more powerful than a 15-year-old supercomputer.
For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of cheaper computers which can be programmed to act as one large computer. Many of these use the Linux operating system.
High-end cluster computers use specialised interconnects derived from previous supercomputer technologies. Low-end cluster computers consist of many inexpensive commodity computers linked by a high-bandwidth local area network.
However, the fastest supercomputers in the world still use tightly-clustered special-purpose computers.
The world's fastest supercomputer, as of early 2002, is the Earth Simulator at the Yokohama Institute for Earth Sciences. The Earth Simulator is a cluster of 640 custom-designed 8-processor vector processor computers based on the NEC SX-6 architecture (a total of 5120 processors). It uses a customised version of the UNIX operating system.
Its performance is over 5 times that of the previous fastest supercomputer, the cluster computer ASCI White at Lawrence Livermore National Laboratory. The United States government's ASCI initiative aims to replace nuclear testing with simulation, in order to preserve its strategic advantage under nuclear test-ban treaties.
History of general-purpose supercomputers
Supercomputers traditionally gained their speed over conventional computers through the use of unconventional designs which allow them to perform many tasks in parallel, as well as complex detailed engineering. They tend to be specialised for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy design and componentry. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, as supercomputers are not used for transaction processing.
Vector processing techniques were first developed for supercomputers, and continue to be used in specialist high-performance applications. Vector processing techniques have 'trickled down' to the mass market in DSP architectures and SIMD processing instructions for general-purpose computers.
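The programming model behind vector processing can be illustrated with a small sketch. This is a conceptual, pure-Python analogy (a hypothetical `Vector` class invented for illustration): a vector processor applies a single instruction to whole arrays of operands at once, whereas a conventional scalar processor would loop over the elements one at a time.

```python
# Conceptual sketch of the vector programming model: one "vector add"
# or "vector multiply" operates on entire arrays, replacing an explicit
# element-by-element loop.  Real vector hardware (and SIMD instruction
# sets) performs the elementwise work in far fewer instructions.
class Vector:
    def __init__(self, data):
        self.data = list(data)

    def __add__(self, other):
        # Elementwise addition over the whole array in one operation.
        return Vector(x + y for x, y in zip(self.data, other.data))

    def __mul__(self, other):
        # Elementwise multiplication over the whole array.
        return Vector(x * y for x, y in zip(self.data, other.data))

a = Vector([1.0, 2.0, 3.0, 4.0])
b = Vector([10.0, 20.0, 30.0, 40.0])
c = a + b   # elementwise sum
d = a * b   # elementwise product
print(c.data, d.data)
```

In software this model survives in array languages and libraries; in hardware it survives in the SIMD extensions of general-purpose processors mentioned above.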
Their operating systems, often variants of UNIX, tend not to be as sophisticated as those for smaller machines, since supercomputers are typically dedicated to one task at a time rather than the multitude of simultaneous jobs that make up the workload of smaller devices.
The unusual architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Special purpose FORTRAN compilers are often able to generate faster code than the C or C++ compilers and so FORTRAN remains the language of choice for scientific programming, and hence for most of the programs run on supercomputers.
Vector processing, described above, is the most prominent example of a technology developed for supercomputers that has since spread to other classes of machine.
Seymour Cray is intimately associated with the history of supercomputers, having designed many of the world's fastest computers throughout the 1960s, 1970s and 1980s either for Control Data Corporation or for Cray Research.
Period | Supercomputer | Speed | Location |
---|---|---|---|
1945-1950 | Manchester Mark I | | University of Manchester, England |
1950-1955 | | | |
1955-1960 | | | |
1960-1965 | | | |
1965-1970 | | | |
1970-1975 | | | |
1975-1980 | Cray-1 | 160 MFLOPS | Los Alamos National Laboratory, New Mexico (1976) |
1980-1985 | | | |
1985-1990 | | | |
1990-1995 | | | |
1995-2000 | | | |
2000-2002 | IBM ASCI White, SP Power3 375 MHz | 7226 GFLOPS | Lawrence Livermore National Laboratory, California |
2002- | Earth Simulator | 35 TFLOPS | Yokohama Institute for Earth Sciences, Japan |
future | | | |
Special-purpose supercomputers
External links:
- See the TOP500 Supercomputer Sites for more information.
- HP announcement of contract to build Linux supercomputer
- ASCI White press release
- Article about Japanese "Earth Simulator" computer
- "Earth Simulator" website (in English)
- NEC high-performance computing information
- Pittsburgh Supercomputing Center, operated by the University of Pittsburgh and Carnegie Mellon University.