Parallel computing

In computer science, parallel computing refers to the simultaneous execution of the same task on multiple processors in order to obtain faster results.

While a system of n parallel processors is no more efficient than a single processor of n times the speed, the parallel system is often much cheaper to build. Parallel computation is therefore an excellent solution for tasks that require very large amounts of computation or have time constraints on completion. Indeed, in recent years most high-performance computing systems, also known as supercomputers, have had a parallel architecture.

Parallel computers are theoretically modeled as parallel random access machines (PRAMs). The PRAM model ignores the cost of interconnection between the constituent computing units, but is nevertheless very useful in providing upper bounds on the parallel solvability of many problems. In reality, the interconnection between units plays a significant role.
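The kind of upper bound the PRAM model provides can be illustrated with summation: with enough processors adding pairs of elements simultaneously, an n-element sum takes about log2(n) time steps rather than n − 1. The sketch below (a sequential simulation, not a real PRAM) counts those rounds; the function name and structure are illustrative, not from any standard library.

```python
def parallel_sum_rounds(values):
    """Simulate PRAM-style pairwise reduction.

    Returns (total, rounds), where each round stands for one parallel
    time step in which every available processor adds one pair.
    """
    data = list(values)
    rounds = 0
    while len(data) > 1:
        # Conceptually, each pair below is summed by a separate
        # processor during the same time step.
        data = [data[i] + data[i + 1] if i + 1 < len(data) else data[i]
                for i in range(0, len(data), 2)]
        rounds += 1
    return data[0], rounds

total, rounds = parallel_sum_rounds(range(8))  # 0 + 1 + ... + 7
print(total, rounds)  # 28 in 3 rounds, versus 7 sequential additions
```

Eight elements are summed in three parallel rounds (8 → 4 → 2 → 1), matching the O(log n) bound the PRAM analysis predicts when interconnection costs are ignored.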

It should not be imagined that successful parallel computing is simply a matter of obtaining the required hardware and connecting it suitably. In practice, linear speedup (i.e., speedup proportional to the number of processors) is very difficult to achieve, because many algorithms are essentially sequential in nature and must be redesigned to make effective use of the parallel hardware. Furthermore, programs which work correctly in a single-CPU system may not do so in a parallel environment, because multiple copies of the same program may interfere with each other, for instance by accessing the same memory location at the same time. Careful programming is therefore required in a parallel system.
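The interference described above can be made concrete with the classic lost-update problem: two threads both read a shared counter before either writes back, so one increment vanishes. This is a minimal sketch using Python's threading module; the barrier is only there to force the bad interleaving deterministically, which a real race would produce unpredictably.

```python
import threading

counter = 0
barrier = threading.Barrier(2)  # forces both threads to read before either writes

def unsafe_increment():
    global counter
    local = counter      # read the shared value
    barrier.wait()       # both threads have now read the same value...
    counter = local + 1  # ...so both write back 1, and one increment is lost

threads = [threading.Thread(target=unsafe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 1, not 2: one update was lost

# The careful-programming fix: make the read-modify-write atomic with a lock.
counter = 0
lock = threading.Lock()

def safe_increment():
    global counter
    with lock:           # only one thread at a time may read and write
        counter += 1

threads = [threading.Thread(target=safe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2: both increments survive
```

The lock serializes access to the shared location, trading a little parallelism for correctness, which is exactly the kind of care the paragraph above calls for.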

This is a stub article: a full article should be written here on this topic.

Topics for a parallel computing article:

Generic:

Problem sets:

Computer science topics:

Approaches:

Practical problems:

  • Parallel computer interconnects
  • Parallel computer I/O
  • Reliability problems in large systems

Programming languages:

Specific:

Companies:

External links: