Parallel computing
In computer science, parallel computing is the simultaneous execution of parts of the same task on multiple processors in order to obtain results faster.
While a system of n parallel processors is not more efficient than one processor of n times the speed, the parallel system is often far cheaper to build. Parallel computation is therefore an excellent solution for tasks which require very large amounts of computation, have time constraints on completion, or both. In fact, in recent years most high-performance computing systems, also known as supercomputers, have a parallel architecture.
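This tradeoff can be stated precisely with the standard textbook definitions of speedup and efficiency (added here for illustration; they do not appear in the original article). If T(p) is the running time of a task on p processors, then

    speedup:    S(p) = T(1) / T(p)
    efficiency: E(p) = S(p) / p

A single processor of n times the speed achieves S = n and hence E = 1, whereas a system of n parallel processors achieves E at most 1 in the idealized model, which is the sense in which it is "not more efficient".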
Parallel computers are modeled theoretically as PRAMs (parallel random-access machines). The PRAM model ignores the cost of interconnection between the constituent computing units, but is nevertheless very useful in providing upper bounds on the parallel solvability of many problems. In real machines, the interconnection plays a significant role.
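As an illustration of the kind of algorithm the PRAM model captures, the sketch below sums n numbers in O(log n) parallel steps by pairwise tree reduction; all additions within a round are independent and could run on separate processors. The Go program is only a simulation of the idea (goroutines stand in for PRAM processors, and the function name parallelSum is invented for this example):

    package main

    import (
        "fmt"
        "sync"
    )

    // parallelSum adds the elements of xs in O(log n) rounds.
    // In each round, position i absorbs the partial sum at
    // position i+stride, mimicking a PRAM tree reduction.
    func parallelSum(xs []int) int {
        data := make([]int, len(xs))
        copy(data, xs)
        for stride := 1; stride < len(data); stride *= 2 {
            var wg sync.WaitGroup
            // All additions within one round touch disjoint
            // positions, so they can run concurrently.
            for i := 0; i+stride < len(data); i += 2 * stride {
                wg.Add(1)
                go func(i int) {
                    defer wg.Done()
                    data[i] += data[i+stride]
                }(i)
            }
            wg.Wait() // PRAM steps are lockstep, so synchronize each round
        }
        return data[0]
    }

    func main() {
        xs := []int{1, 2, 3, 4, 5, 6, 7, 8}
        fmt.Println(parallelSum(xs)) // prints 36
    }

With n values and n/2 processors, the loop performs ceil(log2 n) rounds, which is the source of the logarithmic upper bounds the PRAM model provides.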
Successful parallel computing is not simply a matter of obtaining the required hardware and connecting it suitably. In practice, linear speedup (i.e., speedup proportional to the number of processors) is very difficult to achieve, because many algorithms are essentially sequential in nature. Amdahl's law makes this precise: if a fraction s of a program's work is inherently sequential, then the speedup on p processors is at most 1/(s + (1 - s)/p), which approaches 1/s no matter how many processors are added. Such algorithms must be redesigned in order to make effective use of the parallel hardware. Further, programs which work correctly on a single-CPU system may not do so in a parallel environment, because multiple copies of the same program may interfere with each other, for instance by accessing the same memory location at the same time (a race condition; see the sketch below). Careful programming is therefore required in a parallel system.
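A minimal sketch of such interference, written as a shared-memory Go program (hypothetical example code, not part of the original article): four concurrent workers increment one counter without synchronization and a second counter atomically. The unsynchronized counter typically loses updates, because its read-modify-write increments from different workers can overlap.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    func main() {
        var unsafeCounter int64 // shared memory, updated without synchronization
        var safeCounter int64   // shared memory, updated atomically

        var wg sync.WaitGroup
        for w := 0; w < 4; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for i := 0; i < 100000; i++ {
                    unsafeCounter++                  // racy increment: updates can be lost
                    atomic.AddInt64(&safeCounter, 1) // atomic increment: always correct
                }
            }()
        }
        wg.Wait()

        // unsafeCounter is typically less than 400000; safeCounter is exactly 400000.
        fmt.Println("unsynchronized:", unsafeCounter)
        fmt.Println("atomic:        ", safeCounter)
    }

This is the "careful programming" the paragraph refers to: shared state must be protected with atomic operations, locks, or message passing.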
This is a stub article: a full article should be written here on this topic.
Topics for a parallel computing article:
Generic:
- Multiprocessing
- Parallel programming
- Finding parallelism in problems and algorithms
- Optimising compilers
- Amdahl's law
Problem sets:
Computer science topics:
- Lazy evaluation vs strict evaluation
- Complexity class NC
- Communicating sequential processes
- Dataflow architecture
- Parallel graph reduction
Approaches:
- Computer cluster
- Parallel supercomputers
- Distributed computing
- NUMA vs. SMP vs. massively parallel computer systems
- Grid computing
Practical problems:
- Parallel computer interconnects
- Parallel computer I/O
- Reliability problems in large systems
Programming languages:
Specific:
- PVM vs. MPI libraries
- ILLIAC IV
- Transputer
- Atari Transputer Workstation
- Beowulf cluster
- Deep Blue
- Meiko Computing Surface
- NCUBE
- Blue Gene
Companies:
External links: