Parallel programming
Parallel programming is a computer programming technique that provides for the execution of operations in parallel, either within a single computer or across a number of systems. In the latter case, the term distributed computing is used.
Parallel programming is now often considered a special case of concurrent programming, because parallelism by itself does not require the use of shared resources that can change; see, for example, the paper by Nick Benton, Luca Cardelli, and Cédric Fournet.
In parallel programming, a single task is split into a number of subtasks that can be computed relatively independently and then aggregated to form a single coherent solution. Parallel programming is most effective for problems that can easily be broken down into independent subtasks, such as purely mathematical problems, e.g. factorisation. Multiprocessor machines can often achieve better performance by taking advantage of this kind of programming.
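As a rough illustration of this split-and-aggregate idea (not part of the original text), the following C++ sketch sums a large array by giving each thread its own independent slice and then combining the partial results; the function name parallel_sum and the choice of four threads are illustrative assumptions only.

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Hypothetical helper: split the input into independent slices, sum each
// slice in its own thread, then aggregate the partial sums.
long long parallel_sum(const std::vector<int>& data, unsigned num_threads) {
    std::vector<long long> partial(num_threads, 0);
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / num_threads;

    for (unsigned t = 0; t < num_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == num_threads) ? data.size() : begin + chunk;
        // Each subtask reads its own slice and writes its own output slot,
        // so no shared state is modified concurrently.
        workers.emplace_back([&, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();

    // Aggregate the independent partial results into one coherent answer.
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}

int main() {
    std::vector<int> data(1000000, 1);
    std::cout << parallel_sum(data, 4) << "\n";  // prints 1000000
}

Because the subtasks share nothing while they run, this kind of problem scales well on multiprocessor machines.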
One way to achieve parallelism is through distributed computing, a method of information processing in which work is performed by separate computers linked through a communications network.
Parallel programming often relies on specialized algorithms that allow a problem to be split up into pieces. However, not all algorithms can be adapted to run in a distributed environment, and those that can often exhibit performance issues different from those seen on single-processor systems.
Major issues stem from trying to prevent concurrent processes from interfering with each other. Consider the following algorithm for a checking account:
1: bool withdraw(int withdrawal) {
2:     if (balance > withdrawal) {
3:         balance -= withdrawal;
4:         return true;
5:     } else return false;
6: }
Suppose balance = 500, and two processes concurrently call withdraw(300) and withdraw(350). If line 2 in both operations executes before line 3 in either, then in both cases balance > withdrawal holds, and more will end up being withdrawn than the balance allows. These sorts of issues require the use of concurrency control, or a non-blocking algorithm.
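One common form of concurrency control is mutual exclusion: the check and the update are performed while holding a lock, so no other process can observe or change the balance between the two steps. The sketch below is an illustrative C++ version assuming a std::mutex; the original text does not prescribe a particular mechanism.

#include <mutex>

int balance = 500;
std::mutex balance_mutex;  // assumed lock guarding balance

bool withdraw(int withdrawal) {
    // Holding the lock makes the check (line 2 above) and the update
    // (line 3) a single atomic step with respect to other callers.
    std::lock_guard<std::mutex> lock(balance_mutex);
    if (balance > withdrawal) {
        balance -= withdrawal;
        return true;
    }
    return false;
}

With this change, one of the two concurrent calls in the example above completes first and the other then sees the reduced balance and fails, so the account can no longer be overdrawn.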
Pioneers in the field of concurrent programming include Edsger Dijkstra and C. A. R. Hoare.
See also
- Parallel computing
- Concurrent programming language
- Critical sections
- Mutual exclusion
- Synchronization
- Computer multitasking
- Multithreading
- Concurrency control
- Coroutines
- Parallel processor
- Completion ports
- Parawiki
References
- Nick Benton, Luca Cardelli, and Cédric Fournet. "Modern Concurrency Abstractions for C#." ECOOP 2002.
External links
- Article "Multiprocessor Optimizations: Fine-Tuning Concurrent Access to Large Data Collections" by Ian Emmons
- Article "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software" by Herb Sutter
- Citations from CiteSeer