Parallel programming: Difference between revisions
'''Parallel programming''' is a [[computer]] [[programming]] technique that provides for the execution of operations in parallel, either within a single computer, or across a number of systems. The latter case is a form of [[distributed computing]], which is a method of information processing in which work is performed by separate computers linked through a communications network. |
Some people consider parallel programming to be synonymous with [[concurrent computing]]. Others draw a distinction between ''concurrency'' in general, which encompasses both parallel execution and interaction between parallel entities while they are executing, and ''parallel programming'', which involves parallelism ''without'' interaction between executing entities. The essence of parallel programming in the latter sense is to split a single task into a number of subtasks that can be computed relatively independently and then aggregated to form a single coherent solution. Because the subtasks are independent, there is no need for [[Mutual exclusion|exclusive access to shared resources]] or for [[Synchronization|synchronization]] between subtasks, which allows [[Multiprocessor|multiprocessor]] machines to outperform their single-processor counterparts by executing independent subtasks on separate processors. Such techniques are most effective for problems that can be easily broken down into independent tasks, such as purely mathematical problems (e.g. factorization). However, not all algorithms can be adapted to run in a parallel setting, which can create performance issues on multiprocessor systems different from those encountered on single-processor systems.
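As a concrete sketch of this style (a minimal illustration only, assuming C with the POSIX threads library; the array size, thread count, and identifiers such as <code>sum_chunk</code> are invented for the example), the summation of an array can be split into disjoint ranges, each computed on its own thread and then aggregated into a single total. Because every worker writes only to its own result field, the subtasks require no locking or synchronization with one another:

 #include <pthread.h>
 #include <stdio.h>
 
 #define N        1000000   /* problem size (arbitrary for the example) */
 #define NTHREADS 4         /* number of independent subtasks           */
 
 static int data[N];
 
 struct chunk {
     int start, end;        /* half-open range [start, end) of this subtask */
     long long partial;     /* partial sum written only by this worker      */
 };
 
 /* Worker: sums its own range; reads shared data, writes only its own slot. */
 static void *sum_chunk(void *arg)
 {
     struct chunk *c = arg;
     long long s = 0;
     for (int i = c->start; i < c->end; i++)
         s += data[i];
     c->partial = s;
     return NULL;
 }
 
 int main(void)
 {
     for (int i = 0; i < N; i++)
         data[i] = 1;                      /* sample data */
 
     pthread_t tid[NTHREADS];
     struct chunk chunks[NTHREADS];
 
     /* Split the task into independent subtasks. */
     for (int t = 0; t < NTHREADS; t++) {
         chunks[t].start = t * (N / NTHREADS);
         chunks[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * (N / NTHREADS);
         pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
     }
 
     /* Aggregate the partial results into a single coherent solution. */
     long long total = 0;
     for (int t = 0; t < NTHREADS; t++) {
         pthread_join(tid[t], NULL);
         total += chunks[t].partial;
     }
     printf("total = %lld\n", total);
     return 0;
 }

Compiled with a threading-aware C compiler (e.g. with the -pthread option), the program prints the same total as a sequential loop, but the four range sums can proceed on separate processors.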
Certain kinds of parallel programming can be accomplished in the [[lambda calculus]], ''e.g.'', the parallel evaluation of arguments. However, the lambda calculus cannot in general express the concurrency of the [[Actor model]] and [[process calculi]]: for example, it cannot express a shared financial account in which deposits are performed concurrently. The [[Actor model]] and [[process calculi]], on the other hand, can express the parallelism present in the lambda calculus.
A major source of difficulty is preventing concurrent processes from interfering with each other. Consider the following algorithm for a checking account:
# bool withdraw(int withdrawal) {
#     if (balance > withdrawal) {
#         balance -= withdrawal;
#         return true;
#     } else return false;
# }
Suppose that balance is 500 and that two processes concurrently call withdraw(300) and withdraw(350). If line 2 of both calls executes before line 3 of either, both calls find that balance > withdrawal, both withdrawals proceed, and more ends up being withdrawn than the account held. Problems of this kind require [[Concurrency control]] or [[non-blocking algorithm]]s.
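One common remedy, shown below as a minimal sketch (assuming C with POSIX threads; the <code>account</code> structure and its field names are invented for the example and are not taken from any particular library), is to guard the balance with a mutex so that the check on line 2 and the update on line 3 execute as one atomic step:

 #include <pthread.h>
 #include <stdbool.h>
 
 /* Illustrative account type: the balance together with the lock that protects it. */
 struct account {
     int balance;
     pthread_mutex_t lock;   /* e.g. initialised with PTHREAD_MUTEX_INITIALIZER */
 };
 
 bool withdraw(struct account *a, int withdrawal)
 {
     bool ok = false;
     pthread_mutex_lock(&a->lock);     /* only one caller may check and update at a time */
     if (a->balance > withdrawal) {
         a->balance -= withdrawal;
         ok = true;
     }
     pthread_mutex_unlock(&a->lock);
     return ok;
 }

With the lock held across the check and the update, at most one of the two competing withdrawals in the scenario above can see a sufficient balance, so the account can no longer be overdrawn. Locking restores correctness at the cost of serializing the critical section, which is one reason [[non-blocking algorithm]]s are sometimes preferred.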
Pioneers in the field of concurrent programming include [[Edsger Dijkstra]] and [[C. A. R. Hoare]].
==See also==
* [[Parallel computing]]
* [[Parallel programming model]]
* [[Concurrent programming language]]
* [[Critical sections]]
* [[Mutual exclusion]]
* [[Synchronization]]
* [[Computer multitasking]]
* [[Multithreading]]
* [[Concurrency control]]
* [[Coroutines]]
* [[Parallel processor]]
* [[Completion ports]]
* [[Parawiki]]