Parallel programming
Revision as of 20:25, 13 January 2006
Parallel programming is a computer programming technique that provides for the execution of operations in parallel, either within a single computer, or across a number of systems. The latter case is a form of distributed computing, which is a method of information processing in which work is performed by separate computers linked through a communications network.
Some people consider parallel programming to be synonymous with concurrent programming. Others draw a distinction: concurrent programming in general encompasses both parallel execution and interaction between parallel entities while they are executing, whereas parallel programming involves parallelism without interaction between executing entities.

The essence of parallel programming in the latter sense is to split a single task into a number of subtasks that can be computed relatively independently and then aggregated to form a single coherent solution. Because the subtasks are independent, there is no need to manage exclusive access to shared resources or to synchronize between subtasks, which allows multiprocessor machines to achieve better performance than their single-processor counterparts by executing independent tasks on separate processors. Such techniques are most effective for problems that can easily be broken down into independent tasks, such as purely mathematical problems (e.g. factorization). However, not all algorithms can be restructured to run in parallel, and multiprocessor systems can therefore exhibit performance issues different from those encountered in single-processor systems.
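The split-compute-aggregate pattern described above can be sketched in Python (names and chunk sizes here are illustrative, not from the article): each subtask sums squares over its own subrange, no state is shared between workers, and the partial results are combined only at the end.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Compute the sum of squares over one independent subrange."""
    lo, hi = bounds
    return sum(n * n for n in range(lo, hi))

if __name__ == "__main__":
    # Split the single task into four independent subtasks (no shared state,
    # so no locking or synchronization between workers is needed).
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)
    # Aggregate the independent partial results into a single solution.
    total = sum(partials)
    print(total == sum(n * n for n in range(1_000_000)))  # prints True
```

Because the subranges do not overlap and jointly cover the whole range, the aggregated result equals the sequential computation; the only coordination point is the final summation.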
See also
- Parallel computing
- Parallel programming model
- Concurrent programming language
- Critical sections
- Mutual exclusion
- Synchronization
- Computer multitasking
- Multithreading
- Concurrency control
- Coroutines
- Parallel processor
- Completion ports
- Parawiki