https://en.wikipedia.org/w/index.php?action=history&feed=atom&title=Parallel_algorithm
Parallel algorithm - Revision history
2025-05-25T12:46:43Z
Revision history for this page on the wiki
MediaWiki 1.45.0-wmf.2
https://en.wikipedia.org/w/index.php?title=Parallel_algorithm&diff=1269968749&oldid=prev
OlliverWithDoubleL: Added short description
2025-01-17T08:29:40Z
← Previous revision | Revision as of 08:29, 17 January 2025
Line 1:
+ {{Short description|Algorithm which can do multiple operations in a given time}}
  {{More citations needed|date=November 2012}}
  In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] which can do multiple operations in a given time. It has been a tradition of [[computer science]] to describe serial algorithms in [[abstract machine]] models, often the one known as [[random-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel abstract machine (shared-memory).<ref>{{cite web |title=Parallel Algorithms |first1=Guy E. |last1=Blelloch |first2=Bruce M. |last2=Maggs |publisher=School of Computer Science, [[Carnegie Mellon University]] |location=USA |access-date=2015-07-27 |url=https://www.cs.cmu.edu/~guyb/papers/BM04.pdf}}</ref><ref>{{cite web |title=Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages |first=Uzi |last=Vishkin |publisher=Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion |date=2009 |url=http://users.umiacs.umd.edu/~vishkin/PUBLICATIONS/classnotes.pdf}}</ref>
OlliverWithDoubleL
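The intro paragraph carried in this diff defines a parallel algorithm as one that can do multiple operations in a given time, as opposed to a serial algorithm. As a hedged illustration only (not part of the article or this revision), here is a minimal Python sketch contrasting the two; the chunking scheme and worker count are arbitrary assumptions:

# Minimal sketch: a serial sum versus a data-parallel sum of the same list.
# Assumes a workload large enough to amortize process start-up costs.
from concurrent.futures import ProcessPoolExecutor

def serial_sum(values):
    # One operation after another, in order: the serial-algorithm model.
    total = 0
    for v in values:
        total += v
    return total

def parallel_sum(values, workers=4):
    # Split the input into chunks, sum the chunks simultaneously in
    # separate processes, then combine the partial results.
    chunk = max(1, (len(values) + workers - 1) // workers)
    parts = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(serial_sum, parts))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert serial_sum(data) == parallel_sum(data)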
https://en.wikipedia.org/w/index.php?title=Parallel_algorithm&diff=1236615836&oldid=prev
Worldec478: added information about developing an effective parallel algorithm for solution in "parallelizability" section and citation to book
2024-07-25T17:16:10Z
← Previous revision | Revision as of 17:16, 25 July 2024
Line 11:

  Some problems cannot be split up into parallel portions, as they require the results from a preceding step to effectively carry on with the next step – these are called '''{{visible anchor|inherently serial problem}}s'''. Examples include iterative [[numerical analysis|numerical methods]], such as [[Newton's method]], iterative solutions to the [[three-body problem]], and most of the available algorithms to compute [[pi]] (π).{{citation needed|date=August 2019}} Some sequential algorithms can be converted into parallel algorithms using [[automatic parallelization]].<ref name="MXian1997">{{cite book |author1=Megson G M|author2=Chen Xian|title=Automatic Parallelization For A Class Of Regular Computations|url=https://books.google.com/books?id=1wHtCgAAQBAJ&q=%22parallel+algorithm%22|date=4 January 1997|publisher=World Scientific|isbn=978-981-4498-41-8}}</ref>
+
+ In many cases developing an effective parallel algorithm for solution of some task requires attraction of new ideas and methods comparing to creating a sequential algorithm version. These are, for instance, practically important problems of searching a target element in data structures, evaluation of an algebraic expression, etc.<ref>{{Cite book |last=Kurgalin |first=Sergei |title=The discrete math workbook: a companion manual using Python |last2=Borzunov |first2=Sergei |date=2020 |publisher=Springer Naturel |isbn=978-3-030-42220-2 |edition=2nd |series=Texts in Computer Science |location=Cham, Switzerland}}</ref>

  ==Motivation==
Worldec478
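The sentence added in this revision points to searching for a target element in a data structure as a task where an effective parallel algorithm needs ideas beyond the sequential version. As a hedged sketch only (not drawn from the cited Kurgalin–Borzunov book), here is a Python example that scans disjoint chunks in parallel; the function names, chunking and worker count are assumptions for illustration:

# Minimal sketch: searching an unsorted list for a target value
# by scanning disjoint chunks in separate processes.
from concurrent.futures import ProcessPoolExecutor

def scan_chunk(args):
    # Sequential scan of one chunk; returns the global index or -1.
    values, target, offset = args
    for i, v in enumerate(values):
        if v == target:
            return offset + i
    return -1

def parallel_find(values, target, workers=4):
    # Each worker scans its own slice; the smallest matching index wins.
    chunk = max(1, (len(values) + workers - 1) // workers)
    tasks = [(values[i:i + chunk], target, i)
             for i in range(0, len(values), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        hits = [r for r in pool.map(scan_chunk, tasks) if r >= 0]
    return min(hits) if hits else -1

if __name__ == "__main__":
    data = list(range(100_000))
    print(parallel_find(data, 73_512))  # 73512
    print(parallel_find(data, -1))      # -1

Unlike the sequential scan, which simply returns at the first match, the parallel version has to decide how partial results are combined (here, by taking the smallest matching index) and how idle workers could be told to stop early; that coordination is the kind of extra design work the added sentence alludes to.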
https://en.wikipedia.org/w/index.php?title=Parallel_algorithm&diff=1217074377&oldid=prev
JayBeeEll: Reverted 1 edit by 2A03:D000:1400:93EE:7D68:23BD:8EA6:9648 (talk): Rv refspam
2024-04-03T17:55:52Z
← Previous revision | Revision as of 17:55, 3 April 2024
Line 11:

  Some problems cannot be split up into parallel portions, as they require the results from a preceding step to effectively carry on with the next step – these are called '''{{visible anchor|inherently serial problem}}s'''. Examples include iterative [[numerical analysis|numerical methods]], such as [[Newton's method]], iterative solutions to the [[three-body problem]], and most of the available algorithms to compute [[pi]] (π).{{citation needed|date=August 2019}} Some sequential algorithms can be converted into parallel algorithms using [[automatic parallelization]].<ref name="MXian1997">{{cite book |author1=Megson G M|author2=Chen Xian|title=Automatic Parallelization For A Class Of Regular Computations|url=https://books.google.com/books?id=1wHtCgAAQBAJ&q=%22parallel+algorithm%22|date=4 January 1997|publisher=World Scientific|isbn=978-981-4498-41-8}}</ref>
−
− In many cases developing an effective parallel algorithm for solution of some task requires attraction of new ideas and methods comparing to creating a sequential algorithm version. These are, for instance, practically important problems of searching a target element in data structures,
− evaluation of an algebraic expression, etc. <ref>Kurgalin, Sergei; Borzunov, Sergei (2020). "The Discrete Math Workbook: A Companion Manual Using Python". 2nd ed. Springer. ISBN 978-3-030-42220-2, 978-3-030-42221-9. https://doi.org/10.1007/978-3-030-42221-9 </ref>

  ==Motivation==
JayBeeEll
https://en.wikipedia.org/w/index.php?title=Parallel_algorithm&diff=1217038413&oldid=prev
2A03:D000:1400:93EE:7D68:23BD:8EA6:9648: /* Parallelizability */
2024-04-03T13:29:32Z
← Previous revision | Revision as of 13:29, 3 April 2024
Line 11:

  Some problems cannot be split up into parallel portions, as they require the results from a preceding step to effectively carry on with the next step – these are called '''{{visible anchor|inherently serial problem}}s'''. Examples include iterative [[numerical analysis|numerical methods]], such as [[Newton's method]], iterative solutions to the [[three-body problem]], and most of the available algorithms to compute [[pi]] (π).{{citation needed|date=August 2019}} Some sequential algorithms can be converted into parallel algorithms using [[automatic parallelization]].<ref name="MXian1997">{{cite book |author1=Megson G M|author2=Chen Xian|title=Automatic Parallelization For A Class Of Regular Computations|url=https://books.google.com/books?id=1wHtCgAAQBAJ&q=%22parallel+algorithm%22|date=4 January 1997|publisher=World Scientific|isbn=978-981-4498-41-8}}</ref>
+
+ In many cases developing an effective parallel algorithm for solution of some task requires attraction of new ideas and methods comparing to creating a sequential algorithm version. These are, for instance, practically important problems of searching a target element in data structures,
+ evaluation of an algebraic expression, etc. <ref>Kurgalin, Sergei; Borzunov, Sergei (2020). "The Discrete Math Workbook: A Companion Manual Using Python". 2nd ed. Springer. ISBN 978-3-030-42220-2, 978-3-030-42221-9. https://doi.org/10.1007/978-3-030-42221-9 </ref>

  ==Motivation==
2A03:D000:1400:93EE:7D68:23BD:8EA6:9648
https://en.wikipedia.org/w/index.php?title=Parallel_algorithm&diff=1181418540&oldid=prev
Tom.Reding: +{{Authority control}} (1 ID from Wikidata); WP:GenFixes & cleanup on
2023-10-22T22:41:06Z
← Previous revision | Revision as of 22:41, 22 October 2023
Line 49:

  {{Parallel computing}}
+ {{Authority control}}

  [[Category:Parallel computing]]
Tom.Reding
https://en.wikipedia.org/w/index.php?title=Parallel_algorithm&diff=1035829104&oldid=prev
A876: +link. +comma. fixed 2 ref errors. checked links.
2021-07-27T22:36:16Z
← Previous revision | Revision as of 22:36, 27 July 2021
Line 1:
− {{refimprove|date=November 2012}}
+ {{More citations needed|date=November 2012}}
− In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] which can do multiple operations in a given time. It has been a tradition of [[computer science]] to describe serial algorithms in abstract machine models, often the one known as [[random-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel abstract machine (shared-memory).<ref>{{cite document| title=Parallel Algorithms | first1=Guy E. | last1=Blelloch | first2=Bruce M. | last2=Maggs | publisher=School of Computer Science, [[Carnegie Mellon University]] | location=USA | access-date=2015-07-27 | url= https://www.cs.cmu.edu/~guyb/papers/BM04.pdf }}</ref><ref>{{cite document| title = Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages | first = Uzi | last = Vishkin | publisher = Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion | date = 2009 | url=http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf }}</ref>
+ In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] which can do multiple operations in a given time. It has been a tradition of [[computer science]] to describe serial algorithms in [[abstract machine]] models, often the one known as [[random-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel abstract machine (shared-memory).<ref>{{cite web |title=Parallel Algorithms |first1=Guy E. |last1=Blelloch |first2=Bruce M. |last2=Maggs |publisher=School of Computer Science, [[Carnegie Mellon University]] |location=USA |access-date=2015-07-27 |url=https://www.cs.cmu.edu/~guyb/papers/BM04.pdf}}</ref><ref>{{cite web |title=Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages |first=Uzi |last=Vishkin |publisher=Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion |date=2009 |url=http://users.umiacs.umd.edu/~vishkin/PUBLICATIONS/classnotes.pdf}}</ref>

  Many parallel algorithms are executed [[concurrent computing|concurrently]] – though in general [[concurrent algorithm]]s are a distinct concept – and thus these concepts are often conflated, with which aspect of an algorithm is parallel and which is concurrent not being clearly distinguished. Further, non-parallel, non-concurrent algorithms are often referred to as "[[sequential algorithm]]s", by contrast with concurrent algorithms.
Line 8:
  Algorithms vary significantly in how parallelizable they are, ranging from easily parallelizable to completely unparallelizable. Further, a given problem may accommodate different algorithms, which may be more or less parallelizable.

− Some problems are easy to divide up into pieces in this way – these are called ''[[embarrassingly parallel problem]]s.'' Examples include many algorithms to solve [[Rubik's Cube]]s and find values which result in a given [[Associative array|hash]].{{Citation needed|date=December 2019}}
+ Some problems are easy to divide up into pieces in this way – these are called ''[[embarrassingly parallel]] problems.'' Examples include many algorithms to solve [[Rubik's Cube]]s and find values which result in a given [[associative array|hash]].{{citation needed|date=December 2019}}

− Some problems cannot be split up into parallel portions, as they require the results from a preceding step to effectively carry on with the next step – these are called '''{{visible anchor|inherently serial problem}}s'''. Examples include iterative [[Numerical analysis|numerical methods]], such as [[Newton's method]], iterative solutions to the [[three-body problem]], and most of the available algorithms to compute [[pi]] (π).{{citation needed|date=August 2019}} Some sequential algorithms can be converted into parallel algorithms using [[automatic parallelization]].<ref name="MXian1997">{{cite book|author1=Megson G M|author2=Chen Xian|title=Automatic Parallelization For A Class Of Regular Computations|url=https://books.google.com/books?id=1wHtCgAAQBAJ&q=%22parallel+algorithm%22|date=4 January 1997|publisher=World Scientific|isbn=978-981-4498-41-8}}</ref>
+ Some problems cannot be split up into parallel portions, as they require the results from a preceding step to effectively carry on with the next step – these are called '''{{visible anchor|inherently serial problem}}s'''. Examples include iterative [[numerical analysis|numerical methods]], such as [[Newton's method]], iterative solutions to the [[three-body problem]], and most of the available algorithms to compute [[pi]] (π).{{citation needed|date=August 2019}} Some sequential algorithms can be converted into parallel algorithms using [[automatic parallelization]].<ref name="MXian1997">{{cite book |author1=Megson G M|author2=Chen Xian|title=Automatic Parallelization For A Class Of Regular Computations|url=https://books.google.com/books?id=1wHtCgAAQBAJ&q=%22parallel+algorithm%22|date=4 January 1997|publisher=World Scientific|isbn=978-981-4498-41-8}}</ref>

  ==Motivation==
Line 20:
  The cost or complexity of serial algorithms is estimated in terms of the space (memory) and time (processor cycles) that they take. Parallel algorithms need to optimize one more resource, the communication between different processors. There are two ways parallel processors communicate, shared memory or message passing.

− [[Shared memory (interprocess communication)|Shared memory]] processing needs additional [[Lock (computer science)|locking]] for the data, imposes the overhead of additional processor and bus cycles, and also serializes some portion of the algorithm.
+ [[Shared memory]] processing needs additional [[lock (computer science)|locking]] for the data, imposes the overhead of additional processor and bus cycles, and also serializes some portion of the algorithm.

− [[Message passing]] processing uses channels and message boxes but this communication adds transfer overhead on the bus, additional memory need for queues and message boxes and latency in the messages. Designs of parallel processors use special [[Bus (computing)|buses]] like [[Crossbar switch|crossbar]] so that the communication overhead will be small but it is the parallel algorithm that decides the volume of the traffic.
+ [[Message passing]] processing uses channels and message boxes but this communication adds transfer overhead on the bus, additional memory need for queues and message boxes and latency in the messages. Designs of parallel processors use special [[bus (computing)|buses]] like [[crossbar switch|crossbar]] so that the communication overhead will be small but it is the parallel algorithm that decides the volume of the traffic.

  If the communication overhead of additional processors outweighs the benefit of adding another processor, one encounters [[parallel slowdown]].
Line 28:
  ===Load balancing===
  {{main|Load balancing (computing)}}
− Another problem with parallel algorithms is ensuring that they are suitably [[Load balancing (computing)|load balanced]], by ensuring that ''load'' (overall work) is balanced, rather than input size being balanced. For example, checking all numbers from one to a hundred thousand for primality is easy to split amongst processors; however, if the numbers are simply divided out evenly (1–1,000, 1,001–2,000, etc.), the amount of work will be unbalanced, as smaller numbers are easier to process by this algorithm (easier to test for primality), and thus some processors will get more work to do than the others, which will sit idle until the loaded processors complete.
+ Another problem with parallel algorithms is ensuring that they are suitably [[load balancing (computing)|load balanced]], by ensuring that ''load'' (overall work) is balanced, rather than input size being balanced. For example, checking all numbers from one to a hundred thousand for primality is easy to split among processors; however, if the numbers are simply divided out evenly (1–1,000, 1,001–2,000, etc.), the amount of work will be unbalanced, as smaller numbers are easier to process by this algorithm (easier to test for primality), and thus some processors will get more work to do than the others, which will sit idle until the loaded processors complete.

  ==Distributed algorithms==
  {{main|Distributed algorithm}}
  {{expand section|date=February 2014}}
− A subtype of parallel algorithms, ''[[distributed algorithm]]s'' are algorithms designed to work in [[cluster computing]] and [[distributed computing]] environments, where additional concerns beyond the scope of "classical" parallel algorithms need to be addressed.
+ A subtype of parallel algorithms, ''[[distributed algorithm]]s'', are algorithms designed to work in [[cluster computing]] and [[distributed computing]] environments, where additional concerns beyond the scope of "classical" parallel algorithms need to be addressed.

  ==See also==
Line 43:

  ==References==
− {{reflist}}
+ {{Reflist}}

  ==External links==
− *[http://www.mcs.anl.gov/~itf/dbpp/ Designing and Building Parallel Programs page at the US Argonne National Laboratories]
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>*[<ins style="font-weight: bold; text-decoration: none;">https</ins>://www.mcs.anl.gov/~itf/dbpp/ Designing and Building Parallel Programs<ins style="font-weight: bold; text-decoration: none;">],</ins> US Argonne National <ins style="font-weight: bold; text-decoration: none;">Laboratory</ins></div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>{{Parallel <del style="font-weight: bold; text-decoration: none;">Computing</del>}}</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>{{Parallel <ins style="font-weight: bold; text-decoration: none;">computing</ins>}}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>[[Category:Parallel computing]]</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>[[Category:Parallel computing]]</div></td>
</tr>
</table>
OlliverWithDoubleL
https://en.wikipedia.org/w/index.php?title=Parallel_algorithm&diff=1019236150&oldid=prev
Haeinous at 06:44, 22 April 2021
2021-04-22T06:44:27Z
<p></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 06:44, 22 April 2021</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{refimprove|date=November 2012}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{refimprove|date=November 2012}}</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] which can do multiple operations in a given time. It has been a tradition of [[computer science]] to describe serial algorithms in abstract machine models, often the one known as [[<del style="font-weight: bold; text-decoration: none;">Random</del>-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel abstract machine (shared-memory).<ref>{{cite document| title=Parallel Algorithms | first1=Guy E. | last1=Blelloch | first2=Bruce M. | last2=Maggs | publisher=School of Computer Science, [[Carnegie Mellon University]] | location=USA | access-date=2015-07-27 | url= https://www.cs.cmu.edu/~guyb/papers/BM04.pdf }}</ref><ref>{{cite document| title = Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages | first = Uzi | last = Vishkin | publisher = Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion | date = 2009 | url=http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf }}</ref></div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] which can do multiple operations in a given time. It has been a tradition of [[computer science]] to describe serial algorithms in abstract machine models, often the one known as [[<ins style="font-weight: bold; text-decoration: none;">random</ins>-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel abstract machine (shared-memory).<ref>{{cite document| title=Parallel Algorithms | first1=Guy E. | last1=Blelloch | first2=Bruce M. | last2=Maggs | publisher=School of Computer Science, [[Carnegie Mellon University]] | location=USA | access-date=2015-07-27 | url= https://www.cs.cmu.edu/~guyb/papers/BM04.pdf }}</ref><ref>{{cite document| title = Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages | first = Uzi | last = Vishkin | publisher = Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion | date = 2009 | url=http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf }}</ref></div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Many parallel algorithms are executed [[concurrent computing|concurrently]] – though in general [[concurrent algorithm]]s are a distinct concept – and thus these concepts are often conflated, with which aspect of an algorithm is parallel and which is concurrent not being clearly distinguished. Further, non-parallel, non-concurrent algorithms are often referred to as "[[sequential algorithm]]s", by contrast with concurrent algorithms.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Many parallel algorithms are executed [[concurrent computing|concurrently]] – though in general [[concurrent algorithm]]s are a distinct concept – and thus these concepts are often conflated, with which aspect of an algorithm is parallel and which is concurrent not being clearly distinguished. Further, non-parallel, non-concurrent algorithms are often referred to as "[[sequential algorithm]]s", by contrast with concurrent algorithms.</div></td>
</tr>
<tr>
<td colspan="2" class="diff-lineno">Line 22:</td>
<td colspan="2" class="diff-lineno">Line 22:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>[[Shared memory (interprocess communication)|Shared memory]] processing needs additional [[Lock (computer science)|locking]] for the data, imposes the overhead of additional processor and bus cycles, and also serializes some portion of the algorithm.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>[[Shared memory (interprocess communication)|Shared memory]] processing needs additional [[Lock (computer science)|locking]] for the data, imposes the overhead of additional processor and bus cycles, and also serializes some portion of the algorithm.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>[[Message passing]] processing uses channels and message boxes but this communication adds transfer overhead on the bus, additional memory need for queues and message boxes and latency in the messages. Designs of parallel processors use special buses like [[Crossbar switch|crossbar]] so that the communication overhead will be small but it is the parallel algorithm that decides the volume of the traffic.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>[[Message passing]] processing uses channels and message boxes but this communication adds transfer overhead on the bus, additional memory need for queues and message boxes and latency in the messages. Designs of parallel processors use special <ins style="font-weight: bold; text-decoration: none;">[[Bus (computing)|</ins>buses<ins style="font-weight: bold; text-decoration: none;">]]</ins> like [[Crossbar switch|crossbar]] so that the communication overhead will be small but it is the parallel algorithm that decides the volume of the traffic.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>If the communication overhead of additional processors outweighs the benefit of adding another processor, one encounters [[parallel slowdown]].</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>If the communication overhead of additional processors outweighs the benefit of adding another processor, one encounters [[parallel slowdown]].</div></td>
</tr>
</table>
Haeinous
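The revision diffed above touches the article's passage on coordination costs: shared-memory processing pays for locking (which serializes part of the algorithm), while message passing pays for transfer, queueing, and latency on its channels. As an aside, here is a minimal Python sketch, not part of the article or its edit history and with every name invented for illustration, contrasting the two styles on a toy counting task. CPython threads are used only to show the coordination patterns, not to claim real parallel speedup.

```python
# Illustrative sketch only: shared memory with a lock vs. message passing over a queue.
import threading
import queue

# Shared memory: every increment must acquire the lock, which serializes that
# portion of the work and adds locking overhead on each update.
counter = 0
lock = threading.Lock()

def add_shared(n):
    global counter
    for _ in range(n):
        with lock:          # locking overhead on every update
            counter += 1

# Message passing: workers send one partial result over a channel (a queue);
# the transfer and queueing add their own overhead and latency instead.
results = queue.Queue()

def add_message(n):
    results.put(n)          # one message per worker, no shared state

shared_workers = [threading.Thread(target=add_shared, args=(1000,)) for _ in range(4)]
for t in shared_workers: t.start()
for t in shared_workers: t.join()

message_workers = [threading.Thread(target=add_message, args=(1000,)) for _ in range(4)]
for w in message_workers: w.start()
for w in message_workers: w.join()
total = sum(results.get() for _ in range(4))

print(counter, total)       # both print 4000
```

In either style the coordination step, the lock acquisition or the queue transfer, is the overhead the passage describes, and it is the algorithm's communication pattern that determines how much of it there is.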
https://en.wikipedia.org/w/index.php?title=Parallel_algorithm&diff=996704570&oldid=prev
Monkbot: Task 18 (cosmetic): eval 3 templates: hyphenate params (1×);
2020-12-28T04:50:45Z
<p><a href="/wiki/User:Monkbot/task_18" class="mw-redirect" title="User:Monkbot/task 18">Task 18 (cosmetic)</a>: eval 3 templates: hyphenate params (1×);</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 04:50, 28 December 2020</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{refimprove|date=November 2012}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{refimprove|date=November 2012}}</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] which can do multiple operations in a given time. It has been a tradition of [[computer science]] to describe serial algorithms in abstract machine models, often the one known as [[Random-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel abstract machine (shared-memory).<ref>{{cite document| title=Parallel Algorithms | first1=Guy E. | last1=Blelloch | first2=Bruce M. | last2=Maggs | publisher=School of Computer Science, [[Carnegie Mellon University]] | location=USA | <del style="font-weight: bold; text-decoration: none;">accessdate</del>=2015-07-27 | url= https://www.cs.cmu.edu/~guyb/papers/BM04.pdf }}</ref><ref>{{cite document| title = Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages | first = Uzi | last = Vishkin | publisher = Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion | date = 2009 | url=http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf }}</ref></div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] which can do multiple operations in a given time. It has been a tradition of [[computer science]] to describe serial algorithms in abstract machine models, often the one known as [[Random-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel abstract machine (shared-memory).<ref>{{cite document| title=Parallel Algorithms | first1=Guy E. | last1=Blelloch | first2=Bruce M. | last2=Maggs | publisher=School of Computer Science, [[Carnegie Mellon University]] | location=USA | <ins style="font-weight: bold; text-decoration: none;">access-date</ins>=2015-07-27 | url= https://www.cs.cmu.edu/~guyb/papers/BM04.pdf }}</ref><ref>{{cite document| title = Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages | first = Uzi | last = Vishkin | publisher = Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion | date = 2009 | url=http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf }}</ref></div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Many parallel algorithms are executed [[concurrent computing|concurrently]] – though in general [[concurrent algorithm]]s are a distinct concept – and thus these concepts are often conflated, with which aspect of an algorithm is parallel and which is concurrent not being clearly distinguished. Further, non-parallel, non-concurrent algorithms are often referred to as "[[sequential algorithm]]s", by contrast with concurrent algorithms.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Many parallel algorithms are executed [[concurrent computing|concurrently]] – though in general [[concurrent algorithm]]s are a distinct concept – and thus these concepts are often conflated, with which aspect of an algorithm is parallel and which is concurrent not being clearly distinguished. Further, non-parallel, non-concurrent algorithms are often referred to as "[[sequential algorithm]]s", by contrast with concurrent algorithms.</div></td>
</tr>
</table>
Monkbot
https://en.wikipedia.org/w/index.php?title=Parallel_algorithm&diff=987442692&oldid=prev
Citation bot: Alter: template type, url. URLs might have been internationalized/anonymized. | You can use this bot yourself. Report bugs here. | Suggested by AManWithNoPlan | All pages linked from cached copy of User:AManWithNoPlan/sandbox2 | via #UCB_webform_linked 1368/3977
2020-11-07T02:14:07Z
<p>Alter: template type, url. URLs might have been internationalized/anonymized. | You can <a href="/wiki/Wikipedia:UCB" class="mw-redirect" title="Wikipedia:UCB">use this bot</a> yourself. <a href="/wiki/Wikipedia:DBUG" class="mw-redirect" title="Wikipedia:DBUG">Report bugs here</a>. | Suggested by AManWithNoPlan | All pages linked from cached copy of User:AManWithNoPlan/sandbox2 | via #UCB_webform_linked 1368/3977</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 02:14, 7 November 2020</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{refimprove|date=November 2012}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{refimprove|date=November 2012}}</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] which can do multiple operations in a given time. It has been a tradition of [[computer science]] to describe serial algorithms in abstract machine models, often the one known as [[Random-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel abstract machine (shared-memory).<ref>{{cite <del style="font-weight: bold; text-decoration: none;">paper</del>| title=Parallel Algorithms | first1=Guy E. | last1=Blelloch | first2=Bruce M. | last2=Maggs | publisher=School of Computer Science, [[Carnegie Mellon University]] | location=USA | accessdate=2015-07-27 | url= https://www.cs.cmu.edu/~guyb/papers/BM04.pdf }}</ref><ref>{{cite <del style="font-weight: bold; text-decoration: none;">paper</del>| title = Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages | first = Uzi | last = Vishkin | publisher = Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion | date = 2009 | url=http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf }}</ref></div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] which can do multiple operations in a given time. It has been a tradition of [[computer science]] to describe serial algorithms in abstract machine models, often the one known as [[Random-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel abstract machine (shared-memory).<ref>{{cite <ins style="font-weight: bold; text-decoration: none;">document</ins>| title=Parallel Algorithms | first1=Guy E. | last1=Blelloch | first2=Bruce M. | last2=Maggs | publisher=School of Computer Science, [[Carnegie Mellon University]] | location=USA | accessdate=2015-07-27 | url= https://www.cs.cmu.edu/~guyb/papers/BM04.pdf }}</ref><ref>{{cite <ins style="font-weight: bold; text-decoration: none;">document</ins>| title = Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages | first = Uzi | last = Vishkin | publisher = Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion | date = 2009 | url=http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf }}</ref></div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Many parallel algorithms are executed [[concurrent computing|concurrently]] – though in general [[concurrent algorithm]]s are a distinct concept – and thus these concepts are often conflated, with which aspect of an algorithm is parallel and which is concurrent not being clearly distinguished. Further, non-parallel, non-concurrent algorithms are often referred to as "[[sequential algorithm]]s", by contrast with concurrent algorithms.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Many parallel algorithms are executed [[concurrent computing|concurrently]] – though in general [[concurrent algorithm]]s are a distinct concept – and thus these concepts are often conflated, with which aspect of an algorithm is parallel and which is concurrent not being clearly distinguished. Further, non-parallel, non-concurrent algorithms are often referred to as "[[sequential algorithm]]s", by contrast with concurrent algorithms.</div></td>
</tr>
<tr>
<td colspan="2" class="diff-lineno">Line 10:</td>
<td colspan="2" class="diff-lineno">Line 10:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Some problems are easy to divide up into pieces in this way – these are called ''[[embarrassingly parallel problem]]s.'' Examples include many algorithms to solve [[Rubik's Cube]]s and find values which result in a given [[Associative array|hash]].{{Citation needed|date=December 2019}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Some problems are easy to divide up into pieces in this way – these are called ''[[embarrassingly parallel problem]]s.'' Examples include many algorithms to solve [[Rubik's Cube]]s and find values which result in a given [[Associative array|hash]].{{Citation needed|date=December 2019}}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Some problems cannot be split up into parallel portions, as they require the results from a preceding step to effectively carry on with the next step – these are called '''{{visible anchor|inherently serial problem}}s'''. Examples include iterative [[Numerical analysis|numerical methods]], such as [[Newton's method]], iterative solutions to the [[three-body problem]], and most of the available algorithms to compute [[pi]] (π).{{citation needed|date=August 2019}} Some sequential algorithms can be converted into parallel algorithms using [[automatic parallelization]].<ref name="MXian1997">{{cite book|author1=Megson G M|author2=Chen Xian|title=Automatic Parallelization For A Class Of Regular Computations|url=https://books.google.com/books?id=1wHtCgAAQBAJ<del style="font-weight: bold; text-decoration: none;">&printsec=frontcover#v=onepage</del>&q=%22parallel<del style="font-weight: bold; text-decoration: none;">%20algorithm</del>%22<del style="font-weight: bold; text-decoration: none;">&f=false</del>|date=4 January 1997|publisher=World Scientific|isbn=978-981-4498-41-8}}</ref></div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Some problems cannot be split up into parallel portions, as they require the results from a preceding step to effectively carry on with the next step – these are called '''{{visible anchor|inherently serial problem}}s'''. Examples include iterative [[Numerical analysis|numerical methods]], such as [[Newton's method]], iterative solutions to the [[three-body problem]], and most of the available algorithms to compute [[pi]] (π).{{citation needed|date=August 2019}} Some sequential algorithms can be converted into parallel algorithms using [[automatic parallelization]].<ref name="MXian1997">{{cite book|author1=Megson G M|author2=Chen Xian|title=Automatic Parallelization For A Class Of Regular Computations|url=https://books.google.com/books?id=1wHtCgAAQBAJ&q=%22parallel<ins style="font-weight: bold; text-decoration: none;">+algorithm</ins>%22|date=4 January 1997|publisher=World Scientific|isbn=978-981-4498-41-8}}</ref></div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Motivation==</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Motivation==</div></td>
</tr>
</table>
Citation bot
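The passage diffed in the revision above distinguishes embarrassingly parallel problems (such as searching for inputs that hash to a given value) from inherently serial ones (such as Newton's method, where each step needs the previous result). The sketch below is illustrative only and not drawn from the article; the target prefix, chunk sizes, and function names are arbitrary. It shows the hash search dividing cleanly across a process pool, while the Newton iteration stays a sequential loop.

```python
# Illustrative sketch only: an embarrassingly parallel search vs. an inherently serial iteration.
from hashlib import sha256
from multiprocessing import Pool

TARGET_PREFIX = "0000"  # arbitrary goal: find x whose SHA-256 hex digest starts with this

def check_range(bounds):
    lo, hi = bounds
    # Each worker scans its own slice independently; no communication is needed.
    return [x for x in range(lo, hi)
            if sha256(str(x).encode()).hexdigest().startswith(TARGET_PREFIX)]

def newton_sqrt(a, steps=10):
    # Each step depends on the previous estimate, so the loop cannot be split up.
    x = a
    for _ in range(steps):
        x = 0.5 * (x + a / x)
    return x

if __name__ == "__main__":
    chunks = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
    with Pool(4) as pool:
        hits = [x for part in pool.map(check_range, chunks) for x in part]
    print(len(hits), newton_sqrt(2.0))
```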
https://en.wikipedia.org/w/index.php?title=Parallel_algorithm&diff=952835041&oldid=prev
Freeknowledgecreator: formatting
2020-04-24T09:03:38Z
<p>formatting</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 09:03, 24 April 2020</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{refimprove|date=November 2012}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{refimprove|date=November 2012}}</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] which can do multiple operations in a given time. It has been a tradition of [[computer science]] to describe serial algorithms in abstract machine models, often the one known as [[Random-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel abstract machine (shared-memory).</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] which can do multiple operations in a given time. It has been a tradition of [[computer science]] to describe serial algorithms in abstract machine models, often the one known as [[Random-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel abstract machine (shared-memory).<ins style="font-weight: bold; text-decoration: none;"><ref>{{cite paper| title=Parallel Algorithms | first1=Guy E. | last1=Blelloch | first2=Bruce M. | last2=Maggs | publisher=School of Computer Science, [[Carnegie Mellon University]] | location=USA | accessdate=2015-07-27 | url= https://www.cs.cmu.edu/~guyb/papers/BM04.pdf }}</ref><ref>{{cite paper| title = Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages | first = Uzi | last = Vishkin | publisher = Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion | date = 2009 | url=http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf }}</ref></ins></div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><ref>{{cite paper| title=Parallel Algorithms | first1=Guy E. | last1=Blelloch | first2=Bruce M. | last2=Maggs | publisher=School of Computer Science, [[Carnegie Mellon University]] | location=USA | accessdate=2015-07-27 | url= https://www.cs.cmu.edu/~guyb/papers/BM04.pdf }}</ref></div></td>
<td colspan="2" class="diff-empty diff-side-added"></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><ref>{{cite paper| title = Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages | first = Uzi | last = Vishkin | publisher = Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion | date = 2009 | url=http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf }}</ref></div></td>
<td colspan="2" class="diff-empty diff-side-added"></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Many parallel algorithms are executed [[concurrent computing|concurrently]] – though in general [[concurrent algorithm]]s are a distinct concept – and thus these concepts are often conflated, with which aspect of an algorithm is parallel and which is concurrent not being clearly distinguished. Further, non-parallel, non-concurrent algorithms are often referred to as "[[sequential algorithm]]s", by contrast with concurrent algorithms.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Many parallel algorithms are executed [[concurrent computing|concurrently]] – though in general [[concurrent algorithm]]s are a distinct concept – and thus these concepts are often conflated, with which aspect of an algorithm is parallel and which is concurrent not being clearly distinguished. Further, non-parallel, non-concurrent algorithms are often referred to as "[[sequential algorithm]]s", by contrast with concurrent algorithms.</div></td>
</tr>
</table>
Freeknowledgecreator