External memory algorithm

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by DavidCary at 15:15, 29 July 2016 (explicit link to an article about the topic alluded to). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Out-of-core or external memory algorithms are algorithms that are designed to process data that is too large to fit into a computer's main memory at one time. Such algorithms must be optimized to efficiently fetch and access data stored in slow bulk memory (auxiliary memory) such as hard drives or tape drives.[1]
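The canonical example of such an algorithm is external merge sort, which sorts a data set too large for main memory by spilling sorted runs to bulk storage and then merging them. A minimal Python sketch follows; the function name, the chunk-size parameter, and the use of temporary files as a stand-in for slow auxiliary storage are illustrative assumptions, not part of any standard implementation.

```python
import heapq
import os
import tempfile

def external_sort(values, chunk_size):
    # Phase 1: split the input into "runs", each small enough to sort
    # in main memory, and spill each sorted run to a temporary file
    # (standing in for slow bulk storage such as a hard drive).
    run_paths = []

    def spill(buf):
        fd, path = tempfile.mkstemp(text=True)
        with os.fdopen(fd, "w") as f:
            f.writelines(f"{v}\n" for v in sorted(buf))
        run_paths.append(path)

    buf = []
    for v in values:
        buf.append(v)
        if len(buf) == chunk_size:
            spill(buf)
            buf = []
    if buf:
        spill(buf)

    # Phase 2: k-way merge the sorted runs, reading each file
    # sequentially so only one item per run is resident in memory.
    files = [open(p) for p in run_paths]
    try:
        merged = list(heapq.merge(*((int(line) for line in f) for f in files)))
    finally:
        for f in files:
            f.close()
        for p in run_paths:
            os.remove(p)
    return merged
```

Because each run is written and read strictly sequentially, the algorithm performs well even when the bulk storage has high latency for random access.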

A typical example is geographic information systems, especially digital elevation models, where the full data set easily exceeds several gigabytes or even terabytes of data.

The notion extends naturally to a network connecting a data server to a processing or visualization workstation. Popular data-intensive web applications such as Google Maps or Google Earth fall within this area.

The model also applies beyond general-purpose CPUs, to GPU computing and to classical digital signal processing. In GPGPU-based computing, powerful graphics cards (GPUs) have little memory compared with the more familiar system memory (most often referred to simply as RAM), and CPU-to-GPU memory transfer is slow relative to the GPU's computation bandwidth.
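The GPU case follows the same pattern: the host streams the data set through the small device memory one tile at a time, running a kernel on each tile and accumulating partial results. The following pure-Python sketch only simulates this (the "device capacity", the tile-copy step, and the sum-of-squares "kernel" are hypothetical stand-ins for real host-to-device transfers and GPU kernels).

```python
def tiled_sum_of_squares(data, device_capacity):
    # Hypothetical illustration of GPU-style tiled processing: the
    # data set exceeds the (simulated) device memory, so the host
    # processes it one tile of at most `device_capacity` items at a time.
    total = 0
    for start in range(0, len(data), device_capacity):
        tile = data[start:start + device_capacity]  # "host -> device" copy
        total += sum(x * x for x in tile)           # "kernel" on the tile
    return total
```

In a real GPGPU program the same structure appears as asynchronous copies and kernel launches, often overlapped so that transfer of one tile hides behind computation on the previous one.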

References

  1. ^ Vitter, J. S. (2001). "External Memory Algorithms and Data Structures: Dealing with Massive Data". ACM Computing Surveys. 33 (2): 209–271. doi:10.1145/384192.384193.