Spatial computing

From Wikipedia, the free encyclopedia

Spatial computing was defined in 2003 by Simon Greenwold[1] as "human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces".

With the advent of consumer virtual reality,[2] augmented reality,[3] and mixed reality, companies such as Microsoft[4] and Magic Leap[5] use "spatial computing" in reference to the practice of using physical actions (head and body movements, gestures, speech) as inputs for interactive digital media systems, with perceived 3D physical space as the canvas for video, audio, and haptic outputs. It is also tied to the concept of 'digital twins'.[6]

Apple announced a spatial computing platform, the Vision Pro, on 5 June 2023. Its features include Spatial Audio, two micro-OLED displays, the Apple R1 chip, and eye tracking. It is planned for release in the United States in 2024.[7]


  1. ^ Greenwold, Simon (June 2003). "Spatial Computing" (PDF). MIT Graduate Thesis. Retrieved 22 December 2019.
  2. ^ Rubin, Peter. "The WIRED Guide to Virtual Reality". WIRED. Condé Nast. Retrieved 22 December 2019.
  3. ^ Nichols, Greg. "Spatial computing is reinventing how mobile techs work". ZDNet. Retrieved 22 December 2019.
  4. ^ "A new era of spatial computing brings fresh challenges—and solutions—to VR". Microsoft. 21 October 2019. Retrieved 22 December 2019.
  5. ^ Abovitz, R; Greco, P; Pellet, Y; Welch, H; Miller, Sam. "Spatial Computing: An Overview for Our Techie Friends". Magic Leap. Retrieved 22 December 2019.
  6. ^ Lathan, Corinna E.; Ling, Geoffrey. "Spatial Computing Could Be the Next Big Thing". Scientific American. Retrieved 25 January 2021.
  7. ^ "Apple Vision Pro". Apple. Apple Inc. Retrieved 5 June 2023.