
Draft:Address-Event Representation


Address-Event Representation (AER) is a communication protocol and data format used in neuromorphic engineering to transmit discrete events between neural processing elements using space-time coordinates. Originally developed by Misha Mahowald and Carver Mead in the late 1980s, AER enables asynchronous, event-driven communication that mimics the sparse, temporal nature of biological neural networks.

AER is particularly valuable for representing sparse temporal data where most values are zero and can be ignored, making it closely related to coordinate-based representations of sparse matrices and tensors. It has become the de facto standard for neuromorphic sensors, including event cameras, cochlear implants, and some tactile sensors.
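
The connection to coordinate-list ("COO") sparse formats can be made concrete with a short sketch. The Python example below is purely illustrative; the sensor size, field order, and event values are assumptions made for the example rather than part of any cited AER format. It reads a batch of address events as the coordinates and values of a sparse matrix, which stays compact precisely because most entries are zero.

```python
# Illustrative sketch only: interpret a batch of AER-style events as a
# coordinate-list (COO) sparse matrix. Sensor size and values are assumed.
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical events from a 4x4 sensor, each given as (row, col, polarity).
events = [(0, 1, +1), (2, 3, -1), (0, 1, +1), (3, 0, +1)]

rows = np.array([e[0] for e in events])
cols = np.array([e[1] for e in events])
vals = np.array([e[2] for e in events])

# Only the event coordinates and values are stored; duplicate coordinates
# are summed on densification, so repeated events at one pixel accumulate.
sparse_frame = coo_matrix((vals, (rows, cols)), shape=(4, 4))
print(sparse_frame.toarray())  # mostly zeros; non-zero only where events occurred
```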

Overview


In biological neural networks, neurons communicate through discrete action potentials (spikes) that occur at specific times. AER replicates this communication pattern in artificial systems by encoding events as addresses that specify when and where neural activity occurs, rather than continuously transmitting all possible states.

The fundamental principle of AER is to represent neural events as tuples containing spatial coordinates and temporal information:

event = (address, timestamp, data)

where the address typically encodes spatial location and the data may contain additional information such as polarity or intensity.
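
A minimal sketch of such a tuple in Python is shown below; the field names, types, and units are illustrative assumptions rather than part of any fixed AER standard.

```python
# Illustrative sketch of an address event as a simple record.
from dataclasses import dataclass

@dataclass(frozen=True)
class AddressEvent:
    address: int     # spatial location of the source (pixel, channel, or neuron index)
    timestamp: int   # when the event occurred, e.g. in microseconds (assumed unit)
    data: int = 0    # optional payload such as polarity or intensity

# Example: an event from the element at address 1023 at t = 500, carrying polarity +1.
ev = AddressEvent(address=1023, timestamp=500, data=+1)
print(ev)
```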

Most event-based data formats used by event cameras and event-based datasets, such as AEDAT[1] and EVT,[2] are built on AER representations.

History and development


AER was first proposed by Carver Mead's group at the California Institute of Technology around 1991 as part of their work on analog very-large-scale integration (VLSI) neural networks.[3][4] Their goal was to create a protocol that could provide high-bandwidth communication among large arrays of neurons while maintaining the asynchronous, event-driven nature of biological neural computation.[5]

The development of AER was driven by the need to interconnect arrays of analog neural processing elements without requiring dedicated point-to-point connections between every pair of neurons. This approach enabled the construction of large-scale neuromorphic systems with thousands of processing elements.[6]

Technical description


Event representation


A typical AER event contains:

  • Timestamp: When the event occurred (absolute or relative time)
  • Source address: Spatial coordinates of the originating element
  • Event type: Polarity, intensity, or other relevant data

For example, in a dynamic vision sensor (event camera), each pixel generates events when it detects changes in light intensity:

event = (x, y, t, polarity)

where polarity indicates whether the light intensity increased (+1) or decreased (-1).
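
In hardware, such an event is typically transmitted as a single address word on a shared bus. The sketch below shows one way the pixel coordinates and polarity could be packed into and recovered from such a word; the 14-bit x, 14-bit y, 1-bit polarity layout is an illustrative assumption and does not reproduce the AEDAT or EVT specifications.

```python
# Illustrative bit-packing of a DVS-style event into one address word.
# The bit layout (14-bit x, 14-bit y, 1-bit polarity) is an assumption,
# not the layout used by AEDAT, EVT, or any particular sensor.

def pack_event(x: int, y: int, polarity: int) -> int:
    """Pack pixel coordinates and polarity into a single integer word."""
    p = 1 if polarity > 0 else 0          # encode +1 as 1 and -1 as 0
    return (x << 15) | (y << 1) | p

def unpack_event(word: int) -> tuple:
    """Recover (x, y, polarity) from a packed word."""
    p = word & 0x1
    y = (word >> 1) & 0x3FFF              # 14 bits for y
    x = (word >> 15) & 0x3FFF             # 14 bits for x
    return x, y, (+1 if p else -1)

word = pack_event(x=120, y=64, polarity=-1)
assert unpack_event(word) == (120, 64, -1)
```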

Applications


Neuromorphic sensors


AER has become the standard output format for neuromorphic sensors:

  • Event cameras: Also known as dynamic vision sensors, these cameras output pixel-level brightness changes as AER events rather than traditional frames (see the sketch after this list).[7]
  • Silicon cochleae: Neuromorphic auditory sensors that convert sound into spike trains using AER formatting.[8]
  • Tactile sensors: Touch-sensitive arrays that generate events upon contact or pressure changes.
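
As a simple illustration of the contrast with frame-based output, the sketch below accumulates a short window of event-camera events into a 2D histogram; the sensor resolution and event values are assumptions made for the example.

```python
# Illustrative sketch: accumulate a window of DVS-style events into a frame.
import numpy as np

HEIGHT, WIDTH = 180, 240   # assumed sensor resolution for the example

# Hypothetical events, each given as (timestamp, x, y, polarity).
events = [(10, 5, 7, +1), (25, 5, 7, +1), (40, 100, 90, -1)]

frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
for t, x, y, p in events:
    frame[y, x] += p       # signed count: positive where ON events dominate

print(frame[7, 5], frame[90, 100])   # prints 2 and -1
```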

Neuromorphic computing


AER enables the construction of large-scale neuromorphic computing systems by providing a standardized interface between processing elements. Notable implementations include:

  • SpiNNaker: A massively parallel neuromorphic platform that uses AER-like packet-based communication.[9]
  • Intel Loihi: A neuromorphic research chip that implements spike-based communication protocols derived from AER principles.[10]


References

  1. ^ "AEDAT 4.0". iniVation. 2025-06-29.
  2. ^ "EVT 3.0 format". Prophesee. 2025-06-29.
  3. ^ Liu, Shih-Chii (2014-12-24). Event-Based Neuromorphic Systems. John Wiley & Sons. ISBN 978-1-118-92762-5.
  4. ^ Mahowald M, Mead C (1991). The silicon retina. Proc. SPIE 1473. pp. 52–58. doi:10.1117/12.45540.
  5. ^ Mahowald M (1994). An Analog VLSI System for Stereoscopic Vision. Springer US. doi:10.1007/978-1-4615-2724-4. ISBN 978-1-4613-6174-9.
  6. ^ Boahen KA (2000). "Point-to-point connectivity between neuromorphic chips using address events". IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing. 47 (5): 416–434. doi:10.1109/82.842110.
  7. ^ Gallego G, Delbruck T, Orchard GM, Bartolozzi C, Taba B, Censi A, Leutenegger S, Davison AJ, Conradt J, Daniilidis K, Scaramuzza D (2022). "Event-based Vision: A Survey". IEEE Transactions on Pattern Analysis and Machine Intelligence. 44 (1): 154–180. doi:10.1109/TPAMI.2020.3008413. PMID 32750812.
  8. ^ Liu SC, van Schaik A, Minch BA, Delbruck T (2014). "Asynchronous binaural spatial audition sensor with 2×64×4 channel output". IEEE Transactions on Biomedical Circuits and Systems. 8 (4): 453–464. doi:10.1109/TBCAS.2013.2281834. PMID 24216772.
  9. ^ Furber SB, Galluppi F, Temple S, Plana LA (2014). "The SpiNNaker Project". Proceedings of the IEEE. 102 (5): 652–665. doi:10.1109/JPROC.2014.2304638.
  10. ^ Davies M, Srinivasa N, Lin TH, Chinya G, Cao Y, Choday SH, Dimou G, Joshi P, Imam N, Jain S, Liao Y, Lin CK, Lines A, Liu R, Mathaikutty D, McCoy S, Paul A, Tse J, Venkataramanan G, Weng YH, Wild A, Yang Y, Wang H (2018). "Loihi: A Neuromorphic Manycore Processor with On-Chip Learning". IEEE Micro. 38 (1): 82–99. doi:10.1109/MM.2018.112130359.