EFFICIENT VIDEO AND AUDIO PROCESSING WITH LOIHI 2
- DOI:
- 10.60864/cw23-0m04
- Submitted by:
- Sumit Shrestha
- Last updated:
- 6 June 2024 - 10:22am
- Document Type:
- Presentation Slides
- Document Year:
- 2024
- Presenters:
- Sumit Bam Shrestha
- Paper Code:
- SS-L17.6
Loihi 2 is a fully event-based neuromorphic processor that supports a wide range of synaptic connectivity configurations and temporal neuron dynamics. Loihi 2's temporal, event-based paradigm is naturally well suited to processing data from an event-based sensor, such as a Dynamic Vision Sensor (DVS) or a Silicon Cochlea. However, this raises the question: how general are the signal-processing efficiency gains of Loihi 2 over conventional computer architectures? For instance, how efficiently can Loihi 2 process a *standard* temporal input stream, such as frame-based video or a streaming audio waveform? In this work, we address this question by designing and benchmarking efficient audio and video applications on Loihi 2. We utilize the wide variety of powerful dynamics supported by Loihi 2's flexible neuron programming, such as Leaky Integrate-and-Fire, Adaptive Leaky Integrate-and-Fire, Resonate-and-Fire, and Sigma-Delta encapsulation. We find that these neuron dynamics, when paired with appropriate synaptic connectivity such as hardware-accelerated convolutions, enable highly efficient processing of standard audio and video across a variety of tasks on Loihi 2.
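As a rough intuition for two of the dynamics named above, here is a minimal Python sketch (not Loihi 2 or Lava API code; all names and parameter values are illustrative assumptions) of a discrete-time Leaky Integrate-and-Fire update and a simple sigma-delta encoder, showing why event-based updates can be sparse on slowly varying signals such as consecutive video frames:

```python
# Illustrative sketch only -- not the Loihi 2 implementation.
# Shows (1) a discrete-time Leaky Integrate-and-Fire (LIF) update and
# (2) sigma-delta encoding, which emits events only when the input
# changes enough, so slowly varying signals produce few events.

def lif_step(v, x, decay=0.9, threshold=1.0):
    """One LIF update: leak the membrane, integrate input, spike and reset."""
    v = decay * v + x
    spike = v >= threshold
    if spike:
        v -= threshold  # soft reset keeps the residual above threshold
    return v, spike

def sigma_delta_encode(signal, step=0.25):
    """Emit a quantized delta event only when the input has moved by >= step."""
    events = []   # list of (time, quantized delta) pairs
    ref = 0.0     # reconstruction the receiver would hold
    for t, x in enumerate(signal):
        delta = x - ref
        if abs(delta) >= step:
            q = round(delta / step) * step
            events.append((t, q))
            ref += q
    return events

# A slowly varying ramp yields far fewer events than input samples.
ramp = [0.01 * t for t in range(100)]
events = sigma_delta_encode(ramp)
print(len(ramp), len(events))  # only a handful of events for 100 samples
```

The sigma-delta encoder is the key to the efficiency argument: downstream neurons only compute when an event arrives, so redundant, frame-to-frame-similar input costs almost nothing.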