
EFFICIENT VIDEO AND AUDIO PROCESSING WITH LOIHI 2

Submitted by: Sumit Shrestha
Last updated: 18 April 2024 - 6:55am
Document Type: Presentation Slides
Document Year: 2024
Presenters: Sumit Bam Shrestha
Paper Code: SS-L17.6
 

Loihi 2 is a fully event-based neuromorphic processor that supports a wide range of synaptic connectivity configurations and temporal neuron dynamics. Loihi 2's temporal and event-based paradigm is naturally well-suited to processing data from an event-based sensor, such as a Dynamic Vision Sensor (DVS) or a Silicon Cochlea. However, this raises the question: how general are the signal processing efficiency gains of Loihi 2 over conventional computer architectures? For instance, how efficiently can Loihi 2 process a "standard" temporal input datastream, such as frame-based video or a streaming audio waveform? In this work, we address this question by designing and benchmarking efficient audio and video applications on Loihi 2. We utilize the wide variety of powerful dynamics supported by Loihi 2's flexible neuron programming, such as Leaky Integrate-and-Fire, Adaptive Leaky Integrate-and-Fire, Resonate-and-Fire, and Sigma-Delta encapsulation. We find that these powerful neuron dynamics, when matched with appropriate synaptic connectivity such as hardware-accelerated convolutions, enable highly efficient processing of a variety of tasks on standard audio and video on Loihi 2.
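
As a rough illustration of two of the dynamics named above, the sketch below shows plain NumPy analogues of Sigma-Delta encoding and a Leaky Integrate-and-Fire update. This is not the Loihi 2 or Lava API; the function names, parameters, and the toy "video" are hypothetical, and real on-chip dynamics are fixed-point and programmed per neuron. It is only meant to show why dense frame-based input becomes sparse, event-wise work under a Sigma-Delta scheme.

```python
# Illustrative sketch only: NumPy analogues of Sigma-Delta encoding and a
# Leaky Integrate-and-Fire step. Not the Loihi 2 / Lava API; names are hypothetical.
import numpy as np

def sigma_delta_encode(frames, threshold=0.1):
    """Emit a graded 'spike' only where a pixel has changed by more than
    `threshold` since its last emitted value; otherwise emit 0.
    Dense video thus turns into mostly-zero (skippable) messages."""
    ref = np.zeros_like(frames[0])
    events = []
    for frame in frames:
        delta = frame - ref
        spike = np.where(np.abs(delta) >= threshold, delta, 0.0)
        ref = ref + spike  # the receiver can rebuild the frame as the running sum of spikes
        events.append(spike)
    return events

def lif_step(v, inp, decay=0.9, v_th=1.0):
    """One Leaky Integrate-and-Fire update: leak, integrate input, fire, reset."""
    v = decay * v + inp
    spikes = v >= v_th
    v = np.where(spikes, 0.0, v)  # reset membrane potential where a spike occurred
    return v, spikes

# Toy usage: a slowly drifting 4x4 "video" yields mostly-zero Sigma-Delta events.
rng = np.random.default_rng(0)
frames = np.cumsum(0.02 * rng.standard_normal((20, 4, 4)), axis=0)
events = sigma_delta_encode(frames)
sparsity = np.mean([np.mean(e == 0) for e in events])
print(f"fraction of zero (skipped) messages: {sparsity:.2f}")
```

The design point this sketch tries to capture is that Sigma-Delta encapsulation communicates only changes, so slowly varying frames or waveforms generate few messages, which is what an event-based processor can exploit for efficiency.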
