
Multi-Head ReLU Implicit Neural Representation Networks

Citation Author(s):
Arya Aftab, Alireza Morsali, Shahrokh Ghaemmaghami
Submitted by:
Arya Aftab
Last updated:
8 May 2022 - 2:48am
Document Type:
Presentation Slides
Document Year:
2022
Presenters:
Arya Aftab
Paper Code:
5178

In this paper, a novel multi-head multi-layer perceptron (MLP) structure is presented for implicit neural representation (INR). Since conventional rectified linear unit (ReLU) networks are known to exhibit a spectral bias towards learning the low-frequency features of a signal, we aim to mitigate this defect by exploiting the local structure of signals. Specifically, an MLP is used to capture the global features of the underlying generator function of the desired signal, and several heads are then utilized to reconstruct disjoint local features of the signal. To reduce computational complexity, sparse layers are deployed to attach the heads to the body. Through various experiments, we show that the proposed model does not suffer from the spectral bias of conventional ReLU networks and has superior generalization capabilities. Finally, simulation results confirm that the proposed multi-head structure outperforms existing INR methods at considerably lower computational cost. The source code is available at https://github.com/AlirezaMorsali/MH-RELU-INR
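As a rough illustration of the body-plus-heads architecture described in the abstract, below is a minimal PyTorch sketch of a multi-head ReLU INR. It is written from the abstract alone: the layer widths, head count, patch layout, and the narrow per-head attachment layers standing in for the paper's sparse layers are illustrative assumptions, not the authors' exact design; the official implementation is at the GitHub link above.

# Minimal sketch of a multi-head ReLU INR, assumed from the abstract.
# Layer sizes, head count, and the narrow per-head attachment layers
# (a stand-in for the paper's sparse layers) are illustrative choices.
import torch
import torch.nn as nn

class MultiHeadReluINR(nn.Module):
    def __init__(self, in_dim=2, hidden_dim=256, body_layers=3,
                 num_heads=16, head_dim=32, out_dim=3):
        super().__init__()
        # Body MLP: captures global features of the signal's generator function.
        layers = [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
        for _ in range(body_layers - 1):
            layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
        self.body = nn.Sequential(*layers)
        # Heads: each reconstructs a disjoint local patch of the signal.
        # A narrow attachment layer per head keeps the added cost small.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, head_dim), nn.ReLU(),
                          nn.Linear(head_dim, out_dim))
            for _ in range(num_heads)
        ])

    def forward(self, coords, head_idx):
        # coords: (N, in_dim) coordinates; head_idx: (N,) patch index per sample.
        feats = self.body(coords)
        out = coords.new_zeros(coords.shape[0], self.heads[0][-1].out_features)
        for i, head in enumerate(self.heads):
            mask = head_idx == i
            if mask.any():
                out[mask] = head(feats[mask])
        return out

# Example: fit an RGB image on [-1, 1]^2 with one head per cell of a 4x4 grid.
model = MultiHeadReluINR(num_heads=16)
coords = torch.rand(1024, 2) * 2 - 1                # pixel coordinates in [-1, 1]^2
cell = ((coords + 1) / 2 * 4).long().clamp(max=3)   # (N, 2) grid-cell indices in [0, 4)
head_idx = cell[:, 0] * 4 + cell[:, 1]              # flatten to a head index in [0, 16)
rgb = model(coords, head_idx)                       # (N, 3) predicted colors

In this sketch, each coordinate is routed only to the head responsible for its patch, so every head learns a disjoint local region while the shared body learns the global structure.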
