Partially Fake Audio Detection by Self-attention-based Fake Span Discovery

Citation Author(s):
Haibin Wu, Heng-Cheng Kuo, Naijun Zheng, Kuo-Hsuan Hung, Hung-yi Lee, Yu Tsao, Hsin-Min Wang, Helen Meng
Submitted by:
Haibin Wu
Last updated:
4 May 2022 - 11:29pm
Document Type:
Presentation Slides
Document Year:
2022
Presenters:
Haibin Wu
Paper Code:
CHAL-5.5

The past few years have witnessed significant advances in speech synthesis and voice conversion technologies. However, such technologies can undermine the robustness of widely deployed biometric identification models and can be harnessed by in-the-wild attackers for illegal purposes. The ASVspoof challenge mainly focuses on audio synthesized by advanced speech synthesis and voice conversion models, as well as replay attacks. Recently, the first Audio Deep Synthesis Detection challenge (ADD 2022) extended the attack scenarios to further settings, and it is the first challenge to propose the partially fake audio detection task. Such brand-new attacks are dangerous, and how to counter them remains an open question. We therefore propose a novel framework that introduces a question-answering (fake span discovery) strategy with a self-attention mechanism to detect partially fake audio. The proposed fake span detection module tasks the anti-spoofing model with predicting the start and end positions of the fake clip within a partially fake utterance, directs the model's attention toward discovering the fake spans rather than other, less generalizable shortcuts, and thus equips the model with the capacity to discriminate between real and partially fake audio. Our submission ranked second in the partially fake audio detection track of ADD 2022.
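To make the question-answering-style span prediction concrete, the following PyTorch sketch shows a self-attention encoder over frame-level features with a head that emits start and end logits for the fake span, alongside an utterance-level real/fake head. The layer sizes, pooling strategy, and loss combination are illustrative assumptions and do not reproduce the authors' exact architecture.

```python
import torch
import torch.nn as nn

class FakeSpanDetector(nn.Module):
    """Minimal sketch: self-attention encoder with a span head
    (start/end logits per frame) and an utterance-level real/fake head."""

    def __init__(self, feat_dim=80, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Two logits per frame: start and end of the fake span.
        self.span_head = nn.Linear(d_model, 2)
        # Utterance-level binary decision (real vs. partially fake).
        self.cls_head = nn.Linear(d_model, 2)

    def forward(self, feats):
        # feats: (batch, frames, feat_dim) acoustic features, e.g. log-mel spectrograms
        x = self.input_proj(feats)
        x = self.encoder(x)  # self-attention across frames
        start_logits, end_logits = self.span_head(x).unbind(dim=-1)
        utt_logits = self.cls_head(x.mean(dim=1))  # mean-pooled utterance embedding
        return start_logits, end_logits, utt_logits


# Usage with hypothetical shapes/targets: joint loss over the span
# boundaries (as in question answering) and the utterance label.
model = FakeSpanDetector()
feats = torch.randn(8, 400, 80)             # 8 utterances, 400 frames each
start_logits, end_logits, utt_logits = model(feats)
start_target = torch.randint(0, 400, (8,))  # frame index where the fake clip starts
end_target = torch.randint(0, 400, (8,))    # frame index where the fake clip ends
utt_target = torch.randint(0, 2, (8,))      # 0 = real, 1 = partially fake
loss = (nn.functional.cross_entropy(start_logits, start_target)
        + nn.functional.cross_entropy(end_logits, end_target)
        + nn.functional.cross_entropy(utt_logits, utt_target))
```

Treating the start and end frames as classification targets over the time axis mirrors span extraction in reading comprehension, which is why the task is framed as question answering; the auxiliary utterance-level head provides the final real-versus-fake decision at inference time.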
