Defense against adversarial attacks on spoofing countermeasures of ASV
- Submitted by: Haibin Wu
- Last updated: 13 May 2020 - 9:21pm
- Document Type: Poster
- Document Year: 2020
Various cutting-edge countermeasure methods for automatic speaker verification (ASV) with considerable anti-spoofing performance were proposed in the ASVspoof 2019 challenge. However, previous work has shown that countermeasure models are vulnerable to adversarial examples that are indistinguishable from natural data. A good countermeasure model should not only be robust to spoofed audio, including synthetic, converted, and replayed audio, but should also counter deliberately crafted examples from malicious adversaries. In this work, we introduce one passive defense method, spatial smoothing, and one proactive defense method, adversarial training, to mitigate the vulnerability of ASV spoofing countermeasure models to adversarial examples. This paper is among the first to apply defense methods to improve the robustness of ASV spoofing countermeasure models under adversarial attack. The experimental results show that both defense methods help spoofing countermeasure models counter adversarial examples.
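The two defenses named in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: spatial smoothing is shown as a median filter over a 2-D feature map (e.g. a log-spectrogram), and adversarial training is shown with FGSM attacks on a toy logistic-regression "countermeasure" whose gradient is analytic. All function names and the toy model are hypothetical stand-ins for the paper's actual models.

```python
import numpy as np

# --- Passive defense: spatial smoothing (median filter) -----------------
# Median filtering each k x k neighborhood of the input feature map washes
# out the small, high-frequency perturbations typical of adversarial noise.
def median_smooth(x, k=3):
    """Apply a k x k median filter to a 2-D array (edge-padded borders)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.median(xp[i:i + k, j:j + k])
    return out

# --- Proactive defense: adversarial training with FGSM ------------------
# Toy countermeasure: logistic regression scoring bona fide (1) vs spoof (0).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method perturbation for the logistic loss."""
    g = (sigmoid(x @ w + b) - y)[:, None] * w  # d(loss)/dx, per sample
    return x + eps * np.sign(g)

def adversarial_train(X, y, eps=0.1, lr=0.5, steps=200):
    """Train on a mix of clean inputs and FGSM attacks on the current model."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=X.shape[1]) * 0.01, 0.0
    for _ in range(steps):
        X_adv = fgsm(X, y, w, b, eps)      # craft attacks on current weights
        X_mix = np.vstack([X, X_adv])      # clean + adversarial batch
        y_mix = np.concatenate([y, y])
        p = sigmoid(X_mix @ w + b)
        w -= lr * X_mix.T @ (p - y_mix) / len(y_mix)
        b -= lr * np.mean(p - y_mix)
    return w, b
```

In practice the paper's countermeasure models are deep networks, so the FGSM gradient would come from backpropagation rather than a closed form, but the training loop has the same shape: regenerate adversarial examples against the current model each step and include them in the update.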