Seeing Through the Conversation: Audio-visual Speech Separation based on Diffusion Model

DOI:
10.60864/2hbx-fj26
Citation Author(s):
Submitted by:
Suyeon Lee
Last updated:
15 April 2024 - 11:43pm
Document Type:
Presentation Slides
Presenters:
Suyeon Lee
Paper Code:
SLP-L1.6

The objective of this work is to extract the target speaker’s voice from a mixture of voices using visual cues. Existing works on audio-visual speech separation have demonstrated promising intelligibility, but maintaining naturalness remains challenging. To address this issue, we propose AVDiffuSS, an audio-visual speech separation model based on a diffusion mechanism known for its capability to generate natural samples. We also propose a cross-attention-based feature fusion mechanism for effective fusion of the two modalities for diffusion. This mechanism is specifically tailored to the speech domain to integrate the phonetic information from audio-visual correspondence into speech generation. In this way, the fusion process maintains the high temporal resolution of the features without excessive computational requirements. We demonstrate that the proposed framework achieves state-of-the-art results on two benchmarks, VoxCeleb2 and LRS3, producing speech with notably better naturalness. Project page with demo: https://mm.kaist.ac.kr/projects/avdiffuss/
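To illustrate the general idea of cross-attention-based fusion of the two modalities, the following is a minimal PyTorch sketch in which audio frame features attend to visual (lip) features. The module name, embedding size, and tensor shapes are assumptions for illustration, not the paper's exact architecture.

```python
# Illustrative sketch only: cross-attention fusion where audio features act as
# queries and visual features provide keys/values, so lip-motion cues modulate
# each audio time step while the audio timeline (temporal resolution) is kept.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):  # hypothetical module name
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # audio:  (batch, T_audio, dim) -- fine-grained temporal resolution
        # visual: (batch, T_video, dim) -- typically coarser (e.g., 25 fps)
        fused, _ = self.attn(query=audio, key=visual, value=visual)
        # Residual connection preserves the original audio feature sequence length.
        return self.norm(audio + fused)


if __name__ == "__main__":
    fusion = CrossModalFusion()
    a = torch.randn(2, 400, 256)   # e.g., ~4 s of audio frames
    v = torch.randn(2, 100, 256)   # e.g., ~4 s of video frames at 25 fps
    print(fusion(a, v).shape)      # torch.Size([2, 400, 256])
```

Because the visual sequence serves only as keys and values, the fused output retains the audio-rate resolution without upsampling the video stream, which keeps the computational cost of the fusion modest.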
