Acoustic Matching by Embedding Impulse Responses

Citation Author(s):
Adam Finkelstein
Submitted by:
Jiaqi Su
Last updated:
23 May 2020 - 11:34am
Document Type:
Presentation Slides
Document Year:
2020
Presenter's Name:
Jiaqi Su
Paper Code:
5239

Abstract

The goal of acoustic matching is to transform an audio recording made in one acoustic environment to sound as if it had been recorded in a different environment, based on reference audio from the target environment. This paper introduces a deep learning solution for two parts of the acoustic matching problem. First, we characterize acoustic environments by mapping audio into a low-dimensional embedding invariant to speech content and speaker identity. Next, a waveform-to-waveform neural network conditioned on this embedding learns to transform an input waveform to match the acoustic qualities encoded in the target embedding. Listening tests on both simulated and real environments show that the proposed approach improves on state-of-the-art baseline methods.
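The two-stage design described in the abstract can be sketched in miniature. The code below is a hypothetical toy illustration, not the paper's implementation: `embed_environment` stands in for the embedding network (here, order-invariant summary statistics serve as a crude proxy for content- and speaker-invariance), and `conditioned_transform` stands in for the waveform-to-waveform network, with the embedding supplying a per-sample scale and shift. All function names and parameters are invented for illustration.

```python
def embed_environment(reference, dim=4):
    """Toy stand-in for the embedding network: summary statistics of the
    reference audio. Being order-invariant, they are a crude proxy for
    an embedding invariant to speech content and speaker identity."""
    n = len(reference)
    mean = sum(reference) / n
    energy = sum(x * x for x in reference) / n
    peak = max(abs(x) for x in reference)
    return [mean, energy, peak, n ** 0.5][:dim]

def conditioned_transform(waveform, embedding):
    """Toy stand-in for the waveform-to-waveform network: the embedding
    conditions a gain and bias applied to every input sample. The real
    model is a deep network trained to match the target acoustics."""
    gain = 1.0 + embedding[1]   # derived from reference energy
    bias = 0.1 * embedding[0]   # derived from reference mean
    return [gain * x + bias for x in waveform]
```

In this sketch, a recording is "matched" to a target environment by first embedding reference audio from that environment, then transforming the input waveform under that embedding; shuffling the reference samples leaves the embedding unchanged, mimicking the invariance the paper trains for.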

Dataset Files

presentation

(111)