
Robust Unstructured Knowledge Access In Conversational Dialogue With ASR Errors

Citation Author(s):
Yik-Cheung Tam, Jiacheng Xu, Seeger Zou, Zecheng Wang, Tinglong Liao, Shuhan Yuan
Submitted by:
Yik Tam
Last updated:
5 May 2022 - 9:04am
Document Type:
Presentation Slides
Document Year:
2022
Presenters:
Yik-Cheung Tam
Paper Code:
SPE-24.4

Spoken language understanding (SLU) performance can degrade in the presence of automatic speech recognition (ASR) errors. We propose a novel approach to improve SLU robustness by randomly corrupting clean training text with an ASR error simulator, then jointly self-correcting the errors and minimizing the target classification loss. In the proposed error simulator, we leverage confusion networks generated by an ASR decoder, without human transcriptions, to produce a variety of error patterns for model training. We evaluate our approach on the DSTC10 challenge, which targets knowledge-grounded task-oriented conversational dialogues with ASR errors. Experimental results show the effectiveness of our approach, boosting knowledge-seeking turn detection (KTD) F1 significantly from 0.9433 to 0.9904. Knowledge cluster classification improves from 0.7924 to 0.9333 in Recall@1. After knowledge document re-ranking, our approach shows significant improvement in all knowledge selection metrics on the test set: from 0.7358 to 0.7806 in Recall@1, from 0.8301 to 0.9333 in Recall@5, and from 0.7798 to 0.8460 in MRR@5 (Mean Reciprocal Rank). On the recent DSTC10 evaluation, our approach boosts knowledge selection Recall@1 from 0.495 to 0.7105 over the official baseline. Our source code is released on GitHub at https://github.com/yctam/dstc10_track2_task2.git
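To illustrate the error-simulation idea, here is a minimal sketch of confusion-network-style text corruption. It assumes a hypothetical confusion table (the `CONFUSIONS` dictionary and its entries are illustrative, not from the paper's data) mapping a clean word to ASR alternatives with probabilities, as might be mined from the confusion networks emitted by an ASR decoder. Clean training text is then corrupted by sampling these alternatives at a chosen rate:

```python
import random

# Hypothetical confusion table: clean word -> [(ASR alternative, probability), ...].
# In the paper's setting, such entries would be harvested from ASR confusion
# networks; the words and weights below are made up for illustration.
CONFUSIONS = {
    "their": [("there", 0.3), ("they're", 0.1)],
    "whole": [("hole", 0.2)],
    "suite": [("sweet", 0.4)],
}

def corrupt(tokens, corruption_rate=0.3, rng=random):
    """Randomly replace tokens with ASR-style confusions.

    Each token that has known confusions is replaced with probability
    `corruption_rate`; the replacement is sampled in proportion to the
    confusion weights. Tokens without confusions pass through unchanged.
    """
    out = []
    for tok in tokens:
        alts = CONFUSIONS.get(tok)
        if alts and rng.random() < corruption_rate:
            words, weights = zip(*alts)
            out.append(rng.choices(words, weights=weights, k=1)[0])
        else:
            out.append(tok)
    return out
```

In the proposed training scheme, pairs of (corrupted, clean) text produced this way would drive both the self-correction objective and the downstream classification loss; this sketch covers only the corruption step.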
