Mixup Regularized Adversarial Networks for Multi-Domain Text Classification
- Submitted by: Yuan Wu
- Last updated: 21 June 2021 - 5:44pm
- Document Type: Presentation Slides
- Document Year: 2021
- Presenters: Yuan Wu
- Paper Code: HLT-14.6
Using the shared-private paradigm and adversarial training can significantly improve the performance of multi-domain text classification (MDTC) models. However, existing methods suffer from two issues: first, instances from the multiple domains are not sufficient for domain-invariant feature extraction; second, aligning only the marginal distributions may lead to a fatal mismatch. In this paper, we propose mixup regularized adversarial networks (MRANs) to address these two issues. More specifically, domain and category mixup regularizations are introduced to enrich the intrinsic features in the shared latent space and to enforce consistent predictions in between training instances, so that the learned features become more domain-invariant and discriminative. We conduct experiments on two benchmarks: the Amazon review dataset and the FDU-MTL dataset. Our approach yields average accuracies of 87.64% and 89.0% on these two datasets, respectively, outperforming all relevant baselines.
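
To make the mixup regularization idea concrete, the sketch below shows a generic mixup consistency term in PyTorch: two batches of instances are linearly interpolated, and the prediction on the mixed input is pushed toward the same interpolation of the predictions on the originals. This is a minimal illustration, not the authors' released MRAN code; the SharedEncoder module, the linear classifier head, the Beta(0.2, 0.2) mixing coefficient, and the KL-based consistency penalty are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """Toy shared feature extractor (assumed architecture, not the paper's)."""
    def __init__(self, in_dim=5000, hid_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

def mixup_consistency_loss(encoder, classifier, x_a, x_b, alpha=0.2):
    """Mix two batches of instances and penalize the gap between the
    prediction on the mixed input and the same interpolation of the
    predictions on the original inputs (a generic mixup consistency term)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_mix = lam * x_a + (1.0 - lam) * x_b
    log_p_mix = F.log_softmax(classifier(encoder(x_mix)), dim=1)
    with torch.no_grad():
        p_a = F.softmax(classifier(encoder(x_a)), dim=1)
        p_b = F.softmax(classifier(encoder(x_b)), dim=1)
        p_target = lam * p_a + (1.0 - lam) * p_b
    # KL divergence between the mixed prediction and the interpolated target
    return F.kl_div(log_p_mix, p_target, reduction="batchmean")

# Hypothetical usage: enforce consistency on instances mixed across two domains.
encoder = SharedEncoder()
classifier = nn.Linear(128, 2)   # binary sentiment head (assumed)
x_src = torch.randn(32, 5000)    # batch of features from one domain
x_tgt = torch.randn(32, 5000)    # batch of features from another domain
loss = mixup_consistency_loss(encoder, classifier, x_src, x_tgt)
```

In MRAN, regularizers of this flavor are applied both across domains (domain mixup) and within the classification task (category mixup) alongside the adversarial objective; the single loss above is only meant to convey the underlying interpolation-and-consistency mechanism.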