
Using the shared-private paradigm and adversarial training can significantly improve the performance of multi-domain text classification (MDTC) models. However, existing methods suffer from two issues: first, instances from the multiple domains are not sufficient for domain-invariant feature extraction; second, aligning only the marginal distributions may lead to a fatal mismatch. In this paper, we propose mixup regularized adversarial networks (MRANs) to address these two issues. More specifically, the domain and category mixup …
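
The mixup operation referenced here is standard: convex combinations of pairs of instances and their labels. As a point of reference, the following numpy sketch shows one hedged way such a regularizer could interpolate a batch; the function name and the Beta parameter are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def mixup_batch(x, y, alpha=0.2, rng=np.random.default_rng(0)):
        """Interpolate a batch of features x and one-hot labels y (mixup).

        Hypothetical sketch: draw lam ~ Beta(alpha, alpha) and mix each
        instance with a randomly chosen partner from the same batch.
        """
        lam = rng.beta(alpha, alpha)            # mixing coefficient
        perm = rng.permutation(len(x))          # random pairing
        x_mix = lam * x + (1 - lam) * x[perm]   # interpolated instances
        y_mix = lam * y + (1 - lam) * y[perm]   # interpolated labels
        return x_mix, y_mix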

We introduce a new Nonnegative Matrix Factorization (NMF) model called Nonnegative Unimodal Matrix Factorization (NuMF), which adds to NMF a unimodality condition on the columns of the basis matrix. NuMF finds applications, for example, in analytical chemistry. We first propose a simple but naive brute-force heuristic based on accelerated projected gradient. It is then improved by a multi-grid strategy, for which we prove that the restriction operator preserves unimodality.
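
To make the unimodality condition concrete, here is a rough numpy/scikit-learn sketch of a brute-force heuristic that maps a column onto a nonnegative unimodal shape by trying every mode position; the function name is hypothetical, this is not the authors' algorithm, and it need not return the exact least-squares projection.

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    def unimodal_nonneg_heuristic(v):
        """Brute force over mode positions: fit an increasing isotonic
        regression up to the mode and a decreasing one after it, clip at
        zero, and keep the candidate with the smallest squared error."""
        n, best, best_err = len(v), None, np.inf
        for p in range(n):
            up = IsotonicRegression(increasing=True).fit_transform(
                np.arange(p + 1), v[: p + 1])
            down = IsotonicRegression(increasing=False).fit_transform(
                np.arange(n - p), v[p:])
            cand = np.clip(np.concatenate([up[:-1], down]), 0.0, None)
            err = np.sum((cand - v) ** 2)
            if err < best_err:
                best, best_err = cand, err
        return best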

The key principle of unsupervised domain adaptation is to minimize the divergence between the source and target domains. Many recent methods follow this principle to learn domain-invariant features: they train task-specific classifiers to maximize the divergence and feature extractors to minimize it, in an adversarial way. However, this strategy often limits their performance. In this paper, we present a novel method that learns feature representations that minimize the domain divergence. We show that model uncertainty is a useful surrogate for this divergence.
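
As a hedged illustration of using model uncertainty, the following numpy sketch computes the mean predictive entropy of a classifier's softmax outputs, a common uncertainty measure; the helper function is hypothetical, and the paper's actual surrogate may be defined differently.

    import numpy as np

    def mean_predictive_entropy(logits):
        """Mean entropy of softmax predictions over a batch.

        Intuition (an assumption for this sketch): high entropy on
        target-domain inputs signals that their features fall near the
        source decision boundaries, so driving this quantity down can
        stand in for reducing the domain divergence."""
        z = logits - logits.max(axis=1, keepdims=True)   # stable softmax
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        ent = -(p * np.log(p + 1e-12)).sum(axis=1)       # per-sample entropy
        return ent.mean()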

There exist many background subtraction algorithms to detect motion in videos. To help compare them, datasets with ground-truth data, such as CDNET or LASIESTA, have been proposed. These datasets organize videos into categories that represent typical challenges for background subtraction. The evaluation procedure promoted by their authors consists of measuring performance indicators for each video separately and averaging them hierarchically, first within a category and then between categories, a procedure which we name “summarization”.
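
For concreteness, this small numpy sketch reproduces the hierarchical averaging just described; the scores and category names are made-up example data.

    import numpy as np

    # Hypothetical per-video F1 scores grouped by challenge category.
    scores = {
        "baseline":    [0.91, 0.88, 0.93],
        "shadow":      [0.80, 0.77, 0.83, 0.79],
        "nightVideos": [0.58, 0.62],
    }

    # Step 1: average the indicator within each category.
    category_means = {c: np.mean(v) for c, v in scores.items()}

    # Step 2: average between categories; every category counts equally,
    # no matter how many videos it contains.
    summarized = float(np.mean(list(category_means.values())))
    print(category_means, summarized)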

In this paper, we focus on learning the underlying product graph structure from multi-domain training data. We assume that the product graph is formed from a Cartesian graph product of two smaller factor graphs. We then pose the product graph learning problem as one of estimating the factor graph Laplacian matrices. To estimate these matrices, we assume that the data is smooth with respect to the underlying product graph.
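
As background for the Cartesian-product assumption, this numpy sketch builds a product graph Laplacian from two factor Laplacians via the standard Kronecker-sum identity; the example factors (two small path graphs) are arbitrary.

    import numpy as np

    def cartesian_product_laplacian(L1, L2):
        """Laplacian of the Cartesian graph product of two factor graphs:
        the Kronecker sum L = kron(L1, I) + kron(I, L2)."""
        n1, n2 = L1.shape[0], L2.shape[0]
        return np.kron(L1, np.eye(n2)) + np.kron(np.eye(n1), L2)

    # Tiny example: path graphs on 2 and 3 nodes as the factor graphs;
    # their Cartesian product is a 2 x 3 grid graph.
    L2_path = np.array([[1., -1.], [-1., 1.]])
    L3_path = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
    L_grid = cartesian_product_laplacian(L2_path, L3_path)  # 6 x 6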

In this paper, we study the problem of online matrix completion (MC), aiming to achieve robustness to variations in both the low-rank subspace and the noise. In contrast to existing methods, we progressively fit a Gaussian Mixture Model (GMM) to the noise at each time slot, which keeps the model adaptive to the dynamic, complex noise encountered in real application scenarios. Consequently, we formulate online MC as an optimization problem based on the GMM regularizer.
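
To illustrate the noise-modeling step, here is a hedged scikit-learn sketch that refits a Gaussian mixture to the reconstruction residuals at one time slot; the function, its parameters, and the residual definition are assumptions for illustration, not the paper's algorithm.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_noise_gmm(x_t, x_hat_t, n_components=3, seed=0):
        """Fit a GMM to the entrywise residuals at time slot t.

        x_t is the observed column, x_hat_t the current low-rank
        reconstruction; each residual entry is treated as a noise draw."""
        residuals = (x_t - x_hat_t).reshape(-1, 1)
        gmm = GaussianMixture(n_components=n_components, random_state=seed)
        return gmm.fit(residuals)  # means_/covariances_ describe the noise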
