- A Unified Approach to Translate Classical Bandit algorithms to Structured Bandits

- Training a Bank of Wiener Models with a Novel Quadratic Mutual Information Cost Function

- Mixup Regularized Adversarial Networks for Multi-Domain Text Classification

Using the shared-private paradigm and adversarial training can significantly improve the performance of multi-domain text classification (MDTC) models. However, the existing methods have two issues: first, instances from the multiple domains are not sufficient for domain-invariant feature extraction; second, aligning only the marginal distributions may lead to a fatal mismatch. In this paper, we propose mixup regularized adversarial networks (MRANs) to address these two issues. More specifically, the domain and category mixup …
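The mixup operation referenced above forms convex combinations of pairs of instances and their labels. A minimal generic sketch (illustrative only, not the authors' exact MRAN training recipe):

```python
import numpy as np

def mixup(x_i, x_j, y_i, y_j, lam):
    """Form a convex combination of two instances and their labels.

    In practice lam is drawn from a Beta(alpha, alpha) distribution;
    it is passed in explicitly here for clarity. Generic sketch, not
    the exact MRAN formulation.
    """
    x_mix = lam * x_i + (1 - lam) * x_j
    y_mix = lam * y_i + (1 - lam) * y_j
    return x_mix, y_mix

# Example: mix an instance from one domain with one from another.
lam = np.random.default_rng(0).beta(0.2, 0.2)
x_mix, y_mix = mixup(np.ones(4), np.zeros(4), 1.0, 0.0, lam)
```

In a multi-domain setting, mixing pairs drawn from different domains produces interpolated instances that densify the region between domains, which is the data-augmentation role mixup plays here.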


- ICASSP21 Poster of `Nonnegative Unimodal Matrix Factorization'

We introduce a new Nonnegative Matrix Factorization (NMF) model called Nonnegative Unimodal Matrix Factorization (NuMF), which adds to NMF the condition that the columns of the basis matrix are unimodal. NuMF finds applications, for example, in analytical chemistry. We first propose a simple brute-force heuristic based on accelerated projected gradient, and then improve it using a multi-grid approach, for which we prove that the restriction operator preserves unimodality.
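The unimodality constraint can be enforced by projecting each basis column onto the set of nonnegative unimodal vectors. A minimal sketch of such a projection (illustrative, not the authors' implementation) brute-forces the peak position and fits the monotone pieces with pool-adjacent-violators:

```python
import numpy as np

def _pava_increasing(y):
    """Least-squares nondecreasing fit via pool-adjacent-violators."""
    out = []  # merged blocks of [value, weight]
    for v in np.asarray(y, dtype=float):
        out.append([v, 1.0])
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            v2, w2 = out.pop()
            v1, w1 = out.pop()
            out.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    if not out:
        return np.array([])
    return np.concatenate([[v] * int(w) for v, w in out])

def unimodal_project(y):
    """Nonnegative unimodal vector close to y in least squares,
    found by brute force over the position of the peak."""
    y = np.asarray(y, dtype=float)
    best, best_err = None, np.inf
    for p in range(len(y) + 1):
        up = _pava_increasing(y[:p])                # nondecreasing head
        down = _pava_increasing(y[p:][::-1])[::-1]  # nonincreasing tail
        cand = np.maximum(np.concatenate([up, down]), 0.0)
        err = np.sum((cand - y) ** 2)
        if err < best_err:
            best, best_err = cand, err
    return best
```

Inside an accelerated projected gradient loop, each column of the basis matrix would pass through such a projection after every gradient step; the multi-grid refinement in the paper then reduces the cost of these per-column searches.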

- ICASSP21 slide of `Nonnegative Unimodal Matrix Factorization'

The key principle of unsupervised domain adaptation is to minimize the divergence between the source and target domains. Many recent methods follow this principle to learn domain-invariant features: they train task-specific classifiers to maximize the divergence and feature extractors to minimize it, in an adversarial way. However, this strategy often limits their performance. In this paper, we present a novel method that learns feature representations which minimize the domain divergence. We show that model uncertainty is a useful surrogate for the domain divergence.
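The uncertainty-as-surrogate idea can be illustrated with Monte Carlo dropout: repeated stochastic forward passes yield a predictive distribution whose entropy acts as an uncertainty score. The toy linear model below is purely illustrative and stands in for whatever network the paper trains:

```python
import numpy as np

def predictive_entropy(x, W, n_samples=100, p_drop=0.5, seed=0):
    """Entropy of the MC-dropout-averaged softmax prediction.

    W (n_classes x n_features) is a toy stand-in for a trained model.
    Inputs the model is unsure about, e.g. target-domain points far
    from the source data, tend to receive higher entropy.
    """
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_samples):
        keep = rng.random(W.shape[1]) > p_drop      # randomly drop features
        logits = W @ (x * keep) / (1.0 - p_drop)    # rescaled forward pass
        e = np.exp(logits - logits.max())           # stable softmax
        probs.append(e / e.sum())
    p_mean = np.mean(probs, axis=0)
    return float(-np.sum(p_mean * np.log(p_mean + 1e-12)))
```

Minimizing such an uncertainty score on target data pushes the feature extractor toward representations on which the classifier is confidently consistent, without an explicit adversarial divergence game.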

- Summarizing the performances of a background subtraction algorithm measured on several videos

There exist many background subtraction algorithms to detect motion in videos. To help compare them, datasets with ground-truth data, such as CDNET or LASIESTA, have been proposed. These datasets organize videos into categories that represent typical challenges for background subtraction. The evaluation procedure promoted by their authors consists of measuring performance indicators for each video separately and averaging them hierarchically: first within a category, then across categories. We name this procedure “summarization”.
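The hierarchical procedure is simple to state in code (a minimal sketch; the category names and scores below are illustrative):

```python
def summarize(scores_by_category):
    """Hierarchical summarization: average the per-video scores within
    each category, then average the category means."""
    per_category = {c: sum(v) / len(v) for c, v in scores_by_category.items()}
    overall = sum(per_category.values()) / len(per_category)
    return per_category, overall

# Example with illustrative per-video F-measures:
per_cat, overall = summarize({"baseline": [0.9, 0.8], "shadow": [0.6]})
# baseline mean 0.85, shadow mean 0.6, overall 0.725
```

Note that this differs from pooling all videos directly: the pooled mean of the numbers above is about 0.767, not 0.725, so categories containing few videos weigh each of their videos more heavily, which is one reason the choice of summarization procedure matters.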

- Learn-by-Calibrating: Using Calibration as a Training Objective

- Learning Product Graphs from Multidomain Signals

In this paper, we focus on learning the underlying product graph structure from multidomain training data. We assume that the product graph is formed from a Cartesian graph product of two smaller factor graphs. We then pose the product graph learning problem as the factor graph Laplacian matrix estimation problem. To estimate the factor graph Laplacian matrices, we assume that the data is smooth with respect to the underlying product graph.
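The Cartesian product assumption means the product graph Laplacian is the Kronecker sum of the two factor Laplacians, L = L1 ⊗ I + I ⊗ L2. A small sketch of this construction (the building block, not the paper's estimation algorithm):

```python
import numpy as np

def cartesian_product_laplacian(L1, L2):
    """Laplacian of the Cartesian graph product as a Kronecker sum:
    L = kron(L1, I) + kron(I, L2)."""
    n1, n2 = L1.shape[0], L2.shape[0]
    return np.kron(L1, np.eye(n2)) + np.kron(np.eye(n1), L2)

# Example: the product of two 2-node path graphs is the 4-cycle.
L_path2 = np.array([[1.0, -1.0], [-1.0, 1.0]])
L = cartesian_product_laplacian(L_path2, L_path2)
```

Every eigenvalue of L is a sum of one eigenvalue of L1 and one of L2, which is what makes estimating the two small factor Laplacians from multidomain data much cheaper than estimating the full product Laplacian directly.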
