Even though zero padding is a staple in convolutional neural networks for maintaining the output size, it is problematic because it significantly alters the input distribution around the border region. To mitigate this problem, in this paper we propose a new padding technique termed distribution padding. The goal of the method is to approximately maintain the statistics of the input border regions. We introduce two different ways to achieve this goal; in both approaches, the padded values are derived from the means of the border regions.
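
As a rough illustration of the idea only (not the paper's exact formulation), the sketch below pads a 2-D feature map so that each padded border value equals the mean of the adjacent border strip; the strip width `k`, the per-row/column mean, and the corner handling are all assumptions made for the example.

```python
import numpy as np

def mean_border_pad(x, pad=1, k=3):
    """Pad a 2-D feature map so padded entries hold local border means.

    Illustrative sketch only: each padded pixel is filled with the mean of the
    k nearest valid pixels along the corresponding border (a hypothetical
    choice, not necessarily the paper's derivation).
    """
    h, w = x.shape
    out = np.zeros((h + 2 * pad, w + 2 * pad), dtype=x.dtype)
    out[pad:pad + h, pad:pad + w] = x

    for p in range(pad):
        # Top and bottom padding rows: mean over a k-row strip of the border.
        out[p, pad:pad + w] = x[:k].mean(axis=0)
        out[pad + h + p, pad:pad + w] = x[-k:].mean(axis=0)
        # Left and right padding columns: mean over a k-column strip.
        out[pad:pad + h, p] = x[:, :k].mean(axis=1)
        out[pad:pad + h, pad + w + p] = x[:, -k:].mean(axis=1)

    # Corners: mean of the corresponding k-by-k corner patch.
    out[:pad, :pad] = x[:k, :k].mean()
    out[:pad, pad + w:] = x[:k, -k:].mean()
    out[pad + h:, :pad] = x[-k:, :k].mean()
    out[pad + h:, pad + w:] = x[-k:, -k:].mean()
    return out

if __name__ == "__main__":
    fmap = np.arange(25, dtype=np.float32).reshape(5, 5)
    print(mean_border_pad(fmap, pad=1, k=3))
```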

In dynamic state-space models, the state can be estimated through recursive computation of the posterior distribution of the state given all measurements. In scenarios where active sensing/querying is possible, a hard decision is made when the state posterior reaches a pre-set confidence threshold. This mandate to meet a hard threshold may sometimes require more queries than necessary. In application domains where sensing/querying cost is a concern, some potential accuracy may be sacrificed for greater savings in sensing cost.
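
As a toy illustration of the hard-threshold querying rule described above (not the paper's model), the sketch below runs a discrete-state recursive Bayesian filter and keeps querying a noisy sensor until the posterior confidence crosses a preset threshold; the two-state dynamics, the likelihoods, the threshold value, and the query cap are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state dynamic model; all numbers are illustrative only.
transition = np.array([[0.95, 0.05],
                       [0.05, 0.95]])      # transition[i, j] = p(x_t = j | x_{t-1} = i)
likelihood = np.array([[0.8, 0.2],
                       [0.2, 0.8]])        # likelihood[i, j] = p(y_t = j | x_t = i)
true_state = 0
threshold = 0.95                            # pre-set confidence threshold
max_queries = 50                            # guard against querying forever

posterior = np.array([0.5, 0.5])            # prior belief over the state
queries = 0
while posterior.max() < threshold and queries < max_queries:
    queries += 1                            # each query incurs sensing cost
    y = rng.choice(2, p=likelihood[true_state])
    predicted = transition.T @ posterior    # predict step (state dynamics)
    posterior = likelihood[:, y] * predicted
    posterior /= posterior.sum()            # correct step (Bayes' rule)

print(f"decided state {posterior.argmax()} with confidence "
      f"{posterior.max():.3f} after {queries} queries")
```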

We propose a novel adversarial speaker adaptation (ASA) scheme, in which adversarial learning is applied to regularize the distribution of deep hidden features in a speaker-dependent (SD) deep neural network (DNN) acoustic model to be close to that of a fixed speaker-independent (SI) DNN acoustic model during adaptation. An additional discriminator network is introduced to distinguish the deep features generated by the SD model from those produced by the SI model.
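
A minimal PyTorch-style sketch of this kind of adversarial regularization is given below, assuming toy feed-forward models and dimensions: the discriminator learns to tell SD deep features from SI deep features, while the SD model is additionally pushed to fool it. The network sizes, loss weight, and optimizers are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

feat_dim, n_senones = 64, 100                           # hypothetical sizes

# Fixed speaker-independent (SI) feature extractor and an adaptable SD copy.
si_encoder = nn.Sequential(nn.Linear(40, feat_dim), nn.ReLU())
sd_encoder = nn.Sequential(nn.Linear(40, feat_dim), nn.ReLU())
sd_encoder.load_state_dict(si_encoder.state_dict())     # start SD from SI weights
for p in si_encoder.parameters():
    p.requires_grad_(False)                              # SI model stays fixed

classifier = nn.Linear(feat_dim, n_senones)              # senone classifier
discriminator = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                              nn.Linear(32, 1))          # SD-vs-SI discriminator

opt_sd = torch.optim.SGD(list(sd_encoder.parameters()) +
                         list(classifier.parameters()), lr=1e-3)
opt_d = torch.optim.SGD(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()
lam = 0.5                                                # adversarial weight (assumed)

x = torch.randn(8, 40)                                   # a batch of adaptation frames
y = torch.randint(0, n_senones, (8,))                    # senone labels

# 1) Train the discriminator to separate SD features (label 1) from SI features (label 0).
d_loss = bce(discriminator(sd_encoder(x).detach()), torch.ones(8, 1)) + \
         bce(discriminator(si_encoder(x)), torch.zeros(8, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# 2) Adapt the SD model: senone loss plus a term that fools the discriminator,
#    keeping the SD deep-feature distribution close to the SI one.
sd_feat = sd_encoder(x)
adapt_loss = ce(classifier(sd_feat), y) + \
             lam * bce(discriminator(sd_feat), torch.zeros(8, 1))
opt_sd.zero_grad()
adapt_loss.backward()
opt_sd.step()
```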

Adversarial domain-invariant training (ADIT) has proven effective in suppressing the effects of domain variability in acoustic modeling and has led to improved performance in automatic speech recognition (ASR). In ADIT, an auxiliary domain classifier takes in equally weighted deep features from a deep neural network (DNN) acoustic model and is trained to improve their domain invariance by optimizing an adversarial loss function.
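
This kind of adversarial objective is commonly implemented with a gradient reversal layer; the sketch below shows one hypothetical way to wire an auxiliary domain classifier onto the deep features of an acoustic model in PyTorch (the layer sizes and the reversal weight are assumptions for illustration, not the paper's setup).

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

feat_dim, n_senones, n_domains = 64, 100, 4       # hypothetical sizes
encoder = nn.Sequential(nn.Linear(40, feat_dim), nn.ReLU())
senone_head = nn.Linear(feat_dim, n_senones)       # primary ASR task
domain_head = nn.Linear(feat_dim, n_domains)       # auxiliary domain classifier
ce = nn.CrossEntropyLoss()

x = torch.randn(8, 40)
senone_y = torch.randint(0, n_senones, (8,))
domain_y = torch.randint(0, n_domains, (8,))

feat = encoder(x)
# The domain classifier sees the features through the reversal layer, so its
# gradient pushes the encoder toward domain-invariant deep features.
loss = ce(senone_head(feat), senone_y) + \
       ce(domain_head(GradReverse.apply(feat, 0.3)), domain_y)
loss.backward()
```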

The use of deep networks to extract embeddings for speaker recognition has proven successful. However, such embeddings are susceptible to performance degradation due to mismatches among the training, enrollment, and test conditions. In this work, we propose an adversarial speaker verification (ASV) scheme to learn condition-invariant deep embeddings via adversarial multi-task training. In ASV, a speaker classification network and a condition identification network are jointly optimized to minimize the speaker classification loss and simultaneously mini-maximize the condition loss.
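
A minimal sketch of this mini-max objective, under assumed toy dimensions and weights: the condition classifier is updated to minimize the condition loss on the embeddings, while the embedding network minimizes the speaker loss and maximizes that same condition loss (the negated term), so the learned embedding carries little condition information.

```python
import torch
import torch.nn as nn

emb_dim, n_speakers, n_conditions = 128, 500, 3    # assumed sizes
embed_net = nn.Sequential(nn.Linear(40, emb_dim), nn.ReLU())
speaker_head = nn.Linear(emb_dim, n_speakers)       # speaker classification network
cond_head = nn.Linear(emb_dim, n_conditions)        # condition identification network
ce = nn.CrossEntropyLoss()

opt_emb = torch.optim.Adam(list(embed_net.parameters()) +
                           list(speaker_head.parameters()), lr=1e-4)
opt_cond = torch.optim.Adam(cond_head.parameters(), lr=1e-4)

x = torch.randn(16, 40)
spk_y = torch.randint(0, n_speakers, (16,))
cond_y = torch.randint(0, n_conditions, (16,))

# Step 1: the condition classifier minimizes the condition loss on frozen embeddings.
cond_loss = ce(cond_head(embed_net(x).detach()), cond_y)
opt_cond.zero_grad()
cond_loss.backward()
opt_cond.step()

# Step 2: the embedding network minimizes the speaker loss while *maximizing*
# the condition loss (assumed weight 0.5), driving condition-invariant embeddings.
emb = embed_net(x)
emb_loss = ce(speaker_head(emb), spk_y) - 0.5 * ce(cond_head(emb), cond_y)
opt_emb.zero_grad()
emb_loss.backward()
opt_emb.step()
```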

Teacher-student (T/S) learning has been shown to be effective for a variety of problems such as domain adaptation and model compression. One shortcoming of T/S learning is that the teacher model, which is not always perfect, sporadically produces wrong guidance in the form of posterior probabilities that misleads the student model toward suboptimal performance.
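
For context, plain T/S learning trains the student on the teacher's posteriors; a minimal sketch (with assumed toy models and an assumed interpolation weight between soft and hard targets) is shown below, and it also makes visible where wrong teacher posteriors would leak directly into the student's training target.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_classes = 10                                    # assumed size for the sketch
teacher = nn.Linear(20, n_classes)                # stands in for a trained teacher
student = nn.Linear(20, n_classes)
opt = torch.optim.SGD(student.parameters(), lr=1e-2)

x = torch.randn(32, 20)
y = torch.randint(0, n_classes, (32,))            # hard labels (if available)

with torch.no_grad():
    # Soft targets from the teacher; any errors here mislead the student.
    teacher_post = F.softmax(teacher(x), dim=-1)

student_logits = student(x)
# Interpolate the distillation (KL) loss with the hard-label loss (weights assumed).
loss = 0.8 * F.kl_div(F.log_softmax(student_logits, dim=-1),
                      teacher_post, reduction="batchmean") + \
       0.2 * F.cross_entropy(student_logits, y)
opt.zero_grad()
loss.backward()
opt.step()
```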

Tensor decomposition is a powerful tool for analyzing multiway data. Nowadays, with the fast development of multisensor technology, more and more data appear in higher-order (order >= 4) and nonnegative form. However, the decomposition of higher-order nonnegative tensors suffers from poor convergence and low speed. In this study, we propose a new nonnegative CANDECOMP/PARAFAC (NCP) model using a proximal algorithm. The block principal pivoting method in the alternating nonnegative least squares (ANLS) framework is employed to minimize the objective function.
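
As a rough illustration of the ANLS framework the paper builds on, the sketch below factorizes a small 4th-order nonnegative tensor with alternating nonnegative least-squares updates. Note the hedges: it uses SciPy's generic active-set NNLS solver rather than block principal pivoting, it omits the proximal term, and the tensor sizes and rank are assumptions, so it only shows the overall alternating structure.

```python
import numpy as np
from functools import reduce
from scipy.linalg import khatri_rao
from scipy.optimize import nnls

def ncp_anls(X, rank, n_iters=30, seed=0):
    """Nonnegative CP decomposition via alternating nonnegative least squares.

    Sketch only: each subproblem is solved with SciPy's active-set NNLS, not
    the block principal pivoting solver (and without the proximal term).
    """
    rng = np.random.default_rng(seed)
    factors = [rng.random((dim, rank)) for dim in X.shape]
    for _ in range(n_iters):
        for n in range(X.ndim):
            # Mode-n unfolding (C order) and the matching Khatri-Rao product.
            Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
            kr = reduce(khatri_rao, [factors[m] for m in range(X.ndim) if m != n])
            # Solve min ||kr @ a_i - Xn[i]||  s.t.  a_i >= 0  for every row of A_n.
            factors[n] = np.array([nnls(kr, row)[0] for row in Xn])
    return factors

if __name__ == "__main__":
    # Build a small 4th-order nonnegative tensor with known rank-3 structure.
    rng = np.random.default_rng(1)
    true = [rng.random((d, 3)) for d in (6, 5, 4, 3)]
    X = np.einsum("ir,jr,kr,lr->ijkl", *true)
    est = ncp_anls(X, rank=3)
    X_hat = np.einsum("ir,jr,kr,lr->ijkl", *est)
    print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```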
