
We introduce the matrix-based Rényi's α-order entropy functional to parameterize Tishby et al.'s information bottleneck (IB) principle with a neural network. We term our methodology Deep Deterministic Information Bottleneck (DIB), as it avoids variational inference and distributional assumptions. We show that deep neural networks trained with DIB outperform the variational objective counterpart and those trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attacks. Code available at
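
The abstract does not restate the entropy functional itself; for concreteness, here is a minimal NumPy sketch of the matrix-based Rényi α-order entropy of Sánchez Giraldo et al., the quantity the DIB objective builds on. The RBF kernel, the bandwidth sigma, and the value of α are illustrative choices, not the paper's settings.

    import numpy as np

    def gram_matrix(x, sigma=1.0):
        # RBF Gram matrix: K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)).
        d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def renyi_entropy(K, alpha=1.01):
        # S_alpha(A) = log2(sum_i lambda_i(A)^alpha) / (1 - alpha),
        # where A is the Gram matrix normalized to unit trace.
        n = K.shape[0]
        d = np.sqrt(np.diag(K))
        A = K / np.outer(d, d) / n
        lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)  # guard round-off negatives
        return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

    def renyi_mutual_information(Kx, Ky, alpha=1.01):
        # I_alpha(X; Y) = S_alpha(X) + S_alpha(Y) - S_alpha(X, Y); the joint
        # entropy comes from the Hadamard (entrywise) product of Gram matrices.
        joint = renyi_entropy(Kx * Ky, alpha)
        return renyi_entropy(Kx, alpha) + renyi_entropy(Ky, alpha) - joint

An IB-style training loss would combine such mutual-information terms computed on mini-batch Gram matrices; that wiring is omitted here.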


Several recent works in communication systems have proposed to leverage the power of neural networks in the design of encoders and decoders. In this approach, these blocks can be tailored to maximize the transmission rate based on aggregated samples from the channel. Motivated by the fact that, in many communication schemes, the achievable transmission rate is determined by a conditional mutual information term, this paper focuses on neural estimators for this information-theoretic quantity.
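
The estimators studied in the paper are not detailed in this excerpt. For orientation, one common neural approach writes I(X;Y|Z) = I(X;(Y,Z)) − I(X;Z) by the chain rule and lower-bounds each term with a MINE-style Donsker-Varadhan critic (Belghazi et al.). A minimal PyTorch sketch, with illustrative class and function names:

    import math
    import torch
    import torch.nn as nn

    class Critic(nn.Module):
        # Scalar statistics network T(a, b) for the Donsker-Varadhan bound.
        def __init__(self, dim_a, dim_b, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim_a + dim_b, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1))

        def forward(self, a, b):
            return self.net(torch.cat([a, b], dim=-1)).squeeze(-1)

    def dv_lower_bound(critic, a, b):
        # I(A; B) >= E_joint[T(a, b)] - log E_prod[exp(T(a, b'))],
        # where b' is a shuffled copy of b that breaks the joint coupling.
        joint = critic(a, b).mean()
        b_shuf = b[torch.randperm(b.shape[0])]
        marginal = torch.logsumexp(critic(a, b_shuf), dim=0) - math.log(b.shape[0])
        return joint - marginal

    # Train one critic for I(X; (Y, Z)) and one for I(X; Z) by maximizing the
    # bound; the difference of the two estimates the conditional term.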


In this paper, we derive generic bounds on the maximum deviations in prediction errors for sequential prediction via an information-theoretic approach. The fundamental bounds are shown to depend only on the conditional entropy of the data point to be predicted given the previous data points. In the asymptotic case, the bounds are achieved if and only if the prediction error is white and uniformly distributed.
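
The paper's exact bound is not restated in this excerpt, but the maximum-entropy step behind an "achieved iff white and uniform" condition is standard and worth sketching (the notation e_k for the prediction error is assumed; entropies are in bits):

    % e_k = x_k - \hat{x}_k, where \hat{x}_k is a function of x_1, \dots, x_{k-1};
    % shifting by a function of the conditioning variables preserves entropy:
    \[
      h(e_k \mid x_1,\dots,x_{k-1}) = h(x_k \mid x_1,\dots,x_{k-1}).
    \]
    % If |e_k| \le a almost surely, the uniform density on [-a, a] maximizes
    % differential entropy, so h(e_k) \le \log_2(2a); conditioning can only
    % reduce entropy, so h(e_k) \ge h(e_k \mid x_1,\dots,x_{k-1}). Chaining:
    \[
      \operatorname{ess\,sup} |e_k| \;\ge\; \tfrac{1}{2}\, 2^{\,h(x_k \mid x_1,\dots,x_{k-1})},
    \]
    % with equality iff e_k is uniform (tight entropy bound) and independent of
    % the past, i.e. white (tight conditioning step).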


Active learning is a form of machine learning that combines supervised learning and feedback to minimize the training set size, subject to low generalization error. Since direct optimization of the generalization error is difficult, many heuristics have been developed that lack a firm theoretical foundation. In this paper, a new information-theoretic criterion is proposed, based on a minimax log-loss regret formulation of the active learning problem. In the first part of the paper, a redundancy-capacity theorem for active learning is derived, along with an optimal learner.
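
The active learning variant derived in the paper is not reproduced in this excerpt; for orientation, the classical redundancy-capacity theorem (Gallager; Ryabko) that it extends states that the minimax log-loss redundancy equals the capacity of the "channel" from the unknown parameter to the data:

    \[
      \min_{Q} \max_{\theta \in \Theta} D\!\left(P_\theta \,\middle\|\, Q\right)
      \;=\; \max_{w} I(\theta; X^n) \;=\; C,
    \]
    % The minimum is over distributions Q on the data, the maximum on the right
    % is over priors w on \Theta, and the minimax-optimal Q* is the mixture of
    % the P_\theta under the capacity-achieving prior.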


Single-molecule sensors based on carbon nanotube transducers make it possible to probe stochastic molecular dynamics thanks to long acquisition periods and high-throughput measurements. Under such sampling conditions, the sensor baseline may drift significantly and induce spurious states and transitions in the recorded signal, leading to erroneous kinetic estimates from the inferred state model.

We present MDL-AdaCHIP, a multiscale signal compression technique based on the Minimum Description Length (MDL) principle combined with Adaptive piecewise Cubic Hermite Interpolation (AdaCHIP), both implemented in a blind source separation framework to compensate for the parasitic baseline drift in single-molecule biosensors.
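
The MDL-driven knot selection is the method's core and is not reproduced here; the sketch below only illustrates the interpolation-and-subtraction step, estimating the drifting baseline with a shape-preserving piecewise cubic Hermite interpolant (SciPy's PchipInterpolator) through hand-picked anchor points. Function names and the window size are illustrative.

    import numpy as np
    from scipy.interpolate import PchipInterpolator

    def subtract_baseline(t, y, knots, half_window=50):
        # `knots`: sorted sample indices assumed to lie on the baseline
        # (MDL-AdaCHIP chooses these automatically via the MDL criterion;
        # here they are supplied by hand). A local median makes each anchor
        # robust to noise.
        levels = [np.median(y[max(0, k - half_window):k + half_window])
                  for k in knots]
        baseline = PchipInterpolator(t[knots], levels)(t)  # shape-preserving cubic
        return y - baseline, baseline

Subtracting the interpolant flattens the drift so that state and transition inference runs on a closer-to-stationary signal.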


Increasingly, post-secondary instructors are incorporating innovative teaching practices into their classrooms to improve student learning outcomes. To assess the effect of these techniques, it is helpful to quantify the types of activity taking place in the classroom. Unfortunately, self-reporting is unreliable, and manual annotation is tedious and scales poorly.


Feature selection and dimensionality reduction are essential steps in data analysis. In this work, we propose a new criterion for feature selection, formulated as the conditional mutual information between features given the label variable. Instead of the standard mutual information measure based on the Kullback-Leibler divergence, we use the proposed criterion to filter out redundant features for multiclass classification.
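
The paper's criterion deliberately departs from KL-based mutual information and is not specified in this excerpt; for contrast, below is a minimal plug-in sketch of greedy forward selection under the standard conditional mutual information I(feature; label | selected) for discrete features. All function names are illustrative.

    import numpy as np

    def entropy(*cols):
        # Empirical joint Shannon entropy (bits) of discrete columns.
        _, counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def cmi(x, y, z):
        # I(X; Y | Z) = H(X, Z) + H(Y, Z) - H(X, Y, Z) - H(Z).
        return entropy(x, z) + entropy(y, z) - entropy(x, y, z) - entropy(z)

    def select_features(X, labels, k):
        # Greedy forward selection: repeatedly add the feature with the largest
        # estimated relevance to the label given the features chosen so far.
        chosen = []
        while len(chosen) < k:
            def score(j):
                z = X[:, chosen] if chosen else np.zeros((len(labels), 1), int)
                return cmi(X[:, j], labels, z)
            remaining = [j for j in range(X.shape[1]) if j not in chosen]
            chosen.append(max(remaining, key=score))
        return chosen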

