
An increasing number of distributed machine learning applications require efficient communication of neural network parameterizations. DeepCABAC, an algorithm in the current working draft of the emerging MPEG-7 part 17 standard for compression of neural networks for multimedia content description and analysis, has demonstrated high compression gains for a variety of neural network models. In this paper we propose a method for employing DeepCABAC in a Federated Learning scenario for the exchange of intermediate differential parameterizations.
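The core idea of exchanging differential parameterizations can be sketched as follows. This is a minimal illustration, not the DeepCABAC codec itself: the function names and the fixed quantization step are assumptions, and DeepCABAC additionally applies context-adaptive binary arithmetic coding (CABAC) to the resulting integer stream, which is omitted here.

```python
import numpy as np

def encode_update(new_w, ref_w, step=0.01):
    """Quantize the differential update (client weights minus the last
    shared model) to integers. In a DeepCABAC-style pipeline this integer
    stream would then be entropy-coded; here we stop at quantization."""
    delta = new_w - ref_w
    return np.round(delta / step).astype(np.int32)

def decode_update(q, ref_w, step=0.01):
    """Server side: reconstruct client weights from the quantized update.
    Reconstruction error per weight is bounded by step / 2."""
    return ref_w + q.astype(np.float64) * step
```

Because clients and server share the reference model, only the small, highly compressible integer deltas need to cross the network each round.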


Multichannel processing is widely used for speech enhancement, but several limitations appear when deploying these solutions in the real world. Distributed sensor arrays, consisting of several devices each equipped with a few microphones, are a viable solution that exploits the many microphone-equipped devices we use in everyday life. In this context, we propose to extend the distributed adaptive node-specific signal estimation approach to a neural network framework.


Time series data compression is emerging as an important problem with the growth in IoT devices and sensors. Due to the presence of noise in these datasets, lossy compression can often provide significant compression gains without impacting the performance of downstream applications. In this work, we propose an error-bounded lossy compressor, LFZip, for multivariate floating-point time series data that provides guaranteed reconstruction up to user-specified maximum absolute error.
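The error-bounded guarantee can be illustrated with a predictive quantizer: predict each sample from the previous *reconstructed* value, then round the residual to a multiple of twice the error bound. This is a simplified sketch of the prediction–quantization idea, not the LFZip implementation (which uses stronger predictors and entropy-codes the integers); the function names are illustrative.

```python
import numpy as np

def lossy_encode(x, max_error):
    """Quantize residuals against the previous reconstructed sample.
    Rounding to the nearest multiple of 2*max_error guarantees
    |x[i] - reconstruction[i]| <= max_error, with no error accumulation."""
    q = np.empty(len(x), dtype=np.int64)
    prev = 0.0
    for i, v in enumerate(x):
        residual = v - prev
        q[i] = int(np.round(residual / (2.0 * max_error)))
        prev = prev + q[i] * 2.0 * max_error  # track the *reconstructed* value
    return q  # integer stream, suitable for a downstream entropy coder

def lossy_decode(q, max_error):
    """Replay the same prediction loop to reconstruct the series."""
    out = np.empty(len(q))
    prev = 0.0
    for i, k in enumerate(q):
        prev = prev + k * 2.0 * max_error
        out[i] = prev
    return out
```

Predicting from the reconstructed (not the original) previous sample is what keeps the error bound per-sample rather than cumulative.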


Time-delay estimation is an essential building block of many signal processing applications. This paper follows up on earlier work on acoustic source localization and time-delay estimation using pattern recognition techniques; it presents high-performance results, obtained with supervised training of neural networks, that challenge the state of the art, and compares them to well-known methods such as Generalized Cross-Correlation and Adaptive Eigenvalue Decomposition.
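The Generalized Cross-Correlation baseline mentioned above, in its common PHAT-weighted form, is short enough to sketch in full: whiten the cross-spectrum so only phase remains, then locate the correlation peak. This is a generic GCC-PHAT implementation for reference, not the paper's code; the signature is an assumption.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None, interp=1):
    """Estimate the time delay of `sig` relative to `ref` (seconds)
    via Generalized Cross-Correlation with PHAT weighting."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15            # PHAT: discard magnitude, keep phase
    cc = np.fft.irfft(R, n=interp * n)
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    # Reorder so that lag 0 sits in the middle of the window
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)
```

The `interp` factor zero-pads the spectrum to obtain sub-sample lag resolution; with `interp=1` the estimate is quantized to whole samples.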


This paper is concerned with estimating unknown multidimensional frequencies from linear compressive measurements. This is accomplished by employing the recently proposed atomic norm minimization framework, which recovers these frequencies under a sparsity prior without imposing any grid restriction on them. To this end, we give a rigorous derivation of an iterative scheme based on the alternating direction method of multipliers (ADMM), which is able to incorporate multiple compressive snapshots of a multi-dimensional superposition of complex harmonics.


It is known that the calculation of a matrix–vector product can be accelerated if this matrix can be recast (or approximated) by the Kronecker product of two smaller matrices. In array signal processing, the manifold matrix can be described as the Kronecker product of two other matrices if the sensor array displays a separable geometry. This forms the basis of the Kronecker Array Transform (KAT), which was previously introduced to speed up the calculations of acoustic images with microphone arrays.
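The speed-up from a Kronecker structure comes from a standard identity: a matrix–vector product with A ⊗ B can be rewritten as two small matrix products, reducing the cost from O(mnpq) to O(mnq + mpq) for A of size m×n and B of size p×q. A minimal sketch (generic NumPy, not the KAT implementation itself):

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A kron B) @ x without ever forming the Kronecker product.
    A: (m, n), B: (p, q), x: length n*q.
    Uses (A kron B) vec(X) = vec(A X B^T), where vec is the row-major
    flattening that matches np.kron's block layout."""
    m, n = A.shape
    p, q = B.shape
    X = x.reshape(n, q)               # row-major reshape of the input vector
    return (A @ X @ B.T).reshape(m * p)
```

For a manifold matrix with separable array geometry, A and B are roughly the square roots of the full matrix's dimensions, so the saving grows quadratically with array size.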