
This paper presents a technique to interpret and visualize intermediate layers in generative CNNs trained on raw speech data in an unsupervised manner. We argue that averaging over feature maps after ReLU activation in each transpose convolutional layer yields interpretable time-series data. This technique allows for acoustic analysis of intermediate layers that parallels the acoustic analysis of human speech data: we can extract F0, intensity, duration, formants, and other acoustic properties from intermediate layers in order to test where and how CNNs encode various types of information.
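The core operation described above, averaging over ReLU-activated feature maps within a layer to obtain a single interpretable time series, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the assumption that extracted feature maps arrive as a `(channels, time)` array are hypothetical.

```python
import numpy as np

def layer_timeseries(feature_maps):
    """Collapse one layer's feature maps into an interpretable time series.

    feature_maps: array of shape (n_channels, n_timesteps), assumed to be
    the raw (pre-activation) outputs of one transpose-convolutional layer.
    """
    activated = np.maximum(feature_maps, 0.0)  # ReLU activation
    return activated.mean(axis=0)              # average across feature maps

# Toy example: 4 feature maps, 8 time steps of random pre-activations
rng = np.random.default_rng(0)
maps = rng.standard_normal((4, 8))
series = layer_timeseries(maps)
print(series.shape)  # (8,)
```

The resulting 1-D series is what would then be fed to standard acoustic analysis tools (F0, intensity, formant extraction) in place of a waveform.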


Hand gestures, a common form of non-verbal communication, are widely studied for human-computer interaction. They can be categorized as static or dynamic. In recent years, effective approaches have been applied to hand gesture recognition.


Speech production involves the synchronization of neural activity between the speech centers of the brain and the oral-motor system, allowing for the conversion of thoughts into


Electroencephalogram (EEG) signals are a common modality for observing mental activity. However, EEG recordings are highly susceptible to various sources of noise and to inter-subject differences. To address these problems, we present a deep recurrent neural network (RNN) architecture to learn robust features and predict cognitive load levels from EEG recordings. In this deep learning approach, we first transform the EEG time series into a sequence of multispectral images that preserve spatial information.
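The first step of the pipeline above, turning multichannel EEG into a sequence of multispectral frames, can be sketched with band-power features. This is a simplified stand-in under stated assumptions: the band edges are common conventions (not taken from the paper), and per-channel band power is stacked directly rather than projected onto a 2-D electrode map as a full implementation would do.

```python
import numpy as np

# Conventional EEG frequency bands in Hz (assumed, not from the source)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def eeg_to_spectral_frames(eeg, fs, win):
    """Convert multichannel EEG into a sequence of multispectral frames.

    eeg: array (n_channels, n_samples); fs: sampling rate in Hz;
    win: window length in samples. Each frame stacks per-band power
    across channels, shape (n_bands, n_channels).
    """
    n_ch, n_samp = eeg.shape
    frames = []
    for start in range(0, n_samp - win + 1, win):
        seg = eeg[:, start:start + win]
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        power = np.abs(np.fft.rfft(seg, axis=1)) ** 2
        frame = np.stack([
            power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
            for lo, hi in BANDS.values()
        ])
        frames.append(frame)
    return np.array(frames)  # (n_frames, n_bands, n_channels)

# Toy example: 8 channels, 4 seconds at 128 Hz, 1-second windows
rng = np.random.default_rng(1)
eeg = rng.standard_normal((8, 512))
seq = eeg_to_spectral_frames(eeg, fs=128, win=128)
print(seq.shape)  # (4, 3, 8)
```

Each frame can then be rendered as a small image (one color channel per band) and the frame sequence fed to the RNN.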


Emotion recognition based on electroencephalography (EEG) has received attention as a way to implement human-centric services. However, there is still much room for improvement, particularly in terms of recognition accuracy. In this paper, we propose a novel deep learning approach using convolutional neural networks (CNNs) for EEG-based emotion recognition. In particular, we employ brain connectivity features that have not been used with deep learning models in
