DEEP LEARNING THE EEG MANIFOLD FOR PHONOLOGICAL CATEGORIZATION FROM ACTIVE THOUGHTS

Citation Author(s):
Pramit Saha, Muhammad Abdul-Mageed, Sidney Fels
Submitted by:
Pramit Saha
Last updated:
10 May 2019 - 8:57am
Document Type:
Presentation Slides
Document Year:
2019
Event:
Presenters:
Pramit Saha
Paper Code:
2585

Speech-related brain-computer interfaces (BCIs) aim primarily at providing an alternative vocal communication pathway for people with speaking disabilities. As a step towards full decoding of imagined speech from active thoughts, we present a BCI system for subject-independent classification of phonological categories that exploits a novel deep-learning-based hierarchical feature extraction scheme. To better capture the complex representation of high-dimensional electroencephalography (EEG) data, we compute the joint variability of the EEG electrodes as a channel cross-covariance matrix. We then extract the spatio-temporal information encoded within this matrix using a mixed deep neural network strategy. Our model framework is composed of a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a deep autoencoder. We train the individual networks hierarchically, feeding their combined outputs into a final gradient boosting classification step. Our best models achieve an average accuracy of 77.9% across five different binary classification tasks, a significant 22.5% improvement over previous methods. As we also show visually, our work demonstrates that speech-imagery EEG possesses significant discriminative information about the intended articulatory movements responsible for natural speech synthesis.
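The pipeline described above can be illustrated in code: compute a channel cross-covariance matrix per trial, pass it through CNN and LSTM branches for spatial and temporal features, and fit a gradient boosting classifier on the fused features. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: all layer sizes, the toy data, and the feature fusion details are illustrative, and the hierarchical training procedure and the deep autoencoder stage from the paper are omitted for brevity.

import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

def channel_cross_covariance(trial):
    """Joint variability of EEG electrodes: a (channels x channels) matrix
    computed from one trial shaped (channels, time samples)."""
    centered = trial - trial.mean(axis=1, keepdims=True)  # remove per-channel mean
    return centered @ centered.T / (trial.shape[1] - 1)

class CNNBranch(nn.Module):
    """Spatial patterns: treats the covariance matrix as a one-channel image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 32))
    def forward(self, x):
        return self.net(x)

class LSTMBranch(nn.Module):
    """Sequential structure: reads the covariance matrix row by row."""
    def __init__(self, n_ch):
        super().__init__()
        self.lstm = nn.LSTM(n_ch, 32, batch_first=True)
    def forward(self, x):
        _, (h, _) = self.lstm(x)
        return h[-1]  # final hidden state as the feature vector

def extract_features(cov, cnn, lstm):
    """Fuse CNN and LSTM features for one trial (branches assumed pretrained;
    here they are untrained, which is purely for illustration)."""
    t = torch.tensor(cov, dtype=torch.float32)
    with torch.no_grad():
        f_cnn = cnn(t.unsqueeze(0).unsqueeze(0))   # (1, 1, ch, ch) image input
        f_lstm = lstm(t.unsqueeze(0))              # (1, ch, ch) sequence input
    return torch.cat([f_cnn, f_lstm], dim=1).numpy().ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_ch, n_t = 64, 256
    trials = rng.standard_normal((40, n_ch, n_t))  # toy stand-in for EEG trials
    labels = rng.integers(0, 2, size=40)           # one binary phonological task
    cnn, lstm = CNNBranch(), LSTMBranch(n_ch)
    X = np.stack([extract_features(channel_cross_covariance(tr), cnn, lstm)
                  for tr in trials])
    clf = GradientBoostingClassifier().fit(X, labels)  # final boosting step
    print("train accuracy:", clf.score(X, labels))

In the paper the individual networks are trained hierarchically before their outputs feed the boosting classifier; the sketch simply shows how the cross-covariance representation and the mixed-network feature fusion fit together.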
