Short-Segment Heart Sound Classification Using an Ensemble of Deep Convolutional Neural Networks

Citation Author(s):
Chee-Ming Ting, Sh-Hussain Salleh, Hernando Ombao
Submitted by:
Fuad Noman
Last updated:
9 May 2019 - 3:40am
Document Type:
Poster
Document Year:
2019
Event:
ICASSP 2019
Presenters:
Ting Chee Ming
Paper Code:
BISP-P5.10(1492)
This paper proposes a framework based on deep convolutional neural networks (CNNs) for automatic heart sound classification using short segments of individual heart beats. We design a 1D-CNN that learns features directly from raw heart-sound signals, and a 2D-CNN that takes as input two-dimensional time-frequency feature maps based on Mel-frequency cepstral coefficients (MFCCs). We further develop a time-frequency CNN ensemble (TF-ECNN) that combines the 1D-CNN and 2D-CNN through score-level fusion of the class probabilities. On the large PhysioNet/CinC Challenge 2016 database, the proposed CNN models outperformed traditional classifiers based on support vector machines and hidden Markov models with various hand-crafted time- and frequency-domain features. The best classification scores of 89.22% accuracy and 89.94% sensitivity were achieved by the TF-ECNN, and 91.55% specificity and 88.82% modified accuracy by the 2D-CNN alone on the test set.
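The abstract describes three building blocks: a 1D-CNN over raw beat segments, a 2D-CNN over MFCC time-frequency maps, and score-level fusion of their class probabilities. The sketch below is a minimal PyTorch illustration of that ensemble idea only; the layer sizes, kernel widths, input shapes, and equal-weight averaging are illustrative assumptions, not the exact configuration reported in the paper.

```python
# Minimal PyTorch sketch of the TF-ECNN idea: a 1D-CNN on raw heart-beat
# segments, a 2D-CNN on MFCC feature maps, and score-level fusion by
# averaging softmax class probabilities. All layer sizes and the
# equal-weight fusion are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN1D(nn.Module):
    """1D-CNN that learns features directly from a raw heart-beat segment."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=32, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(32 * 8, n_classes)

    def forward(self, x):                 # x: (batch, 1, n_samples)
        z = self.features(x).flatten(1)
        return self.classifier(z)         # class logits

class CNN2D(nn.Module):
    """2D-CNN over an MFCC time-frequency map of the same segment."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                 # x: (batch, 1, n_mfcc, n_frames)
        z = self.features(x).flatten(1)
        return self.classifier(z)

def tf_ecnn_predict(cnn1d, cnn2d, raw_segment, mfcc_map):
    """Score-level fusion: average the two models' class probabilities."""
    with torch.no_grad():
        p1 = F.softmax(cnn1d(raw_segment), dim=1)
        p2 = F.softmax(cnn2d(mfcc_map), dim=1)
    p_fused = 0.5 * (p1 + p2)             # equal weights (assumption)
    return p_fused.argmax(dim=1), p_fused

# Dummy example: a 2-second beat at 1 kHz and a hypothetical 13x100 MFCC map.
if __name__ == "__main__":
    raw = torch.randn(1, 1, 2000)
    mfcc = torch.randn(1, 1, 13, 100)
    label, probs = tf_ecnn_predict(CNN1D(), CNN2D(), raw, mfcc)
    print(label.item(), probs.numpy())
```

Score-level fusion keeps the two networks independent at training time; each can be trained on its own input representation, and only their output probabilities are combined at inference.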

https://ieeexplore.ieee.org/abstract/document/8682668
