
Greedy Deep Transform Learning

Citation Author(s):
Jyoti Maggu, Angshul Majumdar
Submitted by:
Jyoti Maggu
Last updated:
18 September 2017 - 1:57pm
Document Type:
Presentation Slides
Document Year:
2017
Event:
Presenters:
Jyoti
Paper Code:
1136

We introduce deep transform learning, a new tool for deep representation
learning. A deeper representation is learnt by stacking one transform after
another, and learning proceeds greedily: the first layer learns a transform
and features from the input training samples, while each subsequent layer
uses the activated features of the previous layer as its training input.
Experiments compare the proposed technique with other deep representation
learning tools: deep dictionary learning, the stacked denoising autoencoder,
the deep belief network and PCANet (a variant of the convolutional neural
network). On the benchmark datasets used for comparison (MNIST, CIFAR-10 and
SVHN), the proposed technique outperforms all of these methods.
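
The abstract gives only the high-level procedure, but the greedy layer-wise
training can be sketched as follows. The sketch assumes the standard
transform-learning formulation (a closed-form transform update in the style
of Ravishankar and Bresler, with soft-thresholding of the features) and a
tanh activation between layers; the hyperparameters, sparsity penalty and
activation used in the actual paper may differ.

```python
import numpy as np

def soft_threshold(A, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def learn_transform_layer(X, lam=0.1, mu=0.1, n_iter=30, seed=None):
    """Learn one square transform T and sparse features Z so that T X ~ Z.

    Alternates a closed-form update of T with soft-thresholding of T X.
    X has shape (n_features, n_samples).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    T = rng.standard_normal((n, n)) / np.sqrt(n)      # random initialisation
    Z = soft_threshold(T @ X, mu)

    # Cholesky factor of X X^T + lam I stays fixed across iterations.
    L = np.linalg.cholesky(X @ X.T + lam * np.eye(n))
    L_inv = np.linalg.inv(L)

    for _ in range(n_iter):
        # Closed-form transform update for fixed Z.
        Q, s, Rt = np.linalg.svd(L_inv @ X @ Z.T)
        S = np.diag(s + np.sqrt(s ** 2 + 2.0 * lam))
        T = 0.5 * Rt.T @ S @ Q.T @ L_inv
        # Sparse feature update for fixed T.
        Z = soft_threshold(T @ X, mu)
    return T, Z

def greedy_deep_transform(X, n_layers=3, activation=np.tanh, **kwargs):
    """Greedy stacking: each layer is learnt on the activated features of
    the previous layer, as described in the abstract."""
    transforms, H = [], X
    for _ in range(n_layers):
        T, Z = learn_transform_layer(H, **kwargs)
        transforms.append(T)
        H = activation(Z)        # activated features feed the next layer
    return transforms, H
```

Because each layer is trained independently on the previous layer's output,
no end-to-end backpropagation is needed; the deep features H can then be
passed to any off-the-shelf classifier, which is the usual evaluation
protocol for greedy layer-wise representation learning.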
