Deep CNN Sparse Coding Analysis
- Submitted by:
- Michael Murray
- Last updated:
- 31 May 2018 - 12:05pm
- Document Type:
- Poster
- Document Year:
- 2018
- Presenters:
- Michael Murray
Deep Convolutional Sparse Coding (D-CSC) is a framework reminiscent
of deep convolutional neural networks (DCNNs), but because the
dictionaries are not learned, one can more transparently analyse the
role of the activation function and its ability to recover activation
paths through the layers. Papyan, Romano, and Elad analysed such an
architecture \cite{2016arXiv160708194P}, demonstrated its relationship
with DCNNs, and proved conditions under which a D-CSC is guaranteed to
recover activation paths. A key technical innovation of their work is
that the efficacy of the ReLU activation function in a DCNN can be
viewed through a new variant of the tensor's sparsity, referred to as
stripe-sparsity. Using this notion, they proved that representations
whose activation density is proportional to the ambient dimension of
the data are recoverable. We extend their uniform guarantees to a
modified model and prove that, with high probability, the true
activations can typically be recovered at a greater density of
activations per layer. Our extension follows from incorporating the
prior work on one-step thresholding by Schnass and Vandergheynst into
an appropriately modified version of the architecture of Papyan et al.
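As a rough illustration of the recovery mechanism referred to above, the sketch below implements one-step thresholding for a single sparse-coding layer: correlate the signal with every unit-norm dictionary atom, keep the atoms with the largest absolute correlations as the estimated support, and least-squares fit on that support. The dictionary, dimensions, and sparsity level are illustrative assumptions, not values from the poster, and the sketch omits the convolutional structure and layer stacking of the actual D-CSC model.

```python
import numpy as np


def one_step_thresholding(D, y, k):
    """Estimate a k-sparse code for y under a dictionary D with unit-norm atoms.

    One pass: rank atoms by |<d_i, y>|, keep the top k as the estimated
    support, then solve least squares restricted to those atoms.
    """
    correlations = D.T @ y
    support = np.argsort(np.abs(correlations))[-k:]  # indices of the k largest correlations
    coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    x_hat = np.zeros(D.shape[1])
    x_hat[support] = coeffs
    return x_hat


rng = np.random.default_rng(0)
m, n, k = 250, 300, 3                 # ambient dim, atoms, sparsity (illustrative, low-coherence regime)
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)        # normalise each atom to unit norm

true_support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[true_support] = 1.0                 # equal-magnitude activations
y = D @ x                             # noiseless synthesis

x_hat = one_step_thresholding(D, y, k)
```

In this mildly overcomplete, low-coherence regime the estimated support typically matches the true one. Note also that for nonnegative codes, soft thresholding at level b, max(z - b, 0), coincides with a ReLU applied to z - b, which is the kind of correspondence between thresholding-based sparse coding and DCNN forward passes exploited in the analysis above.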