
In this paper we examine a technique for developing prognostic image characteristics, termed radiomics, for non-small cell lung cancer based on an analysis of the tumour edge region. Texture features were extracted from the rind of the tumour in a publicly available 3D CT data set to predict two-year survival. The derived models were compared against previous methods that train radiomic signatures descriptive of the whole tumour volume. Radiomic features derived solely from regions external to, but neighbouring, the tumour were shown to also have prognostic value.
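
As a rough illustration of rind-based feature extraction, the Python sketch below derives an edge-region mask by dilating and eroding a binary tumour segmentation and then computes a handful of first-order intensity statistics inside that band. The three-voxel rind width, the synthetic spherical tumour and the feature set are placeholders for illustration, not the pipeline used in the paper.

```python
import numpy as np
from scipy import ndimage

def rind_mask(tumour_mask: np.ndarray, width_voxels: int = 3) -> np.ndarray:
    """Band of voxels straddling the tumour boundary (hypothetical rind definition)."""
    dilated = ndimage.binary_dilation(tumour_mask, iterations=width_voxels)
    eroded = ndimage.binary_erosion(tumour_mask, iterations=width_voxels)
    return dilated & ~eroded

def first_order_features(ct_volume: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Simple intensity statistics inside the mask, standing in for texture features."""
    values = ct_volume[mask]
    return np.array([values.mean(), values.std(), np.percentile(values, 10),
                     np.percentile(values, 90), values.max() - values.min()])

# Synthetic example: a spherical "tumour" inside a noisy CT-like volume.
rng = np.random.default_rng(0)
ct = rng.normal(0.0, 1.0, size=(64, 64, 64))
zz, yy, xx = np.mgrid[:64, :64, :64]
tumour = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2

rind = rind_mask(tumour, width_voxels=3)
features = first_order_features(ct, rind)
print("rind feature vector:", features)
```

In a survival study such a feature vector would then be fed to a classifier trained on the two-year survival label; that modelling stage is omitted here.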


The development of children's cognitive and perceptual skills depends heavily on object exploration and manipulative experiences. New robotic assistive technologies have emerged in recent years that enable children with disabilities to interact with their environment, which has proven beneficial for the development of their cognitive and perceptual skills. In this study, a human-robot interface was developed that uses the Event-Related Desynchronization (ERD) brain response during movement.
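
As a minimal sketch of how an ERD response can be quantified, the code below uses the standard band-power definition ERD% = 100 x (P_baseline - P_movement) / P_baseline in the mu band. The sampling rate, band edges and filter order are illustrative choices, not the configuration of the interface developed in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(signal: np.ndarray, fs: float, low: float, high: float) -> float:
    """Mean power of the signal after order-4 Butterworth band-pass filtering."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return float(np.mean(filtered ** 2))

def erd_percent(baseline: np.ndarray, activity: np.ndarray, fs: float,
                low: float = 8.0, high: float = 12.0) -> float:
    """ERD in percent: mu-band power drop during movement relative to rest."""
    p_base = band_power(baseline, fs, low, high)
    p_act = band_power(activity, fs, low, high)
    return 100.0 * (p_base - p_act) / p_base

# Synthetic example: a mu rhythm that is attenuated during the "movement" segment.
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
rest = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
movement = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
print(f"ERD: {erd_percent(rest, movement, fs):.1f}%")
```

An interface of the kind described would threshold such an ERD value online to trigger the robot; that control loop is not shown here.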


Sparse approximation is a well-established theory with a profound impact on the fields of signal and image processing. In this talk we start by presenting this model and its features, and then turn to describe two special cases of it: convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). Amazingly, as we will carefully show, ML-CSC provides a solid theoretical foundation to … deep learning.
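
To make that connection tangible, the sketch below runs a layered soft-thresholding pursuit for a toy two-layer ML-CSC model x ≈ D1 γ1, γ1 ≈ D2 γ2: each layer applies a dictionary transpose followed by a soft threshold, mirroring a linear layer followed by a ReLU-like nonlinearity in a feed-forward network. The random, non-convolutional dictionaries and the threshold values are placeholders for illustration only.

```python
import numpy as np

def soft_threshold(v: np.ndarray, beta: float) -> np.ndarray:
    """Elementwise soft thresholding, the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - beta, 0.0)

def layered_thresholding(x: np.ndarray, dictionaries, thresholds):
    """Layered pursuit for ML-CSC: gamma_i = soft_threshold(D_i^T gamma_{i-1})."""
    gamma = x
    for D, beta in zip(dictionaries, thresholds):
        gamma = soft_threshold(D.T @ gamma, beta)
    return gamma

# Toy two-layer model with random (non-convolutional) dictionaries.
rng = np.random.default_rng(0)
n, m1, m2 = 64, 128, 256
D1 = rng.standard_normal((n, m1)) / np.sqrt(n)
D2 = rng.standard_normal((m1, m2)) / np.sqrt(m1)

# Synthesize a signal from a sparse deepest representation gamma_2.
gamma2 = np.zeros(m2)
gamma2[rng.choice(m2, size=5, replace=False)] = rng.standard_normal(5)
x = D1 @ (D2 @ gamma2)

gamma2_hat = layered_thresholding(x, [D1, D2], thresholds=[0.1, 0.1])
print("nonzeros in estimate:", np.count_nonzero(gamma2_hat))
```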


Voxel representations are effective for 3D mesh and point cloud classification because they build on mature Convolutional Neural Network concepts. We show, however, that their cubic growth in dimensionality makes them unsuitable for more challenging problems such as object detection in a complex point cloud scene. We observe that 3D meshes are analogous to graph data and can therefore be treated with graph signal processing techniques.
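
As an illustration of treating a point cloud as graph data, the sketch below builds a k-nearest-neighbour graph over the points and forms its combinatorial Laplacian, the basic operator behind graph signal processing. The choice of k, the Gaussian edge weights and the toy point cloud are assumptions made for this example, not the construction used in the work.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.spatial import cKDTree

def knn_graph_laplacian(points: np.ndarray, k: int = 8, sigma: float = 1.0):
    """Combinatorial Laplacian L = D - W of a symmetrised k-NN graph with Gaussian weights."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)      # first neighbour is the point itself
    n = points.shape[0]
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    weights = np.exp(-(dists[:, 1:].ravel() ** 2) / (2 * sigma ** 2))
    W = csr_matrix((weights, (rows, cols)), shape=(n, n))
    W = W.maximum(W.T)                            # symmetrise the adjacency
    degrees = np.asarray(W.sum(axis=1)).ravel()
    return diags(degrees) - W

# Toy point cloud: a noisy plane; smooth graph signals vary slowly over it.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 10, 500), rng.uniform(0, 10, 500),
                       0.1 * rng.standard_normal(500)])
L = knn_graph_laplacian(pts, k=8, sigma=1.0)
signal = pts[:, 0]                                # x-coordinate as a graph signal
smoothness = float(signal @ (L @ signal))         # quadratic form x^T L x measures variation
print("graph signal smoothness (x^T L x):", round(smoothness, 2))
```

The memory cost of this sparse graph grows with the number of points and edges rather than with the cube of the scene extent, which is the contrast with voxel grids drawn above.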


We introduce deep transform learning, a new tool for deep learning. A deeper representation is learnt by stacking one transform after another, and the learning proceeds in a greedy fashion: the first layer learns its transform and features from the input training samples, and each subsequent layer uses the activated features of the previous layer as its training input. Experiments have been carried out comparing it with other deep representation learning tools: deep dictionary learning, the stacked denoising autoencoder, the deep belief network and PCANet.
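
A minimal numpy sketch of the greedy, layer-wise scheme described above, under simplifying assumptions: each layer solves a basic square transform-learning problem min_{T,Z} ||TX - Z||_F^2 + mu ||Z||_1 + lam (||T||_F^2 - log|det T|) by alternating a soft-thresholded feature update with the closed-form transform update of Ravishankar and Bresler, and the activated features become the next layer's training input. The layer count, regularisation weights and the tanh activation are placeholders, not the paper's settings.

```python
import numpy as np

def soft_threshold(v, mu):
    """Proximal operator of the L1 penalty on the feature matrix Z."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def update_transform(X, Z, lam):
    """Closed-form update for a square transform (Ravishankar & Bresler style)."""
    n = X.shape[0]
    L = np.linalg.cholesky(X @ X.T + lam * np.eye(n))
    L_inv = np.linalg.inv(L)
    Q, s, Rt = np.linalg.svd(L_inv @ X @ Z.T)     # full SVD of an n x n matrix
    S = np.diag(s + np.sqrt(s ** 2 + 2.0 * lam))
    return 0.5 * Rt.T @ S @ Q.T @ L_inv

def learn_layer(X, mu=0.1, lam=0.1, iters=20, seed=0):
    """Alternate feature (Z) and transform (T) updates for a single layer."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    T = rng.standard_normal((n, n)) / np.sqrt(n)
    Z = soft_threshold(T @ X, mu)
    for _ in range(iters):
        T = update_transform(X, Z, lam)
        Z = soft_threshold(T @ X, mu)
    return T, Z

def deep_transform_learning(X, depth=3, **kwargs):
    """Greedy stacking: activated features of one layer are the next layer's input."""
    transforms, features = [], X
    for _ in range(depth):
        T, Z = learn_layer(features, **kwargs)
        transforms.append(T)
        features = np.tanh(Z)                     # placeholder activation between layers
    return transforms, features

# Toy run on random training samples (columns are samples).
rng = np.random.default_rng(42)
X = rng.standard_normal((32, 200))
transforms, deep_features = deep_transform_learning(X, depth=3)
print([T.shape for T in transforms], deep_features.shape)
```

The greedy structure means each layer is trained once on the output of the previous one, with no end-to-end backpropagation, which is the design choice the abstract highlights.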

