
Mean curvature has been shown to be an effective regularizer in various ill-posed inverse problems in signal processing. Traditional solvers are based either on gradient descent methods or on the Euler-Lagrange equation. However, it is not clear whether this mean curvature regularization term itself is convex. In this paper, we first prove that the mean curvature regularization is convex if the dimension of the imaging domain is not
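For reference, the mean curvature regularizer discussed above is commonly written as follows in the literature; the notation here (image domain \(\Omega\), forward operator \(A\), weight \(\lambda\)) is a standard convention and an assumption, not taken from this paper:

```latex
% Image u : \Omega \subset \mathbb{R}^d \to \mathbb{R}, viewed as the graph surface (x, u(x)).
% Mean curvature of the graph of u:
\kappa(u) = \nabla \cdot \left( \frac{\nabla u}{\sqrt{1 + |\nabla u|^2}} \right)
% A typical mean curvature regularization term:
R(u) = \int_\Omega \bigl| \kappa(u) \bigr| \, dx
% added to a data-fidelity term in the inverse problem, e.g.
\min_u \; \frac{1}{2} \| A u - f \|_2^2 + \lambda \, R(u)
```

The convexity question raised in the abstract concerns \(R(u)\) as a functional of \(u\); the data-fidelity term is convex whenever \(A\) is linear.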


This paper presents a two-step supervised face hallucination framework based on class-specific dictionary learning. Since the performance of learning-based face hallucination relies on its training set, an inappropriate training set (e.g., one very different from the input face image) can significantly reduce the visual quality of the reconstructed high-resolution (HR) face.


Ph.D. Thesis by Donald McCuan (advisor Andrew Knyazev), Department of Mathematical and Statistical Sciences, University of Colorado Denver, 2012, originally posted at http://math.ucdenver.edu/theses/McCuan_PhdThesis.pdf


We develop novel algorithms and software on parallel computers for data clustering of large datasets. We are interested in applying our approach, e.g., for analysis of large datasets of microarrays or tiling arrays in molecular biology and for segmentation of high resolution images.


In this paper, we present an automated system for robust biometric recognition based on sparse representation and dictionary learning. In sparse representation, features extracted from the training data are used to build a dictionary. Training data in real-world applications are likely to be exposed to geometric transformations, which poses a significant challenge for the design of discriminative dictionaries. Classification is achieved by representing the extracted features of the test data as a linear combination of entries in the dictionary.
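The classification step described above follows the standard sparse-representation classification (SRC) pattern: solve a sparse coding problem over the dictionary, then assign the class whose atoms best reconstruct the test sample. A minimal sketch of that pattern, not the paper's specific system; the ISTA solver, the `lam` parameter, and the function name are illustrative assumptions:

```python
import numpy as np

def src_classify(D, labels, y, lam=0.1, n_iter=200):
    """Sparse-representation classification (SRC) sketch.

    D      : (d, n) dictionary; columns are training feature vectors.
    labels : (n,) class label of each dictionary column.
    y      : (d,) test feature vector.
    Solves min_x 0.5*||y - D x||^2 + lam*||x||_1 with ISTA, then
    returns the class whose atoms give the smallest reconstruction residual.
    """
    # Normalize dictionary columns to unit length (standard in SRC).
    D = D / np.linalg.norm(D, axis=0, keepdims=True)
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):                # ISTA: gradient step + soft threshold
        z = x - D.T @ (D @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    # Class-wise residuals: keep only the coefficients of one class at a time.
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        x_c = np.where(labels == c, x, 0.0)
        res = np.linalg.norm(y - D @ x_c)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

The class-wise residual test is what makes the sparse code discriminative: a test sample from class c should be representable mostly by class-c atoms, so zeroing out the other classes' coefficients barely changes the reconstruction.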


Inter-prediction decoding is one of the most time-consuming modules in modern video decoders and may significantly limit their real-time capabilities. To circumvent this issue, an efficient acceleration of the HEVC inter-prediction decoding module is proposed, offloading the involved workload to GPU devices. The proposed approach aims at efficiently exploiting GPU resources by carefully managing the processing within the computational kernels, as well as by optimizing the usage of the complex GPU memory hierarchy.

