
The automatic diagnosis of lung infections from chest computed
tomography (CT) scans has recently gained remarkable significance,
particularly during the COVID-19 pandemic, when early
diagnosis of the disease is of utmost importance. In addition, infection
diagnosis is the main building block of most automated diagnostic/
prognostic frameworks. Recently, owing to the damaging effects
of CT radiation on the body, there has been
a surge in acquiring low- and ultra-low-dose CT scans instead of the


Occlusion removal is an interesting application of image enhancement, for which existing work relies on manually-annotated or domain-specific occlusion removal. No prior work addresses automatic occlusion detection and removal as a context-aware, generic problem. In this paper, we present a novel methodology that identifies objects unrelated to the image context as occlusions, removes them, and coherently reconstructs the space they occupied.


This paper proposes a modified YOLOv3 with an additional object-depth prediction module for obstacle detection and avoidance. We use a pre-processed KITTI dataset to train the proposed unified model for (i) object detection and (ii) depth prediction, and use the AirSim flight simulator to generate synthetic aerial images to verify that our model can be applied in different data domains.


Colorization is a challenging task that has recently been tackled by deep learning. Line art colorization is particularly difficult because there are no grayscale values to indicate color intensities, as there are in black-and-white photographs. When designing a character, concept artists often need to try different color schemes; however, colorization is a time-consuming task. In this article, we propose a semi-automatic framework for colorizing manga concept art that lets concept artists try different color schemes and obtain colorized results in a timely fashion.




Binary neural networks are a promising approach to executing convolutional neural networks on devices with low computational power. Previous work on this subject often quantizes pretrained full-precision models and uses complex training strategies. In our work, we focus on increasing the performance of binary neural networks by training from scratch with a simple training strategy. In our experiments, we show that we achieve state-of-the-art results on standard benchmark datasets.
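The abstract does not give the authors' quantization details; as a minimal, hedged sketch of the core idea behind binary networks, the snippet below shows standard deterministic sign binarization of a weight tensor with a per-tensor scaling factor (the mean of absolute values, in the style of XNOR-Net — an assumption, not necessarily this paper's scheme):

```python
def binarize(weights):
    """Map each real-valued weight to +1 or -1 (sign binarization).

    A per-tensor scaling factor alpha (mean absolute value, as in
    XNOR-Net) is returned alongside the signs so that the binarized
    weights approximate the original scale: w ~ alpha * sign(w).
    """
    n = len(weights)
    alpha = sum(abs(w) for w in weights) / n   # per-tensor scale
    signs = [1.0 if w >= 0 else -1.0 for w in weights]
    return alpha, signs


# Example: four real-valued weights collapse to signs plus one scale.
alpha, signs = binarize([0.5, -0.25, 0.75, -1.0])
```

At inference time, multiply-accumulates against `signs` reduce to additions and subtractions (or XNOR/popcount on bit-packed weights), which is what makes binary networks attractive on low-power devices.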


Capsule Networks (CapsNets) were recently introduced to overcome some of the shortcomings of traditional Convolutional Neural Networks (CNNs). CapsNets replace neurons in CNNs with vectors to retain spatial relationships among the features. In this paper, we propose a CapsNet architecture that employs individual video frames for human action recognition without explicitly extracting motion information. We also propose weight pooling, which reduces computational complexity and improves classification accuracy by appropriately removing some of the extracted features.
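To make the "vectors instead of neurons" idea concrete: in the original CapsNet formulation (Sabour et al., 2017), a capsule's output vector is passed through a "squash" nonlinearity that preserves its orientation while compressing its length into (0, 1), so the length can act as an existence probability. The sketch below shows that standard squash function; the architecture proposed in this paper may of course differ:

```python
import math

def squash(v):
    """CapsNet squash nonlinearity (Sabour et al., 2017).

    Keeps the vector's direction but rescales its length:
    short vectors shrink toward zero, long vectors approach
    unit length, so the length encodes an existence probability.
    """
    sq_norm = sum(x * x for x in v)
    norm = math.sqrt(sq_norm)
    if norm == 0.0:
        return [0.0] * len(v)
    scale = sq_norm / (1.0 + sq_norm) / norm
    return [scale * x for x in v]


# A long input vector (length 5) squashes to length 25/26 ~ 0.96,
# pointing in the same direction.
out = squash([3.0, 4.0])
```

Because the orientation survives the nonlinearity, downstream capsules can read out spatial pose information from the vector's direction, which is what lets CapsNets retain feature relationships that max-pooled CNN scalars discard.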


Crowd counting, which estimates the number of people in a crowd using computer-vision techniques, has attracted much interest in the research community. Although many attempts have been reported, real-world problems, such as huge variation in subjects' sizes within images and severe occlusion among people, still make it a challenging task. In this paper, we propose an Adaptive Counting Convolutional Neural Network (A-CCNN) that adaptively accounts for the scale variation of objects in a frame so as to improve counting accuracy.