The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for presenting technological advances and research results in theoretical, experimental, and applied image and video processing. Held annually since 1994, ICIP brings together leading engineers and scientists in image and video processing from around the world.

Camera-equipped drones have recently revolutionized aerial cinematography, allowing impressive footage to be acquired with ease. Although they are currently operated manually, autonomous functionalities based on machine learning and computer vision are becoming popular. However, the emerging area of autonomous UAV filming faces several challenges, especially when visually tracking fast and unpredictably moving targets. In that case, an important issue is how to determine which shot types are achievable without risking failure of the 2D visual tracker.
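As a purely hypothetical illustration of such a feasibility check (not the method of this work), the sketch below rejects candidate shot types whose expected on-screen target speed exceeds an assumed reliable operating limit of the 2D tracker; every name and number here is a placeholder.

```python
# Hypothetical illustration only: keep shot types whose induced on-screen
# target motion stays within what the 2D tracker can reliably follow.

# Assumed limit: maximum on-screen displacement (pixels per frame) the
# tracker tolerates before it is likely to fail.
TRACKER_MAX_PX_PER_FRAME = 25.0

# Expected on-screen target speed (pixels/frame) induced by each candidate
# shot type for a given target; the values are placeholders.
candidate_shots = {
    "static_long_shot": 6.0,
    "fly_over": 18.0,
    "orbit_close_up": 32.0,
}

def achievable_shots(shots, max_speed=TRACKER_MAX_PX_PER_FRAME):
    """Return the shot types whose induced target motion stays within the
    tracker's reliable operating range."""
    return [name for name, speed in shots.items() if speed <= max_speed]

print(achievable_shots(candidate_shots))  # ['static_long_shot', 'fly_over']
```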

We present a solution to the problem of discovering all periodic segments of a video and of estimating their period in a completely unsupervised manner. These segments may be located anywhere in the video, may differ in duration, speed, and period, and may represent unseen motion patterns of any type of object (e.g., humans, animals, machines). The proposed method capitalizes on earlier research on the problem of detecting common actions in videos, also known as commonality detection or video co-segmentation. The proposed …
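As a loose illustration of the unsupervised flavour of this problem (the commonality-detection machinery of the paper is not reproduced here), the sketch below estimates the period of a candidate segment from the autocorrelation of a per-frame feature-similarity signal; the feature choice and the toy data are assumptions.

```python
import numpy as np

def estimate_period(features, min_lag=2):
    """features: (num_frames, dim) array of per-frame descriptors.
    Returns the lag (in frames) with the strongest autocorrelation."""
    # 1D signal: similarity of every frame to the first frame of the segment.
    signal = features @ features[0]
    signal = signal - signal.mean()
    # Autocorrelation, keeping non-negative lags only.
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    ac = ac / (ac[0] + 1e-9)              # normalise by the zero-lag energy
    return int(np.argmax(ac[min_lag:]) + min_lag)

# Toy usage: per-frame features that repeat every 12 frames.
t = np.arange(240)
feats = np.stack([np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)], axis=1)
print(estimate_period(feats))             # -> 12
```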

Recently, there has been increasing interest in the processing of dynamic scenes captured by 3D scanners, which is ideally suited for challenging applications such as immersive tele-presence systems and gaming. Despite the fact that the resolution and accuracy of modern 3D scanners are constantly improving, the captured 3D point clouds are usually noisy, with a perceptible percentage of outliers, stressing the need for an approach with low computational requirements that is able to automatically remove the outliers …
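For context only, and not the approach proposed here, the following sketch shows a standard low-cost statistical outlier filter for point clouds: points whose mean distance to their k nearest neighbours is far above the global average are discarded. The parameters k and std_ratio are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_ratio=2.0):
    """points: (N, 3) array. Returns the inlier subset."""
    tree = cKDTree(points)
    # Query the k+1 nearest points (the query point itself is included).
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)          # drop the self-distance
    threshold = mean_dist.mean() + std_ratio * mean_dist.std()
    return points[mean_dist < threshold]

# Toy usage: a noisy planar patch plus a few gross outliers far away.
rng = np.random.default_rng(0)
cloud = np.c_[rng.uniform(0, 1, (1000, 2)), rng.normal(0, 0.01, 1000)]
cloud = np.vstack([cloud, rng.uniform(5, 6, (10, 3))])   # 10 outliers
print(remove_outliers(cloud).shape)       # roughly (1000, 3)
```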

We present a region-based method for segmenting and splitting images of cells in an automatic and unsupervised manner. The detection of cell nuclei is based on Bradley's method. False positives are automatically identified and rejected based on shape and intensity features. Additionally, the proposed method is able to automatically detect and split touching cells. To do so, we employ a variant of a region-based multi-ellipse fitting method (DEFA) that makes use of constraints on the …
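Since nucleus detection is said to rely on Bradley's method, a minimal sketch of Bradley adaptive thresholding (local mean computed via an integral image) is given below; the window size and sensitivity t are assumed values, and the false-positive rejection and DEFA ellipse-fitting stages are not reproduced.

```python
import numpy as np

def bradley_threshold(gray, window=31, t=0.15):
    """gray: 2D float array in [0, 1]. Returns a boolean foreground mask:
    a pixel is kept if it is darker than (1 - t) times the mean of its
    local window, with local means computed from an integral image."""
    h, w = gray.shape
    integral = np.pad(gray, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    r = window // 2
    y, x = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(y - r, 0, h), np.clip(y + r + 1, 0, h)
    x0, x1 = np.clip(x - r, 0, w), np.clip(x + r + 1, 0, w)
    area = (y1 - y0) * (x1 - x0)
    local_sum = (integral[y1, x1] - integral[y0, x1]
                 - integral[y1, x0] + integral[y0, x0])
    # Dark nuclei on a bright background: keep pixels below the local mean.
    return gray * area < local_sum * (1.0 - t)
```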

The recent trend towards miniaturization of mobile projectors is enabling new forms of information presentation and interaction. Projectors can now be moved freely in space, either by humans or by mobile robots. This paper presents a technique for dynamically tracking the orientation and position of the projection plane solely by analyzing the distortion of the projection itself, independent of the presented content. It allows distortion-free projection with a fixed metric size for moving projector-camera systems.
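As a hedged, simplified illustration of plane-orientation recovery (not the content-independent technique of this paper), the sketch below decomposes the homography between matched projector and camera points into candidate plane normals, assuming the camera intrinsic matrix K is known; in practice additional constraints are needed to select among the returned solutions.

```python
import cv2
import numpy as np

def plane_normals(projector_pts, camera_pts, K):
    """projector_pts, camera_pts: (N, 2) float arrays of matched points.
    Returns the candidate plane normals from the homography decomposition
    (up to four solutions)."""
    # Homography induced by the projection plane between the two views.
    H, _ = cv2.findHomography(projector_pts, camera_pts, cv2.RANSAC)
    # Decompose into rotation, translation and plane-normal hypotheses.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return [n.ravel() for n in normals]
```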

Oil spills pose a major threat to oceanic and coastal environments; hence, automatic detection and continuous monitoring systems constitute an appealing option for minimizing the response time of the relevant operations. Numerous efforts have been made towards such solutions by exploiting a variety of sensing systems, such as satellite Synthetic Aperture Radar (SAR), which can identify oil spills on sea surfaces under any environmental conditions and at any time of operation. Such approaches include the use of artificial neural networks, which effectively identify the polluted areas.
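As an illustrative sketch only, not the network used in the work above, the following shows a small patch-level convolutional classifier of the kind commonly applied to SAR imagery; the architecture, patch size, and class layout are assumptions.

```python
import torch
import torch.nn as nn

class SarPatchClassifier(nn.Module):
    """Classify 64x64 single-channel SAR patches as oil spill vs. other."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),             # oil spill vs. look-alike/clean sea
        )

    def forward(self, x):                 # x: (batch, 1, 64, 64) SAR patches
        return self.classifier(self.features(x))

logits = SarPatchClassifier()(torch.randn(4, 1, 64, 64))
print(logits.shape)                       # torch.Size([4, 2])
```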

Optical coherence tomography (OCT) is a powerful method for imaging the retinal layers. In this paper, we develop a novel 3D fully convolutional deep architecture for automated segmentation of retinal layers in OCT scans. This model extracts features from both the spatial and the inter-frame dimensions by performing 3D convolutions, thereby capturing the information encoded in multiple adjacent frames.
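A minimal sketch of the idea of 3D convolution over adjacent frames is given below; it is not the paper's architecture, and the layer sizes, number of retinal-layer classes, and input shape are assumptions.

```python
import torch
import torch.nn as nn

num_layers = 9                            # assumed number of retinal-layer classes
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(32, num_layers, kernel_size=1),    # per-voxel class scores
)

# Input: (batch, channels, adjacent frames, height, width) OCT sub-volume,
# so each 3D kernel mixes the two spatial dimensions with the inter-frame one.
volume = torch.randn(1, 1, 5, 128, 128)
print(model(volume).shape)                # torch.Size([1, 9, 5, 128, 128])
```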
