
Image/Video Processing

Visual Tracking via Structural Patch-based Dictionary Pair Learning

Paper Details

Submitted On: 16 September 2017 - 11:27am
Short Link: http://sigport.org/2197

Document Files

TaoZhou_ICIP_2017.pdf (19 downloads)


MULTIPLE PATH SEARCH FOR ACTION TUBE DETECTION IN VIDEOS


This paper presents an efficient convolutional neural network (CNN)-based multiple path search (MPS) algorithm to detect multiple spatial-temporal action tubes in videos. The algorithm reuses the pass information and accumulated scores generated by forward message passing to find multiple paths simultaneously during backward path tracing, without repeating the search process. Moreover, to rectify potentially inaccurate bounding boxes, we also propose a video localization refinement scheme to further boost detection accuracy.
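
The forward/backward structure described here resembles Viterbi-style tube linking. Below is a minimal, hedged sketch of that idea only, not the authors' MPS algorithm (which also handles path diversity and localization refinement): per-frame box scores are accumulated forward with backpointers, and several high-scoring paths are then read out backward by reusing the stored scores, without rerunning the search. The function name, the `link` pairwise scores (e.g. IoU between boxes in adjacent frames), and the toy inputs are assumptions.

```python
import numpy as np

def trace_action_paths(scores, link, top_k=2):
    """Forward message passing + multi-path backward tracing (illustrative sketch).

    scores: list over frames; scores[t] is a 1-D array of per-box detection scores.
    link:   list over transitions; link[t][i, j] is a linking score between box i
            in frame t and box j in frame t + 1 (e.g. IoU).
    Returns up to top_k (path, accumulated_score) pairs; a path is one box index per frame.
    """
    acc = [np.asarray(scores[0], dtype=float)]   # best accumulated score per box
    back = []                                    # backpointers, one array per transition
    # Forward message passing: propagate the best accumulated score to each box.
    for t in range(1, len(scores)):
        cand = acc[-1][:, None] + np.asarray(link[t - 1], dtype=float)
        back.append(cand.argmax(axis=0))
        acc.append(cand.max(axis=0) + np.asarray(scores[t], dtype=float))
    # Backward path tracing: reuse stored scores/backpointers to read out several
    # high-scoring paths at once, without repeating the forward search.
    paths = []
    for end in np.argsort(acc[-1])[::-1][:top_k]:
        path = [int(end)]
        for bp in reversed(back):
            path.append(int(bp[path[-1]]))
        paths.append((path[::-1], float(acc[-1][end])))
    return paths

# Toy example: 3 frames, 2 candidate boxes per frame.
scores = [np.array([0.9, 0.2]), np.array([0.1, 0.8]), np.array([0.7, 0.6])]
link = [np.array([[0.5, 0.9], [0.1, 0.2]]), np.array([[0.3, 0.1], [0.8, 0.4]])]
print(trace_action_paths(scores, link))
```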

Paper Details

Authors: Erick Hendra Putra Alwando, Wen-Hsien Fang
Submitted On: 16 September 2017 - 11:23am
Short Link: http://sigport.org/2196

Document Files

MULTIPLE PATH SEARCH FOR ACTION TUBE DETECTION IN VIDEOS (35 downloads)


WORDFENCE: TEXT DETECTION IN NATURAL IMAGES WITH BORDER AWARENESS


In recent years, text recognition has achieved remarkable success in recognizing scanned document text. However, word recognition in natural images is still an open problem, and it generally requires time-consuming post-processing steps. We present a novel architecture for individual word detection in scene images based on semantic segmentation. Our contributions are twofold: (i) the concept of WordFence, which detects border areas surrounding each individual word, and (ii) a novel pixelwise weighted softmax loss function that penalizes background and emphasizes small text regions.
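
As a hedged illustration of what a pixelwise weighted softmax (cross-entropy) loss can look like, the sketch below down-weights background pixels and up-weights pixels of small text regions in inverse proportion to region size. The weighting scheme, function names, and constants are assumptions for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def pixelwise_weighted_softmax_loss(logits, target, weight_map):
    """logits: (N, C, H, W) class scores; target: (N, H, W) int64 labels;
    weight_map: (N, H, W) per-pixel loss weights."""
    per_pixel = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
    return (per_pixel * weight_map).sum() / weight_map.sum()

def make_weight_map(target, region_sizes, bg_weight=0.1):
    """One plausible weighting: background gets a small constant weight, and each
    text pixel is weighted inversely to its word's pixel area, so small words
    contribute as much to the loss as large ones."""
    weight = torch.full(target.shape, bg_weight, dtype=torch.float32)
    for label, size in region_sizes.items():   # {word label: pixel count}
        weight[target == label] = 1.0 / max(size, 1)
    return weight

# Toy usage: 2 classes (background = 0, text = 1), one 6-pixel word region.
logits = torch.randn(1, 2, 4, 4)
target = torch.zeros(1, 4, 4, dtype=torch.long)
target[0, 1:3, 0:3] = 1
weights = make_weight_map(target, region_sizes={1: 6})
print(pixelwise_weighted_softmax_loss(logits, target, weights))
```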

Paper Details

Authors: Andrei Polzounov, Artsiom Ablavatski, Sergio Escalera, Shijian Lu, Jianfei Cai
Submitted On: 16 September 2017 - 8:02am
Short Link: http://sigport.org/2188

Document Files

ICIP_wordfence_presentation.pdf (13 downloads)


ROBUST FACE ALIGNMENT WITH CASCADED COARSE-TO-FINE AUTO-ENCODER NETWORK

Paper Details

Authors: Yongxin Ge, Mingjian Hong, Sheng Huang, Dan Yang
Submitted On: 16 September 2017 - 7:42am
Short Link: http://sigport.org/2187

Document Files

icip2017新 .pptx (0 downloads)


A GRAPH-BASED APPROACH FOR FEATURE EXTRACTION AND SEGMENTATION OF MULTIMODAL IMAGES

Paper Details

Authors: Geoffrey Iyer, Jocelyn Chanussot, Andrea Bertozzi
Submitted On: 16 September 2017 - 4:59am
Short Link: http://sigport.org/2185

Document Files

Iyer_ICIP2017_Poster_Draft_10-09-17.pdf (12 downloads)


Visual Salience and Stack Extension Based Ghost Removal for High-dynamic-range Imaging


High-dynamic-range imaging (HDRI) techniques extend the dynamic range of captured images beyond sensor limitations. The key issue in multi-exposure fusion for HDRI is removing ghost artifacts caused by moving objects and handheld camera motion. This paper proposes a ghost-free HDRI algorithm based on visual salience and stack extension. To improve the accuracy of ghost area detection, visual-salience-based bilateral motion detection is …
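
As a generic, hedged sketch of salience-weighted motion (ghost) detection between two exposures, and not the paper's specific bilateral scheme or stack-extension step, the snippet below roughly aligns the two exposures radiometrically and then thresholds a saliency-weighted absolute difference to obtain a ghost mask. The normalization, weighting, and threshold are assumptions.

```python
import numpy as np

def detect_ghost_mask(exp_a, exp_b, saliency, thresh=0.1):
    """exp_a, exp_b: grayscale exposures in [0, 1]; saliency: [0, 1] salience map
    of the reference exposure. Returns a boolean mask of likely ghost pixels."""
    # Crude radiometric alignment: match the second exposure's mean/std to the first.
    b_aligned = (exp_b - exp_b.mean()) / (exp_b.std() + 1e-8) * exp_a.std() + exp_a.mean()
    b_aligned = np.clip(b_aligned, 0.0, 1.0)
    # Salience-weighted difference: motion in visually salient regions scores higher.
    diff = np.abs(exp_a - b_aligned) * (0.5 + 0.5 * saliency)
    return diff > thresh

# Toy usage with random inputs.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
other = np.clip(ref + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
sal = rng.random((64, 64))
print(detect_ghost_mask(ref, other, sal).mean())  # fraction of pixels flagged
```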

Paper Details

Authors: Zijie Wang, Qin Liu, Takeshi Ikenaga
Submitted On: 16 September 2017 - 4:37am
Short Link: http://sigport.org/2183

Document Files

WZJ_poster_ICIP_final.pdf (16 downloads)


DenseNet for Dense Flow


Efficient Large-Scale Video Understanding in The Wild

Paper Details

Authors: Yi Zhu, Shawn Newsam
Submitted On: 16 September 2017 - 2:54am
Short Link: http://sigport.org/2182

Document Files

ICIP17_phd_forum_poster.pdf (20 downloads)


DenseNet for Dense Flow


Classical approaches for estimating optical flow have achieved rapid progress in the last decade. However, most of them are too slow to be applied in real-time video analysis. Due to the great success of deep learning, recent work has focused on using CNNs to solve such dense prediction problems. In this paper, we investigate a new deep architecture, Densely Connected Convolutional Networks (DenseNet), to learn optical flow. This specific architecture is ideal for the problem at hand as it provides shortcut connections throughout the network, which leads to implicit deep supervision.
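
For readers unfamiliar with DenseNet's shortcut connections, here is a minimal sketch of a dense block, in which each layer receives the concatenation of all preceding feature maps. It illustrates the connectivity pattern only, not the authors' full flow-estimation network, which would embed such blocks in an encoder-decoder that regresses a two-channel flow field; the channel and layer counts are assumptions.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """BN -> ReLU -> 3x3 conv; the output is concatenated onto the input."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        new_features = self.conv(torch.relu(self.bn(x)))
        return torch.cat([x, new_features], dim=1)  # dense shortcut connection

class DenseBlock(nn.Module):
    """Stack of dense layers; the channel count grows by growth_rate per layer."""
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        layers, ch = [], in_channels
        for _ in range(num_layers):
            layers.append(DenseLayer(ch, growth_rate))
            ch += growth_rate
        self.block = nn.Sequential(*layers)
        self.out_channels = ch

    def forward(self, x):
        return self.block(x)

# Toy usage: a dense block over two stacked grayscale frames (2 input channels).
block = DenseBlock(in_channels=2)
print(block(torch.randn(1, 2, 64, 64)).shape)  # torch.Size([1, 50, 64, 64])
```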

Paper Details

Authors: Yi Zhu, Shawn Newsam
Submitted On: 16 September 2017 - 2:45am
Short Link: http://sigport.org/2181

Document Files

ICIP17_paper2550_slides_yizhu.pdf (24 downloads)


IMAGE SEGMENTATION USING CONTOUR, SURFACE, AND DEPTH CUES (Slides)


We target the problem of automatic image segmentation. Although 1D contour and 2D surface cues have been widely utilized in existing work, the 3D depth information of an image, a necessary cue according to human visual perception, is overlooked in automatic image segmentation. In this paper, we study how to fully utilize 1D contour, 2D surface, and 3D depth cues for image segmentation. First, three elementary segmentation modules are developed for these cues, respectively.
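
As a rough, hedged illustration of combining the three cues, and not the paper's elementary modules or fusion rule, the sketch below merges a 1D contour map, a 2D surface cue (color-gradient magnitude), and a 3D depth cue (depth-gradient magnitude) into a single boundary-strength map that a downstream segmentation step (e.g. a watershed) could consume. The cue definitions and weights are assumptions.

```python
import numpy as np

def fuse_segmentation_cues(contour, gray, depth, weights=(0.5, 0.25, 0.25)):
    """contour: [0, 1] boundary-probability map; gray: grayscale image in [0, 1];
    depth: depth map. Returns a fused boundary-strength map in [0, 1]."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        return mag / (mag.max() + 1e-8)        # normalize each cue to [0, 1]

    surface_cue = grad_mag(gray)                # 2D surface cue: color discontinuities
    depth_cue = grad_mag(depth)                 # 3D depth cue: depth discontinuities
    fused = weights[0] * contour + weights[1] * surface_cue + weights[2] * depth_cue
    return np.clip(fused, 0.0, 1.0)

# Toy usage on random inputs of matching size.
rng = np.random.default_rng(0)
boundary_map = fuse_segmentation_cues(rng.random((64, 64)), rng.random((64, 64)),
                                      rng.random((64, 64)))
print(boundary_map.shape, float(boundary_map.max()))
```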

Paper Details

Authors: Chen Chen, Jian Li, Changhu Wang, C.-C. Jay Kuo
Submitted On: 16 September 2017 - 1:44am
Short Link: http://sigport.org/2176

Document Files

2405_slides (13 downloads)


IMAGE SEGMENTATION USING CONTOUR, SURFACE, AND DEPTH CUES (Poster)


We target the problem of automatic image segmentation. Although 1D contour and 2D surface cues have been widely utilized in existing work, the 3D depth information of an image, a necessary cue according to human visual perception, is overlooked in automatic image segmentation. In this paper, we study how to fully utilize 1D contour, 2D surface, and 3D depth cues for image segmentation. First, three elementary segmentation modules are developed for these cues, respectively.

Paper Details

Authors: Chen Chen, Jian Li, Changhu Wang, C.-C. Jay Kuo
Submitted On: 16 September 2017 - 1:45am
Short Link: http://sigport.org/2175

Document Files

2405_poster (19 downloads)

