
Image/Video Processing

DEPTH HUMAN ACTION RECOGNITION BASED ON CONVOLUTION NEURAL NETWORKS AND PRINCIPAL COMPONENT ANALYSIS


In this work, we address the problem of human action recognition under viewpoint variation. The proposed model is formulated by combining a convolutional neural network (CNN) with principal component analysis (PCA). Real depth videos are passed through the CNN in a frame-wise manner, and view-invariant features are extracted from intermediate convolution-layer outputs and treated as 3D nonnegative tensors.
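
For readers who want a concrete picture of the pipeline, the toy sketch below (an assumption-laden illustration, not the authors' code) passes depth frames through a stand-in feature extractor, flattens the per-frame activations, and compresses them with PCA via an SVD; the function `cnn_features` and all shapes are hypothetical.

```python
# Minimal sketch (not the authors' implementation): frame-wise CNN features
# from a depth video are flattened and projected with PCA. The feature
# extractor is a stand-in; in practice it would be a mid-level convolution
# layer of a trained CNN.
import numpy as np

def cnn_features(frame):
    """Hypothetical stand-in for mid-level CNN activations of one depth frame."""
    # e.g. a (channels, h, w) nonnegative activation tensor
    return np.maximum(np.random.randn(64, 28, 28), 0.0)

def pca_project(X, k):
    """Project the row vectors of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0, keepdims=True)
    # economy SVD: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

video = [np.zeros((240, 320)) for _ in range(30)]           # toy depth video
feats = np.stack([cnn_features(f).ravel() for f in video])  # (frames, d)
codes = pca_project(feats, k=16)                            # compact descriptors
print(codes.shape)  # (30, 16)
```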

Paper Details

Authors:
Manh-Quan Bui, Viet-Hang Duong, Tzu-Chiang Tai, and Jia-Ching Wang
Submitted On:
4 October 2018 - 6:02am

Document Files

ICIP_2018_Poster.pdf (24 downloads)
CNN_PCA_FTP_Action_Recog_Paper.pdf (23 downloads)

[1] Manh-Quan Bui, Viet-Hang Duong, Tzu-Chiang Tai, and Jia-Ching Wang, "DEPTH HUMAN ACTION RECOGNITION BASED ON CONVOLUTION NEURAL NETWORKS AND PRINCIPAL COMPONENT ANALYSIS", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3387. Accessed: Dec. 13, 2018.

Low Rank Fourier Ptychography

Paper Details

Authors:
Zhengyu Chen, Gauri Jagatap, Seyedehsara Nayer, Chinmay Hegde, Namrata Vaswani
Submitted On:
23 April 2018 - 11:43am

Document Files

poster_lrptych_new.pdf (98 downloads)

[1] Zhengyu Chen, Gauri Jagatap, Seyedehsara Nayer, Chinmay Hegde, Namrata Vaswani, "Low Rank Fourier Ptychography", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3150. Accessed: Dec. 13, 2018.

A 203 FPS VLSI ARCHITECTURE OF IMPROVED DENSE TRAJECTORIES FOR REAL-TIME HUMAN ACTION RECOGNITION


This paper introduces an architecture with high throughput, low on-chip memory, and efficient data access for Improved Dense Trajectories (iDT), a video representation for real-time action recognition. The iDT feature can capture long-term motion cues better than any existing deep feature, which makes it crucial in state-of-the-art action recognition systems.
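
As context for the iDT representation the hardware accelerates, the following CPU sketch shows plain dense-trajectory tracking with dense optical flow; it omits the camera-motion compensation and HOG/HOF/MBH descriptors of full iDT and has nothing to do with the paper's VLSI design, so treat it only as a rough software reference under these assumptions.

```python
# Simplified software sketch of dense-trajectory tracking (CPU, not the
# paper's VLSI design): points sampled on a grid are propagated through
# dense optical flow between consecutive frames.
import numpy as np
import cv2

def track_dense_trajectories(frames, step=16, length=15):
    h, w = frames[0].shape
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    tracks = [[(float(x), float(y))] for x, y in zip(xs.ravel(), ys.ravel())]
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        for tr in tracks:
            x, y = tr[-1]
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < w and 0 <= yi < h and len(tr) <= length:
                dx, dy = flow[yi, xi]               # displacement at the point
                tr.append((x + float(dx), y + float(dy)))
    return tracks

# toy grayscale video: 16 random frames
video = [np.random.randint(0, 255, (120, 160), np.uint8) for _ in range(16)]
trajs = track_dense_trajectories(video)
print(len(trajs), "trajectories of length", len(trajs[0]))
```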

Paper Details

Authors:
Zhi-Yi Lin, Jia-Lin Chen, Liang-Gee Chen
Submitted On:
22 April 2018 - 10:53am

Document Files

ICASSP 2018.pdf (113 downloads)

[1] Zhi-Yi Lin, Jia-Lin Chen, Liang-Gee Chen, "A 203 FPS VLSI ARCHITECTURE OF IMPROVED DENSE TRAJECTORIES FOR REAL-TIME HUMAN ACTION RECOGNITION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3132. Accessed: Dec. 13, 2018.

RATE-DISTORTION OPTIMIZED ILLUMINATION ESTIMATION FOR WAVELET-BASED VIDEO CODING


We propose a rate-distortion optimized framework for estimating illumination changes (lighting variations, fade in/out effects) in a highly scalable coding system. Illumination variations are realized using multiplicative factors in the image domain and are estimated considering the coding cost of the illumination field and input frames, which are first subject to a temporal Lifting-based Illumination Adaptive Transform (LIAT). The coding cost is modelled by an L1-norm optimization problem which is derived to approximate
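
To make the multiplicative illumination model concrete, the snippet below estimates a single gain factor between a reference and a current block, once by least squares and once by an L1 (weighted-median) criterion; it is a toy illustration under stated assumptions, not the paper's rate-distortion optimized LIAT framework.

```python
# Toy illustration (assumptions throughout): estimate a per-block
# multiplicative illumination factor g mapping a reference block to the
# current block, by an L2 and by a robust L1 criterion.
import numpy as np

def gain_l2(ref, cur):
    """Least-squares gain: argmin_g ||cur - g*ref||_2^2."""
    denom = float(np.dot(ref.ravel(), ref.ravel()))
    return float(np.dot(ref.ravel(), cur.ravel())) / denom if denom else 1.0

def gain_l1(ref, cur):
    """Robust gain: argmin_g ||cur - g*ref||_1 = weighted median of cur/ref."""
    r, c = ref.ravel().astype(float), cur.ravel().astype(float)
    mask = np.abs(r) > 1e-6
    ratios, weights = c[mask] / r[mask], np.abs(r[mask])
    order = np.argsort(ratios)
    cum = np.cumsum(weights[order])
    return float(ratios[order][np.searchsorted(cum, 0.5 * cum[-1])])

ref = np.random.rand(16, 16) * 200 + 20
cur = 0.7 * ref + np.random.randn(16, 16)      # simulated fade with noise
print(gain_l2(ref, cur), gain_l1(ref, cur))    # both should be close to 0.7
```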

Paper Details

Authors:
Maryam Haghighat, Reji Mathew, Aous Naman, Sean Young and David Taubman
Submitted On:
21 April 2018 - 1:54am

Document Files

ICASSP2018 (131 downloads)

[1] Maryam Haghighat, Reji Mathew, Aous Naman, Sean Young and David Taubman, "RATE-DISTORTION OPTIMIZED ILLUMINATION ESTIMATION FOR WAVELET-BASED VIDEO CODING", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3121. Accessed: Dec. 13, 2018.

ORTHOGONALLY REGULARIZED DEEP NETWORKS FOR IMAGE SUPER-RESOLUTION

Paper Details

Authors:
HOJJAT SEYED MOUSAVI, VISHAL MONGA
Submitted On:
21 April 2018 - 1:45am

Document Files

poster_ORDSR_2.pdf (157 downloads)

[1] HOJJAT SEYED MOUSAVI, VISHAL MONGA, "ORTHOGONALLY REGULARIZED DEEP NETWORKS FOR IMAGE SUPER-RESOLUTION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3120. Accessed: Dec. 13, 2018.

OBJECT-ORIENTED ANOMALY DETECTION IN SURVEILLANCE VIDEOS


Detecting and localizing anomalies in surveillance videos is an ongoing challenge. Most existing methods are patch- or trajectory-based, which lack semantic understanding of scenes and may split targets into pieces. To handle this problem, this paper proposes a novel and effective algorithm that incorporates deep object detection and tracking with full utilization of spatial and temporal information.
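
As a generic illustration of object-level reasoning (not the paper's algorithm), the sketch below links per-frame detections from a hypothetical detector by IoU and flags objects whose apparent speed exceeds a threshold; the detector stand-in, the IoU threshold, and the speed threshold are all assumptions.

```python
# Generic illustration only: detections from a per-frame object detector
# (stand-in function) are linked across frames by IoU, and an object is
# flagged as anomalous if its tracked speed exceeds a threshold.
# Box format: (x1, y1, x2, y2).
import numpy as np

def detect_objects(frame):
    """Hypothetical stand-in for a deep detector; returns a list of boxes."""
    return [(10.0, 10.0, 50.0, 80.0)]

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def center(b):
    return np.array([(b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0])

def anomalous_speeds(frames, speed_thresh=20.0):
    prev, flags = [], []
    for frame in frames:
        curr = detect_objects(frame)
        for box in curr:
            matches = [p for p in prev if iou(p, box) > 0.3]
            if matches:
                speed = np.linalg.norm(center(box) - center(matches[0]))
                flags.append(speed > speed_thresh)
        prev = curr
    return flags

print(anomalous_speeds([None] * 5))  # all False for the static stand-in boxes
```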

Paper Details

Authors:
Xiaodan Li, Weihai Li, Bin Liu, Qiankun Liu, Nenghai Yu
Submitted On:
20 April 2018 - 10:55am

Document Files

ICASSP2018-2394.pdf (175 downloads)

[1] Xiaodan Li, Weihai Li, Bin Liu, Qiankun Liu, Nenghai Yu, "OBJECT-ORIENTED ANOMALY DETECTION IN SURVEILLANCE VIDEOS", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3108. Accessed: Dec. 13, 2018.

END-TO-END LOW-RESOURCE LIP-READING WITH MAXOUT CNN AND LSTM

Paper Details

Authors:
Ivan Fung, Brian Mak
Submitted On:
20 April 2018 - 3:22am

Document Files

real_gray240.pdf (159 downloads)

[1] Ivan Fung, Brian Mak, "END-TO-END LOW-RESOURCE LIP-READING WITH MAXOUT CNN AND LSTM", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3091. Accessed: Dec. 13, 2018.

DEPTH SUPER-RESOLUTION USING JOINT ADAPTIVE WEIGHTED LEAST SQUARES AND PATCHING GRADIENT


This paper presents a flexible framework for the challenging task of color-guided depth upsampling. Some state-of-the-art approaches use an aligned RGB image to guide depth recovery; unfortunately, such methods may produce texture-copying and edge-blurring artifacts. To address these difficulties, we propose an adaptive weighted least squares framework that flexibly chooses different guidance weights for different conditions.
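
The principle behind guidance-weighted least squares can be seen in the 1-D sketch below, which smooths a noisy depth signal while disabling smoothing across edges of a guidance (color) signal; it is a minimal illustration and does not reproduce the paper's adaptive weighting or patching-gradient terms.

```python
# Minimal 1-D sketch of guidance-weighted least squares (WLS) smoothing.
# Solves argmin_d ||d - d0||^2 + lam * sum_i w_i (d_i - d_{i+1})^2,
# with w_i small across guidance (color) edges so depth edges are preserved.
import numpy as np

def wls_1d(d0, guide, lam=10.0, sigma=0.1):
    n = len(d0)
    # smoothness weights from the guidance signal: small where guide has edges
    w = np.exp(-(np.diff(guide) ** 2) / (2.0 * sigma ** 2))
    D = np.eye(n - 1, n) - np.eye(n - 1, n, k=1)      # forward differences
    A = np.eye(n) + lam * D.T @ (w[:, None] * D)
    return np.linalg.solve(A, d0)

guide = np.concatenate([np.zeros(32), np.ones(32)])    # a color edge
d0 = guide * 2.0 + 0.3 * np.random.randn(64)           # noisy low-quality depth
d = wls_1d(d0, guide)
print(round(float(d[:32].mean()), 2), round(float(d[32:].mean()), 2))  # ~0.0, ~2.0
```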

Paper Details

Authors:
Yuyuan LI, Jiarui Sun, Bingshu Wang, Yong Zhao
Submitted On:
20 April 2018 - 1:54am

Document Files

Depth map super-resolution, ToF, WLS, Patching-gradient method, De-noising (106 downloads)

[1] Yuyuan LI, Jiarui Sun, Bingshu Wang, Yong Zhao, "DEPTH SUPER-RESOLUTION USING JOINT ADAPTIVE WEIGHTED LEAST SQUARES AND PATCHING GRADIENT", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3085. Accessed: Dec. 13, 2018.

HARD SHADOWS REMOVAL USING AN APPROXIMATE ILLUMINATION INVARIANT


Hard shadow detection and removal from foreground masks is a challenging step in change detection. This paper presents a simple and effective method for handling hard shadows, which consist of an interior portion and a boundary portion. A pixel-wise neighborhood ratio is calculated to remove most of the interior shadow points. For the boundaries of shadow regions, we take advantage of color constancy to eliminate the edges of hard shadows and obtain relatively accurate object contours. Morphological processing is then applied to enhance the integrity of the objects.
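
A rough sketch of the two ingredients named above, a neighborhood-consistency test on the frame-to-background ratio followed by morphological closing, is given below; thresholds and window sizes are illustrative assumptions, not the paper's settings.

```python
# Rough sketch under stated assumptions (not the paper's exact method): a
# candidate shadow pixel is one whose intensity ratio to the background is
# in a plausible darkening range and consistent with its neighborhood;
# morphological closing then restores object integrity.
import numpy as np
import cv2

def remove_inside_shadows(frame, background, fg_mask,
                          win=7, lo=0.4, hi=0.95, tol=0.05):
    frame = frame.astype(np.float32) + 1.0
    background = background.astype(np.float32) + 1.0
    ratio = frame / background                       # shadow darkens uniformly
    local_mean = cv2.boxFilter(ratio, -1, (win, win))
    stable = np.abs(ratio - local_mean) < tol        # neighborhood-consistent
    shadow = (ratio > lo) & (ratio < hi) & stable & (fg_mask > 0)
    cleaned = fg_mask.copy()
    cleaned[shadow] = 0                              # drop interior shadow pixels
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)

# toy data: a gray background, a darker "shadow" region marked as foreground
bg = np.full((64, 64), 120, np.uint8)
fr = bg.copy(); fr[20:40, 20:40] = 80                # shadowed area
fg = np.zeros((64, 64), np.uint8); fg[20:40, 20:40] = 255
# interior shadow pixels are removed; mostly the boundary band remains
print(int(remove_inside_shadows(fr, bg, fg).sum()), "<", int(fg.sum()))
```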

Paper Details

Authors:
Submitted On:
20 April 2018 - 1:47am

Document Files

BingshuWang_Poster_2018ICASSP.pdf (126 downloads)

[1] , "HARD SHADOWS REMOVAL USING AN APPROXIMATE ILLUMINATION INVARIANT", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3082. Accessed: Dec. 13, 2018.
@article{3082-18,
url = {http://sigport.org/3082},
author = { },
publisher = {IEEE SigPort},
title = {HARD SHADOWS REMOVAL USING AN APPROXIMATE ILLUMINATION INVARIANT},
year = {2018} }
TY - EJOUR
T1 - HARD SHADOWS REMOVAL USING AN APPROXIMATE ILLUMINATION INVARIANT
AU -
PY - 2018
PB - IEEE SigPort
UR - http://sigport.org/3082
ER -
. (2018). HARD SHADOWS REMOVAL USING AN APPROXIMATE ILLUMINATION INVARIANT. IEEE SigPort. http://sigport.org/3082
, 2018. HARD SHADOWS REMOVAL USING AN APPROXIMATE ILLUMINATION INVARIANT. Available at: http://sigport.org/3082.
. (2018). "HARD SHADOWS REMOVAL USING AN APPROXIMATE ILLUMINATION INVARIANT." Web.
1. . HARD SHADOWS REMOVAL USING AN APPROXIMATE ILLUMINATION INVARIANT [Internet]. IEEE SigPort; 2018. Available from : http://sigport.org/3082
