ICIP 2018

The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for presenting technological advances and research results in theoretical, experimental, and applied image and video processing. Held annually since 1994, ICIP brings together leading engineers and scientists in image and video processing from around the world.

DEPTH FROM GAZE


Eye trackers are found on various electronic devices. In this paper, we propose to exploit the gaze information acquired by an eye tracker for depth estimation. The data collected from the eye tracker in a fixation interval are used to estimate the depth of a gazed object. The proposed method can be used to construct a sparse depth map of an augmented reality space. The resulting depth map can be applied to, for example, controlling the visual information displayed to the viewer.
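For illustration, the depth of a gazed object can be triangulated from the two eyes' gaze rays, whose vergence angle shrinks with distance. The sketch below is a minimal vergence-based triangulation in Python; the head-fixed coordinate frame, the eye positions, and the closest-point formulation are assumptions made for the example, not the authors' exact algorithm.

```python
# Illustrative sketch (not the paper's exact method): estimate the depth of
# a gazed point as the midpoint of the shortest segment between the left-
# and right-eye gaze rays. Coordinates are in a hypothetical head-fixed
# frame, in meters, with z pointing away from the viewer.
import numpy as np

def gaze_depth(o_l, d_l, o_r, d_r):
    """o_l, o_r: eye positions (3,); d_l, d_r: unit gaze directions (3,)."""
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    w = o_l - o_r
    denom = a * c - b * b            # ~0 when rays are parallel (gaze at infinity)
    if abs(denom) < 1e-9:
        return None, np.inf
    t = (b * (d_r @ w) - c * (d_l @ w)) / denom
    s = (a * (d_r @ w) - b * (d_l @ w)) / denom
    p = 0.5 * ((o_l + t * d_l) + (o_r + s * d_r))   # midpoint of closest approach
    return p, p[2]

# Example: eyes 6.4 cm apart, both fixating a point 0.5 m ahead.
o_l, o_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.5])
d_l = (target - o_l) / np.linalg.norm(target - o_l)
d_r = (target - o_r) / np.linalg.norm(target - o_r)
point, depth = gaze_depth(o_l, d_l, o_r, d_r)
print(point, depth)   # ~[0, 0, 0.5], 0.5
```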

Paper Details

Authors:
Tzu-Sheng Kuo, Kuang-Tsu Shih, Sheng-Lung Chung, Homer Chen
Submitted On:
17 October 2018 - 6:44pm

Document Files

Poster.pdf

[1] Tzu-Sheng Kuo, Kuang-Tsu Shih, Sheng-Lung Chung, Homer Chen, "DEPTH FROM GAZE", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3398. Accessed: Dec. 10, 2018.

Motion Inpainting by an Image-Based Geodesic AMLE Method


This work presents an automatic method for optical flow inpainting. Given a video, each frame domain is endowed with a Riemannian metric based on the video pixel values. The missing optical flow is recovered by solving the Absolutely Minimizing Lipschitz Extension (AMLE) partial differential equation on the Riemannian manifold. An efficient numerical algorithm is proposed using eikonal operators for nonlinear elliptic partial differential equations on a finite graph.
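To make the construction concrete, the sketch below runs a simplified discrete AMLE iteration on a 4-connected pixel graph, with edge lengths that grow with image contrast as a stand-in for the image-based Riemannian metric. It assumes one optical-flow channel with NaNs marking the missing region, and it is a slow illustrative loop, not the paper's eikonal-based numerical scheme.

```python
# Simplified AMLE inpainting on a 4-connected pixel graph. At each masked
# pixel the discrete infinity-Laplacian (AMLE) update is applied:
#   u = (d_- * u_+ + d_+ * u_-) / (d_+ + d_-),
# where u_+/u_- are the neighbor values of steepest ascent/descent and
# d_+/d_- the corresponding (image-dependent) edge lengths.
import numpy as np

def amle_inpaint(flow, img, beta=10.0, iters=2000):
    """flow: one flow channel with NaNs in the hole; img: grayscale guide."""
    mask = np.isnan(flow)
    u = np.where(mask, np.nanmean(flow), flow)
    H, W = u.shape
    for _ in range(iters):
        u_new = u.copy()
        for y in range(H):
            for x in range(W):
                if not mask[y, x]:
                    continue
                vals, dists = [], []
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # Edge length grows across strong intensity differences.
                        dists.append(1.0 + beta * abs(img[y, x] - img[ny, nx]))
                        vals.append(u[ny, nx])
                vals, dists = np.array(vals), np.array(dists)
                slopes = (vals - u[y, x]) / dists
                i_p, i_m = slopes.argmax(), slopes.argmin()
                u_new[y, x] = (dists[i_m] * vals[i_p] + dists[i_p] * vals[i_m]) \
                              / (dists[i_p] + dists[i_m])
        u = u_new
    return u
```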

Paper Details

Authors:
Gloria Haro, Coloma Ballester
Submitted On:
4 October 2018 - 9:30am

Document Files

poster_icip18.pdf

[1] Gloria Haro, Coloma Ballester, "Motion Inpainting by an Image-Based Geodesic AMLE Method", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3397. Accessed: Dec. 10, 2018.

A STUDY ON THE 4D SPARSITY OF JPEG PLENO LIGHT FIELDS USING THE DISCRETE COSINE TRANSFORM


In this work, we study the 4D sparsity of light fields using the 4D Discrete Cosine Transform as the main tool. We analyze the two JPEG Pleno light field datasets, namely the lenslet-based and the High-Density Camera Array (HDCA) datasets. The results suggest that the lenslet datasets exhibit a high 4D redundancy, with a larger inter-view sparsity than the intra-view one. For the HDCA datasets, there is also 4D redundancy worth exploiting, though to a smaller degree. Unlike the lenslet case, the intra-view redundancy is much larger than the inter-view one.
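The kind of sparsity measurement described above can be sketched as follows: apply an orthonormal 4D DCT to a light field array LF[u, v, s, t] and report how much of the signal energy the largest coefficients capture. The array below is a synthetic stand-in; a JPEG Pleno light field would be loaded in its place.

```python
# Energy-compaction sketch: 4D DCT of a light field LF[u, v, s, t], then the
# fraction of total energy captured by the largest transform coefficients.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
# Smooth-ish synthetic stand-in for a 9x9-view, 64x64-pixel light field.
LF = rng.standard_normal((9, 9, 64, 64)).cumsum(axis=2).cumsum(axis=3)

C = dctn(LF, type=2, norm='ortho')              # DCT over all four dimensions
energy = np.sort((C ** 2).ravel())[::-1]        # coefficient energies, descending
cum = np.cumsum(energy) / energy.sum()

for frac in (0.01, 0.05, 0.10):
    k = int(frac * energy.size)
    print(f"top {frac:.0%} of coefficients -> {cum[k - 1]:.1%} of the energy")
```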

Paper Details

Authors:
Gustavo Alves, Márcio P. Pereira, Murilo B. de Carvalho, Fernando Pereira, Carla L. Pagliari, Vanessa Testoni, Eduardo A. B. da Silva
Submitted On:
4 October 2018 - 9:24am

Document Files

Poster

[1] Gustavo Alves, Márcio P. Pereira, Murilo B. de Carvalho, Fernando Pereira, Carla L. Pagliari, Vanessa Testoni, Eduardo A. B. da Silva, "A STUDY ON THE 4D SPARSITY OF JPEG PLENO LIGHT FIELDS USING THE DISCRETE COSINE TRANSFORM", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3396. Accessed: Dec. 10, 2018.

TS-Net: Combining Modality Specific and Common Features for Multimodal Patch Matching

Paper Details

Authors:
Sovann EN, Alexis LECHERVY, Frédéric JURIE
Submitted On:
4 October 2018 - 9:18am

Document Files

ICIP_18_poster.pdf

[1] Sovann EN, Alexis LECHERVY, Frédéric JURIE, "TS-Net: Combining Modality Specific and Common Features for Multimodal Patch Matching", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3395. Accessed: Dec. 10, 2018.

GESTALT INTEREST POINTS WITH A NEURAL NETWORK FOR MAKEUP-ROBUST FACE RECOGNITION


In this paper, we propose a novel approach to makeup-robust face recognition. Most face recognition schemes fail to generalize well when there is a large difference between the training and testing sets, e.g., due to makeup changes. Our method focuses on the problem of determining whether face images taken before and after makeup refer to the same identity. Work on this fundamental research topic benefits various real-world applications, for example, automated passport control, security in general, and surveillance.

Paper Details

Authors:
Horst Eidenberger
Submitted On:
4 October 2018 - 9:15am

Document Files

Poster_A1.pdf

[1] Horst Eidenberger, "GESTALT INTEREST POINTS WITH A NEURAL NETWORK FOR MAKEUP-ROBUST FACE RECOGNITION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3394. Accessed: Dec. 10, 2018.

Improved Prediction via Thresholding Transform Coefficients


This work presents a thresholding method for processing the predicted samples in the state-of-the-art High Efficiency Video Coding (HEVC) standard. The method applies an integer-based approximation of the discrete cosine transform to an extended prediction block and sets transform coefficients beneath a certain threshold to zero. Transforming back into the sample domain yields the improved prediction signal. The method is incorporated into a software implementation that conforms to the HEVC standard and applies to both intra and inter prediction.
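A minimal sketch of the core operation, with a floating-point orthonormal DCT standing in for HEVC's integer transform approximation: transform the (extended) prediction block, zero the coefficients below the threshold, and transform back.

```python
# Hard-thresholding of prediction-block transform coefficients. HEVC uses an
# integer DCT approximation; scipy's floating-point DCT stands in for it here.
import numpy as np
from scipy.fft import dctn, idctn

def threshold_prediction(pred_block, threshold):
    """pred_block: 2D array of predicted samples (e.g., an extended block)."""
    coeffs = dctn(pred_block, type=2, norm='ortho')
    coeffs[np.abs(coeffs) < threshold] = 0.0     # zero small coefficients
    return idctn(coeffs, type=2, norm='ortho')   # back to the sample domain

# Example: a noisy 16x16 gradient block; thresholding suppresses the noise.
rng = np.random.default_rng(1)
block = np.add.outer(np.arange(16.0), np.arange(16.0)) + rng.normal(0, 2, (16, 16))
smoothed = threshold_prediction(block, threshold=4.0)
```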

Paper Details

Authors:
Michael Schäfer, Jonathan Pfaff, Jennifer Rasch, Tobias Hinz, Heiko Schwarz, Tung Nguyen, Gerhard Tech, Detlev Marpe, Thomas Wiegand
Submitted On:
8 October 2018 - 6:53am

Document Files

Thresholding Poster R6.pdf

[1] Michael Schäfer, Jonathan Pfaff, Jennifer Rasch, Tobias Hinz, Heiko Schwarz, Tung Nguyen, Gerhard Tech, Detlev Marpe, Thomas Wiegand, "Improved Prediction via Thresholding Transform Coefficients", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3393. Accessed: Dec. 10, 2018.

DO DEEP-LEARNING SALIENCY MODELS REALLY MODEL SALIENCY?


Visual attention allows the human visual system to deal effectively with the huge flow of visual information acquired by the retina. Since the early 2000s, the human visual system has been modelled in computer vision to predict abnormal, rare, and surprising data. Attention is a product of the continuous interaction between bottom-up (mainly feature-based) and top-down (mainly learning-based) information. Deep neural networks (DNNs) are now well established in visual attention modelling, with very effective models.

Paper Details

Authors:
Phutphalla Kong, Matei Mancas, Nimol Thuon, Seng Kheang, Bernard Gosselin
Submitted On:
4 October 2018 - 9:19am

Document Files

poster_ICIP2018.pdf

[1] Phutphalla Kong, Matei Mancas, Nimol Thuon, Seng Kheang, Bernard Gosselin, "DO DEEP-LEARNING SALIENCY MODELS REALLY MODEL SALIENCY?", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3392. Accessed: Dec. 10, 2018.

FLOWFIELDS++: ACCURATE OPTICAL FLOW CORRESPONDENCES MEET ROBUST INTERPOLATION

Paper Details

Authors:
Christian Bailer, Oliver Wasenmüller, Didier Stricker
Submitted On:
4 October 2018 - 9:03am

Document Files

schuster2018ffpp_poster

[1] Christian Bailer, Oliver Wasenmüller, Didier Stricker, "FLOWFIELDS++: ACCURATE OPTICAL FLOW CORRESPONDENCES MEET ROBUST INTERPOLATION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3388. Accessed: Dec. 10, 2018.

DEPTH HUMAN ACTION RECOGNITION BASED ON CONVOLUTION NEURAL NETWORKS AND PRINCIPAL COMPONENT ANALYSIS


In this work, we address the problem of human action recognition under viewpoint variation. The proposed model combines a convolutional neural network (CNN) with principal component analysis (PCA). In this context, we pass real depth videos through a CNN model in a frame-wise manner. View-invariant features are extracted from intermediate convolution layers and treated as 3D nonnegative tensors.
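A minimal sketch of this frame-wise pipeline, with a torchvision ResNet standing in for the paper's backbone and flattened per-frame feature vectors standing in for the paper's 3D nonnegative tensors:

```python
# Frame-wise CNN feature extraction followed by PCA. The backbone, the chosen
# intermediate layer, and the flattening are assumptions made for the example.
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

backbone = models.resnet18(weights=None).eval()
# Mid-output: everything up to and including layer3 (256 x 14 x 14 maps).
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:7])

def video_features(frames):
    """frames: (T, 3, 224, 224) depth frames replicated to three channels."""
    with torch.no_grad():
        fmap = feature_extractor(frames)   # (T, 256, 14, 14)
    return fmap.flatten(1).numpy()         # one feature vector per frame

# Example: 16 random frames standing in for a depth video clip.
frames = torch.rand(16, 3, 224, 224)
feats = video_features(frames)
descriptors = PCA(n_components=8).fit_transform(feats)   # (16, 8)
```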

Paper Details

Authors:
Manh-Quan Bui, Viet-Hang Duong, Tzu-Chiang Tai, and Jia-Ching Wang
Submitted On:
4 October 2018 - 6:02am

Document Files

ICIP_2018_Poster.pdf

CNN_PCA_FTP_Action_Recog_Paper.pdf

[1] Manh-Quan Bui, Viet-Hang Duong, Tzu-Chiang Tai, and Jia-Ching Wang, "DEPTH HUMAN ACTION RECOGNITION BASED ON CONVOLUTION NEURAL NETWORKS AND PRINCIPAL COMPONENT ANALYSIS", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3387. Accessed: Dec. 10, 2018.
