
Image/Video Processing

Poster of pixel level data augmentation for semantic image segmentation using generative adversarial networks


Semantic segmentation is one of the fundamental topics in computer vision: it aims to assign a semantic label to every pixel of an image. An unbalanced semantic label distribution can degrade segmentation accuracy. In this paper, we investigate a data augmentation approach that balances the label distribution in order to improve segmentation performance. We propose using generative adversarial networks (GANs) to generate realistic images that improve the performance of semantic segmentation networks.
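The GAN architecture itself is not described in this summary. As an illustrative first step (not the authors' code), one might measure which labels are under-represented before deciding what to synthesize; the function names and the 5% threshold below are hypothetical choices:

```python
import numpy as np

def class_pixel_frequencies(label_maps, num_classes):
    """Fraction of pixels belonging to each semantic class across a dataset."""
    counts = np.zeros(num_classes, dtype=np.int64)
    for labels in label_maps:
        counts += np.bincount(labels.ravel(), minlength=num_classes)
    return counts / counts.sum()

def underrepresented_classes(freqs, threshold=0.05):
    """Classes whose pixel share falls below the threshold are augmentation targets."""
    return np.flatnonzero(freqs < threshold)

# Two toy 4x4 label maps with classes {0, 1, 2}; class 2 is rare.
maps = [np.zeros((4, 4), dtype=np.int64), np.ones((4, 4), dtype=np.int64)]
maps[1][0, 0] = 2
freqs = class_pixel_frequencies(maps, num_classes=3)
print(underrepresented_classes(freqs))  # class 2 occupies only 1/32 of the pixels
```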

Paper Details

Authors:
Shuangting Liu, Jiaqi Zhang, Yuxin Chen, Yifan Liu, Tao Wan
Submitted On:
8 May 2019 - 7:18am

Document Files

poster of the paper


[1] Shuangting Liu, Jiaqi Zhang, Yuxin Chen, Yifan Liu, Tao Wan, "Poster of pixel level data augmentation for semantic image segmentation using generative adversarial networks", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4071. Accessed: Jul. 20, 2019.

A Novel Framework Of Hand Localization And Hand Pose Estimation


In this paper, we propose a novel framework for hand localization and pose estimation from a single depth image. For hand localization, unlike most existing methods, which use heuristic strategies such as color segmentation, we propose Hierarchical Hand Location Networks (HHLN) to estimate the hand location from coarse to fine in depth images; the approach is robust to complex environments and efficient. It is first applied to a low-resolution octree of the whole depth image to produce a coarse hand region, and then constructs a high-resolution octree of that region for fine location estimation.
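The coarse-to-fine idea can be roughly illustrated with a two-level image pyramid standing in for the paper's octree hierarchy; the depth threshold, downsampling factor, and function name below are invented for the example:

```python
import numpy as np

def coarse_to_fine_hand_box(depth, coarse_factor=4, near_mm=800):
    """Toy coarse-to-fine localization: find the nearest blob in a downsampled
    depth map, then tighten its bounding box at full resolution."""
    coarse = depth[::coarse_factor, ::coarse_factor]
    ys, xs = np.where((coarse > 0) & (coarse < near_mm))  # candidate hand pixels
    if len(ys) == 0:
        return None
    # Coarse box scaled back up, then refined on the full-resolution map.
    r0, r1 = ys.min() * coarse_factor, (ys.max() + 1) * coarse_factor
    c0, c1 = xs.min() * coarse_factor, (xs.max() + 1) * coarse_factor
    region = depth[r0:r1, c0:c1]
    fy, fx = np.where((region > 0) & (region < near_mm))
    return (r0 + fy.min(), r0 + fy.max(), c0 + fx.min(), c0 + fx.max())

depth = np.full((64, 64), 2000)        # background at 2 m
depth[20:30, 40:50] = 500              # hand blob at 0.5 m
print(coarse_to_fine_hand_box(depth))  # tight box around the blob
```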

Paper Details

Authors:
Yunlong Che, Yuxiang Song, Yue Qi
Submitted On:
8 May 2019 - 4:41am

Document Files

poster


[1] Yunlong Che, Yuxiang Song, Yue Qi, "A Novel Framework Of Hand Localization And Hand Pose Estimation ", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4049. Accessed: Jul. 20, 2019.

MULTI-MODAL IMAGE STITCHING WITH NONLINEAR OPTIMIZATION


Despite significant advances in recent years, the problem of image stitching still lacks a robust solution. Most feature-based image stitching algorithms perform image alignment using either a homography-based transformation or content-preserving warping. The pairwise homography-based approach fails to handle parallax, whereas the content-preserving warping approach does not preserve the structural properties of the images. In this paper, we propose a nonlinear optimization that estimates global homographies from pairwise homography estimates and point correspondences.
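The consistency constraint behind such an optimization can be illustrated with a toy example: a global homography reached through two different pairwise paths should agree. The snippet below is not the authors' cost function; it shows the constraint on synthetic translations, where a real system would refine all global homographies jointly with a nonlinear least-squares solver minimizing sum ||H_j @ inv(H_i) - H_ij||^2 over all overlapping pairs:

```python
import numpy as np

def T(tx):
    """Toy homography: horizontal translation by tx pixels."""
    M = np.eye(3)
    M[0, 2] = tx
    return M

# Pairwise estimates H_ij mapping image i into image j (consistent by construction).
pairwise = {(0, 1): T(1.0), (1, 2): T(2.0), (0, 2): T(3.0)}

# Global homographies relative to image 0: the direct estimate of H_{0->2}
# should agree with the one composed through image 1.
H1 = pairwise[(0, 1)]
H2_direct = pairwise[(0, 2)]
H2_composed = pairwise[(1, 2)] @ H1
H2 = (H2_direct + H2_composed) / 2  # trivial fusion of the two paths
print(H2[0, 2])  # 3.0
```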

Paper Details

Authors:
Arindam Saha, Soumyadip Maity, Brojeshwar Bhowmick
Submitted On:
8 May 2019 - 3:17am

Document Files

Poster presentation


[1] Arindam Saha, Soumyadip Maity, Brojeshwar Bhowmick, "MULTI-MODAL IMAGE STITCHING WITH NONLINEAR OPTIMIZATION", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4033. Accessed: Jul. 20, 2019.

Video Quality Assessment for Encrypted Http Adaptive Streaming: Attention-based Hybrid RNN-HMM Model


End-to-end encryption makes it challenging for mobile network operators to assess the quality of HTTP Adaptive Streaming (HAS), where quality assessment is coarse-grained, e.g., detecting whether stalling occurred at any point during playback. To address this issue, this paper proposes an attention-based hybrid RNN-HMM model, which integrates an HMM with an attention mechanism to predict the player states. The model is trained and evaluated on the download speed and player state sequences of encrypted video sessions collected from YouTube.
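The HMM half of such a hybrid can be sketched with a standard log-domain forward pass over player states; the two-state setup and the emission scores below are placeholders for what the paper's attention-based RNN would actually produce:

```python
import numpy as np

def hmm_forward(obs_loglik, log_trans, log_prior):
    """Log-domain forward pass: marginal log-likelihood of an observation
    sequence under an HMM, here imagined over player states (e.g. playing vs.
    stalling). obs_loglik[t, j] is the log-likelihood of step t under state j,
    supplied by any emission model."""
    alpha = log_prior + obs_loglik[0]
    for t in range(1, len(obs_loglik)):
        # alpha_t[j] = obs[t, j] + logsumexp_i(alpha_{t-1}[i] + log_trans[i, j])
        alpha = obs_loglik[t] + np.logaddexp.reduce(alpha[:, None] + log_trans, axis=0)
    return np.logaddexp.reduce(alpha)

# Two sticky states, three observation steps favouring state 0.
log_trans = np.log([[0.9, 0.1], [0.1, 0.9]])
log_prior = np.log([0.5, 0.5])
obs = np.log([[0.8, 0.2]] * 3)
print(hmm_forward(obs, log_trans, log_prior))
```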

Paper Details

Authors:
Shuang Tang, Xiaowei Qin, Xiaohui Chen, Guo Wei
Submitted On:
7 May 2019 - 9:37pm

Document Files

2019-ICASSP-TangShuang-Paper#2932-poster.pdf


[1] Shuang Tang, Xiaowei Qin, Xiaohui Chen, Guo Wei, "Video Quality Assessment for Encrypted Http Adaptive Streaming: Attention-based Hybrid RNN-HMM Model", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3980. Accessed: Jul. 20, 2019.

HOW VIDEO OBJECT TRACKING IS AFFECTED BY IN-CAPTURE DISTORTIONS?


Video Object Tracking (VOT) in realistic scenarios is a difficult task. Image factors such as occlusion, clutter, confusion, object shape, and zooming, among others, affect the performance of video tracking methods. While these conditions do affect tracker performance, there is no clear distinction between scene-content challenges such as occlusion and clutter and challenges due to distortions introduced by the capture, compression, processing, and transmission of videos. This paper is concerned with the latter interpretation of quality as it affects VOT performance.

Paper Details

Authors:
Hernan Dario Benitez Restrepo, Ivan Cabezas
Submitted On:
7 May 2019 - 8:25pm

Document Files

ObjectTracking_4127


[1] Hernan Dario Benitez Restrepo, Ivan Cabezas, "HOW VIDEO OBJECT TRACKING IS AFFECTED BY IN-CAPTURE DISTORTIONS?", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3973. Accessed: Jul. 20, 2019.

Real-time tracker with fast recovery from target loss


In this paper, we introduce a variation of a state-of-the-art real-time tracker (CFNet) that adds robustness to target loss to the original algorithm without significant computational overhead. The new method is based on the assumption that the feature map can be used to estimate the tracking confidence more accurately.
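The paper's exact confidence estimate is not given in this summary; a common heuristic (not necessarily the authors' measure) for judging whether a correlation response map still locks onto the target is the peak-to-sidelobe ratio, sketched here:

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=2):
    """PSR of a correlation response map: (peak - sidelobe mean) / sidelobe std.
    Low values suggest the target may be lost."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    # Mask out a small window around the peak; the rest is the sidelobe.
    mask = np.ones_like(response, dtype=bool)
    r, c = peak_idx
    mask[max(r - exclude, 0):r + exclude + 1, max(c - exclude, 0):c + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

sharp = np.zeros((11, 11))
sharp[5, 5] = 1.0                     # confident, sharply peaked response
flat = np.full((11, 11), 0.5)         # ambiguous, featureless response
print(peak_to_sidelobe_ratio(sharp) > peak_to_sidelobe_ratio(flat))  # True
```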

Paper Details

Authors:
Alessandro Bay, Panagiotis Sidiropoulos, Eduard Vazquez, Michele Sasdelli
Submitted On:
7 May 2019 - 5:10pm

Document Files

FRTL_poster.pdf


[1] Alessandro Bay, Panagiotis Sidiropoulos, Eduard Vazquez, Michele Sasdelli, "Real-time tracker with fast recovery from target loss", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3957. Accessed: Jul. 20, 2019.

Cross-Language Speech Dependent Lip-Synchronization


Videos of people speaking are hard to follow for audiences across international borders who do not understand the language. Such speech videos are often supplemented with subtitles, but these hamper the viewing experience because the viewer's attention is divided. Simply dubbing the audio in a different language makes the video appear unnatural due to unsynchronized lip motion. In this paper, we propose a system for automated cross-language lip synchronization for re-dubbed videos.

Paper Details

Authors:
Abhishek Jha, Vikram Voleti, Vinay Namboodiri, C. V. Jawahar
Submitted On:
7 May 2019 - 1:43pm

Document Files

20190517_Brighton_ICASSP_Cross_Language_Speech_Dependent_Lip_Synchronization.pdf


[1] Abhishek Jha, Vikram Voleti, Vinay Namboodiri, C. V. Jawahar, "Cross-Language Speech Dependent Lip-Synchronization", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3934. Accessed: Jul. 20, 2019.

TOWARDS GENERATING AMBISONICS USING AUDIO-VISUAL CUE FOR VIRTUAL REALITY


Ambisonics, i.e., full-sphere surround sound, is essential for pairing with 360° visual content to provide a realistic virtual reality (VR) experience. While the capture of 360° visual content has gained tremendous momentum recently, estimating the corresponding spatial sound remains challenging because it requires sound-field microphones or information about the sound-source locations. In this paper, we introduce the novel problem of generating Ambisonics in 360° videos using audio-visual cues.
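For reference, the conventional signal-processing baseline (not the paper's learned approach) is to encode a mono source from a known direction into first-order Ambisonics B-format channels:

```python
import numpy as np

def encode_foa(signal, azimuth, elevation):
    """Encode a mono signal into first-order Ambisonics B-format (FuMa WXYZ)
    given the source direction in radians."""
    w = signal / np.sqrt(2.0)                         # omnidirectional component
    x = signal * np.cos(azimuth) * np.cos(elevation)  # front-back
    y = signal * np.sin(azimuth) * np.cos(elevation)  # left-right
    z = signal * np.sin(elevation)                    # up-down
    return np.stack([w, x, y, z])

s = np.ones(4)                                 # constant test signal
b = encode_foa(s, azimuth=0.0, elevation=0.0)  # source straight ahead
print(b[:, 0])                                 # W=1/sqrt(2), X=1, Y=0, Z=0
```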

Paper Details

Authors:
Aakanksha Rana, Cagri Ozcinar, Aljosa Smolic
Submitted On:
7 May 2019 - 1:26pm

Document Files

Icassp2019_Vsense.pdf


[1] Aakanksha Rana, Cagri Ozcinar, Aljosa Smolic, "TOWARDS GENERATING AMBISONICS USING AUDIO-VISUAL CUE FOR VIRTUAL REALITY", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3929. Accessed: Jul. 20, 2019.

GRAPH-BASED RGB-D IMAGE SEGMENTATION USING COLOR-DIRECTIONAL-REGION MERGING


Color and depth information provided simultaneously in RGB-D images can be used to segment scenes into disjoint regions. In this paper, we propose a graph-based segmentation method for RGB-D images in which an adaptive, data-driven combination of color and normal variation constructs the dissimilarity between two adjacent pixels, and a novel region-merging threshold exploiting normal information in adjacent regions controls the progress of region merging.
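A minimal sketch of such a blended dissimilarity is shown below; the fixed `alpha` stands in for the paper's adaptive, data-driven weighting, and the normalization constants are invented for the example:

```python
import numpy as np

def edge_weight(color_a, color_b, normal_a, normal_b, alpha=0.5):
    """Dissimilarity between two adjacent RGB-D pixels: a blend of color
    distance and surface-normal deviation (1 - cosine similarity).
    Normals are assumed to be unit vectors."""
    color_d = np.linalg.norm(np.asarray(color_a, float) - color_b) / (255 * np.sqrt(3))
    normal_d = 1.0 - np.dot(normal_a, normal_b)
    return alpha * color_d + (1 - alpha) * normal_d

# Same color but perpendicular normals: the geometry dominates the weight.
w = edge_weight([10, 10, 10], [10, 10, 10], [0, 0, 1], [1, 0, 0])
print(w)  # 0.5 with alpha=0.5
```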

Paper Details

Authors:
Xiong Pan, Zejun Zhang, Yizhang Liu, Changcai Yang, Qiufeng Chen, Li Cheng, Jiaxiang Lin, Riqing Chen
Submitted On:
18 April 2019 - 11:15pm

Document Files

GRAPH-BASED RGB-D IMAGE SEGMENTATION USING COLOR-DIRECTIONAL-REGION MERGING.pdf


[1] Xiong Pan, Zejun Zhang, Yizhang Liu, Changcai Yang, Qiufeng Chen, Li Cheng, Jiaxiang Lin, Riqing Chen, "GRAPH-BASED RGB-D IMAGE SEGMENTATION USING COLOR-DIRECTIONAL-REGION MERGING", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3889. Accessed: Jul. 20, 2019.

PATCH-AWARE AVERAGING FILTER FOR SCALING IN POINT CLOUD COMPRESSION


With the development of augmented reality, the delivery and storage of 3D content have become an important research area. Among the point cloud compression proposals collected by MPEG, Apple's Test Model Category 2 (TMC2) achieves the highest quality for 3D sequences under a bitrate constraint. However, the TMC2 framework is not spatially scalable. In this paper, we add interpolation components that make TMC2 suitable for flexible resolutions. We apply a patch-aware averaging filter to eliminate most of the outliers that result from the interpolation.
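As an illustration of per-patch outlier handling (a crude stand-in for the paper's averaging filter; the radius and neighbor count are invented), points with too few same-patch neighbors can be discarded:

```python
import numpy as np

def patch_aware_filter(points, patch_ids, radius=1.5, min_neighbors=2):
    """Keep only points that have enough same-patch neighbours within a radius,
    discarding isolated points such as interpolation outliers."""
    keep = []
    for i, p in enumerate(points):
        same = (patch_ids == patch_ids[i])
        d = np.linalg.norm(points[same] - p, axis=1)
        if (d < radius).sum() - 1 >= min_neighbors:  # subtract the point itself
            keep.append(i)
    return np.array(keep)

pts = np.array([[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0], [10, 10, 10.]])
ids = np.array([0, 0, 0, 0])
print(patch_aware_filter(pts, ids))  # the isolated point at (10,10,10) is dropped
```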

Paper Details

Submitted On:
29 November 2018 - 2:08pm

Document Files

GlobalSIP_Poster_revised.pdf


[1] "PATCH-AWARE AVERAGING FILTER FOR SCALING IN POINT CLOUD COMPRESSION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3832. Accessed: Jul. 20, 2019.
