
Virtual reality and 3D imaging

TOWARDS MODELLING OF VISUAL SALIENCY IN POINT CLOUDS FOR IMMERSIVE APPLICATIONS


Modelling human visual attention is of great importance in computer vision and has been widely explored for 3D imaging. Yet, in the absence of ground-truth data, it is unclear whether the predictions of such models align with actual human viewing behavior in virtual reality environments. In this study, we work towards solving this problem by conducting an eye-tracking experiment in an immersive 3D scene that offers 6 degrees of freedom. A wide range of static point cloud models is inspected by human subjects, while their gaze is captured in real time.
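The captured gaze data can be aggregated into per-point saliency values. As an illustrative sketch (not the authors' pipeline), the following splats each 3D gaze hit onto nearby points with a Gaussian kernel; the function name, the `sigma` bandwidth, and the input layout are assumptions:

```python
import numpy as np

def fixation_density(points, gaze_hits, sigma=0.05):
    """Accumulate a per-point saliency score from gaze hit positions.

    points:    (N, 3) array of point cloud coordinates.
    gaze_hits: (M, 3) array of 3D positions where gaze rays hit the model.
    sigma:     spatial bandwidth of the Gaussian splat (model units).
    """
    scores = np.zeros(len(points))
    for hit in gaze_hits:
        d2 = np.sum((points - hit) ** 2, axis=1)
        scores += np.exp(-d2 / (2.0 * sigma ** 2))
    if scores.max() > 0:
        scores /= scores.max()  # normalise to [0, 1]
    return scores
```

Normalising to [0, 1] makes the resulting map comparable across models with different numbers of gaze samples.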

Paper Details

Authors:
Evangelos Alexiou, Peisen Xu, Touradj Ebrahimi
Submitted On:
24 September 2019 - 7:29am

Document Files

2019-ICIP-presentation.pdf


[1] Evangelos Alexiou, Peisen Xu, Touradj Ebrahimi, "TOWARDS MODELLING OF VISUAL SALIENCY IN POINT CLOUDS FOR IMMERSIVE APPLICATIONS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4831. Accessed: Oct. 15, 2019.

BODYFITR: Robust automatic 3D human body fitting


This paper proposes BODYFITR, a fully automatic method to fit a human body model to static 3D scans with complex poses. Automatic and reliable 3D human body fitting is necessary for many applications related to healthcare, digital ergonomics, avatar creation and security, especially in industrial contexts for large-scale product design. Existing works either make prior assumptions about the pose, require manual annotation of the data, or have difficulty handling complex poses.
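Body-fitting pipelines of this kind typically alternate correspondence search with model-parameter updates. As a hedged illustration of one common building block (not BODYFITR itself), a least-squares rigid alignment via SVD (Kabsch/Procrustes) might look like:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src and dst are (N, 3) arrays of corresponding points; returns a
    proper rotation R and translation t with dst ~= src @ R.T + t.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # correct for a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

A full fitting loop would re-estimate correspondences between scan and model surface, then refine articulated pose and shape parameters rather than a single rigid transform.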

Paper Details

Submitted On:
19 September 2019 - 12:58pm

Document Files

bodyfitr poster


[1] "BODYFITR: Robust automatic 3D human body fitting", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4744. Accessed: Oct. 15, 2019.

FAST: Flow-Assisted Shearlet Transform for Densely-sampled Light Field Reconstruction


Shearlet Transform (ST) is one of the most effective methods for Densely-Sampled Light Field (DSLF) reconstruction from a Sparsely-Sampled Light Field (SSLF). However, ST requires a precise disparity estimation of the SSLF. To this end, this paper employs a state-of-the-art optical flow method, PWC-Net, to estimate bidirectional disparity maps between neighboring views in the SSLF. Moreover, to take full advantage of optical flow and ST for DSLF reconstruction, a novel learning-based method, referred to as Flow-Assisted Shearlet Transform (FAST), is proposed.
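Given a disparity map between neighboring views, an in-between view can be synthesized by shifting pixels along the disparity direction. The sketch below is a minimal forward warp for illustration only; it is not the FAST architecture, and the function name and grayscale input are assumptions:

```python
import numpy as np

def warp_view(view, disparity, alpha):
    """Warp `view` toward a virtual in-between view at fraction `alpha`.

    view:      (H, W) grayscale image.
    disparity: (H, W) horizontal disparity to the neighbouring view (pixels).
    alpha:     position of the novel view in [0, 1].
    """
    H, W = view.shape
    out = np.zeros_like(view)
    xs = np.arange(W)
    for y in range(H):
        # shift each pixel by a fraction of its disparity
        x_new = np.round(xs + alpha * disparity[y]).astype(int)
        valid = (x_new >= 0) & (x_new < W)
        out[y, x_new[valid]] = view[y, valid]   # forward splat (no blending)
    return out
```

Real view-synthesis pipelines additionally handle occlusions, holes, and blending of the two warped neighbours, which is where learning-based components typically come in.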

Paper Details

Authors:
Reinhard Koch, Robert Bregovic, Atanas Gotchev
Submitted On:
16 September 2019 - 12:24pm

Document Files

ICIP2019_FAST.pdf


[1] Reinhard Koch, Robert Bregovic, Atanas Gotchev, "FAST: Flow-Assisted Shearlet Transform for Densely-sampled Light Field Reconstruction", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4643. Accessed: Oct. 15, 2019.

PPSAN: PERCEPTUAL-AWARE 3D POINT CLOUD SEGMENTATION VIA ADVERSARIAL LEARNING


Point cloud segmentation is a key problem in 3D multimedia signal processing. Existing methods usually use a single network structure trained with a per-point loss. Such methods mainly focus on geometric similarity between the prediction and the ground truth, ignoring differences in visual perception. In this paper, we present a segmentation adversarial network to overcome these drawbacks. A discriminator is introduced to provide a perceptual loss that judges the plausibility of predictions and guides further optimization of the segmentation network.
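The combined objective can be illustrated as a per-point cross-entropy term plus an adversarial term that rewards fooling the discriminator. A minimal numerical sketch (the weight `lam` and the scalar discriminator score are assumptions, not the paper's exact formulation):

```python
import numpy as np

def segmentation_loss(pred_probs, labels, d_fake, lam=0.1):
    """Per-point cross-entropy plus an adversarial 'perceptual' term.

    pred_probs: (N, C) softmax outputs of the segmentation network.
    labels:     (N,) ground-truth class indices.
    d_fake:     discriminator score in (0, 1) for the predicted
                segmentation (a scalar here for brevity).
    lam:        weight of the adversarial term.
    """
    eps = 1e-12
    # negative log-likelihood of the correct class, averaged over points
    ce = -np.mean(np.log(pred_probs[np.arange(len(labels)), labels] + eps))
    # generator-side adversarial loss: push the discriminator toward 1
    adv = -np.log(d_fake + eps)
    return ce + lam * adv
```

The discriminator itself would be trained with the opposite objective, distinguishing ground-truth segmentations from predicted ones.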

Paper Details

Authors:
Hongyan Li, Zhengxing Sun, Yunjie Wu, Bo Li
Submitted On:
9 May 2019 - 9:47pm

Document Files

ICASSP2019_Poster-lihy.pdf


[1] Hongyan Li, Zhengxing Sun, Yunjie Wu, Bo Li, "PPSAN: PERCEPTUAL-AWARE 3D POINT CLOUD SEGMENTATION VIA ADVERSARIAL LEARNING", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4252. Accessed: Oct. 15, 2019.

OUTLIERS REMOVAL & CONSOLIDATION OF DYNAMIC POINT CLOUD


Recently, there has been increasing interest in the processing of dynamic scenes captured by 3D scanners, which are ideally suited for challenging applications such as immersive tele-presence systems and gaming. Although the resolution and accuracy of modern 3D scanners are constantly improving, the captured 3D point clouds are usually noisy, with a perceptible percentage of outliers, stressing the need for an approach with low computational requirements that can automatically remove the outliers.
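One common low-cost strategy of this kind is statistical outlier removal: a point is discarded when its mean distance to its k nearest neighbours is far above the global average. A brute-force sketch for illustration (the parameters `k` and `std_ratio` are assumptions, and this is not necessarily the paper's method):

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal for a (N, 3) point cloud.

    A point is dropped when its mean distance to its k nearest
    neighbours exceeds the global mean by `std_ratio` standard
    deviations. O(N^2) pairwise distances; real implementations use a
    k-d tree instead.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)    # skip self-distance (0)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]
```

For dynamic sequences, such a filter would be applied per frame, possibly with temporal coherence constraints on top.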

Paper Details

Authors:
Gerasimos Arvanitis, Aris Spathis-Papadiotis, Aris S. Lalos, Konstantinos Moustakas, Nikos Fakotakis
Submitted On:
5 October 2018 - 3:54am

Document Files

OUTLIERS REMOVAL & CONSOLIDATION OF DYNAMIC POINT CLOUD


[1] Gerasimos Arvanitis, Aris Spathis-Papadiotis, Aris S. Lalos, Konstantinos Moustakas, Nikos Fakotakis, "OUTLIERS REMOVAL & CONSOLIDATION OF DYNAMIC POINT CLOUD", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3515. Accessed: Oct. 15, 2019.

DEPTH FROM GAZE


Eye trackers are found on various electronic devices. In this paper, we propose to exploit the gaze information acquired by an eye tracker for depth estimation. The data collected from the eye tracker in a fixation interval are used to estimate the depth of a gazed object. The proposed method can be used to construct a sparse depth map of an augmented reality space. The resulting depth map can be applied to, for example, controlling the visual information displayed to the viewer.
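A common way to recover depth from binocular gaze is to triangulate the two gaze rays: the fixation point is taken as the midpoint of the shortest segment between them. A minimal sketch under that assumption (not necessarily the paper's exact estimator):

```python
import numpy as np

def gaze_depth(o_l, d_l, o_r, d_r):
    """Estimate a 3D fixation point from left/right gaze rays.

    o_l, o_r: (3,) eye positions; d_l, d_r: (3,) unit gaze directions.
    Returns the midpoint of the shortest segment between the two rays,
    a standard triangulation for vergence-based depth.
    """
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b          # zero only for parallel gaze rays
    s = (b * e - c * d) / denom    # parameter along the left ray
    t = (a * e - b * d) / denom    # parameter along the right ray
    return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))
```

Repeating this over fixation intervals, with noise filtering on the gaze samples, yields the sparse depth map described in the abstract.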

Paper Details

Authors:
Tzu-Sheng Kuo, Kuang-Tsu Shih, Sheng-Lung Chung, Homer Chen
Submitted On:
17 October 2018 - 6:44pm

Document Files

Poster.pdf


[1] Tzu-Sheng Kuo, Kuang-Tsu Shih, Sheng-Lung Chung, Homer Chen, "DEPTH FROM GAZE", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3398. Accessed: Oct. 15, 2019.

VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning


In this paper, we propose a novel virtual reality image quality assessment (VR IQA) method with adversarial learning for omnidirectional images. To account for the characteristics of omnidirectional images, we devise deep networks consisting of a novel quality score predictor and a human perception guider. The proposed quality score predictor automatically predicts the quality score of a distorted image using latent spatial and positional features.

Paper Details

Authors:
Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro
Submitted On:
20 April 2018 - 8:00am

Document Files

VR IQA NET-ICASSP2018


[1] Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro, "VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3102. Accessed: Oct. 15, 2019.

3D Mesh Coding with Predefined Region-of-Interest


We introduce a novel functionality for wavelet-based irregular mesh codecs that allows the encoder to prioritize a region-of-interest (ROI) over the background (BG) and to transmit the encoded data such that the quality in these regions increases first. This is made possible by appropriately scaling wavelet coefficients. To improve the decoded geometry in the BG, we propose an ROI-aware inverse wavelet transform which only upscales the connectivity in the required regions. Results show clear bitrate and vertex savings.
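The coefficient-scaling idea can be sketched directly: multiplying ROI-related wavelet coefficients by a gain before quantisation pushes their bits earlier into the embedded stream, and the decoder divides the gain back out. A minimal illustration (the array layout and gain value are assumptions):

```python
import numpy as np

def scale_roi_coefficients(coeffs, roi_mask, gain=4.0):
    """Boost wavelet coefficients inside the ROI before quantisation.

    coeffs:   (N,) detail coefficients of the mesh wavelet transform.
    roi_mask: (N,) boolean array, True where a coefficient influences
              the region-of-interest.
    gain:     up-scaling factor; larger values push ROI detail earlier
              into the embedded bitstream (the decoder divides it back).
    """
    scaled = coeffs.copy()
    scaled[roi_mask] *= gain
    return scaled
```

With bit-plane coding, a gain of 2^k effectively advances the ROI coefficients by k bit planes relative to the background.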

Paper Details

Authors:
Adrian Munteanu, Peter Lambert
Submitted On:
15 September 2017 - 11:58pm

Document Files

2017.09.20 - ICIP2017 - 3D Mesh Coding with Predefined Region-of-Interest.pdf


[1] Adrian Munteanu, Peter Lambert, "3D Mesh Coding with Predefined Region-of-Interest", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2170. Accessed: Oct. 15, 2019.

360-degree Video Stitching for Dual-fisheye Lens Cameras Based On Rigid Moving Least Squares


Dual-fisheye lens cameras are becoming popular for 360-degree video capture, especially for user-generated content (UGC), since they are affordable and portable. Images generated by dual-fisheye cameras have limited overlap and hence require non-conventional stitching techniques to produce high-quality 360x180-degree panoramas. This paper introduces a novel method to align these images using interpolation grids based on rigid moving least squares. Furthermore, jitter is a critical issue that arises when image-based stitching algorithms are applied to video.
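Moving least squares warps each image point with a transform fitted to control-point pairs, weighted by proximity, restricted here to rigid (rotation + translation) transforms. The sketch below solves a weighted rigid fit per evaluation point; it illustrates the principle only and is not the paper's closed-form formulation:

```python
import numpy as np

def rigid_mls_warp(v, p, q, alpha=1.0):
    """Rigid moving-least-squares warp of a 2D point v.

    p, q: (K, 2) source / target control points. For each evaluation
    point a weighted rigid (rotation + translation) fit of p -> q is
    solved via SVD and applied to v, with weights falling off with
    distance from v (exponent alpha).
    """
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + 1e-12)
    ps, qs = np.average(p, 0, w), np.average(q, 0, w)
    H = ((p - ps) * w[:, None]).T @ (q - qs)     # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection correction so the fitted transform is a proper rotation
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R @ (v - ps) + qs
```

Evaluating this on a coarse interpolation grid and resampling, as the abstract describes, keeps the per-frame cost manageable for video.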

Paper Details

Authors:
Tuan Ho, Ioannis Schizas, K. R. Rao, Madhukar Budagavi
Submitted On:
16 September 2017 - 5:42pm

Document Files

Tuan_ICIP17_Slides.pdf


ICIP_Supplement_Materials_paperID_2811.zip


[1] Tuan Ho, Ioannis Schizas, K. R. Rao, Madhukar Budagavi, "360-degree Video Stitching for Dual-fisheye Lens Cameras Based On Rigid Moving Least Squares", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1984. Accessed: Oct. 15, 2019.

VIEW-DEPENDENT VIRTUAL REALITY CONTENT FROM RGB-D IMAGES


High-fidelity virtual content is essential for the creation of compelling and effective virtual reality (VR) experiences.

Paper Details

Authors:
Chih-Fan Chen, Mark Bolas, Evan Suma Rosenberg
Submitted On:
7 September 2017 - 8:43pm

Document Files

ICIP_Poster_ver1.pdf


[1] Chih-Fan Chen, Mark Bolas, Evan Suma Rosenberg, "VIEW-DEPENDENT VIRTUAL REALITY CONTENT FROM RGB-D IMAGES", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1856. Accessed: Oct. 15, 2019.
