
Virtual reality and 3D imaging

VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning


In this paper, we propose a novel virtual reality image quality assessment (VR IQA) framework with adversarial learning for omnidirectional images. To take the characteristics of omnidirectional images into account, we devise deep networks comprising a novel quality score predictor and a human perception guider. The proposed quality score predictor automatically predicts the quality score of a distorted image using latent spatial and position features.
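As a rough illustration of the adversarial objective (not the authors' deep architecture), the sketch below stands in for the quality score predictor and the human perception guider with single linear layers over a toy feature vector; the feature dimension, weights, and human score are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-in for the latent spatial/position features of one
# distorted omnidirectional image (dimension chosen arbitrarily).
feat_dim = 8
features = rng.normal(size=feat_dim)
human_score = 3.7          # toy mean-opinion-score label

# Quality score predictor: a single linear layer stands in for the deep net.
W_p = rng.normal(scale=0.1, size=feat_dim)
predicted_score = float(W_p @ features)

# Human perception guider (discriminator): judges whether a (feature, score)
# pair comes from human ratings or from the predictor.
W_g = rng.normal(scale=0.1, size=feat_dim + 1)

def guider(feat, score):
    """Probability that the pair looks human-rated (sigmoid of linear score)."""
    x = np.concatenate([feat, [score]])
    return 1.0 / (1.0 + np.exp(-(W_g @ x)))

# Adversarial objectives: the guider separates real from predicted pairs;
# the predictor tries to make its pairs indistinguishable from real ones.
p_real = guider(features, human_score)
p_fake = guider(features, predicted_score)
guider_loss = -np.log(p_real) - np.log(1.0 - p_fake)
predictor_loss = -np.log(p_fake)

print(f"predicted score: {predicted_score:.3f}")
print(f"guider loss: {guider_loss:.3f}, predictor loss: {predictor_loss:.3f}")
```

In training, these two losses would be minimized alternately, pushing the predictor's scores toward the human score distribution.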

Paper Details

Authors:
Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro
Submitted On:
20 April 2018 - 8:00am

Document Files

VR IQA NET-ICASSP2018


[1] Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro, "VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3102. Accessed: Sep. 24, 2018.

3D Mesh Coding with Predefined Region-of-Interest


We introduce a novel functionality for wavelet-based irregular mesh codecs that allows a region-of-interest (ROI) to be prioritized over the background (BG) at the encoding side, and the encoded data to be transmitted such that the quality in these regions increases first. This is made possible by appropriately scaling wavelet coefficients. To improve the decoded geometry in the BG, we propose an ROI-aware inverse wavelet transform which upscales the connectivity only in the required regions. Results show clear bitrate and vertex savings.
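The coefficient-scaling idea can be sketched in miniature: upscaling ROI wavelet coefficients makes an embedded, magnitude-ordered coder emit their bits earlier, and the decoder undoes the scaling before the inverse transform. The coefficient array, ROI mask, and scale factor below are invented; the actual codec operates on wavelet coefficients of an irregular mesh.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy wavelet coefficients for mesh geometry (one value per detail vertex).
coeffs = rng.normal(size=16)
roi_mask = np.zeros(16, dtype=bool)
roi_mask[:6] = True            # pretend the first 6 coefficients are in the ROI

ROI_SCALE = 8.0                # hypothetical prioritization factor

# Encoder side: upscale ROI coefficients so an embedded/bitplane coder emits
# their bits before background (BG) coefficients of equal original magnitude.
scaled = np.where(roi_mask, coeffs * ROI_SCALE, coeffs)

# A magnitude-ordered embedded coder transmits large coefficients first;
# after scaling, ROI coefficients tend to occupy the front of the stream.
order = np.argsort(-np.abs(scaled))
first_half = order[: len(order) // 2]
roi_in_front = int(roi_mask[first_half].sum())

# Decoder side: undo the scaling before the inverse wavelet transform.
restored = np.where(roi_mask, scaled / ROI_SCALE, scaled)

print(f"ROI coefficients among the first half transmitted: {roi_in_front}/6")
```

The scaling is lossless to undo (a power-of-two factor leaves mantissas untouched), so ROI prioritization costs no reconstruction accuracy.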

Paper Details

Authors:
Adrian Munteanu, Peter Lambert
Submitted On:
15 September 2017 - 11:58pm

Document Files

2017.09.20 - ICIP2017 - 3D Mesh Coding with Predefined Region-of-Interest.pdf


[1] Adrian Munteanu, Peter Lambert, "3D Mesh Coding with Predefined Region-of-Interest", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2170. Accessed: Sep. 24, 2018.

360-degree Video Stitching for Dual-fisheye Lens Cameras Based On Rigid Moving Least Squares


Dual-fisheye lens cameras are becoming popular for 360-degree video capture, especially for user-generated content (UGC), since they are affordable and portable. Images generated by dual-fisheye cameras have limited overlap and hence require non-conventional stitching techniques to produce high-quality 360x180-degree panoramas. This paper introduces a novel method to align these images using interpolation grids based on rigid moving least squares. Furthermore, jitter is the critical issue that arises when image-based stitching algorithms are applied to video.
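The underlying deformation, rigid moving least squares (Schaefer et al., 2006), can be sketched per point as below. This is a generic re-implementation of the warp itself, not the paper's grid-based stitching pipeline; the control points and the query point are toy values.

```python
import numpy as np

def perp(a):
    """Rotate a 2D vector 90 degrees counter-clockwise: (x, y) -> (-y, x)."""
    return np.array([-a[1], a[0]])

def rigid_mls(v, p, q, alpha=1.0, eps=1e-12):
    """Warp point v by rigid moving least squares.

    p (N,2): control points in the source image, q (N,2): their targets.
    The stitching method of the paper evaluates such a warp on an
    interpolation grid over the fisheye overlap region.
    """
    d2 = np.sum((p - v) ** 2, axis=1)
    if np.any(d2 < eps):                     # v coincides with a control point
        return q[np.argmin(d2)].astype(float)
    w = 1.0 / d2 ** alpha                    # inverse-distance weights
    p_star = (w[:, None] * p).sum(axis=0) / w.sum()
    q_star = (w[:, None] * q).sum(axis=0) / w.sum()
    v_hat = v - p_star
    C = np.stack([v_hat, -perp(v_hat)])      # rows: v_hat and -v_hat_perp
    f = np.zeros(2)
    for wi, ph, qh in zip(w, p - p_star, q - q_star):
        B = np.stack([ph, -perp(ph)])
        f += qh @ (wi * B @ C.T)             # accumulate rotation estimate
    # Rigid constraint: keep only the rotation, preserving |v - p_star|.
    return np.linalg.norm(v_hat) * f / np.linalg.norm(f) + q_star

# Sanity check: a pure rotation + translation of the control points is
# reproduced exactly by the rigid warp.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([2.0, -1.0])
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
q = p @ R.T + t
v = np.array([0.3, 0.7])
print(rigid_mls(v, p, q), "vs", R @ v + t)
```

Because the warp is constrained to local rotations, it avoids the shearing that affine moving least squares can introduce, which matters for keeping the stitched seam visually rigid.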

Paper Details

Authors:
Tuan Ho, Ioannis Schizas, K. R. Rao, Madhukar Budagavi
Submitted On:
16 September 2017 - 5:42pm

Document Files

Tuan_ICIP17_Slides.pdf


ICIP_Supplement_Materials_paperID_2811.zip


[1] Tuan Ho, Ioannis Schizas, K. R. Rao, Madhukar Budagavi, "360-degree Video Stitching for Dual-fisheye Lens Cameras Based On Rigid Moving Least Squares", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1984. Accessed: Sep. 24, 2018.

VIEW-DEPENDENT VIRTUAL REALITY CONTENT FROM RGB-D IMAGES


High-fidelity virtual content is essential for the creation of compelling and effective virtual reality (VR) experiences.

Paper Details

Authors:
Chih-Fan Chen, Mark Bolas, Evan Suma Rosenberg
Submitted On:
7 September 2017 - 8:43pm

Document Files

ICIP_Poster_ver1.pdf


[1] Chih-Fan Chen, Mark Bolas, Evan Suma Rosenberg, "VIEW-DEPENDENT VIRTUAL REALITY CONTENT FROM RGB-D IMAGES", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1856. Accessed: Sep. 24, 2018.

A Novel Kinect V2 Registration Method For Large-Displacement Environments Using Camera And Scene Constraints


In many systems built on multiple Kinect V2 sensors, registration of the sensors is an important step that directly affects system precision. The coarse-to-fine method using calibration objects is an effective way to solve the Kinect V2 registration problem. However, for the registration of Kinect V2 cameras with large displacements, this kind of method may fail. To this end, a novel Kinect V2 registration method, also based on the coarse-to-fine framework, is proposed using camera and scene constraints.
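A minimal sketch of the coarse step, assuming 3D point correspondences from a calibration object are already given: a least-squares rigid fit (the standard Kabsch/Umeyama solution) between the two sensors' point clouds. This is a generic stand-in, not the paper's full method with camera and scene constraints, and the poses and points below are toy values.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N,3) arrays of corresponding 3D points, e.g. calibration-object
    markers seen by two Kinect V2 sensors. Kabsch/Umeyama, no scale.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(20, 3))               # calibration-object points
# Toy ground-truth pose of the second sensor relative to the first.
angle = 0.5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.2, 1.5])
pts2 = pts @ R_true.T + t_true                       # same points, second view

R_est, t_est = rigid_fit(pts, pts2)
print("max rotation error:", np.abs(R_est - R_true).max())
```

On noisy depth data this coarse estimate would only roughly align the clouds, which is why a fine refinement stage (here, one driven by camera and scene constraints) is still needed for large displacements.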

Paper Details

Authors:
Sandro Esquivel, Reinhard Koch, Matthias Ziegler, Frederik Zilly, Joachim Keinert
Submitted On:
19 April 2018 - 11:16am

Document Files

2017.09.19ICIP.pdf


[1] Sandro Esquivel, Reinhard Koch, Matthias Ziegler, Frederik Zilly, Joachim Keinert, "A Novel Kinect V2 Registration Method For Large-Displacement Environments Using Camera And Scene Constraints", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1849. Accessed: Sep. 24, 2018.

Surface-based background completion in 3D scene

Paper Details

Authors:
Yung-Lin Huang, Shao-Yi Chien
Submitted On:
6 December 2016 - 3:03pm

Document Files

GlobalSIP2016_Poster_Bob_Done.pdf


[1] Yung-Lin Huang, Shao-Yi Chien, "Surface-based background completion in 3D scene", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1374. Accessed: Sep. 24, 2018.

Three-dimensional Reconstruction from Heterogeneous Video Devices With Camera-In-View Information


In this work, 3D modeling of the surrounding environment is enabled with an improvised ad-hoc camera network of both static and mobile devices (a cloud vision network).


Paper Details

Authors:
Submitted On:
23 February 2016 - 1:38pm

Document Files

poster.pdf


[1] "Three-dimensional Reconstruction from Heterogeneous Video Devices With Camera-In-View Information", IEEE SigPort, 2015. [Online]. Available: http://sigport.org/224. Accessed: Sep. 24, 2018.