
Image, Video, and Multidimensional Signal Processing

Immersive Optical-See-Through Augmented Reality (Keynote Talk)


Augmented Reality has been in the making for the last 20 years and is finally becoming real, powered by progress in enabling technologies such as graphics, vision, sensors, and displays. In this talk I'll provide a personal retrospective on my journey working on all of those enablers in preparation for the coming AR revolution. At Meta, we are working on an immersive optical-see-through AR headset, as well as the full software stack. We'll discuss the differences of optical vs.

Paper Details

Authors:
Kari Pulli
Submitted On:
22 December 2017 - 1:30pm

Document Files

ICIP_2017_Meta_AR_small.pdf

[1] Kari Pulli, "Immersive Optical-See-Through Augmented Reality (Keynote Talk)", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2261. Accessed: May. 26, 2019.

EARLY WILDFIRE SMOKE DETECTION BASED ON MOTION-BASED GEOMETRIC IMAGE TRANSFORMATION AND DEEP CONVOLUTIONAL GENERATIVE ADVERSARIAL NETWORKS


Immediate and accurate detection of wildfires is essential in forest monitoring systems:
• Wildfire is one of the most harmful hazards in rural areas.
• Visible-range video captured by surveillance cameras is well suited to wildfire detection, since such cameras can be deployed and operated in a cost-effective manner.
• The challenge is to provide a robust detection system with negligible false positive rates.
• If the flames are visible, they can be detected by analyzing the motion and color clues of a video (a minimal sketch of such clues follows below).
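
As a rough illustration of the motion and color clues mentioned above (not the authors' DCGAN-based pipeline), the following Python sketch flags candidate fire pixels that are both moving and red-dominant; the thresholds and the helper name are illustrative assumptions.

import cv2
import numpy as np

def candidate_fire_mask(prev_gray, frame_bgr, motion_thresh=25, red_thresh=180):
    # Hypothetical helper, for illustration only; thresholds are not from the paper.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Motion clue: simple frame differencing against the previous grayscale frame.
    motion = cv2.absdiff(gray, prev_gray) > motion_thresh
    # Color clue: candidate flame pixels tend to be bright and red-dominant.
    b, g, r = cv2.split(frame_bgr.astype(np.int16))
    color = (r > red_thresh) & (r > g) & (g > b)
    return motion & color, gray   # candidate-pixel mask, plus gray for the next call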

Paper Details

Authors:
Suleyman Aslan, Ugur Gudukbay, Behçet Uğur Töreyin, Ahmet Enis Çetin
Submitted On:
10 May 2019 - 10:49pm

Document Files

sa_wildfire_dcgan.pdf

[1] Suleyman Aslan, Ugur Gudukbay, Behçet Uğur Töreyin, Ahmet Enis Çetin, "EARLY WILDFIRE SMOKE DETECTION BASED ON MOTION-BASED GEOMETRIC IMAGE TRANSFORMATION AND DEEP CONVOLUTIONAL GENERATIVE ADVERSARIAL NETWORKS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4428. Accessed: May. 26, 2019.

Blind Quality Assessment for 3D-Synthesized Images by Measuring Geometric Distortions and Image Complexity


Free viewpoint video (FVV), owing to its wide range of applications in immersive entertainment, remote surveillance, and distance education, has received extensive attention and is regarded as an important new direction in video technology. Depth image-based rendering (DIBR) technologies are employed to synthesize FVV images in the "blind" environment (i.e., without a reference view), so a real-time, reliable blind quality assessment metric is urgently required. However, existing state-of-the-art quality assessment methods are limited in estimating the geometric distortions generated by DIBR.

Paper Details

Authors:
Guangcheng Wang, Zhongyuan Wang, Ke Gu, Zhifang Xia
Submitted On:
9 May 2019 - 10:49pm

Document Files

poster

[1] Guangcheng Wang, Zhongyuan Wang, Ke Gu, Zhifang Xia, "Blind Quality Assessment for 3D-Synthesized Images by Measuring Geometric Distortions and Image Complexity", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4254. Accessed: May. 26, 2019.

Learning Motion Disfluencies for Automatic Sign Language Segmentation


We introduce a novel technique for the automatic detection of word boundaries within continuous sentence expressions in Japanese Sign Language from three-dimensional body joint positions. First, the flow of signed sentence data within a temporal neighborhood is determined utilizing the spatial correlations between line segments of inter-joint pairs. Next, a frame-wise binary random forest classifier is trained to distinguish word and non-word frame content based on the extracted spatio-temporal features.
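
A minimal sketch of the frame-wise binary classification step, assuming scikit-learn and already-extracted per-frame spatio-temporal feature vectors; the feature extraction from inter-joint line segments is not reproduced here, and the array shapes are placeholder assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))            # placeholder per-frame spatio-temporal features
y = (rng.random(1000) > 0.5).astype(int)   # placeholder labels: 1 = word frame, 0 = non-word frame

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

labels = clf.predict(X)                                  # per-frame word / non-word decisions
boundaries = np.flatnonzero(np.diff(labels) != 0) + 1    # label flips = candidate word boundaries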

Paper Details

Authors:
Iva Farag
Submitted On:
9 May 2019 - 2:18am

Document Files

Poster.pdf

[1] Iva Farag, "Learning Motion Disfluencies for Automatic Sign Language Segmentation", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4154. Accessed: May. 26, 2019.

Progressive Filtering for Feature Matching


In this paper, we propose a simple yet efficient method, termed Progressive Filtering for Feature Matching, which establishes accurate correspondences between two images of the same or similar scenes. Our algorithm first grids the correspondence space and calculates a typical motion vector for each cell; it then removes false matches by checking the consistency between each putative match and the typical motion vector of its cell, a check realized by a convolution operation.
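
The following Python sketch re-creates the cell-wise consistency idea under stated assumptions: it grids the image, takes the median displacement per cell as the typical motion vector, and keeps only matches close to it. The paper realizes the cell-wise check with a convolution; a plain per-cell median is substituted here for brevity, and the function name and defaults are illustrative.

import numpy as np

def progressive_filter(pts1, pts2, img_shape, grid=(10, 10), tol=20.0):
    # pts1, pts2: (N, 2) arrays of putatively matched (x, y) keypoints; illustrative only.
    h, w = img_shape
    disp = pts2 - pts1                                            # putative motion vectors
    cy = np.clip((pts1[:, 1] / h * grid[0]).astype(int), 0, grid[0] - 1)
    cx = np.clip((pts1[:, 0] / w * grid[1]).astype(int), 0, grid[1] - 1)
    keep = np.zeros(len(pts1), dtype=bool)
    for iy in range(grid[0]):
        for ix in range(grid[1]):
            idx = np.flatnonzero((cy == iy) & (cx == ix))
            if idx.size == 0:
                continue
            typical = np.median(disp[idx], axis=0)                # typical motion vector of this cell
            keep[idx] = np.linalg.norm(disp[idx] - typical, axis=1) < tol
    return keep                                                   # boolean mask of retained matches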

Paper Details

Authors:
Xingyu Jiang, Jiayi Ma, Jun Chen
Submitted On:
8 May 2019 - 9:46am

Document Files

poster11.pdf

[1] Xingyu Jiang, Jiayi Ma, Jun Chen, "Progressive Filtering for Feature Matching", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4100. Accessed: May. 26, 2019.

CUTENSOR-TUBAL: OPTIMIZED GPU LIBRARY FOR LOW-TUBAL-RANK TENSORS


In this paper, we optimize the computation of third-order low-tubal-rank tensor operations on many-core GPUs. Tensor operations are compute-intensive, and existing studies optimize them in a case-by-case manner, which can be inefficient and error-prone. We develop and optimize cuTensor-tubal, a BLAS-like library for the low-tubal-rank tensor model that includes efficient GPU primitives for tensor operations and key processes. We compute the tensor operations in the frequency domain and fully exploit tube-wise and slice-wise parallelism.
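
cuTensor-tubal itself is a CUDA library; purely to illustrate the frequency-domain, slice-parallel computation pattern it exploits, here is a NumPy sketch of the tensor-tensor (t-)product used by the low-tubal-rank model. Each frontal slice in the Fourier domain is an independent matrix multiply, which is what makes the operation amenable to tube-wise and slice-wise GPU parallelism.

import numpy as np

def t_product(A, B):
    # A: (n1, n2, n3), B: (n2, n4, n3)  ->  C: (n1, n4, n3)
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)                 # FFT along the tubes (third mode)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):                        # independent per-slice matrix multiplies (GPU-friendly)
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    return np.real(np.fft.ifft(Cf, axis=2))    # back to the spatial domain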

Paper Details

Authors:
Tao Zhang, Xiao-Yang Liu
Submitted On:
8 May 2019 - 8:21am

Document Files

The poster for paper entitled "CUTENSOR-TUBAL: OPTIMIZED GPU LIBRARY FOR LOW-TUBAL-RANK TENSORS"

[1] Tao Zhang, Xiao-Yang Liu, "CUTENSOR-TUBAL: OPTIMIZED GPU LIBRARY FOR LOW-TUBAL-RANK TENSORS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4082. Accessed: May. 26, 2019.

Classification of Severely Occluded Image Sequences via Convolutional Recurrent Neural Networks

Paper Details

Authors:
Jian Zheng, Yifan Wang, Xiaonan Zhang, Xiaohua Li
Submitted On:
29 November 2018 - 3:44am

Document Files

GlobalSIP_poster_Final.pdf

[1] Jian Zheng, Yifan Wang, Xiaonan Zhang, Xiaohua Li, "Classification of Severely Occluded Image Sequences via Convolutional Recurrent Neural Networks", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3830. Accessed: May. 26, 2019.

Sparse tensor recovery via N-mode FISTA with support augmentation

Paper Details

Authors:
Ashley Prater-Bennette, Lixin Shen
Submitted On:
28 November 2018 - 6:12pm

Document Files

PraterBennette_GlobalSIP_1176_Presentation_v3.pdf

[1] Ashley Prater-Bennette, Lixin Shen, "Sparse tensor recovery via N-mode FISTA with support augmentation", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3827. Accessed: May. 26, 2019.

The Greedy Dirichlet Process Filter - An Online Clustering Multi-Target Tracker


Reliable collision avoidance is one of the main requirements for autonomous driving. Hence, it is important to correctly estimate the states of an unknown number of static and dynamic objects in real time. Here, data association is a major challenge for every multi-target tracker. We propose a novel multi-target tracker called the Greedy Dirichlet Process Filter (GDPF), based on the non-parametric Bayesian model known as the Dirichlet process and on Sequential Updating and Greedy Search (SUGS), a fast posterior computation algorithm.
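
Purely as an illustration of the greedy Dirichlet-process idea behind such data association (not the GDPF/SUGS algorithm itself), the sketch below assigns each measurement to the best existing cluster or opens a new one, CRP-style; the score function and all parameters are assumptions made for the example.

import numpy as np

def greedy_dp_assign(points, alpha=1.0, sigma=1.0, prior_var=100.0):
    # Illustrative greedy (CRP-style) clustering; not the paper's SUGS-based filter.
    means, counts, labels = [], [], []
    for x in np.asarray(points, dtype=float):
        scores = [np.log(c) - np.sum((x - m) ** 2) / (2 * sigma ** 2)
                  for m, c in zip(means, counts)]
        # Score of opening a new cluster: concentration alpha and a broad zero-mean prior.
        scores.append(np.log(alpha) - np.sum(x ** 2) / (2 * (sigma ** 2 + prior_var)))
        k = int(np.argmax(scores))
        if k == len(means):
            means.append(x.copy())
            counts.append(1)
        else:
            counts[k] += 1
            means[k] += (x - means[k]) / counts[k]   # running mean update
        labels.append(k)
    return labels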

Paper Details

Authors:
Patrick Burger, Hans-Joachim Wuensche
Submitted On:
27 November 2018 - 1:23pm

Document Files

gdpf_presentation.zip

[1] Patrick Burger, Hans-Joachim Wuensche, "The Greedy Dirichlet Process Filter - An Online Clustering Multi-Target Tracker", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3815. Accessed: May. 26, 2019.
