
Image, Video, and Multidimensional Signal Processing

IMAGE REPRESENTATION USING SUPERVISED AND UNSUPERVISED LEARNING METHODS ON COMPLEX DOMAIN


Matrix factorization (MF) and its extensions have been intensively studied in computer vision and machine learning. In this paper, unsupervised and supervised learning methods based on the MF technique in the complex domain are introduced. Projective complex matrix factorization (PCMF) and discriminant projective complex matrix factorization (DPCMF) provide two frameworks for projecting complex data onto a lower-dimensional space. The optimization problems are formulated as the minimization of real-valued functions of complex variables.
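As a rough illustration of the general idea (not the authors' PCMF/DPCMF updates, and using plain alternating least squares rather than the gradient-based optimization implied by the abstract), a complex matrix factorization X ≈ BH minimizing the real-valued objective ||X - BH||_F^2 can be sketched as follows; the function name, rank, and the e^{i*pi*pixel} mapping are illustrative assumptions.

```python
import numpy as np

def complex_mf(X, rank, n_iter=100):
    """Factor a complex matrix X ~ B @ H by alternating least squares.

    Minimizes the real-valued objective ||X - B H||_F^2 over the
    complex-valued factors B (m x rank) and H (rank x n).
    """
    m, n = X.shape
    rng = np.random.default_rng(0)
    B = rng.standard_normal((m, rank)) + 1j * rng.standard_normal((m, rank))
    for _ in range(n_iter):
        H = np.linalg.pinv(B) @ X   # least-squares update of the encoding
        B = X @ np.linalg.pinv(H)   # least-squares update of the basis
    return B, H

# Example: images mapped to the complex domain (e.g. via e^{i*pi*pixel}),
# flattened into the columns of X.
X = np.exp(1j * np.pi * np.random.rand(64, 40))
B, H = complex_mf(X, rank=10)
print(np.linalg.norm(X - B @ H) / np.linalg.norm(X))
```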

Paper Details

Authors:
Manh-Quan Bui, Viet-Hang Duong, Yung-Hui Li, Tzu-Chiang Tai, Jia-Ching Wang
Submitted On:
13 April 2018 - 5:40am

Document Files

ICASSP2018_DPCMF_2.pdf

ICASSP2018_DPCMF_Origin1.pdf

[1] Manh-Quan Bui, Viet-Hang Duong, Yung-Hui Li, Tzu-Chiang Tai, Jia-Ching Wang, "Image Representation Using Supervised and Unsupervised Learning Methods on Complex Domain," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2618.

IMPROVING DISPARITY MAP ESTIMATION FOR MULTI-VIEW NOISY IMAGES


A robust multi-view disparity estimation algorithm for noisy images is presented. The proposed algorithm constructs 3D focus image stacks (3DFIS) by projecting and stacking multi-view images and estimates a disparity map from the 3DFIS. To make the algorithm robust to noise and occlusion, a view-selection and patch-size-variation scheme driven by a texture map is proposed.
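A minimal sketch of the focus-stack idea, assuming a horizontal camera array and omitting the paper's noise-robust view selection and patch-size variation; the function name, the variance-based focus measure, and the integer shifting are illustrative assumptions.

```python
import numpy as np

def disparity_from_focus_stack(views, offsets, disparities):
    """Toy disparity estimation from a 3D focus image stack (3DFIS).

    views      : list of grayscale images from a horizontal camera array
    offsets    : horizontal position of each camera relative to the reference
    disparities: candidate disparity values to test
    For each candidate disparity the views are shifted onto the reference and
    the per-pixel variance across views is used as a (mis)focus measure.
    """
    h, w = views[0].shape
    cost = np.zeros((len(disparities), h, w))
    for k, d in enumerate(disparities):
        shifted = [np.roll(v, int(round(d * o)), axis=1) for v, o in zip(views, offsets)]
        cost[k] = np.var(np.stack(shifted), axis=0)   # low variance = in focus
    return np.asarray(disparities)[np.argmin(cost, axis=0)]
```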

Paper Details

Authors:
Shiwei Zhou, Zhengyang Lou, Yu Hen Hu, Hongrui Jiang
Submitted On:
12 April 2018 - 12:44pm

Document Files

ICASSP_poster.pdf

[1] Shiwei Zhou, Zhengyang Lou, Yu Hen Hu, Hongrui Jiang, "Improving Disparity Map Estimation for Multi-View Noisy Images," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2442.

ADAPTIVE VISUAL TARGET TRACKING BASED ON LABEL CONSISTENT K-SVD SPARSE CODING AND KERNEL PARTICLE FILTER


We propose an adaptive visual target tracking algorithm based on Label-Consistent K-Singular Value Decomposition (LC-KSVD) dictionary learning. To construct target templates, local patch features are sampled from the foreground and background of the target. LC-KSVD is then applied to these local patches to simultaneously estimate a low-dimensional dictionary and a set of classification parameters (CP). To track the target over time, a kernel particle filter (KPF) is proposed that integrates both local and global motion information of the target.
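For intuition, a generic bootstrap particle-filter step is sketched below, with a placeholder score_fn standing in for the LC-KSVD classification parameters; this is not the paper's kernel particle filter, and the random-walk motion model and resampling scheme are assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, score_fn, motion_std=4.0):
    """One step of a bootstrap particle filter for target tracking.

    particles : (N, 2) array of candidate target centres (x, y)
    score_fn  : callable returning a nonnegative foreground confidence for a
                centre, standing in for the learned classification parameters
    """
    rng = np.random.default_rng()
    # Propagate with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Re-weight by the classifier confidence at each candidate location.
    weights = weights * np.array([score_fn(p) for p in particles])
    weights = weights / weights.sum()
    # Systematic resampling to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```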

Paper Details

Authors:
Jinlong Yang, Xiaoping Chen, Yu Hen Hu, Jianjun Liu
Submitted On:
12 April 2018 - 12:25pm

Document Files

Poster-2119.pdf

[1] Jinlong Yang, Xiaoping Chen, Yu Hen Hu, Jianjun Liu, "Adaptive Visual Target Tracking Based on Label Consistent K-SVD Sparse Coding and Kernel Particle Filter," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2432.

AIRCRAFT FUSELAGE DEFECT DETECTION USING DEEP NEURAL NETWORKS


To ensure the flight safety of aircraft structures, regular maintenance using visual and nondestructive inspection (NDI) methods is necessary. In this paper, we propose an automatic image-based aircraft defect detection method using deep neural networks (DNNs). To the best of our knowledge, this is the first work on aircraft defect detection using DNNs. We perform a comprehensive evaluation of state-of-the-art feature descriptors and show that the best performance is achieved by the VGG-F DNN as a feature extractor combined with a linear SVM classifier.
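The deep-features-plus-linear-SVM pipeline can be sketched roughly as follows; since VGG-F is not shipped with torchvision, VGG-16 stands in as the feature extractor, and the preprocessing, variable names, and labels are illustrative assumptions rather than the authors' setup.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

# Pretrained CNN as a fixed feature extractor (VGG-16 stands in for VGG-F,
# which torchvision does not ship).
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
cnn.classifier = cnn.classifier[:-1]   # drop the final class layer -> 4096-d features
cnn.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def extract(images):
    """images: list of PIL images of fuselage patches."""
    with torch.no_grad():
        batch = torch.stack([preprocess(im) for im in images])
        return cnn(batch).numpy()

# Linear SVM on top of the deep features (labels: 1 = defect, 0 = intact).
clf = LinearSVC(C=1.0)
# clf.fit(extract(train_images), train_labels)
# predictions = clf.predict(extract(test_images))
```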

Paper Details

Authors:
Touba Malekzadeh, Milad Abdollahzadeh, Hossein Nejati, Ngai-Man Cheung
Submitted On:
13 November 2017 - 8:57am

Document Files

AIRCRAFT FUSELAGE DEFECT DETECTION USING DEEP NEURAL NETWORKS__v2.pdf

[1] Touba Malekzadeh, Milad Abdollahzadeh, Hossein Nejati, Ngai-Man Cheung, "Aircraft Fuselage Defect Detection Using Deep Neural Networks," IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2333.

Hierarchical multinomial latent model with G0 distribution for remote sensing image semantic segmentation

Paper Details

Authors:
Submitted On:
12 November 2017 - 7:58pm

Document Files

ID1185-Yiping Duan-tsinghua.pdf

[1] "Hierarchical Multinomial Latent Model with G0 Distribution for Remote Sensing Image Semantic Segmentation," IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2323.

GLAND SEGMENTATION GUIDED BY GLANDULAR STRUCTURES: A LEVEL SET FRAMEWORK WITH TWO LEVELS

Paper Details

Authors:
Ji Bao, Hong Bu
Submitted On:
3 October 2017 - 4:28am

Document Files

ICIP_poster3433.pdf

[1] Ji Bao, Hong Bu, "Gland Segmentation Guided by Glandular Structures: A Level Set Framework with Two Levels," IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2253.

Probabilistic Approach to People-Centric Photo Selection and Sequencing


We present a crowdsourcing (CS) study to examine how specific attributes probabilistically affect the selection and sequencing of images from personal photo collections. Thirteen image attributes are explored, including seven people-centric properties. We first propose a novel dataset-shaping technique based on Mixed Integer Linear Programming (MILP) to identify a subset of photos in which the attributes of interest are uniformly distributed and minimally correlated.
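A much-simplified MILP sketch of dataset shaping is given below using PuLP: it only balances the marginal distribution of binary attributes in a fixed-size subset and ignores the correlation term; the function name, variables, and solver choice are assumptions, not the authors' formulation.

```python
import numpy as np
import pulp

def shape_dataset(attrs, k):
    """Pick k photos so each binary attribute is as close to 50/50 as possible.

    attrs : (n_photos, n_attrs) 0/1 matrix of attribute annotations
    """
    n, m = attrs.shape
    prob = pulp.LpProblem("dataset_shaping", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(n)]
    dev = [pulp.LpVariable(f"d_{j}", lowBound=0) for j in range(m)]

    prob += pulp.lpSum(dev)        # objective: total deviation from balance
    prob += pulp.lpSum(x) == k     # select exactly k photos
    for j in range(m):
        count_j = pulp.lpSum(int(attrs[i, j]) * x[i] for i in range(n))
        prob += count_j - k / 2 <= dev[j]   # linearised |count - k/2|
        prob += k / 2 - count_j <= dev[j]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [i for i in range(n) if x[i].value() > 0.5]

selected = shape_dataset(np.random.randint(0, 2, size=(200, 5)), k=40)
```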


Paper Details

Authors:
Vassilios Vonikakis, Ramanathan Subramanian, Jonas Arnfred, Stefan Winkler
Submitted On:
27 September 2017 - 11:08pm

Document Files

poster.pdf

[1] Vassilios Vonikakis, Ramanathan Subramanian, Jonas Arnfred, Stefan Winkler, "Probabilistic Approach to People-Centric Photo Selection and Sequencing," IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2250.

BAFT: Binary Affine Feature Transform


We introduce BAFT, a fast binary and quasi-affine-invariant local image feature. It combines the affine invariance of Harris-Affine feature descriptors with the speed of binary descriptors such as BRISK and ORB. BAFT derives its speed and precision from sampling local image patches in a pattern that depends on the second-moment matrix of the patch. This approach results in a fast but discriminative descriptor, especially for image pairs with large perspective changes.
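For reference, the standard second-moment (structure tensor) computation and the affine normalisation it induces, as used in Harris-Affine style approaches, can be sketched as follows; BAFT's actual sampling pattern is not reproduced here, and the helper names are assumptions.

```python
import numpy as np

def second_moment_matrix(patch):
    """Second moment (structure tensor) of a grayscale patch.

    Summarises local gradient structure; its eigen-decomposition gives the
    anisotropic shape used to adapt a descriptor's sampling pattern.
    """
    gy, gx = np.gradient(patch.astype(float))
    return np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                     [np.sum(gx * gy), np.sum(gy * gy)]])

def affine_normalisation(M):
    """Return the 2x2 transform A = M^{-1/2} that 'undoes' local anisotropy,
    so a sampling pattern warped by A is approximately affine covariant."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
```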


Paper Details

Authors:
Jonas T. Arnfred, Viet Dung Nguyen, Stefan Winkler
Submitted On:
27 September 2017 - 11:05pm

Document Files

poster.pdf

[1] Jonas T. Arnfred, Viet Dung Nguyen, Stefan Winkler, "BAFT: Binary Affine Feature Transform," IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2249.

CAN NO-REFERENCE IMAGE QUALITY METRICS ASSESS VISIBLE WAVELENGTH IRIS SAMPLE QUALITY?

Paper Details

Authors:
Submitted On:
21 September 2017 - 10:18am

Document Files

Landscape_Poster_ICIP_Xinwei LIU.pdf

[1] "Can No-Reference Image Quality Metrics Assess Visible Wavelength Iris Sample Quality?," IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2243.

Audio-Visual Attention: Eye-Tracking Dataset and Analysis ToolBox


Although many visual attention models have been proposed, very few saliency models have investigated the impact of audio information. To develop audio-visual attention models, researchers need ground-truth eye movements recorded while exploring complex natural scenes under different audio conditions. They also need tools to compare eye movements and gaze patterns across these audio conditions.
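One simple way to compare gaze patterns across audio conditions is to build Gaussian-smoothed fixation maps and correlate them; the sketch below is a generic example of that kind of comparison, not necessarily a metric implemented in the toolbox, and the function names and smoothing sigma are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fixations, shape, sigma=25):
    """Gaussian-smoothed fixation density map.

    fixations : (N, 2) array of (x, y) gaze positions in pixels
    shape     : (height, width) of the stimulus
    """
    m = np.zeros(shape)
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
            m[yi, xi] += 1
    return gaussian_filter(m, sigma)

def correlation_coefficient(map_a, map_b):
    """Pearson CC between two fixation maps, a standard saliency metric."""
    a = (map_a - map_a.mean()) / (map_a.std() + 1e-12)
    b = (map_b - map_b.mean()) / (map_b.std() + 1e-12)
    return float(np.mean(a * b))

# e.g. compare gaze under an 'original sound' vs a 'mute' condition:
# cc = correlation_coefficient(fixation_map(fix_sound, frame_shape),
#                              fixation_map(fix_mute, frame_shape))
```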

Paper Details

Authors:
Marighetto P., Coutrot A., Riche N., Guyader N., Mancas M., Gosselin B.
Submitted On:
20 September 2017 - 1:23am

Document Files

audiovisualSaliency.pdf

[1] Marighetto P., Coutrot A., Riche N., Guyader N., Mancas M., Gosselin B., "Audio-Visual Attention: Eye-Tracking Dataset and Analysis ToolBox," IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2238.
