Neural network learning (MLR-NNLR)

Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs

Paper Details

Authors:
Jean-Charles Vialatte, Vincent Gripon, Gilles Coppin
Submitted On:
13 November 2017 - 12:29pm

Document Files

slides.pdf (245 downloads)

[1] Jean-Charles Vialatte, Vincent Gripon, Gilles Coppin, "Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2339. Accessed: Nov. 24, 2017.

ARTERY/VEIN CLASSIFICATION IN FUNDUS IMAGES USING CNN AND LIKELIHOOD SCORE PROPAGATION


Artery/vein classification in fundus images is a prerequisite for the assessment of diseases such as diabetes, hypertension or other cardiovascular pathologies. One clinical measure used to assess the severity of cardiovascular risk is the retinal arterio-venous ratio (AVR), which significantly depends on the accuracy of vessel classification into arteries or veins. This paper proposes a novel method for artery/vein classification combining deep learning and graph propagation strategies.
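
The method combines a CNN classifier with likelihood score propagation along the vessel graph. As a rough illustration of the propagation idea only, the sketch below averages per-segment CNN artery likelihoods over connected components of a vessel graph; the graph layout, the scores dictionary and the simple averaging rule are illustrative assumptions, not the authors' actual propagation scheme.

```python
# Hypothetical illustration: smooth per-segment CNN artery likelihoods
# along a vessel graph by averaging over connected components.
from collections import defaultdict

def propagate_scores(edges, scores):
    """edges: list of (segment_i, segment_j) connections in the vessel graph.
    scores: dict mapping segment id -> CNN artery likelihood in [0, 1].
    Returns a dict of smoothed likelihoods (component-wise average)."""
    adj = defaultdict(set)
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    smoothed, visited = {}, set()
    for seed in scores:
        if seed in visited:
            continue
        # Collect the connected component containing `seed`.
        stack, component = [seed], []
        while stack:
            node = stack.pop()
            if node in visited:
                continue
            visited.add(node)
            component.append(node)
            stack.extend(adj[node] - visited)
        mean = sum(scores[n] for n in component) / len(component)
        for n in component:
            smoothed[n] = mean
    return smoothed

# Example: two vessel trees; segment 2 has an uncertain CNN score.
edges = [(0, 1), (1, 2), (3, 4)]
scores = {0: 0.9, 1: 0.8, 2: 0.55, 3: 0.2, 4: 0.1}
print(propagate_scores(edges, scores))
```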

Paper Details

Authors:
Fantin Girard, Farida Cheriet
Submitted On:
11 November 2017 - 10:33am

Document Files

GlobalSIP 2017 slides (8 downloads)

[1] Fantin Girard, Farida Cheriet, "ARTERY/VEIN CLASSIFICATION IN FUNDUS IMAGES USING CNN AND LIKELIHOOD SCORE PROPAGATION", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2307. Accessed: Nov. 24, 2017.

Multiple-image Super Resolution Using Both Reconstruction Optimization and Deep Neural Network


We present an efficient multi-image super resolution (MISR) method. Our solution concatenates an L1-norm-optimized reconstruction scheme for super resolution (SR) with a three-layer convolutional network for artifact removal. This two-stage method achieves excellent performance, outperforming existing state-of-the-art SR methods in both subjective and objective measurements (e.g., 5 to 7 dB improvement on popular image databases in terms of PSNR).
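
As a rough sketch of the second stage, the three-layer network below follows the classic SRCNN-style layout (feature extraction, non-linear mapping, reconstruction) and predicts a residual correction; the kernel sizes, channel counts and residual formulation are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a three-layer artifact-removal CNN (SRCNN-style layout).
import torch
import torch.nn as nn

class ArtifactRemovalCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Layer sizes are illustrative assumptions, not the paper's configuration.
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),              # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),    # reconstruction
        )

    def forward(self, x):
        # Predict a residual correction to the reconstructed SR image.
        return x + self.net(x)

sr_image = torch.rand(1, 1, 64, 64)   # output of the L1-optimized reconstruction stage
refined = ArtifactRemovalCNN()(sr_image)
print(refined.shape)  # torch.Size([1, 1, 64, 64])
```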

Paper Details

Authors:
Jie Wu, Tao Yue, Qiu Shen, Xun Cao, Zhan Ma
Submitted On:
9 November 2017 - 10:11pm

Document Files

Super-resolution (8 downloads)

[1] Jie Wu, Tao Yue, Qiu Shen, Xun Cao, Zhan Ma, "Multiple-image Super Resolution Using Both Reconstruction Optimization and Deep Neural Network", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2279. Accessed: Nov. 24, 2017.

When Harmonic Analysis Meets Machine Learning: Lipschitz Analysis of Deep Convolution Networks


Deep neural networks have led to dramatic improvements in performance for many machine learning tasks, yet the mathematical reasons for this success remain largely unclear. In this talk we present recent developments in the mathematical framework of convolutional neural networks (CNNs). In particular we discuss the scattering network of Mallat and how it relates to another problem in harmonic analysis, namely the phase retrieval problem. We then discuss the general convolutional neural network from a theoretician's point of view.
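
To make the Lipschitz viewpoint concrete, the standard composition bound below is a minimal sketch of the kind of estimate involved; the constants studied in the talk are presumably sharper than this crude product bound.

```latex
% Composition bound: if each layer f_i is L_i-Lipschitz, then the network
% \Phi = f_M \circ \cdots \circ f_1 satisfies
\[
  \|\Phi(x) - \Phi(y)\| \;\le\; \Big(\prod_{i=1}^{M} L_i\Big)\, \|x - y\| .
\]
% For a layer f_i(x) = \sigma(W_i x + b_i) with a 1-Lipschitz nonlinearity
% \sigma (e.g., ReLU), one may take L_i \le \|W_i\|_{\mathrm{op}}, the operator
% norm of the linear (convolution) map W_i.
```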

Paper Details

Authors:
Radu Balan
Submitted On:
19 October 2017 - 11:56am

Document Files

Presentation slides (pdf version) (39 downloads)

[1] Radu Balan, "When Harmonic Analysis Meets Machine Learning: Lipschitz Analysis of Deep Convolution Networks", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2263. Accessed: Nov. 24, 2017.

THE WITS INTELLIGENT TEACHING SYSTEM: DETECTING STUDENT ENGAGEMENT DURING LECTURES USING CONVOLUTIONAL NEURAL NETWORKS

Paper Details

Authors:
Turgay Celik
Submitted On:
17 September 2017 - 9:18pm

Document Files

Klein-Poster.pdf (31 downloads)

[1] Turgay Celik, "THE WITS INTELLIGENT TEACHING SYSTEM: DETECTING STUDENT ENGAGEMENT DURING LECTURES USING CONVOLUTIONAL NEURAL NETWORKS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2217. Accessed: Nov. 24, 2017.

DenseNet for Dense Flow


Classical approaches for estimating optical flow have achieved rapid progress in the last decade. However, most of them are too slow to be applied in real-time video analysis. Due to the great success of deep learning, recent work has focused on using CNNs to solve such dense prediction problems. In this paper, we investigate a new deep architecture, Densely Connected Convolutional Networks (DenseNet), to learn optical flow. This specific architecture is ideal for the problem at hand as it provides shortcut connections throughout the network, which leads to implicit deep supervision.
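
To illustrate the shortcut connections mentioned above, here is a minimal dense block sketch in PyTorch in which every layer receives the concatenation of all preceding feature maps; the growth rate and layer count are illustrative assumptions, not the flow-estimation architecture used in the paper.

```python
# Minimal dense block: every layer sees the concatenation of all earlier outputs.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate  # inputs grow because of concatenation

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # shortcut to all previous maps
            features.append(out)
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=8)
y = block(torch.rand(1, 8, 32, 32))
print(y.shape)  # torch.Size([1, 56, 32, 32]) = 8 + 4 * 12 channels
```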

Paper Details

Authors:
Yi Zhu, Shawn Newsam
Submitted On:
16 September 2017 - 2:45am

Document Files

ICIP17_paper2550_slides_yizhu.pdf (47 downloads)

[1] Yi Zhu, Shawn Newsam, "DenseNet for Dense Flow", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2181. Accessed: Nov. 24, 2017.

TOWARDS THINNER CONVOLUTIONAL NEURAL NETWORKS THROUGH GRADUALLY GLOBAL PRUNING


Deep network pruning is an effective method to reduce the storage and computation cost of deep neural networks when deploying them on resource-limited devices. Among the many pruning granularities, neuron-level pruning removes redundant neurons and filters from the model and results in thinner networks. In this paper, we propose a gradually global pruning scheme for neuron-level pruning. In each pruning step,
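
For a rough idea of neuron-level pruning, the sketch below zeroes the output filters of a single convolutional layer with the smallest L1 norms; the pruning ratio and the L1 criterion are illustrative assumptions, and the gradual, global schedule proposed in the paper is not reproduced here.

```python
# Illustrative neuron-level (filter) pruning: zero out the conv filters
# with the smallest L1 norms. Not the paper's gradually global scheme.
import torch
import torch.nn as nn

def prune_filters(conv, ratio=0.5):
    """Zero the weakest `ratio` fraction of output filters of `conv` by L1 norm."""
    with torch.no_grad():
        norms = conv.weight.abs().sum(dim=(1, 2, 3))   # one L1 norm per filter
        num_prune = int(ratio * conv.out_channels)
        prune_idx = torch.argsort(norms)[:num_prune]    # weakest filters
        conv.weight[prune_idx] = 0.0
        if conv.bias is not None:
            conv.bias[prune_idx] = 0.0
    return prune_idx

conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
removed = prune_filters(conv, ratio=0.25)
print(f"pruned {len(removed)} of {conv.out_channels} filters")
```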

Paper Details

Authors:
Zhengtao Wang, Ce Zhu, Zhiqiang Xia, Qi Guo, Yipeng Liu
Submitted On:
15 September 2017 - 1:19pm

Document Files

ICIP1701 (19 downloads)

[1] Zhengtao Wang, Ce Zhu, Zhiqiang Xia, Qi Guo, Yipeng Liu, "TOWARDS THINNER CONVOLUTIONAL NEURAL NETWORKS THROUGH GRADUALLY GLOBAL PRUNING", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2155. Accessed: Nov. 24, 2017.

ATTRIBUTE-CONTROLLED FACE PHOTO SYNTHESIS FROM SIMPLE LINE DRAWING

Paper Details

Authors:
Qi Guo, Ce Zhu, Zhiqiang Xia, Zhengtao Wang, Yipeng Liu
Submitted On:
15 September 2017 - 11:50am

Document Files

ATTRIBUTE-CONTROLLED FACE PHOTO SYNTHESIS FROM SIMPLE LINE DRAWING.pdf (21 downloads)

[1] Qi Guo, Ce Zhu, Zhiqiang Xia, Zhengtao Wang, Yipeng Liu, "ATTRIBUTE-CONTROLLED FACE PHOTO SYNTHESIS FROM SIMPLE LINE DRAWING", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2146. Accessed: Nov. 24, 2017.

Search Video Action Proposal with Recurrent and Static YOLO


In this paper, we propose a new approach for searching action proposals in unconstrained videos. Our method first produces snippet action proposals by combining the state-of-the-art YOLO detector (Static YOLO) with our regression-based RNN detector (Recurrent YOLO). These short action proposals are then integrated into final action proposals by solving a two-pass dynamic program that maximizes actionness score and temporal smoothness concurrently.
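
To illustrate the linking idea (not the authors' exact two-pass formulation), the sketch below chains per-frame detections into one path that maximizes detection score plus an IoU-based temporal smoothness term via Viterbi-style dynamic programming; the smoothness weight and the data layout are illustrative assumptions.

```python
# Viterbi-style linking of per-frame boxes into one action path,
# maximizing score + temporal smoothness (IoU). Illustrative only.

def iou(a, b):
    """IoU of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def link_boxes(frames, smooth_weight=1.0):
    """frames: list over time of lists of (box, score). Returns best path of indices."""
    dp = [[s for _, s in frames[0]]]   # best cumulative score ending at each box
    back = []
    for t in range(1, len(frames)):
        dp_t, back_t = [], []
        for box, score in frames[t]:
            # Best predecessor balances its cumulative score and smoothness.
            cands = [dp[t - 1][j] + smooth_weight * iou(prev_box, box)
                     for j, (prev_box, _) in enumerate(frames[t - 1])]
            j_best = max(range(len(cands)), key=cands.__getitem__)
            dp_t.append(score + cands[j_best])
            back_t.append(j_best)
        dp.append(dp_t)
        back.append(back_t)
    # Backtrack from the best final box.
    path = [max(range(len(dp[-1])), key=dp[-1].__getitem__)]
    for back_t in reversed(back):
        path.append(back_t[path[-1]])
    return list(reversed(path))

frames = [[((0, 0, 10, 10), 0.9), ((50, 50, 60, 60), 0.4)],
          [((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.5)]]
print(link_boxes(frames))  # [0, 0]
```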

Paper Details

Authors:
Romain Vial, Hongyuan Zhu, Yonghong Tian, Shijian Lu
Submitted On:
20 September 2017 - 10:51am

Document Files

presentation.pdf (392 downloads)

[1] Romain Vial, Hongyuan Zhu, Yonghong Tian, Shijian Lu, "Search Video Action Proposal with Recurrent and Static YOLO", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2140. Accessed: Nov. 24, 2017.

Foveated Neural Network: Gaze Prediction On Egocentric Videos


A novel deep convolutional neural network, named Foveated Neural Network (FNN), is proposed to predict gaze on the current frame in egocentric videos. The retina-like visual input from the region of interest on the previous frame is analysed and encoded. The fusion of the previous frame's hidden representation with the current frame's feature maps guides gaze prediction on the current frame. To model motion, we also feed the dense optical flow between these adjacent frames to FNN as an additional input.
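
To make the fusion step concrete, here is a minimal sketch in which an encoding of the previous frame's region of interest, the current frame's feature maps and the dense optical flow are concatenated and decoded into a gaze heatmap; the channel counts and the use of plain concatenation are illustrative assumptions, not the FNN architecture itself.

```python
# Minimal sketch of fusing previous-frame ROI encoding, current-frame features
# and optical flow into a gaze heatmap. Channel sizes are illustrative.
import torch
import torch.nn as nn

class GazeFusion(nn.Module):
    def __init__(self, feat_ch=32, roi_ch=16):
        super().__init__()
        self.roi_encoder = nn.Conv2d(3, roi_ch, kernel_size=3, padding=1)
        self.feat_encoder = nn.Conv2d(3, feat_ch, kernel_size=3, padding=1)
        self.flow_encoder = nn.Conv2d(2, 8, kernel_size=3, padding=1)
        self.decoder = nn.Sequential(
            nn.Conv2d(roi_ch + feat_ch + 8, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),   # one-channel gaze heatmap
        )

    def forward(self, prev_roi, cur_frame, flow):
        fused = torch.cat([self.roi_encoder(prev_roi),
                           self.feat_encoder(cur_frame),
                           self.flow_encoder(flow)], dim=1)
        return self.decoder(fused)

model = GazeFusion()
heatmap = model(torch.rand(1, 3, 64, 64),   # retina-like ROI from previous frame
                torch.rand(1, 3, 64, 64),   # current frame
                torch.rand(1, 2, 64, 64))   # dense optical flow (u, v)
print(heatmap.shape)  # torch.Size([1, 1, 64, 64])
```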

Paper Details

Authors:
Mengmi Zhang, Keng-Teck Ma, Joo-Hwee Lim, Qi Zhao
Submitted On:
15 September 2017 - 4:11am

Document Files

ICIP17_poster.pdf (48 downloads)

[1] Mengmi Zhang, Keng-Teck Ma, Joo-Hwee Lim, Qi Zhao, "Foveated Neural Network: Gaze Prediction On Egocentric Videos", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2100. Accessed: Nov. 24, 2017.
