
Pattern recognition and classification (MLR-PATT)

Fisher’s Linear Discriminant Analysis and Its Use in Feature Selection


Introductory derivations are presented before the criterion of Fisher's LDA is introduced. Fisher's LDA and its different forms are then described, and the Fisher criteria and their maximization are derived. The role of traces in Fisher's LDA is explained, and a weighted-features method is applied to feature selection. Several simple examples clarify the feature-selection idea.
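The slides themselves carry the formulas, but the classic two-class Fisher criterion behind this kind of feature scoring — squared difference of class means over the sum of class variances — can be sketched per feature. This is a generic illustration of Fisher-criterion feature ranking, not necessarily the weighted-features method of the slides; the data and function names are hypothetical.

```python
import statistics

def fisher_score(class_a, class_b):
    """Per-feature two-class Fisher criterion:
    (difference of class means)^2 / (sum of class variances).
    A larger score means the feature separates the classes better."""
    ma, mb = statistics.mean(class_a), statistics.mean(class_b)
    va, vb = statistics.variance(class_a), statistics.variance(class_b)
    return (ma - mb) ** 2 / (va + vb)

# Toy data: values of two features for two classes.
# Feature 0 separates the classes well; feature 1 barely does.
f0_a, f0_b = [1.0, 1.2, 0.9, 1.1], [5.0, 5.1, 4.9, 5.2]
f1_a, f1_b = [2.0, 3.0, 2.5, 2.2], [2.1, 2.9, 2.4, 2.6]

scores = [fisher_score(f0_a, f0_b), fisher_score(f1_a, f1_b)]
# Rank feature indices by decreasing score: select the top-ranked ones.
ranked = sorted(range(len(scores)), key=lambda j: -scores[j])
print(ranked)  # → [0, 1]
```

Selecting the top-k entries of `ranked` is the simplest form of filter-style feature selection this criterion supports.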


Paper Details

Authors:
Mehmet Koc, Ozen Yelbasi
Submitted On:
4 October 2017 - 9:26am

Document Files

Sunu1.pptx



[1] Mehmet Koc, Ozen Yelbasi, "Fisher’s Linear Discriminant Analysis and Its Use in Feature Selection", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2255. Accessed: Oct. 20, 2017.

LOCALIZED MULTI-KERNEL DISCRIMINATIVE CANONICAL CORRELATION ANALYSIS FOR VIDEO-BASED PERSON RE-IDENTIFICATION

Paper Details

Authors:
Guangyi Chen, Jiwen Lu, Jianjiang Feng, Jie Zhou
Submitted On:
16 September 2017 - 9:31am

Document Files

ICIP_presentation.pdf



[1] Guangyi Chen, Jiwen Lu, Jianjiang Feng, Jie Zhou, "LOCALIZED MULTI-KERNEL DISCRIMINATIVE CANONICAL CORRELATION ANALYSIS FOR VIDEO-BASED PERSON RE-IDENTIFICATION", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2192. Accessed: Oct. 20, 2017.

Appearance and Motion based Deep Learning Architecture for Moving Object Detection in Moving Camera


Background subtraction is a widely used method for detecting moving objects in an image. However, it is vulnerable to the dynamic background that arises in video from a moving camera. In this paper, we propose a novel moving-object detection approach that uses deep learning to achieve robust performance even against a dynamic background. The proposed approach considers appearance features as well as motion features. To this end, we design a deep learning architecture composed of two networks: an appearance network and a motion network.
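The paper's two networks are trained end-to-end; as a minimal stand-in for how an appearance cue and a motion cue can be combined, the sketch below fuses two per-pixel foreground probability maps by simple averaging and thresholding. The maps, threshold, and function name are all hypothetical.

```python
def fuse_foreground(appearance_prob, motion_prob, threshold=0.5):
    """Late fusion of two per-pixel foreground probability maps
    (nested lists with values in [0, 1]) by averaging the appearance
    and motion scores, then thresholding into a binary mask."""
    mask = []
    for row_a, row_m in zip(appearance_prob, motion_prob):
        mask.append([1 if (pa + pm) / 2 >= threshold else 0
                     for pa, pm in zip(row_a, row_m)])
    return mask

# 2x3 toy maps: only the centre pixel is both object-like and moving.
app = [[0.1, 0.9, 0.2], [0.1, 0.8, 0.1]]
mot = [[0.2, 0.8, 0.1], [0.0, 0.9, 0.2]]
print(fuse_foreground(app, mot))  # → [[0, 1, 0], [0, 1, 0]]
```

A learned fusion layer, as in the paper, would replace the fixed average with trained weights.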

Paper Details

Authors:
Byeongho Heo, Kimin Yun, Jin Young Choi
Submitted On:
15 September 2017 - 11:38am

Document Files

slide



[1] Byeongho Heo, Kimin Yun, Jin Young Choi, "Appearance and Motion based Deep Learning Architecture for Moving Object Detection in Moving Camera", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2144. Accessed: Oct. 20, 2017.

REAL-TIME OBJECT DETECTION BY A MULTI-FEATURE FULLY CONVOLUTIONAL NETWORK

Paper Details

Authors:
Yajing Guo, Xiaoqiang Guo, Zhuqing Jiang, Aidong Men, Yun Zhou
Submitted On:
15 September 2017 - 6:19am

Document Files

2259-REAL-TIME OBJECT DETECTION BY A MULTI-FEATURE FULLY CONVOLUTIONAL NETWORK.pdf



[1] Yajing Guo, Xiaoqiang Guo, Zhuqing Jiang, Aidong Men, Yun Zhou, "REAL-TIME OBJECT DETECTION BY A MULTI-FEATURE FULLY CONVOLUTIONAL NETWORK", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2117. Accessed: Oct. 20, 2017.

ICIP2017 poster

Paper Details

Authors:
Zhongxing Han, Hui Zhang, Jinfang Zhang, Xiaohui Hu
Submitted On:
15 September 2017 - 4:27am

Document Files

ICIP2017 poster



[1] Zhongxing Han, Hui Zhang, Jinfang Zhang, Xiaohui Hu, "ICIP2017 poster", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2103. Accessed: Oct. 20, 2017.

COUPLED ANALYSIS-SYNTHESIS DICTIONARY LEARNING FOR PERSON RE-IDENTIFICATION


In this paper, we propose a novel coupled dictionary learning method, namely coupled analysis-synthesis dictionary learning, to improve the performance of person re-identification across the non-overlapping fields of view of different cameras. Most existing coupled dictionary learning methods train a coupled synthesis dictionary directly on the original feature spaces, which limits the representation ability of the dictionary.

Paper Details

Authors:
Submitted On:
15 September 2017 - 9:15am

Document Files

ICIP2017-PaperID-2241-COUPLED ANALYSIS-SYNTHESIS DICTIONARY LEARNING FOR PERSON RE-IDENTIFICATION.pdf



[1] , "COUPLED ANALYSIS-SYNTHESIS DICTIONARY LEARNING FOR PERSON RE-IDENTIFICATION", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2090. Accessed: Oct. 20, 2017.

COMPRESSED-DOMAIN VIDEO CLASSIFICATION WITH DEEP NEURAL NETWORKS: “THERE’S WAY TOO MUCH INFORMATION TO DECODE THE MATRIX”


We investigate video classification via a 3D deep convolutional neural network (CNN) that directly ingests compressed-bitstream information. This idea is based on the observation that video macroblock (MB) motion vectors, which are very compact and directly available from the compressed bitstream, inherently capture local spatiotemporal changes in each video scene.
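To make the input concrete: per-frame grids of MB motion vectors can be stacked into a small T×H×W×2 clip tensor, which is the kind of compact representation such a 3D CNN could ingest without full pixel decoding. The input format, helper names, and motion-vector values below are all hypothetical illustrations, not the paper's pipeline.

```python
import math

def mv_clip_tensor(frames):
    """Stack per-frame macroblock motion-vector grids (each an H x W
    list of (dx, dy) pairs) into a T x H x W x 2 nested-list tensor."""
    return [[[list(mv) for mv in row] for row in frame] for frame in frames]

def mean_mv_magnitude(clip):
    """Crude scene-activity summary: mean motion-vector magnitude
    over all macroblocks and frames."""
    mags = [math.hypot(dx, dy)
            for frame in clip for row in frame for dx, dy in row]
    return sum(mags) / len(mags)

# Two frames of a 2x2 macroblock grid (hypothetical decoded MVs).
frames = [
    [[(0, 0), (3, 4)], [(0, 0), (0, 0)]],
    [[(0, 0), (6, 8)], [(0, 0), (0, 0)]],
]
clip = mv_clip_tensor(frames)
print(len(clip), len(clip[0]), len(clip[0][0]), len(clip[0][0][0]))  # 2 2 2 2
print(mean_mv_magnitude(clip))  # (5 + 10) / 8 = 1.875
```

In the paper this tensor would be consumed by 3D convolutions rather than summarized into a scalar; the summary here only shows that the motion field alone already carries scene-activity information.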

Paper Details

Authors:
Aaron Chadha, Alhabib Abbas, Yiannis Andreopoulos
Submitted On:
14 September 2017 - 8:03pm

Document Files

Compressed_domain_video_classification.pdf



[1] Aaron Chadha, Alhabib Abbas, Yiannis Andreopoulos, "COMPRESSED-DOMAIN VIDEO CLASSIFICATION WITH DEEP NEURAL NETWORKS: “THERE’S WAY TOO MUCH INFORMATION TO DECODE THE MATRIX”", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2057. Accessed: Oct. 20, 2017.

FAST AND ACCURATE IMAGE RECOGNITION USING DEEPLY-FUSED BRANCHY NETWORKS


To achieve higher image-recognition accuracy, deeper and wider networks have been used. However, as the network grows, its forward-inference time also increases. To address this problem, we propose the Deeply-Fused Branchy Network (DFB-Net), which adds small but complete side branches to the target baseline main branch. DFB-Net allows easy-to-discriminate samples to be classified faster; for hard-to-discriminate samples, it fuses the branches by averaging their softmax probabilities to make a collaborative prediction.
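The inference-time logic — exit early on an easy sample, average softmax probabilities on a hard one — can be sketched with precomputed branch logits. The softmax averaging matches the abstract's description; the confidence-threshold exit rule and all values below are assumptions for illustration.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def branchy_predict(branch_logits, confidence=0.9):
    """Early-exit inference sketch: walk the branches in order (shallow
    to deep) and return as soon as one is confident enough; otherwise
    fuse all branches by averaging their softmax probabilities."""
    probs = []
    for logits in branch_logits:
        p = softmax(logits)
        probs.append(p)
        if max(p) >= confidence:          # easy sample: exit early
            return p.index(max(p))
    # Hard sample: collaborative prediction by probability averaging.
    fused = [sum(col) / len(probs) for col in zip(*probs)]
    return fused.index(max(fused))

# Easy sample: the first (shallow) branch is already confident.
print(branchy_predict([[8.0, 0.0, 0.0], [0.0, 5.0, 0.0]]))  # → 0
# Hard sample: no branch is confident, so the predictions are fused.
print(branchy_predict([[1.0, 1.2, 0.8], [1.0, 1.5, 0.9]]))  # → 1
```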

Paper Details

Authors:
Mou-Yue Huang, Ching-Hao Lai, Sin-Horng Chen
Submitted On:
14 September 2017 - 9:20am

Document Files

ICIP 2017 Paper #3312



[1] Mou-Yue Huang, Ching-Hao Lai, Sin-Horng Chen, "FAST AND ACCURATE IMAGE RECOGNITION USING DEEPLY-FUSED BRANCHY NETWORKS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2030. Accessed: Oct. 20, 2017.

Hyper-Parameter Optimization for Convolutional Neural Network Committees Based on Evolutionary Algorithms


In a broad range of computer vision tasks, convolutional neural networks (CNNs) are among the most prominent techniques due to their outstanding performance. Yet finding the best-performing network structure for a specific application is not trivial, because it is often unclear how the network structure relates to network accuracy. We propose an evolutionary-algorithm-based framework that automatically optimizes the CNN structure by means of its hyper-parameters.

Paper Details

Authors:
Erik Bochinski, Tobias Senst, Thomas Sikora
Submitted On:
14 September 2017 - 8:14am

Document Files

icip17_poster.pdf



[1] Erik Bochinski, Tobias Senst, Thomas Sikora, "Hyper-Parameter Optimization for Convolutional Neural Network Committees Based on Evolutionary Algorithms", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2022. Accessed: Oct. 20, 2017.

SSPP-DAN: Deep Domain Adaptation Network for Face Recognition with Single Sample Per Person


Real-world face recognition using a single sample per person (SSPP) is a challenging task. The problem is exacerbated if the conditions under which the gallery image and the probe set are captured are completely different. To address these issues from the perspective of domain adaptation, we introduce an SSPP domain adaptation network (SSPP-DAN). In the proposed approach, domain adaptation, feature extraction, and classification are performed jointly using a deep architecture with domain-adversarial training.

Paper Details

Authors:
Sungeun Hong, Woobin Im, Jongbin Ryu, Hyun S. Yang
Submitted On:
15 September 2017 - 11:37am

Document Files

SSPP-DAN_Slides.pptx


SSPP-DAN_Paper.pdf


SSPP-DAN_Slides.pdf



[1] Sungeun Hong, Woobin Im, Jongbin Ryu, Hyun S. Yang, "SSPP-DAN: Deep Domain Adaptation Network for Face Recognition with Single Sample Per Person", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2017. Accessed: Oct. 20, 2017.
