
Pattern recognition and classification (MLR-PATT)

COUPLED ANALYSIS-SYNTHESIS DICTIONARY LEARNING FOR PERSON RE-IDENTIFICATION


In this paper, we propose a novel coupled dictionary learning method, coupled analysis-synthesis dictionary learning, to improve person re-identification across the non-overlapping fields of view of different cameras. Most existing coupled dictionary learning methods train a coupled synthesis dictionary directly on the original feature spaces, which limits the dictionary's representation ability.
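The analysis-synthesis split the abstract refers to can be illustrated with a toy NumPy sketch (not the authors' method): an analysis dictionary P maps a feature vector to a code, and a synthesis dictionary D reconstructs the feature from that code. Here P is taken from the top principal directions and D is fit by least squares; all sizes and the fitting procedure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features: n samples of dimension d (e.g., appearance descriptors).
d, k, n = 16, 8, 200
X = rng.normal(size=(d, n))

# Analysis dictionary P (k x d): codes A = P @ X.
# Synthesis dictionary D (d x k): reconstruction X_hat = D @ A.
# Closed-form toy fit: take P from the top-k principal directions,
# then solve D by least squares so that D @ (P @ X) approximates X.
U, _, _ = np.linalg.svd(X, full_matrices=False)
P = U[:, :k].T                      # analysis dictionary
A = P @ X                           # analysis codes
D = X @ np.linalg.pinv(A)           # synthesis dictionary (least squares)

err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

The coupled setting in the paper additionally ties the codes of the two camera views together; this sketch only shows the single-view analysis-synthesis pairing.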

Paper Details

Authors:
Submitted On:
15 September 2017 - 9:15am

Document Files

ICIP2017-PaperID-2241-COUPLED ANALYSIS-SYNTHESIS DICTIONARY LEARNING FOR PERSON RE-IDENTIFICATION.pdf


[1] "COUPLED ANALYSIS-SYNTHESIS DICTIONARY LEARNING FOR PERSON RE-IDENTIFICATION", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2090. Accessed: Apr. 25, 2019.

COMPRESSED-DOMAIN VIDEO CLASSIFICATION WITH DEEP NEURAL NETWORKS: “THERE’S WAY TOO MUCH INFORMATION TO DECODE THE MATRIX”


We investigate video classification via a 3D deep convolutional neural network (CNN) that directly ingests compressed-bitstream information. The idea rests on the observation that video macroblock (MB) motion vectors, which are very compact and directly available from the compressed bitstream, inherently capture local spatiotemporal changes in each video scene.
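To make the compactness argument concrete, here is a rough sketch of assembling macroblock motion vectors into the clip tensor a 3D CNN would ingest; the resolution, clip length, and tensor layout are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Illustrative macroblock motion-vector field: for a clip of T frames,
# each 16x16 macroblock carries one (dx, dy) vector. For 320x240 video
# that is a 20x15 grid -- far smaller than the decoded pixel data.
T, H_mb, W_mb = 16, 15, 20
rng = np.random.default_rng(1)
mv = rng.integers(-8, 9, size=(T, H_mb, W_mb, 2)).astype(np.float32)

# Stack into the (channels, T, H, W) layout a 3D CNN would ingest.
clip = np.transpose(mv, (3, 0, 1, 2))
print(clip.shape)  # (2, 16, 15, 20)

# Compare against full decoding: raw RGB pixels for the same clip.
pixels = T * 240 * 320 * 3
print(f"input reduction: {pixels / mv.size:.0f}x fewer values")
```

The motion-vector tensor here is roughly 384 times smaller than the decoded RGB clip, which is the "way too much information to decode" point in the title.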

Paper Details

Authors:
Aaron Chadha, Alhabib Abbas, Yiannis Andreopoulos
Submitted On:
14 September 2017 - 8:03pm

Document Files

Compressed_domain_video_classification.pdf


[1] Aaron Chadha, Alhabib Abbas, Yiannis Andreopoulos, "COMPRESSED-DOMAIN VIDEO CLASSIFICATION WITH DEEP NEURAL NETWORKS: “THERE’S WAY TOO MUCH INFORMATION TO DECODE THE MATRIX”", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2057. Accessed: Apr. 25, 2019.

FAST AND ACCURATE IMAGE RECOGNITION USING DEEPLY-FUSED BRANCHY NETWORKS


To achieve higher image-recognition accuracy, deeper and wider networks have been used; however, as networks grow, their forward inference time also grows. To address this problem, we propose the Deeply-Fused Branchy Network (DFB-Net), which adds small but complete side branches to the target baseline main branch. DFB-Net allows easy-to-discriminate samples to be classified early, while for hard-to-discriminate samples it makes collaborative predictions by averaging the branches' softmax probabilities.
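The early-exit-plus-fusion rule described above can be sketched as follows; the confidence threshold and the specific exit criterion are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dfb_predict(branch_logits, threshold=0.9):
    """Early exit with probability fusion (a sketch; threshold and branch
    ordering are assumptions). branch_logits is ordered from the earliest
    (shallow) side branch to the final main branch. Returns the predicted
    class and how many branches were evaluated."""
    probs = []
    for z in branch_logits:
        p = softmax(z)
        probs.append(p)
        if p.max() >= threshold:          # confident: classify early
            return int(p.argmax()), len(probs)
    fused = np.mean(probs, axis=0)        # hard sample: average softmax
    return int(fused.argmax()), len(probs)

# Easy sample: the first branch is already confident, so it exits early.
easy = [np.array([8.0, 0.0, 0.0]), np.array([9.0, 0.0, 0.0])]
print(dfb_predict(easy))

# Hard sample: no branch is confident, so all branches are fused.
hard = [np.array([0.6, 0.5, 0.0]), np.array([0.4, 0.7, 0.0])]
print(dfb_predict(hard))
```

The easy sample is resolved after one branch; the hard sample runs every branch and takes the fused prediction.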

Paper Details

Authors:
Mou-Yue Huang, Ching-Hao Lai, Sin-Horng Chen
Submitted On:
14 September 2017 - 9:20am

Document Files

ICIP 2017 Paper #3312


[1] Mou-Yue Huang, Ching-Hao Lai, Sin-Horng Chen, "FAST AND ACCURATE IMAGE RECOGNITION USING DEEPLY-FUSED BRANCHY NETWORKS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2030. Accessed: Apr. 25, 2019.

Hyper-Parameter Optimization for Convolutional Neural Network Committees Based on Evolutionary Algorithms


In a broad range of computer vision tasks, convolutional neural networks (CNNs) are one of the most prominent techniques due to their outstanding performance.
Yet it is not trivial to find the best performing network structure for a specific application because it is often unclear how the network structure relates to the network accuracy.
We propose an evolutionary algorithm-based framework to automatically optimize the CNN structure by means of hyper-parameters.
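A minimal (mu + lambda) evolutionary loop of this kind can be sketched as follows, with a synthetic fitness function standing in for the expensive step of actually training and validating a CNN; the genome fields, bounds, and mutation step are all illustrative assumptions.

```python
import random

random.seed(0)

# Hyper-parameter genome: layer count, filters per layer, kernel size.
BOUNDS = {"layers": (2, 8), "filters": (16, 128), "kernel": (3, 7)}

def mutate(g):
    """Perturb one randomly chosen hyper-parameter, clipped to its bounds."""
    key = random.choice(list(g))
    lo, hi = BOUNDS[key]
    step = max(1, (hi - lo) // 8)
    child = dict(g)
    child[key] = max(lo, min(hi, g[key] + random.choice((-1, 1)) * step))
    return child

def fitness(g):
    # Stand-in for "train the CNN and return validation accuracy";
    # a synthetic surface peaking at layers=5, filters=64, kernel=3.
    return -((g["layers"] - 5) ** 2
             + ((g["filters"] - 64) / 16) ** 2
             + (g["kernel"] - 3) ** 2)

# (mu + lambda) evolution: keep the 4 best of parents plus 8 children.
pop = [{"layers": 2, "filters": 16, "kernel": 7} for _ in range(4)]
for _ in range(60):
    children = [mutate(random.choice(pop)) for _ in range(8)]
    pop = sorted(pop + children, key=fitness, reverse=True)[:4]

best = pop[0]
print(best, fitness(best))
```

In the paper's framework the surviving genomes are full CNN configurations and the evaluated networks are additionally combined into committees; this sketch only shows the search loop itself.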

Paper Details

Authors:
Erik Bochinski, Tobias Senst, Thomas Sikora
Submitted On:
14 September 2017 - 8:14am

Document Files

icip17_poster.pdf


[1] Erik Bochinski, Tobias Senst, Thomas Sikora, "Hyper-Parameter Optimization for Convolutional Neural Network Committees Based on Evolutionary Algorithms", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2022. Accessed: Apr. 25, 2019.

SSPP-DAN: Deep Domain Adaptation Network for Face Recognition with Single Sample Per Person


Real-world face recognition using a single sample per person (SSPP) is a challenging task. The problem is exacerbated if the conditions under which the gallery image and the probe set are captured are completely different. To address these issues from the perspective of domain adaptation, we introduce an SSPP domain adaptation network (SSPP-DAN). In the proposed approach, domain adaptation, feature extraction, and classification are performed jointly using a deep architecture with domain-adversarial training.
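The core trick behind domain-adversarial training is a gradient reversal layer: identity in the forward pass, negated (and scaled) gradient in the backward pass, so the feature extractor learns features that confuse the domain classifier. A framework-agnostic sketch of just that layer (real implementations hook into a framework's autograd):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; multiplies the incoming gradient by
    -lambda in the backward pass, so minimizing the domain-classification
    loss downstream *maximizes* domain confusion upstream."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                     # features pass through untouched

    def backward(self, grad):
        return -self.lam * grad      # reversed, scaled gradient

grl = GradientReversal(lam=0.5)
feat = np.array([1.0, -2.0, 3.0])
print(grl.forward(feat))             # identical to the input
print(grl.backward(np.ones(3)))      # [-0.5 -0.5 -0.5]
```

In SSPP-DAN this sits between the shared feature extractor and the domain classifier, while the label classifier receives the features unmodified.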

Paper Details

Authors:
Sungeun Hong, Woobin Im, Jongbin Ryu, Hyun S. Yang
Submitted On:
15 September 2017 - 11:37am

Document Files

SSPP-DAN_Slides.pptx

SSPP-DAN_Paper.pdf

SSPP-DAN_Slides.pdf


[1] Sungeun Hong, Woobin Im, Jongbin Ryu, Hyun S. Yang, "SSPP-DAN: Deep Domain Adaptation Network for Face Recognition with Single Sample Per Person", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2017. Accessed: Apr. 25, 2019.

TOWARDS THINNER CONVOLUTIONAL NEURAL NETWORKS THROUGH GRADUALLY GLOBAL PRUNING


Deep network pruning is an effective way to reduce the storage and computation costs of deep neural networks when deploying them on resource-limited devices. Among the many pruning granularities, neuron-level pruning removes redundant neurons and filters from the model, resulting in thinner networks. In this paper, we propose a gradually global pruning scheme for neuron-level pruning: in each pruning step, a small percentage of neurons is selected and dropped across all layers of the model.
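A single global pruning step of this kind can be sketched as follows, with the L1 norm of each neuron's weights as a stand-in saliency score; the scoring rule, toy layer sizes, and masking approach are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: per-layer weight matrices; each row is one neuron's weights.
layers = [rng.normal(size=(32, 16)),
          rng.normal(size=(64, 32)),
          rng.normal(size=(10, 64))]

def global_prune_step(layers, percent=0.1):
    """Drop the globally least-salient `percent` of neurons (rows), ranked
    across ALL layers at once rather than layer by layer. Uses the L1 norm
    as a stand-in saliency score, and masks rows to zero rather than
    reshaping, so layer connectivity is untouched."""
    flat = [(score, i, j)
            for i, w in enumerate(layers)
            for j, score in enumerate(np.abs(w).sum(axis=1))]
    flat.sort()                               # globally ascending saliency
    n_drop = int(len(flat) * percent)
    pruned = [w.copy() for w in layers]
    for _, i, j in flat[:n_drop]:
        pruned[i][j, :] = 0.0                 # zero out the pruned neuron
    return pruned

pruned = global_prune_step(layers, percent=0.1)
total = sum(w.shape[0] for w in layers)
zeroed = sum(int((np.abs(w).sum(axis=1) == 0).sum()) for w in pruned)
print(f"{zeroed}/{total} neurons pruned in this step")
```

Ranking globally rather than per layer is what lets the scheme decide for itself which layers can afford to lose more neurons; the paper's "gradual" aspect comes from repeating such small steps with retraining in between.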

Paper Details

Authors:
Ce Zhu, Zhiqiang Xia, Qi Guo, Yipeng Liu
Submitted On:
13 September 2017 - 12:37pm

Document Files

ICIP1701


[1] Ce Zhu, Zhiqiang Xia, Qi Guo, Yipeng Liu, "TOWARDS THINNER CONVOLUTIONAL NEURAL NETWORKS THROUGH GRADUALLY GLOBAL PRUNING", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1979. Accessed: Apr. 25, 2019.

Age Group Classification in the Wild with Deep RoR Architecture


Automatically predicting age group from face images acquired in unconstrained conditions is an important and challenging task in many real-world applications. Nevertheless, conventional methods with manually designed features perform unsatisfactorily on in-the-wild benchmarks because such features cannot cope with the large variations in unconstrained images.

Paper Details

Authors:
Ke Zhang, Liru Guo, Ce Gao, Zhenbing Zhao, Miao Sun, Xingfang Yuan
Submitted On:
12 September 2017 - 10:21pm

Document Files

Age Group Classification in the Wild with Deep RoR Architecture


[1] Ke Zhang, Liru Guo, Ce Gao, Zhenbing Zhao, Miao Sun, Xingfang Yuan, "Age Group Classification in the Wild with Deep RoR Architecture", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1953. Accessed: Apr. 25, 2019.

LEAF CLASSIFICATION BASED ON A QUADRATIC CURVED AXIS


We introduce a new reference axis for leaf classification. The new reference axis, called the Mid-Leaf axis, is a quadratic curve that runs along the middle of a leaf, derived from three basic landmark points: the apex, the centroid, and the petiole. After mapping to a new plane based on this curve, leaf shape features are invariant under translation, rotation, scaling, and bending. We propose leaf shape features based on partitioning the morphological features and the tangent direction angle of the leaf contour.
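Fitting a quadratic axis through three landmark points can be sketched with NumPy; the coordinates below and the assumption that x runs monotonically from apex to petiole (so y can be written as a function of x) are illustrative, not taken from the paper.

```python
import numpy as np

# Three landmark points (x, y) -- illustrative coordinates.
apex     = (0.0, 0.0)
centroid = (5.0, 2.0)
petiole  = (10.0, 1.0)

xs, ys = zip(apex, centroid, petiole)

# A quadratic y = a*x^2 + b*x + c is uniquely determined by three points
# with distinct x. A real pipeline would first rotate the leaf so the
# apex-to-petiole direction is roughly axis-aligned.
a, b, c = np.polyfit(xs, ys, deg=2)

mid_leaf = lambda x: a * x**2 + b * x + c
for (x, y) in (apex, centroid, petiole):
    assert abs(mid_leaf(x) - y) < 1e-6   # curve passes through each landmark
print(f"Mid-Leaf axis: y = {a:.3f}x^2 + {b:.3f}x + {c:.3f}")
```

Because the curve is pinned to the apex, centroid, and petiole, features measured relative to it follow the leaf as it bends, which is the invariance property claimed in the abstract.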

Paper Details

Authors:
Phuchitsan Chaisuk, Krisada Phromsuthirak, Vutipong Areekul
Submitted On:
11 September 2017 - 12:14am

Document Files

Poster - LEAF CLASSIFICATION BASED ON A QUADRATIC CURVED AXIS.pdf


[1] Phuchitsan Chaisuk, Krisada Phromsuthirak, Vutipong Areekul, "LEAF CLASSIFICATION BASED ON A QUADRATIC CURVED AXIS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1893. Accessed: Apr. 25, 2019.

4D Effect Classification by Encoding CNN Features


4D effects are physical effects simulated in sync with videos, movies, and games to augment the events occurring in a story or a virtual world. Types of 4D effects commonly used in immersive media include seat motion, vibration, flash, wind, water, scent, thunderstorm, snow, and fog. Currently, the recognition of physical effects from a video is mainly performed by human experts. Although 4D effects are promising for delivering immersive experiences and entertainment, this manual production process has been the main obstacle to their faster and wider adoption.

Paper Details

Authors:
Thomhert S. Siadari, Mikyong Han, Hyunjin Yoon
Submitted On:
14 September 2017 - 3:13am

Document Files

presentation slides


[1] Thomhert S. Siadari, Mikyong Han, Hyunjin Yoon, "4D Effect Classification by Encoding CNN Features", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1892. Accessed: Apr. 25, 2019.

Integrated Deep and Shallow Networks for Salient Object Detection

Paper Details

Authors:
Jing Zhang, Bo Li, Yuchao Dai, Fatih Porikli, Mingyi He
Submitted On:
10 September 2017 - 7:48pm

Document Files

icip-ppt.pdf


[1] Jing Zhang, Bo Li, Yuchao Dai, Fatih Porikli, Mingyi He, "Integrated Deep and Shallow Networks for Salient Object Detection", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1889. Accessed: Apr. 25, 2019.
