Image/Video Storage, Retrieval

GENERATIVE MODELS FOR LOW-RANK VIDEO REPRESENTATION AND RECONSTRUCTION FROM COMPRESSIVE MEASUREMENTS


Generative models have recently received considerable attention in the field of compressive sensing. If an image belongs to the range of a pretrained generative network, we can recover it from its compressive measurements by estimating the underlying compact latent code. In practice, every pretrained generator has a limited range beyond which it fails to generate reliably. Recent research shows that convolutional generative structures are biased toward generating natural images.
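The recovery idea in this abstract can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's actual model: a toy one-layer generator G(z) = tanh(Wz), a random Gaussian measurement matrix A, and hand-picked sizes and step size. Given measurements y = A·x of an in-range image, the latent code is estimated by gradient descent on ||A·G(z) − y||².

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: k-dim latent code, n-pixel "image", m << n measurements.
k, n, m = 5, 100, 30
W = rng.standard_normal((n, k)) / np.sqrt(k)   # fixed toy generator weights
A = rng.standard_normal((m, n)) / np.sqrt(m)   # compressive measurement matrix

def G(z):
    # Stand-in differentiable generator: a single tanh layer.
    return np.tanh(W @ z)

z_true = rng.standard_normal(k)
y = A @ G(z_true)              # compressive measurements of an in-range image

# Recover the latent code by gradient descent on f(z) = ||A G(z) - y||^2.
z = np.zeros(k)
for _ in range(5000):
    x = G(z)
    r = A @ x - y                                  # measurement residual
    grad = 2 * W.T @ ((1 - x**2) * (A.T @ r))      # chain rule through tanh
    z -= 0.002 * grad

rel_err = np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true))
print(f"relative reconstruction error: {rel_err:.3f}")
```

With a real pretrained network, the hand-derived gradient would be replaced by automatic differentiation, but the estimation loop has the same shape.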

Paper Details

Authors:
M. Salman Asif
Submitted On:
4 December 2019 - 7:13am

Document Files

Poster_GENERATIVE MODELS_Hyder


[1] M. Salman Asif, "GENERATIVE MODELS FOR LOW-RANK VIDEO REPRESENTATION AND RECONSTRUCTION FROM COMPRESSIVE MEASUREMENTS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4966. Accessed: Dec. 12, 2019.

Learning Product Codebooks using Vector-Quantized Autoencoders for Image Retrieval

Paper Details

Authors:
Markus Flierl
Submitted On:
12 November 2019 - 8:47am

Document Files

SIP2019.pdf


[1] Markus Flierl, "Learning Product Codebooks using Vector-Quantized Autoencoders for Image Retrieval", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4950. Accessed: Dec. 12, 2019.

3D Shape Retrieval Through Multilayer RBF Neural Network


3D object retrieval requires extra effort, mainly because most computer vision features are designed for 2D images and are rarely applicable to 3D models. In this paper, we propose to retrieve 3D models based on the implicit parameters learned from the radial basis functions that represent the 3D objects. The radial basis functions are learned with an RBF neural network. Since deep neural networks can represent data that are not linearly separable, we train the radial basis functions with a multilayer neural network.
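As a rough illustration of the representation idea, one can approximate a shape's signed distance function with Gaussian radial basis functions and use the fitted weights as a retrieval descriptor. The shapes, RBF centers, and the least-squares fit below are hypothetical stand-ins for the paper's learned multilayer network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared Gaussian RBF centers so descriptors are comparable across shapes.
centers = rng.uniform(-1, 1, size=(64, 3))
sigma = 0.5

def rbf_features(pts):
    # Gaussian RBF activation of each point against every center.
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def shape_descriptor(pts, sdf_vals):
    # Least-squares RBF weights approximating the implicit surface,
    # normalized so shapes can be compared by cosine similarity.
    Phi = rbf_features(pts)
    w, *_ = np.linalg.lstsq(Phi, sdf_vals, rcond=None)
    return w / (np.linalg.norm(w) + 1e-12)

# Sample signed distances of three simple shapes at shared points.
pts = rng.uniform(-1, 1, size=(500, 3))
sphere_a = shape_descriptor(pts, np.linalg.norm(pts, axis=1) - 0.6)
sphere_b = shape_descriptor(pts, np.linalg.norm(pts, axis=1) - 0.7)
plane    = shape_descriptor(pts, pts[:, 2])          # half-space boundary

cos = lambda u, v: float(u @ v)
print("sphere vs sphere:", cos(sphere_a, sphere_b))
print("sphere vs plane: ", cos(sphere_a, plane))
```

Similar shapes yield nearby weight vectors, which is what makes the implicit parameters usable as retrieval keys.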

Paper Details

Authors:
Yahong Han
Submitted On:
22 September 2019 - 12:37am

Document Files

icip3549.pdf


[1] Yahong Han, "3D Shape Retrieval Through Multilayer RBF Neural Network", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4813. Accessed: Dec. 12, 2019.

Dual reverse attention networks for person re-identification


In this paper, we enhance the feature representation ability of person re-identification (Re-ID) by learning invariance to hard examples. Unlike previous work that mines and generates hard examples at the image level, we propose a dual reverse attention network (DRANet) that generates hard examples in the convolutional feature space. Specifically, we use a classification branch with an attention mechanism to model 'what' (along the channel dimension) and 'where' (along the spatial dimensions) is informative in the feature maps.
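The 'what'/'where' attention and its reversal can be sketched in a few lines; the pooling and sigmoid choices below are simplified placeholders for DRANet's actual attention branch:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # 'what': score each channel by its global average response.
    return sigmoid(feat.mean(axis=(1, 2)))          # shape (C,)

def spatial_attention(feat):
    # 'where': score each position by its cross-channel average.
    return sigmoid(feat.mean(axis=0))               # shape (H, W)

feat = rng.standard_normal((8, 4, 4))               # C x H x W feature map

ca = channel_attention(feat)
sa = spatial_attention(feat)
attended = feat * ca[:, None, None] * sa[None, :, :]

# Reverse attention: emphasize what the attention branch suppressed,
# producing a "hard example" directly in feature space.
hard = feat * (1 - ca)[:, None, None] * (1 - sa)[None, :, :]

print(attended.shape, hard.shape)
```

Training the network to remain discriminative on such reversed feature maps is what encourages invariance to hard examples.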

Paper Details

Authors:
Shuangwei Liu, Lin Qi, Yunzhou Zhang, Weidong Shi
Submitted On:
20 September 2019 - 11:04am

Document Files

Dual Reverse Attention Networks for Person Re-id.pdf


[1] Shuangwei Liu, Lin Qi, Yunzhou Zhang, Weidong Shi, "Dual reverse attention networks for person re-identification", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4784. Accessed: Dec. 12, 2019.

SOCIAL RELATION RECOGNITION IN EGOCENTRIC PHOTOSTREAMS


This paper proposes an approach to automatically categorize the social interactions of a user wearing a photo-camera (2 fpm), relying solely on what the camera sees. The problem is challenging due to the overwhelming complexity of social life and the extreme intra-class variability of social interactions captured under unconstrained conditions. We adopt the formalization proposed in Bugental's social theory, which groups human relations into five social domains with related categories.

Paper Details

Authors:
Petia Radeva, Mariella Dimiccoli
Submitted On:
24 September 2019 - 4:30am

Document Files

ICIP 2019 Presentation.pdf


[1] Petia Radeva, Mariella Dimiccoli, "SOCIAL RELATION RECOGNITION IN EGOCENTRIC PHOTOSTREAMS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4781. Accessed: Dec. 12, 2019.

TAKING ME TO THE CORRECT PLACE: VISION-BASED LOCALIZATION FOR AUTONOMOUS VEHICLES


Vehicle localization, which estimates the position and orientation of a vehicle, is a critical component of autonomous driving. To achieve quick and accurate localization, we develop a system that can dynamically switch the features used for localization. Specifically, we develop a convolutional-neural-network feature targeted at accurate matching; its high rotation invariance helps overcome the relatively large errors that arise when vehicles turn at corners.
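The dynamic-switching idea can be illustrated with a hypothetical fallback rule: match a cheap descriptor first and switch to the more rotation-invariant CNN feature when match confidence drops, as it might while the vehicle turns. The descriptors, ratio test, and threshold below are illustrative assumptions, not the paper's actual switching criterion:

```python
import numpy as np

rng = np.random.default_rng(3)

def match_confidence(desc_query, desc_map):
    # Fraction of query descriptors whose nearest map descriptor passes
    # Lowe's ratio test against the second-nearest one.
    good = 0
    for d in desc_query:
        dists = np.linalg.norm(desc_map - d, axis=1)
        nearest, second = np.partition(dists, 1)[:2]
        good += nearest < 0.8 * second
    return good / len(desc_query)

def localize(desc_fast, desc_accurate, desc_map, threshold=0.5):
    # Try the cheap feature first; fall back to the accurate one
    # when too few matches are confident.
    conf = match_confidence(desc_fast, desc_map)
    if conf >= threshold:
        return "fast", conf
    return "accurate", match_confidence(desc_accurate, desc_map)

desc_map = rng.standard_normal((50, 16))             # map descriptors
clean = desc_map[:10] + 0.01 * rng.standard_normal((10, 16))
noisy = rng.standard_normal((10, 16))                # e.g., during a sharp turn

mode_a, _ = localize(clean, clean, desc_map)
mode_b, _ = localize(noisy, clean, desc_map)
print(mode_a, mode_b)
```

The first query set matches the map well, so the cheap path suffices; the second does not, triggering the fallback.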

Paper Details

Authors:
Guoyu Lu, Xue-iuan Wong
Submitted On:
20 September 2019 - 5:45am

Document Files

ICIP_poster_3818.pdf


[1] Guoyu Lu, Xue-iuan Wong, "TAKING ME TO THE CORRECT PLACE: VISION-BASED LOCALIZATION FOR AUTONOMOUS VEHICLES", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4773. Accessed: Dec. 12, 2019.

Efficient Codebook and Factorization for Second-Order Representation Learning

Paper Details

Submitted On:
19 September 2019 - 1:41pm

Document Files

ICIP 2019 - JCF(3).pdf


[1] "Efficient Codebook and Factorization for Second-Order Representation Learning", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4746. Accessed: Dec. 12, 2019.

UNSUPERVISED DOMAIN-ADAPTIVE PERSON RE-IDENTIFICATION BASED ON ATTRIBUTES


Pedestrian attributes, e.g., hair length, clothes type, and color, locally describe the semantic appearance of a person. Training person re-identification (ReID) algorithms under the supervision of such attributes has proven effective for extracting local features. Unlike person identity, attributes are consistent across different domains (or datasets). However, most ReID datasets lack attribute annotations. On the other hand, several datasets for pedestrian attribute recognition are labeled with sufficient attributes.

Paper Details

Authors:
Xiangping Zhu, Pietro Morerio and Vittorio Murino
Submitted On:
16 September 2019 - 10:17am

Document Files

#2281_poster


[1] Xiangping Zhu, Pietro Morerio and Vittorio Murino, "UNSUPERVISED DOMAIN-ADAPTIVE PERSON RE-IDENTIFICATION BASED ON ATTRIBUTES", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4640. Accessed: Dec. 12, 2019.

Augmented Visual-semantic Embeddings for Image and Sentence Matching

Paper Details

Submitted On:
16 September 2019 - 4:23am

Document Files

icip.pdf


[1] "Augmented Visual-semantic Embeddings for Image and Sentence Matching", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4635. Accessed: Dec. 12, 2019.

Loss Switching Fusion with Similarity Search for Video Classification


From video streaming to security and surveillance applications, video data play an important role in our daily lives today. However, managing large amounts of video data and retrieving the most useful information for the user remain challenging tasks. In this paper, we propose a novel video classification system that benefits the scene-understanding task. We define our classification problem as classifying background and foreground motions in outdoor scenes using the same feature representation.

Paper Details

Authors:
Lei Wang, Du Q. Huynh, Moussa Reda Mansour
Submitted On:
16 September 2019 - 1:04am

Document Files

eposter icip2019 leiwang


[1] Lei Wang, Du Q. Huynh, Moussa Reda Mansour, "Loss Switching Fusion with Similarity Search for Video Classification", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4630. Accessed: Dec. 12, 2019.
