Image/Video Storage, Retrieval

Super-Resolution in Compressive Coded Imaging Systems via l2 − l1 − l2 Minimization Under a Deep Learning Approach


In most imaging applications, spatial resolution is a central concern, but increasing the resolution of the sensor substantially increases the implementation cost. A lower-cost option is the use of spatial light modulators, which allow the reconstructed image resolution to be improved by including a high-resolution codification.
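To make the codification idea concrete, here is a minimal NumPy sketch (not the authors' system; the sizes and the random binary code are illustrative) of a sensing model in which a high-resolution coded aperture modulates the scene before a low-resolution sensor integrates it:

```python
import numpy as np

rng = np.random.default_rng(0)

N, p = 8, 2                       # high-resolution side length, decimation factor
x = rng.random((N, N))            # high-resolution scene (assumed known here)
C = rng.integers(0, 2, (N, N))    # binary high-resolution coded aperture

# The low-resolution sensor integrates each p x p block of the coded scene:
coded = C * x
y = coded.reshape(N // p, p, N // p, p).sum(axis=(1, 3))

print(y.shape)  # (4, 4): the measurements have lower resolution than the scene
```

The high-resolution structure of `C` is what lets reconstruction recover detail beyond the sensor's native resolution.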

Paper Details

Authors:
Hans Garcia, Miguel Marquez, Henry Arguello
Submitted On:
31 March 2020 - 4:47am

Document Files

Presentacion_DCC.pdf


[1] Hans Garcia, Miguel Marquez, Henry Arguello, "Super-Resolution in Compressive Coded Imaging Systems via l2 − l1 − l2 Minimization Under a Deep Learning Approach", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5075. Accessed: Apr. 07, 2020.

Tensor Dictionary Learning with representation quantization for Remote Sensing Observation Compression


Multidimensional data structures, known as tensors, are widely used in many applications such as Earth observation from remote-sensing image sequences. However, the increasing spatial, spectral, and temporal resolution of the acquired images introduces considerable challenges in data storage and transfer, making an efficient compression system for high-dimensional data critical. In this paper, we propose a tensor-based compression algorithm that retains the structure of the data and achieves a high compression ratio.
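As a much simplified illustration of structure-preserving tensor compression, the sketch below implements a truncated higher-order SVD in NumPy: each mode is factored, and the data is stored as a small core tensor plus per-mode factor matrices. This is a generic stand-in, not the paper's dictionary-learning algorithm, and the quantization of the representation is omitted.

```python
import numpy as np

def unfold(T, mode):
    """Matricize T along one mode (mode-n unfolding)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd_compress(T, ranks):
    """Truncated HOSVD: per-mode factor matrices + a small core tensor."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)
    return core, factors

def hosvd_reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_product(T, U, m)
    return T

# A low-rank "image sequence" compresses with negligible error:
rng = np.random.default_rng(0)
T = np.einsum('ir,jr,kr->ijk', *(rng.standard_normal((s, 2)) for s in (30, 20, 10)))
core, factors = hosvd_compress(T, ranks=(2, 2, 2))
stored = core.size + sum(U.size for U in factors)
err = np.linalg.norm(hosvd_reconstruct(core, factors) - T) / np.linalg.norm(T)
print(stored, T.size, err)   # far fewer stored values, tiny relative error
```

Because compression happens per mode, the spatial/spectral/temporal structure of the tensor is preserved rather than flattened away.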

Paper Details

Authors:
Anastasia Aidini, Grigorios Tsagkatakis, and Panagiotis Tsakalides
Submitted On:
30 March 2020 - 8:25am

Document Files

dcc_aidini.pdf


[1] Anastasia Aidini, Grigorios Tsagkatakis, and Panagiotis Tsakalides, "Tensor Dictionary Learning with representation quantization for Remote Sensing Observation Compression", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5057. Accessed: Apr. 07, 2020.

Spatial-Temporal Fusion Convolutional Neural Network for Compressed Video enhancement in HEVC

Paper Details

Authors:
Xiaoyu Xu, Jian Qian, Li Yu, Hongkui Wang, Hao Tao, Shengju Yu
Submitted On:
29 March 2020 - 9:20am

Document Files

DCC-2020.ppt


[1] Xiaoyu Xu, Jian Qian, Li Yu, Hongkui Wang, Hao Tao, Shengju Yu, "Spatial-Temporal Fusion Convolutional Neural Network for Compressed Video enhancement in HEVC", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5048. Accessed: Apr. 07, 2020.

Compressive Classification via Deep Learning using Single-pixel Measurements


A single-pixel camera (SPC) captures encoded projections of the scene on a single detector, such that the number of compressive projections is lower than the size of the image. Traditionally, classification is not performed in the compressive domain, because the underlying image must be recovered before classification. Building on the success of deep learning (DL) in classification, this paper proposes to classify images directly from the compressive measurements of an SPC.
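The measurement-domain idea can be sketched in a few lines of NumPy. The two-class toy data, the binary patterns, and the least-squares linear classifier below are illustrative stand-ins for the paper's setup and its deep network; the point is only that labels can be predicted from the m compressed scalars without ever reconstructing the n-pixel image.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 32 * 32, 256                 # pixels per image, SPC measurements (m < n)
Phi = rng.integers(0, 2, (m, n)).astype(float)   # binary single-pixel patterns

def measure(img):
    """SPC output: one scalar per coded projection, never a full image."""
    return Phi @ img.ravel()

def sample(cls):
    """Toy scene: class 0 is bright on the left, class 1 on the right."""
    img = 0.1 * rng.random((32, 32))
    img[:, :16] += (cls == 0)
    img[:, 16:] += (cls == 1)
    return img

labels = np.arange(200) % 2
X = np.stack([measure(sample(c)) for c in labels])   # train in the compressed domain
w, *_ = np.linalg.lstsq(X, 2.0 * labels - 1.0, rcond=None)

acc = np.mean([(measure(sample(c)) @ w > 0) == c for c in np.arange(100) % 2])
print(acc)   # classification succeeds without image reconstruction
```

Replacing the linear classifier with a deep network, as the paper does, keeps the same interface: the network's input is the m-dimensional measurement vector.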

Paper Details

Authors:
Jorge Bacca, Nelson Diaz, Henry Arguello
Submitted On:
25 March 2020 - 3:14pm

Document Files

Poster_main.pdf


[1] Jorge Bacca, Nelson Diaz, Henry Arguello, "Compressive Classification via Deep Learning using Single-pixel Measurements", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5028. Accessed: Apr. 07, 2020.

GENERATIVE MODELS FOR LOW-RANK VIDEO REPRESENTATION AND RECONSTRUCTION FROM COMPRESSIVE MEASUREMENTS


Generative models have recently received considerable attention in the field of compressive sensing. If an image belongs to the range of a pretrained generative network, we can recover it from its compressive measurements by estimating the underlying compact latent code. In practice, every pretrained generator has a certain range beyond which it fails to generate reliably. Recent research shows that convolutional generative structures are biased toward generating natural images.
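The latent-code estimation step can be illustrated with a toy NumPy sketch. A random linear map stands in for the pretrained generator G (a real network would be nonlinear), and gradient descent on ||A·G(z) − y||² recovers the compact code from m < n measurements; all sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 100, 5, 20          # signal dim, latent dim, measurements (k < m < n)

W = rng.standard_normal((n, k))                # stand-in linear "generator": G(z) = W z
A = rng.standard_normal((m, n)) / np.sqrt(m)   # compressive sensing matrix

z_true = rng.standard_normal(k)
y = A @ (W @ z_true)                           # compressive measurements of G(z_true)

# Estimate the latent code by gradient descent on ||A G(z) - y||^2.
M = A @ W
lr = 1.0 / np.linalg.norm(M, 2) ** 2           # step size from the largest singular value
z = np.zeros(k)
for _ in range(2000):
    z -= lr * (M.T @ (M @ z - y))

print(np.linalg.norm(z - z_true))              # near zero: code (and image) recovered
```

With a nonlinear generator the same loop applies, but the gradient must flow through the network, and recovery is only reliable while the image stays within the generator's range.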

Paper Details

Authors:
M. Salman Asif
Submitted On:
4 December 2019 - 7:13am

Document Files

Poster_GENERATIVE MODELS_Hyder


[1] M. Salman Asif, "GENERATIVE MODELS FOR LOW-RANK VIDEO REPRESENTATION AND RECONSTRUCTION FROM COMPRESSIVE MEASUREMENTS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4966. Accessed: Apr. 07, 2020.

Learning Product Codebooks using Vector-Quantized Autoencoders for Image Retrieval

Paper Details

Authors:
Markus Flierl
Submitted On:
12 November 2019 - 8:47am

Document Files

SIP2019.pdf


[1] Markus Flierl, "Learning Product Codebooks using Vector-Quantized Autoencoders for Image Retrieval", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4950. Accessed: Apr. 07, 2020.

3D Shape Retrieval Through Multilayer RBF Neural Network


3D object retrieval requires extra effort mainly because most computer vision features are designed for 2D images and are rarely applicable to 3D models. In this paper, we propose to retrieve 3D models based on the implicit parameters learned from the radial basis functions that represent the 3D objects. The radial basis functions are learned with an RBF neural network. Since deep neural networks can represent data that is not linearly separable, we apply a multilayer neural network to train the radial basis functions.

Paper Details

Authors:
Yahong Han
Submitted On:
22 September 2019 - 12:37am

Document Files

icip3549.pdf


[1] Yahong Han, "3D Shape Retrieval Through Multilayer RBF Neural Network", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4813. Accessed: Apr. 07, 2020.

Dual reverse attention networks for person re-identification


In this paper, we enhance the feature representation ability of person re-identification (Re-ID) by learning invariance to hard examples. Unlike previous work that mines or generates hard examples at the image level, we propose a dual reverse attention network (DRANet) to generate hard examples in the convolutional feature space. Specifically, we use a classification branch with an attention mechanism to model 'what' (along the channel dimension) and 'where' (along the spatial dimensions) is informative in the feature maps.
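The 'what'/'where' gating can be sketched in NumPy. The pooled-and-squashed gates below are hand-built stand-ins for DRANet's learned attention branch; the reverse maps (1 − attention) show how suppressing the informative responses yields hard-example features in the convolutional feature space.

```python
import numpy as np

rng = np.random.default_rng(4)
F = rng.random((8, 6, 6))             # feature maps: (channels, height, width)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# 'What': per-channel gate from global average pooling over space.
a_ch = sigmoid(F.mean(axis=(1, 2)) - 0.5)[:, None, None]
# 'Where': per-location gate from average pooling over channels.
a_sp = sigmoid(F.mean(axis=0) - 0.5)[None, :, :]

attended = F * a_ch * a_sp            # responses the classifier deems informative
hard = F * (1 - a_ch) * (1 - a_sp)    # reverse attention: informative parts are
                                      # suppressed, producing hard-example features
print(attended.shape, hard.shape)     # both keep the (8, 6, 6) feature shape
```

Training the Re-ID features to remain discriminative on `hard`-style maps is what yields the invariance the paper targets.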

Paper Details

Authors:
Shuangwei Liu, Lin Qi, Yunzhou Zhang, Weidong Shi
Submitted On:
20 September 2019 - 11:04am

Document Files

Dual Reverse Attention Networks for Person Re-id.pdf


[1] Shuangwei Liu, Lin Qi, Yunzhou Zhang, Weidong Shi, "Dual reverse attention networks for person re-identification", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4784. Accessed: Apr. 07, 2020.

SOCIAL RELATION RECOGNITION IN EGOCENTRIC PHOTOSTREAMS


This paper proposes an approach to automatically categorize the social interactions of a user wearing a photo camera (2 fpm), relying solely on what the camera sees. The problem is challenging due to the overwhelming complexity of social life and the extreme intra-class variability of social interactions captured under unconstrained conditions. We adopt the formalization proposed in Bugental's social theory, which groups human relations into five social domains with related categories.

Paper Details

Authors:
Petia Radeva, Mariella Dimiccoli
Submitted On:
24 September 2019 - 4:30am

Document Files

ICIP 2019 Presentation.pdf


[1] Petia Radeva, Mariella Dimiccoli, "SOCIAL RELATION RECOGNITION IN EGOCENTRIC PHOTOSTREAMS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4781. Accessed: Apr. 07, 2020.

TAKING ME TO THE CORRECT PLACE: VISION-BASED LOCALIZATION FOR AUTONOMOUS VEHICLES


Vehicle localization, which estimates the position and orientation of the vehicle, is a critical component of autonomous driving. To achieve quick and accurate localization, we develop a system that can dynamically switch the features used for localization. Specifically, we develop a convolutional-neural-network-based feature aimed at accurate matching, which exhibits a high degree of rotation invariance and helps overcome the relatively large errors that occur when vehicles turn at corners.

Paper Details

Authors:
Guoyu Lu, Xue-iuan Wong
Submitted On:
20 September 2019 - 5:45am

Document Files

ICIP_poster_3818.pdf


[1] Guoyu Lu, Xue-iuan Wong, "TAKING ME TO THE CORRECT PLACE: VISION-BASED LOCALIZATION FOR AUTONOMOUS VEHICLES", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4773. Accessed: Apr. 07, 2020.
