Applications in Data Fusion (MLR-FUSI)

AUDIO-VISUAL FUSION AND CONDITIONING WITH NEURAL NETWORKS FOR EVENT RECOGNITION


Video event recognition based on audio and visual modalities is an open research problem. The mainstream literature on video event recognition focuses on the visual modality and does not take into account the relevant information present in the audio modality. We propose to study several fusion architectures for audio-visual video event recognition. We first build classical fusion architectures using concatenation, addition, or Multimodal Compact Bilinear pooling (MCB).
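
A minimal NumPy sketch of the three classical fusion operators named above, purely for illustration: the feature dimensions, sketch size, and random hashes are made-up assumptions, and this is not the authors' network. It fuses one audio and one visual feature vector by concatenation, by addition, and by an MCB-style count-sketch/FFT approximation of their outer product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy audio/visual embeddings (dimensions are illustrative, not from the paper).
audio = rng.standard_normal(128)
visual = rng.standard_normal(128)

# 1) Concatenation fusion: stack the two feature vectors.
fused_concat = np.concatenate([audio, visual])            # shape (256,)

# 2) Additive fusion: element-wise sum (requires equal dimensions).
fused_add = audio + visual                                 # shape (128,)

# 3) MCB-style fusion: approximate the outer product of the two vectors
#    with count sketches combined by circular convolution (via FFT).
def count_sketch(x, h, s, d):
    """Project x to d dimensions using hash indices h and random signs s."""
    out = np.zeros(d)
    np.add.at(out, h, s * x)
    return out

d = 512                                                    # sketch dimension (made up)
h_a, s_a = rng.integers(0, d, audio.size), rng.choice([-1.0, 1.0], audio.size)
h_v, s_v = rng.integers(0, d, visual.size), rng.choice([-1.0, 1.0], visual.size)

sk_a = count_sketch(audio, h_a, s_a, d)
sk_v = count_sketch(visual, h_v, s_v, d)
fused_mcb = np.fft.irfft(np.fft.rfft(sk_a) * np.fft.rfft(sk_v), n=d)  # shape (512,)
```

In a full model each fused vector would feed a classifier head; the appeal of MCB is that it keeps much of the expressiveness of a bilinear (outer-product) interaction while producing a d-dimensional output instead of a 128 x 128 one.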

Paper Details

Authors: Jean Rouat, Stéphane Dupont
Submitted On: 14 October 2019 - 8:52pm

Document Files

MLSP_presentation.pdf

Cite: Jean Rouat, Stéphane Dupont, "AUDIO-VISUAL FUSION AND CONDITIONING WITH NEURAL NETWORKS FOR EVENT RECOGNITION", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4873.

NMF-based Comprehensive Latent Factor Learning with Multiview Data


Multiview representations reveal the latent information of data from two perspectives: consistency and complementarity. Unlike most multiview learning approaches, which focus on only one of these perspectives, in this paper we propose a novel unsupervised multiview learning algorithm, called comprehensive latent factor learning (CLFL), which jointly exploits both consistent and complementary information among multiple views. CLFL adopts a non-negative matrix factorization (NMF) based formulation to learn the latent factors.
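
The CLFL formulation itself is not given here; as a loose illustration of the kind of NMF-based multiview factorization the abstract describes, the sketch below jointly factorizes two views of the same samples into a shared ("consistent") latent factor plus view-specific ("complementary") factors using standard multiplicative updates. The dimensions, the objective, and the update scheme are assumptions made for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy views of the same 100 samples (feature dimensions are made up).
n, d1, d2, k_c, k_s = 100, 40, 60, 5, 3
X = [np.abs(rng.standard_normal((d1, n))), np.abs(rng.standard_normal((d2, n)))]

# Per-view bases: a "consistent" part that shares H_c across views and a
# "complementary" part with its own coefficients H_s[v] per view.
W_c = [np.abs(rng.standard_normal((d, k_c))) for d in (d1, d2)]
W_s = [np.abs(rng.standard_normal((d, k_s))) for d in (d1, d2)]
H_c = np.abs(rng.standard_normal((k_c, n)))                # shared latent factor
H_s = [np.abs(rng.standard_normal((k_s, n))) for _ in X]   # view-specific factors

eps = 1e-9
for _ in range(200):
    for v in range(2):
        R = W_c[v] @ H_c + W_s[v] @ H_s[v]                 # current reconstruction
        W_c[v] *= (X[v] @ H_c.T) / (R @ H_c.T + eps)
        R = W_c[v] @ H_c + W_s[v] @ H_s[v]
        W_s[v] *= (X[v] @ H_s[v].T) / (R @ H_s[v].T + eps)
        R = W_c[v] @ H_c + W_s[v] @ H_s[v]
        H_s[v] *= (W_s[v].T @ X[v]) / (W_s[v].T @ R + eps)
    # The shared (consistent) factor pools its update over both views.
    num = sum(W_c[v].T @ X[v] for v in range(2))
    den = sum(W_c[v].T @ (W_c[v] @ H_c + W_s[v] @ H_s[v]) for v in range(2))
    H_c *= num / (den + eps)
```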

Paper Details

Authors: Hua Zheng, Zhixuan Liang, Feng Tian, Zhong Ming
Submitted On: 11 September 2019 - 2:26am

Document Files

ICIPNMF-based CLFL

Cite: Hua Zheng, Zhixuan Liang, Feng Tian, Zhong Ming, "NMF-based Comprehensive Latent Factor Learning with Multiview Data", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4588.

GRAPH-BASED EARLY-FUSION FOR FLOOD DETECTION


Flooding is one of the most harmful natural disasters, as it endangers both buildings and human lives. It is therefore essential to monitor these disasters in order to define prevention strategies and help authorities with damage control. With the widespread use of portable devices (e.g., smartphones), flood events are increasingly documented and shared on social media. However, using these data in monitoring systems is not straightforward and depends on the creation of effective recognition strategies.

Paper Details

Authors: Rafael de O. Werneck, Icaro C. Dourado, Samuel G. Fadel, Salvatore Tabbone, Ricardo da S. Torres
Submitted On: 4 October 2018 - 10:03am

Document Files

poster_icip_landscape.pdf

Cite: Rafael de O. Werneck, Icaro C. Dourado, Samuel G. Fadel, Salvatore Tabbone, Ricardo da S. Torres, "GRAPH-BASED EARLY-FUSION FOR FLOOD DETECTION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3413.

A MULTI-PERSPECTIVE APPROACH TO ANOMALY DETECTION FOR SELF-AWARE EMBODIED AGENTS


This paper focuses on multi-sensor anomaly detection for moving cognitive agents using both external and private first-person visual observations. Both observation types are used to characterize agents’ motion in a given environment. The proposed method generates locally uniform motion models by dividing a Gaussian process that approximates agents’ displacements on the scene and provides a Shared Level (SL) self-awareness based on Environment Centered (EC) models.
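
As a much-simplified illustration of the underlying idea of modelling "normal" displacements with a Gaussian process and flagging deviations (a single GP on made-up data with an arbitrary threshold, not the authors' Shared Level / Environment Centered formulation), one could write:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# "Normal" behaviour: displacement magnitude observed at (x, y) scene positions.
train_pos = rng.uniform(0, 10, size=(300, 2))
train_disp = (0.5 + 0.1 * np.sin(train_pos[:, 0])
              + 0.02 * rng.standard_normal(300))

# Fit a GP that models the expected displacement as a function of position.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(),
                              normalize_y=True)
gp.fit(train_pos, train_disp)

# New observations: mostly normal motion, with two abnormal jumps injected.
test_pos = rng.uniform(0, 10, size=(50, 2))
test_disp = 0.5 + 0.1 * np.sin(test_pos[:, 0])
test_disp[[10, 30]] += 2.0                                 # injected anomalies

# Flag observations that fall far outside the model's predictive band.
mean, std = gp.predict(test_pos, return_std=True)
anomalous = np.abs(test_disp - mean) > 4.0 * std
print("flagged indices:", np.flatnonzero(anomalous))       # the injected jumps
```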

Paper Details

Authors: Mohamad Baydoun, Mahdyar Ravanbakhsh, Damian Campo, Pablo Marin, David Martin, Lucio Marcenaro, Andrea Cavallaro, Carlo S. Regazzoni
Submitted On: 18 April 2018 - 10:40am

Document Files

SS-L2.5 A MULTI-PERSPECTIVE APPROACH TO ANOMALY DETECTION FOR SELF-AWARE EMBODIED AGENTS.pdf

Cite: Mohamad Baydoun, Mahdyar Ravanbakhsh, Damian Campo, Pablo Marin, David Martin, Lucio Marcenaro, Andrea Cavallaro, Carlo S. Regazzoni, "A MULTI-PERSPECTIVE APPROACH TO ANOMALY DETECTION FOR SELF-AWARE EMBODIED AGENTS", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2964.

Discriminant Correlation Analysis for Feature Level Fusion with Application to Multimodal Biometrics


In this paper, we present Discriminant Correlation Analysis (DCA), a feature-level fusion technique that incorporates class associations into the correlation analysis of the feature sets. DCA performs effective feature fusion by maximizing the pairwise correlations across the two feature sets while, at the same time, eliminating between-class correlations and restricting the correlations to be within classes.
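
DCA itself is a supervised projection that uses the class labels; as a rough, unsupervised stand-in for its correlation-maximizing step, the sketch below uses plain canonical correlation analysis (scikit-learn's CCA) and then fuses the transformed feature sets by concatenation or summation. The data, the dimensions, and the substitution of CCA for DCA are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Toy feature sets for the same 200 samples, e.g. face features (X) and
# fingerprint features (Y) of the same subjects; dimensions are made up.
latent = rng.standard_normal((200, 5))                     # shared structure
X = latent @ rng.standard_normal((5, 64)) + 0.5 * rng.standard_normal((200, 64))
Y = latent @ rng.standard_normal((5, 32)) + 0.5 * rng.standard_normal((200, 32))

# Project both sets so that corresponding components are maximally correlated.
cca = CCA(n_components=5)
X_c, Y_c = cca.fit_transform(X, Y)

# Feature-level fusion of the transformed sets, by concatenation or summation
# (feature-level fusion schemes typically use one of these two combinations).
fused_concat = np.hstack([X_c, Y_c])                       # shape (200, 10)
fused_sum = X_c + Y_c                                      # shape (200, 5)
```

Unlike this unsupervised stand-in, DCA incorporates the class labels so that between-class correlations are removed, which is what makes the fused features discriminative.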

Cite: Mohammad Haghighat, Mohamed Abdel-Mottaleb, Wadee Alhalabi, "Discriminant Correlation Analysis for Feature Level Fusion with Application to Multimodal Biometrics", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/828.