
ICASSP 2020

ICASSP is the world’s largest and most comprehensive technical conference focused on signal processing and its applications. ICASSP 2020 will feature world-class presentations by internationally renowned speakers and cutting-edge session topics, and will provide a fantastic opportunity to network with like-minded professionals from around the world.

DEEP NEURAL NETWORKS BASED AUTOMATIC SPEECH RECOGNITION FOR FOUR ETHIOPIAN LANGUAGES


In this work, we present speech recognition systems for four Ethiopian languages: Amharic, Tigrigna, Oromo and Wolaytta. We have used comparable training corpora of about 20 to 29 hours of speech and evaluation sets of about 1 hour of speech for each of the languages. For Amharic and Tigrigna, lexical and language models of different vocabulary sizes have been developed. For Oromo and Wolaytta, the training lexicons have been used for decoding.

Paper Details

Authors:
Solomon Teferra Abate, Martha Yifiru Tachbelie and Tanja Schultz
Submitted On:
20 May 2020 - 9:24am

Document Files

MarthaSolomonTanja.pdf


Cite:
Solomon Teferra Abate, Martha Yifiru Tachbelie and Tanja Schultz, "DEEP NEURAL NETWORKS BASED AUTOMATIC SPEECH RECOGNITION FOR FOUR ETHIOPIAN LANGUAGES," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5395

Effects of Spectral Tilt on Listeners' Preferences And Intelligibility

Paper Details

Authors:
Olympia Simantiraki, Martin Cooke, Yannis Pantazis
Submitted On:
18 May 2020 - 12:27pm

Document Files

ICASSP2020_simantiraki.pdf


Cite:
Olympia Simantiraki, Martin Cooke, Yannis Pantazis, "Effects of Spectral Tilt on Listeners' Preferences And Intelligibility," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5394

BILATERAL RECURRENT NETWORK FOR SINGLE IMAGE DERAINING


Single image deraining has been widely studied in recent years. Motivated by residual learning, most deep learning based deraining approaches devote their attention to extracting rain streaks, usually yielding visual artifacts in the final derained images. To address this issue, in this paper we propose a bilateral recurrent network (BRN) to simultaneously exploit the rain streak layer and the background image layer. Generally, we employ dual residual networks (ResNets) that are recursively unfolded to sequentially extract rain streaks and predict the clean background image.
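
A minimal PyTorch sketch of the recurrent dual-network idea described in the abstract is given below; the layer widths, the number of residual blocks and the number of recurrent stages are illustrative assumptions, not the authors' configuration.

import torch
import torch.nn as nn

class SmallResNet(nn.Module):
    # A few residual conv blocks mapping an RGB image to an RGB output.
    def __init__(self, channels=32, blocks=3):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            ) for _ in range(blocks)
        )
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        h = self.head(x)
        for block in self.body:
            h = h + block(h)  # residual connection
        return self.tail(h)

class BilateralRecurrentSketch(nn.Module):
    # Recursively unfolds a rain-streak network and a background network.
    def __init__(self, stages=4):
        super().__init__()
        self.stages = stages
        self.rain_net = SmallResNet()
        self.bg_net = SmallResNet()

    def forward(self, rainy):
        rain = torch.zeros_like(rainy)
        background = rainy
        for _ in range(self.stages):
            rain = self.rain_net(rainy - background + rain)   # update the rain-streak layer
            background = self.bg_net(rainy - rain)            # update the clean background layer
        return rain, background

# usage: rain, clean = BilateralRecurrentSketch()(torch.randn(1, 3, 64, 64))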

Paper Details

Authors:
Pengfei Zhu, Dongwei Ren, Hong Shi
Submitted On:
18 May 2020 - 11:23am

Document Files

BRN_slides.pdf


Cite:
Pengfei Zhu, Dongwei Ren, Hong Shi, "BILATERAL RECURRENT NETWORK FOR SINGLE IMAGE DERAINING," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5393

Hearing Aid Research Data Set for Acoustic Environment Recognition (HEAR-DS)


State-of-the-art hearing aids (HA) are limited in recognizing acoustic environments. Much research effort is spent on improving the listening experience for HA users in every acoustic situation. There is, however, no dedicated public database for training acoustic environment recognition algorithms with a specific focus on HA applications and their requirements. Existing acoustic scene classification databases are inappropriate for HA signal processing.

Paper Details

Authors:
Kamil Adiloğlu, Jörg-Hendrik Bach
Submitted On:
18 May 2020 - 7:01am

Document Files

https://download.hoertech.de/hear-ds-data/HEAR-DS/RawAudioCuts/doc/icassp2020-hear-ds-presentation-huewel.mp4


Cite:
Kamil Adiloğlu, Jörg-Hendrik Bach, "Hearing Aid Research Data Set for Acoustic Environment Recognition (HEAR-DS)," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5392

A COMPARATIVE STUDY OF ESTIMATING ARTICULATORY MOVEMENTS FROM PHONEME SEQUENCES AND ACOUSTIC FEATURES


Unlike phoneme sequences, the movements of the speech articulators (lips, tongue, jaw, velum) and the resultant acoustic signal are known to encode not only the linguistic message but also para-linguistic information. While several works exist on estimating articulatory movements from acoustic signals, little is known about the extent to which articulatory movements can be predicted from linguistic information alone, i.e., the phoneme sequence.

Paper Details

Authors:
Abhayjeet Singh, Aravind Illa, Prasanta Kumar Ghosh
Submitted On:
26 May 2020 - 5:45am

Document Files

Presentation slides


Cite:
Abhayjeet Singh, Aravind Illa, Prasanta Kumar Ghosh, "A COMPARATIVE STUDY OF ESTIMATING ARTICULATORY MOVEMENTS FROM PHONEME SEQUENCES AND ACOUSTIC FEATURES," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5391

SPOKEN LANGUAGE ACQUISITION BASED ON REINFORCEMENT LEARNING AND WORD UNIT SEGMENTATION


The process of spoken language acquisition has been one of the topics attracting the greatest interest from linguists for decades. By utilizing modern machine learning techniques, we simulate this process on computers, which helps us understand the mystery behind the process and opens new possibilities for applying this concept to, but not limited to, intelligent robots. This paper proposes a new framework for simulating spoken language acquisition by combining reinforcement learning and unsupervised learning methods.

Paper Details

Authors:
Shengzhou Gao, Wenxin Hou, Tomohiro Tanaka, Takahiro Shinozaki
Submitted On:
18 May 2020 - 3:34am

Document Files

2020_icassp_slacquisition_edit_hou.pdf


Cite:
Shengzhou Gao, Wenxin Hou, Tomohiro Tanaka, Takahiro Shinozaki, "SPOKEN LANGUAGE ACQUISITION BASED ON REINFORCEMENT LEARNING AND WORD UNIT SEGMENTATION," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5390

Learning to rank music tracks using triplet loss


Most music streaming services rely on automatic recommendation algorithms to exploit their large music catalogs. These algorithms aim at retrieving a ranked list of music tracks based on their similarity with a target music track. In this work, we propose a method for direct recommendation based on the audio content without explicitly tagging the music tracks. To this end, we propose several strategies to perform triplet mining from ranked lists. We train a convolutional neural network to learn the similarity via a triplet loss.
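
As a hedged illustration of the triplet-loss idea mentioned in the abstract, the sketch below trains a small audio-embedding CNN so that a positive track lies closer to the anchor than a negative track by a margin. The encoder architecture, input spectrogram shape and margin value are illustrative assumptions, not the paper's setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEncoder(nn.Module):
    # Tiny CNN mapping a (1, mel, time) spectrogram to an L2-normalised embedding.
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        z = self.fc(self.conv(x).flatten(1))
        return F.normalize(z, dim=1)

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the positive closer to the anchor than the negative, by at least the margin.
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

encoder = AudioEncoder()
a, p, n = (torch.randn(8, 1, 96, 256) for _ in range(3))  # dummy spectrogram triplets
loss = triplet_loss(encoder(a), encoder(p), encoder(n))
loss.backward()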

Paper Details

Authors:
Laure Pretet, Gael Richard, Geoffroy Peeters
Submitted On:
18 May 2020 - 3:27am

Document Files

icassp.pdf


Cite:
Laure Pretet, Gael Richard, Geoffroy Peeters, "Learning to rank music tracks using triplet loss," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5389

Weighted Speech Distortion Losses for Real-time Speech Enhancement


This paper investigates several aspects of training an RNN (recurrent neural network) that impact the objective and subjective quality of enhanced speech for real-time single-channel speech enhancement. Specifically, we focus on an RNN that enhances short-time speech spectra on a single-frame-in, single-frame-out basis, a framework adopted by most classical signal processing methods. We propose two novel mean-squared-error-based learning objectives that enable separate control over the importance of speech distortion versus noise reduction.
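
A minimal sketch of how such a trade-off can be expressed as a weighted mean-squared-error objective is shown below, assuming a magnitude-mask enhancement setup; the masking formulation and the weight alpha are assumptions for illustration and may differ from the paper's exact objectives.

import torch

def weighted_speech_distortion_loss(mask, speech_mag, noise_mag, alpha=0.35):
    # mask, speech_mag, noise_mag: (batch, freq, frames) magnitude spectrograms.
    # Speech distortion: how much the mask attenuates the clean speech.
    speech_distortion = ((mask * speech_mag - speech_mag) ** 2).mean()
    # Residual noise: how much noise remains after masking.
    residual_noise = ((mask * noise_mag) ** 2).mean()
    # alpha trades speech preservation against noise suppression.
    return alpha * speech_distortion + (1.0 - alpha) * residual_noise

# usage with dummy tensors
mask = torch.rand(4, 257, 100)
speech = torch.rand(4, 257, 100)
noise = torch.rand(4, 257, 100)
loss = weighted_speech_distortion_loss(mask, speech, noise)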

Paper Details

Authors:
Ivan Tashev
Submitted On:
17 May 2020 - 7:34pm

Document Files

dns-public-v2short.pptx


Cite:
Ivan Tashev, "Weighted Speech Distortion Losses for Real-time Speech Enhancement," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5388

Clock synchronization over networks using sawtooth models


Clock synchronization and ranging over a wireless network with low communication overhead is a challenging goal with tremendous impact. In this paper, we study the use of time-to-digital converters in wireless sensors, which provides clock synchronization and ranging at negligible communication overhead through a sawtooth signal model for the round-trip times between two nodes.
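
To illustrate the kind of behaviour a sawtooth model captures, the sketch below simulates round-trip-time measurements in which the relative drift between two clocks, sampled through a finite-resolution time-to-digital converter, wraps around and produces a sawtooth pattern. All parameter values are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 500                      # number of two-way message exchanges
period = 0.01                # time between exchanges (s)
skew = 3e-6                  # relative frequency offset between the two clocks
true_rtt = 2e-4              # true propagation round-trip time (s)
quantum = 1e-6               # resolution of the time-to-digital converter (s)
jitter = 5e-8                # measurement noise standard deviation (s)

t = np.arange(n) * period
# The relative clock phase drifts linearly and wraps modulo the converter
# resolution, so the measured round-trip times trace out a sawtooth over time.
measured_rtt = true_rtt + np.mod(skew * t, quantum) + rng.normal(0.0, jitter, n)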

Paper Details

Authors:
Pol del Aguila Pla, Lissy Pellaco, Satyam Dwivedi, Peter Händel and Joakim Jaldén
Submitted On:
17 May 2020 - 4:06pm

Document Files

Clock_synchronization_over_networks_using_sawtooth_models.pdf


Cite:
Pol del Aguila Pla, Lissy Pellaco, Satyam Dwivedi, Peter Händel and Joakim Jaldén, "Clock synchronization over networks using sawtooth models," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5387

Attention based Curiosity-driven Exploration in Deep Reinforcement Learning


Reinforcement learning enables training an agent via interaction with the environment. However, in the majority of real-world scenarios, the extrinsic feedback is sparse or insufficient, so intrinsic reward formulations are needed to successfully train the agent. This work investigates and extends the paradigm of curiosity-driven exploration. First, a probabilistic approach is taken to exploit the advantages of the attention mechanism, which has been successfully applied in other domains of deep learning.
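
For context, the sketch below shows the baseline curiosity signal that such work builds on: an intrinsic reward equal to the prediction error of a learned forward model. The attention-based, probabilistic extension proposed in the paper is not shown, and the network sizes here are illustrative assumptions.

import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    # Predicts the next state features from the current features and the action.
    def __init__(self, feat_dim=32, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, feat, action_onehot):
        return self.net(torch.cat([feat, action_onehot], dim=-1))

def intrinsic_reward(model, feat, action_onehot, next_feat):
    # Curiosity bonus: how surprised the forward model is by the transition.
    pred = model(feat, action_onehot)
    return 0.5 * (pred - next_feat).pow(2).sum(dim=-1)

# usage with dummy features and random discrete actions
model = ForwardModel()
actions = torch.eye(4)[torch.randint(0, 4, (8,))]
r_int = intrinsic_reward(model, torch.randn(8, 32), actions, torch.randn(8, 32))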

Paper Details

Authors:
Patrik Reizinger, Márton Szemenyei
Submitted On:
17 May 2020 - 3:59pm

Document Files

ICASSP_presentation.pdf


Cite:
Patrik Reizinger, Márton Szemenyei, "Attention based Curiosity-driven Exploration in Deep Reinforcement Learning," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5386
