
Audio and Acoustic Signal Processing

Fast continuous HRTF acquisition with unconstrained movements of human subjects


The head-related transfer function (HRTF) is widely used in 3D audio reproduction, especially over headphones. Conventionally, an HRTF database is acquired at discrete directions, and the acquisition process is time-consuming. Recent works have improved HRTF acquisition efficiency via continuous acquisition. However, these acquisition techniques still require the subject to sit still (with limited head movement) in a rotating chair. In this paper, we further relax the head-movement constraint during acquisition by using a head tracker.
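As an illustration of how tracked head orientation can be folded into continuous acquisition, the sketch below (not the authors' implementation; the rotation convention, loudspeaker geometry, and function names are assumptions) maps a fixed loudspeaker direction into head-relative azimuth and elevation from the tracker's yaw, pitch, and roll, so each recorded response can be labelled with its direction even while the subject moves freely.

    # Hypothetical sketch: labelling continuously acquired responses with
    # head-relative directions from a head tracker (not the authors' code).
    import numpy as np

    def rotation_matrix(yaw, pitch, roll):
        """Head orientation as a rotation from head frame to room frame (radians, ZYX order)."""
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
        Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
        Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
        return Rz @ Ry @ Rx

    def head_relative_direction(speaker_dir_room, yaw, pitch, roll):
        """Map a fixed loudspeaker direction (unit vector, room frame) to
        azimuth/elevation in the head frame given the tracked head pose."""
        R = rotation_matrix(yaw, pitch, roll)
        v = R.T @ speaker_dir_room              # room frame -> head frame
        azimuth = np.degrees(np.arctan2(v[1], v[0]))
        elevation = np.degrees(np.arcsin(np.clip(v[2], -1.0, 1.0)))
        return azimuth, elevation

    # Example: loudspeaker straight ahead in the room, head turned 30 degrees to
    # the left; the speaker then lies at roughly -30 degrees azimuth in the head frame.
    az, el = head_relative_direction(np.array([1.0, 0.0, 0.0]),
                                     yaw=np.radians(30.0), pitch=0.0, roll=0.0)
    print(round(az, 1), round(el, 1))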

Paper Details

Authors: Rishabh Ranjan
Submitted On: 15 March 2016 - 2:56am

Document Files

ICASSP16_JJ_Rish_rev_final - RG version.pdf

Cite: Rishabh Ranjan, "Fast continuous HRTF acquisition with unconstrained movements of human subjects", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/575. Accessed: Jul. 21, 2019.

Tutorial: Assisted Listening for headphones and hearing aids: Signal Processing Techniques


With the strong growth of mobile devices and emerging virtual reality (VR) and augmented reality (AR) applications, headsets are becoming increasingly preferred for personal listening due to their convenience and portability. Assistive listening (AL) devices such as hearing aids have also seen much advancement. Creating a natural and authentic listening experience is the common objective of VR, AR, and AL applications. In this tutorial, we present state-of-the-art audio and acoustic signal processing techniques to enhance sound reproduction in headsets and hearing aids.

Paper Details

Submitted On: 23 February 2016 - 1:44pm

Document Files

APSIPA 2015 Tutorial_GanHe_Assisted listening for headphones and hearing aids.pdf

Cite: "Tutorial: Assisted Listening for headphones and hearing aids: Signal Processing Techniques", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/574. Accessed: Jul. 21, 2019.

Applying Primary Ambient Extraction for Immersive Spatial Audio Reproduction [slides]


Spatial audio reproduction is essential to creating a natural listening experience for digital media. The majority of legacy audio content is in channel-based formats, which are tied to a particular playback system. Given the diversity of today's playback systems, the quality of the reproduced sound scene degrades significantly when the audio content and the playback system are mismatched. An active sound control approach is therefore required to take the playback system into consideration.

Paper Details

Submitted On: 23 February 2016 - 1:44pm

Document Files

APSIPA 2015_Apply PAE in spatial audio reproduction.pdf

Cite: "Applying Primary Ambient Extraction for Immersive Spatial Audio Reproduction [slides]", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/573. Accessed: Jul. 21, 2019.

Applying Primary Ambient Extraction for Immersive Spatial Audio Reproduction


Spatial audio reproduction is essential to creating a natural listening experience for digital media. The majority of legacy audio content is in channel-based formats, which are tied to a particular playback system. Given the diversity of today's playback systems, the quality of the reproduced sound scene degrades significantly when the audio content and the playback system are mismatched. An active sound control approach is therefore required to take the playback system into consideration.

Paper Details

Submitted On: 23 February 2016 - 1:44pm

Document Files

HeG15_Applying Primary Ambient Extraction for Immersive Spatial Audio Reproduction-IEEE approved.pdf

Cite: "Applying Primary Ambient Extraction for Immersive Spatial Audio Reproduction", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/572. Accessed: Jul. 21, 2019.

Non-negative Matrix Factorization and Local Discontinuity Measures for Singing Voice Separation

Paper Details

Authors: Hatem Deif
Submitted On: 23 February 2016 - 1:44pm

Document Files

Deif.pdf

Cite: Hatem Deif, "Non-negative Matrix Factorization and Local Discontinuity Measures for Singing Voice Separation", IEEE SigPort, 2015. [Online]. Available: http://sigport.org/544. Accessed: Jul. 21, 2019.

Presentation - GlobalSIP 2015

Paper Details

Submitted On: 23 February 2016 - 1:44pm

Document Files

GlobalSIP_Tracking.pdf

Cite: "Presentation - GlobalSIP 2015", IEEE SigPort, 2015. [Online]. Available: http://sigport.org/403. Accessed: Jul. 21, 2019.

Minimum Variance Semi-Supervised Boosting for Multi-label Classification

Paper Details

Authors: Shaodan Zhai
Submitted On: 23 February 2016 - 1:44pm

Document Files

GlobalSIP.pdf

Cite: Shaodan Zhai, "Minimum Variance Semi-Supervised Boosting for Multi-label Classification", IEEE SigPort, 2015. [Online]. Available: http://sigport.org/261. Accessed: Jul. 21, 2019.

Reader's Choice Supplement: Top Downloads in IEEE Xplore


Reader's Choice summarizes the most popular downloads from IEEE Xplore. Reader's Choice appears in IEEE Signal Processing Magazine; this article is a supplement to the print version.

Paper Details

Submitted On: 23 February 2016 - 1:43pm

Document Files

ReadersChoice_201603_sigport.pdf

Cite: "Reader's Choice Supplement: Top Downloads in IEEE Xplore", IEEE SigPort, 2015. [Online]. Available: http://sigport.org/256. Accessed: Jul. 21, 2019.

Memory-Less Gain Quantization in the EVS Codec


The recent standard on Enhanced Voice Services (EVS) contains two memory-less gain coding mechanisms that achieve better performance than the prediction-based techniques used in the 3GPP AMR-WB and ITU-T G.729 codecs. The EVS gain encoder uses joint vector quantization without requiring information from previous frames. Inter-frame prediction is replaced by alternative schemes based on sub-frame prediction or an estimate of the average target-signal energy. This eliminates error propagation inside the adaptive codebook.
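To make the memory-less idea concrete, the following sketch (a toy illustration under assumed signal lengths and a hypothetical gain codebook, not the standardized EVS algorithm) performs joint vector quantization of the adaptive-codebook gain and fixed-codebook gain for a single subframe: every candidate gain pair is evaluated against the current subframe only, so there is no gain-predictor state through which channel errors could propagate.

    # Hypothetical sketch of memory-less joint gain VQ for one subframe.
    import numpy as np

    def quantize_gains(target, adaptive_exc, fixed_exc, codebook):
        """Pick the (g_p, g_c) pair from `codebook` (shape [N, 2]) that minimizes the
        squared error between the target and the gain-scaled excitation contributions."""
        best_idx, best_err = -1, np.inf
        for i, (g_p, g_c) in enumerate(codebook):
            err = target - g_p * adaptive_exc - g_c * fixed_exc
            e = float(err @ err)
            if e < best_err:
                best_idx, best_err = i, e
        return best_idx, codebook[best_idx]

    # Toy example with a small hypothetical codebook of joint gain pairs.
    rng = np.random.default_rng(0)
    target = rng.standard_normal(64)        # target signal for one subframe
    adaptive_exc = rng.standard_normal(64)  # adaptive-codebook contribution
    fixed_exc = rng.standard_normal(64)     # fixed-codebook contribution
    codebook = np.array([[g_p, g_c] for g_p in (0.2, 0.6, 1.0)
                                    for g_c in (0.5, 1.0, 2.0)])
    idx, (g_p, g_c) = quantize_gains(target, adaptive_exc, fixed_exc, codebook)
    print(idx, g_p, g_c)  # only the codebook index needs to be transmitted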

Paper Details

Authors: Milan Jelinek
Submitted On: 23 February 2016 - 1:38pm

Document Files

presentation_globalsip_2015.pptx

Cite: Milan Jelinek, "Memory-Less Gain Quantization in the EVS Codec", IEEE SigPort, 2015. [Online]. Available: http://sigport.org/253. Accessed: Jul. 21, 2019.

Time-Shifting Based Primary-Ambient Extraction for Spatial Audio Reproduction


One of the key issues in spatial audio analysis and reproduction is to decompose a signal into primary and ambient components based on their directional and diffuse spatial features, respectively. Existing approaches to primary-ambient extraction (PAE), such as principal component analysis (PCA), are mainly based on a basic stereo signal model. The performance of these PAE approaches has not been well studied for input signals that do not satisfy all the assumptions of the stereo signal model.
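For reference, a minimal sketch of the conventional PCA-based PAE baseline on the basic stereo signal model is given below (the function name and toy mixing parameters are assumptions; this is the baseline the paper examines, not the time-shifting method it proposes): the stereo pair is projected onto the principal eigenvector of the 2x2 channel covariance matrix to obtain the primary component, and the residual is taken as ambience.

    # Minimal sketch of PCA-based primary-ambient extraction on a stereo signal.
    import numpy as np

    def pae_pca(x0, x1):
        """Split a stereo pair into primary and ambient components by projecting the
        channels onto the principal eigenvector of the 2x2 channel covariance matrix."""
        X = np.vstack([x0, x1])               # shape (2, N)
        R = (X @ X.T) / X.shape[1]            # channel covariance
        eigvals, eigvecs = np.linalg.eigh(R)  # eigenvalues in ascending order
        u = eigvecs[:, -1]                    # principal (primary) direction, unit norm
        primary = np.outer(u, u) @ X          # projection onto the primary direction
        ambient = X - primary                 # residual is treated as ambience
        return primary, ambient

    # Toy example: a primary source panned between the channels plus uncorrelated ambience.
    rng = np.random.default_rng(1)
    s = rng.standard_normal(48000)
    x0 = 1.0 * s + 0.3 * rng.standard_normal(48000)
    x1 = 0.5 * s + 0.3 * rng.standard_normal(48000)
    primary, ambient = pae_pca(x0, x1)
    print(primary.shape, ambient.shape)       # (2, 48000) each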

Paper Details

Authors: Ee-Leng Tan
Submitted On: 23 February 2016 - 1:43pm

Document Files

2 ASLP_Time shifting PAE_Double Column_final.pdf

Cite: Ee-Leng Tan, "Time-Shifting Based Primary-Ambient Extraction for Spatial Audio Reproduction", IEEE SigPort, 2015. [Online]. Available: http://sigport.org/203. Accessed: Jul. 21, 2019.
