
Room Acoustics and Acoustic System Modeling

Joint Estimation of the Room Geometry and Modes with Compressed Sensing


The acoustical behavior of a room for a given microphone and sound-source position is usually described by the room impulse response. With standard uniform sampling, estimating the room impulse response for arbitrary positions in the room requires a large number of measurements. To lower the required sampling rate, several approaches exploit the sparse representation of the room wavefield in terms of plane waves in the low-frequency range. The plane-wave representation has a particularly simple form in rectangular rooms.
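
As an illustration of the sparse plane-wave idea in the abstract (not the paper's joint geometry-and-mode algorithm), the following minimal Python sketch recovers the coefficients of a few active plane waves at a single frequency from a small number of pressure measurements using orthogonal matching pursuit. The room size, frequency, microphone layout, and candidate-direction grid are illustrative assumptions.

    # Minimal sketch (not the paper's method): sparse recovery of plane-wave
    # coefficients at one frequency from a few microphone measurements.
    import numpy as np

    def plane_wave_dictionary(mic_pos, wavevectors):
        # Columns are plane waves exp(-j k.r) sampled at the microphone positions.
        return np.exp(-1j * mic_pos @ wavevectors.T)            # (n_mics, n_waves)

    def omp(A, y, n_nonzero):
        # Greedy sparse recovery of x such that A x ~= y (orthogonal matching pursuit).
        residual, support = y.copy(), []
        for _ in range(n_nonzero):
            support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(A.shape[1], dtype=complex)
        x[support] = x_s
        return x

    rng = np.random.default_rng(0)
    mics = rng.uniform(0.0, 1.0, (16, 3)) * np.array([5.0, 4.0, 3.0])   # 16 mics in a 5 x 4 x 3 m room
    dirs = rng.normal(size=(200, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)                 # candidate propagation directions
    k = 2.0 * np.pi * 150.0 / 343.0                                     # wavenumber at 150 Hz
    A = plane_wave_dictionary(mics, k * dirs)
    x_true = np.zeros(200, dtype=complex)
    x_true[[5, 50, 120]] = [1.0, 0.7j, -0.4]                            # three active plane waves
    x_hat = omp(A, A @ x_true, n_nonzero=3)
    print(sorted(np.flatnonzero(np.abs(x_hat) > 1e-8)))                 # compare with the true support [5, 50, 120]

With noisy measurements, a convex basis-pursuit (l1) formulation with a noise-aware constraint is the more common choice than the greedy recovery shown here.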

Paper Details

Submitted On: 13 April 2018 - 10:07am
Document Year: 2018
Short Link: http://sigport.org/2708
Document Files: Joint Estimation of the Room Geometry and Modes with Compressed Sensing.pdf (34 downloads)

Cite:
"Joint Estimation of the Room Geometry and Modes with Compressed Sensing", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2708. Accessed: May 25, 2018.

ROBUST SEQUENCE-BASED LOCALIZATION IN ACOUSTIC SENSOR NETWORKS


Acoustic source localization in sensor networks is a challenging task because of severe constraints on the cost, energy, and effective range of sensor devices. To overcome these limitations of existing solutions, this paper formally describes, designs, implements, and evaluates a Half-Plane Intersection approach to Sequence-Based Localization (HPI-SBL) in distributed smartphone networks. The localization space can be divided into distinct regions, each uniquely identified by its node sequence, i.e., the ranking of the distances from the reference nodes to that region.
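
To make the node-sequence idea concrete, here is a minimal sketch of plain sequence-based localization, not the paper's HPI-SBL algorithm: a location is summarized by the rank order of its distances to the reference nodes, and an unknown source is mapped to the grid point whose sequence best matches the measured ranking. The node layout, grid, and the use of Kendall's tau as the similarity measure are illustrative assumptions.

    # Minimal sketch of basic sequence-based localization (SBL), not HPI-SBL.
    import numpy as np
    from scipy.stats import kendalltau

    def node_sequence(point, nodes):
        # Rank of each reference node by its distance to the point (0 = closest).
        d = np.linalg.norm(nodes - point, axis=1)
        return np.argsort(np.argsort(d))

    def localize(measured_seq, nodes, grid):
        # Pick the grid point whose node sequence correlates best with the measurement.
        scores = [kendalltau(measured_seq, node_sequence(g, nodes))[0] for g in grid]
        return grid[int(np.argmax(scores))]

    nodes = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 2.0]])
    xs = np.linspace(0.0, 10.0, 41)
    grid = np.array([[x, y] for x in xs for y in xs])
    true_pos = np.array([3.2, 6.8])
    measured = node_sequence(true_pos, nodes)     # in practice inferred from arrival order / signal energy
    print(localize(measured, nodes, grid))        # a grid point in the same distance-ranking region as true_pos

As its name suggests, HPI-SBL replaces this exhaustive grid match with an intersection of the half planes implied by the pairwise distance rankings; the sketch above only illustrates the underlying sequence representation.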

Paper Details

Authors: Naigao Jin, Xin Zhou, Zihan Wang, Yu Liu, Lei Wang
Submitted On: 13 April 2018 - 2:01am
Document Year: 2018
Short Link: http://sigport.org/2616
Document Files: poster.pdf (18 downloads)

Cite:
Naigao Jin, Xin Zhou, Zihan Wang, Yu Liu, Lei Wang, "ROBUST SEQUENCE-BASED LOCALIZATION IN ACOUSTIC SENSOR NETWORKS", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2616. Accessed: May 25, 2018.

ESTIMATION OF TIME-VARYING ROOM IMPULSE RESPONSES OF MULTIPLE SOUND SOURCES FROM OBSERVED MIXTURE AND ISOLATED SOURCE SIGNALS

Paper Details

Authors: Joonas Nikunen, Tuomas Virtanen
Submitted On: 13 April 2018 - 1:31am
Document Year: 2018
Short Link: http://sigport.org/2609
Document Files: poster_nikunen_ICASSP.pdf (22 downloads)

Cite:
Joonas Nikunen, Tuomas Virtanen, "ESTIMATION OF TIME-VARYING ROOM IMPULSE RESPONSES OF MULTIPLE SOUND SOURCES FROM OBSERVED MIXTURE AND ISOLATED SOURCE SIGNALS", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2609. Accessed: May 25, 2018.

Speech Dereverberation Based on Integrated Deep and Ensemble Learning Algorithm

Paper Details

Authors: Wei-Jen Lee, Fei Chen, Xugang Lu, Shao-Yi Chien, Yu Tsao
Submitted On: 12 April 2018 - 10:44pm
Document Year: 2018
Short Link: http://sigport.org/2563
Document Files: ICASSP2018_poster.pdf (48 downloads)

Cite:
Wei-Jen Lee, Fei Chen, Xugang Lu, Shao-Yi Chien, Yu Tsao, "Speech Dereverberation Based on Integrated Deep and Ensemble Learning Algorithm", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2563. Accessed: May 25, 2018.

Improved Noise Characterization for Relative Impulse Response Estimation


Relative Impulse Responses (ReIRs) have several applications in speech enhancement, noise suppression, and source localization for multi-channel speech processing in reverberant environments. The noise is usually assumed to be white Gaussian when estimating the ReIR between two microphones. We show that the noise in this system identification problem instead depends on the microphone measurements and on the ReIR itself.
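
For context, here is a minimal sketch of the conventional estimator the abstract refers to: the ReIR is obtained as the least-squares FIR filter mapping one microphone signal to the other, which implicitly treats the residual as white Gaussian noise (the assumption the paper revisits). The filter order, signal length, and noise level below are assumptions.

    # Minimal sketch of conventional least-squares ReIR estimation between two channels.
    import numpy as np
    from scipy.linalg import toeplitz

    def estimate_reir(x, y, order):
        # Least-squares FIR fit of y ~= h * x (convolution), filter length `order`.
        X = toeplitz(x, np.r_[x[0], np.zeros(order - 1)])   # convolution (Toeplitz) matrix
        h, *_ = np.linalg.lstsq(X, y, rcond=None)
        return h

    rng = np.random.default_rng(1)
    x = rng.standard_normal(4000)                           # reference microphone
    h_true = np.zeros(64)
    h_true[[0, 7, 20]] = [1.0, 0.5, -0.25]                  # toy relative impulse response
    y = np.convolve(x, h_true)[: len(x)] + 0.01 * rng.standard_normal(len(x))
    h_hat = estimate_reir(x, y, order=64)
    print(np.max(np.abs(h_hat - h_true)))                   # small error here, where the noise really is white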

Paper Details

Authors: Bhaskar D. Rao, Ritwik Giri, Tao Zhang
Submitted On: 12 April 2018 - 4:38pm
Document Year: 2018
Short Link: http://sigport.org/2497
Document Files: ICASSP_V3.pdf (75 downloads)

Cite:
Bhaskar D. Rao, Ritwik Giri, Tao Zhang, "Improved Noise Characterization for Relative Impulse Response Estimation", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2497. Accessed: May 25, 2018.

INFRASONIC SCENE FINGERPRINTING FOR AUTHENTICATING SPEAKER LOCATION


Ambient infrasound, at frequencies well below 20 Hz, is known to carry robust navigation cues that can be exploited to authenticate the location of a speaker. Unfortunately, many mobile devices such as smartphones are optimized for the human auditory range and therefore suppress information in the infrasonic region. In this paper, we show that these ultra-low-frequency cues can still be extracted from a standard smartphone recording by using acceleration-based cepstral features.
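
A speculative sketch of what an acceleration-based cepstral feature could look like in code, here interpreted as the real cepstrum of the twice-differenced (acceleration-like) waveform. This is only an illustration of the feature-extraction idea, not the authors' pipeline; the frame length, number of coefficients, and the double-difference interpretation are all assumptions.

    # Speculative sketch: real cepstrum of the twice-differenced waveform.
    import numpy as np

    def acceleration_cepstrum(signal, frame_len=4096, n_coeffs=20):
        accel = np.diff(signal, n=2)                               # second-order difference ("acceleration")
        n_frames = len(accel) // frame_len
        frames = accel[: n_frames * frame_len].reshape(n_frames, frame_len)
        spectrum = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
        cepstrum = np.fft.irfft(np.log(spectrum + 1e-12), axis=1)  # real cepstrum per frame
        return cepstrum[:, :n_coeffs]                              # keep the low-quefrency coefficients

    rng = np.random.default_rng(3)
    fs = 8000
    t = np.arange(0, 10, 1 / fs)                                   # 10 s synthetic "recording"
    x = 0.01 * np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)
    features = acceleration_cepstrum(x)
    print(features.shape)                                          # (number of frames, 20)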

Paper Details

Authors: Kenji Aono, Shantanu Chakrabartty, Toshihiko Yamasaki
Submitted On: 14 March 2017 - 7:06pm
Document Year: 2017
Short Link: http://sigport.org/1766
Document Files: ICASSP-2017_AASP-P3.5.pdf (152 downloads)

Cite:
Kenji Aono, Shantanu Chakrabartty, Toshihiko Yamasaki, "INFRASONIC SCENE FINGERPRINTING FOR AUTHENTICATING SPEAKER LOCATION", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1766. Accessed: May 25, 2018.

CONFIDENCE MEASURES FOR CTC-BASED PHONE SYNCHRONOUS DECODING

Paper Details

Submitted On: 6 March 2017 - 4:49pm
Document Year: 2017
Short Link: http://sigport.org/1668
Document Files: psdcm icassp2017 oral slides_zhc00.pdf (235 downloads)

Cite:
"CONFIDENCE MEASURES FOR CTC-BASED PHONE SYNCHRONOUS DECODING", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1668. Accessed: May 25, 2018.

RECURRENT CONVOLUTIONAL NEURAL NETWORK FOR SPEECH PROCESSING


Different neural networks have exhibited excellent performance on various speech processing tasks, each with its own advantages and disadvantages. We propose to use a recently developed deep learning model, the recurrent convolutional neural network (RCNN), for speech processing; it inherits some merits of both the recurrent neural network (RNN) and the convolutional neural network (CNN). The core module can be viewed as a convolutional layer embedded with an RNN, which enables the model to capture both temporal and frequency dependencies in the speech spectrogram in an efficient way.
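
As one way to picture the core module, here is a minimal PyTorch sketch of a recurrent convolutional layer in this spirit (an interpretation, not the paper's code): the same convolution is unrolled for a few internal steps, combining a feed-forward path from the input with a recurrent path from the previous state over the time-frequency map. The channel count, kernel size, and number of unrolling steps are assumptions.

    # Minimal sketch of a recurrent convolutional layer over a spectrogram-like input.
    import torch
    import torch.nn as nn

    class RecurrentConvLayer(nn.Module):
        def __init__(self, channels, steps=3):
            super().__init__()
            self.steps = steps
            self.feed_forward = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.recurrent = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.norm = nn.BatchNorm2d(channels)

        def forward(self, x):                     # x: (batch, channels, freq, time)
            state = torch.relu(self.feed_forward(x))
            for _ in range(self.steps):
                # Each step mixes the static input path with the evolving recurrent state.
                state = torch.relu(self.norm(self.feed_forward(x) + self.recurrent(state)))
            return state

    spec = torch.randn(2, 16, 40, 100)            # toy batch of 16-channel spectrogram features
    print(RecurrentConvLayer(16)(spec).shape)     # torch.Size([2, 16, 40, 100])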

Paper Details

Authors: Yue Zhao, Xingyu Jin, Xiaolin Hu
Submitted On: 5 March 2017 - 10:18am
Document Year: 2017
Short Link: http://sigport.org/1632
Document Files: icassp2017_poster.pptx (206 downloads)

Cite:
Yue Zhao, Xingyu Jin, Xiaolin Hu, "RECURRENT CONVOLUTIONAL NEURAL NETWORK FOR SPEECH PROCESSING", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1632. Accessed: May 25, 2018.

Unsupervised Speaker Adaptation of BLSTM-RNN for LVCSR Based on Speaker Code


Speaker-code-based adaptation has recently been extended to recurrent neural networks with bidirectional Long Short-Term Memory (BLSTM-RNN) [1]. Experiments on the small-scale TIMIT task have demonstrated that speaker-code-based adaptation is also effective for BLSTM-RNNs. In this paper, we evaluate this method on a large-scale task and introduce an error normalization method that balances the back-propagation errors derived from different layers for the speaker codes. We also apply a singular value decomposition (SVD) method to compress the model.
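
To illustrate the SVD compression step mentioned at the end of the abstract, here is a minimal sketch of low-rank weight factorization; the rank and matrix size are assumptions, not values from the paper. A dense m x n weight matrix is replaced by two thin factors, so a single layer becomes an m x k layer followed by a k x n layer with far fewer parameters.

    # Minimal sketch of SVD-based low-rank compression of a weight matrix.
    import numpy as np

    def svd_compress(W, rank):
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        root_s = np.sqrt(s[:rank])
        A = U[:, :rank] * root_s            # (m, k): first thin layer
        B = root_s[:, None] * Vt[:rank]     # (k, n): second thin layer, W ~= A @ B
        return A, B

    rng = np.random.default_rng(2)
    W = rng.standard_normal((1024, 1024))
    A, B = svd_compress(W, rank=128)
    params_before, params_after = W.size, A.size + B.size
    rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(params_before, params_after, round(rel_err, 3))   # ~4x fewer parameters at rank 128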

Paper Details

Authors: Zhiying Huang, Shaofei Xue, Zhijie Yan, Lirong Dai
Submitted On: 14 October 2016 - 10:15am
Document Year: 2016
Short Link: http://sigport.org/1198
Document Files: ISCSLP_presentation_ZhiyingHuang_upload.pdf (243 downloads)

Cite:
Zhiying Huang, Shaofei Xue, Zhijie Yan, Lirong Dai, "Unsupervised Speaker Adaptation of BLSTM-RNN for LVCSR Based on Speaker Code", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1198. Accessed: May 25, 2018.

Multichannel identification of room acoustic systems with adaptive filters based on orthonormal basis functions

Paper Details

Authors: Vairetti G., De Sena E., Catrysse M., Jensen S.H., Moonen M., van Waterschoot T.
Submitted On: 24 March 2016 - 3:09am
Document Year: 2016
Short Link: http://sigport.org/1019
Document Files: ICASSP2016_Vairetti_1904.pdf (298 downloads)

Cite:
Vairetti G., De Sena E., Catrysse M., Jensen S.H., Moonen M., van Waterschoot T., "Multichannel identification of room acoustic systems with adaptive filters based on orthonormal basis functions", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1019. Accessed: May 25, 2018.
