
Audio Coding

Learning about perception of temporal fine structure by building audio codecs


This poster was presented on Thursday, August 22, 2019, at the International Symposium on Auditory and Audiological Research (ISAAR): https://www.isaar.eu/index.php

SP.72 - Learning about perception of temporal fine structure by building audio codecs

https://whova.com/embedded/session/isaar_201908/701162/

Paper Details

Authors: Lars Villemoes, Arijit Biswas, Heidi-Maria Lehtonen, Heiko Purnhagen
Submitted On: 1 October 2019 - 4:43am

Document Files

ISAAR19poster.pdf


Cite: Lars Villemoes, Arijit Biswas, Heidi-Maria Lehtonen, Heiko Purnhagen, "Learning about perception of temporal fine structure by building audio codecs", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4850.

EVS AND OPUS AUDIO CODERS PERFORMANCE EVALUATION FOR ORIENTAL AND ORCHESTRAL MUSICAL INSTRUMENTS


In modern telecommunication systems, channel bandwidth and the quality of the reconstructed (decoded) audio signal are treated as major resources, so new speech and audio coders must be carefully designed and implemented with both in mind. EVS and Opus are recent coders designed to improve the quality of the reconstructed audio signal across a range of output bitrates, and both can operate on different types of input signal. Their performance therefore has to be evaluated in terms of the quality of the reconstructed signals.
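
For readers who want to reproduce this kind of comparison objectively, the sketch below encodes a test item with the Opus reference tools (opusenc/opusdec from opus-tools, assumed to be installed) at several bitrates and scores the decoded output with a simple segmental SNR. This is only a rough stand-in for the listening tests or standardized perceptual metrics a formal evaluation would use, and the input file name is hypothetical.

```python
# Sketch: objective comparison of Opus at several bitrates.
# Assumes opus-tools (`opusenc`, `opusdec`) is installed and that the test
# item is a 48 kHz WAV so the decoder output matches the reference rate.
# Codec delay/alignment handling is omitted for brevity.
import subprocess
import numpy as np
import soundfile as sf

def segmental_snr(ref, deg, frame=1024):
    n = min(len(ref), len(deg))
    ref, deg = ref[:n], deg[:n]
    snrs = []
    for i in range(0, n - frame, frame):
        r, d = ref[i:i + frame], deg[i:i + frame]
        noise = np.sum((r - d) ** 2) + 1e-12
        snrs.append(10 * np.log10(np.sum(r ** 2) / noise + 1e-12))
    return float(np.mean(snrs))

ref, sr = sf.read("oud_solo.wav")          # hypothetical oriental test item
for kbps in (16, 24, 32, 48, 64):
    subprocess.run(["opusenc", "--bitrate", str(kbps),
                    "oud_solo.wav", "tmp.opus"], check=True)
    subprocess.run(["opusdec", "tmp.opus", "tmp_dec.wav"], check=True)
    deg, _ = sf.read("tmp_dec.wav")
    print(f"{kbps} kb/s: segSNR = {segmental_snr(ref, deg):.1f} dB")
```

A real evaluation would time-align the signals and rely on perceptual metrics or subjective listening (e.g., MUSHRA-style tests) rather than SNR.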

Paper Details

Authors: Yasser A. Zenhom, Eman Mohammed, Micheal N. Mikhael, Hala A. Mansour
Submitted On: 19 September 2019 - 7:46pm

Document Files

The performance criteria of the EVS and OPUS audio coders and describe the performance evaluation results for oriental and orche


Cite: Yasser A. Zenhom, Eman Mohammed, Micheal N. Mikhael, Hala A. Mansour, "EVS AND OPUS AUDIO CODERS PERFORMANCE EVALUATION FOR ORIENTAL AND ORCHESTRAL MUSICAL INSTRUMENTS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4753.

AUDIO CODING BASED ON SPECTRAL RECOVERY BY CONVOLUTIONAL NEURAL NETWORK


This study proposes a new method of audio coding based on spectral recovery, which can enhance the performance of transform audio coding. An encoder represents spectral information of an input in a time-frequency domain and transmits only a portion of it so that the remaining spectral information can be recovered based on the transmitted information. A decoder recovers the magnitudes of missing spectral information using a convolutional neural network. The signs of missing spectral information are either transmitted or randomly assigned, according to their importance.
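
As a rough illustration of the decoder-side idea (not the authors' actual network or band layout), the sketch below uses a small 1-D convolutional model in PyTorch to map transmitted log-magnitudes to the magnitudes of the untransmitted bins, and then assigns random signs to the recovered coefficients; all dimensions are hypothetical.

```python
# Sketch of the decoder-side recovery idea (not the paper's architecture):
# a small CNN predicts magnitudes of untransmitted spectral bins from the
# transmitted ones; signs of the recovered bins are assigned at random.
import torch
import torch.nn as nn

class MagnitudeRecovery(nn.Module):
    def __init__(self, sent_bins=128, missing_bins=128):
        super().__init__()
        self.net = nn.Sequential(                 # 1-D CNN over frequency
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=1),
        )
        self.proj = nn.Linear(sent_bins, missing_bins)

    def forward(self, sent_log_mag):              # (batch, sent_bins)
        h = self.net(sent_log_mag.unsqueeze(1)).squeeze(1)
        return self.proj(h)                       # predicted log-magnitudes

model = MagnitudeRecovery()
sent = torch.randn(4, 128)                        # transmitted log-magnitudes
recovered_mag = model(sent).exp()
signs = torch.where(torch.rand_like(recovered_mag) < 0.5, -1.0, 1.0)
recovered_coeffs = signs * recovered_mag          # reconstructed coefficients
```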

Paper Details

Authors: Seong-Hyeon Shin, Seung Kwon Beack, Taejin Lee, Hochong Park
Submitted On: 10 May 2019 - 6:05am

Document Files

AUDIO_CODING_BASED_ON_SPECTRAL_RECOVERY_BY_CONVOLUTIONAL_NEURAL_NETWORK.pdf


Cite: Seong-Hyeon Shin, Seung Kwon Beack, Taejin Lee, Hochong Park, "AUDIO CODING BASED ON SPECTRAL RECOVERY BY CONVOLUTIONAL NEURAL NETWORK", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4295.

Speaker-dependent WaveNet-based delay-free ADPCM speech coding


This paper proposes a WaveNet-based delay-free adaptive differential pulse code modulation (ADPCM) speech coding system. The WaveNet generative model, a state-of-the-art model for neural-network-based speech waveform synthesis, is used as the adaptive predictor in ADPCM. To further improve speech quality, mel-cepstrum-based noise shaping and postfiltering are integrated into the proposed ADPCM system.
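
The sketch below shows the basic structure being described: a standard ADPCM loop in which the adaptive predictor is replaced by a learned model operating on already-reconstructed samples, so encoder and decoder stay in sync without extra delay. The WaveNet predictor, mel-cepstral noise shaping, and postfilter from the paper are omitted; the simple linear predictor and 4-bit codebook are placeholders.

```python
# Sketch of a delay-free ADPCM loop with a learned predictor in place of the
# classical adaptive one (the WaveNet predictor, noise shaping and postfilter
# from the paper are omitted; `predictor` is a hypothetical stand-in).
import numpy as np

def adpcm_encode_decode(x, predictor, levels):
    """Quantize the prediction residual sample by sample; the reconstructed
    history mirrors the decoder state, so decoding is the same loop driven
    by the transmitted quantizer indices."""
    history = np.zeros(16)                 # reconstructed past samples
    recon = np.zeros_like(x)
    for n in range(len(x)):
        pred = predictor(history)          # predict next sample from history
        residual = x[n] - pred
        idx = np.argmin(np.abs(levels - residual))   # scalar quantizer
        recon[n] = pred + levels[idx]      # decoder-side reconstruction
        history = np.roll(history, -1)
        history[-1] = recon[n]
    return recon

# Toy usage: a trivial linear predictor and a 4-bit uniform residual codebook.
linear_pred = lambda h: 0.9 * h[-1]
codebook = np.linspace(-0.5, 0.5, 16)
x = np.sin(2 * np.pi * 200 * np.arange(800) / 8000)
x_hat = adpcm_encode_decode(x, linear_pred, codebook)
```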

Paper Details

Submitted On: 7 May 2019 - 10:45pm

Document Files

poster.pdf


Cite: "Speaker-dependent WaveNet-based delay-free ADPCM speech coding", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3996.

Immersive Audio Coding for Virtual Reality Using a Metadata-Assisted Extension of the 3GPP EVS Codec


Virtual Reality (VR) audio scenes may be composed of a very large number of audio elements, including dynamic audio objects, fixed audio channels and scene-based audio elements such as Higher Order Ambisonics (HOA).
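
To make the scene composition concrete, here is a small, purely illustrative sketch of how such a mixed scene might be described in code; the class and field names are hypothetical and do not reflect the codec's actual metadata format.

```python
# Illustrative sketch of a mixed VR audio scene (dynamic objects, a fixed
# channel bed, and a scene-based HOA element); field names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AudioObject:          # dynamic object with a time-varying position
    name: str
    trajectory: List[Tuple[float, float, float]]  # (azimuth, elevation, dist)

@dataclass
class AudioScene:
    objects: List[AudioObject] = field(default_factory=list)
    channel_bed: Tuple[str, ...] = ("L", "R", "C", "Ls", "Rs")  # fixed channels
    hoa_order: int = 3       # scene-based element: Higher Order Ambisonics

scene = AudioScene(objects=[AudioObject("helicopter", [(30.0, 10.0, 5.0)])])
print(f"{len(scene.objects)} objects, {len(scene.channel_bed)} bed channels, "
      f"{(scene.hoa_order + 1) ** 2} HOA signals")
```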

Paper Details

Authors: David McGrath, Stefan Bruhn, Heiko Purnhagen, Michael Eckert, Juan Torres, Stefanie Brown, Dan Darcy
Submitted On: 7 May 2019 - 3:10pm

Document Files

VRStream.pdf


Cite: David McGrath, Stefan Bruhn, Heiko Purnhagen, Michael Eckert, Juan Torres, Stefanie Brown, Dan Darcy, "Immersive Audio Coding for Virtual Reality Using a Metadata-Assisted Extension of the 3GPP EVS Codec", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3950.

WAVENET BASED LOW RATE SPEECH CODING


Traditional parametric coding of speech facilitates low rate but provides poor reconstruction quality because of the inadequacy of the model used. We describe how a WaveNet generative speech model can be used to generate high quality speech from the bit stream of a standard parametric coder operating at 2.4 kb/s. We compare this parametric coder with a waveform coder based on the same generative model and show that approximating the signal waveform incurs a large rate penalty.
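
The structural idea, generating waveform samples autoregressively while conditioning on the parametric coder's decoded features, can be sketched as below; a small GRU stands in for the actual WaveNet, and the feature dimension and quantization are illustrative.

```python
# Sketch of the decoder-side idea: an autoregressive neural model generates
# speech samples conditioned on the parametric coder's decoded features.
# A placeholder GRU stands in for the WaveNet used in the paper.
import torch
import torch.nn as nn

class ConditionalARDecoder(nn.Module):
    def __init__(self, feat_dim=18, hidden=256, q_levels=256):
        super().__init__()
        self.rnn = nn.GRU(input_size=1 + feat_dim, hidden_size=hidden,
                          batch_first=True)
        self.out = nn.Linear(hidden, q_levels)   # distribution over mu-law bins

    def forward(self, prev_samples, frame_features):
        # prev_samples: (B, T, 1); frame_features upsampled to (B, T, feat_dim)
        x = torch.cat([prev_samples, frame_features], dim=-1)
        h, _ = self.rnn(x)
        return self.out(h)                        # logits per time step

dec = ConditionalARDecoder()
logits = dec(torch.zeros(1, 160, 1), torch.zeros(1, 160, 18))
print(logits.shape)   # (1, 160, 256): a categorical over quantized samples
```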

Paper Details

Authors: W. Bastiaan Kleijn, Felicia S. C. Lim, Alejandro Luebs, Jan Skoglund, Florian Stimberg, Quan Wang, Thomas C. Walters
Submitted On: 4 May 2018 - 2:28pm

Document Files

WaveNetCoding_b.pdf


Cite: W. Bastiaan Kleijn, Felicia S. C. Lim, Alejandro Luebs, Jan Skoglund, Florian Stimberg, Quan Wang, Thomas C. Walters, "WAVENET BASED LOW RATE SPEECH CODING", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3201.

ADAPTIVE CODING OF NON-NEGATIVE FACTORIZATION PARAMETERS WITH APPLICATION TO INFORMED SOURCE SEPARATION


Informed source separation (ISS) uses source separation for extracting audio objects out of their downmix given some pre-computed parameters. In recent years, non-negative tensor factorization (NTF) has proven to be a good choice for compressing audio objects at an encoding stage. At the decoding stage, these parameters are used to separate the downmix with Wiener-filtering. The quantized NTF parameters have to be encoded to a bitstream prior to transmission.
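
The decoding side of this pipeline can be sketched as follows, with a plain NMF-style model per source standing in for the NTF model and with quantization and bitstream coding omitted; all shapes and the rank are illustrative.

```python
# Sketch of the ISS decoding idea: low-rank (NMF-style) source models, sent as
# side information, drive a Wiener filter that separates the downmix.
# The tensor factorization, quantization and entropy coding from the paper
# are omitted; shapes and rank are illustrative.
import numpy as np

rng = np.random.default_rng(0)
F, T, rank = 257, 200, 8

# Encoder side (conceptually): per-source NMF parameters W_j, H_j would be
# estimated from the true source spectrograms and transmitted.
W = [rng.random((F, rank)) for _ in range(3)]     # spectral bases per source
H = [rng.random((rank, T)) for _ in range(3)]     # activations per source

# Decoder side: rebuild source power models and Wiener-filter the downmix.
models = [Wj @ Hj for Wj, Hj in zip(W, H)]        # approximate source PSDs
mix_stft = rng.standard_normal((F, T)) + 1j * rng.standard_normal((F, T))
total = sum(models) + 1e-12
estimates = [(m / total) * mix_stft for m in models]   # Wiener masks * downmix
print(estimates[0].shape)
```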

Paper Details

Authors: Max Bläser, Christian Rohlfing, Yingbo Gao, Mathias Wien
Submitted On: 22 April 2018 - 1:25pm

Document Files

BlRo2018_Poster_print.pdf


Cite: Max Bläser, Christian Rohlfing, Yingbo Gao, Mathias Wien, "ADAPTIVE CODING OF NON-NEGATIVE FACTORIZATION PARAMETERS WITH APPLICATION TO INFORMED SOURCE SEPARATION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3135.

Decorrelation for Audio Object Coding


Object-based representations of audio content are increasingly used in entertainment systems to deliver immersive and personalized experiences. Efficient storage and transmission of such content can be achieved by joint object coding algorithms that convey a reduced number of downmix signals together with parametric side information that enables object reconstruction in the decoder. This paper presents an approach to improve the performance of joint object coding by adding one or more decorrelators to the decoding process.
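
The decoding idea can be sketched as below: a dry parametric upmix of the downmix channels plus a wet, decorrelated contribution that restores inter-object independence lost in the downmix. The delay-based decorrelator and the mixing coefficients here are crude placeholders, not the method from the paper.

```python
# Sketch of object reconstruction from a downmix: a dry parametric upmix
# plus a decorrelated (wet) component. The decorrelator is a crude delay
# stand-in (real systems use all-pass or reverberation-like structures),
# and the mixing matrices are illustrative, not transmitted side info.
import numpy as np

def decorrelate(x, delay=113):
    """Very rough decorrelator: a delayed copy of the input."""
    return np.concatenate([np.zeros(delay), x[:-delay]])

T = 48000
downmix = np.random.randn(2, T)        # 2 downmix channels
C_dry = np.random.randn(4, 2) * 0.5    # dry upmix coefficients (4 objects)
C_wet = np.random.randn(4, 2) * 0.2    # gains applied to decorrelated signals

wet = np.stack([decorrelate(ch) for ch in downmix])
objects = C_dry @ downmix + C_wet @ wet      # (4, T) reconstructed objects
print(objects.shape)
```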

Paper Details

Authors: Lars Villemoes, Toni Hirvonen, and Heiko Purnhagen
Submitted On: 19 May 2017 - 7:40am

Document Files

ICASSP2017_FINAL.pdf


Cite: Lars Villemoes, Toni Hirvonen, Heiko Purnhagen, "Decorrelation for Audio Object Coding", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1795.

Pre-Echo Noise Reduction in Frequency-Domain Audio Codecs (Poster)


One of the most common yet detrimental compression artifacts in frequency-domain audio codecs is known as pre-echo, which is perceived as a brief noise preceding transient signals, and is discernable even without direct comparison to the original signal. Because of its substantial negative impact on audio quality, many techniques have been proposed to alleviate it, but not without effect on coding efficiency.
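
The mechanism behind pre-echo can be demonstrated in a few lines: coarse quantization of block-transform coefficients spreads the quantization error over the entire synthesis block, so noise appears ahead of a transient that occurs late in the block. The sketch below uses an orthonormal DCT block as a stand-in for a codec's MDCT.

```python
# Small demonstration of the pre-echo mechanism: coarse quantization of
# block-transform coefficients spreads noise over the whole synthesis block,
# so it shows up *before* a transient located late in the block.
# (An orthonormal DCT block stands in for the MDCT of a real codec.)
import numpy as np
from scipy.fft import dct, idct

N = 1024
x = np.zeros(N)
x[900:] = np.random.randn(N - 900)      # transient energy near the block end

coeffs = dct(x, norm="ortho")
step = 0.2                              # coarse quantizer step size
x_hat = idct(np.round(coeffs / step) * step, norm="ortho")

err = x_hat - x
pre = 10 * np.log10(np.mean(err[:900] ** 2) + 1e-12)
post = 10 * np.log10(np.mean(err[900:] ** 2) + 1e-12)
print(f"noise before transient: {pre:.1f} dB, during transient: {post:.1f} dB")
```

Window/block switching is the classic remedy in standardized codecs, which is consistent with the abstract's observation that mitigation comes at some cost in coding efficiency.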

Paper Details

Authors: Jimmy Lapierre, Roch Lefebvre
Submitted On: 1 March 2017 - 7:11pm

Document Files

Poster


Cite: Jimmy Lapierre, Roch Lefebvre, "Pre-Echo Noise Reduction in Frequency-Domain Audio Codecs (Poster)", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1569.

Adaptive selection of lag-window shape for linear predictive analysis in the 3GPP EVS codec

Paper Details

Submitted On: 23 February 2016 - 1:44pm

Document Files

kmhGlobalSIP2015ALW_1212a.pdf


Cite: "Adaptive selection of lag-window shape for linear predictive analysis in the 3GPP EVS codec", IEEE SigPort, 2015. [Online]. Available: http://sigport.org/489.
