- Transducers
- Spatial and Multichannel Audio
- Source Separation and Signal Enhancement
- Room Acoustics and Acoustic System Modeling
- Network Audio
- Audio for Multimedia
- Audio Processing Systems
- Audio Coding
- Audio Analysis and Synthesis
- Active Noise Control
- Auditory Modeling and Hearing Aids
- Bioacoustics and Medical Acoustics
- Music Signal Processing
- Loudspeaker and Microphone Array Signal Processing
- Echo Cancellation
- Content-Based Audio Processing
Contributions of the Piriform Fossa of Female Speakers to Vowel Spectra
The bilateral cavities of the piriform fossa are side branches of the vocal tract and produce anti-resonances in its transfer function. This effect has been known for male vocal tracts, but female data have been scarce. This study investigates the contributions of the piriform fossa to vowel spectra in female vocal tracts by means of MRI-based vocal-tract modeling and an acoustic experiment with the water-filling technique. Results from three female subjects indicate that the piriform fossa generates one or two dips in the 4-6 kHz frequency region.
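The anti-resonance a side branch introduces corresponds to zeros in the transfer function. A minimal sketch of this effect (illustrative values only: the 16 kHz sampling rate and the 4.5 kHz notch frequency are assumptions within the reported 4-6 kHz range, not measurements from the study):

```python
import numpy as np

fs = 16000.0          # sampling rate (Hz), assumed for illustration
f_dip = 4500.0        # anti-resonance frequency (Hz), within the reported 4-6 kHz range

# A side branch contributes a pair of conjugate zeros to the transfer
# function; on the unit circle they produce a spectral dip (notch).
w0 = 2 * np.pi * f_dip / fs
b = np.array([1.0, -2 * np.cos(w0), 1.0])   # FIR with zeros at e^{+-j*w0}

# Evaluate |H(e^{jw})| on a frequency grid (what scipy.signal.freqz would do).
freqs = np.linspace(0, fs / 2, 512)
w = 2 * np.pi * freqs / fs
H = b[0] + b[1] * np.exp(-1j * w) + b[2] * np.exp(-2j * w)
mag_db = 20 * np.log10(np.abs(H) + 1e-12)

dip_freq = freqs[np.argmin(mag_db)]
print(dip_freq)   # deepest dip lies near 4500 Hz
```

Sweeping `f_dip` across 4-6 kHz moves the notch in the same way the fossa geometry shifts the measured dips.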
A multi-channel/multi-speaker interactive 3D Audio-Visual Speech Corpus in Mandarin
This paper presents a multi-channel/multi-speaker 3D audiovisual corpus for Mandarin continuous speech recognition and other fields, such as speech visualization and speech synthesis. The corpus consists of 24 speakers with about 18k utterances, about 20 hours in total. For each utterance, the audio streams were recorded by two professional microphones in the near field and far field respectively, while a marker-based 3D facial motion capture system with six infrared cameras was
Long Short-term Memory Recurrent Neural Network based Segment Features for Music Genre Classification
In conventional frame-feature-based music genre classification methods, the audio data is represented by independent frames and the sequential nature of audio is totally ignored. If the sequential knowledge is well modeled and combined, the classification performance can be significantly improved. The long short-term memory (LSTM) recurrent neural network (RNN), which uses a set of special memory cells to model long-range feature sequences, has been successfully used for many sequence labeling and sequence
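The segment-level modeling described above can be sketched as a single LSTM forward pass that summarizes a frame sequence into one fixed-length vector. This is a minimal NumPy illustration of the mechanism, not the paper's network; dimensions, random weights, and the 13-MFCC frame size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_segment_feature(frames, W, U, b):
    """Run one LSTM layer over a sequence of frame features and return
    the final hidden state as a fixed-length segment feature.
    W, U, b hold the stacked input/forget/cell/output gate parameters."""
    hidden = b.shape[0] // 4
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in frames:                        # frames: (T, input_dim)
        z = W @ x + U @ h + b               # all four gates at once
        i = sigmoid(z[0*hidden:1*hidden])   # input gate
        f = sigmoid(z[1*hidden:2*hidden])   # forget gate
        g = np.tanh(z[2*hidden:3*hidden])   # candidate cell state
        o = sigmoid(z[3*hidden:4*hidden])   # output gate
        c = f * c + i * g                   # memory cell carries long-range context
        h = o * np.tanh(c)
    return h

input_dim, hidden = 13, 8                   # e.g. 13 MFCCs per frame (assumed)
W = rng.standard_normal((4 * hidden, input_dim)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

frames = rng.standard_normal((50, input_dim))    # one 50-frame audio segment
feature = lstm_segment_feature(frames, W, U, b)
print(feature.shape)    # a fixed-length summary of the whole sequence
```

The final hidden state plays the role of a segment feature that a downstream genre classifier could consume in place of independent frame features.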
Mismatched Training Data Enhancement for Automatic Recognition of Children’s Speech using DNN-HMM
The increasing profusion of commercial automatic speech recognition technology applications has been driven by big-data techniques, making use of high quality labelled speech datasets. Children’s speech displays greater time and frequency domain variability than typical adult speech, lacks the depth and breadth of training material, and presents difficulties relating to capture quality. All of these factors act to reduce the achievable performance of systems that recognise children’s speech.
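One common way to bridge the adult-child mismatch described above is speed/pitch perturbation of the plentiful adult data. The sketch below shows the idea with simple linear-interpolation resampling; this is a generic augmentation technique, not necessarily the enhancement method the paper proposes, and the signals and factor are illustrative:

```python
import numpy as np

def speed_perturb(signal, factor):
    """Resample a waveform by linear interpolation. With factor > 1 the
    output is shorter, so at the original playback rate its pitch and
    formants are scaled up - making adult speech sound more child-like."""
    n_out = int(len(signal) / factor)
    src = np.clip(np.arange(n_out) * factor, 0, len(signal) - 1)
    idx = src.astype(int)                      # integer sample positions
    frac = src - idx                           # fractional remainder
    nxt = np.minimum(idx + 1, len(signal) - 1)
    return (1 - frac) * signal[idx] + frac * signal[nxt]

fs = 16000
t = np.arange(fs) / fs
adult = np.sin(2 * np.pi * 120 * t)            # 120 Hz tone as stand-in audio
childlike = speed_perturb(adult, 1.25)         # 25% faster -> pitch/formants up
print(len(childlike))
```

Applying such perturbations to adult training material is one way to reduce the frequency-domain mismatch before DNN-HMM training.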
Detection of Mood Disorder Using Speech Emotion Profiles and LSTM
In mood disorder diagnosis, bipolar disorder (BD) patients are often misdiagnosed with unipolar depression (UD) on initial presentation. It is crucial to establish an accurate distinction between BD and UD to make a correct and early diagnosis, leading to improvements in treatment and course of illness. To deal with this misdiagnosis problem, in this study we elicited subjects’ emotions with six emotion-eliciting video clips. After each clip, their speech responses were collected during interviews with a clinician.
The Correlation Between Signal Distance and Consonant Pronunciation in Mandarin Words
In spoken Mandarin, some consonant and vowel pairs are hard to distinguish and pronounce clearly, even for some native speakers. This study investigates the signal distance between consonants compared in pairs, from a signal-processing point of view, to reveal the correlation between signal distance and consonant pronunciation. Several popular objective speech-quality measures are applied in a novel way to obtain the signal distance.
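The abstract does not name the specific measures used, but a typical objective distance of this kind is the log-spectral distance. A minimal sketch, with synthetic stand-ins for two consonant tokens (the signals, FFT size, and sampling rate are assumptions):

```python
import numpy as np

def log_spectral_distance(x, y, n_fft=512):
    """RMS log-spectral distance (dB) between two signals: one simple
    objective measure of how far apart two pronunciations are."""
    X = np.abs(np.fft.rfft(x, n_fft)) + 1e-12   # epsilon avoids log(0)
    Y = np.abs(np.fft.rfft(y, n_fft)) + 1e-12
    diff = 20 * np.log10(X / Y)
    return np.sqrt(np.mean(diff ** 2))

fs = 16000
t = np.arange(1024) / fs
rng = np.random.default_rng(1)
# Stand-ins for two consonant tokens: same noise base, one band emphasized.
a = rng.standard_normal(1024)
b = a + 0.5 * np.sin(2 * np.pi * 3000 * t)      # extra energy near 3 kHz

print(log_spectral_distance(a, a))               # 0.0 for identical signals
print(log_spectral_distance(a, b) > 0)           # positive for distinct tokens
```

A larger distance between the members of a consonant pair would be expected to track how easily the pair is confused in pronunciation.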
Pronunciation Error Detection using DNN Articulatory Model based on Multi-lingual and Multi-task Learning
Spatial Co-variation of Lip and Tongue at Strong and Weak Syllables
Speech production requires control of the coordination among different articulatory organs. During natural speech, articulatory co-variation is more common than compensation, but few studies support this view. In this study, the coordination of lip and tongue articulation during speech was examined using articulatory data. Native speakers of Chinese served as subjects. The speech materials consisted of short Chinese sentences, which include words having the cardinal vowels at different locations in sentences with and without emphasis.
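One simple way to quantify co-variation versus compensation in such articulatory data is the correlation between the two displacement tracks. A minimal sketch with synthetic stand-ins for the lip and tongue trajectories (the tracks and sampling are invented for illustration, not the study's data):

```python
import numpy as np

def covariation(lip, tongue):
    """Pearson correlation between lip and tongue displacement tracks:
    positive r suggests co-variation, negative r suggests compensation."""
    return np.corrcoef(lip, tongue)[0, 1]

# Synthetic articulatory tracks (stand-ins for motion-capture/EMA data):
t = np.linspace(0, 1, 200)
lip = np.sin(2 * np.pi * 3 * t)
tongue = 0.8 * np.sin(2 * np.pi * 3 * t) + 0.1   # moves together with the lip
print(covariation(lip, tongue))                   # close to 1: co-variation
```

Comparing such correlations between strong and weak syllables would show whether the degree of co-variation depends on emphasis.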
The Examination of the Relationship between Perception and Production of Mandarin tone of Kazak Students
This study examines the relationship between the perception and production of Mandarin tones by Kazak minority learners from China. An eight-day perceptual training course on Mandarin tones was designed. Perception was assessed by means of an identification test. Production data were collected at both pretest and post-test and evaluated by native speakers of Mandarin Chinese. The results from the perception tests at pretest and post-test reveal that training Kazak learners to perceive Mandarin tones is effective, with
Study on the Relation of Fundamental and Formant Frequencies for Affective Speech Synthesis
The Directions into Velocities of Articulators (DIVA) model is a self-adaptive neural network model that controls the movements of a simulated vocal tract to produce words, syllables, or phonemes. However, the DIVA model lacks emotion functions. To implement an emotion function in the DIVA model, we investigate the process of affective speech production based on the combination of fundamental frequency (F0) and formant frequencies, as well as the relations between F0 and the formants of emotional speech.
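The F0/formant combination at the heart of this investigation can be illustrated with a textbook source-filter sketch: an impulse train at F0 driving a cascade of second-order formant resonators. The F0 and formant/bandwidth values below are rough illustrative numbers for an /a/-like vowel, not figures from the study:

```python
import numpy as np

def resonator(signal, f_formant, bandwidth, fs):
    """Second-order IIR resonator modeling one formant of the vocal tract."""
    r = np.exp(-np.pi * bandwidth / fs)          # pole radius from bandwidth
    theta = 2 * np.pi * f_formant / fs           # pole angle from formant freq
    a1, a2 = -2 * r * np.cos(theta), r * r
    y = np.zeros_like(signal)
    for n in range(len(signal)):
        y1 = y[n - 1] if n > 0 else 0.0
        y2 = y[n - 2] if n > 1 else 0.0
        y[n] = signal[n] - a1 * y1 - a2 * y2
    return y

fs = 16000
f0 = 220.0                                       # raised F0, e.g. excited voice
formants = [(730, 90), (1090, 110), (2440, 170)] # rough /a/ values (assumed)

# Glottal source: impulse train at F0 controls perceived pitch.
n = np.arange(int(0.1 * fs))
source = ((n % int(fs / f0)) == 0).astype(float)

speech = source
for f, bw in formants:                           # formants shape vowel quality
    speech = resonator(speech, f, bw, fs)
print(len(speech))
```

Varying `f0` changes the perceived pitch while shifting the `formants` list changes the vowel color, which is exactly the F0-formant interplay an emotion function for DIVA would need to control.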