
Audio Analysis and Synthesis

Tutorial T-9: Model-based Speech and Audio Processing

Paper Details

Authors:
Mads Græsbøll Christensen, Jesper Kjær Nielsen, and Jesper Rindom Jensen
Submitted On:
16 April 2018 - 5:52pm

Document Files

Slides (366 downloads)

References (55 downloads)


[1] Mads Græsbøll Christensen, Jesper Kjær Nielsen, and Jesper Rindom Jensen, "Tutorial T-9: Model-based Speech and Audio Processing", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2917. Accessed: May 25, 2018.

Investigating the Effect of Sound-Event Loudness on Crowdsourced Audio Annotations


Audio annotation is an important step in developing machine-listening systems. It is also a time-consuming process, which has motivated investigators to crowdsource audio annotations. However, many factors affect annotations, and many of them have not been adequately investigated. In previous work, we investigated the effects of visualization aids and sound-scene complexity on the quality of crowdsourced sound-event annotations.
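The loudness manipulation at the heart of this study can be illustrated with a small sketch (not the authors' code): a hypothetical `rms_db` helper measures a sound event's RMS level in dB, and scaling an event by a factor of 10 raises that level by exactly 20 dB.

```python
import numpy as np

def rms_db(x, eps=1e-12):
    """Root-mean-square level of a signal, in dB relative to full scale 1.0."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))) + eps)

rng = np.random.default_rng(0)
quiet_event = 0.01 * rng.standard_normal(16000)  # 1 s of noise at a low level
loud_event = 10.0 * quiet_event                  # same event, 20 dB louder

level_quiet = rms_db(quiet_event)
level_loud = rms_db(loud_event)
```

In a crowdsourcing experiment of this kind, the same sound event would be mixed into a scene at several such levels and annotation quality compared across levels.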

Paper Details

Authors:
Mark Cartwright, Justin Salamon, Ayanna Seals, Oded Nov, Juan Pablo Bello
Submitted On:
14 April 2018 - 5:17pm

Document Files

cartwright_icassp_2018_poster.pdf (19 downloads)


[1] Mark Cartwright, Justin Salamon, Ayanna Seals, Oded Nov, Juan Pablo Bello, "Investigating the Effect of Sound-Event Loudness on Crowdsourced Audio Annotations", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2853. Accessed: May 25, 2018.

SampleRNN-Based Neural Vocoder for Statistical Parametric Speech Synthesis


This paper presents a SampleRNN-based neural vocoder for statistical parametric speech synthesis. The method uses a conditional SampleRNN model composed of a hierarchical structure of GRU layers and feed-forward layers to capture long-span dependencies between acoustic features and waveform sequences. Compared with conventional vocoders based on the source-filter model, the proposed vocoder is trained without assumptions derived from prior knowledge of speech production and provides better modeling and recovery of phase information.
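As a rough illustration of the conditional architecture described above (random weights, a single GRU layer, pure NumPy; the actual vocoder is trained and uses a deeper hierarchy), the sketch below runs a GRU autoregressively, feeding each step the previous waveform sample concatenated with an acoustic-feature frame.

```python
import numpy as np

rng = np.random.default_rng(1)
H, COND_DIM = 16, 4   # hidden size, acoustic-feature dimension
IN = 1 + COND_DIM     # input: previous sample + conditioning features

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Randomly initialised GRU parameters (a real vocoder would learn these).
P = {k: 0.1 * rng.standard_normal(s) for k, s in {
    "Wz": (H, IN), "Uz": (H, H), "Wr": (H, IN), "Ur": (H, H),
    "Wh": (H, IN), "Uh": (H, H), "out": (1, H)}.items()}

def gru_step(x, h):
    z = sigmoid(P["Wz"] @ x + P["Uz"] @ h)           # update gate
    r = sigmoid(P["Wr"] @ x + P["Ur"] @ h)           # reset gate
    h_tilde = np.tanh(P["Wh"] @ x + P["Uh"] @ (r * h))
    return (1.0 - z) * h + z * h_tilde

def generate(acoustic_frames):
    """Autoregressively emit one waveform sample per conditioning frame."""
    h, sample, wave = np.zeros(H), 0.0, []
    for cond in acoustic_frames:
        x = np.concatenate(([sample], cond))         # condition on acoustics
        h = gru_step(x, h)
        sample = float((P["out"] @ h)[0])
        sample = float(np.tanh(sample))              # bounded output in (-1, 1)
        wave.append(sample)
    return np.array(wave)

acoustic = rng.standard_normal((200, COND_DIM))      # stand-in acoustic features
waveform = generate(acoustic)
```

The key point the sketch shows is the conditioning path: the acoustic features enter the recurrence at every sample step, so waveform generation is driven by the parametric synthesis features rather than by a source-filter assumption.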

Paper Details

Authors:
Yang Ai, Hong-Chuan Wu, Zhen-Hua Ling
Submitted On:
13 April 2018 - 3:29am

Document Files

ICASSP2018_poster_aiyang.pdf (31 downloads)


[1] Yang Ai, Hong-Chuan Wu, Zhen-Hua Ling, "SampleRNN-Based Neural Vocoder for Statistical Parametric Speech Synthesis", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2633. Accessed: May 25, 2018.

SampleRNN-Based Neural Vocoder for Statistical Parametric Speech Synthesis



Paper Details

Authors:
Yang Ai, Hong-Chuan Wu, Zhen-Hua Ling
Submitted On:
13 April 2018 - 3:29am

Document Files

Poster (44 downloads)


[1] Yang Ai, Hong-Chuan Wu, Zhen-Hua Ling, "SampleRNN-Based Neural Vocoder for Statistical Parametric Speech Synthesis", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2632. Accessed: May 25, 2018.

Revisiting the Problem of Audio-Based Hit Song Prediction Using Convolutional Neural Networks


Being able to predict whether a song can become a hit has important applications in the music industry. Although the popularity of a song can be greatly affected by external factors such as social and commercial influences, the degree to which audio features computed from musical signals (which we regard as internal factors) can predict song popularity is an interesting research question in its own right.
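To make the "internal factors" setup concrete, here is a hedged NumPy sketch (not the paper's network) of the basic convolutional pipeline such a predictor uses: convolve filters across the time axis of a log-mel spectrogram, apply a ReLU, pool over time, and map the pooled features to a scalar popularity score.

```python
import numpy as np

rng = np.random.default_rng(2)
N_MELS, T, N_FILT, K = 40, 128, 8, 5  # spectrogram size, filter count, kernel width

spec = rng.standard_normal((N_MELS, T))            # stand-in log-mel spectrogram
W = 0.1 * rng.standard_normal((N_FILT, N_MELS, K)) # convolutional filters
w_out = 0.1 * rng.standard_normal(N_FILT)          # linear readout weights

def conv1d_time(spec, W):
    """Valid 1-D convolution along the time axis, one output row per filter."""
    n_filt, _, k = W.shape
    t_out = spec.shape[1] - k + 1
    out = np.empty((n_filt, t_out))
    for f in range(n_filt):
        for t in range(t_out):
            out[f, t] = np.sum(W[f] * spec[:, t:t + k])
    return out

feat = np.maximum(conv1d_time(spec, W), 0.0)  # ReLU feature maps
pooled = feat.mean(axis=1)                    # global average pooling over time
score = float(pooled @ w_out)                 # scalar "hit" score
```

With random weights the score is meaningless; in the study, weights of this kind would be fit so that the score correlates with observed song popularity.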


Paper Details

Authors:
Li-Chia Yang, Szu-Yu Chou, Jen-Yu Liu, Yi-Hsuan Yang, Yi-An Chen
Submitted On:
3 March 2017 - 12:59am

Document Files

icassp2017.pdf (337 downloads)


[1] Li-Chia Yang, Szu-Yu Chou, Jen-Yu Liu, Yi-Hsuan Yang, Yi-An Chen, "Revisiting the Problem of Audio-Based Hit Song Prediction Using Convolutional Neural Networks", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1600. Accessed: May 25, 2018.

Global Variance in Speech Synthesis with Linear Dynamical Models

Paper Details

Authors:
Vassilis Tsiaras, Ranniery Maia, Vassilis Diakoloukas, Yannis Stylianou, Vassilis Digalakis
Submitted On:
11 March 2017 - 8:48pm

Document Files

GV_LDM.pdf (127 downloads)


[1] Vassilis Tsiaras, Ranniery Maia, Vassilis Diakoloukas, Yannis Stylianou, Vassilis Digalakis, "Global Variance in Speech Synthesis with Linear Dynamical Models", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1597. Accessed: May 25, 2018.

Steganalysis of AAC Using Calibrated Markov Model of Adjacent Codebook (poster)

Paper Details

Submitted On:
24 March 2016 - 10:48am

Document Files

poster_STEGANALYSIS OF AAC USINGCALIBRATED MARKOV MODEL OF ADJACENT CODEBOOK.pdf (257 downloads)


[1] "Steganalysis of AAC Using Calibrated Markov Model of Adjacent Codebook" (poster), IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1026. Accessed: May 25, 2018.

Steganalysis of AAC Using Calibrated Markov Model of Adjacent Codebook (poster)

Paper Details

Submitted On:
24 March 2016 - 10:48am

Document Files

poster_STEGANALYSIS OF AAC USINGCALIBRATED MARKOV MODEL OF ADJACENT CODEBOOK.pdf (236 downloads)


[1] "Steganalysis of AAC Using Calibrated Markov Model of Adjacent Codebook" (poster), IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1025. Accessed: May 25, 2018.

Lecture ICASSP 2016 Pierre Laffitte


This presentation introduces a deep learning model that classifies the audio scene in the subway environment, with the final goal of detecting screams and shouts for surveillance purposes. The model is a combination of a Deep Belief Network and a Deep Neural Network (generatively pre-trained within the DBN framework, then fine-tuned discriminatively within the DNN framework), and it is trained on a novel database of pseudo-real signals collected in the Paris metro.
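The generative-pretraining-then-discriminative-fine-tuning recipe can be sketched in miniature. Everything below is illustrative only: plain PCA stands in for the stacked-RBM pretraining, and a single logistic output layer stands in for the DNN fine-tuning stage.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-class "audio frame" data: class 1 is shifted along a hidden direction.
X = rng.standard_normal((400, 20))
y = (rng.random(400) < 0.5).astype(float)
X[y == 1] += 1.5

# Stage 1 -- unsupervised "pretraining": learn a projection from the data alone
# (here plain PCA; the presented model uses stacked RBMs instead).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Hfeat = Xc @ Vt[:5].T          # 5-dimensional hidden representation

# Stage 2 -- discriminative fine-tuning: a logistic output layer on the
# pretrained features, trained with gradient descent on the labels.
w, b = np.zeros(5), 0.0

def loss(w, b):
    p = 1.0 / (1.0 + np.exp(-(Hfeat @ w + b)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

loss_before = loss(w, b)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(Hfeat @ w + b)))
    grad = p - y
    w -= 0.1 * (Hfeat.T @ grad) / len(y)
    b -= 0.1 * grad.mean()
loss_after = loss(w, b)
```

The design point the sketch preserves is the two-stage split: the representation is shaped without labels first, and only the discriminative objective touches the labels.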

Paper Details

Authors:
Pierre Laffitte, David Sodoyer, Laurent Girin, Charles Tatkeu
Submitted On:
23 March 2016 - 10:01am

Document Files

ICASSP Lecture.pdf (239 downloads)


[1] Pierre Laffitte, David Sodoyer, Laurent Girin, Charles Tatkeu, "Lecture ICASSP 2016 Pierre Laffitte", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/991. Accessed: May 25, 2018.

A Time Regularization Technique for Discrete Spectral Envelopes Through Frequency Derivative



Abstract: In most applications of sinusoidal models for speech signals, an amplitude spectral envelope is necessary. This envelope is not only assumed to fit the vocal tract filter response as accurately as possible, but it should also exhibit slowly varying shapes across time. Indeed, time irregularities can generate artifacts in signal manipulations or improperly increase the variance of features used in statistical models. In this letter, a simple technique is suggested to improve this time regularity.
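The letter's specific technique operates through a frequency derivative; as a generic illustration of what "improving time regularity" means, the sketch below smooths an envelope track by penalizing squared frame-to-frame differences, which has a closed-form solution per frequency bin.

```python
import numpy as np

rng = np.random.default_rng(4)
T, F, lam = 50, 12, 10.0  # frames, frequency bins, regularization weight

# Stand-in envelope track: a slow trend plus frame-to-frame jitter.
t = np.linspace(0.0, 1.0, T)[:, None]
E = np.sin(2 * np.pi * t + np.arange(F)) + 0.3 * rng.standard_normal((T, F))

# First-order difference operator across time, shape (T-1, T).
D = np.eye(T, k=1)[:-1] - np.eye(T)[:-1]

# Per-bin regularized fit: argmin_X ||X - E||^2 + lam * ||D X||^2,
# solved in closed form as (I + lam * D^T D) X = E.
X = np.linalg.solve(np.eye(T) + lam * D.T @ D, E)

def variation(A):
    """Total squared frame-to-frame change, summed over all bins."""
    return float(np.sum(np.diff(A, axis=0) ** 2))
```

Larger `lam` trades fidelity to each frame's envelope for smoother trajectories across frames, which is exactly the tension the abstract describes between fitting the vocal tract response and avoiding time irregularities.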

Paper Details

Submitted On:
21 March 2016 - 11:37am

Document Files

ICASSPMFAPoster.pdf (318 downloads)


[1] "A Time Regularization Technique for Discrete Spectral Envelopes Through Frequency Derivative", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/715. Accessed: May 25, 2018.
