Scaling Rich Style-Prompted Text-to-Speech Datasets

Authors: Anuj Diwan, Zhisheng Zheng, David Harwath, Eunsol Choi

Abstract: We introduce Paralinguistic Speech Captions (ParaSpeechCaps), a large-scale
dataset that annotates speech utterances with rich style captions. While rich
abstract tags (e.g. guttural, nasal, pained) have been explored in small-scale
human-annotated datasets, existing large-scale datasets only cover basic tags
(e.g. low-pitched, slow, loud). We combine off-the-shelf text and speech
embedders, classifiers and an audio language model to automatically scale rich
tag annotations for the first time. ParaSpeechCaps covers a total of 59 style
tags, including both speaker-level intrinsic tags and utterance-level
situational tags. It consists of 342 hours of human-labelled data (PSC-Base)
and 2427 hours of automatically annotated data (PSC-Scaled). We finetune
Parler-TTS, an open-source style-prompted TTS model, on ParaSpeechCaps, and
achieve improved style consistency (+7.9% Consistency MOS) and speech quality
(+15.5% Naturalness MOS) over the best-performing baseline that combines
existing rich style tag datasets. We ablate several of our dataset design
choices to lay the foundation for future work in this space. Our dataset,
models and code are released at https://github.com/ajd12342/paraspeechcaps.
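
The abstract does not spell out the automatic annotation pipeline, but the general idea of scoring rich style tags with off-the-shelf joint audio-text embedders can be sketched as below. This is an illustrative sketch only, not the paper's actual pipeline: the CLAP checkpoint, tag prompt template, audio path, and similarity threshold are all assumptions.

```python
# Illustrative sketch: rank candidate rich style tags for an utterance using an
# off-the-shelf joint audio-text embedder (CLAP via Hugging Face transformers).
# Checkpoint, prompts, file path, and threshold are assumptions, not the paper's.
import torch
import librosa
from transformers import ClapModel, ClapProcessor

model_id = "laion/clap-htsat-unfused"
model = ClapModel.from_pretrained(model_id)
processor = ClapProcessor.from_pretrained(model_id)

candidate_tags = ["guttural", "nasal", "pained", "whispered", "enunciated"]
prompts = [f"a {tag} voice" for tag in candidate_tags]

# CLAP expects 48 kHz mono audio.
waveform, sr = librosa.load("utterance.wav", sr=48_000, mono=True)

text_inputs = processor(text=prompts, return_tensors="pt", padding=True)
audio_inputs = processor(audios=waveform, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    text_emb = model.get_text_features(**text_inputs)     # (num_tags, dim)
    audio_emb = model.get_audio_features(**audio_inputs)  # (1, dim)

# Keep tags whose audio-text cosine similarity clears a (hypothetical) threshold.
sims = torch.nn.functional.cosine_similarity(audio_emb, text_emb, dim=-1)
kept = [tag for tag, score in zip(candidate_tags, sims.tolist()) if score > 0.3]
print(kept)
```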

Source: http://arxiv.org/abs/2503.04713v1
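
For context, style-prompted inference with the open-source Parler-TTS model that the paper finetunes follows the pattern below; a ParaSpeechCaps-finetuned checkpoint from the repository above can be swapped in by changing the model id. The style description and transcript shown here are made up for illustration.

```python
# Minimal style-prompted TTS sketch using the public Parler-TTS inference API.
# The model id below is the public base model; replace it with a ParaSpeechCaps
# finetune from the released repository to use rich style tags.
import torch
import soundfile as sf
from transformers import AutoTokenizer
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "parler-tts/parler-tts-mini-v1"
model = ParlerTTSForConditionalGeneration.from_pretrained(model_id).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A rich style caption mixing intrinsic and situational tags (illustrative).
description = "A male speaker with a guttural, slightly nasal voice speaks slowly in a pained tone."
text = "I told you this would happen, and nobody listened."

desc_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_ids = tokenizer(text, return_tensors="pt").input_ids.to(device)

with torch.no_grad():
    audio = model.generate(input_ids=desc_ids, prompt_input_ids=prompt_ids)

sf.write("styled_speech.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```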
