- Description:
LibriTTS is a multi-speaker corpus of approximately 585 hours of read English speech, sampled at 24 kHz and prepared by Heiga Zen with the assistance of Google Speech and Google Brain team members. The LibriTTS corpus is designed for TTS research. It is derived from the original materials of the LibriSpeech corpus (MP3 audio files from LibriVox and text files from Project Gutenberg). The main differences from the LibriSpeech corpus are listed below:
  - The audio files are sampled at 24 kHz.
  - The speech is split at sentence breaks.
  - Both original and normalized texts are included.
  - Contextual information (e.g., neighbouring sentences) can be extracted.
  - Utterances with significant background noise are excluded.
- Additional Documentation: Explore on Papers With Code
- Homepage: http://www.openslr.org/60
- Source code: `tfds.datasets.libritts.Builder`
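A minimal loading sketch (assuming `tensorflow_datasets` is installed and the ~78 GiB source archives can be downloaded; everything here is the standard TFDS API rather than anything LibriTTS-specific):

```python
import tensorflow_datasets as tfds

# First call downloads (~78 GiB) and prepares the dataset, then loads one split.
ds, info = tfds.load('libritts', split='train_clean100', with_info=True)

print(info.features)       # the FeaturesDict documented below
print(info.splits.keys())  # all available splits
```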
- Versions:
  - 1.0.1 (default): No release notes.
- Download size: 78.42 GiB
- Dataset size: 271.41 GiB
- Auto-cached (documentation): No
- Splits:

Split | Examples |
---|---|
'dev_clean' | 5,736 |
'dev_other' | 4,613 |
'test_clean' | 4,837 |
'test_other' | 5,120 |
'train_clean100' | 33,236 |
'train_clean360' | 116,500 |
'train_other500' | 205,044 |
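Splits are addressed by the names in the table above, and the standard TFDS split syntax can combine or slice them; a small sketch (split names from the table, API as in any TFDS dataset):

```python
import tensorflow_datasets as tfds

# Combine the two "clean" training subsets into a single pipeline.
train_ds = tfds.load('libritts', split='train_clean100+train_clean360')

# Slicing syntax also works, e.g. the first 10% of dev_clean.
dev_sample = tfds.load('libritts', split='dev_clean[:10%]')
```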
- Feature structure:

```python
FeaturesDict({
    'chapter_id': int64,
    'id': string,
    'speaker_id': int64,
    'speech': Audio(shape=(None,), dtype=int64),
    'text_normalized': Text(shape=(), dtype=string),
    'text_original': Text(shape=(), dtype=string),
})
```
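Each example is a dictionary with the keys above; `speech` is the waveform decoded to an int64 tensor at 24 kHz. A hedged sketch of inspecting one example (assuming the dataset has already been prepared):

```python
import tensorflow_datasets as tfds

ds = tfds.load('libritts', split='dev_clean')

for example in ds.take(1):
    speech = example['speech'].numpy()                  # int64 waveform, 24 kHz
    text = example['text_normalized'].numpy().decode()  # normalized transcript
    speaker_id = int(example['speaker_id'].numpy())     # LibriVox speaker ID
    print(speaker_id, speech.shape, text)
```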
- Feature documentation:

Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
 | FeaturesDict | | | |
chapter_id | Tensor | | int64 | |
id | Tensor | | string | |
speaker_id | Tensor | | int64 | |
speech | Audio | (None,) | int64 | |
text_normalized | Text | | string | |
text_original | Text | | string | |
- Supervised keys (See as_supervised doc): ('text_normalized', 'speech')
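With `as_supervised=True`, each example comes back as the `(text_normalized, speech)` tuple named above instead of a feature dictionary; a minimal sketch:

```python
import tensorflow_datasets as tfds

# as_supervised=True yields (input, target) = (text_normalized, speech) tuples.
ds = tfds.load('libritts', split='dev_clean', as_supervised=True)

for text, speech in ds.take(1):
    print(text.numpy().decode(), speech.shape)
```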
- Figure (tfds.show_examples): Not supported.
- Citation:
```
@inproceedings{zen2019libritts,
  title = {LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech},
  author = {H. Zen and V. Dang and R. Clark and Y. Zhang and R. J. Weiss and Y. Jia and Z. Chen and Y. Wu},
  booktitle = {Proc. Interspeech},
  month = sep,
  year = {2019},
  doi = {10.21437/Interspeech.2019-2441},
}
```