# libritts
LibriTTS is a multi-speaker English corpus of approximately 585 hours of read
English speech at 24kHz sampling rate, prepared by Heiga Zen with the assistance
of Google Speech and Google Brain team members. The LibriTTS corpus is designed
for TTS research. It is derived from the original materials (mp3 audio files
from LibriVox and text files from Project Gutenberg) of the LibriSpeech corpus.
The main differences from the LibriSpeech corpus are listed below:
- The audio files are at 24kHz sampling rate.
- The speech is split at sentence breaks.
- Both original and normalized texts are included.
- Contextual information (e.g., neighbouring sentences) can be extracted.
- Utterances with significant background noise are excluded.

- **Additional documentation**: [Explore on Papers With Code](https://paperswithcode.com/dataset/libritts)
- **Homepage**: http://www.openslr.org/60
- **Source code**: [`tfds.datasets.libritts.Builder`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/libritts/libritts_dataset_builder.py)
- **Versions**: `1.0.1` (default): no release notes.
- **Download size**: `78.42 GiB`
- **Dataset size**: `271.41 GiB`
- **Auto-cached**: No
- **Supervised keys**: `('text_normalized', 'speech')`
**Splits**:

| Split              | Examples |
|--------------------|----------|
| `'dev_clean'`      | 5,736    |
| `'dev_other'`      | 4,613    |
| `'test_clean'`     | 4,837    |
| `'test_other'`     | 5,120    |
| `'train_clean100'` | 33,236   |
| `'train_clean360'` | 116,500  |
| `'train_other500'` | 205,044  |
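
The split sizes above can be tallied in a few lines; this is a plain-Python sketch (the dictionary simply restates the table, and the commented-out `tfds.load` call shows how one split would be selected, at the cost of the full ~78.42 GiB download):

```python
# Example counts per split, restated from the table above.
SPLIT_SIZES = {
    "dev_clean": 5_736,
    "dev_other": 4_613,
    "test_clean": 4_837,
    "test_other": 5_120,
    "train_clean100": 33_236,
    "train_clean360": 116_500,
    "train_other500": 205_044,
}

total = sum(SPLIT_SIZES.values())
train = sum(v for k, v in SPLIT_SIZES.items() if k.startswith("train"))
print(f"total examples: {total:,}, of which training: {train:,}")

# With TensorFlow Datasets installed, a single split is loaded as:
#   import tensorflow_datasets as tfds
#   ds = tfds.load("libritts", split="train_clean100")
```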
**Feature structure**:

    FeaturesDict({
        'chapter_id': int64,
        'id': string,
        'speaker_id': int64,
        'speech': Audio(shape=(None,), dtype=int64),
        'text_normalized': Text(shape=(), dtype=string),
        'text_original': Text(shape=(), dtype=string),
    })
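
Since `speech` is a variable-length `int64` array sampled at 24 kHz, a minimal sketch of post-processing one example might look like the following. The 16-bit PCM value range is an assumption (the catalog only states the dtype), and the record here is a hand-made stand-in, not real LibriTTS data:

```python
SAMPLE_RATE = 24_000  # LibriTTS audio is at a 24kHz sampling rate


def describe_speech(samples):
    """Return (duration_seconds, float_waveform) for one 'speech' feature.

    Assumes 16-bit PCM sample values (an assumption; the catalog only
    specifies dtype int64).
    """
    duration = len(samples) / SAMPLE_RATE
    waveform = [s / 32768.0 for s in samples]  # scale to roughly [-1, 1)
    return duration, waveform


# A tiny fake record standing in for one dataset example.
example = {
    "id": "fake_utterance",
    "speech": [0, 16384, -16384, 32767],
}
duration, waveform = describe_speech(example["speech"])
print(f"{duration:.6f} s, first scaled sample: {waveform[1]}")
```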
**Feature documentation**:

| Feature         | Class        | Shape   | Dtype  | Description |
|-----------------|--------------|---------|--------|-------------|
|                 | FeaturesDict |         |        |             |
| chapter_id      | Tensor       |         | int64  |             |
| id              | Tensor       |         | string |             |
| speaker_id      | Tensor       |         | int64  |             |
| speech          | Audio        | (None,) | int64  |             |
| text_normalized | Text         |         | string |             |
| text_original   | Text         |         | string |             |
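
Given the `speaker_id` and `chapter_id` fields documented above, records are easy to regroup for multi-speaker TTS work. This is a hedged plain-Python sketch over hand-made stand-in records, not real dataset contents:

```python
from collections import defaultdict

# Hypothetical records mimicking the feature names documented above.
records = [
    {"speaker_id": 19, "chapter_id": 198, "text_normalized": "hello world"},
    {"speaker_id": 19, "chapter_id": 199, "text_normalized": "another line"},
    {"speaker_id": 26, "chapter_id": 495, "text_normalized": "third line"},
]

by_speaker = defaultdict(list)
for rec in records:
    by_speaker[rec["speaker_id"]].append(rec["text_normalized"])

for speaker, texts in sorted(by_speaker.items()):
    print(speaker, len(texts))
```

With `as_supervised=True`, `tfds.load` instead yields `(text_normalized, speech)` pairs directly, per the supervised keys listed for this dataset.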
**Citation**:

    @inproceedings{zen2019libritts,
      title = {LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech},
      author = {H. Zen and V. Dang and R. Clark and Y. Zhang and R. J. Weiss and Y. Jia and Z. Chen and Y. Wu},
      booktitle = {Proc. Interspeech},
      month = sep,
      year = {2019},
      doi = {10.21437/Interspeech.2019-2441},
    }
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2022-12-13 UTC.