wikitext

wikitext-103-v1

Use the following command to load this dataset in TFDS:

import tensorflow_datasets as tfds

ds = tfds.load('huggingface:wikitext/wikitext-103-v1')
  • Description:
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
  • License: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
  • Version: 1.0.0
  • Splits:
Split         Examples
'test'        4,358
'train'       1,801,350
'validation'  3,760
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}
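Beyond loading the full dataset, it is often useful to pull a single split and read out the raw strings. The sketch below is untested against a live download; the helper `take_texts` is a hypothetical name, written to work on any iterable of `{'text': ...}` examples matching the Features schema above.

```python
def take_texts(examples, n):
    """Collect the 'text' field from the first n examples."""
    out = []
    for i, ex in enumerate(examples):
        if i >= n:
            break
        text = ex['text']
        # TFDS yields tensor values; decode to str if needed.
        if hasattr(text, 'numpy'):
            text = text.numpy().decode('utf-8')
        out.append(text)
    return out

# With a real download (requires network and tensorflow_datasets):
# import tensorflow_datasets as tfds
# train = tfds.load('huggingface:wikitext/wikitext-103-v1', split='train')
# print(take_texts(train, 3))
```

The decode step is only needed when iterating a TFDS dataset eagerly; plain Python strings pass through unchanged.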

wikitext-2-v1

Use the following command to load this dataset in TFDS:

import tensorflow_datasets as tfds

ds = tfds.load('huggingface:wikitext/wikitext-2-v1')
  • Description:
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
  • License: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
  • Version: 1.0.0
  • Splits:
Split         Examples
'test'        4,358
'train'       36,718
'validation'  3,760
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

wikitext-103-raw-v1

Use the following command to load this dataset in TFDS:

import tensorflow_datasets as tfds

ds = tfds.load('huggingface:wikitext/wikitext-103-raw-v1')
  • Description:
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
  • License: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
  • Version: 1.0.0
  • Splits:
Split         Examples
'test'        4,358
'train'       1,801,350
'validation'  3,760
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

wikitext-2-raw-v1

Use the following command to load this dataset in TFDS:

import tensorflow_datasets as tfds

ds = tfds.load('huggingface:wikitext/wikitext-2-raw-v1')
  • Description:
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
  • License: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
  • Version: 1.0.0
  • Splits:
Split         Examples
'test'        4,358
'train'       36,718
'validation'  3,760
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}
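The descriptions above quote a token count for the collection. Since the non-raw WikiText configs are distributed whitespace-pre-tokenized, a simple `split()` is enough to sanity-check such counts on a downloaded split. This is a minimal sketch; `count_tokens` is a hypothetical helper, not part of TFDS.

```python
def count_tokens(texts):
    """Total whitespace-separated tokens across an iterable of strings."""
    return sum(len(t.split()) for t in texts)

# Over a real split (requires network and tensorflow_datasets):
# import tensorflow_datasets as tfds
# train = tfds.load('huggingface:wikitext/wikitext-103-v1', split='train')
# total = count_tokens(ex['text'].numpy().decode('utf-8') for ex in train)
```

For the raw configs, which preserve the original text rather than pre-tokenized words, a whitespace split is only an approximation.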