wiki_split

To load this dataset in TFDS, use the following command:

import tensorflow_datasets as tfds
ds = tfds.load('huggingface:wiki_split')
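
When no split argument is given, tfds.load returns a dict mapping split names to tf.data.Dataset objects. A minimal sketch, assuming TFDS 4.x with TensorFlow 2 eager execution:

import tensorflow_datasets as tfds

# Without `split=`, the result is a dict keyed by split name.
ds = tfds.load('huggingface:wiki_split')
print(ds.keys())  # expected keys: 'train', 'validation', 'test'
train_ds = ds['train']
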
  • Description:
One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia. Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history. Although the dataset contains some inherent noise, it can serve as valuable training data for models that split or merge sentences.
  • License: No known license
  • Version: 0.1.0
  • Splits:
Split          Examples
'test'         5000
'train'        989944
'validation'   5000
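
Individual splits can be requested directly via the split argument. The standard TFDS split-slicing syntax should also apply, though whether it works for community-mirrored Hugging Face datasets is an assumption here. A short sketch:

import tensorflow_datasets as tfds

# Load only the training split.
train_ds = tfds.load('huggingface:wiki_split', split='train')

# TFDS slicing syntax (assumed to behave as for native datasets):
# take the first 1000 training examples.
sample_ds = tfds.load('huggingface:wiki_split', split='train[:1000]')
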
  • Features:
{
    "complex_sentence": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    },
    "simple_sentence_1": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    },
    "simple_sentence_2": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}
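
Each record is a dict containing the three string features listed above. A minimal sketch of reading one example, assuming TFDS 4.x with eager execution (tfds.as_numpy yields the strings as bytes objects, hence the decode calls):

import tensorflow_datasets as tfds

ds = tfds.load('huggingface:wiki_split', split='validation')

# Inspect a single record; each field is a UTF-8 encoded bytes object.
for example in tfds.as_numpy(ds.take(1)):
    print('complex_sentence :', example['complex_sentence'].decode('utf-8'))
    print('simple_sentence_1:', example['simple_sentence_1'].decode('utf-8'))
    print('simple_sentence_2:', example['simple_sentence_2'].decode('utf-8'))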