References:
JRC
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/JRC')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 3410620 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
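As a minimal sketch (assuming tensorflow_datasets is installed and the huggingface:large_spanish_corpus builder is available in your environment), you can load a single config and inspect a few raw examples; the field name "text" matches the features listed above.
import tensorflow_datasets as tfds

# Load the JRC config; each example is a dict with a single string field "text".
ds = tfds.load('huggingface:large_spanish_corpus/JRC', split='train')
for example in ds.take(3):
    # Decode the raw bytes tensor to a Python string before printing.
    print(example['text'].numpy().decode('utf-8'))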
EMEA
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/EMEA')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 1221233 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
GlobalVoices
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/GlobalVoices')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 897075 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
ECB
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/ECB')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 1875738 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
DOGC
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/DOGC')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 10917053 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
all_wikis
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/all_wikis')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 28109484 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
TED
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/TED')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 157910 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
multiUN
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/multiUN')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 13127490 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
Europarl
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/Europarl')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 2174141 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
NewsCommentary11
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/NewsCommentary11')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 288771 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
UN
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/UN')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 74067 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
EUBookShop
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/EUBookShop')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 8214959 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
ParaCrawl
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/ParaCrawl')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 15510649 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
OpenSubtitles2018
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/OpenSubtitles2018')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 213508602 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
DGT
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/DGT')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 3168368 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
combined
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:large_spanish_corpus/combined')
- Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
- License: MIT
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'train' | 302656160 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
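The description notes that, with the "combined" config, you can control how many samples are returned per corpus via the "split" argument. As a hedged sketch (the exact per-corpus behaviour depends on the underlying builder), the standard TFDS split-slicing syntax can at least limit how many examples of the 'train' split are loaded:
import tensorflow_datasets as tfds

# Load only the first 10,000 examples of the 'train' split of the combined config.
# Note: this is plain TFDS split slicing; per-corpus sampling as described above
# is an assumption about the builder and is not guaranteed by this slice alone.
subset = tfds.load('huggingface:large_spanish_corpus/combined',
                   split='train[:10000]')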