- Description:
Massively multilingual (60 language) dataset derived from TED Talk transcripts. Each record consists of parallel arrays of language and text. Missing and incomplete translations were filtered out.
Homepage: https://github.com/neulab/word-embeddings-for-nmt
- Source code:
tfds.datasets.ted_multi_translate.Builder
- Versions:
  - 1.1.0 (default): No release notes.
- Download size: 335.91 MiB
- Dataset size: 752.30 MiB
- Auto-cached (documentation): No
- Splits:

Split | Examples |
---|---|
'test' | 7,213 |
'train' | 258,098 |
'validation' | 6,049 |
- Feature structure:
FeaturesDict({
'talk_name': Text(shape=(), dtype=string),
'translations': TranslationVariableLanguages({
'language': Text(shape=(), dtype=string),
'translation': Text(shape=(), dtype=string),
}),
})
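As the description notes, each record pairs a talk name with parallel arrays of language codes and translated texts. A minimal sketch of reading one example with the standard TFDS API (assuming a recent tensorflow_datasets install; the first call triggers the ~336 MiB download):

import tensorflow_datasets as tfds

ds = tfds.load("ted_multi_translate", split="train")

for example in ds.take(1):
    talk = example["talk_name"].numpy().decode()
    # 'translations' decodes to two parallel 1-D string tensors.
    langs = example["translations"]["language"].numpy()
    texts = example["translations"]["translation"].numpy()
    print(talk, "-", len(langs), "languages")
    for lang, text in zip(langs, texts):
        print(lang.decode(), "->", text.decode()[:60])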
- Feature documentation:

Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
 | FeaturesDict | | | |
talk_name | Text | | string | |
translations | TranslationVariableLanguages | | | |
translations/language | Text | | string | |
translations/translation | Text | | string | |
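The same metadata is also available programmatically. A small sketch of reading the feature spec and split sizes from the builder (assumes the standard tfds.builder API; the split counts should match the table above):

import tensorflow_datasets as tfds

builder = tfds.builder("ted_multi_translate")
# DatasetInfo mirrors the tables above without reading any records.
print(builder.info.features)
print(builder.info.splits["train"].num_examples)  # expected: 258098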
Supervised keys (See as_supervised doc): None
Figure (tfds.show_examples): Not supported.
Examples (tfds.as_dataframe):
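A minimal sketch of materializing a few records as a pandas DataFrame via tfds.as_dataframe, as referenced above (assumes pandas is installed alongside tensorflow_datasets):

import tensorflow_datasets as tfds

ds, info = tfds.load("ted_multi_translate", split="validation", with_info=True)
# Convert a handful of records for quick inspection in a notebook.
df = tfds.as_dataframe(ds.take(3), info)
print(df)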
- Citation:
@InProceedings{qi-EtAl:2018:N18-2,
author = {Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham},
title = {When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?},
booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)},
month = {June},
year = {2018},
address = {New Orleans, Louisiana},
publisher = {Association for Computational Linguistics},
pages = {529--535},
abstract = {The performance of Neural Machine Translation (NMT) systems often suffers in low-resource scenarios where sufficiently large-scale parallel corpora cannot be obtained. Pre-trained word embeddings have proven to be invaluable for improving performance in natural language analysis tasks, which often suffer from paucity of data. However, their utility for NMT has not been extensively explored. In this work, we perform five sets of experiments that analyze when we can expect pre-trained word embeddings to help in NMT tasks. We show that such embeddings can be surprisingly effective in some cases -- providing gains of up to 20 BLEU points in the most favorable setting.},
url = {http://www.aclweb.org/anthology/N18-2084}
}