# gigaword
- **Description**:

Headline generation on a corpus of article pairs from Gigaword consisting of
around 4 million articles. Use the 'org_data' provided by
https://github.com/microsoft/unilm/, which is identical to
https://github.com/harvardnlp/sent-summary but with better formatting.

There are two features:

- document: article.
- summary: headline.

- **Homepage**:
  https://github.com/harvardnlp/sent-summary

- **Source code**:
  [`tfds.summarization.Gigaword`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/summarization/gigaword.py)

- **Versions**:

  - **`1.2.0`** (default): No release notes.

- **Download size**: `551.61 MiB`

- **Dataset size**: `1.02 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No

- **Splits**:
| Split          | Examples  |
|----------------|-----------|
| `'test'`       | 1,951     |
| `'train'`      | 3,803,957 |
| `'validation'` | 189,651   |
- **Feature structure** (see the inspection sketch after this block):

    FeaturesDict({
        'document': Text(shape=(), dtype=string),
        'summary': Text(shape=(), dtype=string),
    })
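The same structure can also be inspected programmatically. A minimal sketch, assuming TensorFlow Datasets is installed; `tfds.builder` only reads the catalog metadata, so nothing is downloaded:

    import tensorflow_datasets as tfds

    # Build a handle to the catalog entry; this does not fetch the data.
    builder = tfds.builder('gigaword')

    # Prints the FeaturesDict shown above: 'document' and 'summary' Text fields.
    print(builder.info.features)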
- **Feature documentation**:

| Feature  | Class        | Shape | Dtype  | Description |
|----------|--------------|-------|--------|-------------|
|          | FeaturesDict |       |        |             |
| document | Text         |       | string |             |
| summary  | Text         |       | string |             |

- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `('document', 'summary')`
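A minimal loading sketch following the supervised keys above, assuming TensorFlow Datasets is installed: with `as_supervised=True`, each element is a `(document, summary)` pair, and the first call downloads and prepares the dataset (~551 MiB download, ~1 GiB on disk):

    import tensorflow_datasets as tfds

    # Load the training split as (document, summary) tuples.
    ds = tfds.load('gigaword', split='train', as_supervised=True)

    # Inspect one article/headline pair; elements arrive as bytes.
    for document, summary in tfds.as_numpy(ds.take(1)):
        print('document:', document.decode('utf-8'))
        print('summary:', summary.decode('utf-8'))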
- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.

- **Citation**:

@article{graff2003english,
title={English gigaword},
author={Graff, David and Kong, Junbo and Chen, Ke and Maeda, Kazuaki},
journal={Linguistic Data Consortium, Philadelphia},
volume={4},
number={1},
pages={34},
year={2003}
}
@article{Rush_2015,
title={A Neural Attention Model for Abstractive Sentence Summarization},
url={http://dx.doi.org/10.18653/v1/D15-1044},
DOI={10.18653/v1/d15-1044},
journal={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
publisher={Association for Computational Linguistics},
author={Rush, Alexander M. and Chopra, Sumit and Weston, Jason},
year={2015}
}