# gigaword
- **Description**:

Headline generation on a corpus of article pairs from Gigaword, consisting of around 4 million articles. Use the 'org_data' provided by https://github.com/microsoft/unilm/, which is identical to https://github.com/harvardnlp/sent-summary but in a better format.

There are two features:

- document: article.
- summary: headline.

- **Homepage**: https://github.com/harvardnlp/sent-summary

- **Source code**: [`tfds.summarization.Gigaword`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/summarization/gigaword.py)

- **Versions**:

  - **`1.2.0`** (default): No release notes.

- **Download size**: `551.61 MiB`

- **Dataset size**: `1.02 GiB`

- **Auto-cached** ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)): No
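As a quick orientation, here is a minimal sketch of loading this dataset via `tfds.load` (assuming `tensorflow-datasets` is installed; the first call triggers the ~551 MiB download and preparation):

    import tensorflow_datasets as tfds

    # Load the training split; the first call downloads and prepares the data.
    ds, info = tfds.load('gigaword', split='train', with_info=True)

    # Each example is a dict with a 'document' (article) and a 'summary' (headline).
    for example in ds.take(1):
        print(example['document'].numpy().decode('utf-8'))
        print(example['summary'].numpy().decode('utf-8'))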
- **Splits**:

| Split          | Examples  |
|----------------|-----------|
| `'test'`       | 1,951     |
| `'train'`      | 3,803,957 |
| `'validation'` | 189,651   |
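The split sizes above can also be read programmatically from the dataset metadata, without iterating over the data; a brief sketch:

    import tensorflow_datasets as tfds

    # Inspect split sizes from the builder's metadata.
    builder = tfds.builder('gigaword')
    for name, split_info in builder.info.splits.items():
        print(name, split_info.num_examples)
    # Expected output: test 1951, train 3803957, validation 189651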
- **Feature structure**:

    FeaturesDict({
        'document': Text(shape=(), dtype=string),
        'summary': Text(shape=(), dtype=string),
    })
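Because both features are plain text, passing `as_supervised=True` makes examples arrive as `(document, summary)` tuples, matching the supervised keys documented below; a minimal sketch:

    import tensorflow_datasets as tfds

    # as_supervised=True yields (input, target) tuples instead of dicts.
    ds = tfds.load('gigaword', split='validation', as_supervised=True)
    for document, summary in ds.take(1):
        print('article :', document.numpy().decode('utf-8')[:80])
        print('headline:', summary.numpy().decode('utf-8'))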
- **Feature documentation**:

| Feature  | Class        | Shape | Dtype  | Description |
|----------|--------------|-------|--------|-------------|
|          | FeaturesDict |       |        |             |
| document | Text         |       | string |             |
| summary  | Text         |       | string |             |

- **Supervised keys** (see [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)): `('document', 'summary')`

- **Figure** ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)): Not supported.

- **Citation**:
    @article{graff2003english,
      title={English gigaword},
      author={Graff, David and Kong, Junbo and Chen, Ke and Maeda, Kazuaki},
      journal={Linguistic Data Consortium, Philadelphia},
      volume={4},
      number={1},
      pages={34},
      year={2003}
    }

    @article{Rush_2015,
      title={A Neural Attention Model for Abstractive Sentence Summarization},
      url={http://dx.doi.org/10.18653/v1/D15-1044},
      DOI={10.18653/v1/d15-1044},
      journal={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
      publisher={Association for Computational Linguistics},
      author={Rush, Alexander M. and Chopra, Sumit and Weston, Jason},
      year={2015}
    }