- Description:
An update release of the E2E NLG Challenge data with cleaned MRs. The E2E data contains dialogue-act-based meaning representations (MRs) in the restaurant domain and up to 5 natural-language references, which are what needs to be predicted.
- Additional Documentation: Explore on Papers With Code
- Source code:
tfds.datasets.e2e_cleaned.Builder
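For orientation, the dataset can be loaded through the standard tfds.load API using the builder name above; a minimal sketch, assuming tensorflow_datasets is installed and the data is downloaded on first use:

```python
import tensorflow_datasets as tfds

# Load all splits as a dict of tf.data.Datasets, plus the DatasetInfo object.
ds_dict, info = tfds.load('e2e_cleaned', with_info=True)

print(list(ds_dict.keys()))               # e.g. ['train', 'validation', 'test']
print(info.splits['train'].num_examples)  # 33525
```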
- Versions:
  - 0.1.0 (default): No release notes.
- Download size: 13.92 MiB
- Dataset size: 14.70 MiB
- Auto-cached (documentation): Yes
- Splits:
Split | Examples |
---|---|
'test' | 4,693 |
'train' | 33,525 |
'validation' | 4,299 |
- Feature structure:
FeaturesDict({
'input_text': FeaturesDict({
'table': Sequence({
'column_header': string,
'content': string,
'row_number': int16,
}),
}),
'target_text': string,
})
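As an illustration of the structure above, the following sketch reads one example and walks the table rows (field names come from the FeaturesDict; decoding the byte strings as UTF-8 is an assumption about the stored text):

```python
import tensorflow_datasets as tfds

ds = tfds.load('e2e_cleaned', split='validation')

for example in tfds.as_numpy(ds.take(1)):
    table = example['input_text']['table']
    # The Sequence feature yields parallel arrays, one entry per table row.
    for row, header, content in zip(table['row_number'],
                                    table['column_header'],
                                    table['content']):
        print(int(row), header.decode('utf-8'), '=', content.decode('utf-8'))
    print('target_text:', example['target_text'].decode('utf-8'))
```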
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
 | FeaturesDict | | | |
input_text | FeaturesDict | | | |
input_text/table | Sequence | | | |
input_text/table/column_header | Tensor | | string | |
input_text/table/content | Tensor | | string | |
input_text/table/row_number | Tensor | | int16 | |
target_text | Tensor | | string | |
- Supervised keys (See as_supervised doc): ('input_text', 'target_text')
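With as_supervised=True, each element is returned as the ('input_text', 'target_text') pair listed above; note that the first element is still the nested table structure, not a flat string. A minimal sketch:

```python
import tensorflow_datasets as tfds

ds = tfds.load('e2e_cleaned', split='train', as_supervised=True)

for input_text, target_text in tfds.as_numpy(ds.take(1)):
    # input_text is a dict holding the 'table' sequence; target_text is bytes.
    print(input_text['table']['column_header'])
    print(target_text.decode('utf-8'))
```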
- Figure (tfds.show_examples): Not supported.
- Examples (tfds.as_dataframe):
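The example table that normally renders here is not reproduced; it can be regenerated locally with tfds.as_dataframe (a sketch, assuming pandas is installed):

```python
import tensorflow_datasets as tfds

ds, info = tfds.load('e2e_cleaned', split='train', with_info=True)
# Convert a few examples to a pandas DataFrame for quick inspection.
print(tfds.as_dataframe(ds.take(3), info))
```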
- Citation:
@inproceedings{dusek-etal-2019-semantic,
title = "Semantic Noise Matters for Neural Natural Language Generation",
author = "Du{\v{s} }ek, Ond{\v{r} }ej and
Howcroft, David M. and
Rieser, Verena",
booktitle = "Proceedings of the 12th International Conference on Natural Language Generation",
month = oct # "{--}" # nov,
year = "2019",
address = "Tokyo, Japan",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W19-8652",
doi = "10.18653/v1/W19-8652",
pages = "421--426",
abstract = "Neural natural language generation (NNLG) systems are known for their pathological outputs, i.e. generating text which is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models which implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97{\%}, while maintaining fluency. We also find that the most common error is omitting information, rather than hallucination.",
}