- Description: Adversarial NLI (ANLI) is a large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure.
- Homepage: https://github.com/facebookresearch/anli
- Source code: `tfds.text.Anli`
- Versions: 0.1.0 (default): No release notes.
- Download size: 17.76 MiB
- Auto-cached (documentation): Yes
- Feature structure:

```python
FeaturesDict({
    'context': Text(shape=(), dtype=tf.string),
    'hypothesis': Text(shape=(), dtype=tf.string),
    'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=3),
    'uid': Text(shape=(), dtype=tf.string),
})
```
- Feature documentation:

Feature | Class | Shape | Dtype | Description
---|---|---|---|---
 | FeaturesDict | | |
context | Text | | tf.string |
hypothesis | Text | | tf.string |
label | ClassLabel | | tf.int64 |
uid | Text | | tf.string |
- Supervised keys (see `as_supervised` doc): None
- Figure (`tfds.show_examples`): Not supported.
- Examples (`tfds.as_dataframe`): Missing.
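A minimal sketch of loading this dataset and decoding its labels. The catalog entry only states `num_classes=3`, so the label names below follow the conventional NLI order (entailment, neutral, contradiction) as an assumption; the loading call is wrapped in a function so it only runs when explicitly invoked, since it requires `tensorflow_datasets` and triggers the ~17.76 MiB download.

```python
# Assumed label order for the 3-class ClassLabel; the catalog page itself
# does not list the class names.
LABEL_NAMES = ("entailment", "neutral", "contradiction")


def decode_label(label_id: int) -> str:
    """Map an integer class id (0-2) to a human-readable NLI label."""
    return LABEL_NAMES[label_id]


def show_first_example():
    """Print one training example from ANLI Round One."""
    # Heavy dependency, imported lazily; first call downloads and
    # prepares the dataset.
    import tensorflow_datasets as tfds

    ds = tfds.load("anli/r1", split="train")
    for ex in ds.take(1):
        print(ex["uid"].numpy().decode())
        print(ex["context"].numpy().decode())
        print(ex["hypothesis"].numpy().decode())
        print(decode_label(int(ex["label"])))
```

Note that because supervised keys are `None`, `tfds.load(..., as_supervised=True)` is not available for this dataset; examples come as feature dictionaries as shown above.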
- Citation:

```bibtex
@inproceedings{Nie2019AdversarialNA,
  title  = "Adversarial NLI: A New Benchmark for Natural Language Understanding",
  author = "Nie, Yixin and
            Williams, Adina and
            Dinan, Emily and
            Bansal, Mohit and
            Weston, Jason and
            Kiela, Douwe",
  year   = "2019",
  url    = "https://arxiv.org/abs/1910.14599"
}
```
anli/r1 (default config)

- Config description: Round One
- Dataset size: 9.04 MiB
- Splits:

Split | Examples
---|---
'train' | 16,946
'validation' | 1,000
'test' | 1,000
anli/r2

- Config description: Round Two
- Dataset size: 22.39 MiB
- Splits:

Split | Examples
---|---
'train' | 45,460
'validation' | 1,000
'test' | 1,000
anli/r3

- Config description: Round Three
- Dataset size: 47.03 MiB
- Splits:

Split | Examples
---|---
'train' | 100,459
'validation' | 1,200
'test' | 1,200
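The three rounds are separate configs, so training on all of ANLI is typically done by loading each config and concatenating. A small sketch using the split sizes from the tables above to compute the combined training-set size (the per-round counts are taken directly from this page):

```python
# Split sizes per config, copied from the catalog tables above.
SPLITS = {
    "anli/r1": {"train": 16_946, "validation": 1_000, "test": 1_000},
    "anli/r2": {"train": 45_460, "validation": 1_000, "test": 1_000},
    "anli/r3": {"train": 100_459, "validation": 1_200, "test": 1_200},
}

# Total training examples across all three rounds.
total_train = sum(cfg["train"] for cfg in SPLITS.values())
print(total_train)  # 162865
```

With TFDS itself, the equivalent would be loading each config's `train` split via `tfds.load(name, split="train")` and chaining the resulting `tf.data.Dataset` objects with `concatenate`.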