References:
pqa_labeled
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:pubmed_qa/pqa_labeled')
- Description:
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative
statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts.
PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances.
Each PubMedQA instance is composed of (1) a question which is either an existing research article
title or derived from one, (2) a context which is the corresponding abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question,
and (4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their
quantitative contents, is required to answer the questions.
- License: MIT License
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'train' | 1000 |
- Features:
{
"pubid": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"context": {
"feature": {
"contexts": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"labels": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"meshes": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"reasoning_required_pred": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"reasoning_free_pred": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"long_answer": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"final_decision": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
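A minimal sketch of loading this config and inspecting a single record, assuming a TFDS version that supports the `huggingface:` community namespace. The field names follow the feature spec above; the decoding and truncation choices are purely illustrative.

```python
import tensorflow_datasets as tfds

# Sketch only: assumes the `huggingface:` namespace is available in this TFDS install.
ds = tfds.load('huggingface:pubmed_qa/pqa_labeled', split='train')

for example in ds.take(1):
    # Top-level features are scalar tensors; byte strings need decoding for display.
    print('pubid:         ', int(example['pubid']))
    print('question:      ', example['question'].numpy().decode('utf-8'))
    print('final_decision:', example['final_decision'].numpy().decode('utf-8'))
    # `context` is a Sequence feature: its sub-fields are aligned 1-D tensors,
    # one entry per abstract section.
    for label, text in zip(example['context']['labels'].numpy(),
                           example['context']['contexts'].numpy()):
        print(label.decode('utf-8'), '->', text.decode('utf-8')[:80])
```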
pqa_unlabeled
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:pubmed_qa/pqa_unlabeled')
- Description:
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative
statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts.
PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances.
Each PubMedQA instance is composed of (1) a question which is either an existing research article
title or derived from one, (2) a context which is the corresponding abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question,
and (4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their
quantitative contents, is required to answer the questions.
- License: MIT License
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'train' | 61249 |
- Features:
{
"pubid": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"context": {
"feature": {
"contexts": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"labels": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"meshes": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"long_answer": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
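A hedged sketch of carving a validation set out of the single 'train' split, since this config ships only one split and, per the feature spec above, has no final_decision field. The 90/10 percentages are an arbitrary illustrative choice, not part of the dataset.

```python
import tensorflow_datasets as tfds

# Sketch only: the 90/10 slice below is an illustrative choice, not part of the dataset.
(train_ds, val_ds), info = tfds.load(
    'huggingface:pubmed_qa/pqa_unlabeled',
    split=['train[:90%]', 'train[90%:]'],
    with_info=True,
)
print('total examples:', info.splits['train'].num_examples)  # 61249

for ex in train_ds.take(1):
    # No final_decision in this config; only the long answer (abstract conclusion) is available.
    print(ex['question'].numpy().decode('utf-8'))
    print(ex['long_answer'].numpy().decode('utf-8')[:120])
```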
pqa_artificial
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:pubmed_qa/pqa_artificial')
- Description:
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative
statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts.
PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances.
Each PubMedQA instance is composed of (1) a question which is either an existing research article
title or derived from one, (2) a context which is the corresponding abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question,
and (4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their
quantitative contents, is required to answer the questions.
- License: MIT License
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'train' | 211269 |
- Features:
{
"pubid": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"context": {
"feature": {
"contexts": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"labels": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"meshes": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"long_answer": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"final_decision": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
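A sketch of turning this config into a text-classification input pipeline, using the yes/no/maybe strings in final_decision as the label set. The class ordering, the way the context sections are joined, and the batch size are illustrative assumptions, not part of the dataset.

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# Illustrative label mapping; the class order is an arbitrary assumption.
keys = tf.constant(['yes', 'no', 'maybe'])
values = tf.constant([0, 1, 2], dtype=tf.int64)
label_table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1)

def to_text_and_label(example):
    # Join the abstract sections into one context string, prepend the question,
    # and look up the integer class id for the yes/no/maybe answer.
    context = tf.strings.reduce_join(example['context']['contexts'], separator=' ')
    text = tf.strings.join([example['question'], context], separator=' ')
    return text, label_table.lookup(example['final_decision'])

ds = tfds.load('huggingface:pubmed_qa/pqa_artificial', split='train')
ds = (ds.map(to_text_and_label, num_parallel_calls=tf.data.AUTOTUNE)
        .shuffle(10_000)
        .batch(32)
        .prefetch(tf.data.AUTOTUNE))

for texts, labels in ds.take(1):
    print(texts.shape, labels.numpy()[:8])
```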