Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning

Publication: Working paper › Preprint › Research

Standard

Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning. / Christensen, Peter Ebert; Yadav, Srishti; Belongie, Serge.

arXiv.org, 2023.


Harvard

Christensen, PE, Yadav, S & Belongie, S 2023 'Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning' arXiv.org. <https://arxiv.org/abs/2309.10359>

APA

Christensen, P. E., Yadav, S., & Belongie, S. (2023). Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning. arXiv.org. https://arxiv.org/abs/2309.10359

Vancouver

Christensen PE, Yadav S, Belongie S. Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning. arXiv.org. 2023.

Author

Christensen, Peter Ebert; Yadav, Srishti; Belongie, Serge. / Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning. arXiv.org, 2023.

Bibtex

@techreport{5182b8d882274585add20d51880eda59,
title = "Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning",
abstract = "Unsupported and unfalsifiable claims we encounter in our daily lives can influence our view of the world. Characterizing, summarizing, and -- more generally -- making sense of such claims, however, can be challenging. In this work, we focus on fine-grained debate topics and formulate a new task of distilling, from such claims, a countable set of narratives. We present a crowdsourced dataset of 12 controversial topics, comprising more than 120k arguments, claims, and comments from heterogeneous sources, each annotated with a narrative label. We further investigate how large language models (LLMs) can be used to synthesise claims using In-Context Learning. We find that generated claims with supported evidence can be used to improve the performance of narrative classification models and, additionally, that the same model can infer the stance and aspect using a few training examples. Such a model can be useful in applications which rely on narratives, e.g. fact-checking.",
author = "Christensen, {Peter Ebert} and Srishti Yadav and Serge Belongie",
year = "2023",
language = "English",
publisher = "arXiv.org",
type = "WorkingPaper",
institution = "arXiv.org",
}

RIS

TY - UNPB

T1 - Prompt, Condition, and Generate

T2 - Classification of Unsupported Claims with In-Context Learning

AU - Christensen, Peter Ebert

AU - Yadav, Srishti

AU - Belongie, Serge

PY - 2023

Y1 - 2023

N2 - Unsupported and unfalsifiable claims we encounter in our daily lives can influence our view of the world. Characterizing, summarizing, and -- more generally -- making sense of such claims, however, can be challenging. In this work, we focus on fine-grained debate topics and formulate a new task of distilling, from such claims, a countable set of narratives. We present a crowdsourced dataset of 12 controversial topics, comprising more than 120k arguments, claims, and comments from heterogeneous sources, each annotated with a narrative label. We further investigate how large language models (LLMs) can be used to synthesise claims using In-Context Learning. We find that generated claims with supported evidence can be used to improve the performance of narrative classification models and, additionally, that the same model can infer the stance and aspect using a few training examples. Such a model can be useful in applications which rely on narratives, e.g. fact-checking.

AB - Unsupported and unfalsifiable claims we encounter in our daily lives can influence our view of the world. Characterizing, summarizing, and -- more generally -- making sense of such claims, however, can be challenging. In this work, we focus on fine-grained debate topics and formulate a new task of distilling, from such claims, a countable set of narratives. We present a crowdsourced dataset of 12 controversial topics, comprising more than 120k arguments, claims, and comments from heterogeneous sources, each annotated with a narrative label. We further investigate how large language models (LLMs) can be used to synthesise claims using In-Context Learning. We find that generated claims with supported evidence can be used to improve the performance of narrative classification models and, additionally, that the same model can infer the stance and aspect using a few training examples. Such a model can be useful in applications which rely on narratives, e.g. fact-checking.

M3 - Preprint

BT - Prompt, Condition, and Generate

PB - arXiv.org

ER -
