What Evidence Do Language Models Find Convincing?
Alexander Wan (Language Models)
USA | Computation and Language

■ View full text 


https://www.sciencedirect.com/science/article/pii/S0264127520307954 

 

■ Researchers

Alexander Wan

UC Berkeley

 

■ Abstract

Retrieval-augmented language models are being increasingly tasked with subjective, contentious, and conflicting queries such as "is aspartame linked to cancer". To resolve these ambiguous queries, one must search through a large range of websites and consider "which, if any, of this evidence do I find convincing?". In this work, we study how LLMs answer this question. In particular, we construct ConflictingQA, a dataset that pairs controversial queries with a series of real-world evidence documents that contain different facts (e.g., quantitative results), argument styles (e.g., appeals to authority), and answers (Yes or No). We use this dataset to perform sensitivity and counterfactual analyses to explore which text features most affect LLM predictions. Overall, we find that current models rely heavily on the relevance of a website to the query, while largely ignoring stylistic features that humans find important such as whether a text contains scientific references or is written with a neutral tone. Taken together, these results highlight the importance of RAG corpus quality (e.g., the need to filter misinformation), and possibly even a shift in how LLMs are trained to better align with human judgements.
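
To make the counterfactual analysis concrete, here is a minimal sketch of the kind of probe the abstract describes, assuming an OpenAI-style chat API. This is not the authors' released code: the model name, prompt template, evidence text, and citation are all hypothetical stand-ins for illustration.

```python
# Hypothetical sketch of a ConflictingQA-style counterfactual probe.
# Ask a chat model for a Yes/No verdict on a contentious query given one
# evidence snippet, then re-ask after a stylistic edit and check for a flip.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Question: {query}\n\n"
    "Evidence:\n{evidence}\n\n"
    "Based only on the evidence above, answer Yes or No."
)

def verdict(query: str, evidence: str, model: str = "gpt-4o-mini") -> str:
    """Return the model's Yes/No answer for one (query, evidence) pair."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep sampling noise out of the comparison
        messages=[{"role": "user",
                   "content": PROMPT.format(query=query, evidence=evidence)}],
    )
    return response.choices[0].message.content.strip()

query = "Is aspartame linked to cancer?"
evidence = ("A large cohort study found no association between aspartame "
            "intake and cancer incidence.")  # made-up illustrative text

# Counterfactual edit: append a citation-style cue (a stylistic feature
# humans weight heavily) without changing the underlying claim; the
# reference itself is fictional.
edited = evidence + " [1] Smith et al., Journal of Nutrition, 2022."

print("original evidence:", verdict(query, evidence))
print("with reference cue:", verdict(query, edited))
```

Given the abstract's finding, one would expect a stylistic edit like this to move the verdict far less often than an edit that changes how relevant the evidence is to the query.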

 
