Benchmarking Large Language Models in Retrieval-Augmented Generation
Jiawei Chen (Chinese Information Processing Laboratory)
China | arXiv 2023

■ View full text 

https://arxiv.org/pdf/2309.01431.pdf

■ Researchers

Jiawei Chen

Chinese Information Processing Laboratory

■ Abstract

Retrieval-Augmented Generation (RAG) is a promising approach for mitigating the hallucination of large language models (LLMs). However, existing research lacks rigorous evaluation of the impact of retrieval-augmented generation on different large language models, which makes it challenging to identify the potential bottlenecks in the capabilities of RAG for different LLMs. In this paper, we systematically investigate the impact of Retrieval-Augmented Generation on large language models. We analyze the performance of different large language models on 4 fundamental abilities required for RAG: noise robustness, negative rejection, information integration, and counterfactual robustness. To this end, we establish the Retrieval-Augmented Generation Benchmark (RGB), a new corpus for RAG evaluation in both English and Chinese. RGB divides the instances within the benchmark into 4 separate testbeds based on the aforementioned fundamental abilities required to resolve each case. We then evaluate 6 representative LLMs on RGB to diagnose the challenges of current LLMs when applying RAG. The evaluation reveals that while LLMs exhibit a certain degree of noise robustness, they still struggle significantly with negative rejection, information integration, and dealing with false information. These assessment outcomes indicate that there is still a considerable journey ahead before RAG can be effectively applied to LLMs.
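
As a rough illustration of the evaluation protocol, the following Python sketch shows one way a noise-robustness testbed of this kind could be scored: answer-bearing and irrelevant retrieved documents are mixed at a controlled noise ratio, the model is prompted with them, and accuracy is measured by checking whether a gold answer string appears in the output. The function query_llm, the instance fields (question, answers, positive_docs, noise_docs), and the substring-match criterion are illustrative assumptions, not the authors' released code.

import random

def build_context(instance, noise_ratio, k=5):
    # Mix answer-bearing and purely noisy retrieved documents at the
    # requested ratio; assumes both lists hold at least k entries.
    num_noise = round(k * noise_ratio)
    docs = (random.sample(instance["positive_docs"], k - num_noise)
            + random.sample(instance["noise_docs"], num_noise))
    random.shuffle(docs)
    return "\n".join(docs)

def noise_robustness_accuracy(instances, query_llm, noise_ratio=0.4):
    # Fraction of instances whose generated answer contains a gold string.
    hits = 0
    for inst in instances:
        prompt = ("Answer the question based only on the documents below.\n"
                  "Documents:\n" + build_context(inst, noise_ratio) + "\n"
                  "Question: " + inst["question"])
        answer = query_llm(prompt)  # any text-in, text-out LLM call
        hits += any(gold.lower() in answer.lower() for gold in inst["answers"])
    return hits / len(instances)

Sweeping noise_ratio from 0 toward 1 then traces how accuracy degrades as irrelevant documents crowd the context, which is the failure mode the noise-robustness testbed is designed to expose.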


  • Benchmarking Large Language Models