Abstract

Retrieval-Augmented Generation (RAG) pipelines reduce the frequency of Large Language Model (LLM) hallucinations by grounding the LLM's context in knowledge-base documents. Proposed herein is a lightweight, generalizable framework for evaluating RAG systems on both their document-retrieval accuracy and their answer-generation accuracy.
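The two evaluation axes named above can be illustrated with a minimal sketch. All function names, metrics, and data here are hypothetical placeholders, not the paper's actual framework: retrieval accuracy is approximated as a top-k hit rate, and answer accuracy as a normalized exact-match rate.

```python
def retrieval_accuracy(retrieved_ids, relevant_ids, k=5):
    """Fraction of queries whose top-k retrieved documents include
    at least one ground-truth relevant document (hypothetical metric)."""
    hits = 0
    for ret, rel in zip(retrieved_ids, relevant_ids):
        if set(ret[:k]) & set(rel):  # any overlap counts as a hit
            hits += 1
    return hits / len(retrieved_ids)

def answer_accuracy(answers, references):
    """Normalized exact-match rate between generated answers and
    references; a real framework would likely use a softer metric."""
    correct = sum(a.strip().lower() == r.strip().lower()
                  for a, r in zip(answers, references))
    return correct / len(answers)

# Toy example: two queries, two answers (illustrative data only).
retrieved = [["d1", "d7"], ["d3", "d2"]]
relevant = [["d7"], ["d9"]]
print(retrieval_accuracy(retrieved, relevant))          # 0.5
print(answer_accuracy(["Paris", "1998"], ["paris", "1997"]))  # 0.5
```

Splitting evaluation this way lets a failure be localized: a wrong answer over a correct retrieval implicates generation, while a retrieval miss implicates the knowledge-base search.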

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
