RAFT: A new way to teach LLMs to be better at RAG

This article introduces RAFT (Retrieval-Augmented Fine-Tuning), a technique for improving how Large Language Models (LLMs) handle domain-specific retrieval-augmented generation (RAG) by combining retrieval with fine-tuning. Authored by Cedric Vidal and Suraj Subramanian, the article presents RAFT as a strategy for domain-specific adaptation of LLMs that addresses the limitations of existing methods by adapting models to domain knowledge before deployment. Building on research from UC Berkeley and demonstrated with Meta Llama 2 and Azure AI Studio, RAFT fine-tunes a model on questions paired with both relevant and distractor documents, teaching it to draw on the right context when generating answers and yielding better performance on domain-specific tasks.
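As a rough illustration of the RAFT idea (not the authors' actual pipeline), the sketch below assembles a hypothetical fine-tuning record: a question, one "oracle" document that contains the answer, a few distractor documents shuffled into the context, and a target answer. All document text, function names, and field names here are made up for the example.

```python
import json
import random

def build_raft_example(question, oracle_doc, distractor_docs, answer, num_distractors=3):
    """Assemble one RAFT-style fine-tuning record: the question is paired
    with the oracle document plus shuffled distractors, so the model must
    learn to answer from the relevant passage and ignore the rest."""
    context = [oracle_doc] + random.sample(distractor_docs, num_distractors)
    random.shuffle(context)
    prompt = "\n\n".join(
        [f"Document {i + 1}:\n{doc}" for i, doc in enumerate(context)]
        + [f"Question: {question}"]
    )
    return {"prompt": prompt, "completion": answer}

# Hypothetical domain corpus and QA pair, for illustration only.
docs = [
    "Azure AI Studio supports fine-tuning of Meta Llama 2 models.",
    "RAG pipelines retrieve documents at query time to ground answers.",
    "Distractor documents are irrelevant passages mixed into the context.",
    "Fine-tuning adjusts model weights on task-specific examples.",
]
example = build_raft_example(
    question="What does RAFT mix into the training context besides the oracle document?",
    oracle_doc=docs[2],
    distractor_docs=docs[:2] + docs[3:],
    answer="Distractor documents, so the model learns to ignore irrelevant passages.",
    num_distractors=2,
)
print(json.dumps(example, indent=2))
```

Training on records like this is what lets a RAFT-tuned model cope with the noisy contexts a retriever produces at inference time.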

Visit Original Article →