RAG with LangChain#
LangChain is widely adopted by the open-source community for its diverse functionality and clean API. In this tutorial we will show how to use LangChain to build a RAG pipeline.
0. Preparation#
First, install all the required packages:
%pip install pypdf langchain langchain-community langchain-openai langchain-huggingface faiss-cpu
Then fill in your OpenAI API key below:
# Set the OpenAI API key as an environment variable
import os
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
BGE-M3 is a very powerful embedding model. We would like to know what that ‘M3’ stands for.
Let’s first ask GPT the question:
from langchain_openai.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name="gpt-4o-mini")
response = llm.invoke("What does M3-Embedding stand for?")
print(response.content)
M3-Embedding typically refers to a specific method or framework used in machine learning and natural language processing for creating embeddings, which are dense vector representations of data. The "M3" could indicate a particular model, method, or version related to embeddings, but without additional context, it's hard to provide a precise definition.
If you have a specific context or source in mind where "M3-Embedding" is used, please provide more details, and I may be able to give a more accurate explanation!
We can find the answer quickly by checking the GitHub repo of BGE-M3. But since the BGE-M3 paper is not in GPT's training data, GPT is not able to give us the correct answer.
Now, let's use the BGE-M3 paper to build a RAG application that answers our question precisely.
1. Data#
The first step is to load the PDF of the paper:
from langchain_community.document_loaders import PyPDFLoader
# Alternatively, download the paper and pass a local file path instead
loader = PyPDFLoader("https://arxiv.org/pdf/2402.03216")
docs = loader.load()
print(docs[0].metadata)
{'source': 'https://arxiv.org/pdf/2402.03216', 'page': 0}
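As a quick, optional sanity check, we can count how many pages were loaded:
# Each page of the PDF becomes one Document
print(len(docs))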
The whole paper contains 18 pages, which is too much information to feed into a single prompt. Thus we split the paper into chunks to construct a corpus.
from langchain.text_splitter import RecursiveCharacterTextSplitter
# initialize a splitter
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # Maximum size of each chunk to return
    chunk_overlap=150,  # Number of overlapping characters between adjacent chunks
)
# use the splitter to split our paper
corpus = splitter.split_documents(docs)
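As an optional check, we can see how many chunks the splitter produced and preview one of them; the exact numbers depend on the chunking parameters chosen above:
# Number of chunks in our corpus
print(len(corpus))
# Preview the beginning of the first chunk
print(corpus[0].page_content[:200])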
2. Indexing#
Indexing is one of the most important parts of RAG. LangChain provides APIs for embedding models and vector databases that make this step simple and straightforward.
Here, we choose bge-base-en-v1.5 to embed all the chunks to vectors, and use Faiss as our vector database.
from langchain_huggingface.embeddings import HuggingFaceEmbeddings
embedding_model = HuggingFaceEmbeddings(
    model_name="BAAI/bge-base-en-v1.5",
    encode_kwargs={"normalize_embeddings": True},
)
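Before building the index, it can be helpful to embed a single sentence to confirm the model loads correctly and to see the embedding dimension (768 for bge-base-en-v1.5). This is just a quick sketch using the embed_query method of the embedding interface:
# Embed one sentence and check the vector length
vec = embedding_model.embed_query("What does M3-Embedding stand for?")
print(len(vec))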
Then create a Faiss vector database given our corpus and embedding model.
If you want to know more about Faiss, refer to the tutorial on Faiss and indexing.
from langchain_community.vectorstores import FAISS
vectordb = FAISS.from_documents(corpus, embedding_model)
# (optional) save the vector database to a local directory
vectordb.save_local("vectorstore.db")
# Create retriever for later use
retriever = vectordb.as_retriever()
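Before wiring the retriever into a chain, we can query it directly to inspect which chunks it returns for our question. This is a minimal sketch assuming the default retriever settings (top-k similarity search); the commented load_local call is only needed if you restart from the saved database, and depending on your LangChain version it may require the allow_dangerous_deserialization flag:
# (optional) reload the vector database from disk
# vectordb = FAISS.load_local("vectorstore.db", embedding_model, allow_dangerous_deserialization=True)

# Query the retriever directly and preview the retrieved chunks
retrieved_docs = retriever.invoke("What does M3-Embedding stand for?")
for doc in retrieved_docs:
    print(doc.metadata, doc.page_content[:100])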
3. Retrieve and Generate#
Let's write a simple prompt template. Modify its contents to match your own use case.
from langchain_core.prompts import ChatPromptTemplate
template = """
You are a Q&A chat bot.
Use only the given context to answer the question.
<context>
{context}
</context>
Question: {input}
"""
# Create a prompt template
prompt = ChatPromptTemplate.from_template(template)
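To see what the model will actually receive, we can fill the template with a placeholder context and question (the values below are just for illustration):
# Fill the template with placeholder values and inspect the resulting messages
print(prompt.invoke({"context": "BGE-M3 is an embedding model.", "input": "What is BGE-M3?"}))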
Now everything is ready. Assemble them to a chain and let the magic happen!
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
doc_chain = create_stuff_documents_chain(llm, prompt)
chain = create_retrieval_chain(retriever, doc_chain)
Run the following cell, and we can see that the chatbot answers the question correctly!
response = chain.invoke({"input": "What does M3-Embedding stand for?"})
# print the answer only
print(response['answer'])
M3-Embedding stands for a new embedding model that is distinguished for its versatility in multi-linguality, multi-functionality, and multi-granularity.
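Besides the answer, the output of the retrieval chain also contains the retrieved chunks, which is useful for checking what evidence the answer was based on. In current LangChain versions they are returned under the context key:
# Inspect which chunks were retrieved to answer the question
for doc in response["context"]:
    print(doc.metadata)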