
RAG_SemanticRouting: semantic routing with LangChain, LangGraph, and an LLM router

RAG_SemanticRouting

https://github.com/UribeAlejandro/RAG_SemanticRouting/tree/main

 

A chat agent with semantic routing. Each question is evaluated and routed along one of two paths: web search or RAG. The agent leverages Ollama, LangChain, LangGraph, and LangSmith.
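
The routing step is a small LLM classification chain. Below is a minimal sketch of such a router, assuming a local Ollama model served through LangChain; the model name ("llama3"), prompt wording, and output keys are illustrative assumptions, not the repository's exact code.

```python
# Hedged sketch of the "Route Question" classifier (model name, prompt, and keys are assumptions).
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOllama(model="llama3", format="json", temperature=0)

router_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You decide whether a user question can be answered from the indexed "
     "documents in the vectorstore. Return a JSON object: "
     '{{"score": "yes"}} if the vectorstore should be used, '
     '{{"score": "no"}} if a web search is needed.'),
    ("human", "{question}"),
])

# Pipe the prompt into the LLM and parse the JSON reply into a dict.
question_router = router_prompt | llm | JsonOutputParser()

# e.g. question_router.invoke({"question": "..."}) -> {"score": "yes"} or {"score": "no"}
```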

 

The architecture of the system is shown below:

[Architecture diagram]

The system is composed of the following nodes, routes, and edges (a LangGraph wiring sketch follows the list):

  • Route Question: This node evaluates whether the question should be routed to the VectorStore or to Web Search. To do so, it uses the LLM to classify the question; the output is a binary choice {yes, no}.
    • Yes -> VectorStore: The question is routed to the VectorStore to retrieve the most relevant documents.
    • No -> Web Search: The question is routed to Web Search to bring in external information.
  • Web Search: The node uses the Tavily API to search for information related to the question.
  • Retrieve: The node retrieves the most relevant documents from the VectorStore.
  • Grade Documents: This node grades the retrieved documents for relevance using the LLM; the output is a binary choice {yes, no}.
    • Yes -> Answer: The node answers the question using the retrieved documents.
    • No -> Web Search: The question is routed to Web Search to bring in external information.
  • Answer: The node answers the question using the retrieved documents.
  • Hallucinations Detection: This node uses the LLM to check the answer for hallucinations and usefulness (a grader sketch follows this list).
    • not useful -> Web Search: The question is routed to Web Search to bring in external information.
    • not supported -> Re-generate: The answer is generated again.
    • useful -> End: The answer is returned.
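
Grade Documents and Hallucinations Detection are both binary LLM graders. Below is a hedged sketch of the answer-checking step: a grounding grader and a usefulness grader combined into the three-way decision listed above. The model name, prompts, and helper names are illustrative assumptions, not the repository's exact code.

```python
# Hedged sketch of the graders behind "Hallucinations Detection" (names and prompts are assumptions).
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOllama(model="llama3", format="json", temperature=0)

# Grounding grader: is the answer supported by the retrieved documents?
hallucination_grader = (
    ChatPromptTemplate.from_messages([
        ("system",
         "Decide whether the answer is grounded in the given documents. "
         'Return {{"score": "yes"}} or {{"score": "no"}}.'),
        ("human", "Documents:\n{documents}\n\nAnswer: {generation}"),
    ])
    | llm
    | JsonOutputParser()
)

# Usefulness grader: does the answer actually resolve the question?
answer_grader = (
    ChatPromptTemplate.from_messages([
        ("system",
         "Decide whether the answer resolves the question. "
         'Return {{"score": "yes"}} or {{"score": "no"}}.'),
        ("human", "Question: {question}\n\nAnswer: {generation}"),
    ])
    | llm
    | JsonOutputParser()
)


def grade_generation(question: str, documents: str, generation: str) -> str:
    """Map the two binary grades to the edges listed above."""
    grounded = hallucination_grader.invoke(
        {"documents": documents, "generation": generation}
    )["score"]
    if grounded != "yes":
        return "not supported"  # -> re-generate the answer
    useful = answer_grader.invoke(
        {"question": question, "generation": generation}
    )["score"]
    return "useful" if useful == "yes" else "not useful"  # "not useful" -> web search
```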

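Putting it together, the following is a minimal LangGraph wiring sketch of the nodes and edges above. Node bodies are stubbed, and the state fields and function names are illustrative assumptions, not the repository's exact implementation.

```python
# Hedged LangGraph wiring sketch of the flow described above (stubs, assumed names).
from typing import List, TypedDict

from langgraph.graph import END, StateGraph


class GraphState(TypedDict):
    question: str
    documents: List[str]
    generation: str


# Node stubs (each would return a partial state update).
def web_search(state: GraphState) -> dict: ...       # Tavily search
def retrieve(state: GraphState) -> dict: ...         # VectorStore retrieval
def grade_documents(state: GraphState) -> dict: ...  # LLM relevance grading
def generate(state: GraphState) -> dict: ...         # answer generation

# Conditional-edge functions (each returns the name of the branch to follow).
def route_question(state: GraphState) -> str: ...      # e.g. wraps question_router -> "vectorstore" | "web_search"
def decide_to_generate(state: GraphState) -> str: ...  # "generate" | "web_search"
def check_generation(state: GraphState) -> str: ...    # e.g. wraps grade_generation -> "useful" | "not useful" | "not supported"


workflow = StateGraph(GraphState)
workflow.add_node("web_search", web_search)
workflow.add_node("retrieve", retrieve)
workflow.add_node("grade_documents", grade_documents)
workflow.add_node("generate", generate)

# Route Question: VectorStore vs. Web Search
workflow.set_conditional_entry_point(
    route_question,
    {"vectorstore": "retrieve", "web_search": "web_search"},
)
workflow.add_edge("retrieve", "grade_documents")

# Grade Documents: relevant -> generate, otherwise fall back to web search
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {"generate": "generate", "web_search": "web_search"},
)
workflow.add_edge("web_search", "generate")

# Hallucinations Detection on the generated answer
workflow.add_conditional_edges(
    "generate",
    check_generation,
    {"useful": END, "not useful": "web_search", "not supported": "generate"},
)

app = workflow.compile()
# app.invoke({"question": "..."})
```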
 

posted @ 2024-11-14 23:09  lightsong