
LangChain Long-Term Memory

Message histories

https://python.langchain.com/docs/integrations/memory/

Many databases are supported as message-history backends.

 

Redis database

https://www.cnblogs.com/mangod/p/18243321

from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="gpt-3.5-turbo",
    openai_api_key="sk-xxxxxxxxxxxxxxxxxxx",
    openai_api_base="https://api.aigc369.com/v1",
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "你是一个擅长{ability}的助手"),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{question}"),
    ]
)

chain = prompt | model

chain_with_history = RunnableWithMessageHistory(
    chain,
    # Use Redis to store the chat history, keyed by session_id
    lambda session_id: RedisChatMessageHistory(
        session_id, url="redis://10.22.11.110:6379/3"
    ),
    input_messages_key="question",
    history_messages_key="history",
)

# Every invocation persists the chat history; pass the matching session_id
chain_with_history.invoke(
    {"ability": "physics", "question": "How far is the Moon from the Earth?"},
    config={"configurable": {"session_id": "baily_question"}},
)

chain_with_history.invoke(
    {"ability": "physics", "question": "How far is the Sun from the Earth?"},
    config={"configurable": {"session_id": "baily_question"}},
)

chain_with_history.invoke(
    {"ability": "physics", "question": "Which of the two is closer to the Earth?"},
    config={"configurable": {"session_id": "baily_question"}},
)
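
To confirm what was persisted, the same RedisChatMessageHistory can be read back directly, reusing the import and Redis URL from the example above:

history = RedisChatMessageHistory("baily_question", url="redis://10.22.11.110:6379/3")
for message in history.messages:
    # Each entry is a human or AI message from the stored session
    print(f"{message.type}: {message.content}")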

 

Adding Long Term Memory to OpenGPTs

https://blog.langchain.dev/adding-long-term-memory-to-opengpts/

Semantic Memory

Semantic memory is perhaps the next most commonly used type of memory. This refers to finding messages that are similar to the current message and bringing them into the prompt in some way.

This is typically done by computing an embedding for each message and then finding other messages with a similar embedding. It is essentially the same idea that powers retrieval-augmented generation (RAG), except that instead of searching for documents, you are searching for messages.

For example, if a user asks "what is my favorite fruit", we would maybe find a previous message like "my favorite fruit is blueberries" (since they are similar in the embedding space). We could then pass that previous message in as context.
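
A minimal sketch of that lookup, using LangChain's in-memory vector store with OpenAI embeddings (the store name and indexed message are illustrative, taken from the example above):

from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

# Index past messages by their embeddings.
message_store = InMemoryVectorStore(OpenAIEmbeddings())
message_store.add_texts(["my favorite fruit is blueberries"])

# At query time, retrieve the stored messages closest to the current one
# and pass them into the prompt as extra context.
similar = message_store.similarity_search("what is my favorite fruit", k=3)
context = "\n".join(doc.page_content for doc in similar)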

However, this approach has some flaws.

 

A Long-Term Memory Agent

https://python.langchain.com/docs/versions/migrating_memory/long_term_memory_agent/

Essentially, this approach uses a vector database to retrieve similar information.

import json
from typing import List, Literal, Optional

import tiktoken
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings
from langchain_core.messages import get_buffer_string
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import ChatOpenAI
from langchain_openai.embeddings import OpenAIEmbeddings
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode
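
The imports above come from the linked tutorial. Its core idea is a vector store exposed to the agent as two tools, one that saves memories and one that searches them. A condensed sketch along the tutorial's lines (treat exact names and signatures as illustrative, not verbatim):

# Vector store holding the agent's long-term "recall" memories.
recall_vector_store = InMemoryVectorStore(OpenAIEmbeddings())


def get_user_id(config: RunnableConfig) -> str:
    user_id = config["configurable"].get("user_id")
    if user_id is None:
        raise ValueError("user_id needs to be provided to save a memory.")
    return user_id


@tool
def save_recall_memory(memory: str, config: RunnableConfig) -> str:
    """Save a memory to the vector store for later semantic retrieval."""
    document = Document(page_content=memory, metadata={"user_id": get_user_id(config)})
    recall_vector_store.add_documents([document])
    return memory


@tool
def search_recall_memories(query: str, config: RunnableConfig) -> List[str]:
    """Search the vector store for memories relevant to the query."""
    user_id = get_user_id(config)
    documents = recall_vector_store.similarity_search(
        # Only return memories belonging to the current user
        query, k=3, filter=lambda doc: doc.metadata["user_id"] == user_id
    )
    return [doc.page_content for doc in documents]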

 

Building a Conversational AI Agent with Long-Term Memory Using LangChain and Milvus

https://zilliz.com/blog/building-a-conversational-ai-agent-long-term-memory-langchain-milvus

Large language models (LLMs) have changed the game in artificial intelligence (AI). These advanced models can easily understand and generate human-like text with impressive accuracy, making AI assistants and chatbots much smarter and more useful. Thanks to LLMs, we now have AI tools that can handle complex language tasks, from answering questions to translating languages.

Conversational agents are software programs that chat with users in natural language, just like talking to a real person. They power things like chatbots and virtual assistants, helping us with everyday tasks by understanding and responding to our questions and commands.

LangChain is an open-source framework that makes it easier to build these conversational agents. It provides handy tools and templates to create smart, context-aware chatbots and other AI applications quickly and efficiently.

Introduction to LangChain Agents

LangChain agents are advanced systems that use an LLM to interact with different tools and data sources to complete complex tasks. These agents can understand user inputs, make decisions, and create responses, using the LLM to offer more flexible and adaptive decision-making than traditional methods.

A big advantage of LangChain Agents is their ability to use external tools and data sources. This means they can gather information, perform calculations, and take actions beyond just processing language, making them more powerful and effective for various applications.

LangChain Agents vs. Chains

Chains and agents are the two main tools used in LangChain. Chains allow you to create a pre-defined sequence of tool usage, which is useful for tasks that require a specific order of operations.

[Figure: How LangChain Chains work]

On the other hand, Agents enable the large language model to use tools in a loop, allowing it to decide how many times to use tools. This flexibility is ideal for tasks that require iterative processing or dynamic decision-making.

[Figure: How LangChain Agents work]
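
A rough sketch of the difference in code, assuming langchain-openai and LangGraph's prebuilt ReAct agent (the word_count tool is a made-up example):

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

model = ChatOpenAI(model="gpt-4o-mini")

# Chain: a fixed, pre-defined sequence -- prompt, then model, exactly once.
chain = ChatPromptTemplate.from_template("Summarize: {text}") | model


@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())


# Agent: the LLM runs in a loop and decides whether, and how many times,
# to call its tools before producing a final answer.
agent = create_react_agent(model, tools=[word_count])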

 

Build a Conversational Agent Using LangChain

Let’s build a conversational agent using LangChain in Python.

Install Dependencies

To build a LangChain agent, we need to install the following dependencies (a sample install command follows the list):

  • LangChain: LangChain is an open-source framework that helps developers create applications using large language models (LLMs).

  • Langchain OpenAI: This package contains the LangChain integrations for OpenAI through their openai SDK.

  • OpenAI API SDK: The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.7+ application.

  • Dotenv: Python-dotenv reads key-value pairs from a .env file and can set them as environment variables.

  • Milvus: an open-source vector database best for billion-scale vector storage and similarity search. It is also a popular infrastructure component for building Retrieval Augmented Generation (RAG) applications.

  • Pymilvus: The Python SDK for Milvus. It integrates many popular embedding and reranking models, which streamlines the building of RAG applications.

  • Tiktoken: a fast BPE tokeniser for use with OpenAI's models.
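
Assuming the current package names on PyPI (Milvus itself can also run embedded via Milvus Lite, which ships with recent pymilvus releases), a typical install command looks like:

pip install langchain langchain-openai openai python-dotenv pymilvus tiktoken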

 
