Using Toolkits in LLM Agent Development

What Toolkits Are For

A toolkit provides a predefined collection of tools focused on a specific kind of service, such as database querying, file handling, Python code execution, or web scraping. These tool collections give an Agent a higher level of abstraction and simplify how tools are used.

Common Toolkits

SQLDatabaseToolkit: the main use case is querying a database in natural language. It bundles tools for inspecting, querying, and operating on a SQL database, so an Agent can interact with the database directly.
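
A minimal sketch of setting it up, assuming the sample Chinook.db SQLite file and an already-constructed llm instance (older langchain import paths, matching the code further below):

from langchain.sql_database import SQLDatabase
from langchain.agents.agent_toolkits import SQLDatabaseToolkit

db = SQLDatabase.from_uri("sqlite:///Chinook.db")   # assumed sample database
toolkit = SQLDatabaseToolkit(db=db, llm=llm)        # llm: any LangChain LLM / chat model
print([t.name for t in toolkit.get_tools()])        # query, schema, list-tables and checker tools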

RequestsToolkit: the main use case is calling APIs over HTTP. Its tools can send GET and POST requests and handle other HTTP tasks.
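
RequestsToolkit bundles these request tools; since its import path differs between langchain versions, the rough equivalent sketch below obtains the same GET/POST tools through load_tools (llm is an assumed existing model, and the URL is just a placeholder):

from langchain.agents import load_tools, initialize_agent, AgentType

request_tools = load_tools(["requests_get", "requests_post"])   # plain HTTP GET/POST tools
agent = initialize_agent(request_tools, llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)
agent.run("Send a GET request to https://httpbin.org/get and summarize the response.")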

AzureCognitiveServicesToolkit: brings the AI capabilities of Azure Cognitive Services into the LangChain framework. It is well suited to scenarios that need multimodal processing, speech support, or advanced text analysis, giving intelligent applications richer capabilities and tighter integration.
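
A minimal initialization sketch, assuming an Azure Cognitive Services resource and that the environment variables below point at it:

import os
from langchain.agents.agent_toolkits import AzureCognitiveServicesToolkit

os.environ["AZURE_COGS_KEY"] = "<your-key>"            # placeholder credentials
os.environ["AZURE_COGS_ENDPOINT"] = "<your-endpoint>"
os.environ["AZURE_COGS_REGION"] = "<your-region>"

toolkit = AzureCognitiveServicesToolkit()
print([t.name for t in toolkit.get_tools()])           # image analysis, form recognizer, speech tools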

General Steps for Using a Toolkit

1. Initialize the toolkit: choose a Toolkit that fits the task and initialize it.

2. Load the tools into the Agent: integrate the Toolkit's tools into the Agent.

3. Use the Agent: interact with the Agent in natural language to complete the task (a minimal end-to-end sketch follows below).
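
A minimal end-to-end sketch of these three steps, assuming an OpenAI key is configured and the sample Chinook.db SQLite file is present:

from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent

llm = OpenAI(temperature=0)

# 1. Initialize the toolkit
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

# 2. Load the tools into an agent
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)

# 3. Interact in natural language
print(agent.run("How many customers are there in each country?"))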

Code Example

from langchain.utilities import SerpAPIWrapper
from langchain.chains import LLMChain
from langchain.agents import initialize_agent, AgentType
import os
from langchain.agents import Tool,load_tools
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory
# imports for creating toolkits
from langchain.sql_database import SQLDatabase
from langchain.agents.agent_toolkits import AzureCognitiveServicesToolkit,SQLDatabaseToolkit
from langchain.prompts import PromptTemplate,MessagesPlaceholder
# SerpAPI token
os.environ["SERPAPI_API_KEY"] = ""
class AgentsTemplate:

    def __init__(self,**kwargs):
        # build a search tool
        search = SerpAPIWrapper()
        self.prompt = kwargs.get("base_prompt")
        self.llm = kwargs.get("llm")
        # load the built-in serpapi and llm-math tools (not used below; self.tools is built by hand)
        llm_math_chain = load_tools(["serpapi", "llm-math"], llm=self.llm)
        # create a chain that summarizes the conversation
        template = """
        The following is a conversation between an AI robot and a human:{chat_history}
        Write a conversation summary based on the input and the conversation record above,input:{input}
        """

        self.memory = ConversationBufferMemory(
            memory_key="chat_history",
            return_messages=True,
        )
        prompt = PromptTemplate(
            input_variables=["input", "chat_history"],
            template=template
        )
        self.shared_memory = ReadOnlySharedMemory(memory=self.memory)
        self.summary_chain = LLMChain(
            llm=self.llm,
            prompt = prompt,
            verbose = True,
            memory = self.shared_memory
        )
        self.tools = [
            Tool(
                name="Search",
                func=search.run,
                description= "useful for when you need to answer questions about current events or the current state of the world"
            ),
            Tool(
                name="Summary",
                func=self.SummaryChainFun,
                description="This tool can be used when you are asked to summarize a conversation. The tool input must be a string. Use it only when necessary"
            )
        ]
        #load_tools(["serpapi", "llm-math"], llm=self.llm)
        # memory component


        self.agentType = [AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                          AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                          AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
                          AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
                          AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION]
    def SummaryChainFun(self, history):
        print("\n============== Summary Chain Execution ==============")
        print("Input History: ", history)
        return self.summary_chain.run(history)

    def createToolkits(self,toolKey):
        toolkit = None
        if toolKey == "azure":
            toolkit = AzureCognitiveServicesToolkit()
        elif toolKey == "sqlData":
            db = SQLDatabase.from_uri("sqlite:///Chinook.db")
            toolkit = SQLDatabaseToolkit(db = db,llm = self.llm)
        return toolkit

    # ZERO_SHOT_REACT_DESCRIPTION: zero-shot ReAct agent
    # CHAT_ZERO_SHOT_REACT_DESCRIPTION: zero-shot ReAct agent for chat models
    def zero_agent(self,question,agentType):
        if agentType not in self.agentType:
            raise ValueError("Invalid AgentType; please choose a valid type!")
        prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
        suffix = """Begin!"
        {chat_history}
        Question: {input}
        {agent_scratchpad}"""
        # create the toolkit to use
        toolkits = self.createToolkits("azure")
        # build the agent initialization parameters dynamically
        agent_params = {
            # initialize_agent expects a list of tools, so unpack the toolkit;
            # the hand-built self.tools list could be passed here instead
            "tools": toolkits.get_tools(),
            "llm": self.llm,
            "agent": agentType,
            "verbose": True,
            "memory": self.memory,
            "agent_kwargs" : {
                "chat_history": MessagesPlaceholder(variable_name="chat_history"),
                "agent_scratchpad":MessagesPlaceholder(variable_name="agent_scratchpad"),
                "prefix":prefix,
                "sufix":suffix,
                "input":MessagesPlaceholder("input")

            },
            "handle_parsing_errors": True
        }
        # initialize the agent
        agent = initialize_agent(**agent_params)
        print("-------------------")
        # print the prompt template
        prompt = agent.agent.llm_chain.prompt
        print("Prompt Template:")
        print(prompt)
        # print(agent.agent.prompt.messages)
        # print(agent.agent.prompt.messages[0])
        # print(agent.agent.prompt.messages[1])
        # print(agent.agent.prompt.messages[2])
        try:
            response = agent.run(question)
            print(f"运行的代理类型: {agentType}, 提问内容: {question}")
            print(f"agent回答: {response}")
            #self.memory.save_context(question,response)
        except Exception as e:
            print(f"代理运行时出错: {e}")
    # zero-shot ReAct agent using a chat model
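
A rough usage sketch of the AgentsTemplate class above; it assumes a SERPAPI key, an OpenAI key, and the Azure Cognitive Services credentials read by AzureCognitiveServicesToolkit are all configured in the environment:

from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentType

llm = ChatOpenAI(temperature=0)   # assumed chat model; any LangChain LLM would do
agent_template = AgentsTemplate(llm=llm, base_prompt=None)
agent_template.zero_agent(
    "What image and speech tasks can your tools handle?",
    AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)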

 
