ollama 0.2.7 now supports function calling

The latest ollama release adds support for OpenAI-style function calling. There is still one catch: most existing models need their Modelfile updated (the TEMPLATE has to be extended to handle tools). Below is a simple test.
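Before modifying anything, it is worth checking whether a model's existing TEMPLATE already references .Tools. A quick way to do that (a small sketch using ollama's show subcommand):

# print only the TEMPLATE; if there is no {{ .Tools }} section, tool definitions never reach the prompt
ollama show --template qwen2:7b
# print the full Modelfile, which can serve as the starting point for the modified version
ollama show --modelfile qwen2:7b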

Example

The example uses the phidata toolkit.
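phidata talks to the local model through ollama's OpenAI-compatible endpoint, so the openai client library is needed alongside phidata itself (an assumed install step, not spelled out in the original post):

pip install -U phidata openai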

  • A Modelfile for a modified qwen2:7b model
    The TEMPLATE is adapted from the llama3-groq-tool-use model
    Modelfile
FROM qwen2:7b
 
TEMPLATE """{{- if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
 
{{ if .System }}{{ .System }}
{{- end }}
In addition to plain text responses, you can choose to call one or more of the provided functions.
 
Use the following rule to decide when to call a function:
  * if the response can be generated from your internal knowledge (e.g., as in the case of queries like "What is the capital of Poland?"), do so
  * if you need external information that can be obtained by calling one or more of the provided functions, generate function calls
 
If you decide to call functions:
  * prefix function calls with functools marker (no closing marker required)
  * all function calls should be generated in a single JSON list formatted as functools[{"name": [function name], "arguments": [function arguments as JSON]}, ...]
  * follow the provided JSON schema. Do not hallucinate arguments or values. Do not blindly copy values from the provided samples
  * respect the argument type formatting. E.g., if the type is number and format is float, write value 7 as 7.0
  * make sure you pick the right functions that match the user intent
 
Available functions as JSON spec:
{{- if .Tools }}
{{ .Tools }}
{{- end }}<|eot_id|>
{{- end }}
{{- range .Messages }}
{{- if ne .Role "system" }}<|start_header_id|>{{ .Role }}<|end_header_id|>
{{- if and .Content (eq .Role "tool") }}
 
{"result": {{ .Content }}}
{{- else if .Content }}
 
{{ .Content }}
{{- else if .ToolCalls }}
 
functools[
{{- range .ToolCalls }}{{ "{" }}"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}{{ "}" }}
{{- end }}]
{{- end }}<|eot_id|>
{{- end }}
{{- end }}<|start_header_id|>assistant<|end_header_id|>
 
{{ else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>
 
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
 
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
 
{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}
"""

Build command

ollama create myqwen2:7b -f Modelfile
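Before wiring up phidata, the rebuilt model can be exercised directly against ollama's OpenAI-compatible endpoint. The sketch below is only illustrative: the tool definition follows the standard OpenAI function-calling schema, and the get_time tool mirrors the phidata example that follows.

from openai import OpenAI

# talk to the local ollama server through its OpenAI-compatible API
client = OpenAI(base_url="http://localhost:11434/v1", api_key="demo")

# a single tool described with the standard OpenAI function-calling schema
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Return the current local time",
            "parameters": {"type": "object", "properties": {}},
        },
    }
]

resp = client.chat.completions.create(
    model="myqwen2:7b",
    messages=[{"role": "user", "content": "现在的时间"}],
    tools=tools,
)

# if the modified TEMPLATE works, the model should answer with a tool call
# rather than plain text
print(resp.choices[0].message.tool_calls)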
  • Example code
from phi.llm.openai.like import OpenAILike
from phi.assistant import Assistant
import datetime
import json
import time

# point the OpenAI-compatible client at the local ollama server
my_ollama = OpenAILike(
    model="myqwen2:7b",
    api_key="demo",
    base_url="http://localhost:11434/v1"
)

def get_time():
    """Get the current local time."""
    info = {
        "time": time.strftime("%I:%M %p")
    }
    return json.dumps(info)

def get_username():
    """Get the current user's name."""
    info = {
        "username": "John Doe"
    }
    return json.dumps(info)

def get_date():
    """Get the current date."""
    info = {
        "date": time.strftime("%Y-%m-%d")
    }
    return json.dumps(info)

def get_datetime():
    """Get the current date and time."""
    now = datetime.datetime.now()
    info = {
        "time": now.strftime("%Y-%m-%d %H:%M:%S")
    }
    return json.dumps(info)

ollama_assistant = Assistant(
    llm=my_ollama,
    tools=[get_time, get_date], show_tool_calls=True, markdown=False
)
# ask the assistant for the current time; it should call get_time
ollama_assistant.print_response("现在的时间", stream=False)
  • Result

Notes

Quite a few models in ollama still need their TEMPLATE modified before function calling works reliably; otherwise you will run into problems. Native function calling is a very useful feature to have. If you prefer not to use the built-in support, approaches based on other libraries are also good options (instructor is worth trying).
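For the instructor route mentioned above, here is a minimal sketch against the same local endpoint; the response schema and prompt are illustrative assumptions. instructor patches the openai client and extracts a typed Pydantic object, which also works for models whose templates have not been modified for native tool calls.

import instructor
from openai import OpenAI
from pydantic import BaseModel

# the structure we want the model's answer extracted into (illustrative)
class TimeAnswer(BaseModel):
    time: str

client = instructor.from_openai(
    OpenAI(base_url="http://localhost:11434/v1", api_key="demo"),
    mode=instructor.Mode.JSON,  # JSON mode, as shown in the instructor docs for local models
)

answer = client.chat.completions.create(
    model="qwen2:7b",
    response_model=TimeAnswer,
    messages=[{"role": "user", "content": "The meeting starts at 14:30. Extract the time."}],
)
print(answer.time)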

References

server/model_test.go
https://github.com/ollama/ollama/blob/main/docs/modelfile.md#template
https://ollama.com/library/qwen2/blobs/62fbfd9ed093
https://ollama.com/library/llama3-groq-tool-use
https://python.useinstructor.com/
