Ollama is an open-source tool for running large language models (such as Meta's Llama series) locally. This post walks through basic usage.
Download and install Ollama. The installer is well integrated: it registers Ollama as a startup service, adds it to the global environment, and also sets up GPU support (administrator/root privileges required):

curl -fsSL https://ollama.com/install.sh | sh

Create a virtual environment:

conda create -n [env_name]
conda activate [env_name]

Install the Ollama Python package:

pip install ollama
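Note that a model must be pulled before it can be chatted with (e.g. `ollama pull llama3:8b` on the command line). With a running server, `ollama.list()["models"]` returns the installed models (the exact field names vary across package versions). The check itself can be sketched with a small helper; `is_model_available` is illustrative, not part of the library:

```python
def is_model_available(model_name, installed_names):
    """Return True if model_name matches an installed model name.

    A bare name like "llama3" matches any tagged variant such as
    "llama3:8b"; a tagged name matches exactly or by base name.
    """
    base = model_name.split(":")[0]
    return any(
        name == model_name or name.split(":")[0] == base
        for name in installed_names
    )

print(is_model_available("llama3:8b", ["llama3:8b", "gemma:7b"]))  # True
print(is_model_available("mistral", ["llama3:8b", "gemma:7b"]))    # False
```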

Test:

import ollama

model_name = "llama3:8b"
message = "who are you!"
# temperature=0 makes the output deterministic across runs
res = ollama.chat(model=model_name, stream=False,
                  messages=[{"role": "user", "content": message}],
                  options={"temperature": 0})
print(res)
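The printed response object is verbose; usually only the reply text matters. A minimal sketch of extracting it, assuming the `res["message"]["content"]` layout that the `ollama` package's chat response uses (`extract_reply` is an illustrative helper, not a library function):

```python
def extract_reply(res):
    """Pull the assistant's text out of an ollama.chat response,
    which exposes the reply under res["message"]["content"]."""
    return res["message"]["content"]

# A response shaped like ollama.chat's return value:
sample = {"model": "llama3:8b",
          "message": {"role": "assistant", "content": "I am Llama 3."}}
print(extract_reply(sample))  # I am Llama 3.
```

With `stream=True`, `ollama.chat` instead returns an iterator of chunks with the same `message`/`content` layout, so partial text can be printed as it arrives.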

Adversarial test:


import ollama

model_list = ["llama3.1:latest", "gemma:7b", "llama3:8b"]

# Original sentence:
# williams absolutely nails sy's queasy infatuation and overall strangeness.
# Adversarial sentence (one-word perturbation):
# williams absolutely toenails sy's queasy infatuation and overall strangeness.

for model_name in model_list:
    message = """
    Classify the sentiment of the following text into one of two classes: [positive, negative]
    The text is:
    williams absolutely toenails sy's queasy infatuation and overall strangeness.
    The class is:
    """
    res = ollama.chat(model=model_name, stream=False,
                      messages=[{"role": "user", "content": message}],
                      options={"temperature": 0})
    print(f"model is {model_name}\nthe adv_result is:\n{res}")
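A loop like this only classifies the perturbed sentence; for a real adversarial comparison, the original and adversarial sentences should go through the same prompt template and model so the predictions can be compared. A sketch under that assumption — `build_prompt` and `compare_pair` are illustrative helpers, and the chat function is passed in as a parameter so the logic can also be exercised without a running Ollama server:

```python
ORIGINAL = "williams absolutely nails sy's queasy infatuation and overall strangeness."
ADVERSARIAL = "williams absolutely toenails sy's queasy infatuation and overall strangeness."

def build_prompt(text):
    """Wrap a sentence in the sentiment-classification prompt template."""
    return (
        "Classify the sentiment of the following text into one of two "
        "classes: [positive, negative]\n"
        f"The text is:\n{text}\n"
        "The class is:"
    )

def compare_pair(chat, model):
    """Classify the original and adversarial sentences with one model.

    `chat` is a callable with ollama.chat's signature, injected so a
    stub can stand in for the live server.
    """
    results = {}
    for label, text in [("original", ORIGINAL), ("adversarial", ADVERSARIAL)]:
        res = chat(model=model, stream=False,
                   messages=[{"role": "user", "content": build_prompt(text)}],
                   options={"temperature": 0})
        results[label] = res["message"]["content"]
    return results

# With a running server: import ollama; compare_pair(ollama.chat, "llama3:8b")
```

If the two labels disagree, the one-word perturbation flipped the model's prediction.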
References

Ollama official installation link

Posted on 2024-10-10 18:35 by 蔚蓝色の天空