The ChatGLM series

Official site: https://chatglm.cn/blog

ChatGLM2

Downloading chatglm2-6b

    from transformers import AutoTokenizer, AutoModel

    print('Loading the tokenizer...')
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
    print('Loading the model...')
    model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).cuda()

By default the files are saved to ~/.cache/huggingface/hub/models--THUDM--chatglm2-6b
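If you want the weights somewhere other than the default cache, from_pretrained also accepts a cache_dir argument. A minimal sketch; the target directory below is only a placeholder:

    from transformers import AutoTokenizer, AutoModel

    cache = "/data/models"  # placeholder path; any writable directory works
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True, cache_dir=cache)
    model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True, cache_dir=cache).cuda()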

Repository URL: https://huggingface.co/THUDM/chatglm2-6b/tree/main

The repository contains 18 files in total.
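If you prefer to fetch the whole repository up front instead of letting from_pretrained download it on first use, huggingface_hub's snapshot_download can do that. A sketch, assuming huggingface_hub is installed (pip install huggingface_hub):

    from huggingface_hub import snapshot_download

    # Downloads every file in the repo (the 18 files above) into the local cache
    # and returns the local directory that contains them.
    local_dir = snapshot_download(repo_id="THUDM/chatglm2-6b")
    print(local_dir)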

ChatGLM3

https://modelscope.cn/models/ZhipuAI/chatglm3-6b/summary

The following steps can download the model files without needing a VPN or proxy:

pip install modelscope

from modelscope import snapshot_download
model_dir = snapshot_download("ZhipuAI/chatglm3-6b", revision="v1.0.0")

The model files are saved to ~/.cache/modelscope/hub/ZhipuAI/chatglm3-6b
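If that default cache location is not convenient, modelscope's snapshot_download also takes a cache_dir argument (assumed here to redirect the download target; the path below is a placeholder):

from modelscope import snapshot_download

# cache_dir is assumed to override the default ~/.cache/modelscope/hub location
model_dir = snapshot_download("ZhipuAI/chatglm3-6b", revision="v1.0.0", cache_dir="/data/models")
print(model_dir)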

Load the model and run inference

from modelscope import AutoTokenizer, AutoModel, snapshot_download

model_dir = snapshot_download("ZhipuAI/chatglm3-6b", revision="v1.0.0")
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModel.from_pretrained(model_dir, trust_remote_code=True).half().cuda()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])  # "Hello"
print(response)
response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)  # "What should I do if I can't sleep at night?"
print(response)
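Besides chat, the ChatGLM checkpoints loaded with trust_remote_code also expose a stream_chat method in the official demos. Assuming it yields the partially generated reply together with the updated history, streamed output can be printed like this (a sketch, not verified against every model revision):

prev_len = 0
for response, history in model.stream_chat(tokenizer, "你好", history=[]):  # "Hello"
    # response is the full reply generated so far; print only the newly added part
    print(response[prev_len:], end="", flush=True)
    prev_len = len(response)
print()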