
Large Models: FastChat-Vicuna (Installing and Deploying Vicuna)


Creating the Virtual Environment

# The official docs require Python >= 3.8
conda create -n fastchat python=3.9
conda activate fastchat
# Install PyTorch
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
# Verify the installation
conda activate fastchat
python
>>> import torch
>>> print(torch.__version__)
1.13.1+cu116
>>> print(torch.version.cuda)
11.6
>>> exit()
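The REPL session above can also be run non-interactively. A minimal sketch that checks the active interpreter satisfies the >= 3.8 requirement; it only needs the standard library, so it works even before PyTorch is installed:

```shell
# One-shot version check for the active environment (standard library only).
python3 - <<'EOF'
import sys
# True when the interpreter meets the official >= 3.8 requirement.
print(sys.version_info >= (3, 8))
EOF
# → True
```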


# Install FastChat
pip install fschat -i https://pypi.tuna.tsinghua.edu.cn/simple
# After installing FastChat, reinstall protobuf
pip install protobuf==3.20.0 -i https://pypi.tuna.tsinghua.edu.cn/simple

7B

# Convert the 7B model weights to Hugging Face format
python /home/hcx/transformers-main/src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /home/hcx/LLaMA --model_size 7B --output_dir /home/hcx/out/model/transformer_model_7b
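Before applying the delta in the next step, it can help to confirm the conversion produced a Hugging Face-style directory. A small sketch; the required file list is an assumption about the typical converted layout, and /tmp/demo_hf is a throwaway stand-in for the real output directory:

```shell
# Hypothetical sanity check: a converted HF LLaMA directory should contain
# at least a config and a tokenizer before the Vicuna delta is applied.
check_hf_dir() {
  dir="$1"
  for f in config.json tokenizer.model; do
    [ -e "$dir/$f" ] || { echo "missing: $f"; return 1; }
  done
  echo "ok"
}

# Demo on a throwaway directory standing in for the real --output_dir.
mkdir -p /tmp/demo_hf
touch /tmp/demo_hf/config.json /tmp/demo_hf/tokenizer.model
check_hf_dir /tmp/demo_hf   # → ok
```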
# Apply the delta to generate the FastChat Vicuna model
export https_proxy=http://192.168.12.65:1984 # route traffic through a proxy/VPN
python -m fastchat.model.apply_delta --base /home/hcx/out/model/transformer_model_7b --target /home/hcx/out/model/vicuna-7b --delta lmsys/vicuna-7b-delta-v1.1

  • --base: path to the converted Hugging Face base model
  • --target: output path for the generated Vicuna model
  • --delta: the delta weights to apply (here, lmsys/vicuna-7b-delta-v1.1 from the Hugging Face Hub)
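Conceptually, apply_delta reconstructs the Vicuna weights by adding the released delta to the base LLaMA weights element-wise. A toy sketch of that idea, with plain Python lists standing in for tensors; this is not FastChat's actual code:

```shell
python3 - <<'EOF'
# Toy stand-ins, not real model weights.
base  = [1, 2, 3]     # values from the converted base model
delta = [10, 20, 30]  # values from the released delta
# apply_delta's core idea: target = base + delta, element-wise.
print([b + d for b, d in zip(base, delta)])
EOF
# → [11, 22, 33]
```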

Launching Vicuna

# Launch Vicuna from the command line on the server
CUDA_VISIBLE_DEVICES='4,5'  python -m fastchat.serve.cli --model-path /home/hcx/out/model/vicuna-7b --num-gpus 2
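CUDA_VISIBLE_DEVICES='4,5' restricts the process to physical GPUs 4 and 5, which then appear as logical devices 0 and 1 inside it. A quick sketch showing the variable reaching the child process; the heredoc is a stand-in for the FastChat CLI:

```shell
# The env var is set only for this one child process.
CUDA_VISIBLE_DEVICES='4,5' python3 - <<'EOF'
import os
# Frameworks such as PyTorch read this to decide which GPUs are visible.
print(os.environ.get("CUDA_VISIBLE_DEVICES"))
EOF
# → 4,5
```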
# Launch Vicuna via the web GUI
# Step 1: start the controller service
python3 -m fastchat.serve.controller
# Step 2: start the worker service
CUDA_VISIBLE_DEVICES='1,2' python -m fastchat.serve.model_worker --model-path /home/hcx/out/model/vicuna-7b --num-gpus 2
# Step 2.1: verify that the controller and worker can communicate
python3 -m fastchat.serve.test_message --model-name vicuna-7b

# Step 3: start the Gradio web server
python -m fastchat.serve.gradio_web_server
# Visit IP:7860 in a browser
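Once the Gradio server is up, port 7860 should answer HTTP requests. A small readiness sketch; it assumes the default port 7860 on the local machine (adjust the URL if your server runs elsewhere) and prints "down" until the server is actually serving:

```shell
python3 - <<'EOF'
import urllib.request
try:
    # Assumes the default Gradio port on the local machine.
    urllib.request.urlopen("http://127.0.0.1:7860/", timeout=2)
    print("up")
except Exception:
    print("down")
EOF
```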
posted @ 2023-05-25 16:04  壶小旭