GitHub Tongyi Qianwen (Qwen) Model Test

Tongyi Qianwen (Qwen)

Installing the model

Related links
git clone https://github.com/QwenLM/Qwen2-Audio.git
https://github.com/QwenLM/Qwen2-Audio/blob/main/README_CN.md

PS C:\Users\supermao> pip install modelscope   -i https://pypi.tuna.tsinghua.edu.cn/simple 
PS C:\Users\supermao> modelscope download --model qwen/Qwen2-Audio-7B-Instruct
Downloading: 100%|███████████████████████████████████████████████████████████████████████████| 853/853 [00:00<00:00, 1.29kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████| 48.0/48.0 [00:00<00:00, 73.1B/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████| 230/230 [00:00<00:00, 356B/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████| 1.59M/1.59M [00:00<00:00, 1.89MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████| 3.64G/3.64G [02:10<00:00, 29.9MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████| 3.71G/3.71G [01:44<00:00, 38.1MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████| 3.71G/3.71G [01:56<00:00, 34.1MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████| 3.39G/3.39G [01:36<00:00, 37.8MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████| 1.19G/1.19G [02:58<00:00, 7.16MB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████| 77.1k/77.1k [00:00<00:00, 107kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████| 342/342 [00:00<00:00, 481B/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████| 6.70M/6.70M [00:02<00:00, 2.81MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████| 623k/623k [00:01<00:00, 626kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████| 2.65M/2.65M [00:01<00:00, 1.81MB/s]
PS C:\Users\supermao>

Downloaded to:
C:\Users\supermao\.cache\modelscope\hub\qwen
Then moved to the desktop.
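
The same download can also be scripted with the modelscope Python API instead of the CLI. A minimal sketch (the cache_dir argument is my assumption for steering the files to a chosen directory; without it they land in the default cache shown above):

# Minimal sketch: download Qwen2-Audio-7B-Instruct with the modelscope Python API.
# cache_dir is optional; omit it to use the default ~/.cache/modelscope/hub location.
from modelscope import snapshot_download

model_dir = snapshot_download(
    "qwen/Qwen2-Audio-7B-Instruct",
    cache_dir=r"C:\Users\supermao\Desktop",  # hypothetical target directory
)
print(model_dir)  # local path of the downloaded snapshot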

Using a Dockerfile

# Step 1: start from the specified PyTorch image
FROM pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel
# Step 2: copy your files into the container
# Assuming they sit in the build context, copy them into /workspace/Qwen2-Audio-main inside the container
COPY ./Qwen2-Audio-main /workspace/Qwen2-Audio-main

# Set the working directory
WORKDIR /workspace
RUN apt-get update -y && apt-get install git -y
# Step 3: install dependencies
RUN pip install git+https://github.com/huggingface/transformers
# If the install fails, retry with the Tsinghua mirror:  -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN pip install --no-cache-dir -r ./Qwen2-Audio-main/demo/requirements_web_demo.txt
RUN mkdir /work
RUN pip install 'accelerate>=0.21.0'
# Step 4: run the Python script
CMD ["python", "Qwen2-Audio-main/demo/web_demo_audio.py"]



Build and run

PS C:\Users\supermao> docker build -t my_web_demo_image .
PS C:\Users\supermao> docker run -it --gpus all -p 8000:8000 -v .\Desktop\Qwen2-Audio-7B-Instruct:/work/Qwen2-Audio-7B-Instruct  fe812b39f7d9   /bin/bash
root@1503ec46800c:/workspace/Qwen2-Audio-main/demo# python web_demo_audio.py
Traceback (most recent call last):
  File "/workspace/Qwen2-Audio-main/demo/web_demo_audio.py", line 157, in <module>
    model = Qwen2AudioForConditionalGeneration.from_pretrained(
  File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3319, in from_pretrained
    raise ImportError(
ImportError: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install 'accelerate>=0.21.0'`
root@1503ec46800c:/workspace/Qwen2-Audio-main/demo#
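
The error comes from how the demo loads the model: from_pretrained is called with a device_map (and/or low_cpu_mem_usage=True), and that code path needs the accelerate package. A hedged sketch of that kind of loading call, pointing at the checkpoint mounted via -v above (the exact arguments in web_demo_audio.py may differ):

# Sketch of the loading style that triggers the accelerate requirement.
from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration

model = Qwen2AudioForConditionalGeneration.from_pretrained(
    "/work/Qwen2-Audio-7B-Instruct",  # checkpoint directory mounted into the container
    device_map="auto",                # lets accelerate split the weights across GPU and CPU
)
processor = AutoProcessor.from_pretrained("/work/Qwen2-Audio-7B-Instruct")

Installing accelerate inside the running container fixes it (vim is installed as well, for editing the demo script in the next step):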

pip install 'accelerate>=0.21.0'
apt install vim 

Change the listen address to 0.0.0.0 (edit web_demo_audio.py with vim):
def _get_args():
    parser = ArgumentParser()
    parser.add_argument("-c", "--checkpoint-path", type=str, default=DEFAULT_CKPT_PATH,
                        help="Checkpoint name or path, default to %(default)r")
    parser.add_argument("--cpu-only", action="store_true", help="Run demo with CPU only")
    parser.add_argument("--inbrowser", action="store_true", default=False,
                        help="Automatically launch the interface in a new tab on the default browser.")
    parser.add_argument("--server-port", type=int, default=8000,
                        help="Demo server port.")
    parser.add_argument("--server-name", type=str, default="0.0.0.0",
                        help="Demo server name.")

    args = parser.parse_args()
    return args
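
These arguments are presumably handed to Gradio's launch() further down in the script; with server-name set to 0.0.0.0 the demo listens on all interfaces inside the container, so the port published by docker run (-p 8000:8000) is reachable from the Windows host. A hedged sketch of what that call looks like (demo is the Gradio Blocks object assumed from the script):

# Assumed, not copied from the repo: forwarding the parsed args to Gradio.
demo.launch(
    server_name=args.server_name,  # 0.0.0.0 -> listen on all interfaces in the container
    server_port=args.server_port,  # 8000, matching the -p 8000:8000 port mapping
    inbrowser=args.inbrowser,
)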
root@1503ec46800c:/workspace/Qwen2-Audio-main/demo# python web_demo_audio.py

Start it again




## Enter the container and check GPU usage
PS C:\Users\supermao\Desktop> docker exec -it 1503ec46800c /bin/bash
root@1503ec46800c:/workspace# nvidia-smi
Sun Aug 18 12:31:35 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.112                Driver Version: 537.42       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1660 Ti     On  | 00000000:01:00.0  On |                  N/A |
| N/A   70C    P8               7W /  80W |   3085MiB /  6144MiB |      4%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A       358      C   /python3.10                               N/A      |
+---------------------------------------------------------------------------------------+
root@1503ec46800c:/workspace#
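
The per-process memory column shows N/A here (common when the GPU is passed through from Windows), but the same numbers can be read from PyTorch inside the container; a small sketch:

# Quick GPU memory check from Python inside the same container.
import torch

props = torch.cuda.get_device_properties(0)
print(props.name, round(props.total_memory / 1024**2), "MiB total")
print("allocated:", round(torch.cuda.memory_allocated(0) / 1024**2), "MiB")
print("reserved: ", round(torch.cuda.memory_reserved(0) / 1024**2), "MiB")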

root@1503ec46800c:/workspace/Qwen2-Audio-main/demo# python web_demo_audio.py
The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████| 5/5 [01:40<00:00, 20.02s/it]
Some parameters are on the meta device device because they were offloaded to the cpu.
generation_config GenerationConfig {
  "chat_format": "chatml",
  "do_sample": true,
  "eos_token_id": [
    151643,
    151645
  ],
  "max_new_tokens": 2048,
  "pad_token_id": 151643,
  "repetition_penalty": 1.1,
  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.5
}

/opt/conda/lib/python3.10/site-packages/gradio/utils.py:985: UserWarning: Expected 1 arguments for function <function reset_state at 0x7f27fae3b010>, received 0.
  warnings.warn(
/opt/conda/lib/python3.10/site-packages/gradio/utils.py:989: UserWarning: Expected at least 1 arguments for function <function reset_state at 0x7f27fae3b010>, received 0.
  warnings.warn(
Running on local URL:  http://0.0.0.0:8000

To create a public link, set `share=True` in `launch()`.
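
The "offloaded to the cpu" warning is expected: the shards downloaded above total roughly 16 GB, which cannot fit in the GTX 1660 Ti's 6 GB, so accelerate keeps part of the weights in CPU RAM and inference is correspondingly slow. To see which modules ended up where, the device map recorded by accelerate can be printed (a sketch, assuming the model variable from the demo script):

# hf_device_map is set when the model is loaded with a device_map;
# it maps module names to a GPU index, "cpu", or "disk".
print(model.hf_device_map)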

Commit the container as a new image

PS C:\Users\supermao> docker ps
CONTAINER ID   IMAGE          COMMAND                   CREATED          STATUS          PORTS                    NAMES
1503ec46800c   fe812b39f7d9   "/opt/nvidia/nvidia_…"   25 minutes ago   Up 25 minutes   0.0.0.0:8000->8000/tcp   competent_thompson

PS C:\Users\supermao> docker commit 1503ec46800c qwen2-audio-custom
sha256:ba9fa03ab6bc6be03369a5ecb0635ac4d8e3612d2e8a395b1ee4e9b9ebedccd2

PS C:\Users\supermao> docker images
REPOSITORY           TAG       IMAGE ID       CREATED         SIZE
qwen2-audio-custom   latest    ba9fa03ab6bc   7 seconds ago   18.3GB
my_web_demo_image    latest    fe812b39f7d9   4 hours ago     18.2GB
PS C:\Users\supermao>
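
With the manual fixes (accelerate, vim, the 0.0.0.0 edit) baked into qwen2-audio-custom, the demo can be started straight from the committed image. An example invocation, reusing the same mount and port mapping as before (the explicit script path is my assumption; adjust if the checkpoint path differs):

PS C:\Users\supermao> docker run -it --gpus all -p 8000:8000 -v .\Desktop\Qwen2-Audio-7B-Instruct:/work/Qwen2-Audio-7B-Instruct qwen2-audio-custom python /workspace/Qwen2-Audio-main/demo/web_demo_audio.py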

Final Dockerfile

# Step 1: start from the specified PyTorch image
FROM pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel
# Step 2: copy your files into the container
# Assuming they sit in the build context, copy them into /workspace/Qwen2-Audio-main inside the container
COPY ./Qwen2-Audio-main /workspace/Qwen2-Audio-main

# Set the working directory
WORKDIR /workspace
RUN apt-get update -y && apt-get install git -y
# Step 3: install dependencies
RUN pip install git+https://github.com/huggingface/transformers
# If the install fails, retry with the Tsinghua mirror:  -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN pip install --no-cache-dir -r ./Qwen2-Audio-main/demo/requirements_web_demo.txt
RUN mkdir /work
RUN pip install 'accelerate>=0.21.0'
RUN apt-get install vim  -y
# Step 4: run the Python script
CMD ["python", "Qwen2-Audio-main/demo/web_demo_audio.py"]

posted @ 2024-08-18 22:07  supermao12