Deploying Stable Diffusion, ComfyUI, and Flux on Windows with CUDA and pyenv

Quick run commands

Run

AUTOMATIC1111/stable-diffusion-webui

Alternatively, the flags can be supplied via $ENV:COMMANDLINE_ARGS="" (see the sketch below)
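
A minimal sketch of the same launch driven by the environment variable instead of command-line flags (the flag values simply mirror the pyenv command below; this assumes launch.py appends COMMANDLINE_ARGS to its arguments the same way webui-user.bat does):

$Env:COMMANDLINE_ARGS="--xformers --medvram --theme=dark --no-gradio-queue --models-dir C:/w1/ai/models/stable-diffusion-webui --listen --port=7860"
cd C:/w1/stable-diffusion-webui; .venv/Scripts/activate; python launch.py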

Using the pyenv environment

cd C:/w1/stable-diffusion-webui; .venv/Scripts/activate; python launch.py --xformers --medvram --theme=dark --no-gradio-queue --models-dir C:/w1/ai/models/stable-diffusion-webui --listen --port=7860

Using the conda environment

cd C:/w1/stable-diffusion-webui; conda activate stable_diffusion; python launch.py --xformers --medvram --theme=dark --no-gradio-queue --models-dir C:/w1/ai/models/stable-diffusion-webui --listen --port=7860

comfyanonymous/ComfyUI

Using the pyenv environment

cd C:/w1/ComfyUI; .venv/Scripts/activate; python main.py --listen --port 8188

Using the conda environment

cd C:/w1/ComfyUI; conda activate comfy; python main.py --listen --port 8188

black-forest-labs/flux

Using the pyenv environment

cd C:/w1/flux; .venv/Scripts/activate

Using the conda environment

cd C:/w1/flux; conda activate flux

Clone

AUTOMATIC1111/stable-diffusion-webui

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

comfyanonymous/ComfyUI

git clone https://github.com/comfyanonymous/ComfyUI.git

black-forest-labs/flux

git clone https://github.com/black-forest-labs/flux

Download

Download the FLUX.1 model

See https://hf-mirror.com/black-forest-labs/FLUX.1-dev/tree/main

$Env:HF_ENDPOINT="https://hf-mirror.com"
$Env:TOKEN="hf_***"
huggingface-cli download --resume-download black-forest-labs/FLUX.1-dev --local-dir-use-symlinks False --local-dir C:/w1/ai/models/flux/dev --token $Env:TOKEN

Download the flux-lora models

See https://hf-mirror.com/XLabs-AI/flux-lora-collection/tree/main

$Env:HF_ENDPOINT="https://hf-mirror.com"
$Env:TOKEN="hf_***"
huggingface-cli download --resume-download XLabs-AI/flux-lora-collection --local-dir-use-symlinks False --local-dir C:/w1/ai/models/flux/loras --token $Env:TOKEN

stable-diffusion-webui (py3.10.6 + torch 2.5.1 + cu124)

After activating .venv, the matching Python is found automatically; there is no need to add the pyenv shims to $Env:PATH (surprisingly convenient).
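
A quick sketch to confirm which interpreter the activated .venv actually resolves to (the checks are generic PowerShell/Python, not part of webui):

Get-Command python | Select-Object -ExpandProperty Source   # should point into .venv/Scripts
python -c "import sys; print(sys.prefix)"                    # should print the .venv directory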

Deployment

Environment setup (pyenv)

$Env:PYENV_ROOT="C:/_env/pyenv"; $Env:PATH+=";$Env:PYENV_ROOT/pyenv-win/bin;$Env:PYENV_ROOT/pyenv-win/shims"
pyenv versions
pyenv local 3.10.6
pip install virtualenv
virtualenv .venv
.venv/Scripts/activate

Environment setup (conda)

conda create -n stable_diffusion -y python=3.10.6
cd C:/w1/stable-diffusion-webui; conda activate stable_diffusion
# conda deactivate
# conda env remove -n stable_diffusion -y

Install dependencies

requirements.txt does not pin a torch version

# pip install torch --extra-index-url https://download.pytorch.org/whl/cu124
pip install C:/_env/torch-2.5.1+cu124-cp310-cp310-win_amd64.whl
pip install torch==2.5.1 -r requirements.txt
python -c "import torch; print(torch.cuda.get_device_name(0))"

Initialization

https://www.bilibili.com/read/cv20466834

Running inside mainland China?

Review launch.py / modules/launch_utils.py and point the repositories it clones at mirror proxies:

$Env:HF_ENDPOINT="https://hf-mirror.com"
$Env:ASSETS_REPO="https://gh.llkk.cc/https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets.git"
$Env:STABLE_DIFFUSION_REPO="https://gh.llkk.cc/https://github.com/Stability-AI/stablediffusion.git"
$Env:STABLE_DIFFUSION_XL_REPO="https://gh.llkk.cc/https://github.com/Stability-AI/generative-models.git"
$Env:K_DIFFUSION_REPO="https://gh.llkk.cc/https://github.com/crowsonkb/k-diffusion.git"
$Env:BLIP_REPO="https://gh.llkk.cc/https://github.com/salesforce/BLIP.git"

How to fix the CUDA version mismatch reported when using --xformers?

pip install -U xformers torch==2.5.1 --index-url https://download.pytorch.org/whl/cu124
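
To confirm the reinstall left torch and xformers on matching CUDA builds, a quick check (python -m xformers.info is xformers' own diagnostic report):

python -c "import torch, xformers; print(torch.__version__, torch.version.cuda, xformers.__version__)"
python -m xformers.info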

How to do a trial run without a GPU?

python launch.py --skip-torch-cuda-test --use-cpu all --no-half --precision full

Git error: "... is owned by BUILTIN/Administrators but the current user is DESKTOP-LH35818/admin"

git config --global --unset-all safe.directory
git config --global --add safe.directory "*"
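
To verify the setting took effect, list the current safe.directory entries:

git config --global --get-all safe.directory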

Download models

https://civitai.com/
Place in: models/Stable-diffusion, or models/Lora for LoRA files

First run

On first run, openai/clip-vit-large-patch14 is downloaded into ~/.cache/huggingface/hub/.
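
If that first-run download stalls, the weights can be pre-fetched through the same hf-mirror workflow used above (a sketch; with no --local-dir the files land in the default ~/.cache/huggingface/hub cache, which is where webui looks for them):

$Env:HF_ENDPOINT="https://hf-mirror.com"
huggingface-cli download --resume-download openai/clip-vit-large-patch14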

VAE

Place in: models/VAE

https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main

Chinese localization

Place in: extensions

git clone https://ghproxy.com/github.com/dtlnor/stable-diffusion-webui-localization-zh_CN

ControlNet

Place in: extensions

git clone https://ghproxy.com/github.com/Mikubill/sd-webui-controlnet

Place the models in: extensions/sd-webui-controlnet/models (a download sketch follows the list below)

# https://zhuanlan.zhihu.com/p/607139523
# git clone https://huggingface.co/toyxyz/Control_any3
https://huggingface.co/toyxyz/Control_any3/blob/main/control_any3_canny.pth  # line art (canny)
https://huggingface.co/toyxyz/Control_any3/blob/main/control_any3_openpose.pth  # pose skeleton (openpose)
https://huggingface.co/toyxyz/Control_any3/blob/main/control_any3_seg.pth  # sketch coloring (seg)
https://huggingface.co/toyxyz/Control_any3/blob/main/control_any3_mlsd.pth  # architecture coloring (mlsd)
# hed, scribble: line art
# normal, depth: 3D normals, depth
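
A sketch of fetching those .pth files through the same hf-mirror workflow used earlier, instead of clicking the blob links (repo and target directory as above; add or drop filenames as needed):

$Env:HF_ENDPOINT="https://hf-mirror.com"
huggingface-cli download --resume-download toyxyz/Control_any3 control_any3_canny.pth control_any3_openpose.pth --local-dir C:/w1/stable-diffusion-webui/extensions/sd-webui-controlnet/models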

ComfyUI (torch 2.5.1 + cu124)

Common commands

After activating .venv, the matching Python is found automatically; there is no need to add the pyenv shims to $Env:PATH (surprisingly convenient).

# type deactivate to leave the previous environment first
.venv/Scripts/activate

Deployment

Environment setup (pyenv)

$Env:PYENV_ROOT="C:/_env/pyenv"; $Env:PATH+=";$Env:PYENV_ROOT/pyenv-win/bin;$Env:PYENV_ROOT/pyenv-win/shims"
pyenv versions
pyenv local 3.12.6
pip install virtualenv
virtualenv .venv
.venv/Scripts/activate

Environment setup (conda)

conda create -n comfy -y python=3.12.6
cd C:/w1/ComfyUI; conda activate comfy
# conda deactivate
# conda env remove -n comfy -y

Install dependencies

requirements.txt does not pin a torch version, but torchvision and torchaudio must be installed

# pip install torch --extra-index-url https://download.pytorch.org/whl/cu124
pip install C:/_env/torch-2.5.1+cu124-cp312-cp312-win_amd64.whl
# pin torch to keep pip from upgrading to torch==2.6.0
pip install torch==2.5.1 torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
python -c "import torch; print(torch.cuda.get_device_name(0))"

Configuration

Share the SD model directories

extra_model_paths.yaml

a111:
    base_path: C:/w1/ai/models/stable-diffusion-webui/
    # ... remaining entries follow the template, with the models/ prefix removed
comfyui:
    base_path: C:/w1/ai/models/comfyui/
    is_default: true
    clip_interrogator: clip_interrogator/
    prompt_generator: prompt_generator/
flux:
    base_path: C:/w1/ai/models/flux/
    diffusion_models: dev/
    transformer: dev/transformer/
    vae: |
        dev/
        dev/vae/
    text_encoder: |
        dev/text_encoder/
        dev/text_encoder_2/
    loras: loras/
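
ComfyUI ships a template named extra_model_paths.yaml.example in its repo root; a sketch of creating the live config from it before filling in the entries above:

Copy-Item C:/w1/ComfyUI/extra_model_paths.yaml.example C:/w1/ComfyUI/extra_model_paths.yaml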

ComfyUI-Manager cannot find insightface

Pick the insightface-0.7.3-cp312-cp312-win_amd64.whl that matches your Python version, then install it with pip

wget https://raw.githubusercontent.com/Gourieff/Assets/refs/heads/main/Insightface/insightface-0.7.3-cp312-cp312-win_amd64.whl -OutFile insightface-0.7.3-cp312-cp312-win_amd64.whl  # in PowerShell, wget aliases Invoke-WebRequest; -OutFile saves the wheel to disk
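
Then install the downloaded wheel into the active environment:

pip install ./insightface-0.7.3-cp312-cp312-win_amd64.whl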

ComfyUI-Manager cannot find xformers

Be sure to pin the torch version

pip install -U xformers torch==2.5.1 --index-url https://download.pytorch.org/whl/cu124

ComfyUI-Manager: Please REINSTALL package 'opencv-contrib-python'

Resolve the version conflict

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.
This behaviour is the source of the following dependency conflicts.

inference-gpu 0.39.0 requires opencv-python<=4.10.0.84,>=4.8.1.78,

pip uninstall opencv-python opencv-python-headless opencv-contrib-python
pip install opencv-python==4.10.0.84 opencv-python-headless==4.10.0.84 opencv-contrib-python==4.10.0.84
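
A quick check that the opencv packages now resolve to the pinned version:

python -c "import cv2; print(cv2.__version__)"
pip list | findstr opencv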

FileNotFoundError: [Errno 2] No such file or directory: 'ComfyUI/custom_nodes/comfyui_ultimatesdupscale/repositories/ultimate_sd_upscale/scripts/ultimate-upscale.py'

git clone https://github.com/Coyote-A/ultimate-upscale-for-automatic1111 C:/w1/ComfyUI/custom_nodes/comfyui_ultimatesdupscale/repositories/ultimate_sd_upscale

clip_interrogator_model not found

$Env:HF_ENDPOINT="https://hf-mirror.com"; huggingface-cli download --resume-download Salesforce/blip-image-captioning-base --local-dir C:/w1/ai/models/comfyui/clip_interrogator/Salesforce/blip-image-captioning-base

text_generator_model not found

$Env:HF_ENDPOINT="https://hf-mirror.com"; huggingface-cli download --resume-download succinctly/text2image-prompt-generator --local-dir C:/w1/ai/models/comfyui/prompt_generator/text2image-prompt-generator

zh_en_model not found

$Env:HF_ENDPOINT="https://hf-mirror.com"; huggingface-cli download --resume-download Helsinki-NLP/opus-mt-zh-en --local-dir C:/w1/ai/models/comfyui/prompt_generator/opus-mt-zh-en

flux.1 (py312+torch?+cu?)

This project has no UI and does not even ship a requirements.txt.

Installation

https://blog.csdn.net/m0_59162248/article/details/143177741

Environment setup (pyenv)

$Env:PYENV_ROOT="C:/_env/pyenv"; $Env:PATH+=";$Env:PYENV_ROOT/pyenv-win/bin;$Env:PYENV_ROOT/pyenv-win/shims"
pyenv versions
pyenv local 3.12.6
pip install virtualenv
virtualenv .venv
.venv/Scripts/activate

Environment setup (conda)

conda create -n flux -y python=3.12.6
cd C:/w1/flux; conda activate flux
# conda deactivate
# conda env remove -n flux -y

Install dependencies

pip install huggingface_hub
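
The deprecated FluxPipeline example below also needs the diffusers stack; a sketch of the extra packages it imports (the exact list is an assumption, since the project ships no requirements.txt; the torch wheel mirrors the one used for ComfyUI above):

pip install C:/_env/torch-2.5.1+cu124-cp312-cp312-win_amd64.whl
pip install diffusers transformers accelerate sentencepiece protobuf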

Download models

Download the flux model (see the Download section above)

Download the flux_lora models (see the Download section above)

Run

Command line (deprecated)

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload() #save some VRAM by offloading the model to CPU. Remove this if you have enough GPU power

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("flux-dev.png")

dec

Local checksum verification

certutil -hashfile <filename> SHA256
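
For example, against one of the FLUX.1-dev files downloaded earlier (the filename is only an example; compare the output with the SHA256 listed on the model page):

certutil -hashfile C:/w1/ai/models/flux/dev/flux1-dev.safetensors SHA256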

Code repositories

https://doget.nocsdn.com
https://github.com/civitai/sd_civitai_extension
#
git clone https://github.com/Rudrabha/Wav2Lip
#
git clone https://github.com/alwxkxk/threejs-example.git
git clone https://github.com/alwxkxk/iot-visualization-examples.git