Deploying Stable Diffusion with Docker, Running on CPU under Linux

A powerful text-to-image model

The Stable Diffusion code is open source on GitHub:
https://github.com/CompVis/stable-diffusion

SDXL is the official upgrade of the v1.5 model: SDXL has 6.6 billion parameters in total, versus 0.98 billion for v1.5.
SDXL's default image size is 1024×1024, four times the 512×512 of the v1.5 models.
SDXL 1.0 should work well on a GPU with 8 GB of VRAM, but sd-webui currently cannot manage that (it needs about 10 GB); only ComfyUI can.

Stable Diffusion web UI
https://github.com/AUTOMATIC1111/stable-diffusion-webui
1.9.3, released 2024-04-23
Version 1.6 performs well and is friendly to users with little VRAM; running a large SDXL model no longer easily blows past VRAM.

For the CPU version, use this fork:
https://github.com/openvinotoolkit/stable-diffusion-webui
This repo is a fork of AUTOMATIC1111/stable-diffusion-webui which includes OpenVINO support through a custom script to run it on Intel CPUs and Intel GPUs.
It tracks version 1.6.

Full-power Stable Diffusion on AMD GPUs (SD + Fooocus + ComfyUI): a painless deployment note (Linux + ROCm 6.1.1)
https://zhuanlan.zhihu.com/p/656480759

Stable-Diffusion-WebUI user guide
https://blog.csdn.net/qq_39108291/article/details/131025589

Differences between Stable Diffusion and Midjourney

1. Characteristics of Midjourney:

Midjourney is a commercial product: users must pay to use it, and it is only accessible through the official Discord bot on its Discord server. Midjourney has not published its technical details, but the images it generates are stunning; ordinary people can hardly tell that its output was produced by AI.
Midjourney is good at adapting to real artistic styles and producing images with whatever combination of effects the user wants. It excels at environmental effects, particularly fantasy and sci-fi scenes that look like game art. Its prompting bar is low: you can get decent images without especially detailed descriptions. The downsides are that the output is hard to control, and a great many words are banned as sensitive; terms like "bare" and "nude" cannot be used.

2. Characteristics of Stable Diffusion:

Stable Diffusion is an open-source model that anyone can use for free (though you need a GPU to run it); you can also deploy it to Google Colab and Drive to borrow a Tesla T4. Stable Diffusion is a conditional diffusion model based on the latent diffusion model (LDM), using text embeddings extracted by a CLIP text encoder as the condition. It has a good grasp of contemporary art imagery and can produce detail-rich artwork. Beyond text-to-image, it also supports image-to-image, inpainting, personalized model training, controllable generation, and many other extensions. Stable Diffusion is well suited to generating complex, creative illustrations. Its drawbacks include a higher prompting bar, problems with hands, LoRA compatibility issues, and so on.

3. Strengths and weaknesses of Midjourney and Stable Diffusion

Each has its pros and cons, as follows:

① Midjourney's strengths: high output quality, stable results, simple prompts, rich artistic styles, excellent environmental effects.

② Midjourney's weaknesses: expensive, usable only through Discord, opaque technical details, hard-to-control output, too many banned words.

③ Stable Diffusion's strengths: open source and free, deployable locally or in the cloud, transparent technical principles, diverse extensions, good grasp of contemporary art.

④ Stable Diffusion's weaknesses: requires GPU resources, higher prompting bar, prominent hand problems, poor LoRA compatibility.

For ordinary users, Stable Diffusion may be the better choice than Midjourney, for these reasons:

① Stable Diffusion is free while Midjourney charges, so it is the better value on a limited budget.

② Stable Diffusion is open source while Midjourney is closed, so it is more transparent for anyone who wants to understand how AI image generation works.

③ Stable Diffusion is flexible while Midjourney is fixed, so it offers more for anyone who wants to try different features and plugins.

④ Stable Diffusion is innovative while Midjourney is mature, so it is more fun for anyone who wants to challenge themselves and exercise their imagination.

In short: knowing how to use Midjourney is like knowing how to buy a ticket and ride the bus somewhere, where you can only choose the route; knowing how to use Stable Diffusion is like owning a car and holding a license, so you can go wherever you want and drive however you want (this is the key point).

Installation requirements

The key things to check: GPU, RAM, disk, and CPU. The GPU matters most: an Nvidia card, 10-series at minimum, with at least 4 GB of VRAM (6 GB is adequate). RAM: at least 8 GB, with 16 GB adequate. Disk: ideally 500 GB or more free; SSD is best, though a mechanical drive is not much of a problem. There is no real requirement on the CPU; machines with a good GPU rarely have a bad one.
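As a quick sanity check of these minimums, total RAM and free disk can be read straight from the system; a sketch (the 8 GB threshold mirrors the figure above, and thresholds are easily adjusted):

```shell
# Check total RAM against the 8 GB minimum and show free disk on /
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
echo "RAM: $((mem_kb / 1024 / 1024)) GB"
if [ "$mem_kb" -ge $((8 * 1024 * 1024)) ]; then
  echo "RAM meets the 8G minimum"
else
  echo "RAM is below the 8G minimum"
fi
df -h / | tail -1    # free space on the root filesystem
```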

Common misconceptions

  • Misconception 1: you must use Linux or Windows
    macOS also works, but it has more pitfalls of its own, so I don't recommend it (admittedly I have not tried it myself).
    On Windows and Linux the graphical desktop consumes VRAM, which is especially noticeable when VRAM is small (only the Server edition of Windows can be installed command-line only).

  • Misconception 2: you must use an Nvidia card
    Nvidia cards simply have fewer pitfalls and problems are easier to look up; they are not the only option. AMD cards and CPUs also work, and Intel cards work in theory but are harder to use. Nvidia cards use CUDA; AMD cards use ROCm.

  • Misconception 3: you must use conda
    People use conda because developers often need multiple Python environments; for a personal deployment, installing straight into the system is fine.

Do not bother trying to install ROCm on Windows through a virtual machine or WSL.

Windows installation

Recommended: the integration package by Qiuye (秋葉aaaki) on Bilibili, with the 绘世 launcher; it supports CPU, Nvidia cards, and AMD cards.
Because of network download problems, the models used here are the ones bundled with the Qiuye package.

The most recent package update is v4.8, from April 2024.
Netdisk: https://pan.quark.cn/s/2c832199b09b
Unzip password: bilibili@秋葉aaaki
https://www.bilibili.com/video/BV1iM4y1y7oA/?spm_id_from=333.976.0.0

Model files

Stable Diffusion works with five model file types, Checkpoint, Embedding, Hypernetwork, LoRA, and VAE, plus a sixth, plugin-based one (Aesthetic Gradients).

  • 1. Checkpoint
    A Checkpoint model is what we pick in the WebUI's "Stable Diffusion checkpoint" dropdown. I think of it as the AI's memory: it controls what the AI can draw and what the results look like.
  • 2. LoRA
    A LoRA (Low-Rank Adaptation) model is a small add-on used to fine-tune a Checkpoint model in a specific direction. My own understanding is that it remaps a Checkpoint's prompt associations. For example, if in some Checkpoint the prompt "cat" produces Garfield, a LoRA can remap "cat" so that its associated look becomes Hello Kitty; the same Checkpoint with that LoRA applied will then draw Hello Kitty instead of Garfield whenever "cat" appears in the prompt.
  • 3. Embedding
    In Stable Diffusion, an Embedding model is a way to inject new content into an existing model. An Embedding can be trained from very little image data to capture a new style or a new character, which can then be invoked through a specific trigger word in the prompt.
  • 4. Hypernetworks
    A Hypernetwork is a stylization file that applies a specified art style to AI-generated images. For example, a plain Hello Kitty prompt draws a normal Hello Kitty; apply a Hypernetwork trained on Kim Hyung-tae's style and the AI draws a Hello Kitty in that style instead.
  • 5. VAE
    The VAE decodes the compressed latent-space data back into a normal image, as covered in chapter one. Different VAEs affect the color tone of the output: without a VAE, generated images all have a grayish, washed-out look, and using one changes the saturation.
  • 6. Aesthetic Gradients
    Aesthetic Gradients is a model-modification technique that exists as a plugin; its model files require the Aesthetic Gradients extension to work. The effect is similar to Hypernetworks, but the plugin exposes more tunable parameters, whereas a Hypernetwork's parameters are baked in and cannot be changed.
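Each of the model types above lives in its own directory under the WebUI root; a minimal sketch of that layout (the `sd-demo` root is hypothetical; the mapping follows the folder descriptions later in this document):

```shell
# Create the directory layout each model type expects
SD_ROOT=./sd-demo                 # hypothetical WebUI root
mkdir -p "$SD_ROOT"/models/Stable-diffusion \
         "$SD_ROOT"/models/Lora \
         "$SD_ROOT"/models/VAE \
         "$SD_ROOT"/models/hypernetworks \
         "$SD_ROOT"/embeddings
# Checkpoint (.ckpt/.safetensors) -> models/Stable-diffusion
# LoRA (.safetensors)             -> models/Lora
# VAE (.ckpt/.vae.pt)             -> models/VAE
# Hypernetwork (.pt)              -> models/hypernetworks
# Embedding (.pt)                 -> embeddings/
find "$SD_ROOT" -type d | sort
```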

Related sites

Tusi
Online generation, prompts, model downloads
https://tusi.cn/

Hugging Face
Not reachable from mainland China
https://huggingface.co/

"C site"
Civitai, dedicated to collecting and sharing Stable Diffusion models:
1,700+ curated Stable Diffusion model files, free to download.
10,000+ sample images with prompts, for learning how to describe content.
https://civitai.com
Chinese mirror:
https://civitai.work/

LiblibAI
Collects many kinds of Stable Diffusion models; generate online, or download these image- and video-generation models.
https://www.liblib.art/

AI prompts
https://www.4b3.com/

AI image-generation knowledge base (tutorials): https://guide.novelai.dev/
Tag supermarket (tag parsing and combination): https://tags.novelai.dev/
Extract tags from an image: https://spell.novelai.dev/

Model share download link: https://pan.quark.cn/s/32f374eef667

Extended uses of AI image generation

  • ControlNet
    ControlNet works by first running a layer of machine-vision preprocessing on a reference image (OpenCV-style recognition algorithms such as pose recognition, hand recognition, edge detection, and depth estimation) to produce intermediate maps: body-skeleton keypoints, hand-skeleton keypoints, edge maps, depth maps, and so on. The AI then creates while referencing those intermediate maps.

  • Panorama creation
    Stitching 360° images

  • The mov2mov extension
    https://github.com/Scholar01/sd-webui-mov2mov
    Generates a synchronized video file

  • Outfit swapping

  • OutPainting
    Outpainting is an officially provided image-extension technique: the AI extends the picture beyond the borders of an existing image.

  • SadTalker
    Lip sync

  • Generating similar images
    This is very useful for blending in awkward-looking elements after editing a picture in Photoshop: quickly paste the elements you want into the image with PS, then let the AI fuse them. The quality of the result still depends on the checkpoint model.

  • High-resolution redraw of small images

  • Super-resolution upscaling

Preparing a virtual machine

A Rocky 9.3 VM under VirtualBox:
10 cores / 17 GB RAM / 120 GB disk

Checking the environment

OS version

cat /etc/redhat-release
Rocky Linux release 9.3 (Blue Onyx)

Kernel version

uname -a
Linux localhost.localdomain 5.14.0-362.8.1.el9_3.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Nov 8 17:36:32 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

ssh and openssl versions

ssh -V
OpenSSH_8.7p1, OpenSSL 3.0.7 1 Nov 2022
No upgrade needed.

Python version

python -V
Python 3.9.18

glibc version

ldd --version
ldd (GNU libc) 2.34

Enable legacy crypto policies on Rocky 9

So that clients with older ssh versions can connect:
update-crypto-policies --show
update-crypto-policies --set LEGACY

Change the IP address

View the current IP

ip a
nmcli device show
nmcli con show

Edit via the configuration file

vi /etc/NetworkManager/system-connections/enp0s3.nmconnection

[ipv4]
method=manual
address1=192.168.244.14/24,192.168.244.1
dns=223.5.5.5;1.1.1.1

Reload to apply

nmcli connection reload
nmcli connection down enp0s3 && nmcli connection up enp0s3
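The address1= key in nmconnection files packs the static IP/prefix and the gateway into one comma-separated value; a small sketch splitting it apart with shell parameter expansion:

```shell
# address1 = "<ip>/<prefix>,<gateway>" in nmconnection files
line='address1=192.168.244.14/24,192.168.244.1'
v=${line#address1=}      # drop the key
ipprefix=${v%,*}         # -> 192.168.244.14/24
gateway=${v#*,}          # -> 192.168.244.1
echo "ip/prefix=$ipprefix gateway=$gateway"
```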

Set the hostname

hostnamectl set-hostname dev-ai-diffusion.local

hostnamectl status
 Static hostname: dev-ai-diffusion.local
       Icon name: computer-vm
         Chassis: vm
      Machine ID: 45d4ec6ccf3646248a8b9cc382baf29d
         Boot ID: 0b6cfd83e8534a1f81f08de3962b8ba9
  Virtualization: oracle
Operating System: Rocky Linux 9.3 (Blue Onyx)
     CPE OS Name: cpe:/o:rocky:rocky:9::baseos
          Kernel: Linux 5.14.0-362.8.1.el9_3.x86_64
    Architecture: x86-64
 Hardware Vendor: innotek GmbH
  Hardware Model: VirtualBox
Firmware Version: VirtualBox

Install the Chinese language pack

localectl list-locales |grep zh
dnf list |grep glibc-langpack
dnf install glibc-langpack-zh

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

selinux

setenforce 0 && sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

Enable paste mode in vim

echo 'set paste' >> ~/.vimrc

Time zone

timedatectl set-timezone Asia/Shanghai

Enable the rc.local startup script

systemctl enable rc-local.service
systemctl start rc-local.service
systemctl status rc-local.service

chmod +x /etc/rc.d/rc.local
chmod +x /etc/rc.local

Common tools

yum -y install wget vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 tar curl lrzsz rsync psmisc sysstat lsof

Docker yum repository

Step 1: install the required system tools

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Step 2: add the repository information

sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Step 3: point the repo at the Aliyun mirror

sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

Step 4: refresh the cache and install Docker CE

sudo yum makecache
sudo yum -y install docker-ce

Installed:
  container-selinux-3:2.229.0-1.el9.noarch          containerd.io-1.6.31-3.1.el9.x86_64           docker-buildx-plugin-0.14.0-1.el9.x86_64     docker-ce-3:26.1.2-1.el9.x86_64      docker-ce-cli-1:26.1.2-1.el9.x86_64    
  docker-ce-rootless-extras-26.1.2-1.el9.x86_64     docker-compose-plugin-2.27.0-1.el9.x86_64     fuse-common-3.10.2-8.el9.x86_64              fuse-overlayfs-1.13-1.el9.x86_64     fuse3-3.10.2-8.el9.x86_64              
  fuse3-libs-3.10.2-8.el9.x86_64                    libslirp-4.4.0-7.el9.x86_64                   slirp4netns-1.2.3-1.el9.x86_64               tar-2:1.34-6.el9_1.x86_64           

Note: the repo enables the latest version by default; to install a specific Docker CE version, list the candidates first:
yum list docker-ce.x86_64 --showduplicates | sort -r

docker-ce.x86_64               3:26.1.2-1.el9                  docker-ce-stable 
docker-ce.x86_64               3:26.1.2-1.el9                  @docker-ce-stable
docker-ce.x86_64               3:26.1.1-1.el9                  docker-ce-stable 

yum -y install docker-ce-[VERSION]
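The [VERSION] placeholder is the second column of the `yum list` output with the `3:` epoch prefix dropped; for example, extracting it from the 26.1.2 line shown above:

```shell
# Pull the installable version string out of a "yum list --showduplicates" line
line='docker-ce.x86_64               3:26.1.2-1.el9                  docker-ce-stable'
ver=$(echo "$line" | awk '{print $2}' | cut -d: -f2)   # strip the epoch
echo "yum -y install docker-ce-$ver"
```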

Step 5: start the Docker service

sudo service docker start

Check the version

docker version

Engine:
  Version:          26.1.2
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.10
  Git commit:       ef1912d
  Built:            Wed May  8 13:59:54 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.31
  GitCommit:        e377cd56a71523140ca6ae87e30244719194a521
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Docker registry mirror (acceleration)

vi /etc/docker/daemon.json

{
"registry-mirrors":["https://docker.nju.edu.cn/"]
}
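A malformed daemon.json stops the Docker daemon from starting, so it is worth validating the JSON before restarting docker; a sketch that checks a local copy with Python's json.tool:

```shell
# Write the mirror config and confirm it parses as JSON before
# installing it as /etc/docker/daemon.json and restarting docker
cat > daemon.json <<'EOF'
{
  "registry-mirrors": ["https://docker.nju.edu.cn/"]
}
EOF
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json ok"
```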

Docker Nvidia driver

Not required; install only when using an Nvidia card.

yum install -y nvidia-docker2

systemctl daemon-reload
systemctl enable docker
systemctl restart docker

Install git

yum install git

Install from existing third-party images

Download Stable Diffusion Docker images

To download a Stable Diffusion Docker image, use commands like the following.
Community images (there is no single official Stable Diffusion image):
docker pull diffusion/stable:latest
docker pull kestr3l/stable-diffusion-webui

Others

Tencent-accelerated:
docker pull gpulab.tencentcloudcr.com/ai/stable-diffusion:1.0.8

Community images on docker.io:
docker pull bobzhao1210/diffusion-webui

Images hosted abroad:

docker pull siutin/stable-diffusion-webui-docker:latest-cpu
docker pull siutin/stable-diffusion-webui-docker:latest-cuda

Domestic mirror:

docker pull registry.cn-hangzhou.aliyuncs.com/sunsharing/stable-diffusion-webui-docker:latest-cpu

Aliyun's official GPU image, fairly recent:

docker pull egs-registry.cn-hangzhou.cr.aliyuncs.com/egs/sd-webui:4.3.5-full-pytorch2.1.2-ubuntu22.04

Using the Aliyun GPU image

docker pull egs-registry.cn-hangzhou.cr.aliyuncs.com/egs/sd-webui:4.3.5-full-pytorch2.1.2-ubuntu22.04

docker images

REPOSITORY                                              TAG                                   IMAGE ID       CREATED         SIZE
egs-registry.cn-hangzhou.cr.aliyuncs.com/egs/sd-webui   4.3.5-full-pytorch2.1.2-ubuntu22.04   22226c589a11   3 weeks ago     34.7GB

Versions inside the image:
version: 1.6.0  •  python: 3.10.10  •  torch: 2.1.2+cu118  •  xformers: N/A  •  gradio: 3.41.2

Download the standard SD 1.5 model:

Put it in models/Stable-diffusion. Download it when using a GPU; it does not run on CPU. If it is missing, the UI will try to download it automatically at startup. It is about 4 GB.

mkdir /host  
cd /host  
wget https://aiacc-inference-public-v2.oss-accelerate.aliyuncs.com/aiacc-inference-webui/models/v1-5-pruned-emaonly.safetensors

Download the domestic AnythingV5_v5PrtRE.safetensors, about 2 GB:

wget -O AnythingV5_v5PrtRE.safetensors 'https://api.tusiart.com/community-web/v1/model/file/url/v2?modelFileId=600998819589521712'

The WebUI requires at least one checkpoint model at startup; this one does support CPU.
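A cheap post-download sanity check for these files: a .safetensors file starts with an 8-byte little-endian length N, followed by an N-byte JSON header describing the tensors. A sketch, demonstrated on a tiny handmade stand-in file rather than the real multi-GB download:

```shell
# Build a minimal stand-in .safetensors: 8-byte LE header length + JSON header
printf '\023\000\000\000\000\000\000\000{"__metadata__":{}}' > demo.safetensors
f=demo.safetensors   # point this at the real download instead
# Fold the first 8 bytes into one little-endian integer
hdr_len=$(od -An -tu1 -N8 "$f" | awk '{n=0; for(i=NF;i>=1;i--) n=n*256+$i; print n}')
echo "header length: $hdr_len bytes"
# Show the JSON header (tensor names and dtypes in a real file)
tail -c +9 "$f" | head -c "$hdr_len"
```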

Stable Diffusion WebUI folder layout

  • localizations: localization packs
  • configs: configuration files
  • embeddings: directory for art-style (embedding) files; they usually end in .pt and are a few dozen KB;
  • extensions: directory for extensions; the WebUI plugins we download go here, WebUI loads them automatically at startup, and each plugin directory is a git repo that can be updated directly with git;
  • extensions-builtin: WebUI's built-in extensions;
  • models/hypernetworks: directory for stylization files, used together with prompts so the AI produces images in the corresponding style; they also end in .pt and run to a few hundred MB;
  • models/Lora: directory for LoRA files, which adjust the model by remapping a checkpoint's prompt associations so the AI draws in the LoRA's style for the corresponding trigger words; they usually end in .safetensors and run to a few hundred MB;
  • models/Stable-diffusion: directory for checkpoint models; sampling during generation draws on these files, which determine the overall look and style of the images; usually .ckpt or .safetensors, several GB each;
  • models/VAE: directory for VAE files, which affect the overall color tone; the grayish images people get when first using WebUI come from no VAE being set by default; usually .ckpt or .vae.pt, from a few hundred MB to several GB;
  • outputs/extras-images: default save path for upscaled originals;
  • outputs/img2img-grids: default save path for grid thumbnails from batch img2img;
  • outputs/img2img-images: default save path for img2img originals;
  • outputs/txt2img-grids: default save path for grid thumbnails from batch txt2img;
  • outputs/txt2img-images: default save path for txt2img originals;

All of these paths can be changed to custom paths in the WebUI settings page.

  • scripts: directory for third-party scripts;
  • venv: created by WebUI itself on first run to hold its environment; when environment problems occur, you can delete it and let WebUI regenerate it.
  • repositories: Python module code loaded at startup, which affects whether the GPU or CPU is used: stablediffusion, taming-transformers, stable-diffusion, k-diffusion-sd, CodeFormer, BLIP
  • styles.csv: stores the current Prompt and Negative prompt as prompt templates. The WebUI currently provides no way to delete a template from the UI; to remove unwanted templates, edit styles.csv directly.
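Since the UI offers no delete, removing a template means filtering its row out of styles.csv; a sketch on a made-up two-row file (rows are name,prompt,negative_prompt, and both template names here are invented):

```shell
# Build a sample styles.csv, then drop the "old-style" template row
cat > styles.csv <<'EOF'
my-portrait,masterpiece; best quality,lowres
old-style,oil painting,blurry
EOF
cp styles.csv styles.csv.bak               # always back up before editing
grep -v '^old-style,' styles.csv.bak > styles.csv
cat styles.csv
```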

Pull the CPU version of stable-diffusion-webui

cd /host && git clone https://github.com/openvinotoolkit/stable-diffusion-webui.git ./stable-diffusion-webui-cpu

Main files that differ between the CPU and GPU versions

/modules/launch_utils.py
/requirements.txt
/requirements_versions.txt
/scripts/openvino_accelerate.py

Create launch variables and directories

h_dir=/host # host dir
h_mdir=/host/models # host models dir
c_dir=/workspace/stable-diffusion-webui # container dir
c_mdir=/workspace/stable-diffusion-webui/models # container models dir

#mkdir -p ${h_dir}/{localizations,configs,config_states,outputs,embeddings,extensions,extensions-builtin,scripts,repositories}
#mkdir -p ${h_mdir}/{Stable-diffusion,hypernetworks,ControlNet,Lora,VAE,VAE-approx}
cd $h_mdir

cat >> /etc/profile <<EOF
export h_dir=/host
export h_mdir=/host/models # host models dir
export c_dir=/workspace/stable-diffusion-webui # container dir
export c_mdir=/workspace/stable-diffusion-webui/models # container dir
EOF

Data persistence

# Start a temporary container first
docker run -itd --network=host --name=sd \
-v /usr/share/zoneinfo/Asia/Shanghai:/etc/localtime \
-p 7860:7860 egs-registry.cn-hangzhou.cr.aliyuncs.com/egs/sd-webui:4.3.5-full-pytorch2.1.2-ubuntu22.04 bash

# Copy files that already exist in the container out to the host
cdirname='localizations configs outputs embeddings extensions extensions-builtin scripts'  
for cname in $cdirname; do echo ${c_dir}/${cname}; docker cp sd:${c_dir}/${cname} ${h_dir}/; done
docker cp sd:${c_dir}/webui-user.sh .

docker exec sd ls models/Stable-diffusion

docker stop sd
docker rm sd

ls ${h_dir}/models/Stable-diffusion
anything-v5-PrtRE.safetensors  'Put Stable Diffusion checkpoints here.txt'

# Copy the python packages (8.4 GB)
docker cp sd:/root/miniconda3/lib/python3.10/site-packages /host/

# Move the checkpoint model into place
mv /host/v1-5-pruned-emaonly.safetensors  /host/models/Stable-diffusion

# Copy out config.json
docker cp sd:/workspace/stable-diffusion-webui/config.json .

Prepare start.sh
vi start.sh
source ./webui-user.sh
python -u ./launch.py --listen --port 5001  --skip-install

Start the container

docker run -itd --network=host --name=sd \
-e COMMANDLINE_ARGS='--use-cpu all --skip-torch-cuda-test --precision full --no-half --no-half-vae  --api' \
-v /usr/share/zoneinfo/Asia/Shanghai:/etc/localtime \
-v ${h_dir}/site-packages:/root/miniconda3/lib/python3.10/site-packages \
-v ${h_dir}/stable-diffusion-webui-cpu:/workspace/stable-diffusion-webui \
-v ${h_mdir}/Stable-diffusion/:${c_mdir}/Stable-diffusion/ \
-v ${h_mdir}/ControlNet/:${c_mdir}/ControlNet/ \
-v ${h_mdir}/VAE/:${c_mdir}/VAE/ \
-v ${h_mdir}/Lora/:${c_mdir}/Lora/ \
-v ${h_mdir}/hypernetworks/:${c_mdir}/hypernetworks/ \
-v ${h_dir}/styles.csv:${c_dir}/styles.csv \
-v ${h_dir}/outputs/:${c_dir}/outputs/ \
-v ${h_dir}/localizations/:${c_dir}/localizations/ \
-v ${h_dir}/configs/:${c_dir}/configs/ \
-v ${h_dir}/config_states/:${c_dir}/config_states/ \
-v ${h_dir}/repositories/:${c_dir}/repositories/ \
-v ${h_dir}/extensions/:${c_dir}/extensions/ \
-v ${h_dir}/extensions-builtin/:${c_dir}/extensions-builtin/ \
-v ${h_dir}/webui-user.sh:${c_dir}/webui-user.sh \
-v ${h_dir}/start.sh:${c_dir}/start.sh \
egs-registry.cn-hangzhou.cr.aliyuncs.com/egs/sd-webui:4.3.5-full-pytorch2.1.2-ubuntu22.04 bash ./start.sh

After the container starts, the SD WebUI service comes up automatically. To customize the launch command or the port, enter the container and adjust with the commands below:

docker exec -it sd /bin/bash

Running on GPU

python -u ./launch.py --ckpt-dir ./ckpts --listen --port 7860 --enable-insecure-extension-access --disable-safe-unpickle --api --xformers --skip-install

CPU-mode variables, convenient for the docker launch

cd /workspace/stable-diffusion-webui
echo 'export COMMANDLINE_ARGS="--use-cpu all --skip-torch-cuda-test --precision full --no-half --no-half-vae"' >> ./webui-user.sh

Running manually in CPU mode

python -u ./launch.py --listen --port 5001 --use-cpu all --skip-torch-cuda-test --precision full --no-half --no-half-vae --api --skip-install
python -u ./launch.py --listen --port 5001 --skip-install

Start on boot

echo 'docker start sd' >> /etc/rc.local

Access

Once it is running, open http://ip:5001 in a browser to use it.

The console prints logs like the following:

Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0]
Version: 1.6.0
Commit hash: e5a634da06c62d72dbdc764b16c65ef3408aa588
Launching Web UI with arguments: --listen --port 5001 --skip-install --use-cpu all --skip-torch-cuda-test --precision full --no-half --no-half-vae
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.1.2+cu118 with CUDA 1108 (you have 2.1.0+cu121)
    Python  3.10.13 (you have 3.10.10)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Loading weights [7f96a1a9ca] from /workspace/stable-diffusion-webui/models/Stable-diffusion/anything-v5-PrtRE.safetensors
Creating model from config: /workspace/stable-diffusion-webui/configs/v1-inference.yaml
2024-05-15 15:07:04,153 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://0.0.0.0:5001

To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 19.6s (import torch: 2.5s, import gradio: 0.8s, setup paths: 0.3s, other imports: 0.4s, load scripts: 11.3s, create ui: 1.4s, gradio launch: 2.3s, app_started_callback: 0.3s).
Loading VAE weights specified in settings: /workspace/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: InvokeAI... done.
Model loaded in 4.9s (load weights from disk: 0.1s, create model: 1.8s, apply weights to model: 1.1s, apply float(): 1.5s, load VAE: 0.2s, calculate empty prompt: 0.1s).
[W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [02:58<00:00,  8.94s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [02:56<00:00,  8.83s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [02:56<00:00,  8.65s/it]

On CPU, generating one basic image takes about 3 minutes.
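Because the container was started with --api, images can also be requested over HTTP instead of the browser UI. A sketch of a txt2img call: /sdapi/v1/txt2img is the standard WebUI API route, the prompt values are just examples, and the curl step is commented out since it needs the running container:

```shell
# Build and validate a txt2img request payload
cat > payload.json <<'EOF'
{
  "prompt": "a cat sitting on a windowsill, best quality",
  "negative_prompt": "lowres, bad anatomy",
  "steps": 20,
  "width": 512,
  "height": 512
}
EOF
python3 -m json.tool payload.json > /dev/null && echo "payload ok"
# With the container running, post it and decode the first image:
# curl -s http://127.0.0.1:5001/sdapi/v1/txt2img \
#   -H 'Content-Type: application/json' -d @payload.json \
#   | python3 -c 'import sys,json,base64; d=json.load(sys.stdin); open("out.png","wb").write(base64.b64decode(d["images"][0]))'
```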

Fresh Docker installation from scratch

Use ubuntu:22.04 as the base image:
docker pull ubuntu:22.04
docker images

REPOSITORY                                              TAG                                   IMAGE ID       CREATED         SIZE
ubuntu                                                  22.04                                 52882761a72a   2 weeks ago     77.9MB

Enter the container

docker run -itd --network=host --name=ub \
-v /usr/share/zoneinfo/Asia/Shanghai:/etc/localtime \
ubuntu:22.04 bash 
docker exec -it ub /bin/bash

cat /etc/issue.net 
Ubuntu 22.04.4 LTS

apt-get update
apt-get -y install --reinstall ca-certificates
cp /etc/apt/sources.list /etc/apt/sources.bak.list 

cat /etc/apt/sources.list

#Aliyun mirror acceleration
#sed -n 's/http:\/\/.*ubuntu.com/https:\/\/mirrors.aliyun.com/p' /etc/apt/sources.list
sed -i 's/http:\/\/.*ubuntu.com/https:\/\/mirrors.aliyun.com/g' /etc/apt/sources.list

#Tsinghua mirror acceleration
#echo "deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy main restricted universe multiverse\ndeb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-updates main restricted universe multiverse\ndeb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ jammy-backports main restricted universe multiverse\ndeb http://security.ubuntu.com/ubuntu/ jammy-security main restricted universe multiverse" > /etc/apt/sources.list

#Toolkit packages
apt-get update && apt-get install -y sudo net-tools inetutils-ping procps curl wget vim telnet locales git zhcon fonts-wqy-microhei fonts-wqy-zenhei xfonts-wqy lrzsz unzip tree

#Related libraries
apt-get install -y mesa-utils libglib2.0-0

sed -i 's/# zh_CN.UTF-8 UTF-8/zh_CN.UTF-8 UTF-8/' /etc/locale.gen && locale-gen

Install Miniconda

https://docs.anaconda.com/free/miniconda/

cd /root
mkdir -p ~/miniconda3
#wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
wget https://repo.anaconda.com/miniconda/Miniconda3-py310_22.11.1-1-Linux-x86_64.sh -O ~/miniconda.sh
bash ~/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda.sh

~/miniconda3/bin/conda init bash
~/miniconda3/bin/conda init zsh

no change     /root/miniconda3/condabin/conda
no change     /root/miniconda3/bin/conda
no change     /root/miniconda3/bin/conda-env
no change     /root/miniconda3/bin/activate
no change     /root/miniconda3/bin/deactivate
no change     /root/miniconda3/etc/profile.d/conda.sh
no change     /root/miniconda3/etc/fish/conf.d/conda.fish
no change     /root/miniconda3/shell/condabin/Conda.psm1
no change     /root/miniconda3/shell/condabin/conda-hook.ps1
no change     /root/miniconda3/lib/python3.10/site-packages/xontrib/conda.xsh
no change     /root/miniconda3/etc/profile.d/conda.csh
modified      /root/.bashrc

#Load the variables
source /root/.bashrc

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --set show_channel_urls yes
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

conda -V
conda 22.11.1

Stop conda from auto-activating the base environment

After restarting the terminal, you will find that every new terminal opens straight into Miniconda's base environment, with "base" showing in the prompt. This slows down terminal startup and can interfere with installing other software; otherwise you have to run conda deactivate every time you open a terminal, which is unpleasant. Run the following to stop terminals from entering the base environment automatically:
conda config --set auto_activate_base false

python --version
Python 3.10.8

Create Python 3.10.10

conda create -n python_3.10.10 python=3.10.10
(python_3.10.10) root@dev-ai-diffusion:~# python --version
Python 3.10.10

List the environments

conda info --envs

# conda environments:
#
base                     /root/miniconda3
python_3.10.10        *  /root/miniconda3/envs/python_3.10.10

Set the default environment
conda config --set default_environment python_3.10.10

Activate an environment
conda activate python_3.10.10

Deactivate
conda deactivate

Delete an environment
conda remove --name test --all

pip install --upgrade pip setuptools wheel

Clone the stable-diffusion-webui repository
mkdir /workspace && cd /workspace
groupadd -g 1000 website && \
useradd -u 1000 -G website -s /sbin/nologin www

stable-diffusion-webui

About 40 MB.
CPU version:
git clone https://github.com/openvinotoolkit/stable-diffusion-webui.git ./stable-diffusion-webui-cpu
GPU version:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git ./stable-diffusion-webui-gpu

ln -s stable-diffusion-webui-cpu stable-diffusion-webui

Then clone the following repositories in turn, and switch each to a suitable revision to avoid possible version-compatibility problems:

mkdir -p /workspace/common && cd /workspace/common

stablediffusion
git clone https://github.com/Stability-AI/stablediffusion.git ./repositories/stable-diffusion-stability-ai

taming-transformers
git clone https://github.com/CompVis/taming-transformers.git ./repositories/taming-transformers

k-diffusion
git clone https://github.com/crowsonkb/k-diffusion.git ./repositories/k-diffusion

CodeFormer
git clone https://github.com/sczhou/CodeFormer.git ./repositories/CodeFormer

BLIP
git clone https://github.com/salesforce/BLIP.git ./repositories/BLIP

generative-models
git clone https://github.com/Stability-AI/generative-models.git ./repositories/generative-models

About 758 MB in total.

cd /workspace/stable-diffusion-webui/
cat requirements.txt
Install the dependencies from requirements.txt

First determine the torch version and its download URL; for the different CPU/GPU platforms, confirm with the page below:
https://pytorch.org/get-started/locally/

pip install torch torchvision -i https://pypi.tuna.tsinghua.edu.cn/simple

CPU:

pip install torch==2.1.0 torchvision==0.16.0 torchaudio --index-url https://download.pytorch.org/whl/cpu

pip install tb-nightly
pip install tb-nightly -i https://mirrors.aliyun.com/pypi/simple

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

Notes:

torch must be installed first, because basicsr's installer imports torch.
Install tb-nightly separately: the Tsinghua mirror lacks this package, but installing basicsr needs it, so install it first without switching mirrors.
Install the remaining dependencies:
pip install ftfy regex tqdm -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install git+https://github.com/openai/CLIP.git
pip install open_clip_torch xformers -i https://pypi.tuna.tsinghua.edu.cn/simple

Collecting open_clip_torch
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/d9/d2/6ae2ee32d0d2ea9982774920e0ef96d439ee332f459f6d8a941149b1b4ad/open_clip_torch-2.24.0-py3-none-any.whl (1.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 2.3 MB/s eta 0:00:00
Collecting xformers
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/32/e7/27003645ef99e7571fb6964cd2f39da3f1b3f3011aa00bb2d3ac9b790757/xformers-0.0.26.post1-cp310-cp310-manylinux2014_x86_64.whl (222.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 222.7/222.7 MB 2.0 MB/s eta 0:00:00
Requirement already satisfied: torch>=1.9.0 in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from open_clip_torch) (2.1.0+cpu)
Requirement already satisfied: torchvision in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from open_clip_torch) (0.16.0+cpu)
Requirement already satisfied: regex in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from open_clip_torch) (2024.5.10)
Requirement already satisfied: ftfy in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from open_clip_torch) (6.2.0)
Requirement already satisfied: tqdm in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from open_clip_torch) (4.66.4)
Collecting huggingface-hub (from open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/21/2b/516f82c5ba9beb184b24c11976be2ad5e80fb7fe6b2796c887087144445e/huggingface_hub-0.23.0-py3-none-any.whl (401 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 401.2/401.2 kB 2.6 MB/s eta 0:00:00
Collecting sentencepiece (from open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/a6/27/33019685023221ca8ed98e8ceb7ae5e166032686fa3662c68f1f1edf334e/sentencepiece-0.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 4.5 MB/s eta 0:00:00
Requirement already satisfied: protobuf in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from open_clip_torch) (4.25.3)
Collecting timm (from open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/68/99/2018622d268f6017ddfa5ee71f070bad5d07590374793166baa102849d17/timm-0.9.16-py3-none-any.whl (2.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.2/2.2 MB 3.1 MB/s eta 0:00:00
Requirement already satisfied: numpy in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from xformers) (1.26.3)
Collecting torch>=1.9.0 (from open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/43/e5/2ddae60ae999b224aceb74490abeb885ee118227f866cb12046f0481d4c9/torch-2.3.0-cp310-cp310-manylinux1_x86_64.whl (779.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 779.1/779.1 MB 917.4 kB/s eta 0:00:00
Requirement already satisfied: filelock in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from torch>=1.9.0->open_clip_torch) (3.13.1)
Requirement already satisfied: typing-extensions>=4.8.0 in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from torch>=1.9.0->open_clip_torch) (4.9.0)
Requirement already satisfied: sympy in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from torch>=1.9.0->open_clip_torch) (1.12)
Requirement already satisfied: networkx in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from torch>=1.9.0->open_clip_torch) (3.2.1)
Requirement already satisfied: jinja2 in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from torch>=1.9.0->open_clip_torch) (3.1.3)
Requirement already satisfied: fsspec in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from torch>=1.9.0->open_clip_torch) (2024.2.0)
Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/b6/9f/c64c03f49d6fbc56196664d05dba14e3a561038a81a638eeb47f4d4cfd48/nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.7/23.7 MB 3.7 MB/s eta 0:00:00
Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/eb/d5/c68b1d2cdfcc59e72e8a5949a37ddb22ae6cade80cd4a57a84d4c8b55472/nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 823.6/823.6 kB 4.2 MB/s eta 0:00:00
Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/7e/00/6b218edd739ecfc60524e585ba8e6b00554dd908de2c9c66c1af3e44e18d/nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.1/14.1 MB 4.0 MB/s eta 0:00:00
Collecting nvidia-cudnn-cu12==8.9.2.26 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/ff/74/a2e2be7fb83aaedec84f391f082cf765dfb635e7caa9b49065f73e4835d8/nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 731.7/731.7 MB 832.3 kB/s eta 0:00:00
Collecting nvidia-cublas-cu12==12.1.3.1 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/37/6d/121efd7382d5b0284239f4ab1fc1590d86d34ed4a4a2fdb13b30ca8e5740/nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 410.6/410.6 MB 1.8 MB/s eta 0:00:00
Collecting nvidia-cufft-cu12==11.0.2.54 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/86/94/eb540db023ce1d162e7bea9f8f5aa781d57c65aed513c33ee9a5123ead4d/nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 121.6/121.6 MB 2.5 MB/s eta 0:00:00
Collecting nvidia-curand-cu12==10.3.2.106 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/44/31/4890b1c9abc496303412947fc7dcea3d14861720642b49e8ceed89636705/nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.5/56.5 MB 3.1 MB/s eta 0:00:00
Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/bc/1d/8de1e5c67099015c834315e333911273a8c6aaba78923dd1d1e25fc5f217/nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 124.2/124.2 MB 2.3 MB/s eta 0:00:00
Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/65/5b/cfaeebf25cd9fdec14338ccb16f6b2c4c7fa9163aefcf057d86b9cc248bb/nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 196.0/196.0 MB 3.4 MB/s eta 0:00:00
Collecting nvidia-nccl-cu12==2.20.5 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/4b/2a/0a131f572aa09f741c30ccd45a8e56316e8be8dfc7bc19bf0ab7cfef7b19/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl (176.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 176.2/176.2 MB 1.9 MB/s eta 0:00:00
Collecting nvidia-nvtx-cu12==12.1.105 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/da/d3/8057f0587683ed2fcd4dbfbdfdfa807b9160b809976099d36b8f60d08f03/nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 99.1/99.1 kB 1.8 MB/s eta 0:00:00
Collecting triton==2.3.0 (from torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/db/ee/8d50d44ed5b63677bb387f4ee67a7dbaaded0189b320ffe82685a6827728/triton-2.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (168.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 168.1/168.1 MB 1.8 MB/s eta 0:00:00
Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch>=1.9.0->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/ff/ff/847841bacfbefc97a00036e0fce5a0f086b640756dc38caea5e1bb002655/nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (21.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 21.1/21.1 MB 4.7 MB/s eta 0:00:00
Requirement already satisfied: wcwidth<0.3.0,>=0.2.12 in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from ftfy->open_clip_torch) (0.2.13)
Collecting packaging>=20.9 (from huggingface-hub->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/49/df/1fceb2f8900f8639e278b056416d49134fb8d84c5942ffaa01ad34782422/packaging-24.0-py3-none-any.whl (53 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.5/53.5 kB 3.6 MB/s eta 0:00:00
Collecting pyyaml>=5.1 (from huggingface-hub->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/29/61/bf33c6c85c55bc45a29eee3195848ff2d518d84735eb0e2d8cb42e0d285e/PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (705 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 705.5/705.5 kB 1.7 MB/s eta 0:00:00
Requirement already satisfied: requests in /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages (from huggingface-hub->open_clip_torch) (2.28.1)
Collecting safetensors (from timm->open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/8f/05/969e1a976b84283285181b00028cf73d78434b77a6627fc2a94194cca265/safetensors-0.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 2.2 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
Collecting torchvision (from open_clip_torch)
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/d4/7e/d41b771dbffa927b9cc37372b1e18c881348cd18a0e4ad73f2c6bdf56c0e/torchvision-0.18.0-cp310-cp310-manylinux1_x86_64.whl (7.0 MB)

du -sh /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages/
5.5G

pip install -r "requirements.txt" --prefer-binary -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install -r "requirements_versions.txt" --prefer-binary -i https://pypi.tuna.tsinghua.edu.cn/simple

du -sh /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages/
6.6G /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages/

pip install -r repositories/CodeFormer/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install -r repositories/BLIP/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install -r repositories/k-diffusion/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install -r repositories/stable-diffusion-stability-ai/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

error: can't find Rust compiler
ERROR: Failed building wheel for tokenizers
This error usually means the Rust compiler is missing: the tokenizers library needs Rust to build its native extension.
apt-get install build-essential

git clone https://github.com/huggingface/tokenizers.git

Install Rust

echo "export RUSTUP_DIST_SERVER=https://mirrors.ustc.edu.cn/rust-static" >> ~/.bashrc
echo "export RUSTUP_UPDATE_ROOT=https://mirrors.ustc.edu.cn/rust-static/rustup" >> ~/.bashrc
source ~/.bashrc
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Rust is installed now. Great!

To get started you may need to restart your current shell.
This would reload your PATH environment variable to include
Cargo's bin directory ($HOME/.cargo/bin).

To configure your current shell, you need to source
the corresponding env file under $HOME/.cargo.

This is usually done by running one of the following (note the leading DOT):
. "$HOME/.cargo/env"            # For sh/bash/zsh/ash/dash/pdksh
source "$HOME/.cargo/env.fish"  # For fish

pip install tokenizers

Load the cargo environment
source $HOME/.cargo/env
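After sourcing the cargo env, a quick sanity check that the toolchain is now on PATH can save a failed rebuild; a minimal sketch (rustup installs into $HOME/.cargo/bin by default):

```shell
# Verify the Rust toolchain is reachable before retrying pip install
if command -v rustc >/dev/null 2>&1; then
    echo "rustc found: $(rustc --version)"
else
    echo "rustc missing -- re-check the rustup install and PATH"
fi
```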

   SSL error: unknown error; class=Ssl (16)
  error: cargo metadata --manifest-path Cargo.toml --format-version 1 failed with code 101
  -- Output captured from stdout:

Models

Download base models into ./models/Stable-diffusion/
Download VAE models into ./models/VAE/

First, set up a shared directory
cd /workspace/common/
cp -ar /workspace/stable-diffusion-webui/models /workspace/common/
mv /workspace/stable-diffusion-webui/models /workspace/stable-diffusion-webui/models_org
ln -s /workspace/common/models/ /workspace/stable-diffusion-webui/models
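The pattern above (move models into a shared directory, leave a symlink behind) can be dry-run in a throwaway location first; the paths below are illustrative stand-ins for the real /workspace tree:

```shell
# Dry run of the shared-models symlink pattern in a temp dir
base=$(mktemp -d)
mkdir -p "$base/common/models/Stable-diffusion" "$base/webui"
ln -s "$base/common/models" "$base/webui/models"
# a model dropped into the shared dir is visible through the symlink
touch "$base/common/models/Stable-diffusion/example.safetensors"
ls "$base/webui/models/Stable-diffusion/"
```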

Exit docker; on the host, copy an existing model into the container
docker cp Stable-diffusion/anything-v5-PrtRE.safetensors ub:/workspace/common/models/Stable-diffusion/

Enter the container
python -u ./launch.py --listen --port 5001 --use-cpu all --skip-torch-cuda-test --precision full --no-half --no-half-vae --api --skip-install

Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0]
Version: 1.6.0
Commit hash: e5a634da06c62d72dbdc764b16c65ef3408aa588
Launching Web UI with arguments: --listen --port 5001 --use-cpu all --skip-torch-cuda-test --precision full --no-half --no-half-vae --api
WARNING:xformers:WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.3.0+cu121 with CUDA 1201 (you have 2.1.0+cpu)
    Python  3.10.14 (you have 3.10.10)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [7f96a1a9ca] from /workspace/stable-diffusion-webui-cpu/models/Stable-diffusion/anything-v5-PrtRE.safetensors
Creating model from config: /workspace/stable-diffusion-webui-cpu/configs/v1-inference.yaml
Running on local URL:  http://0.0.0.0:5001

To create a public link, set `share=True` in `launch()`.
Startup time: 5.7s (import torch: 2.4s, import gradio: 0.6s, setup paths: 0.7s, other imports: 0.3s, load scripts: 0.7s, create ui: 0.7s, gradio launch: 0.2s).
Applying attention optimization: InvokeAI... done.
Model loaded in 4.9s (load weights from disk: 0.8s, create model: 0.6s, apply weights to model: 2.7s, apply float(): 0.7s).
[W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.

Access via browser

http://127.0.0.1:5001
version: 1.6.0  •  python: 3.10.10  •  torch: 2.1.0+cpu  •  xformers: N/A  •  gradio: 3.41.2

Errors

libGL.so.1

libGL.so.1: cannot open shared object file: No such file or directory

apt install mesa-utils

libgthread-2.0.so.0

libgthread-2.0.so.0: cannot open shared object file: No such file or directory

apt-get install libglib2.0-0

torchvision.transforms.functional_tensor

No module named 'torchvision.transforms.functional_tensor'

pip list |grep torch

clip-anytorch             2.6.0
dctorch                   0.1.2
open-clip-torch           2.20.0
pytorch-lightning         1.9.4
torch                     2.3.0
torchaudio                2.1.0+cpu
torchdiffeq               0.2.3
torchmetrics              1.4.0
torchsde                  0.2.5
torchvision               0.18.0

Reinstall a torch build that supports CPU
pip install torch==2.1.0 torchvision==0.16.0 torchaudio --index-url https://download.pytorch.org/whl/cpu

pip list |grep torch

clip-anytorch             2.6.0
dctorch                   0.1.2
open-clip-torch           2.20.0
pytorch-lightning         1.9.4
torch                     2.1.0+cpu
torchaudio                2.1.0+cpu
torchdiffeq               0.2.3
torchmetrics              1.4.0
torchsde                  0.2.5
torchvision               0.16.0+cpu
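Besides `pip list`, the install can be sanity-checked at runtime: the sketch below prints torch's version tag and whether CUDA is visible (for this setup that should be `2.1.0+cpu False`), and degrades gracefully if torch is absent:

```shell
# Runtime sanity check: a CPU-only torch reports a "+cpu" version tag
# and torch.cuda.is_available() is False
python - <<'EOF'
try:
    import torch
    print(torch.__version__, torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")
EOF
```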

clip-vit-large-patch14

OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

Download clip-vit-large-patch14 and place it as follows

tree /root/.cache/huggingface/hub/models--openai--clip-vit-large-patch14/
/root/.cache/huggingface/hub/models--openai--clip-vit-large-patch14/
|-- blobs
|-- refs
|   `-- main
`-- snapshots
    `-- 8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
        |-- config.json
        |-- merges.txt
        |-- special_tokens_map.json
        |-- tokenizer_config.json
        `-- vocab.json

4 directories, 6 files
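If the files are downloaded manually, the hub cache skeleton can be recreated by hand; a sketch using a scratch directory in place of /root/.cache/huggingface (the snapshot hash matches the tree above, but a fresh download may use a different one):

```shell
# Recreate the huggingface hub cache layout by hand; $root stands in
# for /root/.cache/huggingface -- substitute the real path
root=$(mktemp -d)
cache=$root/hub/models--openai--clip-vit-large-patch14
snap=8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
mkdir -p "$cache/blobs" "$cache/refs" "$cache/snapshots/$snap"
# refs/main records which snapshot directory is current
printf '%s' "$snap" > "$cache/refs/main"
# finally, copy the downloaded tokenizer files (config.json, merges.txt,
# special_tokens_map.json, tokenizer_config.json, vocab.json) into
# "$cache/snapshots/$snap/"
```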

Build the full Docker image

docker ps

CONTAINER ID   IMAGE          COMMAND   CREATED        STATUS        PORTS     NAMES
0180e369be03   ubuntu:22.04   "bash"    19 hours ago   Up 19 hours             ub

docker commit -m "ubuntu:22.04,stable-diffusion,cpu" ub mysd:cpu

docker images

REPOSITORY                                              TAG                                   IMAGE ID       CREATED          SIZE
mysd                                                    cpu                                   93a4de4e2952   27 seconds ago   16GB

Build a slimmed-down Docker image

Move the Python packages and repositories out to the host.
Back up repositories
docker exec -it ub ls /workspace
docker cp ub:/workspace/common/repositories /host/cpu_repositories
840M

Back up site-packages

docker cp ub:/root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages/ /host/cpu_site-package
6G

Enter the container and clean up

docker exec -it ub bash
apt-get autoremove && apt-get clean && rm -rf /var/lib/apt/lists/*
rm -rf /root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages/
exit

Commit the image

docker commit -m "ubuntu:22.04,stable-diffusion,cpu" ub mysd:cpu-slim

docker images

REPOSITORY                                              TAG                                   IMAGE ID       CREATED          SIZE
mysd                                                    cpu-slim                              9e5113633411   19 seconds ago   10GB

Startup script

vi ./start.sh

/root/miniconda3/condabin/conda init  bash
conda activate python_3.10.10
export PATH="/root/miniconda3/envs/python_3.10.10/bin:$PATH"
git config --global --add safe.directory "*"
cd /workspace/stable-diffusion-webui
python -u ./launch.py --listen --port 5001 --use-cpu all --skip-torch-cuda-test --precision full --no-half --no-half-vae --api --skip-install

Startup variables

h_dir=/host # host dir
h_mdir=/host/models # host models dir
c_dir=/workspace/stable-diffusion-webui # container dir
c_mdir=/workspace/stable-diffusion-webui/models # container dir

docker run -itd --network=host --name=cpusd \
-e COMMANDLINE_ARGS='--use-cpu all --skip-torch-cuda-test --precision full --no-half --no-half-vae --api' \
-e CONDA_DEFAULT_ENV='python_3.10.10' \
-v ${h_dir}/hub/:/root/.cache/huggingface/hub \
-v ${h_dir}/cpu_site-package:/root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages \
-v ${h_mdir}/:${c_mdir}/ \
-v ${h_dir}/style.csv:${c_dir}/style.csv \
-v ${h_dir}/outputs/:${c_dir}/outputs/ \
-v ${h_dir}/localizations/:${c_dir}/localizations/ \
-v ${h_dir}/configs/:${c_dir}/configs/ \
-v ${h_dir}/config_states/:${c_dir}/config_states/ \
-v ${h_dir}/cpu_repositories/:${c_dir}/repositories/ \
-v ${h_dir}/extensions/:${c_dir}/extensions/ \
-v ${h_dir}/extensions-builtin/:${c_dir}/extensions-builtin/ \
-v ${h_dir}/webui-user.sh:${c_dir}/webui-user.sh \
-v ${h_dir}/start.sh:${c_dir}/start.sh \
mysd:cpu-slim bash -c 'source /root/.bashrc; conda init bash; conda activate python_3.10.10;conda info --envs;/workspace/stable-diffusion-webui/start.sh'

docker exec -it cpusd bash
docker logs -f cpusd

echo 'docker start cpusd' >> /etc/rc.local

sd-webui-bilingual-localization: bilingual side-by-side translation extension

cd /host/extensions
git clone https://github.com/journey-ad/sd-webui-bilingual-localization /host/extensions/sd-webui-bilingual-localization

settings->bilingual-localization
Localization file: zh-Hans(stable)
Also set "User interface" -> "Localization" to "None"

Save and restart the UI

Build an image with a Dockerfile

cd /host/
Download the prepared code from GitHub and convert it from zip to tar, so the Dockerfile can extract it with ADD
stable-diffusion-webui-cpu.tar
stable-diffusion-webui-1.6.0.tar
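ADD auto-extracts tar archives but not zip files, hence the repack; a sketch of the conversion on a dummy tree (substitute the real download names from above):

```shell
# zip -> tar repack sketch; with the real GitHub download the first two
# lines would instead be: unzip -q stable-diffusion-webui-1.6.0.zip
mkdir -p stable-diffusion-webui-demo
echo placeholder > stable-diffusion-webui-demo/README.md
tar -cf stable-diffusion-webui-demo.tar stable-diffusion-webui-demo
tar -tf stable-diffusion-webui-demo.tar
```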

FROM ubuntu:22.04

LABEL maintainer="[email protected]" \
description="Ubuntu 22.04.4 LTS,miniconda3,Stable-Diffusion-webui" 

#docker build ./ --progress=plain -t sdw_cpu:1.0

ARG RUNUID="1000"
ARG RUNGID="1000"
ARG RUNUSER="www"
ARG RUNGROUP="website"
#ARG LANG="en_US.UTF-8"
ARG LANG="zh_CN.utf8"

ENV TZ=Asia/Shanghai
ENV RUNUID=${RUNUID}
ENV RUNGID=${RUNGID}
ENV RUNUSER=${RUNUSER}
ENV RUNGROUP=${RUNGROUP}
ENV LANG=${LANG}
ENV LC_ALL=${LANG}
ENV LANGUAGE=${LANG}

WORKDIR /workspace

USER root
RUN     groupadd -g $RUNGID $RUNGROUP && \
    useradd -u $RUNUID -G $RUNGROUP  -s /sbin/nologin $RUNUSER && \
    apt-get update && apt-get -y install --reinstall ca-certificates && \
    sed -i 's/http:\/\/.*ubuntu.com/https:\/\/mirrors.aliyun.com/g' /etc/apt/sources.list && \
    apt-get update && apt-get update && apt-get install -y sudo net-tools inetutils-ping procps curl wget vim telnet locales git zhcon  \
    fonts-wqy-microhei fonts-wqy-zenhei xfonts-wqy lrzsz unzip tree mesa-utils libglib2.0-0 && \
    apt-get autoremove && apt-get clean && rm -rf /var/lib/apt/lists/*  && \
    ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && \
    sed -i 's/# zh_CN.UTF-8 UTF-8/zh_CN.UTF-8 UTF-8/' /etc/locale.gen && locale-gen && \
    echo "$TZ" > /etc/timezone && echo "alias ll='ls -l'" >> /etc/profile && \
    cd /root && mkdir -p ~/miniconda3 && wget https://repo.anaconda.com/miniconda/Miniconda3-py310_22.11.1-1-Linux-x86_64.sh -O ~/miniconda.sh  && \
    bash ~/miniconda.sh -b -u -p ~/miniconda3 && ~/miniconda3/bin/conda init bash && . /root/.bashrc && \
    conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ && \
    conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/ && \
    conda config --set show_channel_urls yes && conda create -n python_3.10.10 python=3.10.10 && conda activate python_3.10.10 && \
    pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple && \
    pip install --upgrade pip setuptools wheel && \
    cd /workspace && mkdir -p /workspace/common/{repositories,models}  
    #git clone https://github.com/openvinotoolkit/stable-diffusion-webui.git ./stable-diffusion-webui-cpu 
ADD stable-diffusion-webui-cpu.tar .
ADD stable-diffusion-webui-1.6.0.tar .
RUN  ln -s  stable-diffusion-webui-cpu  stable-diffusion-webui 

WORKDIR /workspace/stable-diffusion-webui 
EXPOSE 5001

#CMD ["/opt/bin/entry_point.sh"]
CMD ["tail","-f","/dev/null"]

docker build ./ --progress=plain -t sdw_cpu:1.0
The image can be shrunk down to 1.6 GB

docker run -itd --network=host --name=cpusd \
-e COMMANDLINE_ARGS='--use-cpu all --skip-torch-cuda-test --precision full --no-half --no-half-vae  --api' \
-e CONDA_DEFAULT_ENV='python_3.10.10' \
-v ${h_dir}/hub:/root/.cache/huggingface/hub \
-v ${h_dir}/cpu_site-package:/root/miniconda3/envs/python_3.10.10/lib/python3.10/site-packages \
-v ${h_mdir}/:${c_mdir}/ \
-v ${h_dir}/style.csv:${c_dir}/style.csv \
-v ${h_dir}/outputs/:${c_dir}/outputs/ \
-v ${h_dir}/localizations/:${c_dir}/localizations/ \
-v ${h_dir}/configs/:${c_dir}/configs/ \
-v ${h_dir}/config.json:${c_dir}/config.json \
-v ${h_dir}/config_states/:${c_dir}/config_states/ \
-v ${h_dir}/cpu_repositories/:${c_dir}/repositories/ \
-v ${h_dir}/extensions/:${c_dir}/extensions/ \
-v ${h_dir}/extensions-builtin/:${c_dir}/extensions-builtin/ \
-v ${h_dir}/webui-user.sh:${c_dir}/webui-user.sh \
-v ${h_dir}/start.sh:${c_dir}/start.sh \
sdw_cpu:1.0  bash  -c 'source /root/.bashrc; conda init bash; conda activate python_3.10.10;conda info --envs;/workspace/stable-diffusion-webui/start.sh'

docker exec -it cpusd bash

Posted in AIGC, Tech.
