Deep3DFaceRecon: 2D Image to 3D Model in Practice

This walkthrough is implemented with Deep3DFaceRecon_pytorch.

Prerequisite documents:

https://github.com/sicxu/Deep3DFaceRecon_pytorch

https://blog.csdn.net/flyfish1986/article/details/121861086

This article describes how to run the project without local GPU hardware. It does not cover the deployment process in detail; for deployment, see the two documents linked above.

Prepare the project files

Unzip the downloaded Deep3DFaceRecon_pytorch project files to a local directory (see the Baidu Netdisk link in https://blog.csdn.net/flyfish1986/article/details/121861086). This package already contains the 20-epoch model, so there is no need to download it from Google Drive again.
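For reference, after unzipping, the project should look roughly like this (a sketch based on the upstream repo's layout; the exact contents depend on the package you downloaded):

Deep3DFaceRecon_pytorch/
├── BFM/                      # Basel Face Model files (01_MorphableModel.mat, Exp_Pca.bin, ...)
├── checkpoints/
│   └── model_name/
│       └── epoch_20.pth      # the bundled 20-epoch model
├── datasets/
│   └── examples/             # sample face images
│       └── detections/       # 5-point landmark txt files, one per image
├── util/
│   └── nvdiffrast.py
└── test.py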

Create environment.sh in the project root directory:

apt-get update && apt-get install -y --no-install-recommends \
    pkg-config \
    libglvnd0 \
    libgl1 \
    libglx0 \
    libegl1 \
    libgles2 \
    libglvnd-dev \
    libgl1-mesa-dev \
    libegl1-mesa-dev \
    libgles2-mesa-dev \
    cmake \
    curl \
    libsm6 \
    libxext6 \
    libxrender-dev

# export PYTHONDONTWRITEBYTECODE=1
export PYTHONUNBUFFERED=1

# for GLEW
export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH

# nvidia-container-runtime
export NVIDIA_VISIBLE_DEVICES=all
export NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics

# Default pyopengl to EGL for good headless rendering support
export PYOPENGL_PLATFORM=egl

cp docker/10_nvidia.json /usr/share/glvnd/egl_vendor.d/10_nvidia.json

pip install --upgrade pip
pip install ninja imageio imageio-ffmpeg

pip install trimesh==3.9.20 -i https://pypi.douban.com/simple
pip install dominate==2.6.0 -i https://pypi.douban.com/simple
pip install kornia==0.5.5 -i https://pypi.douban.com/simple
pip install scikit-image==0.16.2 -i https://pypi.douban.com/simple
pip install numpy==1.18.1 -i https://pypi.douban.com/simple
pip install matplotlib==2.2.5 -i https://pypi.douban.com/simple
pip install opencv-python==3.4.9.33 -i https://pypi.douban.com/simple
pip install tensorboard==1.15.0 -i https://pypi.douban.com/simple
pip install tensorflow==1.15.0 -i https://pypi.douban.com/simple
pip install nvdiffrast==0.2.7 -i https://pypi.douban.com/simple
pip install ninja -i https://pypi.douban.com/simple

This step initializes the environment: it installs the system packages and Python libraries and sets the required environment variables.

Modify the file util/nvdiffrast.py:

# if self.glctx is None:
#     self.glctx = dr.RasterizeGLContext(device=device)
#     print("create glctx on device cuda:%d"%device.index)
if self.glctx is None:
    self.glctx = dr.RasterizeCudaContext(device=device)
    print("create glctx on device cuda:%d"%device.index)

This replaces the OpenGL rasterization context with the CUDA one; without it you will hit the error described in https://github.com/sicxu/Deep3DFaceRecon_pytorch/issues/81#issuecomment-1918455559
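Note that RasterizeCudaContext only exists in newer nvdiffrast releases (the 0.2.7 pinned in environment.sh does not have it), which is why step ② below installs the latest nvdiffrast from GitHub. A minimal sanity check, to be run inside the GPU container once everything is installed:

# quick check that the CUDA rasterization context can be created
# (assumes a recent nvdiffrast and a visible CUDA GPU)
import torch
import nvdiffrast.torch as dr

glctx = dr.RasterizeCudaContext(device=torch.device("cuda:0"))
print("CUDA rasterize context created:", glctx)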

Sign up for an account on openbayes

Register an account at https://openbayes.com/ and top up a small amount (used to rent a GPU server).

In Data Repository - Datasets, pack the project directory prepared above into a zip and upload it (it is unzipped automatically).

Go to Model Training - Create New Container.

Start it and wait for resources to be allocated; once it is up, open the shell.

 

Run inside the container

Because the dataset is mounted at /openbayes/home, the project files are also under /openbayes/home.

① Run the environment.sh we just created

cd /openbayes/home
sh environment.sh

② Install the latest nvdiffrast

git clone https://github.com/NVlabs/nvdiffrast.git
cd nvdiffrast
pip install .

③ Run the script that generates 3D models from the images

python test.py --name=model_name --epoch=20 --img_folder=./datasets/examples

After it runs, the 20 face images in ./datasets/examples are turned into 20 .obj 3D model files.

You can also put your own photos in to generate face models from them.

Source image path: /openbayes/home/datasets/examples

Generated model path: /openbayes/home/checkpoints/model_name/results/examples
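To inspect a result, the generated .obj can be loaded with trimesh (already installed by environment.sh). A minimal sketch, assuming the output has been copied to a machine with a display; the file name some_face.obj is a placeholder for one of your own outputs:

# load and preview one generated mesh with trimesh
# (some_face.obj is a placeholder; use an actual file from the results folder)
import trimesh

mesh = trimesh.load("checkpoints/model_name/results/examples/some_face.obj")
print(mesh.vertices.shape, mesh.faces.shape)  # vertex and face counts
mesh.show()  # interactive viewer; needs a display, so skip this on the headless server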

A txt file with the same base name as each image must be placed in /openbayes/home/datasets/examples/detections.

It contains 10 numbers in 5 rows of 2: the (x, y) coordinates of the left eye, right eye, nose, left mouth corner and right mouth corner (as a quick workaround, you can read the pixel coordinates off a screenshot tool and write the file by hand).
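Instead of reading the coordinates off a screenshot, the 5 landmarks can also be detected automatically. A minimal sketch using MTCNN from facenet-pytorch (an extra dependency not installed by environment.sh; the file name my_face.jpg is a placeholder), whose 5-point output order matches what the detections file expects:

# write datasets/examples/detections/<name>.txt for a photo using MTCNN
# pip install facenet-pytorch   (extra dependency, not part of environment.sh)
from PIL import Image
from facenet_pytorch import MTCNN

img_name = "my_face.jpg"  # placeholder; the photo itself goes into datasets/examples
img = Image.open(f"datasets/examples/{img_name}").convert("RGB")

boxes, probs, landmarks = MTCNN().detect(img, landmarks=True)

# landmarks[0] is a 5x2 array: left eye, right eye, nose, left/right mouth corner
txt_name = img_name.rsplit(".", 1)[0] + ".txt"
with open(f"datasets/examples/detections/{txt_name}", "w") as f:
    for x, y in landmarks[0]:
        f.write(f"{x:.1f} {y:.1f}\n")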

 
