
Docker: TensorFlow GPU + Jupyter

Reference:
https://www.jianshu.com/p/fce000cf4c0f
Prerequisites:
nvidia-docker and CUDA installed on the host

Image

$ nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu


## Persistent container
# -e PASSWORD sets the Jupyter password, -d runs as a daemon, -p binds the port,
# -v binds a host directory as the data volume (inline comments after the
# line-continuation backslashes would break the command, so they are kept up here)
nvidia-docker run -e PASSWORD=your_jupyter_passwd \
    -d \
    -p 8888:8888 \
    --name tensorflow \
    -v /data/dir/on/host/:/data/ \
    tensorflow/tensorflow:latest-gpu

Continuing from above:
Change the shell used by the terminal that Jupyter starts
The main process of this image is Jupyter. The simplest way to change the shell used by terminals launched from Jupyter is to set the SHELL environment variable in the startup script. Enter the container through a Jupyter terminal or with docker exec -it tensorflow bash, then edit /run_jupyter.sh and add the following line before jupyter notebook "$@":

export SHELL=/bin/bash

Using Anaconda

## 1. Install Anaconda inside the container
## 2. Edit /run_jupyter.sh

Change  jupyter notebook "$@"  to:  /path/to/anaconda/bin/jupyter notebook "$@"

Test whether Jupyter uses the GPU

Reference: https://blog.paperspace.com/jupyter-notebook-with-a-gpu-the-easy-way/
Start the TensorFlow container:

sudo nvidia-docker run --rm --name tf-notebook -p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu jupyter notebook --allow-root

You can confirm that the GPU is working by opening a notebook and typing:

from tensorflow.python.client import device_lib

def get_available_devices():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos]

print(get_available_devices())


TensorFlow Dockerfiles

https://github.com/tensorflow/tensorflow/tree/60b4151cdd388856601fedb2f0991f4fa844f0fc/tensorflow/tools/dockerfiles/dockerfiles
https://github.com/tensorflow/tensorflow/tree/60b4151cdd388856601fedb2f0991f4fa844f0fc/tensorflow/tools/dockerfiles
Dockerfile example:

# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
#
# THIS IS A GENERATED DOCKERFILE.
#
# This file was assembled from multiple pieces, whose use is documented
# throughout. Please refer to the TensorFlow dockerfiles documentation
# for more information.

ARG UBUNTU_VERSION=18.04

ARG ARCH=
ARG CUDA=10.1
FROM nvidia/cuda${ARCH:+-$ARCH}:${CUDA}-base-ubuntu${UBUNTU_VERSION} as base
# ARCH and CUDA are specified again because the FROM directive resets ARGs
# (but their default value is retained if set previously)
ARG ARCH
ARG CUDA
ARG CUDNN=7.6.4.38-1
ARG CUDNN_MAJOR_VERSION=7
ARG LIB_DIR_PREFIX=x86_64
ARG LIBNVINFER=6.0.1-1
ARG LIBNVINFER_MAJOR_VERSION=6

# Needed for string substitution
SHELL ["/bin/bash", "-c"]
# Pick up some TF dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        cuda-command-line-tools-${CUDA/./-} \
        # There appears to be a regression in libcublas10=10.2.2.89-1 which
        # prevents cublas from initializing in TF. See
        # https://github.com/tensorflow/tensorflow/issues/9489#issuecomment-562394257
        libcublas10=10.2.1.243-1 \
        cuda-nvrtc-${CUDA/./-} \
        cuda-cufft-${CUDA/./-} \
        cuda-curand-${CUDA/./-} \
        cuda-cusolver-${CUDA/./-} \
        cuda-cusparse-${CUDA/./-} \
        curl \
        libcudnn7=${CUDNN}+cuda${CUDA} \
        libfreetype6-dev \
        libhdf5-serial-dev \
        libzmq3-dev \
        pkg-config \
        software-properties-common \
        unzip

# Install TensorRT if not building for PowerPC
RUN [[ "${ARCH}" = "ppc64le" ]] || { apt-get update && \
        apt-get install -y --no-install-recommends libnvinfer${LIBNVINFER_MAJOR_VERSION}=${LIBNVINFER}+cuda${CUDA} \
        libnvinfer-plugin${LIBNVINFER_MAJOR_VERSION}=${LIBNVINFER}+cuda${CUDA} \
        && apt-get clean \
        && rm -rf /var/lib/apt/lists/*; }

# For CUDA profiling, TensorFlow requires CUPTI.
ENV LD_LIBRARY_PATH /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# Link the libcuda stub to the location where tensorflow is searching for it and reconfigure
# dynamic linker run-time bindings
RUN ln -s /usr/local/cuda/lib64/stubs/libcuda.so /usr/local/cuda/lib64/stubs/libcuda.so.1 \
    && echo "/usr/local/cuda/lib64/stubs" > /etc/ld.so.conf.d/z-cuda-stubs.conf \
    && ldconfig

ARG USE_PYTHON_3_NOT_2
# TODO(angerson) Completely remove Python 2 support
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}

# See http://bugs.python.org/issue19846
ENV LANG C.UTF-8

RUN apt-get update && apt-get install -y \
    ${PYTHON} \
    ${PYTHON}-pip

RUN ${PIP} --no-cache-dir install --upgrade \
    pip \
    setuptools

# Some TF tools expect a "python" binary
RUN ln -s $(which ${PYTHON}) /usr/local/bin/python

# Options:
#   tensorflow
#   tensorflow-gpu
#   tf-nightly
#   tf-nightly-gpu
# Set --build-arg TF_PACKAGE_VERSION=1.11.0rc0 to install a specific version.
# Installs the latest version by default.
ARG TF_PACKAGE=tensorflow
ARG TF_PACKAGE_VERSION=
RUN ${PIP} install ${TF_PACKAGE}${TF_PACKAGE_VERSION:+==${TF_PACKAGE_VERSION}}

COPY bashrc /etc/bash.bashrc
RUN chmod a+rwx /etc/bash.bashrc

RUN ${PIP} install jupyter matplotlib
# Pin ipykernel and nbformat; see https://github.com/ipython/ipykernel/issues/422
RUN if [[ "${USE_PYTHON_3_NOT_2}" == "1" ]]; then ${PIP} install ipykernel==5.1.1 nbformat==4.4.0; fi
RUN ${PIP} install jupyter_http_over_ws
RUN jupyter serverextension enable --py jupyter_http_over_ws

RUN mkdir -p /tf/tensorflow-tutorials && chmod -R a+rwx /tf/
RUN mkdir /.local && chmod a+rwx /.local
RUN apt-get install -y --no-install-recommends wget
WORKDIR /tf/tensorflow-tutorials
RUN wget https://raw.githubusercontent.com/tensorflow/docs/master/site/en/tutorials/keras/classification.ipynb
RUN wget https://raw.githubusercontent.com/tensorflow/docs/master/site/en/tutorials/keras/overfit_and_underfit.ipynb
RUN wget https://raw.githubusercontent.com/tensorflow/docs/master/site/en/tutorials/keras/regression.ipynb
RUN wget https://raw.githubusercontent.com/tensorflow/docs/master/site/en/tutorials/keras/save_and_load.ipynb
RUN wget https://raw.githubusercontent.com/tensorflow/docs/master/site/en/tutorials/keras/text_classification.ipynb
RUN wget https://raw.githubusercontent.com/tensorflow/docs/master/site/en/tutorials/keras/text_classification_with_hub.ipynb
COPY readme-for-jupyter.md README.md
RUN apt-get autoremove -y && apt-get remove -y wget
WORKDIR /tf
EXPOSE 8888

RUN ${PYTHON} -m ipykernel.kernelspec

CMD ["bash", "-c", "source /etc/bash.bashrc && jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root"]


Test GPU usage

import tensorflow as tf
import timeit

with tf.device('/cpu:0'):
	cpu_a = tf.random.normal([10000, 1000])
	cpu_b = tf.random.normal([1000, 2000])
	print(cpu_a.device, cpu_b.device)

with tf.device('/gpu:0'):
	gpu_a = tf.random.normal([10000, 1000])
	gpu_b = tf.random.normal([1000, 2000])
	print(gpu_a.device, gpu_b.device)

def cpu_run():
	with tf.device('/cpu:0'):
		c = tf.matmul(cpu_a, cpu_b)
	return c 

def gpu_run():
	with tf.device('/gpu:0'):
		c = tf.matmul(gpu_a, gpu_b)
	return c 


# warm up
cpu_time = timeit.timeit(cpu_run, number=10)
gpu_time = timeit.timeit(gpu_run, number=10)
print('warmup:', cpu_time, gpu_time)


cpu_time = timeit.timeit(cpu_run, number=10)
gpu_time = timeit.timeit(gpu_run, number=10)
print('run time:', cpu_time, gpu_time)


###########################
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # set before importing TF so the C++ startup logs are suppressed

import tensorflow as tf

a = tf.constant(1.)
b = tf.constant(2.)
print(a + b)

print('GPU:', tf.test.is_gpu_available())
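
Note that tf.test.is_gpu_available() is deprecated in TensorFlow 2.x. A minimal sketch of the replacement check, assuming TF 2.1+ where tf.config.list_physical_devices is available (older 2.0 builds expose it as tf.config.experimental.list_physical_devices):

import tensorflow as tf

# one PhysicalDevice entry is returned per GPU that TensorFlow can see
gpus = tf.config.list_physical_devices('GPU')
print('Num GPUs:', len(gpus))
for gpu in gpus:
    print(gpu)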



Differences between TensorFlow 1.12+ and TensorFlow 2.0

Function mapping between the two TensorFlow versions:
Reference link:
https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0
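
The spreadsheet above maps 1.x symbols to their 2.0 equivalents. The biggest practical change is that 2.0 runs eagerly by default and moves the Session/placeholder workflow into tf.compat.v1; a minimal sketch of the two styles:

import tensorflow as tf

# TensorFlow 2.x style: eager execution by default, no Session or placeholder needed
a = tf.constant(3.0)
print(a * 2.0)  # tf.Tensor(6.0, shape=(), dtype=float32)

# The equivalent TensorFlow 1.x graph/Session style (still runnable in 2.x through the
# tf.compat.v1 shim, but only if eager execution is disabled at program startup):
#
#   tf.compat.v1.disable_eager_execution()
#   x = tf.compat.v1.placeholder(tf.float32, shape=())
#   y = x * 2.0
#   with tf.compat.v1.Session() as sess:
#       print(sess.run(y, feed_dict={x: 3.0}))  # 6.0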

Setting a Jupyter password


# Open a terminal and start the IPython interactive environment
ipython

from notebook.auth import passwd
passwd()
# Enter the password you want for logging in to JupyterLab; a hash string is printed - copy it, it will be needed in the config below
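
If you prefer to generate the hash non-interactively, passwd() also accepts the passphrase directly; a minimal sketch, assuming the classic notebook package is installed:

from notebook.auth import passwd

# prints the salted hash that goes into c.NotebookApp.password below
print(passwd('your_jupyter_passwd'))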



Modify the JupyterLab configuration file:
jupyter lab --generate-config

Modify the following options:
c.NotebookApp.allow_root = True
c.NotebookApp.open_browser = False
c.NotebookApp.password = 'paste the hash copied above here'


# Install an extension that generates a table of contents
jupyter labextension install @jupyterlab/toc
# List the installed extensions
jupyter labextension list

## Link
https://www.cnblogs.com/lskreno/p/10844315.html

Starting the official image

nvidia-docker run -d --rm -p 3333:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter /bin/bash -c "jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root --NotebookApp.token='jupyterAdmin'"

Compiling and installing Python on Ubuntu

https://yq.aliyun.com/articles/675910

How to build a Docker image with TensorFlow/Python3/Jupyter:
https://zhuanlan.zhihu.com/p/66278558 (this setup requires a token at startup)

ubuntu1604-cuda-cudnn-anaconda-jupyter-tensorflow

FROM  nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04
ENV   PATH  /root/anaconda3/bin:$PATH
RUN   apt-get update  &&  apt-get install -y  wget  bzip2  &&  cd  /    \
      &&  wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-5.2.0-Linux-x86_64.sh   \
      &&  chmod  +x  /Anaconda3-5.2.0-Linux-x86_64.sh   \
      &&  ./Anaconda3-5.2.0-Linux-x86_64.sh -b           \
      &&  rm -rf  ./Anaconda3-5.2.0-Linux-x86_64.sh
RUN   pip  install  tensorflow-gpu==1.11.0  -i  https://pypi.tuna.tsinghua.edu.cn/simple  \
      &&  pip  install  jupyterlab  \
      &&  pip  install  msgpack

RUN  echo  'import subprocess\nimport sys\nsubprocess.call("cd /", shell=True)\nsubprocess.call("jupyter lab --ip=0.0.0.0 --no-browser --allow-root  --NotebookApp.allow_root=False --NotebookApp.token='jupyterAdmin' --notebook-dir=/home", shell=True)'  >>/python_service.py
CMD ["python3","/python_service.py"]



A variant of the same Dockerfile that uses the Douban pip mirror instead:

FROM  nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04
ENV   PATH  /root/anaconda3/bin:$PATH
RUN   apt-get update  &&  apt-get install -y  wget  bzip2  &&  cd  /    \
      &&  wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-5.2.0-Linux-x86_64.sh   \
      &&  chmod  +x  /Anaconda3-5.2.0-Linux-x86_64.sh   \
      &&  ./Anaconda3-5.2.0-Linux-x86_64.sh -b          \
      &&  rm -rf  ./Anaconda3-5.2.0-Linux-x86_64.sh
RUN   pip  install  tensorflow-gpu==1.11.0  -i  https://pypi.douban.com/simple/    \
      &&  pip  install  msgpack  -i  https://pypi.douban.com/simple/    \
      &&  pip  install  jupyterlab

RUN  echo  'import subprocess\nimport sys\nsubprocess.call("cd /", shell=True)\nsubprocess.call("jupyter lab --ip=0.0.0.0 --no-browser --allow-root  --NotebookApp.allow_root=False --NotebookApp.token='jupyterAdmin' --notebook-dir=/home", shell=True)'  >>/python_service.py
CMD ["python3","/python_service.py"]






