Configuring GPUs on k8s Cluster Nodes

The following prerequisites are assumed:

  • k8s 1.10+
  • OS: Kylin v10 SP2 (for CentOS, Ubuntu, etc., see the other repository options under "Installing the NVIDIA Container Toolkit"; in testing, the newer CentOS repository also works on Kylin v10 SP2)

Official documentation: NVIDIA/k8s-device-plugin: NVIDIA device plugin for Kubernetes (github.com)

1. Install the NVIDIA GPU driver

First, check whether an NVIDIA driver is already installed and loaded:

$ lspci -vv | grep -i nvidia
00:0d.0 VGA compatible controller: NVIDIA Corporation TU104GL [Quadro RTX 5000] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: NVIDIA Corporation Device 129f
        Kernel driver in use: nvidia
        Kernel modules: nouveau, nvidia_drm, nvidia
00:0e.0 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
        Subsystem: NVIDIA Corporation Device 129f
00:0f.0 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1) (prog-if 30 [XHCI])
        Subsystem: NVIDIA Corporation Device 129f
00:10.0 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)
        Subsystem: NVIDIA Corporation Device 129f

If you see output like the above (Kernel driver in use: nvidia), the driver is already installed; skip ahead to Part 2.

1.1 Install the build dependencies for the driver

yum install gcc gcc-c++ make -y

Remove any NVIDIA driver packages that shipped with the system:

yum remove "nvidia-*"

1.2 Disable the nouveau driver

On CentOS-family systems, nouveau is the open-source, third-party driver for NVIDIA GPUs. It is incompatible with NVIDIA's proprietary driver, so it must be blacklisted before the official driver can be installed. Otherwise the installer aborts with:
ERROR: The Nouveau kernel driver is currently in use by your system. This driver is incompatible with the NVIDIA driver.

echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "options nouveau modeset=0" >> /etc/modprobe.d/blacklist.conf
dracut --force

Reboot after making these changes.
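After the reboot, confirm that nouveau is no longer loaded. This quick check is an addition to the original steps; the command should print nothing:

$ lsmod | grep nouveau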

1.3 Install the GPU driver

Check the GPU model:

lspci | grep -i vga
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:0d.0 VGA compatible controller: NVIDIA Corporation TU104GL [Quadro RTX 5000] (rev a1)
00:0e.0 VGA compatible controller: NVIDIA Corporation TU104GL [Quadro RTX 5000] (rev a1)

Download the matching driver from NVIDIA's official driver download page.

Use the search form to find the driver for your GPU model and download it.

Upload the driver to the server, then run:

bash NVIDIA-Linux-x86_64-550.107.02.run

Installer prompts and the answers to choose:

An alternate method of installing the NVIDIA driver was detected. (This is usually a package provided by your distributor.) A driver installed via that method may integrate better with your system than a driver installed by nvidia-installer. Please review the message provided by the maintainer of this alternate installation method and decide how to proceed:
[Continue installation]

Install NVIDIA's 32-bit compatibility libraries?
[No]

Would you like to register the kernel module sources with DKMS? This will allow DKMS to automatically build a new module, if your kernel changes later.
[Rebuild initramfs]

Would you like to run the nvidia-xconfig utility to automatically update your X configuration file so that the NVIDIA X driver will be used when you restart X? Any pre-existing X configuration file will be backed up.
[No]
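If you need an unattended install (for example, across many nodes over SSH), the .run installer also accepts a silent mode that answers the prompts with defaults; this is an optional shortcut rather than part of the original walkthrough:

bash NVIDIA-Linux-x86_64-550.107.02.run --silent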

1.4 Verify the driver

Verify with nvidia-smi.

nvidia-smi (NVIDIA System Management Interface) is a command-line utility built on the NVIDIA Management Library (NVML), designed for managing and monitoring NVIDIA GPU devices.

# nvidia-smi
     
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.113.01               Driver Version: 535.113.01   CUDA Version: 12.2     |
|-------------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M |   Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Quadro RTX 5000                Off |   00000000:00:0D.0 Off |                  Off |
| 33%   28C    P8             11W /  230W |       1MiB /  16384MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  Quadro RTX 5000                Off |   00000000:00:0E.0 Off |                  Off |
| 33%   23C    P8             18W /  230W |       1MiB /  16384MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

Watch GPU status continuously:

# watch -t -n 1 nvidia-smi

Check the CUDA compiler version:

nvcc -V
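nvcc ships with the CUDA Toolkit, not with the driver, so this only works if the toolkit is installed. As an optional extra check, you can read the driver version directly from nvidia-smi:

$ nvidia-smi --query-gpu=driver_version --format=csv,noheader
535.113.01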

2. Add a label to the GPU nodes

$ kubectl label nodes <node-name> node-role.kubernetes.io/gpu=true
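To confirm the label landed on the intended nodes, you can list nodes filtered by it (an optional check):

$ kubectl get nodes -l node-role.kubernetes.io/gpu=true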

3. Enable GPU support in containers

We need to install the NVIDIA Container Toolkit so that Docker can use the GPU.

Without it, GPU containers fail with:

docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

The NVIDIA container stack consists mainly of nvidia-container-runtime, nvidia-container-toolkit, and libnvidia-container.

nvidia-container-toolkit: the tooling (container runtime hook and the nvidia-ctk CLI) that prepares containers for GPU access.
nvidia-container-runtime: a thin wrapper around runc that injects the NVIDIA hook, so containers started by Docker or containerd can use the GPU.
libnvidia-container: the library and nvidia-container-cli utility that do the actual work of exposing GPU devices and driver libraries inside containers.

Official architecture overview: Architecture Overview — NVIDIA Container Toolkit 1.16.0 documentation

Package dependencies:

├─ nvidia-container-toolkit (version)
│    ├─ libnvidia-container-tools (>= version)
│    └─ nvidia-container-toolkit-base (version)
│
├─ libnvidia-container-tools (version)
│    └─ libnvidia-container1 (>= version)
└─ libnvidia-container1 (version)

nvidia-container-toolkit already pulls in the runtime, so we install nvidia-container-toolkit here; the Kubernetes nodes will rely on it for GPU access.

Because of network constraints, or for air-gapped environments, installation of the NVIDIA Container Toolkit is covered in two variants below: offline and online.

Official installation guide: Installing the NVIDIA Container Toolkit — NVIDIA Container Toolkit 1.16.0 documentation

Requirements for running the NVIDIA Container Toolkit:

GNU/Linux x86_64 with kernel version > 3.10
Docker >= 19.03 (recommended; some distributions ship older versions, and the minimum supported is 1.12)
NVIDIA GPU of architecture >= Kepler (or compute capability 3.0)
NVIDIA Linux driver >= 418.81.07 (note that older driver versions or branches are not supported)

3.1 Offline installation of the NVIDIA Container Toolkit

On an internet-connected machine with the same OS and architecture, configure the repository as in the official guide:

curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | \
  sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo

Download the relevant packages with yumdownloader:

yumdownloader libnvidia-container1 \
libnvidia-container-tools \
nvidia-container-toolkit-base \
nvidia-container-toolkit

If yumdownloader itself is slow because of local network conditions, you can instead download the packages directly from the GitHub repository:
libnvidia-container/stable/rpm/x86_64 at gh-pages · NVIDIA/libnvidia-container (github.com)

Choose the versions you need, for example:

libnvidia-container1-1.16.1-1.x86_64.rpm
libnvidia-container-tools-1.16.1-1.x86_64.rpm
nvidia-container-toolkit-base-1.16.1-1.x86_64.rpm
nvidia-container-toolkit-1.16.1-1.x86_64.rpm

Upload the RPMs to the server and install them from that directory:

yum install *.rpm -y
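To double-check that all four packages installed (an optional step), query the RPM database:

$ rpm -qa | grep -E 'nvidia-container|libnvidia-container'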

3.2 Online installation of the NVIDIA Container Toolkit

  • Kylin v10 SP2 repository:
$ curl -s -L https://nvidia.github.io/nvidia-docker/centos8/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
$ sudo yum install -y nvidia-container-toolkit
  • CentOS repository (tested to also work on Kylin):
$ curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | \
  sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
$ sudo yum install -y nvidia-container-toolkit
  • Ubuntu repository:
$ curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \
  && \
    sudo apt-get update
$ sudo apt-get install -y nvidia-container-toolkit

Repository configuration for other distributions: https://nvidia.github.io/nvidia-docker/
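As an alternative to the manual edits in 3.3 and 3.4 below, newer toolkit releases ship an nvidia-ctk helper that can write the runtime configuration for you. This is optional; the manual edits below achieve the same result:

# Docker: registers the nvidia runtime in /etc/docker/daemon.json;
# --set-as-default also makes it the default runtime, as section 3.3 does by hand
$ sudo nvidia-ctk runtime configure --runtime=docker --set-as-default
$ sudo systemctl restart docker

# containerd: updates /etc/containerd/config.toml the same way
$ sudo nvidia-ctk runtime configure --runtime=containerd --set-as-default
$ sudo systemctl restart containerd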

3.3 Configure Docker for the GPU

Since Docker 19.03, installing the NVIDIA Container Toolkit is sufficient: the separate nvidia-docker2 package is no longer required, and GPU support only needs the daemon configuration below.

Add the following to /etc/docker/daemon.json (these keys go inside the top-level JSON object):

  "default-runtime": "nvidia",
  "runtimes": {
      "nvidia": {
          "path": "nvidia-container-runtime",
          "runtimeArgs": []
      }
  }

Restart Docker after the change:

$ sudo systemctl restart docker
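docker info should now list nvidia among the runtimes and as the default; an optional sanity check (output varies slightly by Docker version):

$ docker info | grep -i runtime
 Runtimes: nvidia runc
 Default Runtime: nvidia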

3.4 Configure containerd for the GPU

Add the following to /etc/containerd/config.toml:

version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "nvidia"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
            BinaryName = "/usr/bin/nvidia-container-runtime"

Restart containerd after the change:

sudo systemctl restart containerd
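To confirm containerd picked up the new runtime (optional), dump the merged configuration and look for the nvidia entry:

$ containerd config dump | grep -A 2 'runtimes.nvidia'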

Verify from inside a container:

docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
Tue Nov 14 12:29:39 2023       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.113.01             Driver Version: 535.113.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Quadro RTX 5000                Off | 00000000:00:0D.0 Off |                  Off |
| 29%   23C    P8              12W / 230W |      0MiB / 17236MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

Output like the above means the container setup succeeded.

4. Enable GPU support in Kubernetes

4.1 Install the nvidia-device-plugin for Kubernetes

$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.2/nvidia-device-plugin.yml

Contents of nvidia-device-plugin.yml:

# Copyright (c) 2019, NVIDIA CORPORATION.  All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nvidia-device-plugin-daemonset
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: nvidia-device-plugin-ds
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: nvidia-device-plugin-ds
    spec:
      tolerations:
        - key: node-usage.project/name
          operator: Equal
          value: gpu
      priorityClassName: system-node-critical
      containers:
      - image: 'nvidia/k8s-device-plugin:1.11'
        name: nvidia-device-plugin-ctr
        env:
          - name: FAIL_ON_INIT_ERROR
            value: "false"
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
        volumeMounts:
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
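Before checking the nodes, you can confirm the plugin pods are running on each GPU node; an added check, using the label from the manifest above:

$ kubectl get pods -n kube-system -l name=nvidia-device-plugin-ds -o wide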

Once the plugin pods are running, check whether the labeled nodes now expose allocatable GPUs:

$ kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
NAME          GPU
k8s-worker01   <none>
k8s-worker02   1
k8s-worker03   1

4.2 Verify GPU scheduling in Kubernetes

Deploy a test pod to verify:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
      resources:
        limits:
          nvidia.com/gpu: 1 # requesting 1 GPU
  tolerations:
  - key: nvidia.com/gpu
    operator: Exists
    effect: NoSchedule
EOF
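Wait for the pod to finish before reading the logs; an optional status check, with output along these lines:

$ kubectl get pod gpu-pod
NAME      READY   STATUS      RESTARTS   AGE
gpu-pod   0/1     Completed   0          1m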

Output like the following means GPU support in Kubernetes is working:

$ kubectl logs gpu-pod
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
