Kubernetes Step by Step

Docker Basics

Online Installation

1. Install Docker

  • CentOS 7: first remove any old Docker packages
 sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
  • Install the yum-utils package
yum install -y yum-utils
  • Configure the package repository
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# The repo above is hosted overseas; in China, add the Alibaba Cloud mirror instead:

yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# The domestic (Alibaba Cloud) repository is recommended.
  • Refresh the package index
yum makecache fast
  • Install the Docker packages
# docker-ce is the Community Edition; docker-ee is the Enterprise Edition
yum install docker-ce docker-ce-cli containerd.io -y
# To install a specific version instead of the latest:
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
  • Start Docker
systemctl start docker
# Check that Docker started successfully
docker version

2. Alibaba Cloud registry mirror (image acceleration)
Log in to Alibaba Cloud and find your personal mirror accelerator address in the Container Registry (容器镜像服务) console.

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://z8wfi803.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
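
A quick check that the mirror is in effect; the accelerator address configured above should appear under Registry Mirrors:

docker info | grep -A1 "Registry Mirrors"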

3. Uninstall Docker

  • Remove the packages and dependencies
yum remove docker-ce docker-ce-cli containerd.io
  • Delete the remaining data
rm -rf /var/lib/docker              # /var/lib/docker is Docker's default data directory
rm -rf /var/lib/containerd

Offline Installation

1. Environment

2. Installation

  • Extract the archive
tar -xvf docker-19.03.8.tgz
  • Copy the binaries to /usr/bin/
cp docker/* /usr/bin/

  • Register Docker as a systemd service
cat > /etc/systemd/system/docker.service <<'EOF'

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
  • Start
chmod +x /etc/systemd/system/docker.service 
systemctl daemon-reload           
systemctl start docker            
systemctl enable docker.service    
  • Verify
systemctl status docker         
docker -v                        

Changing Docker's Default Storage Directory

1. Environment:
CentOS 7.x with the docker-ce package already installed

2. Check Docker's current storage path

[root@VM-docker ~]$ docker info |grep Dir
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
 Docker Root Dir: /var/lib/docker              ## Docker's default storage path

3. Stop the Docker service

[root@VM-docker ~]$ systemctl stop docker             ## stop the Docker service
[root@VM-docker ~]$ systemctl status docker           ## check the service status

4. Migrate the existing data to the new directory

[root@VM-docker local]# mkdir -p /data/docker
[root@VM-docker local]# mv /var/lib/docker/* /data/docker/

5. Edit docker.service and point the --graph option at the new location

[root@VM-docker local]# vim /etc/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --graph /data/docker
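
Note: on newer Docker releases the --graph/-g flag has been removed; the supported equivalent is the "data-root" key in /etc/docker/daemon.json. A minimal sketch of that alternative (if daemon.json already exists, e.g. with registry-mirrors, merge the key in rather than overwriting the file):

cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/data/docker"
}
EOF
systemctl daemon-reload && systemctl restart docker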

6. Reload the systemd configuration

[root@VM-docker local]# systemctl daemon-reload

7. Start the Docker service

[root@VM-docker local]# systemctl start docker
[root@VM-docker local]# systemctl enable docker
[root@VM-docker local]# systemctl status docker

8. Verify the change

[root@VM-docker ~]# docker info | grep Dir
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
 Docker Root Dir: /data/docker   ## the new storage path is in effect

Command Auto-completion

Linux

sudo curl \
    -L https://raw.githubusercontent.com/docker/compose/1.29.2/contrib/completion/bash/docker-compose \
    -o /etc/bash_completion.d/docker-compose

source ~/.bashrc

Reference:
https://docs.docker.com/compose/completion/

Setting Up a Local Registry

1. Start a local registry

mkdir /data/docker/myregistry -p
docker run -d -p 5000:5000 -v /data/docker/myregistry:/var/lib/registry registry

2. Push an image

docker tag httpd:v11 127.0.0.1:5000/michael/httpd:v11
docker push 127.0.0.1:5000/michael/httpd:v11

3. List the images in the registry

# List all repositories in the registry
curl -XGET http://192.168.2.80:5000/v2/_catalog
# List the tags of a given repository
curl -XGET http://192.168.2.80:5000/v2/michael/httpd/tags/list

4. Pull the image michael/httpd:v11 from the registry

vim /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.2.80:5000"]
}
systemctl restart docker
docker pull 192.168.2.80:5000/michael/httpd:v11

Chinese Language Support in a CentOS 7 Container

1. Configure inside the container

Install the two packages that provide the Chinese locale:
# yum install kde-l10n-Chinese -y
# yum install glibc-common -y
Generate the locale and character set:
# localedef -c -f UTF-8 -i zh_CN zh_CN.utf8
Add the definition to the system environment variables:
# vi /etc/profile
export LC_ALL=zh_CN.utf8
Apply the change:
# source /etc/profile

2. Write a Dockerfile

FROM centos
MAINTAINER djl
#Set the system locale
RUN yum install kde-l10n-Chinese -y
RUN yum install glibc-common -y
RUN localedef -c -f UTF-8 -i zh_CN zh_CN.utf8
#RUN export LANG=zh_CN.UTF-8
#RUN echo "export LANG=zh_CN.UTF-8" >> /etc/locale.conf
#ENV LANG zh_CN.UTF-8
ENV LC_ALL zh_CN.UTF-8
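
A quick build-and-check sketch for the Dockerfile above (the tag centos-zh is just an example name):

docker build -t centos-zh .
docker run --rm centos-zh locale    # LC_ALL should report zh_CN.UTF-8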

Reference:
https://www.lmlphp.com/user/16958/article/item/481698/

dockerfile

tomcat

#Based on the centos base image pulled from Alibaba Cloud
FROM centos
 
#Maintainer information
MAINTAINER kgf<kgf@163.com>
 
#Copy readme.txt from the build context into /usr/local/ in the container
COPY readme.txt /usr/local/readme.txt
 
#Add the JDK and Tomcat archives to the container; ADD extracts them automatically
ADD jdk-8u191-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.12.tar.gz /usr/local/
 
#Install the vim editor
RUN yum -y install vim
 
#Set the working directory, the landing point when entering the container
ENV MY_PATH /usr/local
WORKDIR $MY_PATH
 
#Configure the Java and Tomcat environment variables
ENV JAVA_HOME /usr/local/jdk1.8.0_191
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.12
ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.12
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
 
#Port the container listens on at runtime
EXPOSE 8080
 
#Start Tomcat on container start; any one of the three options below works
#ENTRYPOINT ["/usr/local/apache-tomcat-9.0.12/bin/startup.sh"]
#CMD ["/usr/local/apache-tomcat-9.0.12/bin/catalina.sh","run"]
CMD /usr/local/apache-tomcat-9.0.12/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.12/logs/catalina.out
A second, more compact variant (Tomcat 7 on CentOS 7):

FROM centos:7
MAINTAINER <jluocc.com>
ADD server-jre-8u221-linux-x64.tar.gz  /
ADD apache-tomcat-7.0.109.tar.gz /usr/local/
ENV CATALINA_HOME /usr/local/apache-tomcat-7.0.109
ENV JAVA_HOME=/jdk1.8.0_221
ENV CLASSPATH=.:$JAVA_HOME/lib/jrt-fs.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV PATH=$PATH:$JAVA_HOME/bin:$CATALINA_HOME/bin
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone
WORKDIR $CATALINA_HOME
EXPOSE 8080
CMD ["catalina.sh", "run"]


maven

vi Dockerfile 
# Build an image containing JDK 8u391 and Maven 3.6.1 on top of centos:latest
FROM centos
MAINTAINER <lyg_cn>
WORKDIR /usr/local/java
# Unlike COPY, ADD automatically extracts tar archives
ADD jdk-8u391-linux-x64.tar.gz /usr/local/java/
# Maven archive, assumed to sit in the build context next to the JDK archive
ADD apache-maven-3.6.1-bin.tar.gz /usr/local/maven/
ENV JAVA_HOME=/usr/local/java/jdk1.8.0_391
ENV MAVEN_HOME=/usr/local/maven/apache-maven-3.6.1
ENV CLASSPATH=.:$JAVA_HOME/lib/jrt-fs.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV PATH=$PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin
# Keep an interactive shell as the default command so the container stays resident
CMD ["/bin/bash"]

docker-compose

Installation

  • Download
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Mirror hosted in China (DaoCloud):
sudo curl -L  https://get.daocloud.io/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose 
  • Make it executable
sudo chmod +x /usr/local/bin/docker-compose
  • Install command completion
curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
  • Verify the installation
docker-compose version

Common YAML Files

mysql

docker-compose.yml

vi docker-compose.yml
version: '2'
services:
  mysql:
    hostname: mysql
    container_name: mysql
    image: mysql:5.7
    privileged: true
    environment:
      MYSQL_USER: yunwisdom
      MYSQL_PASSWORD: password123
      MYSQL_DATABASE: database
      MYSQL_ROOT_PASSWORD: password123
    ports:
      - 3306:3306
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./data:/var/lib/mysql:rw
      - /etc/hosts:/etc/hosts:rw
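
A usage sketch: bring the service up and connect with the root password defined in MYSQL_ROOT_PASSWORD above:

docker-compose up -d
docker exec -it mysql mysql -uroot -ppassword123 -e "show databases;"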

redis

vim docker-compose.yml

version: '2'
services:
    redis:
      image: redis:5.0.0
      container_name: redis
      command: redis-server --requirepass 123456
      ports:
        - "16379:6379"
      volumes:
        - ./data:/data

vi redis.conf

# Listen on all interfaces
bind 0.0.0.0
# Allow external access (disable protected mode)
protected-mode no
port 6379
timeout 0
# RDB snapshot settings
save 900 1
save 300 10
save 60 10000
rdbcompression yes
dbfilename dump.rdb
# Data directory
dir /data
# Enable AOF persistence
appendonly yes
appendfsync everysec
appendfilename "appendonly.aof"
# Set the password
requirepass 123456

vi docker-compose.yml

version: '3'
 
services:
  redis:
    # Image
    image: redis:6.2.0
    # Container name
    container_name: redis
    # Restart policy
    restart: always
    # Port mapping
    ports:
      - 6379:6379
    environment:
      # Environment variables: Shanghai timezone, UTF-8 locale
      TZ: Asia/Shanghai
      LANG: en_US.UTF-8
    volumes:
      # Configuration file
      - ./redis.conf:/redis.conf:rw
      # Data directory
      - ./data:/data:rw
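
A usage sketch. Note that the stock redis image starts redis-server without a configuration file, so for the mounted /redis.conf to actually be loaded a line such as command: redis-server /redis.conf would normally be added to the service definition:

docker-compose up -d
docker exec -it redis redis-cli ping    # expect PONG (use -a 123456 once the config with requirepass is loaded)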

k8s Installation

Linux Initialization

1. Configure the yum repositories

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

2. Install tools

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

3. On all nodes, disable the firewall, SELinux, dnsmasq, and swap

systemctl disable --now firewalld 
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

4. Disable the swap partition

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

5. Install ntpdate

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y

6. Synchronize time on all nodes. The time sync configuration is as follows:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# Add to crontab so it runs every 5 minutes
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
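
A sketch for installing that entry non-interactively into root's crontab (assumes no conflicting entries already exist):

(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com") | crontab -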

7. Configure resource limits on all nodes:

ulimit -SHn 65535

vim /etc/security/limits.conf
# Append the following at the end
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

8. Kernel upgrade (4.19)

cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
cd /root && yum localinstall -y kernel-ml*
# Change the default boot kernel on all nodes
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg

grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
# Check that the default kernel is 4.19
[root@k8s-master02 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
# Reboot all nodes, then confirm the running kernel is 4.19
[root@k8s-master02 ~]# uname -a
Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

9. Install ipvsadm on all nodes

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the IPVS modules on all nodes. On kernel 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack; on 4.18 and below, use nf_conntrack_ipv4:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
vim /etc/modules-load.d/ipvs.conf
# Add the following lines
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

Then run systemctl enable --now systemd-modules-load.service.

10. Kernel parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

11. After the kernel configuration is done on all nodes, reboot the servers and make sure the modules are still loaded afterwards

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

kubeadm

Runtime Installation

Install docker-ce 20.10 on all nodes

# yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Docker itself does not need to be started; only Containerd needs to be configured and started.
First, configure the kernel modules Containerd requires (on all nodes):

# cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Load the modules on all nodes

# modprobe -- overlay
# modprobe -- br_netfilter

On all nodes, configure the kernel parameters Containerd requires:

# cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply the kernel parameters on all nodes

# sysctl --system

Generate the default Containerd configuration file on all nodes

# mkdir -p /etc/containerd
# containerd config default | tee /etc/containerd/config.toml

On all nodes, switch Containerd's cgroup driver to systemd

# vim /etc/containerd/config.toml

Find the containerd.runtimes.runc.options section and set SystemdCgroup = true (edit the key in place if it already exists; adding a duplicate key will cause an error).
On all nodes, also change sandbox_image to a Pause image address matching your version, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6.
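
A minimal sketch of the two edits as one-liners, assuming the default config.toml generated by containerd config default (verify the result before restarting):

# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
# grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
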
Start Containerd on all nodes and enable it at boot:

# systemctl daemon-reload
# systemctl enable --now containerd

Configure the runtime endpoint used by the crictl client on all nodes:

# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
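
A quick check that crictl can reach Containerd through this endpoint (run after Containerd has been started):

# crictl info | head
# crictl ps -a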

Installing the Kubernetes Components

First, on the Master01 node, check what the latest Kubernetes version is:

# yum list kubeadm.x86_64 --showduplicates | sort -r

Install the latest 1.25 release of kubeadm, kubelet, and kubectl on all nodes:

# yum install kubeadm-1.25* kubelet-1.25* kubectl-1.25* -y

If Containerd was chosen as the runtime, the kubelet configuration must be changed to use Containerd:

# cat >/etc/sysconfig/kubelet<<EOF
KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF

Enable kubelet at boot on all nodes (the cluster has not been initialized yet, so there is no kubelet configuration file and kubelet cannot start; this can be ignored for now):

# systemctl daemon-reload
# systemctl enable --now kubelet
At this point kubelet cannot start and its log shows errors; this does not matter.

Single Master

1. Configure hosts

vi  /etc/hosts
192.168.4.20 k8s-master-20
192.168.4.21 k8s-node01-21
192.168.4.22 k8s-node02-22

2. Initialize the master node

kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=172.16.0.0/16 --service-cidr=192.169.0.0/16 --ignore-preflight-errors=all

If initialization fails, reset and initialize again with the following command (do not run it if initialization succeeded):

kubeadm reset -f ; ipvsadm --clear  ; rm -rf ~/.kube

After a successful initialization a token is generated for other nodes to join, so record the token printed in the init output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.4.20:16443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
	--control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c

....

3. Configure the kubeconfig used to access the Kubernetes cluster

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Join the worker nodes

  kubeadm join 192.168.4.20:16443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \

5. Check the nodes

[root@k8s-master-20 test]# kubectl get nodes
NAME            STATUS   ROLES           AGE    VERSION
k8s-node01-21   Ready    <none>          172m   v1.25.3
k8s-master-20   Ready    control-plane   172m   v1.25.3
k8s-node02-22   Ready    <none>          171m   v1.25.3

Note: the nodes show Ready because the Calico network add-on has already been installed.

Cluster

1. Install the high-availability components

Binary Installation (Cluster)

Installing the Calico Component

POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
sed -i "s#POD_CIDR#${POD_SUBNET}#g" calico.yaml
kubectl apply -f calico.yaml
[root@k8s-master-20 test]# kubectl get pod -A  | grep calico
kube-system   calico-kube-controllers-86d8c4fb68-tjwc5   1/1     Running   2 (102m ago)   153m
kube-system   calico-node-27dsd                          1/1     Running   1 (103m ago)   153m
kube-system   calico-node-v4r8n                          1/1     Running   1 (102m ago)   153m
kube-system   calico-node-z6rrm                          1/1     Running   1 (103m ago)   153m
kube-system   calico-typha-768795f74d-4gp4n              1/1     Running   1 (103m ago)   153m

Deploying the Metrics Server

Copy front-proxy-ca.crt from the Master01 node to all worker nodes

scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01-21:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02-22:/etc/kubernetes/pki/front-proxy-ca.crt

Write the YAML file

vi comp.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.6.1 
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Deploy

kubectl  create -f comp.yaml 
kubectl get po -n kube-system -l k8s-app=metrics-server
[root@k8s-master-20 ~]# kubectl get po -n kube-system -l k8s-app=metrics-server
NAME                              READY   STATUS    RESTARTS       AGE
metrics-server-74db45c9df-rwc9n   1/1     Running   1 (105m ago)   151m
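
Once the metrics-server Pod is Running, resource metrics should become queryable after a short delay:

kubectl top node
kubectl top pod -n kube-system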

Deploying the Dashboard

kubectl Command Auto-completion

1. Install bash-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion

2. Reload kubectl completion in the current shell

source <(kubectl completion bash)

3. Permanently enable completion in your bash shell

echo "source <(kubectl completion bash)" >> ~/.bashrc

4. Tab completion now works, for example:

kubectl create clusterrolebinding

pod

Creating a Pod

Define a Pod

# vim nginx.yaml
apiVersion: v1 # Required, API version
kind: Pod # Required, resource type Pod
metadata: # Required, metadata
  name: nginx # Required, Pod name conforming to RFC 1035
spec: # Required, detailed definition of the Pod
  containers: # Required, list of containers
  - name: nginx # Required, container name conforming to RFC 1035
    image: nginx:1.23.2 # Required, image used by the container
    ports: # Optional, list of ports exposed by the container
    - containerPort: 80 # Port number

Create the Pod:

# kubectl create -f nginx.yaml 
pod/nginx created

Check the Pod status:

# kubectl get po nginx
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 20s

Create a Pod with kubectl run:

# kubectl run nginx-run --image=nginx

Changing a Pod's Startup Command and Arguments

# vim nginx.yaml
apiVersion: v1 # Required, API version
kind: Pod # Required, resource type Pod
metadata: # Required, metadata
  name: nginx # Required, Pod name conforming to RFC 1035
spec: # Required, detailed definition of the Pod
  containers: # Required, list of containers
  - name: nginx # Required, container name conforming to RFC 1035
    image: nginx:1.23.2 # Required, image used by the container
    command: # Optional, command executed when the container starts
    - sleep
    - "10"
    ports: # Optional, list of ports exposed by the container
    - containerPort: 80 # Port number

Pod Statuses and Troubleshooting Commands

Pending: The Pod has been accepted by the Kubernetes system, but one or more containers have not been created yet; use kubectl describe to see why it is still Pending.
Running: The Pod has been bound to a node and all containers have been created, with at least one container running, starting, or restarting; use kubectl logs to view the Pod's logs.
Succeeded: All containers ran to completion successfully and will not be restarted; use kubectl logs to view the Pod's logs.
Failed: All containers have terminated and at least one terminated in failure, i.e. exited with a non-zero status or was killed by the system; use logs and describe to inspect the Pod's logs and state.
Unknown: The Pod's state cannot be obtained, usually because of a communication problem.
ImagePullBackOff / ErrImagePull: The image pull failed, typically because the image does not exist, the network is unreachable, or registry authentication is required; use describe to find the exact reason.
CrashLoopBackOff: The container fails to start; check with logs, usually a wrong startup command or a failing health check.
OOMKilled: The container ran out of memory, usually because its memory limit is set too low or the program itself leaks memory; check the startup logs with logs.
Terminating: The Pod is being deleted; check its state with describe.
SysctlForbidden: The Pod requested custom kernel parameters, but kubelet has not enabled them or the parameters are unsupported; use describe for details.
Completed: The main process in the container exited normally, typically shown when a scheduled job finishes; view the container logs with logs.
ContainerCreating: The Pod is being created, usually while the image is downloading, or there is a misconfiguration; use describe for details.

Note:
A Pod's Phase field only takes the values Pending, Running, Succeeded, Failed, and Unknown; the other entries above are reasons for being in one of those phases and can be inspected with kubectl get po xxx -o yaml.
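
A short troubleshooting sketch for the commands referenced above, using the nginx Pod from this section:

kubectl describe po nginx      # events and reasons behind Pending, ContainerCreating, etc.
kubectl logs nginx             # container logs for CrashLoopBackOff, OOMKilled, etc.
kubectl get po nginx -o yaml   # full status, including the Phase field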

Pod Image Pull Policy

The spec.containers[].imagePullPolicy field sets the image pull policy. The currently supported policies are:

Always: Always pull the image; when the image tag is latest and imagePullPolicy is not set, the default is Always.
Never: Never pull the image, regardless of whether it exists locally.
IfNotPresent: Pull the image only when it is not present locally; for non-latest tags with imagePullPolicy unset, this is the default.

Change the pull policy to IfNotPresent:

# vim nginx.yaml
apiVersion: v1 # Required, API version
kind: Pod # Required, resource type Pod
metadata: # Required, metadata
  name: nginx # Required, Pod name conforming to RFC 1035
spec: # Required, detailed definition of the Pod
  containers: # Required, list of containers
  - name: nginx # Required, container name conforming to RFC 1035
    image: nginx:1.23.2 # Required, image used by the container
    imagePullPolicy: IfNotPresent # Optional, image pull policy
    ports: # Optional, list of ports exposed by the container
    - containerPort: 80 # Port number

Pod Restart Policy

The spec.restartPolicy field sets the container restart policy:

Always: Default policy; automatically restart the container when it fails.
OnFailure: Automatically restart the container only when it terminates with a non-zero exit code.
Never: Never restart the container, regardless of its state.

Set the restart policy to Never:

apiVersion: v1 # Required, API version
kind: Pod # Required, resource type Pod
metadata: # Required, metadata
  name: nginx # Required, Pod name conforming to RFC 1035
spec: # Required, detailed definition of the Pod
  containers: # Required, list of containers
  - name: nginx # Required, container name conforming to RFC 1035
    image: nginx:1.23.2 # Required, image used by the container
    imagePullPolicy: IfNotPresent
    command: # Optional, command executed when the container starts
    - sleep
    - "10"
    ports: # Optional, list of ports exposed by the container
    - containerPort: 80 # Port number
  restartPolicy: Never