Deploying Kubernetes from Binaries

Workflow for deploying a k8s cluster:

0.0. Environment preparation

0.1. keepalived + haproxy

0. Deploy Docker

1. Deploy the etcd cluster

2. Deploy the flannel plugin

3. Deploy the kubectl component

4. Deploy the kube-apiserver component (cluster)

5. Deploy the kube-controller-manager component (cluster)

6. Deploy the kube-scheduler component (cluster)

7. Deploy the kubelet component

8. Deploy kube-proxy

9. Deploy CoreDNS

10. Create an nginx test

11. Deploy the dashboard add-on

12. Deploy the metrics-server add-on

13. Install ingress with helm3

14. Deploy kube-prometheus

15. Deploy the EFK add-on

Note: the HA VIP is 192.168.1.200 and load-balances the apiserver's port 6443

Offline deployment + online deployment

The offline packages have been prepared here in advance

 

tar xf binary_pkg-v1.19.10.tar.gz -C /usr/local/bin/

chmod +x /usr/local/bin/*

ll /usr/local/bin/

tar xf docker-v19.03.15.tar.gz

tar xf env_pkg.tar.gz

0.0. Environment preparation

Hostnames:
# hostnamectl set-hostname k8s-master-01
# hostnamectl set-hostname k8s-master-02
# hostnamectl set-hostname k8s-master-03
# hostnamectl set-hostname node1
# hostnamectl set-hostname node2

cat >> /etc/hosts << EOF
192.168.1.201 k8s-master-01
192.168.1.202 k8s-master-02
192.168.1.203 k8s-master-03
192.168.1.204 node1
192.168.1.205 node2
192.168.1.200 k8s-vip
EOF

for i in k8s-master-01 k8s-master-02 k8s-master-03 node1 node2 k8s-vip;do ping -c 1 $i;done

ssh-keygen

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh-copy-id $i;done

for i in k8s-master-0{1..3} node{1..2}; do echo ">>> $i";ssh-copy-id $i;done

for i in 192.168.1.{201..205}; do echo ">>> $i";scp /etc/hosts root@$i:/etc/; done

Create the directories used for the offline method

for i in 192.168.1.{202..205}; do echo ">>> $i";ssh root@$i "mkdir docker iptables ipvs k8s_image keepalived+haproxy kernel ntp";done

Synchronize time

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "yum -y install ntp";done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "systemctl start ntpd && systemctl enable ntpd && systemctl status ntpd"; done

Offline method:

for i in 192.168.1.{202..205}; do echo ">>> $i";scp /root/ntp/*.rpm root@$i:/root/ntp; done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "cd /root/ntp && yum -y install *.rpm";done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "systemctl start ntpd && systemctl enable ntpd && systemctl status ntpd"; done

Upgrade the kernel: https://www.kernel.org/ (offline packages are already prepared here)

Latest ELRepo repository: http://elrepo.reloumirrors.net/kernel/el7/x86_64/RPMS

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-5.el7.elrepo.noarch.rpm";done
 
for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "yum --enablerepo=elrepo-kernel install -y kernel-lt";done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "grub2-set-default 0";done

Offline method:

for i in 192.168.1.{202..205}; do echo ">>> $i";scp /root/kernel/*.rpm root@$i:/root/kernel; done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "cd /root/kernel && yum -y install *.rpm";done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "grub2-set-default 0";done

Disable the firewall

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld"; done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "firewall-cmd --state"; done

Disable NetworkManager

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "systemctl stop NetworkManager && systemctl disable NetworkManager && systemctl status NetworkManager"; done

iptables RPM packages: http://www.rpmfind.net/linux/rpm2html/search.php?query=iptables-services

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "yum -y install iptables-services"; done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "systemctl start iptables && systemctl enable iptables"; done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X && iptables -P FORWARD ACCEPT && service iptables save"; done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "iptables -nL"; done

Offline method:

for i in 192.168.6.{101..104}; do echo ">>> $i";scp /root/iptables/*.rpm root@$i:/root/iptables; done

for i in 192.168.6.{101..104}; do echo ">>> $i";ssh root@$i "cd /root/iptables && yum -y install *.rpm"; done

for i in 192.168.6.{101..104}; do echo ">>> $i";ssh root@$i "systemctl start iptables && systemctl enable iptables"; done

for i in 192.168.6.{101..104}; do echo ">>> $i";ssh root@$i "iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X && iptables -P FORWARD ACCEPT && service iptables save"; done

for i in 192.168.6.{101..104}; do echo ">>> $i";ssh root@$i "iptables -nL"; done

Disable SELinux

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "setenforce 0 && sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config"; done 

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "sestatus"; done

Disable swap

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab && free -h|grep Swap";done

Load the IPVS kernel modules

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "yum -y install ipset ipvsadm sysstat conntrack libseccomp";done

Offline method

for i in 192.168.1.{202..205}; do echo ">>> $i";scp /root/ipvs/*.rpm root@$i:/root/ipvs; done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "cd /root/ipvs && yum -y install *.rpm";done

# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
modprobe ip_tables
modprobe ip_set
modprobe xt_set
modprobe ipt_set
modprobe ipt_rpfilter
modprobe ipt_REJECT
modprobe ipip
EOF

for i in 192.168.1.{202..205}; do echo ">>> $i";scp /etc/sysconfig/modules/ipvs.modules root@$i:/etc/sysconfig/modules/; done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "chmod 755 /etc/sysconfig/modules/ipvs.modules && sh /etc/sysconfig/modules/ipvs.modules ";done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i lsmod |egrep 'ip_vs|nf_conntrack';done

Configure kernel parameters to enable IP forwarding

# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
net.ipv4.tcp_tw_recycle=0
net.ipv4.neigh.default.gc_thresh1=1024
net.ipv4.neigh.default.gc_thresh2=2048
net.ipv4.neigh.default.gc_thresh3=4096
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

for i in 192.168.1.{202..205}; do echo ">>> $i";scp /etc/sysctl.d/k8s.conf root@$i:/etc/sysctl.d/;done

Configure resource limits

# cat >> /etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
* soft  memlock  unlimited
* hard memlock  unlimited
EOF

for i in 192.168.6.{100..104}; do echo ">>> $i";scp /etc/security/limits.conf root@$i:/etc/security/;done

Reboot these VMs (in a company environment this is usually preconfigured in advance)

for i in 192.168.1.{202..205}; do echo ">>> $i";ssh root@$i "reboot";done

After the reboot, verify that the environment configuration took effect

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "ulimit -SHn 65535";done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "sysctl -p /etc/sysctl.d/k8s.conf";done

Check the kernel version and SELinux status
for i in 192.168.6.{101..104}; do echo ">>> $i";ssh root@$i "uname -r && sestatus";done
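A combined spot-check across all nodes is handy here; a minimal sketch (adjust the host range to your environment):

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "free -h | grep Swap; getenforce; lsmod | egrep -c 'ip_vs|nf_conntrack'"; done
# Swap should show 0B, getenforce should report Disabled, and the module count should be non-zero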

 

 

 

0.1. keepalived + haproxy

Install keepalived + haproxy on all k8s-master nodes

 

for i in 192.168.1.{201..203}; do echo ">>> $i";ssh root@$i "yum -y install keepalived haproxy && cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak && cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_bak";done

Offline method:

for i in 192.168.1.{202..203}; do echo ">>> $i";scp /root/keepalived+haproxy/*.rpm root@$i:/root/keepalived+haproxy; done

for i in 192.168.1.{201..203}; do echo ">>> $i";ssh root@$i "cd /root/keepalived+haproxy && yum -y install *.rpm"; done

for i in 192.168.1.{201..203}; do echo ">>> $i";ssh root@$i "cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak && cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_bak";done

Deploy keepalived

k8s-master-01 (keepalived MASTER configuration)

cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_k8s
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.6.200
    }
    track_script {
        check_haproxy
    }
}
EOF
for i in 192.168.1.{202..203}; do echo ">>> $i";scp /etc/keepalived/keepalived.conf root@$i:/etc/keepalived;done

k8s-master-02 (keepalived BACKUP configuration)

for i in 192.168.1.202; do echo ">>> $i";ssh root@$i "sed -ri 's/MASTER/BACKUP/' /etc/keepalived/keepalived.conf";done

for i in 192.168.1.202; do echo ">>> $i";ssh root@$i "sed -ri 's/100/90/' /etc/keepalived/keepalived.conf";done

k8s-master-03 (keepalived BACKUP configuration)

for i in 192.168.1.203; do echo ">>> $i";ssh root@$i "sed -ri 's/MASTER/BACKUP/' /etc/keepalived/keepalived.conf";done

for i in 192.168.1.203; do echo ">>> $i";ssh root@$i "sed -ri 's/100/80/' /etc/keepalived/keepalived.conf";done

Health-check script (all k8s-master nodes)

cat > /etc/keepalived/check_haproxy.sh << 'EOF'
#!/bin/sh
# If haproxy is down, try to restart it; if it still fails to come up,
# kill keepalived so the VIP can fail over, and send an alert mail.
A=$(ps -C haproxy --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    systemctl start haproxy
    sleep 3
    if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
        killall -9 keepalived
        echo "HAPROXY down" | mail -s "haproxy" root
        sleep 3600
    fi
fi
EOF
for i in 192.168.1.{201..203}; do echo ">>> $i";scp /etc/keepalived/check_haproxy.sh root@$i:/etc/keepalived/ && ssh root@$i "chmod +x /etc/keepalived/check_haproxy.sh";done

haproxy

Identical on all k8s-master nodes

cat > /etc/haproxy/haproxy.cfg << EOF
#-----------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#-----------------------------------------------------

#-----------------------------------------------------
# Global settings
#-----------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#-----------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#-----------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#-----------------------------------------------------
# main frontend which proxys to the backends
#-----------------------------------------------------
frontend  kubernetes-apiserver
    mode                        tcp
    bind                        *:8443
    option                      tcplog
    default_backend             kubernetes-apiserver

#-----------------------------------------------------
# static backend for serving up images, stylesheets and such
#-----------------------------------------------------
listen stats
    bind            *:1080
    stats auth      admin:awesomePassword
    stats refresh   5s
    stats realm     HAProxy\ Statistics
    stats uri       /admin?stats

#-----------------------------------------------------
# round robin balancing between the various backends
#-----------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  k8s-master-01  192.168.6.100:6443 check
    server  k8s-master-02  192.168.6.101:6443 check
    server  k8s-master-03  192.168.6.102:6443 check
EOF
for i in 192.168.6.{100..102}; do echo ">>> $i";scp "/etc/haproxy/haproxy.cfg" root@$i:/etc/haproxy;done
Start keepalived on each node in turn
for i in 192.168.6.{100..102}; do echo ">>> $i";ssh root@$i "systemctl daemon-reload && systemctl start keepalived && systemctl enable keepalived";done

Start haproxy on each node in turn
for i in 192.168.6.{100..102}; do echo ">>> $i";ssh root@$i "systemctl daemon-reload && systemctl start haproxy && systemctl enable haproxy";done

#Check the haproxy ports
for i in 192.168.6.{100..102}; do echo ">>> $i";ssh root@$i ss -lnt | grep -E "8443|1080";done

Check which node currently holds the VIP
for i in 192.168.6.{100..102}; do echo ">>> $i";ssh root@$i "ip a s ens33 |grep global";done

Verify that the VIP fails over

1. # First check k8s-master-01
[root@k8s-master-01 ~]# ip a s ens33 |grep global
    inet 192.168.1.201/24 brd 192.168.1.255 scope global ens33
    inet 192.168.1.200/32 scope global ens33

2. # Then stop keepalived on k8s-master-01 and check whether k8s-master-02 has picked up the VIP
systemctl stop keepalived
#Check whether k8s-master-02 has the VIP
[root@k8s-master-02 ~]# ip a s ens33 |grep global
    inet 192.168.1.202/24 brd 192.168.1.255 scope global ens33
    inet 192.168.1.200/32 scope global ens33
 
3. # Then, with keepalived stopped on both k8s-master-01 and k8s-master-02, check whether k8s-master-03 has the VIP
systemctl stop keepalived
#Check whether k8s-master-03 has the VIP
[root@k8s-master-03 ~]# ip a s ens33 |grep global
    inet 192.168.1.203/24 brd 192.168.1.255 scope global ens33
    inet 192.168.1.200/32 scope global ens33
 
4. # Start keepalived again on k8s-master-01 and k8s-master-02, and confirm the VIP comes back to k8s-master-01
systemctl start keepalived
#Confirm the VIP has failed back
#VIP present
[root@k8s-master-01 ~]# ip a s ens33 |grep global
    inet 192.168.1.201/24 brd 192.168.1.255 scope global ens33
    inet 192.168.1.200/32 scope global ens33

#No VIP
[root@k8s-master-02 ~]# ip a s ens33 |grep global
    inet 192.168.1.202/24 brd 192.168.1.255 scope global ens33

#No VIP
[root@k8s-master-03 ~]# ip a s ens33 |grep global
    inet 192.168.1.203/24 brd 192.168.1.255 scope global ens33

This confirms that keepalived high availability is working
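As an extra sanity check (a sketch, not part of the original procedure; substitute your actual VIP), the VIP should already answer on the haproxy frontend port even though no kube-apiserver exists yet — the TCP connect itself succeeds while the backends are still down:

timeout 2 bash -c '</dev/tcp/192.168.1.200/8443' && echo "VIP:8443 reachable"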

0. Deploy Docker

Aliyun repository

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo";done

Deploy

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "yum -y install docker-ce-19.03.15 docker-ce-cli-19.03.15";done

for i in 192.168.1.{201..205}; do echo ">>> $i";ssh root@$i "systemctl start docker && systemctl enable docker";done

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "docker -v";done

# docker version

# cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://7vnz06qj.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
  "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
    ]
}
EOF

for i in 192.168.6.{101..104}; do echo ">>> $i";scp "/etc/docker/daemon.json" root@$i:/etc/docker;done

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "systemctl daemon-reload && systemctl restart docker";done

# docker info

Offline method:

for i in 192.168.6.{100..104}; do echo ">>> $i";scp /root/docker/*.rpm root@$i://root/docker; done

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "cd /root/docker && yum -y install *.rpm";done


for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "systemctl start docker && systemctl enable docker";done

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "docker -v";done

# docker version

# cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://7vnz06qj.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
  "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
    ]
}
EOF

for i in 192.168.6.{100..104}; do echo ">>> $i";scp "/etc/docker/daemon.json" root@$i:/etc/docker;done

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "systemctl daemon-reload && systemctl restart docker";done

# docker info

cfssl

Package location for the certificate-generation tools: http://pkg.cfssl.org/

# wget http://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl

# wget http://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson

# wget http://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo

# chmod +x /usr/local/bin/cfssl*

Offline method:

tar xf binary_pkg-v1.19.10.tar.gz -C /usr/local/bin/

chmod +x /usr/local/bin/*

ll /usr/local/bin/  ## already extracted into the target directory at the start, so just list it here

Create the etcd + k8s directories

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "mkdir -p /etc/{kubernetes,kubernetes/ssl}";done

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "mkdir -p /var/log/kubernetes";done

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "mkdir -p /etc/{etcd,etcd/ssl}";done

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "mkdir -p /opt/{etcd,kubernetes}";done

/etc/kubernetes              # configuration files
/etc/kubernetes/ssl          # certificate files
/var/log/kubernetes          # log files of the Kubernetes components
/opt/kubernetes              # data directories of the Kubernetes components

/etc/etcd                    # configuration files
/etc/etcd/ssl                # certificate files
/opt/etcd                    # data directory

Create the root certificate (CA)

cd /etc/kubernetes/ssl

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ],
  "ca": {
    "expiry": "876000h"
 }
}
EOF

Generate the CA certificate and key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

Distribute the CA certificate to all nodes

for i in 192.168.6.{101..104}; do echo ">>> $i";scp ca*.pem ca-config.json root@$i:/etc/kubernetes/ssl;done

1. Deploy the etcd cluster

Download and distribute the etcd binaries

# cd && wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz

# tar -xf etcd-v3.4.13-linux-amd64.tar.gz

for i in 192.168.1.{201..203}; do echo ">>> $i";scp etcd-v3.4.13-linux-amd64/{etcd,etcdctl} root@$i:/usr/local/bin/;done

Offline method:

tar xf binary_pkg-v1.19.10.tar.gz -C /usr/local/bin/

## likewise already extracted at the start; just check the result
# ls /usr/local/bin/etcd*
/usr/local/bin/etcd  /usr/local/bin/etcdctl

for i in 192.168.6.{100..102}; do echo ">>> $i";scp /usr/local/bin/{etcd,etcdctl} root@$i:/usr/local/bin/;done

Create the etcd certificate and private key

cd /etc/etcd/ssl/

cat > etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "127.0.0.1",
    "192.168.6.100",
    "192.168.6.101",
    "192.168.6.102"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "opsnull"
        }
    ]
}
EOF

Generate the etcd certificate and private key

cfssl gencert \
-ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

ls etcd*pem
etcd-key.pem  etcd.pem

Distribute the certificate and private key to each etcd node

for i in 192.168.6.{101..102}; do echo ">>> $i";scp etcd*.pem root@$i:/etc/etcd/ssl;done

Add the etcd main configuration file

cat > /etc/etcd/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.6.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.6.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.6.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.6.100:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.6.100:2380,etcd-2=https://192.168.6.101:2380,etcd-3=https://192.168.6.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
for i in 192.168.6.{101..102}; do echo ">>> $i";scp /etc/etcd/etcd.conf root@$i:/etc/etcd;done

On node 2 and node 3, change the node name and the server IPs in etcd.conf

etcd-2

cd /etc/etcd

for i in 192.168.6.101; do echo ">>> $i";ssh root@$i "sed -ri 's#ETCD_NAME=\"etcd-1\"#ETCD_NAME=\"etcd-2\"#' /etc/etcd/etcd.conf";done

for i in 192.168.6.101; do echo ">>> $i";ssh root@$i "sed -ri 's#PEER_URLS=\"https://192.168.6.100:2380\"#PEER_URLS=\"https://192.168.6.101:2380\"#' /etc/etcd/etcd.conf";done

for i in 192.168.6.101; do echo ">>> $i";ssh root@$i "sed -ri 's#CLIENT_URLS=\"https://192.168.6.100:2379\"#CLIENT_URLS=\"https://192.168.6.101:2379\"#' /etc/etcd/etcd.conf";done

etcd-3

for i in 192.168.6.102; do echo ">>> $i";ssh root@$i "sed -ri 's#ETCD_NAME=\"etcd-1\"#ETCD_NAME=\"etcd-3\"#' /etc/etcd/etcd.conf";done

for i in 192.168.6.102; do echo ">>> $i";ssh root@$i "sed -ri 's#PEER_URLS=\"https://192.168.6.100:2380\"#PEER_URLS=\"https://192.168.6.102:2380\"#' /etc/etcd/etcd.conf";done

for i in 192.168.6.102; do echo ">>> $i";ssh root@$i "sed -ri 's#CLIENT_URLS=\"https://192.168.6.100:2379\"#CLIENT_URLS=\"https://192.168.6.102:2379\"#' /etc/etcd/etcd.conf";done

Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
WorkingDirectory=/opt/etcd
ExecStart=/usr/local/bin/etcd \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --enable-v2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
for i in 192.168.6.{101..102}; do echo ">>> $i";scp /usr/lib/systemd/system/etcd.service root@$i:/usr/lib/systemd/system;done

Start etcd

To bring the cluster up quickly, start etcd on each etcd node separately; running the start through a for loop here can hang and takes quite a while

 

for i in 192.168.6.{100..102}; do echo ">>> $i";ssh root@$i "mkdir -p /var/lib/etcd/default.etcd" ;done

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
Check etcd
systemctl status etcd -l
journalctl -xe
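Alternatively, to bring all three members up at nearly the same time (so the cluster reaches quorum without the start commands blocking), a minimal sketch using background ssh jobs and systemd's --no-block:

for i in 192.168.6.{100..102}; do
  echo ">>> $i"
  ssh root@$i "systemctl daemon-reload && systemctl enable etcd && systemctl start --no-block etcd" &
done
wait
# then check each member with: systemctl status etcd -l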

Verify the etcd cluster status

Check with ETCDCTL_API=3

 

ETCDCTL_API=3 etcdctl \
--write-out=table \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.6.100:2379,https://192.168.6.101:2379,https://192.168.6.102:2379 endpoint health

Check with ETCDCTL_API=2

ETCDCTL_API=2 etcdctl \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.6.100:2379,https://192.168.6.101:2379,https://192.168.6.102:2379 \
cluster-health

Check the current leader

ETCDCTL_API=3 etcdctl \
--write-out=table \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.1.201:2379,https://192.168.1.202:2379,https://192.168.1.203:2379 endpoint status

 

 

 

2. Deploy the flannel plugin

Download the flannel binaries: https://github.com/coreos/flannel/releases/tag/v0.13.1-rc2

# cd && mkdir flannel && cd flannel

wget https://github.com/coreos/flannel/releases/download/v0.13.1-rc2/flannel-v0.13.1-rc2-linux-amd64.tar.gz

tar -xf flannel-v0.13.1-rc2-linux-amd64.tar.gz

for i in 192.168.1.{201..205}; do echo ">>> $i";scp flanneld mk-docker-opts.sh root@$i:/usr/local/bin;done

Offline method:

tar xf binary_pkg-v1.19.10.tar.gz -C /usr/local/bin/

## likewise already extracted at the start; just check the result
# ls /usr/local/bin/{flanneld,mk-docker-opts.sh}
/usr/local/bin/flanneld  /usr/local/bin/mk-docker-opts.sh

for i in 192.168.6.{100..104}; do echo ">>> $i";scp /usr/local/bin/{flanneld,mk-docker-opts.sh} root@$i:/usr/local/bin/;done

Create the flannel certificate and private key

cd /etc/etcd/ssl/

cat > flanneld-csr.json << EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the flannel certificate and private key

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

ls flanneld*pem
flanneld-key.pem  flanneld.pem
for i in 192.168.6.{100..104}; do echo ">>> $i";scp flanneld*.pem root@$i:/etc/etcd/ssl;done

Write the network config key into etcd

ETCDCTL_API=2

 

ETCDCTL_API=2 etcdctl \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/flanneld.pem \
--key-file=/etc/etcd/ssl/flanneld-key.pem \
--endpoints=https://192.168.6.100:2379,https://192.168.6.101:2379,https://192.168.6.102:2379 \
mk /kubernetes/network/config '{ "Network": "172.18.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'
ETCDCTL_API=2 etcdctl \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/flanneld.pem \
--key-file=/etc/etcd/ssl/flanneld-key.pem \
--endpoints=https://192.168.6.100:2379,https://192.168.6.101:2379,https://192.168.6.102:2379 \
get /kubernetes/network/config

The value written is returned:
{ "Network": "172.18.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}

Manage flanneld with systemd

Create the flanneld.service unit file

cat > /usr/lib/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  -etcd-certfile=/etc/etcd/ssl/flanneld.pem \\
  -etcd-keyfile=/etc/etcd/ssl/flanneld-key.pem \\
  -etcd-endpoints=https://192.168.6.100:2379,https://192.168.6.101:2379,https://192.168.6.102:2379 \\
  -etcd-prefix=/kubernetes/network \\
  -iface=ens33 \\
  -ip-masq
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

Notes:

mk-docker-opts.sh: this script writes the Pod subnet assigned to flanneld into /run/flannel/docker; when Docker starts later it uses the environment variables in that file to configure the docker0 bridge;

flanneld: communicates with other nodes over the interface of the system default route; on nodes with multiple interfaces (for example internal and public), use the -iface flag to pick the interface;

flanneld: needs root privileges at runtime;

-ip-masq: flanneld sets up SNAT rules for traffic leaving the Pod network, and sets the --ip-masq variable it passes to Docker (in /run/flannel/docker) to false so that Docker no longer creates its own SNAT rule. When Docker's --ip-masq is true, the SNAT rule it creates is rather blunt: every request from local Pods to anything other than the docker0 interface gets SNATed, so requests to Pods on other nodes arrive with the flannel.1 interface IP as the source and the destination Pod cannot see the real source Pod IP. The SNAT rule flanneld creates is gentler and only applies to requests leaving the Pod network.
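As an alternative to editing daemon.json by hand later on, the DOCKER_NETWORK_OPTIONS variable that mk-docker-opts.sh writes can be fed straight into docker.service through a drop-in; a sketch (the ExecStart line must match the one shipped in your installed docker.service, here assumed to be the docker-ce 19.03 default):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/flannel.conf << 'EOF'
[Service]
# load the --bip/--mtu/--ip-masq options generated by flanneld
EnvironmentFile=-/run/flannel/docker
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_NETWORK_OPTIONS
EOF
systemctl daemon-reload && systemctl restart docker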

for i in 192.168.6.{100..104}; do echo ">>> $i";scp /usr/lib/systemd/system/flanneld.service root@$i:/usr/lib/systemd/system;done

Start the flanneld service

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "systemctl stop docker";done

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "systemctl daemon-reload && systemctl start flanneld && systemctl enable flanneld";done

systemctl status flanneld -l

List the Pod subnets that have been allocated

ETCDCTL_API=2 etcdctl \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/flanneld.pem \
--key-file=/etc/etcd/ssl/flanneld-key.pem \
--endpoints=https://192.168.6.100:2379,https://192.168.6.101:2379,https://192.168.6.102:2379 \
ls /kubernetes/network/subnets

Check that every node has a flannel interface

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "ip a s | grep flannel | grep -w inet";done

Integrate flannel with Docker ====> (run on all masters + nodes)

Get the subnet information

 

[root@k8s-master-01 flannel]# pwd
/run/flannel
[root@k8s-master-01 flannel]# tree
.
├── docker
└── subnet.env

0 directories, 2 files
# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.18.68.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.18.68.1/24 --ip-masq=false --mtu=1450"

Configure the Docker daemon

Add bip and mtu (use each node's own values from /run/flannel/docker)

# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://7vnz06qj.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
  "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
    ],
  "bip": "172.18.68.1/24",
  "mtu": 1450
}

Restart Docker

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "systemctl restart docker";done

Check that the flannel and docker interfaces on each node are in the same subnet

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "ip a s | grep flannel | grep -w inet && ip a s | grep docker | grep -w inet";done

Verification: start containers to verify networking across Docker hosts

The following uses master-01 and master-02; the other nodes are verified the same way

k8s-master-01

 

docker run -d -it --name=c1 centos:latest /bin/bash
ip a s | grep docker | grep -w inet #check the docker0 IP
docker exec c1 ping -c4 172.18.68.1 #the docker0 IP
docker exec c1 ip a s #check the container IP
docker inspect c1 |grep IPAddress |tail -1 #check the container IP
"IPAddress": "172.18.68.2"

If the IPs assigned inside the containers cannot ping each other, it may be caused by firewall rules

iptables -I INPUT -s 192.168.0.0/24 -j ACCEPT
iptables -I INPUT -s 172.18.0.0/16 -j ACCEPT

k8s-master-02

docker run -d -it --name=c2 centos:latest /bin/bash
ip a s | grep docker | grep -w inet #check the docker0 IP
docker exec c2 ping -c4 172.18.22.1 #the docker0 IP
docker exec c2 ip a s #check the container IP
docker inspect c2 |grep IPAddress |tail -1 #check the container IP
"IPAddress": "172.18.22.2"
[root@k8s-master-01 ~]# docker exec c1 ping -c4 172.18.22.2
#the IP of c2 on k8s-master-02

[root@k8s-master-02 ~]# docker exec c2 ping -c4 172.18.68.2 #the IP of c1 on k8s-master-01

Download the k8s v1.19.10 binaries

Package location: https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG

# wget https://dl.k8s.io/v1.19.10/kubernetes-server-linux-amd64.tar.gz
# tar -xf kubernetes-server-linux-amd64.tar.gz

# cd kubernetes/server/bin/

for i in 192.168.1.{201..203}; do echo ">>> $i";scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubeadm root@$i:/usr/local/bin/;done

for i in 192.168.1.{201..205}; do echo ">>> $i";scp kubelet kube-proxy root@$i:/usr/local/bin/;done

Offline method:

tar xf binary_pkg-v1.19.10.tar.gz -C /usr/local/bin/

## likewise already extracted at the start; just check the result
# ll /usr/local/bin/

for i in 192.168.6.{100..102}; do echo ">>> $i";scp /usr/local/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kubeadm} root@$i:/usr/local/bin/;done

for i in 192.168.6.{101..104}; do echo ">>> $i";scp /usr/local/bin/{kubelet,kube-proxy} root@$i:/usr/local/bin/;done

3. Deploy the kubectl component

  • kubectl reads the kube-apiserver address and credentials from ~/.kube/config by default; ~/.kube/config only needs to be generated once and can then be copied to the other masters
  • kubectl is the cluster administration tool and needs to be granted the highest privileges, so an admin certificate with full privileges is created here
  • kubectl talks to the apiserver over HTTPS; the apiserver authenticates and authorizes the certificate it presents

 

Create the admin certificate and private key

cd /etc/kubernetes/ssl/

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "opsnull"
    }
  ]
}
EOF
  • O is system:masters; when kube-apiserver receives this certificate it sets the request's Group to system:masters
  • The predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the ClusterRole cluster-admin, which grants all API permissions
  • This certificate is only used by kubectl as a client certificate, so the hosts field is empty

Generate the certificate and private key

cfssl gencert \
-ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin

ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem
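To double-check that the O field of the admin certificate really is system:masters, the subject can be inspected (a quick check, assuming openssl is installed):

openssl x509 -in /etc/kubernetes/ssl/admin.pem -noout -subject
# expect something like: subject= ... O = system:masters ... CN = admin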

Set environment variables

cat >> /etc/profile << EOF
export k8s_VIP="192.168.6.200"
export KUBE_APISERVER="https://192.168.6.200:8443"
EOF

# source /etc/profile

echo $k8s_VIP
echo $KUBE_APISERVER

cd /etc/kubernetes/ssl/

Create the ~/.kube/config file

1. Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kubectl.kubeconfig

2. Set client authentication parameters
kubectl config set-credentials admin \
--client-certificate=/etc/kubernetes/ssl/admin.pem \
--client-key=/etc/kubernetes/ssl/admin-key.pem \
--embed-certs=true \
--kubeconfig=kubectl.kubeconfig

3. Set context parameters
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin \
--kubeconfig=kubectl.kubeconfig

4. Set the default context
kubectl config use-context kubernetes \
--kubeconfig=kubectl.kubeconfig
  • --certificate-authority: the root certificate used to verify the kube-apiserver certificate

  • --client-certificate, --client-key: the admin certificate and key just generated, used when connecting to kube-apiserver

  • --embed-certs=true: embeds ca.pem and admin.pem into the generated kubectl.kubeconfig file (without it only the certificate file paths are written, and the certificates would have to be copied separately whenever the kubeconfig is copied to another machine); see the check below
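A quick way to confirm the certificates really were embedded (kubectl redacts the embedded data in the output):

kubectl config view --kubeconfig=/etc/kubernetes/ssl/kubectl.kubeconfig
# certificate-authority-data shows up as DATA+OMITTED and the client key as REDACTED when embedded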

Distribute the kubectl.kubeconfig file to all masters

 

for i in 192.168.6.{100..102}; do echo ">>> $i";ssh root@$i "mkdir -p ~/.kube" && scp kubectl.kubeconfig root@$i:~/.kube/config;done

4. Deploy the kube-apiserver component (cluster)

cd /etc/kubernetes/ssl/

cat > kube-apiserver-csr.json << EOF
{
    "CN": "kubernetes-master",
    "hosts": [
      "127.0.0.1",
      "192.168.6.100",
      "192.168.6.101",
      "192.168.6.102",
      "192.168.6.103",
      "192.168.6.104",
      "$k8s_VIP",
      "10.0.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "opsnull"
        }
    ]
}
EOF

All cluster node IPs and the VIP must be included in the certificate

Generate the certificate and private key

 

cfssl gencert \
-ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

ls kube-apiserver*pem
kube-apiserver-key.pem  kube-apiserver.pem
for i in 192.168.6.{100..102}; do echo ">>> $i"; scp /etc/kubernetes/ssl/kube-apiserver*pem root@$i:/etc/kubernetes/ssl/;done
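To confirm that every node IP and the VIP actually ended up in the certificate's Subject Alternative Names, inspect it (a quick check, assuming openssl is available):

openssl x509 -in /etc/kubernetes/ssl/kube-apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"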

Create the encryption configuration file

cat > /etc/kubernetes/encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF
for i in 192.168.6.{101..102}; do echo ">>> $i"; scp /etc/kubernetes/encryption-config.yaml root@$i:/etc/kubernetes;done

Create the audit policy file

kube-apiserver audit logging and collection

cat > /etc/kubernetes/audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch
 
  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get
 
  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update
 
  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get
 
  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list
 
  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'
 
  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events
 
  # node and pod status calls from nodes are high-volume and can be large, don't log responses
  # for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch
 
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch
 
  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection
 
  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get repsonses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch
 
  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
      
  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF
for i in 192.168.6.{101..102}; do echo ">>> $i"; scp /etc/kubernetes/audit-policy.yaml root@$i:/etc/kubernetes;done

Create the certificate used later for access to metrics-server or kube-prometheus

cat > /etc/kubernetes/ssl/metrics-server-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ]
}
EOF

Generate the metrics-server certificate and private key

cd /etc/kubernetes/ssl

cfssl gencert \
-ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server

ls metrics-server*.pem
metrics-server-key.pem  metrics-server.pem
for i in 192.168.6.{101..102}; do echo ">>> $i"; scp metrics-server*.pem root@$i:/etc/kubernetes/ssl/;done

Manage kube-apiserver with systemd

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/opt/kubernetes/kube-apiserver
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=`hostname -i` \\
  --default-not-ready-toleration-seconds=360 \\
  --default-unreachable-toleration-seconds=360 \\
  --max-mutating-requests-inflight=2000 \\
  --max-requests-inflight=4000 \\
  --default-watch-cache-size=200 \\
  --delete-collection-workers=2 \\
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/kube-apiserver.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
 --etcd-servers=https://192.168.6.100:2379,https://192.168.6.101:2379,https://192.168.6.102:2379 \\
  --bind-address=0.0.0.0 \\
  --secure-port=6443 \\
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
  --audit-log-maxage=15 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-truncate-enabled \\
  --audit-log-path=/var/log/kubernetes/k8s-audit.log \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --profiling \\
  --anonymous-auth=false \\
 --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --enable-bootstrap-token-auth \\
  --requestheader-allowed-names="aggregator" \\
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --service-account-key-file=/etc/kubernetes/ssl/ca.pem \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-admission-plugins=NodeRestriction \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --event-ttl=168h \\
  --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \\
 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
  --kubelet-timeout=10s \\
  --proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \\
  --proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \\
  --service-cluster-ip-range=10.0.0.0/16 \\
  --service-node-port-range=30000-32767  \\
  --logtostderr=true \\
  --v=2 
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Flag reference: https://v1-19.docs.kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-apiserver/

for i in 192.168.6.{101..102}; do echo ">>> $i"; scp /usr/lib/systemd/system/kube-apiserver.service root@$i:/usr/lib/systemd/system;done

Start the service

for i in 192.168.6.{100..102}; do echo ">>> $i";ssh root@$i "mkdir -p /opt/kubernetes/kube-apiserver && systemctl daemon-reload && systemctl start kube-apiserver && systemctl enable kube-apiserver";done

# systemctl status kube-apiserver -l

Test
# curl --insecure https://192.168.1.201:6443/
# curl --cacert /etc/kubernetes/ssl/ca.pem https://192.168.1.200:8443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
Getting a response like this means kube-apiserver started correctly (401 is expected without client credentials)
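For an authenticated smoke test through the VIP, the admin client certificate generated earlier can be reused (a sketch; adjust the address to your haproxy frontend):

curl -s --cacert /etc/kubernetes/ssl/ca.pem \
--cert /etc/kubernetes/ssl/admin.pem \
--key /etc/kubernetes/ssl/admin-key.pem \
https://192.168.1.200:8443/version
# a JSON version object means TLS client authentication works end to end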

# netstat -lntup | egrep "8080|6443"

tail -n 30 -f /var/log/messages

View the data kube-apiserver has written to etcd

etcdctl \
--endpoints=https://192.168.6.100:2379,https://192.168.6.101:2379,https://192.168.6.102:2379 \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
get /registry/ --prefix --keys-only

Check the cluster status

kubectl cluster-info
kubectl get all --all-namespaces
kubectl config current-context

kubectl config view

kubectl get clusterrole |grep system:node-bootstrapper

kubectl describe clusterrole system:node-bootstrapper

5. Deploy kube-controller-manager (cluster)

cd /etc/kubernetes/ssl

cat > kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.6.100",
      "192.168.6.101",
      "192.168.6.102"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "opsnull"
      }
    ]
}
EOF

Generate the certificate and private key

cfssl gencert \
-ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
  
ls kube-controller-manager*pem
kube-controller-manager-key.pem  kube-controller-manager.pem
for i in 192.168.6.{101..102}; do echo ">>> $i";scp kube-controller-manager*.pem root@$i:/etc/kubernetes/ssl;done

Create and distribute the kubeconfig file

cd /etc/kubernetes/ssl
source /etc/profile

1. Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-controller-manager.kubeconfig

2. Set client authentication parameters
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

Set context parameters
kubectl config set-context system:kube-controller-manager \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

Set the default context
kubectl config use-context system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
for i in 192.168.6.{100..102}; do echo ">>> $i";scp kube-controller-manager.kubeconfig root@$i:/etc/kubernetes/;done

Manage kube-controller-manager with systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/opt/kubernetes/kube-controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \\
  --profiling \\
  --cluster-name=kubernetes \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --leader-elect \\
  --use-service-account-credentials\\
  --concurrent-service-syncs=2 \\
  --bind-address=0.0.0.0 \\
  --secure-port=10252 \\
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
  --port=0 \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --requestheader-allowed-names="aggregator" \\
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --cluster-signing-duration=876000h \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-deployment-syncs=10 \\
  --concurrent-gc-syncs=30 \\
  --node-cidr-mask-size=24 \\
  --service-cluster-ip-range=10.0.0.0/24 \\
  --pod-eviction-timeout=6m \\
  --terminated-pod-gc-threshold=10000 \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Note: kube-controller-manager must be given the --cluster-signing-cert-file and --cluster-signing-key-file flags, otherwise it will not create certificates and keys for TLS Bootstrap

for i in 192.168.6.{101..102}; do echo ">>> $i"; scp /usr/lib/systemd/system/kube-controller-manager.service root@$i://usr/lib/systemd/system;done

Start the kube-controller-manager service

for i in 192.168.6.{100..102}; do echo ">>> $i";ssh root@$i "mkdir -p /opt/kubernetes/kube-controller-manager && systemctl daemon-reload && systemctl start kube-controller-manager && systemctl enable kube-controller-manager";done

# systemctl status kube-controller-manager -l


# netstat -lntup | egrep "10252"

tail -n 30 -f /var/log/messages


View the exported metrics
netstat -anpt | grep kube-controll
kube-controller-manager listens on port 10252 and serves HTTPS

curl -s --cacert /etc/kubernetes/ssl/ca.pem https://127.0.0.1:10252/metrics | head

Check the current leader
kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml

Test the high availability of the kube-controller-manager cluster

Stop the kube-controller-manager service on one or two nodes and watch the other nodes' logs to see whether one of them acquires leadership, as sketched below
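A minimal sketch of that check (the leader lock is stored in the control-plane.alpha.kubernetes.io/leader annotation on the endpoints object):

systemctl stop kube-controller-manager   # run on the current leader, e.g. k8s-master-01
# on any other master, watch the lock move to a different holderIdentity:
kubectl get endpoints kube-controller-manager -n kube-system \
-o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'; echo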

 

6. Deploy kube-scheduler (cluster)

cd /etc/kubernetes/ssl

cat > kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.6.100",
      "192.168.6.101",
      "192.168.6.102"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "opsnull"
      }
    ]
}
EOF

Generate the kube-scheduler certificate and private key

cfssl gencert \
-ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
for i in 192.168.6.{100..102}; do echo ">>> $i";scp kube-scheduler*pem root@$i:/etc/kubernetes/ssl;done

Create the kubeconfig file for kube-scheduler

cd /etc/kubernetes/ssl
source /etc/profile

"Set cluster parameters"
 kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-scheduler.kubeconfig

"Set client authentication parameters"
 kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig

"Set context parameters"
 kubectl config set-context system:kube-scheduler \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

"Set the default context"
 kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Distribute the kubeconfig to all master nodes

for i in 192.168.6.{100..102}; do echo ">>> $i";scp kube-scheduler.kubeconfig root@$i:/etc/kubernetes/;done

Manage kube-scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/opt/kubernetes/kube-scheduler
ExecStart=/usr/local/bin/kube-scheduler \\
  --leader-elect \\
  --log-dir=/var/log/kubernetes/kube-schedul \\
  --master=127.0.0.1:8080 \\
  --v=2
  
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
for i in 192.168.6.{101..102}; do echo ">>> $i";scp /usr/lib/systemd/system/kube-scheduler.service root@$i:/usr/lib/systemd/system;done

Start the kube-scheduler service

for i in 192.168.6.{100..102}; do echo ">>> $i";ssh root@$i "mkdir -p /opt/kubernetes/kube-scheduler && mkdir -p /var/log/kubernetes/kube-schedul && systemctl daemon-reload && systemctl start kube-scheduler && systemctl enable kube-scheduler";done

# systemctl status kube-scheduler -l
ps -ef |grep kube |grep -v grep  
ss -nltp | grep kube-scheduler

Check the current leader
# kubectl get endpoints kube-scheduler -n kube-system  -o yaml

All master components are now deployed; check the cluster status

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

Next, deploy the kubelet and kube-proxy components for the worker nodes; these steps are performed on k8s-master-01

Certificates, configuration files, and unit files are generated on the master and then copied to the node machines

7. Deploy the kubelet component

Create the kubelet-bootstrap.kubeconfig file

 

Note: a bootstrap token is valid for 1 day; if it is not used within that time it expires automatically and a new token has to be created
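If a token expires before the node has joined, it can simply be recreated for that node; a sketch for a single node (the node name is an example, and --ttl is optional):

kubeadm token create \
--description kubelet-bootstrap-token \
--groups system:bootstrappers:k8s-master-01 \
--ttl 24h \
--kubeconfig ~/.kube/config
# then regenerate and redistribute that node's kubelet-bootstrap kubeconfig as shown below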
cd /etc/kubernetes/ssl/

cat >> /etc/profile << EOF
export NODE_NAMES=( k8s-master-01 k8s-master-02 k8s-master-03 node1 node2 )
EOF

source /etc/profile
for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
 
# 1. Create a token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
--description kubelet-bootstrap-token \
--groups system:bootstrappers:${node_name} \
--kubeconfig ~/.kube/config)

# 2. Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/etc/kubernetes/ssl/kubelet-bootstrap-${node_name}.kubeconfig

# 3. Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=/etc/kubernetes/ssl/kubelet-bootstrap-${node_name}.kubeconfig

# 4. Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=/etc/kubernetes/ssl/kubelet-bootstrap-${node_name}.kubeconfig

# 5. Set the default context
kubectl config use-context default --kubeconfig=/etc/kubernetes/ssl/kubelet-bootstrap-${node_name}.kubeconfig
done

View the generated tokens

#View the tokens kubeadm created for each node
kubeadm token list --kubeconfig ~/.kube/config

#View the Secret associated with each token
kubectl get secrets  -n kube-system | grep bootstrap-token

Copy the generated kubelet-bootstrap-${node_name}.kubeconfig files to all the node machines, renaming them to kubelet-bootstrap.kubeconfig

for node_name in ${NODE_NAMES[@]}; \
do echo ">>> ${node_name}"; \
scp /etc/kubernetes/ssl/kubelet-bootstrap-${node_name}.kubeconfig \
    ${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig; \
done

kubelet configuration file

cat > /etc/kubernetes/kubelet-config.yaml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##NODE_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/ssl/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##NODE_IP##"
clusterDomain: "cluster.local"
clusterDNS:
  - "10.0.0.2"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: systemd
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "10.254.0.0/16"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF

Manage kubelet with systemd

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/opt/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
--cert-dir=/etc/kubernetes/ssl \\
--cni-conf-dir=/etc/cni/net.d \\
--root-dir=/var/log/kubernetes/kubelet \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-config.yaml \\
--hostname-override=##NODE_NAME## \\
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2 \\
--image-pull-progress-deadline=15m \\
--logtostderr=true \\
--v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
cat >> /etc/profile << EOF
export NODE_IPS=( 192.168.6.100 192.168.6.101 192.168.6.102 192.168.6.103 192.168.6.104 )
EOF

source /etc/profile

for (( i=0; i < 5; i++ ))
do
    sed -e "s/##NODE_IP##/${NODE_IPS[i]}/" /etc/kubernetes/kubelet-config.yaml > \
           /etc/kubernetes/kubelet-config-${NODE_IPS[i]}.yaml
done

for host in ${NODE_IPS[@]}
do
    scp /etc/kubernetes/kubelet-config-${host}.yaml \
        ${host}:/etc/kubernetes/kubelet-config.yaml
done

for node_name in ${NODE_NAMES[@]}
do 
echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" /usr/lib/systemd/system/kubelet.service > /usr/lib/systemd/system/kubelet-${node_name}.service
    scp /usr/lib/systemd/system/kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
done

for (( i=0; i < 5; i++ ))
do
    rm -rf /etc/kubernetes/kubelet-config-${NODE_IPS[i]}.yaml
done

for host in ${NODE_NAMES[@]}
do
    rm -rf /usr/lib/systemd/system/kubelet-${host}.service
done

The pause image

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i docker pull registry.aliyuncs.com/google_containers/pause:3.2;done

Offline method

tar xf env_pkg.tar.gz

## As at the beginning, the archive is already extracted; just check the result here
# ls /root/k8s_image
pause-3.2.docker

for i in 192.168.6.{100..104}; do echo ">>> $i";scp /root/k8s_image/pause-3.2.docker root@$i:/root/k8s_image;done

Load the images offline

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "cd /root/k8s_image && ls |xargs -i docker load -i {}";done
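A quick optional check that the pause image is now present on every node (a small sketch, same host list as above):

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "docker images | grep pause";done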

Grant kube-apiserver access to the kubelet API

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-master

Authorize the kubelet-bootstrap group to request certificates

Bind the group system:bootstrappers to the clusterrole system:node-bootstrapper:

 

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
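Optionally confirm that both bindings exist before starting the kubelets:

kubectl get clusterrolebinding kube-apiserver:kubelet-apis kubelet-bootstrap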

Start and enable on boot

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "mkdir -p /opt/kubernetes/kubelet && systemctl daemon-reload && systemctl start kubelet && systemctl enable kubelet";done

# systemctl status kubelet -l   ## you will see a pile of errors at first, because no CSR has been approved yet

journalctl -u kubelet | grep -i garbage
# journalctl _PID=12700 | vim -

Manual ==> approve the kubelet certificate requests and join the cluster (choose either manual or automatic approval)

kubectl get csr

# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve   ## run this twice: the first pass approves the client CSRs; once each kubelet has its client cert it submits a serving-cert CSR (serverTLSBootstrap: true) that must be approved as well

kubectl get node

systemctl status kubelet -l   ## check again; kubelet should now be fully healthy
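As a further sanity check (a sketch, assuming the --cert-dir configured above), the issued client and serving certificates should now exist under /etc/kubernetes/ssl on every node:

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "ls /etc/kubernetes/ssl/kubelet-*";done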

Automatic approval

# cat > /etc/kubernetes/csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF

# kubectl apply -f /etc/kubernetes/csr-crb.yaml

kubectl get node

8、Deploy kube-proxy

cd /etc/kubernetes/ssl

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ]
}
EOF
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

ls kube-proxy*pem
kube-proxy-key.pem  kube-proxy.pem

Create the kube-proxy.kubeconfig file

cd /etc/kubernetes/ssl/

source /etc/profile

echo ${KUBE_APISERVER} 

# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/etc/kubernetes/ssl/kube-proxy.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/ssl/kube-proxy.kubeconfig

# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/etc/kubernetes/ssl/kube-proxy.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=/etc/kubernetes/ssl/kube-proxy.kubeconfig
for i in 192.168.6.{100..104};do echo ">>> $i";scp kube-proxy.kubeconfig root@$i:/etc/kubernetes/;done
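An optional sanity check that the kubeconfig is self-contained (embedded certs) and points at the apiserver VIP before the nodes rely on it:

kubectl config view --kubeconfig=/etc/kubernetes/ssl/kube-proxy.kubeconfig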

Create the kube-proxy configuration file

cat > /etc/kubernetes/kube-proxy-config.yaml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
metricsBindAddress: ##NODE_IP##:10249
enableProfiling: true
clusterCIDR: 10.0.0.0/16
hostnameOverride: ##NODE_IP##
mode: "ipvs"
portRange: ""
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF
cd /etc/kubernetes

source /etc/profile

for (( i=0; i < 5; i++ ))
do
    sed -e "s/##NODE_IP##/${NODE_IPS[i]}/" /etc/kubernetes/kube-proxy-config.yaml > \
           /etc/kubernetes/kube-proxy-config-${NODE_IPS[i]}.yaml 
done

for host in ${NODE_IPS[@]}
do
    scp /etc/kubernetes/kube-proxy-config-${host}.yaml \
        ${host}:/etc/kubernetes/kube-proxy-config.yaml
done

for (( i=0; i < 5; i++ ))
do
    rm -rf /etc/kubernetes/kube-proxy-config-${NODE_IPS[i]}.yaml
done

Manage kube-proxy with systemd

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/opt/kubernetes/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
for i in 192.168.6.{100..104}; do echo ">>> $i";scp /usr/lib/systemd/system/kube-proxy.service root@$i:/usr/lib/systemd/system;done

Start and enable on boot

for i in 192.168.6.{100..104}; do echo ">>> $i";ssh root@$i "mkdir -p /opt/kubernetes/kube-proxy && systemctl daemon-reload && systemctl start kube-proxy && systemctl enable kube-proxy";done

# systemctl status kube-proxy -l

journalctl -u kube-proxy | grep -i garbage
# journalctl _PID=12700 | vim -
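Because mode is set to "ipvs" in the config above, the generated virtual-server rules can be inspected on any node (a sketch; requires the ipvsadm tool to be installed):

ipvsadm -Ln | head -n 20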

At this point all Kubernetes components are installed (check the status of each component again):

systemctl status etcd -l
systemctl status flanneld -l
systemctl status kube-apiserver -l
systemctl status kube-controller-manager -l
systemctl status kube-scheduler -l
systemctl status kubelet -l
systemctl status kube-proxy -l

kubectl get cs
kubectl get node
kubectl cluster-info
kubectl get all --all-namespaces

9、Deploy coredns

git clone https://github.com/coredns/deployment.git

mv deployment coredns-deployment
cd coredns-deployment/kubernetes

./deploy.sh -i "10.0.0.2" -d "cluster.local" | kubectl apply -f -

Here 10.0.0.2 is the cluster DNS IP inside your service CIDR (10.0.0.0/24 in this setup) and cluster.local is the domain suffix. The manifest the command renders can also be saved to a file such as generatedCoredns.yaml, as sketched below.
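A small sketch using the same deploy.sh flags, keeping the rendered manifest on disk so it can be re-applied later:

./deploy.sh -i "10.0.0.2" -d "cluster.local" > generatedCoredns.yaml
kubectl apply -f generatedCoredns.yaml
kubectl get pods -n kube-system | grep coredns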

Test CoreDNS

 

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Note: newer busybox images have an nslookup bug; avoid the newer tags and stick with something like 1.28.3

# kubectl exec busybox -- nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

10、Create an nginx test

cat >  nginx-service.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: my-service1
spec:
  type: NodePort # changed here
  selector:
    app: nginx
  ports:
  - protocol: TCP
    nodePort: 32222 # newly added
    port: 80
    targetPort: 80
EOF
# kubectl apply -f nginx-service.yaml
# kubectl get pod -o wide
# kubectl get svc
# kubectl get endpoints

curl 192.168.1.200:32222

Browser URL: http://192.168.1.200:32222

kubectl delete -f nginx-service.yaml

11、Deploy the dashboard add-on

Download and modify the configuration file

 

mkdir -p k8s-dashboard && cd k8s-dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml

kubectl get pods -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard

Test access to the dashboard

Change the dashboard service to NodePort

 

kubectl edit svc/kubernetes-dashboard -n kubernetes-dashboard

Change type to NodePort
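If you prefer a non-interactive change over kubectl edit, a one-line patch does the same thing (sketch):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'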

# kubectl get svc -n kubernetes-dashboard
NAME                     TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard     NodePort   10.0.155.120   <none>        443:30533/TCP   26m

Browser URL: https://192.168.1.200:30533

Token authentication

Get the token:

# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep kubernetes-dashboard | awk '{print $1}')

If you log in with this token and find that most cluster resources are not visible, it is because the default kubernetes-dashboard service account does not have sufficient RBAC permissions; create an admin account as follows.

Create a dashboard-admin user

 

# Create the service account
kubectl create sa dashboard-admin -n kube-system

# Create the cluster role binding
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

# Get the name of the dashboard-admin secret
ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')

# Print the secret's token
kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}'

12、Deploy the metrics-server add-on

Enable the API aggregation layer

 

# kubectl get apiservice

If the cluster was deployed with kubeadm, aggregation is already enabled by default. For a binary deployment the following kube-apiserver startup flags are required (the kubeadm static-pod manifest is shown below for reference; see the sketch after it for this document's binary deployment):

[root@master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --requestheader-allowed-names=aggregator
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --runtime-config=api/all=true
    - --enable-aggregator-routing=true  
...
After the change, restart the kube-apiserver service and API aggregation is enabled.
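For the binary deployment in this document, the equivalent change goes into the kube-apiserver systemd unit rather than a static-pod manifest. A hedged sketch (the unit path /usr/lib/systemd/system/kube-apiserver.service is an assumption based on how the other components above are managed; adjust to your layout):

# on every master: append the aggregation flags shown above to the ExecStart line of the unit
vim /usr/lib/systemd/system/kube-apiserver.service
systemctl daemon-reload && systemctl restart kube-apiserver
kubectl get apiservice | head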

1. Check whether the API server has the aggregator enabled, i.e. whether it runs with the --enable-aggregator-routing=true option:

# ps -ef | grep apiserver

# ps -ef | grep apiserver |grep aggregator

# ps -ef |grep kube-proxy

If the machines running kube-apiserver do not also run kube-proxy, the --enable-aggregator-routing=true flag must be added as well.

Download the YAML from: https://github.com/kubernetes-sigs/metrics-server

mkdir metrics && cd metrics

# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# grep InternalIP components.yaml    ## confirm InternalIP is listed in --kubelet-preferred-address-types
- --kubelet-insecure-tls             ## add this arg to the metrics-server container (see the sketch after the notes below)

# kubectl create -f .

kubectl get pods -n kube-system | grep metrics

# kubectl top node
# kubectl top pod -A

Notes:

  • 1. metrics-server talks to kubelets by hostname by default, and CoreDNS already imports the host's /etc/resolv.conf, so either add an internal DNS server that resolves node hostnames, add host entries to the deployment's pod spec, or switch the preferred address type to InternalIP so it connects by IP directly.

  • 2. --kubelet-insecure-tls skips verification of the kubelet serving certificate; enable it only temporarily (not recommended for production). See the sketch below.
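A hedged sketch of where the flag ends up inside components.yaml (the surrounding args come from the upstream manifest and may differ slightly between metrics-server releases):

    spec:
      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls        # added: skip kubelet serving-cert verification (lab use only)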

 

Verification

# View node metrics
# kubectl top nodes

NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
test1   377m         18%    5915Mi          76%       
test2   267m         13%    5479Mi          70%

Note: memory units: Mi = 1024*1024 bytes, M = 1000*1000 bytes
      CPU units: 1 core = 1000m, so 250m = 1/4 core

# View metrics for all pods
# kubectl top pod -A

13、Install ingress with helm3

Method 1: tarball install

 

Download from: https://github.com/helm/helm/releases

wget https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz

tar xf helm-v3.5.4-linux-amd64.tar.gz

mv linux-amd64/helm /usr/local/sbin/helm

helm version

Method 2: script install

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

Method 3: build from source

git clone https://github.com/helm/helm.git

cd helm && make

Command completion

echo "source <(helm completion bash)" >> ~/.bash_profile

source !$

Add public chart repositories

Configure mirror repositories for helm (Azure / Aliyun)

 

# helm repo add stable http://mirror.azure.cn/kubernetes/charts

# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

helm repo list

helm repo update

Install ingress-nginx with helm

https://kubernetes.github.io/ingress-nginx/deploy/#using-helm

 

Add the ingress-nginx Helm repository and pull/untar the chart

# mkdir ingress && cd ingress

# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# helm repo update

# helm repo list

# helm search repo ingress-nginx/ingress-nginx

# helm pull ingress-nginx/ingress-nginx --untar

Install ingress-nginx

1. Modify values.yaml

[root@master ingress-nginx]# pwd
/root/ingress/ingress-nginx
[root@master ingress-nginx]# grep repository values.yaml
    repository: k8s.gcr.io/ingress-nginx/controller
    repository: docker.io/jettech/kube-webhook-certgen
    repository: k8s.gcr.io/defaultbackend-amd64
# 1. Change the controller image repository
repository: registry.aliyuncs.com/dotbalo/controller

# 2. dnsPolicy
dnsPolicy: ClusterFirstWithHostNet

# 3. Use hostNetwork, i.e. bind ports 80/443 on the host
hostNetwork: true

# 4. Use a DaemonSet so ingress runs only on designated nodes
kind: DaemonSet

# 5. Node selection: label the target nodes with ingress=true
  nodeSelector:
    kubernetes.io/os: linux
    ingress: "true"

# 6. Change the service type to ClusterIP (in a cloud environment with a load balancer, use LoadBalancer instead)
type: ClusterIP

# 7. Change the kube-webhook-certgen image repository
registry.aliyuncs.com/dotbalo/kube-webhook-certgen

2. Install ingress-nginx

# Label the selected nodes
# kubectl label node node1 node2 ingress=true
# kubectl get node --show-labels
 
# kubectl create ns ingress-nginx

# helm install ingress-nginx -f values.yaml -n ingress-nginx .

# helm list -n ingress-nginx

# kubectl get pods -n ingress-nginx -owide
# kubectl get svc -n ingress-nginx -owide

Note: when swapping in the mirrored images, also edit out the image digest (sha256) references, otherwise digest validation against the mirrored images will fail.

Expose a ClusterIP Service to test ingress

Pull the image on all nodes in advance

 

docker pull docker.io/library/nginx:alpine

Deploy an nginx Deployment with a ClusterIP Service

# cat << EOF > nginx-ClusterIP-Service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF

# kubectl apply -f nginx-ClusterIP-Service.yaml

# kubectl get pod

# kubectl get svc

Exec into the containers and modify the html files (to test load balancing)

[root@master ~]# kubectl get pod
NAME                               
nginx-deployment-7fb7fd49b4-cgn4x
nginx-deployment-7fb7fd49b4-qn46z

Modify nginx01
# kubectl exec -it nginx-deployment-7fb7fd49b4-cgn4x -- sh
/ # cat << EOF > /usr/share/nginx/html/index.html
<head>
<title>Welcome to nginx 01!</title>
</head>
<body>
<h1>Welcome to nginx01!</h1>
</body>
EOF
Type exit to leave the container

Modify nginx02
# kubectl exec -it nginx-deployment-7fb7fd49b4-qn46z -- sh
/ # cat << EOF > /usr/share/nginx/html/index.html
<head>
<title>Welcome to nginx 02!</title>
</head>
<body>
<h1>Welcome to nginx02!</h1>
</body>
EOF
Type exit to leave the container

Publish it with ingress

# cat << EOF > ingress_nginx-ClusterIP-Service.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-service-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: lnmp.ltd
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: nginx-service
            port:
              number: 80
EOF

# kubectl apply -f ingress_nginx-ClusterIP-Service.yaml
# kubectl get ingress

# kubectl delete -f ingress_nginx-ClusterIP-Service.yaml   ## cleanup: run this only after finishing the access test below

Add the domain mapping on the master host

[root@master ~]# vim /etc/hosts
192.168.1.202   lnmp.ltd  # use the IP of a node that runs the ingress controller

On Windows, add the same entry to C:\Windows\System32\drivers\etc\hosts

192.168.1.202   lnmp.ltd

Test access

Browser: http://lnmp.ltd/

Linux: curl lnmp.ltd or curl http://lnmp.ltd
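To watch the two backends alternate (a quick sketch; assumes lnmp.ltd resolves to a node where the ingress controller is running):

for i in $(seq 1 6); do curl -s http://lnmp.ltd/ | grep -o '<title>.*</title>'; done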

 

 

14、Deploy kube-prometheus

Download and install kube-prometheus v0.8.0

 

git clone https://github.com/coreos/kube-prometheus.git
cd kube-prometheus/manifests

grep -riE 'image: ' *

=====
# docker search huanhui1314
Replace the official image references with my own Docker Hub repositories (the images there were mirrored from the official registries):

# vim alertmanager-alertmanager.yaml
change to: image: huanhui1314/alertmanager:v0.21.0
# vim blackbox-exporter-deployment.yaml
change to: image: huanhui1314/blackbox-exporter:v0.18.0
change to: image: huanhui1314/configmap-reload:v0.5.0
change to: image: huanhui1314/kube-rbac-proxy:v0.8.0
# vim grafana-deployment.yaml
change to: huanhui1314/grafana:7.5.4
# vim kube-state-metrics-deployment.yaml
change to: image: huanhui1314/kube-state-metrics:v2.0.0
change to: image: huanhui1314/kube-rbac-proxy:v0.8.0
change to: image: huanhui1314/kube-rbac-proxy:v0.8.0
# vim node-exporter-daemonset.yaml
change to: image: huanhui1314/node-exporter:v1.1.2
change to: image: huanhui1314/kube-rbac-proxy:v0.8.0
# vim prometheus-adapter-deployment.yaml
change to: image: huanhui1314/k8s-prometheus-adapter:v0.8.4
# vim prometheus-prometheus.yaml
change to: image: huanhui1314/prometheus:v2.26.0
# vim setup/prometheus-operator-deployment.yaml
change to: image: huanhui1314/prometheus-operator:v0.47.0
change to: image: huanhui1314/kube-rbac-proxy:v0.8.0
=====

# kubectl apply -f setup/ # install the prometheus-operator
kubectl get pod -n monitoring -o wide

# kubectl apply -f . # install the rest of the stack (Prometheus, Alertmanager, exporters, metrics adapter)
kubectl get pod -n monitoring -o wide
kubectl get svc -n monitoring

kubectl edit svc/alertmanager-main -n monitoring
Change type to NodePort

kubectl edit svc/grafana -n monitoring
Change type to NodePort

kubectl edit svc/prometheus-k8s -n monitoring
Change type to NodePort
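As with the dashboard service, the three edits above can be done non-interactively (sketch):

for s in alertmanager-main grafana prometheus-k8s; do
  kubectl -n monitoring patch svc $s -p '{"spec":{"type":"NodePort"}}'
done
kubectl get svc -n monitoring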


15、Deploy the EFK add-on

After extracting the downloaded kubernetes-server-linux-amd64.tar.gz, also extract the kubernetes-src.tar.gz inside it:

cd /root/kubernetes/
tar -xzvf kubernetes-src.tar.gz

Modify the configuration files

The EFK directory is kubernetes/cluster/addons/fluentd-elasticsearch

 

cd /root/kubernetes/cluster/addons/fluentd-elasticsearch

grep -riE 'image: ' *

# vim es-statefulset.yaml
change to: huanhui1314/elasticsearch:v7.4.2
# vim fluentd-es-ds.yaml
change to: huanhui1314/fluentd:v3.0.2
# vim kibana-deployment.yaml
change to: huanhui1314/kibana-oss:7.2.0

Label the nodes (otherwise fluentd-es will not start)

The fluentd-es DaemonSet is only scheduled onto nodes labelled beta.kubernetes.io/fluentd-ds-ready=true, so set that label on every node that should run fluentd (individual commands below, or the loop sketched after them):

 

[root@k8s-master-01 ~]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.1.201   Ready    <none>   12h   v1.19.10
192.168.1.202   Ready    <none>   12h   v1.19.10
192.168.1.203   Ready    <none>   12h   v1.19.10
192.168.1.204   Ready    <none>   12h   v1.19.10
192.168.1.205   Ready    <none>   12h   v1.19.10


# kubectl label nodes 192.168.1.201 beta.kubernetes.io/fluentd-ds-ready=true

# kubectl label nodes 192.168.1.202 beta.kubernetes.io/fluentd-ds-ready=true

# kubectl label nodes 192.168.1.203 beta.kubernetes.io/fluentd-ds-ready=true

# kubectl label nodes 192.168.1.204 beta.kubernetes.io/fluentd-ds-ready=true

# kubectl label nodes 192.168.1.205 beta.kubernetes.io/fluentd-ds-ready=true
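The same labels can be applied in one loop (equivalent to the commands above):

for n in 192.168.1.{201..205}; do kubectl label nodes $n beta.kubernetes.io/fluentd-ds-ready=true; done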

kubectl get node --show-labels
kubectl apply -f .
kubectl get deployment -n kube-system|grep kibana

kubectl get pods -A |grep -E 'elasticsearch|fluentd|kibana'

kubectl get svc  -A |grep -E 'elasticsearch|kibana'

The kibana pod can take quite a while (up to ~20 minutes) on first start to optimize and cache its status pages; tail the pod's logs to watch progress:

kubectl logs kibana-logging-xxxxxxx -n kube-system -f

Note: the kibana dashboard is only reachable from a browser once the pod has finished starting; before that, connections are refused.

Browser URL: http://172.27.138.251:8086/api/v1/namespaces/kube-system/services/kibana-logging/proxy
