Offline Installation of k8s v1.19.8 (Non-HA Version)

Note: before deploying the k8s cluster on the internal network, deploy the internal image registry first. Reference link: "Building a low-spec Docker image registry in an intranet environment".

1. Preparation

1.0 Obtain the installation packages

Link: https://pan.baidu.com/s/1OS2JcxXPDpklmkWp7ifPdw
Extraction code: ut83

1.1 Host preparation

Nodes in the k8s cluster fall into the following 2 roles by purpose:
master: the cluster's master node, where the cluster is initialized; at least 2C4G
slave: the cluster's slave nodes; there can be several; at least 2C4G
This deployment uses three machines: 1 master and 2 slaves.

Hostname     Role     IP             Deployed components
k8s-master   master   172.22.14.56   etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubectl, kubeadm, kubelet, kube-proxy, flannel
k8s-slave1   slave    172.22.14.57   kubectl, kubeadm, kubelet, kube-proxy, flannel
k8s-slave2   slave    172.22.14.58   kubectl, kubeadm, kubelet, kube-proxy, flannel
1.2 Component versions
Component    Version                 Description
Red Hat      7.9                     operating system
Kernel       3.10.0-1160.el7.x86_64  kernel version
etcd         3.4.13-0                distributed, high-performance key-value store holding all cluster metadata
coredns      1.7.0                   chained-plugin DNS server implemented in Go
kubeadm      v1.19.8                 k8s cluster management tool
kubectl      v1.19.8                 command-line interface for operating a kubernetes cluster
kubelet      v1.19.8                 the primary "node agent" that runs on every node
kube-proxy   v1.19.0                 creates proxy services for Pods
flannel      v0.14.0                 network plugin
1.3 Configure hosts resolution

Run on: all nodes (k8s-master, k8s-slave)

  • Set the hostname

On the master node:

hostnamectl set-hostname k8s-master;bash

On the slave1 node:

hostnamectl set-hostname k8s-slave1;bash

On the slave2 node:

hostnamectl set-hostname k8s-slave2;bash
Note: the hostname may only contain lowercase letters, digits, "." and "-", and must start and end with a lowercase letter or digit.
  • Add hosts entries
cat >>/etc/hosts<<EOF
172.22.14.56 k8s-master
172.22.14.57 k8s-slave1
172.22.14.58 k8s-slave2
EOF
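
A quick sanity check that the new entries resolve from every node (a minimal sketch; assumes ICMP is allowed between the hosts):

for h in k8s-master k8s-slave1 k8s-slave2; do
    ping -c 1 -W 1 $h >/dev/null && echo "$h ok" || echo "$h unreachable"
done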
1.4 Adjust system settings
Run on: all master and slave nodes (k8s-master, k8s-slave)
  • Open security-group ports
    If there are no security-group restrictions between nodes (intranet machines can reach each other freely), skip this; otherwise make sure at least the following ports are reachable:
    k8s-master node: TCP 6443, 2379, 2380, 60080, 60081; all UDP ports open
    k8s-slave nodes: all UDP ports open
  • Set iptables
iptables -P FORWARD ACCEPT
  • Disable swap
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab   # prevent the swap partition from being auto-mounted at boot
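
A quick check that swap is fully off:

swapon -s                # no swap devices should be listed
free -h | grep -i swap   # should report 0B total/used/free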
  • Disable SELinux
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
  • Disable the firewall
systemctl disable firewalld && systemctl stop firewalld
  • Tune kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
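
Note that modprobe does not persist across reboots. On RHEL 7 the module can be loaded permanently via systemd's modules-load.d (the file name k8s.conf here is arbitrary), and the values verified afterwards:

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print 1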
  • Configure yum repositories
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache

Note: configure this only when installing with internet access; it is not needed for an intranet install.

1.5 Install Docker

Run on: all nodes
Reference link: "Building a low-spec Docker image registry in an intranet environment"
That guide both installs Docker and deploys the internal image registry.
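
Whichever way Docker is installed, it must be configured to trust the plain-HTTP internal registry, or pulls from 172.22.14.56:5000 will fail. A minimal /etc/docker/daemon.json sketch (the registry address assumes the setup from the referenced article; the cgroup-driver line is optional, since the init log later in this post shows the cluster also comes up with the default cgroupfs driver, just with a warning):

cat <<EOF > /etc/docker/daemon.json
{
  "insecure-registries": ["172.22.14.56:5000"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker && systemctl enable docker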

2. Deploy k8s

2.1 Install kubeadm, kubelet, and kubectl

Run on: all master and slave nodes (k8s-master, k8s-slave)
Since this is an intranet install and the internal yum repository has no packages for these, the rpm packages and their dependencies were downloaded in advance;
just run yum install *.rpm, as follows:

[root@k8s-slave1 k8s]# ls -ltr
total 61980
-rw-r--r-- 1 root root 20469886 Jun  1 16:49 kubelet-1.19.8-0.x86_64.rpm
-rw-r--r-- 1 root root  8731038 Jun  1 16:52 kubeadm-1.19.8-0.x86_64.rpm
-rw-r--r-- 1 root root  9451762 Jun  1 16:52 kubectl-1.19.8-0.x86_64.rpm
-rw-r--r-- 1 root root 19487362 Jun  1 17:03 kubernetes-cni-0.8.7-0.x86_64.rpm
-rw-r--r-- 1 root root  5318270 Jun  1 17:04 cri-tools-1.13.0-0.x86_64.rpm
[root@k8s-slave1 k8s]# yum install *rpm
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

Examining cri-tools-1.13.0-0.x86_64.rpm: cri-tools-1.13.0-0.x86_64
cri-tools-1.13.0-0.x86_64.rpm: does not update installed package.
Examining kubeadm-1.19.8-0.x86_64.rpm: kubeadm-1.19.8-0.x86_64
Marking kubeadm-1.19.8-0.x86_64.rpm to be installed
Examining kubectl-1.19.8-0.x86_64.rpm: kubectl-1.19.8-0.x86_64
Marking kubectl-1.19.8-0.x86_64.rpm to be installed
Examining kubelet-1.19.8-0.x86_64.rpm: kubelet-1.19.8-0.x86_64
Marking kubelet-1.19.8-0.x86_64.rpm to be installed
Examining kubernetes-cni-0.8.7-0.x86_64.rpm: kubernetes-cni-0.8.7-0.x86_64
Marking kubernetes-cni-0.8.7-0.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.19.8-0 will be installed
---> Package kubectl.x86_64 0:1.19.8-0 will be installed
---> Package kubelet.x86_64 0:1.19.8-0 will be installed
---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================================
 Package                                  Arch                             Version                               Repository                                                Size
================================================================================================================================================================================
Installing:
 kubeadm                                  x86_64                           1.19.8-0                              /kubeadm-1.19.8-0.x86_64                                  37 M
 kubectl                                  x86_64                           1.19.8-0                              /kubectl-1.19.8-0.x86_64                                  41 M
 kubelet                                  x86_64                           1.19.8-0                              /kubelet-1.19.8-0.x86_64                                 105 M
 kubernetes-cni                           x86_64                           0.8.7-0                               /kubernetes-cni-0.8.7-0.x86_64                            55 M

Transaction Summary
================================================================================================================================================================================
Install  4 Packages

Total size: 238 M
Installed size: 238 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : kubernetes-cni-0.8.7-0.x86_64                                                                                                                                1/4 
  Installing : kubelet-1.19.8-0.x86_64                                                                                                                                      2/4 
  Installing : kubectl-1.19.8-0.x86_64                                                                                                                                      3/4 
  Installing : kubeadm-1.19.8-0.x86_64                                                                                                                                      4/4 
  Verifying  : kubeadm-1.19.8-0.x86_64                                                                                                                                      1/4 
  Verifying  : kubelet-1.19.8-0.x86_64                                                                                                                                      2/4 
  Verifying  : kubectl-1.19.8-0.x86_64                                                                                                                                      3/4 
  Verifying  : kubernetes-cni-0.8.7-0.x86_64                                                                                                                                4/4 

Installed:
  kubeadm.x86_64 0:1.19.8-0                 kubectl.x86_64 0:1.19.8-0                 kubelet.x86_64 0:1.19.8-0                 kubernetes-cni.x86_64 0:0.8.7-0                

Complete!
  • Installation result on slave1
[root@k8s-slave1 k8s]# yum list|grep kube
kubeadm.x86_64                          1.19.8-0                   @/kubeadm-1.19.8-0.x86_64
kubectl.x86_64                          1.19.8-0                   @/kubectl-1.19.8-0.x86_64
kubelet.x86_64                          1.19.8-0                   @/kubelet-1.19.8-0.x86_64
kubernetes-cni.x86_64                   0.8.7-0                    @/kubernetes-cni-0.8.7-0.x86_64
[root@k8s-slave1 k8s]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.8", GitCommit:"fd5d41537aee486160ad9b5356a9d82363273721", GitTreeState:"clean", BuildDate:"2021-02-17T12:39:33Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-slave1 k8s]# 
  • Installation result on slave2
[root@k8s-slave2 k8s]# yum list|grep kube
kubeadm.x86_64                          1.19.8-0                   @/kubeadm-1.19.8-0.x86_64
kubectl.x86_64                          1.19.8-0                   @/kubectl-1.19.8-0.x86_64
kubelet.x86_64                          1.19.8-0                   @/kubelet-1.19.8-0.x86_64
kubernetes-cni.x86_64                   0.8.7-0                    @/kubernetes-cni-0.8.7-0.x86_64
[root@k8s-slave2 k8s]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.8", GitCommit:"fd5d41537aee486160ad9b5356a9d82363273721", GitTreeState:"clean", BuildDate:"2021-02-17T12:39:33Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-slave2 k8s]#

Note: if your yum repository does carry the packages, just run the following instead:

yum install -y kubelet-1.19.8 kubeadm-1.19.8 kubectl-1.19.8 --disableexcludes=kubernetes
  • Check the kubeadm version
[root@k8s-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.8", GitCommit:"fd5d41537aee486160ad9b5356a9d82363273721", GitTreeState:"clean", BuildDate:"2021-02-17T12:39:33Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master ~]# 
  • Enable kubelet to start at boot
systemctl enable kubelet
2.2 Generate the init configuration file

Run on: the master node (k8s-master)
Command:

kubeadm config print init-defaults > kubeadm.yaml

Edit the configuration:

[root@k8s-master ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.22.14.56 # apiserver address; the default here is 1.2.3.4. Single-master setup, so use the master node's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: 172.22.14.56:5000  # changed to the internal registry address
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod CIDR; the flannel plugin uses this subnet
  serviceSubnet: 10.96.0.0/12
scheduler: {}
[root@k8s-master ~]# 
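
Optionally, before pulling images or initializing, the file can be sanity-checked without bootstrapping anything; a hedged sketch using kubeadm's preflight phase:

kubeadm init phase preflight --config kubeadm.yaml   # runs init's pre-flight checks without creating the cluster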
2.3 Pull the component images

Run on: the master node (k8s-master)

  • List the images to be used; if everything is correct, you will get the following list

Command:

kubeadm config images list --config kubeadm.yaml

Result:

[root@k8s-master ~]# kubeadm config images list --config kubeadm.yaml
172.22.14.56:5000/kube-apiserver:v1.19.0
172.22.14.56:5000/kube-controller-manager:v1.19.0
172.22.14.56:5000/kube-scheduler:v1.19.0
172.22.14.56:5000/kube-proxy:v1.19.0
172.22.14.56:5000/pause:3.2
172.22.14.56:5000/etcd:3.4.13-0
172.22.14.56:5000/coredns:1.7.0
[root@k8s-master ~]#
  • Pull the images

Command:

kubeadm config images pull --config kubeadm.yaml

Result:

[root@k8s-master ~]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled 172.22.14.56:5000/kube-apiserver:v1.19.0
[config/images] Pulled 172.22.14.56:5000/kube-controller-manager:v1.19.0
[config/images] Pulled 172.22.14.56:5000/kube-scheduler:v1.19.0
[config/images] Pulled 172.22.14.56:5000/kube-proxy:v1.19.0
[config/images] Pulled 172.22.14.56:5000/pause:3.2
[config/images] Pulled 172.22.14.56:5000/etcd:3.4.13-0
[config/images] Pulled 172.22.14.56:5000/coredns:1.7.0
[root@k8s-master ~]#
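
For reference, the internal registry itself was populated beforehand from a host with internet access. A sketch of that preparation step (hypothetical; registry.aliyuncs.com/google_containers is assumed here as the public mirror of the upstream images):

# run on an internet-connected host that can also reach 172.22.14.56:5000
for img in kube-apiserver:v1.19.0 kube-controller-manager:v1.19.0 kube-scheduler:v1.19.0 \
           kube-proxy:v1.19.0 pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
    docker pull registry.aliyuncs.com/google_containers/$img
    docker tag registry.aliyuncs.com/google_containers/$img 172.22.14.56:5000/$img
    docker push 172.22.14.56:5000/$img
done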
2.4 Initialize the master node

Run on: the master node (k8s-master)
Command:

kubeadm init --config kubeadm.yaml

Result:

[root@k8s-master ~]# kubeadm init --config kubeadm.yaml
W0607 22:28:02.679261    5388 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.22.14.56]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.22.14.56 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.22.14.56 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 65.503850 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.22.14.56:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:66d9579b1934750b0db86f354acc94373f785516d1c79e778a0b190d0aa9b41f
[root@k8s-master ~]# 
2.4.1 Post-init step one

Run on: the master node (k8s-master)
After initialization completes, run the following commands on the master node to configure kubectl client authentication.
Command:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Result:

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
2.4.2 Post-init step two

Run on: all slave nodes (k8s-slave1, k8s-slave2)
After step 2.4 (master initialization) completes, run the join command it printed; replace the command below with the one actually printed by your own init output.
Command:

kubeadm join 172.22.14.56:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:66d9579b1934750b0db86f354acc94373f785516d1c79e778a0b190d0aa9b41f

Result:

[root@k8s-slave1 ~]# kubeadm join 172.22.14.56:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:66d9579b1934750b0db86f354acc94373f785516d1c79e778a0b190d0aa9b41f
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-slave1 ~]# 

Note: at this point kubectl get nodes should show the nodes as NotReady, because the network plugin has not been configured yet.
If the initialization fails, fix the issue according to the error message, run kubeadm reset, and then run the init again.

※Tip※
If you lose the join command, you can regenerate it on the master node with:

kubeadm token create --print-join-command
2.5 Install the flannel plugin

Run on: the master node (k8s-master)

  • Download the flannel yaml file
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • Edit the configuration
    Specify the NIC name: around line 190 of the file, add the line - --iface=eth0
...
   serviceAccountName: flannel
   initContainers:
   - name: install-cni
     image: 172.22.14.56:5000/flannel:v0.14.0 # flannel image switched to the local registry address
     command:
     - cp
     args:
     - -f
     - /etc/kube-flannel/cni-conf.json
     - /etc/cni/net.d/10-flannel.conflist
     volumeMounts:
     - name: cni
       mountPath: /etc/cni/net.d
     - name: flannel-cfg
       mountPath: /etc/kube-flannel/
   containers:
   - name: kube-flannel
     image: 172.22.14.56:5000/flannel:v0.14.0 # flannel image switched to the local registry address
     command:
     - /opt/bin/flanneld
     args:
     - --ip-masq
     - --kube-subnet-mgr
     - --iface=eth0  # if the machine has multiple NICs, specify the internal one; by default the first NIC is used
     resources:
       requests:
         cpu: "100m"
         memory: "50Mi"
       limits:
         cpu: "100m"
         memory: "50Mi"
     securityContext:
...
  • Install the flannel network plugin

With internet access you would pull from the upstream registry: docker pull quay.io/coreos/flannel:v0.14.0-amd64
Here, pull the image from the internal registry to the local host instead.
Command:

docker pull 172.22.14.56:5000/flannel:v0.14.0

Result:

[root@k8s-master ~]# docker pull 172.22.14.56:5000/flannel:v0.14.0
v0.14.0: Pulling from flannel
801bfaa63ef2: Pull complete 
e4264a7179f6: Pull complete 
bc75ea45ad2e: Pull complete 
78648579d12a: Pull complete 
3393447261e4: Pull complete 
071b96dd834b: Pull complete 
4de2f0468a91: Pull complete 
Digest: sha256:635d42b8cc6e9cb1dee3da4d5fe8bbf6a7f883c9951b660b725b0ed2c03e6bde
Status: Downloaded newer image for 172.22.14.56:5000/flannel:v0.14.0
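
If the internal registry does not yet contain the flannel image, the same pull-tag-push flow applies; a sketch to run on an internet-connected host, using the upstream tag mentioned above:

docker pull quay.io/coreos/flannel:v0.14.0-amd64
docker tag quay.io/coreos/flannel:v0.14.0-amd64 172.22.14.56:5000/flannel:v0.14.0
docker push 172.22.14.56:5000/flannel:v0.14.0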
  • Apply the manifest

Command:

 kubectl apply -f kube-flannel.yml

Result:

[root@k8s-master ~]#  kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
2.6 Enable kubectl auto-completion

Run on: the master node (k8s-master)

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
2.7 Verification
  • 2.7.1 Verify the cluster

Command:

kubectl get nodes  # check whether all cluster nodes are Ready

Result:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   34m     v1.19.8
k8s-slave1   Ready    <none>   26m     v1.19.8
k8s-slave2   Ready    <none>   2m50s   v1.19.8
[root@k8s-master ~]# kubectl get po --namespace=kube-system -owide
NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
coredns-599885f8cf-8nkfp             1/1     Running   0          34m     10.244.0.2     k8s-master   <none>           <none>
coredns-599885f8cf-z62k6             1/1     Running   0          34m     10.244.0.3     k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   0          34m     172.22.14.56   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          34m     172.22.14.56   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          34m     172.22.14.56   k8s-master   <none>           <none>
kube-flannel-ds-27qdl                1/1     Running   0          4m27s   172.22.14.56   k8s-master   <none>           <none>
kube-flannel-ds-lr4fs                1/1     Running   0          4m27s   172.22.14.57   k8s-slave1   <none>           <none>
kube-flannel-ds-p7khv                1/1     Running   0          2m52s   172.22.14.58   k8s-slave2   <none>           <none>
kube-proxy-bzpbx                     1/1     Running   0          34m     172.22.14.56   k8s-master   <none>           <none>
kube-proxy-jrsj8                     1/1     Running   0          26m     172.22.14.57   k8s-slave1   <none>           <none>
kube-proxy-rmqdq                     1/1     Running   0          2m52s   172.22.14.58   k8s-slave2   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          34m     172.22.14.56   k8s-master   <none>           <none>
[root@k8s-master ~]# 
  • 2.7.2 Create a test nginx pod

Command:

kubectl run my-test-nginx --image=172.22.14.56:5000/nginx:alpine

Result:

[root@k8s-master ~]# kubectl run my-test-nginx --image=172.22.14.56:5000/nginx:alpine
pod/my-test-nginx created
[root@k8s-master ~]# 

Check that the pod was created successfully, then curl the pod IP to verify it is reachable:

[root@k8s-master ~]# kubectl get po -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
my-test-nginx   1/1     Running   0          77s   10.244.2.2   k8s-slave2   <none>           <none>
[root@k8s-master ~]# curl 10.244.2.2 # verify with curl
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-master ~]# 
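
Beyond the pod IP, the Service layer can be checked by exposing the test pod; a minimal sketch (the Service name my-test-nginx is introduced here for illustration):

kubectl expose pod my-test-nginx --port=80 --name=my-test-nginx   # create a ClusterIP Service in front of the pod
kubectl get svc my-test-nginx                                     # note the CLUSTER-IP column
curl <cluster-ip>                                                 # should return the same nginx welcome page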
2.8 Clean up the environment

If you run into other problems during cluster installation, you can reset with the commands below.
Run on: all nodes

kubeadm reset
ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /run/flannel/subnet.env
rm -rf /var/lib/cni/
mv /etc/kubernetes/ /tmp
mv /var/lib/etcd /tmp
mv ~/.kube /tmp
iptables -F
iptables -t nat -F
ipvsadm -C
ip link del kube-ipvs0
ip link del dummy0
2.9 Summary
  • The non-HA version is quite simple to deploy; with a similar environment, following the steps one by one should just work.
  • With internet access the deployment goes smoothly; an intranet takes a bit more work, since images and dependency packages must be fetched by hand.
  • The dashboard is not deployed here, and master schedulability is not configured; install or set these yourself if needed.
  • To redeploy, clean up the environment first and then repeat the steps in order.
  • Questions and discussion are welcome in the comments.

Note: make the master node schedulable (optional)
By default the master node cannot schedule workload pods. To let the master also take part in pod scheduling, run:

kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-