2. Hands-on Kubernetes installation -> stable version v1.14.3

Cluster deployment architecture planning:

Node network: 192.168.1.0/24
Service network: 10.96.0.0/12
Pod network: 10.244.0.0/16

  

Deployment methods reference: https://github.com/kubernetes

kops: AWS (Amazon Web Services) and GCE (Google Cloud Platform) are currently officially supported

kubeadm: https://github.com/kubernetes/kubeadm

    design document: https://github.com/kubernetes/kubeadm/blob/main/docs/design/design_v1.10.md

There are many ways to install Kubernetes; here we install with kubeadm, in a one-master, two-worker layout.

1. Cluster information
a. Node planning
Hostname   Node IP        Role    Components
k8s-master 192.168.1.203  master  etcd, proxy, apiserver, controller-manager, scheduler, coredns, pause
k8s-node1  192.168.1.202  slave   proxy, coredns, pause
k8s-node2  192.168.1.201  slave   proxy, coredns, pause

b. Component versions
Component           Version                 Notes
CentOS              7.2 / 7.9.2009
Kernel              3.10.0-1160.el7.x86_64
etcd                3.3.10
coredns             1.3.1
proxy               v1.14.3
controller-manager  v1.14.3
apiserver           v1.14.3
scheduler           v1.14.3
kubelet             1.14.3                  system package, installed via yum
kubeadm             1.14.3                  system package, installed via yum
kubectl             1.14.3                  system package, installed via yum

2. Pre-installation preparation
Nodes: run on all nodes (master and workers).
a. Set the hostname
The hostname may only contain lowercase letters, digits, "." and "-", and must start with a lowercase letter or digit.
Set the hostname on the master node: hostnamectl set-hostname k8s-master
Set the hostname on the slave1 node: hostnamectl set-hostname k8s-node1
Set the hostname on the slave2 node: hostnamectl set-hostname k8s-node2
b. Add /etc/hosts entries
cat >>/etc/hosts<<EOF
192.168.1.203 k8s-master
192.168.1.202 k8s-node1
192.168.1.201 k8s-node2
EOF

3. Adjust system configuration
Nodes: run on all nodes (master and workers).
a. Open the required ports in the security group (or simply disable the firewall)
If there are no security-group restrictions between the nodes (machines on the internal network can reach each other freely), this step can be skipped; otherwise at least the following must be reachable (see the firewalld sketch below):
k8s-master node: open TCP ports 6443, 2379, 2380, 60080, 60081
k8s-slave nodes: open all UDP ports
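As an illustrative sketch only (assuming firewalld is the firewall in use; skip this if you disable firewalld in step d below), the master ports listed above could be opened like so:
$ firewall-cmd --permanent --add-port=6443/tcp
$ firewall-cmd --permanent --add-port=2379-2380/tcp
$ firewall-cmd --permanent --add-port=60080-60081/tcp
$ firewall-cmd --permanent --add-port=8472/udp   # flannel VXLAN traffic (assumed backend)
$ firewall-cmd --reload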
b. Set iptables: iptables -P FORWARD ACCEPT
c. Disable swap: swapoff -a  (and keep swap from being mounted at boot: sed -i '/ swap / s/^\(.*\)/#\1/g' /etc/fstab)
d. Disable SELinux and the firewall
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld && systemctl stop firewalld
e. Tune kernel parameters
cat <<EOF> /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.max_map_count=262144
EOF
$ modprobe br_netfilter
$ lsmod |grep filter # check that br_netfilter is loaded
$ sysctl -p /etc/sysctl.d/k8s.conf

f. Configure yum repositories
Sync the clock: ntpdate s1a.time.edu.cn
Install basic tools: yum install -y wget zip unzip rsync lrzsz vim-enhanced ntpdate tree lsof dstat net-tools
$ curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
$ cat <<EOF> /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ yum clean all && yum makecache

4. Install Docker
Nodes: run on all nodes (master and workers).
## List all available versions: yum list docker-ce --showduplicates|sort -r
## Install an older version: yum install -y --setopt=obsoletes=0 docker-ce-17.03.3.ce-1.el7 (or yum install -y docker-ce-17.03.3.ce-1.el7)
## Install the latest version: yum install -y docker-ce

## Configure a Docker registry mirror
$ mkdir -p /etc/docker
$ tee /etc/docker/daemon.json <<-'EOF'
{
  "insecure-registries": ["192.168.1.203:5000"],
  "registry-mirrors": ["https://xxx.mirror.aliyuncs.com"]
}
EOF

## Start Docker
$ systemctl daemon-reload
$ systemctl enable docker && systemctl restart docker
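A quick, optional sanity check after Docker starts (illustrative only):
$ docker version
$ docker info | grep -i "cgroup driver"   # kubeadm warns later if this differs from kubelet's cgroup driver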

5. Install Kubernetes
a. Install kubeadm, kubelet and kubectl
Nodes: run on all nodes (master and workers).
yum install -y kubelet-1.14.3 kubeadm-1.14.3 kubectl-1.14.3 --disableexcludes=kubernetes
Check the installed versions: kubeadm version / kubectl version --client
Enable kubelet at boot: systemctl enable kubelet
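Optionally verify the install (note: until kubeadm init/join has run, kubelet restarts in a loop waiting for its configuration, which is expected at this stage):
$ kubeadm version
$ systemctl status kubelet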
b. Initialize the configuration file
Nodes: master only.
kubeadm config print init-defaults >kubeadm.yaml   # export the default config, then edit it
[root@k8s-master ~]# grep '##' kubeadm.yaml   # four settings were changed, as shown below
advertiseAddress: 192.168.1.203 ## master ip
imageRepository: k8s.gcr.io ## k8s.gcr.io --> registry.aliyuncs.com/google_containers
kubernetesVersion: v1.14.3 ## v1.14.0 --> v1.14.3
podSubnet: 10.244.0.0/16 ## "" --> "10.244.0.0/16", pod network, required by flannel
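For orientation, the surrounding structure of those fields in kubeadm.yaml (kubeadm v1.14 uses the kubeadm.k8s.io/v1beta1 API; this is an illustrative excerpt, not the full file) looks roughly like:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.203   ## master ip
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: k8s.gcr.io         ## or registry.aliyuncs.com/google_containers
kubernetesVersion: v1.14.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"        ## required by flannel
  serviceSubnet: 10.96.0.0/12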

c. Pre-pull the images
Nodes: master only.
# List the required images; if everything is fine you get the following list.
[root@k8s-master ~]# kubeadm config images list --config kubeadm.yaml
k8s.gcr.io/kube-apiserver:v1.14.3
k8s.gcr.io/kube-controller-manager:v1.14.3
k8s.gcr.io/kube-scheduler:v1.14.3
k8s.gcr.io/kube-proxy:v1.14.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
# Pre-pull the images locally
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.3.10
docker pull coredns/coredns:1.3.1
# Retag the images to the k8s.gcr.io names
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3 k8s.gcr.io/kube-apiserver:v1.14.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3
docker tag mirrorgooglecontainers/etcd-amd64:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1

docker rmi mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3
docker rmi mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3
docker rmi mirrorgooglecontainers/etcd-amd64:3.3.10
docker rmi mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3
docker rmi mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker rmi coredns/coredns:1.3.1
docker rmi mirrorgooglecontainers/pause-amd64:3.1
Note: these mirror repositories may become unavailable over time; substitute whichever working mirror you can find.
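The pull/tag/rmi steps above can also be scripted; a minimal sketch (same mirror and tags as listed above, assuming the mirror stays reachable):
#!/bin/bash
# Pull the kubeadm images from the mirror, retag them as k8s.gcr.io, then drop the mirror tags.
images="kube-apiserver:v1.14.3 kube-controller-manager:v1.14.3 kube-scheduler:v1.14.3 kube-proxy:v1.14.3 pause:3.1 etcd:3.3.10"
for img in $images; do
  docker pull mirrorgooglecontainers/${img%%:*}-amd64:${img##*:}
  docker tag  mirrorgooglecontainers/${img%%:*}-amd64:${img##*:} k8s.gcr.io/$img
  docker rmi  mirrorgooglecontainers/${img%%:*}-amd64:${img##*:}
done
# coredns lives in a different repository
docker pull coredns/coredns:1.3.1
docker tag  coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi  coredns/coredns:1.3.1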

d. Initialize the master node
Nodes: master only.
kubeadm init --config kubeadm.yaml
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.203:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:2c5d02ebf2a7cec5e344967297d182664119923eb437a59eb3ad8f5881299124
$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m12s v1.14.3
At this point kubectl get no shows the node as NotReady, because no network plugin has been configured yet. If the init step fails, fix the reported problem, run kubeadm reset, and then run kubeadm init again.
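It can also help to look at the control-plane pods now (coredns staying Pending is expected until the flannel network plugin is installed):
$ kubectl -n kube-system get pods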

e. Add the slave nodes to the cluster
Nodes: run on every slave node (prerequisite: the preparation steps above have been completed on them).
Run the following command, then check with kubectl get no on the master.
kubeadm join 192.168.1.203:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:2c5d02ebf2a7cec5e344967297d182664119923eb437a59eb3ad8f5881299124
Note: as shown below, the slave node has joined the cluster but is still NotReady, because the (flannel) network plugin is missing.
$ kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 60m v1.14.3
k8s-node1 NotReady <none> 18s v1.14.3
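Note: the bootstrap token printed by kubeadm init expires after 24 hours by default. If it has expired, a fresh join command can be generated on the master:
$ kubeadm token create --print-join-command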

f. Install the flannel plugin
Nodes: master only.
Download the manifest (network access can be flaky; retry a few times): wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Edit the kube-flannel.yml configuration to use a reachable image address and to pin the host network interface:
...
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64   ## pick an image that is reachable
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.22.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33   ## if the host has multiple NICs, flannel defaults to the first one
        resources:
          requests:
            cpu: "100m"
...
Note: a local private image registry can also be used here.
Install the flannel plugin:
Pull the image first to speed things up: docker pull quay.io/coreos/flannel:v0.10.0-amd64
Apply the manifest: kubectl create -f kube-flannel.yml
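To confirm the plugin came up (the namespace depends on the flannel manifest version: kube-system for older manifests, kube-flannel for newer ones):
$ kubectl -n kube-system get pods -o wide | grep flannel
$ kubectl get no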

Normally the nodes should now all reach the Ready state.
If they do not, troubleshoot like this: check the nodes --> check the pods --> describe the failing pod.
[root@k8s-node1 ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 11h v1.14.3
k8s-node1 NotReady <none> 10h v1.14.3
[root@k8s-node1 ~]# kubectl get pods -A |grep -v Running
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-fb8b8dccf-bg89f 0/1 Pending 0 11h
kube-system coredns-fb8b8dccf-vpxmf 0/1 Pending 0 11h
kube-system kube-proxy-hdtz7 0/1 ContainerCreating 0 10h
[root@k8s-node1 ~]# kubectl describe pod kube-proxy-hdtz7 -n kube-system
...
error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.204.82:443: i/o timeout
Warning FailedCreatePodSandBox 68s (x7 over 21m) kubelet, k8s-node1 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.189.82:443: i/o timeout
Note the key phrase "failed pulling image": the image pull failed, so the images also need to be pre-pulled on the node.
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker pull coredns/coredns:1.3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi mirrorgooglecontainers/pause-amd64:3.1
docker rmi mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker rmi coredns/coredns:1.3.1

Another error, visible with tailf /var/log/messages:
Jun 29 15:01:22 k8s-master kubelet: W0629 15:01:22.792343 8210 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Jun 29 15:01:24 k8s-master kubelet: E0629 15:01:24.157722 8210 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Reference: https://blog.csdn.net/weixin_40548480/article/details/122786504
Remove the --network-plugin=cni parameter from /var/lib/kubelet/kubeadm-flags.env, then restart kubelet with systemctl restart kubelet (all nodes need this change); a sketch of the edit is shown below.
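A minimal sketch of that edit (run on every node; back up the file first, since the exact flag list varies per setup):
$ cp /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/kubeadm-flags.env.bak
$ sed -i 's/--network-plugin=cni//' /var/lib/kubelet/kubeadm-flags.env
$ systemctl restart kubelet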
After this, kubectl get shows the cluster components healthy.

At this point all of the Kubernetes cluster deployment work is done.

But it still does not work: the cluster IP cannot be reached (reference: https://www.zhongkehuayu.com/326.html).
[root@k8s-master fl]# ls /opt/cni/bin/flannel
ls: cannot access /opt/cni/bin/flannel: No such file or directory
[root@k8s-master bin]# cd /opt/cni/bin
[root@k8s-master bin]# tar -xf cni-plugins-linux-amd64-v0.8.6.tgz
[root@k8s-master bin]# ll /opt/cni/bin/flannel
-rwxr-xr-x 1 root root 3069556 May 14 2020 /opt/cni/bin/flannel
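The cni-plugins tarball extracted above comes from the containernetworking/plugins GitHub releases; if it is not already on the host it can be fetched first, for example (version taken from the filename above):
$ cd /opt/cni/bin
$ wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
$ tar -xf cni-plugins-linux-amd64-v0.8.6.tgz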
[root@k8s-master ~]# ifconfig cni0
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.1 netmask 255.255.255.0 broadcast 10.244.0.255
inet6 fe80::ecf2:70ff:fe8c:33c prefixlen 64 scopeid 0x20<link>
ether ee:f2:70:8c:03:3c txqueuelen 1000 (Ethernet)
RX packets 5116 bytes 323360 (315.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3950 bytes 1202445 (1.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@k8s-master ~]# ip r
default via 192.168.1.1 dev ens33 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.203 metric 100
A new interface (cni0) and new routes have now appeared.
[root@k8s-master ~]# scp -r /etc/cni/net.d/10-flannel.conf k8s-node1:/etc/cni/net.d/
[root@k8s-master ~]# scp -r /run/flannel/subnet.env k8s-node1:/run/flannel/
Still not working. Following https://www.cnblogs.com/l-hh/p/14989850.html, it turned out the flannel container had not started, so the manifest had to be fetched again.

The manifest that finally worked: [root@k8s-master ~]# ls /root/fl/kube-flannel.yml

[root@k8s-master ~]# cat kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "Directrouting": true
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        #image: quay.io/coreos/flannel:v0.10.0-amd64
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.22.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64   ## pick an image that is reachable
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.22.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33  ## if the host has multiple NICs, flannel defaults to the first one
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
[root@k8s-master ~]# 

[root@k8s-master fl]# kubectl get pods -n kube-system |grep flannel
kube-flannel-ds-amd64-864bn 1/1 Running 0 2m17s
kube-flannel-ds-amd64-9txrj 1/1 Running 0 2m17s
kube-flannel-ds-amd64-wbtzr 1/1 Running 0 2m17s
After deployment, the stale network config files can be cleaned up; after a reboot they are regenerated automatically.
[root@k8s-node2 ~]# rm -rf /etc/cni/net.d/
[root@k8s-node2 ~]# rm -rf /run/flannel/
[root@k8s-node2 ~]# reboot # the two config files below are regenerated after the reboot
[root@k8s-node2 ~]# ls /etc/cni/net.d/10-flannel.conflist
[root@k8s-node2 ~]# ls /run/flannel/subnet.env
[root@k8s-node2 ~]# ip r
default via 192.168.1.1 dev ens33 proto static metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 dev cni0 proto kernel scope link src 10.244.2.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.201 metric 100
[root@k8s-node2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:d5:64:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.201/24 brd 192.168.1.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed5:6485/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:86:64:5b:c4 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 1a:54:51:98:c9:e2 brd ff:ff:ff:ff:ff:ff
inet 10.244.2.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::1854:51ff:fe98:c9e2/64 scope link
valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether c6:26:a9:78:52:4c brd ff:ff:ff:ff:ff:ff
inet 10.244.2.1/24 brd 10.244.2.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::c426:a9ff:fe78:524c/64 scope link
valid_lft forever preferred_lft forever
6: veth2bced70f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether 02:02:20:b5:ab:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::2:20ff:feb5:ab49/64 scope link
valid_lft forever preferred_lft forever
Verify the network:
[root@k8s-master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=2
[root@k8s-master ~]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-5bc569c47d-dd76h 1/1 Running 0 18m 10.244.2.2 k8s-node2 <none> <none>
myapp-5bc569c47d-wpkm6 1/1 Running 0 18m 10.244.1.199 k8s-node1 <none> <none>
[root@k8s-master ~]# curl 10.244.2.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.244.1.199
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
As shown above, basic Kubernetes verification is complete.
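Since the original symptom was an unreachable cluster IP, it is also worth exposing the deployment as a Service and curling its ClusterIP (illustrative check):
$ kubectl expose deployment myapp --port=80 --target-port=80
$ kubectl get svc myapp                   # note the CLUSTER-IP column
$ curl $(kubectl get svc myapp -o jsonpath='{.spec.clusterIP}')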

=====================================================================

Useful commands
kubectl cluster-info
kubectl explain no
kubectl describe pod coredns-fb8b8dccf-bg89f -n kube-system
kubectl get pod -A
kubectl get pods -o wide
kubectl describe pod nginx-deploy-55d8d67cf-mc9f7

Follow-up maintenance
After a shutdown or power loss, the service may need to be restarted by hand: systemctl restart kubelet
Rebuilding the cluster
$ kubeadm reset
$ kubeadm init --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
...
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.203:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:7a51552624caad1373e5944a2180b36720d78dae234cb3771964fd2cec6d49f5
$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m11s v1.14.3
$ kubeadm config images list
