Installing and Deploying Kubernetes 1.23.1 with kubeadm on CentOS 7.8

I. Machines

Host     IP              Spec                            OS
master   192.168.0.160   2 CPU / 4 GB RAM / 50 GB disk   CentOS 7.8
node01   192.168.0.6     2 CPU / 4 GB RAM / 50 GB disk   CentOS 7.8
node02   192.168.0.167   2 CPU / 4 GB RAM / 50 GB disk   CentOS 7.8

 

II. Host Configuration

Perform the following steps on every node.

1. Set the hostname

Set the corresponding hostname on each node:

hostnamectl set-hostname  master

hostnamectl set-hostname  node01

hostnamectl set-hostname  node02

2. Configure /etc/hosts

Append the cluster entries to /etc/hosts (>> keeps the default localhost lines intact):

cat <<EOF >>/etc/hosts
192.168.0.160  master
192.168.0.6    node01
192.168.0.167  node02
EOF
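
A quick optional check that every node resolves the names above (just a sketch using ping):

for h in master node01 node02; do ping -c 1 $h; done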

3. Disable the firewall and SELinux

Stop the firewall:
systemctl stop firewalld
Prevent it from starting at boot:
systemctl disable firewalld
Disable SELinux:
vi /etc/selinux/config
SELINUX=disabled
Reboot for the SELinux change to take effect:
reboot
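
If you prefer not to edit the file interactively, the same change can be scripted; setenforce switches the running system to permissive so the reboot can be deferred (a sketch equivalent to the manual steps above):

# Persistently disable SELinux in the config file
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Put the running system into permissive mode until the next reboot
setenforce 0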

4. Disable the swap partition

Comment out the swap line in /etc/fstab:

vi /etc/fstab
 #
 # /etc/fstab
 # Created by anaconda on Mon Jan 21 19:19:41 2019
 #
 # Accessible filesystems, by reference, are maintained under '/dev/disk'
 # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
 #
 /dev/mapper/centos-root /                       xfs     defaults        0 0
 UUID=214b916c-ad23-4762-b916-65b53fce1920 /boot                   xfs     defaults        0     0
 #/dev/mapper/centos-swap swap                    swap    defaults        0 0
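
Editing /etc/fstab only stops swap from coming back after a reboot; kubelet also requires swap to be off right now. A non-interactive sketch that does both:

# Turn swap off for the running system
swapoff -a
# Comment out any swap entries in /etc/fstab
sed -ri 's/.*swap.*/#&/' /etc/fstab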

5. Create /etc/sysctl.d/k8s.conf with the following content

cat <<EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF

# Run the following commands to make the settings take effect

modprobe br_netfilter

sysctl -p /etc/sysctl.d/k8s.conf
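
An optional check that the parameters and module are in place:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
lsmod | grep br_netfilter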

6. Prerequisites for enabling IPVS mode in kube-proxy

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Load the modules and verify they are present:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install the ipset package:
yum install ipset -y

Install the ipvsadm management tool:
yum install ipvsadm -y
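
Once the cluster is running (after step 8 below), you can confirm that kube-proxy really switched to IPVS; it should list virtual servers for the Service network:

ipvsadm -Ln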

III. Install Docker

Perform the following steps on every node.

1. Configure the Aliyun Docker yum repository

yum-config-manager  --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

If yum-config-manager is not available, install yum-utils first and re-run the command above:
yum -y install yum-utils

List the Docker versions available for installation:
yum list docker-ce.x86_64  --showduplicates |sort -r

2. Install Docker

By default yum installs the latest version; for compatibility, pin a specific version here:

yum install -y --setopt=obsoletes=0 docker-ce-18.06.1.ce-3.el7

3. Set Docker's cgroup driver to systemd

mkdir -p /etc/docker

cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

If daemon.json already contains other entries, remember to add a comma after the preceding line so the JSON stays valid.
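
For example, if the file also carried a registry mirror (the mirror URL here is only a placeholder), it would look like this, with a comma after the first entry:

{
  "registry-mirrors": ["https://example.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}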

4. Start Docker and enable it at boot

systemctl start docker && systemctl enable docker
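
Verify that Docker is up and really using the systemd cgroup driver, otherwise kubelet will refuse to start later:

docker info | grep -i 'cgroup driver'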

IV. Install Kubernetes with kubeadm

Steps 1-4 must be run on all nodes; steps 5-6 run on the master node only.

1. Configure the Kubernetes yum repository

vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
enabled=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
    https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

If the repository metadata signature check fails, change repo_gpgcheck=1 to repo_gpgcheck=0.

2. Install kubelet, kubeadm and kubectl

List the versions available for installation:

yum list kubelet  --showduplicates |sort -r
yum list kubeadm  --showduplicates |sort -r
yum list kubectl  --showduplicates |sort -r

Install the default version (normally the latest available):

yum makecache fast && yum install -y kubelet  kubeadm kubectl

Or install a specific version, for example 1.23.17:

yum install kubelet-1.23.17-0  kubeadm-1.23.17-0 kubectl-1.23.17-0 -y  
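
Either way, confirm the installed versions before continuing; they should match the image versions pulled in step 4:

kubeadm version
kubelet --version
kubectl version --client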

3. Set kubelet's cgroup driver, then start and enable it

cat <<EOF >/etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF

# Start kubelet and enable it at boot
systemctl start kubelet && systemctl enable kubelet
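
Note: kubelet will keep restarting at this point because its configuration is only generated by kubeadm init / kubeadm join; this is expected. You can watch it with:

systemctl status kubelet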

 

4. Pull the required images

[root@master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.1
k8s.gcr.io/kube-controller-manager:v1.23.1
k8s.gcr.io/kube-scheduler:v1.23.1
k8s.gcr.io/kube-proxy:v1.23.1
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6  ## note the extra coredns path component here

vi k8s.sh
docker pull mirrorgooglecontainers/kube-apiserver:v1.23.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.23.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.23.1
docker pull mirrorgooglecontainers/kube-proxy:v1.23.1
docker pull mirrorgooglecontainers/pause:3.6
docker pull mirrorgooglecontainers/etcd:3.5.1-0
docker pull coredns/coredns:v1.8.6


docker tag mirrorgooglecontainers/kube-apiserver:v1.23.1 k8s.gcr.io/kube-apiserver:v1.23.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.23.1 k8s.gcr.io/kube-controller-manager:v1.23.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.23.1 k8s.gcr.io/kube-scheduler:v1.23.1
docker tag mirrorgooglecontainers/kube-proxy:v1.23.1 k8s.gcr.io/kube-proxy:v1.23.1
docker tag mirrorgooglecontainers/pause:3.6 k8s.gcr.io/pause:3.6
docker tag mirrorgooglecontainers/etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0
docker tag coredns/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6  ## the extra coredns path component is needed when tagging as well



docker rmi mirrorgooglecontainers/kube-apiserver:v1.23.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.23.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.23.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.23.1
docker rmi mirrorgooglecontainers/pause:3.6
docker rmi mirrorgooglecontainers/etcd:3.5.1-0
docker rmi coredns/coredns:v1.8.6
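
Run the script and confirm that all seven images now carry the k8s.gcr.io names kubeadm expects:

bash k8s.sh
docker images | grep k8s.gcr.io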

5. Initialize the cluster

kubeadm init \
--kubernetes-version=v1.23.1 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.0.160


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
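
With the kubeconfig in place, kubectl can already talk to the new control plane; the coredns pods will stay Pending until the network plugin from step 6 is installed:

kubectl get pods -n kube-system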

If any of the parameters above were wrong and you need to initialize again, reset first:
kubeadm reset

--kubernetes-version: the Kubernetes version to install
--apiserver-advertise-address: the IP address of the master node

Output like the following indicates the initialization succeeded:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
(Run the join command below on node01 and node02 to add them to the cluster; see also step 7.)

kubeadm join 192.168.0.160:6443 --token 57jle4.zbccddfk8d2su6pe \
	--discovery-token-ca-cert-hash sha256:556eeec7a4d742155a785b90a6efaebd95c466ad939047d4ad90ccb55dc35418

  

6. Deploy Flannel

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f  kube-flannel.yml
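
Once applied, check that the flannel DaemonSet pods reach Running on every node (the kube-flannel namespace comes from the manifest below; older manifests deployed into kube-system):

kubectl get pods -n kube-flannel -o wide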


If you need to reinstall, delete the previously created network resources first:
kubectl delete -f  kube-flannel.yml

If kube-flannel.yml cannot be downloaded, use the manifest below instead; pull the images it references ahead of time, and make sure the pod network in net-conf.json matches the --pod-network-cidr passed to kubeadm init (10.244.0.0/16 in this guide). The content is as follows:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.22.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.22.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

 

7. Join the worker nodes to the cluster

Run the following command on node01 and node02 to join them to the cluster:

kubeadm join 192.168.0.160:6443 --token 57jle4.zbccddfk8d2su6pe \
	--discovery-token-ca-cert-hash sha256:556eeec7a4d742155a785b90a6efaebd95c466ad939047d4ad90ccb55dc35418

8. Check the cluster status

Run this on the master node; if every node shows Ready, the cluster is up:
kubectl  get node

[root@master ~]# kubectl  get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   27m   v1.23.1
node01   Ready    <none>                 17m   v1.23.1
node02   Ready    <none>                 17m   v1.23.1
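
For a broader health check, also make sure every system pod is Running:

kubectl get pods -A -o wide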

  

V. Image archives

I have packaged the related images and uploaded them to a cloud drive; download them if you need them.

Link: https://pan.baidu.com/s/1XKN32WXiXmp6XKlsgw-xGw  Extraction code: q8hs

K8s 1.23.17 images are also available for download:

Link: https://pan.baidu.com/s/1GTQUikASXMkSkUYKMPhKLQ  Extraction code: hd8t
