Setting Up a K8s Cluster with kubeadm

I. Setting up a k8s cluster (kubeadm)
Introduction to the kubeadm deployment method
kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster. It can stand up a cluster with just two commands:
First, create a master node: kubeadm init
Second, join the node machines to the cluster: kubeadm join <master node IP and port>

Installation requirements
The machines used to deploy the Kubernetes cluster must meet the following conditions (a quick verification sketch follows the list):

1. One or more machines running CentOS 7.x x86_64
2. Hardware: 2 GB of RAM or more, 2 or more CPUs, 30 GB of disk or more
3. All machines in the cluster can reach one another over the network
4. Outbound internet access is available (needed to pull images)
5. Swap is disabled
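
A minimal check sketch for these requirements, using standard CentOS 7 tools (the expectations in the comments mirror the list above):

# Run on each candidate machine
cat /etc/redhat-release   # OS release, expect CentOS 7.x
nproc                     # CPU count, expect 2 or more
free -h                   # total RAM, expect 2 GB or more
df -h /                   # root filesystem size, expect 30 GB or more
swapon --show             # prints nothing once swap is disabled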

Final goals

1. Install Docker and kubeadm on all nodes
2. Deploy the Kubernetes master
3. Deploy a container network plugin
4. Deploy the Kubernetes nodes and join them to the cluster
5. Deploy the Dashboard web UI to view Kubernetes resources visually

II. Installation steps

  1. Create three virtual machines running CentOS 7.x
192.168.72.129
192.168.72.130
192.168.72.131 (master)
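
Before going further, it is worth confirming that the three machines can reach one another and the outside world (requirements 3 and 4). A simple sketch, run on each VM, using the addresses above:

# From each of the three VMs
for ip in 192.168.72.129 192.168.72.130 192.168.72.131; do
  ping -c 2 "$ip"
done
curl -I https://mirrors.aliyun.com   # outbound access, needed later for packages and images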
  2. Initialize all three operating systems
# Turn off the firewall
systemctl stop firewalld     # temporary
systemctl disable firewalld  # permanent (across reboots)

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent

# Set the hostname according to your plan
hostnamectl set-hostname <hostname>

# Add hosts entries (run on the master only)
cat >> /etc/hosts << EOF
192.168.72.131 k8s-master
192.168.72.129 k8s-node1
192.168.72.130 k8s-node2
EOF

# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system  # apply

# Time synchronization
yum install ntpdate -y

ntpdate time.windows.com
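
Before moving on, a quick sketch to verify that the settings above actually took effect on each machine:

getenforce                  # Permissive now, Disabled after a reboot
free -h | grep -i swap      # the Swap line should read 0B
hostname                    # should match the planned hostname
sysctl net.bridge.bridge-nf-call-iptables   # should print 1 (requires the br_netfilter module to be loaded)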
  3. Install Docker/kubeadm/kubelet on all nodes
    Install Docker
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

yum -y install docker-ce-18.06.1.ce-3.el7 

systemctl enable docker && systemctl start docker 

docker --version

Add the Alibaba Cloud YUM repositories

# Configure the Docker registry mirror
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker  # restart

# Add the Kubernetes yum repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
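
A quick way to confirm both changes in this section took effect before installing anything:

docker info | grep -A1 "Registry Mirrors"   # should list the Aliyun mirror configured above
yum repolist | grep -i kubernetes           # the new kubernetes repo should appear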

Install kubeadm, kubelet, and kubectl

# Versions change frequently, so pin them explicitly
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0 

systemctl enable kubelet
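
Before initializing the cluster, you can confirm that the pinned versions were actually installed; a short sketch:

kubeadm version -o short            # expect v1.18.0
kubelet --version                   # expect Kubernetes v1.18.0
kubectl version --client --short    # expect Client Version: v1.18.0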
  4. Deploy the Kubernetes master
    Run on the master
kubeadm init --apiserver-advertise-address=192.168.72.131 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

The default image registry k8s.gcr.io is not reachable from mainland China, so the Alibaba Cloud mirror registry is specified instead.

Set up the kubectl tool:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
sudo chown $(id -u):$(id -g) $HOME/.kube/config 

kubectl get nodes
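
At this point the master typically reports NotReady, because no network plugin has been installed yet (that is step 6). You can still check that the control-plane components came up; a quick sketch:

kubectl get pods -n kube-system -o wide   # etcd, kube-apiserver, kube-controller-manager, kube-scheduler, coredns
kubectl cluster-info                      # API server and DNS endpoints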
  5. Join the Kubernetes nodes
    Run on each node
    To add a new node to the cluster, run the kubeadm join command printed by kubeadm init:
kubeadm join 192.168.72.131:6443 --token n0nyws.8j0mkjbfwk16adai \
    --discovery-token-ca-cert-hash sha256:774e4171fc2a86bff58b49aaf153276f1fff56e93f1ba72e29a4ce3df8

The default token is valid for 24 hours. Once it expires it can no longer be used and a new one has to be created:

# On the master: list tokens to check whether the current one is still valid
kubeadm token list
# Generate a new token together with the full join command
kubeadm token create --print-join-command
# Then run the newly printed command on the node
kubeadm join 192.168.136.201:6443 --token 17wwni.taqxzqa3our1wh92 --discovery-token-ca-cert-hash sha256:1472821b3c34f13bc5d7264a737739e9854195b1856a00d2256c79d25118b2e
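
If you ever need the --discovery-token-ca-cert-hash value on its own, it can be recomputed on the master from the cluster CA certificate; a sketch of the openssl pipeline described in the kubeadm documentation:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'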
  6. Deploy the CNI network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Deploying the CNI with the command above failed repeatedly in this environment, so try the approach below instead.

① On the master, create a file, paste the content below into it, and then continue with the installation.
vim kube-flannel.yml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

② Deploy:

kubectl apply -f kube-flannel.yml
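
The DaemonSet in the manifest above is named kube-flannel-ds and runs in kube-system, so you can wait for it to finish rolling out before checking node status; a small sketch:

kubectl -n kube-system rollout status ds/kube-flannel-ds
kubectl -n kube-system get pods -l app=flannel -o wide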

③ Run the following commands; the nodes should now show a Ready status:

kubectl get pods -n kube-system

kubectl get nodes
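
Instead of re-running kubectl get nodes by hand, you can also block until every node reports Ready; a convenience sketch (the timeout is an arbitrary choice):

kubectl wait --for=condition=Ready nodes --all --timeout=300s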
  7. Test the Kubernetes cluster
    Create a pod in the cluster and verify that it runs correctly:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort 
kubectl get pod,svc

The output of kubectl get pod,svc includes a NodePort. Open any node's IP plus that port in a browser; if the nginx welcome page loads, the cluster is working.
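
To test from the command line instead of a browser, the assigned NodePort can be read back with kubectl and hit with curl; a sketch, using one of the node IPs from this setup:

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.72.129:${NODE_PORT}   # expect an HTTP 200 from nginx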
