Setting up a k8s cluster with kubeadm
1. Preparation before the setup
Three CentOS 7 servers (virtual machines) with the following IP addresses:
192.168.135.148 (node)
192.168.135.149 (node)
192.168.135.150 (master)
First, make sure that the firewall, the swap partition and SELinux are turned off on all three machines, and that the machines can reach each other.
Temporarily disable them:
setenforce 0
swapoff -a
systemctl stop firewalld
Permanently disable them:
swap:
Open /etc/fstab, find the line containing /dev/mapper/centos-swap swap and comment it out, or use the following commands:
sed -i "s/\/dev\/mapper\/centos-swap/#\/dev\/mapper\/centos-swap/" /etc/fstab
reboot
Firewall:
systemctl disable firewalld
SELinux:
Open /etc/sysconfig/selinux and change SELINUX=enforcing to disabled or permissive. disabled turns SELinux off completely, while permissive only logs warnings. I chose permissive here:
sed -i "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/sysconfig/selinux
reboot
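After the reboot it is worth confirming that all three changes actually took effect. A minimal check using standard commands (the expected results in the comments are my own notes, not part of the original steps):

getenforce                      # should print Permissive or Disabled
swapon --show                   # should print nothing if swap is off
systemctl is-enabled firewalld  # should print disabled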
2. Install the required cluster components: kubeadm, kubectl and kubelet
kubeadm is the official cluster bootstrapping tool; it is normally only used while deploying the cluster.
kubectl is the client program for operating the cluster; it talks to the API server to manage cluster resources and administer the cluster.
kubelet is the node agent and a very important component of the cluster; it is responsible for creating and managing the pods on its node and for reporting node status.
Start the installation:
Install on every server.
Create the file kubernetes.repo under /etc/yum.repos.d/ and copy in everything from [kubernetes] up to (but not including) EOF, or simply run the commands below:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# once the repo file exists, install directly with yum
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
Note: this pulls the latest stable packages for CentOS 7 64-bit; you can check the component version numbers that were actually installed as shown below.
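A quick way to see which versions ended up on your machines (a check I added; the exact version numbers depend on when you run the install):

kubeadm version
kubelet --version
kubectl version --client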
Enable shell auto-completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
This follows the steps from the official documentation, but they are slightly incomplete: kubectl completion does not work automatically.
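To actually get kubectl completion you can load the completion script that kubectl itself generates; this extra step is my own addition based on the kubectl documentation:

echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc
# or make it available system-wide:
kubectl completion bash > /etc/bash_completion.d/kubectl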
Install docker on every server by running the following commands; I won't go into the details here.
yum install yum-utils -y
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io -y
systemctl start docker
# set the cgroup driver to systemd
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
Note: setting the cgroup driver in this step is important. If the cgroup driver used by k8s does not match the one used by docker, the installation will fail.
Verify that the change took effect:
docker info | grep Cgroup
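If the daemon.json change was applied correctly, the output should contain a line like the one below (exact wording can differ slightly between docker versions):

 Cgroup Driver: systemd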
3. After the installation, initialize the master node using a domestic mirror registry, create the kubeconfig directory, and distribute the config file to each node
kubeadm init --image-repository registry.aliyuncs.com/google_containers
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
sed -i "14a\- --allocate-node-cidrs=true" /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i "14a\ - --cluster-cidr=10.244.0.0/16" /etc/kubernetes/manifests/kube-controller-manager.yaml
This creates the kubeconfig directory under the home directory of the user you are logged in as (I recommend doing this as root). The two sed commands enable the configuration needed by the flannel plugin; if you are not going to install flannel you can skip them for now (although without a network plugin the cluster cannot actually run pods).
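kube-controller-manager runs as a static pod, so kubelet picks up the edited manifest automatically. To confirm the two flags were inserted where you expect (a check I added, not part of the original steps):

grep -n -- "--allocate-node-cidrs\|--cluster-cidr" /etc/kubernetes/manifests/kube-controller-manager.yaml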
This step takes a few minutes. Once the master node has finished initializing, it prints a command; copy that command and run it on every node that should join the cluster, which in my case means 192.168.135.148 and 192.168.135.149.
kubeadm join 192.168.135.150:6443 --token ktwbcb.5u2u5uf48yyjfj60 \
    --discovery-token-ca-cert-hash sha256:8f0ca1579e36c35f0e557134d1909213cb03d424a6ecc415b4f27262942674bd
This is my token; yours will be different, and a token is only valid for 24 hours. If it has expired, generate a new one with the commands below.
# generate a token that never expires (not recommended)
kubeadm token create --ttl 0 --print-join-command
# generate a new token
kubeadm token create --print-join-command
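You can also list the tokens that already exist on the master, together with their expiry times (my addition):

kubeadm token list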
The following steps are executed only on the node machines.
scp root@192.168.135.150:/etc/kubernetes/admin.conf /etc/kubernetes/
systemctl restart kubelet
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
This copies the master node's config file to the node and then, just as on the master, creates the kubeconfig directory under the home directory.
scp here copies the file from the master to the local machine; remember to change the IP address if yours is different.
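At this point you can check that all three machines have joined the cluster (a check I added). They will show as NotReady until the network plugin is installed in the next step:

kubectl get nodes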
4. Deploy the flannel network plugin
As mentioned earlier, without a network plugin pods cannot be created; providing communication inside the cluster is exactly what the network plugin is for.
There are many network plugins for k8s; I chose flannel. flannel has an official deployment file which you can download yourself if you can get around the network restrictions. The flannel manifest I provide below is only suitable for k8s 1.21~1.25 (the current k8s release is 1.23, according to the official site).
First create the directory /etc/kubernetes/flannel/, create the file kube-flannel.yml and copy in the content below (if you don't use cat, remember to leave out the EOF lines). The file can actually live anywhere, but to keep things tidy I put it under the kubernetes directory.
mkdir -p /etc/kubernetes/flannel/
cat << EOF > /etc/kubernetes/flannel/kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
EOF
Install it with the following command. Note: the file must be in your current directory, so cd into the directory first.
kubectl apply -f kube-flannel.yml
When it is done, run kubectl get pods -A; if you see three flannel pods in kube-system, the plugin was installed successfully and the cluster setup is complete. You can deploy some pods of your own to test it. I will cover testing and troubleshooting in a later post; feel free to leave a comment if you run into problems.
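As a quick smoke test you could deploy something simple and watch it get scheduled onto the worker nodes. The deployment name and image below are only an illustration, not part of the original post:

kubectl create deployment nginx-test --image=nginx
kubectl scale deployment nginx-test --replicas=2
kubectl get pods -o wide
# expose it as a NodePort and curl it from any node to confirm cross-node networking works
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get svc nginx-test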