K8s Cluster Setup Process
1. Environment Preparation
- Three CentOS 7 hosts (3 GB+ RAM, 2 CPU cores, 40 GB of disk space)
- Network: the hosts must be able to reach one another (see the hosts-file sketch after the list below)
k8s-node1 10.15.0.21
k8s-node2 10.15.0.22
k8s-node3 10.15.0.23
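If DNS cannot resolve these hostnames, one option is to map them in /etc/hosts on every machine; a minimal sketch, assuming the addresses above:
# Make the cluster hostnames resolvable on every host (skip if DNS already handles this)
cat >> /etc/hosts <<EOF
10.15.0.21 k8s-node1
10.15.0.22 k8s-node2
10.15.0.23 k8s-node3
EOF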
2. Configuration
Unless otherwise noted, every command in this section must be run on all hosts.
1. Set the hostname
# First host
hostnamectl set-hostname k8s-node1
# Second host
hostnamectl set-hostname k8s-node2
# Third host
hostnamectl set-hostname k8s-node3
2. Disable the firewall
# Stop the firewall
systemctl stop firewalld
# Disable the firewall at boot
systemctl disable firewalld
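Stopping firewalld entirely is the quickest route for a lab cluster. For anything longer-lived, an alternative sketch keeps firewalld running and opens only the ports Kubernetes documents (6443 for the API server, 2379-2380 for etcd, 10250 for the kubelet, plus 8472/udp for Flannel's VXLAN traffic):
# Control-plane host
firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp
# All hosts: kubelet API and Flannel VXLAN
firewall-cmd --permanent --add-port=10250/tcp --add-port=8472/udp
firewall-cmd --reload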
3. Disable the swap partition
# Improves cluster stability: keeps the host from degrading performance by swapping when memory runs low
# Temporarily disable swap first
sudo swapoff -a
# Then comment out the swap entry in /etc/fstab so it stays off across reboots
sed -ri 's/.*swap.*/#&/' /etc/fstab
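To confirm swap is actually off:
# The Swap line should report 0 total / 0 used
free -m
# Prints nothing once swap is disabled
swapon --show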
4. Synchronize server time
# Install the time synchronization tool
yum install ntpdate -y
# Sync against Microsoft's time server
ntpdate time.windows.com
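ntpdate performs a one-shot sync, so clocks will drift again over time. A hedged option is to re-run it from cron (assuming crond is active, as it is by default on CentOS 7):
# Re-sync once an hour so the node clocks stay aligned
(crontab -l 2>/dev/null; echo "0 * * * * /usr/sbin/ntpdate time.windows.com") | crontab -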
5. Install containerd
# Install yum-config-manager and its dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the repository, rewritten to the Aliyun mirror
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
# Install containerd
yum install -y containerd.io cri-tools
# Configure containerd
cat > /etc/containerd/config.toml <<EOF
disabled_plugins = ["restart"]
[plugins.linux]
shim_debug = true
[plugins.cri.registry.mirrors."docker.io"]
endpoint = ["https://3j4odb52.mirror.aliyuncs.com"]
[plugins.cri]
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
EOF
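Note that this snippet uses the legacy (v1) plugin naming. If your containerd release rejects it, a fallback sketch is to generate the default v2-schema config and enable the systemd cgroup driver that kubelet expects on systemd hosts:
# Regenerate the default config in the current schema
containerd config default > /etc/containerd/config.toml
# Switch the runc runtime to the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml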
# Start the containerd service
systemctl start containerd
# Enable containerd at boot
systemctl enable containerd
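To confirm the runtime is healthy, crictl (installed above via cri-tools) can be pointed at the containerd socket; this assumes the default socket path:
# Tell crictl where the containerd socket lives
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
# Should print runtime and CNI status without errors
crictl info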
# Configure the kernel modules needed for containerd networking
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# Configure k8s networking sysctls
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Load the modules immediately
modprobe overlay
modprobe br_netfilter
# Apply the k8s sysctl settings
sysctl -p /etc/sysctl.d/k8s.conf
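A quick check that the modules and sysctls took effect:
# Both modules should be listed
lsmod | grep -E 'overlay|br_netfilter'
# All three values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward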
6. Configure the k8s package repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Verify the repository is listed
yum repolist
7. Install the k8s components
yum install -y kubelet kubeadm kubectl
8. Start kubelet
systemctl start kubelet
systemctl enable kubelet
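kubelet will restart in a loop until kubeadm init (or join) hands it a configuration; that is expected at this stage. Optionally, the control-plane images can be pre-pulled before initialization, using the same Aliyun mirror as the init command below:
# Pre-pull the control-plane images (control-plane host only)
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers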
3. Initialize the k8s Cluster
Run the following on the control-plane host only:
kubeadm init \
--apiserver-advertise-address=10.15.0.21 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers
When it completes, the output looks like the following and includes the command other nodes use to join the cluster:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.15.0.21:6443 --token ee016g.60vwpgmf6z7sgdyy \
--discovery-token-ca-cert-hash sha256:f5bf614871aa605779a1bf14ef87aa6a866f13c61219702948640fac6d82e96e
First, as the output instructs, run these three commands on the control-plane host:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then run the join command on each worker node:
kubeadm join 10.15.0.21:6443 --token ee016g.60vwpgmf6z7sgdyy \
--discovery-token-ca-cert-hash sha256:f5bf614871aa605779a1bf14ef87aa6a866f13c61219702948640fac6d82e96e
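If the token has expired (the default lifetime is 24 hours), a fresh join command can be generated on the control-plane host:
# Print a new join command with a fresh token
kubeadm token create --print-join-command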
On the control-plane host, watch the nodes join:
watch -n 1 -d kubectl get nodes
The nodes join but all stay NotReady. Installing the Flannel network plugin provides cross-host Pod communication and lets them become Ready:
vi kube-flannel.yml
Copy the following content into the file (it can also be downloaded from the Flannel repository on GitHub, which may require a proxy from mainland China):
https://github.com/flannel-io/flannel
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.22.0
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.22.0
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
Finally, apply the manifest to install the network plugin:
kubectl apply -f kube-flannel.yml
Once the containers finish initializing, check the status of the nodes and Pods:
kubectl get nodes
kubectl get pods -A
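As a final smoke test, a throwaway Deployment confirms that scheduling and Pod networking work end to end; the nginx image here is only an example:
# Run a test Pod and confirm it becomes Running with a 10.244.x.x address
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide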