Deploying a Kubernetes Cluster on CentOS 7


BUG弄潮儿 2023-06-10 20:32, posted from Guangdong



0x0. Environment

The server IP used throughout this article is 192.168.56.101 (adjust to your own environment).

  • OS version: CentOS 7

  • CPU Architecture: x86_64/amd64

  • K8s version: v1.23.17

  • Docker version: 20.10.23

0x1. Install Dependencies

yum install -y \
  curl \
  wget \
  systemd \
  bash-completion \
  lrzsz

0x2. Pre-installation Setup

  1. Synchronize server time

timedatectl set-timezone Asia/Shanghai && timedatectl set-local-rtc 0
systemctl restart rsyslog
systemctl restart crond

2. Set hostnames

This makes each server reachable by a memorable hostname.

# Master node
hostnamectl set-hostname k8s-master
# Worker nodes
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

Then map the names in /etc/hosts:

cat >/etc/hosts <<EOF
192.168.56.101 k8s-master
192.168.56.102 k8s-node1
192.168.56.103 k8s-node2
EOF

3. Open the required ports

  • Open the individual ports, or

  • simply disable the firewall altogether (acceptable for a lab, not for production):

systemctl disable firewalld.service && systemctl stop firewalld.service
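
If you would rather keep firewalld running, the ports below follow the list in the upstream Kubernetes documentation for clusters of this era; the exact set also depends on your CNI plugin, so treat this as a starting sketch rather than a complete policy:

```shell
# Control-plane node: ports the API server, etcd, kubelet, scheduler,
# and controller-manager listen on
firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
# Worker nodes instead need:
#   firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
#   firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
firewall-cmd --reload
```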

0x3. Container Runtime

  1. Enable IPv4 forwarding and let iptables see bridged traffic

cat >/etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

cat >/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
sysctl --system

# Confirm that the overlay and br_netfilter modules are loaded
lsmod | egrep 'overlay|br_netfilter'
# Confirm that net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward are set to 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

2. Install a container runtime

Note: Kubernetes v1.24 and later no longer support Docker Engine directly (dockershim was removed), which is why this guide pins v1.23.

  • Install Docker

Install Docker Engine on CentOS

yum install -y yum-utils
# Use the Aliyun mirror for the Docker yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
mkdir -p /etc/docker
# Configure registry mirrors, log rotation, and the systemd cgroup driver
cat >/etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://hub-mirror.c.163.com","https://docker.mirrors.ustc.edu.cn","https://registry.docker-cn.com"]
}
EOF
yum makecache fast
yum install -y docker-ce-20.10.23 docker-ce-cli-20.10.23 containerd.io
systemctl daemon-reload
systemctl enable docker && systemctl restart docker
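
Before moving on, it is worth confirming that the systemd cgroup driver from daemon.json actually took effect, since kubelet is configured for systemd later on:

```shell
# The cgroup driver must match what kubelet will be configured with (systemd)
docker info --format '{{.CgroupDriver}}'       # expected: systemd
docker version --format '{{.Server.Version}}'  # expected: 20.10.23
```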

  • Install containerd

container-runtimes

yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum install -y containerd.io
mkdir -p /etc/containerd
# Generate the default config file
containerd config default > /etc/containerd/config.toml
# Edit the config: systemd cgroup driver, Aliyun pause image, and registry mirrors
sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g" /etc/containerd/config.toml
sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
sed -i "/\[plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors\]/a\ [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"docker.io\"]" /etc/containerd/config.toml
sed -i "/\[plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"docker.io\"\]/a\ endpoint = [\"https://hub-mirror.c.163.com\",\"https://docker.mirrors.ustc.edu.cn\",\"https://registry.docker-cn.com\"]" /etc/containerd/config.toml
sed -i "/endpoint = \[\"https:\/\/hub-mirror.c.163.com\",\"https:\/\/docker.mirrors.ustc.edu.cn\",\"https:\/\/registry.docker-cn.com\"]/a\ [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"registry.k8s.io\"]" /etc/containerd/config.toml
sed -i "/\[plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"registry.k8s.io\"\]/a\ endpoint = [\"registry.cn-hangzhou.aliyuncs.com/google_containers\"]" /etc/containerd/config.toml
sed -i "/endpoint = \[\"registry.cn-hangzhou.aliyuncs.com\/google_containers\"]/a\ [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"k8s.gcr.io\"]" /etc/containerd/config.toml
sed -i "/\[plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"k8s.gcr.io\"\]/a\ endpoint = [\"registry.cn-hangzhou.aliyuncs.com/google_containers\"]" /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable containerd && systemctl restart containerd
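
A quick sanity check that the sed edits above landed in the rendered config, and that containerd restarted cleanly:

```shell
# Should show SystemdCgroup = true, the Aliyun sandbox (pause) image,
# and the mirror endpoints inserted above
grep -E 'SystemdCgroup|sandbox_image|endpoint' /etc/containerd/config.toml
systemctl is-active containerd    # prints "active" when the restart succeeded
```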

0x4. 安装k8s

Reference docs: kubeadm init, kubelet

  1. Turn off the swap partition (or disable the swap file)

swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
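
kubelet refuses to start while swap is active (by default), so double-check before continuing:

```shell
# Only the header line should remain -- no active swap devices
cat /proc/swaps
# The Swap row should read 0B across the board
free -h
```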

2. Put SELinux in permissive mode

setenforce 0 && sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

3. Install the Kubernetes packages

# Use the Aliyun Kubernetes yum repo
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubelet, kubeadm, and kubectl
yum install -y kubelet-1.23.17 kubeadm-1.23.17 kubectl-1.23.17 --disableexcludes=kubernetes
# Set the kubelet cgroup driver to systemd
cat >/etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
# Point crictl and kubelet at containerd (only needed when the runtime is containerd; skip for Docker)
crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
crictl config image-endpoint unix:///var/run/containerd/containerd.sock
# Note: /var/lib/kubelet/kubeadm-flags.env is generated by kubeadm init, so run this sed after init
sed -i '/KUBELET_KUBEADM_ARGS/s/"$/ --container-runtime=remote --container-runtime-endpoint=unix:\/\/\/run\/containerd\/containerd.sock"/' /var/lib/kubelet/kubeadm-flags.env

# Start kubelet now and enable it at boot
systemctl enable --now kubelet
# Check kubelet status (it keeps restarting until kubeadm init runs; that is expected)
systemctl status kubelet
# If something looks wrong, inspect the logs
journalctl -xe

0x5. Initialize the Cluster

mkdir -p /k8sdata/log/
kubeadm init \
--apiserver-advertise-address=192.168.56.101 \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.17 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 | tee /k8sdata/log/kubeadm-init.log

mkdir -p "$HOME"/.kube
cp -i /etc/kubernetes/admin.conf "$HOME"/.kube/config
chown "$(id -u)":"$(id -g)" "$HOME"/.kube/config
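
kubeadm init prints a kubeadm join command at the end of its output (also captured in /k8sdata/log/kubeadm-init.log); run it on k8s-node1 and k8s-node2 to bring the workers into the cluster. The token and hash below are placeholders, not real values:

```shell
# On each worker node -- substitute the token/hash from your own init log
kubeadm join 192.168.56.101:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
# Tokens expire after 24h by default; regenerate a full join command on the master with:
kubeadm token create --print-join-command
```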

Tips:

  1. A control-plane (master) node needs at least 2 CPU cores and 2 GB of RAM. To install on a smaller machine anyway, add --ignore-preflight-errors=NumCPU to the kubeadm init command line to skip the check.

  2. If initialization fails, clean up with kubeadm reset and run kubeadm init again.

0x6. Install a Pod Network Add-on

Install exactly one of the following CNI plugins, not both.

  • flannel

mkdir -p /k8sdata/network/
wget --no-check-certificate -O /k8sdata/network/kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f /k8sdata/network/kube-flannel.yml

  • calico

mkdir -p /k8sdata/network/
wget --no-check-certificate -O /k8sdata/network/calico.yml https://docs.projectcalico.org/manifests/calico.yaml
kubectl create -f /k8sdata/network/calico.yml
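
Whichever plugin you installed, the cluster is healthy once the CNI pods are Running and the nodes report Ready:

```shell
kubectl get pods -A -o wide   # the flannel/calico pods should reach Running
kubectl get nodes             # node STATUS should flip from NotReady to Ready
```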

0x7. Shell Completion for the k8s CLIs

! grep -q bash_completion "$HOME/.bashrc" && echo "source /usr/share/bash-completion/bash_completion" >>"$HOME/.bashrc"
! grep -q kubectl "$HOME/.bashrc" && echo "source <(kubectl completion bash)" >>"$HOME/.bashrc"
! grep -q kubeadm "$HOME/.bashrc" && echo "source <(kubeadm completion bash)" >>"$HOME/.bashrc"
! grep -q crictl "$HOME/.bashrc" && echo "source <(crictl completion bash)" >>"$HOME/.bashrc"
source "$HOME/.bashrc"

0x8. Common Commands

# List nodes
kubectl get nodes -o wide
# Watch node status in real time
watch kubectl get nodes -o wide
# List pods in all namespaces
kubectl get pods --all-namespaces -o wide
# List the images the control plane needs
kubeadm config images list
# Print the command for joining a node to the cluster
kubeadm token create --print-join-command
# Describe a node
kubectl describe node k8s-master
# Describe a pod
kubectl describe pod kube-flannel-ds-hs8bq --namespace=kube-flannel

0x9. Summary

Following this tutorial produces a working Kubernetes cluster. Some rough edges remain; problems encountered while deploying or operating the cluster will be added to this article as they come up.

source: //jonssonyan.com/2022/07/18/CentOS7部署K8s集群/

 
