Detailed k8s Cluster Setup Tutorial [1 master + 2 nodes]

I. Cluster Topologies

Single master, multiple workers: one master and multiple node machines
Multiple masters, multiple workers: several masters and several node machines

II. Installation Methods

minikube: a tool for quickly standing up a single-node k8s environment
kubeadm: a tool for quickly bootstrapping a k8s cluster (the method used in this tutorial)
Binary packages: download each component's binaries from the official site and install them manually

III. Installation Plan

192.168.2.109  k8s-master-109
192.168.2.110  k8s-node-110
192.168.2.111  k8s-node-111

IV. Environment Setup

docker : 20.10.10
kubeadm: 1.23.1
kubelet: 1.23.1
kubectl: 1.23.1
1. Environment initialization [run on all nodes]
(1) Edit the hosts file
Add the cluster hostnames to /etc/hosts on every machine:
vim /etc/hosts
192.168.2.109  k8s-master-109
192.168.2.110  k8s-node-110
192.168.2.111  k8s-node-111
(2) Stop firewalld and iptables
# disable firewalld
systemctl stop firewalld
systemctl disable firewalld
# disable iptables
systemctl stop iptables
systemctl disable iptables
(3) Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
(4) Configure time synchronization
yum install ntpdate -y 
echo "0 */1 * * * /usr/sbin/ntpdate ntp.aliyun.com" >> /var/spool/cron/root  # sync from ntp.aliyun.com once an hour
(5) Disable the swap partition
swapoff -a
# edit /etc/fstab and comment out the swap entry
vim /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
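If you prefer not to edit the file by hand, a sed one-liner can comment out the entry. This is only a sketch that assumes the swap line looks like the default CentOS entry above, so check /etc/fstab afterwards:
sed -ri 's/^([^#].*\sswap\s+swap\s.*)$/#\1/' /etc/fstab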
(6) Adjust kernel parameters
# add bridge filtering and IP forwarding settings
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
# load the bridge netfilter module first, otherwise the net.bridge.* keys do not exist yet
[root@k8s-master-109 ~]# modprobe br_netfilter
# apply the kernel parameters
sysctl -p
# verify the module is loaded
[root@k8s-master-109 ~]# lsmod | grep br_netfilter
br_netfilter           28672  0
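br_netfilter is not loaded automatically after the reboot in step (8). A minimal sketch for making it persistent on a systemd-based system such as CentOS 7 (systemd-modules-load reads this directory at boot):
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf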
(7) Enable ipvs support
k8s supports two kube-proxy modes, one based on iptables and one based on ipvs; of the two, ipvs performs better.

# install ipset and ipvsadm
yum install ipset ipvsadm -y

# load the required kernel modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4   # older kernels
modprobe -- nf_conntrack  # newer kernels
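Like br_netfilter, these modules need to survive the reboot in step (8). A sketch using the same modules-load.d mechanism; on newer kernels replace nf_conntrack_ipv4 with nf_conntrack:
cat > /etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF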
(8) Reboot the servers
reboot
2. Install docker [run on all nodes]
Online installation:
yum remove docker  docker-common docker-selinux docker-engine

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum -y install docker-ce

# edit the docker daemon configuration
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

# reload systemd, then start and enable docker
systemctl daemon-reload
systemctl start docker
systemctl enable docker
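Optionally confirm that docker picked up the systemd cgroup driver, which must match the kubelet setting configured later:
docker info | grep -i 'cgroup driver'
# expected: Cgroup Driver: systemd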
3. Install the k8s components [run on all nodes]
# add the kubernetes yum repository
[root@k8s-master-109 ~]# vim /etc/yum.repos.d/k8s.repo 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# install the k8s components: kubeadm, kubectl and kubelet
[root@k8s-master-109 ~]# yum install kubeadm kubectl kubelet
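The plan in section IV calls for version 1.23.1, but an unpinned yum install pulls whatever version is newest in the repository. To stay on the documented versions, the packages can be pinned (a sketch; adjust if the repo carries a different build suffix):
yum install -y kubeadm-1.23.1 kubelet-1.23.1 kubectl-1.23.1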

# configure the kubelet cgroup driver and kube-proxy mode
#vim /etc/sysconfig/kubelet

KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

# enable kubelet at boot

systemctl enable kubelet
4. Download the cluster images [run on all nodes]
# list the images kubeadm needs, then download the corresponding files as shown below
[root@k8s-master-109 ~]# kubeadm config images list  
k8s.gcr.io/kube-apiserver:v1.23.1
k8s.gcr.io/kube-controller-manager:v1.23.1
k8s.gcr.io/kube-scheduler:v1.23.1
k8s.gcr.io/kube-proxy:v1.23.1
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
[root@k8s-master-109 ~]# 

# pull the images the cluster needs from the Aliyun mirror
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.1
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.1
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1
docker pull registry.aliyuncs.com/google_containers/pause:3.6
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.1-0
docker pull registry.aliyuncs.com/google_containers/coredns:1.8.6

# retag the images to the names kubeadm expects
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.1   k8s.gcr.io/kube-apiserver:v1.23.1 
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.1    k8s.gcr.io/kube-controller-manager:v1.23.1
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.1    k8s.gcr.io/kube-scheduler:v1.23.1
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1    k8s.gcr.io/kube-proxy:v1.23.1
docker tag registry.aliyuncs.com/google_containers/pause:3.6    k8s.gcr.io/pause:3.6
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.1-0   k8s.gcr.io/etcd:3.5.1-0
docker tag registry.aliyuncs.com/google_containers/coredns:1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6 

# remove the Aliyun-tagged images

docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.1 
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.1 
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.1
docker rmi registry.aliyuncs.com/google_containers/etcd:3.5.1-0  
docker rmi registry.aliyuncs.com/google_containers/coredns:1.8.6
docker rmi registry.aliyuncs.com/google_containers/pause:3.6
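The pull/tag/rmi sequence above is repetitive; the same commands can be wrapped in a small loop. This is only a convenience sketch of the commands already shown, with coredns handled separately because its mirror tag and target name differ:
for img in kube-apiserver:v1.23.1 kube-controller-manager:v1.23.1 kube-scheduler:v1.23.1 kube-proxy:v1.23.1 pause:3.6 etcd:3.5.1-0; do
  docker pull registry.aliyuncs.com/google_containers/$img
  docker tag  registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
  docker rmi  registry.aliyuncs.com/google_containers/$img
done
docker pull registry.aliyuncs.com/google_containers/coredns:1.8.6
docker tag  registry.aliyuncs.com/google_containers/coredns:1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
docker rmi  registry.aliyuncs.com/google_containers/coredns:1.8.6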
5. Initialize the cluster
[run on the master node]
# initialize the cluster
kubeadm init --kubernetes-version=v1.23.1 --pod-network-cidr=172.26.0.0/16 --service-cidr=10.126.0.0/16 --apiserver-advertise-address=192.168.2.109 

# record the kubeadm join command printed at the end of the init output; it is needed for adding nodes to the cluster
kubeadm join 192.168.2.109:6443 --token e9cl34.w1nh9tl05pwhh9w3 \
        --discovery-token-ca-cert-hash sha256:649b9f114475b252d16c68ff3558f2a12e42080e187c7b072d19aaab0c84b958 
# if initialization fails, run kubeadm reset to clean up before retrying
# create the kubeconfig file that kubectl reads
[root@k8s-master-109 ~]# mkdir -p $HOME/.kube
[root@k8s-master-109 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-109 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[run on the worker nodes]
# log in to each worker node and run the following command to join it to the cluster
kubeadm join 192.168.2.109:6443 --token e9cl34.w1nh9tl05pwhh9w3 \
        --discovery-token-ca-cert-hash sha256:649b9f114475b252d16c68ff3558f2a12e42080e187c7b072d19aaab0c84b958
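The bootstrap token printed by kubeadm init is only valid for 24 hours by default. If it has expired before a node joins, generate a fresh join command on the master:
kubeadm token create --print-join-command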
6. Install the network plugin [run on the master node]
# k8s supports several network plugins, such as flannel, calico and canal; this tutorial uses calico
# download the plugin manifest
wget --no-check-certificate https://docs.projectcalico.org/manifests/calico.yaml

# edit the line defining the Pod network (CALICO_IPV4POOL_CIDR); its value must match the --pod-network-cidr passed to kubeadm init
[root@k8s-master-109 ~]# vim calico.yaml

- name: CALICO_IPV4POOL_CIDR
  value: "172.26.0.0/16"
# install the network plugin
[root@k8s-master-109 ~]# kubectl apply -f calico.yaml

# check pod status; the cluster is ready to use once everything is Running
kubectl get pods -n kube-system
kubectl get pod -o wide -n kube-system
kubectl get pods --all-namespaces
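Besides the pod status, the nodes themselves should report Ready once calico is running:
kubectl get nodes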
7. Switch kube-proxy to ipvs
# edit the kube-proxy configmap
[root@k8s-master-109 ~]# kubectl edit cm kube-proxy -n kube-system
# set mode: "ipvs"
# delete the existing kube-proxy pods so the DaemonSet recreates them in ipvs mode
[root@k8s-master-109 ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
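To confirm kube-proxy is really using ipvs, inspect the virtual server table with the ipvsadm tool installed in step IV.1(7); the cluster service addresses should appear as virtual servers:
ipvsadm -Ln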