
Initializing the Cluster Environment

1.1 Environment Overview (CentOS 7.6)

IP              Hostname    Role    Memory
192.168.133.10  k8s-master  master  4G
192.168.133.11  k8s-node1   node    2G
192.168.133.12  k8s-node2   node    2G

Network: NAT

Enable virtualization for the virtual machines

Disk: 40G

1.2 Configure a Static IP

Taking the k8s-master machine as an example, the static IP configuration is as follows:

[root@k8s-master ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=b91ea238-ad35-4e16-9f1d-5e42861a273c
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.133.10
PREFIX=24
GATEWAY=192.168.133.2
DNS1=114.114.114.114
IPV6_PRIVACY=no

After modifying the configuration, restart the network service:

[root@k8s-master ~]# systemctl restart network

Test that the network is reachable.
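For a quick check, ping the gateway configured above and any reachable external host (www.baidu.com is just an example target):

[root@k8s-master ~]# ping -c 3 192.168.133.2
[root@k8s-master ~]# ping -c 3 www.baidu.com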

1.3 Configure Hostnames

Run the following on 192.168.133.10:

[root@k8s-master ~]# hostnamectl set-hostname k8s-master

Run the following on 192.168.133.11:

[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node1

Run the following on 192.168.133.12:

[root@k8s-node2 ~]# hostnamectl set-hostname k8s-node2

1.4 Configure the hosts File

Modify the /etc/hosts file on every machine, adding the following entries:

192.168.133.10 k8s-master
192.168.133.11 k8s-node1
192.168.133.12 k8s-node2
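
The entries can be appended with a heredoc instead of an editor (a sketch; run it on each of the three machines):

[root@k8s-master ~]# cat >> /etc/hosts << EOF
> 192.168.133.10 k8s-master
> 192.168.133.11 k8s-node1
> 192.168.133.12 k8s-node2
> EOF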

1.5 Configure Passwordless SSH

Generate an SSH key pair:

[root@k8s-master ~]# ssh-keygen
#press Enter at every prompt

Install the local SSH public key into the corresponding account on the remote host:

[root@k8s-master ~]# ssh-copy-id k8s-master

Note: the key must be copied to every one of the three machines.
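
Rather than running ssh-copy-id once per host by hand, a small loop does the same thing (a sketch; it will still prompt for each host's password):

[root@k8s-master ~]# for host in k8s-master k8s-node1 k8s-node2; do ssh-copy-id $host; done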

1.6 Disable the firewalld Firewall

[root@k8s-master ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-node1 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-node2 ~]# systemctl stop firewalld && systemctl disable firewalld

1.7 Disable SELinux

#temporarily disable
[root@k8s-master ~]# setenforce 0
#permanently disable
[root@k8s-master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@k8s-node1 ~]# setenforce 0
[root@k8s-node1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@k8s-node2 ~]# setenforce 0
[root@k8s-node2 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Note: after changing the SELinux config file, the change takes effect only once the machine is rebooted.
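
getenforce can be used to verify the state; it should report Permissive right after setenforce 0, and Disabled after the reboot:

[root@k8s-master ~]# getenforce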

1.8 Disable the Swap Partition

[root@k8s-master ~]# swapoff -a
[root@k8s-node1 ~]# swapoff -a
[root@k8s-node2 ~]# swapoff -a

Permanent: open the /etc/fstab file and comment out the swap line.

Note: if the machine was cloned, delete the UUID line from /etc/fstab.
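
Commenting out the swap line can also be done with sed instead of an editor (a sketch; double-check /etc/fstab afterwards):

[root@k8s-master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab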

What is swap?

When memory runs low, Linux automatically uses swap, moving part of the in-memory data out to disk, which degrades performance.

Why disable the swap partition?

Disabling swap is mainly a performance consideration; by default the kubelet also refuses to start while swap is enabled.

1.9 Adjust Kernel Parameters

[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
[root@k8s-master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Note: these changes must be made on every machine.

Why enable ip_forward?

If ip_forward is not turned on on a container's host, containers on that host cannot be reached from other hosts.

Why enable net.bridge.bridge-nf-call-iptables / net.bridge.bridge-nf-call-ip6tables?

By default, traffic sent from a container onto the default bridge is not passed through the host's iptables/ip6tables chains and is not forwarded externally. Setting these parameters to 1 makes bridged traffic traverse those chains, which the forwarding rules Kubernetes installs rely on.
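
An optional sanity check that the module is loaded and the parameters are active:

[root@k8s-master ~]# lsmod | grep br_netfilter
[root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward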

1.10 Configure the Aliyun Repo

[root@k8s-master ~]# yum install -y wget
[root@k8s-master ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
[root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@k8s-node1 ~]# yum install -y wget
[root@k8s-node1 ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
[root@k8s-node1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@k8s-node2 ~]# yum install -y wget
[root@k8s-node2 ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
[root@k8s-node2 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
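
After swapping the repo file, it is worth rebuilding the yum cache on each machine:

[root@k8s-master ~]# yum clean all && yum makecache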

1.11 Configure the Aliyun Docker Repo

[root@k8s-master ~]# yum install -y yum-utils
[root@k8s-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root@k8s-node1 ~]# yum install -y yum-utils
[root@k8s-node1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root@k8s-node2 ~]# yum install -y yum-utils
[root@k8s-node2 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1.12 Configure the Offline Repo Needed to Install Docker and the K8S Components

Upload k8s-docker.tar.gz to the k8s-master machine (out of band), then extract it and define the local repo:

[root@k8s-master ~]# tar -xvzf k8s-docker.tar.gz -C /opt/
[root@k8s-master ~]# tee /etc/yum.repos.d/k8s-docker.repo << 'EOF'
> [k8s-docker]
> name=k8s-docker
> baseurl=file:///opt/k8s-docker
> enabled=1
> gpgcheck=0
> EOF

#configure the offline kubeadm/docker-ce yum repo on k8s-node1
[root@k8s-master ~]# scp /etc/yum.repos.d/k8s-docker.repo k8s-node1:/etc/yum.repos.d/
k8s-docker.repo                                                                                                 100%   80    51.4KB/s   00:00    
[root@k8s-master ~]# scp -r /opt/k8s-docker/ k8s-node1:/opt/

#configure the offline kubeadm/docker-ce yum repo on k8s-node2
[root@k8s-master ~]# scp /etc/yum.repos.d/k8s-docker.repo k8s-node2:/etc/yum.repos.d/
k8s-docker.repo                                                                                                 100%   80    51.4KB/s   00:00    
[root@k8s-master ~]# scp -r /opt/k8s-docker/ k8s-node2:/opt/

1.13 Configure the Aliyun K8S Yum Repo

[root@k8s-master ~]# tee /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=0
> EOF
#copy the K8S yum repo to k8s-node1 and k8s-node2
[root@k8s-master ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node1:/etc/yum.repos.d/
kubernetes.repo                                                                                                 100%  129    76.7KB/s   00:00    
[root@k8s-master ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node2:/etc/yum.repos.d/
kubernetes.repo                                                                                                 100%  129   157.4KB/s   00:00 

1.14 Configure Server Time and Network Time Synchronization

[root@k8s-master ~]# yum install -y ntpdate
[root@k8s-master ~]# ntpdate cn.pool.ntp.org
[root@k8s-master ~]# crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@k8s-master ~]# service crond restart

[root@k8s-node1 ~]# yum install -y ntpdate
[root@k8s-node1 ~]# ntpdate cn.pool.ntp.org
[root@k8s-node1 ~]# crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@k8s-node1 ~]# service crond restart

[root@k8s-node2 ~]# yum install -y ntpdate
[root@k8s-node2 ~]# ntpdate cn.pool.ntp.org
[root@k8s-node2 ~]# crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@k8s-node2 ~]# service crond restart
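
crontab -l can confirm the scheduled hourly sync was saved on each host:

[root@k8s-master ~]# crontab -l
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org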

1.15 Disable iptables

K8S (kube-proxy) uses IPVS and falls back to iptables mode only if IPVS is unavailable; the iptables service is disabled here so its default rules do not interfere.

[root@k8s-master ~]# yum install -y iptables-services
#disable iptables
[root@k8s-master ~]# service iptables stop && systemctl disable iptables

[root@k8s-node1 ~]# yum install -y iptables-services
[root@k8s-node1 ~]# service iptables stop && systemctl disable iptables

[root@k8s-node2 ~]# yum install -y iptables-services
[root@k8s-node2 ~]# service iptables stop && systemctl disable iptables

1.16 Enable IPVS

Upload the ipvs.modules file to the /etc/sysconfig/modules directory on the k8s-master machine; the ipvs.modules script is as follows:

#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
    # load the module only if modinfo can locate it in the current kernel
    /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe ${kernel_module}
    fi
done

Load the IPVS modules:

[root@k8s-master ~]# chmod -R 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26583  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 145458  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          139264  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

Copy the ipvs.modules file to k8s-node1 and k8s-node2 and load it there:

#k8s-node1
[root@k8s-master ~]# scp /etc/sysconfig/modules/ipvs.modules k8s-node1:/etc/sysconfig/modules/
[root@k8s-node1 ~]# chmod -R 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26583  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 145458  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          139264  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
#k8s-node2
[root@k8s-master ~]# scp /etc/sysconfig/modules/ipvs.modules k8s-node2:/etc/sysconfig/modules/
[root@k8s-node2 ~]# chmod -R 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26583  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 145458  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          139264  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

1.17 Install Base Packages

[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 \
> wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl \
> curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel \
> python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet
#k8s-node1
[root@k8s-node1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 \
> wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl \
> curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel \
> python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet
#k8s-node2
[root@k8s-node2 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 \
> wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl \
> curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel \
> python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet

1.18 Install docker-ce

[root@k8s-master ~]# yum install -y docker-ce docker-ce-cli containerd.io
#start docker and enable it at boot
[root@k8s-master ~]# systemctl start docker && systemctl enable docker.service
#configure the registry mirror and cgroup driver
[root@k8s-master ~]# mkdir -p /etc/docker
[root@k8s-master ~]# tee /etc/docker/daemon.json <<-'EOF'
> {
> "registry-mirrors": ["https://p1tlsqnt.mirror.aliyuncs.com"],
> "exec-opts":["native.cgroupdriver=systemd"]
> }
> EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker && systemctl status docker
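
Because kubelet and docker must agree on the cgroup driver, it is worth confirming that docker picked up the systemd setting:

[root@k8s-master ~]# docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd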

#k8s-node1
[root@k8s-node1 ~]# yum install -y docker-ce docker-ce-cli containerd.io
#start docker and enable it at boot
[root@k8s-node1 ~]# systemctl start docker && systemctl enable docker.service
[root@k8s-node1 ~]# mkdir -p /etc/docker
[root@k8s-node1 ~]# tee /etc/docker/daemon.json <<-'EOF'
> {
>  "registry-mirrors": ["https://p1tlsqnt.mirror.aliyuncs.com"],
>  "exec-opts":["native.cgroupdriver=systemd"]
> }
> EOF
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker && systemctl status docker

#k8s-node2
[root@k8s-node2 ~]# yum install -y docker-ce docker-ce-cli containerd.io
#start docker and enable it at boot
[root@k8s-node2 ~]# systemctl start docker && systemctl enable docker.service
[root@k8s-node2 ~]# mkdir -p /etc/docker
[root@k8s-node2 ~]# tee /etc/docker/daemon.json <<-'EOF'
> {
>  "registry-mirrors": ["https://p1tlsqnt.mirror.aliyuncs.com"],
>  "exec-opts":["native.cgroupdriver=systemd"]
> }
> EOF
[root@k8s-node2 ~]# systemctl daemon-reload
[root@k8s-node2 ~]# systemctl restart docker && systemctl status docker

1.19 Install the Components Needed to Initialize K8S

[root@k8s-master ~]# yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4
[root@k8s-master ~]# systemctl enable kubelet && systemctl start kubelet

[root@k8s-node1 ~]# yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4
[root@k8s-node1 ~]# systemctl enable kubelet && systemctl start kubelet

[root@k8s-node2 ~]# yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4
[root@k8s-node2 ~]# systemctl enable kubelet && systemctl start kubelet

Note:

kubelet: runs on all nodes in the cluster and is the agent that starts Pods and containers

kubeadm: the command-line tool used to bootstrap and initialize the cluster

kubectl: the command line for communicating with the cluster; with kubectl you can deploy and manage applications, inspect all kinds of resources, and create, delete, and update components
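
A quick check of the installed versions (the expected output, given the 1.20.4 packages above):

[root@k8s-master ~]# kubeadm version -o short
v1.20.4
[root@k8s-master ~]# kubectl version --client --short
Client Version: v1.20.4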

1.20 Load the Offline Images

Import the offline image bundle k8s-images-v1.20.4.tar.gz on each machine:

[root@k8s-master ~]# docker load -i k8s-images-v1.20.4.tar.gz
[root@k8s-node1 ~]# docker load -i k8s-images-v1.20.4.tar.gz
[root@k8s-node2 ~]# docker load -i k8s-images-v1.20.4.tar.gz
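
docker images should now list the imported images; assuming the bundle was built from the registry.aliyuncs.com/google_containers images used in the next step, a quick filter looks like:

[root@k8s-master ~]# docker images | grep registry.aliyuncs.com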

1.21 Initialize the K8S Cluster with kubeadm

[root@k8s-master ~]# kubeadm init --kubernetes-version=1.20.4 \
> --apiserver-advertise-address=192.168.133.10 \
> --image-repository registry.aliyuncs.com/google_containers \
> --pod-network-cidr=10.244.0.0/16

Note: --image-repository registry.aliyuncs.com/google_containers ensures that images are pulled from a domestic mirror rather than from an overseas site; by default kubeadm pulls from k8s.gcr.io.

Set up the kubectl config file, which holds the cluster credentials:

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   4m10s   v1.20.4
#the node is NotReady because no network plugin has been installed yet

Analysis of the kubeadm init workflow

1.22 Join the Nodes to the Cluster

[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.133.10:6443 --token je4fim.jwa1xm2zsgjlnxaw     --discovery-token-ca-cert-hash sha256:f622dc6419e22fbcb390a81c596b92549b8fa6c06e275fb5913e568120ec6593
#k8s-node1 joins the cluster (k8s-node2 joins with the same command)
[root@k8s-node1 ~]# kubeadm join 192.168.133.10:6443 --token je4fim.jwa1xm2zsgjlnxaw     --discovery-token-ca-cert-hash sha256:f622dc6419e22fbcb390a81c596b92549b8fa6c06e275fb5913e568120ec6593

Note: if joining a node fails with a SystemVerification preflight error, re-run the join command with --ignore-preflight-errors=SystemVerification appended:

[root@k8s-node1 ~]# kubeadm join 192.168.133.10:6443 --token je4fim.jwa1xm2zsgjlnxaw     --discovery-token-ca-cert-hash sha256:f622dc6419e22fbcb390a81c596b92549b8fa6c06e275fb5913e568120ec6593 --ignore-preflight-errors=SystemVerification

1.23 Install the Calico Network Plugin

Install the Calico plugin from its YAML manifest (calico.yaml is assumed to already be on k8s-master):

#install
[root@k8s-master ~]# kubectl apply -f calico.yaml
#verify
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   3h31m   v1.20.4
k8s-node1    Ready    <none>                 10m     v1.20.4
k8s-node2    Ready    <none>                 9m25s   v1.20.4
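
The Calico pods themselves can also be checked; all of them should eventually reach the Running state (pod names and counts depend on the manifest version):

[root@k8s-master ~]# kubectl get pods -n kube-system | grep calico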

Main workflow for deploying Calico in K8S

Create the Calico services, mainly calico-node and the Calico policy controller. The resource objects that need to be created are as follows:

  1. Create the ConfigMap calico-config, containing the configuration parameters Calico needs
  2. Create the Secret calico-etcd-secrets, used to connect to etcd over TLS
  3. Run the calico/node container on every Node, deployed as a DaemonSet
  4. Install the Calico CNI binaries and network configuration on every Node (handled by the install-cni container)
  5. Deploy a Deployment named calico/kube-policy-controller to enforce the Network Policies defined for Pods in the K8S cluster