Quickly Building a k8s Cluster on VMware with kubeadm (CentOS-7.9 | Docker-19.03.11 | K8S-1.19.6 | Flannel)
0. Planning
Hosts:
k8s-master | 192.168.239.120 | 2 CPU cores, 2 GB RAM, 10 GB disk | CentOS 7.9 |
k8s-node1  | 192.168.239.121 | 2 CPU cores, 2 GB RAM, 10 GB disk | CentOS 7.9 |
k8s-node2  | 192.168.239.122 | 2 CPU cores, 2 GB RAM, 10 GB disk | CentOS 7.9 |
Component versions:
Docker         | 19.03.11 |
kubeadm        | 1.19.6-0 |
kubelet        | 1.19.6-0 |
kubectl        | 1.19.6-0 |
kube-apiserver | 1.19.6 |
pause          | 3.2 |
etcd           | 3.4.13-0 |
coredns        | 1.7.0 |
flannel        | |
kubeboard      | |
1. Preparation
1.1 CentOS image files: https://mirrors.aliyun.com/centos/
1.2 Set the Linux time zone:
timedatectl set-timezone Asia/Shanghai
1.3 Update the yum sources (Aliyun mirror): https://developer.aliyun.com/article/913404
1.4 Start the time synchronization service
systemctl start chronyd
systemctl enable chronyd
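To confirm the clock is actually being synchronized, checks like the following can be used (the output depends on the NTP servers configured in chrony):
chronyc sources -v
timedatectl status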
1.5 Set fixed hostnames
1. Set the hostname (repeat with k8s-node1 / k8s-node2 on the other machines)
hostnamectl set-hostname k8s-master
2. Add the hosts entries
vim /etc/hosts
192.168.239.120 k8s-master
192.168.239.121 k8s-node1
192.168.239.122 k8s-node2
3. Reload the configuration
nmcli c reload
4. Copy the hosts file to the other nodes
scp /etc/hosts k8s-node1:/etc/
scp /etc/hosts k8s-node2:/etc/
1.6 Disable firewalld, SELinux, and swap
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
setenforce 0
sed -ri '/^SELINUX=/c SELINUX=disabled' /etc/sysconfig/selinux
sed -ri '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config
# Disable swap: comment out the swap entry
vim /etc/fstab
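If you prefer not to edit /etc/fstab by hand, a minimal non-interactive sketch (check the sed pattern against your own fstab before running it):
swapoff -a                              # turn swap off for the running system
sed -ri '/\sswap\s/s/^/#/' /etc/fstab   # comment out the swap line so it stays off after reboot
free -m                                 # verify: the Swap row should show 0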
1.7 Adjust Linux kernel parameters (bridge filtering and IP forwarding)
1. Add the kernel parameters
vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
2. Load the bridge filter module (it must be loaded before the bridge sysctls can apply)
modprobe br_netfilter
3. Load the configuration
sysctl -p /etc/sysctl.d/kubernetes.conf
4. Check that the module is loaded
lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
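The same steps can be scripted without an editor; a sketch (sysctl --system reloads every file under /etc/sysctl.d/):
cat > /etc/sysctl.d/kubernetes.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
modprobe br_netfilter
sysctl --system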
1.8 Configure the IPVS proxy mode
kube-proxy uses iptables mode by default; switching to ipvs gives better performance. IPVS is the technology behind LVS.
1. Install ipset and ipvsadm
yum -y install ipset ipvsadm
2. Create a script that loads the required modules
vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
3. Make the script executable
chmod a+x /etc/sysconfig/modules/ipvs.modules
4. Run the script
sh /etc/sysconfig/modules/ipvs.modules
5. Check that the modules are loaded
lsmod | grep -E 'ip_vs|nf_conntrack'
nf_conntrack_ipv4      15053  0
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
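On a systemd-based system such as CentOS 7, an alternative that reloads these modules automatically at every boot is a modules-load.d drop-in; a sketch:
cat > /etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load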
1.9 Assign a static IP to each host
vi /etc/sysconfig/network-scripts/ifcfg-ens33
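A minimal static configuration for k8s-master might look like the following; IPADDR comes from the planning table, while GATEWAY and DNS1 are assumptions for a typical VMware NAT network and must match your own VMnet settings:
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.239.120
NETMASK=255.255.255.0
GATEWAY=192.168.239.2
DNS1=192.168.239.2
Restart the network service afterwards and repeat with each node's own IP; if the service refuses to start, see the note below.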
If network.service fails to start, stop NetworkManager first:
systemctl stop NetworkManager
then restart network:
systemctl restart network
and finally start NetworkManager again:
systemctl start NetworkManager
1.10 Passwordless SSH between the cluster nodes
1. Generate a key pair (press Enter through every prompt)
ssh-keygen -t rsa
2. Copy the local public key into the authorized_keys file of the remote machine, here k8s-node1
ssh-copy-id root@192.168.239.121
3. Verify
ssh root@192.168.239.121
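Since the key has to reach every node, a small loop (assuming the /etc/hosts entries from 1.5 are already in place) saves a little typing:
for host in k8s-node1 k8s-node2; do
  ssh-copy-id root@$host
done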
2. Install Docker
1. Add the Docker repository
[root@k8s-master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
2. List the available docker versions
[root@k8s-master ~]# yum list docker-ce --showduplicates
3. Install a specific docker-ce version
[root@k8s-master ~]# yum -y install --setopt=obsoletes=0 docker-ce-19.03.11-3.el7
4. Create the docker configuration file
Docker uses cgroupfs as its cgroup driver by default, while k8s recommends systemd instead.
[root@k8s-master ~]# mkdir /etc/docker
[root@k8s-master ~]# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://oemgr772.mirror.aliyuncs.com"]
}
5. Start docker
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker
6. Check the docker version
[root@k8s-master ~]# docker version
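To confirm that daemon.json took effect, docker info can be checked on every node; the driver line should read systemd:
docker info | grep -i 'cgroup driver'
# expected: Cgroup Driver: systemd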
3. Install k8s
3.1 Install kubeadm, kubelet, and kubectl
1. Add the k8s repository
[root@k8s-master ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
gpgcheck=0
enabled=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
2. Copy it to k8s-node1 and k8s-node2
scp /etc/yum.repos.d/kubernetes.repo k8s-node1:/etc/yum.repos.d/
scp /etc/yum.repos.d/kubernetes.repo k8s-node2:/etc/yum.repos.d/
3. List the available kubeadm versions
[root@k8s-master ~]# yum list kubeadm --showduplicates
4. Install kubeadm, kubelet, and kubectl
[root@k8s-master ~]# yum -y install --setopt=obsoletes=0 kubeadm-1.19.6-0 kubelet-1.19.6-0 kubectl-1.19.6-0 --downloaddir=/root/soft/kubernetes
5. Configure the kubelet cgroup driver and ipvs forwarding
[root@k8s-master ~]# vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
6. Enable kubelet at boot
# Do not start kubelet right after installation; starting it now would fail. It starts automatically when the cluster is initialized.
[root@k8s-master ~]# systemctl enable kubelet
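A quick sanity check that the expected 1.19.6 binaries landed on the node (the --short flags are valid for this kubectl release):
kubeadm version -o short
kubelet --version
kubectl version --client --short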
3.2 Install kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, plus pause, etcd, and coredns
1. List the required images
[root@k8s-master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.19.8
k8s.gcr.io/kube-controller-manager:v1.19.8
k8s.gcr.io/kube-scheduler:v1.19.8
k8s.gcr.io/kube-proxy:v1.19.8
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
The list may show a newer patch release (v1.19.8 here); the images pulled below are pinned to v1.19.6 to match the installed kubeadm.
2. Pull the images from the Aliyun mirror, then re-tag them as k8s.gcr.io
[root@k8s-master ~]# images=(
    kube-apiserver:v1.19.6
    kube-controller-manager:v1.19.6
    kube-scheduler:v1.19.6
    kube-proxy:v1.19.6
    pause:3.2
    etcd:3.4.13-0
    coredns:1.7.0
)
[root@k8s-master ~]# for imageName in ${images[@]}
do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
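Once the loop finishes, the local image list should contain all seven k8s.gcr.io images with the 1.19.6 tags:
docker images | grep k8s.gcr.io
If you want to double-check the exact tags for a given release, kubeadm config images list --kubernetes-version v1.19.6 prints them.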
3.3 Initialize the k8s-master node
1. Initialize the cluster
[root@k8s-master ~]# kubeadm init \
    --kubernetes-version=v1.19.6 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/12 \
    --apiserver-advertise-address=192.168.239.120
# --kubernetes-version            k8s version
# --pod-network-cidr              pod network address range
# --service-cidr                  service network address range
# --apiserver-advertise-address   apiserver address
Note: there must be nothing, not even a space, after each trailing \
2. Create the files kubectl needs
kubectl looks for this config file, so it must exist before kubectl can be used.
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
3. Check that the master has joined the cluster (NotReady is expected until the flannel network plugin is installed in section 4)
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   12m   v1.19.6
4. kubelet is started automatically after a successful init
[root@k8s-master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2022-12-20 10:14:21 CST; 17min ago
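If the init fails partway (wrong cgroup driver, unreachable images, and so on), the node can be wiped and the command re-run; a sketch:
kubeadm reset -f          # undo everything kubeadm init created on this node
rm -rf $HOME/.kube        # remove the stale kubectl config, if any
# fix the underlying problem, then run kubeadm init again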
3.4 Join k8s-node1 and k8s-node2 to the cluster (run on each of the two nodes)
kubeadm join 192.168.239.120:6443 --token 5rde6y.rgkd9gc6qtigu85z \
    --discovery-token-ca-cert-hash sha256:129a0d48af737cef7ee22986ad28f8a33fd7f921fb66a68c2355b110242eb8be
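The bootstrap token printed by kubeadm init expires after 24 hours by default; if it is no longer valid, a fresh join command can be generated on the master:
kubeadm token create --print-join-command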
4. Install the flannel network plugin
4.1 Download the flannel package
Link: https://pan.baidu.com/s/1_vlzm3YMxOewIx2HaDOKJg?pwd=sdaa
Extraction code: sdaa
4.2 Run on the k8s-master node
kubectl apply -f kube-flannel.yml
chmod a+x flannel
cp flannel /opt/cni/bin/
scp flannel k8s-node1:/opt/cni/bin/
scp flannel k8s-node2:/opt/cni/bin/
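To check that flannel came up, look for its pods (the namespace depends on the manifest version, so search all namespaces) and confirm the nodes turn Ready:
kubectl get pods -A | grep flannel
kubectl get nodes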
4.3 Restart the kubelet service on all three nodes
systemctl restart kubelet
4.4 Fix minor cluster issues
vi /etc/kubernetes/manifests/kube-controller-manager.yaml    (delete the line "- --port=0")
vi /etc/kubernetes/manifests/kube-scheduler.yaml             (delete the line "- --port=0")
Removing --port=0 re-enables the insecure health ports, so kubectl get cs stops reporting controller-manager and scheduler as unhealthy.
Restart the kubelet service:
systemctl restart kubelet
4.5 Useful commands
kubectl get ns
kubectl get pods -A
kubectl get cs
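After the fix in 4.4, kubectl get cs should report every component as healthy; typical output looks roughly like this:
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}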
5. Graphical interface
Geekbang: https://time.geekbang.org/column/article/42819
Blog: https://jiangxl.blog.csdn.net/category_11015859_2.html