Kubernetes (k8s): installing a keepalived high-availability cluster with kubeadm
Contents
kubeadm-keepalived
I. Environment preparation (run on all nodes)
1. Server requirements
1) Every node needs at least 2 CPU cores and 2 GB of RAM, otherwise k8s will not start; if a node falls short, add the parameter --ignore-preflight-errors=NumCPU when initializing the cluster.
2) DNS: point at a DNS server reachable from the local network, otherwise some images cannot be downloaded.
3) Linux kernel: must be a 4.x kernel, ideally 4.4 or later, so the stock CentOS kernel has to be upgraded.
4) Prepare three virtual machines (or three cloud servers).
2. Host layout
Hostname | IP |
---|---|
m01 | 192.168.15.66 |
m02 | 192.168.15.67 |
node01 | 192.168.15.68 |
vip | 192.168.15.69 |
3. Hostname resolution
#Set the hostnames
[root@m01 ~]# hostnamectl set-hostname m01
[root@m02 ~]# hostnamectl set-hostname m02
[root@node01 ~]# hostnamectl set-hostname node01
#Add hosts entries (run on all three nodes)
[root@m01 ~]# cat >> /etc/hosts << EOF
192.168.15.66 m01
192.168.15.67 m02
192.168.15.68 node01
EOF
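The heredoc above appends unconditionally, so running the setup twice duplicates the entries. A small sketch of an idempotent variant (the helper name and the temp-file demo are illustrative, not from the original):

```shell
# Append each "IP NAME" entry only if the hostname is not already present,
# so re-running the setup script is safe. The file is a parameter here;
# on a real node it would be /etc/hosts.
add_hosts_entries() {
    local hosts_file="$1"; shift
    local entry name
    for entry in "$@"; do              # entries look like "IP NAME"
        name="${entry##* }"            # hostname = last field
        grep -qw "$name" "$hosts_file" || echo "$entry" >> "$hosts_file"
    done
}

entries_file=$(mktemp)                 # stand-in for /etc/hosts in this demo
add_hosts_entries "$entries_file" \
    "192.168.15.66 m01" "192.168.15.67 m02" "192.168.15.68 node01"
add_hosts_entries "$entries_file" "192.168.15.66 m01"   # re-run: no duplicate
cat "$entries_file"                    # each entry appears exactly once
```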
4. Disable firewalld and SELinux
[root@m01 ~]# sudo systemctl stop firewalld
[root@m01 ~]# sudo systemctl disable firewalld
[root@m01 ~]# setenforce 0
[root@m01 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
5. Disable swap
[root@m01 ~]# swapoff -a
[root@m01 ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
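What that sed does is easiest to see on a sample fstab (the file below is a made-up example; `&` in the replacement re-inserts the whole matched line):

```shell
# Demo of the swap-commenting sed on a sample fstab (illustrative paths).
fstab=$(mktemp)
cat > "$fstab" << 'FSTAB'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
FSTAB
sed -i 's/.*swap.*/#&/' "$fstab"    # '&' = the whole matched line
grep '^#' "$fstab"    # -> #/dev/mapper/centos-swap swap swap defaults 0 0
```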
6. Configure the yum mirror
#Install the Aliyun mirror repo
[root@m01 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2523  100  2523    0     0  60651      0 --:--:-- --:--:-- --:--:-- 61536
7. Install common tools
[root@m01 ~]# yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables conntrack sysstat libseccomp -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirrors.tuna.tsinghua.edu.cn
* epel: hk.mirrors.thegigabit.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
base | 3.6 kB 00:00
extras | 2.9 kB 00:00
updates | 2.9 kB 00:00
......
....
8. Time synchronization
#Install chrony:
[root@m01 ~]# yum install -y chrony
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
······
....
#Back up the config file, just in case (optional)
[root@m01 ~]# cp /etc/chrony.conf{,.bak}
#Comment out the default ntp servers
[root@m01 ~]# sed -i 's/^server/#&/' /etc/chrony.conf
#Point at upstream public ntp servers
[root@m01 ~]# cat >> /etc/chrony.conf << EOF
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst
EOF
#Sync the system time to the hardware clock
[root@m01 ~]# hwclock --systohc
#Set the timezone
[root@m01 ~]# timedatectl set-timezone Asia/Shanghai
#Enable chronyd at boot and restart it:
[root@m01 ~]# systemctl enable chronyd && systemctl restart chronyd
#Verify: check the current time and look for a source line marked with *
[root@m01 ~]# timedatectl && chronyc sources
Local time: Sun 2021-08-08 22:05:12 CST
Universal time: Sun 2021-08-08 14:05:12 UTC
RTC time: Sun 2021-08-08 14:05:12
Time zone: Asia/Shanghai (CST, +0800)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^- ott137.hkcable.com.hk 2 10 317 22m -14ms[ -14ms] +/- 258ms
^* ott129.hkcable.com.hk 3 10 237 1128 -16ms[ -16ms] +/- 85ms
^+ send.mx.cdnetworks.com 2 10 377 603 +39ms[ +39ms] +/- 126ms
^- time4.isu.net.sa 2 10 357 603 -18ms[ -18ms] +/- 303ms
9. Kernel upgrade (4.4+ recommended)
#List installable kernel versions
[root@m01 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* elrepo-kernel: mirror-hk.koddos.net
Available Packages
kernel-lt-devel.x86_64 5.4.138-1.el7.elrepo elrepo-kernel
kernel-lt-doc.noarch 5.4.138-1.el7.elrepo elrepo-kernel
kernel-lt-headers.x86_64 5.4.138-1.el7.elrepo elrepo-kernel
kernel-lt-tools.x86_64 5.4.138-1.el7.elrepo elrepo-kernel
kernel-lt-tools-libs.x86_64 5.4.138-1.el7.elrepo elrepo-kernel
kernel-lt-tools-libs-devel.x86_64
5.4.138-1.el7.elrepo elrepo-kernel
kernel-ml.x86_64 5.13.8-1.el7.elrepo elrepo-kernel
kernel-ml-devel.x86_64 5.13.8-1.el7.elrepo elrepo-kernel
kernel-ml-doc.noarch 5.13.8-1.el7.elrepo elrepo-kernel
kernel-ml-headers.x86_64 5.13.8-1.el7.elrepo elrepo-kernel
kernel-ml-tools.x86_64 5.13.8-1.el7.elrepo elrepo-kernel
kernel-ml-tools-libs.x86_64 5.13.8-1.el7.elrepo elrepo-kernel
kernel-ml-tools-libs-devel.x86_64
5.13.8-1.el7.elrepo elrepo-kernel
perf.x86_64 5.13.8-1.el7.elrepo elrepo-kernel
python-perf.x86_64 5.13.8-1.el7.elrepo elrepo-kernel
#Install the ELRepo repository
[root@m01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Preparing...                          ################################# [100%]
package elrepo-release-7.0-5.el7.elrepo.noarch (which is newer than elrepo-release-7.0-3.el7.elrepo.noarch) is already installed
#Install the long-term-support kernel
[root@m01 ~]# yum --enablerepo=elrepo-kernel install -y kernel-lt
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirrors.tuna.tsinghua.edu.cn
* elrepo-kernel: mirrors.tuna.tsinghua.edu.cn
* epel: ftp.iij.ad.jp
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
.......
....
#List the available boot entries
[root@m01 ~]# cat /boot/grub2/grub.cfg |grep menuentry
if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
menuentry_id_option=""
export menuentry_id_option
menuentry 'CentOS Linux (5.4.138-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-1160.36.2.el7.x86_64-advanced-507fc260-78cc-4ce0-8310-af00334de578' {
menuentry 'CentOS Linux (3.10.0-1160.36.2.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-1160.36.2.el7.x86_64-advanced-507fc260-78cc-4ce0-8310-af00334de578' {
menuentry 'CentOS Linux (3.10.0-693.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-693.el7.x86_64-advanced-507fc260-78cc-4ce0-8310-af00334de578' {
menuentry 'CentOS Linux (0-rescue-b9c18819be20424b8f84a2cad6ddf12e) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-0-rescue-b9c18819be20424b8f84a2cad6ddf12e-advanced-507fc260-78cc-4ce0-8310-af00334de578' {
#Set the new kernel as the default boot entry
[root@m01 ~]# grub2-set-default "CentOS Linux (5.4.138-1.el7.elrepo.x86_64) 7 (Core)"
#Verify that the default boot entry changed
[root@m01 ~]# grub2-editenv list
saved_entry=CentOS Linux (5.4.138-1.el7.elrepo.x86_64) 7 (Core)
#Reboot for the change to take effect
[root@m01 ~]# reboot
#Check after the reboot
[root@m01 ~]# uname -r
5.4.138-1.el7.elrepo.x86_64
10. Install command completion
#Install the package
[root@m01 ~]# yum install -y bash-completion
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirrors.tuna.tsinghua.edu.cn
* elrepo-kernel: mirrors.tuna.tsinghua.edu.cn
* epel: ftp.iij.ad.jp
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
.......
....
[root@m01 ~]# source /usr/share/bash-completion/bash_completion
[root@m01 ~]# source <(kubectl completion bash)
[root@m01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
11. Install ipvs
#Install the packages
[root@m01 ~]# yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp
#Configure the ipvs kernel modules
[root@m01 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
/sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe \${kernel_module}
fi
done
EOF
#Make the module script executable and run it
[root@m01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
#Check that the modules loaded
[root@m01 ~]# lsmod | grep ip_vs
#Tune kernel parameters
[root@m01 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
#Apply the new parameters immediately
[root@m01 ~]# sysctl --system
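To spot-check that a given parameter actually took effect, note that each sysctl key maps to a file under /proc/sys with dots turned into slashes. A small sketch (the helper name is made up):

```shell
# Map a sysctl key to its /proc/sys path (dots -> slashes). Keys whose
# components themselves contain dots (e.g. some interface names) would
# need extra handling; this is fine for the keys set above.
param_path() { echo "/proc/sys/$(echo "$1" | tr . /)"; }

param_path net.ipv4.ip_forward    # -> /proc/sys/net/ipv4/ip_forward
# On the node, read the live value back:
#   cat "$(param_path net.ipv4.ip_forward)"   # expect 1 after sysctl --system
```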
II. Install Docker
#Remove old versions
[root@m01 ~]# sudo yum remove docker docker-common docker-selinux docker-engine
#Install prerequisite packages
[root@m01 ~]# sudo yum install -y yum-utils device-mapper-persistent-data lvm2
#Add a Docker repo (pick either one)
[root@m01 ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
--2021-08-08 22:19:24-- https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
Resolving repo.huaweicloud.com (repo.huaweicloud.com)... 58.215.92.70, 180.97.163.21, 117.91.188.35, ...
Connecting to repo.huaweicloud.com (repo.huaweicloud.com)|58.215.92.70|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1919 (1.9K) [application/octet-stream]
Saving to: '/etc/yum.repos.d/docker-ce.repo'
100%[=======================>] 1,919       --.-K/s   in 0s
2021-08-08 22:19:24 (318 MB/s) - '/etc/yum.repos.d/docker-ce.repo' saved [1919/1919]
# Add the Docker repository from the Aliyun yum mirror (optional, recommended)
[root@m01 ~]# yum-config-manager \
> --add-repo \
> http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
#Update the system and install docker
[root@m01 ~]# yum update -y && yum install -y docker-ce
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirror-hk.koddos.net
* epel: ftp.iij.ad.jp
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
.....
....
#Configure a registry mirror (pick either config; the second one is recommended)
[root@m01 ~]# cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://hahexyip.mirror.aliyuncs.com"]
}
EOF
#Registry mirror plus recommended daemon settings (systemd cgroup driver, log rotation, overlay2)
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
}
EOF
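A stray comma or missing quote in the heredoc above leaves dockerd unable to start, so it is worth validating daemon.json before restarting. A sketch (the helper name is made up; python3 is assumed to be available, and `jq . <file>`, with jq installed earlier, works just as well):

```shell
# Validate a JSON file before handing it to dockerd.
validate_json() {
    python3 -m json.tool "$1" > /dev/null 2>&1 && echo OK || echo INVALID
}

cfg=$(mktemp)
printf '{"registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]}' > "$cfg"
validate_json "$cfg"    # -> OK
# On the node: validate_json /etc/docker/daemon.json
```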
PS: image pulls are slow from inside China, so the Aliyun registry-mirror entry is appended at the end of the config
#Start docker and enable it at boot
[root@m01 ~]# systemctl enable docker && systemctl start docker
#Check the docker version
[root@m01 ~]# docker version
Client: Docker Engine - Community
Version: 20.10.8
API version: 1.41
Go version: go1.16.6
Git commit: 3967b7d
Built: Fri Jul 30 19:55:49 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.8
API version: 1.41 (minimum version 1.12)
Go version: go1.16.6
Git commit: 75249d8
Built: Fri Jul 30 19:54:13 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.9
GitCommit: e25210fe30a0a703442421b0f60afac609f950a3
runc:
Version: 1.0.1
GitCommit: v1.0.1-0-g4144b63
docker-init:
Version: 0.19.0
GitCommit: de40ad0
III. Install kubelet, kubeadm and kubectl
#Configure the Kubernetes yum repo
[root@m01 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
#Install the packages (version 1.21.3 recommended; without a pin this installs the latest, 1.22.0)
[root@m01 ~]# sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
#This is a local test VM environment, so the latest version was installed on purpose to hit the pitfalls; to install a specific version, append it like this:
sudo yum install -y kubelet-${version} kubeadm-${version} kubectl-${version} --disableexcludes=kubernetes
#Install the pinned version
[root@m01 ~]# yum install -y kubelet-1.21.3 kubeadm-1.21.3 kubectl-1.21.3
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirror-hk.koddos.net
* epel: ftp.iij.ad.jp
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
.....
....
#Enable kubelet and start it at boot
[root@m01 ~]# sudo systemctl enable --now kubelet
IV. Install haproxy (on all nodes)
#HA needs a load balancer; here a plain haproxy stands in on each node, while in a real production environment keepalived keeps haproxy itself highly available
[root@m01 ~]# yum install -y haproxy
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirrors.tuna.tsinghua.edu.cn
* epel: my.mirrors.thegigabit.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
docker-ce-stable
·······
·····
#Write the configuration
[root@m01 ~]# cat > /etc/haproxy/haproxy.cfg <<EOF
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
listen stats
bind *:8006
mode http
stats enable
stats hide-version
stats uri /stats
stats refresh 30s
stats realm Haproxy\ Statistics
stats auth admin:admin
frontend k8s-master
bind 0.0.0.0:8443
bind 127.0.0.1:8443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
#Backend hostnames and IP addresses (only the masters run an apiserver; the node01 entry will simply stay down)
server m01 192.168.15.66:6443 check inter 2000 fall 2 rise 2 weight 100
server m02 192.168.15.67:6443 check inter 2000 fall 2 rise 2 weight 100
server node01 192.168.15.68:6443 check inter 2000 fall 2 rise 2 weight 100
EOF
V. Install keepalived (run on all master nodes)
1. Install the keepalived package
[root@m01 ~]# sudo yum install keepalived -y
[root@m02 ~]# sudo yum install keepalived -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 6.5 kB 00:00
* base: mirrors.aliyun.com
* elrepo: mirrors.tuna.tsinghua.edu.cn
* epel: mirror.sjtu.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
......
....
#Back up the stock config file before writing a new one
[root@m01 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
2. Edit the configuration (m01 and m02)
[MASTER01]
#Write the config
[root@m01 ~]# cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_kubernetes {
script "/etc/keepalived/check_kubernetes.sh"
interval 2
weight -5
fall 3
rise 2
}
vrrp_instance VI_1 {
state MASTER          #MASTER here; the standby uses BACKUP
interface eth0        #NIC in use; adjust to your host
mcast_src_ip 192.168.15.66    #this node's own IP
virtual_router_id 51
priority 100          #election weight
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.15.69         #the floating VIP
}
# track_script {
# chk_kubernetes
# }
}
EOF
#Start keepalived (plus haproxy and kubelet) and enable them at boot
[root@m01 ~]# sudo systemctl enable --now keepalived.service haproxy.service kubelet.service
[MASTER02]
[root@m02 ~]# cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_kubernetes {
script "/etc/keepalived/check_kubernetes.sh"
interval 2
weight -5
fall 3
rise 2
}
vrrp_instance VI_1 {
state BACKUP          #standby role
interface eth0        #NIC in use; adjust to your host
mcast_src_ip 192.168.15.67    #this node's own IP
virtual_router_id 51
priority 90           #lower weight than the MASTER
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.15.69         #the floating VIP
}
# track_script {
# chk_kubernetes
# }
}
EOF
#Start keepalived (plus haproxy and kubelet) and enable them at boot
[root@m02 ~]# sudo systemctl enable --now keepalived.service haproxy.service kubelet.service
VI. Cluster initialization (m01)
1. Generate the default init config
[root@m01 ~]# kubeadm config print init-defaults >init-config.yaml
[root@m01 ~]# ll |grep init
-rw-r--r-- 1 root root 976 Aug  8 21:13 init-config.yaml
2. Edit the init config (full file attached below)
[Attachment]
[root@m01 ~]# cat init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.15.66
bindPort: 6443
nodeRegistration:
criSocket: /var/run/dockershim.sock
imagePullPolicy: IfNotPresent
name: m01
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
---
apiServer:
certSANs:
- 192.168.15.69
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.15.69:8443
controllerManager: {}
dns: {}
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/k8sos
kind: ClusterConfiguration
kubernetesVersion: 1.21.3
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/12
scheduler: {}
3. Run the init
#Initialize the cluster
[root@m01 ~]# kubeadm init --config init-config.yaml --upload-certs
W0808 21:13:32.175756 92363 strict.go:54] configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "imagePullPolicy"
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
········
····
#Watch the image pulls in real time
[root@m01 ~]# while true; do docker images; echo ; sleep 3; clear; done
REPOSITORY TAG IMAGE ID CREATED SIZE
calico/node v3.20.0 5ef66b403f4f 8 days ago 170MB
calico/pod2daemon-flexvol v3.20.0 5991877ebc11 8 days ago 21.7MB
calico/cni v3.20.0 4945b742b8e6 8 days ago 146MB
calico/kube-controllers v3.20.0 76ba70f4748f 8 days ago 63.2MB
registry.cn-hangzhou.aliyuncs.com/k8sos/kube-apiserver v1.21.3 3d174f00aa39 3 weeks ago 126MB
registry.cn-hangzhou.aliyuncs.com/k8sos/kube-scheduler v1.21.3 6be0dc1302e3 3 weeks ago 50.6MB
registry.cn-hangzhou.aliyuncs.com/k8sos/kube-proxy v1.21.3 adb2816ea823 3 weeks ago 103MB
registry.cn-shanghai.aliyuncs.com/hzl-images/kube-proxy v1.21.2 adb2816ea823 3 weeks ago 103MB
registry.cn-hangzhou.aliyuncs.com/k8sos/kube-controller-manager v1.21.3 bc2bb319a703 3 weeks ago 120MB
registry.cn-shanghai.aliyuncs.com/hzl-images/kube-apiserver v1.21.2 106ff58d4308 7 weeks ago 126MB
registry.cn-shanghai.aliyuncs.com/hzl-images/kube-controller-manager v1.21.2 ae24db9aa2cc 7 weeks ago 120MB
registry.cn-shanghai.aliyuncs.com/hzl-images/kube-scheduler v1.21.2 f917b8c8f55b 7 weeks ago 50.6MB
registry.cn-hangzhou.aliyuncs.com/k8sos/pause 3.4.1 0f8457a4c2ec 6 months ago 683kB
registry.cn-shanghai.aliyuncs.com/hzl-images/pause 3.4.1 0f8457a4c2ec 6 months ago 683kB
registry.cn-hangzhou.aliyuncs.com/k8sos/coredns v1.8.0 7916bcd0fd70 9 months ago 42.5MB
registry.cn-shanghai.aliyuncs.com/hzl-images/coredns v1.8.0 7916bcd0fd70 9 months ago 42.5MB
registry.cn-hangzhou.aliyuncs.com/k8sos/etcd 3.4.13-0 8855aefc3b26 11 months ago 253MB
registry.cn-shanghai.aliyuncs.com/hzl-images/etcd 3.4.13-0 8855aefc3b26 11 months ago 253MB
#Init finished; save the generated join commands
-----------------------------------------------------------------------
(run on every additional master)
kubeadm join 192.168.15.69:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:552a8af303d27bdcf20700ef6a318e002dca6fad97abb24be68e71d587f4c6c3 \
--control-plane --certificate-key 623076797c43edd46533603426006d0e8c4c4bad5ec91b72a807bf82529270fd
(run on every worker node)
kubeadm join 192.168.15.69:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:552a8af303d27bdcf20700ef6a318e002dca6fad97abb24be68e71d587f4c6c3
------------------------------------------------------------------------
# A side note #
############################### Error ##################################################
#The following error came up during setup:
[root@m01 ~]# kubeadm init --config init-config.yaml --upload-certs
[init] Using Kubernetes version: v1.22.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
........
....
################################# Fix #######################################
#Solution:
[root@m01 ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
4. Configure the kubectl user credentials
[root@m01 ~]# mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
PS: as root you can instead run export KUBECONFIG=/etc/kubernetes/admin.conf (temporary only; not recommended)
5. Install the network plugin (calico)
#Download the calico manifest
[root@m01 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 197k 100 197k 0 0 83317 0 0:00:02 0:00:02 --:--:-- 83353
#Add the environment variable
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
#Deploy the calico network plugin
[root@m01 ~]# kubectl create -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico
·······
····
#Check pod creation right after deploying
[root@m01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-58497c65d5-kchf8 1/1 Running 0 43s
calico-node-cv7wm 1/1 Running 0 43s
coredns-978bbc4b6-bc6pp 1/1 Running 0 7m1s
coredns-978bbc4b6-hn7hw 1/1 Running 0 7m1s
etcd-m01 1/1 Running 0 7m6s
kube-apiserver-m01 1/1 Running 0 7m6s
kube-controller-manager-m01 1/1 Running 0 7m6s
kube-proxy-n427d 1/1 Running 0 7m1s
kube-scheduler-m01 1/1 Running 0 7m7s
VII. Join the cluster (m02)
1. Join as a control-plane node
[root@m02 ~]# kubeadm join 192.168.15.69:8443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:552a8af303d27bdcf20700ef6a318e002dca6fad97abb24be68e71d587f4c6c3 \
> --control-plane --certificate-key 623076797c43edd46533603426006d0e8c4c4bad5ec91b72a807bf82529270fd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
········
·····
2. Configure user credentials
#Add the user credentials
[root@m02 ~]# mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Check node status
[root@m02 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m01 Ready control-plane,master 119m v1.21.3
m02 Ready control-plane,master 98m v1.21.3
node01 Ready <none> 91m v1.21.3
VIII. Join the cluster (node01)
[root@node01 ~]# kubeadm join 192.168.15.69:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:552a8af303d27bdcf20700ef6a318e002dca6fad97abb24be68e71d587f4c6c3
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
········
····
#Check the pulled images (after the join succeeded)
[root@node01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
calico/pod2daemon-flexvol v3.20.0 5991877ebc11 8 days ago 21.7MB
calico/cni v3.20.0 4945b742b8e6 8 days ago 146MB
registry.cn-hangzhou.aliyuncs.com/k8sos/kube-proxy v1.21.3 adb2816ea823 3 weeks ago 103MB
registry.cn-hangzhou.aliyuncs.com/k8sos/pause 3.4.1 0f8457a4c2ec 6 months ago 683kB
IX. Check cluster status
################################## Check cluster status ####################################
##Check the system pods (m01 and m02)
[root@m01 /]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-58497c65d5-kchf8 1/1 Running 0 118m
calico-node-cv7wm 1/1 Running 0 118m
calico-node-wpgm2 1/1 Running 0 103m
calico-node-zmkl5 1/1 Running 0 96m
coredns-978bbc4b6-bc6pp 1/1 Running 0 124m
coredns-978bbc4b6-hn7hw 1/1 Running 0 124m
etcd-m01 1/1 Running 0 124m
etcd-m02 1/1 Running 0 103m
kube-apiserver-m01 1/1 Running 0 124m
kube-apiserver-m02 1/1 Running 0 103m
kube-controller-manager-m01 1/1 Running 1 124m
kube-controller-manager-m02 1/1 Running 0 103m
kube-proxy-52r2q 1/1 Running 0 96m
kube-proxy-hwdxr 1/1 Running 0 103m
kube-proxy-n427d 1/1 Running 0 124m
kube-scheduler-m01 1/1 Running 1 124m
kube-scheduler-m02 1/1 Running 0 103m
#Check node status (all Ready)
[root@m01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m01 Ready control-plane,master 32m v1.21.3
m02 Ready control-plane,master 10m v1.21.3
node01 Ready <none> 4m13s v1.21.3
#Check the VIP
[root@m01 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:a2:de:c6 brd ff:ff:ff:ff:ff:ff
inet 192.168.15.66/24 brd 192.168.15.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet 192.168.15.69/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::57eb:becd:18b6:b774/64 scope link noprefixroute
valid_lft forever preferred_lft forever
X. Cluster tests
1. keepalived failover test
#Stop the keepalived service on m01
[root@m01 /]# systemctl stop keepalived.service
#The VIP is gone from m01
[root@m01 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:a2:de:c6 brd ff:ff:ff:ff:ff:ff
inet 192.168.15.66/24 brd 192.168.15.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::57eb:becd:18b6:b774/64 scope link noprefixroute
valid_lft forever preferred_lft forever
#Check m02
[root@m02 ~]# ip a    #the VIP has floated to m02
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:d8:98:7b brd ff:ff:ff:ff:ff:ff
inet 192.168.15.67/24 brd 192.168.15.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet 192.168.15.69/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::abd2:cfda:7db:56ff/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::57eb:becd:18b6:b774/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::a95:bcab:b48f:c797/64 scope link noprefixroute
valid_lft forever preferred_lft forever
#Restart keepalived (with the configured priorities, the VIP can float back and forth)
[root@m01 /]# systemctl restart keepalived.service
[root@m01 /]# ip a    #the VIP has floated back to m01
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:a2:de:c6 brd ff:ff:ff:ff:ff:ff
inet 192.168.15.66/24 brd 192.168.15.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet 192.168.15.69/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::57eb:becd:18b6:b774/64 scope link noprefixroute
valid_lft forever preferred_lft forever
#This only tests keepalived briefly; to guard against future split-brain, a health-check script can be added (omitted here)
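The commented-out `track_script` block in the keepalived configs references `/etc/keepalived/check_kubernetes.sh`, which the article never shows. A minimal sketch of such a script (the service name and helper are assumptions; keepalived's own `fall 3` / `weight -5` settings in `vrrp_script` do the failure counting):

```shell
#!/bin/bash
# Sketch of /etc/keepalived/check_kubernetes.sh. keepalived runs it every
# `interval` seconds; a non-zero exit counts as one failed check, and the
# configured fall/weight settings demote this node once enough checks
# fail, letting the VIP float to the other master.
check_service() {    # pure helper: "active" -> 0 (healthy), else 1
    [ "$1" = "active" ]
}

state=$(systemctl is-active haproxy 2>/dev/null)
if check_service "$state"; then
    exit_code=0    # haproxy healthy
else
    exit_code=1    # haproxy down: report a failure to keepalived
fi
echo "haproxy state: ${state:-unknown} (exit $exit_code)"
# In the real script, end with: exit $exit_code
```

With this in place, the `track_script { chk_kubernetes }` lines can be uncommented on both masters.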
2. Cluster service test
#Create an nginx deployment for testing
[root@m01 /]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
#Expose the deployment on a port (NodePort)
[root@m01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
#Check pod status
[root@m01 /]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-6799fc88d8-hvcnt 1/1 Running 0 75s
#Check the service
[root@m01 /]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 142m
nginx NodePort 10.106.245.215 <none> 80:30785/TCP 6s
#Test access
[root@m01 /]# curl 10.106.245.215
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
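Besides the ClusterIP, the NodePort (30785 in the `kubectl get svc` output above) exposes nginx on every node's own address, e.g. `curl http://192.168.15.66:30785`. A small sketch extracting that port from the kubectl output (the awk helper is illustrative):

```shell
# Pull the NodePort out of a `kubectl get svc` line: field 5 looks like
# 80:30785/TCP, so split it on ':' and '/'.
nodeport() { awk '{ split($5, p, /[:\/]/); print p[2] }'; }

line='nginx   NodePort   10.106.245.215   <none>   80:30785/TCP   6s'
echo "$line" | nodeport    # -> 30785
# On the node: kubectl get svc nginx --no-headers | nodeport
```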
#The page can also be opened in a browser; test complete
This article is from cnblogs, author: ଲ一笑奈&何. Please credit the original link when reposting: https://www.cnblogs.com/zeny/p/15121463.html