
K8S: Building a Highly Available Cluster with Ansible (kubernetes_v1.22.2)



1. Prepare the Environment

Due to limited resources, only three virtual machines were brought up; for the preliminary preparation, please refer to 闫世成's blog or my other post.


No.  Role                    IP               Notes
1    keepalived+haproxy+vip  192.168.117.132
2    master-etcd-node-2      192.168.117.131
3    master-etcd-node-1      192.168.117.130
  • Three VMs: 130 serves as master, etcd, and node; 131 does the same as 130; 132 runs keepalived+haproxy+vip.

2. Deploy Keepalived + HAProxy + VIP on 192.168.117.132

  • 2-1. Install keepalived and haproxy with apt install
root@superops:~# apt install keepalived haproxy   # install the packages
root@superops:~# find / -name "keepalived*"       # locate the sample config (quoted so the shell does not expand the glob)
Copy the file found at /usr/share/doc/keepalived/samples/keepalived.conf.vrrp to /etc/keepalived/ as keepalived.conf:
root@superops:~# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf 
  • 2-2. Edit the configuration file /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32                         # change to your NIC name
    garp_master_delay 10
    smtp_alert
    virtual_router_id 60
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.117.188 dev ens32 label ens32:0    # create the VIP
    }
}
  • 2-3. Restart the service, check its status, and enable it at boot
root@superops:~# systemctl restart keepalived.service 
root@superops:~# systemctl status keepalived.service 
root@superops:~# systemctl enable keepalived.service
Synchronizing state of keepalived.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable keepalived
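Once keepalived is running, a quick check confirms the VIP is actually bound (a sketch, assuming the ens32 interface used in the config above):
root@superops:~# ip addr show ens32 | grep 192.168.117.188   # the VIP should appear under the ens32:0 label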
  • 2-4. Edit the haproxy configuration file /etc/haproxy/haproxy.cfg
check: enable health checks
inter: check interval
fall: failures before a server is marked DOWN
rise: successes before a server is marked UP again
# append the following at the very bottom
listen k8s-6443
  bind 192.168.117.188:6443   # listen on port 6443 of the VIP
  mode tcp                    # tcp mode
  server 192.168.117.130 192.168.117.130:6443 check inter 2s fall 3 rise 3   # add both master IPs with health checks: interval 2s, 3 failures to go DOWN, 3 successes to recover
  server 192.168.117.131 192.168.117.131:6443 check inter 2s fall 3 rise 3
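Before restarting haproxy, the edited file can be validated with haproxy's built-in check mode (a sketch; -c parses the config and exits without starting anything):
root@superops:~# haproxy -c -f /etc/haproxy/haproxy.cfg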
  • 2-5. Start the service and check the service and port status
root@superops:~# systemctl restart haproxy.service 
root@superops:~# systemctl status haproxy.service 
● haproxy.service - HAProxy Load Balancer
     Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-01-10 23:29:11 CST; 5s ago
       Docs: man:haproxy(1)
             file:/usr/share/doc/haproxy/configuration.txt.gz
    Process: 23830 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS)
   Main PID: 23843 (haproxy)
      Tasks: 3 (limit: 2240)
     Memory: 2.7M
     CGroup: /system.slice/haproxy.service
             ├─23843 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
             └─23844 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock

Jan 10 23:29:11 superops systemd[1]: Starting HAProxy Load Balancer...
Jan 10 23:29:11 superops haproxy[23843]: [WARNING] 009/232911 (23843) : parsing [/etc/haproxy/haproxy.cfg:23] : 'option httplog' not usable with proxy 'k8s-6443' (needs 'mode http'). Fall>
Jan 10 23:29:11 superops haproxy[23843]: Proxy k8s-6443 started.
Jan 10 23:29:11 superops haproxy[23843]: Proxy k8s-6443 started.
Jan 10 23:29:11 superops haproxy[23843]: [NOTICE] 009/232911 (23843) : New worker #1 (23844) forked
Jan 10 23:29:11 superops systemd[1]: Started HAProxy Load Balancer.
Jan 10 23:29:11 superops haproxy[23844]: [WARNING] 009/232911 (23844) : Server k8s-6443/192.168.117.130 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check durat>
Jan 10 23:29:12 superops haproxy[23844]: [WARNING] 009/232912 (23844) : Server k8s-6443/192.168.117.131 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check durat>
Jan 10 23:29:12 superops haproxy[23844]: [ALERT] 009/232912 (23844) : proxy 'k8s-6443' has no server available!
root@superops:~# ss -lnt
State                 Recv-Q                Send-Q                                 Local Address:Port                               Peer Address:Port                Process                
LISTEN                0                     491                                  192.168.117.188:6443                                    0.0.0.0:*                                          
LISTEN                0                     4096                                   127.0.0.53%lo:53                                      0.0.0.0:*                                          
LISTEN                0                     128                                        127.0.0.1:8118                                    0.0.0.0:*                                          
LISTEN                0                     128                                          0.0.0.0:22                                      0.0.0.0:*                                          
LISTEN                0                     128                                        127.0.0.1:6010                                    0.0.0.0:*                                          
LISTEN                0                     16384                                      127.0.0.1:1514                                    0.0.0.0:*                                          
LISTEN                0                     128                                            [::1]:8118                                       [::]:*                                          
LISTEN                0                     128                                             [::]:22                                         [::]:*                                          
LISTEN                0                     128                                            [::1]:6010                                       [::]:*           
  • 2-6. If the service fails to start, a kernel parameter may be missing (see the sketch after this list)
1. Run sysctl -a | grep local
2. Find net.ipv4.ip_nonlocal_bind = 0
3. Edit /etc/sysctl.conf with vim and add that entry, changing the 0 to 1
4. Run sysctl -p to apply the change, then restart the service
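The four steps above condense to the following commands (a minimal sketch, run as root):
root@superops:~# sysctl -a | grep nonlocal_bind                              # confirm the current value
root@superops:~# echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf    # persist the setting
root@superops:~# sysctl -p                                                   # apply it
root@superops:~# systemctl restart haproxy.service                           # then restart the service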

3. Deploy with Ansible from 192.168.117.130

  • 3-1. Install ansible
root@superops:~# apt install ansible
  • 3-2. Configure passwordless SSH authentication
root@superops:~# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:azBtE/ipRTeejzNF3k8LFsSYsHUBdDM4TxXJPot+Gsk root@superops
The key's randomart image is:
+---[RSA 3072]----+
|         .o+*B+oo|
|       .  o=+ooo |
|      . o.o =..  |
|       + = = o.o |
|      o S o ooo +|
|       * o +o.o+.|
|      . o + oE ..|
|       .   o ... |
|             .o  |
+----[SHA256]-----+
root@superops:~# ssh-copy-id 192.168.117.130
root@192.168.117.130's password: 

root@superops:~# ssh-copy-id 192.168.117.131
root@192.168.117.131's password: 

root@superops:~# ssh-copy-id 192.168.117.132
root@192.168.117.132's password: 
# Test the connections with ssh
ssh 192.168.117.130  connected successfully
ssh 192.168.117.131  connected successfully
ssh 192.168.117.132  connected successfully
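The same check can be done in one loop (a sketch; BatchMode makes ssh fail instead of prompting for a password, so any host missing the key stands out):
root@superops:~# for ip in 192.168.117.130 192.168.117.131 192.168.117.132; do ssh -o BatchMode=yes root@${ip} hostname; done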

4. Orchestrate the K8S Installation

  • 4-1. Download the ezdown installer script (kubeasz version 3.1.1); the downloaded packages are placed in /etc/kubeasz
root@superops:~# export release=3.1.1   # set the release as an environment variable
root@superops:~# wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown  # download the ezdown script
root@superops:~# chmod +x ezdown   # make it executable
root@superops:~# ./ezdown -D    # -D downloads all the packages
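To confirm the download finished, the fetched artifacts can be inspected (a sketch; /etc/kubeasz/down is assumed to be where ezdown stores its offline packages):
root@superops:~# ls /etc/kubeasz/down   # offline packages and binaries
root@superops:~# docker images          # base images pulled by ezdown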
  • 4-2. After the download completes, create the cluster
root@superops:~# cd /etc/kubeasz/   # change into /etc/kubeasz/
root@superops:/etc/kubeasz# ./ezctl new k8s-01  
2022-01-11 00:29:25 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-01
2022-01-11 00:29:25 DEBUG set version of common plugins
2022-01-11 00:29:25 DEBUG cluster k8s-01: files successfully created.
2022-01-11 00:29:25 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-01/hosts'
2022-01-11 00:29:25 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-01/config.yml'
root@superops:/etc/kubeasz# cd clusters/k8s-01/  # change directory; edit the hosts and config.yml files
root@superops:/etc/kubeasz/clusters/k8s-01# ll
total 20
drwxr-xr-x 2 root root 4096 Jan 11 00:29 ./
drwxr-xr-x 3 root root 4096 Jan 11 00:29 ../
-rw-r--r-- 1 root root 6692 Jan 11 00:29 config.yml
-rw-r--r-- 1 root root 1685 Jan 11 00:29 hosts
  • 4-3. Edit the hosts file
root@superops:/etc/kubeasz/clusters/k8s-01# vim hosts   # edit the hosts file

# 'etcd' cluster should have odd member(s) (1,3,5,...)
# add the etcd host IPs
[etcd]
192.168.117.130
192.168.117.131

# master node(s)
# add the master host IPs
[kube_master]
192.168.117.130
192.168.117.131

# work node(s)
# add the node host IPs
[kube_node]
192.168.117.130
192.168.117.131

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#192.168.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]  # must be enabled; it is used when the installer pushes the LB settings, just add the load balancer's IP
192.168.117.132 LB_ROLE=master EX_APISERVER_VIP=192.168.117.188 EX_APISERVER_PORT=6443

# [optional] ntp server for the cluster
[chrony]
#192.168.1.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"

# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"   # change the network plugin to calico

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"   # service network range

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"   # pod network range

# NodePort Range
NODE_PORT_RANGE="30000-40000"  # NodePort range extended up to 40000

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/bin"   # path for the executables

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"    # path of the installer

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-01"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
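Before moving on to config.yml, the new inventory can be smoke-tested with a plain ansible ping (a sketch; it relies on the passwordless SSH configured in section 3):
root@superops:/etc/kubeasz/clusters/k8s-01# ansible -i hosts all -m ping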
  • 4-4. Edit the config.yml file
root@superops:/etc/kubeasz/clusters/k8s-01# vim config.yml  # edit config.yml

############################
# prepare
############################
# optional offline installation of system packages (offline|online)
INSTALL_SOURCE: "online"

# optional OS security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false

# NTP time sources [important: clocks across the cluster must stay in sync]
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"

# networks allowed to sync time internally, e.g. "10.0.0.0/8"; the default allows all
local_network: "0.0.0.0/0"


############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"


############################
# role:etcd
############################
# a separate wal directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable container registry mirrors
ENABLE_MIRROR_REGISTRY: true

# [containerd] base sandbox (pause) image
SANDBOX_IMAGE: "easzlab/pause-amd64:3.5"

# [containerd] persistent container storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker] enable the remote RESTful API
ENABLE_REMOTE_API: false

# [docker] trusted insecure (HTTP) registries
INSECURE_REG: '["127.0.0.1/8"]'


############################
# role:kube-master
############################
# cert config for the k8s master nodes; extra IPs and domains can be added (e.g. a public IP and domain)
MASTER_CERT_HOSTS:
  - "10.1.1.1"
  - "k8s.test.io"
  #- "www.test.com"

# mask length of the pod subnet on each node (determines the maximum number of pod IPs a node can allocate)
# if flannel runs with --kube-subnet-mgr, it reads this setting to assign each node a pod subnet
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# maximum number of pods per node
MAX_PODS: 400

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"

# upstream k8s does not recommend enabling system-reserved casually, unless long-term monitoring has shown you the system's resource usage;
# the reservation should also grow with system uptime; see templates/kubelet-config.yaml.j2 for the values
# the system reservation assumes a 4c/8g VM with a minimal set of system services; it can be raised on high-performance physical machines
# note that apiserver and other components briefly use a lot of resources during installation; reserving at least 1 GB of memory is recommended
SYS_RESERVED_ENABLED: "no"

# haproxy balance mode
BALANCE_ALG: "roundrobin"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] flannel backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.13.0-amd64"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"

# [flannel] offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"

# ------------------------------------------- calico
# [calico] setting CALICO_IPV4POOL_IPIP: "off" can improve network performance; see docs/setup/calico.md for the constraints
CALICO_IPV4POOL_IPIP: "Always"

# [calico] host IP used by calico-node; BGP peers are established over this address; it can be set manually or auto-detected
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico] calico network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico] supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.19.2"

# [calico] calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# [calico] offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"

# ------------------------------------------- cilium
# [cilium] number of etcd nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1

# [cilium] image version
cilium_ver: "v1.4.1"

# [cilium] offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"

# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane; defaults to the first master node
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"

# ------------------------------------------- kube-router
# [kube-router] public clouds impose restrictions and generally need ipinip always on; in a self-hosted environment this can be set to "subnet"
OVERLAY_TYPE: "full"

# [kube-router] NetworkPolicy support toggle
FIREWALL_ENABLE: "true"

# [kube-router] kube-router image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"

# [kube-router] kube-router offline image tarballs
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"


############################
# role:cluster-addon
############################
# auto-install coredns
dns_install: "yes"
corednsVer: "1.8.4"
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.17.0"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# auto-install metrics server
metricsserver_install: "no"
metricsVer: "v0.5.0"

# auto-install dashboard
dashboard_install: "no"
dashboardVer: "v2.3.1"
dashboardMetricsScraperVer: "v1.0.6"

# auto-install ingress
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "9.12.3"

# auto-install prometheus
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"

# auto-install nfs-provisioner
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.1"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

############################
# role:harbor
############################
# harbor version, the full version string
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true

Then, following the prompts, configure '/etc/kubeasz/clusters/k8s-01/hosts' and '/etc/kubeasz/clusters/k8s-01/config.yml': edit the hosts file according to the node plan above along with the main cluster-level options; the remaining component-level options can be adjusted in config.yml.

  • After the edits are done, return to the main directory /etc/kubeasz. The installation is split into these steps:
00 - cluster planning and configuration intro
01 - create certificates and prepare for installation
02 - install the etcd cluster
03 - install the container runtime
04 - install the master nodes
05 - install the worker nodes
06 - install the cluster network
07 - install the cluster add-ons
  • 01: environment initialization, certificate issuance, system-level configuration, kernel tuning, etc.
  • 02: deploy etcd
  • 03: configure the container runtime
  • 04: deploy the masters
  • 05: deploy the nodes
  • 06: deploy the network plugin
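Each step is then run as ./ezctl setup <cluster> <step>, as section 6 shows; kubeasz also accepts the keyword all to run every step in sequence (a sketch, not used in this walkthrough since we verify after each step):
root@superops:/etc/kubeasz# ./ezctl setup k8s-01 all   # runs steps 01 through 07 in order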

5. Because the LB was installed manually, remove the corresponding entries from the system-initialization playbook

root@superops:/etc/kubeasz# vim playbooks/01.prepare.yml 
- hosts:
  - kube_master
  - kube_node
  - etcd
  - ex_lb   # delete this line
  - chrony  # delete this line
  roles:
  - { role: os-harden, when: "OS_HARDEN|bool" }
  - { role: chrony, when: "groups['chrony']|length > 0" }

# to create CA, kubeconfig, kube-proxy.kubeconfig etc.
- hosts: localhost
  roles:
  - deploy

# prepare tasks for all nodes
- hosts:
  - kube_master
  - kube_node
  - etcd
  roles:
  - prepare
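The edited playbook can be sanity-checked before launching step 01 (a sketch; --syntax-check only parses the playbook and changes nothing):
root@superops:/etc/kubeasz# ansible-playbook -i clusters/k8s-01/hosts playbooks/01.prepare.yml --syntax-check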

6. Start the Installation with ezctl

6.1 root@superops:/etc/kubeasz# ./ezctl setup k8s-01 01

6.2 root@superops:/etc/kubeasz# ./ezctl setup k8s-01 02

# After step 02 (etcd) finishes, check on 130 and 131 whether the service has started
# 192.168.117.130:etcd

root@superops:/etc/kubeasz# systemctl status etcd
● etcd.service - Etcd Server
     Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2022-01-11 01:05:45 CST; 9min ago
       Docs: https://github.com/coreos
   Main PID: 13517 (etcd)
      Tasks: 9 (limit: 2240)
     Memory: 19.8M
     CGroup: /system.slice/etcd.service
             └─13517 /usr/bin/etcd --name=etcd-192.168.117.130 --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem --peer-cert-file=/etc/kubernetes/ssl/et>

Jan 11 01:05:45 superops etcd[13517]: {"level":"info","ts":"2022-01-11T01:05:45.152+0800","caller":"embed/serve.go:140","msg":"serving client traffic insecurely; this is strongly discoura>
Jan 11 01:05:45 superops etcd[13517]: {"level":"info","ts":"2022-01-11T01:05:45.152+0800","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
Jan 11 01:05:45 superops etcd[13517]: {"level":"info","ts":"2022-01-11T01:05:45.154+0800","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.117.130:>
Jan 11 01:05:45 superops etcd[13517]: {"level":"info","ts":"2022-01-11T01:05:45.155+0800","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
Jan 11 01:05:45 superops systemd[1]: Started Etcd Server.
Jan 11 01:05:45 superops etcd[13517]: {"level":"info","ts":"2022-01-11T01:05:45.158+0800","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
Jan 11 01:05:45 superops etcd[13517]: {"level":"info","ts":"2022-01-11T01:05:45.161+0800","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","clu>
Jan 11 01:05:45 superops etcd[13517]: {"level":"info","ts":"2022-01-11T01:05:45.169+0800","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"2b2e8d9883>
Jan 11 01:05:45 superops etcd[13517]: {"level":"info","ts":"2022-01-11T01:05:45.169+0800","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
Jan 11 01:05:45 superops etcd[13517]: {"level":"info","ts":"2022-01-11T01:05:45.169+0800","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}

# 192.168.117.131:etcd

root@superops:~# systemctl status etcd
● etcd.service - Etcd Server
     Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2022-01-11 01:05:45 CST; 10min ago
       Docs: https://github.com/coreos
   Main PID: 8254 (etcd)
      Tasks: 8 (limit: 2240)
     Memory: 19.9M
     CGroup: /system.slice/etcd.service
             └─8254 /usr/bin/etcd --name=etcd-192.168.117.131 --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem --peer-cert-file=/etc/kubernetes/ssl/etc>

Jan 11 01:05:45 superops etcd[8254]: {"level":"info","ts":"2022-01-11T01:05:45.152+0800","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local>
Jan 11 01:05:45 superops etcd[8254]: {"level":"info","ts":"2022-01-11T01:05:45.152+0800","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
Jan 11 01:05:45 superops etcd[8254]: {"level":"info","ts":"2022-01-11T01:05:45.153+0800","caller":"embed/serve.go:140","msg":"serving client traffic insecurely; this is strongly discourag>
Jan 11 01:05:45 superops etcd[8254]: {"level":"info","ts":"2022-01-11T01:05:45.153+0800","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
Jan 11 01:05:45 superops etcd[8254]: {"level":"info","ts":"2022-01-11T01:05:45.153+0800","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
Jan 11 01:05:45 superops systemd[1]: Started Etcd Server.
Jan 11 01:05:45 superops etcd[8254]: {"level":"info","ts":"2022-01-11T01:05:45.156+0800","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.117.131:2>
Jan 11 01:05:45 superops etcd[8254]: {"level":"info","ts":"2022-01-11T01:05:45.166+0800","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
Jan 11 01:05:45 superops etcd[8254]: {"level":"info","ts":"2022-01-11T01:05:45.173+0800","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"2b2e8d9883a>
Jan 11 01:05:45 superops etcd[8254]: {"level":"info","ts":"2022-01-11T01:05:45.173+0800","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}

Verify the etcd cluster

root@superops:/etc/kubeasz# export NODE_IPS="192.168.117.130 192.168.117.131"    # export the node list
root@superops:/etc/kubeasz# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
https://192.168.117.130:2379 is healthy: successfully committed proposal: took = 15.076045ms
https://192.168.117.131:2379 is healthy: successfully committed proposal: took = 13.739506ms
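For more detail than a health bit, the same loop can print each member's status as a table (a sketch; --write-out=table is a standard etcdctl v3 flag):
root@superops:/etc/kubeasz# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem --write-out=table endpoint status; done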

Verify the network with calicoctl (run this after step 06 below has installed calico)

root@130-me-et-node-1:/etc/kubeasz# calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.117.131 | node-to-node mesh | up    | 15:30:17 | Established |
+-----------------+-------------------+-------+----------+-------------+

6.3 root@superops:/etc/kubeasz# ./ezctl setup k8s-01 03

6.4 root@superops:/etc/kubeasz# ./ezctl setup k8s-01 04

root@superops:/etc/kubeasz# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.117.130   Ready    master   28s   v1.22.2
192.168.117.131   Ready    master   30s   v1.22.2

6.5 root@superops:/etc/kubeasz# ./ezctl setup k8s-01 05

root@superops:/etc/kubeasz# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.117.130   Ready    master,node   28s   v1.22.2
192.168.117.131   Ready    master,node   30s   v1.22.2

6.6 root@superops:/etc/kubeasz# ./ezctl setup k8s-01 06

root@superops:/etc/kubeasz# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-59df8b6856-x2t6x   1/1     Running   0          57s
kube-system   calico-node-9vj86                          1/1     Running   0          57s
kube-system   calico-node-xnnz5                          1/1     Running   0          57s

7. Cluster Verification

  • 7.1 Create and run a container
root@130-me-et-node-1:/etc/kubeasz# kubectl run net-test1 --image=nginx sleep 60000
root@130-me-et-node-1:/etc/kubeasz# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE              NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          5m55s   100.200.70.129   192.168.117.130   <none>           <none>
  • 7.2 Test the container's connectivity to the host network and the Internet
root@130-me-et-node-1:~# kubectl exec -it net-test4 sh   # exec into the container
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping 192.168.117.130
PING 192.168.117.130 (192.168.117.130): 56 data bytes
64 bytes from 192.168.117.130: seq=0 ttl=63 time=0.422 ms
64 bytes from 192.168.117.130: seq=1 ttl=63 time=0.293 ms
^C
--- 192.168.117.130 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.293/0.357/0.422 ms
/ # ping 114.114.114.114
PING 114.114.114.114 (114.114.114.114): 56 data bytes
64 bytes from 114.114.114.114: seq=0 ttl=127 time=41.790 ms
64 bytes from 114.114.114.114: seq=1 ttl=127 time=42.713 ms
^C
--- 114.114.114.114 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 41.790/42.251/42.713 ms
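Both pings succeeding confirms pod-to-host and pod-to-Internet traffic. The test pod can then be removed (a sketch):
/ # exit
root@130-me-et-node-1:~# kubectl delete pod net-test1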
