Installing a Kubernetes Cluster with kubeasz

kubeasz is an open-source project on GitHub that uses Ansible internally to automate the installation of a Kubernetes cluster.

Component     Version
CentOS        7.9
Docker        20.10.5
Kubernetes    1.20.5
Ansible       2.9.27
Kubeasz       3.0.1

Cluster Plan

Node Name   IP Address        Role
m1 192.168.100.133 master, ansible, kubeasz
n1 192.168.100.134 node
n2 192.168.100.135 node
n3 192.168.100.136 node
n4 192.168.100.137 node

Get the kubeasz Code

The kubeasz code is stored on the master node.

[root@localhost ~]# yum install -y lrzsz vim tree unzip
  • Download the 3.0.1 release from GitHub and upload it to the server (or fetch it directly on the server, as sketched after this block)
[root@localhost ~]# cd /data/
[root@localhost data]# mv /tmp/kubeasz-3.0.1.zip ./
[root@localhost data]# unzip kubeasz-3.0.1.zip
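
If the server has direct internet access, the archive can also be fetched on the server itself; the URL below follows GitHub's standard tag-archive pattern for the easzlab/kubeasz repository and is an assumption, so adjust the tag name if it differs.

# optional: download the 3.0.1 source archive directly (assumed URL, verify the tag name)
[root@localhost data]# wget -O kubeasz-3.0.1.zip https://github.com/easzlab/kubeasz/archive/refs/tags/3.0.1.zip
[root@localhost data]# unzip kubeasz-3.0.1.zip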

Preparation

Set the Hostnames

Set the hostname on all nodes.

$ hostnamectl set-hostname [m1|n1|n2|n3|n4]
$ bash

Set Up Passwordless SSH Login

On the master node, set up passwordless SSH login to the other nodes.

# Generate the key pair
[root@m1 data]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):    
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:i/OYoJVEAsXiQN8BLze89lP0Jp0TwMtoMQhOkYQifaM root@m1
The key's randomart image is:
+---[RSA 2048]----+
|o=+*=.. ..       |
|=o*.*..o ..      |
|=..*.B  =...     |
| .Eoo oo.oo o    |
|    .o. So *     |
|   .......o .    |
|    + oo.        |
|   o . =.        |
|  .   o .        |
+----[SHA256]-----+

# Copy the public key to each host
[root@m1 data]# ssh-copy-id m1
root@m1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'm1'"
and check to make sure that only the key(s) you wanted were added.

[root@m1 data]# ssh-copy-id [n1|n2|n3|n4]

# Test passwordless login
[root@m1 data]# ssh n2
Last login: Tue Nov 22 15:00:42 2022 from 192.168.100.133
[root@n2 ~]# hostname
n2
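
The bracketed [n1|n2|n3|n4] above means repeating the command once per node; a short loop (a sketch, assuming each node accepts the same interactive root password prompt) achieves the same result.

# copy the public key to every node in one pass
[root@m1 data]# for h in n1 n2 n3 n4; do ssh-copy-id root@${h}; done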

Update the /etc/hosts File

Configure the file on the master node, then distribute it to the other nodes.

[root@m1 data]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.100.133 m1
192.168.100.134 n1
192.168.100.135 n2
192.168.100.136 n3
192.168.100.137 n4
  • Distribute the /etc/hosts file
[root@m1 data]# for i in {1..4}; do scp /etc/hosts n${i}:/etc/hosts ; done
hosts                                100%  254   387.2KB/s   00:00    
hosts                                100%  254   335.7KB/s   00:00    
hosts                                100%  254   325.7KB/s   00:00    
hosts                                100%  254   369.4KB/s   00:00    

# Check the /etc/hosts contents
[root@m1 data]# ssh n1
Last login: Tue Nov 22 15:02:18 2022 from 192.168.100.133

[root@n1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.100.133 m1
192.168.100.134 n1
192.168.100.135 n2
192.168.100.136 n3
192.168.100.137 n4
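
As an optional sanity check (a sketch, not part of the original steps), the distributed file can be verified on all nodes in one loop instead of logging in one by one.

# each node should report its hostname and 5 cluster entries
[root@m1 data]# for i in {1..4}; do ssh n${i} "hostname; grep -c 192.168.100 /etc/hosts"; done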

Install Ansible

Install Ansible on the master node.

# Install the EPEL repository
[root@m1 data]# yum -y install epel-release

# Install Ansible
[root@m1 data]# yum -y install ansible

# Check the installed version
[root@m1 data]# ansible --version
ansible 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 14 2020, 14:45:30) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
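
Before continuing, Ansible connectivity to the nodes can be smoke-tested with an ad-hoc ping; the inline host list below (the trailing comma tells Ansible to treat the string as an inventory) is only an illustrative assumption, since the kubeasz inventory is generated later.

# ad-hoc connectivity test over the passwordless SSH configured earlier
[root@m1 data]# ansible all -i "n1,n2,n3,n4," -m ping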

Install the Kubernetes Cluster

Download the Project Source, Binaries, and Offline Images

[root@m1 data]# cd kubeasz-3.0.1
[root@m1 kubeasz-3.0.1]# ./ezdown -D
......

After the script above completes successfully, all files (kubeasz code, binaries, and offline images) are placed under the /etc/kubeasz directory.
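
A quick way to confirm the download succeeded is to list the staged binaries and the offline images pulled into the local Docker daemon (a sketch; exact contents vary with the kubeasz version).

# binaries staged by ezdown -D
[root@m1 kubeasz-3.0.1]# ls /etc/kubeasz/bin
# offline images pulled by ezdown -D
[root@m1 kubeasz-3.0.1]# docker images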

Create a Cluster Configuration Instance

[root@m1 kubeasz-3.0.1]# ./ezctl new k8s
2022-11-22 16:27:02 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s
2022-11-22 16:27:02 DEBUG set version of common plugins
2022-11-22 16:27:02 DEBUG cluster k8s: files successfully created.
2022-11-22 16:27:02 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s/hosts'
2022-11-22 16:27:02 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s/config.yml'

Then, as the prompts suggest, configure /etc/kubeasz/clusters/k8s/hosts and /etc/kubeasz/clusters/k8s/config.yml:
edit the hosts file and the main cluster-level options according to the node plan above;
the remaining cluster component options can be adjusted in config.yml.

  • Edit the /etc/kubeasz/clusters/k8s/hosts file
[root@m1 kubeasz-3.0.1]# cat /etc/kubeasz/clusters/k8s/hosts 
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
# single-master setup, so only one etcd member
192.168.100.133

# master node(s)
[kube_master]
# single-master setup, so only one master node
192.168.100.133

# work node(s)
[kube_node]
192.168.100.134
192.168.100.135
192.168.100.136
192.168.100.137

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#192.168.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]
#192.168.1.6 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
#192.168.1.7 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
192.168.100.133

[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="flannel"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"

# NodePort Range
NODE_PORT_RANGE="20000-39999"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
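
With the hosts file in place, a pre-flight check against the generated inventory confirms Ansible can reach every node defined above (an optional sketch; kubeasz runs its own checks during setup).

# ping all hosts in the cluster inventory
[root@m1 kubeasz-3.0.1]# ansible all -i /etc/kubeasz/clusters/k8s/hosts -m ping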

One-Command Installation

# automatically run playbooks 01-07
[root@m1 kubeasz-3.0.1]# ./ezctl setup k8s all
  • Or install step by step
[root@m1 kubeasz-3.0.1]# ./ezctl setup k8s 01
[root@m1 kubeasz-3.0.1]# ./ezctl setup k8s 02
[root@m1 kubeasz-3.0.1]# ./ezctl setup k8s 03
[root@m1 kubeasz-3.0.1]# ./ezctl setup k8s 04
......

The step-by-step numbers 01, 02, 03, ... correspond to the playbook scripts in the playbooks/ directory:

[root@m1 kubeasz-3.0.1]# ls playbooks/ -lh
total 88K
-rw-r--r--. 1 root root  443 Mar 28  2021 01.prepare.yml
-rw-r--r--. 1 root root   58 Mar 28  2021 02.etcd.yml
-rw-r--r--. 1 root root  209 Mar 28  2021 03.runtime.yml
-rw-r--r--. 1 root root  470 Mar 28  2021 04.kube-master.yml
-rw-r--r--. 1 root root  140 Mar 28  2021 05.kube-node.yml
-rw-r--r--. 1 root root  408 Mar 28  2021 06.network.yml
-rw-r--r--. 1 root root   77 Mar 28  2021 07.cluster-addon.yml
-rw-r--r--. 1 root root   34 Mar 28  2021 10.ex-lb.yml
-rw-r--r--. 1 root root 3.9K Mar 28  2021 11.harbor.yml
-rw-r--r--. 1 root root 1.6K Mar 28  2021 21.addetcd.yml
-rw-r--r--. 1 root root 1.5K Mar 28  2021 22.addnode.yml
-rw-r--r--. 1 root root 1.1K Mar 28  2021 23.addmaster.yml
-rw-r--r--. 1 root root 3.0K Mar 28  2021 31.deletcd.yml
-rw-r--r--. 1 root root 1.3K Mar 28  2021 32.delnode.yml
-rw-r--r--. 1 root root 1.4K Mar 28  2021 33.delmaster.yml
-rw-r--r--. 1 root root 1.8K Mar 28  2021 90.setup.yml
-rw-r--r--. 1 root root 1.2K Mar 28  2021 91.start.yml
-rw-r--r--. 1 root root 1.1K Mar 28  2021 92.stop.yml
-rw-r--r--. 1 root root 1.1K Mar 28  2021 93.upgrade.yml
-rw-r--r--. 1 root root 1.8K Mar 28  2021 94.backup.yml
-rw-r--r--. 1 root root  999 Mar 28  2021 95.restore.yml
-rw-r--r--. 1 root root  337 Mar 28  2021 99.clean.yml
  • Excerpt from the ezctl script
......
function usage-setup(){
  echo -e "\033[33mUsage:\033[0m ezctl setup <cluster> <step>"
  cat <<EOF
available steps:
    01  prepare            to prepare CA/certs & kubeconfig & other system settings
    02  etcd               to setup the etcd cluster
    03  container-runtime  to setup the container runtime(docker or containerd)
    04  kube-master        to setup the master nodes
    05  kube-node          to setup the worker nodes
    06  network            to setup the network plugin
    07  cluster-addon      to setup other useful plugins
    90  all                to run 01~07 all at once
    10  ex-lb              to install external loadbalance for accessing k8s from outside
    11  harbor             to install a new harbor server or to integrate with an existed one

examples: ./ezctl setup test-k8s 01  (or ./ezctl setup test-k8s prepare)
          ./ezctl setup test-k8s 02  (or ./ezctl setup test-k8s etcd)
          ./ezctl setup test-k8s all
EOF
}

......
function setup() {
    [[ -d "clusters/$1" ]] || { logger error "invalid config, run 'ezctl new $1' first"; return 1; }
    [[ -f "bin/kube-apiserver" ]] || { logger error "no binaries founded, run 'ezdown -D' fist"; return 1; }

    PLAY_BOOK="dummy.yml"
    case "$2" in
      (01|prepare)
          PLAY_BOOK="01.prepare.yml"
          ;;
      (02|etcd)
          PLAY_BOOK="02.etcd.yml"
          ;;
      (03|container-runtime)
          PLAY_BOOK="03.runtime.yml"
          ;;
      (04|kube-master)
          PLAY_BOOK="04.kube-master.yml"
          ;;
      (05|kube-node)
          PLAY_BOOK="05.kube-node.yml"
          ;;
      (06|network)
          PLAY_BOOK="06.network.yml"
          ;;
      (07|cluster-addon)
          PLAY_BOOK="07.cluster-addon.yml"
          ;;
      (90|all)
          PLAY_BOOK="90.setup.yml"
          ;;
      (10|ex-lb)
          PLAY_BOOK="10.ex-lb.yml"
          ;;
      (11|harbor)
          PLAY_BOOK="11.harbor.yml"
          ;;
      (*)
          usage-setup
          exit 1
          ;;
    esac

    logger info "cluster:$1 setup step:$2 begins in 5s, press any key to abort:\n"
    ! (read -r -t5 -n1) || { logger warn "setup abort"; return 1; }

    ansible-playbook -i "clusters/$1/hosts" -e "@clusters/$1/config.yml" "playbooks/$PLAY_BOOK" || return 1
}
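
As the setup() function shows, each step is a single ansible-playbook run against the cluster inventory; for example, step 04 for the cluster named k8s is equivalent to running the following from the kubeasz base directory.

# what `./ezctl setup k8s 04` executes under the hood
[root@m1 kubeasz]# ansible-playbook -i clusters/k8s/hosts -e @clusters/k8s/config.yml playbooks/04.kube-master.yml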

Verify the Cluster

# Verify the cluster version
[root@m1 kubeasz-3.0.1]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:02:01Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

# Verify the nodes are in Ready status
[root@m1 kubeasz-3.0.1]# kubectl get node 
NAME              STATUS                     ROLES    AGE   VERSION
192.168.100.133   Ready,SchedulingDisabled   master   13m   v1.20.5
192.168.100.134   Ready                      node     13m   v1.20.5
192.168.100.135   Ready                      node     13m   v1.20.5
192.168.100.136   Ready                      node     13m   v1.20.5
192.168.100.137   Ready                      node     13m   v1.20.5

# Verify pod status; the network plugin, coredns, metrics-server, etc. are installed by default
[root@m1 kubeasz-3.0.1]# kubectl get pod -A
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-5787695b7f-jdckk                     1/1     Running   0          12m
kube-system   dashboard-metrics-scraper-79c5968bdc-kgttk   1/1     Running   0          12m
kube-system   kube-flannel-ds-amd64-45gs6                  1/1     Running   0          12m
kube-system   kube-flannel-ds-amd64-bhr94                  1/1     Running   0          12m
kube-system   kube-flannel-ds-amd64-n8jf9                  1/1     Running   0          12m
kube-system   kube-flannel-ds-amd64-qknzk                  1/1     Running   0          12m
kube-system   kube-flannel-ds-amd64-tnhm2                  1/1     Running   0          12m
kube-system   kubernetes-dashboard-c4c6566d6-6r76g         1/1     Running   0          12m
kube-system   metrics-server-8568cf894b-7gzw7              1/1     Running   0          12m
kube-system   node-local-dns-24p25                         1/1     Running   0          12m
kube-system   node-local-dns-9487k                         1/1     Running   0          12m
kube-system   node-local-dns-9fpvs                         1/1     Running   0          12m
kube-system   node-local-dns-gf46t                         1/1     Running   0          12m
kube-system   node-local-dns-hx57r                         1/1     Running   0          12m

# Verify cluster service status
[root@m1 kubeasz-3.0.1]# kubectl get svc -A 
NAMESPACE     NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes                  ClusterIP   10.68.0.1       <none>        443/TCP                  14m
kube-system   dashboard-metrics-scraper   ClusterIP   10.68.253.27    <none>        8000/TCP                 12m
kube-system   kube-dns                    ClusterIP   10.68.0.2       <none>        53/UDP,53/TCP,9153/TCP   12m
kube-system   kube-dns-upstream           ClusterIP   10.68.64.102    <none>        53/UDP,53/TCP            12m
kube-system   kubernetes-dashboard        NodePort    10.68.234.146   <none>        443:34823/TCP            12m
kube-system   metrics-server              ClusterIP   10.68.40.177    <none>        443/TCP                  12m
kube-system   node-local-dns              ClusterIP   None            <none>        9253/TCP                 12m
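
Finally, an end-to-end smoke test (a sketch, not part of the kubeasz playbooks; the name nginx-test is illustrative) deploys an nginx pod and exposes it on a NodePort within the 20000-39999 range configured earlier.

# deploy a test workload and expose it via NodePort
[root@m1 kubeasz-3.0.1]# kubectl create deployment nginx-test --image=nginx
[root@m1 kubeasz-3.0.1]# kubectl expose deployment nginx-test --port=80 --type=NodePort
# look up the assigned NodePort, then curl any node on that port
[root@m1 kubeasz-3.0.1]# kubectl get svc nginx-test
[root@m1 kubeasz-3.0.1]# curl -I http://192.168.100.134:<NodePort>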