
Installing a Single-Master k8s Cluster with Kuboard

I. Installation Requirements

Installing with Kuboard has a few prerequisites:

  • At least two servers with 2 CPU cores and 4 GB of RAM
  • CentOS 7.6 / 7.7 / 7.8 / 7.9

Kubernetes v1.19.x still uses Docker as its container runtime, whereas later releases move to containerd, so this guide installs v1.19.x and follows the v1.19.x installation document on the Kuboard site.

Environment:

Node     Cores   Memory   IP             Notes
master   2       4 GB     172.16.52.11
worker   2       4 GB     172.16.52.12

II. Installation

1. Check CentOS version / hostname

# Run on both the master node and the worker nodes; check the version information
cat /etc/redhat-release

# The hostname printed here will be this machine's node name in the Kubernetes cluster
# localhost cannot be used as a node name
hostname

# Use the lscpu command to verify the CPU information
# Architecture: x86_64    this guide does not support the arm architecture
# CPU(s):       2         the number of CPU cores must not be lower than 2
lscpu
  • Output on this machine:
[root@k8smaster ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@k8smaster ~]# hostname
k8smaster
[root@k8smaster ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 165
Model name:            Intel(R) Core(TM) i5-10400 CPU @ 2.90GHz
Stepping:              3
CPU MHz:               2903.998
BogoMIPS:              5807.99
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              12288K
NUMA node0 CPU(s):     0,1
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 arat md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
[root@k8smaster ~]# 

If the hostname does not meet these requirements, change it; note that the hostname must not be localhost and must not contain underscores, dots, or uppercase letters:

# Change the hostname
hostnamectl set-hostname your-new-host-name
# Check the result
hostnamectl status
# Add a hosts entry for the hostname
echo "127.0.0.1   $(hostname)" >> /etc/hosts
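As an optional sanity check (a small sketch of my own, not part of the Kuboard scripts), you can verify the hostname against the rules above before continuing:

# Hypothetical helper, not from the Kuboard docs: fail if the hostname is
# localhost or contains anything other than lowercase letters, digits, hyphens
HOST=$(hostname)
if [ "$HOST" = "localhost" ] || ! echo "$HOST" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'; then
    echo "invalid hostname: $HOST"
    exit 1
fi
echo "hostname ok: $HOST"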

2. Check the network

[root@k8smaster ~]# ip route show
default via 172.16.52.2 dev ens33 proto static metric 100 
blackhole 10.100.16.128/26 proto bird 
10.100.16.129 dev cali4024c32e77c scope link 
10.100.16.130 dev cali8c7cb2844ae scope link 
10.100.16.131 dev calicb075797441 scope link 
10.100.16.132 dev cali0abf7fc80f0 scope link 
10.100.16.133 dev cali9c1c03433c5 scope link 
10.100.16.134 dev cali960be751a26 scope link 
10.100.162.192/26 via 172.16.52.12 dev tunl0 proto bird onlink 
172.16.52.0/24 dev ens33 proto kernel scope link src 172.16.52.11 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
[root@k8smaster ~]# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b8:55:59 brd ff:ff:ff:ff:ff:ff
    inet 172.16.52.11/24 brd 172.16.52.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c044:f2e1:6b26:5a5d/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:19:b1:81:1b brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.100.16.128/32 brd 10.100.16.128 scope global tunl0
       valid_lft forever preferred_lft forever
5: cali4024c32e77c@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
6: cali8c7cb2844ae@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
7: calicb075797441@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
8: cali0abf7fc80f0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
9: cali9c1c03433c5@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 4
10: cali960be751a26@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 5

The IP address used by kubelet

  • In the output of ip route show you can identify the machine's default NIC, e.g. ens33 in default via 172.16.52.2 dev ens33
  • In the output of ip address you can find that NIC's IP address; Kubernetes uses this address to communicate with the other nodes in the cluster, e.g. 172.16.52.11
  • The IP addresses Kubernetes uses must be mutually reachable across all nodes (no NAT mapping, no security-group or firewall isolation); a small sketch for extracting this address follows the list
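A minimal sketch (assuming a single default route; not from the Kuboard docs) that prints the default NIC and the address kubelet will use:

# Read the default NIC from the routing table, then print its IPv4 address
DEFAULT_IF=$(ip route show default | awk '{print $5; exit}')
echo "default NIC: ${DEFAULT_IF}"
ip -4 addr show "${DEFAULT_IF}" | awk '/inet / {print $2}'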

3. Install Docker and kubelet

# Alibaba Cloud docker hub mirror
export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
curl -sSL https://kuboard.cn/install-script/v1.19.x/install_kubelet.sh | sh -s 1.19.5
  • install_kubelet.sh
#!/bin/bash

# Run on both the master node and the worker nodes

# Install docker
# Reference documentation:
# https://docs.docker.com/install/linux/docker-ce/centos/
# https://docs.docker.com/install/linux/linux-postinstall/

# Remove old versions
yum remove -y docker \
docker-client \
docker-client-latest \
docker-ce-cli \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

# Set up the yum repository
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install and start docker
yum install -y docker-ce-19.03.11 docker-ce-cli-19.03.11 containerd.io-1.2.13

mkdir /etc/docker || true

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["${REGISTRY_MIRROR}"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl enable docker
systemctl restart docker

# Install nfs-utils
# nfs-utils must be installed before NFS network storage can be mounted
yum install -y nfs-utils
yum install -y wget

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# Modify /etc/sysctl.conf
# If a setting already exists, rewrite it in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
# If a setting is missing, append it
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf
# Apply the settings
sysctl -p

# Configure the K8S yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove old versions
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm, and kubectl
# ${1} is replaced with the kubernetes version number, e.g. 1.19.0
yum install -y kubelet-${1} kubeadm-${1} kubectl-${1}

# Restart docker and start kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

docker version
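After the script finishes, a quick verification (standard commands, not part of the script) confirms the expected versions and service states; note that kubelet will keep restarting until kubeadm init or kubeadm join runs, which is normal at this point:

# Verify versions and service states
docker --version              # expect 19.03.11
kubelet --version             # expect v1.19.5
systemctl is-active docker    # should print "active"
systemctl is-enabled kubelet  # should print "enabled"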

4. Initialize the master node

# Run only on the master node
# Replace x.x.x.x with the master node's actual IP (use the internal IP)
# export only takes effect in the current shell session; if you open a new shell
# window to continue the installation, re-run these export commands
export MASTER_IP=x.x.x.x
# Replace apiserver.demo with the dnsName you want
export APISERVER_NAME=apiserver.demo
# The subnet for Kubernetes pods; it is created by kubernetes after installation
# and does not exist in your physical network beforehand
export POD_SUBNET=10.100.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
curl -sSL https://kuboard.cn/install-script/v1.19.x/init_master.sh | sh -s 1.19.5
  • master node
# Run only on the master node
export MASTER_IP=172.16.52.11
export APISERVER_NAME=apiserver.demo
export POD_SUBNET=10.100.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
curl -sSL https://kuboard.cn/install-script/v1.19.x/init_master.sh | sh -s 1.19.5
  • init_master.sh 
#!/bin/bash

# Run only on the master node

# Abort the script on any error
set -e

if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1mPlease make sure the environment variables POD_SUBNET and APISERVER_NAME are set\033[0m"
  echo "current POD_SUBNET=$POD_SUBNET"
  echo "current APISERVER_NAME=$APISERVER_NAME"
  exit 1
fi


# Full list of configuration options: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v${1}
imageRepository: registry.aliyuncs.com/k8sxio
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF

# kubeadm init
# Depending on your server's network speed, this takes 3 - 10 minutes
kubeadm config images pull --config=kubeadm-config.yaml
kubeadm init --config=kubeadm-config.yaml --upload-certs

# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# Install the calico network plugin
# Reference: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "installing calico-3.13.1"
rm -f calico-3.13.1.yaml
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml

Once the steps above have completed, check the initialization result:

# Run only on the master node

# Run the following command and wait 3-10 minutes, until all pods are in the Running state
watch kubectl get pod -n kube-system -o wide

# Check the master node initialization result
kubectl get nodes -o wide
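As a non-interactive alternative to watch (a sketch using standard kubectl, not from the Kuboard docs), you can block until every kube-system pod reports Ready:

# Wait up to 10 minutes for all kube-system pods to become Ready
kubectl wait --for=condition=Ready pod --all -n kube-system --timeout=600s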

5. Obtain the join command

Get the join command parameters that will be used to add worker nodes to the cluster; run on the master node:

# Run only on the master node
kubeadm token create --print-join-command

For example:

# This token is valid for 2 hours; within that window you can use it to initialize any number of worker nodes
[root@k8smaster ~]# kubeadm token create --print-join-command
W0921 08:18:57.916684   78637 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join apiserver.demo:6443 --token w5tx9t.uvswl2ivbv1309ta     --discovery-token-ca-cert-hash sha256:cd0e6ec71bb7ed7002e74268a53007cd0c30e4054987c6a1557d02bfb79d3a7d 
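If you come back later and are unsure whether the token is still valid, a quick check (standard kubeadm):

# List existing bootstrap tokens and their expiry; if expired, re-run
# "kubeadm token create --print-join-command" to get a fresh one
kubeadm token list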

6. Initialize the worker nodes

Run on every worker node:

# Run only on the worker nodes
# Replace x.x.x.x with the master node's internal IP
export MASTER_IP=x.x.x.x
# Replace apiserver.demo with the APISERVER_NAME used when the master node was initialized
export APISERVER_NAME=apiserver.demo
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

# Replace with the output of the kubeadm token create command on the master node
kubeadm join apiserver.demo:6443 --token w5tx9t.uvswl2ivbv1309ta     --discovery-token-ca-cert-hash sha256:cd0e6ec71bb7ed7002e74268a53007cd0c30e4054987c6a1557d02bfb79d3a7d 
  • worker node
# Run only on the worker nodes

export MASTER_IP=172.16.52.11
export APISERVER_NAME=apiserver.demo
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
kubeadm join apiserver.demo:6443 --token w5tx9t.uvswl2ivbv1309ta     --discovery-token-ca-cert-hash sha256:cd0e6ec71bb7ed7002e74268a53007cd0c30e4054987c6a1557d02bfb79d3a7d 

7. Check the initialization result

Run on the master node:

# Run only on the master node
kubectl get nodes -o wide

Output:

[root@k8smaster ~]# kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8smaster   Ready    master   19h   v1.19.5   172.16.52.11   <none>        CentOS Linux 7 (Core)   3.10.0-1160.71.1.el7.x86_64   docker://19.3.11
k8sworker   Ready    <none>   19h   v1.19.5   172.16.52.12   <none>        CentOS Linux 7 (Core)   3.10.0-1160.71.1.el7.x86_64   docker://19.3.11
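The worker's ROLES column shows <none>. If you prefer it to read worker, you can add the conventional node-role label (optional, purely cosmetic):

# Optional: label the worker node so its ROLES column displays "worker"
kubectl label node k8sworker node-role.kubernetes.io/worker=worker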

III. k8s Cluster Management Tool

After the installation above, the cluster is already usable, for example:

# List namespaces
[root@k8smaster ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   23h
kube-node-lease   Active   23h
kube-public       Active   23h
kube-system       Active   23h
kuboard           Active   23h

# Create a namespace
[root@k8smaster ~]# kubectl create ns test
namespace/test created
[root@k8smaster ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   23h
kube-node-lease   Active   23h
kube-public       Active   23h
kube-system       Active   23h
kuboard           Active   23h
test              Active   21s

# Delete a namespace
[root@k8smaster ~]# kubectl delete ns test
namespace "test" deleted
[root@k8smaster ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   23h
kube-node-lease   Active   23h
kube-public       Active   23h
kube-system       Active   23h
kuboard           Active   23h
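The commands above are imperative; the declarative equivalent (a sketch using a plain Namespace manifest) does the same thing and is easier to keep under version control:

# Create the same namespace from a YAML manifest
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: test
EOF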

This is all command-line management, however; Kuboard provides a web UI for managing the cluster visually instead.

1. Install Kuboard

# Distribute the images Kuboard needs from the Huawei Cloud image registry
kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3-swr.yaml

Check the startup status:

[root@k8smaster ~]# watch kubectl get pods -n kuboard
Every 2.0s: kubectl get pods -n kuboard                                                                                                                                             Wed Sep 21 12:48:05 2022

NAME                             READY   STATUS    RESTARTS   AGE
kuboard-agent-2-9b89fc95-hwkvq   1/1     Running   1          23h
kuboard-agent-5f5d57b669-dcfbq   1/1     Running   0          23h
kuboard-etcd-v7wf5               1/1     Running   0          23h
kuboard-v3-79797c7b84-62wlk      1/1     Running   0          23h

2. Access Kuboard

  •  Open http://your-node-ip-address:30080 in a browser
  •  Log in with the initial username and password:
Username: admin
Password: Kuboard123
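Before opening the browser, you can confirm the NodePort from the command line (standard kubectl; 30080 is the port the manifest above exposes by default):

# List the services in the kuboard namespace and check the NodePort mapping
kubectl get svc -n kuboard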

3. Uninstall Kuboard

If you are not going to use Kuboard, uninstall it as follows:

# 1. Delete the Kuboard v3 resources
kubectl delete -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml

# 2. Clean up leftover data
# Run on the master node and on every node carrying the k8s.kuboard.cn/role=etcd label
rm -rf /usr/share/kuboard
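To find which nodes carry that label, a one-line sketch using a standard label selector:

# List the nodes labeled for Kuboard's etcd, then clean /usr/share/kuboard on each
kubectl get nodes -l k8s.kuboard.cn/role=etcd -o name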

For more details, see the Kuboard documentation: Installing the Kubernetes management tool.