
1. Lab environment preparation

Role           IP              Hostname   Installed components                                                                     Spec
Control node   192.168.10.40   master     apiserver, controller-manager, scheduler, etcd, kube-proxy, docker, calico, containerd   4 CPU / 4 GB
Worker node    192.168.10.41   node1      kubelet-1.24.8, kube-proxy, docker, calico, coredns, containerd                          2 CPU / 2 GB

Kubernetes GitHub releases: https://github.com/kubernetes/kubernetes/releases?page=1

containerd GitHub: https://github.com/containerd/containerd

1.1 Base environment configuration

# 1. Set the hostname and configure a static IP
hostnamectl set-hostname master && bash

# 2. Configure /etc/hosts on every host
cat >> /etc/hosts <<EOF
192.168.10.40 master
192.168.10.41 node1
EOF

# 3. Set up passwordless SSH trust between hosts
ssh-keygen -t rsa
ssh-copy-id master

# 4. Disable swap
swapoff -a  # temporary, until reboot
# To disable it permanently, comment out the swap line in /etc/fstab
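One way to comment the swap entry out in place (a sketch; double-check /etc/fstab afterwards):

sed -ri 's/.*swap.*/#&/' /etc/fstab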

# 5. Tune kernel parameters
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf
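Note that the echo into /etc/profile above only loads br_netfilter for login shells; on a systemd distro such as CentOS 7 a more reliable option (an alternative sketch, not part of the original steps) is a modules-load.d entry, after which the sysctls can be verified:

cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward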

# 6. Stop the firewall and disable it at boot
systemctl stop firewalld ; systemctl disable firewalld

# 7. Disable SELinux; the config-file change takes effect after a reboot
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# 8. Configure the Aliyun yum repositories
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast

# 9. Configure the Kubernetes repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

# 10. Sync the time now, then keep it synced on a schedule
yum install ntpdate -y
ntpdate time1.aliyun.com
# Hourly sync, added to root's crontab (crontab -e)
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com
systemctl restart crond
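To register the same cron entry non-interactively (a sketch; it appends to root's crontab, so check crontab -l afterwards):

(crontab -l 2>/dev/null; echo '0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com') | crontab -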

# 11. Enable IPVS support (used by kube-proxy)
# Save the module-loading script as /etc/sysconfig/modules/ipvs.modules (it is run below):
cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
EOF

[root@master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26583  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 145458  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          139264  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

1.2 Install base software packages

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet

# Stop the iptables service and disable it at boot
service iptables stop && systemctl disable iptables
# Flush existing rules
iptables -F

1.3 Install the containerd service

# 1.3.1 Install containerd
yum install containerd -y
# An alternative route: cri-dockerd lets Kubernetes 1.24 keep talking to the Docker container runtime, which means users can still one-click install the latest Kubernetes in Docker Desktop and use it seamlessly as before
# Or install from the release bundle on the containerd GitHub page: download cri-containerd-cni-1.6.9-linux-amd64.tar.gz
# Unpack it to / (adjust the filename to match the version you downloaded)
tar -zxvf cri-containerd-cni-1.6.4-linux-amd64.tar.gz -C /

# Note: in testing, the runc shipped in cri-containerd-cni-1.6.4-linux-amd64.tar.gz has broken dynamic linking on CentOS 7, so download runc separately from its GitHub releases and replace the one installed above:
wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64

cp runc.amd64 /usr/local/sbin/runc 
chmod +x /usr/local/sbin/runc  
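A quick sanity check that the replacement binary runs (output will vary by version):

/usr/local/sbin/runc --version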

# 1.3.2 Generate the default configuration file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml

# 1.3.3 Per the "Container runtimes" documentation: on Linux distributions that use systemd as the init system, using systemd as the container cgroup driver keeps nodes more stable under resource pressure,
# so set containerd's cgroup driver to systemd on every node. In the config.toml generated above, change SystemdCgroup from false to true:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true  # changed from false
# Registry mirrors
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
            endpoint = ["https://docker.mirrors.ustc.edu.cn","http://hub-mirror.c.163.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
            endpoint = ["https://gcr.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
            endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
            endpoint = ["https://quay.mirrors.ustc.edu.cn"]
# Also change the sandbox image in /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
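If you prefer to script the two config.toml edits instead of editing by hand (a sketch; the sed patterns assume the default generated layout, so verify the file afterwards):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml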

# 1.3.4 Enable containerd at boot and start it (enable --now also starts it)
systemctl enable containerd --now
systemctl status containerd

2. Install the packages needed to initialize Kubernetes

# 1. Install the packages needed to initialize Kubernetes (on both master and node)
yum install kubelet-1.24.8 kubeadm-1.24.8 kubectl-1.24.8 -y
systemctl enable kubelet && systemctl status kubelet
# kubelet will not show as running yet; that is normal and can be ignored. It comes up once the Kubernetes components start.
 
Note: what each package does
kubeadm: a tool for initializing the Kubernetes cluster
kubelet: installed on every node in the cluster; responsible for starting Pods
kubectl: used to deploy and manage applications, inspect resources, and create, delete, and update components
# 2. Initialize the cluster with kubeadm
# Set the container runtime endpoint (run on both master and node)
crictl config runtime-endpoint /run/containerd/containerd.sock
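crictl config writes its settings to /etc/crictl.yaml; creating the file directly is equivalent (a sketch with the commonly used fields):

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF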

# Generate a default kubeadm config to edit
kubeadm config print init-defaults > kubeadm.yaml

Adjust kubeadm.yaml to your needs: change the imageRepository value, set the kube-proxy mode to ipvs, and, since containerd is the runtime, set cgroupDriver to systemd when initializing the nodes.

[root@master ~]# cat kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.40  # control-plane node IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock  # containerd as the container runtime
  imagePullPolicy: IfNotPresent
  name: master  # control-plane node hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # Aliyun image registry
kind: ClusterConfiguration
kubernetesVersion: 1.24.8  # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16     # Pod CIDR
  serviceSubnet: 10.96.0.0/12  # Service CIDR
scheduler: {}
---
# Everything below is newly added:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
# List the images that will be pulled
[root@master ~]# kubeadm config images list --config kubeadm.yaml
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.8
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.8
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.8
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.8
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.5-0
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6

# If the network is slow, pre-pull the images first
kubeadm config images pull --config kubeadm.yaml


# 3. Initialize the cluster based on kubeadm.yaml
# If initialization fails, run kubeadm reset, fix the problem, then run init again
[root@master ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
[init] Using Kubernetes version: v1.24.8
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 3.10.0-1160.el7.x86_64
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found.\n", err: exit status 1
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.10.40]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.10.40 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.10.40 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.002097 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.40:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:500cb8e16b5d2039f07992b380edec2acb3bd4669c2504d10366b490f34144eb
# Set up the kubectl config file; this effectively authorizes kubectl so it can manage the cluster with this certificate
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   3m47s   v1.24.8

3. Scale out the cluster: add the first worker node

Join node1 to the cluster

# Run on master
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.10.40:6443 --token 4ruzyb.4tjt6yjxub6ivvhe --discovery-token-ca-cert-hash sha256:500cb8e16b5d2039f07992b380edec2acb3bd4669c2504d10366b490f34144eb

# Run on node1
[root@node1 ~]# kubeadm join 192.168.10.40:6443 --token 4ruzyb.4tjt6yjxub6ivvhe --discovery-token-ca-cert-hash sha256:500cb8e16b5d2039f07992b380edec2acb3bd4669c2504d10366b490f34144eb --ignore-preflight-errors=SystemVerification
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 3.10.0-1160.el7.x86_64
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
    [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found.\n", err: exit status 1
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Possible error
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found.\n", err: exit status 1

Option 1: ignore the error
Add the --ignore-preflight-errors=SystemVerification flag to skip it. It is hard to say yet whether skipping this check will cause problems later.

Option 2: upgrade the kernel
After upgrading the kernel to 5.13.7 the error no longer appeared, though it is not certain the kernel version was the cause.
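For reference, a common way to get a newer mainline kernel on CentOS 7 is the ELRepo repository (a sketch; verify the ELRepo URLs are still current, and note this reboots the machine):

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-ml
grub2-set-default 0 && reboot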

Check the cluster nodes

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE    VERSION
master   NotReady   control-plane   38m    v1.24.8
node1    NotReady   <none>          117s   v1.24.8

# NotReady: the network plugin has not been installed yet

# Label node1 as a worker
[root@master ~]# kubectl label nodes node1 node-role.kubernetes.io/work=work
node/node1 labeled
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   41m     v1.24.8
node1    NotReady   work            4m49s   v1.24.8

4. Install the Calico network plugin for Kubernetes

Calico's supported Kubernetes versions: https://projectcalico.docs.tigera.io/archive/v3.24/getting-started/kubernetes/requirements

calico.yaml manifest (online download): https://docs.projectcalico.org/manifests/calico.yaml

Calico GitHub: https://github.com/projectcalico/calico/releases

Other installation methods

# Offline: import a locally downloaded image archive
# (if the kubelet cannot see imported images, import into containerd's k8s.io namespace instead: ctr -n k8s.io images import calico.tar.gz)
ctr images import calico.tar.gz

# Apply calico.yaml directly
kubectl apply -f calico.yaml
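Before applying, it is worth checking the pod CIDR: Calico can usually auto-detect the kubeadm podSubnet, but if pod IPs do not land in 10.244.0.0/16, uncomment and set CALICO_IPV4POOL_CIDR in calico.yaml to mirror kubeadm.yaml, then watch the rollout:

# In calico.yaml (calico-node DaemonSet env), assumed to match podSubnet:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.244.0.0/16"
kubectl get pods -n kube-system -w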

[root@master calico]# crictl images
I1114 16:49:29.739036   48469 util_unix.go:104] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/run/containerd/containerd.sock" URL="unix:///run/containerd/containerd.sock"
IMAGE                                                                         TAG                 IMAGE ID            SIZE
docker.io/calico/cni                                                          v3.24.5             628dd70880410       87.5MB
docker.io/calico/node                                                         v3.24.5             54637cb36d4a1       81.6MB
registry.aliyuncs.com/google_containers/pause                                 3.7                 221177c6082a8       311kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.7                 221177c6082a8       311kB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   v1.8.6              a4ca41631cc7a       13.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.5.5-0             4694d02f8e611       102MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.24.8             c7cbaca6e63b4       33.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.24.8             9e2bfc195de6b       31MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.24.8             a49578203a3c2       39.5MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.24.8             9efa6dff568f6       15.5MB

[root@master calico]# kubectl get nodes -o wide
NAME     STATUS   ROLES           AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
master   Ready    control-plane   130m   v1.24.8   192.168.10.40   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.9
node1    Ready    work            94m    v1.24.8   192.168.10.41   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.9

[root@master calico]# kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE    IP               NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-84c476996d-ll7lv   1/1     Running   0          21m    10.244.166.131   node1    <none>           <none>
calico-node-b9wjb                          1/1     Running   0          21m    192.168.10.40    master   <none>           <none>
calico-node-jq5sl                          1/1     Running   0          21m    192.168.10.41    node1    <none>           <none>
coredns-7f74c56694-nxs89                   1/1     Running   0          131m   10.244.166.129   node1    <none>           <none>
coredns-7f74c56694-rnz7r                   1/1     Running   0          131m   10.244.166.130   node1    <none>           <none>
etcd-master                                1/1     Running   1          131m   192.168.10.40    master   <none>           <none>
kube-apiserver-master                      1/1     Running   1          131m   192.168.10.40    master   <none>           <none>
kube-controller-manager-master             1/1     Running   1          131m   192.168.10.40    master   <none>           <none>
kube-proxy-vfvgn                           1/1     Running   0          94m    192.168.10.41    node1    <none>           <none>
kube-proxy-wgchb                           1/1     Running   0          131m   192.168.10.40    master   <none>           <none>
kube-scheduler-master                      1/1     Running   1          131m   192.168.10.40    master   <none>           <none>

5. Testing

5.1 Test that a pod created in the cluster can reach the network

# Import the busybox-1-28.tar.gz image on node1
[root@node1 ~]# ctr images import busybox-1-28.tar.gz 
unpacking docker.io/library/busybox:1.28 (sha256:585093da3a716161ec2b2595011051a90d2f089bc2a25b4a34a18e2cf542527c)...done

[root@node1 ~]# ctr images ls
REF                            TYPE                                                 DIGEST                                                                  SIZE    PLATFORMS   LABELS 
docker.io/library/busybox:1.28 application/vnd.docker.distribution.manifest.v2+json sha256:585093da3a716161ec2b2595011051a90d2f089bc2a25b4a34a18e2cf542527c 1.3 MiB linux/amd64 - 

# Create a pod
[root@master ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # ping baidu.com
PING baidu.com (39.156.66.10): 56 data bytes
64 bytes from 39.156.66.10: seq=0 ttl=127 time=34.372 ms
64 bytes from 39.156.66.10: seq=1 ttl=127 time=31.740 ms
# The pod can reach the outside network, so the calico network plugin is installed and working

5.2 Test that CoreDNS works

[root@master ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

10.96.0.10 is the clusterIP of CoreDNS, so CoreDNS is configured correctly.
Internal Service names are resolved through CoreDNS.

5.3 Deploy a tomcat service in the cluster

[root@node1 ~]# ctr images import tomcat.tar.gz 
unpacking docker.io/library/tomcat:8.5-jre8-alpine (sha256:463a0b1de051bff2208f81a86bdf4e7004eb68c0edfcc658f2e2f367aab5e342)...done

[root@master ~]# kubectl apply -f tomcat.yaml 
pod/demo-pod created
[root@master ~]# kubectl apply -f tomcat-service.yaml 
service/tomcat created

[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          30m
tomcat       NodePort    10.108.235.146   <none>        8080:30080/TCP   39s


[root@master ~]# cat tomcat.yaml 
apiVersion: v1  # Pod is in the core v1 API group
kind: Pod  # the resource being created is a Pod
metadata:  # metadata
  name: demo-pod  # pod name
  namespace: default  # namespace the pod belongs to
  labels:
    app: myapp  # pod label
    env: dev    # pod label
spec:
  containers:  # containers is a list of objects; multiple name entries are allowed
  - name: tomcat-pod-java  # container name
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine  # image the container runs
    imagePullPolicy: IfNotPresent

[root@master ~]# cat tomcat-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30080
  selector:
    app: myapp
    env: dev

http://192.168.10.40:30080
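A quick check from the shell before opening the browser (either node's IP works, since this is a NodePort service):

curl -I http://192.168.10.40:30080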

 

6. Install the Kubernetes dashboard UI

Reference: https://www.cnblogs.com/yangmeichong/p/16477200.html

Manifest download (GitHub): https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml

6.1 Server-side installation

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

# Check the pods
[root@master ~]# kubectl get pods -n kubernetes-dashboard
NAME                                        READY   STATUS              RESTARTS   AGE
dashboard-metrics-scraper-8c47d4b5d-xvkjl   0/1     ContainerCreating   0          16s
kubernetes-dashboard-6c75475678-k7pqx       0/1     ContainerCreating   0          16s
# Wait for STATUS to become Running

# Check the dashboard's frontend Service
[root@master ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.99.193.248   <none>        8000/TCP   60s
kubernetes-dashboard        ClusterIP   10.96.198.20    <none>        443/TCP    61s

# Change the Service type to NodePort
[root@master ~]#  kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
service/kubernetes-dashboard edited
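A non-interactive alternative to kubectl edit (a sketch using kubectl patch):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'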

[root@master ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.99.193.248   <none>        8000/TCP        4m57s
kubernetes-dashboard        NodePort    10.96.198.20    <none>        443:30493/TCP   4m58s

Access via the worker node: https://192.168.10.41:30493

6.2 Dashboard login page

6.3 Access the dashboard with a token

# Create an admin account with permission to view every namespace and manage all resource objects
[root@master ~]# kubectl create clusterrolebinding  dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created

# Confirm the service account was created
[root@master ~]# kubectl -n kubernetes-dashboard get serviceaccounts |grep kubernetes-dashboard
kubernetes-dashboard   0         14m

# Create a token
# Since v1.24.0, creating a ServiceAccount no longer auto-generates a Secret; the token must be created manually
# --duration sets the expiry and can be omitted
[root@master ~]# kubectl -n kubernetes-dashboard create token kubernetes-dashboard --duration 604800s
eyJhbGciOiJSUzI1NiIsImtpZCI6IllOMjBpOVJyNXljbDdIdVJoOGwzbnh0d0t5SzY4TGRvbE1femNueXFyTW8ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjY5ODc3OTUwLCJpYXQiOjE2NjkyNzMxNTAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInVpZCI6IjJmNDMxMmU5LTdmMjctNDc2ZC1iMGJkLWJlZjVlMjc3MzAyNSJ9fSwibmJmIjoxNjY5MjczMTUwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQifQ.kAwStN-Cz8TfFszD2FpPb6AbBWaEePSZ_19UP2BUBYUjBwGVhL2IfnAjdOuBeU7qGGhzNZNOhjjBI-NQeNraYDexxMrVSzJ7Wh4kN5s6HROaBuStrL1CimKnPvc_YAIuPMpg1nY9FG4S0gDJXphqxAQsoYkrKAGmuLeCpH_lbC-S5pyapYxViwC4iNQT0KEtgh593pFJCebk68n5X-OARRJ0k42tpH_I7Q7fhHBvX16jeAin0MUKQ9AN0SEO3kFhwwNx7Zt4FZ5IE3QODKXE5y1PvB3Pd7w3lmFMTHR1ru5yb747yDdyHUVT3KqEXjPfKIrb2RIhmsFByZ_B5wZxXQ

Log in with the token to verify:

6.4 Access the dashboard with a kubeconfig file

# Create a service account; of course the admin account above also works (this walkthrough uses the admin account)
kubectl -n kubernetes-dashboard create serviceaccount dashboard-admin

# Inspect the service account; no token has been created for it yet
[root@master pki]# kubectl -n kubernetes-dashboard describe serviceaccounts kubernetes-dashboard
Name:                kubernetes-dashboard
Namespace:           kubernetes-dashboard
Labels:              k8s-app=kubernetes-dashboard
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>

# Create the secret (YAML manifest)
kubectl apply -f- <<EOF
apiVersion: v1                                                                                           
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "kubernetes-dashboard"
EOF
secret/kubernetes-dashboard created

# The kubernetes-dashboard account is now linked to the secret
[root@master pki]# kubectl -n kubernetes-dashboard describe serviceaccounts kubernetes-dashboard
Name:                kubernetes-dashboard
Namespace:           kubernetes-dashboard
Labels:              k8s-app=kubernetes-dashboard
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              kubernetes-dashboard   # the ServiceAccount now has an associated Secret
Events:              <none>

# From here on it works much like earlier versions
# List the secrets in the kubernetes-dashboard namespace
[root@master pki]#  kubectl get secret -n kubernetes-dashboard
NAME                              TYPE                                  DATA   AGE
kubernetes-dashboard              kubernetes.io/service-account-token   3      3m2s
kubernetes-dashboard-certs        Opaque                                0      73m
kubernetes-dashboard-csrf         Opaque                                1      73m
kubernetes-dashboard-key-holder   Opaque                                2      73m

# View the corresponding token
[root@master pki]# kubectl describe secret kubernetes-dashboard -n kubernetes-dashboard
Name:         kubernetes-dashboard
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 2f4312e9-7f27-476d-b0bd-bef5e2773025

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IllOMjBpOVJyNXljbDdIdVJoOGwzbnh0d0t5SzY4TGRvbE1femNueXFyTW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJmNDMxMmU5LTdmMjctNDc2ZC1iMGJkLWJlZjVlMjc3MzAyNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.mVJxB2S7kK9bVCLixseS4EFs3oj5Rffhu4eosE8xi4AAd1uqmBPGcoJSSL-MfwMmpX1KfabNU8BMYlp93uoiogF24mQHkxkE2gzvvMyoD9QoEY31WmEGRONRbFHgylW2TkDYXHKWMGAzrlzSwvdpci6U-00W6V6uss28xnfvn04XL5M2oB9y69qpOZXr9UBK7XRAGAQYVidbp_XsrT3G0T0iXx3AqKTt6tctpZpr3T3dbfZNFmDICaXyQaHAL7KZjQ_-YMkIIzipRhxfXl3pmlrvcVFQbKsy1OtbpJ8e0sAb_Tx3kfI2nTGYpn3U1-HQRESuA0eOFGSszMtyGBBrpg
# Note: the --server value would normally include the https:// scheme
[root@master pki]# kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="192.168.10.40:6443" --embed-certs=true --kubeconfig=/root/dashboard-admin.conf
Cluster "kubernetes" set.

[root@master pki]# cat /root/dashboard-admin.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1URXlNakE0TkRjek9Gb1hEVE15TVRFeE9UQTRORGN6T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWFUCkdCU2thWU5laXg4T3VEODRBOGY5Z3ArWHlTRnFRVUtMR0ovMlFHU21sZzRmMTRTclliR3hxRWVZc2lqNU90N2MKSG9aWFdXemZZSkRsTktwZWdyT08ySFA4dkpSTllKQnlKMDR2eXVadWtUNlpHOWQxQTRFVU53bEVzajhsSnRUNAp0RkVHY0RrY2R5bDl4QnBzb1dTUVZvcUdEdU04QXFaOUVpZXlkbzJwMFBxaU4yNjV0UDJWQWx1TTF1RWtQQ1dUCmFnZDg4QlhsU1BuUElkWVI5S09lRmNKV1pYckJzamJnUk1YYWp1MjdIbkQ4dFNHaWZSNURCbHkvVklxYURMYlMKc3FHVi9RZEhmMnpMTmRzd2FiaXdXaFpZWGNDN055ZHpCYlBQcFc4NHRKRWhMcHQ1N2t1Y3E3K0VGRzZ6bDQ0TQpwUWpUckxwTFdISHhnTm1YaGdjQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCWW9YSUNZVnVrTm1tOHlEcUh5VkIyNEo2ek5NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTER2ZEZBZjh2Q0oxVk8rdUNnZApibHJuQ29HeWVCZFBLTWM5Y2RLb2tvU3hwR3M1OE85c0Z6VThscXR6VXRsSzVWd1VwcVRxeHp0UmFRblA0Zjd0CnJTSEh3Tk1CbDUxcXVKTlg3MHEzZ1dsOThrQmRoNTU2R21hclB6eEx6OGxsSXpUMjlEQUVBUEdwSEEwVjBYYzkKamFBbDA0ajQwc0JFKzB4b0xjamJNa1Y5UjlZbTR6a0R4UWs5dWFpNVdPRHVnKytmRytFL1ZYT0xMSTROTmgxYQpvc0IrbFhXbkV3QTRnSytFc3dZOHovMW5wQkhUaDdEV1BoWkJQeHNyUHVUbDJPMmVGdFdiUlRzNWxOQlpBV1lJCllTY0VnS2xhOXR1R2VtL3c4b2RpWmlsTnEreVpIOVljbGd0ekhKVWtCRDk5Wmo1TldOL2pYUEFzUmkxWXYxM3IKUE5NPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: 192.168.10.40:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

Create the credentials

# Create credentials using the token from the kubernetes-dashboard secret above
[root@master ~]# DEF_NS_ADMIN_TOKEN=$(kubectl get secret kubernetes-dashboard -n kubernetes-dashboard -o jsonpath={.data.token}|base64 -d)
[root@master ~]# kubectl config set-credentials dashboard-admin --token=$DEF_NS_ADMIN_TOKEN --kubeconfig=/root/dashboard-admin.conf
User "dashboard-admin" set
[root@master ~]# cat /root/dashboard-admin.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1URXlNakE0TkRjek9Gb1hEVE15TVRFeE9UQTRORGN6T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWFUCkdCU2thWU5laXg4T3VEODRBOGY5Z3ArWHlTRnFRVUtMR0ovMlFHU21sZzRmMTRTclliR3hxRWVZc2lqNU90N2MKSG9aWFdXemZZSkRsTktwZWdyT08ySFA4dkpSTllKQnlKMDR2eXVadWtUNlpHOWQxQTRFVU53bEVzajhsSnRUNAp0RkVHY0RrY2R5bDl4QnBzb1dTUVZvcUdEdU04QXFaOUVpZXlkbzJwMFBxaU4yNjV0UDJWQWx1TTF1RWtQQ1dUCmFnZDg4QlhsU1BuUElkWVI5S09lRmNKV1pYckJzamJnUk1YYWp1MjdIbkQ4dFNHaWZSNURCbHkvVklxYURMYlMKc3FHVi9RZEhmMnpMTmRzd2FiaXdXaFpZWGNDN055ZHpCYlBQcFc4NHRKRWhMcHQ1N2t1Y3E3K0VGRzZ6bDQ0TQpwUWpUckxwTFdISHhnTm1YaGdjQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCWW9YSUNZVnVrTm1tOHlEcUh5VkIyNEo2ek5NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTER2ZEZBZjh2Q0oxVk8rdUNnZApibHJuQ29HeWVCZFBLTWM5Y2RLb2tvU3hwR3M1OE85c0Z6VThscXR6VXRsSzVWd1VwcVRxeHp0UmFRblA0Zjd0CnJTSEh3Tk1CbDUxcXVKTlg3MHEzZ1dsOThrQmRoNTU2R21hclB6eEx6OGxsSXpUMjlEQUVBUEdwSEEwVjBYYzkKamFBbDA0ajQwc0JFKzB4b0xjamJNa1Y5UjlZbTR6a0R4UWs5dWFpNVdPRHVnKytmRytFL1ZYT0xMSTROTmgxYQpvc0IrbFhXbkV3QTRnSytFc3dZOHovMW5wQkhUaDdEV1BoWkJQeHNyUHVUbDJPMmVGdFdiUlRzNWxOQlpBV1lJCllTY0VnS2xhOXR1R2VtL3c4b2RpWmlsTnEreVpIOVljbGd0ekhKVWtCRDk5Wmo1TldOL2pYUEFzUmkxWXYxM3IKUE5NPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: 192.168.10.40:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IllOMjBpOVJyNXljbDdIdVJoOGwzbnh0d0t5SzY4TGRvbE1femNueXFyTW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJmNDMxMmU5LTdmMjctNDc2ZC1iMGJkLWJlZjVlMjc3MzAyNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.mVJxB2S7kK9bVCLixseS4EFs3oj5Rffhu4eosE8xi4AAd1uqmBPGcoJSSL-MfwMmpX1KfabNU8BMYlp93uoiogF24mQHkxkE2gzvvMyoD9QoEY31WmEGRONRbFHgylW2TkDYXHKWMGAzrlzSwvdpci6U-00W6V6uss28xnfvn04XL5M2oB9y69qpOZXr9UBK7XRAGAQYVidbp_XsrT3G0T0iXx3AqKTt6tctpZpr3T3dbfZNFmDICaXyQaHAL7KZjQ_-YMkIIzipRhxfXl3pmlrvcVFQbKsy1OtbpJ8e0sAb_Tx3kfI2nTGYpn3U1-HQRESuA0eOFGSszMtyGBBrpg

Create a context

[root@master pki]# kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/dashboard-admin.conf 
Context "dashboard-admin@kubernetes" created.
[root@master pki]# cat /root/dashboard-admin.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1URXlNakE0TkRjek9Gb1hEVE15TVRFeE9UQTRORGN6T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWFUCkdCU2thWU5laXg4T3VEODRBOGY5Z3ArWHlTRnFRVUtMR0ovMlFHU21sZzRmMTRTclliR3hxRWVZc2lqNU90N2MKSG9aWFdXemZZSkRsTktwZWdyT08ySFA4dkpSTllKQnlKMDR2eXVadWtUNlpHOWQxQTRFVU53bEVzajhsSnRUNAp0RkVHY0RrY2R5bDl4QnBzb1dTUVZvcUdEdU04QXFaOUVpZXlkbzJwMFBxaU4yNjV0UDJWQWx1TTF1RWtQQ1dUCmFnZDg4QlhsU1BuUElkWVI5S09lRmNKV1pYckJzamJnUk1YYWp1MjdIbkQ4dFNHaWZSNURCbHkvVklxYURMYlMKc3FHVi9RZEhmMnpMTmRzd2FiaXdXaFpZWGNDN055ZHpCYlBQcFc4NHRKRWhMcHQ1N2t1Y3E3K0VGRzZ6bDQ0TQpwUWpUckxwTFdISHhnTm1YaGdjQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCWW9YSUNZVnVrTm1tOHlEcUh5VkIyNEo2ek5NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTER2ZEZBZjh2Q0oxVk8rdUNnZApibHJuQ29HeWVCZFBLTWM5Y2RLb2tvU3hwR3M1OE85c0Z6VThscXR6VXRsSzVWd1VwcVRxeHp0UmFRblA0Zjd0CnJTSEh3Tk1CbDUxcXVKTlg3MHEzZ1dsOThrQmRoNTU2R21hclB6eEx6OGxsSXpUMjlEQUVBUEdwSEEwVjBYYzkKamFBbDA0ajQwc0JFKzB4b0xjamJNa1Y5UjlZbTR6a0R4UWs5dWFpNVdPRHVnKytmRytFL1ZYT0xMSTROTmgxYQpvc0IrbFhXbkV3QTRnSytFc3dZOHovMW5wQkhUaDdEV1BoWkJQeHNyUHVUbDJPMmVGdFdiUlRzNWxOQlpBV1lJCllTY0VnS2xhOXR1R2VtL3c4b2RpWmlsTnEreVpIOVljbGd0ekhKVWtCRDk5Wmo1TldOL2pYUEFzUmkxWXYxM3IKUE5NPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: 192.168.10.40:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dashboard-admin
  name: dashboard-admin@kubernetes
current-context: ""
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IllOMjBpOVJyNXljbDdIdVJoOGwzbnh0d0t5SzY4TGRvbE1femNueXFyTW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJmNDMxMmU5LTdmMjctNDc2ZC1iMGJkLWJlZjVlMjc3MzAyNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.mVJxB2S7kK9bVCLixseS4EFs3oj5Rffhu4eosE8xi4AAd1uqmBPGcoJSSL-MfwMmpX1KfabNU8BMYlp93uoiogF24mQHkxkE2gzvvMyoD9QoEY31WmEGRONRbFHgylW2TkDYXHKWMGAzrlzSwvdpci6U-00W6V6uss28xnfvn04XL5M2oB9y69qpOZXr9UBK7XRAGAQYVidbp_XsrT3G0T0iXx3AqKTt6tctpZpr3T3dbfZNFmDICaXyQaHAL7KZjQ_-YMkIIzipRhxfXl3pmlrvcVFQbKsy1OtbpJ8e0sAb_Tx3kfI2nTGYpn3U1-HQRESuA0eOFGSszMtyGBBrpg

Switch current-context to dashboard-admin@kubernetes

[root@master pki]# kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/dashboard-admin.conf 
Switched to context "dashboard-admin@kubernetes".

[root@master pki]# cat /root/dashboard-admin.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1URXlNakE0TkRjek9Gb1hEVE15TVRFeE9UQTRORGN6T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWFUCkdCU2thWU5laXg4T3VEODRBOGY5Z3ArWHlTRnFRVUtMR0ovMlFHU21sZzRmMTRTclliR3hxRWVZc2lqNU90N2MKSG9aWFdXemZZSkRsTktwZWdyT08ySFA4dkpSTllKQnlKMDR2eXVadWtUNlpHOWQxQTRFVU53bEVzajhsSnRUNAp0RkVHY0RrY2R5bDl4QnBzb1dTUVZvcUdEdU04QXFaOUVpZXlkbzJwMFBxaU4yNjV0UDJWQWx1TTF1RWtQQ1dUCmFnZDg4QlhsU1BuUElkWVI5S09lRmNKV1pYckJzamJnUk1YYWp1MjdIbkQ4dFNHaWZSNURCbHkvVklxYURMYlMKc3FHVi9RZEhmMnpMTmRzd2FiaXdXaFpZWGNDN055ZHpCYlBQcFc4NHRKRWhMcHQ1N2t1Y3E3K0VGRzZ6bDQ0TQpwUWpUckxwTFdISHhnTm1YaGdjQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCWW9YSUNZVnVrTm1tOHlEcUh5VkIyNEo2ek5NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTER2ZEZBZjh2Q0oxVk8rdUNnZApibHJuQ29HeWVCZFBLTWM5Y2RLb2tvU3hwR3M1OE85c0Z6VThscXR6VXRsSzVWd1VwcVRxeHp0UmFRblA0Zjd0CnJTSEh3Tk1CbDUxcXVKTlg3MHEzZ1dsOThrQmRoNTU2R21hclB6eEx6OGxsSXpUMjlEQUVBUEdwSEEwVjBYYzkKamFBbDA0ajQwc0JFKzB4b0xjamJNa1Y5UjlZbTR6a0R4UWs5dWFpNVdPRHVnKytmRytFL1ZYT0xMSTROTmgxYQpvc0IrbFhXbkV3QTRnSytFc3dZOHovMW5wQkhUaDdEV1BoWkJQeHNyUHVUbDJPMmVGdFdiUlRzNWxOQlpBV1lJCllTY0VnS2xhOXR1R2VtL3c4b2RpWmlsTnEreVpIOVljbGd0ekhKVWtCRDk5Wmo1TldOL2pYUEFzUmkxWXYxM3IKUE5NPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: 192.168.10.40:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dashboard-admin
  name: dashboard-admin@kubernetes
current-context: dashboard-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IllOMjBpOVJyNXljbDdIdVJoOGwzbnh0d0t5SzY4TGRvbE1femNueXFyTW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJmNDMxMmU5LTdmMjctNDc2ZC1iMGJkLWJlZjVlMjc3MzAyNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.mVJxB2S7kK9bVCLixseS4EFs3oj5Rffhu4eosE8xi4AAd1uqmBPGcoJSSL-MfwMmpX1KfabNU8BMYlp93uoiogF24mQHkxkE2gzvvMyoD9QoEY31WmEGRONRbFHgylW2TkDYXHKWMGAzrlzSwvdpci6U-00W6V6uss28xnfvn04XL5M2oB9y69qpOZXr9UBK7XRAGAQYVidbp_XsrT3G0T0iXx3AqKTt6tctpZpr3T3dbfZNFmDICaXyQaHAL7KZjQ_-YMkIIzipRhxfXl3pmlrvcVFQbKsy1OtbpJ8e0sAb_Tx3kfI2nTGYpn3U1-HQRESuA0eOFGSszMtyGBBrpg

Copy the generated dashboard-admin.conf to your local desktop, open https://192.168.10.41:30493 in the browser again, and import the dashboard-admin.conf file.

 

7. Create a pod through the Kubernetes dashboard

Import the nginx image on the worker node

[root@node1 ~]# ctr images import nginx.tar.gz 
unpacking docker.io/library/nginx:latest (sha256:7165e6259cef192bee32f171c883e3950a8122f14cce1c9009da5b6d86f73828)...done

In the dashboard, click + in the upper-right corner, then switch to "Create from form".

 

The nginx deployment now shows up on the main page. Click "Services" in the left menu, then open http://192.168.10.41:32556/ in the browser.

8. Deploy the metrics-server component

github:https://github.com/kubernetes-sigs/metrics-server/releases

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml

# Replace the image with a domestic (Aliyun) mirror and adjust the kubelet flags in components.yaml
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP    # drop the remaining ExternalIP,Hostname entries
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
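To script the image swap instead of hand-editing (a sketch; the upstream v0.6.1 manifest references k8s.gcr.io/metrics-server/metrics-server, so verify the sed pattern against the file you actually downloaded):

curl -sL https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml \
  | sed 's#k8s.gcr.io/metrics-server/metrics-server#registry.aliyuncs.com/google_containers/metrics-server#' \
  > components.yaml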
[root@master metrics-server]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

[root@master ~]# kubectl get pods -n kube-system | grep metrics
metrics-server-864d8c5bc7-7mlpt            0/1     ContainerCreating   0             8s

# Watch the creation events
[root@master ~]# kubectl describe pods metrics-server-864d8c5bc7-7mlpt -n kube-system
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  37s   default-scheduler  Successfully assigned kube-system/metrics-server-864d8c5bc7-7mlpt to node1
  Normal  Pulling    36s   kubelet            Pulling image "registry.aliyuncs.com/google_containers/metrics-server:v0.6.1"
  Normal  Pulled     2s    kubelet            Successfully pulled image "registry.aliyuncs.com/google_containers/metrics-server:v0.6.1" in 34.519913264s
  Normal  Created    2s    kubelet            Created container metrics-server
  Normal  Started    1s    kubelet            Started container metrics-server

[root@master ~]# kubectl get pods -n kube-system | grep metrics
metrics-server-864d8c5bc7-7mlpt            0/1     Running   0             4m26s
# Tweak the apiserver config in /etc/kubernetes/manifests
[root@master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-aggregator-routing=true
# Note: the kubelet reloads static pod manifests on its own; the kubectl apply below is not required and merely creates a stray mirror pod (seen as CrashLoopBackOff further down)
[root@master ~]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml 
pod/kube-apiserver created
[root@master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS             RESTARTS        AGE
calico-kube-controllers-84c476996d-vqb94   1/1     Running            2 (75m ago)     2d
calico-node-c9bkr                          1/1     Running            0               2d
calico-node-ktg8g                          1/1     Running            1 (47h ago)     2d
coredns-7f74c56694-jq4r4                   1/1     Running            1 (47h ago)     2d
coredns-7f74c56694-jttgk                   1/1     Running            1 (47h ago)     2d
etcd-master                                1/1     Running            0               2d
kube-apiserver                             0/1     CrashLoopBackOff   3 (14s ago)     109s
kube-apiserver-master                      1/1     Running            0               3m2s
kube-controller-manager-master             1/1     Running            4 (3m31s ago)   2d
kube-proxy-2lncp                           1/1     Running            0               2d
kube-proxy-xv7jn                           1/1     Running            1 (47h ago)     2d
kube-scheduler-master                      1/1     Running            4 (3m31s ago)   2d
metrics-server-864d8c5bc7-7mlpt            0/1     Running            0               9m58s
#kube-apiserver (without the hostname suffix) serves nothing; it was created by running kubectl apply on the manifest. The one actually serving is kube-apiserver-master, the static pod named after the host

# Delete the apiserver pod stuck in CrashLoopBackOff
[root@master ~]# kubectl delete pods kube-apiserver -n kube-system
pod "kube-apiserver" deleted

Test the kubectl top command

[root@master metrics-server]# kubectl top pod -n kube-system 
NAME                                       CPU(cores)   MEMORY(bytes)   
calico-kube-controllers-84c476996d-vqb94   4m           15Mi            
calico-node-c9bkr                          83m          132Mi           
calico-node-ktg8g                          119m         139Mi           
coredns-7f74c56694-jq4r4                   4m           15Mi            
coredns-7f74c56694-jttgk                   4m           18Mi            
etcd-master                                37m          219Mi           
kube-apiserver-master                      90m          320Mi           
kube-controller-manager-master             63m          49Mi            
kube-proxy-2lncp                           1m           20Mi            
kube-proxy-xv7jn                           8m           19Mi            
kube-scheduler-master                      10m          19Mi            
metrics-server-84cb997f99-qjznb            7m           10Mi 

[root@master metrics-server]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   357m         17%    1822Mi          49%       
node1    255m         12%    1543Mi          42% 

 
