K8s Overview, Cluster Setup, and Basic kubectl Usage

  Having covered Docker earlier, I wanted to follow up with a systematic look at Kubernetes.

  Reference: https://www.kubernetes.org.cn/k8s

1. Kubernetes Overview

1. Introduction

  Kubernetes is abbreviated k8s: the 8 stands for the eight letters 'ubernete' between the k and the s. It is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, planning, updating, and maintenance. Open-sourced by Google, it is a container orchestration engine supporting automated deployment, large-scale elasticity, and containerized application management.

  The modern deployment approach is container-based: containers are isolated from one another, each has its own filesystem, processes in different containers do not interfere with each other, and compute resources can be partitioned. Compared with virtual machines, containers deploy quickly, and because they are decoupled from the underlying infrastructure and the host filesystem, they can be migrated across clouds and across operating system versions.

  When deploying an application to production, you usually run multiple instances of it so that requests can be load-balanced. In k8s, we can create multiple containers, run one instance in each, and then rely on the built-in load-balancing strategy to manage, discover, and access this group of instances.

2. k8s Features

1. Automatic bin packing: automatically places application containers based on the resource requirements of the application's runtime environment

2. Self-healing: restarts containers that fail; when a node goes bad, redeploys and reschedules its containers elsewhere

3. Horizontal scaling: scales applications out or in as needed, with service discovery and load balancing built into Kubernetes itself

4. Rolling updates: as the application changes, updates the running containers one at a time or in batches

5. Version rollback: rolls an application back to a previous version

6. Secret and configuration management: secrets and application configuration can be deployed and updated without rebuilding images, similar to hot deployment

7. Storage orchestration: automatically mounts storage systems for applications, which is especially important for persisting data in stateful applications; storage can come from local directories, network storage, public cloud storage services, and so on

8. Batch processing: provides one-off and scheduled jobs to cover batch data processing and analysis scenarios
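To make item 6 above concrete, here is a hedged sketch of managing a secret and a config value without rebuilding any image (the resource names `db-pass` and `app-conf` are made up for illustration; this needs a running cluster):

```shell
# Create a secret and a configmap directly from literals; no image rebuild involved.
kubectl create secret generic db-pass --from-literal=password='S3cret!'
kubectl create configmap app-conf --from-literal=log_level=debug

# Pods reference these as environment variables or mounted files, so updating
# the secret/configmap changes configuration independently of the image.
```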

3. k8s Cluster Architecture and Core Concepts

1. Cluster Architecture

(1) master (control-plane node): schedules and manages the cluster, and receives operation requests from users outside the cluster

apiserver: the single entry point for all resource operations; provides authentication, authorization, access control, and API registration/discovery; exposes a RESTful interface and persists state to etcd

scheduler: node scheduling; picks a node on which to place each application, assigning Pods to machines according to the configured scheduling policy

controller manager: maintains cluster state, e.g. failure detection, auto-scaling, rolling updates; each resource type has a corresponding controller

etcd: the storage system that holds cluster data, e.g. state data and pod data

(2) worker node: runs the user's application containers

kubelet: in short, the master's agent on each node, handling container operations on that machine; it maintains container lifecycles and also manages volumes (CSI) and container networking (CNI)

kube-proxy: the network proxy, handling load balancing and similar duties; provides cluster-internal service discovery and load balancing for Services

2. Core Concepts

(1) pod: the smallest deployable unit in k8s; one pod can contain multiple containers, i.e. it is a group of containers; containers in a pod share the network namespace; pods are ephemeral, and a redeployed pod is a brand-new pod.

  Pods are the foundation for every kind of workload in a K8s cluster; think of them as little robots running in the cluster, with different robot types for different kinds of work. K8s workloads currently fall into four main categories: long-running services, batch jobs, node daemons, and stateful applications; the corresponding controllers are Deployment, Job, DaemonSet, and PetSet (PetSet was later renamed StatefulSet).

(2) Replication Controller:

  The RC is the earliest K8s API object for keeping Pods highly available. It watches the running Pods to ensure the cluster always runs the specified number of replicas, which may be one or many: if there are too few, the RC starts new replicas; if there are too many, it kills the surplus. Even with a replica count of 1, running a Pod through an RC beats running it directly, because the RC's high-availability machinery guarantees one Pod is always running. RC is an early K8s concept and only suits long-running services, e.g. keeping a highly available web service up.

(3) Service: defines access rules for a group of pods. Each Service gets a virtual IP that is valid inside the cluster, and the service is accessed through that virtual IP.

  In other words, a Service defines the rules for a single unified entry point, while a controller creates and deploys the pods behind it.

3. Cluster Deployment Options

There are currently two main ways to deploy k8s:

(1) kubeadm

kubeadm is a k8s deployment tool that provides kubeadm init and kubeadm join for quickly standing up a cluster. Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/

(2) Binary packages

Download a release's binary packages from GitHub and deploy every component by hand to assemble the cluster.

kubeadm is simpler, but it hides a lot of detail, which can make problems harder to diagnose. Deploying from binaries teaches you a great deal about how the pieces fit together and helps with later maintenance.

2. k8s Cluster Setup

  We'll build a simple cluster with one master and two nodes; the machines and IPs are listed below. Every machine needs outbound internet access to download dependencies:

k8smaster1    192.168.13.103
k8snode1    192.168.13.104
k8snode2    192.168.13.105

1. System Initialization (run on all three nodes unless a step is marked master-only)

All three machines need the steps below; I used a VM, set up one machine, and cloned it.

1. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld  # check firewall status

2. Disable SELinux

sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary (takes effect immediately)

3. Disable swap

free -g  # check swap status
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
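The `sed -ri 's/.*swap.*/#&/' /etc/fstab` one-liner can look cryptic; this small demo (against a throwaway copy, not the real /etc/fstab) shows what it does: `&` re-inserts the matched line, so any line mentioning swap simply gains a leading `#`:

```shell
# Demonstrate the fstab edit on a temporary file instead of the real /etc/fstab.
tmp=$(mktemp)
printf '/dev/sda1 / ext4 defaults 0 1\n/dev/sda2 swap swap defaults 0 0\n' > "$tmp"
sed -ri 's/.*swap.*/#&/' "$tmp"   # comment out any line mentioning swap
cat "$tmp"
# The swap line is now: #/dev/sda2 swap swap defaults 0 0
rm -f "$tmp"
```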

4. Set the hostname

hostnamectl set-hostname <hostname>

5. Change each machine to a static IP (note that a static IP also needs DNS configured; see my earlier RocketMQ cluster notes)

vim /etc/sysconfig/network-scripts/ifcfg-ens33

6. Sync the system time

yum install ntpdate -y
ntpdate time.windows.com

7. On the master, edit /etc/hosts so the machines are reachable by name

cat >> /etc/hosts << EOF
192.168.13.103 k8smaster1
192.168.13.104 k8snode1
192.168.13.105 k8snode2
EOF

8. Pass bridged IPv4 traffic to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply the settings

9. Install Docker/kubeadm/kubelet on all nodes

At this version, Kubernetes' default CRI (container runtime) is Docker, so install Docker first.

(1) Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version

(2) Add Aliyun mirrors and the Kubernetes YUM repo

Point Docker at an Aliyun registry mirror:

cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

Add the Kubernetes yum repo:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

(3) Install kubeadm, kubelet, and kubectl

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet

After installation succeeds, verify the versions:

[root@k8smaster1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:56:30Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8smaster1 ~]# kubelet --version
Kubernetes v1.18.0

2. Deploy the k8s Master

 1. Run the following on the master node:

kubeadm init --apiserver-advertise-address=192.168.13.103 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

  Here apiserver-advertise-address must be the master node's IP; the second flag points image pulls at the Aliyun registry; the third pins the Kubernetes version; the last two set the Service and Pod CIDRs, which only need to avoid clashing with the host network. If the command errors out, add --v=6 for verbose logs; see the official docs for a detailed explanation of every kubeadm flag.
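If kubeadm init fails partway through (a bad flag, leftover state from an earlier attempt), the node can be wiped back to a pre-init state before retrying. A sketch, assuming nothing on the node is worth keeping:

```shell
# Wipe the partially-initialized control plane so kubeadm init can be re-run.
kubeadm reset -f
# kubeadm reset deliberately leaves CNI config and kubeconfig files behind;
# remove them by hand if you want a truly clean slate.
rm -rf /etc/cni/net.d "$HOME/.kube/config"
```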

     While this command runs it pulls a series of docker images; you can open a second terminal and watch the images and containers with docker images / docker ps:

[root@k8smaster1 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0             43940c34f24f        21 months ago       117MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0             a31f78c7c8ce        21 months ago       95.3MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0             74060cea7f70        21 months ago       173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0             d3e55153f52f        21 months ago       162MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        23 months ago       683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        23 months ago       43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        2 years ago         288MB
[root@k8smaster1 ~]# docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
3877168ddb09        43940c34f24f                                        "/usr/local/bin/kube…"   3 minutes ago       Up 3 minutes                            k8s_kube-proxy_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
18a32d328d49        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
5f62d3184cd7        303ce5db0e90                                        "etcd --advertise-cl…"   3 minutes ago       Up 3 minutes                            k8s_etcd_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
2af2a1b5d169        a31f78c7c8ce                                        "kube-scheduler --au…"   3 minutes ago       Up 3 minutes                            k8s_kube-scheduler_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
c77506ee4dd2        d3e55153f52f                                        "kube-controller-man…"   3 minutes ago       Up 3 minutes                            k8s_kube-controller-manager_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
303545e4eca9        74060cea7f70                                        "kube-apiserver --ad…"   3 minutes ago       Up 3 minutes                            k8s_kube-apiserver_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
f9da54e2bfae        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
007e2a0cd10b        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
0666c8b43c32        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
0ca472d7f2cd        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0

Once the downloads finish, the console prints the following (seeing "successfully" means init worked):

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
    --discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a 

Set up the kubectl tool by running the commands from the success output above:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
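Since we are logged in as root here, an alternative to copying admin.conf is to point kubectl at it directly via the KUBECONFIG environment variable (add the line to your shell profile to make it persistent):

```shell
# Use the admin kubeconfig in place without copying it to ~/.kube/config.
export KUBECONFIG=/etc/kubernetes/admin.conf
```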

Check:

[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8smaster1   NotReady   master   7m17s   v1.18.0
[root@k8smaster1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

 3. Join the Kubernetes Nodes

  Run the following on k8snode1 and k8snode2. To add a node to the cluster, execute the kubeadm join command (with its token) that kubeadm init printed:

kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
    --discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a

  Note that only the kubeadm join line printed by your own master works here, because tokens differ between clusters. The default token expires after 24 hours, after which it can no longer be used; at that point you need to create a new token with the kubeadm token commands (see the official docs).
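For reference, a sketch of regenerating the join command once the original token has expired (run on the master):

```shell
# Print a ready-to-paste kubeadm join command with a freshly created token.
kubeadm token create --print-join-command

# Inspect existing tokens and their expiry times.
kubeadm token list
```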

  A successful join logs the following:

[root@k8snode1 ~]# kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
>     --discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a
W0108 21:20:24.380628   25524 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "k8snode1" could not be reached
        [WARNING Hostname]: hostname "k8snode1": lookup k8snode1 on 114.114.114.114:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Finally, list the nodes from the master (the cluster so far):

[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8smaster1   NotReady   master   69m    v1.18.0
k8snode1     NotReady   <none>   49m    v1.18.0
k8snode2     NotReady   <none>   4m7s   v1.18.0

  The nodes show NotReady because no network plugin is installed yet.
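Before installing the plugin, you can confirm the missing network is indeed the reason; a sketch using kubectl's JSONPath output (the node name is the one from this cluster):

```shell
# Human-readable view of one node's conditions:
kubectl describe node k8snode1 | grep -A 6 'Conditions:'

# Or print each node's Ready-condition message, one line per node:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}{end}'
```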

 4. Deploy the CNI Network Plugin

Run the following on the master (if the images referenced in the manifest fail to pull, you can use sed to rewrite them to a Docker Hub mirror):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl get pods -n kube-system

 After running the second command, wait until all the related components report Running, then check the cluster status again:

[root@k8smaster1 ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-stfqz             1/1     Running   0          106m
coredns-7ff77c879f-vhwr7             1/1     Running   0          106m
etcd-k8smaster1                      1/1     Running   0          107m
kube-apiserver-k8smaster1            1/1     Running   0          107m
kube-controller-manager-k8smaster1   1/1     Running   0          107m
kube-flannel-ds-9bx4w                1/1     Running   0          5m31s
kube-flannel-ds-qzqjq                1/1     Running   0          5m31s
kube-flannel-ds-tldt5                1/1     Running   0          5m31s
kube-proxy-6vcvj                     1/1     Running   1          86m
kube-proxy-hn4gx                     1/1     Running   0          106m
kube-proxy-qzwh6                     1/1     Running   0          41m
kube-scheduler-k8smaster1            1/1     Running   0          107m
[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8smaster1   Ready    master   107m   v1.18.0
k8snode1     Ready    <none>   86m    v1.18.0
k8snode2     Ready    <none>   41m    v1.18.0

View detailed cluster information:

[root@k8smaster1 ~]# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8smaster1   Ready    master   9h    v1.18.0   192.168.13.103   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
k8snode1     Ready    <none>   8h    v1.18.0   192.168.13.104   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
k8snode2     Ready    <none>   8h    v1.18.0   192.168.13.105   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1

5. Test the Kubernetes Cluster

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

The resulting output:

[root@k8smaster1 ~]# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-cnj62   1/1     Running   0          3m5s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        113m
service/nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   2m40s

Test: the service can be reached from any of the hosts on NodePort 30951:

curl http://192.168.13.103:30951/
curl http://192.168.13.104:30951/
curl http://192.168.13.105:30951/
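The three curl calls above can be collapsed into a loop that also reports the HTTP status code from each node (200 is expected once the service is up):

```shell
# Hit the NodePort on every node and report the HTTP status code.
for ip in 192.168.13.103 192.168.13.104 192.168.13.105; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://${ip}:30951/")
  echo "${ip} -> ${code}"
done
```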

Check the docker processes on all three machines:

1. k8smaster

[root@k8smaster1 ~]# docker ps 
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
e71930a745f3        67da37a9a360                                        "/coredns -conf /etc…"   14 minutes ago      Up 14 minutes                           k8s_coredns_coredns-7ff77c879f-vhwr7_kube-system_9553d8cc-8efb-48d1-9790-4120e09869c7_0
5aaacb75700b        67da37a9a360                                        "/coredns -conf /etc…"   14 minutes ago      Up 14 minutes                           k8s_coredns_coredns-7ff77c879f-stfqz_kube-system_81bbe584-a3d1-413a-a785-d8edaca7b4c1_0
756d66c75a56        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 14 minutes ago      Up 14 minutes                           k8s_POD_coredns-7ff77c879f-vhwr7_kube-system_9553d8cc-8efb-48d1-9790-4120e09869c7_0
658b02e25f89        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 14 minutes ago      Up 14 minutes                           k8s_POD_coredns-7ff77c879f-stfqz_kube-system_81bbe584-a3d1-413a-a785-d8edaca7b4c1_0
8a6f86753098        404fc3ab6749                                        "/opt/bin/flanneld -…"   14 minutes ago      Up 14 minutes                           k8s_kube-flannel_kube-flannel-ds-qzqjq_kube-system_bf0155d2-0f27-492c-8029-2fe042869579_0
b047ca53a8fe        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 14 minutes ago      Up 14 minutes                           k8s_POD_kube-flannel-ds-qzqjq_kube-system_bf0155d2-0f27-492c-8029-2fe042869579_0
3877168ddb09        43940c34f24f                                        "/usr/local/bin/kube…"   2 hours ago         Up 2 hours                              k8s_kube-proxy_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
18a32d328d49        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
5f62d3184cd7        303ce5db0e90                                        "etcd --advertise-cl…"   2 hours ago         Up 2 hours                              k8s_etcd_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
2af2a1b5d169        a31f78c7c8ce                                        "kube-scheduler --au…"   2 hours ago         Up 2 hours                              k8s_kube-scheduler_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
c77506ee4dd2        d3e55153f52f                                        "kube-controller-man…"   2 hours ago         Up 2 hours                              k8s_kube-controller-manager_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
303545e4eca9        74060cea7f70                                        "kube-apiserver --ad…"   2 hours ago         Up 2 hours                              k8s_kube-apiserver_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
f9da54e2bfae        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
007e2a0cd10b        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
0666c8b43c32        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
0ca472d7f2cd        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0

2. k8snode1

[root@k8snode1 ~]# docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
8189b507fc4a        404fc3ab6749                                        "/opt/bin/flanneld -…"   10 minutes ago      Up 10 minutes                           k8s_kube-flannel_kube-flannel-ds-9bx4w_kube-system_aa9751c9-7dcf-494b-b720-606cb8950a6d_0
f8e8103639c1        43940c34f24f                                        "/usr/local/bin/kube…"   10 minutes ago      Up 10 minutes                           k8s_kube-proxy_kube-proxy-6vcvj_kube-system_55bc4b97-f479-4dd5-977a-2a61e0fce705_1
6675466fcc0e        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 11 minutes ago      Up 10 minutes                           k8s_POD_kube-flannel-ds-9bx4w_kube-system_aa9751c9-7dcf-494b-b720-606cb8950a6d_0
51d248df0e8c        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 11 minutes ago      Up 10 minutes                           k8s_POD_kube-proxy-6vcvj_kube-system_55bc4b97-f479-4dd5-977a-2a61e0fce705_1

3. k8snode2

[root@k8snode2 ~]# docker ps
CONTAINER ID        IMAGE                                                COMMAND                  CREATED             STATUS              PORTS               NAMES
d8bbbe754ebc        nginx                                                "/docker-entrypoint.…"   4 minutes ago       Up 4 minutes                            k8s_nginx_nginx-f89759699-cnj62_default_9e3e4d90-42ac-4f41-9af7-f7939a52cb01_0
04fbdd617724        registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_nginx-f89759699-cnj62_default_9e3e4d90-42ac-4f41-9af7-f7939a52cb01_0
e9dc459f9664        404fc3ab6749                                         "/opt/bin/flanneld -…"   15 minutes ago      Up 15 minutes                           k8s_kube-flannel_kube-flannel-ds-tldt5_kube-system_5b1ea760-a19e-4b5b-9612-6d9a6dda7f59_0
f1d0312d2308        registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 15 minutes ago      Up 15 minutes                           k8s_POD_kube-flannel-ds-tldt5_kube-system_5b1ea760-a19e-4b5b-9612-6d9a6dda7f59_0
d6bae886cb61        registry.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube…"   About an hour ago   Up About an hour                        k8s_kube-proxy_kube-proxy-qzwh6_kube-system_7d72b4a8-c6ee-4982-9e88-df70d9745b2c_0
324507774c8e        registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-proxy-qzwh6_kube-system_7d72b4a8-c6ee-4982-9e88-df70d9745b2c_0

You can see the nginx container is running on the k8snode2 node.

 

kubectl can also show where the pod is running:

[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-f89759699-cnj62   1/1     Running   0          10m   10.244.2.2   k8snode2   <none>           <none>

Output in YAML format:

[root@k8smaster1 ~]# kubectl get pods -o yaml 
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2022-01-09T03:49:49Z"
    generateName: nginx-f89759699-
    labels:
      app: nginx
...

List the pods in all namespaces with full details:

[root@k8smaster1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
default                nginx-f89759699-cnj62                        1/1     Running   0          51m    10.244.2.2       k8snode2     <none>           <none>
kube-system            coredns-7ff77c879f-stfqz                     1/1     Running   0          161m   10.244.0.3       k8smaster1   <none>           <none>
kube-system            coredns-7ff77c879f-vhwr7                     1/1     Running   0          161m   10.244.0.2       k8smaster1   <none>           <none>
kube-system            etcd-k8smaster1                              1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-apiserver-k8smaster1                    1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-controller-manager-k8smaster1           1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-9bx4w                        1/1     Running   0          59m    192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-flannel-ds-qzqjq                        1/1     Running   0          59m    192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-tldt5                        1/1     Running   0          59m    192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-proxy-6vcvj                             1/1     Running   1          140m   192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-proxy-hn4gx                             1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-proxy-qzwh6                             1/1     Running   0          95m    192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-scheduler-k8smaster1                    1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-78f5d9f487-sfjlr   1/1     Running   0          25m    10.244.2.3       k8snode2     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-577bd97bc-f2v5g         1/1     Running   0          25m    10.244.1.2       k8snode1     <none>           <none>

3. The kubectl Command-Line Tool

   kubectl is the command-line tool for a Kubernetes cluster; it lets you administer the cluster itself and deploy containerized applications onto it.

Basic syntax:

kubectl [command] [type] [name] [flags]

command: the operation to perform on the resource, e.g. create, get, describe, delete

type: the resource type. Types are case-sensitive and may be written in singular, plural, or abbreviated form; they can be listed with kubectl api-resources

name: the resource name, case-sensitive; omit it to list every resource of that type

flags: optional flags, e.g. -s or --server to specify the Kubernetes API server address and port

[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8smaster1   Ready    master   8h      v1.18.0
k8snode1     Ready    <none>   7h59m   v1.18.0
k8snode2     Ready    <none>   7h14m   v1.18.0
[root@k8smaster1 ~]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8smaster1   Ready    master   8h      v1.18.0
k8snode1     Ready    <none>   7h59m   v1.18.0
k8snode2     Ready    <none>   7h14m   v1.18.0
[root@k8smaster1 ~]# kubectl get node k8snode1
NAME       STATUS   ROLES    AGE     VERSION
k8snode1   Ready    <none>   7h59m   v1.18.0

1. kubectl help

[root@k8smaster1 ~]# kubectl help
kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create        Create a resource from a file or from stdin.
  expose        Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run           Run a particular image on the cluster
  set           Set specific features on objects

Basic Commands (Intermediate):
  explain       Documentation of resources
  get           Display one or many resources
  edit          Edit a resource on the server
  delete        Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout       Manage the rollout of a resource
  scale         Set a new size for a Deployment, ReplicaSet or Replication Controller
  autoscale     Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate   Modify certificate resources.
  cluster-info  Display cluster info
  top           Display Resource (CPU/Memory/Storage) usage.
  cordon        Mark node as unschedulable
  uncordon      Mark node as schedulable
  drain         Drain node in preparation for maintenance
  taint         Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe      Show details of a specific resource or group of resources
  logs          Print the logs for a container in a pod
  attach        Attach to a running container
  exec          Execute a command in a container
  port-forward  Forward one or more local ports to a pod
  proxy         Run a proxy to the Kubernetes API server
  cp            Copy files and directories to and from containers.
  auth          Inspect authorization

Advanced Commands:
  diff          Diff live version against would-be applied version
  apply         Apply a configuration to a resource by filename or stdin
  patch         Update field(s) of a resource using strategic merge patch
  replace       Replace a resource by filename or stdin
  wait          Experimental: Wait for a specific condition on one or many resources.
  convert       Convert config files between different API versions
  kustomize     Build a kustomization target from a directory or a remote url.

Settings Commands:
  label         Update the labels on a resource
  annotate      Update the annotations on a resource
  completion    Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  alpha         Commands for features in alpha
  api-resources Print the supported API resources on the server
  api-versions  Print the supported API versions on the server, in the form of "group/version"
  config        Modify kubeconfig files
  plugin        Provides utilities for interacting with plugins.
  version       Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

  The output above groups the commands into categories: basic commands, deploy and cluster-management commands, troubleshooting and debugging, advanced commands, settings commands, and other commands.

2. Basic Usage

[root@k8smaster1 ~]# kubectl get nodes -o wide 
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8smaster1   Ready    master   25h   v1.18.0   192.168.13.103   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
k8snode1     Ready    <none>   24h   v1.18.0   192.168.13.104   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
k8snode2     Ready    <none>   23h   v1.18.0   192.168.13.105   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
[root@k8smaster1 ~]# kubectl get nodes k8smaster1 -o wide 
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8smaster1   Ready    master   25h   v1.18.0   192.168.13.103   <none>        CentOS Linux 7 (Core)   3.10.0-1160.49.1.el7.x86_64   docker://18.6.1
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-f89759699-cnj62   1/1     Running   0          23h   10.244.2.2   k8snode2   <none>           <none>
[root@k8smaster1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
default                nginx-f89759699-cnj62                        1/1     Running   0          23h   10.244.2.2       k8snode2     <none>           <none>
kube-system            coredns-7ff77c879f-stfqz                     1/1     Running   0          25h   10.244.0.3       k8smaster1   <none>           <none>
kube-system            coredns-7ff77c879f-vhwr7                     1/1     Running   0          25h   10.244.0.2       k8smaster1   <none>           <none>
kube-system            etcd-k8smaster1                              1/1     Running   0          25h   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-apiserver-k8smaster1                    1/1     Running   0          25h   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-controller-manager-k8smaster1           1/1     Running   0          25h   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-9bx4w                        1/1     Running   0          23h   192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-flannel-ds-qzqjq                        1/1     Running   0          23h   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-tldt5                        1/1     Running   0          23h   192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-proxy-6vcvj                             1/1     Running   1          24h   192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-proxy-hn4gx                             1/1     Running   0          25h   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-proxy-qzwh6                             1/1     Running   0          23h   192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-scheduler-k8smaster1                    1/1     Running   0          25h   192.168.13.103   k8smaster1   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-78f5d9f487-sfjlr   1/1     Running   0          22h   10.244.2.3       k8snode2     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-577bd97bc-f2v5g         1/1     Running   0          22h   10.244.1.2       k8snode1     <none>           <none>
[root@k8smaster1 ~]# kubectl cluster-info    # view cluster info
Kubernetes master is running at https://192.168.13.103:6443
KubeDNS is running at https://192.168.13.103:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8smaster1 ~]# kubectl logs nginx-f89759699-cnj62    # view the pod's logs
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/01/09 03:52:10 [notice] 1#1: using the "epoll" event method
2022/01/09 03:52:10 [notice] 1#1: nginx/1.21.5
2022/01/09 03:52:10 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2022/01/09 03:52:10 [notice] 1#1: OS: Linux 3.10.0-1160.49.1.el7.x86_64
2022/01/09 03:52:10 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/01/09 03:52:10 [notice] 1#1: start worker processes
2022/01/09 03:52:10 [notice] 1#1: start worker process 31
2022/01/09 03:52:10 [notice] 1#1: start worker process 32
2022/01/09 03:52:10 [notice] 1#1: start worker process 33
2022/01/09 03:52:10 [notice] 1#1: start worker process 34
10.244.0.0 - - [09/Jan/2022:03:52:11 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.29.0" "-"
10.244.0.0 - - [09/Jan/2022:03:52:24 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" "-"
2022/01/09 03:52:24 [error] 32#32: *2 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.244.0.0, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "192.168.13.103:30951", referrer: "http://192.168.13.103:30951/"
10.244.0.0 - - [09/Jan/2022:03:52:24 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://192.168.13.103:30951/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" "-"
10.244.0.0 - - [09/Jan/2022:03:55:20 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.67.0" "-"
10.244.1.0 - - [09/Jan/2022:03:55:24 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.67.0" "-"
10.244.2.1 - - [09/Jan/2022:03:55:28 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.67.0" "-"
127.0.0.1 - - [09/Jan/2022:04:48:39 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.74.0" "-"
[root@k8smaster1 ~]# kubectl exec -it nginx-f89759699-cnj62 bash    # enter the container (equivalent to docker exec -it <cid> bash)
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@nginx-f89759699-cnj62:/# exit
exit
[root@k8smaster1 ~]# kubectl exec -it nginx-f89759699-cnj62 -- bash    # enter the container; the '-- <command>' form replaces the deprecated one above
root@nginx-f89759699-cnj62:/# exit
exit

  You can enter a container running on a worker node directly from the master node. kubectl get pods -A is equivalent to kubectl get pods --all-namespaces. kubectl logs <podName> -f tails the logs of the pod's first container in real time; to view the logs of a specific container, add -c <containerName>.
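As a quick reference, the common log-viewing variants described above can be sketched like this (the pod and container names are hypothetical placeholders, not from the session above):

```shell
# list pods in every namespace (-A is shorthand for --all-namespaces)
kubectl get pods -A

# tail the logs of the pod's first container in real time
kubectl logs my-pod -f

# tail the logs of a specific container in a multi-container pod
kubectl logs my-pod -f -c my-container
```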

You can also view a pod's details: its namespace, the node it runs on, labels, containers, container start events, and so on:

[root@k8smaster1 ~]# kubectl describe pods nginx-statefulset-0
Name:         nginx-statefulset-0
Namespace:    default
Priority:     0
Node:         k8snode1/192.168.13.104
Start Time:   Sat, 15 Jan 2022 23:30:04 -0500
Labels:       app=nginx
              controller-revision-hash=nginx-statefulset-6df8f484ff
              statefulset.kubernetes.io/pod-name=nginx-statefulset-0
Annotations:  <none>
Status:       Running
IP:           10.244.1.26
IPs:
  IP:           10.244.1.26
Controlled By:  StatefulSet/nginx-statefulset
Containers:
  nginx:
    Container ID:   docker://b8d73855d62c401749f654a5f3876e96ba992b5f8a24a4fac8d4753e15ff0a5c
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 15 Jan 2022 23:30:06 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5r9hq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-5r9hq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5r9hq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  11m   default-scheduler  Successfully assigned default/nginx-statefulset-0 to k8snode1
  Normal  Pulling    11m   kubelet, k8snode1  Pulling image "nginx:latest"
  Normal  Pulled     11m   kubelet, k8snode1  Successfully pulled image "nginx:latest"
  Normal  Created    11m   kubelet, k8snode1  Created container nginx
  Normal  Started    11m   kubelet, k8snode1  Started container nginx

Get information about all resources:

[root@k8smaster1 ~]# kubectl get all -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
pod/nginx-f89759699-cnj62   1/1     Running   0          26h   10.244.2.2   k8snode2   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        28h   <none>
service/nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   26h   app=nginx

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
deployment.apps/nginx   1/1     1            1           26h   nginx        nginx    app=nginx

NAME                              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES   SELECTOR
replicaset.apps/nginx-f89759699   1         1         1       26h   nginx        nginx    app=nginx,pod-template-hash=f89759699

Get service information (svc, service, and services are interchangeable):

[root@k8smaster1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        28h
nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   27h
[root@k8smaster1 ~]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        28h
nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   27h
[root@k8smaster1 ~]# kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        28h
nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   27h
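The PORT(S) column 80:30951/TCP on the nginx service means service port 80 inside the cluster is mapped to NodePort 30951 on every node. Using the addresses from the output above, either access path below should return the nginx welcome page (a sketch against this particular cluster, not a general recipe):

```shell
# from inside the cluster: ClusterIP + service port
curl http://10.96.201.24:80/

# from outside the cluster: any node IP + NodePort
curl http://192.168.13.103:30951/
```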

Delete all pods in the default namespace:

[root@k8smaster1 ~]# kubectl delete pods --all
pod "mytomcat" deleted
[root@k8smaster1 ~]# kubectl get pods
No resources found in default namespace.

To inspect a resource's details, you can export it in YAML format (kubectl get ... -o yaml), or edit it in place with edit:

[root@k8smaster1 ~]# kubectl edit pod nginx-f89759699-vkf7d

  The edit view opens the resource as YAML for direct editing; changes take effect as soon as you save and exit.

  kubectl explain describes a resource's fields and their meanings:

[root@k8smaster02 /]# kubectl explain secret
KIND:     Secret
VERSION:  v1

DESCRIPTION:
     Secret holds secret data of a certain type. The total bytes of the values
     in the Data field must be less than MaxSecretSize bytes.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   data <map[string]string>
     Data contains the secret data. Each key must consist of alphanumeric
     characters, '-', '_' or '.'. The serialized form of the secret data is a
     base64 encoded string, representing the arbitrary (possibly non-string)
...
[root@k8smaster02 /]# kubectl explain secret.type
KIND:     Secret
VERSION:  v1

FIELD:    type <string>

DESCRIPTION:
     Used to facilitate programmatic handling of secret data. 
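As kubectl explain secret notes, the values under a Secret's data field are base64-encoded strings. The encoding itself is plain base64 and can be verified with standard tools (the 'admin' value here is just an illustrative example):

```shell
# encode a value for a Secret's data field (-n: suppress the trailing newline)
echo -n 'admin' | base64
# → YWRtaW4=

# decode it back
echo 'YWRtaW4=' | base64 -d
# → admin
```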

Addendum: viewing container processes from the Docker host

    From the host you can see the application processes running inside the container: containers are isolated from each other, but the host can see into every container.

[root@k8snode2 ~]# docker top d8b
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                13923               13905               0                   Jan09               ?                   00:00:00            nginx: master process nginx -g daemon off;
101                 13984               13923               0                   Jan09               ?                   00:00:00            nginx: worker process
101                 13985               13923               0                   Jan09               ?                   00:00:00            nginx: worker process
101                 13986               13923               0                   Jan09               ?                   00:00:00            nginx: worker process
101                 13987               13923               0                   Jan09               ?                   00:00:00            nginx: worker process
[root@k8snode2 ~]# ps -ef | grep nginx
root      13923  13905  0 Jan09 ?        00:00:00 nginx: master process nginx -g daemon off;
101       13984  13923  0 Jan09 ?        00:00:00 nginx: worker process
101       13985  13923  0 Jan09 ?        00:00:00 nginx: worker process
101       13986  13923  0 Jan09 ?        00:00:00 nginx: worker process
101       13987  13923  0 Jan09 ?        00:00:00 nginx: worker process
root      52391  12958  0 02:29 pts/0    00:00:00 grep --color=auto nginx
[root@k8snode2 ~]# ps -ef | grep 13905
root      13905   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d8bbbe754ebc3bc4c933862e0f98d0b5c15bfb6f14967791043b193b3c6be72b -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      13923  13905  0 Jan09 ?        00:00:00 nginx: master process nginx -g daemon off;
root      54452  12958  0 02:39 pts/0    00:00:00 grep --color=auto 13905
[root@k8snode2 ~]# ps -ef | grep 9575
root       9575   9567  0 Jan09 ?        00:02:00 docker-containerd --config /var/run/docker/containerd/containerd.toml
root       9982   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/324507774c8ea90c31d8e4f09ee6cc0e85a627f9f9669c66544af0a256eb0d45 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      10088   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d6bae886cb61fc6a75a242f4ddf5caf1367bec4b643a09dd58347bfc4d402496 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      11068   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/f1d0312d230818df533be053f358bde6cd33bb32396d933d210d13dcfc898a23 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      11391   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/e9dc459f966475bdf6eada28ee4fb9799f0ba9295658c9143d3d78a96568c3da -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      13177   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/04fbdd61772412e3c903547f0b5f78df486daddae954a9f11841277754661c63 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      13905   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d8bbbe754ebc3bc4c933862e0f98d0b5c15bfb6f14967791043b193b3c6be72b -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      19161   9575  0 Jan09 ?        00:00:01 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/c6c8e41e9e5e975ad1471fd89a0c9b60e8bd1e5e36284bfb11f186c11d6267a3 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      19406   9575  0 Jan09 ?        00:00:02 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d6930163d286c98f7b91b55e489653df2a534100031dd9b7fcb9957edcd57ca2 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
root      54494  12958  0 02:39 pts/0    00:00:00 grep --color=auto 9575
[root@k8snode2 ~]# ps -ef | grep 9567
root       9567      1  1 Jan09 ?        00:08:47 /usr/bin/dockerd
root       9575   9567  0 Jan09 ?        00:02:00 docker-containerd --config /var/run/docker/containerd/containerd.toml
root      54805  12958  0 02:40 pts/0    00:00:00 grep --color=auto 9567
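The parent-chain walk above (nginx worker → nginx master → docker-containerd-shim → docker-containerd → dockerd) can be generalized: starting from any PID, follow PPID upward until PID 1. A minimal sketch using standard ps options, starting from the current shell:

```shell
# walk the ancestry of the current shell up to PID 1,
# printing each ancestor's PID and command name
pid=$$
while [ "$pid" -gt 1 ]; do
    printf 'pid=%s comm=%s\n' "$pid" "$(ps -o comm= -p "$pid")"
    pid=$(ps -o ppid= -p "$pid" | tr -d ' ')
done
echo "reached pid=$pid"
```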

 

Addendum: while running kubeadm init on the master, I hit the following error locally:

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Solution:

1. Create /etc/systemd/system/kubelet.service.d/10-kubeadm.conf with the following content (a systemd drop-in file needs the [Service] section header):

[Service]
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"

2. Restart kubelet:

systemctl daemon-reload
systemctl restart kubelet

3. Re-run kubeadm init.

Addendum: running kubeadm init multiple times on the master fails with:

error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists

This happens because the previous kubeadm init run was not cleaned up. Solution:

kubeadm reset

Addendum: kubeadm init on the master fails with:

[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

According to posts online this is caused by image pull timeouts. I could not find a working fix, so in the end I deleted the VM, cloned a fresh one, and repeated the initialization and installation steps above.

Addendum: when various errors occur while deploying the master, try installing a different version of kubeadm, kubelet, and kubectl

1. List the available versions:

yum list kubeadm --showduplicates

2. Remove the current packages:

yum remove -y kubelet kubeadm kubectl

3. Reinstall a specific version with yum install <package>-<version>:

yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0

Addendum: after a node joins the k8s cluster, the network service fails to start with:

Failed to start LSB 

Solution:

1. Stop and disable the NetworkManager service:

systemctl stop NetworkManager
systemctl disable NetworkManager

2. Restart the network service:

systemctl restart network

Addendum: after a node reboot, network access is broken: the port exposed by a service can only be reached via the IP of the node where the pod runs

Solution: restarting the docker service fixes it:

systemctl daemon-reload
systemctl restart docker

Addendum: a pod stuck in Terminating state cannot be deleted

Symptom: a pod was running on node 1; after node 1 went down, the pod was rescheduled to node 2 (effectively a new pod was created on node 2), and the original pod on node 1 stayed in Terminating state.

Solution: add the --force flag to force-delete it:

kubectl delete pods web1-f864c756b-zltz7 --force
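A commonly used variant combines --force with a zero grace period. Note that this only removes the pod object from the API server/etcd; it does not guarantee the container on the unreachable node has actually stopped (pod name as above):

```shell
kubectl delete pods web1-f864c756b-zltz7 --force --grace-period=0
```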

Addendum: after a reboot, a node is unschedulable and its network is unreachable

1. When the machine shut down, k8s automatically added an unschedulable taint to the node. Check the node's taints:

[root@k8smaster1 ~]# kubectl describe node k8snode2 | grep Tain
Taints:             node.kubernetes.io/unreachable:NoSchedule

Manually removing the taint had no effect either.
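For reference, a manual taint-removal attempt looks like the following: the trailing "-" after the taint key and effect removes it. It did not help here because the root cause (kubelet not running) remained, so the node controller kept marking the node unreachable:

```shell
kubectl taint nodes k8snode2 node.kubernetes.io/unreachable:NoSchedule-
```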

2. Solution

    There is an apt saying in the k8s community: if the APIServer is the brain of the cluster, then kubelet is the cerebellum of each node. It talks to the APIServer so the APIServer can learn the node's state; with kubelet down, pods naturally cannot be scheduled onto the node.

(1) Check kubelet on the node:

[root@k8snode2 ~]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: inactive (dead)
     Docs: https://kubernetes.io/docs/

(2) Start kubelet and enable it at boot:

[root@k8snode2 ~]# systemctl enable kubelet
[root@k8snode2 ~]# systemctl is-enabled kubelet
enabled
[root@k8snode2 ~]# systemctl start kubelet
[root@k8snode2 ~]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2022-01-17 05:10:48 EST; 2min 9s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 1976 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─1976 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kube...

Jan 17 05:10:49 k8snode2 kubelet[1976]: W0117 05:10:49.642289    1976 kuberuntime_container.go:758] No ref for container {"...ce92d"}
Jan 17 05:10:49 k8snode2 kubelet[1976]: I0117 05:10:49.653407    1976 reconciler.go:319] Volume detached for volume "tmp-vo...Path ""
Jan 17 05:10:49 k8snode2 kubelet[1976]: I0117 05:10:49.653437    1976 reconciler.go:319] Volume detached for volume "defaul...Path ""
Jan 17 05:10:49 k8snode2 kubelet[1976]: I0117 05:10:49.653453    1976 reconciler.go:319] Volume detached for volume "kubern...Path ""
Jan 17 05:10:52 k8snode2 kubelet[1976]: I0117 05:10:52.239206    1976 topology_manager.go:219] [topologymanager] RemoveCont...568c3da
Jan 17 05:10:58 k8snode2 kubelet[1976]: I0117 05:10:58.899501    1976 kubelet_node_status.go:70] Attempting to register node k8snode2
Jan 17 05:10:58 k8snode2 kubelet[1976]: I0117 05:10:58.910103    1976 kubelet_node_status.go:112] Node k8snode2 was previou...istered
Jan 17 05:10:58 k8snode2 kubelet[1976]: I0117 05:10:58.910256    1976 kubelet_node_status.go:73] Successfully registered no...8snode2
Jan 17 05:11:48 k8snode2 kubelet[1976]: I0117 05:11:48.858077    1976 topology_manager.go:219] [topologymanager] RemoveCont...c4ee0a2
Jan 17 05:11:48 k8snode2 kubelet[1976]: W0117 05:11:48.942110    1976 cni.go:331] CNI failed to retrieve network namespace ...48afb9"
Hint: Some lines were ellipsized, use -l to show in full.

(3) Check the node's taints from the master again: the taint is gone:

[root@k8smaster1 ~]# kubectl describe node k8snode2 | grep Tain
Taints:             <none>

(4) Test

  The node now accepts scheduling normally and its network works: the service's exposed port is reachable via this node's IP.

Addendum: gracefully stopping pods in kubernetes

Simply scale the deployment's replica count down to 0:

kubectl scale --replicas=0 deployment/<your-deployment>
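Bringing the pods back later is the same command with a non-zero replica count; <your-deployment> remains a placeholder for the actual deployment name:

```shell
kubectl scale --replicas=3 deployment/<your-deployment>
```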

 

posted @ 2022-01-09 13:43  QiaoZhi