2. K8s Cluster Installation: Setup and Deployment (a two-part series: personally tested, with plenty of pitfalls along the way!)
Getting to the KubeSphere login screen took a lot of effort.
Installation success screenshot:
Login screenshot:
Summary: the installation hit many problems; in the end, restoring the VMs to their initial backup snapshot and starting over succeeded. The main cause of the problems:
version conflicts among docker, kubeadm, kubelet, kubectl, helm, tiller, openebs, and kubesphere!
Recommended versions:
docker:19.03.3
kubeadm:1.19.3
kubelet:1.19.3
kubectl:1.19.3
helm:2.16.2
tiller:2.16.2
openebs:1.5.0
kubesphere:3.1.1
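Since version mismatches were the main source of failures, a quick sanity check that installed versions match the recommendations above can save a reinstall. This is a minimal sketch; the `expect_version` helper is made up here for illustration, and on a real node the "actual" values would come from the tools themselves (e.g. `kubelet --version`).

```shell
#!/bin/sh
# Compare an installed tool's version string against the recommended one.
# Prints OK on a match; warns and returns non-zero on a mismatch.
expect_version() {
  tool="$1"; want="$2"; got="$3"
  if [ "$got" = "$want" ]; then
    echo "OK   $tool $got"
  else
    echo "WARN $tool is $got, recommended $want" >&2
    return 1
  fi
}

# Example with literal values; on a node, capture them from the tools, e.g.:
#   kubelet --version | awk '{print $2}'            -> v1.19.3
#   docker version --format '{{.Server.Version}}'   -> 19.03.3
expect_version kubelet v1.19.3 v1.19.3
expect_version docker  19.03.3 19.03.3
```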
1) kubeadm (background; no action needed)
kubeadm is the official community tool for quickly deploying a Kubernetes cluster.
It can stand up a Kubernetes cluster with two commands:
Create a master node:
$ kubeadm init
Join a node to the current cluster:
$ kubeadm join <master-node IP and port>
2) Prerequisites (background; no action needed)
One or more machines running CentOS 7.x-86_x64
Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB disk or more
Full network connectivity between all machines in the cluster
Internet access, for pulling images
Swap disabled
3) Deployment steps (overview; no action needed)
1. Install Docker and kubeadm on all nodes
2. Deploy the Kubernetes Master
3. Deploy a container network plugin
4. Deploy the Kubernetes Nodes and join them to the cluster
5. Deploy the Dashboard web UI to view Kubernetes resources visually

4) Environment preparation (hands-on starts here)
(1) Preliminary work
- Use Vagrant to quickly create three VMs. Before starting the VMs, configure VirtualBox's host-only network. Set it to 192.168.56.1; from then on, all VMs get 192.168.56.x addresses.

- In the global settings, pick a disk with plenty of free space to store the VM images.
Adapter 1 is NAT, used by the VMs (and host) to reach the internet. Adapter 2 is a host-only network, a virtual network shared among the VMs.
(2) Start the three VMs
virtualbox.box download:
Link: https://pan.baidu.com/s/17b9vZuLXHm7-jKcXF_sa9w
Extraction code: dy9g
If you downloaded the .box file in advance, put virtualbox.box under N:\VMboxs\, then adjust the command below: after `add` comes the box alias, followed by the file path; this registers the local box.
Run (mycentos7 is the alias):
$ vagrant box add mycentos7 N:/VMboxs/virtualbox.box
Copy the provided Vagrantfile into a directory whose path contains no Chinese characters or spaces, then run `vagrant up` to start the three VMs. (Vagrant can in fact bootstrap an entire K8s cluster in one step; see:)
https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster
http://github.com/davidkbainbridge/k8s-playground
Below is the Vagrantfile used to create the three VMs: k8s-node1, k8s-node2, and k8s-node3.
Vagrant.configure("2") do |config|
  (1..3).each do |i|
    config.vm.define "k8s-node#{i}" do |node|
      # Box for the VM
      node.vm.box = "mycentos7"
      # Character encoding
      Encoding.default_external = 'UTF-8'
      # Hostname
      node.vm.hostname = "k8s-node#{i}"
      # Private-network IP
      node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"
      # Shared folder between host and VM
      # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"
      # VirtualBox-specific settings
      node.vm.provider "virtualbox" do |v|
        # VM name
        v.name = "k8s-node#{i}"
        # Memory (MB)
        v.memory = 4096
        # CPU count
        v.cpus = 4
      end
    end
  end
end
- Enter each of the three VMs and enable root password login.
Use `vagrant ssh <name>` to get a shell, then:
# vagrant ssh k8s-node1
su root            # password: vagrant
vi /etc/ssh/sshd_config
  # change these two settings:
  PermitRootLogin yes
  PasswordAuthentication yes
service sshd restart
All VMs are configured with 4 cores and 4 GB RAM.
You can then SSH directly, e.g. to 192.168.56.100:22.
- Power off all three VMs first, then go to "File" -> "Preferences" -> "Network" and add a NAT network.

- Change each VM's adapter 1 to the new NAT network, and click refresh to regenerate its MAC address.

Adapter 1 (NAT network) carries cluster-internal traffic; adapter 2 (host-only) carries host-to-VM traffic.
- Check the IPs of the three nodes again:
[root@k8s-node1 ~]# ip addr
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:7e:dd:f5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0    # 10.0.2.15
       valid_lft 86357sec preferred_lft 86357sec
    inet6 fe80::a00:27ff:fe7e:ddf5/64 scope link
       valid_lft forever preferred_lft forever
===================================================
[root@k8s-node2 ~]# ip addr
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:86:c0:a2 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.5/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0    # 10.0.2.5
       valid_lft 527sec preferred_lft 527sec
    inet6 fe80::a00:27ff:fe86:c0a2/64 scope link
       valid_lft forever preferred_lft forever
===================================================
[root@k8s-node3 ~]# ip addr
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:a1:94:f9 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.6/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0    # 10.0.2.6
       valid_lft 518sec preferred_lft 518sec
    inet6 fe80::a00:27ff:fea1:94f9/64 scope link
       valid_lft forever preferred_lft forever
(3) Linux environment setup (run on all three nodes)
- Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
- Disable SELinux:
# SELinux is Linux's default security policy
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
- Disable swap:
swapoff -a                            # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent
free -g                               # verify: swap must show 0
- Add hostname-to-IP mappings:
Check the hostname:
hostname
If the hostname is wrong, change it with `hostnamectl set-hostname <newhostname>`.
vi /etc/hosts
10.0.2.4 k8s-node1
10.0.2.5 k8s-node2
10.0.2.6 k8s-node3
Pass bridged IPv4 traffic to iptables chains:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
Apply the rules:
sysctl --system
Troubleshooting: if you hit a "read-only file system" error, remount root read-write:
mount -o remount,rw /
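The two steps above (write the conf file, then apply it) can be wrapped in a small idempotent helper. This is a sketch; the `K8S_SYSCTL_DIR` variable is an assumption introduced here so the function can be exercised against a temporary directory instead of /etc.

```shell
#!/bin/sh
# Write the bridge-netfilter settings Kubernetes needs, idempotently.
# K8S_SYSCTL_DIR defaults to /etc/sysctl.d; override it for a dry run.
write_k8s_sysctl() {
  dir="${K8S_SYSCTL_DIR:-/etc/sysctl.d}"
  conf="$dir/k8s.conf"
  mkdir -p "$dir"
  cat > "$conf" <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
  echo "wrote $conf"
  # On a real node you would now apply the settings:
  #   sysctl --system
}

# Dry run against a temporary directory:
K8S_SYSCTL_DIR=$(mktemp -d) write_k8s_sysctl
```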
5) Install docker, kubeadm, kubelet, and kubectl on all nodes
Kubernetes' default CRI (container runtime) is Docker, so install Docker first.
(1) Install Docker
1. Remove any previous Docker installation:
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
2. Install Docker CE
sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
# Point yum at the Docker repo
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# Before installing, list the available versions
sudo yum list docker-ce.x86_64 --showduplicates | sort -r
# The following installs the latest docker-ce and docker-ce-cli; the latest release may not be a
# validated version, so install the pinned stable version instead.
Latest version: sudo yum -y install docker-ce docker-ce-cli containerd.io
Pinned version: sudo yum -y install docker-ce-19.03.3 docker-ce-cli-19.03.3 containerd.io   # use this one to avoid version problems
3. Configure a Docker registry mirror:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ke9h1pt4.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
4. Start Docker and enable it at boot:
systemctl enable docker
With the base environment ready, this is a good point to snapshot/back up the three VMs.
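One more Docker setting worth making before the snapshot: the kubeadm preflight checks later warn that Docker's default cgroup driver is "cgroupfs" while the recommended driver is "systemd". The sketch below combines the mirror config from above with that setting; the `DOCKER_ETC` override is an assumption added here so the script can be dry-run without root. (With the Docker runtime, kubeadm detects the driver and configures kubelet to match, so changing it here is optional and only silences the warning.)

```shell
#!/bin/sh
# Generate a daemon.json that sets both the registry mirror and the systemd cgroup driver.
# DOCKER_ETC defaults to a temp directory for a dry run; set DOCKER_ETC=/etc/docker on a real node.
DOCKER_ETC="${DOCKER_ETC:-$(mktemp -d)}"
mkdir -p "$DOCKER_ETC"
cat > "$DOCKER_ETC/daemon.json" <<'EOF'
{
  "registry-mirrors": ["https://ke9h1pt4.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
echo "wrote $DOCKER_ETC/daemon.json"
# On a real node, reload and restart Docker afterwards:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
```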
(2) Add the Aliyun yum repo for Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
More details: https://developer.aliyun.com/mirror/kubernetes
(3) Install kubeadm, kubelet, and kubectl
yum list | grep kube
Install:
yum install -y kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3
Note: if the command above fails with a GPG index-check error (the mirror may not have synced the signing keys), install with GPG checking disabled:
yum install -y --nogpgcheck kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3   # use this one to avoid problems
Enable at boot:
systemctl enable kubelet && systemctl start kubelet
Check kubelet's status:
systemctl status kubelet
(At this point kubelet restarts in a loop until the cluster is initialized; that is expected.)
Check the kubelet version:
[root@k8s-node2 ~]# kubelet --version
Kubernetes v1.19.3
6) Deploy the k8s master (master node only)
(1) Initialize the master node
k8s file bundle:
Link: https://pan.baidu.com/s/1S9SZ2qSPqzoi6mxZDfnCEg
Extraction code: qop5
I have a local k8s folder (put together for the mall project) containing master_images.sh, so I copy the whole k8s folder to the master node with xftp.
If you don't have the k8s folder, create master_images.sh on the master node yourself and run it:

#!/bin/bash
images=(
    kube-apiserver:v1.19.3
    kube-proxy:v1.19.3
    kube-controller-manager:v1.19.3
    kube-scheduler:v1.19.3
    coredns:1.6.5
    etcd:3.4.3-0
    pause:3.1
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
#   docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done

Check the permissions on master_images.sh:
[root@k8s-node1 k8s]# ll
total 64
-rw-r--r-- 1 root root  7149 Jul  7 13:25 get_helm.sh
-rw-r--r-- 1 root root  6310 Jul  7 13:25 ingress-controller.yaml
-rw-r--r-- 1 root root   209 Jul  7 13:25 ingress-demo.yml
-rw-r--r-- 1 root root 15016 Jul  7 13:25 kube-flannel.yml
-rw-r--r-- 1 root root  4737 Jul  7 13:25 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root  3841 Jul  7 13:25 kubesphere-complete-setup.yaml
-rw-r--r-- 1 root root   392 Jul  7 13:25 master_images.sh
-rw-r--r-- 1 root root   283 Jul  7 13:25 node_images.sh
-rw-r--r-- 1 root root  1053 Jul  7 13:25 product.yaml
-rw-r--r-- 1 root root   977 Jul  7 13:25 Vagrantfile
Make master_images.sh executable:
[root@k8s-node1 k8s]# chmod 700 master_images.sh
Run master_images.sh:
[root@k8s-node1 k8s]# ./master_images.sh
Initialize kubeadm:
kubeadm init \
  --apiserver-advertise-address=10.0.2.4 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.3 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
Output (abridged; certificate, kubeconfig, and static-pod generation messages elided):
[root@k8s-node1 k8s]# kubeadm init ...
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
...
[apiclient] All control plane components are healthy after 36.502244 seconds
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# The command below is what the non-master nodes run to join the master; write it down.
kubeadm join 10.0.2.4:6443 --token a5pgul.wjroilv2eb4rmwm9 \
    --discovery-token-ca-cert-hash sha256:1e7f590f18b4d43604802b1b7d7a4f541932beccee4a763fe361b08023f9d693
# The output above also explains how to join new nodes later.
# If the token expires before a node joins, generate a new one (see the "token expired" note below).
Notes:
- --apiserver-advertise-address=10.0.2.4: the master host's address, i.e. the eth0 address shown earlier;
- --pod-network-cidr: the address range used for pod-to-pod networking
(2) Test kubectl (run on the master)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Detailed add-on documentation: https://kubernetes.io/docs/concepts/cluster-administration/addons/
kubectl get nodes      # list all nodes; the master stays NotReady until the network plugin is installed
journalctl -u kubelet  # inspect the kubelet log
7) Install a pod network plugin (CNI)
On the master node, install the pod network plugin:
kubectl apply -f \
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The address above may be blocked; in that case, run a locally downloaded flannel yml instead (a copy can be downloaded via https://blog.csdn.net/lxm1720161656/article/details/106436252), e.g.:
The local kube-flannel.yml contains:
(The file is the standard kube-flannel v0.11.0 manifest: the psp.flannel.unprivileged PodSecurityPolicy, a flannel ClusterRole and ClusterRoleBinding, the flannel ServiceAccount in kube-system, the kube-flannel-cfg ConfigMap, and one kube-flannel-ds DaemonSet per CPU architecture: amd64, arm64, arm, ppc64le, and s390x, differing only in the image tag and the nodeAffinity arch match. The key part is the ConfigMap's network configuration, which must match the --pod-network-cidr passed to kubeadm init:)

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
Apply it:
[root@k8s-node1 k8s]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
If the images referenced in kube-flannel.yml cannot be pulled, find a mirrored copy on Docker Hub and edit the yml to replace the amd64 image addresses.
Then wait about 3 minutes and check progress:
kubectl get pods -n kube-system              # pods in one namespace
kubectl get pods --all-namespaces            # pods in all namespaces
watch kubectl get pod -n kube-system -o wide # monitor pod progress
# If the network misbehaves, take cni0 down, reboot the VM, and test again:
ip link set cni0 down
Wait 3-10 minutes; continue only once everything is Running.
Check the namespaces:
[root@k8s-node1 k8s]# kubectl get ns
NAME              STATUS   AGE
default           Active   30m
kube-node-lease   Active   30m
kube-public       Active   30m
kube-system       Active   30m
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-546565776c-9sbmk            0/1     Pending   0          31m
kube-system   coredns-546565776c-t68mr            0/1     Pending   0          31m
kube-system   etcd-k8s-node1                      1/1     Running   0          31m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          31m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          31m
kube-system   kube-flannel-ds-amd64-6xwth         1/1     Running   0          2m50s
kube-system   kube-proxy-sz2vz                    1/1     Running   0          31m
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          31m
Check the node info on the master:
[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   34m   v1.19.3
# STATUS must be Ready before running the commands below
Finally, run the join command on both "k8s-node2" and "k8s-node3" (it is the last part of the kubeadm init output):
kubeadm join 10.0.2.4:6443 --token bt3hkp.yxnpzsgji4a6edy7 \
    --discovery-token-ca-cert-hash sha256:64949994a89c53e627d68b115125ff753bfe6ff72a26eb561bdc30f32837415a
[root@k8s-node1 opt]# kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   Ready      master   47m   v1.19.3
k8s-node2   NotReady   <none>   75s   v1.19.3
k8s-node3   NotReady   <none>   76s   v1.19.3
Monitor pod progress:
# run on the master
watch kubectl get pod -n kube-system -o wide
Once every STATUS is Running, check the node info again:
[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   3h50m   v1.19.3
k8s-node2   Ready    <none>   3h3m    v1.19.3
k8s-node3   Ready    <none>   3h3m    v1.19.3
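Checking "everything Ready" by eye gets tedious; the helper below parses `kubectl get nodes` output and fails if any node is not Ready. A sketch: the `all_nodes_ready` name is made up here, and since the function only parses text, it can be tried against a captured sample before pointing it at a live cluster.

```shell
#!/bin/sh
# Read `kubectl get nodes` output on stdin; succeed only if every node is Ready.
all_nodes_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad++ } END { exit (bad > 0 ? 1 : 0) }'
}

# Try it on a captured sample (on a live cluster: kubectl get nodes | all_nodes_ready):
sample="NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   Ready      master   47m   v1.19.3
k8s-node2   NotReady   <none>   75s   v1.19.3"
if echo "$sample" | all_nodes_ready; then
  echo "all nodes Ready"
else
  echo "some nodes are not Ready yet"
fi
```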
8) Join the Kubernetes worker nodes
On each node, run the kubeadm join command printed by kubeadm init to add it to the cluster;
confirm each node joined successfully.
What if the token expires? Generate a fresh join command on the master:
kubeadm token create --print-join-command
9) First steps with the Kubernetes cluster
1. Deploy a tomcat from the master node
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
List all resources:
[root@k8s-node1 k8s]# kubectl get all
NAME                           READY   STATUS              RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-cfd8g   0/1     ContainerCreating   0          41s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   70m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   0/1     1            0           41s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   1         1         0       41s
`kubectl get pods -o wide` shows where tomcat was deployed; here it landed on k8s-node3:
[root@k8s-node1 k8s]# kubectl get all -o wide
NAME                           READY   STATUS              RESTARTS   AGE   IP       NODE        NOMINATED NODE   READINESS GATES
pod/tomcat6-5f7ccf4cb9-xhrr9   0/1     ContainerCreating   0          77s   <none>   k8s-node3   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   68m   <none>

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES               SELECTOR
deployment.apps/tomcat6   0/1     1            0           77s   tomcat       tomcat:6.0.53-jre8   app=tomcat6

NAME                                 DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES               SELECTOR
replicaset.apps/tomcat6-5f7ccf4cb9   1         1         0       77s   tomcat       tomcat:6.0.53-jre8   app=tomcat6,pod-template-hash=5f7ccf4cb9
Check which images node3 pulled:
[root@k8s-node3 k8s]# docker images
REPOSITORY                                                       TAG             IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.17.3         ae853e93800d   14 months ago   116MB
quay.io/coreos/flannel                                           v0.11.0-amd64   ff281650a721   2 years ago     52.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause        3.1             da86e6ba6ca1   3 years ago     742kB
tomcat                                                           6.0.53-jre8     49ab0583115a   3 years ago     290MB
Check the containers running on node3:
[root@k8s-node3 k8s]# docker ps
CONTAINER ID   IMAGE                                                             COMMAND                  CREATED              STATUS              NAMES
8a197fa41dd9   tomcat                                                            "catalina.sh run"        About a minute ago   Up About a minute   k8s_tomcat_tomcat6-5f7ccf4cb9-xhrr9_default_81f186a8-4805-4bbb-8d77-3142269942ed_0
4074d0d63a88   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1     "/pause"                 2 minutes ago        Up 2 minutes        k8s_POD_tomcat6-5f7ccf4cb9-xhrr9_default_81f186a8-4805-4bbb-8d77-3142269942ed_0
db3faf3a280d   ff281650a721                                                      "/opt/bin/flanneld -…"   29 minutes ago       Up 29 minutes       k8s_kube-flannel_kube-flannel-ds-amd64-vcktd_kube-system_31ca3556-d6c3-48b2-b393-35ff7d89a078_0
be461b54cb4b   registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube…"   30 minutes ago       Up 30 minutes       k8s_kube-proxy_kube-proxy-ptq2t_kube-system_0e1f7df3-7204-481d-bf15-4b0e09cf0c81_0
88d1ab87f400   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1     "/pause"                 31 minutes ago       Up 31 minutes       k8s_POD_kube-flannel-ds-amd64-vcktd_kube-system_31ca3556-d6c3-48b2-b393-35ff7d89a078_0
52be28610a02   registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1     "/pause"                 31 minutes ago       Up 31 minutes
Run on node1:
[root@k8s-node1 k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
tomcat6-7b84fb5fdc-cfd8g   1/1     Running   0          5m35s
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
default       tomcat6-7b84fb5fdc-cfd8g            1/1     Running   0          163m
kube-system   coredns-546565776c-9sbmk            1/1     Running   0          3h52m
kube-system   coredns-546565776c-t68mr            1/1     Running   0          3h52m
kube-system   etcd-k8s-node1                      1/1     Running   0          3h52m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          3h52m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          3h52m
kube-system   kube-flannel-ds-amd64-5xs5j         1/1     Running   0          3h6m
kube-system   kube-flannel-ds-amd64-6xwth         1/1     Running   0          3h24m
kube-system   kube-flannel-ds-amd64-fvnvx         1/1     Running   0          3h6m
kube-system   kube-proxy-7tkvl                    1/1     Running   0          3h6m
kube-system   kube-proxy-mvlnk                    1/1     Running   0          3h6m
kube-system   kube-proxy-sz2vz                    1/1     Running   0          3h52m
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          3h52m
The tomcat pod was scheduled on node3; now simulate a failure by powering node3 off and observing what happens.
(A plain `docker stop` isn't enough: `docker ps` shows k8s immediately starting a replacement container on the same node, so power the whole node off instead.)
[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   Ready      master   79m   v1.17.3
k8s-node2   Ready      <none>   41m   v1.17.3
k8s-node3   NotReady   <none>   41m   v1.17.3
Recovery takes a few minutes (it depends on network speed and may take longer; the tomcat image is about 300 MB):
[root@k8s-node1 k8s]# kubectl get pods -o wide
NAME                       READY   STATUS        RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-5f7ccf4cb9-clcpr   1/1     Running       0          4m16s   10.244.1.2   k8s-node2   <none>           <none>
tomcat6-5f7ccf4cb9-xhrr9   1/1     Terminating   1          22m     10.244.2.2   k8s-node3   <none>           <none>
2. Expose the tomcat deployment
Run on the master:
# The tomcat container listens on 8080. The service exposes port 80 and forwards it to the
# pod's 8080; --type=NodePort additionally opens an auto-assigned port on every node.
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort
The service's port 80 maps to the container's 8080; clients reach the pods through the service.
Check the service:
[root@k8s-node1 k8s]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        93m
tomcat6      NodePort    10.96.7.78   <none>        80:30055/TCP   8s
[root@k8s-node1 k8s]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        103m    <none>
tomcat6      NodePort    10.96.7.78   <none>        80:30055/TCP   9m38s   app=tomcat6
Open http://192.168.56.100:30055/ in a browser to see the tomcat welcome page.
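The node port (30055 above) is assigned randomly from the 30000-32767 range, so scripts should look it up rather than hard-code it. Below is a sketch that extracts it by parsing a `kubectl get svc` line; the `node_port` helper is illustrative. On a live cluster, `kubectl get svc tomcat6 -o jsonpath='{.spec.ports[0].nodePort}'` does the same job more robustly.

```shell
#!/bin/sh
# Extract the node port from a `kubectl get svc` line whose PORT(S) column looks like "80:30055/TCP".
node_port() {
  # stdin: one service line; prints the port after the colon in field 5
  awk '{ split($5, p, /[:\/]/); print p[2] }'
}

line="tomcat6   NodePort   10.96.7.78   <none>   80:30055/TCP   8s"
port=$(echo "$line" | node_port)
echo "tomcat6 is reachable on every node at port $port"
```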
The command below shows the pod and the service wrapping it; the pod was produced by the deployment, which also manages a replica set:
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-clcpr   1/1     Running   0          4h12m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        5h37m
service/tomcat6      NodePort    10.96.7.78   <none>        80:30055/TCP   4h3m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   1/1     1            1           4h30m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   1         1         1       4h30m
3、动态扩容测试
kubectl get deployment
[root@k8s-node1 ~]# kubectl get deployment NAME READY UP-TO-DATE AVAILABLE AGE tomcat6 2/2 2 2 11h
Application upgrades: kubectl set image (run with --help for usage)
Scale out: kubectl scale --replicas=3 deployment tomcat6
[root@k8s-node1 ~]# kubectl scale --replicas=3 deployment tomcat6
deployment.apps/tomcat6 scaled
[root@k8s-node1 ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-5f7ccf4cb9-clcpr   1/1     Running   0          4h23m   10.244.1.2   k8s-node2   <none>           <none>
tomcat6-5f7ccf4cb9-jbvr4   1/1     Running   0          9s      10.244.2.3   k8s-node3   <none>           <none>
tomcat6-5f7ccf4cb9-ng556   1/1     Running   0          9s      10.244.2.4   k8s-node3   <none>           <none>
[root@k8s-node1 ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        5h48m   <none>
tomcat6      NodePort    10.96.7.78   <none>        80:30055/TCP   4h15m   app=tomcat6
With multiple replicas, the NodePort on any node now serves tomcat6.
Scale in: kubectl scale --replicas=1 deployment tomcat6
[root@k8s-node1 ~]# kubectl scale --replicas=1 deployment tomcat6
deployment.apps/tomcat6 scaled
[root@k8s-node1 ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-5f7ccf4cb9-clcpr   1/1     Running   0          4h32m   10.244.1.2   k8s-node2   <none>           <none>
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-clcpr   1/1     Running   0          4h33m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        5h58m
service/tomcat6      NodePort    10.96.7.78   <none>        80:30055/TCP   4h24m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   1/1     1            1           4h51m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   1         1         1       4h51m
4、Getting the YAML for the operations above
# Dry run: only prints the manifest, without actually creating the Deployment
kubectl create deployment web --image=nginx -o yaml --dry-run
5、Deleting resources
kubectl delete deployment/nginx
kubectl delete service/nginx-service
kubectl get all
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-5f7ccf4cb9-clcpr   1/1     Running   0          4h33m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        5h58m
service/tomcat6      NodePort    10.96.7.78   <none>        80:30055/TCP   4h24m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   1/1     1            1           4h51m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-5f7ccf4cb9   1         1         1       4h51m

# Delete deployment.apps/tomcat6
[root@k8s-node1 ~]# kubectl delete deployment.apps/tomcat6
deployment.apps "tomcat6" deleted

# Check what is left
[root@k8s-node1 ~]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        6h
service/tomcat6      NodePort    10.96.7.78   <none>        80:30055/TCP   4h26m

# The Deployment is gone but the Service remains; a Service with no pods
# behind it no longer serves anything.
# Check the pods
[root@k8s-node1 ~]# kubectl get pods
No resources found in default namespace.
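As the output above shows, deleting the Deployment leaves the Service behind. A hedged cleanup sketch that removes both in one go (the helper name `cleanup_app` is made up; the `KUBECTL` override exists only so the script can be dry-run with `echo`, and on a real cluster you would leave it unset):

```shell
#!/bin/sh
# Delete both the Deployment and the Service created by `kubectl expose`.
# KUBECTL can be overridden (e.g. KUBECTL=echo) to dry-run the script.
KUBECTL="${KUBECTL:-kubectl}"

cleanup_app() {  # usage: cleanup_app <name>
  $KUBECTL delete deployment "$1" --ignore-not-found
  $KUBECTL delete service "$1" --ignore-not-found
}

KUBECTL=echo cleanup_app tomcat6   # dry-run: just prints the two commands
```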
三、Docker in depth
四、K8s details
https://kubernetes.io/zh/docs/reference/kubectl/overview/
1、YAML output
https://kubernetes.io/zh/docs/reference/kubectl/overview/#资源类型
In this example, the following command outputs the details of a single pod as a YAML-formatted object:
kubectl get pod web-pod-13je7 -o yaml
Remember: for details on which output formats each command supports, see the kubectl reference documentation.
--dry-run:

--dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.

The value must be one of:
- none
- server: submit a server-side request without persisting the resource.
- client: only print the object that would be sent, without sending it.

That is, with --dry-run the command is not actually executed against the cluster.
# Output the manifest as YAML
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml
W0504 03:39:08.389369    8107 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}
We can also redirect this YAML into a file and then apply it with kubectl apply -f:
# Redirect to tomcat6.yaml
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6.yaml
W0504 03:46:18.180366   11151 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
tomcat6.yaml contents, with the replica count changed to 3:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3    # changed
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}
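Editing the replica count by hand works, but the edit can also be scripted. A sketch using sed (the helper name `set_replicas` and the /tmp demo file are made up; the in-place edit assumes a sed that accepts `-i.bak`, which both GNU and BSD sed do):

```shell
#!/bin/sh
# Set the `replicas:` field in a manifest generated with --dry-run -o yaml.
set_replicas() {  # usage: set_replicas <count> <file>
  sed -i.bak "s/^\([[:space:]]*replicas:\).*/\1 $1/" "$2"
}

# Demo on a throwaway copy rather than the real manifest
printf 'spec:\n  replicas: 1\n' > /tmp/tomcat6-demo.yaml
set_replicas 3 /tmp/tomcat6-demo.yaml
grep 'replicas:' /tmp/tomcat6-demo.yaml   # prints "  replicas: 3"
```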
# Apply tomcat6.yaml
[root@k8s-node1 k8s]# kubectl apply -f tomcat6.yaml
deployment.apps/tomcat6 created
[root@k8s-node1 k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
tomcat6-5f7ccf4cb9-hxqfl   1/1     Running   0          7s
tomcat6-5f7ccf4cb9-ksm4n   1/1     Running   0          7s
tomcat6-5f7ccf4cb9-qlzd4   1/1     Running   0          7s
Check the details of a specific pod:
[root@k8s-node1 ~]# kubectl get pods tomcat6-7b84fb5fdc-5jh6t -o yaml
2、Service
A Kubernetes Service defines an abstraction: a logical group of Pods plus a policy for accessing them, often called a microservice. The set of Pods a Service targets is usually determined by a Label Selector.
In plain terms: the Service tracks Pod state by label rather than by IP, so Pod restarts and IP changes do not break the load balancing in front of them.
Services can provide load balancing, but with the following limitations:
- By default only layer-4 load balancing (IP + port) is provided, with no layer-7 features (host and domain names); when more elaborate request-matching rules are needed, layer-4 load balancing cannot express them
- Layer-7 capability can be added later with an Ingress
# 1. Deploy an nginx
kubectl create deployment nginx --image=nginx
# 2. Expose nginx
kubectl expose deployment nginx --port=80 --type=NodePort
We are now exposing via NodePort: the assigned port on every node reaches the Pods, but if the node a client uses goes down, access through that node breaks.
Earlier we deployed and exposed tomcat from the command line; the same operations can be done with YAML.
# This just captures a Deployment YAML template
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6-deployment.yaml
W0504 04:13:28.265432   24263 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-node1 ~]# ls tomcat6-deployment.yaml
tomcat6-deployment.yaml
[root@k8s-node1 ~]#
Edit "tomcat6-deployment.yaml" so it reads:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
# Deploy
[root@k8s-node1 ~]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 configured
# Check the resources
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-5jh6t   1/1     Running   0          27m
pod/tomcat6-7b84fb5fdc-8lhwv   1/1     Running   0          27m
pod/tomcat6-7b84fb5fdc-j4qmh   1/1     Running   0          27m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   14h

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   3/3     3            3           27m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   3         3         3       27m
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort --dry-run -o yaml
apiVersion: v1
kind: Service            # a Service this time
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6         # label selector
  type: NodePort
status:
  loadBalancer: {}
Link the Deployment and the Service
Join this output to "tomcat6-deployment.yaml" with a --- separator, so that a single file both deploys the app and exposes the service:
apiVersion: apps/v1
kind: Deployment            # the Deployment
metadata:
  labels:
    app: tomcat6            # label
  name: tomcat6
spec:
  replicas: 3               # replica count
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6            # label
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6            # label selector
  type: NodePort
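Before applying a multi-document file like this, a quick sanity check that both documents made it in can save a round trip. A sketch that lists the `kind:` of every YAML document (the helper name `list_kinds` and the /tmp demo file are made up for illustration):

```shell
#!/bin/sh
# Print the kind of every YAML document in a combined manifest.
list_kinds() {
  awk '$1 == "kind:" { print $2 }' "$1"
}

# A cut-down stand-in for the combined tomcat6 manifest
cat > /tmp/tomcat6-all.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat6
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat6
EOF

list_kinds /tmp/tomcat6-all.yaml   # prints Deployment, then Service
```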
- Of the two documents above, one is a Deployment and the other is a Service
Deploy and expose the service:
[root@k8s-node1 ~]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 created
service/tomcat6 created
Check the service and deployment:
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-dsqmb   1/1     Running   0          4s
pod/tomcat6-7b84fb5fdc-gbmxc   1/1     Running   0          5s
pod/tomcat6-7b84fb5fdc-kjlc6   1/1     Running   0          4s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        14h
service/tomcat6      NodePort    10.96.147.210   <none>        80:30172/TCP   4s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   3/3     3            3           5s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   3         3         3       5s
Access port 30172 on node1, node2 and node3:
[root@k8s-node1 ~]# curl -I http://192.168.56.{100,101,102}:30172/
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT
All 3 pods behind the service are reachable now; what is still missing is a single entry point that load-balances across them.
3、Ingress
Ingress routes traffic to Pods by discovering them through their Services, with domain-name-based access
Pod-level load balancing is implemented by the Ingress controller
Supports TCP/UDP layer-4 and HTTP layer-7 load balancing
Think of the Ingress as an nginx in front of the cluster: a domain name is mapped to a Service port
- An Ingress manages multiple services
- A service manages multiple pods
Steps:
(1) Deploy the Ingress controller
Apply "k8s/ingress-controller.yaml":
[root@k8s-node1 k8s]# kubectl apply -f ingress-controller.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
daemonset.apps/nginx-ingress-controller created
service/ingress-nginx created
Check:
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE       NAME                                READY   STATUS              RESTARTS   AGE
default         tomcat6-7b84fb5fdc-dsqmb            1/1     Running             0          16m
default         tomcat6-7b84fb5fdc-gbmxc            1/1     Running             0          16m
default         tomcat6-7b84fb5fdc-kjlc6            1/1     Running             0          16m
ingress-nginx   nginx-ingress-controller-9q6cs      0/1     ContainerCreating   0          40s
ingress-nginx   nginx-ingress-controller-qx572      0/1     ContainerCreating   0          40s
kube-system     coredns-546565776c-9sbmk            1/1     Running             1          14h
kube-system     coredns-546565776c-t68mr            1/1     Running             1          14h
kube-system     etcd-k8s-node1                      1/1     Running             1          14h
kube-system     kube-apiserver-k8s-node1            1/1     Running             1          14h
kube-system     kube-controller-manager-k8s-node1   1/1     Running             1          14h
kube-system     kube-flannel-ds-amd64-5xs5j         1/1     Running             2          13h
kube-system     kube-flannel-ds-amd64-6xwth         1/1     Running             2          14h
kube-system     kube-flannel-ds-amd64-fvnvx         1/1     Running             1          13h
kube-system     kube-proxy-7tkvl                    1/1     Running             1          13h
kube-system     kube-proxy-mvlnk                    1/1     Running             2          13h
kube-system     kube-proxy-sz2vz                    1/1     Running             1          14h
kube-system     kube-scheduler-k8s-node1            1/1     Running             1          14h
The master node only does scheduling here; the controller pods run on node2 and node3, which are still pulling the image (hence ContainerCreating).
(2) Create an Ingress rule
ingress-tomcat6.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: tomcat6.kubenetes.com
    http:
      paths:
      - backend:
          serviceName: tomcat6
          servicePort: 80
[root@k8s-node1 k8s]# touch ingress-tomcat6.yaml
# Put the rule above into ingress-tomcat6.yaml
[root@k8s-node1 k8s]# vi ingress-tomcat6.yaml
[root@k8s-node1 k8s]# kubectl apply -f ingress-tomcat6.yaml
ingress.extensions/web created
Edit the local machine's hosts file and add the following name mapping:
192.168.56.102 tomcat6.kubenetes.com
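To avoid stacking up duplicate lines when the tutorial is re-run, the hosts entry can be appended idempotently. A sketch (the helper name `add_hosts_entry` is made up; it takes the file path as an argument so it can be tried on a scratch file first, and you would run it against /etc/hosts as root):

```shell
#!/bin/sh
# Append the hosts entry only if it is not already present.
add_hosts_entry() {  # usage: add_hosts_entry <hosts-file>
  entry='192.168.56.102 tomcat6.kubenetes.com'
  grep -qF "$entry" "$1" || echo "$entry" >> "$1"
}

: > /tmp/hosts-demo               # scratch file for the demo
add_hosts_entry /tmp/hosts-demo
add_hosts_entry /tmp/hosts-demo   # second call is a no-op
grep -c 'tomcat6.kubenetes.com' /tmp/hosts-demo   # prints 1
```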
Test: http://tomcat6.kubenetes.com/ should now reach tomcat.
And even if one node in the cluster becomes unavailable, the whole setup keeps working.
10、Installing the Kubernetes web UI, DashBoard (the default console; not recommended)
The YAML can likewise be found online: https://gitee.com/CaiJinHao/kubernetesdashboard/tree/v1.10.1/src/deploy/recommended
1、Deploy the DashBoard
$ kubectl apply -f kubernetes-dashboard.yaml
2、Expose the DashBoard publicly
By default the DashBoard is only reachable from inside the cluster. Change its Service to type NodePort to expose it externally:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
Access URL: https://NodeIP:30001 (the dashboard serves HTTPS, as the 443 to 8443 mapping above suggests)
3、Create an authorized account
$ kubectl create serviceaccount dashboard-admin -n kube-system
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Log in to the dashboard using the token printed by the last command.
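The describe output mixes the token with other fields; pulling out just the token value makes it easier to paste into the login box. A sketch run against a sample describe output (the helper name `extract_token` is made up, and the token string below is a fabricated placeholder, not a real credential):

```shell
#!/bin/sh
# Extract the value of the `token:` field from `kubectl describe secret` output.
extract_token() {
  awk '$1 == "token:" { print $2 }'
}

secret_desc='Name:         dashboard-admin-token-x9z2k
Namespace:    kube-system
Type:         kubernetes.io/service-account-token
token:        eyJhbGciOiJSUzI1NiJ9.placeholder
ca.crt:       1025 bytes'

printf '%s\n' "$secret_desc" | extract_token   # prints just the token value
```

On the live cluster: `kubectl -n kube-system describe secret <name> | extract_token`.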