Summary of Overlay and Underlay Communication

1. Overlay Overview

1. VXLAN: VXLAN stands for Virtual eXtensible Local Area Network. Driven mainly by Cisco, it extends the idea of VLANs and is one of the NVO3 (Network Virtualization over Layer 3) standard technologies defined by the IETF. VXLAN encapsulates L2 Ethernet frames in UDP packets (L2 over L4, i.e. MAC in UDP) and transports them across an L3 network: the frame behaves as if it were forwarded within a single L2 broadcast domain, while in reality it crosses an L3 network without being constrained by it. VXLAN is therefore essentially an overlay tunneling technology. Its network identifier is 24 bits wide, so it supports 2^24 = 16,777,216 segments; that is far more scalable than the 4,096 IDs of classic VLANs and meets the needs of large-scale data center networks.

2. VTEP (VXLAN Tunnel Endpoint): The VTEP is the edge device of a VXLAN network and the start and end point of a VXLAN tunnel; encapsulation and decapsulation of the user's original frames both happen on the VTEP. A VTEP attaches to the physical network and is assigned an IP address from it: in a VXLAN packet the source IP is the local node's VTEP address and the destination IP is the peer node's VTEP address, so a pair of VTEP addresses defines one VXLAN tunnel. On a server the virtual switch plays this role (the flannel.1 tunnel device is a VTEP, for example); a virtualized network carrying several VXLANs needs a VTEP per segment to encapsulate and decapsulate the corresponding traffic.

3. VNI (VXLAN Network Identifier): The VNI is the VXLAN analog of a VLAN ID and distinguishes VXLAN segments; virtual machines in different segments cannot communicate with each other directly at layer 2. One VNI represents one tenant: even if multiple end users sit behind the same VNI, they still count as a single tenant.

4. NVGRE: Network Virtualization using Generic Routing Encapsulation, backed mainly by Microsoft. Unlike VXLAN it does not use a standard transport protocol (TCP/UDP) but rides on Generic Routing Encapsulation (GRE), using the low 24 bits of the GRE header as the Tenant Network Identifier (TNI). Like VXLAN it therefore supports 2^24 = 16,777,216 segments.
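To make the VTEP and VNI concepts concrete, here is a minimal iproute2 sketch (the device names, VNI 100, and addresses are illustrative assumptions, not part of the lab below):

ip link add vxlan100 type vxlan id 100 dstport 4789 local 192.168.247.71 dev eth0  #a VTEP: VNI 100, IANA VXLAN port 4789
ip addr add 10.0.100.1/24 dev vxlan100 && ip link set vxlan100 up
ip -d link show vxlan100      #-d prints the vxlan details: id (VNI), local VTEP IP, dstport
#on a Flannel node, the same command inspects the built-in VTEP:
ip -d link show flannel.1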

2. Overlay Communication Process

1. VM A emits an L2 frame to initiate communication with VM B.

2. The VTEP on the source host encapsulates the frame, adding the VXLAN, UDP, and IP headers.

3. Network devices forward the encapsulated packet as a standard IP packet across the L3 network to the destination host.

4. The VTEP on the destination host decapsulates the packet, removing the VXLAN, UDP, and IP headers.

5. The original L2 frame is delivered to the destination VM.
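This encapsulation is easy to observe on the wire; a hedged sketch (the interface name eth0 is an assumption, 4789 is the IANA-assigned VXLAN UDP port):

tcpdump -ni eth0 udp port 4789   #each captured packet is outer IP/UDP carrying the VXLAN header plus the inner Ethernet frame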

3. Overlay Use Cases

1. Overlay (stacked) networking builds a new virtual network on top of the physical network, allowing the containers in that network to communicate with one another.

2. Advantage: good compatibility with the physical network; pods can communicate across host subnets.

3. Network plugins such as Calico and Flannel support overlay networking.

4. Drawback: the extra encapsulation and decapsulation add performance overhead.

5. It is currently the common choice in private clouds.

4. Underlay Overview

1. The underlay network is the traditional IT infrastructure network, composed of switches, routers, and similar devices and driven by Ethernet, routing, and VLAN protocols. It is also the substrate of the overlay network, providing the data transport the overlay runs on. In container networking, an underlay network means using drivers to expose the host's underlying network interfaces directly to containers; common solutions include MACVLAN, IPVLAN, and direct routing.

2. Underlay relies on the physical network for cross-host communication.

5. Underlay Implementation Modes

1. MACVLAN mode:

  • MACVLAN virtualizes multiple network interfaces (sub-interfaces) on top of a single Ethernet interface; each virtual interface has its own unique MAC address and can be assigned its own IP address.

2. IPVLAN mode:

  • IPVLAN is similar to MACVLAN: it also creates virtual network interfaces and assigns each one a unique IP address. The difference is that every virtual interface shares the MAC address of the physical interface.
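A minimal iproute2 sketch contrasting the two (the parent interface eth0 and the addresses are assumptions):

ip link add mv1 link eth0 type macvlan mode bridge   #macvlan sub-interface: gets its own MAC address
ip link add iv1 link eth0 type ipvlan mode l2        #ipvlan sub-interface: shares eth0's MAC address
ip addr add 192.168.247.201/24 dev mv1 && ip link set mv1 up
ip addr add 192.168.247.202/24 dev iv1 && ip link set iv1 up
ip link show mv1; ip link show iv1                   #compare the link-layer addresses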

6. MACVLAN Operating Modes

Private mode:

  • In private mode, containers on the same host cannot communicate with each other, not even if a switch hairpins the traffic back to the host.

VEPA mode:

  • Virtual Ethernet Port Aggregator (VEPA): in this mode a macvlan container cannot directly receive packets from other containers on the same physical NIC,
  • but they can communicate if the switch reflects the traffic back (port hairpinning).

Passthru mode:

  • In passthru mode the macvlan supports only a single container; starting another container after the first one fails.

Bridge mode:

  • In bridge mode, macvlan containers on the same host network can communicate with each other directly; this is the recommended mode (see the Docker sketch below).
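A hedged Docker sketch of macvlan bridge mode (the parent NIC eth0, the subnet, and the gateway are assumptions to adapt to your LAN):

docker network create -d macvlan \
  --subnet=192.168.247.0/24 --gateway=192.168.247.2 \
  -o parent=eth0 macvlan-net                        #bridge is the default macvlan mode
docker run -d --name c1 --network macvlan-net --ip 192.168.247.201 alpine sleep 1d
docker run -d --name c2 --network macvlan-net --ip 192.168.247.202 alpine sleep 1d
docker exec c1 ping -c 2 192.168.247.202            #containers on the same parent NIC talk directly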


Underlay architecture diagram (figure omitted)

Network communication summary:
Overlay: builds a stacked overlay network on top of the physical network using encapsulation technologies such as VXLAN and NVGRE.
Macvlan: builds multiple virtual VLANs from sub-interfaces of the Docker host's physical NIC, one sub-interface per virtual VLAN; containers reach external networks through the host's routing function.

7. Kubernetes Pod Communication Summary

7.1 The three CNI plugin modes (detailed in 7.2 below)

7.2 Kubernetes network communication modes

Overlay network

  • Flannel VXLAN, Calico IPIP, Calico VXLAN (Calico's pure BGP mode does no encapsulation and belongs under direct routing below)
  • Pod address information is encapsulated inside host address information, so communication works across hosts and even across node subnets.

Direct routing

  • Flannel host-gw, Flannel VXLAN DirectRouting, Calico DirectRouting
  • Based on host routes: packets are forwarded straight from the source host to the destination host without overlay encapsulation, so performance is better than overlay.
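With host-gw style routing, each node simply carries a route to every peer node's pod subnet. A hedged sketch using this lab's addressing (10.200.1.0/24 as node .72's pod CIDR is an assumption):

#on node 192.168.247.71, reach pods hosted on 192.168.247.72 via a plain host route:
ip route add 10.200.1.0/24 via 192.168.247.72
ip route | grep 10.200        #plugins such as flannel host-gw program these routes automatically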

Underlay:

  • Underlay does not set up a separate virtual network for pods; pods use the host's physical network directly. A pod can even be reached directly from nodes outside the Kubernetes environment (the pod network is flat with the node network), effectively turning the pod into something like a bridged virtual machine. This makes it convenient for systems outside the cluster to access services running in pods, and because pods sit on the host network its performance is the best of the three.

8. Underlay Lab

8.1 Environment preparation

192.168.247.71 k8s-master-underlay-01  2vcpu 4G 50G 
192.168.247.72 k8s-master-underlay-02  2vcpu 4G 50G 
192.168.247.73 k8s-master-underlay-03  2vcpu 4G 50G 
192.168.247.74 k8s-node-underlay-01    2vcpu 4G 50G 

#set the hostname (run the matching line on each node)
hostnamectl set-hostname k8s-master-underlay-01
hostnamectl set-hostname k8s-master-underlay-02
hostnamectl set-hostname k8s-master-underlay-03
hostnamectl set-hostname k8s-node-underlay-01

#configure hosts
cat > /etc/hosts << EOF
127.0.0.1 localhost
127.0.1.1 cyh

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.247.71 k8s-master-underlay-01
192.168.247.72 k8s-master-underlay-02
192.168.247.73 k8s-master-underlay-03
192.168.247.74 k8s-node-underlay-01
EOF

swapoff -a                                   #disable the swap partition now
sed -i 's@/swap.img@#swap.img@g' /etc/fstab  #keep swap disabled across reboots
apt-get update                               #refresh the local package index
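A quick hedged sanity check after the prep:

hostnamectl            #confirm the static hostname took effect
swapon --show          #no output means swap is off
grep swap /etc/fstab   #the swap.img entry should be commented out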

8.2 Install the Docker engine

#install some required system tools
apt -y install apt-transport-https ca-certificates curl software-properties-common

#install the repository GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

#add the Docker apt repository
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

#refresh the package index
apt-get -y update

#list the installable Docker versions, then install
apt-cache madison docker-ce docker-ce-cli
apt install -y docker-ce docker-ce-cli
systemctl start docker && systemctl enable docker

Tune the daemon parameters: configure a registry mirror and the systemd cgroup driver:
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://9916w1ow.mirror.aliyuncs.com"]
}
EOF

#restart docker
systemctl daemon-reload && systemctl restart docker
[root@k8s-master-underlay-01 ~]# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.9.1-docker)
  scan: Docker Scan (Docker Inc., v0.21.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.21
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 1c90a442489720eec95342e1789ee8a5e1b9536f
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-122-generic
 Operating System: Ubuntu 20.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.81GiB
 Name: k8s-master-underlay-01
 ID: ISPH:MD5W:P7ML:X36J:LLOH:O7BG:IWYL:AAU5:PBFI:YAIC:LT42:ZV32
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://9916w1ow.mirror.aliyuncs.com/
 Live Restore Enabled: false

WARNING: No swap limit support
[root@k8s-master-underlay-01 ~]# 

8.3 Install cri-dockerd

[root@k8s-master-underlay-01 ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz
[root@k8s-master-underlay-01 ~]# tar xvf cri-dockerd-0.2.6.amd64.tgz
[root@k8s-master-underlay-01 ~]# cp cri-dockerd/cri-dockerd /usr/local/bin/
[root@k8s-master-underlay-01 ~]# scp cri-dockerd/cri-dockerd root@192.168.247.72:/usr/local/bin/
[root@k8s-master-underlay-01 ~]# scp cri-dockerd/cri-dockerd root@192.168.247.73:/usr/local/bin/
[root@k8s-master-underlay-01 ~]# scp cri-dockerd/cri-dockerd root@192.168.247.74:/usr/local/bin/

#configure the cri-docker.service unit on all nodes
cat > /lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com

After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
ExecReload=/bin/kill -s HUP $MAINPID

TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

#configure the cri-docker.socket unit on all nodes
cat > /etc/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

#enable and start the services:
systemctl enable --now cri-docker cri-docker.socket
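A hedged verification that the CRI endpoint kubeadm will use is actually up:

systemctl is-active cri-docker        #should print: active
ls -l /var/run/cri-dockerd.sock       #the socket referenced by --cri-socket below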

8.4 Prepare for cluster initialization

Install the kubeadm, kubelet, and kubectl components, version ≥ 1.24.0.

Install kubelet, kubeadm, and kubectl on all nodes, configuring the Aliyun Kubernetes apt source to provide the packages; either the Aliyun or the Tsinghua University Kubernetes mirror works.
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

#install kubeadm, kubelet, and kubectl:
apt-get update
apt-cache madison kubeadm
apt-get install -y kubelet=1.25.3-00 kubeadm=1.25.3-00 kubectl=1.25.3-00

#verify the version
# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:55:36Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
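Optionally pin the three packages so a routine apt upgrade cannot move the cluster to an unplanned version (a common precaution, not part of the original steps):

apt-mark hold kubelet kubeadm kubectl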

Prepare the images

[root@k8s-master-underlay-01 ~]# kubeadm config images list --kubernetes-version v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
[root@k8s-master-underlay-01 ~]# 
[root@k8s-master-underlay-01 ~]# vi images-download.sh
[root@k8s-master-underlay-01 ~]# cat images-download.sh 
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3   #note: the Aliyun mirror hosts coredns at this flat path, not under coredns/coredns
[root@k8s-master-underlay-01 ~]# bash images-download.sh

8.5 Initialize the cluster

Scenario 1: pods may use either overlay or underlay networking while SVCs use overlay; if SVCs are to be underlay, the service CIDR must be taken from the hosts' subnet instead.
The example below is the overlay case: the pod CIDR will later serve overlay pods, and the service CIDR will later serve overlay SVCs.

kubeadm init --apiserver-advertise-address=192.168.247.71 \
  --apiserver-bind-port=6443 \
  --kubernetes-version=v1.25.3 \
  --pod-network-cidr=10.200.0.0/16 \
  --service-cidr=10.100.0.0/16 \
  --service-dns-domain=cluster.local \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --cri-socket unix:///var/run/cri-dockerd.sock

Scenario 2: pods may use either overlay or underlay networking while SVCs use underlay. --pod-network-cidr=10.200.0.0/16 will serve the later overlay scenario; the underlay network CIDR is specified separately later, and overlay and underlay coexist. --service-cidr=192.168.200.0/24 serves the later underlay SVCs, through which pods can then be reached directly.

kubeadm init --apiserver-advertise-address=192.168.247.71 \
  --apiserver-bind-port=6443 \
  --kubernetes-version=v1.25.3 \
  --pod-network-cidr=10.200.0.0/16 \
  --service-cidr=192.168.200.0/24 \
  --service-dns-domain=cluster.local \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --cri-socket unix:///var/run/cri-dockerd.sock

Note: to reach an SVC later, static routes must be configured on the network devices, because an SVC exists only as iptables or IPVS rules and never answers ARP broadcasts:

-A KUBE-SERVICES -d 172.31.5.148/32 -p tcp -m comment --comment "myserver/myserver-tomcat-app1-service-underlay:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-DXPW2IL54XTPIKP5
-A KUBE-SVC-DXPW2IL54XTPIKP5 ! -s 10.200.0.0/16 -d 172.31.5.148/32 -p tcp -m comment --comment "myserver/myserver-tomcat-app1-service-underlay:http cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
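For example, on an upstream router or an external Linux host (a hedged sketch; the next hop must be a cluster node reachable on the underlay):

ip route add 192.168.200.0/24 via 192.168.247.71   #route the underlay service CIDR toward a cluster node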

Scenario 1: initialization

[root@k8s-master-underlay-01 ~]# kubeadm init --control-plane-endpoint "192.168.247.71" \
> --upload-certs \
> --apiserver-advertise-address=192.168.247.71 \
> --apiserver-bind-port=6443 \
> --kubernetes-version=v1.25.3 \
> --pod-network-cidr=10.200.0.0/16 \
> --service-cidr=10.100.0.0/16 \
> --service-dns-domain=cluster.local \
> --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
> --cri-socket unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-underlay-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.100.0.1 192.168.247.71]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-underlay-01 localhost] and IPs [192.168.247.71 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-underlay-01 localhost] and IPs [192.168.247.71 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.007146 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
f8450272a7f24480172a246c55d0fa7f542e19f5769b6e91e2cd26421183896b
[mark-control-plane] Marking the node k8s-master-underlay-01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master-underlay-01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 7vag71.8v7mqsajsya5n00t
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.247.71:6443 --token 7vag71.8v7mqsajsya5n00t \
    --discovery-token-ca-cert-hash sha256:471386ff8802bd6aff33e41518f1644153ea475ecfa4eb9f94bda8be66ebb388 \
    --control-plane --certificate-key f8450272a7f24480172a246c55d0fa7f542e19f5769b6e91e2cd26421183896b

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.247.71:6443 --token 7vag71.8v7mqsajsya5n00t \
    --discovery-token-ca-cert-hash sha256:471386ff8802bd6aff33e41518f1644153ea475ecfa4eb9f94bda8be66ebb388 
[root@k8s-master-underlay-01 ~]# 

8.6 Add cluster nodes

8.6.1 Add the other master nodes
[root@k8s-master-underlay-02 ~]# kubeadm join 192.168.247.71:6443 --token 7vag71.8v7mqsajsya5n00t \
> --discovery-token-ca-cert-hash sha256:471386ff8802bd6aff33e41518f1644153ea475ecfa4eb9f94bda8be66ebb388 \
> --control-plane --certificate-key f8450272a7f24480172a246c55d0fa7f542e19f5769b6e91e2cd26421183896b \
> --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-underlay-02 localhost] and IPs [192.168.247.72 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-underlay-02 localhost] and IPs [192.168.247.72 127.0.0.1 ::1]
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-underlay-02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.100.0.1 192.168.247.72 192.168.247.71]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-master-underlay-02 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master-underlay-02 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-master-underlay-02 ~]# mkdir -p $HOME/.kube
[root@k8s-master-underlay-02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-underlay-02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master-underlay-02 ~]# 

[root@k8s-master-underlay-03 ~]# kubeadm join 192.168.247.71:6443 --token 7vag71.8v7mqsajsya5n00t \
> --discovery-token-ca-cert-hash sha256:471386ff8802bd6aff33e41518f1644153ea475ecfa4eb9f94bda8be66ebb388 \
> --control-plane --certificate-key f8450272a7f24480172a246c55d0fa7f542e19f5769b6e91e2cd26421183896b \
> --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-underlay-03 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.100.0.1 192.168.247.73 192.168.247.71]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-underlay-03 localhost] and IPs [192.168.247.73 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-underlay-03 localhost] and IPs [192.168.247.73 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-master-underlay-03 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master-underlay-03 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-master-underlay-03 ~]# mkdir -p $HOME/.kube
[root@k8s-master-underlay-03 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-underlay-03 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master-underlay-03 ~]# 

8.6.2 Add the worker node
[root@k8s-node-underlay-01 ~]# kubeadm join 192.168.247.71:6443 --token 7vag71.8v7mqsajsya5n00t \
> --discovery-token-ca-cert-hash sha256:471386ff8802bd6aff33e41518f1644153ea475ecfa4eb9f94bda8be66ebb388 \
> --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node-underlay-01 ~]#

9. Install the Underlay Network Components

9.1 Prepare the Helm environment

Official references:
https://helm.sh/docs/intro/install/
https://github.com/helm/helm/releases

[root@k8s-master-underlay-01 ~]# wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
[root@k8s-master-underlay-01 ~]# tar xvf helm-v3.9.0-linux-amd64.tar.gz
[root@k8s-master-underlay-01 ~]# mv linux-amd64/helm /usr/local/bin/
[root@k8s-master-underlay-01 ~]# helm version
version.BuildInfo{Version:"v3.9.0", GitCommit:"7ceeda6c585217a19a1131663d8cd1f7d641b2a7", GitTreeState:"clean", GoVersion:"go1.17.5"}
[root@k8s-master-underlay-01 ~]# 

9.2 Deploy hybridnet

[root@k8s-master-underlay-01 ~]# helm repo add hybridnet https://alibaba.github.io/hybridnet/
"hybridnet" has been added to your repositories
[root@k8s-master-underlay-01 ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "hybridnet" chart repository
Update Complete. ⎈Happy Helming!⎈
[root@k8s-master-underlay-01 ~]# helm install hybridnet hybridnet/hybridnet -n kube-system --set init.cidr=10.200.0.0/16  #sets the overlay pod CIDR; without --set init.cidr it defaults to 100.64.0.0/16
NAME: hybridnet
LAST DEPLOYED: Fri Oct 28 01:01:18 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
[root@k8s-master-underlay-01 ~]# kubectl get node -owide
NAME                     STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-master-underlay-01   Ready    control-plane   46m   v1.25.3   192.168.247.71   <none>        Ubuntu 20.04.3 LTS   5.4.0-122-generic   docker://20.10.21
k8s-master-underlay-02   Ready    control-plane   44m   v1.25.3   192.168.247.72   <none>        Ubuntu 20.04.3 LTS   5.4.0-122-generic   docker://20.10.21
k8s-master-underlay-03   Ready    control-plane   43m   v1.25.3   192.168.247.73   <none>        Ubuntu 20.04.3 LTS   5.4.0-122-generic   docker://20.10.21
k8s-node-underlay-01     Ready    <none>          22m   v1.25.3   192.168.247.74   <none>        Ubuntu 20.04.3 LTS   5.4.0-131-generic   docker://20.10.21
[root@k8s-master-underlay-01 ~]#

At this point the hybridnet-manager and hybridnet-webhook pods are stuck Pending; kubectl describe shows that no node in the cluster carries the master label.

[root@k8s-master-underlay-01 ~]# kubectl get pod -A
NAMESPACE     NAME                                             READY   STATUS              RESTARTS      AGE
kube-system   coredns-7f8cbcb969-tllgr                         0/1     ContainerCreating   0             45m
kube-system   coredns-7f8cbcb969-xs56z                         0/1     ContainerCreating   0             45m
kube-system   etcd-k8s-master-underlay-01                      1/1     Running             0             45m
kube-system   etcd-k8s-master-underlay-02                      1/1     Running             0             43m
kube-system   etcd-k8s-master-underlay-03                      1/1     Running             0             42m
kube-system   hybridnet-daemon-956nw                           2/2     Running             0             2m42s
kube-system   hybridnet-daemon-f86fj                           2/2     Running             0             2m44s
kube-system   hybridnet-daemon-mvfbf                           2/2     Running             0             2m49s
kube-system   hybridnet-daemon-p9jnd                           2/2     Running             0             2m44s
kube-system   hybridnet-manager-5fcd869c59-bwwdc               0/1     Pending             0             2m44s
kube-system   hybridnet-manager-5fcd869c59-crdf8               0/1     Pending             0             2m42s
kube-system   hybridnet-manager-5fcd869c59-vpj7s               0/1     Pending             0             2m42s
kube-system   hybridnet-webhook-5dc9fc7d9d-8r5zp               0/1     Pending             0             2m49s
kube-system   hybridnet-webhook-5dc9fc7d9d-kwxz4               0/1     Pending             0             2m49s
kube-system   hybridnet-webhook-5dc9fc7d9d-rkvz4               0/1     Pending             0             2m49s
kube-system   kube-apiserver-k8s-master-underlay-01            1/1     Running             0             45m
kube-system   kube-apiserver-k8s-master-underlay-02            1/1     Running             0             42m
kube-system   kube-apiserver-k8s-master-underlay-03            1/1     Running             0             42m
kube-system   kube-controller-manager-k8s-master-underlay-01   1/1     Running             0             45m
kube-system   kube-controller-manager-k8s-master-underlay-02   1/1     Running             0             43m
kube-system   kube-controller-manager-k8s-master-underlay-03   1/1     Running             0             42m
kube-system   kube-proxy-4xq49                                 1/1     Running             0             42m
kube-system   kube-proxy-7kkgd                                 1/1     Running             0             45m
kube-system   kube-proxy-pnwxt                                 1/1     Running             0             21m
kube-system   kube-proxy-wnxpz                                 1/1     Running             0             43m
kube-system   kube-scheduler-k8s-master-underlay-01            1/1     Running             1 (43m ago)   45m
kube-system   kube-scheduler-k8s-master-underlay-02            1/1     Running             0             42m
kube-system   kube-scheduler-k8s-master-underlay-03            1/1     Running             0             42m
[root@k8s-master-underlay-01 ~]# 
[root@k8s-master-underlay-01 ~]# kubectl describe pod -n kube-system hybridnet-manager-5fcd869c59-bwwdc
Name:                 hybridnet-manager-5fcd869c59-bwwdc
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      hybridnet
Node:                 <none>
Labels:               app=hybridnet-manager
                      pod-template-hash=5fcd869c59
Annotations:          <none>
Status:               Pending
IP:                   
IPs:                  <none>
Controlled By:        ReplicaSet/hybridnet-manager-5fcd869c59
Containers:
  hybridnet-manager:
    Image:      docker.io/hybridnetdev/hybridnet:v0.7.4
    Port:       9899/TCP
    Host Port:  9899/TCP
    Command:
      /hybridnet/hybridnet-manager
      --default-ip-retain=true
      --feature-gates=MultiCluster=false,VMIPRetain=false
      --controller-concurrency=Pod=1,IPAM=1,IPInstance=1
      --kube-client-qps=300
      --kube-client-burst=600
      --metrics-port=9899
    Environment:
      DEFAULT_NETWORK_TYPE:  Overlay
      DEFAULT_IP_FAMILY:     IPv4
      NAMESPACE:             kube-system (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nr5vh (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-nr5vh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              node-role.kubernetes.io/master=
Tolerations:                 :NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  6m3s  default-scheduler  0/4 nodes are available: 4 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  56s   default-scheduler  0/4 nodes are available: 4 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
[root@k8s-master-underlay-01 ~]# 

Fix: add the missing master label to the control-plane nodes:
[root@k8s-master-underlay-01 ~]# kubectl get node  --show-labels
NAME                     STATUS   ROLES           AGE   VERSION   LABELS
k8s-master-underlay-01   Ready    control-plane   50m   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-underlay-01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-master-underlay-02   Ready    control-plane   47m   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-underlay-02,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-master-underlay-03   Ready    control-plane   47m   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-underlay-03,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node-underlay-01     Ready    <none>          25m   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-underlay-01,kubernetes.io/os=linux
[root@k8s-master-underlay-01 ~]#
[root@k8s-master-underlay-01 ~]# kubectl label node k8s-master-underlay-01 node-role.kubernetes.io/master=
node/k8s-master-underlay-01 labeled
[root@k8s-master-underlay-01 ~]# kubectl label node k8s-master-underlay-02 node-role.kubernetes.io/master=
node/k8s-master-underlay-02 labeled
[root@k8s-master-underlay-01 ~]# kubectl label node k8s-master-underlay-03 node-role.kubernetes.io/master=
node/k8s-master-underlay-03 labeled
[root@k8s-master-underlay-01 ~]#
[root@k8s-master-underlay-01 ~]# kubectl get node  --show-labels
NAME                     STATUS   ROLES                  AGE   VERSION   LABELS
k8s-master-underlay-01   Ready    control-plane,master   52m   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-underlay-01,kubernetes.io/os=linux,networking.alibaba.com/overlay-network-attachment=true,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-master-underlay-02   Ready    control-plane,master   49m   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-underlay-02,kubernetes.io/os=linux,networking.alibaba.com/overlay-network-attachment=true,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-master-underlay-03   Ready    control-plane,master   49m   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-underlay-03,kubernetes.io/os=linux,networking.alibaba.com/overlay-network-attachment=true,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node-underlay-01     Ready    <none>                 27m   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-underlay-01,kubernetes.io/os=linux,networking.alibaba.com/overlay-network-attachment=true
[root@k8s-master-underlay-01 ~]# 
[root@k8s-master-underlay-01 ~]# kubectl get pod -A -owide
NAMESPACE     NAME                                             READY   STATUS    RESTARTS      AGE     IP               NODE                     NOMINATED NODE   READINESS GATES
kube-system   coredns-7f8cbcb969-tllgr                         0/1     Running   0             51m     10.200.0.1       k8s-node-underlay-01     <none>           <none>
kube-system   coredns-7f8cbcb969-xs56z                         0/1     Running   0             51m     10.200.0.2       k8s-node-underlay-01     <none>           <none>
kube-system   etcd-k8s-master-underlay-01                      1/1     Running   0             51m     192.168.247.71   k8s-master-underlay-01   <none>           <none>
kube-system   etcd-k8s-master-underlay-02                      1/1     Running   0             49m     192.168.247.72   k8s-master-underlay-02   <none>           <none>
kube-system   etcd-k8s-master-underlay-03                      1/1     Running   0             48m     192.168.247.73   k8s-master-underlay-03   <none>           <none>
kube-system   hybridnet-daemon-956nw                           2/2     Running   0             8m57s   192.168.247.74   k8s-node-underlay-01     <none>           <none>
kube-system   hybridnet-daemon-f86fj                           2/2     Running   0             8m59s   192.168.247.72   k8s-master-underlay-02   <none>           <none>
kube-system   hybridnet-daemon-mvfbf                           2/2     Running   0             9m4s    192.168.247.71   k8s-master-underlay-01   <none>           <none>
kube-system   hybridnet-daemon-p9jnd                           2/2     Running   0             8m59s   192.168.247.73   k8s-master-underlay-03   <none>           <none>
kube-system   hybridnet-manager-5fcd869c59-bwwdc               1/1     Running   0             8m59s   192.168.247.71   k8s-master-underlay-01   <none>           <none>
kube-system   hybridnet-manager-5fcd869c59-crdf8               1/1     Running   0             8m57s   192.168.247.73   k8s-master-underlay-03   <none>           <none>
kube-system   hybridnet-manager-5fcd869c59-vpj7s               1/1     Running   0             8m57s   192.168.247.72   k8s-master-underlay-02   <none>           <none>
kube-system   hybridnet-webhook-5dc9fc7d9d-8r5zp               1/1     Running   0             9m4s    192.168.247.73   k8s-master-underlay-03   <none>           <none>
kube-system   hybridnet-webhook-5dc9fc7d9d-kwxz4               1/1     Running   0             9m4s    192.168.247.72   k8s-master-underlay-02   <none>           <none>
kube-system   hybridnet-webhook-5dc9fc7d9d-rkvz4               1/1     Running   0             9m4s    192.168.247.71   k8s-master-underlay-01   <none>           <none>
kube-system   kube-apiserver-k8s-master-underlay-01            1/1     Running   0             51m     192.168.247.71   k8s-master-underlay-01   <none>           <none>
kube-system   kube-apiserver-k8s-master-underlay-02            1/1     Running   0             48m     192.168.247.72   k8s-master-underlay-02   <none>           <none>
kube-system   kube-apiserver-k8s-master-underlay-03            1/1     Running   0             48m     192.168.247.73   k8s-master-underlay-03   <none>           <none>
kube-system   kube-controller-manager-k8s-master-underlay-01   1/1     Running   0             51m     192.168.247.71   k8s-master-underlay-01   <none>           <none>
kube-system   kube-controller-manager-k8s-master-underlay-02   1/1     Running   0             49m     192.168.247.72   k8s-master-underlay-02   <none>           <none>
kube-system   kube-controller-manager-k8s-master-underlay-03   1/1     Running   0             48m     192.168.247.73   k8s-master-underlay-03   <none>           <none>
kube-system   kube-proxy-4xq49                                 1/1     Running   0             48m     192.168.247.73   k8s-master-underlay-03   <none>           <none>
kube-system   kube-proxy-7kkgd                                 1/1     Running   0             51m     192.168.247.71   k8s-master-underlay-01   <none>           <none>
kube-system   kube-proxy-pnwxt                                 1/1     Running   0             27m     192.168.247.74   k8s-node-underlay-01     <none>           <none>
kube-system   kube-proxy-wnxpz                                 1/1     Running   0             49m     192.168.247.72   k8s-master-underlay-02   <none>           <none>
kube-system   kube-scheduler-k8s-master-underlay-01            1/1     Running   1 (49m ago)   51m     192.168.247.71   k8s-master-underlay-01   <none>           <none>
kube-system   kube-scheduler-k8s-master-underlay-02            1/1     Running   0             49m     192.168.247.72   k8s-master-underlay-02   <none>           <none>
kube-system   kube-scheduler-k8s-master-underlay-03            1/1     Running   0             48m     192.168.247.73   k8s-master-underlay-03   <none>           <none>
[root@k8s-master-underlay-01 ~]# 

9.3 Verify the underlay network

9.3.1 Create the underlay network and associate it with the nodes
[root@k8s-master-underlay-01 ~]# mkdir /root/hybridnet
[root@k8s-master-underlay-01 ~]# cd hybridnet/
[root@k8s-master-underlay-01 hybridnet]# pwd
/root/hybridnet
[root@k8s-master-underlay-01 hybridnet]# 
[root@k8s-master-underlay-01 hybridnet]# kubectl label node k8s-master-underlay-01 network=underlay-nethost
node/k8s-master-underlay-01 labeled
[root@k8s-master-underlay-01 hybridnet]# kubectl label node k8s-master-underlay-02 network=underlay-nethost
node/k8s-master-underlay-02 labeled
[root@k8s-master-underlay-01 hybridnet]# kubectl label node k8s-master-underlay-03 network=underlay-nethost
node/k8s-master-underlay-03 labeled
[root@k8s-master-underlay-01 hybridnet]# kubectl label node k8s-node-underlay-01 network=underlay-nethost
node/k8s-node-underlay-01 labeled
[root@k8s-master-underlay-01 hybridnet]# kubectl get node --show-labels
NAME                     STATUS   ROLES                  AGE     VERSION   LABELS
k8s-master-underlay-01   Ready    control-plane,master   2d22h   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-underlay-01,kubernetes.io/os=linux,network=underlay-nethost,networking.alibaba.com/dualstack-address-quota=empty,networking.alibaba.com/ipv4-address-quota=nonempty,networking.alibaba.com/ipv6-address-quota=empty,networking.alibaba.com/overlay-network-attachment=true,networking.alibaba.com/underlay-network-attachment=true,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-master-underlay-02   Ready    control-plane,master   2d22h   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-underlay-02,kubernetes.io/os=linux,network=underlay-nethost,networking.alibaba.com/dualstack-address-quota=empty,networking.alibaba.com/ipv4-address-quota=nonempty,networking.alibaba.com/ipv6-address-quota=empty,networking.alibaba.com/overlay-network-attachment=true,networking.alibaba.com/underlay-network-attachment=true,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-master-underlay-03   Ready    control-plane,master   2d22h   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-underlay-03,kubernetes.io/os=linux,network=underlay-nethost,networking.alibaba.com/dualstack-address-quota=empty,networking.alibaba.com/ipv4-address-quota=nonempty,networking.alibaba.com/ipv6-address-quota=empty,networking.alibaba.com/overlay-network-attachment=true,networking.alibaba.com/underlay-network-attachment=true,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node-underlay-01     Ready    <none>                 2d21h   v1.25.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-underlay-01,kubernetes.io/os=linux,network=underlay-nethost,networking.alibaba.com/dualstack-address-quota=empty,networking.alibaba.com/ipv4-address-quota=nonempty,networking.alibaba.com/ipv6-address-quota=empty,networking.alibaba.com/overlay-network-attachment=true,networking.alibaba.com/underlay-network-attachment=true
[root@k8s-master-underlay-01 hybridnet]#
[root@k8s-master-underlay-01 hybridnet]# vim 1.create-underlay-network.yaml
[root@k8s-master-underlay-01 hybridnet]# cat 1.create-underlay-network.yaml 
---
apiVersion: networking.alibaba.com/v1
kind: Network
metadata:
  name: underlay-network1
spec:
  netID: 0
  type: Underlay
  nodeSelector:
    network: "underlay-nethost"

---
apiVersion: networking.alibaba.com/v1
kind: Subnet
metadata:
  name: underlay-network1 
spec:
  network: underlay-network1
  netID: 0
  range:
    version: "4"
    cidr: "192.168.247.0/24"
    gateway: "192.168.247.2"     # 外部网关地址
    start: "192.168.247.10"
    end: "192.168.247.254"
[root@k8s-master-underlay-01 hybridnet]# 
[root@k8s-master-underlay-01 hybridnet]# kubectl apply -f 1.create-underlay-network.yaml 
network.networking.alibaba.com/underlay-network1 unchanged
subnet.networking.alibaba.com/underlay-network1 created
[root@k8s-master-underlay-01 hybridnet]# 
[root@k8s-master-underlay-01 hybridnet]# kubectl get network
NAME                NETID   TYPE       MODE   V4TOTAL   V4USED   V4AVAILABLE   LASTALLOCATEDV4SUBNET   V6TOTAL   V6USED   V6AVAILABLE   LASTALLOCATEDV6SUBNET
init                4       Overlay           65534     2        65532         init                    0         0        0             
underlay-network1   0       Underlay          245       0        244           underlay-network1       0         0        0             
[root@k8s-master-underlay-01 hybridnet]#
[root@k8s-master-underlay-01 hybridnet]# kubectl get subnet
NAME                VERSION   CIDR               START             END               GATEWAY         TOTAL   USED   AVAILABLE   NETID   NETWORK
init                4         10.200.0.0/16                                                          65534   2      65532               init
underlay-network1   4         192.168.247.0/24   192.168.247.10   192.168.247.254   192.168.247.2    246     0      244          0      underlay-network1
[root@k8s-master-underlay-01 hybridnet]# 
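With the network and subnet in place, underlay addressing can be smoke-tested by requesting it through hybridnet's network-type pod annotation; a hedged sketch (the pod name and image are illustrative, and the annotation key is taken from the hybridnet documentation):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: underlay-test                               #illustrative name
  annotations:
    networking.alibaba.com/network-type: Underlay   #ask hybridnet for an underlay address
spec:
  containers:
  - name: net-test
    image: alpine
    command: ["sleep", "360000"]
EOF
kubectl get pod underlay-test -owide   #expect an IP from the 192.168.247.10-192.168.247.254 range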

Inspect the node label details:

[root@k8s-master-underlay-01 hybridnet]# kubectl describe node  k8s-master-underlay-01
Name:               k8s-master-underlay-01
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-master-underlay-01
                    kubernetes.io/os=linux
                    network=underlay-nethost
                    networking.alibaba.com/dualstack-address-quota=empty
                    networking.alibaba.com/ipv4-address-quota=nonempty
                    networking.alibaba.com/ipv6-address-quota=empty
                    networking.alibaba.com/overlay-network-attachment=true
                    networking.alibaba.com/underlay-network-attachment=true
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    networking.alibaba.com/local-vxlan-ip-list: 192.168.247.71,192.168.247.71
                    networking.alibaba.com/vtep-ip: 192.168.247.71
                    networking.alibaba.com/vtep-mac: 00:0c:29:39:96:72
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 28 Oct 2022 00:18:31 +0800
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-master-underlay-01
  AcquireTime:     <unset>
  RenewTime:       Sun, 30 Oct 2022 22:15:24 +0800
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 30 Oct 2022 22:15:20 +0800   Fri, 28 Oct 2022 00:18:23 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 30 Oct 2022 22:15:20 +0800   Fri, 28 Oct 2022 00:18:23 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 30 Oct 2022 22:15:20 +0800   Fri, 28 Oct 2022 00:18:23 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 30 Oct 2022 22:15:20 +0800   Fri, 28 Oct 2022 01:03:57 +0800   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.247.71
  Hostname:    k8s-master-underlay-01
Capacity:
  cpu:                2
  ephemeral-storage:  25110788Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3994668Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  23142102183
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3892268Ki
  pods:               110
System Info:
  Machine ID:                 99f43e0eca6d412ea75c8b7359a3b3b0
  System UUID:                ee844d56-427e-a7fb-246f-7887bc399672
  Boot ID:                    4960018e-a366-4ded-ab93-85a2a811abd8
  Kernel Version:             5.4.0-122-generic
  OS Image:                   Ubuntu 20.04.3 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.21
  Kubelet Version:            v1.25.3
  Kube-Proxy Version:         v1.25.3
PodCIDR:                      10.200.0.0/24
PodCIDRs:                     10.200.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-k8s-master-underlay-01                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2d21h
  kube-system                 hybridnet-daemon-mvfbf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d21h
  kube-system                 hybridnet-manager-5fcd869c59-bwwdc                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d21h
  kube-system                 hybridnet-webhook-5dc9fc7d9d-rkvz4                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d21h
  kube-system                 kube-apiserver-k8s-master-underlay-01             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2d21h
  kube-system                 kube-controller-manager-k8s-master-underlay-01    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2d21h
  kube-system                 kube-proxy-7kkgd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d21h
  kube-system                 kube-scheduler-k8s-master-underlay-01             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2d21h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             100Mi (2%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
[root@k8s-master-underlay-01 hybridnet]# 


[root@k8s-master-underlay-01 hybridnet]# kubectl describe node  k8s-node-underlay-01
Name:               k8s-node-underlay-01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node-underlay-01
                    kubernetes.io/os=linux
                    networking.alibaba.com/overlay-network-attachment=true
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    networking.alibaba.com/local-vxlan-ip-list: 192.168.247.74,192.168.247.74
                    networking.alibaba.com/vtep-ip: 192.168.247.74
                    networking.alibaba.com/vtep-mac: 00:0c:29:76:6c:cb
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 28 Oct 2022 00:42:49 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-node-underlay-01
  AcquireTime:     <unset>
  RenewTime:       Sun, 30 Oct 2022 22:14:35 +0800
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 30 Oct 2022 22:11:25 +0800   Fri, 28 Oct 2022 00:42:49 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 30 Oct 2022 22:11:25 +0800   Fri, 28 Oct 2022 00:42:49 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 30 Oct 2022 22:11:25 +0800   Fri, 28 Oct 2022 00:42:49 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 30 Oct 2022 22:11:25 +0800   Fri, 28 Oct 2022 01:03:37 +0800   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.247.74
  Hostname:    k8s-node-underlay-01
Capacity:
  cpu:                2
  ephemeral-storage:  25110788Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3994704Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  23142102183
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3892304Ki
  pods:               110
System Info:
  Machine ID:                 99f43e0eca6d412ea75c8b7359a3b3b0
  System UUID:                65b44d56-cfbc-c518-1895-b036f0766ccb
  Boot ID:                    ee96f020-29ec-4e37-a1b6-a361e498b1d5
  Kernel Version:             5.4.0-131-generic
  OS Image:                   Ubuntu 20.04.3 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.21
  Kubelet Version:            v1.25.3
  Kube-Proxy Version:         v1.25.3
PodCIDR:                      10.200.3.0/24
PodCIDRs:                     10.200.3.0/24
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-7f8cbcb969-tllgr                                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2d21h
  kube-system                 coredns-7f8cbcb969-xs56z                                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2d21h
  kube-system                 hybridnet-daemon-956nw                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d21h
  kube-system                 kube-proxy-pnwxt                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d21h
  myserver                    myserver-tomcat-app1-deployment-overlay-57db444484-268mw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                200m (10%)  0 (0%)
  memory             140Mi (3%)  340Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
[root@k8s-master-underlay-01 hybridnet]# 
9.3.2、Create a pod using the overlay network (no network-type annotation is set here, so the pod falls back to the cluster's default Overlay network)
[root@k8s-master-underlay-01 hybridnet]# vi 2.tomcat-app1-overlay.yaml
[root@k8s-master-underlay-01 hybridnet]# cat 2.tomcat-app1-overlay.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-tomcat-app1-deployment-overlay-label
  name: myserver-tomcat-app1-deployment-overlay
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-tomcat-app1-overlay-selector
  template:
    metadata:
      labels:
        app: myserver-tomcat-app1-overlay-selector
    spec:
      nodeName: k8s-node-underlay-01 
      containers:
      - name: myserver-tomcat-app1-container
        #image: tomcat:7.0.93-alpine 
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1 
        imagePullPolicy: IfNotPresent
        ##imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
#        resources:
#          limits:
#            cpu: 0.5
#            memory: "512Mi"
#          requests:
#            cpu: 0.5
#            memory: "512Mi"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-tomcat-app1-service-overlay-label
  name: myserver-tomcat-app1-service-overlay
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30003
  selector:
    app: myserver-tomcat-app1-overlay-selector

[root@k8s-master-underlay-01 hybridnet]# kubectl create ns myserver
namespace/myserver created
[root@k8s-master-underlay-01 hybridnet]#
[root@k8s-master-underlay-01 hybridnet]# kubectl apply -f 2.tomcat-app1-overlay.yaml 
deployment.apps/myserver-tomcat-app1-deployment-overlay created
service/myserver-tomcat-app1-service-overlay created
[root@k8s-master-underlay-01 hybridnet]# kubectl get pod -n myserver -owide
NAME                                                       READY   STATUS    RESTARTS   AGE     IP           NODE                   NOMINATED NODE   READINESS GATES
myserver-tomcat-app1-deployment-overlay-57db444484-424bl   1/1     Running   0          6m43s   10.200.0.3   k8s-node-underlay-01   <none>           <none>
[root@k8s-master-underlay-01 hybridnet]# 
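As a quick check, the overlay service should also be reachable from outside the cluster through the NodePort defined above (a sketch; it assumes the tomcat-app1:v1 image serves a page at /myapp/index.html, as the v2 image does in 9.3.4):

# hedged check: 30003 is the nodePort from the Service manifest above,
# 192.168.247.74 is the node the pod was scheduled to
curl http://192.168.247.74:30003/myapp/index.html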

9.3.3、Create a pod using the underlay network
[root@k8s-master-underlay-01 hybridnet]# vi 3.tomcat-app1-underlay.yaml 
[root@k8s-master-underlay-01 hybridnet]# cat 3.tomcat-app1-underlay.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-tomcat-app1-deployment-underlay-label
  name: myserver-tomcat-app1-deployment-underlay
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-tomcat-app1-underlay-selector
  template:
    metadata:
      labels:
        app: myserver-tomcat-app1-underlay-selector
      annotations: # select Underlay or Overlay networking for this pod
        networking.alibaba.com/network-type: Underlay
    spec:
      #nodeName: k8s-node2.example.com
      containers:
      - name: myserver-tomcat-app1-container
        #image: tomcat:7.0.93-alpine 
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v2 
        imagePullPolicy: IfNotPresent
        ##imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
#        resources:
#          limits:
#            cpu: 0.5
#            memory: "512Mi"
#          requests:
#            cpu: 0.5
#            memory: "512Mi"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-tomcat-app1-service-underlay-label
  name: myserver-tomcat-app1-service-underlay
  namespace: myserver
spec:
#  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    #nodePort: 40003
  selector:
    app: myserver-tomcat-app1-underlay-selector

[root@k8s-master-underlay-01 hybridnet]# kubectl apply -f 3.tomcat-app1-underlay.yaml 
deployment.apps/myserver-tomcat-app1-deployment-underlay created
service/myserver-tomcat-app1-service-underlay created
[root@k8s-master-underlay-01 hybridnet]# 
[root@k8s-master-underlay-01 hybridnet]# kubectl get pod -n myserver -owide
NAME                                                       READY   STATUS    RESTARTS   AGE   IP               NODE                   NOMINATED NODE   READINESS GATES
myserver-tomcat-app1-deployment-overlay-57db444484-424bl   1/1     Running   0          25m   10.200.0.3       k8s-node-underlay-01   <none>           <none>
myserver-tomcat-app1-deployment-underlay-9749fdf45-nvk28   1/1     Running   0          30s   192.168.247.10   k8s-node-underlay-01   <none>           <none>
[root@k8s-master-underlay-01 hybridnet]# 
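Because the underlay pod received an address from the physical 192.168.247.0/24 network, it should be reachable directly by its pod IP, with no Service in between (a sketch; the URL path assumes the v2 image serves /myapp/index.html, which the curl test in 9.3.4 confirms):

# hedged check: hit the underlay pod directly on its containerPort 8080
curl http://192.168.247.10:8080/myapp/index.html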
[root@k8s-master-underlay-01 hybridnet]# vi 3.tomcat-app1-underlay.yaml
[root@k8s-master-underlay-01 hybridnet]# grep replicas: 3.tomcat-app1-underlay.yaml
replicas: 3
[root@k8s-master-underlay-01 hybridnet]# kubectl apply  -f 3.tomcat-app1-underlay.yaml 
deployment.apps/myserver-tomcat-app1-deployment-underlay configured
service/myserver-tomcat-app1-service-underlay unchanged
[root@k8s-master-underlay-01 hybridnet]# kubectl get pod  -n myserver -owide
NAME                                                       READY   STATUS    RESTARTS   AGE     IP               NODE                     NOMINATED NODE   READINESS GATES
myserver-tomcat-app1-deployment-overlay-57db444484-268mw   1/1     Running   0          51m     10.200.0.3       k8s-node-underlay-01     <none>           <none>
myserver-tomcat-app1-deployment-underlay-9749fdf45-d2cgz   1/1     Running   0          3m37s   192.168.247.14   k8s-master-underlay-01   <none>           <none>
myserver-tomcat-app1-deployment-underlay-9749fdf45-gdrgz   1/1     Running   0          3m37s   192.168.247.13   k8s-node-underlay-01     <none>           <none>
myserver-tomcat-app1-deployment-underlay-9749fdf45-lms8f   1/1     Running   0          3m37s   192.168.247.15   k8s-master-underlay-02   <none>           <none>
[root@k8s-master-underlay-01 hybridnet]# 

Since this environment is deployed on VMware under Windows, reaching the pods from the Windows host would require adding a route there (a rough example follows); for convenience, the tests below are run directly from within the cluster.
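For reference, the Windows-side route would look roughly like this (an assumption: it presumes the cluster's service CIDR is 10.100.0.0/16, which matches every ClusterIP shown below, and uses a node IP as the gateway):

rem hypothetical route, run from an elevated command prompt on the Windows host
route add 10.100.0.0 mask 255.255.0.0 192.168.247.74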

9.3.4、Access the Pod through the service IP
[root@k8s-master-underlay-01 hybridnet]# vi 4.pod-underlay.yaml
[root@k8s-master-underlay-01 hybridnet]# cat 4.pod-underlay.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
  annotations:
    networking.alibaba.com/network-type: Underlay
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
[root@k8s-master-underlay-01 hybridnet]# kubectl apply -f 4.pod-underlay.yaml 
pod/annotations-demo created
[root@k8s-master-underlay-01 hybridnet]#
[root@k8s-master-underlay-01 hybridnet]# kubectl get pod  -owide
NAME               READY   STATUS    RESTARTS   AGE    IP               NODE                   NOMINATED NODE   READINESS GATES
annotations-demo   1/1     Running   0          2m2s   192.168.247.16   k8s-node-underlay-01   <none>           <none>
[root@k8s-master-underlay-01 hybridnet]# 
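This pod also uses the Underlay network type, so it should answer directly on its pod IP as well (a sketch; nginx:1.7.9 serves its default page on containerPort 80):

# hedged check: request the nginx default page from the pod's underlay IP
curl http://192.168.247.16/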
[root@k8s-node-underlay-01 ~]# kubectl get svc -A
NAMESPACE     NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes                              ClusterIP   10.100.0.1       <none>        443/TCP                  2d23h
kube-system   hybridnet-webhook                       ClusterIP   10.100.36.247    <none>        443/TCP                  2d22h
kube-system   kube-dns                                ClusterIP   10.100.0.10      <none>        53/UDP,53/TCP,9153/TCP   2d23h
myserver      myserver-tomcat-app1-service-overlay    NodePort    10.100.248.109   <none>        80:30003/TCP             93m
myserver      myserver-tomcat-app1-service-underlay   ClusterIP   10.100.105.171   <none>        80/TCP                   45m
[root@k8s-node-underlay-01 ~]#
[root@k8s-node-underlay-01 ~]# route add -net 10.100.105.0 netmask 255.255.255.0 gateway 192.168.247.74 # adding a route for the service IP subnet on the pod's host is enough
[root@k8s-node-underlay-01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.247.2   0.0.0.0         UG    100    0        0 eth0
10.100.105.0    192.168.247.74  255.255.255.0   UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.247.0   0.0.0.0         255.255.255.0   U     100    0        0 eth0
[root@k8s-node-underlay-01 ~]# 
[root@k8s-node-underlay-01 ~]# curl http://10.100.105.171/myapp/index.html
tomcat app1 v2
[root@k8s-node-underlay-01 ~]#
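Note that the route added above only covers 10.100.105.0/24, so only ClusterIPs inside that /24 become reachable this way; a single broader route would cover every service (an assumption: the service CIDR is 10.100.0.0/16, consistent with all ClusterIPs listed above):

# hedged alternative: one route for the assumed full service CIDR
route add -net 10.100.0.0 netmask 255.255.0.0 gateway 192.168.247.74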

 
