Deploying Kubernetes + Flannel on CentOS (New)
I. Preparation
1) Three CentOS hosts
k8s master: 10.11.151.97 tc-151-97
k8s node1: 10.11.151.100 tc-151-100
k8s node2: 10.11.151.101 tc-151-101
2) Software downloads (Baidu netdisk)
k8s-1.1.3, Docker-1.8.2, ETCD-2.2.1, Flannel-0.5.5
II. ETCD Cluster Deployment
ETCD is the foundation of a k8s cluster. It can be deployed as a single node or as a cluster; this article deploys an ETCD cluster across the three hosts, run as a systemd service. Perform the following steps on each of the three hosts:
1) Unpack the ETCD package and copy etcd and etcdctl into the working directory (here /opt/domeos/openxxs/k8s-1.1.3-flannel).
2) Create the file /lib/systemd/system/etcd.service, the systemd service unit on CentOS; take care to set the absolute path of the etcd binary:
[Unit]
Description=ETCD

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/etcd
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/etcd $ETCD_NAME \
          $INITIAL_ADVERTISE_PEER_URLS \
          $LISTEN_PEER_URLS \
          $ADVERTISE_CLIENT_URLS \
          $LISTEN_CLIENT_URLS \
          $INITIAL_CLUSTER_TOKEN \
          $INITIAL_CLUSTER \
          $INITIAL_CLUSTER_STATE \
          $ETCD_OPTS
Restart=on-failure
3) Create the file /etc/sysconfig/etcd, the service's configuration. The ETCD_NAME, INITIAL_ADVERTISE_PEER_URLS and ADVERTISE_CLIENT_URLS parameters differ across the three hosts. Below is the file for the 97 host; adjust these values on 100 and 101 (see the sketch after the file):
# configure file for etcd
# -name
ETCD_NAME='-name k8sETCD0'
# -initial-advertise-peer-urls
INITIAL_ADVERTISE_PEER_URLS='-initial-advertise-peer-urls http://10.11.151.97:4010'
# -listen-peer-urls
LISTEN_PEER_URLS='-listen-peer-urls http://0.0.0.0:4010'
# -advertise-client-urls
ADVERTISE_CLIENT_URLS='-advertise-client-urls http://10.11.151.97:4011,http://10.11.151.97:4012'
# -listen-client-urls
LISTEN_CLIENT_URLS='-listen-client-urls http://0.0.0.0:4011,http://0.0.0.0:4012'
# -initial-cluster-token
INITIAL_CLUSTER_TOKEN='-initial-cluster-token k8s-etcd-cluster'
# -initial-cluster
INITIAL_CLUSTER='-initial-cluster k8sETCD0=http://10.11.151.97:4010,k8sETCD1=http://10.11.151.100:4010,k8sETCD2=http://10.11.151.101:4010'
# -initial-cluster-state
INITIAL_CLUSTER_STATE='-initial-cluster-state new'
# other parameters
ETCD_OPTS=''
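For reference, a minimal sketch of the three lines that differ on the 100 host (101 is analogous with its own IP; the names k8sETCD1 and k8sETCD2 follow the INITIAL_CLUSTER entry above):
# on 10.11.151.100
ETCD_NAME='-name k8sETCD1'
INITIAL_ADVERTISE_PEER_URLS='-initial-advertise-peer-urls http://10.11.151.100:4010'
ADVERTISE_CLIENT_URLS='-advertise-client-urls http://10.11.151.100:4011,http://10.11.151.100:4012'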
4) Start the ETCD cluster
systemctl daemon-reload
systemctl start etcd
Once all three hosts are done, confirm that the ETCD cluster is working with the following commands (using the 97 host as an example):
# Check the service status
systemctl status -l etcd
# If healthy, the status shows "Active: active (running)", and the end of the log reports that the node has joined the cluster, e.g. "the connection with 6adad1923d90fb38 became active"
# If the system clocks of the ETCD nodes differ significantly, the log warns "the clock difference against ... peer is too high"; correct the system time as needed
# Check that each cluster node is reachable
curl -L http://10.11.151.97:4012/version
curl -L http://10.11.151.100:4012/version
curl -L http://10.11.151.101:4012/version
# If healthy, each returns: {"etcdserver":"2.2.1","etcdcluster":"2.2.0"}
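The bundled etcdctl can also summarize cluster health in one command; a quick check (a sketch, assuming etcdctl 2.2 with the --peers flag, run from the working directory):
# query any member for the health of the whole cluster
./etcdctl --peers http://10.11.151.97:4011 cluster-health
# healthy output ends with: cluster is healthy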
III. Configure the Network Environment
If the network environment conflicts with the cluster before startup, particularly through interfering iptables rules, the cluster will not work properly. Confirm the following settings before starting:
1)/etc/hosts
kubelet obtains the local IP via /etc/hosts, so the hostname-to-IP mapping must be configured there. For example, /etc/hosts on the 97 host must contain this record:
10.11.151.97 tc-151-97
The hostname is an important parameter in k8s networking and must satisfy DNS naming rules: letters, digits and hyphens are allowed, but underscores are not (tc_151_97, for example, is invalid). Run the hostname command to check the local hostname. If it does not comply, there are two options: <1> change the host's hostname to a compliant one, which requires restarting the network; a CentOS hostname-changing script is provided (Baidu netdisk, link here); <2> pass the --hostname_override parameter when starting kubelet to set the hostname used inside the cluster, which is recommended when other services already depend on the host's hostname.
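For reference, on CentOS 7 the hostname can also be changed directly with systemd's hostnamectl (a minimal sketch; the script mentioned above additionally handles restarting the network):
hostnamectl set-hostname tc-151-97
# confirm the new hostname
hostname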
2)iptables
flannel takes over the docker network by modifying iptables rules, so clean up iptables before starting to make sure nothing conflicts. If there are no important rules in iptables, simply flushing everything is recommended:
# Take a look at the existing iptables rules
iptables -L -n
# If there are no important rules, flush everything
iptables -P INPUT ACCEPT
iptables -F
# Check again to confirm the rules are gone
iptables -L -n
To be thorough, disable the firewall services entirely (flannel enables iptables automatically when it starts):
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
3)ifconfig
If k8s has been set up on the host several times before, the network interfaces need cleaning up. With the flanneld and docker services stopped, inspect the interfaces with ifconfig. If docker0, flannel.0 or flannel.1 exist, or virtual interfaces created by a calico setup (a follow-up article, "Deploying kubernetes + calico on CentOS", will cover the details), delete them with:
ip link delete docker0
ip link delete flannel.1
......
4) flannel parameter setup
The flannel settings for the cluster, such as the available subnet range and the packet encapsulation type, must be written into ETCD beforehand:
curl -L http://10.11.151.97:4012/v2/keys/flannel/network/config -XPUT -d value="{\"Network\":\"172.16.0.0/16\",\"SubnetLen\":25,\"Backend\":{\"Type\":\"vxlan\",\"VNI\":1}}"
The key written to ETCD is /flannel/network/config; the flannel service configuration refers to it later. In the settings, Network is the subnet range available to the whole k8s cluster; SubnetLen is the subnet mask length for each Node; Type is the encapsulation method, where vxlan is recommended (udp and others are also available).
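To confirm the key is in place, read it back through the same v2 keys API; the value field of the returned JSON should contain the configuration string written above:
curl -L http://10.11.151.97:4012/v2/keys/flannel/network/config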
IV. Start the k8s-master Side
The k8s master usually consists of three components: kube-apiserver, kube-controller-manager and kube-scheduler. To include the master host itself in cluster management, for instance so that it can use the in-cluster DNS service, kube-proxy would also have to run on it; that case is not covered here. After unpacking the k8s package, copy kube-apiserver, kube-controller-manager and kube-scheduler from bin/linux/amd64/ in the unpack directory into the working directory.
1) Create, configure and start the kube-apiserver service
<1> The file /lib/systemd/system/kube-apiserver.service; again, make sure the absolute path of the kube-apiserver binary is set:
[Unit]
Description=kube-apiserver

[Service]
EnvironmentFile=/etc/sysconfig/kube-apiserver
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/kube-apiserver $ETCD_SERVERS \
          $LOG_DIR \
          $SERVICE_CLUSTER_IP_RANGE \
          $INSECURE_BIND_ADDRESS \
          $INSECURE_PORT \
          $BIND_ADDRESS \
          $SECURE_PORT \
          $AUTHORIZATION_MODE \
          $AUTHORIZATION_FILE \
          $BASIC_AUTH_FILE \
          $KUBE_APISERVER_OPTS
Restart=on-failure
<2> The file /etc/sysconfig/kube-apiserver:
# configure file for kube-apiserver
# --etcd-servers
ETCD_SERVERS='--etcd-servers=http://10.11.151.97:4012,http://10.11.151.100:4012,http://10.11.151.101:4012'
# --log-dir
LOG_DIR='--log-dir=/opt/domeos/openxxs/k8s-1.1.3-flannel/logs'
# --service-cluster-ip-range
SERVICE_CLUSTER_IP_RANGE='--service-cluster-ip-range=172.16.0.0/16'
# --insecure-bind-address
INSECURE_BIND_ADDRESS='--insecure-bind-address=0.0.0.0'
# --insecure-port
INSECURE_PORT='--insecure-port=8080'
# --bind-address
BIND_ADDRESS='--bind-address=0.0.0.0'
# --secure-port
SECURE_PORT='--secure-port=6443'
# --authorization-mode
AUTHORIZATION_MODE='--authorization-mode=ABAC'
# --authorization-policy-file
AUTHORIZATION_FILE='--authorization-policy-file=/opt/domeos/openxxs/k8s-1.1.3-flannel/authorization'
# --basic-auth-file
BASIC_AUTH_FILE='--basic-auth-file=/opt/domeos/openxxs/k8s-1.1.3-flannel/authentication.csv'
# other parameters
KUBE_APISERVER_OPTS=''
If https-based authentication and authorization are not needed, BIND_ADDRESS, SECURE_PORT, AUTHORIZATION_MODE, AUTHORIZATION_FILE and BASIC_AUTH_FILE can be left unset. The official k8s documentation describes authentication and authorization in detail (authorization here, authentication here). This article authenticates via ABAC (user-configured authorization policy) and stores the password in plain text. The two files contain the following:
# Content of /opt/domeos/openxxs/k8s-1.1.3-flannel/authorization:
{"user": "admin"}
# Content of /opt/domeos/openxxs/k8s-1.1.3-flannel/authentication.csv, three columns (password, user name, user ID):
admin,admin,adminID
In fact, kube-apiserver runs fine with only ETCD_SERVERS configured and everything else left empty. Nor does ETCD_SERVERS need to list every node of the ETCD cluster, but it must contain at least one.
<3> Start kube-apiserver
systemctl daemon-reload
systemctl start kube-apiserver
# After startup, check that the service status and logs look normal
systemctl status -l kube-apiserver
kube-apiserver health can also be checked with the following command, which returns 'ok' when everything is fine:
curl -L http://10.11.151.97:8080/healthz
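If the secure port and auth files above were configured, the https endpoint can be exercised with basic auth as well (a sketch; -k skips verification of the apiserver's self-signed certificate):
curl -k -u admin:admin https://10.11.151.97:6443/version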
2) Create, configure and start the kube-controller-manager service
The three components must be started in order: start kube-controller-manager only after kube-apiserver is up and running.
<1> The file /etc/sysconfig/kube-controller:
# configure file for kube-controller-manager
# --master
KUBE_MASTER='--master=http://10.11.151.97:8080'
# --log-dir
LOG_DIR='--log-dir=/opt/domeos/openxxs/k8s-1.1.3-flannel/logs'
# --cloud-provider
CLOUD_PROVIDER='--cloud-provider='
# other parameters
KUBE_CONTROLLER_OPTS=''
<2> /lib/systemd/system/kube-controller.service
[Unit]
Description=kube-controller-manager
After=kube-apiserver.service
Wants=kube-apiserver.service
[Service]
EnvironmentFile=/etc/sysconfig/kube-controller
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/kube-controller-manager $KUBE_MASTER \
$LOG_DIR \
$CLOUD_PROVIDER \
$KUBE_CONTROLLER_OPTS
Restart=on-failure
<3> Start kube-controller-manager
systemctl daemon-reload
systemctl start kube-controller
systemctl status -l kube-controller
Now look at what errors the log reports:
I0127 10:34:11.374094   29737 plugins.go:71] No cloud provider specified.
I0127 10:34:11.374212   29737 nodecontroller.go:133] Sending events to api server.
E0127 10:34:11.374448   29737 controllermanager.go:290] Failed to start service controller: ServiceController should not be run without a cloudprovider.
I0127 10:34:11.382191   29737 controllermanager.go:332] Starting extensions/v1beta1 apis
I0127 10:34:11.382217   29737 controllermanager.go:334] Starting horizontal pod controller.
I0127 10:34:11.382284   29737 controllermanager.go:346] Starting job controller
E0127 10:34:11.402650   29737 serviceaccounts_controller.go:215] serviceaccounts "default" already exists
The first error, "ServiceController should not be run without a cloudprovider", means that --cloud-provider must be set. The second, "serviceaccounts "default" already exists", arises because the controller expects every namespace to have a service account and tries to create one named "default" when it finds none, yet that account already exists locally. The developers of this module describe both errors as "harmless" (see here, and here), and the bug has been fixed in later releases. For the first error, the start command must carry the --cloud-provider flag even if its value is empty. For the second, the only fix Google turns up is removing serviceAccount from the --admission-controllers list when starting kube-apiserver, which did not help in practice. With a concrete --cloud-provider set, neither error appears; with an empty --cloud-provider both errors are indeed harmless: they are logged, but the process starts normally and kube-controller-manager works fine.
3) Create, configure and start the kube-scheduler service
<1> /etc/sysconfig/kube-scheduler
# configure file for kube-scheduler
# --master
KUBE_MASTER='--master=http://10.11.151.97:8080'
# --log-dir
LOG_DIR='--log-dir=/opt/domeos/openxxs/k8s-1.1.3-flannel/logs'
# other parameters
KUBE_SCHEDULER_OPTS=''
<2> /lib/systemd/system/kube-scheduler.service
[Unit]
Description=kube-scheduler
After=kube-apiserver.service
Wants=kube-apiserver.service

[Service]
EnvironmentFile=/etc/sysconfig/kube-scheduler
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/kube-scheduler $KUBE_MASTER \
          $LOG_DIR \
          $KUBE_SCHEDULER_OPTS
Restart=on-failure
<3> Start kube-scheduler
systemctl daemon-reload
systemctl start kube-scheduler
systemctl status -l kube-scheduler
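With all three master components running, they can be cross-checked in one command through the componentstatuses resource (a sketch, assuming kubectl from the same k8s release sits in the working directory):
./kubectl --server=10.11.151.97:8080 get componentstatuses
# healthy output lists scheduler, controller-manager and the etcd members with STATUS Healthy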
V. Start the k8s-node Side
Download the Docker and Flannel rpm packages into the working directory; after unpacking the k8s package, copy kube-proxy and kubelet from bin/linux/amd64/ in the unpack directory into the working directory.
The start_node.sh script from the DomeOS project, which adds a node in one step (link here), has been trimmed and adapted for this setup; it handles environment checks, installing docker, installing flannel, starting kubelet, and so on. Download start_node.sh into the working directory and adjust the settings in STEP02 as needed. Using the 100 host as an example, after editing the script and confirming the parameter values, run:
sudo sh start_node.sh --api-server http://10.11.151.97:8080 --iface em1 --hostname-override tc-151-100 --pod-infra 10.11.150.76:5000/kubernetes/pause:latest --cluster-dns 172.16.40.1 --cluster-domain domeos.sohu --insecure-registry 10.11.150.76:5000 --etcd-server http://10.11.151.97:4012
--api-server is the kube-apiserver address; --iface is the network interface currently used for connectivity (on the 100 host, the interface whose IP is 10.11.151.100); --hostname-override is the alias used as the hostname; --pod-infra is the address of the /kubernetes/pause:latest image; --cluster-dns is the address of the in-cluster DNS service; --cluster-domain is the domain suffix used for DNS resolution; --insecure-registry is the private registry address; --etcd-server is the ETCD service address for the cluster.
The following walks through configuring and starting the k8s-node side without the start_node.sh script, using the 101 host as the example:
1) Install and configure docker
<1> Install Docker
yum install docker-engine-1.8.2-1.el7.centos.x86_64.rpm -y
<2> Edit the configuration file /etc/sysconfig/docker
DOCKER_OPTS="-g /opt/domeos/openxxs/k8s-1.1.3-flannel/docker"
INSECURE_REGISTRY="--insecure-registry 10.11.150.76:5000"
This sets Docker's data directory (by default under /var) and the private image registry.
<3> Edit the configuration file /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
EnvironmentFile=/etc/sysconfig/docker
ExecStart=/usr/bin/docker daemon $DOCKER_OPTS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
Note: with an older docker or a docker installed through unofficial channels (for example docker-selinux-1.8.2-10.el7.centos.x86_64 and docker-1.8.2.el7.centos.x86_64), the docker.socket file quite possibly does not exist; in that case remove the "After=network.target docker.socket" and "Requires=docker.socket" lines.
2) Install and configure flannel
<1> Install flannel
yum install -y flannel-0.5.5-1.fc24.x86_64.rpm
<2> Edit the configuration file /etc/sysconfig/flanneld
FLANNEL_ETCD="http://10.11.151.97:4012"
FLANNEL_ETCD_KEY="/flannel/network"
FLANNEL_OPTIONS="-iface=em1"
Pay special attention here: if the host's NICs have been modified and the interface used for external connectivity has an unusual name (for example, on a machine with a 10-gigabit NIC the interface may be p6p1), flannel fails to start with "Failed to get default interface: Unable to find default route". In that case add iface=<interface name> to FLANNEL_OPTIONS: iface=em1 for the 100 host, or iface=p6p1 for the 10-gigabit NIC.
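One way to see which interface actually carries the default route, and hence what to put into iface (standard iproute2 commands):
# print the interface of the default route, e.g. "em1"
ip route | awk '/^default/ {print $5}'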
<3> Edit the configuration file /lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld -etcd-endpoints=${FLANNEL_ETCD} -etcd-prefix=${FLANNEL_ETCD_KEY} $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
3) Start Flannel
systemctl daemon-reload
systemctl start flanneld
systemctl status -l flanneld
4) Start Docker
systemctl daemon-reload
systemctl start docker
systemctl status -l docker
After startup, check whether docker is now managed by flannel:
Command: ps aux | grep docker
Result:
/usr/bin/docker daemon -g /opt/domeos/openxxs/k8s-1.1.3-flannel/docker --bip=172.16.17.129/25 --ip-masq=true --mtu=1450 --insecure-registry 10.11.150.76:5000
The docker daemon now carries flanneld's settings (bip, ip-masq and mtu).
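Those options come from the environment file generated by mk-docker-opts.sh (the -d path in flanneld.service above) and can be inspected directly; the values below are illustrative and vary per node:
cat /run/flannel/docker
# DOCKER_NETWORK_OPTIONS=" --bip=172.16.17.129/25 --ip-masq=true --mtu=1450"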
5) Configure and start kube-proxy
<1> Edit the configuration file /etc/sysconfig/kube-proxy
# configure file for kube-proxy
# --master
KUBE_MASTER='--master=http://10.11.151.97:8080'
# --proxy-mode
PROXY_MODE='--proxy-mode=iptables'
# --log-dir
LOG_DIR='--log-dir=/opt/domeos/openxxs/k8s-1.1.3-flannel/logs'
# other parameters
KUBE_PROXY_OPTS=''
<2> Edit the configuration file /lib/systemd/system/kube-proxy.service
[Unit]
Description=kube-proxy

[Service]
EnvironmentFile=/etc/sysconfig/kube-proxy
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/kube-proxy $KUBE_MASTER \
          $PROXY_MODE \
          $LOG_DIR \
          $KUBE_PROXY_OPTS
Restart=on-failure
<3> Start kube-proxy
systemctl daemon-reload
systemctl start kube-proxy
systemctl status -l kube-proxy
6) Configure and start kubelet
<1> Edit the configuration file /etc/sysconfig/kubelet
# configure file for kubelet
# --api-servers
API_SERVERS='--api-servers=http://10.11.151.97:8080'
# --address
ADDRESS='--address=0.0.0.0'
# --hostname-override
HOSTNAME_OVERRIDE=''
# --allow-privileged
ALLOW_PRIVILEGED='--allow-privileged=false'
# --pod-infra-container-image
POD_INFRA='--pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest'
# --cluster-dns
CLUSTER_DNS='--cluster-dns=172.16.40.1'
# --cluster-domain
CLUSTER_DOMAIN='--cluster-domain=domeos.sohu'
# --max-pods
MAX_PODS='--max-pods=70'
# --log-dir
LOG_DIR='--log-dir=/opt/domeos/openxxs/k8s-1.1.3-flannel/logs'
# other parameters
KUBELET_OPTS=''
The CLUSTER_DNS and CLUSTER_DOMAIN settings relate to the DNS service used inside the cluster; see "Building a hostname-resolving DNS service in k8s" for details. Every pod starts a /kubernetes/pause:latest container first to perform basic initialization; its default source is gcr.io/google_containers/pause:latest, which the POD_INFRA parameter overrides. Since the GFW may make that source unreachable, the image can be downloaded once and pushed into a local docker registry for kubelet to pull from, as sketched below. MAX_PODS caps the number of pods a node may run.
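A sketch of mirroring the pause image into the private registry used above (assuming 10.11.150.76:5000 accepts pushes and gcr.io is reachable from the machine doing the pull):
docker pull gcr.io/google_containers/pause:latest
docker tag gcr.io/google_containers/pause:latest 10.11.150.76:5000/kubernetes/pause:latest
docker push 10.11.150.76:5000/kubernetes/pause:latest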
<2> Edit the configuration file /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet

[Service]
EnvironmentFile=/etc/sysconfig/kubelet
ExecStart=/opt/domeos/openxxs/k8s-1.1.3-flannel/kubelet $API_SERVERS \
          $ADDRESS \
          $HOSTNAME_OVERRIDE \
          $ALLOW_PRIVILEGED \
          $POD_INFRA \
          $CLUSTER_DNS \
          $CLUSTER_DOMAIN \
          $MAX_PODS \
          $LOG_DIR \
          $KUBELET_OPTS
Restart=on-failure
<3> Start kubelet
systemctl daemon-reload
systemctl start kubelet
systemctl status -l kubelet
VI. Testing
1) Check node status
Check the status with kubectl:
Command: ./kubectl --server=10.11.151.97:8080 get nodes
Returns:
NAME         LABELS                              STATUS    AGE
tc-151-100   kubernetes.io/hostname=tc-151-100   Ready     9m
tc-151-101   kubernetes.io/hostname=tc-151-101   Ready     17h
The Ready status shows that 100 and 101 have registered with the k8s cluster.
2) Create pods
Create a test.yaml file with the following content:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-1
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-2
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-3
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-4
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-4
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101
This creates the pods test-1 and test-2 on 100 and the pods test-3 and test-4 on 101. Adjust image and the other parameters to match the actual environment.
Create the pods with kubectl and test.yaml:
Command: ./kubectl --server=10.11.151.97:8080 create -f test.yaml
Returns:
replicationcontroller "test-1" created
replicationcontroller "test-2" created
replicationcontroller "test-3" created
replicationcontroller "test-4" created
All four rc were created successfully.
Command: ./kubectl --server=10.11.151.97:8080 get pods
Returns:
NAME           READY     STATUS    RESTARTS   AGE
test-1-vrt0s   1/1       Running   0          8m
test-2-uwtj7   1/1       Running   0          8m
test-3-59562   1/1       Running   0          8m
test-4-m2rqw   1/1       Running   0          8m
All four pods started successfully and are in a healthy state.
3) Inter-node communication
<1> Get the IP addresses of the containers behind the four pods
Command: ./kubectl --server=10.11.151.97:8080 describe pod test-1-vrt0s
Returns:
......
IP 172.16.42.4
......
The command returns the pod's details; the IP field is the pod's in-cluster IP address, which is also the container's IP address. The results for all four pods:
pod name     | container ID | host          | IP address
test-1-vrt0s | c19ff66d7cc7 | 10.11.151.100 | 172.16.42.4
test-2-uwtj7 | 3fa6b1f78996 | 10.11.151.100 | 172.16.42.5
test-3-59562 | 0cc5ffa7cce6 | 10.11.151.101 | 172.16.17.132
test-4-m2rqw | 2598a2ee012e | 10.11.151.101 | 172.16.17.133
<2> Enter each container and ping the other containers
Command: docker ps | grep -v pause
Result:
CONTAINER ID        IMAGE                                 COMMAND    CREATED             STATUS              PORTS    NAMES
3fa6b1f78996        10.11.150.76:5000/openxxs/iperf:1.2   "/block"   About an hour ago   Up About an hour             k8s_iperf.a4ede594_test-2-uwtj7_default_dd1d9201-c63a-11e5-8db4-782bcb435e46_aa0327af
c19ff66d7cc7        10.11.150.76:5000/openxxs/iperf:1.2   "/block"   About an hour ago   Up About an hour             k8s_iperf.a4ede594_test-1-vrt0s_default_dd0fdef0-c63a-11e5-8db4-782bcb435e46_89db57da
Command: docker exec -it c19ff66d7cc7 /bin/sh
Result:
sh-4.2# ping 172.16.17.132 -c 5
PING 172.16.17.132 (172.16.17.132) 56(84) bytes of data.
64 bytes from 172.16.17.132: icmp_seq=1 ttl=62 time=0.938 ms
64 bytes from 172.16.17.132: icmp_seq=2 ttl=62 time=0.329 ms
64 bytes from 172.16.17.132: icmp_seq=3 ttl=62 time=0.329 ms
64 bytes from 172.16.17.132: icmp_seq=4 ttl=62 time=0.303 ms
64 bytes from 172.16.17.132: icmp_seq=5 ttl=62 time=0.252 ms
--- 172.16.17.132 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 0.252/0.430/0.938/0.255 ms
sh-4.2# ping 172.16.17.133 -c 5
PING 172.16.17.133 (172.16.17.133) 56(84) bytes of data.
64 bytes from 172.16.17.133: icmp_seq=1 ttl=62 time=0.619 ms
64 bytes from 172.16.17.133: icmp_seq=2 ttl=62 time=0.335 ms
64 bytes from 172.16.17.133: icmp_seq=3 ttl=62 time=0.320 ms
64 bytes from 172.16.17.133: icmp_seq=4 ttl=62 time=0.328 ms
64 bytes from 172.16.17.133: icmp_seq=5 ttl=62 time=0.323 ms
--- 172.16.17.133 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.320/0.385/0.619/0.117 ms
sh-4.2# ping 172.16.42.5 -c 5
PING 172.16.42.5 (172.16.42.5) 56(84) bytes of data.
64 bytes from 172.16.42.5: icmp_seq=1 ttl=64 time=0.122 ms
64 bytes from 172.16.42.5: icmp_seq=2 ttl=64 time=0.050 ms
64 bytes from 172.16.42.5: icmp_seq=3 ttl=64 time=0.060 ms
64 bytes from 172.16.42.5: icmp_seq=4 ttl=64 time=0.051 ms
64 bytes from 172.16.42.5: icmp_seq=5 ttl=64 time=0.070 ms
--- 172.16.42.5 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.050/0.070/0.122/0.028 ms
The above shows the communication between test-1 and the other three pods; every connection works. Testing test-2, test-3 and test-4 against the other pods in the same way also succeeds in every case. The setup is complete.
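When testing is done, the four rc and their pods can be removed with the same manifest:
./kubectl --server=10.11.151.97:8080 delete -f test.yaml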