Installing and Configuring Flannel Networking for Kubernetes

1. Download the installation package from:

https://github.com/coreos/flannel/releases

2. Before deploying the Flannel network, install Docker first (see "Installing Docker with Yum").

You also need to write a network configuration into etcd: from this network, Flannel assigns each Docker node its own distinct small subnet.

[root@dn01 ~]# /opt/etcd/bin/etcdctl  --ca-file=/root/k8s/etcd-cert/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem  --endpoints="https://10.10.100.30:2379,https://10.10.100.31:2379,https://10.10.100.32:2379" set /coreos.com/network/config  '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

To read back the subnet configuration, change "set" to "get" in the command above; the value is stored in the etcd database.

[root@dn01 ~]# /opt/etcd/bin/etcdctl  --ca-file=/root/k8s/etcd-cert/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem  --endpoints="https://10.10.100.30:2379,https://10.10.100.31:2379,https://10.10.100.32:2379" get /coreos.com/network/config 
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
[root@dn01 ~]# 
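Conceptually, each node's flanneld leases one distinct /24 out of the 172.17.0.0/16 network registered above. A minimal Python sketch of that carving (illustrative only; the real allocation is coordinated through etcd):

```python
import ipaddress

# The Network value written to /coreos.com/network/config in etcd.
pod_network = ipaddress.ip_network("172.17.0.0/16")

# Flannel hands each node a distinct /24 lease from this range.
subnets = list(pod_network.subnets(new_prefix=24))

print(len(subnets))   # 256 possible node subnets
print(subnets[85])    # 172.17.85.0/24 (the lease dn02 happens to receive below)
print(subnets[41])    # 172.17.41.0/24 (the lease dn03 happens to receive below)
```

With 256 available /24 leases, the cluster can hold up to 256 nodes under this /16 before the network needs to be widened.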

3. Upload the Flannel package (it only needs to be deployed on the worker nodes) and extract it:

[root@dn02 ~]# tar -zxf flannel-v0.10.0-linux-amd64.tar.gz
[root@dn02 ~]# ls
anaconda-ks.cfg flanneld flannel-v0.10.0-linux-amd64.tar.gz mk-docker-opts.sh README.md

4. Create a deployment directory for Flannel. As a Kubernetes component, it goes in the k8s deployment directory:

[root@dn02 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p

Move flanneld and mk-docker-opts.sh from the extracted archive into /opt/kubernetes/bin/:

[root@dn02 ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

[root@dn02 ~]# ls /opt/kubernetes/bin/
flanneld mk-docker-opts.sh

5. Deployment

Before deploying, the certificates must also be present on every node. Since every node in this example already has a copy of the etcd certificates, simply point Flannel at the etcd certificate files.

Create the configuration file (it does not exist by default and must be created manually):

[root@dn02 ~]# vi /opt/kubernetes/cfg/flanneld 


FLANNEL_OPTIONS="--etcd-endpoints=https://10.10.100.30:2379,https://10.10.100.31:2379,https://10.10.100.32:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

Create a flanneld service unit for systemd; in this example it is placed under /usr/lib/systemd/system:

[root@dn02 ~]# cat /usr/lib/systemd/system/flanneld.service 
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

[root@dn02 ~]#

Reload the systemd configuration and start flanneld:

[root@dn02 ~]# systemctl daemon-reload
[root@dn02 ~]# systemctl enable flanneld
[root@dn02 ~]# systemctl start flanneld

Modify the Docker unit file to integrate Docker's network with Flannel. The ## annotations below mark the two changed lines; do not leave them in the actual file, since systemd does not support trailing comments:

[root@dn02 ~]# vi /usr/lib/systemd/system/docker.service 

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env  ## Add this line so Docker picks up Flannel's network environment
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS ## Modify this line to pass in the DOCKER_NETWORK_OPTIONS value from that file
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target



Flannel's network settings are written to the file referenced by EnvironmentFile=/run/flannel/subnet.env:
[root@dn02 ~]# cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=172.17.85.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.85.1/24 --ip-masq=false --mtu=1450"   ### the value referenced by ExecStart in the Docker unit file
[root@dn02 ~]#
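The mk-docker-opts.sh script invoked by ExecStartPost is what produces the DOCKER_* lines above: it translates flanneld's lease variables (FLANNEL_SUBNET, FLANNEL_IPMASQ, FLANNEL_MTU) into Docker daemon flags. A rough Python equivalent of that translation (a hypothetical helper for illustration, assuming the standard FLANNEL_* variable names):

```python
def docker_opts_from_flannel(env):
    """Sketch of mk-docker-opts.sh's core logic: turn flanneld's
    FLANNEL_* lease variables into a Docker daemon option string."""
    bip = env["FLANNEL_SUBNET"]   # e.g. "172.17.85.1/24": docker0's address
    mtu = env["FLANNEL_MTU"]      # e.g. "1450": leaves room for VXLAN headers
    # When flanneld itself masquerades (started with --ip-masq), Docker must not.
    ipmasq = "false" if env["FLANNEL_IPMASQ"] == "true" else "true"
    return f"--bip={bip} --ip-masq={ipmasq} --mtu={mtu}"

# The lease flanneld obtained on dn02:
lease = {"FLANNEL_SUBNET": "172.17.85.1/24",
         "FLANNEL_IPMASQ": "true",
         "FLANNEL_MTU": "1450"}
print(docker_opts_from_flannel(lease))
# --bip=172.17.85.1/24 --ip-masq=false --mtu=1450
```

This matches the DOCKER_NETWORK_OPTIONS value shown above, which Docker's ExecStart then consumes.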

After these changes, restart Docker and inspect the host's network configuration with ifconfig:

[root@dn02 ~]# systemctl daemon-reload
[root@dn02 ~]# systemctl restart docker

[root@dn02 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.85.1 netmask 255.255.255.0 broadcast 172.17.85.255
ether 02:42:22:b1:77:2e txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.10.100.31 netmask 255.255.255.0 broadcast 10.10.100.255
inet6 fe80::389d:e340:ea17:3a30 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:58:c5:7c txqueuelen 1000 (Ethernet)
RX packets 87291 bytes 10835949 (10.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 86850 bytes 10793745 (10.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.85.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::6cdb:36ff:fef4:70f3 prefixlen 64 scopeid 0x20<link>
ether 6e:db:36:f4:70:f3 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 320 bytes 18686 (18.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 320 bytes 18686 (18.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
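Note the MTU of 1450 on flannel.1 versus 1500 on ens33: VXLAN encapsulation adds 50 bytes of headers inside the physical interface's 1500-byte MTU (the outer Ethernet header is not counted against it), so Flannel lowers the overlay MTU to avoid fragmentation. A quick check of that arithmetic:

```python
# Per-packet VXLAN encapsulation overhead, in bytes (IPv4 outer header).
outer_ip, udp, vxlan, inner_ethernet = 20, 8, 8, 14
overhead = outer_ip + udp + vxlan + inner_ethernet  # 50 bytes total

physical_mtu = 1500                 # ens33
overlay_mtu = physical_mtu - overhead
print(overlay_mtu)                  # 1450, matching flannel.1 and docker0's --mtu
```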

The Flannel network on dn02 is now configured. Next, configure dn03 by copying the files over:

[root@dn02 ~]# scp -r /opt/kubernetes/ root@10.10.100.32:/opt/
The authenticity of host '10.10.100.32 (10.10.100.32)' can't be established.
ECDSA key fingerprint is SHA256:pyiZjF3b1phvgSDt3+LU2LbME/tEfDsNOrZJCCZiicg.
ECDSA key fingerprint is MD5:35:c1:58:24:d0:7f:a9:6c:d9:99:68:a2:98:b8:9a:8d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.100.32' (ECDSA) to the list of known hosts.
root@10.10.100.32's password: 
flanneld                                                                            100%  232    72.8KB/s   00:00    
mk-docker-opts.sh                                                                   100% 2139   866.3KB/s   00:00    
flanneld                                                                            100%   35MB  59.9MB/s   00:00    
[root@dn02 ~]# scp -r /usr/lib/systemd/system/{flanneld,docker}.service root@10.10.100.32:/usr/lib/systemd/system
root@10.10.100.32's password: 
flanneld.service                                                                    100%  417   217.6KB/s   00:00    
docker.service                                                                      100% 1693   603.6KB/s   00:00    
[root@dn02 ~]# 


Note: two sets of files must be copied: the Flannel files under the installation directory, and the systemd service unit files.

Because the configuration files contain no hard-coded IP addresses or hostnames, flanneld can be started on the new node immediately after copying:

[root@dn03 kubernetes]# systemctl daemon-reload
[root@dn03 kubernetes]# systemctl start flanneld
[root@dn03 kubernetes]# systemctl restart docker 

Note: Flannel's service name is flanneld.


Check the running process:
[root@dn03 kubernetes]# ps -ef | grep flanneld
root      20448      1  0 21:45 ?        00:00:00 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://10.10.100.30:2379,https://10.10.100.31:2379,https://10.10.100.32:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      20862  16786  0 21:47 pts/0    00:00:00 grep --color=auto flanneld

 

Test network connectivity between the two hosts

IP addresses on the first node (dn02):

[root@dn02 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.85.1  netmask 255.255.255.0  broadcast 172.17.85.255
        ether 02:42:22:b1:77:2e  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.100.31  netmask 255.255.255.0  broadcast 10.10.100.255
        inet6 fe80::389d:e340:ea17:3a30  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:58:c5:7c  txqueuelen 1000  (Ethernet)
        RX packets 146602  bytes 18254938 (17.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 170054  bytes 85670174 (81.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.85.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::6cdb:36ff:fef4:70f3  prefixlen 64  scopeid 0x20<link>
        ether 6e:db:36:f4:70:f3  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 496  bytes 27966 (27.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 496  bytes 27966 (27.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

IP addresses on the second node (dn03):

[root@dn03 kubernetes]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.41.1  netmask 255.255.255.0  broadcast 172.17.41.255
        ether 02:42:f7:5b:56:4a  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.100.32  netmask 255.255.255.0  broadcast 10.10.100.255
        inet6 fe80::1534:7f05:3d6a:9287  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::25e8:8754:cb81:68c8  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::389d:e340:ea17:3a30  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:c8:13:a5  txqueuelen 1000  (Ethernet)
        RX packets 170723  bytes 56263010 (53.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 146659  bytes 18277647 (17.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.41.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::28e1:c9ff:febf:9948  prefixlen 64  scopeid 0x20<link>
        ether 2a:e1:c9:bf:99:48  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 715  bytes 49915 (48.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 715  bytes 49915 (48.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Ping dn03's docker0 address (172.17.41.1) from dn02, and dn02's docker0 address (172.17.85.1) from dn03; both directions are reachable:

[root@dn02 ~]# ping  172.17.41.1
PING 172.17.41.1 (172.17.41.1) 56(84) bytes of data.
64 bytes from 172.17.41.1: icmp_seq=1 ttl=64 time=0.376 ms
64 bytes from 172.17.41.1: icmp_seq=2 ttl=64 time=1.40 ms
64 bytes from 172.17.41.1: icmp_seq=3 ttl=64 time=1.03 ms
^C
--- 172.17.41.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.376/0.940/1.407/0.426 ms


[root@dn03 kubernetes]# ping 172.17.85.1
PING 172.17.85.1 (172.17.85.1) 56(84) bytes of data.
64 bytes from 172.17.85.1: icmp_seq=1 ttl=64 time=0.349 ms
64 bytes from 172.17.85.1: icmp_seq=2 ttl=64 time=0.928 ms
64 bytes from 172.17.85.1: icmp_seq=3 ttl=64 time=1.39 ms
^C
--- 172.17.85.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.349/0.891/1.397/0.429 ms
[root@dn03 kubernetes]# 

Test network connectivity between containers on different hosts

Start a container on dn02; its IP is 172.17.85.2/24:

[root@dn02 ~]# docker run -it busybox sh
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
7c9d20b9b6cd: Pull complete 
Digest: sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e
Status: Downloaded newer image for busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether 02:42:ac:11:55:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.85.2/24 brd 172.17.85.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

This container's IP address is 172.17.85.2. From inside it, ping the container on the other node:
/ # ping  172.17.41.2
PING 172.17.41.2 (172.17.41.2): 56 data bytes
64 bytes from 172.17.41.2: seq=0 ttl=62 time=0.494 ms
64 bytes from 172.17.41.2: seq=1 ttl=62 time=1.284 ms
64 bytes from 172.17.41.2: seq=2 ttl=62 time=1.247 ms
^C
--- 172.17.41.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.494/1.008/1.284 ms

Start a container on dn03; its IP is 172.17.41.2/24:

[root@dn03 kubernetes]# docker run -it busybox sh
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
7c9d20b9b6cd: Pull complete 
Digest: sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e
Status: Downloaded newer image for busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether 02:42:ac:11:29:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.41.2/24 brd 172.17.41.255 scope global eth0
       valid_lft forever preferred_lft forever

This container's IP address is 172.17.41.2. Ping the container on dn02:

/ # 
/ # 
/ # ping 172.17.85.2
PING 172.17.85.2 (172.17.85.2): 56 data bytes
64 bytes from 172.17.85.2: seq=0 ttl=62 time=1.323 ms
64 bytes from 172.17.85.2: seq=1 ttl=62 time=1.359 ms
64 bytes from 172.17.85.2: seq=2 ttl=62 time=1.237 ms
^C
--- 172.17.85.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.237/1.306/1.359 ms
/ # 

This verifies that the two containers on different hosts can ping each other (the ttl of 62, down from the default 64, shows the packets crossed two routed hops: the source host and the destination host).

Likewise, pinging a container's IP on the other host directly from a host also succeeds. The overlay network is now fully reachable.

 

posted @ 2019-09-15 14:24 彦祚