Kubernetes Cloud Platform Management in Practice: Cluster Deployment (Part 1)

一、Environment planning

1、Architecture topology

2、Host planning

3、Software versions

For the latest versions, see: https://www.cnblogs.com/luoahong/p/12917582.html

[root@k8s-master ~]# cat /etc/redhat-release 
CentOS Linux release 7.4.1708 (Core) 
[root@k8s-master ~]# uname -r
3.10.0-693.el7.x86_64
 
[root@k8s-master ~]# docker version 
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-68.gitec8512b.el7.centos.x86_64
 Go version:      go1.8.3
 Git commit:      ec8512b/1.12.6
 Built:           Mon Dec 11 16:08:42 2017
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-68.gitec8512b.el7.centos.x86_64
 Go version:      go1.8.3
 Git commit:      ec8512b/1.12.6
 Built:           Mon Dec 11 16:08:42 2017
 OS/Arch:         linux/amd64
           
[root@k8s-master ~]# kubectl version 
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master ~]# kubectl cluster-info 
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@k8s-master ~]# kubectl get node
NAME        STATUS    AGE
k8s-node1   Ready     1d
k8s-node2   Ready     1d

4、Set hostnames and hosts resolution

1、Set the hostnames

hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

2、hosts resolution

[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.128.0 k8s-master
10.0.128.1 k8s-node1
10.0.128.2 k8s-node2

scp -rp /etc/hosts 10.0.128.1:/etc/hosts
scp -rp /etc/hosts 10.0.128.2:/etc/hosts
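
A quick way to confirm the resolution works on every machine (a simple sanity check, nothing Kubernetes-specific):

getent hosts k8s-master k8s-node1 k8s-node2   # should print the IP/name pairs from /etc/hosts
ping -c 1 k8s-node1                           # basic reachability by name
ping -c 1 k8s-node2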

二、Install docker-1.12.6-68 on all nodes

1、Download the RPM packages locally

http://vault.centos.org/7.4.1708/extras/x86_64/Packages/

2、Install

yum localinstall docker-common-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm -y
yum localinstall docker-client-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm -y
yum localinstall docker-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm -y

3、Enable the service on boot

[root@k8s-master ~]# systemctl enable docker.service 
[root@k8s-node1 ~]# systemctl enable docker.service 
[root@k8s-node2 ~]# systemctl enable docker.service

4、Pitfalls

1、The installation order docker-common, docker-client, docker-1.12.6-68 must not be changed, otherwise the install fails

2、I reversed docker-client and docker-common the first time and lost half a day to it (a one-command alternative is sketched below)
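
A simple way to sidestep the ordering problem (a sketch, assuming the three RPMs sit in the current directory) is to hand them all to a single yum transaction and let yum resolve the dependency order itself:

yum localinstall -y \
    docker-common-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm \
    docker-client-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm \
    docker-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm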

三、Install etcd on the master node

etcd is the key-value store that serves as the Kubernetes datastore; it natively supports clustering.

1、Install

yum install etcd.x86_64 -y

2、Configure

vim /etc/etcd/etcd.conf 
Change the following two lines:
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.128.0:2379"
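
If you prefer a non-interactive edit, the same change can be scripted with sed (a sketch, assuming the two variables are present uncommented in the stock etcd.conf shipped by the RPM):

sed -i 's|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"|'         /etc/etcd/etcd.conf
sed -i 's|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS="http://10.0.128.0:2379"|' /etc/etcd/etcd.conf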

3、Start and enable

systemctl start etcd.service
systemctl enable etcd.service

4、Health check

[root@k8s-master ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      1396/etcd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1162/sshd           
tcp6       0      0 :::2379                 :::*                    LISTEN      1396/etcd           
tcp6       0      0 :::22                   :::*                    LISTEN      1162/sshd           
udp        0      0 0.0.0.0:30430           0.0.0.0:*                           1106/dhclient       
udp        0      0 0.0.0.0:68              0.0.0.0:*                           1106/dhclient       
udp        0      0 127.0.0.1:323           0.0.0.0:*                           869/chronyd         
udp6       0      0 :::42997                :::*                                1106/dhclient       
udp6       0      0 ::1:323                 :::*                                869/chronyd         
[root@k8s-master ~]# etcdctl set tesstdir/testkey0 0
0
[root@k8s-master ~]# etcdctl get tesstdir/testkey0
0
[root@k8s-master ~]# etcdctl -C http://10.0.128.0:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.0.128.0:2379
cluster is healthy
[root@k8s-master ~]# systemctl stop etcd.service
[root@k8s-master ~]# etcdctl -C http://10.0.128.0:2379 cluster-health
cluster may be unhealthy: failed to list members
Error:  client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 10.0.128.0:2379: getsockopt: connection refused
error #0: dial tcp 10.0.128.0:2379: getsockopt: connection refused
[root@k8s-master ~]# systemctl start etcd.service
[root@k8s-master ~]# etcdctl -C http://10.0.128.0:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.0.128.0:2379
cluster is healthy

四、Install Kubernetes on the master node

1、Install

[root@k8s-master ~]# yum install kubernetes-master.x86_64 -y

2、Configure

[root@k8s-master ~]# vim /etc/kubernetes/apiserver
Change the following:
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.128.0:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
[root@k8s-master ~]# vim /etc/kubernetes/config
Change the following:
KUBE_MASTER="--master=http://10.0.128.0:8080"

3、Start and enable

systemctl start kube-apiserver.service 
systemctl start kube-controller-manager.service 
systemctl start kube-scheduler.service
systemctl enable kube-apiserver.service 
systemctl enable kube-controller-manager.service 
systemctl enable kube-scheduler.service
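
Before moving on, a quick check that the apiserver is answering on the insecure port configured above:

curl -s http://10.0.128.0:8080/healthz ; echo    # expect: ok
curl -s http://10.0.128.0:8080/version           # same version info as kubectl version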

4、Component roles

kube-apiserver:          accepts and responds to user requests
kube-controller-manager: runs the control loops that keep containers in their desired state (always alive)
kube-scheduler:          picks the node on which a container will be started
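
kubectl can report the health of these components (plus etcd) in one shot; each entry should show Healthy once the services above are running:

kubectl get componentstatuses    # or the short form: kubectl get cs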

五、Install Kubernetes on the node machines

1、Install (run on every node)

[root@k8s-node1 ~]# yum install kubernetes-node.x86_64 -y

2、Configure

[root@k8s-node1 ~]# vim /etc/kubernetes/config
Change the following:
KUBE_MASTER="--master=http://10.0.128.0:8080"

[root@k8s-node1 ~]# vim /etc/kubernetes/kubelet
Change the following (on k8s-node2, set --hostname-override=k8s-node2 accordingly):
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=k8s-node1"
KUBELET_API_SERVER="--api-servers=http://10.0.128.0:8080"

3、Start and enable

[root@k8s-node1 ~]# systemctl enable kubelet.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node1 ~]# systemctl start kubelet.service 
[root@k8s-node1 ~]# systemctl enable kube-proxy.service 
[root@k8s-node1 ~]# systemctl start kube-proxy.service

4、Component roles

kubelet:     drives Docker to manage the container lifecycle (analogous to nova-compute driving libvirt to manage VM lifecycles in OpenStack)
kube-proxy:  provides network access to containers
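
A lightweight way to confirm the node services came up, in the same spirit as the netstat check used earlier: verify locally that kubelet and kube-proxy are listening, then confirm from the master that the node has registered.

netstat -lntup | grep -E 'kubelet|kube-proxy'    # run on the node
kubectl get nodes                                # run on the master; the node should show Ready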

六、Configure the flannel network on all nodes

1、Master node

1、Install

[root@k8s-master ~]# yum install flannel -y

2、Configure

[root@k8s-master ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.128.0:2379#g' /etc/sysconfig/flanneld 
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network":"172.16.0.0/16" }'
{ "Network":"172.16.0.0/16" }

3、Start and enable

[root@k8s-master ~]# systemctl enable flanneld.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-master ~]# systemctl start flanneld.service 

4、Verify the installation

[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service 
[root@k8s-master ~]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.16.67.0  netmask 255.255.0.0  destination 172.16.67.0
        inet6 fe80::b9bb:f96a:188d:426e  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
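
Besides the flannel0 interface, you can confirm the subnet lease flannel took from etcd (a quick check, assuming the default /atomic.io/network prefix configured above):

cat /run/flannel/subnet.env              # FLANNEL_SUBNET should match the flannel0 address
etcdctl ls /atomic.io/network/subnets    # one entry per host running flanneld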

2、Node machines

1、Install

[root@k8s-node1 ~]# yum install flannel -y
[root@k8s-node2 ~]# yum install flannel -y

2、Configure

[root@k8s-node1 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.128.0:2379#g' /etc/sysconfig/flanneld
[root@k8s-node2 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.128.0:2379#g' /etc/sysconfig/flanneld

3、Start and enable

[root@k8s-node1 ~]# systemctl enable flanneld.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-node1 ~]# systemctl start flanneld.service 
[root@k8s-node2 ~]# systemctl enable flanneld.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-node2 ~]# systemctl start flanneld.service 

4、Verify the installation

[root@k8s-node1 ~]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.16.10.0  netmask 255.255.0.0  destination 172.16.10.0
        inet6 fe80::6fc:c859:c331:833d  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@k8s-node2 ~]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.16.48.0  netmask 255.255.0.0  destination 172.16.48.0
        inet6 fe80::2f4c:6385:45ca:34b8  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

3、Cross-host container connectivity test

1、Start a container on every node and record its IP address

[root@k8s-master ~]# kubectl get nodes
NAME        STATUS     AGE
k8s-node1   Ready      10h
k8s-node2   NotReady   9h

[root@k8s-master ~]# docker run -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ... 
latest: Pulling from docker.io/library/busybox
57c14dd66db0: Pull complete 
Digest: sha256:7964ad52e396a6e045c39b5a44438424ac52e12e4d5a25d94895f2058cb863a0
/ # ifconfig|grep eth0
eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:30:02  
          inet addr:172.16.48.2  Bcast:0.0.0.0  Mask:255.255.255.0

[root@k8s-node1 ~]# docker run -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ... 
latest: Pulling from docker.io/library/busybox
57c14dd66db0: Pull complete 
Digest: sha256:7964ad52e396a6e045c39b5a44438424ac52e12e4d5a25d94895f2058cb863a0
/ # ifconfig|grep eth0
eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:0A:02  
          inet addr:172.16.10.2  Bcast:0.0.0.0  Mask:255.255.255.0


[root@k8s-node2 ~]# docker run -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ... 
latest: Pulling from docker.io/library/busybox
57c14dd66db0: Pull complete 
Digest: sha256:7964ad52e396a6e045c39b5a44438424ac52e12e4d5a25d94895f2058cb863a0
/ # ifconfig|grep eth0
eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:30:02  
          inet addr:172.16.48.2  Bcast:0.0.0.0  Mask:255.255.255.0

2、Connectivity test

[root@k8s-node2 ~]# docker run -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ... 
latest: Pulling from docker.io/library/busybox
57c14dd66db0: Pull complete 
Digest: sha256:7964ad52e396a6e045c39b5a44438424ac52e12e4d5a25d94895f2058cb863a0
/ # ifconfig|grep eth0
eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:30:02  
          inet addr:172.16.48.2  Bcast:0.0.0.0  Mask:255.255.255.0
		  
/ # ping 172.16.10.2
PING 172.16.10.2 (172.16.10.2): 56 data bytes
64 bytes from 172.16.10.2: seq=0 ttl=60 time=5.212 ms
64 bytes from 172.16.10.2: seq=1 ttl=60 time=1.076 ms
^C
--- 172.16.10.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 1.076/2.118/5.212 ms

/ # ping 172.16.48.2
PING 172.16.48.2 (172.16.48.2): 56 data bytes
64 bytes from 172.16.48.2: seq=0 ttl=60 time=5.717 ms
64 bytes from 172.16.48.2: seq=1 ttl=60 time=1.108 ms
^C
--- 172.16.48.2 ping statistics ---
7 packets transmitted, 7 packets received, 0% packet loss
round-trip min/avg/max = 1.108/1.904/5.717 ms

4、Pitfalls

If you run Docker 1.13 or newer, cross-host container traffic may not get through, because newer Docker releases set the default policy of the iptables FORWARD chain to DROP. The workaround:

iptables -P FORWARD ACCEPT
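
That rule does not survive a reboot, and Docker re-applies the DROP policy each time the daemon starts. One way to make it persistent (a sketch, assuming a systemd drop-in is acceptable; the file name is arbitrary):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/10-forward-accept.conf <<'EOF'
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload
systemctl restart docker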

 

七、Configure the master as an image registry

1、Master node

1、Configure

vim /etc/sysconfig/docker

Change the following:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.128.0:5000'
systemctl restart docker

2、Start the private registry

[root@k8s-master ~]# docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
Unable to find image 'registry:latest' locally
Trying to pull repository docker.io/library/registry ... 
latest: Pulling from docker.io/library/registry
cd784148e348: Pull complete 
0ecb9b11388e: Pull complete 
918b3ddb9613: Pull complete 
5aa847785533: Pull complete 
adee6f546269: Pull complete 
Digest: sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57
ba8d9b958c7c0867d5443ee23825f842a580331c87f6555678709dfc8899ff17
[root@k8s-master ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS                    NAMES
ba8d9b958c7c        registry            "/entrypoint.sh /etc/"   11 seconds ago      Up 9 seconds                0.0.0.0:5000->5000/tcp   registry
15340ee09614        busybox             "/bin/sh"                3 hours ago         Exited (1) 19 minutes ago                            sleepy_mccarthy

3、Push an image to test

[root@k8s-master ~]# docker tag docker.io/busybox:latest 10.0.128.0:5000/busybox:latest
[root@k8s-master ~]# docker push 10.0.128.0:5000/busybox:latest
The push refers to a repository [10.0.128.0:5000/busybox]
683f499823be: Pushed 
latest: digest: sha256:bbb143159af9eabdf45511fd5aab4fd2475d4c0e7fd4a5e154b98e838488e510 size: 527
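
You can double-check that the image really landed in the registry through the registry's v2 HTTP API:

curl -s http://10.0.128.0:5000/v2/_catalog            # repository list should include "busybox"
curl -s http://10.0.128.0:5000/v2/busybox/tags/list   # tags should include "latest"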

2、Node machines

1、Configure

vim /etc/sysconfig/docker

Change the following:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.128.0:5000'
systemctl restart docker

2、Test

[root@k8s-node1 ~]# docker pull 10.0.128.0:5000/busybox:latest
Trying to pull repository 10.0.128.0:5000/busybox ... 
latest: Pulling from 10.0.128.0:5000/busybox
Digest: sha256:bbb143159af9eabdf45511fd5aab4fd2475d4c0e7fd4a5e154b98e838488e510