kubernetes (2): Installing Kubernetes with yum
https://www.cnblogs.com/luoahong/p/10297973.html
https://www.soulchild.cn/681.html
1 Preparing the Kubernetes installation environment
1.1 Environment overview:
CentOS 7.4.1708, with SELinux and iptables disabled. The environment matters a lot!
| Host | IP address | CPU cores | Memory | Swap | hosts entry |
| ---- | ---------- | --------- | ------ | ---- | ----------- |
| k8s-master | 192.168.0.136 | 2+ | 1G+ | disabled | required |
| k8s-node-1 | 192.168.0.137 | 1+ | 1G+ | disabled | required |
| k8s-node-2 | 192.168.0.138 | 1+ | 1G+ | disabled | required |
1.2 Set the hostnames and hosts entries
# run each command on the corresponding host
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node-1
hostnamectl set-hostname k8s-node-2
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.136 k8s-master
192.168.0.137 k8s-node-1
192.168.0.138 k8s-node-2

# sync the hosts file to the nodes
scp -rp /etc/hosts 192.168.0.137:/etc/hosts
scp -rp /etc/hosts 192.168.0.138:/etc/hosts
1.3 Disable the firewall and SELinux
systemctl stop firewalld
setenforce 0
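The two commands above only last until the next reboot. A minimal sketch for making both changes persistent, assuming the default /etc/selinux/config location:

# keep firewalld off and SELinux disabled across reboots
systemctl disable firewalld
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config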
1.4 Why install with yum
yum installation is the simplest.
Binary installation is tedious.
Compiling from source is the hardest.
kubeadm is the official tool, but (given the network restrictions) only kubelet is a binary; the other Kubernetes components run as containers, so all their images have to be pulled, which is exhausting. kubeadm installation guide: https://www.qstack.com.cn/archives/425.html
minikube is a single-node setup and not suitable for production.
1.5 Configure the Aliyun yum repository
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
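After swapping in the Aliyun repo file, it is worth rebuilding the yum metadata cache so the new mirror is actually used; a quick sketch:

# refresh the yum metadata cache
yum clean all
yum makecache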
1.6 Docker version
Option 1
# download the rpm packages locally from http://vault.centos.org/7.4.1708/extras/x86_64/Packages/
# version 1.12 is recommended; newer versions cause a lot of trouble
yum install docker-common-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm -y
yum install docker-client-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm -y
yum install docker-1.12.6-68.gitec8512b.el7.centos.x86_64.rpm -y
systemctl enable docker.service
# the order docker-common, docker-client, docker-1.12.6-68 must not be changed, or the installation fails
Option 2
If Docker is not installed, the node packages will pull in docker-1.13.1-102 for you, which also works without errors.
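Whichever option you choose, a quick check confirms the daemon starts and which Docker version you actually ended up with:

systemctl start docker.service
docker version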
2 Install and configure k8s-master
2.1 Install etcd on k8s-master:
yum install -y etcd
2.2 Configure etcd
vim /etc/etcd/etcd.conf
# modify the following three settings
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="k8s-master"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.136:2379"

# start the service
systemctl enable etcd
systemctl start etcd

# test: setting and getting a key works
[root@k8s-master ~]# etcdctl set test/k v
v
[root@k8s-master ~]# etcdctl get test/k
v

# check etcd health
[root@k8s-master ~]# etcdctl -C http://192.168.0.136:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.0.136:2379
cluster is healthy
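You can also confirm that the advertised client URL really is the 192.168.0.136 address the apiserver will use; a small optional check:

# list cluster members and the client URLs they advertise
etcdctl -C http://192.168.0.136:2379 member list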
2.3 Install kubernetes-master
yum install -y kubernetes-master
2.4 Configure the apiserver
#vim /etc/kubernetes/apiserver
# modify the following five settings
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.0.136:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# remove ServiceAccount from the admission control list, otherwise it causes a lot of trouble
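A quick way to double-check that ServiceAccount really was dropped from the admission control list before starting the services:

grep admission-control /etc/kubernetes/apiserver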
2.5 Point controller-manager and scheduler at the apiserver
#vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.0.136:8080"
2.6 Start the services
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
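The same enable/restart sequence can be written as a small loop, which is handy when re-running it after later config changes; a sketch:

for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl enable $svc
    systemctl restart $svc
done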
2.7 Check the status of the components
[root@k8s-master ~]# netstat -lntp | grep -E "8080|2379|1025*|6443"
tcp6       0      0 :::10251      :::*      LISTEN      9345/kube-scheduler
tcp6       0      0 :::6443       :::*      LISTEN      9324/kube-apiserver
tcp6       0      0 :::2379       :::*      LISTEN      9184/etcd
tcp6       0      0 :::10252      :::*      LISTEN      9335/kube-controlle
tcp6       0      0 :::8080       :::*      LISTEN      9324/kube-apiserver

[root@k8s-master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok

[root@k8s-master ~]# kubectl get nodes
No resources found.
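Since the insecure port 8080 is open, the apiserver can also be probed over plain HTTP; a quick optional sanity check against the master address used above:

curl -s http://192.168.0.136:8080/version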
3 Configure the node (node-2 is configured the same way)
3.1 Install kubernetes-node
yum install -y kubernetes-node.x86_64
3.2 Modify the kubelet configuration
Note the hostname: each node needs its own name.
#vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
KUBELET_API_SERVER="--api-servers=http://192.168.0.136:8080"
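If you copy this file from node-1 to node-2, only the hostname-override line has to change; a rough sketch for node-2 (verify the file afterwards):

# on k8s-node-2, after copying /etc/kubernetes/kubelet from node-1
sed -i 's/k8s-node-1/k8s-node-2/' /etc/kubernetes/kubelet
grep hostname-override /etc/kubernetes/kubelet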
3.3 Point kube-proxy at the apiserver
#vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.0.136:8080"
3.4 Start the services
systemctl enable kubelet
systemctl enable kube-proxy
systemctl restart kubelet.service
systemctl restart kube-proxy.service
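Before looking for the node on the master, a quick check that both services actually came up:

systemctl is-active kubelet kube-proxy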
3.5 Check the nodes from the master
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     AGE
k8s-node-1   NotReady   15m
k8s-node-2   Ready      15m
At this point Kubernetes itself is installed. Next we configure communication between containers on different hosts, using flannel.
4 Configure the flannel network on all nodes
4.1 Install flannel on all nodes
yum install -y flannel
systemctl enable flanneld
4.2 Modify the flannel configuration
#vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.0.136:2379"

# or make the change with sed
sed -i s#127.0.0.1:2379#192.168.0.136:2379#g /etc/sysconfig/flanneld
4.3 Configure the IP address range in etcd (as a key/value entry)
etcdctl mk /atomic.io/network/config '{"Network":"172.16.0.0/16"}'

# if you set it incorrectly, delete the key and set it again
etcdctl get /atomic.io/network/config
etcdctl rm /atomic.io/network/config
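The /atomic.io/network key path has to match FLANNEL_ETCD_PREFIX in /etc/sysconfig/flanneld (the package default is /atomic.io/network); a quick check:

grep FLANNEL_ETCD_PREFIX /etc/sysconfig/flanneld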
4.4 Restart the services
The master needs Docker installed manually, and its Docker must also be restarted after flannel is configured.
On the master:
systemctl restart flanneld
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
systemctl restart docker
On the nodes:
systemctl restart flanneld
systemctl restart docker
systemctl restart kubelet
systemctl restart kube-proxy
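The node restarts above can also be run as a one-liner that keeps the flanneld-before-docker order, so Docker picks up the flannel subnet; a small sketch:

for svc in flanneld docker kubelet kube-proxy; do systemctl restart $svc; done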
4.5 Verify the installation
[root@k8s-master ~]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 192.168.74.0  netmask 255.255.0.0  destination 192.168.74.0
        inet6 fe80::6a72:8da4:98f2:c5e7  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@k8s-node-1 ~]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 192.168.75.0  netmask 255.255.0.0  destination 192.168.75.0
        inet6 fe80::cd9d:bdc7:fe43:a9e7  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@k8s-node-2 ~]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 192.168.15.0  netmask 255.255.0.0  destination 192.168.15.0
        inet6 fe80::fbbc:e2d3:b182:8cc3  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Docker 1.13 has a known issue that requires an iptables change, otherwise containers on different hosts cannot reach each other.
#vim /usr/lib/systemd/system/docker.service
# add the following line to the [Service] section
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT

systemctl daemon-reload
systemctl restart docker
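To confirm the FORWARD policy really changed after the restart:

iptables -nL FORWARD | head -1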
4.6 Test container connectivity across hosts
Pull a busybox image from Docker Hub.
Start a container on every node and note its IP address.
[root@k8s-node-1 ~]# docker run -it busybox /bin/sh
/ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:49:02
          inet addr:172.16.73.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:acff:fe10:4902/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:516 (516.0 B)  TX bytes:516 (516.0 B)

[root@k8s-node-2 ~]# docker run -it busybox /bin/sh
/ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:0E:02
          inet addr:172.16.14.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:acff:fe10:e02/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:656 (656.0 B)  TX bytes:656 (656.0 B)
Connectivity test
[root@k8s-master ~]# docker run -it busybox /bin/sh
/ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:3D:02
          inet addr:172.16.61.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:acff:fe10:3d02/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:586 (586.0 B)  TX bytes:586 (586.0 B)
/ # ping 172.16.73.2
PING 172.16.73.2 (172.16.73.2): 56 data bytes
64 bytes from 172.16.73.2: seq=0 ttl=60 time=49.116 ms
64 bytes from 172.16.73.2: seq=1 ttl=60 time=1.493 ms
64 bytes from 172.16.73.2: seq=2 ttl=60 time=1.265 ms
^C
--- 172.16.73.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.265/17.291/49.116 ms
/ # ping 172.16.14.2
PING 172.16.14.2 (172.16.14.2): 56 data bytes
64 bytes from 172.16.14.2: seq=0 ttl=60 time=4.578 ms
64 bytes from 172.16.14.2: seq=1 ttl=60 time=1.222 ms
64 bytes from 172.16.14.2: seq=2 ttl=60 time=1.260 ms
^C
--- 172.16.14.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.222/2.353/4.578 ms
If containers still cannot reach each other under Docker 1.13, set the FORWARD policy manually:
iptables -P FORWARD ACCEPT
5 Configure the master as an image registry
5.1 Modify the Docker configuration on the master
Before Docker 1.13
#vim /etc/sysconfig/docker
# modify as follows:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=192.168.0.136:5000'

systemctl restart docker
Docker 1.13 and later
#vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.0.136:5000"]
}

systemctl restart docker
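To confirm Docker actually picked up the insecure-registry setting after the restart (the output format differs slightly between Docker versions):

docker info 2>/dev/null | grep -A1 'Insecure Registries'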
5.2 Start the registry on the master
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
5.3 Push an image from the master as a test
[root@k8s-master ~]# curl -s http://192.168.0.136:5000/v2/_catalog | grep repositories
{"repositories":["centos-7-ssh-nginx","hello-world","httpd"]}
[root@k8s-master ~]# docker tag docker.io/busybox:latest 192.168.0.136:5000/busybox:latest
[root@k8s-master ~]# docker push 192.168.0.136:5000/busybox:latest
The push refers to a repository [192.168.0.136:5000/busybox]
0d315111b484: Pushed
latest: digest: sha256:895ab622e92e18d6b461d671081757af7dbaa3b00e3e28e12505af7817f73649 size: 527
[root@k8s-master ~]# curl -s http://192.168.0.136:5000/v2/_catalog | grep repositories
{"repositories":["busybox","centos-7-ssh-nginx","hello-world","httpd"]}
5.4 Configure the nodes
Before Docker 1.13
vim /etc/sysconfig/docker
# modify as follows:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=192.168.0.136:5000'

systemctl restart docker
Docker 1.13 and later
#vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.0.136:5000"]
}

systemctl restart docker
5.5 Pull the image from a node as a test
[root@k8s-node-1 ~]# docker images | grep busy
docker.io/busybox   latest   db8ee88ad75f   4 weeks ago   1.22 MB
[root@k8s-node-1 ~]# docker pull 192.168.0.136:5000/busybox:latest
Trying to pull repository 192.168.0.136:5000/busybox ...
latest: Pulling from 192.168.0.136:5000/busybox
Digest: sha256:895ab622e92e18d6b461d671081757af7dbaa3b00e3e28e12505af7817f73649
Status: Downloaded newer image for 192.168.0.136:5000/busybox:latest
[root@k8s-node-1 ~]# docker images | grep busy
192.168.0.136:5000/busybox   latest   db8ee88ad75f   4 weeks ago   1.22 MB
docker.io/busybox            latest   db8ee88ad75f   4 weeks ago   1.22 MB