Installing and Deploying a Kubernetes (k8s) Cluster

I. Three Ways to Set Up a Kubernetes Cluster Environment

1. Minikube installation
Minikube is a tool that quickly runs a single-node Kubernetes cluster locally, aimed at users who want to try out Kubernetes or use it for day-to-day development. This approach is suitable only for learning and test deployments, not for production.

Official documentation: https://kubernetes.io/docs/setup/minikube/
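As a sketch of how quick this path is (assuming minikube and kubectl are already installed; driver flags depend on your environment):

minikube start
kubectl get nodes    #should show a single Ready node
minikube delete      #tear the cluster down again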

2. kubeadm installation
kubeadm is the official Kubernetes tool for quickly installing and initializing a cluster built around best practices. It provides the kubeadm init and kubeadm join commands for fast cluster deployment. At the time of writing, parts of kubeadm were still in beta/alpha, so it is not recommended here for production.

kubeadm aims to provide a minimal viable cluster that can pass the Kubernetes conformance tests, so it installs nothing beyond the essentials. In particular, it does not install a network solution by default; after a kubeadm install you must add a network plugin yourself. For these reasons, this guide does not treat kubeadm as production-ready.

Official documentation: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
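For reference, a typical kubeadm bootstrap looks roughly like this (the pod CIDR, token, and hash are placeholders, not values from this guide):

#on the master
kubeadm init --pod-network-cidr=10.244.0.0/16
#then install a network plugin of your choice, and on each node run the join
#command printed by kubeadm init:
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>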

3. Binary package installation (recommended for production)
Download the release binaries from the official project, deploy each component by hand, and assemble them into a Kubernetes cluster. This approach matches how a standards-compliant Kubernetes environment is installed in enterprise production and is suitable for production deployment.

Release versions: https://github.com/kubernetes/kubernetes/releases
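As a rough sketch, a binary deployment starts by downloading and unpacking a server tarball from the releases page (the version below is a placeholder):

wget https://dl.k8s.io/v1.18.0/kubernetes-server-linux-amd64.tar.gz
tar -xzf kubernetes-server-linux-amd64.tar.gz
ls kubernetes/server/bin/
#kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, ...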

II. Setting Up and Deploying the Kubernetes Cluster

1. Environment Preparation

Three virtual machines, with host resolution configured on all nodes (a sample /etc/hosts sketch follows).
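A minimal /etc/hosts sketch using the addresses that appear throughout this guide (192.168.0.212 is the master; the name kub_node2 for 192.168.0.208 is illustrative):

#append to /etc/hosts on every node
192.168.0.212 kub_master
192.168.0.184 kub_node1
192.168.0.208 kub_node2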

2. Install etcd on the Master Node

etcd is an open-source distributed key-value store project initiated by the CoreOS team; it can be used for configuration management and service discovery in distributed systems. Its goal is to build a highly available distributed key-value store. It is licensed under Apache v2 and implemented in Go.

[root@kub_master ~]# yum install etcd -y

[root@kub_master ~]# etcd --version
etcd Version: 3.3.11
Git SHA: 2cf9e51
Go Version: go1.10.3
Go OS/Arch: linux/amd64

#Write the etcd configuration file

[root@kub_master ~]# vim /etc/etcd/etcd.conf

[root@kub_master ~]# grep -v "^#" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.212:2379"

#Start the service and enable it at boot

[root@kub_master ~]# systemctl start etcd.service

[root@kub_master ~]# systemctl enable etcd.service

Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

[root@kub_master ~]# systemctl status etcd.service
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2020-09-20 23:11:03 CST; 15s ago
Main PID: 15368 (etcd)
CGroup: /system.slice/etcd.service
└─15368 /usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://0.0.0.0:2379

Sep 20 23:11:03 kub_master etcd[15368]: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
Sep 20 23:11:03 kub_master etcd[15368]: 8e9e05c52164694d became leader at term 2
Sep 20 23:11:03 kub_master etcd[15368]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
Sep 20 23:11:03 kub_master etcd[15368]: published {Name:default ClientURLs:[http://192.168.0.212:2379]} to cluster cdf818194e3a8c32
Sep 20 23:11:03 kub_master etcd[15368]: ready to serve client requests
Sep 20 23:11:03 kub_master systemd[1]: Started Etcd Server.
Sep 20 23:11:03 kub_master etcd[15368]: setting up the initial cluster version to 3.3
Sep 20 23:11:03 kub_master etcd[15368]: serving insecure client requests on [::]:2379, this is strongly discouraged!
Sep 20 23:11:03 kub_master etcd[15368]: set the initial cluster version to 3.3
Sep 20 23:11:03 kub_master etcd[15368]: enabled capabilities for version 3.3

[root@kub_master ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1248/master
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 15368/etcd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1457/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1248/master
tcp6 0 0 :::2379 :::* LISTEN 15368/etcd
tcp6 0 0 :::22 :::* LISTEN 1457/sshd
udp 0 0 0.0.0.0:68 0.0.0.0:* 697/dhclient
udp 0 0 127.0.0.1:323 0.0.0.0:* 641/chronyd
udp6 0 0 ::1:323 :::* 641/chronyd

#Verify that etcd works

[root@kub_master ~]# etcdctl set testdir/testkey 1

1

[root@kub_master ~]# etcdctl get testdir/testkey

1

#Check cluster health

[root@kub_master ~]# etcdctl -C http://192.168.0.212:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.0.212:2379
cluster is healthy

3. Install Kubernetes on the Master Node

[root@kub_master ~]# yum install kubernetes-master -y

[root@kub_master ~]# vim /etc/kubernetes/apiserver

[root@kub_master ~]# grep -v "^#" /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_API_PORT="--port=8080"


KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.0.212:2379"

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.0.0/16"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

KUBE_API_ARGS=""

[root@kub_master ~]# vim /etc/kubernetes/config

[root@kub_master ~]# grep -v "^#" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_MASTER="--master=http://192.168.0.212:8080"

#Start the services

[root@kub_master ~]# systemctl start kube-apiserver

[root@kub_master ~]# systemctl start kube-controller-manager

[root@kub_master ~]# systemctl start kube-scheduler
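The commands above start the services only for the current boot; to bring them back after a reboot you would presumably also enable them:

systemctl enable kube-apiserver kube-controller-manager kube-scheduler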

#Check that the components are healthy

[root@kub_master ~]# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}

4. Install Kubernetes on the Nodes

[root@kub_node1 ~]# yum install kubernetes-node -y

#Edit the configuration files

[root@kub_node1 ~]# vim /etc/kubernetes/config

[root@kub_node1 ~]# grep -v "^#" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_MASTER="--master=http://192.168.0.212:8080"

[root@kub_node1 ~]# vim /etc/kubernetes/kubelet

[root@kub_node1 ~]# grep -v "^#" /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"

KUBELET_PORT="--port=10250"

KUBELET_HOSTNAME="--hostname-override=192.168.0.184"

KUBELET_API_SERVER="--api-servers=http://192.168.0.212:8080"

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

KUBELET_ARGS=""

#Start the services

[root@kub_node1 ~]# systemctl start kubelet.service

[root@kub_node1 ~]# systemctl start kube-proxy.service

Note: repeat the same steps on the other nodes (a hypothetical scripted version follows).
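One hypothetical way to script that from kub_node1 (assumes passwordless root SSH; 192.168.0.208 and 192.168.0.212 are the remaining nodes, as the listing below shows):

#hypothetical helper: replay the node setup on the remaining nodes
for node in 192.168.0.208 192.168.0.212; do
  ssh root@$node "yum install -y kubernetes-node"
  scp /etc/kubernetes/config /etc/kubernetes/kubelet root@$node:/etc/kubernetes/
  #point hostname-override at each node's own IP
  ssh root@$node "sed -i 's/192.168.0.184/$node/' /etc/kubernetes/kubelet"
  ssh root@$node "systemctl start kubelet kube-proxy"
done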

#Verify from the master node

[root@kub_master ~]# kubectl get nodes
NAME STATUS AGE
192.168.0.184 Ready 1m
192.168.0.208 Ready 1m
192.168.0.212 Ready 1m

5. Configure the flannel Network on All Nodes

[root@kub_node1 ~]# yum install flannel -y

[root@kub_node1 ~]# sed -i 's#http://127.0.0.1:2379#http://192.168.0.212:2379#g' /etc/sysconfig/flanneld

#On the master node

[root@kub_master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
{ "Network": "172.16.0.0/16" }

[root@kub_master ~]# systemctl restart docker

[root@kub_master ~]# systemctl restart kube-apiserver

[root@kub_master ~]# systemctl restart kube-controller-manager

[root@kub_master ~]# systemctl restart kube-scheduler

#On the nodes

[root@kub_node1 ~]# systemctl start flanneld

[root@kub_node1 ~]# systemctl restart docker

[root@kub_node1 ~]# systemctl restart kubelet

[root@kub_node1 ~]# systemctl restart kube-proxy

#Check the IP addresses: docker0 now takes a /24 out of the flannel 172.16.0.0/16 range, so flannel0 and docker0 sit in the same network

[root@kub_master ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.16.81.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::42:faff:fef0:e83a prefixlen 64 scopeid 0x20<link>
ether 02:42:fa:f0:e8:3a txqueuelen 0 (Ethernet)
RX packets 8 bytes 544 (544.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 656 (656.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@kub_master ~]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 172.16.81.0 netmask 255.255.0.0 destination 172.16.81.0
inet6 fe80::dec7:bd5d:20e1:d912 prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 144 (144.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@kub_node1 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.16.46.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::42:9dff:fe0d:aa58 prefixlen 64 scopeid 0x20<link>
ether 02:42:9d:0d:aa:58 txqueuelen 0 (Ethernet)
RX packets 20 bytes 1368 (1.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9 bytes 698 (698.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@kub_node1 ~]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 172.16.46.0 netmask 255.255.0.0 destination 172.16.46.0
inet6 fe80::b3bd:19df:ebb6:a69 prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 5 bytes 420 (420.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 396 (396.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
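With flannel's UDP backend, each host should also have gained a route that sends the whole overlay /16 through flannel0, which is what carries the cross-host traffic tested next; a quick check (a sketch, exact output varies per host):

ip route | grep 172.16
#expected along the lines of:
#172.16.0.0/16 dev flannel0
#172.16.81.0/24 dev docker0  proto kernel  scope link  src 172.16.81.1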

#Test connectivity between containers on different hosts

###Run a Docker container on every machine

[root@kub_node1 ~]# docker pull busybox
Using default tag: latest
Trying to pull repository docker.io/library/busybox ...
latest: Pulling from docker.io/library/busybox
df8698476c65: Pull complete
Digest: sha256:d366a4665ab44f0648d7a00ae3fae139d55e32f9712c67accd604bb55df9d05a
Status: Downloaded newer image for docker.io/busybox:latest

[root@kub_node1 ~]# docker image ls busybox
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/busybox latest 6858809bf669 11 days ago 1.23 MB

#Run the containers

[root@kub_master ~]# docker run -it docker.io/busybox:latest
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:10:51:02
inet addr:172.16.81.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:acff:fe10:5102/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1472 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:656 (656.0 B) TX bytes:656 (656.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

[root@kub_node1 ~]# docker run -it docker.io/busybox:latest
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:10:2E:02
inet addr:172.16.46.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:acff:fe10:2e02/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1472 Metric:1
RX packets:6 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:516 (516.0 B) TX bytes:516 (516.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

#Ping from one container to the other

/ # ping 172.16.46.2
PING 172.16.46.2 (172.16.46.2): 56 data bytes

/ # ping 172.16.81.2
PING 172.16.81.2 (172.16.81.2): 56 data bytes

#The pings hang with no replies. Root cause: Docker 1.13.1 sets the default iptables FORWARD chain policy to DROP

[root@kub_master ~]# docker version
Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-162.git64e9980.el7.centos.x86_64
Go version: go1.10.3
Git commit: 64e9980/1.13.1
Built: Wed Jul 1 14:56:42 2020
OS/Arch: linux/amd64

Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-162.git64e9980.el7.centos.x86_64
Go version: go1.10.3
Git commit: 64e9980/1.13.1
Built: Wed Jul 1 14:56:42 2020
OS/Arch: linux/amd64
Experimental: false

[root@kub_master ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- anywhere anywhere

Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere

Chain DOCKER (1 references)
target prot opt source destination

Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere

Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-SERVICES (1 references)
target prot opt source destination

#Fix

#Temporary fix (apply on every host; does not persist across reboots)

[root@kub_master ~]# iptables -P FORWARD ACCEPT

#Test the pings again
/ # ping 172.16.46.2
PING 172.16.46.2 (172.16.46.2): 56 data bytes
64 bytes from 172.16.46.2: seq=0 ttl=60 time=0.393 ms
64 bytes from 172.16.46.2: seq=1 ttl=60 time=0.366 ms
64 bytes from 172.16.46.2: seq=2 ttl=60 time=0.346 ms
64 bytes from 172.16.46.2: seq=3 ttl=60 time=0.344 ms

/ # ping 172.16.46.2
PING 172.16.46.2 (172.16.46.2): 56 data bytes
64 bytes from 172.16.46.2: seq=0 ttl=60 time=0.807 ms
64 bytes from 172.16.46.2: seq=1 ttl=60 time=0.545 ms
64 bytes from 172.16.46.2: seq=2 ttl=60 time=0.382 ms
64 bytes from 172.16.46.2: seq=3 ttl=60 time=0.320 ms
64 bytes from 172.16.46.2: seq=4 ttl=60 time=0.317 ms

/ # ping 172.16.81.2
PING 172.16.81.2 (172.16.81.2): 56 data bytes
64 bytes from 172.16.81.2: seq=0 ttl=60 time=0.326 ms
64 bytes from 172.16.81.2: seq=1 ttl=60 time=0.367 ms
64 bytes from 172.16.81.2: seq=2 ttl=60 time=0.312 ms
64 bytes from 172.16.81.2: seq=3 ttl=60 time=0.301 ms
64 bytes from 172.16.81.2: seq=4 ttl=60 time=0.315 ms
^C
--- 172.16.81.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.301/0.324/0.367 ms

#Permanent fix: add an ExecStartPost line to the [Service] section of the docker unit

[root@kub_master ~]# vim  /usr/lib/systemd/system/docker.service

[root@kub_master ~]# grep ExecStartPost /usr/lib/systemd/system/docker.service
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT

[root@kub_master ~]# systemctl daemon-reload
[root@kub_master ~]# systemctl restart docker
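Editing the packaged unit file directly can be overwritten by package updates; a systemd drop-in achieves the same result and survives upgrades (a sketch):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/forward-accept.conf <<'EOF'
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload && systemctl restart docker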

6. Configure the Master as a Private Image Registry

#Configure a registry mirror (accelerator) and the private registry address on all nodes

[root@kub_master ~]# vim /etc/docker/daemon.json

[root@kub_master ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.0.212:5000"]
}

[root@kub_master ~]# systemctl restart docker
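To confirm Docker picked up the settings, docker info should list the private registry as insecure (a sketch; the section heading may differ slightly across Docker versions):

docker info | grep -A 2 "Insecure Registries"
#192.168.0.212:5000 should appear in the output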

#Upload the registry image tarball to the master

[root@kub_master ~]# yum install -y lrzsz

[root@kub_master ~]# rz

[root@kub_master ~]# ll
total 34936
-rw-r--r-- 1 root root 35771392 Nov 13 2017 registry.tar.gz

[root@kub_master ~]# docker load -i registry.tar.gz
ef763da74d91: Loading layer [==================================================>] 5.058 MB/5.058 MB
7683d4fcdf4e: Loading layer [==================================================>] 7.894 MB/7.894 MB
656c7684d0bd: Loading layer [==================================================>] 22.79 MB/22.79 MB
a2717186d7dd: Loading layer [==================================================>] 3.584 kB/3.584 kB
3c133a51bc00: Loading layer [==================================================>] 2.048 kB/2.048 kB
Loaded image: registry:latest

[root@kub_master ~]# docker image ls registry
REPOSITORY TAG IMAGE ID CREATED SIZE
registry latest a07e3f32a779 2 years ago 33.3 MB

[root@kub_master ~]# mkdir /opt/myregistry

[root@kub_master ~]# docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
a9823aace1d208600cf60795ba9c1a6662f231823ac06e602c7c74adaab9481f

[root@kub_master ~]# docker ps -a -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9823aace1d2 registry "/entrypoint.sh /e..." 12 seconds ago Up 11 seconds 0.0.0.0:5000->5000/tcp registry

#Test: push an image to the private registry

[root@kub_node1 ~]# docker tag docker.io/busybox:latest 192.168.0.212:5000/busybox:latest

[root@kub_node1 ~]# docker images

REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.0.212:5000/busybox latest 6858809bf669 12 days ago 1.23 MB
docker.io/busybox latest 6858809bf669 12 days ago 1.23 MB

[root@kub_node1 ~]# docker push 192.168.0.212:5000/busybox:latest
The push refers to a repository [192.168.0.212:5000/busybox]
be8b8b42328a: Pushed
latest: digest: sha256:2ca5e69e244d2da7368f7088ea3ad0653c3ce7aaccd0b8823d11b0d5de956002 size: 527

[root@kub_master ~]# ll /opt/myregistry/docker/registry/v2/repositories/
total 4
drwxr-xr-x 5 root root 4096 Sep 21 22:19 busybox
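As a final check, you can query the registry's v2 HTTP API and pull the image back from any node:

curl http://192.168.0.212:5000/v2/_catalog
#{"repositories":["busybox"]}
docker pull 192.168.0.212:5000/busybox:latest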
