K8s cluster installation - master and node
Architecture: 1 master, 3 nodes
Disable the firewall, SELinux, NetworkManager, and postfix on every machine.
Install the dependency packages: yum install -y yum-utils device-mapper-persistent-data lvm2
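A minimal sketch of those prerequisite steps (assuming the stock CentOS 7 service names; run on every machine):
systemctl stop firewalld && systemctl disable firewalld
setenforce 0                                                          # disable SELinux immediately
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # and across reboots
systemctl stop NetworkManager && systemctl disable NetworkManager
systemctl stop postfix && systemctl disable postfix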
I. etcd installation
etcd is installed on the master node. The stock CentOS 7 repos do not provide an etcd package, so another repo mirror is needed.
In the /etc/yum.repos.d directory, run wget http://mirrors.aliyun.com/repo/Centos-7.repo
(-c resumes an interrupted download; -O downloads under a new name: wget -O newname url)
yum install -y etcd
Edit the configuration file /etc/etcd/etcd.conf:
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" # change the IP to 0.0.0.0
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.85.30:2379" # change the IP to the master IP
Start the etcd service and enable it at boot:
systemctl start etcd
systemctl enable etcd
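A quick smoke test that etcd answers on the client port (etcdctl v2 syntax, as shipped with this package; /test is an arbitrary throwaway key):
etcdctl -C http://192.168.85.30:2379 cluster-health
etcdctl set /test "hello"
etcdctl get /test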
[root@master etcd]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address   State    PID/Program name
tcp        0      0 127.0.0.1:2380          0.0.0.0:*         LISTEN   5045/etcd
tcp        0      0 0.0.0.0:111             0.0.0.0:*         LISTEN   689/rpcbind
tcp        0      0 192.168.122.1:53        0.0.0.0:*         LISTEN   1391/dnsmasq
tcp        0      0 0.0.0.0:22              0.0.0.0:*         LISTEN   997/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*         LISTEN   995/cupsd
tcp6       0      0 :::2379                 :::*              LISTEN   5045/etcd
etcd is listening on port 2379 on all interfaces and on port 2380 on localhost only.
Port 2379 serves external clients (Kubernetes writes its data to etcd over 2379); port 2380 is used for data synchronization between members of an etcd cluster.
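The client port can also be probed over plain HTTP, which shows that 2379 is the externally served API (etcd v2's /version endpoint; the IP is this walkthrough's master):
curl http://192.168.85.30:2379/version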
II. Master installation
[root@master etcd]# yum -y install kubernetes-master.x86_64
This installs both kubernetes-master and kubernetes-client.
[root@master kubernetes]# ls
apiserver config controller-manager scheduler
Edit the apiserver configuration file /etc/kubernetes/apiserver:
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" # change the IP to 0.0.0.0
KUBE_API_PORT="--port=8080" # uncomment this line
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.85.30:2379" # change the IP to the master IP
KUBELET_PORT="--kubelet-port=10250" # uncomment this line
Edit the controller-manager and scheduler configuration; both share /etc/kubernetes/config (kube-proxy is configured in this file as well):
KUBE_MASTER="--master=http://192.168.85.30:8080" # change the IP to the master IP
Start the kube-apiserver, kube-controller-manager, and kube-scheduler services and enable them at boot:
[root@master kubernetes]# systemctl start kube-apiserver
[root@master kubernetes]# systemctl start kube-controller-manager
[root@master kubernetes]# systemctl start kube-scheduler
[root@master kubernetes]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master kubernetes]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master kubernetes]# systemctl enable kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
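The six commands above can equally be written as one loop (an equivalent shorthand, using the service names installed by the kubernetes-master package):
for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl start $svc && systemctl enable $svc
done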
Check the master status:
[root@master kubernetes]# kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
III. Node installation (note: the master is also used as a node)
[root@master ~ ]# yum -y install kubernetes-node.x86_64 (installation is slow; if the download fails, retry a few times)
Installing the node package also pulls in Docker automatically.
Edit the proxy configuration file /etc/kubernetes/config:
KUBE_MASTER="--master=http://192.168.85.30:8080" # change the IP to the master IP
Edit the kubelet configuration file /etc/kubernetes/kubelet:
KUBELET_ADDRESS="--address=192.168.85.31" # change the IP to each node's own IP
KUBELET_PORT="--port=10250" # uncomment this line
KUBELET_HOSTNAME="--hostname-override=node1" # each node's hostname (must be unique and resolvable via hosts entries); an IP also works
KUBELET_API_SERVER="--api-servers=http://192.168.85.30:8080" # change the IP to the master IP
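For the hostname overrides to resolve, every machine needs matching /etc/hosts entries. A sketch with this walkthrough's addresses (node2's IP is an assumption; substitute your own):
cat >> /etc/hosts <<'EOF'
192.168.85.30 master   # master IP from this walkthrough
192.168.85.31 node1    # node1 IP from this walkthrough
192.168.85.32 node2    # assumed; replace with node2's real IP
EOF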
Start the kubelet and kube-proxy services and enable them at boot:
[root@master kubernetes]# systemctl start kubelet # starting kubelet also starts docker
[root@master kubernetes]# systemctl start kube-proxy
[root@master kubernetes]# systemctl enable kubelet
[root@master kubernetes]# systemctl enable kube-proxy
Check node status from the master:
[root@master kubernetes]# kubectl get nodes
NAME      STATUS    AGE
master    Ready     7m
node1     Ready     5m
node2     Ready     4m
IV. Flannel network plugin installation (on all nodes; it provides communication between containers across hosts)
yum install -y flannel
Edit the flannel configuration file /etc/sysconfig/flanneld:
FLANNEL_ETCD_ENDPOINTS="http://192.168.85.30:2379" # change the IP to the etcd IP, i.e. the master IP
FLANNEL_ETCD_PREFIX="/atomic.io/network" # no change needed; quoted here because the next step writes under this key
Set the etcd key (only on the etcd host, i.e. the master):
[root@master sysconfig]# etcdctl set /atomic.io/network/config '{ "Network":"172.16.0.0/16" }'
[root@master sysconfig]# systemctl start flanneld
[root@master sysconfig]# systemctl enable flanneld
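The key can be read back to confirm flannel will see it (same etcdctl v2 syntax as above):
etcdctl get /atomic.io/network/config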
At this point ifconfig shows an extra flannel0 interface. Its address is assigned at random out of the configured Network range, but it falls in a different subnet than the docker0 interface, so the docker service must be restarted for docker0 to move into the same subnet as flannel0.
[root@master sysconfig]# systemctl restart docker
Start the flanneld service on the nodes and restart the nodes' docker service as well; a sketch of the node-side commands follows.
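(Run on each node; a sketch assuming the same service names as above.)
systemctl start flanneld && systemctl enable flanneld
systemctl restart docker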
On every node the flannel0 and docker0 interfaces share a subnet, but each node gets a different one; my three nodes ended up on the .2, .28, and .51 subnets respectively.
[root@master ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.16.2.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:11:f5:e4:6b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.85.30  netmask 255.255.255.0  broadcast 192.168.85.255
        inet6 fe80::20c:29ff:fe8d:2852  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:8d:28:52  txqueuelen 1000  (Ethernet)
        RX packets 236138  bytes 169924220 (162.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 149103  bytes 60479160 (57.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.16.2.0  netmask 255.255.0.0  destination 172.16.2.0
        inet6 fe80::212c:461d:d5ad:b79c  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
……
Using the busybox image as an example: search first with docker search <image name>; the results are below, and entries with [OK] in the OFFICIAL column are official images.
[root@master ~]# docker search busybox
INDEX       NAME                              DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
docker.io   docker.io/busybox                 Busybox base image.                             1977    [OK]
docker.io   docker.io/progrium/busybox                                                        71                 [OK]
docker.io   docker.io/radial/busyboxplus      Full-chain, Internet enabled, busybox made...   32                 [OK]
docker.io   docker.io/yauritux/busybox-curl   Busybox with CURL                               10
docker.io   docker.io/arm32v7/busybox         Busybox base image.                             8
docker.io   docker.io/armhf/busybox           Busybox base image.                             6
docker.io   docker.io/odise/busybox-curl                                                      4                  [OK]
docker.io   docker.io/arm64v8/busybox         Busybox base image.                             3
docker.io   docker.io/aarch64/busybox         Busybox base image.                             2
……
Download the image with docker pull <image name>, then list local images with docker images.
[root@master ~]# docker pull docker.io/busybox
Using default tag: latest
Trying to pull repository docker.io/library/busybox ...
latest: Pulling from docker.io/library/busybox
61c5ed1cbdf8: Pull complete
Digest: sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977
Status: Downloaded newer image for docker.io/busybox:latest
[root@master ~]# docker images
REPOSITORY          TAG      IMAGE ID       CREATED       SIZE
docker.io/busybox   latest   018c9d7b792b   4 weeks ago   1.22 MB
Use docker run -it <image name> to enter a container:
[root@master ~]# docker run -it busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:02:02
          inet addr:172.16.2.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:acff:fe10:202/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2626 (2.5 KiB)  TX bytes:648 (648.0 B)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
/ # ping 172.16.28.2
PING 172.16.28.2 (172.16.28.2): 56 data bytes
^z
Check the IP inside a container on each node and ping between them: containers on different nodes cannot reach each other. This is because Docker 1.13+ sets the iptables FORWARD chain policy to DROP; it must be changed to ACCEPT:
[root@master sysconfig]# iptables -P FORWARD ACCEPT # change the policy
[root@master sysconfig]# iptables -L -n # view the policy
[root@master ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-FIREWALL  all  --  0.0.0.0/0        0.0.0.0/0
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:53
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:53
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:67
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:67
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-ISOLATION  all  --  0.0.0.0/0     0.0.0.0/0
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT)
……
To keep the policy from being lost after a restart, write it into Docker's unit file /usr/lib/systemd/system/docker.service:
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT # add this line
ExecStart=/usr/bin/dockerd-current \
Then reload unit files with systemctl daemon-reload (and restart docker for the change to take effect).
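A one-liner to make the same edit on every node (a sketch; it assumes the stock unit file contains the ExecStart=/usr/bin/dockerd-current line shown above):
sed -i '/^ExecStart=\/usr\/bin\/dockerd-current/i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service
systemctl daemon-reload
systemctl restart docker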
---------------------------------------------------------------------------------------------------------------------------------------------
Prepare three virtual machines: one as the master, two as nodes.
I. Install Docker, on the two nodes
1. Mount the CentOS 7 ISO in the VM at /mnt; set the hostname; configure a static IP; configure hosts resolution; add nameserver 8.8.8.8 to /etc/resolv.conf; disable SELinux and the firewall.
2. Install the Docker dependency packages: yum -y install yum-utils device-mapper-persistent-data lvm2
3. Install wget: yum install -y wget
4. In the /etc/yum.repos.d/ directory, run wget https://download.docker.com/linux/centos/docker-ce.repo, which downloads docker-ce.repo into the current directory.
5. List all available versions with yum list docker-ce --showduplicates | sort -r (a pinned-version install example follows this list).
6. yum -y install docker-ce downloads and installs the latest Docker from the official repository described by docker-ce.repo.
7. Start Docker: systemctl start docker; enable it at boot: systemctl enable docker
8. Check the installed Docker with docker info.
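If a specific version is wanted rather than the latest, the package name can be pinned (a sketch; the version string must be one shown by the --showduplicates listing above, and 18.09.9 here is only an example):
yum -y install docker-ce-18.09.9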
II. Install certificates, on the master
1. Go to the /usr/local/bin directory and download the three cfssl tool binaries used for generating certificates:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
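After downloading, the three files need execute permission and their conventional command names (a sketch; the rename targets are the usual cfssl/cfssljson/cfssl-certinfo names, not mandated by the source):
cd /usr/local/bin
mv cfssl_linux-amd64 cfssl
mv cfssljson_linux-amd64 cfssljson
mv cfssl-certinfo_linux-amd64 cfssl-certinfo
chmod +x cfssl cfssljson cfssl-certinfo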
(For reference, Docker RPM packages can also be downloaded directly from https://download.docker.com/linux/centos/7/x86_64/stable/Packages/)