Installing Kubernetes 1.15 with kubeadm
kubeadm is the official tool for quickly installing a Kubernetes cluster; it is updated in step with each Kubernetes release.
I. Preparation
1. System configuration
Before installing, complete the following preparation. Two CentOS 7.5 hosts are used:
cat /etc/hosts
192.168.100.30 master
192.168.100.32 node2
Disable the firewall and SELinux:
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Create the file /etc/sysctl.d/k8s.conf to adjust kernel parameters:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

# Run these commands to apply the changes
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
Swap should be disabled. (Because these servers are low on resources, swap is left enabled here; the swap-related preflight errors are ignored later during deployment.)
swapoff -a        # temporary
vim /etc/fstab    # permanent
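To make the change permanent without editing /etc/fstab by hand, the swap entry can also be commented out with sed. A minimal sketch, run here against a temporary copy with made-up sample contents; on a real host, point it at /etc/fstab itself:

```shell
# Work on a temporary copy for illustration; use /etc/fstab on a real host.
fstab_copy=$(mktemp)
cat > "$fstab_copy" <<'EOF'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF
# Prefix every active swap entry with '#' so it is skipped on the next boot
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$fstab_copy"
grep swap "$fstab_copy"
```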
Set up time synchronization:
yum install ntpdate -y
echo "*/20 * * * * /usr/sbin/ntpdate -u ntp.api.bz >/dev/null &" >> /var/spool/cron/root
2. Prerequisites for enabling IPVS in kube-proxy
Since IPVS has already been merged into the mainline kernel, enabling IPVS for kube-proxy only requires loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Run on all nodes:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to confirm the required kernel modules are loaded.
Install ipset and ipvsadm on every node:
yum -y install ipset ipvsadm
If these prerequisites are not met, kube-proxy falls back to iptables mode even when its configuration enables ipvs mode.
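That fallback is silent, so it can help to script the module check before switching modes. A sketch under stated assumptions: the helper `has_ipvs_modules` is a name invented here, not part of any tool, and it simply scans text such as `lsmod` output:

```shell
# has_ipvs_modules: succeed only if every required module name
# appears as a whole word in the text passed on stdin.
has_ipvs_modules() {
    loaded=$(cat)
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
        echo "$loaded" | grep -qw "$m" || return 1
    done
}

# On a real node:
#   lsmod | has_ipvs_modules && echo "ok to enable ipvs"
```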
3. Installing Docker
Add the Docker yum repository:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install Docker:
yum -y install docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io
Set the Docker cgroup driver to systemd
Using systemd as Docker's cgroup driver keeps nodes more stable under resource pressure, so change the cgroup driver to systemd on every node.

vim /etc/docker/daemon.json    # create the file if it does not exist
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
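Instead of opening an editor, the file can also be written non-interactively with a heredoc. A sketch; it writes to a temporary path for illustration, while on a real node the target is /etc/docker/daemon.json followed by a Docker restart:

```shell
# Temporary path for illustration; the real file is /etc/docker/daemon.json
daemon_json=$(mktemp)
cat > "$daemon_json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat "$daemon_json"
```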
Start Docker:
systemctl restart docker    # start docker
systemctl enable docker     # start on boot
docker info | grep -E "Server\ Version|Cgroup"
Server Version: 18.09.6
Cgroup Driver: systemd
II. Installing kubeadm
Configure the Kubernetes yum repository (Aliyun mirror):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubectl, and kubeadm:
yum -y install kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2
Enable kubelet at boot. (Note: only enable it at this point; do not start it yet!)
systemctl enable kubelet
III. Creating the cluster
Configure kubelet to ignore swap:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
1. Initializing the master node
kubeadm init --kubernetes-version=v1.15.2 \
    --image-repository registry.aliyuncs.com/google_containers \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/12 \
    --ignore-preflight-errors=Swap
# Parameter notes
--kubernetes-version              # Kubernetes version to install
--image-repository                # pull control-plane images from the Aliyun registry
--pod-network-cidr                # pod network CIDR
--service-cidr                    # service network CIDR
--ignore-preflight-errors=Swap    # ignore the swap preflight error
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.100.30:6443 --token c416qn.rdupmak2rhf5pqd8 \
--discovery-token-ca-cert-hash sha256:7c9f791d1008f061ea76ea1c8bae6b254246f6c92917a1fd0dcc4d0d8b4a1d51
Create the kubeconfig as the output above instructs:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster status and confirm that every component is Healthy:
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
If cluster initialization runs into problems, clean up with the following commands:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
Install the network plugin:
mkdir -p ~/k8s/    # directory for yaml files
cd ~/k8s
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml    # may take several attempts
kubectl apply -f kube-flannel.yml
# If a node has multiple network interfaces, the cluster's internal interface must be specified with the --iface argument in kube-flannel.yml; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
......
Use kubectl get pod --all-namespaces -o wide to make sure every Pod is in the Running state.
kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-bccdc95cf-7md8h          1/1     Running   0          9m47s   10.244.0.2       master   <none>           <none>
kube-system   coredns-bccdc95cf-lff4h          1/1     Running   0          9m47s   10.244.0.3       master   <none>           <none>
kube-system   etcd-master                      1/1     Running   0          8m43s   192.168.100.30   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   0          9m4s    192.168.100.30   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   0          8m49s   192.168.100.30   master   <none>           <none>
kube-system   kube-flannel-ds-amd64-f5t5p      1/1     Running   0          8m10s   192.168.100.30   master   <none>           <none>
kube-system   kube-proxy-m8gjz                 1/1     Running   0          9m47s   192.168.100.30   master   <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   0          9m8s    192.168.100.30   master   <none>           <none>
2. Adding worker nodes
Next, add the host node2 to the Kubernetes cluster by running the join command recorded earlier on node2:
# Since swap has not been disabled, the flag that ignores the swap preflight error must be added, or the join fails
kubeadm join 192.168.100.30:6443 --token c416qn.rdupmak2rhf5pqd8 \
    --discovery-token-ca-cert-hash sha256:7c9f791d1008f061ea76ea1c8bae6b254246f6c92917a1fd0dcc4d0d8b4a1d51 \
    --ignore-preflight-errors=Swap
node2 joined the cluster without trouble. Now list the cluster's nodes from the master:
kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   49m   v1.15.2
node2    Ready    <none>   90s   v1.15.2
3. Removing a node
On the master node, run:
kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2
# If there are other nodes, this also needs to be run on them
kubectl delete node node2
On node2, run:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
4. Enabling IPVS in kube-proxy
Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":
kubectl edit configmap kube-proxy -n kube-system
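kubectl edit is interactive; the same change can be scripted by piping the ConfigMap through sed and back into kubectl apply. A sketch: the substitution is demonstrated below on a made-up sample line, on the assumption that the field reads `mode: ""` by default in a v1.15 kube-proxy config:

```shell
# The substitution itself, shown on a sample line from config.conf
sample_line='    mode: ""'
patched=$(echo "$sample_line" | sed 's/mode: ""/mode: "ipvs"/')
echo "$patched"

# On a real cluster (requires kubectl and a working kubeconfig):
# kubectl get configmap kube-proxy -n kube-system -o yaml \
#   | sed 's/mode: ""/mode: "ipvs"/' \
#   | kubectl apply -f -
```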
Then restart the kube-proxy pods on each node:
# Run on the master node only
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
Check whether IPVS is now enabled:
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-62pjd       1/1     Running   0          118s
kube-proxy-7mczc       1/1     Running   0          2m
# Check the log of one of the pods
kubectl logs kube-proxy-62pjd -n kube-system
I1213 09:15:10.643663       1 server_others.go:170] Using ipvs Proxier.
W1213 09:15:10.644248       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I1213 09:15:10.644851       1 server.go:534] Version: v1.15.2
I1213 09:15:10.675116       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1213 09:15:10.675671       1 config.go:187] Starting service config controller
I1213 09:15:10.675701       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I1213 09:15:10.675806       1 config.go:96] Starting endpoints config controller
I1213 09:15:10.675824       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I1213 09:15:10.776116       1 controller_utils.go:1036] Caches are synced for service config controller
I1213 09:15:10.776311       1 controller_utils.go:1036] Caches are synced for endpoints config controller
The log line Using ipvs Proxier confirms that ipvs mode is enabled.
IV. Deploying the Dashboard
1. Download the Dashboard yaml
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
The manifest pulls its image from the official registry by default, which needs to be switched to the Aliyun mirror. Use the commands below, or edit kubernetes-dashboard.yaml by hand:
sed -i 's#k8s.gcr.io#registry.aliyuncs.com/google_containers#g' kubernetes-dashboard.yaml
sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' kubernetes-dashboard.yaml
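The second sed command inserts the two new lines after the targetPort: line. It can be sanity-checked against a small sample fragment before touching the real manifest; the Service snippet below is an illustrative stand-in, not the full file:

```shell
sample=$(mktemp)
cat > "$sample" <<'EOF'
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
EOF
# Same insertion the document applies to kubernetes-dashboard.yaml
sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' "$sample"
cat "$sample"
```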
The Dashboard Service needs nodePort: 30001 and type: NodePort added, which maps the Dashboard port to a node port for external access. After editing, the file looks like this:
......
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
......
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
Deploy the Dashboard:
kubectl apply -f kubernetes-dashboard.yaml
After it is created, check the status of the related services:
kubectl get deployment kubernetes-dashboard -n kube-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           23m
kubectl get services -n kube-system | grep dashboard
kubernetes-dashboard   NodePort   10.101.240.195   <none>   443:30001/TCP   24m
Create a service account and retrieve the token used to authenticate to the Dashboard:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Open the Dashboard in a browser: https://192.168.100.30:30001
Log in to the Dashboard with the token printed above.
After authentication succeeds, the Dashboard appears as shown in the figure.