【Linux】【Services】【SaaS】 Installing Kubernetes with kubeadm
1. Introduction
2. Environment
2.1. OS: CentOS Linux release 7.5.1804 (Core)
2.2. Ansible: 2.6.2-1.el7
2.3. docker:
2.4. kubernetes:
3. Preparation
3.1. Ansible: with this many nodes, install Ansible so commands can be run on all of them at once
yum -y install ansible
Take a look at the inventory file:
~]# cat /etc/ansible/hosts
[all]
service ansible_host=10.210.55.220 hostname=server
master1 ansible_host=10.210.55.221 hostname=master1
master2 ansible_host=10.210.55.222 hostname=master2
master3 ansible_host=10.210.55.223 hostname=master3
node1 ansible_host=10.210.55.226 hostname=node1
node2 ansible_host=10.210.55.227 hostname=node2
node3 ansible_host=10.210.55.228 hostname=node3
node4 ansible_host=10.210.55.229 hostname=node4

[master]
master1
master2
master3

[etcd]
master1
master2
master3

[worker]
node1
node2
node3
node4
And the hosts file:
~]# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
10.210.55.220   service service.eric.com
10.210.55.221   master1 master1.eric.com
10.210.55.222   master2 master2.eric.com
10.210.55.223   master3 master3.eric.com
10.210.55.226   node1 node1.eric.com
10.210.55.227   node2 node2.eric.com
10.210.55.228   node3 node3.eric.com
10.210.55.229   node4 node4.eric.com
Finally, set up passwordless SSH login for the root account so Ansible can reach every node.
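A minimal sketch of the key setup, assuming root has no key pair yet and using the hostnames from the inventory above (ssh-copy-id will prompt for each node's root password once):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in master1 master2 master3 node1 node2 node3 node4; do
    ssh-copy-id root@$h
done

With the keys in place, give it a quick test: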
~]# ansible all -m command -a hostname
service | SUCCESS | rc=0 >>
centos-0

node4 | SUCCESS | rc=0 >>
centos-node-4

node3 | SUCCESS | rc=0 >>
centos-node-3

node2 | SUCCESS | rc=0 >>
centos-node-2

node1 | SUCCESS | rc=0 >>
centos-node-1

master2 | SUCCESS | rc=0 >>
centos-master-2

master3 | SUCCESS | rc=0 >>
centos-master-3

master1 | SUCCESS | rc=0 >>
centos-master-1
3.2. Configure the yum repositories; I use the Aliyun mirrors directly
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
rpm --import https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

# the kubernetes repo file has to be written by hand
]# cat kubernetes.repo
[kubernetes]
name = kubernetes@aliyun
baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck = 1
gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled = 1

# verify
]# yum repolist
]# yum makecache
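Every node needs these repos, so rather than repeating the steps host by host, one option is to push the two repo files out with Ansible's copy module. A sketch, assuming docker-ce.repo and kubernetes.repo were prepared on the control host as shown above:

ansible all -m copy -a "src=/etc/yum.repos.d/docker-ce.repo dest=/etc/yum.repos.d/docker-ce.repo"
ansible all -m copy -a "src=kubernetes.repo dest=/etc/yum.repos.d/kubernetes.repo"
ansible all -m command -a "yum makecache"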
3.3. Install the base packages
# or via Ansible: ansible all -m yum -a "state=present name=docker-ce,kubectl,kubelet,kubeadm"
yum install -y docker-ce kubectl kubelet kubeadm
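Note that the repo installs the newest packages (the nodes later report v1.11.2), while the kubeadm init below pins v1.11.1 for the control-plane images. If you prefer the packages to match that version exactly, yum's name-version form can be used; a sketch, assuming those builds are still available in the mirror:

yum install -y docker-ce kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1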
3.4. (Optional) Configure a proxy for pulling images
]# cat !$
cat /usr/lib/systemd/system/docker.service
...
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
Environment="HTTPS_PROXY=http://www.ik8s.io:10080/"
Environment="NO_PROXY=127.0.0.0/8,10.210.55.0/24"
...
]# systemctl daemon-reload
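Editing the unit file shipped by the docker-ce package works, but a package update may overwrite it; the same environment can instead live in a systemd drop-in. A sketch, reusing the proxy address from above:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTPS_PROXY=http://www.ik8s.io:10080/"
Environment="NO_PROXY=127.0.0.0/8,10.210.55.0/24"
EOF
systemctl daemon-reload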
3.5. Configure the bridge kernel parameters on the master. These two parameters belong to the bridge module, so their /proc entries only appear after the module is loaded, either by hand or when Docker starts. Do not leave them disabled: with them off, kube-dns cannot start.
~]# systemctl start docker
~]# echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
~]# echo "net.bridge.bridge-nf-call-ip6tables=1" >> /etc/sysctl.conf
~]# sysctl -p
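To apply the same settings on every node with Ansible instead of logging in to each host, a sketch using the sysctl module (this assumes Docker has already been started on each host so the bridge entries exist; the values are written to /etc/sysctl.conf, the module's default file):

ansible all -m sysctl -a "name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=yes state=present"
ansible all -m sysctl -a "name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=yes state=present"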
3.6. Enable docker and kubelet to start at boot; the kubelet configuration itself is left to kubeadm
~]# systemctl enable docker
~]# systemctl enable kubelet
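The same enablement can be rolled out to all hosts in one go; a sketch using Ansible's systemd module:

ansible all -m systemd -a "name=docker enabled=yes"
ansible all -m systemd -a "name=kubelet enabled=yes"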
3.7. (Optional) Disable swap; newer Kubernetes releases can instead be told to ignore swap with an override flag
~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
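If you go the other way and disable swap outright (the behaviour kubeadm expects by default), a sketch; the sed pattern assumes the swap entry in /etc/fstab contains the word "swap" surrounded by whitespace:

swapoff -a
# keep swap off across reboots by commenting out the fstab entry
sed -i '/\sswap\s/ s/^/#/' /etc/fstab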
4. Initialize the cluster's master node with kubeadm
# Parameters can be given explicitly; anything omitted falls back to the defaults. It is best to pin the version:
# by default kubeadm installs the latest release, which can cause compatibility issues.
# --pod-network-cidr=10.244.0.0/16 matches flannel's default network, which is installed later.
kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
After a successful init, remember to run the steps below, otherwise kubectl cannot talk to the kube-apiserver. On a production system it is better to do this as a regular user; here I simply use root.
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

# A pod network add-on must also be installed, otherwise the master stays in the NotReady state
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

# Run the command below on each node to join it to the master
You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.210.55.223:6443 --token xd0696.i3kegveg7g1z3i09 --discovery-token-ca-cert-hash sha256:9498a3f73791b9b7c228cd468fe7332581e703771de0da68811a3391c717b953
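Since everything here runs as root, it is also fine to skip the copy and simply point kubectl at the admin kubeconfig via the KUBECONFIG environment variable:

export KUBECONFIG=/etc/kubernetes/admin.conf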
At this point the master is still NotReady because there is no pod network yet, so we need to install flannel.
]# kubectl get nodes
NAME              STATUS     ROLES     AGE       VERSION
centos-master-3   NotReady   master    16m       v1.11.2
See https://github.com/coreos/flannel
~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The required images will then be pulled, which may take a while.
~]# docker image ls
REPOSITORY                                 TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy-amd64                v1.11.1         d5c25579d0ff   4 weeks ago    97.8MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.1         816332bd9d11   4 weeks ago    187MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.1         52096ee87d0e   4 weeks ago    155MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.1         272b3a60cd68   4 weeks ago    56.8MB
k8s.gcr.io/coredns                         1.1.3           b3b94275d97c   2 months ago   45.6MB
k8s.gcr.io/etcd-amd64                      3.2.18          b8df3b177be2   4 months ago   219MB
quay.io/coreos/flannel                     v0.10.0-amd64   f0fad859c909   6 months ago   44.6MB
k8s.gcr.io/pause                           3.1             da86e6ba6ca1   8 months ago   742kB

~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master3   Ready     master    4m        v1.11.2

~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-jbrc6           1/1       Running   0          4m
kube-system   coredns-78fcdf6894-wqc96           1/1       Running   0          4m
kube-system   etcd-master3                       1/1       Running   0          3m
kube-system   kube-apiserver-master3             1/1       Running   0          3m
kube-system   kube-controller-manager-master3    1/1       Running   0          3m
kube-system   kube-flannel-ds-amd64-cnnmc        1/1       Running   0          2m
kube-system   kube-proxy-tvb7c                   1/1       Running   0          4m
kube-system   kube-scheduler-master3             1/1       Running   0          3m
5. On each worker node, run
kubeadm join 10.210.55.223:6443 --token xd0696.i3kegveg7g1z3i09 --discovery-token-ca-cert-hash sha256:9498a3f73791b9b7c228cd468fe7332581e703771de0da68811a3391c717b953
They will then show up on the master node:
~]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master3   Ready      master    9m        v1.11.2
node1     Ready      <none>    2m        v1.11.2
node2     Ready      <none>    39s       v1.11.2
node3     Ready      <none>    31s       v1.11.2
node4     NotReady   <none>    22s       v1.11.2
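One last note: the bootstrap token in the join command expires after 24 hours by default, so if more nodes need to be added later, a fresh join command can be printed on the master:

kubeadm token create --print-join-command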