Installing Kubernetes 1.13 with kubeadm
Part 1: Environment Initialization
1. Disable SELinux and the firewall (iptables/firewalld).
2. Set up local DNS resolution; here I use /etc/hosts.
3. Set up passwordless SSH between the machines (a sketch of steps 1-3 follows this list).
4. Hosts:
master: 10.0.18.210  node1: 10.0.18.211  node2: 10.0.18.212
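These preparation steps might look like the following, run on each machine (a minimal sketch; the hostnames and IPs come from the list above, and your environment may differ):

setenforce 0                                                          # disable SELinux immediately
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # keep it off after reboot
systemctl stop firewalld && systemctl disable firewalld               # drop the host firewall

cat >> /etc/hosts <<'EOF'
10.0.18.210 master
10.0.18.211 node1
10.0.18.212 node2
EOF

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa     # passwordless SSH from master to the nodes
ssh-copy-id node1
ssh-copy-id node2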
Part 2: Configure the yum Repositories
[root@master yum.repos.d]# vim kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
[root@master yum.repos.d]# cat docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
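With both repo files in place, it may help to refresh yum's metadata cache so the new repositories are picked up before the repolist check (optional, but cheap):

yum clean all        # drop any stale metadata
yum makecache fast   # rebuild the cache from the configured repos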
[root@master yum.repos.d]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * epel: mirrors.yun-idc.com
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.huaweicloud.com
repo id                        repo name                                           status
!base/7/x86_64                 CentOS-7 - Base                                     10,019
!docker-ce-stable/x86_64       Docker CE Stable - x86_64                               39
*!epel/x86_64                  Extra Packages for Enterprise Linux 7 - x86_64      13,041
!extras/7/x86_64               CentOS-7 - Extras                                      385
!kubernetes                    Kubernetes Repo                                        336
!rsyslog_v8/7/x86_64           Adiscon CentOS-7 - local packages for x86_64         2,015
!updates/7/x86_64              CentOS-7 - Updates                                   1,493
!zabbix/x86_64                 Zabbix Official Repository - x86_64                    183
!zabbix-non-supported/x86_64   Zabbix Official Repository non-supported - x86_64        4
repolist: 27,515
Install kubeadm, kubelet, kubectl, and docker-ce:
yum -y install docker-ce kubeadm-1.13.2-0 kubectl-1.13.2-0 kubelet-1.13.2-0
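The versions are pinned to 1.13.2-0 here; if you want to confirm which builds the mirror actually carries before pinning, yum can list every available version:

yum list --showduplicates kubeadm kubelet kubectl   # shows all versions the repo offers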
Copy the yum repo files to node1 and node2, and install docker-ce, kubeadm-1.13.2-0, and kubelet-1.13.2-0 on both nodes as well:
[root@master yum.repos.d]# scp kubernetes.repo docker-ce.repo node1:/etc/yum.repos.d/
[root@master yum.repos.d]# scp kubernetes.repo docker-ce.repo node2:/etc/yum.repos.d/
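Since passwordless SSH is already set up, the node-side installs can be driven from the master in one loop (a sketch; it assumes the repo files were copied as above):

for n in node1 node2; do
  ssh "$n" 'yum -y install docker-ce kubeadm-1.13.2-0 kubelet-1.13.2-0'
done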
Part 3: Master Initialization
1. Due to certain insurmountable network factors, kubeadm init will fail because it cannot pull the required images from k8s.gcr.io unless you can get around the block. I downloaded the needed Docker images in advance; before initializing, load each image archive into Docker with docker load < filename. The archives are available here:
Link: https://pan.baidu.com/s/1rkwYyeShXF05OswhweNCAw
Extraction code: gs0c
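If you are unsure which images this kubeadm release expects, kubeadm itself can print the list, which is handy for fetching them through whatever channel you have:

kubeadm config images list --kubernetes-version v1.13.2   # prints the required image names and tags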
[root@master images]# docker load < coredns.tar
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[root@master images]# systemctl start docker
[root@master images]# docker load < coredns.tar
9198eadacc0a: Loading layer [==================================================>]  542.2kB/542.2kB
30a6f49aa944: Loading layer [==================================================>]  39.74MB/39.74MB
Loaded image: k8s.gcr.io/coredns:1.2.6
[root@master images]# docker load < etcd.tar
f9d9e4e6e2f0: Loading layer [==================================================>]  1.378MB/1.378MB
7882cc107ed3: Loading layer [==================================================>]  195.1MB/195.1MB
43f7b6974634: Loading layer [==================================================>]  23.45MB/23.45MB
Loaded image: k8s.gcr.io/etcd:3.2.24
[root@master images]# docker load < kube-apiserver.tar
5fe6d025ca50: Loading layer [==================================================>]  43.87MB/43.87MB
c248e3e3678b: Loading layer [==================================================>]  138.6MB/138.6MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.13.2
[root@master images]# docker load < kube-controller-manager.tar
0184d92152bd: Loading layer [==================================================>]  103.9MB/103.9MB
Loaded image: k8s.gcr.io/kube-controller-manager:v1.13.2
[root@master images]# docker load < kube-proxy.tar
e5a609b37e16: Loading layer [==================================================>]  3.403MB/3.403MB
3155f3c58fe7: Loading layer [==================================================>]  34.84MB/34.84MB
Loaded image: k8s.gcr.io/kube-proxy:v1.13.2
[root@master images]# docker load < kube-scheduler.tar
ee29d41ee5b0: Loading layer [==================================================>]  37.3MB/37.3MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.13.2
[root@master images]# docker load < pause.tar
e17133b79956: Loading layer [==================================================>]  744.4kB/744.4kB
Loaded image: k8s.gcr.io/pause:3.1
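Loading the archives one at a time works; if they all sit in one directory, a small loop does the same thing:

for f in *.tar; do
  docker load < "$f"
done
docker images        # verify the seven k8s.gcr.io images are now present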
2. Preparation before initialization:
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl enable kubelet
Kubernetes also expects swap to be off. Instead of disabling it here, we configure the kubelet to tolerate it:
[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
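If you would rather actually disable swap (the route kubeadm recommends, which also makes the --ignore-preflight-errors=Swap flag below unnecessary), a minimal sketch:

swapoff -a                            # turn swap off for the running system
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap entry so it stays off after reboot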
3. Initialize:
[root@master images]# kubeadm init --kubernetes-version=v1.13.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.4. Latest validated version: 18.06
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hd04 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.213]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hd04 localhost] and IPs [192.168.0.213 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hd04 localhost] and IPs [192.168.0.213 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.503218 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "hd04" as an annotation
[mark-control-plane] Marking the node hd04 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node hd04 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 07huya.1nhbzzu3lu6j54o2
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.0.210:6443 --token 4vtrh8.mepwl6gl0s1mlt3j --discovery-token-ca-cert-hash sha256:83ad166dc9a8805d827b95112f5437c1e547c95482fb92d7176ddd6f55a9cc79
The mkdir/cp/chown block in that output needs to be run on the master, and the kubeadm join command at the end must be saved: it is what the nodes will run later to join the cluster.
[root@master images]# mkdir -p $HOME/.kube
[root@master images]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master images]# kubectl get node
NAME   STATUS     ROLES    AGE    VERSION
hd04   NotReady   master   7m3s   v1.13.2
The master now shows up as a node, but its status is NotReady because no pod network is installed yet. Next, install flannel:
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Running kubectl get node immediately afterwards will likely still show NotReady, because the flannel image is still being pulled; give it a moment:
[root@master ~]# kubectl get node
NAME   STATUS   ROLES    AGE   VERSION
hd01   Ready    master   22h   v1.13.2
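Instead of polling kubectl get node, you can also watch the flannel pods come up directly (press Ctrl-C once they are Running):

kubectl get pods -n kube-system -w    # -w streams status changes as they happen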
That completes the master; next, set up the nodes:
[root@node1 ~]# systemctl start docker
[root@node1 ~]# systemctl enable docker
[root@node1 ~]# systemctl enable kubelet
[root@node1 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@node1 ~]# kubeadm join 192.168.0.210:6443 --token 4vtrh8.mepwl6gl0s1mlt3j --discovery-token-ca-cert-hash sha256:83ad166dc9a8805d827b95112f5437c1e547c95482fb92d7176ddd6f55a9cc79
At this point node1 has joined the cluster; repeat the same steps on node2.
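One caveat worth knowing: bootstrap tokens expire after 24 hours by default, so if you add a node later the saved join command may no longer work. A fresh one can be generated on the master:

kubeadm token create --print-join-command    # mints a new token and prints a complete kubeadm join command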
On the master:
[root@master ~]# kubectl get node
NAME   STATUS   ROLES    AGE     VERSION
hd01   Ready    master   22h     v1.13.2
hd02   Ready    <none>   5h38m   v1.13.2
hd03   Ready    <none>   4h54m   v1.13.2
[root@master ~]# kubectl get pod -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-57gcl       1/1     Running   0          5h21m
coredns-86c58d9df4-rkfzg       1/1     Running   0          5h21m
etcd-hd01                      1/1     Running   1          22h
kube-apiserver-hd01            1/1     Running   1          22h
kube-controller-manager-hd01   1/1     Running   0          22h
kube-flannel-ds-amd64-kw99x    1/1     Running   0          5h27m
kube-flannel-ds-amd64-nktsr    1/1     Running   0          4h55m
kube-flannel-ds-amd64-r442w    1/1     Running   0          5h27m
kube-proxy-5dlm5               1/1     Running   0          5h38m
kube-proxy-lnj7f               1/1     Running   0          4h55m
kube-proxy-nvc8p               1/1     Running   0          22h
kube-scheduler-hd01            1/1     Running   1          22h
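All three nodes are Ready and every kube-system pod is Running, so the cluster is up. As a final smoke test you could schedule a throwaway workload (nginx-test is just a hypothetical name):

kubectl create deployment nginx-test --image=nginx   # should be scheduled onto one of the worker nodes
kubectl get pods -o wide                             # confirm it reaches Running with a 10.244.x.x pod IP
kubectl delete deployment nginx-test                 # clean up afterwards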