Deploying Kubernetes 1.15.0

Deployment environment: CentOS 7.4

master01: 192.168.85.110

node01: 192.168.85.120

node02: 192.168.85.130

 

Add the host entries on every node

[root@master01 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.85.110 master01

192.168.85.120 node01

192.168.85.130 node02
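To avoid editing the file by hand on each machine, the entries can be appended once and copied out — a minimal sketch, assuming root SSH access from master01 to the nodes:

cat >> /etc/hosts <<'EOF'
192.168.85.110 master01
192.168.85.120 node01
192.168.85.130 node02
EOF
for host in node01 node02; do scp /etc/hosts $host:/etc/hosts; done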

 

Run everything below on all nodes:

Prepare the Docker yum repository

Prepare the Kubernetes yum repository

 

Configure the Docker yum repo

cd /etc/yum.repos.d/

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

 

Configure the Kubernetes yum repo

Create /etc/yum.repos.d/kubernetes.repo with the following content:

[kubernetes]

name=kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

gpgcheck=0

enabled=1
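If you prefer not to open an editor, the same repo file can be written in one step with a heredoc:

cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
EOF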

 

Install on all nodes

Install the basic tools

yum install lrzsz wget vim -y

Deploying with kubeadm

Install Docker with yum

yum -y install docker-ce

 

Edit Docker's environment variables

If you are behind an HTTP proxy, add your own proxy settings here; otherwise skip this step.

vim /usr/lib/systemd/system/docker.service

# add under the [Service] section:
Environment="NO_PROXY=127.0.0.0/8"
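Note that edits to the unit file under /usr/lib/systemd/system are overwritten by package upgrades; a systemd drop-in survives them. A minimal sketch, assuming a proxy at http://proxy.example.com:3128 (a placeholder — substitute your own):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=127.0.0.0/8,192.168.85.0/24"
EOF
systemctl daemon-reload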

 

Configure a Docker registry mirror (for mainland China)

mkdir -p /etc/docker

vim /etc/docker/daemon.json

{

  "registry-mirrors": ["https://lvb4p7mn.mirror.aliyuncs.com"]

}

 

Reload systemd so the changes take effect

systemctl daemon-reload

 

Start Docker and enable it at boot

systemctl start docker

systemctl enable docker

 

Install kubeadm

Install kubeadm, kubectl, and kubelet with yum

yum -y install  kubeadm-1.15.0-0.x86_64  kubectl-1.15.0-0.x86_64 kubelet-1.15.0-0.x86_64 kubernetes-cni-0.7.5-0.x86_64

 

If swap has not been turned off, tell the kubelet to ignore it:

vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--fail-swap-on=false"

KUBE_PROXY_MODE=ipvs
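If you would rather turn swap off entirely (the state kubeadm expects by default), a minimal sketch:

swapoff -a
# comment out the swap entry so it stays off after a reboot
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /etc/fstab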

 

Enable kubelet at boot

systemctl enable kubelet

 

Loading the images

kubeadm images

Download the image bundle k8s-1.15.0.tar.gz in advance

Link: https://pan.baidu.com/s/1AhDsQHUIMd0CQufGteFSXw  extraction code: vshs

 

Upload it to every node

Load the images on every node

docker load  -i k8s-1.15.0.tar.gz

 

flannel image

Download the image bundle flannel-v0.11.0.tar.gz in advance

Link: https://pan.baidu.com/s/1QEssOf2yX1taupQT4lTxQg  extraction code: x42r

 

Load the image on every node

docker load  -i flannel-v0.11.0.tar.gz
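Copying and loading the two bundles on every node can be scripted from master01 — a sketch, assuming SSH key access to the nodes and both files in the current directory:

for host in node01 node02; do
    scp k8s-1.15.0.tar.gz flannel-v0.11.0.tar.gz $host:/root/
    ssh $host 'docker load -i /root/k8s-1.15.0.tar.gz && docker load -i /root/flannel-v0.11.0.tar.gz'
done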

 

kubectl command auto-completion

yum install bash-completion* -y

## load it now and persist it in the shell profile

source <(kubectl completion bash)

echo "source <(kubectl completion bash)" >> ~/.bashrc

 

Deploy Kubernetes

Master node deployment

Initialize with kubeadm

kubeadm init  --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all

 

After initialization completes

Save the join command and its token; the nodes will need it:

kubeadm join 192.168.85.110:6443 --token fo0kd9.ocdrd0obki28g76i  --discovery-token-ca-cert-hash sha256:9a5b3ec15c16926e667281cda008b0b550ed5404628453929b0c2a551cbb0bfd --ignore-preflight-errors=all

 

Run the three steps kubeadm prints:

mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config
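If you are working as root, kubeadm's output also offers the shorter alternative of pointing KUBECONFIG at the admin credentials:

export KUBECONFIG=/etc/kubernetes/admin.conf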

 

Check the cluster health

[root@master01 ~]# kubectl get cs

NAME                 STATUS    MESSAGE             ERROR

controller-manager   Healthy   ok                 

scheduler            Healthy   ok                 

etcd-0               Healthy   {"health":"true"} 

 

Deploy the flannel network plugin on the master

[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
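Watch the kube-system pods until the flannel and CoreDNS pods reach Running; the nodes turn Ready once the network is up:

kubectl -n kube-system get pods -o wide -w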

 

Node deployment

Join each node using the token:

kubeadm join 192.168.85.110:6443 --token fo0kd9.ocdrd0obki28g76i  --discovery-token-ca-cert-hash sha256:9a5b3ec15c16926e667281cda008b0b550ed5404628453929b0c2a551cbb0bfd --ignore-preflight-errors=all

A token is valid for 24 hours by default. Once it expires it can no longer be used, and joining nodes later requires a new one.

 

Generate a new token on the master

[root@master01 ~]# kubeadm token create

905hgq.1akgmga715dzooxo

[root@master01 ~]# kubeadm token list

TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS

905hgq.1akgmga715dzooxo   23h       2019-06-23T15:18:24+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

 

Get the sha256 hash of the CA certificate

[root@master01 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

2db0df25f40a3376e35dc847d575a2a7def59604b8196f031663efccbc8290c2

 

Join the cluster with the new token

kubeadm join 192.168.85.110:6443 --token 905hgq.1akgmga715dzooxo \

   --discovery-token-ca-cert-hash sha256:2db0df25f40a3376e35dc847d575a2a7def59604b8196f031663efccbc8290c2 \

--ignore-preflight-errors=all
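Creating the token and computing the hash can also be collapsed into one step; kubeadm prints a ready-made join command:

kubeadm token create --print-join-command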

 

Finally, check that every node is Ready

[root@master01 ~]# kubectl get node

NAME       STATUS   ROLES    AGE    VERSION

master01   Ready    master   3m1s   v1.15.0

node01     Ready    <none>   72s    v1.15.0

node02     Ready    <none>   54s    v1.15.0

 

Enable IPVS

Load the IPVS kernel modules

On kernels 4.19 and later the conntrack module is nf_conntrack; before 4.19 it is nf_conntrack_ipv4; everything else stays the same.

[root@master01 ~]# uname -r

5.2.2-1.el7.elrepo.x86_64

[root@master01 ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
# modules kube-proxy needs for IPVS mode
module=(ip_vs
        ip_vs_rr
        ip_vs_wrr
        ip_vs_sh
        ip_vs_lc
        br_netfilter
        nf_conntrack)
# record each module that exists so it is loaded again on boot
for kernel_module in "${module[@]}"; do
    /sbin/modinfo -F filename "$kernel_module" |& grep -qv ERROR && echo "$kernel_module" >> /etc/modules-load.d/ipvs.conf || :
done
# load every IPVS scheduler module shipped with the running kernel
ipvs_modules_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls "$ipvs_modules_dir" | sed -r 's#(.*)\.ko.*#\1#'); do
    /sbin/modinfo -F filename "$i" &> /dev/null
    if [ $? -eq 0 ]; then
        /sbin/modprobe "$i"
    fi
done
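Make the script executable and run it once on each node; on later boots /etc/modules-load.d/ipvs.conf loads the recorded modules automatically:

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules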

 

[root@master01 ~]# lsmod | grep ip_vs

ip_vs_wlc              16384  0

ip_vs_sed              16384  0

ip_vs_pe_sip           16384  0

nf_conntrack_sip       32768  1 ip_vs_pe_sip

ip_vs_ovf              16384  0

ip_vs_nq               16384  0

ip_vs_mh               16384  0

ip_vs_lblcr            16384  0

ip_vs_lblc             16384  0

ip_vs_ftp              16384  0

nf_nat                 40960  4 ip6table_nat,iptable_nat,xt_MASQUERADE,ip_vs_ftp

ip_vs_fo               16384  0

ip_vs_dh               16384  0

ip_vs_lc               16384  0

ip_vs_sh               16384  0

ip_vs_wrr              16384  0

ip_vs_rr               16384  4

ip_vs                 151552  35 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_ovf,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_pe_sip,ip_vs_wrr,ip_vs_lc,ip_vs_mh,ip_vs_sed,ip_vs_ftp

nf_conntrack          139264  6 xt_conntrack,nf_nat,nf_conntrack_sip,nf_conntrack_netlink,xt_MASQUERADE,ip_vs

nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs

libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs

 

[root@master01 ~]# kubectl -n kube-system edit configmaps kube-proxy

Find the KubeProxyConfiguration section and set the mode:

kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"   ### just set the mode to ipvs; leave the rest unchanged

Install ipvsadm

 

yum install ipvsadm ipset sysstat conntrack libseccomp conntrack-tools  socat  -y

 

Restart kube-proxy so it picks up the new mode: delete the existing kube-proxy pods and the DaemonSet will recreate them with the IPVS configuration

 

[root@master01 ~]# kubectl -n kube-system delete pods -l k8s-app=kube-proxy

[root@master01 ~]# ipvsadm -Ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  10.96.0.1:443 rr

  -> 192.168.85.110:6443          Masq    1      2          0        

TCP  10.96.0.10:53 rr

  -> 10.244.0.3:53                Masq    1      0          0        

  -> 10.244.1.3:53                Masq    1      0          0        

TCP  10.96.0.10:9153 rr

  -> 10.244.0.3:9153              Masq    1      0          0        

  -> 10.244.1.3:9153              Masq    1      0          0        

UDP  10.96.0.10:53 rr

  -> 10.244.0.3:53                Masq    1      0          0        

  -> 10.244.1.3:53                Masq    1      0          0        
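The kube-proxy logs give a second confirmation; after the restart they should mention the IPVS proxier:

kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs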

Original article: https://blog.csdn.net/tangwei0928/article/details/93377100
