Installing a Kubernetes 1.11.2 Cluster

Deploying Kubernetes 1.11.2 on CentOS 7 with kubeadm

 
Categories: Linux, Virtualization

1. Installation Methods

1.  Traditional method: every component runs at the system level (installed via yum or rpm packages) as a system daemon.

2.  kubeadm method: the components on the master and nodes all run as pods; the Kubernetes components themselves run in containers.

  a)  master/nodes:

  install kubelet, kubeadm and docker

  b)  master: kubeadm init (initializes the cluster)

  c)  nodes: kubeadm join

2. Cluster Plan

Host          IP

k8smaster    192.168.2.19

k8snode01    192.168.2.21

k8snode02    192.168.2.5

Versions:

docker: 18.06.1.ce-3.el7

OS: CentOS release 7

kubernetes: 1.11.2

kubeadm: 1.11.2

kubectl: 1.11.2

kubelet: 1.11.2

etcdctl: 3.2.15

3. Host Setup

3.1. Disable Swap

swapoff -a

sed -i 's/.*swap.*/#&/' /etc/fstab
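The two commands above disable swap immediately and comment it out of /etc/fstab. A small sketch of the same fstab edit written as a filter, so the result can be inspected before overwriting the real file (the function name is ours):

```shell
#!/bin/sh
# Filter that comments out any fstab line mentioning "swap",
# mirroring the sed one-liner above without editing in place.
comment_swap() {
    sed 's/.*swap.*/#&/' "$@"
}

# Example: write a candidate file first, inspect it, then swap it in:
#   comment_swap /etc/fstab > /tmp/fstab.new && cat /tmp/fstab.new
```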

3.2. System Initialization

Disable SELinux and firewalld

sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld

Allow forwarding through the firewall

iptables -P FORWARD ACCEPT

Install required and commonly used tools

yum install -y epel-release vim vim-enhanced lrzsz unzip ntpdate sysstat dstat wget mlocate mtr lsof iotop bind-utils git net-tools

Other settings

# Time settings
timedatectl set-timezone Asia/Shanghai
ntpdate ntp1.aliyun.com
rm -f /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

# Timestamped shell history
systemctl restart rsyslog
echo 'export HISTTIMEFORMAT="%m/%d %T "' >> ~/.bashrc
source ~/.bashrc

# Raise file and process limits
cat >> /etc/security/limits.conf << EOF
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
EOF
echo "ulimit -SHn 65535" >> /etc/profile
echo "ulimit -SHn 65535" >> /etc/rc.local
ulimit -SHn 65535
sed -i 's/4096/10240/' /etc/security/limits.d/20-nproc.conf
modprobe ip_conntrack
modprobe br_netfilter
cat >> /etc/rc.d/rc.local <<EOF
modprobe ip_conntrack
modprobe br_netfilter
EOF
chmod 744 /etc/rc.d/rc.local

# Kernel parameters
cat <<EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
kernel.panic = 10
kernel.panic_on_oops = 1
kernel.pid_max = 4194303
vm.max_map_count = 655350
fs.aio-max-nr = 524288
fs.file-max = 6590202
EOF
sysctl -p /etc/sysctl.conf

# vim /etc/sysconfig/kubelet
# Cloud images usually ship without a swap partition; disable the kubelet's swap check
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# Run kube-proxy in ipvs mode
KUBE_PROXY_MODE=ipvs
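It is easy for one of the kernel settings above to silently not take effect (for example if br_netfilter was not loaded when sysctl -p ran). A small sketch that re-checks the bridge settings kube-proxy depends on; check_sysctl is our own helper name:

```shell
#!/bin/sh
# Verify that a sysctl key has the expected value; prints OK/BAD/MISSING.
check_sysctl() {
    key=$1; want=$2
    if ! got=$(sysctl -n "$key" 2>/dev/null); then
        echo "MISSING $key"; return 1
    fi
    if [ "$got" = "$want" ]; then
        echo "OK $key=$got"
    else
        echo "BAD $key=$got (want $want)"; return 1
    fi
}

# Usage on a prepared host:
#   check_sysctl net.bridge.bridge-nf-call-iptables 1
#   check_sysctl net.bridge.bridge-nf-call-ip6tables 1
```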

4. Configure yum Repositories on the Master

cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat > kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
EOF

wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

rpm --import yum-key.gpg
rpm --import rpm-package-key.gpg

4.1. Copy to the Nodes

This is a lab environment; in production, use a distribution tool such as SaltStack or Ansible if one is available.

1. Generate a key pair on the master:

node01:
mkdir -p /root/.ssh

node02:
mkdir -p /root/.ssh

master:
ssh-keygen -t rsa
scp .ssh/id_rsa.pub node01:/root/.ssh/authorized_keys
scp .ssh/id_rsa.pub node02:/root/.ssh/authorized_keys

2. Distribute the repo files and signing keys

scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node01:/etc/yum.repos.d/           
scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node02:/etc/yum.repos.d/
 
scp  /etc/yum.repos.d/yum-key.gpg /etc/yum.repos.d/rpm-package-key.gpg node01:/root
scp  /etc/yum.repos.d/yum-key.gpg /etc/yum.repos.d/rpm-package-key.gpg node02:/root
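With only two nodes the repeated scp lines are fine; with more nodes a loop helps. A sketch under the assumption that the node list is just node01 and node02; set RUN=echo to print the commands instead of executing them:

```shell
#!/bin/sh
# Copy files to the same destination directory on every node.
# RUN=echo prints the scp commands instead of executing them (dry run).
NODES="node01 node02"

push_to_nodes() {
    dest=$1; shift
    for n in $NODES; do
        ${RUN:-scp} "$@" "$n:$dest"
    done
}

# Dry-run demo: show what would be copied where.
RUN=echo push_to_nodes /etc/yum.repos.d/ \
    /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo
```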

4.2. Run on the Nodes

rpm --import yum-key.gpg

rpm --import rpm-package-key.gpg

5. Install Kubernetes on the Master

yum install docker-ce kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2 -y

5.1. Configure Image Sources

After installation the cluster must be initialized, which requires pulling the component images first.

Because the Google registry cannot be reached from mainland China, there are two workarounds:

1. Pull the images from another registry, then re-tag them.

A. Run the following shell script:

#!/bin/bash
images=(kube-proxy-amd64:v1.11.2 kube-scheduler-amd64:v1.11.2 kube-controller-manager-amd64:v1.11.2 kube-apiserver-amd64:v1.11.2
etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9
k8s-dns-dnsmasq-nanny-amd64:1.14.9)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName
  # drop the mirror tag, keeping only the k8s.gcr.io one
  docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
done
# kubeadm also expects the pause image under its unsuffixed name
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
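The same pull/tag/cleanup loop can also be written with the mirror prefix factored out and a dry-run switch, which makes it easier to audit before pulling anything. The k8sth mirror path is the same assumption as in the script above; DOCKER=echo prints the docker commands instead of running them:

```shell
#!/bin/sh
# Pull an image from the mirror, re-tag it under k8s.gcr.io,
# then remove the mirror tag. DOCKER=echo gives a dry run.
MIRROR=registry.cn-hangzhou.aliyuncs.com/k8sth

retag() {
    img=$1
    ${DOCKER:-docker} pull "$MIRROR/$img" && \
    ${DOCKER:-docker} tag "$MIRROR/$img" "k8s.gcr.io/$img" && \
    ${DOCKER:-docker} rmi "$MIRROR/$img"
}

# Dry-run demo over two of the required images:
for img in kube-proxy-amd64:v1.11.2 etcd-amd64:3.2.18; do
    DOCKER=echo retag "$img"
done
```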

B. Pull from the mirrorgooglecontainers mirrors on Docker Hub

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.2 k8s.gcr.io/kube-proxy-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.11.2 k8s.gcr.io/kube-scheduler-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.11.2 k8s.gcr.io/kube-apiserver-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.2 k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.18  k8s.gcr.io/etcd-amd64:3.2.18
docker tag docker.io/mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.1.3  k8s.gcr.io/coredns:1.1.3

C. Pull from Alibaba Cloud (note: the tags below are for a 1.10.0 cluster; substitute the 1.11.2 versions for this deployment)

docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1

docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

2. Use a proxy server.

vim /usr/lib/systemd/system/docker.service
Add the following under the [Service] section:
Environment="HTTPS_PROXY=$PROXY_SERVER_IP:PORT"
Environment="NO_PROXY=127.0.0.0/8,192.168.0.0/16"

5.2. Configure a Registry Mirror

Docker and many domestic cloud providers offer registry mirror (accelerator) services.

If images cannot be pulled through the configured mirror, switch to another one.

All of the major domestic cloud providers offer a Docker registry mirror; prefer the one belonging to the platform Docker runs on.

Using the Alibaba Cloud accelerator or others: https://yeasy.gitbooks.io/docker_practice/content/install/mirror.html

To configure a mirror, edit the daemon configuration file /etc/docker/daemon.json:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://registry-mirror.qiniu.com","https://zsmigm0p.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
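A malformed daemon.json prevents the Docker daemon from starting at all, so it is worth validating the file before (re)starting the service. A sketch using python3's stdlib JSON parser; the helper name is ours:

```shell
#!/bin/sh
# Return 0 if the file parses as JSON, non-zero otherwise.
validate_daemon_json() {
    python3 -m json.tool "$1" >/dev/null 2>&1
}

# Demo on a temporary copy of the config written above:
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "registry-mirrors": ["https://zsmigm0p.mirror.aliyuncs.com"]
}
EOF
validate_daemon_json "$tmp" && echo "OK: valid JSON"
rm -f "$tmp"
```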

5.3. Restart Services

systemctl daemon-reload
systemctl restart docker.service

Enable the services at boot

systemctl enable kubelet

systemctl enable docker

5.4. Initialize the Master Node

kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all

Walkthrough of the output:

[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0829 17:45:00.722283   19542 kernel_validator.go:81] Validating kernel version
I0829 17:45:00.722398   19542 kernel_validator.go:96] Validating kernel config
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
    ############ certificates ##############
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.2.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
    ############ config files ##############
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.503085 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: 0tjvur.56emvwc4k4ghwenz
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
    # CoreDNS became the official default DNS add-on in v1.11
    # earlier evolution: SkyDNS ----> kube-dns ----> CoreDNS
[addons] Applied essential addon: kube-proxy
    # runs as an add-on, self-hosted on Kubernetes; dynamically generates ipvs (or iptables) rules for Service resources. ipvs is the default from 1.11, with automatic fallback to iptables when unsupported
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube    # create the .kube directory in the home directory
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config    # admin.conf holds the credentials kubectl uses to authenticate to the cluster
  sudo chown $(id -u):$(id -g) $HOME/.kube/config    # fix the owner and group

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:
# Run the following as root on any other node to join it to the cluster.
  kubeadm join 192.168.2.19:6443 --token 0tjvur.56emvwc4k4ghwenz --discovery-token-ca-cert-hash sha256:f768ad7522e7bb5ccf9d0e0590c478b2a7161912949d1be3d5cd3f46ace7f4bf
Note:
# --discovery-token-ca-cert-hash: hash used to verify the discovered master's CA certificate
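The token and CA-cert hash from this output are needed again whenever another node joins, so it is handy to be able to pull them back out of a saved init log. A small sketch (the function name is ours):

```shell
#!/bin/sh
# Read a saved `kubeadm init` log on stdin and print "<token> <ca-cert-hash>".
extract_join_args() {
    sed -n 's/.*--token \([^ ]*\) *--discovery-token-ca-cert-hash \([^ ]*\).*/\1 \2/p'
}

# e.g. save the output with `kubeadm init ... | tee init.log`, then later:
#   extract_join_args < init.log
```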

Follow the prompt and run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config    

Deploy a network add-on on the master

  CNI (container network plugins):
    flannel        # does not support network policies
    calico
    canal
    kube-router

Note: behind a proxy server the add-on can be deployed directly; otherwise the nodes must pull the images below before it will come up.

On the nodes:

docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.2

docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag docker.io/mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.2 k8s.gcr.io/kube-proxy-amd64:v1.11.2

On the master:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

6. Install Kubernetes on the Nodes

Run the following commands on every node.

yum install docker-ce kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2 -y

6.1. Start docker and kubelet on the Nodes

On the master:

scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
scp /etc/sysconfig/kubelet node01:/etc/sysconfig/
scp /etc/sysconfig/kubelet node02:/etc/sysconfig/

On the nodes:

systemctl start docker
systemctl enable docker kubelet   

6.2. Join the Master

The token below is the one printed at the end of kubeadm init on the master:
kubeadm join 192.168.2.19:6443 --token 0tjvur.56emvwc4k4ghwenz --discovery-token-ca-cert-hash sha256:f768ad7522e7bb5ccf9d0e0590c478b2a7161912949d1be3d5cd3f46ace7f4bf

########## If the token has expired ##########

[root@master ~]# kubeadm token create

0wnvl7.kqxkle57l4adfr53

[root@node02 yum.repos.d]# kubeadm join 192.168.2.19:6443 --token 0wnvl7.kqxkle57l4adfr53 --discovery-token-unsafe-skip-ca-verification
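Skipping CA verification works but is unsafe; if the original init output is gone, the hash can instead be recomputed from the cluster CA certificate with openssl (this is the command documented for kubeadm join); prepend "sha256:" to the result:

```shell
#!/bin/sh
# Compute the discovery hash: hex SHA-256 of the CA certificate's
# DER-encoded public key.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex \
        | sed 's/^.* //'
}

# On the master:
#   echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"
```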

posted @ 2019-01-25 16:11  一米八大高个儿