[Kubernetes Study Notes 2] Installing a Kubernetes Cluster

Environment
  CentOS 7

Kubernetes can be installed in three ways: yum, binary packages, or kubeadm. This article demonstrates the kubeadm approach.

I. Preparation
1. Software versions

Software     Version
Kubernetes   v1.15.3
CentOS 7.6   CentOS Linux release 7.6.1810 (Core)
Docker       docker-ce-19.03.1-3.el7.x86_64
flannel      0.11.0


2. Cluster topology

IP               Role     Hostname / alias
192.168.118.106 master node106 k8s-master
192.168.118.107 node01 node107 k8s-node01
192.168.118.108 node02 node108 k8s-node02

The node and network planning is summarized in the table above.

3. System settings
3.1 Configure hostnames in /etc/hosts

192.168.118.106    node106 k8s-master
192.168.118.107    node107 k8s-node01
192.168.118.108    node108 k8s-node02
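
These entries need to exist on all three machines, and each machine's hostname should match its entry. A minimal sketch of applying both (the hostnamectl value shown is for the master; run it with node107/node108 on the other machines):

#set this node's hostname (example for the master)
hostnamectl set-hostname node106
#append the cluster entries to /etc/hosts
cat <<EOF >> /etc/hosts
192.168.118.106    node106 k8s-master
192.168.118.107    node107 k8s-node01
192.168.118.108    node108 k8s-node02
EOF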

3.2 Disable the firewall

[root@node106 ~]# yum install -y net-tools
#stop the firewall now
[root@node106 ~]# systemctl stop firewalld
#disable the firewall at boot
[root@node106 ~]# systemctl disable firewalld

3.3 File permissions: disable SELinux
The goal is to allow containers to access the host filesystem.

[root@node106 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
[root@node106 ~]# setenforce 0

3.4 Disable swap
Kubernetes aims to pack instances as close to 100% utilization as possible, with every deployment pinned to CPU/memory limits, so when the scheduler places a pod on a machine the pod should never need to swap.
The designers avoided swap because it slows everything down, so disabling it is mainly a performance decision. If you need to save resources, for example when running a large number of containers, you can instead add the kubelet flag --fail-swap-on=false.

[root@node106 ~]# swapoff -a
[root@node106 ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
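
If you deliberately want to keep swap enabled (not what this article does), the flag mentioned above can be passed through the drop-in file provided by the kubeadm RPM packages; a sketch, assuming kubelet has already been installed (see section 5), and note that kubeadm init/join would then also need --ignore-preflight-errors=Swap:

#only if you intentionally keep swap on
cat <<EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--fail-swap-on=false
EOF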

3.5 Configure forwarding parameters
On RHEL/CentOS 7 traffic can be routed incorrectly because iptables is bypassed, so net.bridge.bridge-nf-call-iptables must be set to 1 in the sysctl configuration.
Make sure the br_netfilter module is loaded before this step. You can check with lsmod | grep br_netfilter and load it explicitly with modprobe br_netfilter.
(1) First, check whether the br_netfilter module is loaded

[root@node106 ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter

(2) If it is not loaded, load it

[root@node106 ~]# modprobe br_netfilter
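
modprobe only loads the module for the current boot; a small sketch to make it load automatically after reboots, using systemd's modules-load mechanism:

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF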

(3) Configure net.bridge.bridge-nf-call-iptables

[root@node106 ~]# cat <<EOF >  /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@node106 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...

4. Install Docker
(1) Configure the Docker yum repository.

[root@node106 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@node106 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

#disable the docker-ce-edge (development) repository, which is not stable

[root@node106 ~]# yum-config-manager --disable docker-ce-edge
[root@node106 ~]# yum makecache fast

(2) List the Docker versions currently available in the repository

[root@node106 yum.repos.d]# yum list docker-ce.x86_64  --showduplicates |sort -r
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
 * extras: mirrors.aliyun.com
docker-ce.x86_64            3:19.03.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.0-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.8-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.7-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.6-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.4-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.3-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.3.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.2.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
 * base: mirrors.aliyun.com
Available Packages

(3) Install Docker

[root@node106 ~]# yum install docker-ce-19.03.1-3.el7 -y

(4) Configure a domestic registry mirror (accelerator)

[root@node106 ~]# mkdir -p /etc/docker
[root@node106 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://qr09dqf9.mirror.aliyuncs.com"]
}
EOF
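
The kubeadm preflight checks later in this article warn that Docker uses the cgroupfs cgroup driver while systemd is recommended. Optionally, that warning can be avoided by also setting the driver here; a sketch of the combined daemon.json (same mirror URL as above, with the exec-opts line added):

cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://qr09dqf9.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF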

(5) Start Docker

[root@node106 ~]# systemctl daemon-reload
[root@node106 ~]# systemctl enable docker
[root@node106 ~]# systemctl start docker

Verify:

[root@node106 ~]# docker -v
Docker version 19.03.1, build 74b1e89

5. Install the Kubernetes components
5.1 Configure the Aliyun Kubernetes yum repository (for users in China).

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

#refresh the yum metadata cache

[root@node106 ~]# yum makecache fast -y

#list the available kubectl, kubelet and kubeadm packages

[root@node106 ~]# yum list kubectl kubelet kubeadm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Available Packages
kubeadm.x86_64                                                                         1.15.3-0                                                                   kubernetes
kubectl.x86_64                                                                         1.15.3-0                                                                   kubernetes
kubelet.x86_64                                                                         1.15.3-0 

#install

[root@node106 ~]# yum install -y kubectl kubelet kubeadm
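
Without explicit versions, yum installs whatever is newest in the repository, which happened to be 1.15.3 when this was written. To reproduce the exact environment later, a sketch of pinning the versions:

yum install -y kubelet-1.15.3-0 kubeadm-1.15.3-0 kubectl-1.15.3-0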

Enable and start the kubelet service

[root@node106 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

6. Load the IPVS kernel modules

IPVS (IP Virtual Server) implements transport-layer (layer 4) load balancing as part of the Linux kernel. It runs on a host and acts as a load balancer in front of a cluster of real servers: it forwards TCP- and UDP-based service requests to the real servers and makes their services appear as a single virtual service on one IP address. In Kubernetes, pod load balancing is implemented by kube-proxy, which has two modes: the default iptables mode and the ipvs mode; ipvs simply performs better than iptables.
(1) Load the ipvs kernel modules so that kube-proxy on the nodes can use ipvs proxy rules.

#check whether the modules are already loaded
[root@node106 ~]# cut -f1 -d " "  /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh
ip_vs_wrr
ip_vs_rr
ip_vs
nf_conntrack_ipv4

#if they are not loaded, load them with:
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4

 

(2) Add the modprobe commands to /etc/rc.local so the modules are loaded at boot

cat <<EOF >> /etc/rc.local
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
EOF

Note that on CentOS 7, /etc/rc.d/rc.local must also be executable (chmod +x /etc/rc.d/rc.local) for it to run at boot.

(3) ipvs also requires the ipset package

[root@node106 ~]# yum install ipset ipvsadm -y
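
Loading the modules by itself does not switch kube-proxy to ipvs; after kubeadm init it still runs in the default iptables mode. A sketch of enabling ipvs mode once the cluster is up (configmap and label names as created by kubeadm):

#set mode: "ipvs" in the kube-proxy configuration
kubectl -n kube-system edit configmap kube-proxy
#recreate the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
#confirm that ipvs rules are being programmed
ipvsadm -Ln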

References:

Detailed explanation of ipvs load balancing in a k8s cluster
How to enable ipvs in Kubernetes
Kubernetes ipvs mode vs. iptables mode

II. Install the master node
1. Initialize the master node
kubeadm init --kubernetes-version=v1.15.3

(1) Problems encountered during initialization
First init attempt:

[root@node106 ~]# kubeadm init --kubernetes-version=v1.15.3
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Analysis:
Warning 1: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". — Docker's cgroup driver differs from the recommended one (see the daemon.json note in section 4).
Warning 2: [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09 — just a version warning.
Warning 3: [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
Fix: [root@node106 ~]# systemctl enable kubelet.service
Error 1: [ERROR NumCPU]: give the virtual machine more than one CPU core.

Second init attempt:

[root@node106 ~]# kubeadm init --kubernetes-version=v1.15.3
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@node106 ~]# 

Analysis:
Error 1: [ERROR ImagePull]: pulling the images fails because they come from Google's registry (k8s.gcr.io), which is unreachable here. You can pull them with docker using the versions shown in the errors, or list the required images with kubeadm config images list.

[root@node106 ~]# kubeadm config images list
W0906 11:12:52.841583   16407 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0906 11:12:52.841780   16407 version.go:99] falling back to the local client version: v1.15.3
k8s.gcr.io/kube-apiserver:v1.15.3
k8s.gcr.io/kube-controller-manager:v1.15.3
k8s.gcr.io/kube-scheduler:v1.15.3
k8s.gcr.io/kube-proxy:v1.15.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
[root@node106 ~]# 

(2) Prepare the images
The mirrorgooglecontainers organization on Docker Hub mirrors the latest k8s images, so download them from there first and then re-tag them.
#pull the images

[root@node106 ~]# kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#mirrorgooglecontainers#g' |sh -x && docker pull coredns/coredns:1.3.1

#re-tag the images with their k8s.gcr.io names

[root@node106 ~]# docker images |grep mirrorgooglecontainers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2' |sh -x && docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

#remove the no-longer-needed mirror tags

[root@node106 ~]# docker images | grep mirrorgooglecontainers | awk '{print "docker rmi " $1":"$2}' | sh -x && docker rmi coredns/coredns:1.3.1

The result:

[root@node106 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.15.3             232b5c793146        2 weeks ago         82.4MB
k8s.gcr.io/kube-apiserver            v1.15.3             5eb2d3fc7a44        2 weeks ago         207MB
k8s.gcr.io/kube-controller-manager   v1.15.3             e77c31de5547        2 weeks ago         159MB
k8s.gcr.io/kube-scheduler            v1.15.3             703f9c69a5d5        2 weeks ago         81.1MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        7 months ago        40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        9 months ago        258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        20 months ago       742kB
[root@node106 ~]# 

(3) Initialize

Because the flannel network plugin will be installed later, add the parameter --pod-network-cidr=10.244.0.0/16 here. 10.244.0.0/16 is the subnet that the flannel manifest uses by default; the correct value depends on which network plugin you plan to install.
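
As a side note, the same options can also be expressed as a kubeadm configuration file; a sketch, where the imageRepository line is an assumption (the Aliyun mirror) that would have avoided the manual image pulling above:

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
EOF
#kubeadm init --config kubeadm-config.yaml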

[root@node106 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=1.15.3
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node106 localhost] and IPs [192.168.118.106 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node106 localhost] and IPs [192.168.118.106 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node106 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.118.106]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.007081 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node106 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node106 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: unqj7v.wr7yvcj8i7wan93g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.118.106:6443 --token unqj7v.wr7yvcj8i7wan93g \
    --discovery-token-ca-cert-hash sha256:011f55be71445e7031ac7a582afc7a4350cdf6d8ae8bef790d2517634d93f337

Follow-up steps:

[root@node106 ~]# mkdir -p $HOME/.kube
[root@node106 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node106 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

By default kubectl reads its configuration from $HOME/.kube/config; if you skip this step, kubectl commands will fail.
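
For root, a quicker (but non-persistent, per-shell) alternative is to point KUBECONFIG at admin.conf directly; a sketch:

export KUBECONFIG=/etc/kubernetes/admin.conf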

2. Configure the pod network

Flannel is an overlay network tool designed by the CoreOS team for Kubernetes. Its goal is to give every host running Kubernetes a complete subnet of its own.
Flannel provides a virtual network for containers by assigning a subnet to each host. It is based on Linux TUN/TAP, encapsulates IP packets in UDP to build the overlay network, and relies on etcd to track how the network is allocated.

#download the flannel manifest

[root@node106 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@node106 ~]# ll
total 20
-rw-------. 1 root root  1779 Aug 15 14:39 anaconda-ks.cfg
-rw-r--r--  1 root root 12487 Sep  6 16:42 kube-flannel.yml

#apply kube-flannel.yml

[root@node106 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

#check the status of the master components

[root@node106 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-dwjfs          1/1     Running   0          3h57m
kube-system   coredns-5c98db65d4-xxdr2          1/1     Running   0          3h57m
kube-system   etcd-node106                      1/1     Running   0          3h56m
kube-system   kube-apiserver-node106            1/1     Running   0          3h56m
kube-system   kube-controller-manager-node106   1/1     Running   0          3h56m
kube-system   kube-flannel-ds-amd64-srdxz       1/1     Running   0          2m32s
kube-system   kube-proxy-8mxmm                  1/1     Running   0          3h57m
kube-system   kube-scheduler-node106            1/1     Running   0          3h56m

If any pod is not in the Running state, something went wrong; troubleshoot with the following commands.
Describe the pod:

[root@node106 ~]# kubectl describe pod kube-scheduler-node106 -n kube-system

Check the logs:

[root@node106 ~]# kubectl logs kube-scheduler-node106 -n kube-system
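
A few other commands that often help when a pod is stuck; this is a generic sketch, not output captured from this cluster:

#see which node each kube-system pod landed on
kubectl get pods -n kube-system -o wide
#recent cluster events, oldest first
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp
#kubelet logs on the node that runs the problem pod
journalctl -u kubelet -f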

Reference: Flannel installation and deployment

III. Install the worker nodes

1. Download the required images
The node107 and node108 nodes only need the kube-proxy and pause images (a pre-pull sketch follows the listing below).

[root@node107 ~]# docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy   v1.15.3             232b5c793146        2 weeks ago         82.4MB
k8s.gcr.io/pause        3.1                 da86e6ba6ca1        20 months ago       742kB
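
One way to pre-pull these two images on each worker node is to reuse the mirrorgooglecontainers approach from the master; a sketch, assuming the same image names and tags as listed above:

docker pull mirrorgooglecontainers/kube-proxy:v1.15.3
docker pull mirrorgooglecontainers/pause:3.1
docker tag mirrorgooglecontainers/kube-proxy:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.15.3 mirrorgooglecontainers/pause:3.1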

2. Add the nodes
When the master was initialized successfully, the output ended with a kubeadm join command; that command is what adds nodes to the cluster.
Run it on node107 and node108:

[root@node107 ~]# kubeadm join 192.168.118.106:6443 --token unqj7v.wr7yvcj8i7wan93g \
>     --discovery-token-ca-cert-hash sha256:011f55be71445e7031ac7a582afc7a4350cdf6d8ae8bef790d2517634d93f337
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Tip: if the join command reports that the token has expired, run kubeadm token create on the master to generate a new one, as the message suggests. If you have forgotten the token, list the existing ones with kubeadm token list.
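
A sketch of regenerating the complete join command (a new token plus the current CA certificate hash) on the master:

kubeadm token create --print-join-command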

IV. Verify the cluster
1. Node status

[root@node106 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
node106   Ready    master   4h53m   v1.15.3
node107   Ready    <none>   101s    v1.15.3
node108   Ready    <none>   82s     v1.15.3

2. Component status

[root@node106 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   

3. Service accounts

[root@node106 ~]# kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         5h1m

4. Cluster info

[root@node106 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.118.106:6443
KubeDNS is running at https://192.168.118.106:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

5. Verify DNS

[root@node106 ~]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-6bf6db5c4f-dn65h:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

V. Verify with an example
Create an nginx service to check that the cluster is actually usable.

(1) Create and run a deployment

[root@node106 ~]# kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=nginx  --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created

(2) Expose the service as a NodePort

[root@node106 ~]# kubectl expose deployment nginx --type=NodePort --name=example-service
service/example-service exposed
#view the service details
[root@node106 ~]# kubectl describe service example-service
Name:                     example-service
Namespace:                default
Labels:                   run=load-balancer-example
Annotations:              <none>
Selector:                 run=load-balancer-example
Type:                     NodePort
IP:                       10.108.73.249
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32168/TCP
Endpoints:                10.244.1.4:80,10.244.2.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

#check the service status
[root@node106 ~]# kubectl get service
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
example-service   NodePort    10.108.73.249   <none>        80:32168/TCP   91s
kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP        44h
[root@node106 ~]# 

#view the pods
An application's configuration and current state are stored in etcd; when you run kubectl get pods, the API Server reads this data from etcd.
[root@node106 ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
curl-6bf6db5c4f-dn65h    1/1     Running   2          39h
nginx-5c47ff5dd6-hjxq8   1/1     Running   0          3m10s
nginx-5c47ff5dd6-qj9k2   1/1     Running   0          3m10s

(3) Access the service IP

[root@node106 ~]# curl 10.108.73.249:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Accessing the endpoints directly returns the same page as accessing the service IP. These IPs are only reachable from containers and nodes inside the Kubernetes cluster. There is a mapping between the endpoints and the service: the service load-balances across its backend endpoints, and this is implemented with iptables rules.

[root@node106 ~]# curl 10.244.1.4:80
[root@node106 ~]# curl 10.244.2.2:80

Accessing a node IP on the NodePort returns the same page as the cluster IP, and it is reachable from outside the cluster.

[root@node106 ~]# curl 192.168.118.107:32168
[root@node106 ~]# curl 192.168.118.108:32168

The whole deployment flow works like this:
① kubectl sends the deployment request to the API Server.
② The API Server notifies the Controller Manager to create a Deployment resource.
③ The Scheduler performs scheduling and assigns the two replica Pods to node01 and node02.
④ The kubelet on node01 and node02 creates and runs the Pods on its own node.
flannel assigns an IP to each Pod.
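
When you are done testing, the example resources can be removed; a small sketch:

kubectl delete service example-service
kubectl delete deployment nginx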

 

References:
Installing Kubernetes with yum
Installing Kubernetes from binaries
Installing Kubernetes with kubeadm
A step-by-step guide to building a Kubernetes cluster on CentOS
Official documentation: Installing kubeadm
Installation process and caveats for the latest Kubernetes version
Installing a Kubernetes 1.13 cluster with kubeadm
