Kubernetes installation and deployment with kubeadm

Introduction to kubeadm

kubeadm is the cluster-bootstrapping tool that ships with the Kubernetes project. It performs the basic steps needed to build a minimal viable cluster and bring it up, and it serves as a full-lifecycle management tool for Kubernetes clusters: deployment, upgrade, downgrade, and teardown. A cluster deployed with kubeadm runs most components as pods; for example kube-proxy, kube-controller-manager, kube-scheduler, kube-apiserver, and flannel all run as pods.

kubeadm only cares about initializing and starting the cluster. Everything else, such as installing the Kubernetes Dashboard, monitoring systems, logging systems, and other necessary add-ons, is outside its scope and must be deployed by the administrator.

kubeadm integrates tools such as kubeadm init and kubeadm join. kubeadm init quickly initializes a cluster; its core job is deploying the components of the master node. kubeadm join quickly joins a node to an existing cluster. Together they form the "fast path" best practice for creating a Kubernetes cluster. In addition, kubeadm token manages the authentication tokens used for joining the cluster after it is built, and kubeadm reset removes the files generated during cluster construction and resets the host to its initial state.
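As a rough illustration only (the placeholder values in angle brackets are not from this article), the lifecycle with these subcommands looks like:

kubeadm init --config kubeadm-config.yaml --upload-certs
kubeadm token list
kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
kubeadm reset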

kubeadm project repository

kubeadm official documentation

Pre-installation planning

Software versions

Software        Version
centos          7.6
kubeadm         1.15.6
kubelet         1.15.6
kubectl         1.15.6
coredns         1.3.1
dashboard       2.0.0
nginx-ingress   1.6.0
docker          18.xxx
etcd            3.3.18
flannel         0.11

Deployment environment

Role         IP             Components
slb          192.168.5.200  openresty
m1 + etcd1   192.168.5.3    kube-apiserver,kube-controller-manager,kube-scheduler,etcd
m2 + etcd2   192.168.5.4    kube-apiserver,kube-controller-manager,kube-scheduler,etcd
m3 + etcd3   192.168.5.7    kube-apiserver,kube-controller-manager,kube-scheduler,etcd
n1           192.168.5.5    kubelet,kube-proxy,docker,flannel
n2           192.168.5.6    kubelet,kube-proxy,docker,flannel

Set hostnames

Set each server's hostname according to its role in the table above (run the matching command on each host):

hostnamectl set-hostname slb
hostnamectl set-hostname m1
hostnamectl set-hostname m2
hostnamectl set-hostname m3
hostnamectl set-hostname n1
hostnamectl set-hostname n2

/etc/hosts configuration on each server

Identical on all servers except slb.

cat /etc/hosts
192.168.5.3 m1
192.168.5.4 m2
192.168.5.7 m3
192.168.5.5 n1
192.168.5.6 n2

Disable swap

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

Disable the firewall and SELinux

Apply on all server nodes.

systemctl stop firewalld.service
systemctl disable firewalld.service

Reboot the server after changing the SELinux configuration.

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

net.bridge and IP forwarding settings

Add the following to /etc/sysctl.conf on all servers except slb.

cat /etc/sysctl.conf
# Enable IP forwarding and add all four lines below
net.ipv4.ip_forward = 1
# Pass bridged traffic to the iptables/ip6tables/arptables chains
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1

Run sysctl -p to apply the settings.
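If the net.bridge keys are rejected, the br_netfilter kernel module is probably not loaded yet; a quick check (a sketch, not part of the original steps):

modprobe br_netfilter
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables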

Points to watch during cluster installation

These issues came up and were summarized during the installation. Keep them in mind while installing the k8s cluster and you will learn more.
Look things up yourself as you go; not every step here is written in full detail, so consult the relevant documentation where needed.

Handling cluster hostnames

1. Option 1
Write all entries into /etc/hosts on every machine.
Drawback: with many servers in the cluster, every machine's file must be updated each time a node is added.

2. Option 2
Run your own DNS service (strongly recommended); each change only needs a single record added in the DNS service.

etcd cluster installation modes

Only one of the two modes can be used; they are described in detail below.
1. External mode
2. Stacked (internal) mode

Certificate expiration time

There are two approaches; only one can be used.
1. Generate your own certificates with a custom expiration time.
2. Recompile kubeadm after changing the validity period it uses when generating certificates.

kubelet and docker must use the same cgroup driver (systemd)

By default docker uses the cgroupfs driver while kubelet is configured here to use systemd; the two must match.
systemd is the preferred choice.
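A quick way to confirm the two drivers match (a sketch; run it after docker and kubelet have been installed as described below):

docker info 2>/dev/null | grep -i 'cgroup driver'
grep cgroup-driver /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf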

Subnet in the flannel YAML file

The subnet in the flannel YAML file must match the pod subnet configured in the kubeadm YAML file.

Permissions in the official dashboard YAML file

The default permissions in the official dashboard YAML file are insufficient and need to be modified.

Installation

This installation uses the external etcd cluster mode. Because of limited hardware the number of VMs is limited, so although etcd is "external", it still runs on the same hosts as the masters. In production it is best to put etcd and the masters on separate servers.
When the external etcd cluster is not on the same servers as the masters, the etcd client certificates must be copied to the masters beforehand, into the locations referenced by the etcd certificate fields in the kubeadm-config.yaml file below, before initializing the first master or joining additional masters.

etcd cluster planning

When installing a k8s cluster with kubeadm, etcd can be installed in two ways: a stacked (internal) mode and an external cluster mode. The two differ significantly.

Stacked (internal) mode

With kubeadm init, etcd is installed by default and runs on the same server as the master. The advantage is convenience: etcd is installed automatically at initialization time. The drawbacks: if a master goes down, its etcd instance becomes unavailable as well, and when the cluster handles many resource changes the master has to serve both node connections and etcd traffic, increasing its load. The stacked mode also has one particularly large drawback: the etcd started by kubeadm init on the first server is a standalone instance, and to form a cluster with the etcd instances on masters added later via kubeadm join --control-plane, the etcd static pod YAML files still have to be edited by hand. If the other masters are instead each set up with their own kubeadm init, their etcd instances never form a cluster at all; each is a standalone single node.

Architecture diagram

External mode

Install the etcd cluster manually yourself; as long as the k8s cluster can reach etcd, it works.
This mode avoids the drawbacks of the stacked mode. Its drawback is that the etcd cluster must be deployed separately,
but with ansible or a good installation script this is still quick and easy.

Architecture diagram

Install docker

Install docker on every server except slb.
Docker's default cgroup driver is cgroupfs; change it to systemd.

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce-18.09.9-3* docker-ce-cli-18.09.9-3*
mkdir /etc/docker/
cat >> /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://slwwbaoq.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl enable docker
systemctl start docker

Install ipvsadm

Install ipvsadm on every server except slb.
kube-proxy uses IPVS as the load balancer for service forwarding.

yum install ipvsadm -y
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Make the module script executable, load the modules, and verify that the IPVS modules are enabled:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
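Once kube-proxy is running in IPVS mode (after the cluster is initialized later in this guide), the virtual servers it programs can be listed for verification; this is only a hint, not one of the original steps:

ipvsadm -Ln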

Install etcd

These steps are performed on etcd1. The certificates can also be generated on another server and copied to the right servers afterwards.
Directories for certificates, configuration files, and binaries:

mkdir -p /etc/etcd/{bin,cfg,ssl}

Use cfssl to generate the self-signed certificates. Download the cfssl tools first:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl

mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Generate certificates

Create a temporary directory for the certificate request files:

mkdir /k8s-tmp

Create the following three files under /k8s-tmp (choose whatever directory suits your environment):

cat /k8s-tmp/ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

cat /k8s-tmp/ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

cat /k8s-tmp/server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "192.168.5.3",
    "192.168.5.5",
    "192.168.5.7",
    "192.168.5.200",
     "192.168.5.8",
     "192.168.5.9"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

Generate the CA certificate:

cd /etc/etcd/ssl/
cfssl gencert -initca /k8s-tmp/ca-csr.json | cfssljson -bare ca
2020/01/10 22:08:08 [INFO] generating a new CA key and certificate from CSR
2020/01/10 22:08:08 [INFO] generate received request
2020/01/10 22:08:08 [INFO] received CSR
2020/01/10 22:08:08 [INFO] generating key: rsa-2048
2020/01/10 22:08:08 [INFO] encoded CSR
2020/01/10 22:08:08 [INFO] signed certificate with serial number 490053314682424709503949261482590717907168955991

Generate the server certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=/k8s-tmp/ca-config.json -profile=www /k8s-tmp/server-csr.json | cfssljson -bare server
2020/01/10 22:11:21 [INFO] generate received request
2020/01/10 22:11:21 [INFO] received CSR
2020/01/10 22:11:21 [INFO] generating key: rsa-2048
2020/01/10 22:11:21 [INFO] encoded CSR
2020/01/10 22:11:21 [INFO] signed certificate with serial number 308419828069657306052544507320294995575828716921
2020/01/10 22:11:21 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Generated files:

ls /etc/etcd/ssl/*
ca-key.pem  ca.pem  server-key.pem  server.pem
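The same CA and server certificates are needed on etcd2 and etcd3, and later on the masters (see the etcd certificate paths in kubeadm-config.yaml below). A minimal copy step, assuming the same directory layout on the peers:

ssh m2 "mkdir -p /etc/etcd/ssl" && scp /etc/etcd/ssl/*.pem m2:/etc/etcd/ssl/
ssh m3 "mkdir -p /etc/etcd/ssl" && scp /etc/etcd/ssl/*.pem m3:/etc/etcd/ssl/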

Install the etcd binaries

Binary package download: https://github.com/coreos/etcd/releases/tag/v3.3.18

Download and extract the binary package:

wget https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz

tar zxvf etcd-v3.3.18-linux-amd64.tar.gz

[root@k8s-master1 ssl]# mv etcd-v3.3.18-linux-amd64/{etcd,etcdctl} /etc/etcd/bin/

Create the data directory:

mkdir -p /data/etcd-data

Configuration file:

 cat /etc/etcd/cfg/etcd.conf 
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data/etcd-data"
ETCD_LISTEN_PEER_URLS="https://192.168.5.3:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.5.3:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.5.3:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.5.3:2379"
ETCD_INITIAL_CLUSTER="etcd02=https://192.168.5.4:2380,etcd03=https://192.168.5.7:2380,etcd01=https://192.168.5.3:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

  • ETCD_NAME: node name
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) communication
  • ETCD_LISTEN_CLIENT_URLS: listen address for client access
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
  • ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
  • ETCD_INITIAL_CLUSTER: addresses of all cluster members
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing cluster

Manage etcd with systemd:

cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/etc/etcd/cfg/etcd.conf
ExecStart=/etc/etcd/bin/etcd --name=${ETCD_NAME} --data-dir=${ETCD_DATA_DIR} --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} --initial-cluster=${ETCD_INITIAL_CLUSTER} --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} --initial-cluster-state=new --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --peer-cert-file=/etc/etcd/ssl/server.pem --peer-key-file=/etc/etcd/ssl/server-key.pem --trusted-ca-file=/etc/etcd/ssl/ca.pem --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Start etcd and enable it at boot:

systemctl start etcd

systemctl enable etcd

On etcd2 and etcd3 the steps are exactly the same as on etcd1. The only differences are the following fields in the etcd configuration file, which must be changed for each node:
ETCD_NAME, ETCD_LISTEN_PEER_URLS, ETCD_LISTEN_CLIENT_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS, ETCD_ADVERTISE_CLIENT_URLS
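For example, on etcd2 (192.168.5.4) the configuration would look like this sketch, with only those fields changed:

cat /etc/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/data/etcd-data"
ETCD_LISTEN_PEER_URLS="https://192.168.5.4:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.5.4:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.5.4:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.5.4:2379"
ETCD_INITIAL_CLUSTER="etcd02=https://192.168.5.4:2380,etcd03=https://192.168.5.7:2380,etcd01=https://192.168.5.3:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"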

After all nodes are deployed, check the etcd cluster health:

/etc/etcd/bin/etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.5.3:2379,https://192.168.5.4:2379,https://192.168.5.7:2379" cluster-health

member 24586baafb4ab4b8 is healthy: got healthy result from https://192.168.5.7:2379
member 90b0b3dde8b183f1 is healthy: got healthy result from https://192.168.5.3:2379
member 94c0f494655271a4 is healthy: got healthy result from https://192.168.5.4:2379
cluster is healthy

If you see output like the above, the cluster was deployed successfully. If there is a problem, check the logs first: /var/log/messages or journalctl -u etcd

Install openresty

Install the dependencies:

yum -y install pcre-devel openssl-devel gcc curl postgresql-devel

Download the source package:

cd /usr/src/
wget https://openresty.org/download/openresty-1.15.8.2.tar.gz

Compile and install:

tar xf /usr/src/openresty-1.15.8.2.tar.gz
cd /usr/src/openresty-1.15.8.2/
./configure  --with-luajit --without-http_redis2_module --with-http_iconv_module --with-http_postgres_module
make && make install

ln -s /usr/local/openresty/nginx/sbin/nginx /usr/bin/nginx

Configure TCP (stream) load balancing:

cat /usr/local/openresty/nginx/conf/nginx.conf

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

stream {
        server {
        listen 6443;
        proxy_pass kubeadm;
}
include servers/*;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    include servers/*;
    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }

}
cat /usr/local/openresty/nginx/conf/servers/k8s.com
upstream kubeadm {
        server 192.168.5.3:6443 weight=10 max_fails=30 fail_timeout=10s;
        server 192.168.5.4:6443 weight=10 max_fails=30 fail_timeout=10s;
        server 192.168.5.7:6443 weight=10 max_fails=30 fail_timeout=10s;
}

The most important part of the configuration is:

stream {
        server {
        listen 6443;
        proxy_pass kubeadm;
}
include servers/*;
}

Start openresty:

nginx

Check the listening state:

netstat -anptl | grep 6443
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      7707/nginx: master
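After the first master has been initialized (later in this guide), it is worth confirming that the load balancer actually forwards to an apiserver. One simple check (note: depending on RBAC settings /healthz may require authentication, but getting any TLS response at all already shows the forwarding works):

curl -k https://192.168.5.200:6443/healthz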

Install kubeadm, kubectl, and kubelet

Install them on all five k8s servers (every server except slb).
Configure the yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install:

yum install -y kubeadm-1.15.6 kubectl-1.15.6 kubelet-1.15.6

Replacing kubeadm

Perform this on m1: build a new kubeadm and use it to replace the old one on every server.
The packaged kubeadm issues cluster certificates that are valid for only one year,
so they must be regenerated every year, and regenerating certificates can easily cause cluster downtime. To avoid this, change the certificate validity in the source code and rebuild kubeadm.

Install the Go environment:

cd /usr/src
wget https://dl.google.com/go/go1.12.14.linux-amd64.tar.gz
tar -zxf /usr/src/go1.12.14.linux-amd64.tar.gz
mv go /usr/local/

echo "export PATH=$PATH:/usr/local/go/bin" >>/etc/profile
source /etc/profile

Rebuild kubeadm

Download the source code and upload it to /usr/src on 192.168.5.3.
Download: https://github.com/kubernetes/kubernetes/releases/tag/v1.15.6

cd /usr/src && tar xf kubernetes-1.15.6.tar.gz

For versions 1.14 and later, change
CertificateValidity = time.Hour * 24 * 365 to CertificateValidity = time.Hour * 24 * 365 * 10

cat /usr/src/kubernetes-1.15.6/cmd/kubeadm/app/constants/constants.go
const (
        // KubernetesDir is the directory Kubernetes owns for storing various configuration files
        KubernetesDir = "/etc/kubernetes"
        // ManifestsSubDirName defines directory name to store manifests
        ManifestsSubDirName = "manifests"
        // TempDirForKubeadm defines temporary directory for kubeadm
        // should be joined with KubernetesDir.
        TempDirForKubeadm = "tmp"

        // CertificateValidity defines the validity for all the signed certificates generated by kubeadm
        //CertificateValidity = time.Hour * 24 * 365
        CertificateValidity = time.Hour * 24 * 365 * 10

After the change, build kubeadm:

cd /usr/src/kubernetes-1.15.6/ && make WHAT=cmd/kubeadm GOFLAGS=-v

Back up the original kubeadm and replace it with the newly built binary (do this on every server):

cp /usr/bin/kubeadm /usr/bin/kubeadm.origin
cp /usr/src/kubernetes-1.15.6/_output/bin/kubeadm /usr/bin/kubeadm
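A quick sanity check that the binary in PATH is the rebuilt one (just a suggestion, not part of the original steps):

kubeadm version
ls -l /usr/bin/kubeadm /usr/bin/kubeadm.origin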

Note: if you are not sure whether the rebuilt kubeadm issues certificates with the modified validity period, run kubeadm alpha certs check-expiration after the first master has been initialized and check the certificate dates. (With an external etcd cluster this command cannot be used, because the etcd certificates were generated manually and the check fails with an error that the etcd certificates cannot be found.)

Example output of kubeadm alpha certs check-expiration:
compare the EXPIRES column with the date the master was initialized.

kubeadm alpha certs check-expiration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
admin.conf                 Dec 27, 2029 15:47 UTC   9y              no
apiserver                  Dec 27, 2029 15:47 UTC   9y              no
apiserver-etcd-client      Dec 27, 2029 15:47 UTC   9y              no
apiserver-kubelet-client   Dec 27, 2029 15:47 UTC   9y              no
controller-manager.conf    Dec 27, 2029 15:47 UTC   9y              no
etcd-healthcheck-client    Dec 27, 2029 15:47 UTC   9y              no
etcd-peer                  Dec 27, 2029 15:47 UTC   9y              no
etcd-server                Dec 27, 2029 15:47 UTC   9y              no
front-proxy-client         Dec 27, 2029 15:47 UTC   9y              no
scheduler.conf             Dec 27, 2029 15:47 UTC   9y              no

kubelet configuration adjustment

Add Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" to the kubelet drop-in file and reference $KUBELET_CGROUP_ARGS in ExecStart, as shown below.
This sets kubelet's cgroup driver to match docker's.

cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS

Enable kubelet at boot and start it:

systemctl enable kubelet
systemctl start kubelet

Before the cluster is initialized, kubelet starts in an error state; it recovers automatically once the cluster is initialized or the node has joined the cluster.

Install and initialize m1

Create kubeadm-config.yaml.
Configuration file reference: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2

cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- token: "783bde.3f89s0fje9f38fhf"
  description: "another bootstrap token"
  ttl: "0s"
  usages:
  - authentication
  - signing
  groups:
  - system:bootstrappers:kubeadm:default-node-token
localAPIEndpoint:
  advertiseAddress: "192.168.5.3"
  bindPort: 6443
certificateKey: "e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
    - https://192.168.5.3:2379
    - https://192.168.5.4:2379
    - https://192.168.5.7:2379
    caFile: /etc/etcd/ssl/ca.pem
    certFile: /etc/etcd/ssl/server.pem
    keyFile: /etc/etcd/ssl/server-key.pem
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.50.0.0/16"
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.15.6"
controlPlaneEndpoint: "192.168.5.200:6443"
apiServer:
  certSANs:
  - "192.168.5.3"
  - "192.168.5.4"
  - "192.168.5.7"
  - "192.168.5.200"
  - "192.168.5.10"
  - "192.168.5.11"
  timeoutForControlPlane: 4m0s
certificatesDir: "/etc/kubernetes/pki"
imageRepository: registry.aliyuncs.com/google_containers
useHyperKubeImage: false
clusterName: kubernetes
dns:
  type: CoreDNS
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Important fields explained

  • bootstrapTokens.token
    The token used when worker nodes and additional masters join the cluster (for the certificate bootstrap). It can be generated with kubeadm token generate. Setting it here means later joins use this custom value instead of one generated automatically during initialization. The field can also be omitted; if it is not customized, a token is generated automatically when the master is initialized.

  • bootstrapTokens.ttl
    Validity period of the token; the default is 1 day.
    A value of 0s means the token never expires.

  • localAPIEndpoint.advertiseAddress
    IP address the API server binds to.

  • localAPIEndpoint.bindPort
    Port the API server binds to.

  • etcd
    The configuration needed to use an external, self-managed etcd cluster: the etcd endpoints and the certificate settings.

  • networking.serviceSubnet
    Service subnet.

  • networking.podSubnet
    Pod subnet; the flannel configuration installed later must use the same value.

  • networking.dnsDomain
    The root of the cluster DNS domain; if this is changed, the corresponding kubelet configuration parameter must also be changed.

  • kubernetesVersion
    The Kubernetes version to install.

  • controlPlaneEndpoint
    The control-plane endpoint; it can be a domain name or an IP address. For an HA API server, point it at the SLB address.

  • apiServer.certSANs
    Addresses that should be included as SANs in the API server certificate.

  • certificatesDir
    Where certificates are stored; the default is /etc/kubernetes/pki.

  • imageRepository
    The image registry. The default k8s.gcr.io is unreachable without a proxy, so it is changed to the Aliyun mirror registry.aliyuncs.com/google_containers.

  • dns.type
    The DNS add-on to install; the default is CoreDNS.

  • featureGates.SupportIPVSProxyMode
    Enables IPVS support in kube-proxy.

  • mode
    kube-proxy uses IPVS mode for service load balancing.
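Optionally, before running the init, the required images can be pre-pulled with the same configuration file to surface registry problems early (not part of the original steps):

kubeadm config images pull --config /k8s/kubeadm-config.yaml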

Initialize m1:

kubeadm init --config /k8s/kubeadm-config.yaml --upload-certs

Wait for output like the following, which means the initialization finished.

Set up the kubectl configuration.
The init output already prints these instructions. Masters and nodes joined later with kubeadm join can also use kubectl after running the same steps:

mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config

Inspect the cluster with kubectl.
Right after initialization the coredns pods are Pending because no CNI network plugin is installed yet.

[root@m1 k8s]# kubectl get pod -n kube-system
NAME                          READY   STATUS     RESTARTS   AGE
coredns-bccdc95cf-h5jkz       0/1     Pending    0          5m
coredns-bccdc95cf-vcgbq       0/1     Pending    0          5m
kube-apiserver-m1             1/1     Running    0          5m
kube-controller-manager-m1    1/1     Running    0          5m
kube-proxy-72cdg              1/1     Running    0          5m
kube-scheduler-m1             1/1     Running    0          5m

Install the flannel network plugin

flannel documentation:
https://github.com/coreos/flannel

Configure the flannel YAML file.
Modify the section below; the Network value must match the podSubnet configured in kubeadm-config.yaml when the cluster was initialized.

net-conf.json: |
    {
      "Network": "10.50.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

Install flannel.
The flannel pods are created by a DaemonSet, so every master and node that joins the cluster later
will also run a flannel pod.

kubectl apply -f /k8s/kube-flannel.yml

After a successful installation the coredns pods switch to Running, which means everything is fine.

kubectl get pod -n kube-system
NAME                          READY   STATUS     RESTARTS   AGE
coredns-bccdc95cf-h5jkz       1/1     Running    0          5m
coredns-bccdc95cf-vcgbq       1/1     Running    0          5m
kube-apiserver-m1             1/1     Running    0          5m
kube-flannel-ds-amd64-67tvq   1/1     Running    0          5m
kube-controller-manager-m1    1/1     Running    0          5m
kube-proxy-72cdg              1/1     Running    0          5m
kube-scheduler-m1             1/1     Running    0          5m

Add the other master nodes

Run the following command on each additional master to join it to the cluster; the init output prints the exact command.

kubeadm join 192.168.5.200:6443 --token 783bde.3f89s0fje9f38fhf \
    --discovery-token-ca-cert-hash sha256:5dd8f46b1e107e863d3d905411b591573cb65015e2c80386362599b81db09ef7 \
    --control-plane --certificate-key e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204
  • token
    Set in the kubeadm-config.yaml file.
  • --discovery-token-ca-cert-hash
    Generated automatically during initialization.
  • --certificate-key
    Set in the kubeadm-config.yaml file. It is used to automatically download the control-plane certificates from the initialized m1 and corresponds to the --upload-certs flag used with init. --certificate-key is only used when joining a master (see the note after this list if the uploaded certificates have expired).
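If the uploaded control-plane certificates have expired (kubeadm removes them from the cluster after roughly two hours), they can be re-uploaded from m1 and a fresh --certificate-key printed; a sketch assuming the kubeadm 1.15 flag names:

kubeadm init phase upload-certs --upload-certs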

Join worker nodes to the cluster

Run the following command on each worker node; the init output prints it.

kubeadm join 192.168.5.200:6443 --token 783bde.3f89s0fje9f38fhf --discovery-token-ca-cert-hash sha256:5dd8f46b1e107e863d3d905411b591573cb65015e2c80386362599b81db09ef7

View the joined masters and nodes

Use the kubectl get node command:

kubectl get node
NAME   STATUS   ROLES    AGE     VERSION
m1     Ready    master   7d22h   v1.15.6
m2     Ready    master   7d22h   v1.15.6
m3     Ready    master   7d22h   v1.15.6
n1     Ready    <none>   7d22h   v1.15.6
n2     Ready    <none>   7d22h   v1.15.6

If you no longer have the commands above, how do you look up the token and discovery-token-ca-cert-hash?

If you do not have a token, list the existing tokens:

kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                           EXTRA GROUPS
783bde.3f89s0fje9f38fhf   <forever>   <never>                     authentication,signing   another bootstrap token                               system:bootstrappers:kubeadm:default-node-token

By default, tokens expire after 24 hours. If you join a node after the current token has expired, create a new token on a master node:

kubeadm token create
ih6qhw.tbkp26l64xivcca7

If you do not have the discovery-token-ca-cert-hash, obtain it with the command below. A --discovery-token-ca-cert-hash value can be reused with multiple tokens.

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
0a737675e1e37aa4025077b27ced8053fe84c363df11c506bfb512b88408697e
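Alternatively, a complete worker join command (token plus hash) can be printed in one step:

kubeadm token create --print-join-command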

Install the dashboard

The dashboard is the web management UI that accompanies k8s; it can manage most features of a k8s cluster.
By default the dashboard is installed with HTTPS and token authentication; see https://github.com/kubernetes/dashboard for details. This installation uses the dashboard's HTTP mode instead. The default dashboard image also listens on HTTP port 9090; see https://github.com/kubernetes/dashboard/blob/master/aio/Dockerfile

Configure the dashboard YAML file.

Make the following changes to the file.
Comment out the ClusterRole from the default file:

#kind: ClusterRole
#apiVersion: rbac.authorization.k8s.io/v1
#metadata:
 # labels:
  #  k8s-app: kubernetes-dashboard
  #name: kubernetes-dashboard
#rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
 # - apiGroups: ["metrics.k8s.io"]
  #  resources: ["pods", "nodes"]
  #  verbs: ["get", "list", "watch"]

In the ClusterRoleBinding, bind the cluster's built-in cluster-admin ClusterRole instead, because the ClusterRole defined in the default file does not grant enough permissions:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

Add to the container's ports:

- containerPort: 9090
  protocol: TCP

Add to the liveness probe:

httpGet:
  scheme: HTTP
  path: /
  port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30

Modify the Service: change its type to NodePort and add a mapping for port 9090:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      name: https
    - port: 9090
      name: http
      targetPort: 9090
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

The complete dashboard YAML file:

cat dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      name: https
    - port: 9090
      name: http
      targetPort: 9090
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

#kind: ClusterRole
#apiVersion: rbac.authorization.k8s.io/v1
#metadata:
 # labels:
  #  k8s-app: kubernetes-dashboard
  #name: kubernetes-dashboard
#rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
 # - apiGroups: ["metrics.k8s.io"]
  #  resources: ["pods", "nodes"]
  #  verbs: ["get", "list", "watch"]
---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-rc1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
            - containerPort: 9090
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Deploy the dashboard:

kubectl apply -f /k8s/dashboard.yaml

Check the pods:

kubectl get pod -n kubernetes-dashboard 
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-6c554969c6-k6rbh   1/1     Running   0          3d4h
kubernetes-dashboard-9bff46df4-t7sn2         1/1     Running   1          4d1h

Check the services:

kubectl get svc -n kubernetes-dashboard 
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
dashboard-metrics-scraper   ClusterIP   10.104.248.58   <none>        8000/TCP                       4d1h
kubernetes-dashboard        NodePort    10.100.47.122   <none>        443:30735/TCP,9090:32701/TCP   4d1h

Open 192.168.5.5:32701 in a browser and the following page appears.

Install metrics-server

Since k8s 1.13, heapster is no longer used; metrics-server replaces it.

metrics-server github: https://github.com/kubernetes-incubator/metrics-server

Main changes:
1. Add two arguments: --kubelet-preferred-address-types=InternalIP --kubelet-insecure-tls

2. Change imagePullPolicy: Always to imagePullPolicy: IfNotPresent, because Always pulls from the original registry every time (which is unreachable without a proxy), while IfNotPresent uses an image already present on the node.

Image adjustment:
the default image k8s.gcr.io/metrics-server-amd64:v0.3.1 cannot be pulled automatically, so pull it manually on the nodes first and retag it:

docker pull mirrorgooglecontainers/metrics-server-amd64:v0.3.1

docker tag mirrorgooglecontainers/metrics-server-amd64:v0.3.1 k8s.gcr.io/metrics-server-amd64:v0.3.1

Create a directory for the YAML files and upload all the related YAML files into it:

mkdir -p /k8s/metrics-server/

YAML file adjustments:

# cat /k8s/metrics-server-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
...
---
apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        imagePullPolicy: IfNotPresent
        args:
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

Apply the files:

$ kubectl create -f /k8s/metrics-server/

Check that the pod runs normally and that its log shows no errors:

# kubectl -n kube-system get po,svc | grep metrics-server
pod/metrics-server-8665bf49db-5wv7l                                1/1     Running   0          31m
service/metrics-server                     NodePort    10.99.222.85    <none>        443:32443/TCP   23m

# kubectl -n kube-system  logs -f metrics-server-8665bf49db-5wv7l

Test fetching metrics data through kubectl:

# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "master02",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/master02",
        "creationTimestamp": "2019-01-29T10:02:00Z"
      },
      "timestamp": "2019-01-29T10:01:48Z",
      "window": "30s",
      "usage": {
        "cpu": "131375532n",
        "memory": "989032Ki"
      }
    },
    ...

Confirm the data with kubectl top:

#  kubectl top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master01   200m         2%     1011Mi          3%
master02   451m         5%     967Mi           3%
master03   423m         5%     1003Mi          3%
node01     84m          1%     440Mi           1%
#  kubectl top pod
NAME                     CPU(cores)   MEMORY(bytes)
myip-7644b545d9-htg5z    0m           1Mi
myip-7644b545d9-pnwrn    0m           1Mi
myip-7644b545d9-ptnqc    0m           1Mi
tools-657d877fc5-4cfdd   0m           0Mi

Ingress and ingress controllers

A brief introduction to Ingress:
an Ingress corresponds roughly to the server + upstream part of an nginx configuration,
and an ingress controller corresponds to the nginx service itself. They exist to address the limitations of plain Services: a Service only load-balances at the TCP layer, and NodePort Services require keeping track of ports, of which only a limited range is available. Ingress plus an ingress controller solves this cleanly: once the controller is deployed, Ingress resources reference Services in the cluster and the controller picks up the configuration automatically (provided it was granted the necessary permissions at deployment time).

Official documentation:
https://v1-15.docs.kubernetes.io/docs/concepts/services-networking/ingress/
https://v1-15.docs.kubernetes.io/docs/concepts/services-networking/ingress-controllers/

Using nginx as the ingress controller

Documentation: https://docs.nginx.com/nginx-ingress-controller/installation/building-ingress-controller-image/

GitHub: https://github.com/nginxinc/kubernetes-ingress/

nginx-ingress YAML file.
Change the nginx image; the default tag is not a stable release, so use nginx/nginx-ingress:1.6.0.

cat kube-nginx-ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - update
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - k8s.nginx.org
  resources:
  - virtualservers
  - virtualserverroutes
  verbs:
  - list
  - watch
  - get
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: nginx-ingress
roleRef:
  kind: ClusterRole
  name: nginx-ingress
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: nginx-ingress
type: Opaque
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN2akNDQWFZQ0NRREFPRjl0THNhWFhEQU5CZ2txaGtpRzl3MEJBUXNGQURBaE1SOHdIUVlEVlFRRERCWk8KUjBsT1dFbHVaM0psYzNORGIyNTBjbTlzYkdWeU1CNFhEVEU0TURreE1qRTRNRE16TlZvWERUSXpNRGt4TVRFNApNRE16TlZvd0lURWZNQjBHQTFVRUF3d1dUa2RKVGxoSmJtZHlaWE56UTI5dWRISnZiR3hsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUwvN2hIUEtFWGRMdjNyaUM3QlBrMTNpWkt5eTlyQ08KR2xZUXYyK2EzUDF0azIrS3YwVGF5aGRCbDRrcnNUcTZzZm8vWUk1Y2Vhbkw4WGM3U1pyQkVRYm9EN2REbWs1Qgo4eDZLS2xHWU5IWlg0Rm5UZ0VPaStlM2ptTFFxRlBSY1kzVnNPazFFeUZBL0JnWlJVbkNHZUtGeERSN0tQdGhyCmtqSXVuektURXUyaDU4Tlp0S21ScUJHdDEwcTNRYzhZT3ExM2FnbmovUWRjc0ZYYTJnMjB1K1lYZDdoZ3krZksKWk4vVUkxQUQ0YzZyM1lma1ZWUmVHd1lxQVp1WXN2V0RKbW1GNWRwdEMzN011cDBPRUxVTExSakZJOTZXNXIwSAo1TmdPc25NWFJNV1hYVlpiNWRxT3R0SmRtS3FhZ25TZ1JQQVpQN2MwQjFQU2FqYzZjNGZRVXpNQ0F3RUFBVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWpLb2tRdGRPcEsrTzhibWVPc3lySmdJSXJycVFVY2ZOUitjb0hZVUoKdGhrYnhITFMzR3VBTWI5dm15VExPY2xxeC9aYzJPblEwMEJCLzlTb0swcitFZ1U2UlVrRWtWcitTTFA3NTdUWgozZWI4dmdPdEduMS9ienM3bzNBaS9kclkrcUI5Q2k1S3lPc3FHTG1US2xFaUtOYkcyR1ZyTWxjS0ZYQU80YTY3Cklnc1hzYktNbTQwV1U3cG9mcGltU1ZmaXFSdkV5YmN3N0NYODF6cFErUyt1eHRYK2VBZ3V0NHh3VlI5d2IyVXYKelhuZk9HbWhWNThDd1dIQnNKa0kxNXhaa2VUWXdSN0diaEFMSkZUUkk3dkhvQXprTWIzbjAxQjQyWjNrN3RXNQpJUDFmTlpIOFUvOWxiUHNoT21FRFZkdjF5ZytVRVJxbStGSis2R0oxeFJGcGZnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdi91RWM4b1JkMHUvZXVJTHNFK1RYZUprckxMMnNJNGFWaEMvYjVyYy9XMlRiNHEvClJOcktGMEdYaVN1eE9ycXgrajlnamx4NXFjdnhkenRKbXNFUkJ1Z1B0ME9hVGtIekhvb3FVWmcwZGxmZ1dkT0EKUTZMNTdlT1l0Q29VOUZ4amRXdzZUVVRJVUQ4R0JsRlNjSVo0b1hFTkhzbysyR3VTTWk2Zk1wTVM3YUhudzFtMApxWkdvRWEzWFNyZEJ6eGc2clhkcUNlUDlCMXl3VmRyYURiUzc1aGQzdUdETDU4cGszOVFqVUFQaHpxdmRoK1JWClZGNGJCaW9CbTVpeTlZTW1hWVhsMm0wTGZzeTZuUTRRdFFzdEdNVWozcGJtdlFmazJBNnljeGRFeFpkZFZsdmwKMm82MjBsMllxcHFDZEtCRThCay90elFIVTlKcU56cHpoOUJUTXdJREFRQUJBb0lCQVFDZklHbXowOHhRVmorNwpLZnZJUXQwQ0YzR2MxNld6eDhVNml4MHg4Mm15d1kxUUNlL3BzWE9LZlRxT1h1SENyUlp5TnUvZ2IvUUQ4bUFOCmxOMjRZTWl0TWRJODg5TEZoTkp3QU5OODJDeTczckM5bzVvUDlkazAvYzRIbjAzSkVYNzZ5QjgzQm9rR1FvYksKMjhMNk0rdHUzUmFqNjd6Vmc2d2szaEhrU0pXSzBwV1YrSjdrUkRWYmhDYUZhNk5nMUZNRWxhTlozVDhhUUtyQgpDUDNDeEFTdjYxWTk5TEI4KzNXWVFIK3NYaTVGM01pYVNBZ1BkQUk3WEh1dXFET1lvMU5PL0JoSGt1aVg2QnRtCnorNTZud2pZMy8yUytSRmNBc3JMTnIwMDJZZi9oY0IraVlDNzVWYmcydVd6WTY3TWdOTGQ5VW9RU3BDRkYrVm4KM0cyUnhybnhBb0dCQU40U3M0ZVlPU2huMVpQQjdhTUZsY0k2RHR2S2ErTGZTTXFyY2pOZjJlSEpZNnhubmxKdgpGenpGL2RiVWVTbWxSekR0WkdlcXZXaHFISy9iTjIyeWJhOU1WMDlRQ0JFTk5jNmtWajJTVHpUWkJVbEx4QzYrCk93Z0wyZHhKendWelU0VC84ajdHalRUN05BZVpFS2FvRHFyRG5BYWkyaW5oZU1JVWZHRXFGKzJyQW9HQkFOMVAKK0tZL0lsS3RWRzRKSklQNzBjUis3RmpyeXJpY05iWCtQVzUvOXFHaWxnY2grZ3l4b25BWlBpd2NpeDN3QVpGdwpaZC96ZFB2aTBkWEppc1BSZjRMazg5b2pCUmpiRmRmc2l5UmJYbyt3TFU4NUhRU2NGMnN5aUFPaTVBRHdVU0FkCm45YWFweUNweEFkREtERHdObit3ZFhtaTZ0OHRpSFRkK3RoVDhkaVpBb0dCQUt6Wis1bG9OOTBtYlF4VVh5YUwKMjFSUm9tMGJjcndsTmVCaWNFSmlzaEhYa2xpSVVxZ3hSZklNM2hhUVRUcklKZENFaHFsV01aV0xPb2I2NTNyZgo3aFlMSXM1ZUtka3o0aFRVdnpldm9TMHVXcm9CV2xOVHlGanIrSWhKZnZUc0hpOGdsU3FkbXgySkJhZUFVWUNXCndNdlQ4NmNLclNyNkQrZG8wS05FZzFsL0FvR0FlMkFVdHVFbFNqLzBmRzgrV3hHc1RFV1JqclRNUzRSUjhRWXQKeXdjdFA4aDZxTGxKUTRCWGxQU05rMXZLTmtOUkxIb2pZT2pCQTViYjhibXNVU1BlV09NNENoaFJ4QnlHbmR2eAphYkJDRkFwY0IvbEg4d1R0alVZYlN5T294ZGt5OEp0ek90ajJhS0FiZHd6NlArWDZDODhjZmxYVFo5MWpYL3RMCjF3TmRKS2tDZ1lCbyt0UzB5TzJ2SWFmK2UwSkN5TGhzVDQ5cTN3Zis2QWVqWGx2WDJ1VnRYejN5QTZnbXo5aCsKcDNlK2JMRUxwb3B0WFhNdUFRR0xhUkcrYlNNcjR5dERYbE5ZSndUeThXczNKY3dlSTdqZVp2b0ZpbmNvVlVIMwphdmxoTUVCRGYxSjltSDB5cDBwWUNaS2ROdHNvZEZtQktzVEtQMjJhTmtsVVhCS3gyZzR6cFE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: virtualservers.k8s.nginx.org
spec:
  group: k8s.nginx.org
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: virtualservers
    singular: virtualserver
    kind: VirtualServer
    shortNames:
    - vs
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: virtualserverroutes.k8s.nginx.org
spec:
  group: k8s.nginx.org
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: virtualserverroutes
    singular: virtualserverroute
    kind: VirtualServerRoute
    shortNames:
    - vsr

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
     #annotations:
       #prometheus.io/scrape: "true"
       #prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:1.6.0
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
       #- name: prometheus
         #containerPort: 9113
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
         #- -v=3 # Enables extensive logging. Useful for troubleshooting.
         #- -report-ingress-status
         #- -external-service=nginx-ingress
         #- -enable-leader-election
         #- -enable-prometheus-metrics
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress

Deploy it (the pods are managed by a DaemonSet):

kubectl apply -f /k8s/kube-nginx-ingress.yaml

Check the deployment:

kubectl get pod -n nginx-ingress
NAME                  READY   STATUS    RESTARTS   AGE
nginx-ingress-crk7x   1/1     Running   1          3d23h
nginx-ingress-vw8mx   1/1     Running   1          3d23h


 kubectl get svc -n nginx-ingress
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
nginx-ingress   NodePort   10.110.78.89   <none>        80:30526/TCP,443:30195/TCP   3d23h

Add a test case

Deploy a myapp application:

cat demo1.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: httpd
          containerPort: 80

Deploy demo1:

kubectl apply  -f /k8s/demo1.yaml

Check the result:

kubectl get pod -n default  | grep myapp
myapp-deploy-67d64cb6f4-c582p   1/1     Running   1          3d23h
myapp-deploy-67d64cb6f4-j98nx   1/1     Running   0          3d6h

kubectl get svc -n default  | grep myapp
myapp        ClusterIP   10.104.231.115   <none>        80/TCP    3d23h

Add demo1-ingess.yaml:

cat demo1-ingess.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: demo1.com # in production this domain should be publicly resolvable
    http:
      paths:
      - path:
        backend:
          serviceName: myapp
          servicePort: 80

Deploy demo1-ingress:

 kubectl apply  -f /k8s/demo1-ingess.yaml

Check the result:

kubectl get ingress -n default
NAME            HOSTS       ADDRESS   PORTS   AGE
ingress-myapp   demo1.com             80      3d23h

On a machine that will access demo1.com, add the following entry to its hosts file:
192.168.5.5 demo1.com
Then open http://demo1.com:30526/ in a browser.
If the page appears, ingress and the ingress controller were installed successfully.
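The same check can be done from the command line without editing the hosts file, by passing the Host header directly (a sketch using the NodePort shown above):

curl -H "Host: demo1.com" http://192.168.5.5:30526/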

Troubleshooting

A newly added node stays NotReady

Normally the network plugin is installed on the master first and nodes are joined afterwards, which can leave a node unable to load the plugin files.
Check with:

journalctl -f -u kubelet

The output looks like this:

Nov 06 15:37:21 jupiter kubelet[86177]: W1106 15:37:21.482574   86177 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni
Nov 06 15:37:25 jupiter kubelet[86177]: E1106 15:37:25.075839   86177 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Investigation showed that /etc/cni did not exist on this node, while the other nodes had it.
Copy the cni directory over from the master with scp:

scp -r master1:/etc/cni /etc/cni

Restart kubelet:

systemctl restart kubelet

Back on the master, the node status is still NotReady (normally you have to wait a bit after restarting a service, so give it a few minutes).
If it still never shows Ready, go back to the node and run the command again:

journalctl -f -u kubelet
The output shows:
Nov 06 15:36:41 jupiter kubelet[86177]: W1106 15:36:41.439409   86177 cni.go:202] Error validating CNI config &{weave 0.3.0 false [0xc000fb0c00 0xc000fb0c80] [123 10 32 32 32 32 34 99 110 105 86 101 114 115 105 111 110 34 58 32 34 48 46 51 46 48 34 44 10 32 32 32 32 34 110 97 109 101 34 58 32 34 119 101 97 118 101 34 44 10 32 32 32 32 34 112 108 117 103 105 110 115 34 58 32 91 10 32 32 32 32 32 32 32 32 123 10 32 32 32 32 32 32 32 32 32 32 32 32 34 110 97 109 101 34 58 32 34 119 101 97 118 101 34 44 10 32 32 32 32 32 32 32 32 32 32 32 32 34 116 121 112 101 34 58 32 34 119 101 97 118 101 45 110 101 116 34 44 10 32 32 32 32 32 32 32 32 32 32 32 32 34 104 97 105 114 112 105 110 77 111 100 101 34 58 32 116 114 117 101 10 32 32 32 32 32 32 32 32 125 44 10 32 32 32 32 32 32 32 32 123 10 32 32 32 32 32 32 32 32 32 32 32 32 34 116 121 112 101 34 58 32 34 112 111 114 116 109 97 112 34 44 10 32 32 32 32 32 32 32 32 32 32 32 32 34 99 97 112 97 98 105 108 105 116 105 101 115 34 58 32 123 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 32 116 114 117 101 125 44 10 32 32 32 32 32 32 32 32 32 32 32 32 34 115 110 97 116 34 58 32 116 114 117 101 10 32 32 32 32 32 32 32 32 125 10 32 32 32 32 93 10 125 10]}: [failed to find plugin "weave-net" in path [/opt/cni/bin]]
Nov 06 15:36:41 jupiter kubelet[86177]: W1106 15:36:41.439604   86177 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d

This points to the problem. Go back to the master and look at /opt/cni/bin; comparing it with the same directory on the node shows a different number of files.
Roughly these three files are involved (two of them are links). If you are not sure which one to pick, it does not matter; just scp all three over:

scp master1:/opt/cni/bin/weave-plugin-2.5.2  ./
scp master1:/opt/cni/bin/weave-ipam  ./
scp master1:/opt/cni/bin/weave-net  ./

Finally, restart the service:

systemctl restart kubelet

Run the command again:

journalctl -f -u kubelet

The output now shows:
Nov 06 15:50:24 jupiter kubelet[114959]: I1106 15:50:24.546098 114959 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/7e1ce4d9-8ef6-4fda-8e10-84837b033e06-kube-proxy") pod "kube-proxy-wp5p7" (UID: "7e1ce4d9-8ef6-4fda-8e10-84837b033e06")
Nov 06 15:50:24 jupiter kubelet[114959]: I1106 15:50:24.546183 114959 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/7e1ce4d9-8ef6-4fda-8e10-84837b033e06-xtables-lock") pod "kube-proxy-wp5p7" (UID: "7e1ce4d9-8ef6-4fda-8e10-84837b033e06")
Nov 06 15:50:24 jupiter kubelet[114959]: I1106 15:50:24.546254 114959 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/7e1ce4d9-8ef6-4fda-8e10-84837b033e06-lib-modules") pod "kube-proxy-wp5p7" (UID: "7e1ce4d9-8ef6-4fda-8e10-84837b033e06")

Everything is normal now.
Back on the master node:

kubectl get nodes

The status may still show NotReady at first; do not worry, wait a minute or so and check again, and it becomes Ready.

Swap error

The error:

[init] Using Kubernetes version: v1.15.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap

Fix:

[root@k8snode2 k8s_images]# swapoff -a
[root@k8snode2 k8s_images]# sed -i 's/.*swap.*/#&/' /etc/fstab
[root@k8smaster k8s_images]# free -m
              total        used        free      shared  buff/cache   available
Mem:            992         524          74           7         392         284
Swap:             0           0           0

Host CPU count

The error:
the number of available CPUs 1 is less than the required 2
Fix:
Give the virtual machine more than one CPU core.
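For a throwaway test VM that genuinely has only one CPU, the preflight check can instead be skipped; this is only suitable for testing, not production:

kubeadm init --config /k8s/kubeadm-config.yaml --upload-certs --ignore-preflight-errors=NumCPU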

iptables bridge and IP forwarding

The errors:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
/proc/sys/net/ipv4/ip_forward contents are not set to 1

Fix:
Add the following lines to /etc/sysctl.conf:

cat /etc/sysctl.conf
# Enable IP forwarding and add all four lines below
net.ipv4.ip_forward = 1
# Pass bridged traffic to the iptables/ip6tables/arptables chains
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1

Run sysctl -p to apply the settings.

References
