Installing Kubernetes from Binaries

A manual for deploying Kubernetes from binary packages

Pre-deployment preparation

1) System tuning

On all machines.

Disable the firewall and SELinux

# setenforce 0
# sed -i "s#SELINUX=enforcing#SELINUX=disabled#g" /etc/selinux/config
# systemctl stop firewalld && systemctl disable firewalld

Configure the Aliyun yum mirror

mkdir -p /etc/yum.repos.d/bak
mv -t /etc/yum.repos.d/bak/ /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum -y remove epel-release
yum clean all
yum -y install epel-release

Configure NTP time synchronization

yum -y install ntpdate
/usr/sbin/ntpdate ntp7.aliyun.com > /dev/null 2>&1
(crontab -l; echo "00 23 * * *  /usr/sbin/ntpdate ntp7.aliyun.com  > /dev/null 2>&1" ) | crontab

2. Check that the kernel version is 3.10 or higher (required by Docker)

# uname -r

3. Install some common utilities

yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y

2) Install the DNS service

Install the bind service on the zsf7-11 (10.4.7.11) machine

[root@zsf7-11 ~]# yum -y install bind
[root@zsf7-11 ~]# vim /etc/named.conf
listen-on port 53 { 10.4.7.11; };
allow-query { any; }; // which clients are allowed to query
forwarders { 10.4.7.254; }; // upstream DNS address
recursion yes; // this server answers client queries recursively
dnssec-enable no;
dnssec-validation no;

Check that the DNS configuration file syntax is correct:

named-checkconf

Configure the zone declarations:

[root@zsf7-11 ~]# vim /etc/named.rfc1912.zones
// host domain
zone "host.com" IN {
        type  master;
        file  "host.com.zone";
        allow-update { 10.4.7.11; };
};

zone "zsf.com" IN {
        type  master;
        file  "zsf.com.zone";
        allow-update { 10.4.7.11; };
};

Configure the zone file for the host domain

[root@zsf7-11 ~]# vim /var/named/host.com.zone
$ORIGIN host.com.
$TTL 600	; 10 minutes
@       IN SOA	dns.host.com. dnsadmin.host.com. (
				2020073001 ; serial
				10800      ; refresh (3 hours)
				900        ; retry (15 minutes)
				604800     ; expire (1 week)
				86400      ; minimum (1 day)
				)
			NS   dns.host.com.
$TTL 60	; 1 minute
dns                A    10.4.7.11
zsf7-11           A    10.4.7.11
zsf7-12           A    10.4.7.12
zsf7-21           A    10.4.7.21
zsf7-22           A    10.4.7.22
zsf7-200          A    10.4.7.200

Configure the zone file for the business domain (zsf.com):

[root@zsf7-11 ~]# vim /var/named/zsf.com.zone
$ORIGIN zsf.com.
$TTL 600        ; 10 minutes
@               IN SOA  dns.zsf.com. dnsadmin.zsf.com. (
                                2020073001 ; serial
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                                NS   dns.zsf.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11

Start the service

systemctl start named && systemctl enable named

Verify

dig -t A <hostname> @<DNS server IP> +short
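
For example, to check that the zsf7-21 record added above resolves through our server:

[root@zsf7-11 ~]# dig -t A zsf7-21.host.com @10.4.7.11 +short
10.4.7.21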

Change the DNS resolver address on all machines

vim /etc/sysconfig/network-scripts/ifcfg-ens33
systemctl restart network
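
The only change needed in ifcfg-ens33 is to point DNS at the bind server (a minimal sketch, assuming the interface is ens33 as above):

DNS1=10.4.7.11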

Add short-name resolution

vim /etc/resolv.conf
search host.com
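
With the search domain set, short host names resolve through host.com; for example:

[root@zsf7-21 ~]# ping -c 1 zsf7-200
PING zsf7-200.host.com (10.4.7.200) 56(84) bytes of data.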

3) Install Harbor, the Docker registry

Download the Harbor installer from GitHub; we install version 2.0.0 here.

[root@zsf7-200 opt]# mkdir src
[root@zsf7-200 opt]# cd src/
[root@zsf7-200 src]# tar xf harbor-offline-installer-v2.0.0.tgz  -C /opt/
[root@zsf7-200 src]# cd /opt/
[root@zsf7-200 opt]# ls
certs  containerd  harbor  src
[root@zsf7-200 opt]# mv harbor/ harbor-2.0.0
[root@zsf7-200 opt]# ln -s harbor-2.0.0/ harbor
[root@zsf7-200 opt]# cd harbor
[root@zsf7-200 harbor]# yum -y install docker-compose   // Harbor is orchestrated on a single host with docker-compose, so docker-compose must be installed
[root@zsf7-200 harbor]# cp harbor.yml.tmpl  harbor.yml
[root@zsf7-200 harbor]# egrep -v "#|^$" harbor.yml
hostname: harbor.zsf.com		// domain name used to access Harbor
http:
  port: 180
harbor_admin_password: Ysyhl9t!    // password for logging in to the Harbor registry
database:
  password: root123
  max_idle_conns: 50
  max_open_conns: 100
data_volume: /data/harbor	// data storage directory
clair:
  updaters_interval: 12
trivy:
  ignore_unfixed: false
  skip_update: false
  insecure: false
jobservice:
  max_job_workers: 10
notification:
  webhook_job_max_retry: 10
chart:
  absolute_url: disabled
log:
  level: info
  local:
    rotate_count: 50
    rotate_size: 200M
    location: /var/log/harbor
_version: 2.0.0
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - clair
    - trivy
[root@zsf7-200 harbor]# ./install.sh 
[Step 0]: checking if docker is installed ...

Note: docker version: 19.03.12

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.18.0

[Step 2]: loading Harbor images ...

Then add an A record on the DNS server

[root@zsf7-11 ~]# vim /var/named/zsf.com.zone
$ORIGIN zsf.com.
$TTL 600        ; 10 minutes
@               IN SOA  dns.zsf.com. dnsadmin.zsf.com. (
                                2020073003 ; serial
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                                NS   dns.zsf.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200

// note: the serial must be incremented
[root@zsf7-11 ~]# systemctl restart named
[root@zsf7-11 ~]# ping harbor.zsf.com
PING harbor.zsf.com (10.4.7.200) 56(84) bytes of data.
64 bytes from 10.4.7.200 (10.4.7.200): icmp_seq=1 ttl=64 time=18.9 ms

Install nginx as a proxy in front of the Harbor registry

[root@zsf7-200 harbor]# yum -y install nginx
[root@zsf7-200 harbor]# cat /etc/nginx/conf.d/harbor.zsf.com.conf
server {
	listen 80;
	server_name harbor.zsf.com;
	client_max_body_size 1000m;			// note: docker image layers may be larger than this; adjust to your needs
	location / {
		proxy_pass http://127.0.0.1:180;
	}
}
[root@zsf7-200 harbor]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@zsf7-200 harbor]# systemctl start nginx && systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

Log in to the web UI and create a project named public.

Then test that pushing an image works.

[root@zsf7-21 ~]# docker pull nginx
[root@zsf7-21 ~]# docker tag nginx:latest harbor.zsf.com/public/nginx:1.17.0 
[root@zsf7-21 ~]# docker login -u admin -p Ysyhl9t!  harbor.zsf.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@zsf7-21 ~]# docker push harbor.zsf.com/public/nginx:1.17.0

The image can now be seen in the Harbor registry; the Harbor setup is complete.

4) Install Docker

Install Docker on these machines: zsf7-21, zsf7-22, zsf7-200

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

Modify the Docker configuration file

certs]# mkdir -p /data/docker
certs]# mkdir -p /etc/docker/
certs]# vim /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.zsf.com"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.21.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

Explanation

{
  "graph": "/data/docker",		//docker放在什么位置
  "storage-driver": "overlay2", //存储引擎
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.zsf.com"],   //添加http的harbor仓库
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],						  //配置阿里云镜像加速
  "bip": "172.7.21.1/24",															  //Docker的虚拟IP地址,需要改成每台主机的最后一位
  "exec-opts": ["native.cgroupdriver=systemd"],										  //docker cgroup 驱动
  "live-restore": true																  //当docker daemon down时容器能正常运行
}
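
For example, on zsf7-22 only the bip line differs (a sketch; the rest of daemon.json is identical):

  "bip": "172.7.22.1/24",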

Start Docker

systemctl start docker.service  && systemctl enable docker.service

Prepare the self-signed certificates

1. Install the tools needed to generate self-signed certificates

[root@zsf7-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl				// generates certificates (declarative JSON output)
[root@zsf7-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json  // converts cfssl's JSON output into the actual cert and key files
[root@zsf7-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo  // inspects/decodes existing certificates
[root@zsf7-200 ~]# chmod +x /usr/bin/cfssl*

2. Generate the self-signed CA

[root@zsf7-200 ~]# mkdir /opt/certs
[root@zsf7-200 ~]# cd /opt/certs

1) Create the root certificate signing request file

[root@zsf7-200 certs]# vim /opt/certs/ca-csr.json
{
    "CN": "OldboyEdu",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}

Explanation

{
    "CN": "公司机构",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",   //加密算法
        "size": 2048     //加密长度
    },
    "names": [
        {
            "C": "CN",	//国家
            "ST": "beijing",  //省份
            "L": "beijing",   //城市
            "O": "od",        // 
            "OU": "ops"       //职位
        }
    ],
    "ca": {
        "expiry": "175200h"   //证书有效时间
    }
}
2) Generate the CA certificate
[root@zsf7-200 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
2020/08/04 15:43:48 [INFO] generating a new CA key and certificate from CSR
2020/08/04 15:43:48 [INFO] generate received request
2020/08/04 15:43:48 [INFO] received CSR
2020/08/04 15:43:48 [INFO] generating key: rsa-2048
2020/08/04 15:43:48 [INFO] encoded CSR
2020/08/04 15:43:48 [INFO] signed certificate with serial number 187642729321576839847461032555167519484416835896

[root@zsf7-200 certs]# ls 
ca.csr  ca-csr.json  ca-key.pem  ca.pem
// ca-key.pem is the CA private key, ca.pem is the CA certificate
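
The new CA can be inspected with the cfssl-certinfo tool downloaded earlier:

[root@zsf7-200 certs]# cfssl-certinfo -cert ca.pem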

Install the Kubernetes services

Install the master components

Install etcd

On hosts zsf7-12, zsf7-21 and zsf7-22.

1) Issue the etcd certificates
[root@zsf7-200 certs]# vim /opt/certs/ca-config.json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {			//服务端认证
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {			//客户端认证
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {			//双向认证
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

Create the etcd certificate signing request file

[root@zsf7-200 certs]# vi etcd-peer-csr.json
{
    "CN": "k8s-etcd",
    // hosts etcd might run on; CIDR ranges are not supported, and if an IP is missing later the certificate must be reissued, so it is best to reserve a few extra IPs
    "hosts": [
        "10.4.7.11",
        "10.4.7.12",
        "10.4.7.21",
        "10.4.7.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

Issue the certificate

[root@zsf7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json |cfssl-json -bare etcd-peer
// -profile=peer selects mutual (peer) authentication
2020/08/04 16:02:12 [INFO] generate received request
2020/08/04 16:02:12 [INFO] received CSR
2020/08/04 16:02:12 [INFO] generating key: rsa-2048
2020/08/04 16:02:13 [INFO] encoded CSR
2020/08/04 16:02:13 [INFO] signed certificate with serial number 699445385173737942257356343390958864239006335823
2020/08/04 16:02:13 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@zsf7-200 certs]# ls etcd*
etcd-peer.csr  etcd-peer-csr.json  etcd-peer-key.pem  etcd-peer.pem
2) Install etcd

Perform the following steps on each of the three etcd hosts: zsf7-12, zsf7-21, zsf7-22

Create the user that the etcd service runs as

[root@zsf7-12 ~]# useradd -s /sbin/nologin -M etcd

[root@zsf7-12 ~]# id etcd 
uid=1000(etcd) gid=1000(etcd) groups=1000(etcd)

Download etcd: grab the matching release from GitHub; we use version 3.1.20 here.

Upload the downloaded package to /opt/src
[root@zsf7-12 src]# ls
etcd-v3.1.20-linux-amd64.tar.gz
[root@zsf7-12 src]# tar xvf etcd-v3.1.20-linux-amd64.tar.gz -C /opt/
[root@zsf7-12 opt]# mv etcd-v3.1.20-linux-amd64/ etcd-v3.1.20
[root@zsf7-12 opt]# ln -s etcd-v3.1.20/ etcd		
[root@zsf7-12 opt]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server /data/etcd/etcd-server
[root@zsf7-12 opt]# cd /opt/etcd/certs

The following steps are shown only on zsf7-12; repeat them analogously on the other two machines.

Copy the self-signed certificates into place

Run on the zsf7-200 machine:
[root@zsf7-200 certs]# scp ca.pem etcd-peer-key.pem etcd-peer.pem  root@10.4.7.12:/opt/etcd/certs
[root@zsf7-200 certs]# scp ca.pem etcd-peer-key.pem etcd-peer.pem  root@10.4.7.21:/opt/etcd/certs
[root@zsf7-200 certs]# scp ca.pem etcd-peer-key.pem etcd-peer.pem  root@10.4.7.22:/opt/etcd/certs

Write the startup script on zsf7-12

[root@zsf7-12 certs]# ls 
ca.pem  etcd-peer-key.pem  etcd-peer.pem
[root@zsf7-12 ~]# vim /opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-7-12 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://10.4.7.12:2380 \
       --listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://10.4.7.12:2380 \
       --advertise-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
       --initial-cluster  etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth  \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout
       
[root@zsf7-12 ~]# chmod +x /opt/etcd/etcd-server-startup.sh

Change file ownership

[root@zsf7-12 ~]# chown -R etcd.etcd /opt/etcd-v3.1.20/
[root@zsf7-12 ~]# chown -R etcd.etcd /data/etcd/etcd-server

3) Install and configure supervisor

Because etcd cannot run itself in the background, we use supervisor to manage it.

[root@zsf7-12 ~]# yum install supervisor -y
[root@zsf7-12 ~]# systemctl start supervisord && systemctl enable supervisord

[root@zsf7-12 ~]# vim /etc/supervisord.d/etcd-server.ini
[program:etcd-server-7-12]
command=/opt/etcd/etcd-server-startup.sh                        ; the program (relative uses PATH, can take args)
numprocs=1                                                      ; number of processes copies to start (def 1)
directory=/opt/etcd                                             ; directory to cwd to before exec (def no cwd)
autostart=true                                                  ; start at supervisord start (default: true)
autorestart=true                                                ; retstart at unexpected quit (default: true)
startsecs=30                                                    ; number of secs prog must stay running (def. 1)
startretries=3                                                  ; max # of serial start failures (default 3)
exitcodes=0,2                                                   ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                 ; signal used to kill process (default TERM)
stopwaitsecs=10                                                 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd                                                       ; setuid to this UNIX account to run the program
redirect_stderr=true                                            ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log           ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                        ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                     ; emit events on stdout writes (default false)

[root@zsf7-12 ~]# supervisorctl update
etcd-server-7-12: added process group
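
Since startsecs is 30, wait about half a minute and confirm supervisor keeps the process running (output will look roughly like this):

[root@zsf7-12 ~]# supervisorctl status
etcd-server-7-12                 RUNNING   pid 12345, uptime 0:01:02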

Install etcd on the remaining two hosts as well

zsf7-21

etcd]# cat /opt/etcd/etcd-server-startup.sh 
#!/bin/sh
./etcd --name etcd-server-7-21 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://10.4.7.21:2380 \
       --listen-client-urls https://10.4.7.21:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://10.4.7.21:2380 \
       --advertise-client-urls https://10.4.7.21:2379,http://127.0.0.1:2379 \
       --initial-cluster  etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth  \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout

zsf7-22

~]# cat /opt/etcd/etcd-server-startup.sh 
#!/bin/bash
./etcd --name etcd-server-7-22 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://10.4.7.22:2380 \
       --listen-client-urls https://10.4.7.22:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://10.4.7.22:2380 \
       --advertise-client-urls https://10.4.7.22:2379,http://127.0.0.1:2379 \
       --initial-cluster  etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth  \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout

4) Check the etcd cluster status
[root@zsf7-12 etcd]# ./etcdctl cluster-health
member 988139385f78284 is healthy: got healthy result from http://127.0.0.1:2379
member 5a0ef2a004fc4349 is healthy: got healthy result from http://127.0.0.1:2379
member f4a0cb0a765574a8 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy


[root@zsf7-12 etcd]# ./etcdctl member list
988139385f78284: name=etcd-server-7-22 peerURLs=https://10.4.7.22:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.22:2379 isLeader=false
5a0ef2a004fc4349: name=etcd-server-7-21 peerURLs=https://10.4.7.21:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.21:2379 isLeader=false
f4a0cb0a765574a8: name=etcd-server-7-12 peerURLs=https://10.4.7.12:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=true
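
Optionally, the TLS endpoint can be queried directly by passing the peer certificate (a sketch using the etcdctl v2 flags shipped with etcd 3.1):

[root@zsf7-12 etcd]# ./etcdctl --ca-file ./certs/ca.pem --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem --endpoints https://10.4.7.12:2379 cluster-health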

Install kube-apiserver

1) Download the package from GitHub

We install 1.15.4 here; find the corresponding tag on GitHub: https://github.com/kubernetes/kubernetes/releases/tag/v1.15.4

If the link above no longer works, you can download from this link instead: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md#downloads-for-v1154

2) Create the certificates

Issue the client certificate: the apiserver (client) uses it when talking to etcd (server).

[root@zsf7-200 certs]# vim client-csr.json
{
    "CN": "k8s-client",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "jiangsu",
            "L": "nanjing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

[root@zsf7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client

[root@zsf7-200 certs]# ll client*
-rw-r--r-- 1 root root  997 Aug  5 15:31 client.csr
-rw-r--r-- 1 root root  283 Aug  5 15:30 client-csr.json
-rw------- 1 root root 1675 Aug  5 15:31 client-key.pem
-rw-r--r-- 1 root root 1363 Aug  5 15:31 client.pem

Issue the apiserver certificate: this is the server certificate the apiserver presents; clients calling the apiserver authenticate it with this certificate.

# vi apiserver-csr.json

{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        // hosts the apiserver may run on; note that the VIP must be included as well
        "10.4.7.10",
        "10.4.7.21",
        "10.4.7.22",
        "10.4.7.23"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "jiangsu",
            "L": "nanjing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

[root@zsf7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver

[root@zsf7-200 certs]# ll apiserver*
-rw-r--r-- 1 root root 1249 Aug  5 15:33 apiserver.csr
-rw-r--r-- 1 root root  566 Aug  5 15:32 apiserver-csr.json
-rw------- 1 root root 1679 Aug  5 15:33 apiserver-key.pem
-rw-r--r-- 1 root root 1594 Aug  5 15:33 apiserver.pem
3) Install kube-apiserver
[root@zsf7-21 /]# cd /opt/src
[root@zsf7-21 src]# ll 
total 443192
-rw-r--r-- 1 root root   9850227 Aug  4 15:24 etcd-v3.1.20-linux-amd64.tar.gz
-rw-r--r-- 1 root root 443976803 Aug  5 14:11 kubernetes-server-linux-amd64.tar.gz
[root@zsf7-21 src]# tar xf kubernetes-server-linux-amd64.tar.gz -C /opt/

[root@zsf7-21 src]# cd /opt/
[root@zsf7-21 opt]# mv kubernetes/ kubernetes-1.15.4
[root@zsf7-21 opt]# ln -s kubernetes-1.15.4/ kubernetes
[root@zsf7-21 opt]# cd /opt/kubernetes/server/bin
[root@zsf7-21 bin]# rm -rf *.tar *_tag  // the .tar files are docker image archives of the components; we don't need them, so delete them
[root@zsf7-21 bin]# mkdir -p /opt/kubernetes/server/bin/certs  // directory for the certificates

Copy the certificates from zsf7-200 to the apiserver host

[root@zsf7-200 certs]# scp ca.pem ca-key.pem client-key.pem client.pem apiserver.pem apiserver-key.pem root@10.4.7.21:/opt/kubernetes/server/bin/certs/
[root@zsf7-21 bin]# ll /opt/kubernetes/server/bin/certs/
total 24
-rw------- 1 root root 1679 Aug  5 15:41 apiserver-key.pem
-rw-r--r-- 1 root root 1594 Aug  5 15:41 apiserver.pem
-rw------- 1 root root 1675 Aug  5 15:41 ca-key.pem
-rw-r--r-- 1 root root 1338 Aug  5 15:41 ca.pem
-rw------- 1 root root 1675 Aug  5 15:41 client-key.pem
-rw-r--r-- 1 root root 1363 Aug  5 15:41 client.pem

Create the apiserver configuration files

[root@zsf7-21 bin]# mkdir -p /opt/kubernetes/server/bin/conf
[root@zsf7-21 bin]# cd /opt/kubernetes/server/bin/conf
[root@zsf7-21 conf]# vi audit.yaml
// audit logging policy
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

Create the startup script

[root@zsf7-21 bin]# vi /opt/kubernetes/server/bin/kube-apiserver.sh
#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ./conf/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./certs/ca.pem \
  --requestheader-client-ca-file ./certs/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./certs/ca.pem \
  --etcd-certfile ./certs/client.pem \
  --etcd-keyfile ./certs/client-key.pem \
  --etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --service-account-key-file ./certs/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./certs/client.pem \
  --kubelet-client-key ./certs/client-key.pem \
  --log-dir  /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./certs/apiserver.pem \
  --tls-private-key-file ./certs/apiserver-key.pem \
  --v 2
  
[root@zsf7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-apiserver.sh
[root@zsf7-21 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver/audit-log

Explanation:

  • apiserver-count: number of apiserver instances; adjust to your environment
  • audit-log-path: where the audit log is written
  • audit-policy-file: audit policy (rules) configuration file
  • authorization-mode: authorization mode; here role-based access control (RBAC)
  • client-ca-file: location of the root CA certificate
  • requestheader-client-ca-file: CA used to verify client certificates in incoming requests
  • enable-admission-plugins: admission plugins enabled in addition to the defaults; order does not matter
  • etcd-cafile: etcd root CA certificate
  • etcd-certfile: certificate used to authenticate to etcd
  • etcd-keyfile: private key used to authenticate to etcd
  • etcd-servers: etcd endpoints
  • service-account-key-file: key file used for service account tokens
  • service-cluster-ip-range: CIDR range from which service cluster IPs are allocated; must not overlap any IP range assigned to pods on the nodes (default 10.0.0.0/24)
  • service-node-port-range: NodePort port range; default is 30000-32767
  • target-ram-mb: apiserver memory target in MB (used to size caches, etc.)
4) Create the supervisor config
[root@zsf7-21 bin]#  vi /etc/supervisord.d/kube-apiserver.ini
  
[program:kube-apiserver-7-21]
command=/opt/kubernetes/server/bin/kube-apiserver.sh            ; the program (relative uses PATH, can take args)
numprocs=1                                                      ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                            ; directory to cwd to before exec (def no cwd)
autostart=true                                                  ; start at supervisord start (default: true)
autorestart=true                                                ; retstart at unexpected quit (default: true)
startsecs=30                                                    ; number of secs prog must stay running (def. 1)
startretries=3                                                  ; max # of serial start failures (default 3)
exitcodes=0,2                                                   ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                 ; signal used to kill process (default TERM)
stopwaitsecs=10                                                 ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                       ; setuid to this UNIX account to run the program
redirect_stderr=true                                            ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log        ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                        ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                     ; emit events on stdout writes (default false)

[root@zsf7-21 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver/

[root@zsf7-21 bin]# supervisorctl update
kube-apiserver-7-21: added process group
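
Once supervisor reports the process running, the apiserver should be listening on the secure port 6443 and the local insecure port 8080; a quick check (net-tools was installed earlier):

[root@zsf7-21 bin]# netstat -lntp | grep kube-apiserver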

Do the same on zsf7-22; the steps are listed together below.

[root@zsf7-22 src]# tar xvf kubernetes-server-linux-amd64.tar.gz  -C /opt/
[root@zsf7-22 src]# cd /opt/
[root@zsf7-22 opt]# mv kubernetes/ kubernetes-1.15.4
[root@zsf7-22 opt]# ln -s kubernetes-1.15.4/ kubernetes
[root@zsf7-22 opt]# cd /opt/kubernetes/server/bin
[root@zsf7-22 bin]# rm -f *.tar *_tag
[root@zsf7-22 bin]# mkdir /opt/kubernetes/server/bin/certs/

[root@zsf7-200 certs]# scp ca.pem ca-key.pem client-key.pem client.pem apiserver.pem apiserver-key.pem root@10.4.7.22:/opt/kubernetes/server/bin/certs/
[root@zsf7-22 bin]# ll certs/
total 24
-rw------- 1 root root 1679 Aug  5 16:31 apiserver-key.pem
-rw-r--r-- 1 root root 1594 Aug  5 16:31 apiserver.pem
-rw------- 1 root root 1675 Aug  5 16:31 ca-key.pem
-rw-r--r-- 1 root root 1338 Aug  5 16:31 ca.pem
-rw------- 1 root root 1675 Aug  5 16:31 client-key.pem
-rw-r--r-- 1 root root 1363 Aug  5 16:31 client.pem
[root@zsf7-22 bin]# mkdir conf

[root@zsf7-22 bin]# vi conf/audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
      
[root@zsf7-22 bin]# vim /opt/kubernetes/server/bin/kube-apiserver.sh 
#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ./conf/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./certs/ca.pem \
  --requestheader-client-ca-file ./certs/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./certs/ca.pem \
  --etcd-certfile ./certs/client.pem \
  --etcd-keyfile ./certs/client-key.pem \
  --etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --service-account-key-file ./certs/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./certs/client.pem \
  --kubelet-client-key ./certs/client-key.pem \
  --log-dir  /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./certs/apiserver.pem \
  --tls-private-key-file ./certs/apiserver-key.pem \
  --v 2

[root@zsf7-22 bin]# chmod +x /opt/kubernetes/server/bin/kube-apiserver.sh
[root@zsf7-22 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver/audit-log

[root@zsf7-22 bin]#  vi /etc/supervisord.d/kube-apiserver.ini
  
[program:kube-apiserver-7-22]
command=/opt/kubernetes/server/bin/kube-apiserver.sh            ; the program (relative uses PATH, can take args)
numprocs=1                                                      ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                            ; directory to cwd to before exec (def no cwd)
autostart=true                                                  ; start at supervisord start (default: true)
autorestart=true                                                ; retstart at unexpected quit (default: true)
startsecs=30                                                    ; number of secs prog must stay running (def. 1)
startretries=3                                                  ; max # of serial start failures (default 3)
exitcodes=0,2                                                   ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                 ; signal used to kill process (default TERM)
stopwaitsecs=10                                                 ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                       ; setuid to this UNIX account to run the program
redirect_stderr=true                                            ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log        ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                        ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                     ; emit events on stdout writes (default false)

[root@zsf7-22 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver/

[root@zsf7-22 bin]# supervisorctl update
kube-apiserver-7-22: added process group

Deploy apiserver high availability

1) Install nginx
[root@zsf7-11 ~]# yum -y install nginx
[root@zsf7-11 ~]# vim /etc/nginx/nginx.conf
// note: the following stream block must go outside the http block
stream {
    upstream kube-apiserver {
        server 10.4.7.21:6443     max_fails=3 fail_timeout=30s;
        server 10.4.7.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}

[root@zsf7-11 ~]# nginx -t 
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@zsf7-11 ~]# systemctl start  nginx && systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
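
A quick check that the layer-4 proxy is listening on 7443:

[root@zsf7-11 ~]# ss -lnt | grep 7443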




[root@zsf7-12 ~]# yum -y install nginx
[root@zsf7-12 ~]# vim /etc/nginx/nginx.conf
// note: the following stream block must go outside the http block
stream {
    upstream kube-apiserver {
        server 10.4.7.21:6443     max_fails=3 fail_timeout=30s;
        server 10.4.7.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}

[root@zsf7-12 ~]# nginx -t 
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@zsf7-12 ~]# systemctl start  nginx && systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
2) Install keepalived
[root@zsf7-11 ~]# yum -y install keepalived
[root@zsf7-11 ~]# vim /etc/keepalived/check_port.sh
#!/bin/bash
#keepalived port-check script
#Usage:
#in the keepalived configuration file:
#vrrp_script check_port {  #define a vrrp_script check
#    script "/etc/keepalived/check_port.sh 6379" #port to monitor
#    interval 2 #how often to run the check, in seconds
#}
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Cant Be Empty!"
fi

[root@zsf7-11 ~]# chmod +x /etc/keepalived/check_port.sh
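
The script can be tested by hand; with nginx listening on 7443 it exits 0, otherwise it prints a message and exits 1:

[root@zsf7-11 ~]# /etc/keepalived/check_port.sh 7443; echo $?
0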

keepalived master:
[root@zsf7-11 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id 10.4.7.11

}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33		// must match your actual interface name
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.4.7.11
    nopreempt
    authentication {
        auth_type PASS
        auth_pass apiserver-pass
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.4.7.10
    }
}
[root@zsf7-11 ~]# useradd  keepalived_script -s /sbin/nologin
[root@zsf7-11 ~]# systemctl start keepalived.service && systemctl enable keepalived.service
  • nopreempt: when the master fails the VIP moves to the backup; after the master comes back up, the VIP does not automatically move back
[root@zsf7-12 ~]# yum -y install keepalived
[root@zsf7-12 ~]# vim /etc/keepalived/check_port.sh
#!/bin/bash
#keepalived port-check script
#Usage:
#in the keepalived configuration file:
#vrrp_script check_port {  #define a vrrp_script check
#    script "/etc/keepalived/check_port.sh 6379" #port to monitor
#    interval 2 #how often to run the check, in seconds
#}
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Cant Be Empty!"
fi

[root@zsf7-12 ~]# chmod +x /etc/keepalived/check_port.sh

keepalived backup:
[root@zsf7-12 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
	router_id 10.4.7.12
}
vrrp_script chk_nginx {
	script "/etc/keepalived/check_port.sh 7443"
	interval 2
	weight -20
}
vrrp_instance VI_1 {
	state BACKUP
	interface ens33
	virtual_router_id 251
	mcast_src_ip 10.4.7.12
	priority 90
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass apiserver-pass
	}
	track_script {
		chk_nginx
	}
	virtual_ipaddress {
		10.4.7.10
	}
}
[root@zsf7-12 ~]# useradd  keepalived_script -s /sbin/nologin
[root@zsf7-12 ~]# systemctl start keepalived.service && systemctl enable keepalived.service
3) Verify failover works
[root@zsf7-11 keepalived]# ip addr | grep 10.4.7.10
    inet 10.4.7.10/32 scope global ens33
[root@zsf7-12 ~]# ip addr | grep 10.4.7.10
[root@zsf7-12 ~]# 
[root@zsf7-11 keepalived]# systemctl stop nginx
[root@zsf7-11 keepalived]# ip addr | grep 10.4.7.10
[root@zsf7-12 ~]# ip addr | grep 10.4.7.10
    inet 10.4.7.10/32 scope global ens33
The cluster is working as expected.

Deploy kube-controller-manager

1) Create the controller-manager startup script
[root@zsf7-21 bin]# vim /opt/kubernetes/server/bin/kube-controller-manager.sh
#!/bin/sh
./kube-controller-manager \
  --cluster-cidr 172.7.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./certs/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --root-ca-file ./certs/ca.pem \
  --v 2
 
[root@zsf7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
[root@zsf7-21 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager


[root@zsf7-22 bin]# vim /opt/kubernetes/server/bin/kube-controller-manager.sh
#!/bin/sh
./kube-controller-manager \
  --cluster-cidr 172.7.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./certs/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --root-ca-file ./certs/ca.pem \
  --v 2
 
[root@zsf7-22 bin]# chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
[root@zsf7-22 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager
  • cluster-cidr: CIDR range for pods in the cluster

  • leader-elect: start a leader-election client and gain leadership before running the main loop; enable this when running replicated components for high availability (default true)

  • master: address of the apiserver; here we use the local loopback insecure port, so no certificates are needed

  • service-account-private-key-file: private key used to sign service account tokens

[root@zsf7-21 bin]# vim /etc/supervisord.d/kube-controller-manager.ini
[program:kube-controller-manager-7-21]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                              ; directory to cwd to before exec (def no cwd)
autostart=true                                                                    ; start at supervisord start (default: true)
autorestart=true                                                                  ; retstart at unexpected quit (default: true)
startsecs=30                                                                      ; number of secs prog must stay running (def. 1)
startretries=3                                                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                                                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log  ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                                       ; emit events on stdout writes (default false)
[root@zsf7-21 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager/

[root@zsf7-21 bin]# supervisorctl update
kube-controller-manager-7-21: added process group



[root@zsf7-22 bin]# vim /etc/supervisord.d/kube-controller-manager.ini
[program:kube-controller-manager-7-22]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                              ; directory to cwd to before exec (def no cwd)
autostart=true                                                                    ; start at supervisord start (default: true)
autorestart=true                                                                  ; retstart at unexpected quit (default: true)
startsecs=30                                                                      ; number of secs prog must stay running (def. 1)
startretries=3                                                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                                                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log  ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                                       ; emit events on stdout writes (default false)
[root@zsf7-22 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager/

[root@zsf7-22 bin]# supervisorctl update
kube-controller-manager-7-22: added process group

kube-scheduler

1) Create the kube-scheduler startup script
[root@zsf7-21 bin]# vim /opt/kubernetes/server/bin/kube-scheduler.sh
#!/bin/sh
./kube-scheduler \
  --leader-elect  \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2
  
[root@zsf7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
[root@zsf7-21 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler

[root@zsf7-22 bin]# vim /opt/kubernetes/server/bin/kube-scheduler.sh
#!/bin/sh
./kube-scheduler \
  --leader-elect  \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2
  
[root@zsf7-22 bin]# chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
[root@zsf7-22 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler
2) Create the supervisor config
[root@zsf7-21 bin]# vim /etc/supervisord.d/kube-scheduler.ini
[program:kube-scheduler-7-21]
command=/opt/kubernetes/server/bin/kube-scheduler.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                               ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                     ; directory to cwd to before exec (def no cwd)
autostart=true                                                           ; start at supervisord start (default: true)
autorestart=true                                                         ; retstart at unexpected quit (default: true)
startsecs=30                                                             ; number of secs prog must stay running (def. 1)
startretries=3                                                           ; max # of serial start failures (default 3)
exitcodes=0,2                                                            ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                          ; signal used to kill process (default TERM)
stopwaitsecs=10                                                          ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                ; setuid to this UNIX account to run the program
redirect_stderr=true                                                     ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                             ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                              ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                              ; emit events on stdout writes (default false)
[root@zsf7-21 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler/
[root@zsf7-21 bin]# supervisorctl update
kube-scheduler-7-21: added process group


[root@zsf7-22 bin]# vim /etc/supervisord.d/kube-scheduler.ini
[program:kube-scheduler-7-22]
command=/opt/kubernetes/server/bin/kube-scheduler.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                               ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                     ; directory to cwd to before exec (def no cwd)
autostart=true                                                           ; start at supervisord start (default: true)
autorestart=true                                                         ; retstart at unexpected quit (default: true)
startsecs=30                                                             ; number of secs prog must stay running (def. 1)
startretries=3                                                           ; max # of serial start failures (default 3)
exitcodes=0,2                                                            ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                          ; signal used to kill process (default TERM)
stopwaitsecs=10                                                          ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                ; setuid to this UNIX account to run the program
redirect_stderr=true                                                     ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                             ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                              ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                              ; emit events on stdout writes (default false)
[root@zsf7-22 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler/
[root@zsf7-22 bin]# supervisorctl update
kube-scheduler-7-22: added process group

Check that the master components are healthy

[root@zsf7-21 bin]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
[root@zsf7-21 bin]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   

Install the worker node components

Install kubelet

1) Issue the certificate

If you add new nodes later whose IPs are not in the certificate, it must be reissued; you can leave the old nodes alone and deploy only the new one first, but updating them all is recommended.

The kubelet exposes an HTTPS service, so it needs a server certificate; the apiserver actively connects to it.

[root@zsf7-200 certs]# vi kubelet-csr.json
{
    "CN": "k8s-kubelet",
    "hosts": [
    "127.0.0.1",
    "10.4.7.10",
    "10.4.7.21",
    "10.4.7.22",
    "10.4.7.23",
    "10.4.7.24",
    "10.4.7.25",
    "10.4.7.26",
    "10.4.7.27",
    "10.4.7.28"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

[root@zsf7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet

[root@zsf7-200 certs]# ll kubelet*
-rw-r--r-- 1 root root 1115 Aug  5 18:12 kubelet.csr
-rw-r--r-- 1 root root  453 Aug  5 18:12 kubelet-csr.json
-rw------- 1 root root 1679 Aug  5 18:12 kubelet-key.pem
-rw-r--r-- 1 root root 1464 Aug  5 18:12 kubelet.pem

[root@zsf7-200 certs]# scp ca.pem  ca-key.pem  kubelet.pem kubelet-key.pem client.pem client-key.pem root@10.4.7.21:/opt/kubernetes/server/bin/certs
[root@zsf7-200 certs]# scp ca.pem  ca-key.pem  kubelet.pem kubelet-key.pem client.pem client-key.pem root@10.4.7.22:/opt/kubernetes/server/bin/certs
2) Create the kubelet kubeconfig

set-cluster

Note: run the following in the conf directory

[root@zsf7-21 conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
  --embed-certs=true \
  --server=https://10.4.7.10:7443 \
  --kubeconfig=kubelet.kubeconfig

set-credentials

[root@zsf7-21 conf]# kubectl config set-credentials k8s-node \
  --client-certificate=/opt/kubernetes/server/bin/certs/client.pem \
  --client-key=/opt/kubernetes/server/bin/certs/client-key.pem \
  --embed-certs=true \
  --kubeconfig=kubelet.kubeconfig 

set-context

[root@zsf7-21 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=kubelet.kubeconfig

use-context

[root@zsf7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
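
The resulting kubeconfig can be reviewed with kubectl (the certificates are embedded, so this single file is all the kubelet needs):

[root@zsf7-21 conf]# kubectl config view --kubeconfig=kubelet.kubeconfig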
3) Role binding
[root@zsf7-21 conf]# vim k8s-node.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

[root@zsf7-21 conf]# kubectl create -f k8s-node.yaml

[root@zsf7-21 conf]# kubectl get clusterrolebinding k8s-node -o yaml
4) Prepare the pause image

A pod is made up of its business containers plus a pause container, so the pause image must be available.

[root@zsf7-21 ~]# docker pull kubernetes/pause
[root@zsf7-21 ~]# docker tag docker.io/kubernetes/pause:latest harbor.zsf.com/public/pause:latest
[root@zsf7-21 ~]# docker push harbor.zsf.com/public/pause:latest
5) Create the kubelet startup script
[root@zsf7-21 ~]# vim /opt/kubernetes/server/bin/kubelet.sh

#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./certs/ca.pem \
  --tls-cert-file ./certs/kubelet.pem \
  --tls-private-key-file ./certs/kubelet-key.pem \
  --hostname-override zsf7-21.host.com \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.zsf.com/public/pause:latest \
  --root-dir /data/kubelet


[root@zsf7-21 ~]# mkdir -p /data/kubelet
[root@zsf7-21 ~]# mkdir -p /data/logs/kubernetes/kube-kubelet
[root@zsf7-21 ~]# chmod +x /opt/kubernetes/server/bin/kubelet.sh 
6) Create the supervisor config
[root@zsf7-21 bin]# vim /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-7-21]
command=/opt/kubernetes/server/bin/kubelet.sh     ; the program (relative uses PATH, can take args)
numprocs=1                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin              ; directory to cwd to before exec (def no cwd)
autostart=true                                    ; start at supervisord start (default: true)
autorestart=true                                  ; restart at unexpected quit (default: true)
startsecs=30                                      ; number of secs prog must stay running (def. 1)
startretries=3                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log   ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                       ; emit events on stdout writes (default false)

# supervisorctl update
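After supervisorctl update it is worth checking (a verification step I added, not in the original) that kubelet stayed up through the 30-second startsecs window and is writing logs:

[root@zsf7-21 bin]# supervisorctl status kube-kubelet-7-21
[root@zsf7-21 bin]# tail -n 20 /data/logs/kubernetes/kube-kubelet/kubelet.stdout.log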

Deploy the other node (zsf7-22)

[root@zsf7-22 conf]# scp zsf7-21:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig /opt/kubernetes/server/bin/conf/

[root@zsf7-22 ~]# vim /opt/kubernetes/server/bin/kubelet.sh
#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./certs/ca.pem \
  --tls-cert-file ./certs/kubelet.pem \
  --tls-private-key-file ./certs/kubelet-key.pem \
  --hostname-override zsf7-22.host.com \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.zsf.com/public/pause:latest \
  --root-dir /data/kubelet

[root@zsf7-22 ~]# mkdir -p /data/kubelet
[root@zsf7-22 ~]# mkdir -p /data/logs/kubernetes/kube-kubelet
[root@zsf7-22 ~]# chmod +x /opt/kubernetes/server/bin/kubelet.sh 


[root@zsf7-22 bin]# vim /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-7-22]
command=/opt/kubernetes/server/bin/kubelet.sh     ; the program (relative uses PATH, can take args)
numprocs=1                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin              ; directory to cwd to before exec (def no cwd)
autostart=true                                    ; start at supervisord start (default: true)
autorestart=true                                  ; restart at unexpected quit (default: true)
startsecs=30                                      ; number of secs prog must stay running (def. 1)
startretries=3                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log   ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                       ; emit events on stdout writes (default false)

[root@zsf7-22 conf]# supervisorctl update
kube-kubelet-7-22: added process group
7) Check that the nodes are up, then label them
// When I ran kubectl get nodes I did not get the result I expected, so I checked the kubelet log and found this error:
E0806 07:12:22.356765   14672 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "k8s-client" cannot list resource "runtimeclasses"
It means the k8s-client user has no permission to list that resource, so we grant it with the following binding:
[root@zsf7-21 conf]# kubectl create clusterrolebinding kubelet-node-clusterbinding --clusterrole=system:node --user=k8s-client

[root@zsf7-21 bin]#  kubectl get nodes
NAME               STATUS   ROLES    AGE    VERSION
zsf7-21.host.com   Ready    <none>   3m6s   v1.15.4
zsf7-22.host.com   Ready    <none>   7s     v1.15.4
// label the nodes
[root@zsf7-21 bin]# kubectl label node zsf7-21.host.com node-role.kubernetes.io/master=
[root@zsf7-21 bin]# kubectl label node zsf7-21.host.com node-role.kubernetes.io/node=
[root@zsf7-21 bin]# kubectl label node zsf7-22.host.com node-role.kubernetes.io/master=
[root@zsf7-21 bin]# kubectl label node zsf7-22.host.com node-role.kubernetes.io/node=
[root@zsf7-21 bin]#  kubectl get nodes
NAME               STATUS   ROLES         AGE     VERSION
zsf7-21.host.com   Ready    master,node   4m15s   v1.15.4
zsf7-22.host.com   Ready    master,node   76s     v1.15.4
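To see the raw labels behind the ROLES column (a check I added, not from the original):

[root@zsf7-21 bin]# kubectl get nodes --show-labels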

Install kube-proxy

1) Create the kube-proxy kubeconfig
conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
  --embed-certs=true \
  --server=https://10.4.7.10:7443 \
  --kubeconfig=kube-proxy.kubeconfig
  
conf]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/server/bin/certs/client.pem \
  --client-key=/opt/kubernetes/server/bin/certs/client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
  
conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
  
conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
2) Create the role binding
conf]# vim k8s-kube-proxy.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-proxy
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-proxy
3) Load the IPVS kernel modules
[root@zsf7-21 conf]# vim /root/ipvs.sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done
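The script only takes effect once it is executed; on this node I would run it and confirm the modules are loaded with lsmod (the check below is my addition, the same run is shown later on zsf7-22):

[root@zsf7-21 conf]# chmod +x /root/ipvs.sh
[root@zsf7-21 conf]# /root/ipvs.sh
[root@zsf7-21 conf]# lsmod | grep ip_vs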
4) Create the startup script
[root@zsf7-21 conf]# vim /opt/kubernetes/server/bin/kube-proxy.sh
#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override zsf7-21.host.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --log-dir /data/logs/kubernetes/kube-proxy \
  --kubeconfig ./conf/kube-proxy.kubeconfig
  
# chmod +x kube-proxy.sh
# mkdir -p /data/logs/kubernetes/kube-proxy

# vim /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-7-21]
command=/opt/kubernetes/server/bin/kube-proxy.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                           ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                 ; directory to cwd to before exec (def no cwd)
autostart=true                                                       ; start at supervisord start (default: true)
autorestart=true                                                     ; restart at unexpected quit (default: true)
startsecs=30                                                         ; number of secs prog must stay running (def. 1)
startretries=3                                                       ; max # of serial start failures (default 3)
exitcodes=0,2                                                        ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                      ; signal used to kill process (default TERM)
stopwaitsecs=10                                                      ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                            ; setuid to this UNIX account to run the program
redirect_stderr=true                                                 ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log     ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                         ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                             ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                          ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                          ; emit events on stdout writes (default false)

# supervisorctl update

After starting, the following error shows up in the log:
User "k8s-client" cannot list resource "endpoints" in API group "" at the cluster scope
[root@zsf7-21 bin]# kubectl get clusterrole | grep endpo
system:controller:endpoint-controller                                  17h
[root@zsf7-21 bin]# kubectl create clusterrolebinding kube-proxy-endpoints-clusterbinding --clusterrole=system:controller:endpoint-controller --user=k8s-client
5) Check whether IPVS has entries:
[root@zsf7-21 conf]# yum -y install ipvsadm-1.27-8.el7.x86_64
[root@zsf7-21 bin]# ipvsadm -Ln 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 10.4.7.21:6443               Masq    1      0          0         
  -> 10.4.7.22:6443               Masq    1      0          0

Deploy the other node (zsf7-22)

[root@zsf7-22 conf]# scp zsf7-21:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig ./
[root@zsf7-22 conf]# vim /opt/kubernetes/server/bin/kube-proxy.sh
#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override zsf7-22.host.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --log-dir /data/logs/kubernetes/kube-proxy \
  --kubeconfig ./conf/kube-proxy.kubeconfig
  
# chmod +x kube-proxy.sh
# mkdir -p /data/logs/kubernetes/kube-proxy

[root@zsf7-22 bin]# vim /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-7-22]
command=/opt/kubernetes/server/bin/kube-proxy.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                           ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                 ; directory to cwd to before exec (def no cwd)
autostart=true                                                       ; start at supervisord start (default: true)
autorestart=true                                                     ; restart at unexpected quit (default: true)
startsecs=30                                                         ; number of secs prog must stay running (def. 1)
startretries=3                                                       ; max # of serial start failures (default 3)
exitcodes=0,2                                                        ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                      ; signal used to kill process (default TERM)
stopwaitsecs=10                                                      ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                            ; setuid to this UNIX account to run the program
redirect_stderr=true                                                 ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log     ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                         ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                             ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                          ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                          ; emit events on stdout writes (default false)


[root@zsf7-22 conf]# vim /root/ipvs.sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done

[root@zsf7-22 kube-proxy]# chmod +x /root/ipvs.sh 
[root@zsf7-22 kube-proxy]# . /root/ipvs.sh 
[root@zsf7-22 bin]# supervisorctl update
6) Extension: kube-proxy can also be deployed with its own client certificate instead of reusing client.pem, as follows
// The CN below can be changed; it is the user name that RBAC sees. Kubernetes ships a default ClusterRoleBinding granting system:node-proxier to the user "system:kube-proxy", so with this CN no extra binding is needed.
# vi kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client

conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
  --embed-certs=true \
  --server=https://10.4.7.10:7443 \
  --kubeconfig=kube-proxy.kubeconfig
  
conf]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/server/bin/certs/kube-proxy-client.pem \
  --client-key=/opt/kubernetes/server/bin/certs/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
  
conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
  
conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig

Create the startup script and supervisor config the same way as in the steps above....

Set up kubectl command completion

[root@zsf7-21 bin]# yum -y install bash-co*
[root@zsf7-21 bin]# source <(kubectl completion bash)
[root@zsf7-21 bin]# echo "source <(kubectl completion bash)" >> ~/.bashrc

Verify the cluster status

Check the component and node status:

[root@zsf7-21 bin]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}               
[root@zsf7-21 bin]# kubectl get node
NAME               STATUS   ROLES         AGE    VERSION
zsf7-21.host.com   Ready    master,node   157m   v1.15.4
zsf7-22.host.com   Ready    master,node   154m   v1.15.4

Create a DaemonSet pod to verify the cluster works

[root@zsf7-21 test]# cat nginx-test.yml 
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: test-nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-test-nginx
        image: harbor.zsf.com/public/nginx:1.17.0
        ports:
        - containerPort: 80
[root@zsf7-21 test]# kubectl create -f nginx-test.yml

[root@zsf7-21 test]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP           NODE               NOMINATED NODE   READINESS GATES
test-nginx-7p5pc   1/1     Running   0          12m   172.7.22.2   zsf7-22.host.com   <none>           <none>
test-nginx-w4jvp   1/1     Running   0          12m   172.7.21.2   zsf7-21.host.com   <none>           <none>

curl  -i http://172.7.21.2/
HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Thu, 06 Aug 2020 02:35:04 GMT
Content-Type: text/html
Content-Length: 612

Install the Kubernetes core add-ons

Install the Kubernetes network plugin: flannel

We download the release from GitHub; the version installed here is v0.11.0.

1) Download, upload and unpack the package

[root@zsf7-21 ~]# cd /opt/src/
[root@zsf7-21 src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@zsf7-21 src]# mkdir /opt/flannel-v0.11.0
[root@zsf7-21 src]# tar xf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel-v0.11.0/
[root@zsf7-21 src]# ln -s /opt/flannel-v0.11.0/ /opt/flannel
[root@zsf7-21 flannel]# tree ./
./
├── flanneld
├── mk-docker-opts.sh
└── README.md

2) Configure the certificates

[root@zsf7-21 flannel]# mkdir certs
[root@zsf7-200 certs]# scp ca.pem client.pem client-key.pem zsf7-21:/opt/flannel/certs
[root@zsf7-21 flannel]# tree ./
./
├── certs
│   ├── ca.pem
│   ├── client-key.pem
│   └── client.pem
├── flanneld
├── mk-docker-opts.sh
└── README.md

3) Create the subnet env file (flanneld reads "/run/flannel/subnet.env" by default; here we point --subnet-file at ./subnet.env instead)

[root@zsf7-21 flannel]# vi subnet.env
FLANNEL_NETWORK=172.7.0.0/16			
FLANNEL_SUBNET=172.7.21.1/24			
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
  • FLANNEL_NETWORK: the overall pod network, covering the docker networks on all hosts
  • FLANNEL_SUBNET: the docker network segment of the current host
  • FLANNEL_IPMASQ: whether flannel should set up IP masquerading (SNAT) for traffic leaving the pod network; left false here

4) Create the flanneld startup script

[root@zsf7-21 flannel]# vim flanneld.sh 
#!/bin/bash
./flanneld \
  --public-ip=10.4.7.21 \
  --etcd-endpoints=https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --etcd-keyfile=./certs/client-key.pem \
  --etcd-certfile=./certs/client.pem \
  --etcd-cafile=./certs/ca.pem \
  --iface=ens33 \
  --subnet-file=./subnet.env \
  --healthz-port=2401
[root@zsf7-21 flannel]# mkdir -p /data/logs/flanneld
[root@zsf7-21 flannel]# chmod +x /opt/flannel/flanneld.sh
  • iface: change this to match the actual NIC name on your host
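If you are unsure of the interface name, a quick way to find the NIC that carries the 10.4.7.x address (helper commands I added, not in the original):

[root@zsf7-21 flannel]# ip -o -4 addr show | grep 10.4.7
[root@zsf7-21 flannel]# ip route get 10.4.7.254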

5) Create the supervisor config

~]# vi /etc/supervisord.d/flannel.ini
[program:flanneld-7-21]
command=/opt/flannel/flanneld.sh                             ; the program (relative uses PATH, can take args)
numprocs=1                                                   ; number of processes copies to start (def 1)
directory=/opt/flannel                                       ; directory to cwd to before exec (def no cwd)
autostart=true                                               ; start at supervisord start (default: true)
autorestart=true                                             ; restart at unexpected quit (default: true)
startsecs=30                                                 ; number of secs prog must stay running (def. 1)
startretries=3                                               ; max # of serial start failures (default 3)
exitcodes=0,2                                                ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                              ; signal used to kill process (default TERM)
stopwaitsecs=10                                              ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                    ; setuid to this UNIX account to run the program
redirect_stderr=true                                         ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/flanneld/flanneld.stdout.log       ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                 ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                     ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                  ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                  ; emit events on stdout writes (default false)

6) Pre-create flannel's network configuration in etcd; the backend type is set to host-gw here

[root@zsf7-21 etcd]# ./etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'
{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}

7) Start the flanneld service

[root@zsf7-21 etcd]# supervisorctl update
flanneld-7-21: added process group
[root@zsf7-21 flanneld]# supervisorctl status 
etcd-server-7-21                 RUNNING   pid 7699, uptime 5 days, 22:07:35
flanneld-7-21                    RUNNING   pid 36035, uptime 0:01:34
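With the host-gw backend flanneld simply programs a static route to the other node's pod subnet, so once both nodes are running you can verify it on the host (my addition); it should show something like 172.7.22.0/24 via 10.4.7.22 dev ens33:

[root@zsf7-21 flannel]# ip route | grep 172.7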

8) Repeat the same steps on the other node

Not spelled out again here; remember to change --public-ip, FLANNEL_SUBNET and the supervisor program name for zsf7-22.

9) Verify cross-host communication

When verifying the cluster earlier we created a DaemonSet; now we test whether its pods can reach each other across hosts.

[root@zsf7-21 flannel]# kubectl get pods  -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP           NODE               NOMINATED NODE   READINESS GATES
test-nginx-lm7l8   1/1     Running   0          5h    172.7.22.2   zsf7-22.host.com   <none>           <none>
test-nginx-wpwz9   1/1     Running   0          5h    172.7.21.2   zsf7-21.host.com   <none>           <none>
[root@zsf7-21 test]# curl -i 172.7.22.2
HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Mon, 10 Aug 2020 07:45:38 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 07 Jul 2020 15:52:25 GMT
Connection: keep-alive
ETag: "5f049a39-264"
......

That works from the host; next we exec into a pod and test from inside it.

[root@zsf7-21 test]# kubectl exec  -it test-nginx-wpwz9 bash
root@test-nginx-wpwz9:/# curl -i 172.7.22.2
HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Mon, 10 Aug 2020 07:46:53 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 07 Jul 2020 15:52:25 GMT
Connection: keep-alive
ETag: "5f049a39-264"

Then we check the nginx log of the pod running on the 22 machine:

[root@zsf7-21 test]# kubectl logs test-nginx-lm7l8
10.4.7.21 - - [10/Aug/2020:07:45:38 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
10.4.7.21 - - [10/Aug/2020:07:46:53 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"

10) flanneld network optimization

As you can see, even though the request was made from inside a pod, the nginx log still records the host IP. That is not what we want, and the cause is this rule:

[root@zsf7-21 test]# iptables-save | grep POSTROUTING
-A POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE
This iptables rule means: any traffic with source 172.7.21.0/24 that does not leave via docker0 gets NAT (MASQUERADE).

We want pod-to-pod traffic to keep its real source address (no NAT), so we do the following:

[root@zsf7-21 test]#  yum install iptables-services -y
[root@zsf7-21 test]# iptables -t nat -D POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE  // delete the old rule
[root@zsf7-21 test]# iptables -t nat -I POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE  // MASQUERADE only traffic from 172.7.21.0/24 whose destination is not 172.7.0.0/16 and that does not leave via docker0
[root@zsf7-21 test]# iptables-save |grep -i postrouting
:POSTROUTING ACCEPT [5:302]
:KUBE-POSTROUTING - [0:0]
-A POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
[root@zsf7-21 test]# iptables-save > /etc/sysconfig/iptables // save the current rules

Request again and check whether pod-to-pod traffic now keeps the pod source IP:

[root@zsf7-21 test]# kubectl logs test-nginx-lm7l8 
10.4.7.21 - - [10/Aug/2020:07:45:38 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
10.4.7.21 - - [10/Aug/2020:07:46:53 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
172.7.21.2 - - [10/Aug/2020:07:59:04 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
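One caveat (my addition): the saved rules are only loaded at boot if the iptables unit from the iptables-services package installed above is enabled; the runtime rules added by docker and kube-proxy are recreated by those components anyway.

[root@zsf7-21 test]# systemctl enable iptables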

Deploy the CoreDNS add-on

1. Deploy CoreDNS through Kubernetes resources

The YAML below is based on the upstream file in the kubernetes repo and adapted to our environment; other add-ons can be handled the same way. Source: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base

1) Create the RBAC objects
mkdir coredns
cd coredns
vim coredns-rbac.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
2) Create the ConfigMap

Official reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/dns-custom-nameservers/

vim coredns-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        log
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local 192.168.0.0/16
        prometheus :9153
        forward . 10.4.7.11 
        cache 30
        loop
        reload
        loadbalance
    }
  • prometheus: the metrics port exposed for Prometheus monitoring
3) Create the Deployment and start CoreDNS
vim coredns-dm.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.zsf.com/public/coredns:v1.6.1
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
4) Create the Service
vim coredns-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
5) Build and push the image
[root@zsf7-21 ~]# docker pull coredns/coredns:1.6.1
[root@zsf7-21 ~]# docker tag c0f6e815079e harbor.zsf.com/public/coredns:v1.6.1
[root@zsf7-21 ~]# docker push harbor.zsf.com/public/coredns:v1.6.1
6) Create the resources
kubectl create -f coredns-rbac.yml
kubectl create -f coredns-cm.yaml
kubectl create -f coredns-dm.yaml
kubectl create -f coredns-svc.yaml
7) Check that DNS resolution works
[root@zsf7-21 coredns]# dig -t A zsf7-21.host.com @192.168.0.2 +short
10.4.7.21
[root@zsf7-21 coredns]# dig -t A kubernetes.default.svc.cluster.local. @192.168.0.2 +short
192.168.0.1
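Beyond dig, a quick sanity check (my addition) is to confirm the Service really got the fixed clusterIP 192.168.0.2 that kubelet's --cluster-dns points at, and that the CoreDNS pod is running:

[root@zsf7-21 coredns]# kubectl -n kube-system get svc coredns
[root@zsf7-21 coredns]# kubectl -n kube-system get pods -l k8s-app=coredns -o wide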

Install the ingress controller: traefik

1) Build the traefik image

[root@zsf7-21 ~]# docker pull traefik:v1.7.2-alpine
[root@zsf7-21 test]# docker tag traefik:v1.7.2-alpine harbor.zsf.com/public/traefik:v1.7.2-alpine
[root@zsf7-21 test]# docker push harbor.zsf.com/public/traefik:v1.7.2-alpine

2) Create the traefik RBAC file

All of the files below were taken from GitHub and adapted to our environment: https://github.com/containous/traefik/tree/v1.7/examples/k8s

vim traefik-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system

3) Create the DaemonSet file

vim traefik-ds.yml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: harbor.zsf.com/public/traefik:v1.7.2-alpine
        name: traefik-ingress-lb
        ports:
        - name: controller
          containerPort: 80
          hostPort: 81
        - name: admin-web
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --insecureskipverify=true
        - --kubernetes.endpoint=https://10.4.7.10:7443
        - --accesslog
        - --accesslog.filepath=/var/log/traefik_access.log
        - --traefiklog
        - --traefiklog.filepath=/var/log/traefik.log
        - --metrics.prometheus

4) Create the Service YAML

vim traefik-svc.yml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: controller
    - protocol: TCP
      port: 8080
      name: admin-web

5) Create the Ingress YAML

vim traefik-ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik.zsf.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-ingress-service
          servicePort: admin-web
6) Apply all of the YAML files above with kubectl create -f

7) Create a load balancer on the front-end nginx proxy
upstream default_backend_traefik {
    server 10.4.7.21:81    max_fails=3 fail_timeout=10s;
    server 10.4.7.22:81    max_fails=3 fail_timeout=10s;
}
server {
    server_name *.zsf.com;
  
    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host       $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
8) Add the DNS record (a sketch follows below)
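A minimal sketch of that DNS record, assuming traefik.zsf.com should resolve to the address your front-end nginx listens on (10.4.7.11 below is only an illustrative value; use your own proxy/VIP address): append to the zsf.com zone on zsf7-11, bump the serial, and reload named.

[root@zsf7-11 ~]# vim /var/named/zsf.com.zone
traefik            A    10.4.7.11
[root@zsf7-11 ~]# systemctl restart named
[root@zsf7-11 ~]# dig -t A traefik.zsf.com @10.4.7.11 +short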

Install the dashboard

As with the add-ons above, we start from the official YAML on GitHub and adapt it: https://github.com/kubernetes/kubernetes/tree/release-1.15/cluster/addons/dashboard

1) Build the dashboard image

[root@zsf7-21 opt]# docker pull k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
The image address in the official config cannot be pulled reliably from inside China, so we pull this mirror instead:
[root@zsf7-21 opt]# docker pull registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
[root@zsf7-21 opt]# docker tag registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 harbor.zsf.com/public/kubernetes-dashboard-amd64:v1.10.1
[root@zsf7-21 opt]# docker push harbor.zsf.com/public/kubernetes-dashboard-amd64:v1.10.1

2) Create the RBAC file

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

3) Create the Deployment file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: harbor.zsf.com/public/kubernetes-dashboard-amd64:v1.10.1
        env:
        - name: ACCEPT_LANGUAGE
          value: english
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"

In this file I added the following snippet:

        env:
        - name: ACCEPT_LANGUAGE
          value: english

It forces the dashboard to display in English; otherwise it picks a language from your locale. I recommend English, because the Chinese translation is genuinely hard to follow.

4) Create the Service

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443

5) Create the Secrets and ConfigMap

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-settings
  namespace: kube-system

6) Create the Ingress

vim dashboard-ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  rules:
  - host: k8s-dashboard.zsf.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

7) Configure nginx

The dashboard has to be accessed over https while our backend speaks plain http, so the front-end proxy needs an https server block; without it the page loads but the token cannot be validated and login fails.

upstream https_default_backend_traefik {
    server 192.168.4.74:81    max_fails=3 fail_timeout=10s;
    server 192.168.4.75:81    max_fails=3 fail_timeout=10s;
    server 192.168.4.76:81    max_fails=3 fail_timeout=10s;
}
server {
    server_name *.zsf.com;
    listen 443 ssl;
    ssl_certificate   /etc/nginx/conf.d/certs/zsf.com.pem;
    ssl_certificate_key  /etc/nginx/conf.d/certs/zsf.com-key.pem;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    location / {
        proxy_pass http://https_default_backend_traefik;
        proxy_set_header Host       $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}

The certificate above can be self-signed, or you can use your company's certificate.

8) Add the DNS record

9) Test access

https://k8s-dashboard.zsf.com/#!/login

(screenshot: the dashboard login page)

We choose token login here, so we need to create an account and fetch its token.

10) Create an authorized user and get its token

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Get the token:

$ kubectl describe -n kube-system secrets $(kubectl get -n kube-system  secrets  | awk '$1~/admin-user/{print $1}' )

Name:         admin-user-token-jtq8f
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 2f6ff8c2-9cf1-4b4a-b8c6-73cf5367ae5b

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1326 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWp0cThmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyZjZmZjhjMi05Y2YxLTRiNGEtYjhjNi03M2NmNTM2N2FlNWIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.LdPnfwKvxLeYLfcS1BZDY1ubY5GUlEOThsxct6AcQnTuQfn7iN6wViLhTqTsUowmMswIl1tkaIGdPmFG4i9nJCEz0Hu9-veGnPy_pWUJtNjM5IJgqeQef2whzLJLcPSbSMYSiNTz_P9QhtMON92ujhvIq-cM9lSXXp6GmPctBqiNASJM0A_y7F_oxa8Q_ccfT83AdO4OZVcrMoz32YLJIxaPtqw7wLEbP8QgDEp2Gonq51Qb_uH__PC4rLE7CCXajVeo4zkqGE8_0ASopZ36C8ZNkujR6oCEmH3XkEwJ6FI3kTjrlMUZTsGf2UZn1N44rs1uql0D8EdhwsTnYRsp5A

Then take the token and use it to log in on the page.
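If you only want the raw token rather than the whole describe output, a shorter form I find handy (not in the original) is:

$ kubectl -n kube-system get secret $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d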

(screenshot: the dashboard after logging in with the token)
