K8S(01) Binary Deployment in Practice - 1.15.5
About this series
This series is essentially my notes from 老男孩's 2019 K8S weekend class taught by 王硕. The notes are best read alongside the videos; otherwise some parts will be hard to follow.
Contact me if you need the videos.
- About this series
- 1 Deployment architecture
- 2 Deployment preparation
- 3 Deploying the master nodes: etcd service
- 4 Deploying the master nodes: kube-apiserver service
- 5 Deploying a layer-4 reverse proxy for the apiserver
- 6 Deploying the worker nodes
- 7 Verifying the Kubernetes cluster
1 Deployment architecture
1.1 Architecture diagram
Architecture notes:
- at least 3 etcd members form a highly available cluster
- two proxy machines form an HA pair exposing a VIP
- two machines serve as both master and node
- the ops host is not part of the K8S stack itself, but supports it
1.2 Choosing an installation method
- Minikube: preview use, learning only
- Binary installation (first choice for production, recommended for beginners)
- kubeadm installation
Simple; runs k8s on k8s itself. Recommended for experienced users.
Not recommended for beginners because it is easy to get things working without understanding why,
leaving you with no idea where to look when something breaks.
2 Deployment preparation
2.1 Prerequisites
Prepare 5 VMs with 2C/2G/50G each, on network 10.4.7.0/24
Pre-install CentOS 7.4 with base OS tuning done
Install and configure bind9 as a self-hosted DNS
Prepare the self-signed certificate environment
Install docker and a harbor registry
Machine list
Hostname | IP address | Role |
---|---|---|
hdss7-11 | 10.4.7.11 | proxy1 |
hdss7-12 | 10.4.7.12 | proxy2 |
hdss7-21 | 10.4.7.21 | master1 |
hdss7-22 | 10.4.7.22 | master2 |
hdss7-200 | 10.4.7.200 | ops host |
Basic deployment software
[root@hdss7-11 ~]# hostname
hdss7-11
[root@hdss7-11 ~]# getenforce
Disabled
[root@hdss7-11 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=10.4.7.11
NETMASK=255.255.255.0
GATEWAY=10.4.7.254
DNS1=10.4.7.254
[root@hdss7-11 ~]# yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix -y
2.2 Deploy the bind9 DNS service
2.2.1 Install and configure the DNS service
Deploy the bind DNS service on 7.11
yum install bind bind-utils -y
Edit and validate the configuration file
[root@hdss7-11 ~]# vim /etc/named.conf
listen-on port 53 { 10.4.7.11; };
allow-query { any; };
forwarders { 10.4.7.254; }; # upstream DNS address (the gateway or a public DNS)
recursion yes;
dnssec-enable no;
dnssec-validation no;
[root@hdss7-11 ~]# named-checkconf
2.2.2 Add the custom zones and their configuration
Add the custom zones to the zone configuration
cat >>/etc/named.rfc1912.zones <<'EOF'
# add the custom host zone
zone "host.com" IN {
type master;
file "host.com.zone";
allow-update { 10.4.7.11; };
};
# add the custom business zone
zone "zq.com" IN {
type master;
file "zq.com.zone";
allow-update { 10.4.7.11; };
};
EOF
host.com and zq.com are both domains we define ourselves; host.com generally serves as the host zone.
zq.com is the business zone; you can configure several, one per line of business.
Create the zone file for the custom zone host.com:
cat >/var/named/host.com.zone <<'EOF'
$ORIGIN host.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.host.com. dnsadmin.host.com. (
2020041601 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.host.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
HDSS7-11 A 10.4.7.11
HDSS7-12 A 10.4.7.12
HDSS7-21 A 10.4.7.21
HDSS7-22 A 10.4.7.22
HDSS7-200 A 10.4.7.200
EOF
Create the zone file for the custom zone zq.com:
cat >/var/named/zq.com.zone <<'EOF'
$ORIGIN zq.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.zq.com. dnsadmin.zq.com. (
2020041601 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.zq.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
EOF
The host.com zone is used for host-to-host communication, so all hosts are added to it up front.
The zq.com zone is used later for business-level resolution, so no hosts are needed yet. Both zone files can be validated as shown below.
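Before starting named, the zone files themselves can be checked (a minimal sketch; named-checkzone ships with the bind package):
# validate each zone file against its zone name
named-checkzone host.com /var/named/host.com.zone
named-checkzone zq.com /var/named/zq.com.zone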
2.2.3 Start and verify the DNS service
Check the configuration once more and start the DNS service
[root@hdss7-11 ~]# named-checkconf
[root@hdss7-11 ~]# systemctl start named
[root@hdss7-11 ~]# ss -lntup|grep 53
udp UNCONN 0 0 10.4.7.11:53
udp UNCONN 0 0 :::53
tcp LISTEN 0 10 10.4.7.11:53
tcp LISTEN 0 128 127.0.0.1:953
tcp LISTEN 0 10 :::53
tcp LISTEN 0 128 ::1:953
# verify the results
[root@hdss7-11 ~]# dig -t A hdss7-11.host.com @10.4.7.11 +short
10.4.7.11
[root@hdss7-11 ~]# dig -t A hdss7-21.host.com @10.4.7.11 +short
10.4.7.21
2.2.4 Update network settings on all hosts
All 5 K8S hosts need their network configuration changed as follows
# point DNS at the new server and add the search domain
sed -i 's#^DNS.*#DNS1=10.4.7.11#g' /etc/sysconfig/network-scripts/ifcfg-eth0
echo "DOMAIN=host.com" >>/etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network
# check the DNS configuration
~]# cat /etc/resolv.conf
# Generated by NetworkManager
search host.com
nameserver 10.4.7.11
~]# dig -t A hdss7-21.host.com +short
10.4.7.21
# be sure resolv.conf actually contains the search line
The Windows host machine needs updating too:
change DNS on the VMnet8 adapter to 10.4.7.11
# the ping below must succeed; troubleshoot otherwise
ping hdss7-200.host.com
2.3 Prepare the self-signed certificate environment
These steps are done on the ops host, 7.200.
2.3.1 Download and install cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*
2.3.2 Create the CA CSR config file
mkdir /opt/certs
cat >/opt/certs/ca-csr.json <<EOF
{
"CN": "zqcd",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "chengdu",
"L": "chengdu",
"O": "zq",
"OU": "ops"
}
],
"ca": {
"expiry": "175200h"
}
}
EOF
CN: Common Name. Browsers use this field to decide whether a site is legitimate; it usually holds the domain name. Very important.
C: Country
ST: State or province
L: Locality (city)
O: Organization Name (company)
OU: Organizational Unit Name (department)
2.3.3 Generate the CA certificate
cd /opt/certs
cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
[root@hdss7-200 certs]# ll
total 16
-rw-r--r-- 1 root root 989 Apr 16 20:53 ca.csr
-rw-r--r-- 1 root root 324 Apr 16 20:52 ca-csr.json
-rw------- 1 root root 1679 Apr 16 20:53 ca-key.pem
-rw-r--r-- 1 root root 1330 Apr 16 20:53 ca.pem
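To double-check the fields that went into the CA, the certificate can be dumped back out (a minimal sketch using the cfssl-certinfo binary installed in 2.3.1):
# prints subject, issuer, validity and SANs as JSON
cfssl-certinfo -cert ca.pem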
2.4 Prepare the docker environment
2.4.1 Install and configure docker
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
mkdir /etc/docker/
cat >/etc/docker/daemon.json <<EOF
{
"graph": "/data/docker",
"storage-driver": "overlay2",
"insecure-registries": ["registry.access.redhat.com","quay.io","harbor.zq.com"],
"registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
"bip": "172.7.21.1/24",
"exec-opts": ["native.cgroupdriver=systemd"],
"live-restore": true
}
EOF
Note: bip must follow the host's IP; a sketch for deriving it automatically follows this list:
hdss7-21.host.com bip 172.7.21.1/24
hdss7-22.host.com bip 172.7.22.1/24
hdss7-200.host.com bip 172.7.200.1/24
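Since bip just embeds the host's final octet, it can be derived instead of edited by hand (a minimal sketch, assuming the 10.4.7.x address sits on eth0):
# compute bip as 172.7.<last octet>.1/24 and patch daemon.json in place
ip=$(ip -4 addr show eth0 | awk '/inet /{print $2}' | cut -d/ -f1)
sed -i "s#\"bip\": .*#\"bip\": \"172.7.${ip##*.}.1/24\",#" /etc/docker/daemon.json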
2.4.2 Start docker
mkdir -p /data/docker
systemctl start docker
systemctl enable docker
docker --version
2.5 Deploy the harbor private registry
Download: https://github.com/goharbor/harbor/releases/download/v1.8.5/harbor-offline-installer-v1.8.5.tgz
2.5.1 Download and extract
tar xf harbor-offline-installer-v1.8.5.tgz -C /opt/
cd /opt/
mv harbor/ harbor-v1.8.5
ln -s /opt/harbor-v1.8.5/ /opt/harbor
2.5.2 Edit the config file
[root@hdss7-200 opt]# vi /opt/harbor/harbor.yml
# the items below are what change; edit them by hand in the file
hostname: harbor.zq.com
http:
port: 180
harbor_admin_password: Harbor12345
data_volume: /data/harbor
log:
level: info
rotate_count: 50
rotate_size: 200M
location: /data/harbor/logs
[root@hdss7-200 opt]# mkdir -p /data/harbor/logs
2.5.3 Start harbor with docker-compose
[root@hdss7-200 opt]# cd /opt/harbor/
yum install docker-compose -y
sh /opt/harbor/install.sh
docker-compose ps
docker ps -a
2.5.4 Resolve harbor via DNS
On the DNS server, 7.11:
[root@hdss7-11 ~]# vi /var/named/zq.com.zone
2020032002 ; serial # bump this serial every time the zone data changes
harbor A 10.4.7.200
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A harbor.zq.com +short
10.4.7.200
2.5.5 Reverse-proxy harbor with nginx
Back on the ops host, 7.200:
[root@hdss7-200 harbor]# yum install nginx -y
[root@hdss7-200 harbor]# vi /etc/nginx/conf.d/harbor.zq.com.conf
server {
listen 80;
server_name harbor.zq.com;
client_max_body_size 1000m;
location / {
proxy_pass http://127.0.0.1:180;
}
}
[root@hdss7-200 harbor]# nginx -t
[root@hdss7-200 harbor]# systemctl start nginx
[root@hdss7-200 harbor]# systemctl enable nginx
Open harbor.zq.com in a browser
Username: admin  Password: Harbor12345
Create a project: public, access level: public
2.5.6 Pre-stage the pause/nginx base images
The pause image is what k8s uses when starting a pod, to create the shared resources (such as namespaces) ahead of the containers.
The nginx image is used to test pod creation once k8s is up.
docker login harbor.zq.com -uadmin -pHarbor12345
docker pull kubernetes/pause
docker pull nginx:1.17.9
docker tag kubernetes/pause:latest harbor.zq.com/public/pause:latest
docker tag nginx:1.17.9 harbor.zq.com/public/nginx:v1.17.9
docker push harbor.zq.com/public/pause:latest
docker push harbor.zq.com/public/nginx:v1.17.9
2.6 Prepare an nginx file server
Create an nginx virtual host to serve files, relying mainly on nginx's autoindex directive.
2.6.1 Create the file service
On 7.200
# create the config
cat >/etc/nginx/conf.d/k8s-yaml.zq.com.conf <<EOF
server {
listen 80;
server_name k8s-yaml.zq.com;
location / {
autoindex on;
default_type text/plain;
root /data/k8s-yaml;
}
}
EOF
# create the data dir and reload nginx
mkdir -p /data/k8s-yaml/coredns
nginx -t
nginx -s reload
2.6.2 Add the DNS record
On the bind9 DNS server, 7.11, add a record
vi /var/named/zq.com.zone
# append one A record at the end
k8s-yaml A 10.4.7.200
# and bump the serial at the same time
@ IN SOA dns.zq.com. dnsadmin.zq.com. (
2019061803 ; serial
Restart the service and verify:
systemctl restart named
[root@hdss7-11 ~]# dig -t A k8s-yaml.zq.com +short
10.4.7.200
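A quick end-to-end test of the file service (a minimal sketch; test.txt is a throwaway file invented for this check):
echo 'hello' >/data/k8s-yaml/test.txt
curl http://k8s-yaml.zq.com/test.txt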
3 Deploying the master nodes: etcd service
3.1 Deploy the etcd cluster
Install the etcd service on 12/21/22; node 11 is held back as a standby.
3.1.1 Create the CA signing config (JSON)
On 7.200.
One config contains the profiles needed for server-side, client-side, and mutual (peer) communication; later, different certificates are created by passing a different -profile against this config.
cat >/opt/certs/ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"server": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"client": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
},
"peer": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
Certificate lifetimes are set to a uniform 10 years, so expiry is not a worry.
Certificate types:
client certificate: used by a client so the server can authenticate it, e.g. etcdctl, etcd proxy, fleetctl, the docker client
server certificate: used by a server so clients can verify its identity, e.g. the docker daemon, kube-apiserver
peer certificate: a dual-purpose certificate for communication between etcd cluster members
3.1.2 Create the self-signed certificate request (CSR) JSON config
Note:
every machine that might ever run etcd must be included in the hosts list;
otherwise, adding a machine not on the list later means reissuing certificates for the whole etcd cluster.
cat >/opt/certs/etcd-peer-csr.json <<EOF
{
"CN": "k8s-etcd",
"hosts": [
"10.4.7.11",
"10.4.7.12",
"10.4.7.21",
"10.4.7.22"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "zq",
"OU": "ops"
}
]
}
EOF
3.1.3 Generate the etcd certificate files
cd /opt/certs/
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
-config=ca-config.json -profile=peer \
etcd-peer-csr.json |cfssl-json -bare etcd-peer
[root@hdss7-200 certs]# ll
total 36
-rw-r--r-- 1 root root 837 Apr 19 15:35 ca-config.json
-rw-r--r-- 1 root root 989 Apr 16 20:53 ca.csr
-rw-r--r-- 1 root root 324 Apr 16 20:52 ca-csr.json
-rw------- 1 root root 1679 Apr 16 20:53 ca-key.pem
-rw-r--r-- 1 root root 1330 Apr 16 20:53 ca.pem
-rw-r--r-- 1 root root 1062 Apr 19 15:35 etcd-peer.csr
-rw-r--r-- 1 root root 363 Apr 19 15:35 etcd-peer-csr.json
-rw------- 1 root root 1679 Apr 19 15:35 etcd-peer-key.pem
-rw-r--r-- 1 root root 1419 Apr 19 15:35 etcd-peer.pem
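The peer profile should have produced a certificate valid for both sides of a TLS connection, which openssl can confirm (a minimal sketch):
# expect: TLS Web Server Authentication, TLS Web Client Authentication
openssl x509 -in etcd-peer.pem -noout -text | grep -A1 'Extended Key Usage'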
3.2 Install and start the etcd cluster
7.12 is used as the walkthrough; the other 2 machines are nearly identical, and every setting that differs is called out explicitly.
3.2.1 Create the etcd user and install the software
etcd releases: https://github.com/etcd-io/etcd/tags
The 3.1 series is recommended; newer releases caused problems in this course
useradd -s /sbin/nologin -M etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.1.20/etcd-v3.1.20-linux-amd64.tar.gz
tar xf etcd-v3.1.20-linux-amd64.tar.gz -C /opt/
cd /opt/
mv etcd-v3.1.20-linux-amd64/ etcd-v3.1.20
ln -s /opt/etcd-v3.1.20/ /opt/etcd
3.2.2 Create directories and copy certificates
Create the certificate, data, and log directories
mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
chown -R etcd.etcd /opt/etcd-v3.1.20/
chown -R etcd.etcd /data/etcd/
chown -R etcd.etcd /data/logs/etcd-server/
Copy over the generated certificate files
cd /opt/etcd/certs
scp hdss7-200:/opt/certs/ca.pem .
scp hdss7-200:/opt/certs/etcd-peer.pem .
scp hdss7-200:/opt/certs/etcd-peer-key.pem .
chown -R etcd.etcd /opt/etcd/certs
It is also fine to set up an NFS share first and copy them from there.
3.2.3 Create the etcd startup script
Flag reference: https://blog.csdn.net/kmhysoft/article/details/71106995
cat >/opt/etcd/etcd-server-startup.sh <<'EOF'
#!/bin/sh
./etcd \
--name etcd-server-7-12 \
--data-dir /data/etcd/etcd-server \
--listen-peer-urls https://10.4.7.12:2380 \
--listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--initial-advertise-peer-urls https://10.4.7.12:2380 \
--advertise-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
--initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
--ca-file ./certs/ca.pem \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
EOF
[root@hdss7-12 ~]# chmod +x /opt/etcd/etcd-server-startup.sh
Note: several options in the startup script above differ on each server:
--name #node name
--listen-peer-urls #address to listen on for other members
--listen-client-urls #address to listen on for etcd clients
--initial-advertise-peer-urls #address advertised to other members
--advertise-client-urls #address advertised to etcd clients
3.2.4 Start etcd with supervisor
Install the supervisor package
yum install supervisor -y
systemctl start supervisord
systemctl enable supervisord
Create the supervisor config that manages etcd
Config reference: https://www.jianshu.com/p/53b5737534e8
cat >/etc/supervisord.d/etcd-server.ini <<EOF
[program:etcd-server] ; program name shown by supervisor, similar to my.cnf sections; there can be several
command=sh /opt/etcd/etcd-server-startup.sh
numprocs=1 ; number of processes to start (def 1)
directory=/opt/etcd ; directory to cd into before starting (def no cwd)
autostart=true ; start when supervisord starts (default: true)
autorestart=true ; restart automatically (default: true)
startsecs=30 ; seconds the process must stay up to count as started (def. 1)
startretries=3 ; start retry attempts (default 3)
exitcodes=0,2 ; expected exit codes (default 0,2)
stopsignal=QUIT ; stop signal (default TERM)
stopwaitsecs=10 ; seconds to wait after the stop signal (default 10)
user=etcd ; user to run as
redirect_stderr=true ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log
stdout_logfile_maxbytes=64MB ; max log file size (default 50MB)
stdout_logfile_backups=4 ; number of rotated log files (default 10)
stdout_capture_maxbytes=1MB ; capture pipe size (default 0)
;the child spawns children of its own; these settings avoid leaving orphans behind
killasgroup=true
stopasgroup=true
EOF
Start the etcd service and check it
supervisorctl update
supervisorctl status
netstat -lntup|grep etcd
3.2.5 Deploy and start the other cluster members
The steps match 7.12 apart from the per-node values; see the sketch below.
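A minimal sketch for 7.21 (7.22 is analogous); the --initial-cluster line must keep all three original members, so it is excluded from the IP substitution:
# on hdss7-21: patch only the per-node name and addresses
sed -i \
  -e 's/--name etcd-server-7-12/--name etcd-server-7-21/' \
  -e '/--initial-cluster /!s/10.4.7.12/10.4.7.21/g' \
  /opt/etcd/etcd-server-startup.sh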
3.2.6 Check cluster health
[root@hdss7-12 certs]# /opt/etcd/etcdctl cluster-health
member 988139385f78284 is healthy: got healthy result from http://127.0.0.1:2379
member 5a0ef2a004fc4349 is healthy: got healthy result from http://127.0.0.1:2379
member f4a0cb0a765574a8 is healthy: got healthy result from http://127.0.0.1:2379
[root@hdss7-12 certs]# /opt/etcd/etcdctl member list
988139385f78284: name=etcd-server-7-22 peerURLs=https://10.4.7.22:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.22:2379 isLeader=false
5a0ef2a004fc4349: name=etcd-server-7-21 peerURLs=https://10.4.7.21:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.21:2379 isLeader=false
f4a0cb0a765574a8: name=etcd-server-7-12 peerURLs=https://10.4.7.12:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=true
4 Deploying the master nodes: kube-apiserver service
Release page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md
Download links:
https://dl.k8s.io/v1.15.5/kubernetes-server-linux-amd64.tar.gz
https://dl.k8s.io/v1.15.5/kubernetes-client-linux-amd64.tar.gz
https://dl.k8s.io/v1.15.5/kubernetes-node-linux-amd64.tar.gz
4.1 Issue the client certificate
Certificate signing is all done on 7.200.
This certificate is used for communication between the apiserver and etcd.
4.1.1 Create the CSR JSON config
cat >/opt/certs/client-csr.json <<EOF
{
"CN": "k8s-node",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "zq",
"OU": "ops"
}
]
}
EOF
4.1.2 Generate the client certificate files
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=client \
client-csr.json |cfssl-json -bare client
[root@hdss7-200 certs]# ll|grep client
-rw-r--r-- 1 root root 993 Apr 20 21:30 client.csr
-rw-r--r-- 1 root root 280 Apr 20 21:30 client-csr.json
-rw------- 1 root root 1675 Apr 20 21:30 client-key.pem
-rw-r--r-- 1 root root 1359 Apr 20 21:30 client.pem
4.2 Issue the kube-apiserver certificate
This certificate secures the service the apiserver exposes externally.
4.2.1 Create the CSR JSON config
The hosts list in this config contains every address an apiserver might ever be deployed at;
10.4.7.10 is the reverse proxy's VIP.
cat >/opt/certs/apiserver-csr.json <<EOF
{
"CN": "k8s-apiserver",
"hosts": [
"127.0.0.1",
"192.168.0.1",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"10.4.7.10",
"10.4.7.21",
"10.4.7.22",
"10.4.7.23"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "zq",
"OU": "ops"
}
]
}
EOF
4.2.2 Generate the kube-apiserver certificate files
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=server \
apiserver-csr.json |cfssl-json -bare apiserver
[root@hdss7-200 certs]# ll|grep apiserver
-rw-r--r-- 1 root root 1249 Apr 20 21:31 apiserver.csr
-rw-r--r-- 1 root root 566 Apr 20 21:31 apiserver-csr.json
-rw------- 1 root root 1675 Apr 20 21:31 apiserver-key.pem
-rw-r--r-- 1 root root 1590 Apr 20 21:31 apiserver.pem
4.3 Download and install kube-apiserver
Using 7.21 as the example
# upload and extract
tar xf kubernetes-server-linux-amd64-v1.15.5.tar.gz -C /opt
cd /opt
mv kubernetes/ kubernetes-v1.15.5
ln -s /opt/kubernetes-v1.15.5/ /opt/kubernetes
# clean up the source tarball and docker image files
cd /opt/kubernetes
rm -rf kubernetes-src.tar.gz
cd server/bin
rm -f *.tar
rm -f *_tag
# symlink kubectl into the PATH
ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
4.4 Deploy the apiserver service
4.4.1 Copy the certificate files
Copy the certificates into the /opt/kubernetes/server/bin/cert directory
# create the directory
mkdir -p /opt/kubernetes/server/bin/cert
cd /opt/kubernetes/server/bin/cert
# copy the three sets of certificates
scp hdss7-200:/opt/certs/ca.pem .
scp hdss7-200:/opt/certs/ca-key.pem .
scp hdss7-200:/opt/certs/client.pem .
scp hdss7-200:/opt/certs/client-key.pem .
scp hdss7-200:/opt/certs/apiserver.pem .
scp hdss7-200:/opt/certs/apiserver-key.pem .
4.4.2 Create the audit config
The audit-log policy is configuration k8s requires you to have; it can be used as-is without fully understanding it
mkdir /opt/kubernetes/server/conf
cat >/opt/kubernetes/server/conf/audit.yaml <<'EOF'
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# Log pod changes at RequestResponse level
- level: RequestResponse
resources:
- group: ""
# Resource "pods" doesn't match requests to any subresource of pods,
# which is consistent with the RBAC policy.
resources: ["pods"]
# Log "pods/log", "pods/status" at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["pods/log", "pods/status"]
# Don't log requests to a configmap called "controller-leader"
- level: None
resources:
- group: ""
resources: ["configmaps"]
resourceNames: ["controller-leader"]
# Don't log watch requests by the "system:kube-proxy" on endpoints or services
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- group: "" # core API group
resources: ["endpoints", "services"]
# Don't log authenticated requests to certain non-resource URL paths.
- level: None
userGroups: ["system:authenticated"]
nonResourceURLs:
- "/api*" # Wildcard matching.
- "/version"
# Log the request body of configmap changes in kube-system.
- level: Request
resources:
- group: "" # core API group
resources: ["configmaps"]
# This rule only applies to resources in the "kube-system" namespace.
# The empty string "" can be used to select non-namespaced resources.
namespaces: ["kube-system"]
# Log configmap and secret changes in all other namespaces at the Metadata level.
- level: Metadata
resources:
- group: "" # core API group
resources: ["secrets", "configmaps"]
# Log all other resources in core and extensions at the Request level.
- level: Request
resources:
- group: "" # core API group
- group: "extensions" # Version of group should NOT be included.
# A catch-all rule to log all other requests at the Metadata level.
- level: Metadata
# Long-running requests like watches that fall under this rule will not
# generate an audit event in RequestReceived.
omitStages:
- "RequestReceived"
EOF
4.4.3 Create the apiserver startup script
cat >/opt/kubernetes/server/bin/kube-apiserver.sh <<'EOF'
#!/bin/bash
./kube-apiserver \
--apiserver-count 2 \
--audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
--audit-policy-file ../conf/audit.yaml \
--authorization-mode RBAC \
--client-ca-file ./cert/ca.pem \
--requestheader-client-ca-file ./cert/ca.pem \
--enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--etcd-cafile ./cert/ca.pem \
--etcd-certfile ./cert/client.pem \
--etcd-keyfile ./cert/client-key.pem \
--etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
--service-account-key-file ./cert/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--service-node-port-range 3000-29999 \
--target-ram-mb=1024 \
--kubelet-client-certificate ./cert/client.pem \
--kubelet-client-key ./cert/client-key.pem \
--log-dir /data/logs/kubernetes/kube-apiserver \
--tls-cert-file ./cert/apiserver.pem \
--tls-private-key-file ./cert/apiserver-key.pem \
--v 2
EOF
# make it executable
chmod +x /opt/kubernetes/server/bin/kube-apiserver.sh
4.4.4 Create the supervisor config for apiserver
Install the supervisor package
yum install supervisor -y
systemctl start supervisord
systemctl enable supervisord
cat >/etc/supervisord.d/kube-apiserver.ini <<EOF
[program:kube-apiserver] ; program name shown by supervisor, similar to my.cnf sections; there can be several
command=sh /opt/kubernetes/server/bin/kube-apiserver.sh
numprocs=1 ; number of processes to start (def 1)
directory=/opt/kubernetes/server/bin
autostart=true ; start when supervisord starts (default: true)
autorestart=true ; restart automatically (default: true)
startsecs=30 ; seconds the process must stay up to count as started (def. 1)
startretries=3 ; start retry attempts (default 3)
exitcodes=0,2 ; expected exit codes (default 0,2)
stopsignal=QUIT ; stop signal (default TERM)
stopwaitsecs=10 ; seconds to wait after the stop signal (default 10)
user=root ; user to run as
redirect_stderr=true ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log
stdout_logfile_maxbytes=64MB ; max log file size (default 50MB)
stdout_logfile_backups=4 ; number of rotated log files (default 10)
stdout_capture_maxbytes=1MB ; capture pipe size (default 0)
;the child spawns children of its own; these settings avoid leaving orphans behind
killasgroup=true
stopasgroup=true
EOF
4.4.5 Start the apiserver and check it
mkdir -p /data/logs/kubernetes/kube-apiserver
supervisorctl update
supervisorctl status
netstat -nltup|grep kube-api
4.4.6 Deploy and start the other apiserver machines
Deployment on the rest of the cluster is identical, so it is omitted
4.5 Deploy the controller-manager service
The binaries for apiserver, controller-manager, and kube-scheduler ship in the same tarball, so the latter two need no separate unpacking.
These three services also run on the same host and talk to each other over http://127.0.0.1, so no certificates are needed between them.
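That local plaintext endpoint can be spot-checked before wiring up the next two services (a minimal sketch; in 1.15 the apiserver still serves its insecure port on 127.0.0.1:8080 by default):
# should print: ok
curl -s http://127.0.0.1:8080/healthz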
4.5.1 Create the controller-manager startup script
cat >/opt/kubernetes/server/bin/kube-controller-manager.sh <<'EOF'
#!/bin/sh
./kube-controller-manager \
--cluster-cidr 172.7.0.0/16 \
--leader-elect true \
--log-dir /data/logs/kubernetes/kube-controller-manager \
--master http://127.0.0.1:8080 \
--service-account-private-key-file ./cert/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--root-ca-file ./cert/ca.pem \
--v 2
EOF
# make it executable
chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
4.5.2 Create the supervisor config
cat >/etc/supervisord.d/kube-conntroller-manager.ini <<EOF
[program:kube-controller-manager] ; program name shown by supervisor
command=sh /opt/kubernetes/server/bin/kube-controller-manager.sh
numprocs=1 ; number of processes to start (def 1)
directory=/opt/kubernetes/server/bin
autostart=true ; start when supervisord starts (default: true)
autorestart=true ; restart automatically (default: true)
startsecs=30 ; seconds the process must stay up to count as started (def. 1)
startretries=3 ; start retry attempts (default 3)
exitcodes=0,2 ; expected exit codes (default 0,2)
stopsignal=QUIT ; stop signal (default TERM)
stopwaitsecs=10 ; seconds to wait after the stop signal (default 10)
user=root ; user to run as
redirect_stderr=true ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log
stdout_logfile_maxbytes=64MB ; max log file size (default 50MB)
stdout_logfile_backups=4 ; number of rotated log files (default 10)
stdout_capture_maxbytes=1MB ; capture pipe size (default 0)
;the child spawns children of its own; these settings avoid leaving orphans behind
killasgroup=true
stopasgroup=true
EOF
4.5.3 Start the service and check it
mkdir -p /data/logs/kubernetes/kube-controller-manager
supervisorctl update
supervisorctl status
4.5.4 Deploy to the rest of the cluster
Identical everywhere else, so omitted
4.6 Deploy the kube-scheduler service
4.6.1 Create the startup script
cat >/opt/kubernetes/server/bin/kube-scheduler.sh <<'EOF'
#!/bin/sh
./kube-scheduler \
--leader-elect \
--log-dir /data/logs/kubernetes/kube-scheduler \
--master http://127.0.0.1:8080 \
--v 2
EOF
# make it executable
chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
4.6.2 Create the supervisor config
cat >/etc/supervisord.d/kube-scheduler.ini <<EOF
[program:kube-scheduler]
command=sh /opt/kubernetes/server/bin/kube-scheduler.sh
numprocs=1 ; number of processes to start (def 1)
directory=/opt/kubernetes/server/bin
autostart=true ; start when supervisord starts (default: true)
autorestart=true ; restart automatically (default: true)
startsecs=30 ; seconds the process must stay up to count as started (def. 1)
startretries=3 ; start retry attempts (default 3)
exitcodes=0,2 ; expected exit codes (default 0,2)
stopsignal=QUIT ; stop signal (default TERM)
stopwaitsecs=10 ; seconds to wait after the stop signal (default 10)
user=root ; user to run as
redirect_stderr=true ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log
stdout_logfile_maxbytes=64MB ; max log file size (default 50MB)
stdout_logfile_backups=4 ; number of rotated log files (default 10)
stdout_capture_maxbytes=1MB ; capture pipe size (default 0)
;the child spawns children of its own; these settings avoid leaving orphans behind
killasgroup=true
stopasgroup=true
EOF
4.6.3 Start the service and check it
mkdir -p /data/logs/kubernetes/kube-scheduler
supervisorctl update
supervisorctl status
4.6.4 Deploy to the rest of the cluster
Identical everywhere else, so omitted
4.7 Check the master node deployment
[root@hdss7-21 bin]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
5 Deploying a layer-4 reverse proxy for the apiserver
With the 3 services deployed on the master nodes, a reverse proxy is needed to unify the two apiservers' external port behind one endpoint.
Here an HA nginx + keepalived pair is deployed on the two machines 7.11 and 7.12.
5.1 Deploy the nginx layer-4 proxy
Port 7443 proxies the apiservers' port 6443; keepalived manages the VIP 10.4.7.10
5.1.1 Install the packages with yum
yum install nginx keepalived -y
5.1.2 Configure nginx
The layer-4 proxy must not go in the default conf.d directory, because that directory is included inside the http block by default
So either append the stream config to the bottom of the main config file, or mirror the layer-7 layout and create a dedicated directory for layer-4 configs
# 1. add a layer-4 config directory include to the nginx config
mkdir /etc/nginx/tcp.d/
echo 'include /etc/nginx/tcp.d/*.conf;' >>/etc/nginx/nginx.conf
# 2. write the proxy config
cat >/etc/nginx/tcp.d/apiserver.conf <<EOF
stream {
upstream kube-apiserver {
server 10.4.7.21:6443 max_fails=3 fail_timeout=30s;
server 10.4.7.22:6443 max_fails=3 fail_timeout=30s;
}
server {
listen 7443;
proxy_connect_timeout 2s;
proxy_timeout 900s;
proxy_pass kube-apiserver;
}
}
EOF
5.1.3 Start nginx
nginx -t
systemctl start nginx
systemctl enable nginx
5.2 Configure keepalived
5.2.1 Create the port-check script
Create the script
cat >/etc/keepalived/check_port.sh <<'EOF'
#!/bin/bash
#keepalived port-check script
#usage: keepalived passes in a port number; check whether that port is listening and return the result
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
if [ $PORT_PROCESS -eq 0 ];then
echo "Port $CHK_PORT Is Not Used,End."
exit 1
fi
else
echo "Check Port Cant Be Empty!"
fi
EOF
Make the script executable
chmod +x /etc/keepalived/check_port.sh
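A quick manual run to confirm the exit codes keepalived will see (a minimal sketch):
# prints 0 while something listens on 7443, 1 once nginx stops
sh /etc/keepalived/check_port.sh 7443; echo $?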
5.2.2 Create the keepalived master config file
The master is defined as 10.4.7.11, the backup as 10.4.7.12
Note: the master config adds the nopreempt option (non-preemptive), meaning that once the VIP fails over, a recovered master will not take it back; this is done for stability
cat >/etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived
global_defs {
router_id 10.4.7.11
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_port.sh 7443"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 251
priority 100
advert_int 1
mcast_src_ip 10.4.7.11
nopreempt
authentication {
auth_type PASS
auth_pass 11111111
}
track_script {
chk_nginx
}
virtual_ipaddress {
10.4.7.10
}
}
EOF
5.2.3 Create the keepalived backup config file
cat >/etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived
global_defs {
router_id 10.4.7.12
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_port.sh 7443"
interval 2
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 251
mcast_src_ip 10.4.7.12
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 11111111
}
track_script {
chk_nginx
}
virtual_ipaddress {
10.4.7.10
}
}
EOF
5.2.4 Start keepalived and verify
systemctl start keepalived
systemctl enable keepalived
ip addr|grep '10.4.7.10'
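A one-time failover drill verifies both the check script and the nopreempt behavior (a minimal sketch):
# on 7.11: stop nginx so the check fails and the VIP drifts to 7.12
systemctl stop nginx
# on 7.12: the VIP should now be here
ip addr | grep '10.4.7.10'
# on 7.11: bring nginx back; with nopreempt the VIP stays on 7.12
systemctl start nginx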
6 Deploying the worker nodes
6.1 Issue the kubelet certificate
Certificates are signed on 7.200
6.1.1 Create the CSR JSON config
cd /opt/certs/
cat >/opt/certs/kubelet-csr.json <<EOF
{
"CN": "k8s-kubelet",
"hosts": [
"127.0.0.1",
"10.4.7.10",
"10.4.7.21",
"10.4.7.22",
"10.4.7.23",
"10.4.7.24",
"10.4.7.25",
"10.4.7.26",
"10.4.7.27",
"10.4.7.28"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "zq",
"OU": "ops"
}
]
}
EOF
6.1.2 Generate the kubelet certificate files
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=server \
kubelet-csr.json | cfssl-json -bare kubelet
[root@hdss7-200 certs]# ll |grep kubelet
-rw-r--r-- 1 root root 1115 Apr 22 22:17 kubelet.csr
-rw-r--r-- 1 root root 452 Apr 22 22:17 kubelet-csr.json
-rw------- 1 root root 1679 Apr 22 22:17 kubelet-key.pem
-rw-r--r-- 1 root root 1460 Apr 22 22:17 kubelet.pem
6.2 Create the kubelet service
6.2.1 Copy the certificates to the node
cd /opt/kubernetes/server/bin/cert
scp hdss7-200:/opt/certs/kubelet.pem .
scp hdss7-200:/opt/certs/kubelet-key.pem .
6.2.2 Create the kubelet config
Building kubelet's config file, kubelet.kubeconfig, is a bit involved and takes four steps.
(1) set-cluster (set the cluster parameters)
Create the cluster myk8s with the CA certificate, pointing at the apiserver through the VIP 10.4.7.10
cd /opt/kubernetes/server/conf/
kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://10.4.7.10:7443 \
--kubeconfig=kubelet.kubeconfig
(2) set-credentials (set the client credentials)
Create the user k8s-node from the client certificate
kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
--client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig
(3) set-context (tie cluster and user together)
Create myk8s-context, associating the cluster myk8s with the user k8s-node
kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=kubelet.kubeconfig
(4) use-context
Use the generated config file to register with the apiserver; the registration is written into etcd, so it only needs to be done once
kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
(5) Inspect the generated kubelet.kubeconfig
[root@hdss7-21 conf]# cat kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: xxxxxxxx
server: https://10.4.7.10:7443
name: myk8s
contexts:
- context:
cluster: myk8s
user: k8s-node
name: myk8s-context
current-context: myk8s-context
kind: Config
preferences: {}
users:
- name: k8s-node
user:
client-certificate-data: xxxxxxxx
client-key-data: xxxxxxxx
As you can see, this file bundles the cluster name, the user name, the cluster CA certificate, and the user's certificate and private key.
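The same file can be inspected without the embedded base64 blobs (a minimal sketch):
# kubectl redacts the certificate data and shows just the structure
kubectl config view --kubeconfig=kubelet.kubeconfig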
6.2.3 Create the k8s-node.yaml config
cat >/opt/kubernetes/server/conf/k8s-node.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: k8s-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: k8s-node
EOF
Using RBAC, this creates a ClusterRoleBinding resource.
It names a user called k8s-node
and binds that user to the ClusterRole named system:node,
giving the user permission to act as a cluster compute node.
Since this user name is also the user specified in the kubeconfig,
any kubelet started with that kubeconfig can join as a node.
6.2.4 Apply the resource config
Apply it and check the result
# apply the resource config
kubectl create -f /opt/kubernetes/server/conf/k8s-node.yaml
# check the cluster role binding and its details
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node
NAME AGE
k8s-node 13s
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: "2020-04-22T14:38:09Z"
name: k8s-node
resourceVersion: "21217"
selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/k8s-node
uid: 597ffb0f-f92d-4eb5-aca2-2fe73397e2e4
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: k8s-node
#only the binding exists at this point; there are no actual nodes yet, verified below
[root@hdss7-21 conf]# kubectl get nodes
No resources found.
6.2.5 Create the kubelet startup script
The --hostname-override flag differs on every node: it is that node's hostname, so remember to change it
cat >/opt/kubernetes/server/bin/kubelet.sh <<'EOF'
#!/bin/sh
./kubelet \
--hostname-override hdss7-21.host.com \
--anonymous-auth=false \
--cgroup-driver systemd \
--cluster-dns 192.168.0.2 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on="false" \
--client-ca-file ./cert/ca.pem \
--tls-cert-file ./cert/kubelet.pem \
--tls-private-key-file ./cert/kubelet-key.pem \
--image-gc-high-threshold 20 \
--image-gc-low-threshold 10 \
--kubeconfig ../conf/kubelet.kubeconfig \
--log-dir /data/logs/kubernetes/kube-kubelet \
--pod-infra-container-image harbor.zq.com/public/pause:latest \
--root-dir /data/kubelet
EOF
# make executable & create directories
chmod +x /opt/kubernetes/server/bin/kubelet.sh
mkdir -p /data/logs/kubernetes/kube-kubelet
mkdir -p /data/kubelet
6.2.6 Create the supervisor config
cat >/etc/supervisord.d/kube-kubelet.ini <<EOF
[program:kube-kubelet]
command=sh /opt/kubernetes/server/bin/kubelet.sh
numprocs=1 ; number of processes to start (def 1)
directory=/opt/kubernetes/server/bin
autostart=true ; start when supervisord starts (default: true)
autorestart=true ; restart automatically (default: true)
startsecs=30 ; seconds the process must stay up to count as started (def. 1)
startretries=3 ; start retry attempts (default 3)
exitcodes=0,2 ; expected exit codes (default 0,2)
stopsignal=QUIT ; stop signal (default TERM)
stopwaitsecs=10 ; seconds to wait after the stop signal (default 10)
user=root ; user to run as
redirect_stderr=true ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB ; max log file size (default 50MB)
stdout_logfile_backups=4 ; number of rotated log files (default 10)
stdout_capture_maxbytes=1MB ; capture pipe size (default 0)
;the child spawns children of its own; these settings avoid leaving orphans behind
killasgroup=true
stopasgroup=true
EOF
6.2.7 Start the service and check it
supervisorctl update
supervisorctl status
[root@hdss7-21 server]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready <none> 65s v1.15.5
6.2.8 Deploy the other nodes
Once the first node is done, the rest are much simpler: copy kubelet.kubeconfig to the new node, create the startup script, and start it with supervisord.
Alternatively, skip copying the config file and rerun the four kubeconfig-creation steps by hand.
# copy the certificates
cd /opt/kubernetes/server/bin/cert
scp hdss7-200:/opt/certs/kubelet.pem .
scp hdss7-200:/opt/certs/kubelet-key.pem .
# copy the config file
cd /opt/kubernetes/server/conf/
scp hdss7-21:/opt/kubernetes/server/conf/kubelet.kubeconfig .
After copying the config, follow 6.2.5 (create the kubelet startup script); everything is identical except --hostname-override, which can be patched as sketched below.
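A minimal sketch of that patch for hdss7-22 (substitute each node's own FQDN):
# point --hostname-override at this node instead of hdss7-21
sed -i 's/hdss7-21.host.com/hdss7-22.host.com/' /opt/kubernetes/server/bin/kubelet.sh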
6.2.9 Check all nodes and label them
This step is optional; the labels are only for easier identification
kubectl get nodes
kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=
[root@hdss7-22 cert]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready master,node 9m v1.15.5
hdss7-22.host.com Ready <none> 64s v1.15.5
6.3 Create the kube-proxy service
Certificates are signed on 7.200
6.3.1 Issue the kube-proxy certificate
(1) Create the CSR JSON config
cd /opt/certs/
cat >/opt/certs/kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "zq",
"OU": "ops"
}
]
}
EOF
(2) Generate the kube-proxy certificate files
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=client \
kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
(3) Check the generated certificate files
[root@hdss7-200 certs]# ll |grep proxy
-rw-r--r-- 1 root root 1005 Apr 22 22:54 kube-proxy-client.csr
-rw------- 1 root root 1675 Apr 22 22:54 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1371 Apr 22 22:54 kube-proxy-client.pem
-rw-r--r-- 1 root root 267 Apr 22 22:54 kube-proxy-csr.json
6.3.2 Copy the certificate files to each node
cd /opt/kubernetes/server/bin/cert
scp hdss7-200:/opt/certs/kube-proxy-client.pem .
scp hdss7-200:/opt/certs/kube-proxy-client-key.pem .
6.3.3 Create the kube-proxy config
The same four steps again, just like kubelet
(1) set-cluster
cd /opt/kubernetes/server/conf/
kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://10.4.7.10:7443 \
--kubeconfig=kube-proxy.kubeconfig
(2) set-credentials
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
--client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
(3) set-context
kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
(4) use-context
kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
6.3.4 Load the ipvs modules for kube-proxy
# create the ipvs module-loading script
cat >/etc/ipvs.sh <<'EOF'
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
/sbin/modinfo -F filename $i &>/dev/null
if [ $? -eq 0 ];then
/sbin/modprobe $i
fi
done
EOF
# run the script to load the ipvs modules
sh /etc/ipvs.sh
# verify the modules loaded
[root@hdss7-21 conf]# lsmod |grep ip_vs
ip_vs_wrr 12697 0
ip_vs_wlc 12519 0
...... (output truncated)
6.3.5 Create the kube-proxy startup script
As before, --hostname-override differs on each node; change it accordingly
cat >/opt/kubernetes/server/bin/kube-proxy.sh <<'EOF'
#!/bin/sh
./kube-proxy \
--hostname-override hdss7-21.host.com \
--cluster-cidr 172.7.0.0/16 \
--proxy-mode=ipvs \
--ipvs-scheduler=nq \
--kubeconfig ../conf/kube-proxy.kubeconfig
EOF
# make it executable
chmod +x /opt/kubernetes/server/bin/kube-proxy.sh
6.3.6 Create the supervisor config for kube-proxy
cat >/etc/supervisord.d/kube-proxy.ini <<'EOF'
[program:kube-proxy]
command=sh /opt/kubernetes/server/bin/kube-proxy.sh
numprocs=1 ; number of processes to start (def 1)
directory=/opt/kubernetes/server/bin
autostart=true ; start when supervisord starts (default: true)
autorestart=true ; restart automatically (default: true)
startsecs=30 ; seconds the process must stay up to count as started (def. 1)
startretries=3 ; start retry attempts (default 3)
exitcodes=0,2 ; expected exit codes (default 0,2)
stopsignal=QUIT ; stop signal (default TERM)
stopwaitsecs=10 ; seconds to wait after the stop signal (default 10)
user=root ; user to run as
redirect_stderr=true ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB ; max log file size (default 50MB)
stdout_logfile_backups=4 ; number of rotated log files (default 10)
stdout_capture_maxbytes=1MB ; capture pipe size (default 0)
;the child spawns children of its own; these settings avoid leaving orphans behind
killasgroup=true
stopasgroup=true
EOF
6.3.7 Start the service and check it
mkdir -p /data/logs/kubernetes/kube-proxy
supervisorctl update
supervisorctl status
[root@hdss7-21 conf]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 47h
# check whether ipvs picked up new virtual servers
yum install ipvsadm -y
[root@hdss7-21 conf]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.0.1:443 nq
-> 10.4.7.21:6443 Masq 1 0 0
-> 10.4.7.22:6443 Masq 1 0 0
6.3.8 Deploy the remaining nodes
First copy kube-proxy.kubeconfig into the conf directory on hdss7-22.host.com
# copy the certificate files
cd /opt/kubernetes/server/bin/cert
scp hdss7-200:/opt/certs/kube-proxy-client.pem .
scp hdss7-200:/opt/certs/kube-proxy-client-key.pem .
# copy the config file
cd /opt/kubernetes/server/conf/
scp hdss7-21:/opt/kubernetes/server/conf/kube-proxy.kubeconfig .
The only other difference is the hostname, already covered above, so the rest is omitted.
7 Verifying the Kubernetes cluster
7.1 Create a resource manifest on any node
cat >/root/nginx-ds.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: nginx-ds
spec:
template:
metadata:
labels:
app: nginx-ds
spec:
containers:
- name: my-nginx
image: harbor.zq.com/public/nginx:v1.17.9
ports:
- containerPort: 80
EOF
7.2 Apply the manifest and check
7.2.1 Apply the manifest
kubectl create -f /root/nginx-ds.yaml
[root@hdss7-22 conf]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ds-j777c 1/1 Running 0 8s
nginx-ds-nwsd6 1/1 Running 0 8s
7.2.2 Check from the other node
kubectl get pods
kubectl get pods -o wide
curl 172.7.22.2
7.2.3 Confirm the kubernetes cluster is fully up
[root@hdss7-22 conf]# kubectl get cs
NAME STATUS MESSAGE ERROR
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
controller-manager Healthy ok
scheduler Healthy ok
[root@hdss7-21 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready master,node 6d1h v1.15.5
hdss7-22.host.com Ready <none> 6d1h v1.15.5
[root@hdss7-22 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ds-j777c 1/1 Running 0 6m45s
nginx-ds-nwsd6 1/1 Running 0 6m45s