Installing Kubernetes from Binaries
1. Pre-installation preparation
1.1. Resource allocation
| IP Address | Hostname | OS | CPU | Memory | Role | Disk |
| --- | --- | --- | --- | --- | --- | --- |
| 192.168.1.31 | HDSS7-31.host.com | CentOS 7 | 2C | 2G | LB, DNS | 50G |
| 192.168.1.32 | HDSS7-32.host.com | CentOS 7 | 2C | 2G | LB, ETCD | 50G |
| 192.168.1.33 | HDSS7-33.host.com | CentOS 7 | 2C | 4G | K8S Master, ETCD | 50G |
| 192.168.1.34 | HDSS7-34.host.com | CentOS 7 | 2C | 4G | K8S Master, ETCD | 50G |
| 192.168.1.35 | HDSS7-35.host.com | CentOS 7 | 2C | 4G | Node, ETCD | 50G |
| 192.168.1.36 | HDSS7-36.host.com | CentOS 7 | 2C | 4G | Node, ETCD | 50G |
| 192.168.1.40 | HDSS7-40.host.com | CentOS 7 | 2C | 2G | Harbor, NFS | 50G |
VIP address: 192.168.1.45
1.2. Environment preparation
- Run on every machine:
- Stop the firewall
- Disable SELinux
- Set the hostname
- Configure the yum repos and the EPEL repo
- Install the base packages
- Synchronize time (the command block below omits this step; see the sketch after it)
```bash
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i "s@SELINUX=enforcing@SELINUX=disabled@g" /etc/selinux/config
[root@localhost ~]# vi /etc/hostname
hdss7-31.host.com
[root@localhost ~]# sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
    -i.bak \
    /etc/yum.repos.d/CentOS-*.repo
[root@localhost ~]# yum makecache
[root@hdss7-31 ~]# yum install -y epel-release
[root@hdss7-31 ~]# yum install -y wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils vim less
```
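The checklist above ends with time synchronization, which the command block leaves out. A minimal sketch using chrony (assumption: CentOS 7's stock chrony configuration and its default pool servers are acceptable here):

```bash
# Install and enable chrony so all nodes agree on time (certificates are time-sensitive)
[root@hdss7-31 ~]# yum install -y chrony
[root@hdss7-31 ~]# systemctl start chronyd ; systemctl enable chronyd
[root@hdss7-31 ~]# chronyc sources   # verify the time sources are reachable
```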
1.3. Install bind
1.3.1. Install the DNS service on hdss7-31
```bash
[root@hdss7-31 ~]# yum install -y bind
```
1.3.2. Configure the DNS service
● Main configuration file
```bash
[root@hdss7-31 ~]# vim /etc/named.conf
# modify the following options
options {
        listen-on port 53 { 192.168.1.31; };
        allow-query { any; };
        forwarders { 114.114.114.114; };
        recursion yes;
        dnssec-enable no;
        dnssec-validation no;
};
```
● Configure the zone files, adding two zones: od.com is the business domain and host.com is the host domain.
```bash
[root@hdss7-31 ~]# vim /etc/named.rfc1912.zones
zone "host.com" IN {
        type master;
        file "host.com.zone";
        allow-update { 192.168.1.31; };
};
zone "od.com" IN {
        type master;
        file "od.com.zone";
        allow-update { 192.168.1.31; };
};
```
● host.com.zone host-domain configuration
```bash
[root@hdss7-31 ~]# vim /var/named/host.com.zone
$ORIGIN host.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.host.com. dnsadmin.host.com. (
                2021041201 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS      dns.host.com.
$TTL 60 ; 1 minute
dns             A       192.168.1.31
HDSS7-31        A       192.168.1.31
HDSS7-32        A       192.168.1.32
HDSS7-33        A       192.168.1.33
HDSS7-34        A       192.168.1.34
HDSS7-35        A       192.168.1.35
HDSS7-36        A       192.168.1.36
HDSS7-40        A       192.168.1.40
```
● od.com.zone business-domain configuration
```bash
[root@hdss7-31 ~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.od.com. dnsadmin.od.com. (
                2021041201 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS      dns.od.com.
$TTL 60 ; 1 minute
dns             A       192.168.1.31
```
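The original doesn't show starting the service. A minimal sketch: validate the configuration and zones, then start and enable named and test a lookup.

```bash
[root@hdss7-31 ~]# named-checkconf
[root@hdss7-31 ~]# named-checkzone host.com /var/named/host.com.zone
[root@hdss7-31 ~]# named-checkzone od.com /var/named/od.com.zone
[root@hdss7-31 ~]# systemctl start named ; systemctl enable named
[root@hdss7-31 ~]# dig -t A hdss7-40.host.com @192.168.1.31 +short
192.168.1.40
```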
1.3.3. Point all hosts at the new DNS server
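The original gives no commands for this step. A minimal sketch, assuming the NIC is named ens32 as elsewhere in this setup; run it on every host:

```bash
# Point this host's resolver at hdss7-31; adjust the interface name if yours differs
[root@hdss7-35 ~]# sed -i '/^DNS1=/d' /etc/sysconfig/network-scripts/ifcfg-ens32
[root@hdss7-35 ~]# echo 'DNS1=192.168.1.31' >> /etc/sysconfig/network-scripts/ifcfg-ens32
[root@hdss7-35 ~]# systemctl restart network
[root@hdss7-35 ~]# cat /etc/resolv.conf   # should now contain: nameserver 192.168.1.31
```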
1.4. Prepare the root certificate
● Deploy the certificate service on hdss7-40
```bash
[root@hdss7-40 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
[root@hdss7-40 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssl-json
[root@hdss7-40 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo
[root@hdss7-40 ~]# chmod u+x /usr/local/bin/cfssl*
```
● Sign the root certificate
```bash
[root@hdss7-40 ~]# mkdir /opt/certs/ ; cd /opt/certs/
# Root certificate configuration:
# CN is usually a domain name; browsers validate it
# names holds the location and organization info
# expiry is the validity period
[root@hdss7-40 certs]# vim /opt/certs/ca-csr.json
{
    "CN": "OldboyEdu",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}
```
● Generate the CA certificate
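The generation command itself is missing from the original; the standard cfssl self-signing invocation (using the cfssl-json name installed above) would be:

```bash
# Self-sign the root CA from ca-csr.json; produces ca.pem, ca-key.pem and ca.csr
[root@hdss7-40 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca -
[root@hdss7-40 certs]# ls ca*
ca.csr  ca-csr.json  ca-key.pem  ca.pem
```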
1.5. Prepare the Docker environment
● Run on hdss7-33, hdss7-34, hdss7-35, hdss7-36, and hdss7-40
```bash
[root@hdss7-33 ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@hdss7-33 ~]# yum -y install docker-ce
[root@hdss7-33 ~]# mkdir /etc/docker/
# insecure-registries adds the harbor address
# bip differs per machine: its middle two octets match the host's last two IP octets, which makes problems easier to trace
[root@hdss7-33 ~]# vim /etc/docker/daemon.json
{
    "graph": "/data/docker",
    "storage-driver": "overlay2",
    "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "bip": "172.7.33.1/24",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "live-restore": true
}
[root@hdss7-33 ~]# mkdir -p /data/docker
[root@hdss7-33 ~]# systemctl start docker ; systemctl enable docker
```
2. Install Harbor
Reference: https://www.yuque.com/duduniao/trp3ic/ohrxds#9Zpxx
Official site: https://goharbor.io/
2.1. Install Harbor on hdss7-40
# Directory layout:
# /opt/src : downloaded sources and packages
# /opt/release : each installed version of each package
# /opt/apps : symlinks to the current version of each package
```bash
[root@hdss7-40 certs]# mkdir -p /opt/src /opt/release /opt/apps
[root@hdss7-40 src]# wget https://github.com/goharbor/harbor/releases/download/v1.9.4/harbor-offline-installer-v1.9.4.tgz
[root@hdss7-40 src]# ls
harbor-offline-installer-v1.9.4.tgz
[root@hdss7-40 src]# tar xf harbor-offline-installer-v1.9.4.tgz
[root@hdss7-40 src]# mv harbor /opt/release/harbor-v1.9.4
[root@hdss7-40 src]# ln -s /opt/release/harbor-v1.9.4 /opt/apps/harbor
[root@hdss7-40 src]# vim /opt/apps/harbor/harbor.yml
# Modify the following items; in production the Harbor password must be changed:
harbor_admin_password: Harbor12345
hostname: harbor.od.com
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 180
data_volume: /data/harbor
location: /data/harbor/logs
[root@hdss7-40 src]# mkdir -p /data/harbor/logs /data/harbor
[root@hdss7-40 src]# yum install -y docker-compose
[root@hdss7-40 src]# cd /opt/apps/harbor/
[root@hdss7-40 harbor]# ls
harbor.v1.9.4.tar.gz  harbor.yml  install.sh  LICENSE  prepare
[root@hdss7-40 harbor]# ./install.sh
[Step 0]: checking installation environment ...
…
[root@hdss7-40 harbor]# docker-compose ps
      Name                     Command                  State                 Ports
--------------------------------------------------------------------------------------
harbor-core         /harbor/harbor_core              Up
harbor-db           /docker-entrypoint.sh            Up      5432/tcp
harbor-jobservice   /harbor/harbor_jobservice ...    Up
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up      127.0.0.1:1514->10514/tcp
harbor-portal       nginx -g daemon off;             Up      8080/tcp
nginx               nginx -g daemon off;             Up      0.0.0.0:180->8080/tcp
redis               redis-server /etc/redis.conf     Up      6379/tcp
registry            /entrypoint.sh /etc/regist ...   Up      5000/tcp
registryctl         /harbor/start.sh                 Up
[root@hdss7-40 harbor]# vim /etc/rc.d/rc.local   # start Harbor at boot
cd /opt/apps/harbor
/usr/bin/docker-compose stop
/usr/bin/docker-compose start
```
2.2. Install the Nginx proxy on hdss7-40
```bash
[root@hdss7-40 src]# yum -y install gcc make pcre-devel zlib-devel
[root@hdss7-40 src]# ls
harbor-offline-installer-v1.9.4.tgz  nginx-1.18.0.tar.gz
[root@hdss7-40 src]# tar xf nginx-1.18.0.tar.gz
[root@hdss7-40 src]# cd nginx-1.18.0
[root@hdss7-40 nginx-1.18.0]# ls
auto  CHANGES  CHANGES.ru  conf  configure  contrib  html  LICENSE  man  README  src
[root@hdss7-40 nginx-1.18.0]# ./configure --prefix=/usr/local/nginx && make && make install
[root@hdss7-40 nginx-1.18.0]# cd /usr/local/nginx/conf/
[root@hdss7-40 conf]# vim nginx.conf
    server {
        listen       80;
        server_name  harbor.od.com;
        # avoid upload failures for large image layers
        client_max_body_size 1000m;
        location / {
            proxy_pass http://127.0.0.1:180;
        }
    }
[root@hdss7-40 conf]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@hdss7-40 conf]# /usr/local/nginx/sbin/nginx
```
Access Harbor
- Create a new project; it will be needed later.
2.3. Add an A record on hdss7-31
```bash
[root@hdss7-31 ~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.od.com. dnsadmin.od.com. (
                2021041201 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS      dns.od.com.
$TTL 60 ; 1 minute
dns             A       192.168.1.31
harbor          A       192.168.1.40
[root@hdss7-31 ~]# systemctl restart named
```
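A quick check that the new record resolves (dig comes from the bind-utils package installed during environment preparation):

```bash
[root@hdss7-31 ~]# dig -t A harbor.od.com @192.168.1.31 +short
192.168.1.40
```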
3. Install the control plane
3.1. Install etcd
etcd's leader election requires an odd number of members, at least three. The etcd hosts in this install are hdss7-32, hdss7-33, hdss7-34, hdss7-35, and hdss7-36.
3.1.1. Sign the certificates
Certificate-signing server: hdss7-40
• Create the CA signing config: /opt/certs/ca-config.json
• server: the certificate a server presents to connecting clients, used by clients to verify the server's identity.
• client: the certificate a client presents when connecting to a server, used by the server to verify the client's identity.
• peer: certificates used for mutual connections, e.g. verification between etcd nodes.
```bash
[root@hdss7-40 certs]# vim /opt/certs/ca-config.json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [ "signing", "key encipherment", "server auth" ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [ "signing", "key encipherment", "client auth" ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [ "signing", "key encipherment", "server auth", "client auth" ]
            }
        }
    }
}
```
• Create the etcd certificate config: /opt/certs/etcd-peer-csr.json
The key part is hosts: add every server that might ever run etcd. CIDR ranges are not allowed, and adding a new etcd server later means re-signing the certificate.
```bash
[root@hdss7-40 certs]# vim /opt/certs/etcd-peer-csr.json
{
    "CN": "k8s-etcd",
    "hosts": [
        "192.168.1.31",
        "192.168.1.32",
        "192.168.1.33",
        "192.168.1.34",
        "192.168.1.35",
        "192.168.1.36",
        "192.168.1.37",
        "192.168.1.38",
        "192.168.1.39",
        "192.168.1.40"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@hdss7-40 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
2021/04/12 21:36:58 [INFO] generate received request
2021/04/12 21:36:58 [INFO] received CSR
2021/04/12 21:36:58 [INFO] generating key: rsa-2048
2021/04/12 21:36:58 [INFO] encoded CSR
2021/04/12 21:36:58 [INFO] signed certificate with serial number 204673588339134578955518151996413837209374516414
2021/04/12 21:36:58 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@hdss7-40 certs]# ll etcd-peer*
-rw-r--r-- 1 root root 1110 Apr 12 21:36 etcd-peer.csr
-rw-r--r-- 1 root root  477 Apr 12 21:36 etcd-peer-csr.json
-rw------- 1 root root 1675 Apr 12 21:36 etcd-peer-key.pem
-rw-r--r-- 1 root root 1476 Apr 12 21:36 etcd-peer.pem
```
3.1.2. Install etcd
- etcd: https://github.com/etcd-io/etcd/
- Version used here: etcd-v3.1.20-linux-amd64.tar.gz
- Hosts involved: hdss7-32, hdss7-33, hdss7-34, hdss7-35, hdss7-36
```bash
[root@hdss7-32 ~]# useradd -s /sbin/nologin -M etcd
[root@hdss7-32 ~]# mkdir -p /opt/src/ /opt/release /opt/apps
[root@hdss7-32 ~]# cd /opt/src/
[root@hdss7-32 src]# wget https://github.com/etcd-io/etcd/releases/download/v3.1.20/etcd-v3.1.20-linux-amd64.tar.gz
[root@hdss7-32 src]# tar -xf etcd-v3.1.20-linux-amd64.tar.gz
[root@hdss7-32 src]# mv etcd-v3.1.20-linux-amd64 /opt/release/etcd-v3.1.20
[root@hdss7-32 src]# ln -s /opt/release/etcd-v3.1.20 /opt/apps/etcd
[root@hdss7-32 src]# ll /opt/apps/etcd
lrwxrwxrwx 1 root root 25 Apr 12 21:54 /opt/apps/etcd -> /opt/release/etcd-v3.1.20
[root@hdss7-32 src]# mkdir -p /opt/apps/etcd/certs /data/etcd /data/logs/etcd-server
```

Distribute the certificates to each etcd server:

```bash
[root@hdss7-40 certs]# pwd
/opt/certs
[root@hdss7-40 certs]# scp ca.pem etcd-peer.pem etcd-peer-key.pem hdss7-32:/opt/apps/etcd/certs
[root@hdss7-32 src]# md5sum /opt/apps/etcd/certs/*   # verify the certificate files
40423e10a0777f7964c8d79ee13e8828  /opt/apps/etcd/certs/ca.pem
e6c928f28b63e55d3b99b7c4cd28583c  /opt/apps/etcd/certs/etcd-peer-key.pem
8497740476d22556cd40462004ce0920  /opt/apps/etcd/certs/etcd-peer.pem
```

Create the etcd startup script:

```bash
[root@hdss7-32 src]# vim /opt/apps/etcd/etcd-server-startup.sh
#!/bin/sh
# listen-peer-urls: port for etcd-to-etcd communication
# listen-client-urls: port for client-to-etcd communication
# quota-backend-bytes: storage quota
# Parameters that change per node: name, listen-peer-urls, listen-client-urls, initial-advertise-peer-urls
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit

/opt/apps/etcd/etcd --name etcd-server-7-32 \
    --data-dir /data/etcd/etcd-server \
    --listen-peer-urls https://192.168.1.32:2380 \
    --listen-client-urls https://192.168.1.32:2379,http://127.0.0.1:2379 \
    --quota-backend-bytes 8000000000 \
    --initial-advertise-peer-urls https://192.168.1.32:2380 \
    --advertise-client-urls https://192.168.1.32:2379,http://127.0.0.1:2379 \
    --initial-cluster etcd-server-7-32=https://192.168.1.32:2380,etcd-server-7-33=https://192.168.1.33:2380,etcd-server-7-34=https://192.168.1.34:2380,etcd-server-7-35=https://192.168.1.35:2380,etcd-server-7-36=https://192.168.1.36:2380 \
    --ca-file ./certs/ca.pem \
    --cert-file ./certs/etcd-peer.pem \
    --key-file ./certs/etcd-peer-key.pem \
    --client-cert-auth \
    --trusted-ca-file ./certs/ca.pem \
    --peer-ca-file ./certs/ca.pem \
    --peer-cert-file ./certs/etcd-peer.pem \
    --peer-key-file ./certs/etcd-peer-key.pem \
    --peer-client-cert-auth \
    --peer-trusted-ca-file ./certs/ca.pem \
    --log-output stdout
[root@hdss7-32 src]# chmod u+x /opt/apps/etcd/etcd-server-startup.sh
[root@hdss7-32 src]# chown -R etcd.etcd /opt/apps/etcd/ /data/etcd /data/logs/etcd-server
```
3.1.3. Start etcd
```bash
[root@hdss7-32 src]# yum install -y supervisor
[root@hdss7-32 src]# systemctl start supervisord ; systemctl enable supervisord
[root@hdss7-32 src]# vim /etc/supervisord.d/etcd-server.ini
[program:etcd-server-7-32]
command=/opt/apps/etcd/etcd-server-startup.sh           ; the program (relative uses PATH, can take args)
numprocs=1                                              ; number of processes copies to start (def 1)
directory=/opt/apps/etcd                                ; directory to cwd to before exec (def no cwd)
autostart=true                                          ; start at supervisord start (default: true)
autorestart=true                                        ; restart at unexpected quit (default: true)
startsecs=30                                            ; number of secs prog must stay running (def. 1)
startretries=3                                          ; max # of serial start failures (default 3)
exitcodes=0,2                                           ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                         ; signal used to kill process (default TERM)
stopwaitsecs=10                                         ; max num secs to wait b4 SIGKILL (default 10)
user=etcd                                               ; setuid to this UNIX account to run the program
redirect_stderr=true                                    ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log   ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                            ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=5                                ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                             ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                             ; emit events on stdout writes (default false)
[root@hdss7-32 src]# mkdir -p /data/logs/etcd-server
[root@hdss7-32 ~]# supervisorctl update
[root@hdss7-32 ~]# /opt/apps/etcd/etcdctl member list
1fb3b709d89285c: name=etcd-server-7-34 peerURLs=https://192.168.1.34:2380 clientURLs=http://127.0.0.1:2379,https://192.168.1.34:2379 isLeader=false
91f2add63ee518e: name=etcd-server-7-33 peerURLs=https://192.168.1.33:2380 clientURLs=http://127.0.0.1:2379,https://192.168.1.33:2379 isLeader=true
49cc7ce5639c4e1a: name=etcd-server-7-32 peerURLs=https://192.168.1.32:2380 clientURLs=http://127.0.0.1:2379,https://192.168.1.32:2379 isLeader=false
afdb491c59ce63ff: name=etcd-server-7-35 peerURLs=https://192.168.1.35:2380 clientURLs=http://127.0.0.1:2379,https://192.168.1.35:2379 isLeader=false
baaeca8660bc4d02: name=etcd-server-7-36 peerURLs=https://192.168.1.36:2380 clientURLs=http://127.0.0.1:2379,https://192.168.1.36:2379 isLeader=false
```
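Besides the member list, the v2 etcdctl bundled with etcd v3.1 can report overall cluster health; a quick check might look like:

```bash
[root@hdss7-32 ~]# /opt/apps/etcd/etcdctl cluster-health
member 49cc7ce5639c4e1a is healthy: got healthy result from http://127.0.0.1:2379
...
cluster is healthy
```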
3.2. Install the apiserver
3.2.1. Prepare the Kubernetes server binaries
Servers running the apiserver: hdss7-33, hdss7-34
Downloading the Kubernetes binary release requires a way to reach GitHub (a proxy may be needed).
• Open the kubernetes GitHub page: https://github.com/kubernetes/kubernetes
• Open the tags page: https://github.com/kubernetes/kubernetes/tags
• Pick the version to download: https://github.com/kubernetes/kubernetes/releases/tag/v1.15.2
• Click CHANGELOG-${version}.md for the release notes: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#downloads-for-v1152
• Download the Server Binaries: https://dl.k8s.io/v1.15.2/kubernetes-server-linux-amd64.tar.gz
```bash
[root@hdss7-33 src]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@hdss7-33 src]# mv kubernetes /opt/release/kubernetes-v1.15.2
[root@hdss7-33 src]# ln -s /opt/release/kubernetes-v1.15.2 /opt/apps/kubernetes
[root@hdss7-33 src]# cd /opt/apps/kubernetes/
[root@hdss7-33 kubernetes]# rm -rf kubernetes-src.tar.gz
[root@hdss7-33 kubernetes]# cd server/bin/
[root@hdss7-33 bin]# rm -rf *.tar *_tag
[root@hdss7-33 bin]# ll
total 884636
-rwxr-xr-x 1 root root  43534816 Aug  5  2019 apiextensions-apiserver
-rwxr-xr-x 1 root root 100548640 Aug  5  2019 cloud-controller-manager
-rwxr-xr-x 1 root root 200648416 Aug  5  2019 hyperkube
-rwxr-xr-x 1 root root  40182208 Aug  5  2019 kubeadm
-rwxr-xr-x 1 root root 164501920 Aug  5  2019 kube-apiserver
-rwxr-xr-x 1 root root 116397088 Aug  5  2019 kube-controller-manager
-rwxr-xr-x 1 root root  42985504 Aug  5  2019 kubectl
-rwxr-xr-x 1 root root 119616640 Aug  5  2019 kubelet
-rwxr-xr-x 1 root root  36987488 Aug  5  2019 kube-proxy
-rwxr-xr-x 1 root root  38786144 Aug  5  2019 kube-scheduler
-rwxr-xr-x 1 root root   1648224 Aug  5  2019 mounter
```
3.2.2. Sign the certificates
- Server involved: hdss7-40
- Sign the client certificate (used by the apiserver to talk to etcd)
```bash
[root@hdss7-40 ~]# cd /opt/certs/
[root@hdss7-40 certs]# vim /opt/certs/client-csr.json
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@hdss7-40 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client
2021/04/13 02:59:39 [INFO] generate received request
2021/04/13 02:59:39 [INFO] received CSR
2021/04/13 02:59:39 [INFO] generating key: rsa-2048
2021/04/13 02:59:40 [INFO] encoded CSR
2021/04/13 02:59:40 [INFO] signed certificate with serial number 650743899999714848914222711882723799478365141462
2021/04/13 02:59:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@hdss7-40 certs]# ll client*
-rw-r--r-- 1 root root  993 Apr 13 02:59 client.csr
-rw-r--r-- 1 root root  280 Apr 13 02:59 client-csr.json
-rw------- 1 root root 1679 Apr 13 02:59 client-key.pem
-rw-r--r-- 1 root root 1363 Apr 13 02:59 client.pem
```
- Sign the server certificate (used by the other k8s components to talk to the apiserver)
- Add every IP that might ever serve as an apiserver to hosts; the VIP 192.168.1.45 must be included as well.
```bash
[root@hdss7-40 certs]# vim /opt/certs/apiserver-csr.json
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "192.168.1.33",
        "192.168.1.34",
        "192.168.1.35",
        "192.168.1.36",
        "192.168.1.45"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@hdss7-40 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver
2021/04/13 03:03:40 [INFO] generate received request
2021/04/13 03:03:40 [INFO] received CSR
2021/04/13 03:03:40 [INFO] generating key: rsa-2048
2021/04/13 03:03:41 [INFO] encoded CSR
2021/04/13 03:03:41 [INFO] signed certificate with serial number 440454737565616397028104171184139694234209918760
2021/04/13 03:03:41 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@hdss7-40 certs]# ll apiserver*
-rw-r--r-- 1 root root 1249 Apr 13 03:03 apiserver.csr
-rw-r--r-- 1 root root  578 Apr 13 03:03 apiserver-csr.json
-rw------- 1 root root 1679 Apr 13 03:03 apiserver-key.pem
-rw-r--r-- 1 root root 1598 Apr 13 03:03 apiserver.pem
```

Distribute the certificates. Create the certificate directory on hdss7-33 and hdss7-34 first:

```bash
[root@hdss7-33 bin]# mkdir /opt/apps/kubernetes/server/bin/certs
[root@hdss7-40 certs]# scp apiserver-key.pem apiserver.pem ca-key.pem ca.pem client-key.pem client.pem hdss7-33:/opt/apps/kubernetes/server/bin/certs
[root@hdss7-40 certs]# scp apiserver-key.pem apiserver.pem ca-key.pem ca.pem client-key.pem client.pem hdss7-34:/opt/apps/kubernetes/server/bin/certs
```
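To confirm what was actually signed into a certificate, the cfssl-certinfo tool downloaded earlier can dump its fields, for example:

```bash
# Print the certificate's subject, SANs and validity as JSON
[root@hdss7-40 certs]# cfssl-certinfo -cert apiserver.pem
```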
3.2.3. Configure apiserver audit logging
- Run on hdss7-33 and hdss7-34
```bash
[root@hdss7-33 kubernetes]# mkdir /opt/apps/kubernetes/conf
[root@hdss7-33 kubernetes]# cd /opt/apps/kubernetes/conf/
[root@hdss7-33 conf]# vim /opt/apps/kubernetes/conf/audit.yaml
# use vim's ":set paste" before pasting
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
```
3.2.4. Configure the startup script
```bash
[root@hdss7-33 conf]# vim /opt/apps/kubernetes/server/bin/kube-apiserver-startup.sh
#!/bin/bash
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit

/opt/apps/kubernetes/server/bin/kube-apiserver \
    --apiserver-count 2 \
    --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
    --audit-policy-file ../../conf/audit.yaml \
    --authorization-mode RBAC \
    --client-ca-file ./certs/ca.pem \
    --requestheader-client-ca-file ./certs/ca.pem \
    --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
    --etcd-cafile ./certs/ca.pem \
    --etcd-certfile ./certs/client.pem \
    --etcd-keyfile ./certs/client-key.pem \
    --etcd-servers https://192.168.1.32:2379,https://192.168.1.33:2379,https://192.168.1.34:2379,https://192.168.1.35:2379,https://192.168.1.36:2379 \
    --service-account-key-file ./certs/ca-key.pem \
    --service-cluster-ip-range 192.168.0.0/16 \
    --service-node-port-range 3000-29999 \
    --target-ram-mb=1024 \
    --kubelet-client-certificate ./certs/client.pem \
    --kubelet-client-key ./certs/client-key.pem \
    --log-dir /data/logs/kubernetes/kube-apiserver \
    --tls-cert-file ./certs/apiserver.pem \
    --tls-private-key-file ./certs/apiserver-key.pem \
    --v 2
```
- Configure the supervisor startup config
```bash
[root@hdss7-33 conf]# vim /etc/supervisord.d/kube-apiserver.ini
[program:kube-apiserver-7-33]
command=/opt/apps/kubernetes/server/bin/kube-apiserver-startup.sh
numprocs=1
directory=/opt/apps/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=5
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
[root@hdss7-33 conf]# chmod +x /opt/apps/kubernetes/server/bin/kube-apiserver-startup.sh
[root@hdss7-33 conf]# mkdir -p /data/logs/kubernetes/kube-apiserver
[root@hdss7-33 conf]# supervisorctl update
[root@hdss7-33 conf]# supervisorctl status
etcd-server-7-33                 RUNNING   pid 8051, uptime 3:22:32
kube-apiserver-7-33              RUNNING   pid 8287, uptime 0:01:03
```
3.3. Configure the apiserver L4 proxy
3.3.1. Configure Nginx
- L4 proxy servers: hdss7-31, hdss7-32
```bash
[root@hdss7-31 src]# yum -y install gcc make zlib zlib-devel pcre pcre-devel
[root@hdss7-31 src]# tar xf nginx-1.18.0.tar.gz
[root@hdss7-31 src]# cd nginx-1.18.0
[root@hdss7-31 nginx-1.18.0]# ./configure --prefix=/usr/local/nginx --with-stream && make && make install
[root@hdss7-31 nginx-1.18.0]# cd /usr/local/nginx/conf/
# append the following at the end of nginx.conf
stream {
    log_format proxy '$time_local|$remote_addr|$upstream_addr|$protocol|$status|'
                     '$session_time|$upstream_connect_time|$bytes_sent|$bytes_received|'
                     '$upstream_bytes_sent|$upstream_bytes_received' ;

    upstream kube-apiserver {
        server 192.168.1.33:6443     max_fails=3 fail_timeout=30s;
        server 192.168.1.34:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
        access_log logs/proxy.log proxy;
    }
}
[root@hdss7-31 conf]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
# start Nginx
[root@hdss7-31 conf]# /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
```
- Test the Nginx proxy; run it several times to see requests hit both upstreams
```bash
[root@hdss7-31 conf]# curl 127.0.0.1:7443
[root@hdss7-31 conf]# cat /usr/local/nginx/logs/proxy.log
13/Apr/2021:03:44:37 -0400|127.0.0.1|192.168.1.33:6443, 192.168.1.34:6443|TCP|502|4.005|-, -|0|0|0, 0|0, 0
13/Apr/2021:03:45:03 -0400|127.0.0.1|192.168.1.34:6443, 192.168.1.33:6443|TCP|502|4.004|-, -|0|0|0, 0|0, 0
```
3.3.2. Configure keepalived
- apiserver L4 proxy servers: hdss7-31, hdss7-32
- Install keepalived
```bash
[root@hdss7-31 ~]# yum install -y keepalived
[root@hdss7-31 ~]# vim /etc/keepalived/check_port.sh
#!/bin/bash
if [ $# -eq 1 ] && [[ $1 =~ ^[0-9]+ ]];then
    [ $(netstat -lntp|grep ":$1 " |wc -l) -eq 0 ] && echo "[ERROR] nginx may be not running!" && exit 1 || exit 0
else
    echo "[ERROR] need one port!"
    exit 1
fi
[root@hdss7-31 ~]# chmod +x /etc/keepalived/check_port.sh
```
- Configure the master node: /etc/keepalived/keepalived.conf
- The master node must include nopreempt.
- If network jitter causes the VIP to fail over, it must not drift back automatically; analyze the cause first and migrate the VIP back to the master by hand. Once the master is confirmed healthy, restart keepalived on the backup node so the VIP returns to the master.
- keepalived's log output configuration is omitted here; handle it properly in production.
```bash
[root@hdss7-31 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.1.31
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 192.168.1.31
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.1.45
    }
}
```
- Backup node /etc/keepalived/keepalived.conf configuration
```bash
[root@hdss7-32 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.1.32
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 251
    mcast_src_ip 192.168.1.32
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.1.45
    }
}
```
- Start keepalived
```bash
[root@hdss7-31 ~]# systemctl start keepalived ; systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@hdss7-31 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:07:79:3f brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.31/24 brd 192.168.1.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.1.45/32 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe07:793f/64 scope link
       valid_lft forever preferred_lft forever
```
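A quick failover test (sketch; assumes the interface is ens32 as in the configs above): stop nginx on the master, watch the VIP move to the backup, then recover per the manual-failback note above.

```bash
# On hdss7-31: stop nginx so check_port.sh fails and keepalived drops its priority
[root@hdss7-31 ~]# /usr/local/nginx/sbin/nginx -s stop
# On hdss7-32: the VIP should now appear on the backup
[root@hdss7-32 ~]# ip addr show ens32 | grep 192.168.1.45
    inet 192.168.1.45/32 scope global ens32
# After confirming the master is healthy: start nginx on hdss7-31, then restart
# keepalived on hdss7-32 to move the VIP back (nopreempt means it will not return on its own)
[root@hdss7-31 ~]# /usr/local/nginx/sbin/nginx
[root@hdss7-32 ~]# systemctl restart keepalived
```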
3.4. Install controller-manager
- controller-manager servers: hdss7-33, hdss7-34
- controller-manager is set to call only the local apiserver over 127.0.0.1, so no TLS certificates are configured.
```bash
[root@hdss7-33 ~]# vim /opt/apps/kubernetes/server/bin/kube-controller-manager-startup.sh
#!/bin/sh
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit

/opt/apps/kubernetes/server/bin/kube-controller-manager \
    --cluster-cidr 172.7.0.0/16 \
    --leader-elect true \
    --log-dir /data/logs/kubernetes/kube-controller-manager \
    --master http://127.0.0.1:8080 \
    --service-account-private-key-file ./certs/ca-key.pem \
    --service-cluster-ip-range 192.168.0.0/16 \
    --root-ca-file ./certs/ca.pem \
    --v 2
[root@hdss7-33 ~]# chmod u+x /opt/apps/kubernetes/server/bin/kube-controller-manager-startup.sh
[root@hdss7-33 ~]# vim /etc/supervisord.d/kube-controller-manager.ini
[program:kube-controller-manager-7-33]
command=/opt/apps/kubernetes/server/bin/kube-controller-manager-startup.sh
numprocs=1
directory=/opt/apps/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
[root@hdss7-33 ~]# mkdir /data/logs/kubernetes/kube-controller-manager
[root@hdss7-33 ~]# supervisorctl update
```
3.5. Install kube-scheduler
- kube-scheduler servers: hdss7-33, hdss7-34
- kube-scheduler is set to call only the local apiserver over 127.0.0.1, so no TLS certificates are configured.
```bash
[root@hdss7-33 ~]# vim /opt/apps/kubernetes/server/bin/kube-scheduler-startup.sh
#!/bin/sh
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit

/opt/apps/kubernetes/server/bin/kube-scheduler \
    --leader-elect \
    --log-dir /data/logs/kubernetes/kube-scheduler \
    --master http://127.0.0.1:8080 \
    --v 2
[root@hdss7-33 ~]# chmod +x /opt/apps/kubernetes/server/bin/kube-scheduler-startup.sh
[root@hdss7-33 ~]# mkdir -p /data/logs/kubernetes/kube-scheduler
[root@hdss7-33 ~]# vim /etc/supervisord.d/kube-scheduler.ini
[program:kube-scheduler-7-33]
command=/opt/apps/kubernetes/server/bin/kube-scheduler-startup.sh
numprocs=1
directory=/opt/apps/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
[root@hdss7-33 ~]# supervisorctl update
[root@hdss7-33 ~]# supervisorctl status
etcd-server-7-33                 RUNNING   pid 8051, uptime 4:32:30
kube-apiserver-7-33              RUNNING   pid 8287, uptime 1:11:01
kube-controller-manager-7-33     RUNNING   pid 15212, uptime 0:07:52
kube-scheduler-7-33              RUNNING   pid 15243, uptime 0:00:35
```
3.6. Check control-plane status
```bash
[root@hdss7-33 ~]# ln -s /opt/apps/kubernetes/server/bin/kubectl /usr/local/bin/
[root@hdss7-33 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-3               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-4               Healthy   {"health": "true"}
```
- Status on the second node
```bash
[root@hdss7-34 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-4               Healthy   {"health": "true"}
etcd-3               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
```
4. Deploy the worker nodes
4.1. Deploy kubelet
4.1.1. Install kubelet
- Run on hdss7-35 and hdss7-36
```bash
[root@hdss7-35 src]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@hdss7-35 src]# mv kubernetes /opt/release/kubernetes-v1.15.2
[root@hdss7-35 release]# ln -s /opt/release/kubernetes-v1.15.2 /opt/apps/kubernetes
[root@hdss7-35 release]# cd /opt/apps/kubernetes/
[root@hdss7-35 kubernetes]# rm -rf kubernetes-src.tar.gz
[root@hdss7-35 kubernetes]# cd server/bin/
[root@hdss7-35 bin]# rm -rf *.tar *_tag
[root@hdss7-35 bin]# mkdir /opt/apps/kubernetes/server/bin/certs/
```
4.1.2. Sign the certificate
- Sign on hdss7-40
- Add every IP that might ever become a worker node
```bash
[root@hdss7-40 certs]# vim kubelet-csr.json
{
    "CN": "k8s-kubelet",
    "hosts": [
        "127.0.0.1",
        "192.168.1.31",
        "192.168.1.32",
        "192.168.1.33",
        "192.168.1.34",
        "192.168.1.35",
        "192.168.1.36",
        "192.168.1.37",
        "192.168.1.38",
        "192.168.1.45"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@hdss7-40 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
2021/04/13 04:42:53 [INFO] generate received request
2021/04/13 04:42:53 [INFO] received CSR
2021/04/13 04:42:53 [INFO] generating key: rsa-2048
2021/04/13 04:42:54 [INFO] encoded CSR
2021/04/13 04:42:54 [INFO] signed certificate with serial number 73289397552719991187395015398446642396670182383
2021/04/13 04:42:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@hdss7-40 certs]# ll kubelet*
-rw-r--r-- 1 root root 1115 Apr 13 04:42 kubelet.csr
-rw-r--r-- 1 root root  479 Apr 13 04:41 kubelet-csr.json
-rw------- 1 root root 1679 Apr 13 04:42 kubelet-key.pem
-rw-r--r-- 1 root root 1468 Apr 13 04:42 kubelet.pem
[root@hdss7-40 certs]# scp client-key.pem client.pem ca.pem kubelet.pem kubelet-key.pem hdss7-35:/opt/apps/kubernetes/server/bin/certs/
[root@hdss7-40 certs]# scp client-key.pem client.pem ca.pem kubelet.pem kubelet-key.pem hdss7-36:/opt/apps/kubernetes/server/bin/certs/
```
4.1.3. Configure kubelet
- Run on hdss7-35 and hdss7-36
- set-cluster: create the cluster connection info; multiple k8s clusters can be defined
```bash
[root@hdss7-35 kubernetes]# mkdir /opt/apps/kubernetes/conf/
[root@hdss7-35 conf]# ln -s /opt/release/kubernetes-v1.15.2/server/bin/kubectl /usr/bin/kubectl
# the following only needs to run on one of the nodes
[root@hdss7-35 conf]# kubectl config set-cluster myk8s \
    --certificate-authority=/opt/apps/kubernetes/server/bin/certs/ca.pem \
    --embed-certs=true \
    --server=https://192.168.1.45:7443 \
    --kubeconfig=/opt/apps/kubernetes/conf/kubelet.kubeconfig
```
- set-credentials: create the user account, i.e. the client key and certificate used to log in; multiple credentials can be created
- Only needs to run on one of the nodes
```bash
[root@hdss7-35 conf]# kubectl config set-credentials k8s-node \
    --client-certificate=/opt/apps/kubernetes/server/bin/certs/client.pem \
    --client-key=/opt/apps/kubernetes/server/bin/certs/client-key.pem \
    --embed-certs=true \
    --kubeconfig=/opt/apps/kubernetes/conf/kubelet.kubeconfig
User "k8s-node" set.
```
- set-context: bind the account to the cluster
- Only needs to run on one of the nodes
```bash
[root@hdss7-35 conf]# kubectl config set-context myk8s-context \
    --cluster=myk8s \
    --user=k8s-node \
    --kubeconfig=/opt/apps/kubernetes/conf/kubelet.kubeconfig
Context "myk8s-context" created.
```

- use-context: select which context is current (only needs to run on one of the nodes)

```bash
[root@hdss7-35 conf]# kubectl config use-context myk8s-context --kubeconfig=/opt/apps/kubernetes/conf/kubelet.kubeconfig
```
4.1.4. Authorize the k8s-node user
- Only needs to run on one master node
```bash
[root@hdss7-33 conf]# vim k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
[root@hdss7-33 conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
[root@hdss7-33 conf]# kubectl get clusterrolebinding k8s-node
```
4.1.5. Prepare the pause image
- Push the pause image to the Harbor private registry; run on hdss7-40
```bash
[root@hdss7-40 ~]# docker image pull kubernetes/pause
Using default tag: latest
latest: Pulling from kubernetes/pause
4f4fb700ef54: Pull complete
b9c8ec465f6b: Pull complete
Digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105
Status: Downloaded newer image for kubernetes/pause:latest
docker.io/kubernetes/pause:latest
[root@hdss7-40 ~]# docker image tag kubernetes/pause:latest harbor.od.com/public/pause:latest
[root@hdss7-40 ~]# docker login -u admin harbor.od.com
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@hdss7-40 ~]# docker image push harbor.od.com/public/pause:latest
The push refers to repository [harbor.od.com/public/pause]
5f70bf18a086: Pushed
e16a89738269: Pushed
latest: digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 size: 938
```
4.1.6. Create the kubelet startup script
- Create the script and start kubelet on the node servers: hdss7-35, hdss7-36
```bash
[root@hdss7-35 conf]# vim /opt/apps/kubernetes/server/bin/kubelet-startup.sh
#!/bin/sh
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit

/opt/apps/kubernetes/server/bin/kubelet \
    --anonymous-auth=false \
    --cgroup-driver systemd \
    --cluster-dns 192.168.0.2 \
    --cluster-domain cluster.local \
    --runtime-cgroups=/systemd/system.slice \
    --kubelet-cgroups=/systemd/system.slice \
    --fail-swap-on="false" \
    --client-ca-file ./certs/ca.pem \
    --tls-cert-file ./certs/kubelet.pem \
    --tls-private-key-file ./certs/kubelet-key.pem \
    --hostname-override hdss7-35.host.com \
    --image-gc-high-threshold 20 \
    --image-gc-low-threshold 10 \
    --kubeconfig ../../conf/kubelet.kubeconfig \
    --log-dir /data/logs/kubernetes/kube-kubelet \
    --pod-infra-container-image harbor.od.com/public/pause:latest \
    --root-dir /data/kubelet
[root@hdss7-35 conf]# chmod u+x /opt/apps/kubernetes/server/bin/kubelet-startup.sh
[root@hdss7-35 conf]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
[root@hdss7-35 conf]# vim /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-7-35]
command=/opt/apps/kubernetes/server/bin/kubelet-startup.sh
numprocs=1
directory=/opt/apps/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=5
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
[root@hdss7-35 conf]# supervisorctl update
[root@hdss7-35 conf]# supervisorctl status
etcd-server-7-35                 RUNNING   pid 9338, uptime 5:46:12
kube-kubelet-7-35                RUNNING   pid 9857, uptime 0:00:43
```
- Check node status
```bash
[root@hdss7-33 ~]# kubectl get node
NAME                STATUS   ROLES    AGE     VERSION
hdss7-35.host.com   Ready    <none>   2m48s   v1.15.2
hdss7-36.host.com   Ready    <none>   2m55s   v1.15.2
```
4.1.7. Assign node role labels
```bash
[root@hdss7-33 ~]# kubectl label node hdss7-35.host.com node-role.kubernetes.io/master=
node/hdss7-35.host.com labeled
[root@hdss7-33 ~]# kubectl label node hdss7-35.host.com node-role.kubernetes.io/node=
node/hdss7-35.host.com labeled
[root@hdss7-33 ~]# kubectl label node hdss7-36.host.com node-role.kubernetes.io/master=
node/hdss7-36.host.com labeled
[root@hdss7-33 ~]# kubectl label node hdss7-36.host.com node-role.kubernetes.io/node=
node/hdss7-36.host.com labeled
[root@hdss7-33 ~]# kubectl get node
NAME                STATUS   ROLES         AGE   VERSION
hdss7-35.host.com   Ready    master,node   14h   v1.15.2
hdss7-36.host.com   Ready    master,node   14h   v1.15.2
```
4.2. Deploy kube-proxy
- kube-proxy must be installed on all node servers: hdss7-35, hdss7-36
4.2.1. Sign the certificate
- Back on hdss7-40 for signing
```bash
[root@hdss7-40 certs]# vim kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
```
- Because kube-proxy runs as its own user (system:kube-proxy), the shared client certificate cannot be used; a dedicated certificate must be signed.
```bash
[root@hdss7-40 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client
2021/04/13 21:26:04 [INFO] generate received request
2021/04/13 21:26:04 [INFO] received CSR
2021/04/13 21:26:04 [INFO] generating key: rsa-2048
2021/04/13 21:26:04 [INFO] encoded CSR
2021/04/13 21:26:04 [INFO] signed certificate with serial number 592921903324586220732736491700592054147446760846
2021/04/13 21:26:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@hdss7-40 certs]# ll kube-proxy-c*
-rw-r--r-- 1 root root 1005 Apr 13 21:26 kube-proxy-client.csr
-rw------- 1 root root 1679 Apr 13 21:26 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Apr 13 21:26 kube-proxy-client.pem
-rw-r--r-- 1 root root  267 Apr 13 21:25 kube-proxy-csr.json
[root@hdss7-40 certs]# scp kube-proxy-client-key.pem kube-proxy-client.pem hdss7-35:/opt/apps/kubernetes/server/bin/certs/
[root@hdss7-40 certs]# scp kube-proxy-client-key.pem kube-proxy-client.pem hdss7-36:/opt/apps/kubernetes/server/bin/certs/
```
4.2.2. Create the kube-proxy config
- Create on the node servers hdss7-35 and hdss7-36. It only needs to run on one of them; then copy /opt/apps/kubernetes/conf/kube-proxy.kubeconfig into the same directory on the other.
```bash
[root@hdss7-35 conf]# kubectl config set-cluster myk8s \
    --certificate-authority=/opt/apps/kubernetes/server/bin/certs/ca.pem \
    --embed-certs=true \
    --server=https://192.168.1.45:7443 \
    --kubeconfig=/opt/apps/kubernetes/conf/kube-proxy.kubeconfig
Cluster "myk8s" set.
[root@hdss7-35 conf]# kubectl config set-credentials kube-proxy \
    --client-certificate=/opt/apps/kubernetes/server/bin/certs/kube-proxy-client.pem \
    --client-key=/opt/apps/kubernetes/server/bin/certs/kube-proxy-client-key.pem \
    --embed-certs=true \
    --kubeconfig=/opt/apps/kubernetes/conf/kube-proxy.kubeconfig
User "kube-proxy" set.
[root@hdss7-35 conf]# kubectl config set-context myk8s-context \
    --cluster=myk8s \
    --user=kube-proxy \
    --kubeconfig=/opt/apps/kubernetes/conf/kube-proxy.kubeconfig
Context "myk8s-context" created.
[root@hdss7-35 conf]# kubectl config use-context myk8s-context --kubeconfig=/opt/apps/kubernetes/conf/kube-proxy.kubeconfig
Switched to context "myk8s-context".
[root@hdss7-35 conf]# scp kube-proxy.kubeconfig hdss7-36:/opt/apps/kubernetes/conf
```
4.2.3. Load the ipvs modules
- kube-proxy has three traffic scheduling modes: userspace, iptables, and ipvs; ipvs performs best. (A sketch for persisting the modules across reboots follows the output below.)
```bash
[root@hdss7-35 conf]# for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
ip_vs_dh
ip_vs_ftp
ip_vs
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_nq
ip_vs_pe_sip
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr
[root@hdss7-35 conf]# lsmod | grep ip_vs
ip_vs_wrr              12697  0
ip_vs_wlc              12519  0
ip_vs_sh               12688  0
ip_vs_sed              12519  0
ip_vs_rr               12600  0
ip_vs_pe_sip           12740  0
nf_conntrack_sip       33780  1 ip_vs_pe_sip
ip_vs_nq               12516  0
ip_vs_lc               12516  0
ip_vs_lblcr            12922  0
ip_vs_lblc             12819  0
ip_vs_ftp              13079  0
ip_vs_dh               12688  0
ip_vs                 145458  24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat                 26583  3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
nf_conntrack          139264  8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
```
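The loop above only loads the modules for the current boot. A minimal sketch for making the load persistent, assuming CentOS 7's systemd-modules-load convention (the module list here is an assumption covering the schedulers this setup uses):

```bash
# Persist the core ipvs modules across reboots via systemd-modules-load
[root@hdss7-35 conf]# cat > /etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_nq
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
[root@hdss7-35 conf]# systemctl restart systemd-modules-load
[root@hdss7-35 conf]# lsmod | grep ^ip_vs   # confirm the modules are loaded
```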
4.2.4. Create the kube-proxy startup script
```bash
[root@hdss7-35 ~]# vim /opt/apps/kubernetes/server/bin/kube-proxy-startup.sh
#!/bin/sh
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit

/opt/apps/kubernetes/server/bin/kube-proxy \
    --cluster-cidr 172.7.0.0/16 \
    --hostname-override hdss7-35.host.com \
    --proxy-mode=ipvs \
    --ipvs-scheduler=nq \
    --kubeconfig ../../conf/kube-proxy.kubeconfig
[root@hdss7-35 ~]# chmod u+x /opt/apps/kubernetes/server/bin/kube-proxy-startup.sh
[root@hdss7-35 ~]# mkdir -p /data/logs/kubernetes/kube-proxy
[root@hdss7-35 ~]# vim /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-7-35]
command=/opt/apps/kubernetes/server/bin/kube-proxy-startup.sh
numprocs=1
directory=/opt/apps/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=5
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
[root@hdss7-35 ~]# supervisorctl update
[root@hdss7-35 ~]# supervisorctl status
etcd-server-7-35                 RUNNING   pid 9338, uptime 21:43:42
kube-kubelet-7-35                RUNNING   pid 12885, uptime 15:35:05
kube-proxy-7-35                  RUNNING   pid 67983, uptime 0:00:33
[root@hdss7-35 ~]# yum install -y ipvsadm
[root@hdss7-35 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 192.168.1.33:6443            Masq    1      0          0
  -> 192.168.1.34:6443            Masq    1      0          0
```
5. Basic kubectl commands
- Create an nginx pod
```bash
[root@hdss7-33 conf]# vim create-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
  namespace: default
  labels:
    name: test-nginx
spec:
  containers:
  - name: nginx
    image: harbor.od.com/public/nginx:v1.19
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
      hostPort: 80
```
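Applying the manifest and watching the pod come up might look like this (sketch; it assumes an nginx image has been pushed to harbor.od.com/public/nginx:v1.19, the same way the pause image was pushed earlier):

```bash
[root@hdss7-33 conf]# kubectl create -f create-nginx.yaml
pod/test-nginx created
[root@hdss7-33 conf]# kubectl get pod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE
test-nginx   1/1     Running   0          30s   172.7.35.2   hdss7-35.host.com
# hostPort: 80 also exposes the pod on the node's own port 80
[root@hdss7-33 conf]# curl -s -o /dev/null -w '%{http_code}\n' http://hdss7-35.host.com
200
```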
Basic commands: create, delete, get, run, expose, set, explain, edit

create: create resources from a file or stdin

```bash
# Create Deployment and Service resources
$ kubectl create -f demo-deployment.yaml
$ kubectl create -f demo-service.yaml
```

delete: delete resources

```bash
# Delete the resources described in a yaml file; the yaml file itself is kept, which is handy
$ kubectl delete -f demo-deployment.yaml
$ kubectl delete -f demo-service.yaml
# Resources can also be deleted by naming them directly, e.g. removing a deployment and a service together
$ kubectl delete <resource-type> <resource-name>
```

get: query resource information

```bash
# List all resources
$ kubectl get all
$ kubectl get all --all-namespaces
# List pods
$ kubectl get pod
# Show pods with their labels
$ kubectl get pod --show-labels
# Match pods by a specific label
$ kubectl get pods -l app=example
# List nodes
$ kubectl get node
# Show nodes with their labels
$ kubectl get node --show-labels
# Show pod details, including the node (and IP) each pod runs on
$ kubectl get pod -o wide
# Show service details: name, type, cluster IP, ports, age, etc.
$ kubectl get svc
$ kubectl get svc -n kube-system
# List namespaces
$ kubectl get ns
$ kubectl get namespaces
# List pods across all namespaces
$ kubectl get pod --all-namespaces
# List pods across all namespaces, including the nodes they run on
$ kubectl get pod --all-namespaces -o wide
# List all replica sets: replica counts, availability, status, etc.
$ kubectl get rs
# List deployments: containers, images, labels, etc.
$ kubectl get deploy -o wide
$ kubectl get deployments -o wide
```