3.2 Installing a Kubernetes Cluster from Binaries

The binary approach

Download the release binaries from GitHub and deploy every component by hand to assemble a Kubernetes cluster.

Kubeadm lowers the barrier to deployment, but it hides many details, which makes problems hard to troubleshoot. If you want things to be more controllable, deploying Kubernetes from binaries is recommended: it is more work to do by hand, but along the way you learn how the components work together, which also helps with later maintenance.

1.1 Requirements

Before starting, the machines that will form the Kubernetes cluster must meet the following requirements:

  • One or more machines running CentOS 7.x x86_64
  • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
  • Full network connectivity between all machines in the cluster
  • Internet access to pull images; if the servers cannot reach the internet, download the images in advance and import them on each node
  • Swap disabled
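
A quick sanity check on each machine (a minimal sketch; standard CentOS 7 commands assumed):

nproc                    # expect 2 or more CPUs
free -h                  # expect 2G or more total memory; the Swap line should read 0 once swap is disabled
cat /etc/redhat-release  # confirm the OS release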

1.2 Environment

Software:

Software          Version
Operating system  CentOS 7.6 x86_64
Docker            19.03 (CE)
Kubernetes        1.18

Overall server plan:

Role                    IP                                   Components
k8s-master1             192.168.31.71                        kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master2             192.168.31.74                        kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node1               192.168.31.72                        kubelet, kube-proxy, docker, etcd
k8s-node2               192.168.31.73                        kubelet, kube-proxy, docker, etcd
Load Balancer (Master)  192.168.31.81, 192.168.31.88 (VIP)   Nginx L4
Load Balancer (Backup)  192.168.31.82                        Nginx L4

Note: some readers' machines cannot run this many virtual machines, so this highly available cluster is built in two stages: first a single-Master cluster (192.168.31.71, 72, 73), which is then expanded into the multi-Master layout above. This also walks through the Master scale-out procedure.

Single-Master architecture diagram

Single-Master server plan:

Role        IP              Components
k8s-master  192.168.31.71   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1   192.168.31.72   kubelet, kube-proxy, docker, etcd
k8s-node2   192.168.31.73   kubelet, kube-proxy, docker, etcd

1.3 Operating System Initialization

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary

# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent

# Set the hostname according to the plan
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.31.71 k8s-master
192.168.31.72 k8s-node1
192.168.31.73 k8s-node2
EOF

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
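
If sysctl --system complains that these keys do not exist, the br_netfilter kernel module is probably not loaded yet; a quick check (a minimal sketch):

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables   # should print "= 1"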

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

2. Deploying the etcd Cluster

etcd is a distributed key-value store that Kubernetes uses as its datastore, so an etcd database must be prepared first. To avoid a single point of failure, it should be deployed as a cluster. Here a three-node cluster is used, which tolerates the failure of one machine; you could also use five nodes, which tolerates two failures (an etcd cluster needs a quorum of floor(n/2)+1 members, so 3 nodes need 2 healthy members and 5 nodes need 3).

Node name  IP
etcd-1     192.168.31.71
etcd-2     192.168.31.72
etcd-3     192.168.31.73

Note: to save machines, etcd is co-located with the Kubernetes nodes here. It can also be deployed independently of the Kubernetes cluster, as long as the apiserver can reach it.

2.1 Prepare the cfssl Certificate Tool

cfssl is an open-source certificate management tool that generates certificates from JSON files, which is more convenient to use than openssl.

Run this on any one server; the master node is used here.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
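
A quick check that the tools are on the PATH (a minimal sketch):

cfssl version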

2.2 Generate etcd Certificates

1. Self-signed Certificate Authority (CA)

Create the working directory:

mkdir ~/TLS/{etcd,k8s} -p
cd ~/TLS/etcd/

Self-signed CA:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem
ca-key.pem ca.pem

2.3 Issue the etcd HTTPS Certificate with the Self-signed CA

Create the certificate signing request file:

cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.31.71",
    "192.168.31.72",
    "192.168.31.73"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

Note: the IPs in the hosts field above are the internal communication IPs of all etcd nodes; none may be omitted. To make later scale-out easier, you can also list a few reserved IPs.

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json |cfssljson -bare server

ls server*pem 
server-key.pem server.pem 
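
Optionally, inspect the issued certificate and confirm the SANs (the cfssl-certinfo tool installed above prints certificate details):

cfssl-certinfo -cert server.pem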

2.4 Download the Binaries from GitHub

Download URL:

https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

2.5 Deploy the etcd Cluster

The following is done on node 1. To simplify things, all files generated on node 1 will later be copied to node 2 and node 3.

1. Create the working directory and unpack the binary package (needed on all three nodes)

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz 
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

2. Create the etcd configuration file

  • ETCD_NAME: node name, unique within the cluster
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: peer (cluster) listen URL
  • ETCD_LISTEN_CLIENT_URLS: client listen URL
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise URL
  • ETCD_ADVERTISE_CLIENT_URLS: client advertise URL
  • ETCD_INITIAL_CLUSTER: addresses of all cluster members
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one

Master

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

3. Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

4. Copy the certificates generated earlier

Copy the certificates generated earlier to the paths referenced in the configuration file:

cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

5. Start etcd and enable it at boot

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
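
Note: since the other two members are not up yet, starting etcd on node 1 may appear to hang (or even time out) while it waits for its peers; this is expected. You can watch progress with:

journalctl -u etcd -f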

6. Copy all files generated on node 1 to node 2 and node 3

scp -r /opt/etcd/ root@192.168.31.72:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.31.72:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@192.168.31.73:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.31.73:/usr/lib/systemd/system/

Then, on node 2 and node 3, edit etcd.conf and change the node name and the IPs to the local server's values:

Node1

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.72:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.72:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.72:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.72:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

node2

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.73:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.73:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.73:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.73:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
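
Finally, start etcd on node 2 and node 3 and enable it at boot, the same way as on node 1:

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd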

7. Check cluster health

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379" endpoint health

https://192.168.31.71:2379 is healthy: successfully committed proposal: took = 21.289133ms
https://192.168.31.72:2379 is healthy: successfully committed proposal: took = 165.432479ms
https://192.168.31.73:2379 is healthy: successfully committed proposal: took = 167.897358ms

If you see output like the above, the cluster is deployed successfully. If there is a problem, check the logs first: /var/log/messages or journalctl -u etcd.

Kubernetes now uses the etcd v3 API, so the CA, cert, and key must be provided, otherwise you get "Error: context deadline exceeded".
Without the --endpoints parameter, etcdctl talks to 127.0.0.1:2379 by default; when --endpoints is used, the CA, cert, and key are required.

8. Configure environment variables

# Use the etcdctl v3 API
export ETCDCTL_API=3

# etcd endpoints, i.e. the cluster's ip:port list, comma separated
export ETCDCTL_ENDPOINTS=https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379

# Certificate environment variables
export ETCDCTL_CACERT=/opt/etcd/ssl/ca.pem
export ETCDCTL_CERT=/opt/etcd/ssl/server.pem
export ETCDCTL_KEY=/opt/etcd/ssl/server-key.pem
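
With these variables exported, etcdctl no longer needs the long certificate and endpoint flags. For example (a minimal sketch):

/opt/etcd/bin/etcdctl member list --write-out=table
/opt/etcd/bin/etcdctl endpoint status --write-out=table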

3. Installing Docker

Download URL:

https://download.docker.com/linux/static/stable/x86_64/docker-19.03.0.tgz

Run the following on all nodes. Binary installation is used here; installing via yum works just as well.

3.1 Unpack the Binary Package

tar zxvf docker-19.03.0.tgz 
mv docker/* /usr/bin/

3.2 Manage Docker with systemd

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

3.3 Create the Configuration File

mkdir /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
    "registry-mirrors": ["https://f4573wwn.mirror.aliyuncs.com"]
}
EOF

registry-mirrors: Alibaba Cloud registry mirror used to speed up image pulls.

3.4 Start Docker and Enable It at Boot

systemctl daemon-reload
systemctl start docker
systemctl enable docker
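
A quick check that Docker is up and the mirror is picked up (a minimal sketch; output formatting varies slightly by version):

docker version
docker info | grep -A1 "Registry Mirrors"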

4. Deploying the Master Node

4.1 Generate the kube-apiserver Certificate

1. Self-signed Certificate Authority (CA)

cd ~/TLS/k8s

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem
ca-key.pem ca.pem

2. Use the self-signed CA to issue the kube-apiserver HTTPS certificate

Create the certificate signing request file:

cd ~/TLS/k8s

cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.31.71",
    "192.168.31.72",
    "192.168.31.73",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
     }
  ]
}
EOF

Note: the hosts field above must contain every IP used to reach the apiserver, including the master IPs, the load balancer IPs, and the VIP; none may be omitted (here 192.168.31.74, 81, 82, and 88 are included for the multi-Master setup in section 7). To make later scale-out easier, you can also list a few reserved IPs.

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json |cfssljson -bare server

ls server*pem 
server-key.pem server.pem 

4.2 Download the Binaries from GitHub

Download URLs:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183
https://gitee.com/RaYong8080/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183

Note: the page above lists many packages; downloading the server package alone is enough, as it contains the binaries for both the master and the worker nodes.

4.3 Unpack the Binary Package (all nodes)

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin/
cp kubectl /usr/bin/


scp kubectl root@192.168.31.73:/usr/bin/

4.4 Deploy kube-apiserver

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379 \\
--bind-address=192.168.31.71 \\
--secure-port=6443 \\
--advertise-address=192.168.31.71 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csr \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
  • --logtostderr: log to stderr (false here so logs go to files)
  • --v: log verbosity level
  • --log-dir: log directory
  • --etcd-servers: etcd cluster endpoints
  • --bind-address: listen address
  • --secure-port: HTTPS secure port
  • --advertise-address: address advertised to the rest of the cluster
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: Service virtual IP range
  • --enable-admission-plugins: admission control plugins
  • --authorization-mode: authorization mode; enables RBAC authorization and node self-management
  • --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
  • --token-auth-file: bootstrap token file
  • --service-node-port-range: default port range for NodePort Services
  • --kubelet-client-xxx: client certificate used by the apiserver to access kubelets
  • --tls-xxx-file: apiserver HTTPS certificates
  • --etcd-xxxfile: certificates for connecting to the etcd cluster
  • --audit-log-xxx: audit log settings

2. Copy the certificates generated earlier

Copy the certificates generated earlier to the paths referenced in the configuration file:

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

3. Enable the TLS bootstrapping mechanism

TLS bootstrapping: once the apiserver enables TLS authentication, the kubelet and kube-proxy on each node must use valid certificates issued by the CA to talk to kube-apiserver. With many nodes, issuing client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet authenticates to the apiserver as a low-privileged user and requests a certificate, which the apiserver signs dynamically. This approach is strongly recommended on nodes; currently it is mainly used for the kubelet, while kube-proxy still gets a single certificate that we issue ourselves.

TLS bootstrapping workflow:

Create the token file referenced in the configuration above:

cat > /opt/kubernetes/cfg/token.csr << EOF
b231eb5bbb461b30fa239031a1f191d6,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Format: token, user name, UID, user group

You can also generate your own token and substitute it:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
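
For example, a hedged sketch of rewriting the token file with a freshly generated token (remember to use the same token later when generating bootstrap.kubeconfig in section 5.2):

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csr
echo ${TOKEN}   # note this value down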

4. Manage the apiserver with systemd

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

5. Start and enable at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
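
A quick check that the apiserver answers on the secure port (the /version endpoint is normally readable without authentication; a minimal sketch):

curl -k https://192.168.31.71:6443/version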

6. Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

4.5 Deploy kube-controller-manager

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

  • --master: connect to the apiserver through the local insecure port 8080
  • --cluster-cidr: Pod IP range for the cluster; must match flannel's Network setting
  • --service-cluster-ip-range: cluster Service IP range
  • --leader-elect: automatic leader election when multiple instances of this component run (HA)
  • --cluster-signing-cert-file / --cluster-signing-key-file: the CA that automatically signs certificates for kubelets; must match the apiserver's CA

2. Manage controller-manager with systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

3. Start and enable at boot

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

4.6 Deploy kube-scheduler

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
EOF
  • --master: connect to the apiserver through the local insecure port 8080
  • --leader-elect: automatic leader election when multiple instances of this component run (HA)

2. Manage the scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

3. Start and enable at boot

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

4. Check cluster status

All components have started; use kubectl to check the status of the cluster components:

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}  

Output like the above means the master components are running normally.

5. Deploying the Worker Nodes

The following is still done on the master node, which also acts as a worker node.

5.1 Create the Working Directory and Copy the Binaries

Create the working directory on all worker nodes:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

Copy from the master node:

cd kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin    # local copy

5.2 Deploy kubelet

1. Create the configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
  • --hostname-override: display name, unique within the cluster
  • --network-plugin: enable CNI
  • --kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
  • --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
  • --config: configuration parameter file
  • --cert-dir: directory where the kubelet certificates are generated
  • --pod-infra-container-image: image of the container that manages the Pod network

2. Configuration parameter file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

3. Generate the bootstrap.kubeconfig file

KUBE_APISERVER="https://192.168.31.71:6443" # apiserver IP:port
TOKEN="b231eb5bbb461b30fa239031a1f191d6" # must match the token in token.csr

# Generate the kubelet bootstrap kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
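
Optionally inspect the result before copying it (a minimal sketch):

kubectl config view --kubeconfig=bootstrap.kubeconfig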

Copy it to the path referenced in the configuration file:

cp bootstrap.kubeconfig /opt/kubernetes/cfg/

4. Manage kubelet with systemd

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

5. Start and enable at boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

If the kubelet fails to start, check for errors:

cat /var/log/messages|grep kubelet|grep -i error

5.3 Approve the kubelet Certificate Request and Join the Cluster

 # View kubelet certificate requests
 kubectl get csr
 NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-RrAbmArhPhH9U4xm7oMqPY4oauQybeHvub9VMAj9VoM   48s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

 
 # Approve the request
kubectl certificate approve node-csr-RrAbmArhPhH9U4xm7oMqPY4oauQybeHvub9VMAj9VoM

 # View nodes
kubectl get node 
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   17s   v1.18.3

Note: because the network plugin has not been deployed yet, the node shows NotReady.

If kubectl needs to access kube-apiserver over HTTPS, some setup is required (by default kubectl talks to localhost:8080, the insecure port). Configure a kubeconfig:

# Create the configuration directory
mkdir ~/.kube
# Create the kubeconfig file
touch ~/.kube/config

Put your kubeconfig content into ~/.kube/config, then configure the environment variable: add KUBECONFIG=$HOME/.kube/config and export KUBECONFIG to /etc/profile, and reload it with source /etc/profile.

5.4 Deploy kube-proxy

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

2. Configuration parameter file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
mode: "ipvs"
EOF

3. Generate the kube-proxy.kubeconfig file

Generate the kube-proxy certificate:

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF


# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Generate the kubeconfig file:

KUBE_APISERVER="https://192.168.31.71:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig 
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy it to the path specified in the configuration file:

cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

4. Manage kube-proxy with systemd

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

5. Start and enable at boot

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

5.5 Deploy the CNI Network

First prepare the CNI binaries.

Download URL:

https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

Unpack the binary package into the default working directory:

mkdir /opt/cni/bin -p
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

Deploy the CNI network (flannel):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml 

The default image registry is not reachable, so the sed command above switches the image to a Docker Hub repository.

kubectl apply -f kube-flannel.yml

kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-zkt9h   1/1     Running   0          101s
# Inspect pod status
kubectl describe pods -n kube-system

5.6 Authorize the apiserver to Access kubelets

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
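
After the binding is applied, commands that go through the apiserver-to-kubelet path (such as kubectl logs and kubectl exec) should work. A minimal sketch of a check against one of the flannel pods:

kubectl -n kube-system logs --tail=5 $(kubectl -n kube-system get pods -o name | grep flannel | head -1)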

5.7 Add New Worker Nodes

1. Copy the files of the already-deployed node to the new nodes

On the master node, copy the worker-node files to the new nodes 192.168.31.72 and 192.168.31.73:

scp -r /opt/kubernetes root@192.168.31.72:/opt/
scp -r /opt/kubernetes root@192.168.31.73:/opt/

scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.31.72:/usr/lib/systemd/system

scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.31.73:/usr/lib/systemd/system

scp -r /opt/cni/ root@192.168.31.72:/opt/
scp -r /opt/cni/ root@192.168.31.73:/opt/

2. Delete the kubelet certificate and kubeconfig files (on the new nodes)

rm /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

Note: these files are generated automatically after a certificate request is approved and are unique to each node, so they must be deleted and regenerated.

3. Change the hostname override

# On k8s-node1
vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1
vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1

# On k8s-node2
vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node2
vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node2

4. Start and enable at boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

5. Approve the new nodes' kubelet certificate requests on the master

kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-AjchWd-e7ORKHpi6kwm17aIi7SWfao6S-UofT_o-2p8   2m35s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-RrAbmArhPhH9U4xm7oMqPY4oauQybeHvub9VMAj9VoM   70m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
#node1
kubectl certificate approve node-csr-AjchWd-e7ORKHpi6kwm17aIi7SWfao6S-UofT_o-2p8 

kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-AjchWd-e7ORKHpi6kwm17aIi7SWfao6S-UofT_o-2p8   7m34s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-RrAbmArhPhH9U4xm7oMqPY4oauQybeHvub9VMAj9VoM   75m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-Tmx2_KM58-Nh4kU_ii3Fwu_q8HCgs_xpZD8HGASFB_g   12s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
#node2
kubectl certificate approve node-csr-Tmx2_KM58-Nh4kU_ii3Fwu_q8HCgs_xpZD8HGASFB_g 

(Optional) If a node registered under the wrong name and needs to re-register, delete it and let its kubelet request a new certificate:

kubectl delete node k8s-master
kubectl delete node k8s-node1
kubectl delete node k8s-node2

6. Deploying the Dashboard and CoreDNS

6.1 Deploy the Dashboard

https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

By default the Dashboard is only reachable from inside the cluster; change the Service to type NodePort to expose it externally:

vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard


kubectl apply -f recommended.yaml

kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-694557449d-bxvcg   1/1     Running   0          114s
pod/kubernetes-dashboard-9774cc786-m9q8g         1/1     Running   0          114s

NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.48    <none>        8000/TCP        114s
service/kubernetes-dashboard        NodePort    10.0.0.122   <none>        443:30001/TCP   56s

Access URL: https://NodeIP:30001

Create a service account and bind it to the built-in cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token from the output.

6.2 Switching the Dashboard Language

The Dashboard UI is already localized; it currently supports English, Simplified Chinese, and others, switching automatically based on the browser language. To force the display language, add the following to the Dashboard Deployment's YAML:

# English
env:
- name: ACCEPT_LANGUAGE
  value: english

# Chinese
env:
- name: ACCEPT_LANGUAGE
  value: zh

This forces the container to render its content in the chosen language; it has been verified to work (though the translation may not be complete).

6.3 Deploy CoreDNS

CoreDNS provides Service name resolution inside the cluster. Apply a coredns.yaml whose Service clusterIP matches the clusterDNS configured for the kubelet (10.0.0.2 here):

kubectl apply -f coredns.yaml

kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
coredns-7b5f8b68d5-jdfmf      1/1     Running   0          116s
coredns-7b5f8b68d5-xqhxw      1/1     Running   0          116s
kube-flannel-ds-amd64-b2rhq   1/1     Running   0          10h
kube-flannel-ds-amd64-g5dnz   1/1     Running   0          9h
kube-flannel-ds-amd64-ltjs6   1/1     Running   0          9h

DNS resolution test:

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh 
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Output like the above means CoreDNS is working correctly.

7. High Availability (Scaling Out to Multiple Masters)

As a container cluster system, Kubernetes provides Pod-level self-healing through health checks plus restart policies, distributes Pods across nodes through scheduling, maintains the desired number of replicas, and automatically recreates Pods on other nodes when a node fails, which gives high availability at the application layer.

For the Kubernetes cluster itself, high availability also involves two further layers: the etcd database and the Kubernetes master components. etcd is already highly available thanks to the three-member cluster built earlier; this section covers making the master highly available.

The master acts as the control center, continuously communicating with the kubelet and kube-proxy on the worker nodes to maintain the state of the whole cluster. If the master fails, the cluster can no longer be managed with kubectl or the API.

The master runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. kube-controller-manager and kube-scheduler already achieve high availability through leader election, so master HA is mainly about kube-apiserver. Since it exposes an HTTP API, making it highly available is much like a web server: put a load balancer in front of it and scale it horizontally.

Multi-Master architecture diagram

7.1 Install Docker

Install Docker on the new master (192.168.31.74) the same way as in section 3 (binary package plus the systemd unit).

7.2 Deploy the Master2 Node (192.168.31.74)

The operations on Master2 are identical to those on the already-deployed master1, so we only need to copy all the Kubernetes files from master1, then change the server IP and hostname and start the services.

1. Create the etcd certificate directory

Create the etcd certificate directory on master2:

mkdir -p /opt/etcd/ssl 

2. Copy files (run on master1)

Copy all Kubernetes files and the etcd certificates from master1 to master2:

scp -r /opt/kubernetes root@192.168.31.74:/opt
scp -r /opt/cni/ root@192.168.31.74:/opt
scp -r /opt/etcd/ssl root@192.168.31.74:/opt/etcd
scp /usr/lib/systemd/system/kube* root@192.168.31.74:/usr/lib/systemd/system
scp /usr/bin/kubectl root@192.168.31.74:/usr/bin/

3. Delete certificate files (on master2)

Delete the kubelet certificate and kubeconfig files:

rm -f /opt/kubernetes/cfg/kubelet*
rm -f /opt/kubernetes/ssl/kubelet*

4. Change the IPs and hostname in the configuration files

Change the apiserver, kubelet, and kube-proxy configuration files to use the local IP and hostname:

vi /opt/kubernetes/cfg/kube-apiserver.conf 
...
--bind-address=192.168.31.74 \
--advertise-address=192.168.31.74 \
...

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2

5. Start and enable at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy

6. Check cluster status

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   

7. Approve the kubelet certificate request

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-jvn3j3erGwwkf7HctNYT2aEsb1HA43465ixW1Evo-Zk   39s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-jvn3j3erGwwkf7HctNYT2aEsb1HA43465ixW1Evo-Zk

7.3 Deploy the Nginx Load Balancer

kube-apiserver high availability architecture diagram:

  • Nginx is a mainstream web server and reverse proxy; here it provides layer-4 (TCP) load balancing for the apiservers.

  • Keepalived is a mainstream high-availability tool that implements active/standby failover of servers via a VIP. In this topology, Keepalived decides whether to fail over (move the VIP) based on Nginx's running state: if the Nginx master node goes down, the VIP is automatically bound to the Nginx backup node, so the VIP stays reachable and Nginx stays highly available.

1. Install packages (on both master and backup)

yum install epel-release -y
yum install nginx keepalived -y 

2. Nginx configuration file (identical on master and backup)

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two masters' kube-apiservers
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.31.71:6443;   # Master1 APISERVER IP:PORT
       server 192.168.31.74:6443;   # Master2 APISERVER IP:PORT
    }
    
    server {
       listen 6444;
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF

The nginx installed via yum ships with the stream module, but it is not loaded by default; add the following as the first line of nginx.conf:

load_module /usr/lib64/nginx/modules/ngx_stream_module.so;

3. Keepalived configuration file (Nginx master)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 { 
    state MASTER 
    interface ens33  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; unique per instance
    priority 100    # priority; set to 90 on the backup server
    advert_int 1    # VRRP heartbeat advertisement interval, 1 second by default
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    # Virtual IP
    virtual_ipaddress { 
        192.168.31.88/24
    } 
    track_script {
        check_nginx
    } 
}
EOF

  • vrrp_script: specifies the script that checks Nginx's state (used to decide whether to fail over)
  • virtual_ipaddress: the virtual IP (VIP)

Nginx health-check script:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

4. Keepalived configuration file (Nginx backup)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_BACKUP
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP 
    interface ens33
    virtual_router_id 51 # VRRP router ID; unique per instance
    priority 90
    advert_int 1
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.31.88/24
    } 
    track_script {
        check_nginx
    } 
}
EOF

The Nginx health-check script referenced in the configuration above:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

Note: Keepalived decides whether to fail over based on the script's exit code (0 means Nginx is healthy, non-zero means it is not).
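
You can test the script by hand (a minimal sketch): with Nginx running it should exit 0, and after Nginx is stopped it should exit 1.

bash /etc/keepalived/check_nginx.sh; echo $?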

5. Start and enable at boot

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived

FAQ: if Nginx fails to start, it is usually because the stream module is missing.

Steps:

1. Check the Nginx version

nginx -v

2. Check the Nginx build arguments

nginx -V

3. Download the matching Nginx source tarball (or module) and unpack it

wget http://nginx.org/download/nginx-1.20.1.tar.gz

4. Rebuild Nginx with the stream module

# ./configure <the build arguments printed by nginx -V> --add-module=/usr/local/   # for other third-party modules
./configure <the build arguments printed by nginx -V> --with-stream
make
make install

5. Copy the compiled executable into place
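
A hedged sketch of this step, assuming the yum-installed binary lives at /usr/sbin/nginx (check with `which nginx`); objs/nginx is produced by `make` in the source tree:

systemctl stop nginx
cp /usr/sbin/nginx /usr/sbin/nginx.bak   # back up the old binary
cp objs/nginx /usr/sbin/nginx
systemctl start nginx
nginx -V                                 # --with-stream should now appear in the build arguments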


Dependencies (install these before running ./configure):

yum -y install libxml2 libxml2-dev libxslt-devel 
yum -y install gd-devel 
yum -y install perl-devel perl-ExtUtils-Embed 
yum -y install GeoIP GeoIP-devel GeoIP-data
yum -y install pcre-devel
yum -y install openssl openssl-devel

6. Check Keepalived's working state

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:04:f7:2c brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.80/24 brd 192.168.31.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.31.88/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe04:f72c/64 scope link 
       valid_lft forever preferred_lft forever

As shown above, the virtual IP 192.168.31.88 is bound to the ens33 interface, so Keepalived is working correctly.

7. Nginx + Keepalived failover test

Stop Nginx on the master node and check whether the VIP fails over to the backup server:

Run pkill nginx on the Nginx master.
On the Nginx backup, run ip addr and confirm the VIP is now bound there.

8. Test access through the load balancer

From any node in the Kubernetes cluster, use curl against the VIP (port 6444, where Nginx listens) to query the Kubernetes version:

curl -k https://192.168.31.88:6444/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.3",
  "gitCommit": "2e7996e3e2712684bc73f0dec0200d64eec7fe40",
  "gitTreeState": "clean",
  "buildDate": "2020-05-20T12:43:34Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}

Getting the Kubernetes version back correctly means the load balancer is set up properly. The request path is: curl -> VIP (Nginx) -> apiserver.

The forwarded apiserver IPs can also be seen in the Nginx access log (/var/log/nginx/k8s-access.log).

We are not done yet; the most critical step is still ahead.

7.4 Point All Worker Nodes at the LB VIP

Think about it: although Master2 and a load balancer were added, we scaled out from a single-Master architecture, so every node component is still connected to Master1. If they are not switched to the VIP behind the load balancer, the master is still a single point of failure.

So the next step is to change every node component's configuration from 192.168.31.71 to 192.168.31.88 (the VIP):

Role         IP
k8s-master1  192.168.31.71
k8s-master2  192.168.31.74
k8s-node1    192.168.31.72
k8s-node2    192.168.31.73

That is, every node listed by kubectl get node.

Run the following on all of the worker nodes above:

sed -i 's#192.168.31.71:6443#192.168.31.88:6444#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy

Check node status:

kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
k8s-master    Ready    <none>   34h    v1.18.3
k8s-master2   Ready    <none>   101m   v1.18.3
k8s-node1     Ready    <none>   33h    v1.18.3
k8s-node2     Ready    <none>   33h    v1.18.3

With that, a complete highly available Kubernetes cluster has been deployed!

PS: public clouds generally do not support keepalived; there you can use the provider's load balancer product instead (an internal one is enough, and often free). The architecture is the same as above: load-balance across the kube-apiservers of the multiple masters.

7.5 Configure kube-proxy to Use IPVS

Prerequisites

modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules << EOF 
#!/bin/bash 
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- ip_vs_nq
modprobe -- ip_vs_sed
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && \
bash /etc/sysconfig/modules/ipvs.modules &&\
lsmod | grep -e ip_vs -e nf_conntrack_ipv4 

Update the kube-proxy configuration file:

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--masquerade-all=true \\
--proxy-mode=ipvs \\
--ipvs-min-sync-period=5s \\
--ipvs-sync-period=5s \\
--ipvs-scheduler=rr \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml "
EOF


Set mode to ipvs in the configuration parameter file:

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
mode: "ipvs"
EOF

FAQ

1. Many errors like the following appear in the kube-proxy log:

E0813 10:47:18.983433  454071 proxier.go:722] conntrack return with error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
E0813 10:47:18.983492  454071 proxier.go:722] conntrack return with error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
E0813 10:47:18.983513  454071 proxier.go:722] conntrack return with error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH

This is easy to understand: conntrack is not installed. Many people may not know what conntrack is: it is a userspace command used to control the kernel's ip_conntrack module, which handles connection tracking, in much the same way that iptables relates to netfilter.

A brief introduction to ip_conntrack: when a packet (a -> b) passes through a gateway and SNAT happens, its addresses become (m -> b). Even though NAT occurred, (a -> b) and (m -> b) belong to the same flow, and ip_conntrack records this so the two can be bound together. The outbound direction (source to destination) simply leaves the gateway after SNAT. What matters is the reply from b to a: when it reaches the gateway it is addressed (b -> m) because of the earlier SNAT, and ip_conntrack must bind (b -> m) to the same conntrack entry so the reply can be translated and returned. With ip_conntrack's help, the two directions of the connection are tied together and netfilter can forward the traffic correctly.

Back to the Kubernetes issue: when kube-proxy deletes a UDP Service, it also needs to clear these ip_conntrack entries; otherwise traffic would still be directed to the discarded Pods.

The fix is to install conntrack-tools:

yum install conntrack-tools