Shell Script Implementation: Kubernetes Single-Cluster Binary Deployment
There are three common ways to set up a Kubernetes cluster:
1. Minikube
Minikube is a tool that quickly spins up a single-node Kubernetes cluster locally, aimed at people trying out Kubernetes or doing day-to-day development. It is suitable only for learning and testing, not for production.
2. Kubeadm
kubeadm is the official tool for quickly installing and initializing a Kubernetes cluster following best practices; it provides kubeadm init and kubeadm join for rapid cluster deployment. At the time of writing, parts of kubeadm are still in beta/alpha, so it is a good way to study the design ideas behind the officially recommended best practices, but it is still rarely used in large production environments.
3. Binary packages (the recommended way for production)
Download the official release binaries and deploy each component by hand to assemble the cluster. This approach matches the standard for enterprise production Kubernetes environments and can be used for production deployments.
# wget https://dl.k8s.io/v1.16.1/kubernetes-server-linux-amd64.tar.gz
# wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
Software environment
OS: CentOS 7.8
Docker: docker-ce 18.06
Kubernetes: v1.16.1
Etcd: v3.3.13
Flanneld: v0.11.0
Server plan
IP | Hostname | Role | Components
192.168.10.10 | k8s-master | master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubectl
192.168.10.11 | k8s-node1 | node1 | kubelet, kube-proxy, docker, flannel, etcd
192.168.10.12 | k8s-node2 | node2 | kubelet, kube-proxy, docker, flannel, etcd
1. Environment preparation (all machines)
1.1 Disable the firewall and SELinux (omitted)
1.2 Set up mutual name resolution (omitted); copy the master's SSH public key to the node machines
https://www.cnblogs.com/user-sunli/p/13889477.html
1.3 Configure cluster time synchronization (omitted)
Cluster time synchronization
References:
https://www.jianshu.com/p/dd91df901302
https://www.cnblogs.com/pipci/p/12871993.html
https://www.cnblogs.com/quchunhui/p/7658853.html
To save network bandwidth, let one server in the cluster (the primary time server) sync against public NTP servers, and have the other servers sync against it.
Example: three servers need time synchronization
192.168.10.11 primary time server
192.168.10.12
192.168.10.13
Configuration steps
Install chrony on all machines
# yum install -y chrony
Primary time server
1. Edit the main config file
# egrep -v "^#|^$" /etc/chrony.conf
server 0.centos.pool.ntp.org iburst # upstream NTP servers this host syncs from
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.10.0/24 # which hosts may sync from this server; this line ships commented out, uncomment and edit it
logdir /var/log/chrony
Aliyun NTP servers: time1.aliyun.com time2.aliyun.com time3.aliyun.com time4.aliyun.com time5.aliyun.com time6.aliyun.com time7.aliyun.com
2. Restart the service
# systemctl restart chronyd
3. List the time sources
# chronyc sources
Other servers in the cluster
The other servers only need the server line changed to point at the primary time server:
[root@host2 ~]# egrep -v "^#|^$" /etc/chrony.conf
server 192.168.10.11 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
[root@host2 ~]# systemctl restart chronyd
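On any client, verify that the primary time server was picked as the sync source; chronyc marks the currently selected source with ^*:
[root@host2 ~]# chronyc sources
(the line for 192.168.10.11 should begin with ^*)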
1.4 Update the kernel (not needed on CentOS 7.6 and later)
# yum update
1.5 Disable swap
# swapoff -a
# sed -i.bak 's/^.*swap/#&/' /etc/fstab
1.6 Configure kernel parameters
# vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# sysctl -p /etc/sysctl.d/kubernetes.conf
1.7 Reboot (required if the kernel was upgraded)
# shutdown -r now
1.8 Verify
# uname -r
3.10.0-1062.4.1.el7.x86_64
# free -m
total used free shared buff/cache available
Mem: 1980 120 1371 9 488 1704
Swap: 0 0 0
2. Write the scripts and upload the binary packages
Everything below is done in the /root/ directory.
Main script
vim main.sh
#!/bin/bash
#author:sunli
#mail:<1916989848@qq.com>
k8s_master=192.168.10.10
k8s_node1=192.168.10.11
k8s_node2=192.168.10.12
sh ansible_docker.sh $k8s_node1 $k8s_node2 || { echo "docker install error"; exit 1; }
sh etcd_install.sh $k8s_master $k8s_node1 $k8s_node2 || { echo "etcd install error"; exit 1; }
sh flannel_install.sh $k8s_master $k8s_node1 $k8s_node2 || { echo "flannel install error"; exit 1; }
sh master.sh $k8s_master || { echo "master install error"; exit 1; }
sh node.sh $k8s_master $k8s_node1 $k8s_node2 || { echo "node install error"; exit 1; }
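Before running main.sh, it helps to confirm that every tarball the sub-scripts expect is already under /root/; a minimal pre-flight sketch (the file names follow the download list above):
for pkg in kubernetes-server-linux-amd64.tar.gz etcd-v3.3.13-linux-amd64.tar.gz flannel-v0.11.0-linux-amd64.tar.gz; do
    [ -e /root/$pkg ] && echo "ok: $pkg" || echo "MISSING: $pkg"
done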
Docker installation script
vim docker_install.sh
#!/bin/bash
curl http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.0.ce
[ ! -d /etc/docker ] && mkdir /etc/docker
cat > /etc/docker/daemon.json <<- EOF
{
"registry-mirrors": ["https://pf5f57i3.mirror.aliyuncs.com"]
}
EOF
systemctl enable docker
systemctl start docker
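#optional sanity check: the mirror configured above should show up under "Registry Mirrors"
docker info | grep -A1 "Registry Mirrors"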
vim ansible_docker.sh
#!/bin/bash
[ ! -x /usr/bin/ansible ] && yum -y install ansible
cat >> /etc/ansible/hosts << EOF
[docker]
$1
$2
EOF
ansible docker -m script -a 'creates=/root/docker_install.sh /root/docker_install.sh'
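A quick optional check that Docker is installed and running on every host in the [docker] group just added to the inventory:
# ansible docker -m shell -a "docker --version && systemctl is-active docker"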
CA signing script
vim CA.sh
#!/bin/bash
#author:sunli
#mail:<1916989848@qq.com>
#description: build a private CA with cfssl/cfssljson; generates ca-key.pem (private key), ca.pem (certificate) and ca.csr (certificate signing request)
CFSSL() {
#download the three cfssl binaries, install them as system commands and make them executable
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl* /usr/local/bin/
chmod +x /usr/local/bin/cfssl*
}
#install cfssl if the cfssljson command is missing
which cfssljson_linux-amd64 > /dev/null 2>&1 || CFSSL
#which service the certificates are being issued for
service=$1
[ ! -d /etc/$service/ssl ] && mkdir -p /etc/$service/ssl
CA_DIR=/etc/$service/ssl
#CA config file
CA_CONFIG() {
cat > $CA_DIR/ca-config.json <<- EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"$service": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
}
#CA certificate signing request
CA_CSR() {
cat > $CA_DIR/ca-csr.json <<- EOF
{
"CN": "$service",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "GuangDong",
"ST": "GuangZhou",
"O": "$service",
"OU": "System"
}
]
}
EOF
}
#CSR for the server certificate the CA will issue
SERVER_CSR() {
host1=192.168.10.10
host2=192.168.10.11
host3=192.168.10.12
host4=192.168.10.13
host5=192.168.10.14
host6=192.168.10.15
cat > $CA_DIR/server-csr.json <<- EOF
{
"CN": "$service",
"hosts": [
"127.0.0.1",
"$host1",
"$host2",
"$host3",
"$host4",
"$host5"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "GuangDong",
"ST": "GuangZhou",
"O": "$service",
"OU": "System"
}
]
}
EOF
}
SERVER_CSR1() {
host1=192.168.10.10
host2=192.168.10.20
host3=192.168.10.30
host4=192.168.10.40
cat > $CA_DIR/server-csr.json <<- EOF
{
"CN": "$service",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"$host1",
"$host2",
"$host3",
"$host4",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "GuangDong",
"ST": "GuangZhou",
"O": "$service",
"OU": "System"
}
]
}
EOF
}
CA_CONFIG && CA_CSR
[ "$service" == "kubernetes" ] && SERVER_CSR1 || SERVER_CSR
#generate the CA files ca-key.pem (private key) and ca.pem (certificate), plus ca.csr (CSR, usable for cross-signing or re-signing)
cd $CA_DIR/
cfssl_linux-amd64 gencert -initca ca-csr.json | cfssljson_linux-amd64 -bare ca
#issue the server certificate
cfssl_linux-amd64 gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=$service server-csr.json | cfssljson_linux-amd64 -bare server
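As an optional sanity check, cfssl-certinfo can decode an issued certificate and show its validity period and SANs, e.g. for the etcd cert:
# cfssl-certinfo_linux-amd64 -cert /etc/etcd/ssl/server.pem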
Install etcd
vim etcd_install.sh
#!/bin/bash
#unpack the etcd tarball and install the binaries as system commands
etcd_01=$1
etcd_02=$2
etcd_03=$3
sh CA.sh etcd || { echo "etcd CA build failed"; exit 1; }
#the etcd tarball must be copied into the current directory beforehand
dir=./
pkgname=etcd-v3.3.13-linux-amd64
[ ! -e $dir/$pkgname.tar.gz ] && echo "no package" && exit 1
tar xf $dir/$pkgname.tar.gz
cp -p $dir/$pkgname/etc* /usr/local/bin/
#create the etcd config file
ETCD_CONFIG() {
cat > /etc/etcd/etcd.conf <<- EOF
#[Member]
ETCD_NAME="etcd-01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$etcd_01:2380"
ETCD_LISTEN_CLIENT_URLS="https://$etcd_01:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$etcd_01:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$etcd_01:2379"
ETCD_INITIAL_CLUSTER="etcd-01=https://$etcd_01:2380,etcd-02=https://$etcd_02:2380,etcd-03=https://$etcd_03:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}
#create the etcd systemd unit file
ETCD_SERVICE() {
cat > /usr/lib/systemd/system/etcd.service <<- EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--peer-cert-file=/etc/etcd/ssl/server.pem \
--peer-key-file=/etc/etcd/ssl/server-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
}
ETCD_CONFIG && ETCD_SERVICE
#copy the master's files to the node machines; assumes SSH trust and name resolution are already set up
scp /usr/local/bin/etcd* $etcd_02:/usr/local/bin/
scp -r /etc/etcd/ $etcd_02:/etc/
scp /usr/lib/systemd/system/etcd.service $etcd_02:/usr/lib/systemd/system/
scp /usr/local/bin/etcd* $etcd_03:/usr/local/bin/
scp -r /etc/etcd/ $etcd_03:/etc/
scp /usr/lib/systemd/system/etcd.service $etcd_03:/usr/lib/systemd/system/
#install ansible and add the node IPs to the inventory
[ ! -x /usr/bin/ansible ] && yum -y install ansible
echo "[etcd]" >> /etc/ansible/hosts
echo "$etcd_02" >> /etc/ansible/hosts
echo "$etcd_03" >> /etc/ansible/hosts
#patch etcd.conf on etcd-02
cat > /tmp/etcd-02.sh <<- EOF
#!/bin/bash
sed -i "s#\"etcd-01\"#\"etcd-02\"#g" /etc/etcd/etcd.conf
sed -i "s#\"https://$etcd_01#\"https://$etcd_02#g" /etc/etcd/etcd.conf
EOF
ansible $etcd_02 -m script -a 'creates=/tmp/etcd-02.sh /tmp/etcd-02.sh'
#patch etcd.conf on etcd-03
#ansible $etcd_03 -m lineinfile -a "dest=/etc/etcd/etcd.conf regexp='ETCD_NAME=\"etcd-01\"' line='ETCD_NAME=\"etcd-03\"' backrefs=yes"
cat > /tmp/etcd-03.sh <<- EOF
#!/bin/bash
sed -i "s#\"etcd-01\"#\"etcd-03\"#g" /etc/etcd/etcd.conf
sed -i "s#\"https://$etcd_01#\"https://$etcd_03#g" /etc/etcd/etcd.conf
EOF
ansible $etcd_03 -m script -a 'creates=/tmp/etcd-03.sh /tmp/etcd-03.sh'
#start etcd-02 and etcd-03
ansible etcd -m service -a "name=etcd state=started enabled=yes"
#start etcd-01
systemctl enable etcd
systemctl start etcd
#convenience alias
cat > /etc/profile.d/alias_etcd.sh <<- EOF
alias etcdctld='etcdctl --cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--ca-file=/etc/etcd/ssl/ca.pem \
--endpoint=https://$etcd_01:2379,https://$etcd_02:2379,https://$etcd_03:2379'
EOF
source /etc/profile.d/alias_etcd.sh
#copy the alias to the nodes
scp /etc/profile.d/alias_etcd.sh $etcd_02:/etc/profile.d/
scp /etc/profile.d/alias_etcd.sh $etcd_03:/etc/profile.d/
ansible etcd -m shell -a "source /etc/profile.d/alias_etcd.sh"
#check etcd cluster health (open a new terminal first so the alias takes effect)
etcdctld cluster-health
echo "check etcd cluster health with: etcdctld cluster-health (open a new terminal so the alias takes effect)"
Install flannel
vim flannel_install.sh
#!/bin/bash
flannel_01=$1
flannel_02=$2
flannel_03=$3
#the flannel tarball must be copied into the current directory beforehand
dir=./
pkgname=flannel-v0.11.0-linux-amd64
[ ! -e $dir/$pkgname.tar.gz ] && echo "error:no package" && exit 1
tar xf $dir/$pkgname.tar.gz
mv $dir/{flanneld,mk-docker-opts.sh} /usr/local/bin/
#write the cluster Pod network range into etcd (run on any one etcd node)
cd /etc/etcd/ssl/
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoint=https://$flannel_01:2379,https://$flannel_02:2379,https://$flannel_03:2379 \
set /coreos.com/network/config '{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
#create flannel.conf
FLANNEL_CONFIG() {
cat >/etc/flannel.conf <<- EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://$flannel_01:2379,https://$flannel_02:2379,https://$flannel_03:2379 -etcd-cafile=/etc/etcd/ssl/ca.pem -etcd-certfile=/etc/etcd/ssl/server.pem -etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF
}
#create flanneld.service
FLANNEL_SERVICE() {
cat > /usr/lib/systemd/system/flanneld.service <<- EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/flannel.conf
ExecStart=/usr/local/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
}
#rewrite docker.service so Docker picks up the subnet assigned by flannel
DOCKER_SERVICE() {
cat > /usr/lib/systemd/system/docker.service <<- EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
}
FLANNEL_CONFIG && FLANNEL_SERVICE && DOCKER_SERVICE
#reload systemd and restart docker
systemctl daemon-reload
systemctl restart docker
scp /usr/local/bin/{flanneld,mk-docker-opts.sh} $flannel_02:/usr/local/bin/
scp /usr/local/bin/{flanneld,mk-docker-opts.sh} $flannel_03:/usr/local/bin/
scp /etc/flannel.conf $flannel_02:/etc/
scp /etc/flannel.conf $flannel_03:/etc/
scp /usr/lib/systemd/system/flanneld.service $flannel_02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/flanneld.service $flannel_03:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service $flannel_02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service $flannel_03:/usr/lib/systemd/system/
#install ansible and add the node IPs to the inventory
[ ! -x /usr/bin/ansible ] && yum -y install ansible
echo "[flannel]" >> /etc/ansible/hosts
echo "$flannel_02" >> /etc/ansible/hosts
echo "$flannel_03" >> /etc/ansible/hosts
ansible flannel -m systemd -a "name=flanneld state=started daemon_reload=yes enabled=yes"
ansible flannel -m service -a "name=docker state=restarted enabled=yes"
#convenience alias
cat > /etc/profile.d/alias_etcdf.sh <<- EOF
alias etcdctlf='cd /etc/etcd/ssl/;etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoint=https://$flannel_01:2379,https://$flannel_02:2379,https://$flannel_03:2379 ls /coreos.com/network/subnets'
EOF
source /etc/profile.d/alias_etcdf.sh
scp /etc/profile.d/alias_etcdf.sh $flannel_02:/etc/profile.d/
scp /etc/profile.d/alias_etcdf.sh $flannel_03:/etc/profile.d/
ansible flannel -m shell -a "source /etc/profile.d/alias_etcdf.sh"
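With flanneld running everywhere, each node should hold its own /24 from 10.244.0.0/16 and docker0 should have moved into that subnet. A quick ad-hoc check (a sketch, using the [flannel] inventory group and the alias defined above):
# ansible flannel -m shell -a "cat /run/flannel/subnet.env"
# ansible flannel -m shell -a "ip -4 addr show flannel.1 && ip -4 addr show docker0"
# etcdctlf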
Master node
vim master.sh
#!/bin/bash
master=$1
#create the kubernetes CA and certificates
sh CA.sh kubernetes || { echo "kubernetes CA build failed"; exit 1; }
#configure kube-apiserver
#the kubernetes tarball must be copied into the current directory beforehand
dir=./
pkgname=kubernetes-server-linux-amd64
[ ! -e $dir/$pkgname.tar.gz ] && echo "no package" && exit 1
tar xf $dir/$pkgname.tar.gz
cp -p $dir/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /usr/local/bin/
#create the bootstrap token file
TLS=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /etc/kubernetes/token.csv <<- EOF
$TLS,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
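#note: --token-auth-file expects the static token file format token,user,uid,"group1,group2";
#a generated token.csv therefore looks roughly like (the token value will differ):
#0fb61c46f7991559d33c852a9a963638,kubelet-bootstrap,10001,"system:kubelet-bootstrap"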
#create the kube-apiserver config file
KUBE_API_CONF() {
cat > /etc/kubernetes/apiserver <<- EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.10.10:2379,https://192.168.10.11:2379,https://192.168.10.12:2379 \
--bind-address=$master \
--secure-port=6443 \
--advertise-address=$master \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/server.pem \
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/server.pem \
--etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF
}
#create the kube-apiserver systemd unit file
KUBE_API_SERVICE() {
cat > /usr/lib/systemd/system/kube-apiserver.service <<- EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
}
KUBE_API_CONF && KUBE_API_SERVICE
#start the service
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
#deploy kube-scheduler
#create the kube-scheduler.conf config file
KUBE_SCH_CONF() {
cat > /etc/kubernetes/kube-scheduler.conf <<- EOF
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
EOF
}
#create the kube-scheduler systemd unit file
KUBE_SCH_SERVICE() {
cat > /usr/lib/systemd/system/kube-scheduler.service <<- EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
}
KUBE_SCH_CONF && KUBE_SCH_SERVICE
#start the service
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
#deploy kube-controller-manager
#create the kube-controller-manager.conf config file
KUBE_CM_CONF() {
cat > /etc/kubernetes/kube-controller-manager.conf <<- EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem"
EOF
}
#create the kube-controller-manager systemd unit file
KUBE_CM_SERVICE() {
cat > /usr/lib/systemd/system/kube-controller-manager.service <<- EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
}
KUBE_CM_CONF && KUBE_CM_SERVICE
#start the service
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
#check the status of each service and the master cluster state
kubectl get cs
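If everything came up, all components report Healthy; the output looks roughly like this (the table layout varies between kubectl versions):
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}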
Deploy the node machines
vim node.sh
#!/bin/bash
master=$1
node1=$2
node2=$3
ssh $node1 "mkdir -p /etc/kubernetes/ssl"
ssh $node2 "mkdir -p /etc/kubernetes/ssl"
dir=./
[ ! -d kubernetes ] && echo "error:no kubernetes dir" && exit 1
#deploy kubelet
#create the kube-proxy certificate on the master
kube_proxy_csr() {
cd /etc/kubernetes/ssl/
cat > /etc/kubernetes/ssl/kube-proxy-csr.json <<- EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "GuangDong",
"ST": "GuangZhou",
"O": "kubernetes",
"OU": "System"
}
]
}
EOF
cfssl_linux-amd64 gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json --profile=kubernetes kube-proxy-csr.json | cfssljson_linux-amd64 -bare kube-proxy
}
kube_proxy_csr || exit 1
#create the kubelet bootstrap kubeconfig
#write a helper script, bs_kubeconfig.sh
cat > /tmp/bs_kubeconfig.sh <<- EOF
#!/bin/bash
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
KUBE_SSL=/etc/kubernetes/ssl
KUBE_APISERVER="https://$master:6443"
cd \$KUBE_SSL/
# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=\${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=\${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# switch to the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=\${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
EOF
#run the helper script
sh /tmp/bs_kubeconfig.sh
#ls /etc/kubernetes/ssl/*.kubeconfig should now list two files:
#/etc/kubernetes/ssl/bootstrap.kubeconfig and /etc/kubernetes/ssl/kube-proxy.kubeconfig
#copy the binaries and the two freshly generated .kubeconfig files to all node machines
#SHELL_FOLDER=$(cd "$(dirname "$0")";pwd)
scp /root/kubernetes/server/bin/{kubelet,kube-proxy} $node1:/usr/local/bin/
scp /root/kubernetes/server/bin/{kubelet,kube-proxy} $node2:/usr/local/bin/
scp /etc/kubernetes/ssl/*.kubeconfig $node1:/etc/kubernetes/
scp /etc/kubernetes/ssl/*.kubeconfig $node2:/etc/kubernetes/
#node: create the kubelet config file
KUBELET_CONF() {
cat > /etc/kubernetes/kubelet.conf <<- EOF
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=$master \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--config=/etc/kubernetes/kubelet.yaml \
--cert-dir=/etc/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
}
#create the kube-proxy.conf config file
KUBE_PROXY_CONF() {
cat > /etc/kubernetes/kube-proxy.conf <<- EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=$master \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig"
EOF
}
#create the kubelet parameter configuration template (KubeletConfiguration)
KUBELET_YAML() {
cat > /etc/kubernetes/kubelet.yaml <<- EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: $master
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
EOF
}
#create the kubelet systemd unit file
KUBELET_SERVICE() {
cat > /usr/lib/systemd/system/kubelet.service <<- EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
}
#create the kube-proxy systemd unit file
KUBE_PROXY_SERVICE() {
cat > /usr/lib/systemd/system/kube-proxy.service <<- EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
}
KUBELET_CONF && KUBELET_YAML && KUBELET_SERVICE && KUBE_PROXY_CONF && KUBE_PROXY_SERVICE
scp /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf} $node1:/etc/kubernetes/
scp /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf} $node2:/etc/kubernetes/
scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service} $node1:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service} $node2:/usr/lib/systemd/system/
#patch kubelet.conf, kubelet.yaml and kube-proxy.conf for node1
cat > /tmp/kubelet_conf1.sh <<- EOF
#!/bin/bash
sed -i "s#$master#$node1#g" /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf}
EOF
ansible $node1 -m script -a 'creates=/tmp/kubelet_conf1.sh /tmp/kubelet_conf1.sh'
#patch kubelet.conf, kubelet.yaml and kube-proxy.conf for node2
cat > /tmp/kubelet_conf2.sh <<- EOF
#!/bin/bash
sed -i "s#$master#$node2#g" /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf}
EOF
ansible $node2 -m script -a 'creates=/tmp/kubelet_conf2.sh /tmp/kubelet_conf2.sh'
#install ansible and add the node IPs to the inventory
[ ! -x /usr/bin/ansible ] && yum -y install ansible
echo "[node]" >> /etc/ansible/hosts
echo "$node1" >> /etc/ansible/hosts
echo "$node2" >> /etc/ansible/hosts
ansible node -m systemd -a "name=kubelet state=started daemon_reload=yes enabled=yes"
ansible node -m systemd -a "name=kube-proxy state=started daemon_reload=yes enabled=yes"
#approve the kubelet CSR requests
kubectl certificate approve `kubectl get csr|awk 'NR>1{print $1}'`
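Once the CSRs are approved, both nodes register and should reach Ready within a minute or so; verify on the master:
# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.10.11   Ready    <none>   1m    v1.16.1
192.168.10.12   Ready    <none>   1m    v1.16.1
(the node names are the IPs set via --hostname-override; exact AGE will differ)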