K8S Cluster Deployment: Binary Installation

Preface:

The officially provided ways to deploy Kubernetes:

minikube

Minikube is a tool that quickly runs a single-node Kubernetes locally, intended for users trying out Kubernetes or doing day-to-day development. It cannot be used for production.
Official docs: https://kubernetes.io/docs/setup/minikube/

kubeadm

kubeadm is also a tool; it provides the kubeadm init and kubeadm join commands for quickly deploying a Kubernetes cluster.
Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Binary packages

Download the release binary packages from the official site and deploy every component by hand to assemble a Kubernetes cluster.

Summary:

For production clusters, only kubeadm and binary packages are viable choices. kubeadm lowers the deployment barrier but hides many details, which makes problems hard to troubleshoot.
Here we deploy the cluster from binary packages, which is also the approach I recommend: manual deployment takes more effort, but you learn how the pieces actually work, which pays off in later maintenance.

1. Environment Preparation

1.1 Software Environment

Software        Version
OS              CentOS 7.5 x86_64
Docker          18.06 CE
Kubernetes      1.12 (v1.12.9)

1.2 Server Roles

Role            IP             Components
vm-k8s-master   10.99.18.236   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
vm-k8s-node1    10.99.18.237   kubelet, kube-proxy, docker, flannel, etcd
vm-k8s-node2    10.99.18.238   kubelet, kube-proxy, docker, flannel, etcd
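
Before deploying anything, a few host-level preparations are usually needed on every node. These steps are not part of the original notes, so treat them as assumptions and adjust to your environment:

### run on all three nodes (assumed prerequisites)
systemctl stop firewalld && systemctl disable firewalld    ### or open the required ports instead
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
swapoff -a                                                 ### kubelet.config below sets failSwapOn: false, but disabling swap is still safer
sed -i '/ swap / s/^/#/' /etc/fstab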


2. Deploying the etcd Cluster

We use cfssl to generate the self-signed certificates. First download the cfssl tools (run this on every node):

hostnamectl set-hostname <hostname>    ### set each node's hostname accordingly

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64 

mv cfssl_linux-amd64 /usr/bin/cfssl 
mv cfssljson_linux-amd64 /usr/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2.1 Generate etcd Certificates

Binary package download: https://github.com/etcd-io/etcd/releases/tag/v3.3.12 (v3.3.12 is the version installed below)

The steps below are identical on all three planned etcd nodes; the only difference is that the server IPs in the etcd configuration file must be those of the current node:
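
The script that generated the etcd certificates is not shown in these notes; the files copied into /opt/etcd/ssl further down (ca.pem, server.pem, server-key.pem) are assumed to come from a cfssl run along the following lines. This is a minimal sketch run from /root/etcd/ssl; the profile name "www" and the CSR fields are assumptions, while the hosts list with the three etcd IPs is the essential part:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": { "algo": "rsa", "size": 2048 },
    "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

# self-signed CA -> ca.pem, ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": ["10.99.18.236", "10.99.18.237", "10.99.18.238"],
    "key": { "algo": "rsa", "size": 2048 },
    "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

# server/peer certificate -> server.pem, server-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server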

Unpack the binary package:

[root@vm-k8s-master ~]# mkdir src
[root@vm-k8s-master ~]# cd src/
[root@vm-k8s-master src]# mkdir /opt/etcd/{bin,cfg,ssl} -p
[root@vm-k8s-master src]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.12/etcd-v3.3.12-linux-amd64.tar.gz
[root@vm-k8s-master src]# tar zxf etcd-v3.3.12-linux-amd64.tar.gz 
[root@vm-k8s-master src]# mv etcd-v3.3.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
[root@vm-k8s-master src]# rm -rf etcd-v3.3.12-linux-amd64
[root@vm-k8s-master src]# ll /opt/etcd/bin/
total 32264
-rwxr-xr-x 1 centos centos 18101056 Feb  8  2019 etcd
-rwxr-xr-x 1 centos centos 14930816 Feb  8  2019 etcdctl
[root@vm-k8s-master src]# 

The etcd configuration file:

[root@vm-k8s-master etcd]# cat /opt/etcd/cfg/etcd 
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.99.18.236:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.99.18.236:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.99.18.236:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.99.18.236:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.99.18.236:2380,etcd02=https://10.99.18.237:2380,etcd03=https://10.99.18.238:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
  • ETCD_NAME: node name

  • ETCD_DATA_DIR: data directory

  • ETCD_LISTEN_PEER_URLS: peer (cluster) listen address

  • ETCD_LISTEN_CLIENT_URLS: client listen address

  • ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address

  • ETCD_ADVERTISE_CLIENT_URLS: advertised client address

  • ETCD_INITIAL_CLUSTER: cluster member addresses

  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token

  • ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one

Script that generates the configuration file and systemd unit:

[root@vm-k8s-master etcd]# cat etcd.sh 
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
​
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
​
WORK_DIR=/opt/etcd
​
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
​
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
​
cat << EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
​
[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
​
[Install]
WantedBy=multi-user.target
EOF
​
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
​
[root@vm-k8s-master etcd]#
[root@vm-k8s-master etcd]# chmod +x etcd.sh 
[root@vm-k8s-master etcd]# ./etcd.sh etcd01 10.99.18.236 etcd02=https://10.99.18.237:2380,etcd03=https://10.99.18.238:2380

systemd unit for etcd:

[root@vm-k8s-master etcd]# cat /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
​
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd --name=${ETCD_NAME} --data-dir=${ETCD_DATA_DIR} --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} --initial-cluster=${ETCD_INITIAL_CLUSTER} --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
​
[Install]
WantedBy=multi-user.target
[root@vm-k8s-master etcd]# 

Copy the certificates generated earlier into the locations referenced by the configuration:

[root@vm-k8s-master etcd]# pwd
/root/etcd
[root@vm-k8s-master etcd]# scp ssl/*pem /opt/etcd/ssl/
[root@vm-k8s-master etcd]# 

Start etcd and enable it at boot. Note: on the first node, etcd will block (or the start command will time out) until the other cluster members come up; that is expected:

[root@vm-k8s-master ~]#  systemctl daemon-reload    ### reload the unit files
[root@vm-k8s-master ~]#  systemctl start etcd 
[root@vm-k8s-master ~]#  systemctl enable etcd

2.2 Copy the Configuration to the Other Nodes

Copy the configuration above to the other nodes:

[root@vm-k8s-master ~]# scp -rp /opt/etcd 10.99.18.237:/opt/
[root@vm-k8s-master ~]# scp -rp /opt/etcd 10.99.18.238:/opt/
[root@vm-k8s-master ~]# scp -rp /usr/lib/systemd/system/etcd.service 10.99.18.237:/usr/lib/systemd/system/
[root@vm-k8s-master ~]# scp -rp /usr/lib/systemd/system/etcd.service 10.99.18.238:/usr/lib/systemd/system/
[root@vm-k8s-node1 ~]# cat  /opt/etcd/cfg/etcd      ### edit the node-specific values
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.99.18.237:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.99.18.237:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.99.18.237:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.99.18.237:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.99.18.236:2380,etcd02=https://10.99.18.237:2380,etcd03=https://10.99.18.238:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@vm-k8s-node1 ~]# 

Once all nodes are deployed, check the health of the etcd cluster:

[root@vm-k8s-node1 ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://10.99.18.236:2379,https://10.99.18.237:2379,https://10.99.18.238:2379" cluster-health
member 420e4cda789c0fbd is healthy: got healthy result from https://10.99.18.236:2379
member 8d0288666777b53f is healthy: got healthy result from https://10.99.18.237:2379
member f4d64f2e5f6135b0 is healthy: got healthy result from https://10.99.18.238:2379
cluster is healthy
[root@vm-k8s-node1 ~]# 

If you see output like the above, the cluster is deployed successfully. If anything goes wrong, check the logs first: /var/log/messages or journalctl -u etcd

3. Deploying Docker on the Nodes

Run this on every Node:

[root@vm-k8s-node1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@vm-k8s-node1 ~]# yum-config-manager --add-repo  https://download.docker.com/linux/centos/docker-ce.repo
[root@vm-k8s-node1 ~]# yum install docker-ce-18.06.3.ce-3.el7
[root@vm-k8s-node1 ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
[root@vm-k8s-node1 ~]# systemctl start docker
[root@vm-k8s-node1 ~]# systemctl enable docker
### enable IPv4 forwarding and bridge netfilter
vim /etc/sysctl.conf 
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
### apply the settings
sysctl -p
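
Note: the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded; if sysctl -p complains that those keys are missing (standard CentOS 7 behavior), load the module first:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    ### persist across reboots
sysctl -p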

4. Deploying the Flannel Network

Flannel stores its own subnet information in etcd, so it must be able to connect to etcd successfully. First write the predefined subnet into etcd:

Binary download: https://github.com/coreos/flannel/releases

[root@vm-k8s-node1 ~]# cd /opt/etcd/ssl/
[root@vm-k8s-node1 ssl]# /opt/etcd/bin/etcdctl  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.99.18.236:2379,https://10.99.18.237:2379,https://10.99.18.238:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
[root@vm-k8s-node1 ssl]# 
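
To confirm the key was written, read it back with the same etcdctl v2 syntax as the set command above; it should echo the JSON that was just stored:

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.99.18.236:2379,https://10.99.18.237:2379,https://10.99.18.238:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}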

Run the following deployment steps on every planned Node.

Download and unpack the binary package:

[root@vm-k8s-node1 src]# cd /root/src/
[root@vm-k8s-node1 src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@vm-k8s-node1 src]# tar zxf flannel-v0.11.0-linux-amd64.tar.gz 
[root@vm-k8s-node1 src]# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
[root@vm-k8s-node1 src]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

Configure Flannel:

[root@vm-k8s-node1 ~]# mkdir flannel
[root@vm-k8s-node1 ~]# cd flannel/
[root@vm-k8s-node1 flannel]# cat flannel.sh 
#!/bin/bash
​
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
​
cat <<EOF >/opt/kubernetes/cfg/flanneld
​
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
​
EOF
​
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
​
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
​
EOF
​
cat <<EOF >/usr/lib/systemd/system/docker.service
​
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
​
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
​
[Install]
WantedBy=multi-user.target
​
EOF
​
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker
​
[root@vm-k8s-node1 flannel]# 
[root@vm-k8s-node1 flannel]# chmod +x flannel.sh 
[root@vm-k8s-node1 flannel]# ./flannel.sh https://10.99.18.236:2379,https://10.99.18.237:2379,https://10.99.18.238:2379

The generated flanneld configuration:

[root@vm-k8s-node1 flannel]# cat /opt/kubernetes/cfg/flanneld 
​
FLANNEL_OPTIONS="--etcd-endpoints=https://10.99.18.236:2379,https://10.99.18.237:2379,https://10.99.18.238:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
​
[root@vm-k8s-node1 flannel]# 

systemd unit for flanneld:

[root@vm-k8s-node1 flannel]# cat /usr/lib/systemd/system/flanneld.service 
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
​
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
​
[root@vm-k8s-node1 flannel]#

Docker is now configured to start with the subnet Flannel assigned:

[root@vm-k8s-node1 flannel]# cat  /usr/lib/systemd/system/docker.service|grep -Ev "#|^$"
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
[root@vm-k8s-node1 flannel]# 
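
The $DOCKER_NETWORK_OPTIONS variable comes from the file written by mk-docker-opts.sh. Inspecting it shows the subnet Flannel leased to this node; on node1 it should look roughly like this (matching the dockerd command line shown further down):

cat /run/flannel/subnet.env
DOCKER_NETWORK_OPTIONS=" --bip=172.17.80.1/24 --ip-masq=false --mtu=1450"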

Restart flanneld and Docker:

[root@vm-k8s-node1 flannel]# systemctl daemon-reload
[root@vm-k8s-node1 flannel]# systemctl start flanneld
[root@vm-k8s-node1 flannel]# systemctl enable flanneld
[root@vm-k8s-node1 flannel]# systemctl restart docker

Verify that it took effect:

[root@vm-k8s-node1 flannel]# ps -ef |grep docker
root      22947      1  0 17:24 ?        00:00:00 /usr/bin/dockerd --bip=172.17.80.1/24 --ip-masq=false --mtu=1450
root      22958  22947  0 17:24 ?        00:00:00 docker-containerd --config /var/run/docker/containerd/containerd.toml
root      23371   5987  0 17:24 pts/0    00:00:00 grep --color=auto docker
[root@vm-k8s-node1 flannel]# 
[root@vm-k8s-node1 flannel]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:40:b3:0a:21:00 brd ff:ff:ff:ff:ff:ff
    inet 10.99.18.237/24 brd 10.99.18.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f840:b3ff:fe0a:2100/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:f9:0a:68:27 brd ff:ff:ff:ff:ff:ff
    inet 172.17.80.1/24 brd 172.17.80.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 36:07:d7:63:20:4c brd ff:ff:ff:ff:ff:ff
    inet 172.17.80.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::3407:d7ff:fe63:204c/64 scope link 
       valid_lft forever preferred_lft forever
[root@vm-k8s-node1 flannel]# 
​
[root@vm-k8s-node2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:43:05:88:64:00 brd ff:ff:ff:ff:ff:ff
    inet 10.99.18.238/24 brd 10.99.18.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f843:5ff:fe88:6400/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:22:cd:6d:fe brd ff:ff:ff:ff:ff:ff
    inet 172.17.23.1/24 brd 172.17.23.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 82:b4:e5:89:76:23 brd ff:ff:ff:ff:ff:ff
    inet 172.17.23.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::80b4:e5ff:fe89:7623/64 scope link 
       valid_lft forever preferred_lft forever
[root@vm-k8s-node2 ~]# 

Make sure docker0 and flannel.1 are in the same subnet. Then test cross-node connectivity by pinging the other Node's docker0 IP from the current node:

[root@vm-k8s-node2 ~]# ping 172.17.80.1
PING 172.17.80.1 (172.17.80.1) 56(84) bytes of data.
64 bytes from 172.17.80.1: icmp_seq=1 ttl=64 time=0.227 ms
64 bytes from 172.17.80.1: icmp_seq=2 ttl=64 time=0.222 ms

If the ping succeeds, Flannel is deployed successfully. If not, check the logs: journalctl -u flanneld -f

5. Deploying Components on the Master Node

Before deploying the Kubernetes components, make absolutely sure etcd, flannel and docker are working properly; fix any problems first.

5.1 Generate Certificates

Create the CA and component certificates:

[root@vm-k8s-master ~]# mkdir /root/k8s/ssl -p
[root@vm-k8s-master ~]# mv k8s-cert.sh /root/k8s/ssl/
[root@vm-k8s-master ~]# cd !$
cd /root/k8s/ssl/
[root@vm-k8s-master ssl]# cat k8s-cert.sh 
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
​
cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
​
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------
​
cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "10.99.18.236",
      "10.99.18.237",
      "10.99.18.238",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
​
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
​
#-----------------------
​
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
​
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
​
#-----------------------
​
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
​
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@vm-k8s-master ssl]# ./k8s-cert.sh 
[root@vm-k8s-master ssl]# ls *pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem
[root@vm-k8s-master ssl]# 

Move the certificate files into place (create /opt/kubernetes/{bin,cfg,ssl} on the Master first if it does not exist yet):

[root@vm-k8s-master ~]# scp /root/k8s/ssl/*pem /opt/kubernetes/ssl/
[root@vm-k8s-master ~]# ll /opt/kubernetes/ssl/
total 32
-rw------- 1 root root 1675 Oct 24 19:24 admin-key.pem
-rw-r--r-- 1 root root 1399 Oct 24 19:24 admin.pem
-rw------- 1 root root 1679 Oct 24 19:24 ca-key.pem
-rw-r--r-- 1 root root 1359 Oct 24 19:24 ca.pem
-rw------- 1 root root 1675 Oct 24 19:24 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Oct 24 19:24 kube-proxy.pem
-rw------- 1 root root 1679 Oct 24 19:24 server-key.pem
-rw-r--r-- 1 root root 1627 Oct 24 19:24 server.pem
[root@vm-k8s-master ~]#

5.2 Deploy the kube-apiserver Component

Binary download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md. Downloading kubernetes-server-linux-amd64.tar.gz alone is enough; it contains all the required components.

[root@vm-k8s-master src]# tar zxf kubernetes-server-linux-amd64.tar.gz 
[root@vm-k8s-master src]# cd kubernetes/server/bin/
[root@vm-k8s-master bin]# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/

Create the token file (it will be used later for TLS bootstrapping):

[root@vm-k8s-master bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
e25e08cd9f3de1e2e7e4935be62eb2de
[root@vm-k8s-master bin]# cat  /opt/kubernetes/cfg/token.csv
e25e08cd9f3de1e2e7e4935be62eb2de,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@vm-k8s-master bin]# 

Column 1: random string (generate your own). Column 2: user name. Column 3: UID. Column 4: user group.

[root@vm-k8s-master ~]# mkdir /root/k8s/master
[root@vm-k8s-master ~]# cd !$
cd /root/k8s/master
[root@vm-k8s-master master]# ll
total 12
-rw-r--r-- 1 root root 1426 Aug 14  2018 apiserver.sh
-rw-r--r-- 1 root root 1091 Oct 22  2018 controller-manager.sh
-rw-r--r-- 1 root root  622 Aug 14  2018 scheduler.sh
[root@vm-k8s-master master]# 

Script that creates the apiserver configuration and systemd unit:

[root@vm-k8s-master master]# cat apiserver.sh 
#!/bin/bash
​
MASTER_ADDRESS=$1
ETCD_SERVERS=$2
​
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
​
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
​
EOF
​
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
​
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
EOF
​
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
​
[root@vm-k8s-master master]# chmod +x apiserver.sh 
[root@vm-k8s-master master]# ./apiserver.sh 10.99.18.236 https://10.99.18.236:2379,https://10.99.18.237:2379,https://10.99.18.238:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@vm-k8s-master master]# echo $?
0
[root@vm-k8s-master master]# cat /opt/kubernetes/cfg/kube-apiserver 
​
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.99.18.236:2379,https://10.99.18.237:2379,https://10.99.18.238:2379 \
--bind-address=10.99.18.236 \
--secure-port=6443 \
--advertise-address=10.99.18.236 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
​
[root@vm-k8s-master master]# 

It wires in the certificates generated earlier; make sure apiserver can reach etcd. Parameter notes:

  • --logtostderr: log to standard error

  • --v: log level

  • --etcd-servers: etcd cluster addresses

  • --bind-address: listen address

  • --secure-port: HTTPS secure port

  • --advertise-address: cluster advertise address

  • --allow-privileged: allow privileged containers

  • --service-cluster-ip-range: Service virtual IP range

  • --enable-admission-plugins: admission control plugins

  • --authorization-mode: authorization mode; enables RBAC authorization and Node self-management

  • --enable-bootstrap-token-auth: enable TLS bootstrapping (covered later)

  • --token-auth-file: token file

  • --service-node-port-range: port range allocated to NodePort Services

systemd unit for kube-apiserver:

[root@vm-k8s-master master]# cat /usr/lib/systemd/system/kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
​
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
[root@vm-k8s-master master]# 

Start it:

[root@vm-k8s-master master]# systemctl daemon-reload
[root@vm-k8s-master master]# systemctl enable kube-apiserver
[root@vm-k8s-master master]# systemctl start kube-apiserver
[root@vm-k8s-master master]# netstat -lnpt |grep kube
tcp        0      0 10.99.18.236:6443      0.0.0.0:*               LISTEN      52995/kube-apiserve 
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      52995/kube-apiserve 
[root@vm-k8s-master master]# 
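
As an extra sanity check, query the health endpoint on the local insecure port (8080, as shown in the netstat output above); it should return ok:

curl http://127.0.0.1:8080/healthz
ok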

5.3 Deploy the kube-scheduler Component

Create the scheduler configuration:

[root@vm-k8s-master ~]# cd /root/k8s/master/
[root@vm-k8s-master master]# ll
total 12
-rwxr-xr-x 1 root root 1426 Aug 14  2018 apiserver.sh
-rw-r--r-- 1 root root 1091 Oct 22  2018 controller-manager.sh
-rw-r--r-- 1 root root  622 Aug 14  2018 scheduler.sh
[root@vm-k8s-master master]# chmod +x scheduler.sh 
[root@vm-k8s-master master]# cat scheduler.sh 
#!/bin/bash
​
MASTER_ADDRESS=$1
​
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
​
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
​
EOF
​
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
​
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
EOF
​
​
[root@vm-k8s-master master]# ./scheduler.sh 127.0.0.1
[root@vm-k8s-master master]# cat /opt/kubernetes/cfg/kube-scheduler 
​
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
​
[root@vm-k8s-master master]#

Parameter notes:

  • --master: connect to the local apiserver

  • --leader-elect: when multiple instances of this component run, a leader is elected automatically (for HA)

systemd unit for kube-scheduler:

[root@vm-k8s-master master]# cat /usr/lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
​
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
[root@vm-k8s-master master]# 

Start it:

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
[root@vm-k8s-master master]# netstat -lnpt |grep kube
tcp        0      0 10.99.18.236:6443      0.0.0.0:*               LISTEN      52995/kube-apiserve 
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      52995/kube-apiserve 
tcp6       0      0 :::10251                :::*                    LISTEN      53830/kube-schedule 
[root@vm-k8s-master master]# 

5.4 Deploy the kube-controller-manager Component

[root@vm-k8s-master master]# chmod +x controller-manager.sh 
[root@vm-k8s-master master]# cat controller-manager.sh 
#!/bin/bash
​
MASTER_ADDRESS=$1
​
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
​
​
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
​
EOF
​
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
​
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
EOF
​
​
[root@vm-k8s-master master]# 
[root@vm-k8s-master master]# ./controller-manager.sh 10.99.18.236
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@vm-k8s-master master]# 

The generated controller-manager configuration. Note that the first run above left --master empty (--master=:8080), so the script is re-run below with 127.0.0.1:

[root@vm-k8s-master master]# cat /opt/kubernetes/cfg/kube-controller-manager 
​
​
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
​
[root@vm-k8s-master master]# ./controller-manager.sh  127.0.0.1
[root@vm-k8s-master master]# cat /opt/kubernetes/cfg/kube-controller-manager 
​
​
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
​
[root@vm-k8s-master master]#

systemd unit for kube-controller-manager:

[root@vm-k8s-master master]# cat /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
​
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
[root@vm-k8s-master master]# 

Start it:

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
[root@vm-k8s-master master]# netstat -lnpt |grep kube
tcp        0      0 10.99.18.236:6443      0.0.0.0:*               LISTEN      52995/kube-apiserve 
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      55471/kube-controll 
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      52995/kube-apiserve 
tcp6       0      0 :::10251                :::*                    LISTEN      55801/kube-schedule 
tcp6       0      0 :::10257                :::*                    LISTEN      55471/kube-controll 
[root@vm-k8s-master master]#

All components are now started. Use the kubectl tool to check the current status of the cluster components:

[root@vm-k8s-master ~]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
[root@vm-k8s-master ~]# 

Output like the above means all components are healthy.

6. Deploying Components on the Nodes

Once the Master apiserver has TLS authentication enabled, a Node's kubelet must present a valid CA-signed certificate to communicate with the apiserver. Signing certificates by hand becomes tedious when there are many Nodes, which is what the TLS Bootstrapping mechanism is for: kubelet starts as a low-privileged user and automatically requests a certificate from the apiserver, which signs kubelet's certificate dynamically.
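
In short, the flow is: (1) kubelet authenticates to the apiserver with the shared token from token.csv; (2) it submits a certificate signing request (CSR); (3) an administrator approves the CSR on the Master (done in section 6.3 below); (4) the certificate is signed and returned, and kubelet stores it under --cert-dir and generates kubelet.kubeconfig automatically.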

6.1 Bind the kubelet-bootstrap User to the System Cluster Role

[root@vm-k8s-master ~]# /opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap  --clusterrole=system:node-bootstrapper  --user=kubelet-bootstrap  
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@vm-k8s-master ~]# echo $?
0
[root@vm-k8s-master ~]#

6.2 Create the kubeconfig Files

In the directory where the Kubernetes certificates were generated, run the following commands to produce the kubeconfig files:

[root@vm-k8s-master ~]# mkdir /root/k8s/node -p
[root@vm-k8s-master ~]# cd /root/k8s/node/
[root@vm-k8s-master node]# cat /opt/kubernetes/cfg/token.csv 
e25e08cd9f3de1e2e7e4935be62eb2de,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@vm-k8s-master node]# cat kubeconfig.sh 
# create the TLS Bootstrapping token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=e25e08cd9f3de1e2e7e4935be62eb2de
​
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
​
#----------------------
​
APISERVER=$1
SSL_DIR=$2

# create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
​
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
​
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
​
# use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
​
#----------------------
# create the kube-proxy kubeconfig
​
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
​
kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
​
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
​
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@vm-k8s-master node]# chmod +x kubeconfig.sh
[root@vm-k8s-master node]# tail -1 /etc/profile
export PATH=$PATH:/opt/kubernetes/bin
[root@vm-k8s-master node]# source /etc/profile 
[root@vm-k8s-master node]# ./kubeconfig.sh 10.99.18.236 /opt/kubernetes/ssl
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@vm-k8s-master node]# 
​
[root@vm-k8s-master node]# ls *config
bootstrap.kubeconfig  kube-proxy.kubeconfig
[root@vm-k8s-master node]# 

Copy these two files into /opt/kubernetes/cfg on every Node:

[root@vm-k8s-master node]# scp *config 10.99.18.237:/opt/kubernetes/cfg/
[root@vm-k8s-master node]# scp *config 10.99.18.238:/opt/kubernetes/cfg/

6.3 Deploy the kubelet Component

Copy kubelet and kube-proxy from the server binary package downloaded earlier into /opt/kubernetes/bin on each Node:

[root@vm-k8s-master ~]# cd /root/src/kubernetes/server/bin/
[root@vm-k8s-master bin]# ls
apiextensions-apiserver              kube-apiserver.docker_tag           kube-proxy
cloud-controller-manager             kube-apiserver.tar                  kube-proxy.docker_tag
cloud-controller-manager.docker_tag  kube-controller-manager             kube-proxy.tar
cloud-controller-manager.tar         kube-controller-manager.docker_tag  kube-scheduler
hyperkube                            kube-controller-manager.tar         kube-scheduler.docker_tag
kubeadm                              kubectl                             kube-scheduler.tar
kube-apiserver                       kubelet                             mounter
[root@vm-k8s-master bin]#
[root@vm-k8s-master bin]# scp kubelet 10.99.18.238:/opt/kubernetes/bin/
kubelet                                                                         100%  169MB  94.2MB/s   00:01    
[root@vm-k8s-master bin]# scp kube-proxy 10.99.18.238:/opt/kubernetes/bin/
kube-proxy                                                                      100%   48MB  87.1MB/s   00:00    
[root@vm-k8s-master bin]# scp kubelet 10.99.18.237:/opt/kubernetes/bin/   
kubelet                                                                         100%  169MB  91.8MB/s   00:01    
[root@vm-k8s-master bin]# scp kube-proxy 10.99.18.237:/opt/kubernetes/bin/
kube-proxy                                                                      100%   48MB  84.5MB/s   00:00    
[root@vm-k8s-master bin]# 

Create the kubelet configuration:

[root@vm-k8s-node1 ~]# mkdir /root/k8s/node -p
[root@vm-k8s-node1 ~]# cd /root/k8s/node/
[root@vm-k8s-node1 node]# rz      ### upload kubelet.sh and proxy.sh via Zmodem
[root@vm-k8s-node1 node]# cat kubelet.sh 
#!/bin/bash
​
NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}
​
cat <<EOF >/opt/kubernetes/cfg/kubelet
​
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
​
EOF
​
cat <<EOF >/opt/kubernetes/cfg/kubelet.config
​
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP} 
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
​
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
​
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
​
[Install]
WantedBy=multi-user.target
EOF
​
​
[root@vm-k8s-node1 node]# chmod +x kubelet.sh 
[root@vm-k8s-node1 node]# ./kubelet.sh 10.99.18.237

The generated kubelet configuration:

[root@vm-k8s-node1 ~]# cat /opt/kubernetes/cfg/kubelet
​
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.99.18.237 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
​
[root@vm-k8s-node1 ~]# 

Parameter notes:

  • --hostname-override: the name this Node shows in the cluster

  • --kubeconfig: kubeconfig path; generated automatically after bootstrap

  • --bootstrap-kubeconfig: the bootstrap.kubeconfig generated earlier

  • --cert-dir: where issued certificates are stored

  • --pod-infra-container-image: the pause image that holds each Pod's network namespace

The generated /opt/kubernetes/cfg/kubelet.config is as follows:

[root@vm-k8s-node1 ~]# cat /opt/kubernetes/cfg/kubelet.config 
​
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.99.18.237
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2 
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
[root@vm-k8s-node1 ~]# 

systemd unit for kubelet:

[root@vm-k8s-node1 ~]# cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
​
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
​
[Install]
WantedBy=multi-user.target
[root@vm-k8s-node1 ~]# 

Start it:

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
[root@vm-k8s-node1 ~]# ps -ef |grep kubelet
root      38641      1  0 16:01 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.99.18.237 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      39599  38222  0 16:03 pts/1    00:00:00 grep --color=auto kubelet
[root@vm-k8s-node1 ~]#

Copy the script files to the other Node:

[root@vm-k8s-node1 cfg]# scp -r /root/k8s 10.99.18.238:~       
proxy.sh                                                                        100%  645   212.3KB/s   00:00    
kubelet.sh                                                                      100% 1215   423.3KB/s   00:00    
[root@vm-k8s-node1 cfg]# 
[root@vm-k8s-node2 ~]# cd /root/k8s/node/
[root@vm-k8s-node2 node]# ls
kubelet.sh  proxy.sh
[root@vm-k8s-node2 node]# ./kubelet.sh 10.99.18.238
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
[root@vm-k8s-node2 node]# ps -ef|grep kubelet
root      39807      1  1 16:07 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.99.18.238 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      40222   4173  0 16:08 pts/0    00:00:00 grep --color=auto kubelet
[root@vm-k8s-node2 node]# 

Approve the Nodes joining the cluster on the Master:

A freshly started kubelet has not yet joined the cluster; its certificate request must be approved manually. On the Master, list the pending signing requests, approve them, then check the node list:

kubectl get csr 
kubectl certificate approve XXXXID 
kubectl get node
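
For reference, a pending request looks roughly like this (the node-csr-... name is a hash and will differ in your environment):

kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SWhJrq0   39s   kubelet-bootstrap   Pending
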
[root@vm-k8s-master ~]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
10.99.18.237   Ready    <none>   46s   v1.12.9
10.99.18.238   Ready    <none>   67s   v1.12.9
[root@vm-k8s-master ~]# 

6.4 Deploy the kube-proxy Component

Create the kube-proxy configuration:

[root@vm-k8s-node1 ~]# cd /root/k8s/node/
[root@vm-k8s-node1 node]# ll
total 8
-rwxr-xr-x 1 root root 1215 Oct 23  2018 kubelet.sh
-rw-r--r-- 1 root root  645 Aug 14  2018 proxy.sh
[root@vm-k8s-node1 node]# chmod +x proxy.sh 
[root@vm-k8s-node1 node]# cat proxy.sh 
#!/bin/bash
​
NODE_ADDRESS=$1
​
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
​
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
​
EOF
​
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
​
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
EOF
​
[root@vm-k8s-node1 node]# ./proxy.sh 10.99.18.237

The generated kube-proxy configuration:

[root@vm-k8s-node1 node]# cat /opt/kubernetes/cfg/kube-proxy
​
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.99.18.237 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
​
[root@vm-k8s-node1 node]# 

systemd unit for kube-proxy:

[root@vm-k8s-node1 node]# cat /usr/lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Proxy
After=network.target
​
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
[root@vm-k8s-node1 node]# 

Start it:

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
[root@vm-k8s-node1 node]# ps -ef |grep kube-proxy
root      45309      1  1 16:16 ?        00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.99.18.237 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root      45490  38222  0 16:17 pts/1    00:00:00 grep --color=auto kube-proxy
[root@vm-k8s-node1 node]# 

Node2 is deployed the same way.
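
Note: --proxy-mode=ipvs only takes effect if the IPVS kernel modules are available; otherwise kube-proxy falls back to iptables mode. A common preparation on CentOS 7 (standard module names, adjust for your kernel):

for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
yum install -y ipvsadm
ipvsadm -Ln    ### after kube-proxy starts, Service virtual servers should appear here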

7. Check Cluster Status

[root@vm-k8s-master ~]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
10.99.18.237   Ready    <none>   8m26s   v1.12.9
10.99.18.238   Ready    <none>   8m47s   v1.12.9
[root@vm-k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
[root@vm-k8s-master ~]#

8. Run a Test Example

Create an Nginx web deployment to test whether the cluster works properly:

[root@vm-k8s-master ~]# kubectl run nginx --image=nginx --replicas=3
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@vm-k8s-master ~]# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
service/nginx exposed

Check the Pods and the Service:

[root@vm-k8s-master ~]# kubectl get pods
NAME                    READY   STATUS             RESTARTS   AGE
nginx-dbddb74b8-g95k2   1/1     Running            0          105s
nginx-dbddb74b8-l29wb   1/1     Running            0          105s
nginx-dbddb74b8-qk77k   1/1     Running            0          105s
[root@vm-k8s-master ~]# 
[root@vm-k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        55m
nginx        NodePort    10.0.0.13    <none>        88:41567/TCP   2m50s
[root@vm-k8s-master ~]# 

Access the Nginx deployed in the cluster by opening http://10.99.18.237:41567 in a browser (the NodePort from the Service above), or with curl:

[root@vm-k8s-master ~]# curl 10.99.18.237:41567
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@vm-k8s-master ~]# 

9. CoreDNS: Pod Access to External Networks

Without a cluster DNS add-on, Pods cannot reach the outside world by name:

kubectl exec -it <podname> -- /bin/bash
ping www.baidu.com    ### fails: the name cannot be resolved from inside the Pod

Deploy CoreDNS in the kube-system namespace:

[root@vm-k8s-master cfg]# cat coredns.yaml 
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: lizhenliang/coredns:1.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2 
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
[root@vm-k8s-master cfg]#  pwd
/opt/kubernetes/cfg
The clusterIP: 10.0.0.2 is the clusterDNS address configured earlier for kubelet, inside the 10.0.0.0/24 service range used by kubelet and kube-proxy.
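
Apply the manifest and verify DNS resolution from a throwaway Pod. This verification step is not in the original notes; busybox:1.28 is chosen because nslookup is broken in many newer busybox images:

kubectl apply -f coredns.yaml
kubectl get pods -n kube-system | grep coredns
kubectl run busybox --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes
kubectl run busybox2 --image=busybox:1.28 --rm -it --restart=Never -- ping -c 2 www.baidu.com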
