k8s Binary Deployment Explained

Environment:

192.168.1.101 -- master01 + etcd01
192.168.1.102 -- etcd02
192.168.1.103 -- etcd03
192.168.1.104 -- node01
192.168.1.105 -- node02
192.168.1.106 -- node03
192.168.1.107 -- harbor (private registry)

Overview

etcd stores the state of the entire cluster;
apiserver is the single entry point for resource operations, providing authentication, authorization, access control, API registration and discovery;
controller manager maintains cluster state: failure detection, auto-scaling, rolling updates, and so on;
scheduler handles resource scheduling, placing Pods onto appropriate machines according to the configured scheduling policy;
kubelet manages the container lifecycle on each node, as well as volumes (CVI) and networking (CNI);
the container runtime handles image management and the actual running of Pods and containers (CRI);
kube-proxy provides in-cluster service discovery and load balancing for Services;

Beyond the core components, some recommended add-ons:
kube-dns provides DNS for the whole cluster
Ingress Controller provides an external entry point for services
Heapster provides resource monitoring
Dashboard provides a GUI
Federation provides clusters spanning availability zones
Fluentd-elasticsearch provides cluster log collection, storage, and querying

Network ranges:

         flannel:   172.10.0.0/16
ServiceClusterIP:   172.10.0.0/16
      clusterDNS:   172.10.1.1
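
Note that as listed above, the flannel (pod) network and the Service cluster IP range are both 172.10.0.0/16; in practice these two ranges must not overlap, so double-check your plan before generating any certificates. A minimal shell sketch of an overlap check (the function names here are made up for illustration):

```shell
# cidr_overlap A/prefix B/prefix -> prints "overlap" or "ok"
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24) | ($2<<16) | ($3<<8) | $4 )); }

cidr_overlap() {
  local a=${1%/*} pa=${1#*/} b=${2%/*} pb=${2#*/}
  local p=$(( pa < pb ? pa : pb ))                       # compare on the shorter prefix
  local mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  if [ $(( $(ip2int "$a") & mask )) -eq $(( $(ip2int "$b") & mask )) ]; then
    echo overlap
  else
    echo ok
  fi
}

cidr_overlap 172.10.0.0/16 172.10.0.0/16   # -> overlap (the ranges as listed above)
cidr_overlap 172.17.0.0/16 172.10.0.0/16   # -> ok
```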

References:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#downloads-for-v1131
https://kubernetes.io/docs/home/?path=users&persona=app-developer&level=foundational
https://github.com/etcd-io/etcd
https://shengbao.org/348.html
https://github.com/coreos/flannel
http://www.cnblogs.com/blogscc/p/10105134.html
https://blog.csdn.net/xiegh2014/article/details/84830880
https://blog.csdn.net/tiger435/article/details/85002337
https://www.cnblogs.com/wjoyxt/p/9968491.html
https://blog.csdn.net/zhaihaifei/article/details/79098564
http://blog.51cto.com/jerrymin/1898243
http://www.cnblogs.com/xuxinkun/p/5696031.html
https://www.kubernetes.org.cn/5025.html

Download links (you may need a proxy to reach these):

https://dl.k8s.io/v1.13.12/kubernetes-server-linux-amd64.tar.gz
https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

etcd deployment

Installing cfssl

cd /usr/local/src/
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

Creating the etcd certificates

export etcd01=192.168.1.101
export etcd02=192.168.1.102
export etcd03=192.168.1.103
export etcd_data_dir=/export/data/etcd
export localhost=$(ifconfig eth0 | awk '/inet /{print $2}')
mkdir -p ${etcd_data_dir}
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
cd /k8s/etcd/ssl/

etcd CA config

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
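
The 87600h expiry used in these profiles works out to ten years:

```shell
# 87600 hours / 24 hours per day / 365 days per year = 10 years
echo $(( 87600 / 24 / 365 ))   # -> 10
```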

etcd CA certificate

cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

etcd server certificate

# hosts must list the IP addresses of all three etcd nodes; remember to change them to your own
cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "${etcd01}",
    "${etcd02}",
    "${etcd03}"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

Generate the etcd CA certificate and private key (initialize the CA)

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# output
2019/11/06 15:39:48 [INFO] generating a new CA key and certificate from CSR
2019/11/06 15:39:48 [INFO] generate received request
2019/11/06 15:39:48 [INFO] received CSR
2019/11/06 15:39:48 [INFO] generating key: rsa-2048
2019/11/06 15:39:48 [INFO] encoded CSR
2019/11/06 15:39:48 [INFO] signed certificate with serial number 585291419186689846171752473066422999458858758659

Generate the server certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
# output
2019/11/06 15:40:06 [INFO] generate received request
2019/11/06 15:40:06 [INFO] received CSR
2019/11/06 15:40:06 [INFO] generating key: rsa-2048
2019/11/06 15:40:07 [INFO] encoded CSR
2019/11/06 15:40:07 [INFO] signed certificate with serial number 616227990429827501363810279661704496755579941204
2019/11/06 15:40:07 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem

Installing etcd

# Tip: download this locally (through a proxy if needed) and upload it to the server; direct downloads can be slow
cd /usr/local/src/
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/

etcd main configuration file

cat << EOF | tee /k8s/etcd/cfg/etcd.conf 
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="${etcd_data_dir}"
ETCD_LISTEN_PEER_URLS="https://${localhost}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${localhost}:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${localhost}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${localhost}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${etcd01}:2380,etcd02=https://${etcd02}:2380,etcd03=https://${etcd03}:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
EOF

Configuration notes

ETCD_NAME node name
ETCD_DATA_DIR data directory
ETCD_LISTEN_PEER_URLS listen address for cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS listen address for client traffic
ETCD_INITIAL_ADVERTISE_PEER_URLS peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS client address advertised to clients
ETCD_INITIAL_CLUSTER cluster member addresses, comma-separated
ETCD_INITIAL_CLUSTER_TOKEN cluster token
ETCD_INITIAL_CLUSTER_STATE state when joining: "new" for a new cluster, "existing" to join one that already exists

etcd systemd unit file

vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/export/data/etcd
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" --cert-file=\"${ETCD_CERT_FILE}\" --key-file=\"${ETCD_KEY_FILE}\" --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" --client-cert-auth=\"${ETCD_CLIENT_CERT_AUTH}\" --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" --peer-client-cert-auth=\"${ETCD_PEER_CLIENT_CERT_AUTH}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
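
The unit loads etcd.conf through EnvironmentFile, and because ExecStart wraps etcd in /bin/bash -c, the ${ETCD_*} references are expanded by bash from that environment. The KEY="value" format is also valid shell, which you can sanity-check locally (the paths here are throwaway temp files, not the real config):

```shell
# Simulate how the unit's environment file is consumed: the same file that
# systemd loads via EnvironmentFile can be sourced by a shell.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
ETCD_NAME="etcd01"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.101:2379"
EOF
. "$cfg"
echo "name=${ETCD_NAME} clients=${ETCD_LISTEN_CLIENT_URLS}"
rm -f "$cfg"
```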

Start etcd (before starting, apply the same configuration on etcd02 and etcd03, remembering to change the IPs and the ETCD_NAME parameter accordingly)

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Health check

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://${etcd01}:2379,https://${etcd02}:2379,https://${etcd03}:2379" cluster-health
# output
member 40fd0311a47c2fa9 is healthy: got healthy result from https://192.168.1.101:2379
member 8b60281620c78359 is healthy: got healthy result from https://192.168.1.102:2379
member ff41ada5935bd98a is healthy: got healthy result from https://192.168.1.103:2379
cluster is healthy

Master deployment

Generate the Kubernetes certificates and private keys; first create the Kubernetes CA certificate

cd /k8s/kubernetes/ssl
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

2019/11/06 17:37:34 [INFO] generating a new CA key and certificate from CSR
2019/11/06 17:37:34 [INFO] generate received request
2019/11/06 17:37:34 [INFO] received CSR
2019/11/06 17:37:34 [INFO] generating key: rsa-2048
2019/11/06 17:37:35 [INFO] encoded CSR
2019/11/06 17:37:35 [INFO] signed certificate with serial number 33061203788432767485911793456865092703925639072

Creating the apiserver certificate

export master01=192.168.1.101
export master02=192.168.1.102
export master03=192.168.1.103
export VIP=172.20.103.210
cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "172.10.0.1",
      "127.0.0.1",
      "${master01}",
      "${master02}",
      "${master03}",
      "${VIP}",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Parameter notes

In the hosts field:
172.10.0.1 is the cluster IP of the kubernetes Service, taken from the pre-planned 172.10.0.0/16 range; Kubernetes normally assigns 172.10.0.1 to it
127.0.0.1  lets client tools on the master connect via 127.0.0.1:port without going through the certificate; if omitted, you may hit permission errors later
The rest are master addresses. For a single-node master one IP is enough; I added two extra IPs so the control plane can be scaled out later.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

2019/11/06 17:38:20 [INFO] generate received request
2019/11/06 17:38:20 [INFO] received CSR
2019/11/06 17:38:20 [INFO] generating key: rsa-2048
2019/11/06 17:38:21 [INFO] encoded CSR
2019/11/06 17:38:21 [INFO] signed certificate with serial number 298585487028849925737841613188164881684400754820
2019/11/06 17:38:21 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Creating the kube-proxy certificate

cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

2019/11/06 17:39:35 [INFO] generate received request
2019/11/06 17:39:35 [INFO] received CSR
2019/11/06 17:39:35 [INFO] generating key: rsa-2048
2019/11/06 17:39:36 [INFO] encoded CSR
2019/11/06 17:39:36 [INFO] signed certificate with serial number 179375415898878619372428495237194776940178122496
2019/11/06 17:39:36 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

#ls
ca-config.json  ca-csr.json  ca.pem          kube-proxy-csr.json  kube-proxy.pem  server-csr.json  server.pem
ca.csr          ca-key.pem   kube-proxy.csr  kube-proxy-key.pem   server.csr      server-key.pem

Deploying the Kubernetes server binaries

Unpack the files

tar -zxvf kubernetes-server-linux-amd64.tar.gz 
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/

Deploying kube-apiserver: create the TLS bootstrapping token

root@k8s-master-1 bin:# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
0a7a3cf71f596a5a83c9740d87024d3f
 
vim /k8s/kubernetes/cfg/token.csv
0a7a3cf71f596a5a83c9740d87024d3f,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Column 1: a random string (generate your own)
Column 2: username
Column 3: UID
Column 4: group
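
Token generation and the CSV line can be combined in one short script (the file path here is a temp file for illustration; in the real deployment it is /k8s/kubernetes/cfg/token.csv):

```shell
# Generate a 32-hex-char bootstrap token and write token.csv in the
# token,user,uid,"group" format described above.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
TOKEN_CSV=$(mktemp)   # real deployment: /k8s/kubernetes/cfg/token.csv
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > "$TOKEN_CSV"
cat "$TOKEN_CSV"
```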

Create the apiserver configuration file

vim /k8s/kubernetes/cfg/kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379 \
--bind-address=192.168.1.101 \
--secure-port=6443 \
--advertise-address=192.168.1.101 \
--allow-privileged=true \
--service-cluster-ip-range=172.10.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem  \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

Parameter notes

--bind-address listen address
--secure-port listen port
--advertise-address advertised address
--allow-privileged=true when true, Kubernetes allows containers with system privileges to run in Pods
--service-cluster-ip-range the pre-planned Service cluster IP range
--authorization-mode=Node,RBAC enable the Node and RBAC authorization plugins
--kubelet-https=true use https when talking to kubelets
--token-auth-file=$kubernetesDir/token.csv the token file generated above
--service-node-port-range=30000-50000 NodePort port range, set here to 30000-50000 (the default is 30000-32767)
--tls-cert-file=$kubernetesTLSDir/apiserver.pem the apiserver's TLS certificate
--tls-private-key-file=$kubernetesTLSDir/apiserver.key the apiserver's TLS private key
--client-ca-file=$kubernetesTLSDir/ca.pem the CA root certificate used to verify client certificates
--service-account-key-file=$kubernetesTLSDir/ca.key the key used to verify ServiceAccount tokens
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction the admission controllers to enable or disable
--etcd-cafile=$etcdCaPem the CA root certificate for etcd access
--etcd-certfile=$etcdPem the TLS certificate for etcd access
--etcd-keyfile=$etcdKeyPem the TLS private key for etcd access
Other useful parameters:
--runtime-config=rbac.authorization.k8s.io/v1beta1 enable this RBAC API group/version
--storage-backend=etcd3 use the etcd version 3 storage backend
--enable-swagger-ui=true enable swagger-ui; Kubernetes uses it to provide online API browsing
--apiserver-count=3 number of API servers running in the cluster; fine to leave unset with a single instance
--event-ttl=1h keep audit events in the API server for 1 hour

Create the apiserver systemd unit

vim /usr/lib/systemd/system/kube-apiserver.service 

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
ps aux |grep kube-apiserver
netstat -tulpn |grep kube-apiserve
tcp        0      0 192.168.1.101:6443     0.0.0.0:*               LISTEN      32174/kube-apiserve 
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      32174/kube-apiserve 

Deploying kube-scheduler: create the kube-scheduler configuration file

vim  /k8s/kubernetes/cfg/kube-scheduler 
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

Notes

--address serve http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support https
--kubeconfig path to the kubeconfig file kube-scheduler uses to connect to and authenticate with kube-apiserver
--leader-elect=true enable leader election for clustered operation; the node elected leader does the work while the other instances block

Create the kube-scheduler systemd unit

vim /usr/lib/systemd/system/kube-scheduler.service 
 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl start kube-scheduler.service
netstat -lntp |grep kube-scheduler
tcp6       0      0 :::10251                :::*                    LISTEN      1227/kube-scheduler 
tcp6       0      0 :::10259                :::*                    LISTEN      1227/kube-scheduler

Deploying kube-controller-manager: create the kube-controller-manager configuration file

vim /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=172.10.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"

Notes

--master=127.0.0.1:8080 the apiserver address
--address=127.0.0.1 the local listen address; it must be 127.0.0.1, because kube-apiserver currently expects the scheduler and controller-manager to run on the same machine
--service-cluster-ip-range=172.10.0.0/16 the Kubernetes Service network range
--cluster-name=kubernetes the cluster name
--cluster-signing-cert-file=$kubernetesTLSDir/ca.pem the CA root certificate used to sign the certificates created for TLS bootstrap
--cluster-signing-key-file=$kubernetesTLSDir/ca.key the CA root private key used to sign the certificates created for TLS bootstrap
--service-account-private-key-file=$kubernetesTLSDir/ca.key the private key used to sign ServiceAccount tokens
--root-ca-file=$kubernetesTLSDir/ca.pem the CA root certificate used to verify the kube-apiserver certificate; when this parameter is set, the CA certificate is also placed into each Pod's ServiceAccount
--leader-elect=true enable leader election; with only one instance there is nothing to elect, it mainly matters when there are multiple API servers
--cluster-cidr=$podClusterIP the cluster's Pod network range

Create the kube-controller-manager systemd unit

vim /usr/lib/systemd/system/kube-controller-manager.service 
 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
netstat -lntp |grep kube-controlle
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      5036/kube-controlle 
tcp6       0      0 :::10257                :::*                    LISTEN      5036/kube-controlle 

Set environment variables

vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH
source /etc/profile

Check the master component status

kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"} 

Node deployment

Installing Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y
systemctl start docker
systemctl enable docker

Deploying kubelet

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs. On startup, kubelet automatically registers node information with kube-apiserver, and its built-in cadvisor collects and monitors the node's resource usage. For security, only the https port is opened; requests are authenticated and authorized, and unauthorized access (e.g. from apiserver or heapster) is rejected.

Install the binaries

# The server tarball already contains everything needed; just copy the files downloaded on the master to each node
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
cd /usr/local/src/
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/

Copy the certificates to the node(s) (run from /k8s/kubernetes/ssl on the master)

scp ./* 192.168.1.104:$PWD
scp ./* 192.168.1.105:$PWD
scp ./* 192.168.1.106:$PWD

Create the kubelet bootstrap kubeconfig files via a script

vim /k8s/kubernetes/cfg/environment.sh
#!/bin/bash
# create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=0a7a3cf71f596a5a83c9740d87024d3f
KUBE_APISERVER="https://192.168.1.101:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
 
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
 
# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
 
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
 
#----------------------
 
# create the kube-proxy kubeconfig file
 
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-credentials kube-proxy \
  --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Run the script

#sh environment.sh 
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".

Create the kubelet configuration template

vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.104
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["172.10.1.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Notes

How API Server access control works

API Server access control has three stages:
Authentication, Authorization, and Admission Control.
Authentication:
When a client sends an API request to a non-read-only Kubernetes port, Kubernetes verifies the user's identity in one of three ways: certificate authentication, token authentication, or basic authentication.

① Certificate authentication
Set the apiserver flag --client-ca-file=SOMEFILE. The referenced file contains the CA used to verify client certificates; when verification succeeds, the subject of the client certificate becomes the username of the request.

② Token authentication (the method used in this guide)

Set the apiserver flag --token-auth-file=SOMEFILE. The token file has three columns: token, username, userid. With token authentication, each http request to the apiserver carries an extra header, Authorization, whose value is set to: Bearer SOMETOKEN.

③ Basic authentication
Set the apiserver flag --basic-auth-file=SOMEFILE. If a password in the file is changed, the apiserver must be restarted for it to take effect. The file has three columns: password, username, userid. With basic authentication, each http request to the apiserver carries an Authorization header whose value is: Basic BASE64ENCODEDUSER:PASSWORD.
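
For illustration, both header shapes can be reproduced in the shell. The token value is the example from token.csv above; the user/password pair is made up:

```shell
# Token auth header: "Authorization: Bearer <token>"
TOKEN=0a7a3cf71f596a5a83c9740d87024d3f
echo "Authorization: Bearer ${TOKEN}"

# Basic auth header: "Authorization: Basic base64(user:password)"
CRED=$(printf 'admin:password' | base64)
echo "Authorization: Basic ${CRED}"

# e.g.: curl -k -H "Authorization: Bearer ${TOKEN}" https://192.168.1.101:6443/version
```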

Here we use only the token authentication file method (stable across versions).
The token can be any string containing 128 bits of entropy, produced with a secure random number generator; it was already created on the master, so just copy it over.

BOOTSTRAP_TOKEN is written into both token.csv and the bootstrap.kubeconfig used by the kubelets. If you later rotate BOOTSTRAP_TOKEN, the following must be kept in sync:
- update token.csv and distribute it to all machines (master and nodes; distributing to the nodes is optional);
- regenerate bootstrap.kubeconfig and distribute it to all nodes;
- restart the kube-apiserver and kubelet processes;
- re-approve the kubelets' CSR requests.

--embed-certs=true embeds the certificate-authority certificate into the generated bootstrap.kubeconfig;
with --embed-certs=true set for both the cluster and the client credentials, the contents of the certificate-authority, client-certificate, and client-key files are embedded into the generated kube-proxy.kubeconfig.
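
For reference, the generated bootstrap.kubeconfig has roughly this shape (certificate data abbreviated; the server, user, and token values mirror the script above):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 of ca.pem, embedded because --embed-certs=true>
    server: https://192.168.1.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
users:
- name: kubelet-bootstrap
  user:
    token: 0a7a3cf71f596a5a83c9740d87024d3f
```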

Create the kubelet options file

vim /k8s/kubernetes/cfg/kubelet
 
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.104 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Create the kubelet systemd unit

vim /usr/lib/systemd/system/kubelet.service 
 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role

#kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
# Note: this connects to localhost:8080 by default, so it can be run on the master

Start the service

systemctl daemon-reload 
systemctl enable kubelet 
systemctl start kubelet

Troubleshooting

Nov  7 09:01:56 k8s-node-1 kubelet: I1107 09:01:56.104545   13181 bootstrap.go:239] Failed to connect to apiserver: Get https://192.168.1.101:6443/healthz?timeout=1s: x509: certificate has expired or is not yet valid
This log line claims the certificate has expired. I first suspected a bad token and regenerated it, to no avail; the real cause was that the clocks on the machines were out of sync.
Lesson learned: keep node clocks synchronized.

The master accepts the kubelet CSR requests. CSRs can be approved manually or automatically; the automatic approach is recommended because, starting with v1.8, the certificates generated after approving a CSR can be rotated automatically. Below is the manual approval procedure. View the CSR list:

# kubectl get csr 
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-sYarOqixDJY1rFGXHAIUi81-dw-y7leyCaf2wV64rrU   113s   kubelet-bootstrap   Pending

Pending: waiting to be accepted by the master

Approve the node

# kubectl certificate approve node-csr-sYarOqixDJY1rFGXHAIUi81-dw-y7leyCaf2wV64rrU
certificatesigningrequest.certificates.k8s.io/node-csr-sYarOqixDJY1rFGXHAIUi81-dw-y7leyCaf2wV64rrU approved

Check the CSRs again

# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-sYarOqixDJY1rFGXHAIUi81-dw-y7leyCaf2wV64rrU   14m   kubelet-bootstrap   Approved,Issued

Deploying kube-proxy
kube-proxy runs on all nodes; it watches the apiserver for changes to Services and Endpoints and creates routing rules that load-balance traffic to Services. Create the kube-proxy configuration file:

vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.104 \
--cluster-cidr=172.10.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

Create the kube-proxy systemd unit

vim /usr/lib/systemd/system/kube-proxy.service 
 
[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload 
systemctl enable kube-proxy 
systemctl start kube-proxy

Check the cluster status

kubectl get nodes

Repeat the same steps on the other two nodes and approve their CSRs; once approved, the kubelet-client certificates are generated.
Note: if kubelet or kube-proxy is misconfigured during this process (for example a wrong listen IP or hostname causing "node not found"), delete the kubelet-client certificate, restart the kubelet service, and re-approve the CSR.

Harbor (private registry) deployment

Disable the firewall and SELinux

systemctl stop firewalld
sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux
setenforce 0   # apply immediately; the sed change only takes effect after a reboot

Installing Docker

Official docs: https://docs.docker.com/v18.09/install/linux/docker-ce/centos/
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y
systemctl start docker
systemctl enable docker

Installing docker-compose (Docker orchestration tool)

yum -y install epel-release
yum install python-pip -y
pip install docker-compose
docker-compose version

Installing Harbor

Official docs: https://github.com/goharbor/harbor
yum install wget -y
wget -P /usr/local/src/ https://github.com/goharbor/harbor/releases/download/v1.9.2/harbor-online-installer-v1.9.2.tgz
cd /usr/local/src/
tar xvf harbor-online-installer-v1.9.2.tgz -C /usr/local/
cd /usr/local/harbor/
cp harbor.yml harbor.yml.bak
./prepare
./install.sh

Harbor defaults to http; for security, switch it to https

# Only the parameters that need changing are listed; this certificate was purchased from Alibaba Cloud, but you can generate your own
hostname: harbor.keji.com
http:
  port: 80

https:
    port: 443
    certificate: /usr/local/harbor/ssl/keji.com.pem
    private_key: /usr/local/harbor/ssl/keji.com.key
posted @ 2019-11-07 17:22 by 于欢水