Kubernetes Cluster Initialization (Part 2)

Reference: https://github.com/unixhot/salt-kubernetes

1. System Initialization

1.1 Install Docker

Install Docker on all nodes, using the Aliyun (China mirror) yum repository for docker-ce:

[root@linux-node1 ~]# cd /etc/yum.repos.d/
[root@linux-node1 yum.repos.d]# wget \
 https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1.2 Install Docker

yum install -y docker-ce

1.3 Start Docker

systemctl start docker
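
If you also want Docker to start automatically on boot and confirm the installed version, these optional commands can be run on each node:
systemctl enable docker
docker version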

1.4 Prepare the deployment directories (create on all nodes)

mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}

1.5 Download the packages

Baidu Netdisk download link: https://pan.baidu.com/s/1zs8sCouDeCQJ9lghH1BPiw

Official download link: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md

1.6 Extract the packages

 # tar zxf kubernetes.tar.gz 
 # tar zxf kubernetes-server-linux-amd64.tar.gz 
 # tar zxf kubernetes-client-linux-amd64.tar.gz
 # tar zxf kubernetes-node-linux-amd64.tar.gz
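
All four archives extract into a single kubernetes/ directory. A quick way to confirm the binaries are in place (assuming the default layout of the release tarballs) is:
 # ls kubernetes/server/bin/ kubernetes/client/bin/ kubernetes/node/bin/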

1.7 Set environment variables (all nodes)

vim ~/.bash_profile
PATH=$PATH:$HOME/bin:/opt/kubernetes/bin

source ~/.bash_profile
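
A quick check that /opt/kubernetes/bin is now on the PATH:
echo $PATH | tr ':' '\n' | grep kubernetes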

 

2. Manually Create the CA Certificate

2.1 Install cfssl

[root@linux-node1 ~]# cd /usr/local/src
[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@linux-node1 src]# chmod +x cfssl*
[root@linux-node1 src]# mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo
[root@linux-node1 src]# mv cfssljson_linux-amd64  /opt/kubernetes/bin/cfssljson
[root@linux-node1 src]# mv cfssl_linux-amd64  /opt/kubernetes/bin/cfssl
Copy the cfssl binaries to the k8s-node1 and k8s-node2 nodes. If you have more nodes in practice, copy them to every node in the same way.
[root@linux-node1 ~]# scp /opt/kubernetes/bin/cfssl* 192.168.56.12:/opt/kubernetes/bin/
[root@linux-node1 ~]# scp /opt/kubernetes/bin/cfssl* 192.168.56.13:/opt/kubernetes/bin/

 

2.2 Initialize cfssl

mkdir  -p /usr/local/src/ssl && cd /usr/local/src/ssl
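
Optionally, cfssl can print default configuration and CSR skeletons to use as templates; the files actually used here are written by hand in 2.3 and 2.4 below:
[root@linux-node1 ssl]# cfssl print-defaults config > config.json.example
[root@linux-node1 ssl]# cfssl print-defaults csr > csr.json.example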

 

2.3 Create the JSON config used to generate the CA certificate

[root@linux-node1 ssl]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}

 

2.4 Create the JSON file for the CA certificate signing request

[root@linux-node1 ssl]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

 

2.5 Generate the CA certificate and key

[root@linux-node1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@linux-node1 ssl]# ls -l ca*
-rw-r--r-- 1 root root  290 Mar  4 13:45 ca-config.json
-rw-r--r-- 1 root root 1001 Mar  4 14:09 ca.csr
-rw-r--r-- 1 root root  208 Mar  4 13:51 ca-csr.json
-rw------- 1 root root 1679 Mar  4 14:09 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar  4 14:09 ca.pem
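
The generated CA certificate can be inspected with the cfssl-certinfo tool installed in 2.1 (optional):
[root@linux-node1 ssl]# cfssl-certinfo -cert ca.pem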

 

2.6 Distribute the certificates

# cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl
Copy the certificates to the k8s-node1 and k8s-node2 nodes with scp:
# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.56.12:/opt/kubernetes/ssl 
# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.56.13:/opt/kubernetes/ssl
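A quick sanity check on each node that the CA files arrived:
# ls -l /opt/kubernetes/ssl/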

3. etcd Cluster Deployment

3.1 Prepare the etcd package

wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
[root@linux-node1 src]# tar zxf etcd-v3.2.18-linux-amd64.tar.gz
[root@linux-node1 src]# cd etcd-v3.2.18-linux-amd64
[root@linux-node1 etcd-v3.2.18-linux-amd64]# cp etcd etcdctl /opt/kubernetes/bin/ 
[root@linux-node1 etcd-v3.2.18-linux-amd64]# scp etcd etcdctl 192.168.56.12:/opt/kubernetes/bin/
[root@linux-node1 etcd-v3.2.18-linux-amd64]# scp etcd etcdctl 192.168.56.13:/opt/kubernetes/bin/

 

3.2 Create the etcd certificate signing request

[root@linux-node1 ~]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
"192.168.56.11",
"192.168.56.12",
"192.168.56.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

 

Note: the hosts field above lists the IP address of every etcd node, so a single certificate can be used on all nodes and they can authenticate one another.

3.3 Generate the etcd certificate and private key

[root@linux-node1 ~]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
The following certificate files are generated:
[root@linux-node1 ~]# ls -l etcd*
-rw-r--r-- 1 root root 1045 Mar  5 11:27 etcd.csr
-rw-r--r-- 1 root root  257 Mar  5 11:25 etcd-csr.json
-rw------- 1 root root 1679 Mar  5 11:27 etcd-key.pem
-rw-r--r-- 1 root root 1419 Mar  5 11:27 etcd.pem
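
Optionally, verify that all three node IPs are present in the certificate's Subject Alternative Names (this uses openssl, which is normally available on CentOS, and is not required):
[root@linux-node1 ~]# openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"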

 

3.4 Move the certificates to the target directory

[root@linux-node1 ~]# cp etcd*.pem /opt/kubernetes/ssl
[root@linux-node1 ~]# scp etcd*.pem 192.168.56.12:/opt/kubernetes/ssl
[root@linux-node1 ~]# scp etcd*.pem 192.168.56.13:/opt/kubernetes/ssl
[root@linux-node1 ~]# rm -f etcd.csr etcd-csr.json

 

3.5 Create the etcd configuration file

[root@linux-node1 ~]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.56.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.56.11:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.56.11:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node1=https://192.168.56.11:2380,etcd-node2=https://192.168.56.12:2380,etcd-node3=https://192.168.56.13:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.56.11:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

Note: on each node, change ETCD_NAME and the IP addresses in the file above to that node's own values.
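
For example, on 192.168.56.12 (etcd-node2) the node-specific lines would become:
ETCD_NAME="etcd-node2"
ETCD_LISTEN_PEER_URLS="https://192.168.56.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.56.12:2379,https://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.56.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.56.12:2379"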

3.6 Create the etcd systemd service

[root@linux-node1 ~]# vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"

[Install]
WantedBy=multi-user.target

 

3.7 Reload systemd and copy the files to the other nodes

[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable etcd


# scp /opt/kubernetes/cfg/etcd.conf 192.168.56.12:/opt/kubernetes/cfg/
# scp /etc/systemd/system/etcd.service 192.168.56.12:/etc/systemd/system/
# scp /opt/kubernetes/cfg/etcd.conf 192.168.56.13:/opt/kubernetes/cfg/
# scp /etc/systemd/system/etcd.service 192.168.56.13:/etc/systemd/system/
Create the etcd data directory and start etcd on all nodes:
[root@linux-node1 ~]# mkdir /var/lib/etcd
[root@linux-node1 ~]# systemctl start etcd
[root@linux-node1 ~]# systemctl status etcd
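
Note that the first etcd member waits for its peers before the cluster becomes available, so start etcd on all three nodes before troubleshooting; if a node still fails to start, its logs can be inspected with:
# journalctl -u etcd -f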

 

3.8 Verify the cluster

[root@linux-node1 ssl]# etcdctl --endpoints=https://192.168.56.11:2379 \
>   --ca-file=/opt/kubernetes/ssl/ca.pem \
>   --cert-file=/opt/kubernetes/ssl/etcd.pem \
>   --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
member 435fb0a8da627a4c is healthy: got healthy result from https://192.168.56.12:2379
member 6566e06d7343e1bb is healthy: got healthy result from https://192.168.56.11:2379
member ce7b884e428b6c8c is healthy: got healthy result from https://192.168.56.13:2379
cluster is healthy 
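
Beyond the health check, the members that joined the cluster can also be listed with the same TLS options (optional):
[root@linux-node1 ssl]# etcdctl --endpoints=https://192.168.56.11:2379 \
  --ca-file=/opt/kubernetes/ssl/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd-key.pem member list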