Installing and Deploying Kubernetes: Episode 1, Part 2

Chapter 6: Deploying the Worker Node Services

1. Deploying kubelet

1. Cluster plan

Hostname             Role      IP
shkf6-243.host.com   kubelet   192.168.6.243
shkf6-244.host.com   kubelet   192.168.6.244

Note: this guide uses shkf6-243.host.com as the example host; installing and deploying on the other worker node is similar.

2. Issue the kubelet certificate

On the ops host shkf6-245.host.com:

1. Create the JSON config file for the certificate signing request (CSR)

[root@shkf6-245 certs]# vi /opt/certs/kubelet-csr.json
[root@shkf6-245 certs]# cat /opt/certs/kubelet-csr.json

{
    "CN": "k8s-kubelet",
    "hosts": [
    "127.0.0.1",
    "192.168.6.66",
    "192.168.6.243",
    "192.168.6.244",
    "192.168.6.245",
    "192.168.6.246",
    "192.168.6.247",
    "192.168.6.248"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

Note: add every host that might ever run kubelet to the hosts list.

2. Generate the kubelet certificate and private key

[root@shkf6-245 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
2019/11/15 09:49:58 [INFO] generate received request
2019/11/15 09:49:58 [INFO] received CSR
2019/11/15 09:49:58 [INFO] generating key: rsa-2048
2019/11/15 09:49:59 [INFO] encoded CSR
2019/11/15 09:49:59 [INFO] signed certificate with serial number 609294877015122932833154151112494803106290808681
2019/11/15 09:49:59 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

3. Check the generated certificate and private key

[root@shkf6-245 certs]# ll kubelet*
-rw-r--r-- 1 root root 1098 Nov 15 09:49 kubelet.csr
-rw-r--r-- 1 root root  445 Nov 15 09:47 kubelet-csr.json
-rw------- 1 root root 1675 Nov 15 09:49 kubelet-key.pem
-rw-r--r-- 1 root root 1452 Nov 15 09:49 kubelet.pem
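
To confirm that every host from the hosts list made it into the issued certificate, the SANs can be inspected with openssl (a quick sanity check, not part of the signing flow):

[root@shkf6-245 certs]# openssl x509 -in kubelet.pem -noout -text | grep -A1 "Subject Alternative Name"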

3. Copy the certificates to each worker node and create the configuration

On shkf6-243:

1. Copy the certificate and private key; note the private key file mode must be 600

[root@shkf6-243 bin]#  scp -P52113 shkf6-245:/opt/certs/kubelet-key.pem /opt/kubernetes/server/bin/cert/
[root@shkf6-243 bin]#  scp -P52113 shkf6-245:/opt/certs/kubelet.pem /opt/kubernetes/server/bin/cert/

[root@shkf6-243 bin]# ll cert/
total 32
-rw------- 1 root root 1679 Nov 14 14:18 apiserver-key.pem
-rw-r--r-- 1 root root 1598 Nov 14 14:18 apiserver.pem
-rw------- 1 root root 1679 Nov 14 14:18 ca-key.pem
-rw-r--r-- 1 root root 1346 Nov 14 14:19 ca.pem
-rw------- 1 root root 1679 Nov 14 14:19 client-key.pem
-rw-r--r-- 1 root root 1363 Nov 14 14:19 client.pem
-rw------- 1 root root 1675 Nov 15 10:01 kubelet-key.pem
-rw-r--r-- 1 root root 1452 Nov 15 10:02 kubelet.pem
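
If scp did not preserve the expected 600 mode on the private keys, it can be fixed and re-checked in place:

[root@shkf6-243 bin]# chmod 600 /opt/kubernetes/server/bin/cert/*-key.pem
[root@shkf6-243 bin]# ls -l /opt/kubernetes/server/bin/cert/ | grep key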

2. Create the kubeconfig

1.set-cluster

Note: run this in the conf directory

[root@shkf6-243 conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://192.168.6.66:7443 \
  --kubeconfig=kubelet.kubeconfig

Cluster "myk8s" set.
2.set-credentials

Note: run this in the conf directory

[root@shkf6-243 conf]# kubectl config set-credentials k8s-node \
  --client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
  --embed-certs=true \
  --kubeconfig=kubelet.kubeconfig 

User "k8s-node" set.
3.set-context

Note: run this in the conf directory

[root@shkf6-243 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=kubelet.kubeconfig

Context "myk8s-context" created.
4.use-context

Note: run this in the conf directory

[root@shkf6-243 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig

Switched to context "myk8s-context".
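
As a sanity check, the assembled kubeconfig can be reviewed before handing it to kubelet (kubectl redacts the embedded certificate data):

[root@shkf6-243 conf]# kubectl config view --kubeconfig=kubelet.kubeconfig
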
5.k8s-node.yaml
  • Create the resource manifest
    [root@shkf6-243 conf]# vim /opt/kubernetes/server/bin/conf/k8s-node.yaml
    [root@shkf6-243 conf]# cat /opt/kubernetes/server/bin/conf/k8s-node.yaml 
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: k8s-node
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:node
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: k8s-node

  • Apply the ClusterRoleBinding so the cluster-role user takes effect
[root@shkf6-243 conf]# kubectl create -f k8s-node.yaml
  • View the cluster role binding
[root@shkf6-243 conf]# kubectl get clusterrolebinding k8s-node
NAME       AGE
k8s-node   22m
[root@shkf6-243 conf]# kubectl get clusterrolebinding k8s-node -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2019-11-15T02:14:34Z"
  name: k8s-node
  resourceVersion: "17884"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/k8s-node
  uid: e09ed617-936f-4936-8adc-d7cc9b3bce63
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
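
The binding above grants the k8s-node user the built-in system:node ClusterRole; to see exactly which permissions that role carries:

[root@shkf6-243 conf]# kubectl describe clusterrole system:node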

On shkf6-244:

[root@shkf6-244 bin]# scp -P52113 shkf6-243:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig /opt/kubernetes/server/bin/conf/

Optional (the ClusterRoleBinding created above is cluster-scoped, so the manifest does not need to be re-applied on shkf6-244):

[root@shkf6-244 bin]# scp -P52113 shkf6-243:/opt/kubernetes/server/bin/conf/k8s-node.yaml /opt/kubernetes/server/bin/conf/

4. Prepare the pause base image

On the ops host shkf6-245.host.com:

1. Pull the image

[root@shkf6-245 certs]# docker pull kubernetes/pause
Using default tag: latest
latest: Pulling from kubernetes/pause
4f4fb700ef54: Pull complete 
b9c8ec465f6b: Pull complete 
Digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105
Status: Downloaded newer image for kubernetes/pause:latest
docker.io/kubernetes/pause:latest

2. Tag the image

[root@shkf6-245 certs]# docker images|grep pause
kubernetes/pause                latest                     f9d5de079539        5 years ago         240kB
[root@shkf6-245 certs]# docker tag f9d5de079539 harbor.od.com/public/pause:latest

3. Push it to the private registry (Harbor)

[root@shkf6-245 certs]# docker push harbor.od.com/public/pause:latest
The push refers to repository [harbor.od.com/public/pause]
5f70bf18a086: Mounted from public/nginx 
e16a89738269: Pushed 
latest: digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 size: 938
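
To verify the push, a worker node should be able to pull the image back from Harbor (assuming the public project permits pulls from logged-in or anonymous clients):

[root@shkf6-243 ~]# docker pull harbor.od.com/public/pause:latest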

5. Create the kubelet startup script

On shkf6-243:

[root@shkf6-243 conf]# vi /opt/kubernetes/server/bin/kubelet.sh
[root@shkf6-243 conf]# cat /opt/kubernetes/server/bin/kubelet.sh
#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 10.96.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override shkf6-243.host.com \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.od.com/public/pause:latest \
  --root-dir /data/kubelet

Note: the kubelet startup script differs slightly on each host in the cluster; when deploying the other node, remember to change --hostname-override (one way to do this is sketched below).
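
For shkf6-244, a convenience sketch is to copy the script over and patch the hostname with sed (editing it by hand works just as well):

[root@shkf6-244 bin]# scp -P52113 shkf6-243:/opt/kubernetes/server/bin/kubelet.sh /opt/kubernetes/server/bin/
[root@shkf6-244 bin]# sed -i 's/shkf6-243.host.com/shkf6-244.host.com/' /opt/kubernetes/server/bin/kubelet.sh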

6. Check the config and permissions, and create the log and data directories

On shkf6-243:

[root@shkf6-243 conf]# ls -l|grep kubelet.kubeconfig 
-rw------- 1 root root 6202 Nov 15 10:11 kubelet.kubeconfig

[root@shkf6-243 conf]# chmod +x /opt/kubernetes/server/bin/kubelet.sh
[root@shkf6-243 conf]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet

7. Create the supervisor configuration

On shkf6-243:

[root@shkf6-243 conf]# vi /etc/supervisord.d/kube-kubelet.ini
[root@shkf6-243 conf]# cat /etc/supervisord.d/kube-kubelet.ini

[program:kube-kubelet-6-243]
command=/opt/kubernetes/server/bin/kubelet.sh     ; the program (relative uses PATH, can take args)
numprocs=1                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin              ; directory to cwd to before exec (def no cwd)
autostart=true                                    ; start at supervisord start (default: true)
autorestart=true                                  ; restart at unexpected quit (default: true)
startsecs=30                                      ; number of secs prog must stay running (def. 1)
startretries=3                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                              ; redirect proc stderr to stdout (default false)
killasgroup=true                                  ; kill all process in a group
stopasgroup=true                                  ; stop all process in a group
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log   ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                       ; emit events on stdout writes (default false)

8. Start the service and check it

    [root@shkf6-243 conf]# supervisorctl update
    kube-kubelet-6-243: added process group
    [root@shkf6-243 conf]# supervisorctl status
    etcd-server-6-243                RUNNING   pid 12112, uptime 1 day, 1:43:37
    kube-apiserver-6-243             RUNNING   pid 12824, uptime 20:38:00
    kube-controller-manager-6.243    RUNNING   pid 14952, uptime 2:08:14
    kube-kubelet-6-243               RUNNING   pid 15398, uptime 0:01:25
    kube-scheduler-6-243             RUNNING   pid 15001, uptime 1:53:15
    
    [root@shkf6-243 conf]# tail -fn 200 /data/logs/kubernetes/kube-kubelet/kubelet.stdout.log

9. Check the worker nodes

[root@shkf6-243 conf]# kubectl get nodes
NAME                 STATUS   ROLES    AGE     VERSION
shkf6-243.host.com   Ready    <none>   16m     v1.15.2
shkf6-244.host.com   Ready    <none>   2m12s   v1.15.2
[root@shkf6-243 conf]# kubectl get nodes -o wide
NAME                 STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
shkf6-243.host.com   Ready    <none>   17m     v1.15.2   192.168.6.243   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://19.3.4
shkf6-244.host.com   Ready    <none>   2m45s   v1.15.2   192.168.6.244   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://19.3.4


Label the nodes:

[root@shkf6-243 conf]# kubectl label node shkf6-243.host.com node-role.kubernetes.io/node=
node/shkf6-243.host.com labeled
[root@shkf6-243 conf]# kubectl label node shkf6-243.host.com node-role.kubernetes.io/master=
node/shkf6-243.host.com labeled
[root@shkf6-243 conf]# kubectl label node shkf6-244.host.com node-role.kubernetes.io/node=
node/shkf6-244.host.com labeled
[root@shkf6-243 conf]# kubectl label node shkf6-244.host.com node-role.kubernetes.io/master=
node/shkf6-244.host.com labeled
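
If a role label is applied to the wrong node, it can be removed with kubectl's trailing-dash syntax, e.g.:

[root@shkf6-243 conf]# kubectl label node shkf6-243.host.com node-role.kubernetes.io/master-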

10. Install, deploy, start, and check the kube-kubelet service on all hosts in the cluster plan

11. Check all worker nodes

[root@shkf6-243 conf]# kubectl get nodes -o wide
NAME                 STATUS   ROLES         AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
shkf6-243.host.com   Ready    master,node   20m     v1.15.2   192.168.6.243   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://19.3.4
shkf6-244.host.com   Ready    master,node   6m34s   v1.15.2   192.168.6.244   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://19.3.4
[root@shkf6-243 conf]# kubectl get nodes
NAME                 STATUS   ROLES         AGE     VERSION
shkf6-243.host.com   Ready    master,node   21m     v1.15.2
shkf6-244.host.com   Ready    master,node   6m42s   v1.15.2

2. Deploying kube-proxy

1. Cluster plan

Hostname             Role         IP
shkf6-243.host.com   kube-proxy   192.168.6.243
shkf6-244.host.com   kube-proxy   192.168.6.244

Note: this guide uses shkf6-243.host.com as the example host; installing and deploying on the other worker node is similar.

2. Issue the kube-proxy certificate

On the ops host shkf6-245.host.com:

1. Create the JSON file for the certificate signing request (CSR)

[root@shkf6-245 certs]# vi /opt/certs/kube-proxy-csr.json
[root@shkf6-245 certs]# cat kube-proxy-csr.json 
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

2. Generate the kube-proxy certificate and private key

[root@shkf6-245 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
2019/11/15 12:28:23 [INFO] generate received request
2019/11/15 12:28:23 [INFO] received CSR
2019/11/15 12:28:23 [INFO] generating key: rsa-2048
2019/11/15 12:28:24 [INFO] encoded CSR
2019/11/15 12:28:24 [INFO] signed certificate with serial number 499210659443234759487015805632579178164834077987
2019/11/15 12:28:24 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Note: this client certificate cannot be shared with the generic client cert used elsewhere, because the CN has changed: "CN": "system:kube-proxy".

3. Check the generated certificate and private key

[root@shkf6-245 certs]# ll kube-proxy*
-rw-r--r-- 1 root root 1005 Nov 15 12:28 kube-proxy-client.csr
-rw------- 1 root root 1675 Nov 15 12:28 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Nov 15 12:28 kube-proxy-client.pem
-rw-r--r-- 1 root root  267 Nov 15 12:28 kube-proxy-csr.json
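
The CN of this certificate is what the apiserver sees as the requesting user, which is why it must not be shared; it can be confirmed with openssl:

[root@shkf6-245 certs]# openssl x509 -in kube-proxy-client.pem -noout -subject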

3. Copy the certificates to each worker node and create the configuration

1. Copy kube-proxy-client-key.pem and kube-proxy-client.pem to the worker nodes

[root@shkf6-243 conf]# scp -P52113 shkf6-245:/opt/certs/kube-proxy-client-key.pem /opt/kubernetes/server/bin/cert/
[root@shkf6-243 conf]# scp -P52113 shkf6-245:/opt/certs/kube-proxy-client.pem /opt/kubernetes/server/bin/cert/

2. Create the kubeconfig

1.set-cluster

Note: run this in the conf directory

[root@shkf6-243 conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://192.168.6.66:7443 \
  --kubeconfig=kube-proxy.kubeconfig
2.set-credentials

Note: run this in the conf directory

[root@shkf6-243 conf]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
3.set-context

Note: run this in the conf directory

[root@shkf6-243 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
4.use-context

Note: run this in the conf directory

[root@shkf6-243 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
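
A quick check that the new context exists and is the active one:

[root@shkf6-243 conf]# kubectl config get-contexts --kubeconfig=kube-proxy.kubeconfig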

4. Create the kube-proxy startup script

On shkf6-243:

  • Load the ipvs kernel modules
    [root@shkf6-243 conf]# vi /root/ipvs.sh
    [root@shkf6-243 conf]# cat /root/ipvs.sh
    #!/bin/bash
    ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
    for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
    do
      /sbin/modinfo -F filename $i &>/dev/null
      if [ $? -eq 0 ];then
        /sbin/modprobe $i
      fi
    done
    
    
    [root@shkf6-243 bin]# sh /root/ipvs.sh
    [root@shkf6-243 bin]# lsmod |grep ip_vs
    ip_vs_wlc              12519  0 
    ip_vs_sed              12519  0 
    ip_vs_pe_sip           12697  0 
    nf_conntrack_sip       33860  1 ip_vs_pe_sip
    ip_vs_nq               12516  0 
    ip_vs_lc               12516  0 
    ip_vs_lblcr            12922  0 
    ip_vs_lblc             12819  0 
    ip_vs_ftp              13079  0 
    ip_vs_dh               12688  0 
    ip_vs_sh               12688  0 
    ip_vs_wrr              12697  0 
    ip_vs_rr               12600  0 
    ip_vs                 141092  24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
    nf_nat                 26787  3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
    nf_conntrack          133387  8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
    libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
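
    The modules loaded by ipvs.sh do not persist across a reboot. One hedged way to make them persist on CentOS 7 (assuming rc.local is used on these hosts) is:

    [root@shkf6-243 bin]# chmod +x /etc/rc.d/rc.local
    [root@shkf6-243 bin]# echo 'sh /root/ipvs.sh' >> /etc/rc.d/rc.local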

  • Create the startup script

    [root@shkf6-243 conf]# vi /opt/kubernetes/server/bin/kube-proxy.sh
    [root@shkf6-243 conf]# cat /opt/kubernetes/server/bin/kube-proxy.sh
    #!/bin/sh
    ./kube-proxy \
      --cluster-cidr 172.6.0.0/16 \
      --hostname-override shkf6-243.host.com \
      --proxy-mode=ipvs \
      --ipvs-scheduler=nq \
      --kubeconfig ./conf/kube-proxy.kubeconfig

Note: the kube-proxy startup script differs slightly on each host in the cluster; when deploying the other nodes, remember to change --hostname-override.

5. Check the config and permissions, and create the log directory

On shkf6-243:

[root@shkf6-243 conf]# ls -l|grep kube-proxy.kubeconfig 
-rw------- 1 root root 6207 Nov 15 12:32 kube-proxy.kubeconfig

[root@shkf6-243 conf]# chmod +x /opt/kubernetes/server/bin/kube-proxy.sh
[root@shkf6-243 conf]# mkdir -p /data/logs/kubernetes/kube-proxy

6. Create the supervisor configuration

[root@shkf6-243 conf]# vi /etc/supervisord.d/kube-proxy.ini
[root@shkf6-243 conf]# cat /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-6-243]
command=/opt/kubernetes/server/bin/kube-proxy.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                           ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                 ; directory to cwd to before exec (def no cwd)
autostart=true                                                       ; start at supervisord start (default: true)
autorestart=true                                                     ; restart at unexpected quit (default: true)
startsecs=30                                                         ; number of secs prog must stay running (def. 1)
startretries=3                                                       ; max # of serial start failures (default 3)
exitcodes=0,2                                                        ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                      ; signal used to kill process (default TERM)
stopwaitsecs=10                                                      ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                            ; setuid to this UNIX account to run the program
redirect_stderr=true                                                 ; redirect proc stderr to stdout (default false)
killasgroup=true                                                     ; kill all process in a group
stopasgroup=true                                                     ; stop all process in a group
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log     ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                         ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                             ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                          ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                          ; emit events on stdout writes (default false)

7. Start the service and check it

[root@shkf6-243 conf]# supervisorctl update
kube-proxy-6-243: added process group

[root@shkf6-243 conf]# supervisorctl status
etcd-server-6-243                RUNNING   pid 12112, uptime 1 day, 8:13:06
kube-apiserver-6-243             RUNNING   pid 12824, uptime 1 day, 3:07:29
kube-controller-manager-6.243    RUNNING   pid 14952, uptime 8:37:43
kube-kubelet-6-243               RUNNING   pid 15398, uptime 6:30:54
kube-proxy-6-243                 RUNNING   pid 8055, uptime 0:01:19
kube-scheduler-6-243             RUNNING   pid 15001, uptime 8:22:44
[root@shkf6-243 conf]# yum install ipvsadm -y
[root@shkf6-243 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 nq
  -> 192.168.6.243:6443           Masq    1      0          0         
  -> 192.168.6.244:6443           Masq    1      0          0 

[root@shkf6-243 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   24h
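
With ipvs in place, traffic to the kubernetes service VIP should reach a live apiserver. A quick connectivity sketch from the node (depending on the apiserver's anonymous-auth setting this returns either version info or an Unauthorized error; the point is that the VIP answers rather than times out):

[root@shkf6-243 ~]# curl -k https://10.96.0.1:443/version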

Chapter 7: Finish the Deployment and Verify the Cluster

  • Create the manifest
    [root@shkf6-243 conf]# cat /root/nginx-ds.yaml 
    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: nginx-ds
    spec:
      template:
        metadata:
          labels:
            app: nginx-ds
        spec:
          containers:
          - name: my-nginx
            image: harbor.od.com/public/nginx:v1.7.9
            ports:
            - containerPort: 80

  • Log in to Harbor from the cluster worker nodes
[root@shkf6-243 conf]# docker login harbor.od.com
Username: admin  
Password: 

[root@shkf6-244 conf]# docker login harbor.od.com
Username: admin
Password:
  • Create the pods
[root@shkf6-243 conf]# kubectl create -f nginx-ds.yaml
  • Check the pods
[root@shkf6-243 conf]# kubectl get pods 
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-ftxpz   1/1     Running   0          2m50s
nginx-ds-wb6wt   1/1     Running   0          2m51s

[root@shkf6-243 conf]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}
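
To confirm the pods actually serve traffic, curl one from the node it runs on (if no overlay network is configured yet, cross-node pod IPs will not be reachable). The pod IP placeholder below is illustrative; use whatever kubectl reports:

[root@shkf6-243 conf]# kubectl get pods -o wide
[root@shkf6-243 conf]# curl -I <pod-ip>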

 
