k8s Architecture

1. The k8s architecture

Architecture covered in this course

2. Installing k8s

Environment preparation: set the IP address, hostname, and /etc/hosts entries on every machine (a minimal sketch follows the table below).

Host        IP         Memory  Components
k8s-master  10.0.0.11  1G      etcd, api-server, controller-manager, scheduler
k8s-node1   10.0.0.12  2G      etcd, kubelet, kube-proxy, docker, flannel
k8s-node2   10.0.0.13  2G      etcd, kubelet, kube-proxy, docker, flannel
k8s-node3   10.0.0.14  1G      kubelet, kube-proxy, docker, flannel
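A minimal sketch of that preparation, assuming the addressing in the table above; run it on every machine with that machine's own hostname:

hostnamectl set-hostname k8s-master    # use the matching name on each node
cat >> /etc/hosts <<EOF
10.0.0.11 k8s-master
10.0.0.12 k8s-node1
10.0.0.13 k8s-node2
10.0.0.14 k8s-node3
EOF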

2.1 Issue certificates

Prepare the certificate issuing tools

On node3:

[root@k8s-node3 ~]# mkdir /opt/softs
[root@k8s-node3 ~]# cd /opt/softs
[root@k8s-node3 softs]# rz -E
rz waiting to receive.
[root@k8s-node3 softs]# ls
cfssl  cfssl-certinfo  cfssl-json
[root@k8s-node3 softs]# chmod +x /opt/softs/*
[root@k8s-node3 softs]# ln -s /opt/softs/* /usr/bin/
[root@k8s-node3 softs]# mkdir /opt/certs
[root@k8s-node3 softs]# cd /opt/certs
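An optional sanity check that the cfssl tools are executable and on PATH before issuing anything:

[root@k8s-node3 certs]# cfssl version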

 

Edit the CA certificate config file


vi /opt/certs/ca-config.json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
Edit the CA certificate signing request (CSR) config file


vi /opt/certs/ca-csr.json
{
   "CN": "kubernetes-ca",
   "hosts": [
  ],
   "key": {
       "algo": "rsa",
       "size": 2048
  },
   "names": [
      {
           "C": "CN",
           "ST": "beijing",
           "L": "beijing",
           "O": "od",
           "OU": "ops"
      }
  ],
   "ca": {
       "expiry": "175200h"
  }
}

Generate the CA certificate and private key


[root@k8s-node3 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca -
2020/09/27 17:20:56 [INFO] generating a new CA key and certificate from CSR
2020/09/27 17:20:56 [INFO] generate received request
2020/09/27 17:20:56 [INFO] received CSR
2020/09/27 17:20:56 [INFO] generating key: rsa-2048
2020/09/27 17:20:56 [INFO] encoded CSR
2020/09/27 17:20:56 [INFO] signed certificate with serial number 409112456326145160001566370622647869686523100724
[root@k8s-node3 certs]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
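Optionally inspect the new CA certificate (subject, expiry) with the bundled cfssl-certinfo tool; this is a check, not part of the original steps:

[root@k8s-node3 certs]# cfssl-certinfo -cert ca.pem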

2.2 Deploy the etcd cluster

Hostname    IP         Role
k8s-master  10.0.0.11  etcd leader
k8s-node1   10.0.0.12  etcd follower
k8s-node2   10.0.0.13  etcd follower

Issue the certificate for peer communication between the etcd nodes


vi /opt/certs/etcd-peer-csr.json
{
   "CN": "etcd-peer",
   "hosts": [
       "10.0.0.11",
       "10.0.0.12",
       "10.0.0.13"
  ],
   "key": {
       "algo": "rsa",
       "size": 2048
  },
   "names": [
      {
           "C": "CN",
           "ST": "beijing",
           "L": "beijing",
           "O": "od",
           "OU": "ops"
      }
  ]
}

[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
2020/09/27 17:29:49 [INFO] generate received request
2020/09/27 17:29:49 [INFO] received CSR
2020/09/27 17:29:49 [INFO] generating key: rsa-2048
2020/09/27 17:29:49 [INFO] encoded CSR
2020/09/27 17:29:49 [INFO] signed certificate with serial number 15140302313813859454537131325115129339480067698
2020/09/27 17:29:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls etcd-peer*
etcd-peer.csr etcd-peer-csr.json etcd-peer-key.pem etcd-peer.pem
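A quick way to confirm the three etcd IPs actually made it into the certificate's SAN list (an extra check, not in the original notes):

[root@k8s-node3 certs]# openssl x509 -in etcd-peer.pem -noout -text | grep -A1 'Subject Alternative Name'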

Install the etcd service

On k8s-master, k8s-node1, and k8s-node2:


yum install etcd  -y

#On node3, copy the certificates to the /etc/etcd directory of each etcd node
[root@k8s-node3 certs]# scp -rp *.pem root@10.0.0.11:/etc/etcd/
root@10.0.0.11's password:
ca-key.pem                                                                                    100% 1675     1.1MB/s   00:00    
ca.pem                                                                                        100% 1354     1.0MB/s   00:00    
etcd-peer-key.pem                                                                             100% 1679   961.2KB/s   00:00    
etcd-peer.pem                                                                                 100% 1428   203.5KB/s   00:00    
[root@k8s-node3 certs]#
[root@k8s-node3 certs]# scp -rp *.pem root@10.0.0.12:/etc/etcd/
The authenticity of host '10.0.0.12 (10.0.0.12)' can't be established.
ECDSA key fingerprint is SHA256:cHKT5G6hYgv1k1zTfc36tZrLNQqJhc1JeBTeke545Fk.
ECDSA key fingerprint is MD5:24:4e:94:6d:46:82:0a:61:3a:1e:83:3f:75:82:e1:aa.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.12' (ECDSA) to the list of known hosts.
root@10.0.0.12's password:
ca-key.pem                                                                                    100% 1675     1.4MB/s   00:00    
ca.pem                                                                                        100% 1354     1.1MB/s   00:00    
etcd-peer-key.pem                                                                             100% 1679   833.9KB/s   00:00    
etcd-peer.pem                                                                                 100% 1428   708.5KB/s   00:00    
[root@k8s-node3 certs]# scp -rp *.pem root@10.0.0.13:/etc/etcd/
The authenticity of host '10.0.0.13 (10.0.0.13)' can't be established.
ECDSA key fingerprint is SHA256:cHKT5G6hYgv1k1zTfc36tZrLNQqJhc1JeBTeke545Fk.
ECDSA key fingerprint is MD5:24:4e:94:6d:46:82:0a:61:3a:1e:83:3f:75:82:e1:aa.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.13' (ECDSA) to the list of known hosts.
root@10.0.0.13's password:
ca-key.pem                                                                                    100% 1675     1.4MB/s   00:00    
ca.pem                                                                                        100% 1354     1.1MB/s   00:00    
etcd-peer-key.pem                                                                             100% 1679     1.1MB/s   00:00    
etcd-peer.pem                                                                                 100% 1428   471.5KB/s   00:00    
[root@k8s-node3 certs]#

#On the master node
[root@k8s-master ~]# chown -R etcd:etcd /etc/etcd/*.pem
[root@k8s-master ~]# grep -Ev '^$|#' /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_NAME="node1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"

#On node1 and node2, change the following lines to the local node's IP and etcd name
#(k8s-node1: 10.0.0.12 / ETCD_NAME="node2", k8s-node2: 10.0.0.13 / ETCD_NAME="node3")
ETCD_LISTEN_PEER_URLS="https://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_NAME="node1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"

#Start etcd on all three nodes at the same time
systemctl start etcd
systemctl enable etcd

#Verify
[root@k8s-master ~]# etcdctl member list
55fcbe0adaa45350: name=node3 peerURLs=https://10.0.0.13:2380 clientURLs=http://127.0.0.1:2379,https://10.0.0.13:2379 isLeader=false
cebdf10928a06f3c: name=node1 peerURLs=https://10.0.0.11:2380 clientURLs=http://127.0.0.1:2379,https://10.0.0.11:2379 isLeader=false
f7a9c20602b8532e: name=node2 peerURLs=https://10.0.0.12:2380 clientURLs=http://127.0.0.1:2379,https://10.0.0.12:2379 isLeader=true
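etcdctl can also report overall cluster health, which is a quicker pass/fail check than reading the member list:

[root@k8s-master ~]# etcdctl cluster-health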

2.3 Installing the master node

Install the api-server service

Upload kubernetes-server-linux-amd64-v1.15.4.tar.gz to node3, then extract it


[root@k8s-node3 softs]# ls
cfssl cfssl-certinfo cfssl-json kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# tar xf kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# ls
cfssl cfssl-certinfo cfssl-json kubernetes kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# cd /opt/softs/kubernetes/server/bin/
[root@k8s-node3 bin]# scp -rp kube-apiserver kube-controller-manager kube-scheduler kubectl root@10.0.0.11:/usr/sbin/
root@10.0.0.11's password:
kube-apiserver                                                                                100% 157MB  45.8MB/s   00:03    
kube-controller-manager                                                                       100% 111MB  51.9MB/s   00:02    
kube-scheduler                                                                                100%   37MB  49.3MB/s   00:00
kubectl                                                                                       100%   41MB  42.4MB/s   00:00

Issue the client certificate


[root@k8s-node3 bin]# cd /opt/certs/
[root@k8s-node3 certs]# vi /opt/certs/client-csr.json
{
   "CN": "k8s-node",
   "hosts": [
  ],
   "key": {
       "algo": "rsa",
       "size": 2048
  },
   "names": [
      {
           "C": "CN",
           "ST": "beijing",
           "L": "beijing",
           "O": "od",
           "OU": "ops"
      }
  ]
}

[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client
2020/09/28 09:44:24 [INFO] generate received request
2020/09/28 09:44:24 [INFO] received CSR
2020/09/28 09:44:24 [INFO] generating key: rsa-2048
2020/09/28 09:44:24 [INFO] encoded CSR
2020/09/28 09:44:24 [INFO] signed certificate with serial number 461653240803905097242551686652473623216215818799
2020/09/28 09:44:24 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls client*
client.csr client-csr.json client-key.pem client.pem

Issue the kube-apiserver certificate


[root@k8s-node3 certs]# vi /opt/certs/apiserver-csr.json
{
   "CN": "apiserver",
   "hosts": [
       "127.0.0.1",
       "10.254.0.1",
       "kubernetes.default",
       "kubernetes.default.svc",
       "kubernetes.default.svc.cluster",
       "kubernetes.default.svc.cluster.local",
       "10.0.0.11",
       "10.0.0.12",
       "10.0.0.13"
  ],
   "key": {
       "algo": "rsa",
       "size": 2048
  },
   "names": [
      {
           "C": "CN",
           "ST": "beijing",
           "L": "beijing",
           "O": "od",
           "OU": "ops"
      }
  ]
}

#Note: 10.254.0.1 is the first IP of the clusterIP range and is the internal address pods use to reach the api-server; leaving it out of the cert's hosts list cost oldqiang a lot of debugging time

[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver
2020/09/28 09:53:20 [INFO] generate received request
2020/09/28 09:53:20 [INFO] received CSR
2020/09/28 09:53:20 [INFO] generating key: rsa-2048
2020/09/28 09:53:21 [INFO] encoded CSR
2020/09/28 09:53:21 [INFO] signed certificate with serial number 433119416708899695145430798275351070833359756410
2020/09/28 09:53:21 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls apiserver*
apiserver.csr apiserver-csr.json apiserver-key.pem apiserver.pem

Configure the api-server service

On the master node:


#Copy the certificates
[root@k8s-master ~]# mkdir /etc/kubernetes
[root@k8s-master ~]# cd /etc/kubernetes
[root@k8s-master kubernetes]# scp -rp root@10.0.0.14:/opt/certs/ca*pem .
root@10.0.0.14's password:
ca-key.pem                                                                                    100% 1675   233.6KB/s   00:00    
ca.pem                                                                                        100% 1354   809.1KB/s   00:00    
[root@k8s-master kubernetes]# scp -rp root@10.0.0.14:/opt/certs/apiserver*pem .
root@10.0.0.14's password:
apiserver-key.pem                                                                             100% 1675     1.1MB/s   00:00    
apiserver.pem                                                                                 100% 1590   725.8KB/s   00:00    
[root@k8s-master kubernetes]# scp -rp root@10.0.0.14:/opt/certs/client*pem .
root@10.0.0.14's password:
client-key.pem                                                                                100% 1679     1.3MB/s   00:00    
client.pem                                                                                    100% 1371   807.1KB/s   00:00    


[root@k8s-master kubernetes]# ls
apiserver-key.pem apiserver.pem ca-key.pem ca.pem client-key.pem client.pem

#api-server audit log policy
[root@k8s-master kubernetes]# vi audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
[Service]
ExecStart=/usr/sbin/kube-apiserver \
 --audit-log-path /var/log/kubernetes/audit-log \
 --audit-policy-file /etc/kubernetes/audit.yaml \
 --authorization-mode RBAC \
 --client-ca-file /etc/kubernetes/ca.pem \
 --requestheader-client-ca-file /etc/kubernetes/ca.pem \
 --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
 --etcd-cafile /etc/kubernetes/ca.pem \
 --etcd-certfile /etc/kubernetes/client.pem \
 --etcd-keyfile /etc/kubernetes/client-key.pem \
 --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \
 --service-account-key-file /etc/kubernetes/ca-key.pem \
 --service-cluster-ip-range 10.254.0.0/16 \
 --service-node-port-range 30000-59999 \
 --kubelet-client-certificate /etc/kubernetes/client.pem \
 --kubelet-client-key /etc/kubernetes/client-key.pem \
 --log-dir /var/log/kubernetes/ \
 --logtostderr=false \
 --tls-cert-file /etc/kubernetes/apiserver.pem \
 --tls-private-key-file /etc/kubernetes/apiserver-key.pem \
 --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target

[root@k8s-master kubernetes]# mkdir /var/log/kubernetes
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-apiserver.service
[root@k8s-master kubernetes]# systemctl enable kube-apiserver.service

#Check
[root@k8s-master kubernetes]# kubectl get cs
NAME                 STATUS     MESSAGE                                                                                     ERROR
scheduler           Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused  
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused  
etcd-2               Healthy     {"health":"true"}                                                                          
etcd-0               Healthy     {"health":"true"}                                                                          
etcd-1               Healthy     {"health":"true"}  
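scheduler and controller-manager show Unhealthy here simply because they are not installed yet; the next two steps set them up. To confirm the api-server itself is serving, its local insecure port (still enabled by default in v1.15 and relied on by the controller-manager config below) can be probed:

[root@k8s-master kubernetes]# curl http://127.0.0.1:8080/healthz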
Install the controller-manager service

[root@k8s-master kubernetes]# vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-controller-manager \
 --cluster-cidr 172.18.0.0/16 \
 --log-dir /var/log/kubernetes/ \
 --master http://127.0.0.1:8080 \
 --service-account-private-key-file /etc/kubernetes/ca-key.pem \
 --service-cluster-ip-range 10.254.0.0/16 \
 --root-ca-file /etc/kubernetes/ca.pem \
 --logtostderr=false \
 --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target

[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-controller-manager.service
[root@k8s-master kubernetes]# systemctl enable kube-controller-manager.service
Install the scheduler service

[root@k8s-master kubernetes]# vi   /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-scheduler \
 --log-dir /var/log/kubernetes/ \
 --master http://127.0.0.1:8080 \
 --logtostderr=false \
 --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target

[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-scheduler.service
[root@k8s-master kubernetes]# systemctl enable kube-scheduler.service

Verify the master node


[root@k8s-master kubernetes]# kubectl get cs
NAME                 STATUS   MESSAGE             ERROR
scheduler           Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}  
etcd-0               Healthy   {"health":"true"}  
etcd-2               Healthy   {"health":"true"}

2.4 Installing the nodes

Install the kubelet service

Sign the certificate on node3


[root@k8s-node3 bin]# cd /opt/certs/
[root@k8s-node3 certs]# vi kubelet-csr.json
{
   "CN": "kubelet-node",
   "hosts": [
   "127.0.0.1",
   "10.0.0.11",
   "10.0.0.12",
   "10.0.0.13",
   "10.0.0.14",
   "10.0.0.15"
  ],
   "key": {
       "algo": "rsa",
       "size": 2048
  },
   "names": [
      {
           "C": "CN",
           "ST": "beijing",
           "L": "beijing",
           "O": "od",
           "OU": "ops"
      }
  ]
}

[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
2020/09/28 10:57:08 [INFO] generate received request
2020/09/28 10:57:08 [INFO] received CSR
2020/09/28 10:57:08 [INFO] generating key: rsa-2048
2020/09/28 10:57:08 [INFO] encoded CSR
2020/09/28 10:57:08 [INFO] signed certificate with serial number 19907662860605591123786296854981068265006681308
2020/09/28 10:57:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls kubelet*
kubelet.csr kubelet-csr.json kubelet-key.pem kubelet.pem

#Generate the kubeconfig file that kubelet needs at startup
[root@k8s-node3 certs]# ln -s /opt/softs/kubernetes/server/bin/kubectl /usr/sbin/
#Set the cluster parameters
[root@k8s-node3 certs]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/certs/ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.11:6443 \
  --kubeconfig=kubelet.kubeconfig
Cluster "myk8s" set.
#Set the client authentication parameters
[root@k8s-node3 certs]# kubectl config set-credentials k8s-node --client-certificate=/opt/certs/client.pem --client-key=/opt/certs/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig
User "k8s-node" set.
#Create the context
[root@k8s-node3 certs]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=kubelet.kubeconfig
Context "myk8s-context" created.
#Switch to the default context
[root@k8s-node3 certs]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s-context".
#Check the generated kubeconfig file
[root@k8s-node3 certs]# ls kubelet.kubeconfig
kubelet.kubeconfig
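To double-check what went into the kubeconfig (cluster endpoint, user, context) before copying it to the nodes, an optional read-back:

[root@k8s-node3 certs]# kubectl config view --kubeconfig=kubelet.kubeconfig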

On the master node


[root@k8s-master ~]# vi k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

[root@k8s-master ~]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
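A quick check that the binding exists and grants the system:node role to the k8s-node user:

[root@k8s-master ~]# kubectl describe clusterrolebinding k8s-node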

On node1


#Install docker-ce
(steps omitted)
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker.service
systemctl enable docker.service
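Since kubelet is started with --cgroup-driver systemd below, it is worth confirming docker picked up the same driver from daemon.json:

[root@k8s-node1 ~]# docker info 2>/dev/null | grep -i 'cgroup driver'
# expected: Cgroup Driver: systemd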

[root@k8s-node1 ~]# mkdir /etc/kubernetes
[root@k8s-node1 ~]# cd /etc/kubernetes
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/certs/kubelet.kubeconfig .
root@10.0.0.14's password:
kubelet.kubeconfig                                                                            100% 6219     3.8MB/s   00:00    
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/certs/ca*pem .
root@10.0.0.14's password:
ca-key.pem                                                                                    100% 1675     1.2MB/s   00:00    
ca.pem                                                                                        100% 1354   946.9KB/s   00:00    
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/certs/kubelet*pem .
root@10.0.0.14's password:
kubelet-key.pem                                                                               100% 1679     1.2MB/s   00:00    
kubelet.pem                                                                                   100% 1464     1.1MB/s   00:00    
[root@k8s-node1 kubernetes]#
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/softs/kubernetes/server/bin/kubelet /usr/bin/
root@10.0.0.14's password:
kubelet                                                                                       100% 114MB  29.6MB/s   00:03

[root@k8s-node1 kubernetes]# mkdir /var/log/kubernetes
[root@k8s-node1 kubernetes]# vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
 --anonymous-auth=false \
 --cgroup-driver systemd \
 --cluster-dns 10.254.230.254 \
 --cluster-domain cluster.local \
 --runtime-cgroups=/systemd/system.slice \
 --kubelet-cgroups=/systemd/system.slice \
 --fail-swap-on=false \
 --client-ca-file /etc/kubernetes/ca.pem \
 --tls-cert-file /etc/kubernetes/kubelet.pem \
 --tls-private-key-file /etc/kubernetes/kubelet-key.pem \
 --hostname-override 10.0.0.12 \
 --minimum-image-ttl-duration 10h \
 --image-gc-low-threshold 80 \
 --kubeconfig /etc/kubernetes/kubelet.kubeconfig \
 --log-dir /var/log/kubernetes/ \
 --pod-infra-container-image t29617342/pause-amd64:3.0 \
 --logtostderr=false \
 --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

[root@k8s-node1 kubernetes]# systemctl daemon-reload
[root@k8s-node1 kubernetes]# systemctl start kubelet.service
[root@k8s-node1 kubernetes]# systemctl enable kubelet.service
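node2 is set up the same way (same certificates, kubeconfig, binary, and unit file); the only line in kubelet.service that differs is the node identity, assuming node2 keeps its own IP:

 --hostname-override 10.0.0.13 \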

Verify from the master node


[root@k8s-master ~]# kubectl get nodes
NAME       STATUS   ROLES   AGE   VERSION
10.0.0.12   Ready   <none>   15m   v1.15.4
10.0.0.13   Ready   <none>   16s   v1.15.4
Install the kube-proxy service

Sign the certificate on node3


[root@k8s-node3 ~]# cd /opt/certs/
[root@k8s-node3 certs]# vi /opt/certs/kube-proxy-csr.json
{
   "CN": "system:kube-proxy",
   "key": {
       "algo": "rsa",
       "size": 2048
  },
   "names": [
      {
           "C": "CN",
           "ST": "beijing",
           "L": "beijing",
           "O": "od",
           "OU": "ops"
      }
  ]
}

[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client
2020/09/28 15:01:44 [INFO] generate received request
2020/09/28 15:01:44 [INFO] received CSR
2020/09/28 15:01:44 [INFO] generating key: rsa-2048
2020/09/28 15:01:45 [INFO] encoded CSR
2020/09/28 15:01:45 [INFO] signed certificate with serial number 450584845939836824125879839467665616307124786948
2020/09/28 15:01:45 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls kube-proxy-c*
kube-proxy-client.csr kube-proxy-client-key.pem kube-proxy-client.pem kube-proxy-csr.json

#Generate the kubeconfig that kube-proxy needs at startup
[root@k8s-node3 certs]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/certs/ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.11:6443 \
  --kubeconfig=kube-proxy.kubeconfig
Cluster "myk8s" set.
[root@k8s-node3 certs]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/certs/kube-proxy-client.pem \
  --client-key=/opt/certs/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-node3 certs]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
Context "myk8s-context" created.
[root@k8s-node3 certs]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
Switched to context "myk8s-context".
[root@k8s-node3 certs]# ls kube-proxy.kubeconfig
kube-proxy.kubeconfig
[root@k8s-node3 certs]# scp -rp kube-proxy.kubeconfig root@10.0.0.12:/etc/kubernetes/
root@10.0.0.12's password:
kube-proxy.kubeconfig                                                                         100% 6239     3.9MB/s   00:00    
[root@k8s-node3 certs]# scp -rp kube-proxy.kubeconfig root@10.0.0.13:/etc/kubernetes/
root@10.0.0.13's password:
kube-proxy.kubeconfig                                                                         100% 6239   883.7KB/s   00:00
[root@k8s-node3 bin]# scp -rp kube-proxy root@10.0.0.12:/usr/bin/
root@10.0.0.12's password:
kube-proxy                                                                                    100%   35MB  42.8MB/s   00:00    
[root@k8s-node3 bin]# scp -rp kube-proxy root@10.0.0.13:/usr/bin/
root@10.0.0.13's password:
kube-proxy                                                                                    100%   35MB  46.4MB/s   00:00  

Configure kube-proxy on node1


[root@k8s-node1 ~]# vi   /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
 --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
 --cluster-cidr 172.18.0.0/16 \
 --hostname-override 10.0.0.12 \
 --logtostderr=false \
 --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl start kube-proxy.service
[root@k8s-node1 ~]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
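node2 gets the same unit file with --hostname-override 10.0.0.13. Once kube-proxy runs in the default iptables mode, its chains should appear in the ruleset; a rough check (a non-zero count is expected):

[root@k8s-node1 ~]# iptables-save | grep -c KUBE-SERVICES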

2.5 Configure the flannel network

Install flannel on all nodes


yum install flannel  -y
mkdir /opt/certs/

Distribute the certificates from node3


[root@k8s-node3 ~]# cd /opt/certs/
[root@k8s-node3 certs]# scp -rp ca.pem client*pem root@10.0.0.11:/opt/certs/
root@10.0.0.11's password:
ca.pem                                                                                        100% 1354     1.1MB/s   00:00    
client-key.pem                                                                                100% 1679   434.6KB/s   00:00    
client.pem                                                                                    100% 1371   771.3KB/s   00:00    
[root@k8s-node3 certs]# scp -rp ca.pem client*pem root@10.0.0.12:/opt/certs/
root@10.0.0.12's password:
ca.pem                                                                                        100% 1354     1.0MB/s   00:00    
client-key.pem                                                                                100% 1679   230.3KB/s   00:00    
client.pem                                                                                    100% 1371   715.0KB/s   00:00    
[root@k8s-node3 certs]#
[root@k8s-node3 certs]# scp -rp ca.pem client*pem root@10.0.0.13:/opt/certs/
root@10.0.0.13's password:
ca.pem                                                                                        100% 1354     1.0MB/s   00:00    
client-key.pem                                                                                100% 1679   577.4KB/s   00:00    
client.pem                                                                                    100% 1371   716.1KB/s   00:00

On the master node

Create the flannel key in etcd


#This key defines the pod IP address range
etcdctl mk /atomic.io/network/config   '{ "Network": "172.18.0.0/16","Backend": {"Type": "vxlan"} }'
#Note: the command may fail with
Error: x509: certificate signed by unknown authority
#Retrying a few times usually succeeds
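To confirm the key was actually written (this read goes through the same client endpoints, so it may need the same retry if it hits the TLS port):

[root@k8s-master ~]# etcdctl get /atomic.io/network/config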

Configure and start flannel


vi /etc/sysconfig/flanneld
Line 4:  FLANNEL_ETCD_ENDPOINTS="https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379"
Line 8 (unchanged): FLANNEL_ETCD_PREFIX="/atomic.io/network"
Line 11: FLANNEL_OPTIONS="-etcd-cafile=/opt/certs/ca.pem -etcd-certfile=/opt/certs/client.pem -etcd-keyfile=/opt/certs/client-key.pem"

systemctl start flanneld.service
systemctl enable flanneld.service

#Verify
[root@k8s-master ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
      inet 172.18.43.0 netmask 255.255.255.255 broadcast 0.0.0.0
      inet6 fe80::30d9:50ff:fe47:599e prefixlen 64 scopeid 0x20<link>
      ether 32:d9:50:47:59:9e txqueuelen 0 (Ethernet)
      RX packets 0 bytes 0 (0.0 B)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 0 bytes 0 (0.0 B)
      TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0

On node1 and node2


[root@k8s-node1 ~]# vim /usr/lib/systemd/system/docker.service
Change:  ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
To:      ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
Add the line: ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker

#Verify: docker0 should now be on a 172.18.x.x subnet
[root@k8s-node1 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
      inet 172.18.41.1 netmask 255.255.255.0 broadcast 172.18.41.255
      ether 02:42:07:3e:8a:09 txqueuelen 0 (Ethernet)
      RX packets 0 bytes 0 (0.0 B)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 0 bytes 0 (0.0 B)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Verify the k8s cluster installation


[root@k8s-master ~]# kubectl run nginx --image=nginx:1.13 --replicas=2
#Wait a while, then check the pod status
[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS   RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
nginx-6459cd46fd-8lln4   1/1     Running   0         3m27s   172.18.41.2   10.0.0.12   <none>           <none>
nginx-6459cd46fd-xxt24   1/1     Running   0         3m27s   172.18.96.2   10.0.0.13   <none>           <none>

[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get svc
NAME         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)       AGE
kubernetes   ClusterIP   10.254.0.1     <none>        443/TCP       6h46m
nginx       NodePort    10.254.160.83   <none>        80:41760/TCP   3s

#Open a browser and visit http://10.0.0.12:41760; if the nginx page loads, the cluster works
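The same check from the command line, without a browser (any node IP works for a NodePort service; the port number will differ per run):

[root@k8s-master ~]# curl -s -o /dev/null -w '%{http_code}\n' http://10.0.0.12:41760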

 

3. Common k8s resources

3.1 The pod resource

A pod consists of at least two containers: the infrastructure (pause) container plus one or more business containers.

Dynamic pod: the pod's YAML is fetched from etcd (via the api-server).

Static pod: the kubelet reads the YAML file from a local directory and starts the pod itself.


mkdir /etc/kubernetes/manifest

vim /usr/lib/systemd/system/kubelet.service
#Add one line to the startup parameters
--pod-manifest-path /etc/kubernetes/manifest \

systemctl daemon-reload
systemctl restart kubelet.service

cd /etc/kubernetes/manifest/

vi k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.13
      ports:
        - containerPort: 80
       

#Verify
[root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS   RESTARTS   AGE
nginx-6459cd46fd-hg2kq    1/1     Running   1         2d16h
nginx-6459cd46fd-ng9v6    1/1     Running   1         2d16h
oldboy-5478b985bc-6f8gz   1/1     Running   1         2d16h
static-pod-10.0.0.12      1/1     Running   0         21s

 

3.2 Taints and tolerations

Taints: applied to nodes


Taint effects:
NoSchedule
PreferNoSchedule
NoExecute

#Add a node label
kubectl label nodes kubernetes-node2 node-role.kubernetes.io/node=
#Remove the node label
kubectl label nodes kubernetes-node2 node-role.kubernetes.io/node-

#Example: add a taint
kubectl taint node 10.0.0.14 node-role.kubernetes.io=master:NoExecute
#Check
[root@k8s-master ~]# kubectl describe nodes 10.0.0.14|grep -i taint
Taints:             node-role.kubernetes.io=master:NoExecute

Tolerations


#Add under the pod's spec
tolerations:
- key: "node-role.kubernetes.io"
  operator: "Equal"   # "Exists" requires an empty value; "Equal" matches the key=value taint above
  value: "master"
  effect: "NoExecute"

3.3 The secrets resource

Method 1:


kubectl create secret docker-registry harbor-secret --namespace=default --docker-username=admin --docker-password=a123456 --docker-server=blog.oldqiang.com

vi k8s_sa_harbor.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: docker-image
  namespace: default
imagePullSecrets:
- name: harbor-secret

vi k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  serviceAccount: docker-image
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80

Method 2:


kubectl create secret docker-registry regcred --docker-server=blog.oldqiang.com --docker-username=admin --docker-password=a123456

#Verify
[root@k8s-master ~]# kubectl get secrets
NAME                       TYPE                                 DATA   AGE
default-token-vgc4l       kubernetes.io/service-account-token   3     2d19h
regcred                   kubernetes.io/dockerconfigjson        1     114s

[root@k8s-master ~]# cat k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  nodeName: 10.0.0.12
  imagePullSecrets:
    - name: regcred
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80

3.4 The configmap resource


vi /opt/81.conf
    server {
        listen       81;
        server_name  localhost;
        root         /html;
        index        index.html index.htm;
        location / {
        }
    }

kubectl create configmap 81.conf --from-file=/opt/81.conf
#Verify
kubectl get cm

vi k8s_deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-config
          configMap:
            name: 81.conf
      containers:
        - name: nginx
          image: 10.0.0.11:5000/nginx:1.13
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d
          ports:
            - containerPort: 80
              name: port1
            - containerPort: 81
              name: port2
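The deployment is not applied in the original notes at this point; something like the following creates it and confirms the configmap lands where nginx expects its vhost configs (the pod name is a placeholder, use one from kubectl get pod):

kubectl create -f k8s_deploy.yaml
kubectl get pod
kubectl exec <nginx-pod-name> -- ls /etc/nginx/conf.d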

4. Common k8s services

4.1 Deploy the DNS service


vi coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        - name: tmp
          mountPath: /tmp
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.230.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
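The manifest still has to be applied before running the dig test below; presumably:

kubectl create -f coredns.yaml
kubectl get pod -n kube-system -o wide
kubectl get svc -n kube-system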

#Test
yum install bind-utils.x86_64 -y
dig @10.254.230.254 kubernetes.default.svc.cluster.local +short

4.2 Deploy the dashboard service


wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

vi kubernetes-dashboard.yaml
#Change the image address
image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
#Change the service type to NodePort
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443


kubectl create -f kubernetes-dashboard.yaml
#Use Firefox to access https://10.0.0.12:30001

vim dashboard_rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
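After creating the binding, the dashboard login token can be pulled from the admin service account's secret; a common pattern, assuming the names above:

kubectl create -f dashboard_rbac.yaml
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^admin-token/{print $1}')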

 

5. Network access in k8s

5.1 Mapping an external service into k8s


#Prepare the database
yum install mariadb-server -y
systemctl start mariadb
mysql_secure_installation
mysql>grant all on *.* to root@'%' identified by '123456';

#Delete the existing mysql rc and svc
kubectl delete rc mysql
kubectl delete svc mysql

#Create the endpoint and svc
[root@k8s-master yingshe]# vi mysql_endpoint.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql
subsets:
- addresses:
  - ip: 10.0.0.12
  ports:
  - name: mysql
    port: 3306
    protocol: TCP

[root@k8s-master yingshe]# vi mysql_svc_v2.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306  
  type: ClusterIP
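The two manifests above still need to be applied; afterwards the service should resolve to the external 10.0.0.12:3306 endpoint (a sketch, assuming the file names above):

[root@k8s-master yingshe]# kubectl create -f mysql_endpoint.yaml -f mysql_svc_v2.yaml
[root@k8s-master yingshe]# kubectl get svc mysql
[root@k8s-master yingshe]# kubectl get endpoints mysql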
#Revisit the tomcat/demo web page
#Verify
[root@k8s-node2 ~]# mysql -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| HPE_APP            |
| mysql              |
| performance_schema |
+--------------------+

 

 

 

5.2 kube-proxy in ipvs mode


yum install conntrack-tools -y
yum install ipvsadm.x86_64 -y

vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
 --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
 --cluster-cidr 172.18.0.0/16 \
 --hostname-override 10.0.0.12 \
 --proxy-mode ipvs \
 --logtostderr=false \
 --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl restart kube-proxy.service
ipvsadm -L -n

#How to make the same change on a kubeadm-based cluster
yum install conntrack-tools -y
yum install ipvsadm.x86_64 -y

#Load the LVS kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

kubectl edit cm -n kube-system kube-proxy
Change   mode: ""
to       mode: "ipvs"
#Delete the old kube-proxy pods (they will be recreated with the new mode)
kubectl get pod -n kube-system |grep proxy
kubectl delete pod -n kube-system kube-proxy-6v4dx
kubectl delete pod -n kube-system kube-proxy-74ccl
 
#Verify
[root@kubernetes-master ~]# kubectl logs -n kube-system kube-proxy-69sr8
I0507 03:30:44.059764       1 server_others.go:170] Using ipvs Proxier.
W0507 03:30:44.060053       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0507 03:30:44.061522       1 server.go:534] Version: v1.15.5

[root@kubernetes-master ~]# ipvsadm -l -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
 -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:30008 rr
 -> 10.244.1.34:8080             Masq    1      0          0        
TCP  10.0.0.11:30000 rr
 -> 10.244.0.3:8443             Masq    1      0          0        
TCP  10.0.0.11:30008 rr
 -> 10.244.1.34:8080             Masq    1      0          0  

5.3 ingress

 

6. k8s autoscaling

Horizontal pod autoscaling (HPA)

Modify kube-controller-manager by adding the startup flag:

--horizontal-pod-autoscaler-use-rest-clients=false
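With that flag set to false the controller falls back to the legacy heapster metrics client, so heapster must be providing metrics. A hedged example against the nginx deployment created earlier:

kubectl autoscale deployment nginx --min=1 --max=4 --cpu-percent=50
kubectl get hpa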

 

7. Dynamic storage

#Prepare the NFS server (10.0.0.5 exporting /data in this example; see the sketch below)
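A minimal sketch of that NFS server, assuming it lives at 10.0.0.5 and exports /data as referenced in the manifests below:

yum install nfs-utils -y
mkdir -p /data
echo '/data 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
systemctl start nfs-server && systemctl enable nfs-server
showmount -e 127.0.0.1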


vi nfs-client.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.5
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.5
            path: /data


vi nfs-client-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

vi nfs-client-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs

Modify the PVC config file (add the storage-class annotation)
metadata:
  namespace: tomcat
  name: pvc-01
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"

Set the default dynamic storage class
kubectl patch storageclass course-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
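To confirm the patch took effect, the class should now be listed with the (default) marker:

kubectl get storageclass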

8. Adding a compute node

Services on a compute node: docker, kubelet, kube-proxy, flannel

9. Helm

Helm is the k8s package manager.

helm 2.0

helm 3.0
