Binary installation of Kubernetes (k8s)

1. Create several virtual machines and install a Linux OS

One or more machines running CentOS 7.x x86_64

Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more

Network connectivity between all machines in the cluster

Outbound internet access (needed to pull images)

Swap disabled

2. Initialize the operating system

3. Self-sign certificates for etcd and the apiserver

cfssl is an open-source certificate management tool driven by JSON configuration files

4. Deploy the etcd cluster

5. Deploy the master components

kube-apiserver, kube-controller-manager, kube-scheduler, etcd

6. Deploy the node components

kubelet, kube-proxy, docker, etcd

7. Deploy the cluster network

Kubernetes architecture


Service Port(s)
etcd 127.0.0.1:2379,2380
kubelet 10250,10255
kube-proxy 10256
kube-apiserver 6443,127.0.0.1:8080
kube-scheduler 10251,10259
kube-controller-manager 10252,10257

Environment preparation

Host IP RAM Software
k8s-master 10.0.0.11 1g etcd,api-server,controller-manager,scheduler
k8s-node1 10.0.0.12 2g etcd,kubelet,kube-proxy,docker,flannel
k8s-node2 10.0.0.13 2g etcd,kubelet,kube-proxy,docker,flannel
k8s-node3 10.0.0.14 1g kubelet,kube-proxy,docker,flannel
Operating system initialization
systemctl stop firewalld
systemctl disable firewalld
setenforce 0  #temporary
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
swapoff -a  #temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  #keep swap from being mounted at boot
hostnamectl set-hostname <hostname>
sed -i 's/200/IP/g' /etc/sysconfig/network-scripts/ifcfg-eth0  #replace the template address with this host's IP
yum -y install ntpdate  #time synchronization
ntpdate time.windows.com



#master node
[12:06:17 root@k8s-master ~]#cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11 k8s-master
10.0.0.12 k8s-node1
10.0.0.13 k8s-node2
10.0.0.14 k8s-node3
EOF
scp -rp /etc/hosts root@10.0.0.12:/etc/hosts
scp -rp /etc/hosts root@10.0.0.13:/etc/hosts
scp -rp /etc/hosts root@10.0.0.14:/etc/hosts

#all nodes
#pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf<<EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
EOF
sysctl --system #apply

#node3 node
ssh-copy-id root@10.0.0.11
ssh-copy-id root@10.0.0.12
ssh-copy-id root@10.0.0.13

Issuing certificates

Certificates fall into three categories according to what they authenticate:

  • Server certificate (server cert): used by the server side; clients use it to verify the server's identity, e.g. the docker daemon, kube-apiserver
  • Client certificate (client cert): used by the server to authenticate clients, e.g. etcdctl, etcd proxy, fleetctl, the docker client
  • Peer certificate (peer cert, i.e. both a server cert and a client cert): a two-way certificate used for communication between etcd cluster members

The certificates a kubernetes cluster needs:

  • etcd nodes need a server cert to identify their own service and a client cert to talk to the other etcd members, so they use a peer cert
  • the master needs a server cert identifying the apiserver service and a client cert to connect to the etcd cluster; these are two separate certificates here.
  • kubectl, calico, and kube-proxy only need a client cert, so the hosts field in their CSR can be empty.
  • kubelet certificates are special: they are not generated by hand. The node requests them from the apiserver via TLS Bootstrap, and the controller-manager on the master signs them automatically; the result contains one client cert and one server cert.

Certificates used in this setup (see the reference docs):

  • One set of peer certs (etcd-peer): etcd<-->etcd<-->etcd
  • Client cert (client): api-server-->etcd and flanneld-->etcd
  • Server cert (apiserver): clients-->api-server
  • Server cert (kubelet): api-server-->kubelet
  • Server cert (kube-proxy-client): api-server-->kube-proxy

No certificates are used for:

  • Every etcd access would otherwise have to pass certificate flags; for convenience, etcd also listens on 127.0.0.1 and local access goes without certificates.

  • api-server-->controller-manager

  • api-server-->scheduler


On the k8s-node3 node, use the CFSSL toolset to create the CA certificate, the server certificates, and the client certificates.

CFSSL is CloudFlare's open-source PKI/TLS toolkit. It includes a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates, and is written in Go.

Github: https://github.com/cloudflare/cfssl
Official site: https://pkg.cfssl.org/
Reference: http://blog.51cto.com/liuzhengwei521/2120535?utm_source=oschina-app

1. Prepare the certificate tooling

#node3 node
mkdir /opt/softs &&  cd /opt/softs
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssl-certinfo_linux-amd64 cfssl-certinfo
mv cfssl_linux-amd64 cfssl
mv cfssljson_linux-amd64 cfssl-json
chmod +x /opt/softs/*
ln -s /opt/softs/* /usr/bin/
mkdir /opt/certs && cd /opt/certs

2. Write the CA configuration files

 tee  /opt/certs/ca-config.json <<-EOF
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
tee /opt/certs/ca-csr.json <<-EOF
{
    "CN": "kubernetes-ca",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}
EOF

3. Generate the CA certificate and private key

[root@k8s-node3 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca - 
[root@k8s-node3 certs]# ls 
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

Deploying the etcd cluster

Hostname IP Role
k8s-master 10.0.0.11 etcd leader
k8s-node1 10.0.0.12 etcd follower
k8s-node2 10.0.0.13 etcd follower

On node3, issue the certificate used for peer communication between etcd members

tee /opt/certs/etcd-peer-csr.json <<-EOF
{
    "CN": "etcd-peer",
    "hosts": [
        "10.0.0.11",
        "10.0.0.12",
        "10.0.0.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF

[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
[root@k8s-node3 certs]# ls etcd-peer*
etcd-peer.csr  etcd-peer-csr.json  etcd-peer-key.pem  etcd-peer.pem

Installing the etcd service

On the etcd cluster nodes (master, node1, node2):

yum install  etcd  -y

On the node3 node, distribute the certificates:

scp -rp *.pem root@10.0.0.11:/etc/etcd/
scp -rp *.pem root@10.0.0.12:/etc/etcd/
scp -rp *.pem root@10.0.0.13:/etc/etcd/
    
master node
[root@k8s-master ~]#ls /etc/etcd
chown -R etcd:etcd /etc/etcd/*.pem
tee  /etc/etcd/etcd.conf <<-EOF
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_NAME="node1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF

node1 node

 chown -R etcd:etcd /etc/etcd/*.pem
tee  /etc/etcd/etcd.conf <<-EOF
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"
ETCD_NAME="node2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF

node2 node

 chown -R etcd:etcd /etc/etcd/*.pem
tee  /etc/etcd/etcd.conf <<-EOF
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"
ETCD_NAME="node3"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF

Start etcd on all three nodes at the same time

systemctl restart etcd
systemctl enable etcd

Verify

etcdctl member list
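
If the member list looks right, a quick health probe can also be run; an optional check, assuming the etcd v2 API client shipped with the CentOS etcd package:

etcdctl cluster-health
etcdctl member list | grep isLeader=true   # shows which member is currently the leader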

Installation on node3

Install the api-server service
rz kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# ls
cfssl  cfssl-certinfo  cfssl-json  kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# tar xf kubernetes-server-linux-amd64-v1.15.4.tar.gz 
[root@k8s-node3 softs]# ls 
cfssl  cfssl-certinfo  cfssl-json  kubernetes  kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# cd /opt/softs/kubernetes/server/bin/
[root@k8s-node3 bin]# rm -rf *.tar *_tag
[root@k8s-node3 bin]# scp -rp kube-apiserver kube-controller-manager kube-scheduler  kubectl root@10.0.0.11:/usr/sbin/

Issue the client certificate

[root@k8s-node3 bin]# cd /opt/certs/
 tee /opt/certs/client-csr.json <<-EOF
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client
 2020/12/17 12:25:03 [INFO] generate received request
2020/12/17 12:25:03 [INFO] received CSR
2020/12/17 12:25:03 [INFO] generating key: rsa-2048

2020/12/17 12:25:04 [INFO] encoded CSR
2020/12/17 12:25:04 [INFO] signed certificate with serial number 533774625324057341405060072478063467467017332427
2020/12/17 12:25:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls client*
client.csr  client-csr.json  client-key.pem  client.pem

Issue the kube-apiserver server certificate

tee /opt/certs/apiserver-csr.json <<-EOF
{
    "CN": "apiserver",
    "hosts": [
        "127.0.0.1",
        "10.254.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.0.0.11",
        "10.0.0.12",
        "10.0.0.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver 
2020/12/17 12:26:58 [INFO] generate received request
2020/12/17 12:26:58 [INFO] received CSR
2020/12/17 12:26:58 [INFO] generating key: rsa-2048
2020/12/17 12:26:58 [INFO] encoded CSR
2020/12/17 12:26:58 [INFO] signed certificate with serial number 315139331004456663749895745137037080303885454504
2020/12/17 12:26:58 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").    
[root@k8s-node3 certs]# ls apiserver*
apiserver.csr  apiserver-csr.json  apiserver-key.pem  apiserver.pem

Note: 10.254.0.1 is the first IP of the clusterIP range and serves as the internal address pods use to reach the api-server.

Configuring the api-server service

master node

1. Copy the certificates

 mkdir /etc/kubernetes -p && cd /etc/kubernetes
[root@k8s-node3 certs]#scp -rp ca*.pem apiserver*.pem client*.pem root@10.0.0.11:/etc/kubernetes
[root@k8s-master kubernetes]# ls
apiserver-key.pem  apiserver.pem  ca-key.pem  ca.pem  client-key.pem  client.pem

2. api-server audit logging policy

[root@k8s-master kubernetes]# tee /etc/kubernetes/audit.yaml <<-EOF
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
EOF



tee  /usr/lib/systemd/system/kube-apiserver.service <<-EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
[Service]
ExecStart=/usr/sbin/kube-apiserver \\
  --audit-log-path /var/log/kubernetes/audit-log \\
  --audit-policy-file /etc/kubernetes/audit.yaml \\
  --authorization-mode RBAC \\
  --client-ca-file /etc/kubernetes/ca.pem \\
  --requestheader-client-ca-file /etc/kubernetes/ca.pem \\
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \\
  --etcd-cafile /etc/kubernetes/ca.pem \\
  --etcd-certfile /etc/kubernetes/client.pem \\
  --etcd-keyfile /etc/kubernetes/client-key.pem \\
  --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \\
  --service-account-key-file /etc/kubernetes/ca-key.pem \\
  --service-cluster-ip-range 10.254.0.0/16 \\
  --service-node-port-range 30000-59999 \\
  --kubelet-client-certificate /etc/kubernetes/client.pem \\
  --kubelet-client-key /etc/kubernetes/client-key.pem \\
  --log-dir  /var/log/kubernetes/ \\
  --logtostderr=false \\
  --tls-cert-file /etc/kubernetes/apiserver.pem \\
  --tls-private-key-file /etc/kubernetes/apiserver-key.pem \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
    
    
    
[root@k8s-master kubernetes]# mkdir /var/log/kubernetes
[root@k8s-master kubernetes]# systemctl daemon-reload 
[root@k8s-master kubernetes]# systemctl start kube-apiserver.service 
[root@k8s-master kubernetes]# systemctl enable kube-apiserver.service

3. Verify

[root@k8s-master kubernetes]# kubectl get cs  
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-2               Healthy     {"health":"true"}                                                                           
etcd-1               Healthy     {"health":"true"}                                                                           
etcd-0               Healthy     {"health":"true"}   
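The apiserver also creates the built-in kubernetes Service on its own; as an optional sanity check of the --service-cluster-ip-range above, its CLUSTER-IP should be 10.254.0.1:

kubectl get svc kubernetes
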
4. Install the controller-manager service
tee /usr/lib/systemd/system/kube-controller-manager.service <<-EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-controller-manager \\
  --cluster-cidr 172.18.0.0/16 \\
  --log-dir /var/log/kubernetes/ \\
  --master http://127.0.0.1:8080 \\
  --service-account-private-key-file /etc/kubernetes/ca-key.pem \\
  --service-cluster-ip-range 10.254.0.0/16 \\
  --root-ca-file /etc/kubernetes/ca.pem \\
  --logtostderr=false \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

Note: in the unit above, --cluster-cidr is the pod network CIDR, --service-cluster-ip-range is the service (VIP) CIDR, and --logtostderr=false writes logs to files under --log-dir instead of stderr.

To keep things simple, the apiserver reuses a single client cert both for its connections to etcd and for its connections to the kubelets.
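
The shared certificate can be decoded before wiring it in; an optional check on node3 with cfssl-certinfo (the tool was linked into /usr/bin earlier):

cd /opt/certs
cfssl-certinfo -cert client.pem      # subject, issuer and validity of the shared client cert
cfssl-certinfo -cert apiserver.pem   # the SAN list should include 10.254.0.1 and the master IP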

--audit-log-path /var/log/kubernetes/audit-log \ # audit log path
--audit-policy-file /etc/kubernetes/audit.yaml \ # audit policy file
--authorization-mode RBAC \                      # authorization mode: RBAC
--client-ca-file /etc/kubernetes/ca.pem \        # client CA certificate
--requestheader-client-ca-file /etc/kubernetes/ca.pem \ # request-header CA certificate
--enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \ # enabled admission plugins
--etcd-cafile /etc/kubernetes/ca.pem \          # CA certificate for talking to etcd
--etcd-certfile /etc/kubernetes/client.pem \    # client certificate for talking to etcd
--etcd-keyfile /etc/kubernetes/client-key.pem \ # client private key for talking to etcd
--etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \
--service-account-key-file /etc/kubernetes/ca-key.pem \ # CA private key
--service-cluster-ip-range 10.254.0.0/16 \              # service (VIP) range
--service-node-port-range 30000-59999 \          # NodePort range
--kubelet-client-certificate /etc/kubernetes/client.pem \ # client certificate for talking to kubelet
--kubelet-client-key /etc/kubernetes/client-key.pem \ # client private key for talking to kubelet
--log-dir  /var/log/kubernetes/ \  # log directory
--logtostderr=false \ # write logs to files instead of stderr
--tls-cert-file /etc/kubernetes/apiserver.pem \            # apiserver serving certificate
--tls-private-key-file /etc/kubernetes/apiserver-key.pem \ # apiserver serving private key
--v 2  # log verbosity level 2
Restart=on-failure
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: secure port
--allow-privileged: allow privileged containers
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization mode; enable RBAC and node self-management
--token-auth-file: bootstrap token file

5. Start the service

systemctl daemon-reload 
systemctl restart kube-controller-manager.service 
systemctl enable kube-controller-manager.service

6. Install the scheduler service

 tee  /usr/lib/systemd/system/kube-scheduler.service <<-EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-scheduler \\
  --log-dir /var/log/kubernetes/ \\
  --master http://127.0.0.1:8080 \\
  --logtostderr=false \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

7. Restart the services and verify

systemctl daemon-reload 
systemctl start kube-scheduler.service 
systemctl enable kube-scheduler.service
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}

Installing the node components

Install the kubelet service

Issue the kubelet certificate on node3

[root@k8s-node3 bin]# cd /opt/certs/
 tee kubelet-csr.json <<-EOF
{
    "CN": "kubelet-node",
    "hosts": [
    "127.0.0.1",
    "10.0.0.11",
    "10.0.0.12",
    "10.0.0.13",
    "10.0.0.14",
    "10.0.0.15"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF


cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
[root@k8s-node3 certs]#ls kubelet*
kubelet.csr  kubelet-csr.json  kubelet-key.pem  kubelet.pem

Generate the kubeconfig file that kubelet needs to start

[root@k8s-node3 certs]# ln -s /opt/softs/kubernetes/server/bin/kubectl /usr/sbin/

Set the cluster parameters

[root@k8s-node3 certs]# kubectl config set-cluster myk8s \
   --certificate-authority=/opt/certs/ca.pem \
   --embed-certs=true \
   --server=https://10.0.0.11:6443 \
   --kubeconfig=kubelet.kubeconfig

Set the client credentials

[root@k8s-node3 certs]# kubectl config set-credentials k8s-node --client-certificate=/opt/certs/client.pem --client-key=/opt/certs/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig

Create the context

[root@k8s-node3 certs]# kubectl config set-context myk8s-context \
   --cluster=myk8s \
   --user=k8s-node \
   --kubeconfig=kubelet.kubeconfig

Switch the default context

[root@k8s-node3 certs]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig

Check the generated kubeconfig file

[root@k8s-node3 certs]# ls kubelet.kubeconfig 
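
Before copying it to the nodes, the kubeconfig can be inspected to confirm the cluster, user, and context are wired together; an optional check:

kubectl config view --kubeconfig=kubelet.kubeconfig   # current-context should be myk8s-context, server https://10.0.0.11:6443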

master node

[root@k8s-master ~]# tee  k8s-node.yaml <<-EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
EOF
[root@k8s-master ~]# kubectl create -f k8s-node.yaml

node1 node

Install docker-ce

rz docker1903_rpm.tar.gz
tar xf docker1903_rpm.tar.gz
cd docker1903_rpm/
yum localinstall *.rpm -y
systemctl start docker

tee /etc/docker/daemon.json <<-EOF
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker.service
systemctl enable docker.service
[root@k8s-node1 ~]# mkdir /etc/kubernetes -p && cd /etc/kubernetes

    
[root@k8s-node3 certs]# scp -rp kubelet.kubeconfig root@10.0.0.12:/etc/kubernetes
scp -rp kubelet*.pem ca*.pem root@10.0.0.12:/etc/kubernetes
scp -rp /opt/softs/kubernetes/server/bin/kubelet root@10.0.0.12:/usr/bin/

        
[root@k8s-node1 kubernetes]# mkdir  /var/log/kubernetes
tee /usr/lib/systemd/system/kubelet.service <<-EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 10.254.230.254 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on=false \
  --client-ca-file /etc/kubernetes/ca.pem \
  --tls-cert-file /etc/kubernetes/kubelet.pem \
  --tls-private-key-file /etc/kubernetes/kubelet-key.pem \
  --hostname-override 10.0.0.12 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig \
  --log-dir /var/log/kubernetes/ \
  --pod-infra-container-image t29617342/pause-amd64:3.0 \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF  

Restart the service

systemctl daemon-reload 
systemctl start kubelet.service 
systemctl enable kubelet.service
netstat -tnulp 

node2 node

rz docker1903_rpm.tar.gz
tar xf docker1903_rpm.tar.gz
cd docker1903_rpm/
yum localinstall *.rpm -y
systemctl start docker
[root@k8s-node3 certs]#scp -rp /opt/softs/kubernetes/server/bin/kubelet root@10.0.0.13:/usr/bin/
[15:55:01 root@k8s-node2 ~/docker1903_rpm]#
scp -rp root@10.0.0.12:/etc/docker/daemon.json /etc/docker
systemctl restart docker
docker info|grep -i cgroup
mkdir /etc/kubernetes 
scp -rp root@10.0.0.12:/etc/kubernetes/* /etc/kubernetes/
mkdir  /var/log/kubernetes
tee /usr/lib/systemd/system/kubelet.service <<-EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 10.254.230.254 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on=false \
  --client-ca-file /etc/kubernetes/ca.pem \
  --tls-cert-file /etc/kubernetes/kubelet.pem \
  --tls-private-key-file /etc/kubernetes/kubelet-key.pem \
  --hostname-override 10.0.0.13 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig \
  --log-dir /var/log/kubernetes/ \
  --pod-infra-container-image t29617342/pause-amd64:3.0 \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
    
systemctl daemon-reload 
systemctl start kubelet.service 
systemctl enable kubelet.service
netstat -tnulp    
Requires=docker.service # required dependency
[Service]
ExecStart=/usr/bin/kubelet \
--anonymous-auth=false \         # disable anonymous authentication
--cgroup-driver systemd \        # use the systemd cgroup driver
--cluster-dns 10.254.230.254 \   # cluster DNS address
--cluster-domain cluster.local \ # cluster DNS domain, must match the DNS service configuration
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on=false \           # do not refuse to start when swap is enabled
--client-ca-file /etc/kubernetes/ca.pem \                # CA certificate
--tls-cert-file /etc/kubernetes/kubelet.pem \            # kubelet certificate
--tls-private-key-file /etc/kubernetes/kubelet-key.pem \ # kubelet private key
--hostname-override 10.0.0.13 \  # kubelet node name; different on every node
--image-gc-high-threshold 20 \   # always run image garbage collection above 20% disk usage
--image-gc-low-threshold 10 \    # never run image garbage collection below 10% disk usage
--kubeconfig /etc/kubernetes/kubelet.kubeconfig \ # client credentials (kubeconfig)
--pod-infra-container-image t29617342/pause-amd64:3.0 \ # pod infra (pause) container image

Note: the pod infra container image used here is a public image from the t29617342 user on the official Docker Hub registry.

Verify on the master node

[root@k8s-master ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
10.0.0.12   Ready    <none>   15m   v1.15.4
10.0.0.13   Ready    <none>   16s   v1.15.4
Install the kube-proxy service

Issue the certificate on node3

[root@k8s-node3 ~]# cd /opt/certs/
tee /opt/certs/kube-proxy-csr.json <<-EOF
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF

[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client
[root@k8s-node3 certs]# ls kube-proxy-c*
kube-proxy-client.csr  kube-proxy-client-key.pem  kube-proxy-client.pem  kube-proxy-csr.json

Generate the kubeconfig that kube-proxy needs to start
[root@k8s-node3 certs]# kubectl config set-cluster myk8s \
   --certificate-authority=/opt/certs/ca.pem \
   --embed-certs=true \
   --server=https://10.0.0.11:6443 \
   --kubeconfig=kube-proxy.kubeconfig




 kubectl config set-credentials kube-proxy \
   --client-certificate=/opt/certs/kube-proxy-client.pem \
   --client-key=/opt/certs/kube-proxy-client-key.pem \
   --embed-certs=true \
   --kubeconfig=kube-proxy.kubeconfig




 kubectl config set-context myk8s-context \
   --cluster=myk8s \
   --user=kube-proxy \
   --kubeconfig=kube-proxy.kubeconfig




 kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig




[root@k8s-node3 certs]# ls kube-proxy.kubeconfig 
kube-proxy.kubeconfig



scp -rp kube-proxy.kubeconfig  root@10.0.0.12:/etc/kubernetes/  
scp -rp kube-proxy.kubeconfig  root@10.0.0.13:/etc/kubernetes/
scp -rp  /opt/softs/kubernetes/server/bin/kube-proxy root@10.0.0.12:/usr/bin/
scp -rp /opt/softs/kubernetes/server/bin/kube-proxy root@10.0.0.13:/usr/bin/

Configure kube-proxy on node1

[root@k8s-node1 ~]#tee /usr/lib/systemd/system/kube-proxy.service <<-EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
  --cluster-cidr 172.18.0.0/16 \
  --hostname-override 10.0.0.12 \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
    
    
    
systemctl daemon-reload 
systemctl start kube-proxy.service 
systemctl enable kube-proxy.service
netstat -tnulp    

Configure kube-proxy on node2

tee /usr/lib/systemd/system/kube-proxy.service <<-EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
  --cluster-cidr 172.18.0.0/16 \
  --hostname-override 10.0.0.13 \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
    
    
    
systemctl daemon-reload 
systemctl start kube-proxy.service 
systemctl enable kube-proxy.service
netstat -tnulp

Configuring the flannel network

Install flannel on all nodes

yum install flannel  -y
mkdir  /opt/certs/

Distribute the certificates from node3

cd /opt/certs/
 scp -rp ca.pem client*pem root@10.0.0.11:/opt/certs/
 scp -rp ca.pem client*pem root@10.0.0.12:/opt/certs/
 scp -rp ca.pem client*pem root@10.0.0.13:/opt/certs/

master node

Create the flannel key in etcd

#this key defines the pod IP address range
etcdctl mk /atomic.io/network/config   '{ "Network": "172.18.0.0/16","Backend": {"Type": "vxlan"} }'

Note that the command may fail with the error below

      Error:  x509: certificate signed by unknown authority
      #retry a few times and it succeeds
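
Once written, the key can be read back to confirm flannel will pick up the expected pod network; an optional check:

etcdctl get /atomic.io/network/config   # should print { "Network": "172.18.0.0/16","Backend": {"Type": "vxlan"} }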

Configure and start flannel

tee  /etc/sysconfig/flanneld <<-EOF
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/opt/certs/ca.pem -etcd-certfile=/opt/certs/client.pem -etcd-keyfile=/opt/certs/client-key.pem"
EOF
systemctl restart flanneld.service 
systemctl enable flanneld.service

scp -rp /etc/sysconfig/flanneld root@10.0.0.12:/etc/sysconfig/flanneld    
scp -rp /etc/sysconfig/flanneld root@10.0.0.13:/etc/sysconfig/flanneld

systemctl restart flanneld.service 
systemctl enable flanneld.service

#verify
[root@k8s-master ~]# ifconfig flannel.1

node1 and node2 nodes

sed -i 's@ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock@ExecStart=/usr/bin/dockerd  $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock@g'  /usr/lib/systemd/system/docker.service
sed  -i "/ExecStart=/a ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT"  /usr/lib/systemd/system/docker.service
# or: sed  -i "/ExecStart=/i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT"  /usr/lib/systemd/system/docker.service
 systemctl daemon-reload 
 systemctl restart docker
iptables -nL
#verify: docker0 should now have a 172.18.x.x address
ifconfig docker0

Verifying the k8s cluster installation

node1 and node2 nodes
rz docker_nginx1.13.tar.gz
docker load -i docker_nginx1.13.tar.gz
[root@k8s-master ~]# kubectl run nginx  --image=nginx:1.13 --replicas=2
kubectl create deployment test --image=nginx:1.13
kubectl get pod    
kubectl expose deploy nginx --type=NodePort --port=80 --target-port=80
kubectl get svc
curl -I http://10.0.0.12:35822
curl -I http://10.0.0.13:35822    

kubectl run (for workloads) is deprecated and will be removed; use this instead:

kubectl create deployment test --image=nginx:1.13

Newer k8s releases support the -A flag:

-A, --all-namespaces # if present, list the requested objects across all namespaces

Common k8s resources

The pod resource

A pod consists of at least two containers: the infra (pause) container plus the business container.

Dynamic pod: the pod's YAML is fetched from etcd (via the apiserver).

Static pod: the kubelet reads a YAML file from a local directory and starts the pod itself.

node1

mkdir /etc/kubernetes/manifest
tee  /usr/lib/systemd/system/kubelet.service <<-EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 10.254.230.254 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on=false \
  --client-ca-file /etc/kubernetes/ca.pem \
  --tls-cert-file /etc/kubernetes/kubelet.pem \
  --tls-private-key-file /etc/kubernetes/kubelet-key.pem \
  --hostname-override 10.0.0.12 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig \
  --log-dir /var/log/kubernetes/ \
  --pod-infra-container-image t29617342/pause-amd64:3.0 \
  --pod-manifest-path /etc/kubernetes/manifest \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Restart the service

systemctl daemon-reload 
systemctl restart kubelet.service

Add a static pod

tee  /etc/kubernetes/manifest/k8s_pod.yaml<<-EOF
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.13
      ports:
        - containerPort: 80
EOF

Verify

[18:57:17 root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS        RESTARTS   AGE
nginx-6459cd46fd-4tszs   1/1     Running       0          37m
nginx-6459cd46fd-74npw   1/1     Running       0          6m31s
nginx-6459cd46fd-qbwg6   1/1     Running       0          37m
static-pod-10.0.0.13     1/1     Running       0          6m28s

Taints and tolerations

Node affinity attracts pods to a set of nodes (by topology domain), either as a preference or as a hard requirement.

Taints are the opposite: applied to a node, they let the node repel a set of pods.

A taint is a key=value:effect attribute defined on a node; it makes the node refuse to run pods unless a pod declares a toleration that accepts the taint.

Tolerations are applied to pods and allow (but do not force) a pod to be scheduled onto nodes with matching taints.

A toleration is a key/value attribute on the Pod object describing which node taints it can tolerate; the scheduler only places a Pod onto a node whose taints the Pod tolerates.


Taints and tolerations work together to keep pods off unsuitable nodes. One or more taints are applied to a node, marking that the node should not accept any pod that does not tolerate them.

Note: this is why, in everyday use, pods are not scheduled onto the k8s master: the master carries a taint.

Evaluating multiple taints and tolerations:

A node may carry several taints, and a pod may carry several tolerations.

Kubernetes processes them like a filter: start from all of the node's taints, ignore those matched by one of the pod's tolerations, and the remaining taints act on the pod according to their effect:


Taints

Taints are an attribute of a node, applied much like labels.

Taint effects:

  • NoSchedule: do not schedule new pods onto this node; existing pods are unaffected.
  • PreferNoSchedule: soft version; prefer other nodes when possible.
  • NoExecute: clear the node. New pods are refused and existing pods are evicted; suitable for taking a node out of service.

A taint whose effect is NoExecute also affects pods already running on the node:

  • a pod that does not tolerate the NoExecute taint is evicted immediately
  • a pod that tolerates the NoExecute taint and does not set tolerationSeconds keeps running on the node indefinitely
  • a pod that tolerates the NoExecute taint but sets tolerationSeconds keeps running only for that many seconds

  1. List the node labels
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME        STATUS     ROLES    AGE   VERSION   LABELS
10.0.0.12   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.12,kubernetes.io/os=linux
10.0.0.13   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.13,kubernetes.io/os=linux
  1. Add a label: the node role
kubectl label nodes 10.0.0.12 node-role.kubernetes.io/node=
  1. List the node labels again: the ROLES column for 10.0.0.12 is now node
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME        STATUS     ROLES    AGE   VERSION   LABELS
10.0.0.12   NotReady   node     17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.12,kubernetes.io/os=linux,node-role.kubernetes.io/node=
10.0.0.13   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.13,kubernetes.io/os=linux
  1. Remove the label
kubectl label nodes 10.0.0.12 node-role.kubernetes.io/node-
  1. Add labels: disk type
kubectl label nodes 10.0.0.12 disk=ssd
kubectl label nodes 10.0.0.13 disk=sata
  1. Remove the other pods
kubectl delete deployments --all
  1. Check the current pods: there are 2
[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nginx-6459cd46fd-dl2ct   1/1     Running   1          16h   172.18.28.3   10.0.0.12   <none>           <none>
nginx-6459cd46fd-zfwbg   1/1     Running   0          16h   172.18.98.4   10.0.0.13   <none>           <none>

NoSchedule

  1. Add a taint: NoSchedule keyed on disk type
kubectl taint node 10.0.0.12 disk=ssd:NoSchedule
  1. View the taint
kubectl describe nodes 10.0.0.12|grep Taint
  1. Scale the deployment
kubectl scale deployment nginx --replicas=5
  1. Verify: the new pods are all created on 10.0.0.13
kubectl get pod -o wide
  1. Remove the taint
kubectl taint node 10.0.0.12 disk-

NoExecute

  1. Add a taint: NoExecute keyed on disk type
kubectl taint node 10.0.0.12 disk=ssd:NoExecute
  1. Verify: all pods are now created on 10.0.0.13, and the pods previously on 10.0.0.12 are moved to 10.0.0.13 as well
kubectl get pod -o wide
  1. Remove the taint
kubectl taint node 10.0.0.12 disk-

PreferNoSchedule

  1. Add a taint: PreferNoSchedule keyed on disk type
kubectl taint node 10.0.0.12 disk=ssd:PreferNoSchedule
  1. Scale the deployment
kubectl scale deployment nginx --replicas=2
kubectl scale deployment nginx --replicas=5
  1. Verify: some pods are still created on 10.0.0.12
kubectl get pod -o wide
  1. Remove the taint
kubectl taint node 10.0.0.12 disk-

Tolerations

Tolerations are a pod.spec attribute: a pod that declares a matching toleration tolerates the taint and may be scheduled onto a node that carries it.


  1. View the field documentation
kubectl explain pod.spec.tolerations
  1. Write a deploy resource YAML whose pods tolerate the NoExecute taint
mkdir -p /root/k8s_yaml/deploy && cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
      - key: "disk"
        operator: "Equal"
        value: "ssd"
        effect: "NoExecute"
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
EOF
  1. Create the deploy resource
kubectl delete deployments nginx
kubectl create -f k8s_deploy.yaml
  1. Check the current pods
kubectl get pod -o wide
  1. Add a taint: NoExecute keyed on disk type
kubectl taint node 10.0.0.12 disk=ssd:NoExecute
  1. Scale the deployment
kubectl scale deployment nginx --replicas=5
  1. Verify: some pods are created on 10.0.0.12 because they tolerate the taint
kubectl get pod -o wide
  1. Remove the taint
kubectl taint node 10.0.0.12 disk-

pod.spec.tolerations examples

tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
---
tolerations:
- key: "key"
  operator: "Exists"
  effect: "NoSchedule"
---
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoExecute"
  tolerationSeconds: 3600

Notes:

  • key, value, and effect must match the taint set on the Node
  • when operator is Exists, value is ignored; only key and effect need to match
  • tolerationSeconds: applies to taints with effect NoExecute; when set, it is the length of time the pod may keep running on the node before being evicted.

If neither key nor effect is specified and operator is Exists, the toleration matches all taints (every key, value, and effect):

tolerations:
- operator: "Exists"

If effect is omitted, the toleration matches every effect of taints with that key:

tolerations:
- key: "key"
  operator: "Exists"

When there are multiple masters, the following setting avoids wasting their resources:

kubectl taint nodes Node-name node-role.kubernetes.io/master=:PreferNoSchedule

Common resources

The pod resource

A pod is made up of at least two containers: the infra (pause) container plus the business container.

  • Dynamic pod: the YAML is fetched from etcd (via the apiserver).

  • Static pod: the kubelet reads the YAML from a local directory.


  1. On k8s-node1, modify kubelet.service to set the static pod path; this directory may contain only static pod YAML files
sed -i '22a \ \ --pod-manifest-path /etc/kubernetes/manifest \\' /usr/lib/systemd/system/kubelet.service
mkdir /etc/kubernetes/manifest
systemctl daemon-reload
systemctl restart kubelet.service
  1. On k8s-node1, create the static pod YAML file: the static pod is created immediately, with the node IP appended to its name
cat > /etc/kubernetes/manifest/k8s_pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.13
      ports:
        - containerPort: 80
EOF
  1. Check the pods from the master
[root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6459cd46fd-dl2ct   1/1     Running   0          51m
nginx-6459cd46fd-zfwbg   1/1     Running   0          51m
test-8c7c68d6d-x79hf     1/1     Running   0          51m
static-pod-10.0.0.12     1/1     Running   0          3s

kubeadm-based installs of k8s are built on static pods.

Static pods:

  • Create the YAML file and the pod is created automatically, immediately.

  • Move the YAML file away and the pod is removed automatically, immediately (see the quick check below).
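
A quick way to see this on k8s-node1, assuming the manifest path configured above:

# on k8s-node1: moving the manifest away removes the static pod, moving it back recreates it
mv /etc/kubernetes/manifest/k8s_pod.yaml /tmp/
kubectl get pod     # on the master: static-pod-<node-ip> disappears shortly
mv /tmp/k8s_pod.yaml /etc/kubernetes/manifest/
kubectl get pod     # on the master: the static pod comes back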


The secret resource

A secret is a namespaced resource holding sensitive data such as passwords, keys, and certificates.
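
As a minimal sketch of the resource itself (the name and value below are only examples), a generic secret can be created from a literal and read back; the values are stored base64-encoded:

kubectl create secret generic db-pass --from-literal=password=123456   # example name/value
kubectl get secret db-pass -o yaml                                     # data.password holds the base64-encoded value
echo MTIzNDU2 | base64 -d                                              # decodes back to 123456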


Integrating k8s with Harbor

First set up a Harbor docker image registry, enable HTTPS, and create a private project.

Then use secret resources to hold the credentials used to authenticate when pulling images.


First approach: the deploy references the secret directly when pulling images

  1. Create the secret resource regcred
kubectl create secret docker-registry regcred --docker-server=blog.oldqiang.com --docker-username=admin --docker-password=a123456 --docker-email=296917342@qq.com
  1. View the secret resources
[root@k8s-master ~]# kubectl get secrets 
NAME                       TYPE                                  DATA   AGE
default-token-vgc4l        kubernetes.io/service-account-token   3      2d19h
regcred                    kubernetes.io/dockerconfigjson        1      114s
  1. The deploy resource uses the secret's credentials to pull the image
cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy_secrets.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: nginx
        image: blog.oldqiang.com/oldboy/nginx:1.13
        ports:
        - containerPort: 80
EOF
  1. Create the deploy resource
kubectl delete deployments nginx
kubectl create -f k8s_deploy_secrets.yaml
  1. Check the current pods: the resource was created successfully
kubectl get pod -o wide

Second approach (RBAC-style): the pod pulls images through a ServiceAccount that references the secret

  1. Create the secret resource harbor-secret
kubectl create secret docker-registry harbor-secret --namespace=default --docker-username=admin --docker-password=a123456 --docker-server=blog.oldqiang.com
  1. Create the YAML files for the ServiceAccount and the pod
cd /root/k8s_yaml/deploy
# create the ServiceAccount
cat > /root/k8s_yaml/deploy/k8s_sa_harbor.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: docker-image
  namespace: default
imagePullSecrets:
- name: harbor-secret
EOF
# create the pod
cat > /root/k8s_yaml/deploy/k8s_pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  serviceAccount: docker-image
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80
EOF
  1. Create the resources
kubectl delete deployments nginx
kubectl create -f k8s_sa_harbor.yaml
kubectl create -f k8s_pod.yaml
  1. Check the current pods: the resources were created successfully
kubectl get pod -o wide

The configmap resource

A configmap holds configuration files and can be mounted into pod containers.


  1. Create the configuration file
cat > /root/k8s_yaml/deploy/81.conf <<EOF
    server {
        listen       81;
        server_name  localhost;
        root         /html;
        index      index.html index.htm;
        location / {
        }
    }
EOF
  1. Create the configmap resource (multiple --from-file flags may be given)
kubectl create configmap 81.conf --from-file=/root/k8s_yaml/deploy/81.conf
  1. View the configmap resource
kubectl get cm
kubectl get cm 81.conf -o yaml
  1. Mount the configmap in a deploy resource
cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy_cm.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-config
          configMap:
            name: 81.conf
            items:
              - key: 81.conf  # select one of the files in the configmap
                path: 81.conf
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: nginx-config
            mountPath: /etc/nginx/conf.d
        ports:
        - containerPort: 80
          name: port1
        - containerPort: 81
          name: port2
EOF
  1. Create the deploy resource
kubectl delete deployments nginx
kubectl create -f k8s_deploy_cm.yaml
  1. Check the current pods
kubectl get pod -o wide
  1. Note, however, that volumeMounts mounts a whole directory: the files originally under it are hidden, so port 80 no longer serves. See the subPath sketch below for one way around this.
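
One common workaround (a sketch using the same names as above, not applied in this build; the file name k8s_deploy_cm_subpath.yaml is just for illustration) is to mount only the single file with subPath, leaving the image's default.conf in place:

cat > /root/k8s_yaml/deploy/k8s_deploy_cm_subpath.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-config
          configMap:
            name: 81.conf
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: nginx-config
            mountPath: /etc/nginx/conf.d/81.conf   # mount just this one file
            subPath: 81.conf                       # key path inside the configmap volume
        ports:
        - containerPort: 80
        - containerPort: 81
EOF
kubectl apply -f /root/k8s_yaml/deploy/k8s_deploy_cm_subpath.yaml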

The initContainers resource

Before a pod's main containers start, its initContainers run to perform initialization.


  1. View the field documentation
kubectl explain pod.spec.initContainers
  1. Deploy resource combining initContainers with the configmap

Initialization steps:

  • Init container one: mounts the hostPath volume and the configmap, and copies 81.conf into the hostPath directory
  • Init container two: mounts the hostPath volume, and copies default.conf into it

Finally the Deployment's main container starts and mounts that directory.

cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy_init.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: config
          hostPath:
            path: /mnt
        - name: tmp
          configMap:
            name: 81.conf
            items:
              - key: 81.conf
                path: 81.conf
      initContainers:
      - name: cp1
        image: nginx:1.13
        volumeMounts:
          - name: config
            mountPath: /nginx_config
          - name: tmp
            mountPath: /tmp
        command: ["cp","/tmp/81.conf","/nginx_config/"]
      - name: cp2
        image: nginx:1.13
        volumeMounts:
          - name: config
            mountPath: /nginx_config
        command: ["cp","/etc/nginx/conf.d/default.conf","/nginx_config/"]
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: config
            mountPath: /etc/nginx/conf.d
        ports:
        - containerPort: 80
          name: port1
        - containerPort: 81
          name: port2
EOF
  1. Create the deploy resource
kubectl delete deployments nginx
kubectl create -f k8s_deploy_init.yaml
  1. Check the current pods
kubectl get pod -o wide -l app=nginx
  1. Confirm both configuration files are present: 81.conf and default.conf
kubectl exec -ti nginx-7879567f94-25g5s /bin/bash
ls /etc/nginx/conf.d

Common services

RBAC

RBAC: role-based access control

RBAC is Kubernetes' authorization mechanism; it is enabled by starting the apiserver with --authorization-mode=RBAC.

Granting access with RBAC takes two steps:

1) Define a role: the role definition specifies the access-control rules for resources;

2) Bind the role: bind a subject to the role, which grants that subject the access.


Subject: sa (ServiceAccount)

Roles:

  • Namespaced role: Role
    • Binding (grant): RoleBinding
  • Cluster-wide role: ClusterRole
    • Binding (grant): ClusterRoleBinding

K8S RBAC in detail


Usage flow

(RBAC usage flow diagram)

  • For people: if a user needs permissions, bind the Role to a User (or Group); this requires creating the User/Group.

  • For programs: if a program needs permissions, bind the Role to a ServiceAccount (this requires creating the ServiceAccount and referencing it in the deployment). A minimal sketch follows.
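
As a minimal namespaced sketch (the names here are examples, not part of this build): a Role that may only read pods, bound to a ServiceAccount, then checked with kubectl auth can-i:

cat > /root/k8s_yaml/deploy/rbac_demo.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-reader-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: pod-reader-sa
  namespace: default
EOF
kubectl create -f /root/k8s_yaml/deploy/rbac_demo.yaml
kubectl auth can-i list pods --as=system:serviceaccount:default:pod-reader-sa     # yes
kubectl auth can-i delete pods --as=system:serviceaccount:default:pod-reader-sa   # no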


Deploying the DNS service

Deploy coredns (see the official docs)

  1. On the master, create the config file coredns.yaml (scheduling pinned to node2 via nodeName)
mkdir -p /root/k8s_yaml/dns && cd /root/k8s_yaml/dns
cat > /root/k8s_yaml/dns/coredns.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      nodeName: 10.0.0.13
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        - name: tmp
          mountPath: /tmp
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.230.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
  1. On the master, create the resources (image needed: coredns/coredns:1.3.1)
kubectl create -f coredns.yaml
  1. On the master, check the pod's ServiceAccount
kubectl get pod -n kube-system
kubectl get pod -n kube-system coredns-6cf5d7fdcf-dvp8r -o yaml | grep -i ServiceAccount
  1. On the master, check the coredns ServiceAccount's cluster role and binding
kubectl get clusterrole | grep coredns
kubectl get clusterrolebindings | grep coredns
kubectl get sa -n kube-system | grep coredns
  1. On the master, create the deploy resource YAML files for tomcat + mysql
mkdir -p /root/k8s_yaml/tomcat_deploy && cd /root/k8s_yaml/tomcat_deploy
cat > /root/k8s_yaml/tomcat_deploy/mysql-deploy.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: tomcat
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: '123456'
EOF
cat > /root/k8s_yaml/tomcat_deploy/mysql-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  namespace: tomcat
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql
EOF
cat > /root/k8s_yaml/tomcat_deploy/tomcat-deploy.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: tomcat
  name: myweb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: kubeguide/tomcat-app:v2
          ports:
          - containerPort: 8080
          env:
          - name: MYSQL_SERVICE_HOST
            value: 'mysql'
          - name: MYSQL_SERVICE_PORT
            value: '3306'
EOF
cat > /root/k8s_yaml/tomcat_deploy/tomcat-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  namespace: tomcat
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30008
  selector:
    app: myweb
EOF
  1. On the master, create the resources (images needed: mysql:5.7 and kubeguide/tomcat-app:v2)
kubectl create namespace tomcat
kubectl create -f .
  1. Verify from the master
[root@k8s-master tomcat_demo]# kubectl get pod -n tomcat
NAME                     READY   STATUS    RESTARTS   AGE
mysql-94f6bbcfd-6nng8    1/1     Running   0          5s
myweb-5c8956ff96-fnhjh   1/1     Running   0          5s
[root@k8s-master tomcat_deploy]# kubectl -n tomcat exec -ti myweb-5c8956ff96-fnhjh /bin/bash
root@myweb-5c8956ff96-fnhjh:/usr/local/tomcat# ping mysql
PING mysql.tomcat.svc.cluster.local (10.254.94.77): 56 data bytes
^C--- mysql.tomcat.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
root@myweb-5c8956ff96-fnhjh:/usr/local/tomcat# exit
exit
  1. Verify DNS
  • master node
[root@k8s-master deploy]# kubectl get pod -n kube-system -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
coredns-6cf5d7fdcf-dvp8r   1/1     Running   0          177m   172.18.98.2   10.0.0.13   <none>           <none>
yum install bind-utils -y
dig @172.18.98.2 kubernetes.default.svc.cluster.local +short
  • node nodes (through the cluster DNS service IP, i.e. via kube-proxy)
yum install bind-utils -y
dig @10.254.230.254 kubernetes.default.svc.cluster.local +short
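
DNS can also be checked from inside a pod, which exercises the full path (kube-proxy plus coredns); an optional test, assuming the busybox:1.28 image is available:

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
# should resolve to 10.254.0.1 via server 10.254.230.254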

Deploying the dashboard service

  1. Start from the official manifest, lightly modified

For k8s 1.15, instead of the dashboard-controller.yaml it is recommended to use the dashboard 1.10.1 kubernetes-dashboard.yaml manifest.

mkdir -p /root/k8s_yaml/dashboard && cd /root/k8s_yaml/dashboard
cat > /root/k8s_yaml/dashboard/kubernetes-dashboard.yaml <<EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
EOF
# 镜像改用国内源
  image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
# service类型改为NodePort:指定宿主机端口
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443
  1. 创建资源(准备镜像:registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1)
kubectl create -f kubernetes-dashboard.yaml
  1. 查看当前已存在角色admin
kubectl get clusterrole | grep admin
  1. 创建用户(ServiceAccount),绑定已存在的集群角色cluster-admin(默认ServiceAccount只有最小权限)
cat > /root/k8s_yaml/dashboard/dashboard_rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-admin
  namespace: kube-system
EOF
  1. 创建资源
kubectl create -f dashboard_rbac.yaml
  1. 查看admin角色用户令牌
[root@k8s-master dashboard]# kubectl describe secrets -n kube-system kubernetes-admin-token-tpqs6 
Name:         kubernetes-admin-token-tpqs6
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-admin
              kubernetes.io/service-account.uid: 17f1f684-588a-4639-8ec6-a39c02361d0e

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1354 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWFkbWluLXRva2VuLXRwcXM2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVybmV0ZXMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxN2YxZjY4NC01ODhhLTQ2MzktOGVjNi1hMzljMDIzNjFkMGUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZXJuZXRlcy1hZG1pbiJ9.JMvv-W50Zala4I0uxe488qjzDZ2m05KN0HMX-RCHFg87jHq49JGyqQJQDFgujKCyecAQSYRFm4uZWnKiWR81Xd7IZr16pu5exMpFaAryNDeAgTAsvpJhaAuumopjiXXYgip-7pNKxJSthmboQkQ4OOmzSHRv7N6vOsyDQOhwGcgZ01862dsjowP3cCPL6GSQCeXT0TX968MyeKZ-2JV4I2XdbkPoZYCRNvwf9F3u74xxPlC9vVLYWdNP8rXRBXi3W_DdQyXntN-jtMXHaN47TWuqKIgyWmT3ZzTIKhKART9_7YeiOAA6LVGtYq3kOvPqyGHvQulx6W2ADjCTAAPovA
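注:secret 名称带有随机后缀,可先用类似命令查询实际名称(示例):
kubectl get secrets -n kube-system | grep kubernetes-admin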
  1. 使用火狐浏览器访问 https://10.0.0.12:30001,使用令牌登录
  2. 生成证书,解决Google浏览器不能打开kubernetes dashboard的问题
mkdir /root/k8s_yaml/dashboard/key && cd /root/k8s_yaml/dashboard/key
openssl genrsa -out dashboard.key 2048
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=10.0.0.11'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
  1. 删除原有的证书secret资源
kubectl delete secret kubernetes-dashboard-certs -n kube-system
  1. 创建新的证书secret资源
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
  1. 删除pod,自动创建新pod生效
[root@k8s-master key]# kubectl get pod -n kube-system 
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6cf5d7fdcf-dvp8r                1/1     Running   0          4h19m
kubernetes-dashboard-5dc4c54b55-sn8sv   1/1     Running   0          41m
kubectl delete pod -n kube-system kubernetes-dashboard-5dc4c54b55-sn8sv
  1. 使用谷歌浏览器访问 https://10.0.0.12:30001,使用令牌登录
  2. 令牌生成kubeconfig,解决令牌登陆快速超时的问题
DASH_TOKEN='eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWFkbWluLXRva2VuLXRwcXM2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVybmV0ZXMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxN2YxZjY4NC01ODhhLTQ2MzktOGVjNi1hMzljMDIzNjFkMGUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZXJuZXRlcy1hZG1pbiJ9.JMvv-W50Zala4I0uxe488qjzDZ2m05KN0HMX-RCHFg87jHq49JGyqQJQDFgujKCyecAQSYRFm4uZWnKiWR81Xd7IZr16pu5exMpFaAryNDeAgTAsvpJhaAuumopjiXXYgip-7pNKxJSthmboQkQ4OOmzSHRv7N6vOsyDQOhwGcgZ01862dsjowP3cCPL6GSQCeXT0TX968MyeKZ-2JV4I2XdbkPoZYCRNvwf9F3u74xxPlC9vVLYWdNP8rXRBXi3W_DdQyXntN-jtMXHaN47TWuqKIgyWmT3ZzTIKhKART9_7YeiOAA6LVGtYq3kOvPqyGHvQulx6W2ADjCTAAPovA'
kubectl config set-cluster kubernetes --server=https://10.0.0.11:6443 --kubeconfig=/root/dashbord-admin.conf
kubectl config set-credentials admin --token=$DASH_TOKEN --kubeconfig=/root/dashbord-admin.conf
kubectl config set-context admin --cluster=kubernetes --user=admin --kubeconfig=/root/dashbord-admin.conf
kubectl config use-context admin --kubeconfig=/root/dashbord-admin.conf
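可先在本机验证该 kubeconfig 是否可用(示例;由于 kubeconfig 中未写入 CA,这里临时跳过证书校验):
kubectl --kubeconfig=/root/dashbord-admin.conf --insecure-skip-tls-verify get nodes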
  1. 下载到主机,用于以后登录使用
cd ~
sz dashbord-admin.conf
  1. 使用谷歌浏览器访问 https://10.0.0.12:30001,使用kubeconfig文件登录,可以进入容器执行exec

网络

映射(endpoints资源)

  1. master节点查看endpoints资源
[root@k8s-master ~]# kubectl get endpoints 
NAME         ENDPOINTS        AGE
kubernetes   10.0.0.11:6443   28h
... ...

可以利用endpoints把集群外部服务映射到集群内部使用。带标签选择器的Service会根据标签自动创建并关联endpoints;不带选择器的Service则关联同名的endpoints资源。

  1. k8s-node2准备外部数据库
yum install mariadb-server -y
systemctl start mariadb
mysql_secure_installation
# 以下为交互式应答:当前root密码直接回车,随后依次输入 n、y、y、y、y

n
y
y
y
y
mysql -e "grant all on *.* to root@'%' identified by '123456';"

该项目在tomcat的index.html页面中把数据库连接写死了:用户名root,密码123456。

  1. master节点创建endpoint和svc资源yaml文件
cd /root/k8s_yaml/tomcat_deploy
cat > /root/k8s_yaml/tomcat_deploy/mysql_endpoint_svc.yaml <<EOF
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql
  namespace: tomcat
subsets:
- addresses:
  - ip: 10.0.0.13
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
--- 
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: tomcat
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306  
  type: ClusterIP
EOF
# 可以参考系统默认创建
kubectl get endpoints kubernetes -o yaml
kubectl get svc kubernetes -o yaml

注意:此时不能使用标签选择器!

  1. master节点创建资源
kubectl delete deployment mysql -n tomcat
kubectl delete svc mysql -n tomcat
kubectl create -f mysql_endpoint_svc.yaml
  1. master节点查看endpoints资源及其与svc的关联
kubectl get endpoints -n tomcat
kubectl describe svc -n tomcat
  1. 浏览器访问http://10.0.0.12:30008/demo/

  2. k8s-node2查看数据库验证

[root@k8s-node2 ~]# mysql -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| HPE_APP            |
| mysql              |
| performance_schema |
+--------------------+
[root@k8s-node2 ~]# mysql -e 'use HPE_APP;select * from T_USERS;'
+----+-----------+-------+
| ID | USER_NAME | LEVEL |
+----+-----------+-------+
|  1 | me        | 100   |
|  2 | our team  | 100   |
|  3 | HPE       | 100   |
|  4 | teacher   | 100   |
|  5 | docker    | 100   |
|  6 | google    | 100   |
+----+-----------+-------+

kube-proxy的ipvs模式

  1. node节点安装依赖命令
yum install ipvsadm conntrack-tools -y
  1. node节点修改kube-proxy.service增加参数
cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \\
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \\
  --cluster-cidr 172.18.0.0/16 \\
  --hostname-override 10.0.0.12 \\
  --proxy-mode ipvs \\
  --logtostderr=false \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
--proxy-mode ipvs  # 启用ipvs模式

ipvs模式使用LVS的NAT转发;若节点不满足ipvs条件(如缺少相应内核模块),kube-proxy会自动降级为iptables模式。

  1. node节点重启kube-proxy并检查LVS规则
systemctl daemon-reload 
systemctl restart kube-proxy.service 
ipvsadm -L -n 
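ipvs 模式依赖相应内核模块,可先确认是否已加载(示例命令):
lsmod | grep -e ip_vs -e nf_conntrack
# 如未加载,可手动加载(模块名以内核实际提供为准)
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done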

七层负载均衡(ingress-traefik)

Ingress 包含两大组件:Ingress Controller 和 Ingress。

  • ingress-controller(traefik)服务组件,直接使用宿主机网络。
  • Ingress资源是基于DNS名称(host)或URL路径把请求转发到指定的Service资源的转发规则



Ingress-Traefik

Traefik 是一款开源的反向代理与负载均衡工具。它最大的优点是能够与常见的微服务系统直接整合,实现自动化动态配置。目前支持 Docker、Swarm、Mesos/Marathon、Kubernetes、Consul、Etcd、Zookeeper、BoltDB、Rest API 等多种后端。

Traefik可观测性方案



创建rbac

  1. 创建rbac的yaml文件
mkdir -p /root/k8s_yaml/ingress && cd /root/k8s_yaml/ingress
cat > /root/k8s_yaml/ingress/ingress_rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
EOF
  1. 创建资源
kubectl create -f ingress_rbac.yaml
  1. 查看资源
kubectl get serviceaccounts -n kube-system | grep traefik-ingress-controller
kubectl get clusterrole -n kube-system | grep traefik-ingress-controller
kubectl get clusterrolebindings.rbac.authorization.k8s.io -n kube-system | grep traefik-ingress-controller

部署traefik服务

  1. 创建traefik的DaemonSet资源yaml文件
cat > /root/k8s_yaml/ingress/ingress_traefik.yaml <<EOF
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - operator: "Exists"
      #nodeSelector:
        #kubernetes.io/hostname: master
      # 允许使用主机网络,指定主机端口hostPort
      hostNetwork: true
      containers:
      - image: traefik:v1.7.2
        imagePullPolicy: IfNotPresent
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=DEBUG
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort
EOF
  1. 创建资源(准备镜像:traefik:v1.7.2)
kubectl create -f ingress_traefik.yaml
  1. 浏览器访问 traefik 的 dashboard:http://10.0.0.12:8080,此时没有 server。

创建Ingress资源

  1. 查看要代理的svc资源的NAME和PORT
[root@k8s-master ingress]# kubectl get svc -n tomcat 
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
mysql   ClusterIP   10.254.71.221    <none>        3306/TCP         4h2m
myweb   NodePort    10.254.130.141   <none>        8080:30008/TCP   8h
  1. 创建Ingress资源yaml文件
cat > /root/k8s_yaml/ingress/ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-myweb
  namespace: tomcat
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: tomcat.oldqiang.com
    http:
      paths:
      - backend:
          serviceName: myweb
          servicePort: 8080
EOF
  1. 创建资源
kubectl create -f ingress.yaml
  1. 查看资源
kubectl get ingress -n tomcat

测试访问

  1. windows配置:在C:\Windows\System32\drivers\etc\hosts文件中增加10.0.0.12 tomcat.oldqiang.com

  2. 浏览器直接访问tomcat:http://tomcat.oldqiang.com/demo/


  1. 浏览器访问:http://10.0.0.12:8080 此时BACKENDS(后端)有Server
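除浏览器外,也可以直接用 curl 指定 Host 头在命令行验证(示例):
curl -H 'Host: tomcat.oldqiang.com' http://10.0.0.12/demo/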



七层负载均衡(ingress-nginx)


六个基础yaml文件:

  • Namespace
  • ConfigMap
  • RBAC
  • Service:添加NodePort端口
  • Deployment:默认404页面,改用国内阿里云镜像
  • Deployment:ingress-controller,改用国内阿里云镜像
  1. 准备配置文件
mkdir /root/k8s_yaml/ingress-nginx && cd /root/k8s_yaml/ingress-nginx
# 创建命名空间 ingress-nginx
cat > /root/k8s_yaml/ingress-nginx/namespace.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
EOF
# 创建配置资源
cat > /root/k8s_yaml/ingress-nginx/configmap.yaml <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
EOF
# 如果外界访问的域名不存在的话,则默认转发到default-http-backend这个Service,直接返回404:
cat > /root/k8s_yaml/ingress-nginx/default-backend.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          # 改用国内阿里云镜像
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
EOF
# 创建Ingress的RBAC授权控制,包括:
# ServiceAccount、ClusterRole、Role、RoleBinding、ClusterRoleBinding
cat > /root/k8s_yaml/ingress-nginx/rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
EOF
# 创建ingress-controller。将新加入的Ingress进行转化为Nginx的配置。
cat > /root/k8s_yaml/ingress-nginx/with-rbac.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          # 改用国内阿里云镜像
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=\$(POD_NAMESPACE)/default-http-backend
            - --configmap=\$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=\$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=\$(POD_NAMESPACE)/udp-services
            - --publish-service=\$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
EOF
# 创建Service资源,对外提供服务
cat > /root/k8s_yaml/ingress-nginx/service-nodeport.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 32080  # http
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 32443  # https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
EOF
  1. 所有node节点准备镜像
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
docker images
  1. 创建资源
kubectl create -f namespace.yaml
kubectl create -f configmap.yaml
kubectl create -f rbac.yaml
kubectl create -f default-backend.yaml
kubectl create -f with-rbac.yaml
kubectl create -f service-nodeport.yaml
  1. 查看ingress-nginx组件状态
kubectl get all -n ingress-nginx
  1. 访问http://10.0.0.12:32080/
[root@k8s-master ingress-nginx]# curl 10.0.0.12:32080
default backend - 404
  1. 准备后端Service,创建Deployment资源(nginx)
cat > /root/k8s_yaml/ingress-nginx/deploy-demon.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-nginx
spec:
  selector:
    app: myapp-nginx
    release: canary
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: nginx-deploy
spec:
  replicas: 2
  selector: 
    matchLabels:
      app: myapp-nginx
      release: canary
  template:
    metadata:
      labels:
        app: myapp-nginx
        release: canary
    spec:
      containers:
      - name: myapp-nginx
        image: nginx:1.13
        ports:
        - name: httpd
          containerPort: 80
EOF
  1. 创建资源(准备镜像:nginx:1.13)
kubectl apply -f deploy-demon.yaml
  1. 查看资源
kubectl get all
  1. 创建ingress资源:将nginx加入ingress-nginx中
cat > /root/k8s_yaml/ingress-nginx/ingress-myapp.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  annotations: 
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.oldqiang.com
    http:
      paths:
      - path: 
        backend:
          serviceName: myapp-nginx
          servicePort: 80
EOF
  1. 创建资源
kubectl apply -f ingress-myapp.yaml
  1. 查看资源
kubectl get ingresses
  1. windows配置:在C:\Windows\System32\drivers\etc\hosts文件中增加10.0.0.12 myapp.oldqiang.com
  2. 浏览器直接访问http://myapp.oldqiang.com:32080/,显示nginx欢迎页
  3. 修改nginx页面以便区分
[root@k8s-master ingress-nginx]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-6b4c84588-crgvr   1/1     Running   0          22m
nginx-deploy-6b4c84588-krvwz   1/1     Running   0          22m
kubectl exec -ti nginx-deploy-6b4c84588-crgvr /bin/bash
echo web1 > /usr/share/nginx/html/index.html
exit
kubectl exec -ti nginx-deploy-6b4c84588-krvwz /bin/bash
echo web2 > /usr/share/nginx/html/index.html
exit
  1. 浏览器访问http://myapp.oldqiang.com:32080/,刷新测试负载均衡
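也可以用 curl 循环请求,在命令行观察负载均衡效果(示例):
for i in $(seq 4); do curl -s -H 'Host: myapp.oldqiang.com' http://10.0.0.12:32080/; done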



弹性伸缩

heapster监控

参考heapster1.5.4官方配置文件

  1. 查看已存在默认角色heapster
kubectl get clusterrole | grep heapster
  1. 创建heapster所需RBAC、Service和Deployment的yaml文件
mkdir /root/k8s_yaml/heapster/ && cd /root/k8s_yaml/heapster/
cat > /root/k8s_yaml/heapster/heapster.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: registry.aliyuncs.com/google_containers/heapster-amd64:v1.5.3
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: registry.aliyuncs.com/google_containers/heapster-grafana-amd64:v4.4.3
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: registry.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.3.3
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
EOF
  1. 创建资源
kubectl create -f heapster.yaml
  1. 高版本k8s已不建议使用heapster做弹性伸缩,需在kube-controller-manager中强制开启(修改后需重启生效):
kube-controller-manager \
--horizontal-pod-autoscaler-use-rest-clients=false
sed -i '8a \ \ --horizontal-pod-autoscaler-use-rest-clients=false \\' /usr/lib/systemd/system/kube-controller-manager.service
systemctl daemon-reload
systemctl restart kube-controller-manager.service
  1. 创建业务资源
cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy3.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
EOF
kubectl create -f k8s_deploy3.yaml
  1. 创建HPA规则(等价的声明式写法见本节末尾的示例)
kubectl autoscale deploy nginx --max=6 --min=1 --cpu-percent=5
  1. 查看资源
kubectl get pod
kubectl get hpa
  1. 清除heapster资源(与metrics-server不兼容,需先删除)
kubectl delete -f heapster.yaml
kubectl delete hpa nginx
# 还原kube-controller-manager.service配置
  1. 当node节点NotReady时,强制删除pod
kubectl delete -n kube-system pod Pod_Name --force --grace-period 0
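附:与前面 kubectl autoscale 命令等价的声明式 HPA 写法(示意;文件路径为示例,scaleTargetRef 的 apiVersion 与上面 Deployment 保持一致,heapster 或 metrics-server 指标来源均适用):
cat > /root/k8s_yaml/deploy/k8s_hpa.yaml <<EOF
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 6
  targetCPUUtilizationPercentage: 5
EOF
kubectl create -f /root/k8s_yaml/deploy/k8s_hpa.yaml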

metric-server

metrics-server Github 1.15

  1. 准备yaml文件,使用国内镜像地址(2个),修改一些其他参数
mkdir -p /root/k8s_yaml/metrics/ && cd /root/k8s_yaml/metrics/
cat <<EOF > /root/k8s_yaml/metrics/auth-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
EOF
cat <<EOF > /root/k8s_yaml/metrics/metrics-apiservice.yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
EOF
cat <<EOF > /root/k8s_yaml/metrics/metrics-server.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server-v0.3.3
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.3
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.3
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.3
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3
        command:
        - /metrics-server
        - --metric-resolution=30s
        # These are needed for GKE, which doesn't support secure communication yet.
        # Remove these lines for non-GKE clusters, and when GKE supports token-based auth.
        #- --kubelet-port=10255
        #- --deprecated-kubelet-completely-insecure=true
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: registry.aliyuncs.com/google_containers/addon-resizer:1.8.5
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          #- --cpu=80m
          - --extra-cpu=0.5m
          #- --memory=80Mi
          #- --extra-memory=8Mi
          - --threshold=5
          - --deployment=metrics-server-v0.3.3
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          - --minClusterSize=2
          # Specifies the smallest cluster (defined in number of nodes)
          # resources will be scaled to.
          #- --minClusterSize={{ metrics_server_min_cluster_size }}
      volumes:
        - name: metrics-server-config-volume
          configMap:
            name: metrics-server-config
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
EOF

下载指定配置文件:

for file in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml;do wget https://raw.githubusercontent.com/kubernetes/kubernetes/v1.15.0/cluster/addons/metrics-server/$file;done
# 使用国内镜像
  image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3
  command:
        - /metrics-server
        - --metric-resolution=30s
# 不验证客户端证书
        - --kubelet-insecure-tls
# 默认解析主机名,coredns中没有物理机的主机名解析,指定使用IP
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
... ...
# 使用国内镜像
        image: registry.aliyuncs.com/google_containers/addon-resizer:1.8.5
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          #- --cpu=80m
          - --extra-cpu=0.5m
          #- --memory=80Mi
          #- --extra-memory=8Mi
          - --threshold=5
          - --deployment=metrics-server-v0.3.3
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          - --minClusterSize=2
# 添加 node/stats 权限
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats

不加上述参数,可能报错:

unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:k8s-node02: unable to fetch metrics from Kubelet k8s-node02 (10.10.0.13): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:k8s-node01: unable to fetch metrics from Kubelet k8s-node01 (10.10.0.12): request failed - "401 Unauthorized", response: "Unauthorized"]
  1. 创建资源(准备镜像:registry.aliyuncs.com/google_containers/addon-resizer:1.8.5和registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3)
kubectl create -f .
  1. 查看资源,使用-l指定标签
kubectl get pod -n kube-system -l k8s-app=metrics-server
  1. 查看资源监控:报错
kubectl top nodes
  1. 注意:二进制安装时需要在master节点也安装kubelet、kube-proxy、docker-ce,并将master节点作为worker节点加入集群,否则apiserver可能无法连接metrics-server而报timeout错误。
kubectl get apiservices v1beta1.metrics.k8s.io -o yaml
# 报错信息:mertics无法与 apiserver服务通信
"metrics-server error "Client.Timeout exceeded while awaiting headers"
  1. 其他报错查看api,日志
kubectl describe apiservice v1beta1.metrics.k8s.io
kubectl get pods -n kube-system | grep 'metrics'
kubectl logs metrics-server-v0.3.3-6b7c586ffd-7b4n4 metrics-server -n kube-system
  1. 修改kube-apiserver.service开启聚合层,使用证书
cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
[Service]
ExecStart=/usr/sbin/kube-apiserver \\
  --audit-log-path /var/log/kubernetes/audit-log \\
  --audit-policy-file /etc/kubernetes/audit.yaml \\
  --authorization-mode RBAC \\
  --client-ca-file /etc/kubernetes/ca.pem \\
  --requestheader-client-ca-file /etc/kubernetes/ca.pem \\
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \\
  --etcd-cafile /etc/kubernetes/ca.pem \\
  --etcd-certfile /etc/kubernetes/client.pem \\
  --etcd-keyfile /etc/kubernetes/client-key.pem \\
  --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \\
  --service-account-key-file /etc/kubernetes/ca-key.pem \\
  --service-cluster-ip-range 10.254.0.0/16 \\
  --service-node-port-range 30000-59999 \\
  --kubelet-client-certificate /etc/kubernetes/client.pem \\
  --kubelet-client-key /etc/kubernetes/client-key.pem \\
  --proxy-client-cert-file=/etc/kubernetes/client.pem \\
  --proxy-client-key-file=/etc/kubernetes/client-key.pem \\
  --requestheader-allowed-names= \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --log-dir /var/log/kubernetes/ \\
  --logtostderr=false \\
  --tls-cert-file /etc/kubernetes/apiserver.pem \\
  --tls-private-key-file /etc/kubernetes/apiserver-key.pem \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl restart kube-apiserver.service
# 开启聚合层,使用证书
--requestheader-client-ca-file /etc/kubernetes/ca.pem \\ # 已配置
--proxy-client-cert-file=/etc/kubernetes/client.pem \\
--proxy-client-key-file=/etc/kubernetes/client-key.pem \\
--requestheader-allowed-names= \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\

注:如果 --requestheader-allowed-names 不为空,则--proxy-client-cert-file 证书的 CN 必须位于 allowed-names 中,默认为 aggregator

  如果 kube-apiserver 主机没有运行 kube-proxy,则还需要添加 --enable-aggregator-routing=true 参数。

注意:kube-apiserver不开启聚合层会报错:

I0109 05:55:43.708300       1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
Error: cluster doesn't provide requestheader-client-ca-file
  1. 每个节点修改并检查kubelet.service的以下两项,否则无法正常获取节点或pod的资源使用情况:
  • 删除--read-only-port=0
  • 添加--authentication-token-webhook=true
cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service multi-user.target
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \\
  --anonymous-auth=false \\
  --cgroup-driver systemd \\
  --cluster-dns 10.254.230.254 \\
  --cluster-domain cluster.local \\
  --runtime-cgroups=/systemd/system.slice \\
  --kubelet-cgroups=/systemd/system.slice \\
  --fail-swap-on=false \\
  --client-ca-file /etc/kubernetes/ca.pem \\
  --tls-cert-file /etc/kubernetes/kubelet.pem \\
  --tls-private-key-file /etc/kubernetes/kubelet-key.pem \\
  --hostname-override 10.0.0.12 \\
  --image-gc-high-threshold 90 \\
  --image-gc-low-threshold 70 \\
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig \\
  --authentication-token-webhook=true \\
  --log-dir /var/log/kubernetes/ \\
  --pod-infra-container-image t29617342/pause-amd64:3.0 \\
  --logtostderr=false \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl restart kubelet.service
  1. 重新部署(生成自签发证书)
cd /root/k8s_yaml/metrics/
kubectl delete -f .
kubectl create -f .
  1. 查看资源监控
[root@k8s-master metrics]# kubectl top nodes
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
10.0.0.11   99m          9%     644Mi           73%       
10.0.0.12   56m          5%     1294Mi          68%       
10.0.0.13   44m          4%     622Mi           33%
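也可以查看 pod 级别的资源使用(示例):
kubectl top pods --all-namespaces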

动态存储

搭建NFS提供静态存储

  1. 所有节点安装nfs-utils
yum -y install nfs-utils
  1. master节点部署nfs服务
mkdir -p /data/tomcat-db
cat > /etc/exports <<EOF
/data    10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
systemctl start nfs
  1. 所有node节点检查挂载
showmount -e 10.0.0.11
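可选:在任一node节点临时挂载,验证NFS可读写(示例):
mount -t nfs 10.0.0.11:/data /mnt
touch /mnt/test && rm -f /mnt/test
umount /mnt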

配置动态存储

创建PVC时,系统自动创建PV

1. 准备存储类SC资源及其依赖的Deployment和RBAC的yaml文件

mkdir /root/k8s_yaml/storageclass/ && cd /root/k8s_yaml/storageclass/
# 实现自动创建PV功能,提供存储类SC
cat > /root/k8s_yaml/storageclass/nfs-client.yaml <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.11
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.11
            path: /data
EOF
# RBAC
cat > /root/k8s_yaml/storageclass/nfs-client-rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
EOF
# 创建SC资源,基于nfs-client-provisioner
cat > /root/k8s_yaml/storageclass/nfs-client-class.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs
EOF
  1. 创建资源(准备镜像:quay.io/external_storage/nfs-client-provisioner:latest)
kubectl create -f .
  1. 创建pvc资源:yaml文件增加属性annotations(可以设为默认属性)
cat > /root/k8s_yaml/storageclass/test_pvc1.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc1
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
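创建并验证(pvc 应变为 Bound、PV 被自动创建;命令与后文 sts 一节相同):
kubectl create -f test_pvc1.yaml
kubectl get pvc
kubectl get pv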

Jenkins对接k8s

Jenkins通常部署在物理机上(需要经常修改),而k8s现在有了身份认证,对接有两种方案:

  • 方案一:Jenkins安装k8s身份认证插件
  • 方案二:远程控制k8s:同版本kubectl,指定kubelet客户端认证凭据
kubectl --kubeconfig='kubelet.kubeconfig' get nodes

kubeadm的凭据位于/etc/kubernetes/admin.conf


kubeadm部署k8s集群

官方文档

环境准备

| 主机 | IP | 配置 | 软件 |
| --- | --- | --- | --- |
| k8s-adm-master | 10.0.0.15 | 2核2G | docker-ce,kubelet,kubeadm,kubectl |
| k8s-adm-node1 | 10.0.0.16 | 2核2G | docker-ce,kubelet,kubeadm,kubectl |
  • 关闭selinuxfirewalldNetworkManagerpostfix(非必须)

  • 修改IP地址、主机名

hostnamectl set-hostname 主机名
sed -i 's/200/IP/g' /etc/sysconfig/network-scripts/ifcfg-eth0
  • 添加hosts解析
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.15 k8s-adm-master
10.0.0.16 k8s-adm-node1
EOF
  • 修改内核参数,关闭swap分区
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
swapoff -a
sed -i 's%/dev/mapper/centos-swap%#&%g' /etc/fstab

安装docker-ce

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce-18.09.7 -y
systemctl enable docker.service
systemctl start docker.service
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker.service
docker info

安装kubeadm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install kubelet-1.15.4-0 kubeadm-1.15.4-0 kubectl-1.15.4-0 -y
systemctl enable kubelet.service
systemctl start kubelet.service

使用kubeadm初始化k8s集群

  1. 选择一个控制节点(k8s-adm-master),初始化一个k8s集群:
kubeadm init --kubernetes-version=v1.15.4 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.254.0.0/16
  1. 等待镜像下载,可以使用docker images查看下载进度。
  2. Your Kubernetes control-plane has initialized successfully!
  3. 执行提示命令1:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  1. 执行提示命令2:node节点加入k8s集群
kubeadm join 10.0.0.15:6443 --token uwelrl.g25p8ye1q9m2sfk7 \
    --discovery-token-ca-cert-hash sha256:e598a2895a53fded82d808caf9b9fd65a04ff59a5b773696d8ceb799cac93c5e

默认 token 24H过期,需要重新生成

kubeadm token create --print-join-command

默认 CA 证书 10 年过期(kubeadm 签发的其他组件证书默认 1 年),查看:

cfssl-certinfo -cert /etc/kubernetes/pki/ca.crt
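1.15 起也可以用 kubeadm 自带命令查看各证书到期时间(示例):
kubeadm alpha certs check-expiration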
  1. kubectl命令行TAB键补全:
echo "source <(kubectl completion bash)" >> ~/.bashrc

master节点配置flannel网络

  1. 准备yaml文件
cat <<EOF >> /etc/hosts
199.232.4.133 raw.githubusercontent.com
EOF
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  1. 创建资源
kubectl create -f kube-flannel.yml
  1. 查看资源
kubectl get all -n kube-system
kubectl get nodes

metric-server

metrics-server Github 1.15

  1. 准备yaml文件,使用国内镜像地址(2个),修改一些其他参数

  2. 创建资源(准备镜像:registry.aliyuncs.com/google_containers/addon-resizer:1.8.5和registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3)

kubectl create -f .
  1. 查看资源监控
kubectl top nodes

导出所有镜像

docker save `docker images|awk 'NR>1{print $1":"$2}'|xargs -n 50` -o docker_k8s_kubeadm.tar.gz

弹性伸缩

  1. 创建业务资源
kubectl create -f /root/k8s_yaml/deploy/k8s_deploy2.yaml
  1. 创建HPA规则
kubectl autoscale deploy nginx --max=6 --min=1 --cpu-percent=5
  1. 查看pod
kubectl get pod
  1. 创建service和ingress资源,部署dashboard服务,ab压力测试弹性伸缩。
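ab 压测示意(假设已安装 httpd-tools;压测地址以实际创建的 Service/Ingress 入口为准,此处 URL 仅为示例):
yum install httpd-tools -y
ab -n 100000 -c 100 http://10.0.0.16:30080/
# 另开终端观察扩缩容
kubectl get hpa -w
kubectl get pod -w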

StatefulSet 资源

StatefulSet (PetSets):宠物应用,有状态的应用,有数据的应用,pod名称固定(有序 01 02 03)。

  • 适用于每个Pod中有自己的编号,需要互相访问,以及持久存储区分。
  • 例如数据库应用,redis,es集群,mysql集群。

StatefulSet 用来管理一组 Pod 的部署和扩缩,并且能为这些 Pod 提供序号和唯一性保证。

StatefulSet 为它的每个 Pod 维护了一个固定的 ID。这些 Pod 是基于相同的声明来创建的,但是不能相互替换:无论怎么调度,每个 Pod 都有一个永久不变的 ID。

参考文档


StatefulSets 对于需要满足以下一个或多个需求的应用程序很有价值:

  • 稳定的、唯一的网络标识符。$(StatefulSet 名称)-$(序号)
  • 稳定的、持久的存储。
  • 有序的、优雅的部署和缩放。
  • 有序的、自动的滚动更新。

使用限制

  • 给定 Pod 的存储必须由 PersistentVolume 驱动基于所请求的 storage class 来提供,或者由管理员预先提供。
  • 删除或者收缩 StatefulSet 并不会删除它关联的存储卷。保证数据安全。
  • StatefulSet 当前需要无头服务(不分配 ClusterIP的 svc 资源)来负责 Pod 的网络标识。需要预先创建此服务。
  • 有序和优雅的终止 StatefulSet 中的 Pod ,在删除前将 StatefulSet 缩放为 0。
  • 默认 Pod 管理策略(OrderedReady) 使用滚动更新,可能进入损坏状态,需要手工修复。

  1. 搭建NFS提供静态存储
  2. 配置动态存储
mkdir -p /root/k8s_yaml/sts/ && cd /root/k8s_yaml/sts/
# 实现自动创建PV功能,提供存储类SC
cat > /root/k8s_yaml/sts/nfs-client.yaml <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.15
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.15
            path: /data
EOF
# RBAC
cat > /root/k8s_yaml/sts/nfs-client-rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
EOF
# 创建SC资源,基于nfs-client-provisioner,设为默认SC
cat > /root/k8s_yaml/sts/nfs-client-class.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs
EOF

给sc资源,命令行打默认补丁:

kubectl patch storageclass course-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  1. 创建资源(准备镜像:quay.io/external_storage/nfs-client-provisioner:latest)
kubectl create -f .
  1. 创建pvc资源yaml文件
cat > /root/k8s_yaml/sts/test_pvc1.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
  1. 创建pvc资源:测试动态存储
kubectl create -f test_pvc1.yaml
  1. 查看资源:验证动态存储
kubectl get pvc
kubectl get pv
  1. 查看sts解释
kubectl explain sts.spec.volumeClaimTemplates
kubectl explain sts.spec.volumeClaimTemplates.spec
kubectl explain sts.spec.selector.matchLabels
  1. 创建sts及其依赖svc资源yaml文件
# 创建无头service:不分配 ClusterIP
cat > /root/k8s_yaml/sts/sts_svc.yaml <<EOF
kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
EOF
cat > /root/k8s_yaml/sts/sts.yaml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: nginx
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  volumeClaimTemplates:
  - metadata:
      name: html
    spec:
      resources:
        requests:
          storage: 5Gi
      accessModes: 
        - ReadWriteOnce
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
EOF
  1. 创建svc和sts资源
kubectl create -f sts_svc.yaml
kubectl create -f sts.yaml
  1. 查看资源:pod是有序的,对应的pvc也是有序的,但自动创建的pv名称是无序的
kubectl get pod
kubectl get pv
kubectl get pvc
  1. 直接使用域名访问容器:容器名不变,即域名不变
ping nginx-0.nginx.default.svc.cluster.local
  1. 查看DNS地址
[root@k8s-adm-master sts]# kubectl get pod -n kube-system -o wide | grep coredns
coredns-bccdc95cf-9sc5f                  1/1     Running   2          20h   10.244.0.6    k8s-adm-master   <none>           <none>
coredns-bccdc95cf-k298p                  1/1     Running   2          20h   10.244.0.7    k8s-adm-master   <none>           <none>
  1. 解析域名
yum install bind-utils -y
dig @10.244.0.6 nginx-0.nginx.default.svc.cluster.local +short

以本例中的域名 nginx-0.nginx.default.svc.cluster.local 为例:

Pod 的 DNS 子域: $(主机名).$(所属服务的 DNS 域名)

  • 主机名:$(StatefulSet 名称)-$(序号)

  • 所属服务的 DNS 域名: $(服务名称).$(命名空间).svc.$(集群域名)

  • 集群域名: cluster.local

  • 服务名称由 StatefulSet 的 serviceName 域来设定。

| 集群域名 | 服务(名字空间/名字) | StatefulSet(名字空间/名字) | StatefulSet 域名 | Pod DNS | Pod 主机名 |
| --- | --- | --- | --- | --- | --- |
| cluster.local | default/nginx | default/web | nginx.default.svc.cluster.local | web-{0..N-1}.nginx.default.svc.cluster.local | web-{0..N-1} |
| cluster.local | foo/nginx | foo/web | nginx.foo.svc.cluster.local | web-{0..N-1}.nginx.foo.svc.cluster.local | web-{0..N-1} |
| kube.local | foo/nginx | foo/web | nginx.foo.svc.kube.local | web-{0..N-1}.nginx.foo.svc.kube.local | web-{0..N-1} |

Job资源

一次性任务,例如:清理es索引。


  1. 创建job资源yaml文件
mkdir -p /root/k8s_yaml/job/ && cd /root/k8s_yaml/job/
cat > /root/k8s_yaml/job/job.yaml <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: nginx
spec:
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
        command: ["sleep","10"]
      restartPolicy: Never
EOF
  1. Create the Job resource
kubectl create -f job.yaml
  1. Check the resources: a pod starts, sleeps for 10 seconds, then finishes with STATUS: Completed
kubectl get job
kubectl get pod
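
The Job above runs a single pod to completion. batch/v1 Jobs also support retry and completion counts; a minimal sketch with illustrative values (not part of the original notes):

apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo            # hypothetical name, for illustration only
spec:
  completions: 3            # require 3 successful pod runs in total
  parallelism: 1            # run them one at a time
  backoffLimit: 2           # retry a failed pod at most twice
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        command: ["sleep","10"]
      restartPolicy: Never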

CronJob resources

Scheduled, recurring tasks


  1. Create the CronJob resource YAML file
cat > /root/k8s_yaml/job/cronjob.yaml <<EOF
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nginx
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          name: myjob
        spec:
          containers:
          - name: nginx
            image: nginx:1.13
            ports:
            - containerPort: 80
            command: ["sleep","10"]
          restartPolicy: Never
EOF
  1. Create the CronJob resource
kubectl create -f cronjob.yaml
  1. Check the resources: a new pod is created every minute; each sleeps 10 seconds and then completes
kubectl get cronjobs
kubectl get pod

Helm package manager

Helm makes deploying applications simpler and more efficient.

Helm charts help us define, install, and upgrade Kubernetes applications.

Official installation docs

Install the helm client

wget https://get.helm.sh/helm-v2.17.0-linux-amd64.tar.gz
tar xf helm-v2.17.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm

Deploy the helm server side (Tiller)

Tiller must be deployed inside the k8s cluster so that it has permission to call the apiserver.

  1. Initialize helm (image to prepare: ghcr.io/helm/tiller:v2.17.0)
helm init
  1. Check the resources and verify
kubectl get pod -n kube-system
helm version

Grant the tiller container permissions

  1. Create the RBAC YAML file
mkdir -p /root/k8s_yaml/helm/ && cd /root/k8s_yaml/helm/
cat <<EOF > /root/k8s_yaml/helm/tiller_rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF
  1. Create the RBAC resources
kubectl create -f .
  1. Inspect the tiller-deploy YAML
kubectl get deploy tiller-deploy -n kube-system -o yaml
  1. Patch tiller-deploy: edit its YAML from the command line
kubectl patch -n kube-system deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
  1. Set up command-line completion
cd ~ && helm completion bash > .helmrc && echo "source ~/.helmrc" >> .bashrc
source ~/.helmrc

Deploy an application

  1. Search for an application
helm search phpmyadmin
  1. Download the chart (template) and install a release
helm install --name oldboy --namespace=oldboy stable/phpmyadmin
[root@k8s-adm-master ~]# helm install --name oldboy --namespace=oldboy stable/phpmyadmin
WARNING: This chart is deprecated
NAME:   oldboy
LAST DEPLOYED: Wed Dec 16 20:19:21 2020
NAMESPACE: oldboy
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME               READY  UP-TO-DATE  AVAILABLE  AGE
oldboy-phpmyadmin  0/1    1           0          0s

==> v1/Pod(related)
NAME                                READY  STATUS             RESTARTS  AGE
oldboy-phpmyadmin-7d65b585fb-r8cp2  0/1    ContainerCreating  0         0s

==> v1/Service
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)  AGE
oldboy-phpmyadmin  ClusterIP  10.254.253.220  <none>       80/TCP   0s


NOTES:
This Helm chart is deprecated

Given the `stable` deprecation timeline (https://github.com/helm/charts#deprecation-timeline), the Bitnami maintained Helm chart is now located at bitnami/charts (https://github.com/bitnami/charts/).

The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc that we've been keeping here these years. Installation instructions are very similar, just adding the _bitnami_ repo and using it during the installation (`bitnami/<chart>` instead of `stable/<chart>`)

```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/<chart>           # Helm 3
$ helm install --name my-release bitnami/<chart>    # Helm 2
```

To update an exisiting _stable_ deployment with a chart hosted in the bitnami repository you can execute

```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm upgrade my-release bitnami/<chart>
```

Issues and PRs related to the chart itself will be redirected to `bitnami/charts` GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue (https://github.com/helm/charts/issues/20969) created as a common place for discussion.

1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace oldboy -l "app=phpmyadmin,release=oldboy" -o jsonpath="{.items[0].metadata.name}")
  echo "phpMyAdmin URL: http://127.0.0.1:8080"
  kubectl port-forward --namespace oldboy svc/oldboy-phpmyadmin 8080:80

2. How to log in

phpMyAdmin has not been configure to point to a specific database. Please provide the db host,
username and password at log in or upgrade the release with a specific database:

$ helm upgrade oldboy stable/phpmyadmin --set db.host=mydb



** Please be patient while the chart is being deployed **
  1. Check the resources
kubectl get all -n oldboy
  1. Upgrade, overriding values on the command line
helm upgrade oldboy stable/phpmyadmin --set db.host=10.0.0.13
  1. The cached tgz package can be extracted to inspect the chart
[root@k8s-adm-master charts]# ls /root/.helm/cache/archive/
phpmyadmin-4.3.5.tgz

charts

  1. Create a chart
mkdir -p /root/k8s_yaml/helm/charts && cd /root/k8s_yaml/helm/charts
helm create hello-helm
[root@k8s-adm-master charts]# tree hello-helm
hello-helm
|-- charts                 # subcharts
|-- Chart.yaml             # chart metadata and version
|-- templates              # templates
|   |-- deployment.yaml
|   |-- _helpers.tpl
|   |-- ingress.yaml
|   |-- NOTES.txt           # usage notes
|   |-- serviceaccount.yaml
|   |-- service.yaml
|   `-- tests
|       `-- test-connection.yaml
`-- values.yaml             # variables (default values)
  1. Customize the chart
rm -rf /root/k8s_yaml/helm/charts/hello-helm/templates/*
echo hello! > /root/k8s_yaml/helm/charts/hello-helm/templates/NOTES.txt
cat <<EOF > /root/k8s_yaml/helm/charts/hello-helm/templates/pod.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
EOF
  1. Install the chart
cd /root/k8s_yaml/helm/charts
helm install hello-helm
  1. List the releases
helm list
  1. Check the pod
kubectl get pod
  1. Debug: render the templates only, without deploying
helm install hello-helm --debug --dry-run
  1. Delete a release
helm delete oldboy
  1. Package the chart
helm package hello-helm
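
The custom chart above hard-codes everything in templates/pod.yaml. In a real chart, values.yaml drives the templates through Go templating; a minimal sketch under that assumption (the keys image, tag, and replicas are made up for illustration):

# values.yaml (illustrative keys)
image: nginx
tag: "1.13"
replicas: 1

# templates/pod.yaml referencing those values
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: {{ .Values.replicas }}
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: "{{ .Values.image }}:{{ .Values.tag }}"
        ports:
        - containerPort: 80

helm install hello-helm --debug --dry-run would then show the rendered manifest with these values substituted.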

Configure China mirror repositories

  1. Remove the default repo
helm repo remove stable
  1. Add a China mirror repo (only one repo can be named stable; more can be added under different names). Official options:
helm repo add stable https://burdenbear.github.io/kube-charts-mirror/
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add stable https://mirror.azure.cn/kubernetes/charts/
  1. List the repos
helm repo list
  1. Update the repo index
helm repo update
  1. Search to verify
helm search mysql
  1. Build your own repository

To host your own chart repository, see GitHub for reference; the official recommendation is to serve the chart repo from GitHub Pages.


Changes in Helm 3

Tiller and helm serve removed

The helm server side (Tiller) and the init command are gone in Helm 3.

helm talks to the k8s cluster directly through kubeconfig, much like kubectl.
helm uses the same access rights as the current kubectl context, so helm init is no longer needed.

All you need is to install the helm binary:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

The script just downloads the binary from GitHub, extracts it, moves it to /usr/local/bin/, and adds execute permission.


Predefined repositories removed, Helm Hub added

helm search now distinguishes repo from hub

  • repo: sources you add manually
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm repo add ibmstable https://raw.githubusercontent.com/IBM/charts/master/repo/stable
  • hub: Helm's central hub, the equivalent of Docker Hub; vendors publish their latest versions there, so that is where you find the most up-to-date charts. For packages found via hub you need to open the hub page to get the download location. hub and a repo can be used together:
helm search hub mysql

Values support a JSON Schema validator

When you run helm install, helm upgrade, helm lint, or helm template, the JSON Schema validation runs automatically and fails fast on errors. In effect, the YAML is validated first, and only then are any resources created.

helm pull stable/mysql
tar -zxvf mysql-1.6.2.tgz 
cd mysql 
vim values.yaml 
# change port: 3306 to port: 3306aaa
# install to test: the port format is validated, and validation happens before install, so on error no resources are created at all
helm install mysqlll .
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Service.spec.ports[0].port): invalid type for io.k8s.api.core.v1.ServicePort.port: got "string", expected "integer"

helm 2 vs helm 3 command differences

Reference documentation
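
The referenced document is not reproduced here, but the most commonly hit command changes (per the upstream Helm 3 migration notes) boil down to:

# Helm 2                                    Helm 3
# helm init                                 removed (no Tiller)
# helm install --name myapp stable/mysql    helm install myapp stable/mysql   (release name is positional)
# helm delete myapp --purge                 helm uninstall myapp
# helm fetch stable/mysql                   helm pull stable/mysql
# helm inspect values stable/mysql          helm show values stable/mysql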


The KubeSphere management platform

KubeSphere official site

Official instructions for deploying on Linux

Preparation

| Host | IP | Minimum requirements (per node) |
| --- | --- | --- |
| master | 10.0.0.21 | CPU: 2 cores, RAM: 4 G, disk: 40 G |
| node1 | 10.0.0.22 | CPU: 2 cores, RAM: 4 G, disk: 40 G |
| node2 | 10.0.0.23 | CPU: 2 cores, RAM: 4 G, disk: 40 G |
  • Disable selinux, firewalld, NetworkManager, and postfix (optional)

  • Set the IP address and hostname

hostnamectl set-hostname <hostname>
sed -i 's/200/IP/g' /etc/sysconfig/network-scripts/ifcfg-eth0
  • Configure time sync (ntp) on all nodes
  • On all nodes, make sure sshd/sudo/curl/openssl are available
  • Install docker on all nodes and configure a registry mirror beforehand; it speeds things up.

Download

Install with the KubeKey v1.0.1 tool

kubekey

wget https://github.com/kubesphere/kubekey/releases/download/v1.0.1/kubekey-v1.0.1-linux-amd64.tar.gz
tar xf kubekey-v1.0.1-linux-amd64.tar.gz

Create

  1. Create the config file
./kk create config --with-kubernetes v1.18.6 --with-kubesphere v3.0.0
  1. Edit the sample config file
vim config-sample.yaml
# actual hostname, each node's SSH address, its internal IP, and the SSH user and password
  hosts:
    - {name: node1, address: 10.0.0.22, internalAddress: 10.0.0.22, user: root, password: 1}
# or log in with an SSH key
    - {name: master, address: 10.0.0.21, internalAddress: 10.0.0.21, privateKeyPath: "~/.ssh/id_rsa"}
# actual hostnames
  roleGroups:
    etcd:
    - node1
    master:
    - node1
    worker:
    - node1
    - node2
  1. Create the cluster from the config file
./kk create cluster -f config-sample.yaml
yes

The whole installation may take 10 to 20 minutes, depending on your machine and network environment.

To add more nodes later, edit the config file and run ./kk add nodes -f config-sample.yaml

Finish

  1. stdout when finished
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://10.0.0.21:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are ready.
  2. Please modify the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################
  1. Open the KubeSphere web console in a browser
  2. Enable kubectl auto-completion
# Install bash-completion
yum install bash-completion -y

# Source the completion script in your ~/.bashrc file
echo 'source <(kubectl completion bash)' >>~/.bashrc

# Add the completion script to the /etc/bash_completion.d directory
kubectl completion bash >/etc/bash_completion.d/kubectl

Secrets resources

Method 1:

kubectl create secret docker-registry harbor-secret --namespace=default  --docker-username=admin  --docker-password=a123456 --docker-server=blog.oldqiang.com
vi  k8s_sa_harbor.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: docker-image
  namespace: default
imagePullSecrets:
- name: harbor-secret
vi k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  serviceAccount: docker-image
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80

Method 2:


kubectl create secret docker-registry regcred --docker-server=blog.oldqiang.com --docker-username=admin --docker-password=a123456 --docker-email=296917342@qq.com
​
# verify
[root@k8s-master ~]# kubectl get secrets 
NAME                       TYPE                                  DATA   AGE
default-token-vgc4l        kubernetes.io/service-account-token   3      2d19h
regcred                    kubernetes.io/dockerconfigjson        1      114s
​
[root@k8s-master ~]# cat k8s_pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  nodeName: 10.0.0.12
  imagePullSecrets:
    - name: regcred
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80

3.3 ConfigMap resources

vi /opt/81.conf
    server {
        listen       81;
        server_name  localhost;
        root         /html;
        index      index.html index.htm;
        location / {
        }
    }
​
kubectl create configmap 81.conf --from-file=/opt/81.conf
# verify
kubectl get cm
​
vi k8s_deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-config
          configMap:
            name: 81.conf
            items:
              - key: 81.conf
                path: 81.conf
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: nginx-config
            mountPath: /etc/nginx/conf.d
        ports:
        - containerPort: 80
          name: port1
        - containerPort: 81
          name: port2

4: Common k8s services

4.1 Deploy the DNS service


vi coredns.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      nodeName: 10.0.0.13
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        - name: tmp
          mountPath: /tmp
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.230.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
​
# test
yum install bind-utils.x86_64 -y
dig @10.254.230.254 kubernetes.default.svc.cluster.local +short

4.2 Deploy the dashboard service

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
​
vi kubernetes-dashboard.yaml
# change the image address
image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
# change the Service type to NodePort
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443
​
​
kubectl create -f kubernetes-dashboard.yaml
# access https://10.0.0.12:30001 with Firefox
​
vim dashboard_rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
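
After creating the RBAC objects above, the dashboard login token can be read from the secret generated for the admin ServiceAccount, for example:

kubectl create -f dashboard_rbac.yaml
# find the generated secret for the admin ServiceAccount
kubectl -n kube-system get secret | grep admin-token
# print the token and paste it into the dashboard login page
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/admin-token/{print $1}')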

5: k8s network access

5.1 Mapping an external service into k8s

# prepare the database
yum install mariadb-server -y
systemctl start  mariadb
mysql_secure_installation
mysql>grant all on *.* to root@'%' identified by '123456';
​
# delete the old mysql rc and svc
kubectl  delete  rc  mysql
kubectl delete  svc mysql
​
# create the Endpoints and svc
[root@k8s-master yingshe]# cat mysql_endpoint.yaml 
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql
subsets:
- addresses:
  - ip: 10.0.0.13
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
​
[root@k8s-master yingshe]# cat mysql_svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306  
  type: ClusterIP
​
# revisit tomcat/demo in the web UI
# verify
[root@k8s-node2 ~]# mysql -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| HPE_APP            |
| mysql              |
| performance_schema |
+--------------------+
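
To confirm the Service is really backed by the manually created Endpoints object (there is no selector to match), a quick check:

kubectl describe svc mysql | grep -i endpoints
# should list 10.0.0.13:3306, matching mysql_endpoint.yaml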

5.2 kube-proxy in ipvs mode

yum install conntrack-tools -y
yum install ipvsadm.x86_64 -y
​
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
  --cluster-cidr 172.18.0.0/16 \
  --hostname-override 10.0.0.12 \
  --proxy-mode ipvs \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

systemctl daemon-reload 
systemctl restart kube-proxy.service 
ipvsadm -L -n
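
Besides ipvsadm, kube-proxy reports its active mode on the metrics port; a quick check, assuming the default metrics port 10249:

curl 127.0.0.1:10249/proxyMode
# expected output: ipvs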

5.3 ingress

6: k8s autoscaling

Autoscaling: in this setup the HPA controller uses heapster instead of the metrics API, enabled by the kube-controller-manager flag below:

--horizontal-pod-autoscaler-use-rest-clients=false
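
With that flag set on kube-controller-manager (and heapster providing metrics), an HPA can be created imperatively; a minimal example against the nginx Deployment used elsewhere in these notes:

kubectl autoscale deployment nginx --min=1 --max=5 --cpu-percent=80
kubectl get hpa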

7: Dynamic storage

cat nfs-client.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.13
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.13
            path: /data
vi nfs-client-sa.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
vi nfs-client-class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs
Modify the PVC config file to reference the StorageClass via the annotation below (a complete PVC sketch follows after the snippet):
metadata:
  namespace: tomcat
  name: pvc-01
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
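
Folded into a complete PVC manifest, the snippet above looks roughly like this (the access mode and requested size are illustrative):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: tomcat
  name: pvc-01
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi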

8: Adding compute nodes

Compute node services: docker, kubelet, kube-proxy, flannel

9: Taints and tolerations

Taints are applied to nodes.

Taint effects:
NoSchedule
PreferNoSchedule
NoExecute

# example: add a taint
kubectl taint node 10.0.0.14  node-role.kubernetes.io=master:NoExecute
# check
[root@k8s-master ~]# kubectl describe nodes 10.0.0.14|grep -i taint
Taints:             node-role.kubernetes.io=master:NoExecute

Tolerations

# add under the pod's spec
tolerations:
- key: "node-role.kubernetes.io"
  operator: "Equal"
  value: "master"
  effect: "NoExecute"
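
To remove the example taint later, append a dash to the key=value:effect (or just to the key):

kubectl taint node 10.0.0.14 node-role.kubernetes.io=master:NoExecute-
# or remove every taint with that key
kubectl taint node 10.0.0.14 node-role.kubernetes.io-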