Manually Deploying a Single-Node Kubernetes Cluster
Overview
As we know, installing Kubernetes is complicated because it involves so many components. To address this, the open-source community and several commercial vendors have published integrated installers for Kubernetes, including the official kubeadm and minikube, the Ubuntu-based conjure-up, Rancher, and others. Personally, however, I am not keen on deploying applications with integrated tooling: it lowers the cost of deployment but raises the cost of maintenance, and it gets in the way of understanding how the software actually works. I would rather automate deployments by writing my own Salt or Ansible configurations, which in turn demands a thorough understanding of each component of the application.
In this document we complete a fully manual, single-node deployment of Kubernetes 1.11.1.
The components to be installed are:
- etcd
- kube-apiserver
- kube-scheduler
- kube-controller-manager
- kubelet
- kube-proxy
- coredns
- dashboard
- heapster + influxdb + grafana
Installation environment
IP | System | Role |
---|---|---|
192.168.198.135 | ubuntu 18.04 | etcd、master、node |
Deployment
Generating certificates
Certificate types:
Certificate | Config files | Purpose |
---|---|---|
ca.pem | ca-csr.json | CA root certificate |
kube-proxy.pem | ca-config.json kube-proxy-csr.json | certificate used by kube-proxy |
admin.pem | admin-csr.json ca-config.json | certificate used by kubectl |
kubernetes.pem | ca-config.json kubernetes-csr.json | certificate used by the apiserver |
Installing the cfssl certificate tooling
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
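As a quick sanity check that the tools are installed and on the PATH:
cfssl version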
Generating the CA certificate
Create a directory to hold the certificates:
mkdir /root/ssl/
Create /root/ssl/ca-config.json with the following content:
{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"kubernetes": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
Field notes:
- ca-config.json: multiple profiles may be defined, each with its own expiry, usages, and other parameters, and a profile is then selected by name when signing a certificate. Only one profile, kubernetes, is defined here; a second profile for etcd would normally accompany it, but etcd runs without certificates in this deployment, so it is omitted.
- signing: indicates that the certificate can be used to sign other certificates; CA=TRUE is set in the generated ca.pem
- server auth: a client may use this CA to verify certificates presented by a server
- client auth: a server may use this CA to verify certificates presented by a client
Create /root/ssl/ca-csr.json with the following content:
{
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Wuhan",
"ST": "Hubei",
"O": "k8s",
"OU": "System"
}
]
}
Generate the CA certificate (run inside /root/ssl):
cfssl gencert --initca=true ca-csr.json | cfssljson --bare ca
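If the command succeeds, cfssljson --bare ca writes three files, which you can confirm with:
ls /root/ssl/ca*
# expected: ca.csr  ca-key.pem  ca.pem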
Generating the certificate for the Kubernetes master
Create /root/ssl/kubernetes-csr.json with the following content:
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"localhost",
"192.168.198.135"
"10.254.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "k8s",
"OU": "System"
}
]
}
- The hosts field lists the IPs and domain names authorized to use the certificate. Because this certificate will be used by the Kubernetes master, it lists the master node's IP and hostnames, along with 10.254.0.1 (the cluster IP of the kubernetes service) and the in-cluster DNS names of the apiserver.
Generate the kubernetes certificate:
cfssl gencert --ca ca.pem --ca-key ca-key.pem --config ca-config.json --profile kubernetes kubernetes-csr.json | cfssljson --bare kubernetes
Generating the kubectl certificate
Create /root/ssl/admin-csr.json with the following content:
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "system:masters",
"OU": "System"
}
]
}
- kube-apiserver extracts the CN as the client's username, here admin, and extracts O as the user's group, here system:masters
- kube-apiserver later uses RBAC to authorize requests from clients such as kubelet, kube-proxy, and pods
- The apiserver predefines a number of ClusterRoleBindings for RBAC. For example, cluster-admin binds the group system:masters to the ClusterRole cluster-admin, which grants full access to the apiserver, so the admin user becomes the cluster's superuser. This binding can be inspected once the cluster is up, as shown below.
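Once the apiserver is running, the predefined binding can be verified like this (nothing to create; it ships with the cluster):
kubectl describe clusterrolebinding cluster-admin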
Generate the kubectl certificate:
cfssl gencert --ca ca.pem --ca-key ca-key.pem --config ca-config.json --profile kubernetes admin-csr.json | cfssljson --bare admin
Generating the kube-proxy certificate
Create /root/ssl/kube-proxy-csr.json with the following content:
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "k8s",
"OU": "System"
}
]
}
- CN sets the user of this certificate to system:kube-proxy
- kube-apiserver's predefined ClusterRoleBinding binds the user system:kube-proxy to the ClusterRole system:node-proxier, which grants the permissions needed to call the proxy-related kube-apiserver APIs; this binding can likewise be inspected once the cluster is up, as shown below
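Like cluster-admin above, system:node-proxier is bootstrapped by the apiserver itself:
kubectl describe clusterrolebinding system:node-proxier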
Generate the kube-proxy certificate:
cfssl gencert --ca ca.pem --ca-key ca-key.pem --config ca-config.json --profile kubernetes kube-proxy-csr.json | cfssljson --bare kube-proxy
Verify the certificates, taking kubernetes.pem as an example (using the cfssl-certinfo tool installed earlier):
cfssl-certinfo -cert kubernetes.pem
{
"subject": {
"common_name": "kubernetes",
"country": "CN",
"organization": "k8s",
"organizational_unit": "System",
"locality": "Wuhan",
"province": "Hubei",
"names": [
"CN",
"Hubei",
"Wuhan",
"k8s",
"System",
"kubernetes"
]
},
"issuer": {
"country": "CN",
"organization": "k8s",
"organizational_unit": "System",
"locality": "Wuhan",
"province": "Hubei",
"names": [
"CN",
"Hubei",
"Wuhan",
"k8s",
"System"
]
},
"serial_number": "604093852911522162752840982392649683093741969960",
"sans": [
"localhost",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"127.0.0.1",
"192.168.198.135",
"10.254.0.1"
],
"not_before": "2018-08-10T05:15:00Z",
"not_after": "2038-08-05T05:15:00Z",
"sigalg": "SHA256WithRSA",
"authority_key_id": "DA:57:77:4C:1F:35:8E:FE:F9:15:2:7A:25:BB:77:DC:3C:36:8A:84",
"subject_key_id": "C2:6A:A6:75:AA:DC:4F:4A:75:D1:4C:60:B3:DF:56:68:34:A:39:15",
"pem": "-----BEGIN CERTIFICATE-----\nMIIEZzCCA0+gAwIBAgIUadCBVqjv5DTo7Hjm80AKo3XxIigwDQYJKoZIhvcNAQEL\nBQAwTDELMAkGA1UEBhMCQ04xDjAMBgNVBAgTBUh1YmVpMQ4wDAYDVQQHEwVXdWhh\nbjEMMAoGA1UEChMDazhzMQ8wDQYDVQQLEwZTeXN0ZW0wHhcNMTgwODEwMDUxNTAw\nWhcNMzgwODA1MDUxNTAwWjBhMQswCQYDVQQGEwJDTjEOMAwGA1UECBMFSHViZWkx\nDjAMBgNVBAcTBVd1aGFuMQwwCgYDVQQKEwNrOHMxDzANBgNVBAsTBlN5c3RlbTET\nMBEGA1UEAxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC\nggEBALfgHIhOVEinORPv6ijwWeJxxM6uDORM/kWl0nwHpn3TQZF3jOsQVSs3LOuD\n6SlkRWtsYAGwm0SnEhEDefCn5xmbHSW0YmWYoBE2BerlwLqmaS2eRy4vjCHgkreb\nL+K5rRN+ZW5NOsegrCxzT3h1WWBVEmG/HztwQvrGP9mRbyfI1/pwC7iqoeAzYPx6\nuCPReRFpDpwLb8ESFtMLyUWXaXs2j1csTDTadDdigEA9UmabVYfcycw4mGXr0CcV\n+oqBz2sGGZWY52SZ7FlOS5adydplda7Jpz2C4TMu7XAjW6zvRVxup31W4pDCwjFh\n2InQUJU1dcMi5gZhEhKK3flHKDECAwEAAaOCASowggEmMA4GA1UdDwEB/wQEAwIF\noDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAd\nBgNVHQ4EFgQUwmqmdarcT0p10Uxgs99WaDQKORUwHwYDVR0jBBgwFoAU2ld3TB81\njv75FQJ6Jbt33Dw2ioQwgaYGA1UdEQSBnjCBm4IJbG9jYWxob3N0ggprdWJlcm5l\ndGVzghJrdWJlcm5ldGVzLmRlZmF1bHSCFmt1YmVybmV0ZXMuZGVmYXVsdC5zdmOC\nHmt1YmVybmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3RlcoIka3ViZXJuZXRlcy5kZWZh\ndWx0LnN2Yy5jbHVzdGVyLmxvY2FshwR/AAABhwTAqMaHhwQK/gABMA0GCSqGSIb3\nDQEBCwUAA4IBAQAa20xc+o6H9qyADwsP9Exv5xpvfUMtuQwGY2LdAB6c02hOPy2Q\nDSRBPFfD6UrM3psFnNZUqnnnsylQ9Y9ib5dGfqKJAbaN6eltEd994TKS3/+FtvP3\nIfByaT1YYI0RSOAs/37qEHv8aTfLSMDK+41+Ruch2a40K5xd1o8q3rUY9EgM9Vc+\nQ2uHmc1D9+7b/VE1VrbW3u/TNrcV4uVsRJrY40ugD4X170C8xyryaInrXg/70kmS\ntKwEdLr6l6dWb8yZpITADAhoOgRPmok6h37gfe9ef2RcY1Q646prMVOmYOd0jNij\ncyZYDvOd1FKg5JhqlRFtaSS7RHcebys3v/Dx\n-----END CERTIFICATE-----\n"
}
Copy all certificates to the /etc/kubernetes/ssl directory:
mkdir -p /etc/kubernetes/ssl
cp /root/ssl/*.pem /etc/kubernetes/ssl/
Generating tokens and kubeconfig files
In this deployment we enable certificate authentication, token authentication, and HTTP basic authentication at the same time, so the token file, the basic-auth file, and the kubeconfig files have to be generated in advance.
All of the following operations are performed directly in the /etc/kubernetes directory.
Generating the token file
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > bootstrap-token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
Generate the HTTP basic-auth file (each line is password,username,uid):
cat > basic-auth.csv <<EOF
admin,admin,1
EOF
Generating the bootstrap.kubeconfig file used for kubelet authentication
export KUBE_APISERVER="https://192.168.198.135:6443"
# Set the cluster parameters, i.e. how to reach the api-server; the cluster is named kubernetes
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set the client credentials; token authentication is used here
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set the context, which ties the user kubelet-bootstrap to the cluster kubernetes
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
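The result can be inspected with kubectl itself:
kubectl config view --kubeconfig=bootstrap.kubeconfig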
Generating the kube-proxy.kubeconfig file used by kube-proxy
# Set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
# Set the client credentials
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
# Set the context
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Deploying etcd
All of the component binaries were downloaded in advance and copied to /usr/local/bin, so only each service's unit file needs to be configured here.
Create the etcd unit file /etc/systemd/system/etcd.service with the following content:
[Unit]
Description=Etcd
After=network.target
Before=flanneld.service
[Service]
User=root
ExecStart=/usr/local/bin/etcd \
--name etcd1 \
--data-dir /var/lib/etcd \
--advertise-client-urls http://192.168.198.135:2379,http://127.0.0.1:2379 \
--listen-client-urls http://192.168.198.135:2379,http://127.0.0.1:2379
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start etcd:
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
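A quick health check; the exact syntax depends on the etcdctl API version shipped with your etcd build, this sketch assumes the v3 API:
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 endpoint health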
Deploying the master
kube-apiserver
Create the kube-apiserver unit file /etc/systemd/system/kube-apiserver.service with the following content:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,NodeRestriction \
--apiserver-count=3 \
--bind-address=192.168.198.135 \
--insecure-bind-address=127.0.0.1 \
--insecure-port=8080 \
--secure-port=6443 \
--authorization-mode=Node,RBAC \
--runtime-config=rbac.authorization.k8s.io/v1 \
--kubelet-https=true \
--anonymous-auth=false \
--basic-auth-file=/etc/kubernetes/basic-auth.csv \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/bootstrap-token.csv \
--service-cluster-ip-range=10.254.0.0/16 \
--service-node-port-range=20000-40000 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-servers=http://192.168.198.135:2379 \
--etcd-quorum-read=true \
--enable-swagger-ui=true \
--allow-privileged=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--v=2 \
--logtostderr=true
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-apiserver:
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
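Since the insecure port is enabled on 127.0.0.1:8080, the apiserver can be checked locally with:
curl http://127.0.0.1:8080/healthz
# expected output: ok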
kube-controller-manager
Create the kube-controller-manager unit file /etc/systemd/system/kube-controller-manager.service with the following content:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--cluster-name=kubernetes \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=5m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-use-rest-clients=false \
--leader-elect=true \
--v=2 \
--logtostderr=true
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start the kube-controller-manager service:
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
kube-scheduler
Create the kube-scheduler unit file /etc/systemd/system/kube-scheduler.service with the following content:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--leader-elect=true \
--v=2 \
--logtostderr=true
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start kube-scheduler:
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
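With all three master components up, their health can be confirmed in one shot:
kubectl get componentstatuses
# scheduler, controller-manager and etcd-0 should all report Healthy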
Configuring RBAC authorization
# Bind the kubelet-bootstrap user to the system:node-bootstrapper ClusterRole
/usr/local/bin/kubectl create clusterrolebinding kubelet-bootstrap-clusterbinding --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
# Bind the system:nodes group to the system:node ClusterRole
/usr/local/bin/kubectl create clusterrolebinding kubelet-node-clusterbinding --clusterrole=system:node --group=system:nodes
Deploying the node
docker
Installation:
# Step 1: install the required system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Step 2: install the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the repository
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update the package index and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce
Configuration:
Edit /etc/docker/daemon.json to contain:
{
"registry-mirrors": ["http://5dd4061a.m.daocloud.io"]
}
Start docker:
systemctl start docker
systemctl enable docker
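To confirm the registry mirror was picked up:
docker info | grep -A1 'Registry Mirrors'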
kubelet
Create the kubelet unit file /etc/systemd/system/kubelet.service with the following content:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
#WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--address=192.168.198.135 \
--hostname-override=192.168.198.135 \
--cgroup-driver=cgroupfs \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--cluster-dns=10.254.0.100 \
--cluster-domain=cluster.local. \
--hairpin-mode=promiscuous-bridge \
--allow-privileged=true \
--fail-swap-on=false \
--serialize-image-pulls=false \
--max-pods=30 \
--logtostderr=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start kubelet:
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
Approve the kubelet's certificate on the master:
# After the node starts, kubectl get nodes on the master will not show it yet: the node first sends a certificate signing request to the master, and only after the master approves and signs it can the node join the cluster, as follows:
# list pending CSRs
➜ kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-l9d25 2m kubelet-bootstrap Pending
# approve the certificate
➜ kubectl certificate approve csr-l9d25
certificatesigningrequest "csr-l9d25" approved
Now, running kubectl get nodes on the master shows one node:
# list nodes
➜ kubectl get node
NAME STATUS AGE VERSION
192.168.198.135 Ready 3m v1.11.1
kube-proxy
Create the kube-proxy unit file /etc/systemd/system/kube-proxy.service with the following content:
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
#WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--bind-address=192.168.198.135 \
--hostname-override=192.168.198.135 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
--v=2 \
--cluster-cidr=10.254.0.0/16
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-proxy:
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
If kube-proxy starts up fine but service networking does not work when you run applications, check its logs; if it throws the following error:
Failed to delete stale service IP 10.254.0.100 connections, error: error deleting connection tracking state for UDP service IP: 10.254.0.100, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
then install these packages:
apt install conntrack conntrackd nfct
On CentOS 7, install instead:
yum install conntrack-tools
Installing add-ons
Create the /root/k8s-yamls directory; all of the following operations take place in it.
coredns
Create the /root/k8s-yamls/coredns directory:
mkdir -p /root/k8s-yamls/coredns
Fetch the CoreDNS example manifest:
cd /root/k8s-yamls/coredns
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/coredns/coredns.yaml.sed
mv coredns.yaml.sed coredns.yaml
Make three changes in coredns.yaml (a sed sketch follows the list):
- replace $DNS_DOMAIN with cluster.local
- replace $DNS_SERVER_IP with 10.254.0.100
- replace the image k8s.gcr.io/coredns:1.1.3 with registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3
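The same three edits can be applied non-interactively; a small sketch, assuming the placeholders appear exactly as in the upstream template:
sed -i -e 's/\$DNS_DOMAIN/cluster.local/g' \
       -e 's/\$DNS_SERVER_IP/10.254.0.100/g' \
       -e 's#k8s.gcr.io/coredns:1.1.3#registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3#g' \
       coredns.yaml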
Start CoreDNS:
kubectl apply -f coredns.yaml
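To verify name resolution, run a throwaway busybox pod; the image tag here is an assumption, any busybox image with nslookup will do:
kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local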
dashboard
Deploy the dashboard:
# Note: the image address in the manifest needs to be changed to a domestic mirror; we use the Aliyun source
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
Create an authorized user for the dashboard:
# The basic-auth.csv file created earlier defines a user named admin, which is used for HTTP basic authentication when accessing the dashboard; grant it privileges through RBAC:
kubectl create clusterrolebinding login-on-dashboard-with-cluster-admin --clusterrole=cluster-admin --user=admin
clusterrolebinding "login-on-dashboard-with-cluster-admin" created
Create an authorized ServiceAccount for the dashboard:
# Accessing the dashboard also requires token authentication; create a ServiceAccount named admin-user and grant it privileges:
# the ServiceAccount (save as admin-user.yaml):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
# the RBAC binding (save as admin-user.rbac.yaml):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
kubectl create -f admin-user.yaml
kubectl create -f admin-user.rbac.yaml
Retrieve the token of the admin-user ServiceAccount for token authentication:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Note: the Kubernetes API Server provides an --anonymous-auth option that allows anonymous requests to the secure port. Requests that are not rejected by any other authentication method become anonymous requests, with username "system:anonymous" and group "system:unauthenticated", and the option defaults to true. As a result, when visiting the dashboard UI with Chrome, the username/password dialog may never appear, and the subsequent authorization fails. To ensure the basic-auth dialog pops up, --anonymous-auth must be set to false (as done in the kube-apiserver unit file above).
Access the dashboard:
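With the recommended manifest, the dashboard is served over HTTPS behind the apiserver proxy; assuming the standard service name kubernetes-dashboard in the kube-system namespace, it should be reachable at a URL like:
https://192.168.198.135:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
# the browser first prompts for the basic-auth credentials (admin/admin), then the dashboard login page accepts the admin-user token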
heapster + influxdb + grafana
# Note: the image sources in all of these files need to be changed to domestic mirrors as well
wget https://raw.githubusercontent.com/kubernetes/heapster/release-1.5/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/release-1.5/deploy/kube-config/rbac/heapster-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/release-1.5/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/release-1.5/deploy/kube-config/influxdb/influxdb.yaml
kubectl create -f ./
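Afterwards the monitoring stack can be checked with the following (pod names will carry generated suffixes):
kubectl -n kube-system get pods | grep -E 'heapster|influxdb|grafana'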