Kubernetes Dashboard access control with RBAC

Kubernetes RBAC in practice

Environment preparation

First, install a Kubernetes cluster with kubeadm ([installation package here](https://market.aliyun.com/products/56014009/cmxz022571.html#sku=yuncode1657100000)); it is convenient and easy to use.

The goal of this article: give a user named devuser permission to access only the pods in a specific namespace.


Command-line access with kubectl

Install cfssl

This tool makes generating certificates very convenient. The .pem certificates it produces use the same encoding as .crt certificates, so they can be used interchangeably.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /bin/cfssl-certinfo

Sign the client certificate

Sign the user certificate with the CA certificate and key.

The root CA is already in the /etc/kubernetes/pki directory:

[root@master1 ~]# ls /etc/kubernetes/pki/
 apiserver.crt ca-config.json devuser-csr.json front-proxy-ca.key sa.pub
 apiserver.key ca.crt devuser-key.pem front-proxy-client.crt
 apiserver-kubelet-client.crt ca.key devuser.pem front-proxy-client.key
 apiserver-kubelet-client.key devuser.csr front-proxy-ca.crt sa.key

Note these files: `ca.crt ca.key ca-config.json devuser-csr.json`

Create the ca-config.json and devuser-csr.json files, then use cfssl to sign a client certificate for devuser against the cluster CA (a sketch of these files and the signing command follows below).

To inspect any generated certificate:

cfssl-certinfo -cert kubernetes.pem
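A minimal sketch of what these files and the signing step typically look like; the JSON bodies below are illustrative (they follow the same pattern as the admin-csr.json shown later in this post) rather than copied from the original environment:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "87600h"
      }
    }
  }
}
EOF

cat > devuser-csr.json <<EOF
{
  "CN": "devuser",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" }
  ]
}
EOF

# Sign devuser's client certificate against the kubeadm-generated CA
cfssl gencert -ca=/etc/kubernetes/pki/ca.crt \
  -ca-key=/etc/kubernetes/pki/ca.key \
  -config=ca-config.json \
  -profile=kubernetes devuser-csr.json | cfssljson -bare devuser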

Generate the kubeconfig file

kubeadm has already generated admin.conf; reusing it directly saves us from configuring the cluster parameters by hand.

$ cp /etc/kubernetes/admin.conf devuser.kubeconfig

Set the client credentials:

kubectl config set-credentials devuser \
 --client-certificate=/etc/kubernetes/ssl/devuser.pem \
 --client-key=/etc/kubernetes/ssl/devuser-key.pem \
 --embed-certs=true \
 --kubeconfig=devuser.kubeconfig

Set the context:

kubectl config set-context kubernetes \
 --cluster=kubernetes \
 --user=devuser \
 --namespace=kube-system \
 --kubeconfig=devuser.kubeconfig

Set the default context:

kubectl config use-context kubernetes --kubeconfig=devuser.kubeconfig

After each of the steps above, take a look at how devuser.kubeconfig changes (a trimmed sketch follows this list). Its three most important parts are:

  • cluster: cluster information, including the cluster address and the CA certificate
  • user: user information, the client certificate and private key; the real identity is read from the certificate itself, and the human-readable fields are only for display
  • context: maintains a triple of namespace, cluster, and user
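For orientation, a trimmed sketch of what devuser.kubeconfig ends up containing (values abbreviated; the embedded certificate data is base64-encoded):

apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://<apiserver-ip>:6443
    certificate-authority-data: <base64 CA certificate>
users:
- name: devuser
  user:
    client-certificate-data: <base64 devuser.pem>
    client-key-data: <base64 devuser-key.pem>
contexts:
- name: kubernetes
  context:
    cluster: kubernetes
    user: devuser
    namespace: kube-system
current-context: kubernetes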

Create a Role

Create a role named pod-reader:

[root@master1 ~]# cat pod-reader.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

kubectl create -f pod-reader.yaml

Bind the user

Create a RoleBinding that binds the pod-reader role to devuser:

[root@master1 ~]# cat devuser-role-bind.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: kube-system
subjects:
- kind: User
  name: devuser # target user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader # role to bind
  apiGroup: rbac.authorization.k8s.io

kubectl create -f devuser-role-bind.yaml

Use the new kubeconfig

$ rm .kube/config && cp devuser.kubeconfig .kube/config
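You can also test the new kubeconfig without touching ~/.kube/config by passing it explicitly; a quick check before switching over:

kubectl --kubeconfig=devuser.kubeconfig get pod -n kube-system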

Result: devuser no longer has access to other namespaces, nor to node information:

[root@master1 ~]# kubectl get node
 Error from server (Forbidden): nodes is forbidden: User "devuser" cannot list nodes at the cluster scope

[root@master1 ~]# kubectl get pod -n kube-system
NAME                                       READY     STATUS    RESTARTS   AGE
calico-kube-controllers-55449f8d88-74x8f   1/1       Running   0          8d
calico-node-clpqr                          2/2       Running   0          8d
kube-apiserver-master1                     1/1       Running   2          8d
kube-controller-manager-master1            1/1       Running   1          8d
kube-dns-545bc4bfd4-p6trj                  3/3       Running   0          8d
kube-proxy-tln54                           1/1       Running   0          8d
kube-scheduler-master1                     1/1       Running   1          8d

[root@master1 ~]# kubectl get pod -n default
 Error from server (Forbidden): pods is forbidden: User "devuser" cannot list pods in the namespace "default": role.rbac.authorization.k8s.io "pod-reader" not found

Dashboard access

How service accounts work

Kubernetes has two kinds of users: Users and service accounts. Users are for people; service accounts are for processes, giving a process the permissions it needs.

The dashboard, for example, is a process, so we can create a service account for it and let it access Kubernetes with that identity.
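Under the hood, the service account's credentials are automatically mounted into the pod, which is how a process such as the dashboard authenticates to the apiserver. A quick way to see this (the pod name below is a placeholder):

# List the files the service account token volume mounts into a pod
kubectl -n kube-system exec <dashboard-pod-name> -- ls /var/run/secrets/kubernetes.io/serviceaccount
# Expected entries: ca.crt  namespace  token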

Let's look at how admin privileges are granted to the dashboard:

╰─➤ cat dashboard-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

This binds the kubernetes-dashboard ServiceAccount to the cluster-admin ClusterRole, which is extremely powerful: it holds every permission in the cluster.

[root@master1 ~]# kubectl describe clusterrole cluster-admin -n kube-system
Name:         cluster-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [*]                []              [*]
  *.*        []                 []              [*]

And this service account was created when the dashboard was deployed:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

The deployment then specifies that service account:

      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard

A safer approach

Rather than binding the dashboard's own service account to cluster-admin, create a dedicated admin service account, bind that to cluster-admin, and log in to the dashboard with its token:

[root@master1 ~]# cat admin-token.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
[root@master1 ~]# kubectl get secret -n kube-system|grep admin
 admin-token-7rdhf kubernetes.io/service-account-token 3 14m
[root@master1 ~]# kubectl describe secret admin-token-7rdhf -n kube-system
Name:         admin-token-7rdhf
Namespace:    kube-system
Labels:
Annotations:  kubernetes.io/service-account.name=admin
              kubernetes.io/service-account.uid=affe82d4-d10b-11e7-ad03-00163e01d684

Type: kubernetes.io/service-account-token

Data
 ====
 ca.crt: 1025 bytes
 namespace: 11 bytes
 token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi03cmRoZiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImFmZmU4MmQ0LWQxMGItMTFlNy1hZDAzLTAwMTYzZTAxZDY4NCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.jSfQhFsY7V0ZmfqxM8lM_UUOoUhI86axDSeyVVtldSUY-BeP2Nw4q-ooKGJTBBsrOWvMiQePcQxJTKR1K4EIfnA2FOnVm4IjMa40pr7-oRVY37YnR_1LMalG9vrWmqFiqIsKe9hjkoFDuCaP7UIuv16RsV7hRlL4IToqmJMyJ1xj2qb1oW4P1pdaRr4Pw02XBz9yBpD1fs-lbwheu1UKcEnbHS_0S3zlmAgCrpwDFl2UYOmgUKQVpJhX4wBRRQbwo1Sn4rEFVI1NIa9l_lM7Mf6YEquLHRu3BCZTdu9YfY9pevQz4OfHE0NOvDIqmGRL8Z9kPADAXbljWzcD1m1xCQ

Use this token to log in on the dashboard login page.
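To avoid copying the secret name by hand, a one-liner along these lines prints the token directly (a sketch; the grep pattern assumes the admin service account created above):

kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep admin-token | awk '{print $1}') | grep '^token'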


----------------------------------------------------------------------------------------------------------------------------------------

While installing the Kubernetes cluster by following another tutorial, the dashboard was installed as well; the dashboard-rbac.yaml file used in that deployment looks like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: dashboard
subjects:
- kind: ServiceAccount
  name: dashboard
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

As you can see, it uses the cluster-admin role, which has every permission. If the dashboard is exposed to the public internet, that is dangerous, so we decided to define our own cluster role and give the dashboard's default role fewer permissions.

We use RBAC for access control here.

Note: the dashboard was previously installed without HTTPS, but Kubernetes authentication requires access over HTTPS, as both the Kubernetes website and the dashboard documentation point out:

Login view has been introduced in release 1.7. In order to make it appear in Dashboard you need to enable and access Dashboard over HTTPS.

Upgrade the Dashboard

Delete the previously installed dashboard deployment, service, and so on:
$ sudo kubectl delete deploy kubernetes-dashboard -n kube-system
$ sudo kubectl delete svc kubernetes-dashboard -n kube-system
## If the dashboard still needs a super-admin user, there is no need to delete the ClusterRoleBinding and ServiceAccount
$ sudo kubectl delete ClusterRoleBinding dashboard -n kube-system
$ sudo kubectl delete sa dashboard -n kube-system


Deploy the Dashboard

We still use the official dashboard manifest, kubernetes-dashboard.yaml:

Download the file:
#wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Make a few changes to the manifest:

  • Change the Service type to NodePort.
  • Change the image: images on gcr.io cannot be pulled without a proxy, so use an equivalent image from another registry.

Change:

k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

to:

## This is an image I found in the Aliyun registry; you can find an equivalent in other registries
## For images on Docker Hub, configuring the Aliyun registry mirror also speeds up pulls a lot
registry.cn-hangzhou.aliyuncs.com/kube_containers/kubernetes-dashboard-amd64:v1.8.3
Aliyun registry mirror configuration
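For reference, a minimal sketch of that mirror configuration on a Docker node; the mirror URL is account-specific and shown here as a placeholder:

# /etc/docker/daemon.json
{
  "registry-mirrors": ["https://<your-mirror-id>.mirror.aliyuncs.com"]
}

Then reload and restart Docker (systemctl daemon-reload && systemctl restart docker) for the mirror to take effect.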

Fix the certificates

This step is needed because, on my machines, letting the dashboard generate certificates with --auto-generate-certificates led to NET::ERR_CERT_INVALID when accessing it, and the dashboard logs reported that the certificate could not be found. To solve this, mount a pre-generated dashboard.crt and dashboard.key into the container at /certs and re-apply the deployment.

How to generate the CA certificate is covered in other guides.
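If all you need is a certificate pair for the dashboard to serve, a minimal self-signed sketch with openssl (the subject CN is arbitrary here; substitute your own values, or sign against a real CA instead):

mkdir -p /home/share/certs
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /home/share/certs/dashboard.key \
  -out /home/share/certs/dashboard.crt \
  -subj "/CN=kubernetes-dashboard"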

Mount them:

      volumes:
      - name: kubernetes-dashboard-certs
        hostPath:
          path: /home/share/certs
          type: Directory

Here I mount via hostPath, which requires that the certificate files exist on every node the dashboard may be scheduled onto (a copy loop sketch follows); other mount options are described in the official docs.
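A sketch for copying the certificates to the nodes (node names are placeholders for your own):

for node in node1 node2; do
  ssh $node "mkdir -p /home/share/certs"
  scp /home/share/certs/dashboard.key /home/share/certs/dashboard.crt $node:/home/share/certs/
done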

The modified file, dashboard-deploy.yaml:

# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/kube_containers/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          - --token-ttl=5400 # token expiry time in seconds
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        hostPath:
          path: /home/share/certs
          type: Directory
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30505
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
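Apply the modified manifest, then open the dashboard on the NodePort and log in with the token obtained earlier (the node IP is a placeholder):

kubectl apply -f dashboard-deploy.yaml
kubectl -n kube-system get pod,svc -l k8s-app=kubernetes-dashboard
# then browse to https://<node-ip>:30505 and sign in with the service account token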

Original article: https://blog.csdn.net/gunner2014/article/details/80966671

-----------------------------------------------------

Deploying the kubectl command-line tool

kubectl is the command-line management tool for a Kubernetes cluster; this document describes how to install and configure it.

By default, kubectl reads the kube-apiserver address, certificates, user name, and other settings from the ~/.kube/config file; without this configuration, running kubectl may fail:

$ kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?

These steps only need to be performed once; the generated kubeconfig file is not tied to any particular machine.

Download and distribute the kubectl binary

Download and unpack:

wget https://dl.k8s.io/v1.10.4/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz

Distribute it to every node that will use kubectl:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kubernetes/client/bin/kubectl k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

Create the admin certificate and private key

kubectl communicates with the apiserver over its secure HTTPS port, and the apiserver authenticates and authorizes the certificate presented by the client.

As the cluster's management tool, kubectl needs to be granted the highest privileges, so here we create an admin certificate with full privileges.

Create the certificate signing request:

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "4Paradigm"
    }
  ]
}
EOF
  • O is system:masters: when kube-apiserver receives this certificate, it sets the Group of the request to system:masters;
  • the predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the ClusterRole cluster-admin, which grants access to all APIs (a quick check follows below);
  • this certificate is only used by kubectl as a client certificate, so the hosts field is empty;
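You can verify that predefined binding; the output below is abridged to the relevant fields:

kubectl describe clusterrolebinding cluster-admin
# Role:      ClusterRole/cluster-admin
# Subjects:  Kind=Group  Name=system:masters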

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
ls admin*

Create the kubeconfig file

The kubeconfig is kubectl's configuration file; it contains all the information needed to access the apiserver, such as the apiserver address, the CA certificate, and the client's own certificate.

source /opt/k8s/bin/environment.sh
# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubectl.kubeconfig

# Set the client credentials
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

# Set the context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig
  
# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
  • --certificate-authority: the root certificate used to verify the kube-apiserver certificate;
  • --client-certificate and --client-key: the admin certificate and private key just generated, used when connecting to kube-apiserver;
  • --embed-certs=true: embed the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file (without it, only the certificate file paths are written); a quick sanity check follows.
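To sanity-check the result (kubectl elides the embedded certificate data in this output):

kubectl config view --kubeconfig=kubectl.kubeconfig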

Distribute the kubeconfig file

Distribute it to every node that uses kubectl:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "mkdir -p ~/.kube"
    scp kubectl.kubeconfig k8s@${node_ip}:~/.kube/config
    ssh root@${node_ip} "mkdir -p ~/.kube"
    scp kubectl.kubeconfig root@${node_ip}:~/.kube/config
  done
  • It is saved as the user's ~/.kube/config file.

 
