Kubernetes RBAC Authorization

Kubernetes applies careful controls to authentication, authorization, and access control across the whole system. For a Kubernetes cluster, the apiserver is the single entry point for access control. When we deploy applications on top of the cluster we can also reach those applications through ports exposed on the hosts via NodePort, but any request a user makes against the Kubernetes API itself goes through the following sequence: authentication -> authorization -> admission control (Admission Controller).

1. Authentication (Authenticating) verifies the client's identity; in plain terms, something like a username/password check.
2. Authorization (Authorization) controls access to resources. Resources in Kubernetes are ultimately containers, which come down to compute, network, and storage. Once a request has been authenticated and wants to act on a resource (for example, creating a Pod), the authorization check decides, based on the authorization rules, whether that resource (for example, Pods in a given namespace) is accessible to this client.
3. Admission control (Admission Control):
Admission controllers (Admission Controller) live inside the API Server and intercept requests to the API Server before the object is persisted; they run after authentication and authorization have succeeded and are used to mutate or validate the request. Two special controllers, MutatingAdmissionWebhook and ValidatingAdmissionWebhook, act as configurable mutating and validating admission webhooks.
Mutating admission control: modifies the request object.
Validating admission control: validates the request object.
Admission controllers are configured through the API Server's startup flags. A given admission controller may belong to one of the two kinds above, or to both. When a request reaches the API Server, the mutating admission controllers run first, followed by the validating ones.
When a Kubernetes cluster is deployed, a set of admission controllers is enabled by default; without them the cluster is essentially running unprotected, and only cluster administrators should be able to change which admission controllers are enabled.
The following admission controllers are typically enabled by default (newer kube-apiserver versions use the --enable-admission-plugins flag; the older, deprecated name was --admission-control):
--enable-admission-plugins=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
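To check which admission plugins your own API server enables, look at its startup flags. A minimal sketch, assuming a kubeadm-style deployment where the API server runs as a static Pod (the manifest path below is an assumption):

# Show the admission-plugin flag in the static Pod manifest on a control-plane node
grep -- --enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
# List every admission plugin the binary knows about (if kube-apiserver is available on the node)
kube-apiserver -h | grep enable-admission-plugins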
Kubernetes itself follows a microservice-style architecture: every request goes through a single gateway, the kube-apiserver component (which exposes a REST API). There are two kinds of clients in Kubernetes, ordinary users and Pods inside the cluster. Their authentication mechanisms differ slightly, but either way a request must pass through authentication, authorization, and admission, in that order.

 Authentication

1. Several authentication plugins are supported
(1) Token authentication: both sides share a secret; a password/token is created on the server first and the client presents it when logging in. This is a symmetric-key style of authentication. Kubernetes exposes a RESTful API, with all of its services served over HTTP(S), so the credential can only be carried in the HTTP Authorization header; a credential passed this way is usually called a token.

(2) SSL/TLS authentication: for access to Kubernetes, SSL lets the client confirm the server's identity. When we talk to the server, it sends us a certificate; we check whether that certificate was signed by a CA we trust and whether its subject (subj) information matches the host we are actually connecting to. If both hold, we consider the server authenticated. Just as importantly, in Kubernetes the server also authenticates the client: kubectl should carry a certificate of its own, signed by a CA the server trusts. The two sides authenticate each other and then communicate over an encrypted channel; this is SSL (mutual TLS) authentication.
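Because the credential ultimately travels in the HTTP Authorization header, an authenticated request can be reproduced with curl. A minimal sketch, assuming a valid bearer token in $TOKEN and the cluster CA certificate at /etc/kubernetes/pki/ca.crt (both are placeholders; the apiserver address is the one from this cluster's kubeconfig):

TOKEN=...   # any valid bearer token, e.g. a ServiceAccount token
curl --cacert /etc/kubernetes/pki/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://192.168.10.29:6443/api/v1/namespaces/default/pods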

2. Accounts in Kubernetes
kubectl explain pods.spec shows a field named serviceAccountName (the service account name); this is the account a Pod uses when it connects to the apiserver. So there are two kinds of accounts in a Kubernetes cluster: ServiceAccount (service accounts) and User account (user accounts).
User account: an account for an actual person, one that a human can log in with. When a client sends a request to the apiserver, the apiserver must decide whether that client is allowed to make the request; different users have different permissions, and a user is identified by a username.
ServiceAccount: designed so that processes inside a Pod can call the Kubernetes API or other external services; it is a resource type in Kubernetes.
sa account: the account used to log in to the dashboard.
user account: the account used to log in to the machines Kubernetes runs on.

1.ServiceAccount
A service account exists to make it convenient for processes inside a Pod to call the Kubernetes API or other external services. It differs from a User account: a User account is for people, while a service account is for processes in Pods that call the Kubernetes API. A User account is not tied to any namespace, while a service account is limited to the namespace it belongs to. Every namespace automatically gets a default service account. With the ServiceAccount admission controller enabled:
1) every Pod has spec.serviceAccountName set to default automatically after creation (unless another ServiceAccount is specified);
2) the service account a Pod references is checked to exist; otherwise the Pod is rejected;

When a Pod is created without specifying a serviceaccount, the system automatically assigns it the default service account from the same namespace. This is the account the Pod uses to talk to the apiserver.
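To run a Pod under a specific ServiceAccount instead of default, set spec.serviceAccountName. A minimal sketch (the Pod name and image are placeholders; "test" is the ServiceAccount created below):

apiVersion: v1
kind: Pod
metadata:
  name: sa-demo
spec:
  serviceAccountName: test        # must already exist in this namespace
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]

The token of that ServiceAccount is mounted into the container under /var/run/secrets/kubernetes.io/serviceaccount, which is what in-cluster clients use to authenticate to the apiserver.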
View the existing ServiceAccounts
[root@master-1 ~]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         111d
nfs-client-provisioner   1         12d
nfs-provisioner          1         48d

 Create a ServiceAccount

[root@master-1 rabc]# kubectl create serviceaccount  test
serviceaccount/test created
[root@master-1 rabc]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         111d
nfs-client-provisioner   1         12d
nfs-provisioner          1         48d
test                     1         9s

 View the ServiceAccount's details

[root@master-1 rabc]# kubectl describe sa test
Name:                test
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   test-token-pwdqp
Tokens:              test-token-pwdqp
Events:              <none>

 View the secrets

[root@master-1 rabc]# kubectl get secrets
NAME                                 TYPE                                  DATA   AGE
default-token-w7bhz                  kubernetes.io/service-account-token   3      111d
nfs-client-provisioner-token-vvvwb   kubernetes.io/service-account-token   3      12d
nfs-provisioner-token-g8k2p          kubernetes.io/service-account-token   3      48d
test-token-pwdqp                     kubernetes.io/service-account-token   3      3m46s
 
[root@master-1 rabc]# kubectl describe secrets test-token-pwdqp
Name:         test-token-pwdqp
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: test
              kubernetes.io/service-account.uid: 6977a619-4375-4e18-89bd-210d2c76c24c

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IngtVzlvZllDTVdQanhEY2t2YnFQVm9zM2dFR3BrWDVveDZoTDFldjZwVUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InRlc3QtdG9rZW4tcHdkcXAiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoidGVzdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY5NzdhNjE5LTQzNzUtNGUxOC04OWJkLTIxMGQyYzc2YzI0YyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnRlc3QifQ.pBLHN1ipoBII6DTtE3bGiPKLM0fUDKXMmfD_fvd5vO_PRyligHRCuMi7Sa_Xe66gBwFlPfgklu6PvYreZ2jDP1adtgrK1p3piMzL2aWoSROzWkrbRlBdv1NTKB9QR_xfw0FogEaGsPBX0sqOH--3bNa2uXzSYB5FrqYRAjv6RnZYjoXneboJcLpfiY5FigG8-X3VjlW4A0FKq4e1sn3LOwj-6Vh4zy0UM-HJJU36Y3D7VzCCSV4SzgqpNtsE_X2FrF4vRGT39Z4nkmK-3EW0BAId7ys7OUj388-sCfdmtgNezYR-taFhbz32baJihO3-EYdQE5w3UpqGKZvXWkyJog
In a Kubernetes cluster, every user's access to resources must be authenticated by the apiserver before it is allowed. Under this mechanism, a resource can be accessed with a token directly, or the credentials can be stored and used through a configuration file (kubeconfig). The current configuration and credentials can be viewed with kubectl config:

 

[root@master-1 rabc]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.10.29:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
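Alternatively, a ServiceAccount token can be used directly: read it out of the account's secret and pass it to kubectl on the command line. A sketch using the test account created above (the secret name will differ in your cluster, and the CA path assumes a kubeadm layout; the request will authenticate, though this particular account has not yet been granted any RBAC permissions):

TOKEN=$(kubectl get secret test-token-pwdqp -o jsonpath='{.data.token}' | base64 -d)
kubectl --server=https://192.168.10.29:6443 \
        --certificate-authority=/etc/kubernetes/pki/ca.crt \
        --token=$TOKEN get pods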

  

 Delete the ServiceAccount

[root@master-1 rabc]# kubectl delete sa test
serviceaccount "test" deleted
[root@master-1 rabc]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         111d
nfs-client-provisioner   1         12d
nfs-provisioner          1         48d

  

1.2 Authorization
Even after a user passes authentication it has no permissions at all; follow-up authorization is needed for operations on resources such as create, delete, update, and get. Since Kubernetes 1.6 there has been an RBAC (role-based access control) authorization mechanism. Authorization in Kubernetes is plugin-based, and the commonly used authorization plugins are:
1) Node (node authorization)
2) ABAC (attribute-based access control)
3) RBAC (role-based access control)
4) Webhook (access control through HTTP callbacks)
What is RBAC (role-based access control)?
A user (User) is made to play a role (Role); the role holds the permissions, and through it the user acquires those permissions. In the authorization mechanism, permissions only have to be granted to a role, and a user bound to that role then obtains the role's permissions; this is how role-based access control works.
Kubernetes uses RBAC for authorization. The working logic is: the permissions to operate on objects are defined in a role, and the user is then bound to that role, so the user receives the role's permissions. If a Role is bound through a RoleBinding, the permissions only apply to resources in the namespace where the RoleBinding lives. For example, if user1 is bound to role1 through a RoleBinding, it only has permissions on resources in that namespace and none in other namespaces; this is namespace-level authorization.
Kubernetes also provides a cluster-level authorization mechanism: a cluster role (ClusterRole) is defined with permissions over resources in the whole cluster, and User2 is then bound to the ClusterRole through a ClusterRoleBinding, which gives User2 permissions across the cluster. A namespace-scoped example is sketched below.
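A minimal namespace-scoped sketch (the names are placeholders, chosen to match the user1/role1 example above): a Role that may read Pods in the default namespace, and a RoleBinding that grants it to user1:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role1
  namespace: default
rules:
- apiGroups: [""]                # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role1-binding
  namespace: default
subjects:
- kind: User
  name: user1                    # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: role1
  apiGroup: rbac.authorization.k8s.io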


We have now mentioned two kinds of role bindings:
(1) a user bound to a Role through a RoleBinding
(2) a user bound to a ClusterRole through a ClusterRoleBinding
There is one more combination: a RoleBinding that binds a ClusterRole. The benefit of binding a ClusterRole via a RoleBinding is this:
Suppose there are 6 namespaces and in each one a user needs administrator rights over that namespace only. We would have to define 6 Roles and 6 RoleBindings and bind them one by one; with more namespaces we would need even more Roles, which is tedious. So we introduce a ClusterRole instead: define one ClusterRole that carries the required permissions, then bind each user to it with a RoleBinding, and each user ends up with administrator rights limited to their own namespace, as in the sketch below.
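A minimal sketch of this pattern, reusing the built-in admin ClusterRole (the namespace and ServiceAccount names are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ns-admin
  namespace: dev                 # permissions apply only inside this namespace
subjects:
- kind: ServiceAccount
  name: dev-admin
  namespace: dev
roleRef:
  kind: ClusterRole              # a ClusterRole is referenced...
  name: admin                    # ...but the RoleBinding keeps it namespace-scoped
  apiGroup: rbac.authorization.k8s.io

Because the binding is a RoleBinding rather than a ClusterRoleBinding, the ClusterRole's rules only take effect inside the dev namespace; the same ClusterRole can be reused by a RoleBinding in every other namespace.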

 Deploy the Kubernetes Dashboard

[root@master-1 k8s]# kubectl apply -f kubernetes-dashboard.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@master-1 k8s]# ls
kubernetes-dashboard.yaml  rabc         Storageclass
nfs-deployment.yaml        statefulset
[root@master-1 k8s]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   112d
[root@master-1 k8s]# kubectl get ns
NAME                   STATUS   AGE
default                Active   112d
kube-node-lease        Active   112d
kube-public            Active   112d
kube-system            Active   112d
kubernetes-dashboard   Active   35s
[root@master-1 k8s]# kubectl -n kubernetes-dashboard get pod 
NAME                                         READY   STATUS              RESTARTS   AGE
dashboard-metrics-scraper-7445d59dfd-kztqh   0/1     ContainerCreating   0          69s
kubernetes-dashboard-54f5b6dc4b-dtqpg        0/1     ContainerCreating   0          69s
[root@master-1 k8s]# kubectl -n kubernetes-dashboard get pod 
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7445d59dfd-kztqh   1/1     Running   0          6m50s
kubernetes-dashboard-54f5b6dc4b-dtqpg        1/1     Running   0          6m50s

  View cluster roles

[root@master-1 ~]# kubectl get clusterrole
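To see what a particular ClusterRole actually allows, describe it or dump its rules (cluster-admin is used here purely as an example):

kubectl describe clusterrole cluster-admin
kubectl get clusterrole cluster-admin -o yaml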

  View the ServiceAccount in the kubernetes-dashboard namespace

[root@master-1 k8s]# kubectl describe -n kubernetes-dashboard sa kubernetes-dashboard
Name:                kubernetes-dashboard
Namespace:           kubernetes-dashboard
Labels:              k8s-app=kubernetes-dashboard
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   kubernetes-dashboard-token-l4449
Tokens:              kubernetes-dashboard-token-l4449
Events:              <none>

  Create a namespace-level administrator

[root@master-1 k8s]# kubectl create ns lucky   # create the namespace
namespace/lucky created
[root@master-1 k8s]# kubectl create -n lucky sa luckyadmon  # create the ServiceAccount
serviceaccount/luckyadmon created
[root@master-1 k8s]# kubectl create rolebinding luckyadmon-rolebind -n lucky --clusterrole=cluster-admin  --serviceaccount=lucky:luckyadmon  # grant namespace-level admin rights: create a RoleBinding named luckyadmon-rolebind in the lucky namespace that references the cluster-admin ClusterRole; --serviceaccount=lucky:luckyadmon means the ServiceAccount named luckyadmon in the lucky namespace
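The result can be verified by impersonating the ServiceAccount with kubectl auth can-i (impersonation with --as requires the caller to have impersonate rights, which the admin kubeconfig used here has):

kubectl auth can-i create pods -n lucky   --as=system:serviceaccount:lucky:luckyadmon   # expected: yes
kubectl auth can-i create pods -n default --as=system:serviceaccount:lucky:luckyadmon   # expected: no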

  

  View the secret and copy its token

[root@master-1 k8s]# kubectl describe secrets -n lucky luckyadmon-token-lcxcb 
Name:         luckyadmon-token-lcxcb
Namespace:    lucky
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: luckyadmon
              kubernetes.io/service-account.uid: 5ed8b223-b579-41fb-8a95-c7481be79a3d

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  5 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IngtVzlvZllDTVdQanhEY2t2YnFQVm9zM2dFR3BrWDVveDZoTDFldjZwVUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsdW
NreSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJsdWNreWFkbW9uLXRva2VuLWxjeGNiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imx1Y2t5YWRtb24iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1ZWQ4YjIyMy1iNTc5LTQxZmItOGE5NS1jNzQ4MWJlNzlhM2QiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6bHVja3k6bHVja3lhZG1vbiJ9.hv6UsuK4jIgTvYLgp8lp-q0aKbW0osNk8j3mAgaaSC0GEEGOsMTKOq9BAmZiQCMPjSvXuvvzbbSAaxbE4Ej9dnYF7r0Xq0ns9m4aeyrQgxxfc07Uiwqg49NnNaebSscnNIok_4jR2kLl7LvQlOlcHgUfqpswRfgMVA9IXSsbf73LXFHNKiB6-T_jXz3BKeDhrfwI-XGlAuIzjhKQu_DTW4mFBWXVWZc9r9dE9MvotK7s5QMb8Wk-ASSO1Bpyal8VST7kzumC0zrhTmz9xD65PidpIlx9XFBws2pHKROQr3vkdE9SK-sXZuh_XRi-NZizrZS2I3UcazOCkhtpO1iGwA

  Create a cluster-level administrator

[root@master-1 k8s]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard   # a cluster-level binding, so no namespace needs to be specified
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
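Because this is a ClusterRoleBinding, the dashboard ServiceAccount now has rights in every namespace. A quick check, again using impersonation:

kubectl auth can-i '*' '*' --as=system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard   # expected: yes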

  Authorization via a kubeconfig file

[root@master-1 k8s]# cat /root/lucky-admin.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1ERXhPVEF4TWpZeU1Gb1hEVE16TURFe
E5qQXhNall5TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT09xClZjMkV3NWtRbkdIQnlvNjBNY0o5Ni9QMFUvZloxNHZDWXhkUnNXRm9IeHpXcThsZG5YTUF6MUcwSUdTaUtURFgKejRXUlNMcElNQzRDcm9XK0xTamhKblJkNVFMbjdwS3RMZlVVb2J4THVvWUVWc3F2cUdDWFp4cERyUnZQVG50ZgorUFVKVnZ2WjRZYnRKZ205U3dELzYwcXJWclV1Q3l3aXY2SUllTWNGMHE3dHhZbnZnamJjWGxIamxtUzFOcEs2ClhRK0d5dkxQc3BEK3VaUHpiNXdpMlMxS1F2NnhSbWdKdzE0ZXAxa1dtalpjSlcwMVNRQ1JrSWVnNjY4bE12MkYKNzNKS3pKZjM5M1lqVFlUOXVNa3pJekNuRTdvNG9oTzNSZ2hERUtRSXBhaWN3NFY0bW5sTDVRSzBrMHlMSU0vNwpWaHUxanFDdXlYaW9LVTZyV09rQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCNlVFYzBWMWVKVE9IajdhN2xqcDBWbWtWTkRNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBR0dkT1psRHFDU1hFOUs2aTVUbkkxSjNzdTVhK0lSeGxXam5DS2ZSV1J3TklYZStaQgp2NGpTeHpPMlU0UjNzUWhpa09CeW8rV004NGtoa0cvLzZSNk9ONWJ4UHRLMXdYUmdMVklDMlg0WmJFT0JpelFOCndtK2lCZXluQ3RzUkI2OFNwV1RtZTRLcERVVWl5cGFtSHVQcE1lVTg5eHhrOTRudG91RGRUY2tNcTVmRjUyeUUKOGtzalVWbnFDMFE1dEtMa1J6aVk4NmM4aTFNaFgxb1JVOTV2N0xIRTVtNzZFWUF2aTAwYkYvcUs5TkVneXYvMgpxa3FTaDBpbGczYmNrc3BOK3IzdUk1ODdsc0VUOHdlZitOajlqdUt4c0E4MUhpZGFCQU9mb25FV2ZhZXdmcGhuCkVEaDNDeUxWMkhxT1B4YnFZV1ZLZm5MZWxZSlZ2QkNBWDNvdQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: 192.168.10.29:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: lucky
  name: lucky@kubernetes
current-context: ""
kind: Config
preferences: {}
users:
- name: lucky
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IngtVzlvZllDTVdQanhEY2t2YnFQVm9zM2dFR3BrWDVveDZoTDFldjZwVUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsdWN
reSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJsdWNreWFkbW9uLXRva2VuLWxjeGNiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imx1Y2t5YWRtb24iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1ZWQ4YjIyMy1iNTc5LTQxZmItOGE5NS1jNzQ4MWJlNzlhM2QiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6bHVja3k6bHVja3lhZG1vbiJ9.hv6UsuK4jIgTvYLgp8lp-q0aKbW0osNk8j3mAgaaSC0GEEGOsMTKOq9BAmZiQCMPjSvXuvvzbbSAaxbE4Ej9dnYF7r0Xq0ns9m4aeyrQgxxfc07Uiwqg49NnNaebSscnNIok_4jR2kLl7LvQlOlcHgUfqpswRfgMVA9IXSsbf73LXFHNKiB6-T_jXz3BKeDhrfwI-XGlAuIzjhKQu_DTW4mFBWXVWZc9r9dE9MvotK7s5QMb8Wk-ASSO1Bpyal8VST7kzumC0zrhTmz9xD65PidpIlx9XFBws2pHKROQr3vkdE9SK-sXZuh_XRi-NZizrZS2I3UcazOCkhtpO1iGwA
[root@master-1 k8s]# kubectl config use-context lucky@kubernetes --kubeconfig=/root/lucky-admin.conf 
Switched to context "lucky@kubernetes".
[root@master-1 k8s]# cat  /root/lucky-admin.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1ERXhPVEF4TWpZeU1Gb1hEVE16TURFe
E5qQXhNall5TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT09xClZjMkV3NWtRbkdIQnlvNjBNY0o5Ni9QMFUvZloxNHZDWXhkUnNXRm9IeHpXcThsZG5YTUF6MUcwSUdTaUtURFgKejRXUlNMcElNQzRDcm9XK0xTamhKblJkNVFMbjdwS3RMZlVVb2J4THVvWUVWc3F2cUdDWFp4cERyUnZQVG50ZgorUFVKVnZ2WjRZYnRKZ205U3dELzYwcXJWclV1Q3l3aXY2SUllTWNGMHE3dHhZbnZnamJjWGxIamxtUzFOcEs2ClhRK0d5dkxQc3BEK3VaUHpiNXdpMlMxS1F2NnhSbWdKdzE0ZXAxa1dtalpjSlcwMVNRQ1JrSWVnNjY4bE12MkYKNzNKS3pKZjM5M1lqVFlUOXVNa3pJekNuRTdvNG9oTzNSZ2hERUtRSXBhaWN3NFY0bW5sTDVRSzBrMHlMSU0vNwpWaHUxanFDdXlYaW9LVTZyV09rQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCNlVFYzBWMWVKVE9IajdhN2xqcDBWbWtWTkRNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBR0dkT1psRHFDU1hFOUs2aTVUbkkxSjNzdTVhK0lSeGxXam5DS2ZSV1J3TklYZStaQgp2NGpTeHpPMlU0UjNzUWhpa09CeW8rV004NGtoa0cvLzZSNk9ONWJ4UHRLMXdYUmdMVklDMlg0WmJFT0JpelFOCndtK2lCZXluQ3RzUkI2OFNwV1RtZTRLcERVVWl5cGFtSHVQcE1lVTg5eHhrOTRudG91RGRUY2tNcTVmRjUyeUUKOGtzalVWbnFDMFE1dEtMa1J6aVk4NmM4aTFNaFgxb1JVOTV2N0xIRTVtNzZFWUF2aTAwYkYvcUs5TkVneXYvMgpxa3FTaDBpbGczYmNrc3BOK3IzdUk1ODdsc0VUOHdlZitOajlqdUt4c0E4MUhpZGFCQU9mb25FV2ZhZXdmcGhuCkVEaDNDeUxWMkhxT1B4YnFZV1ZLZm5MZWxZSlZ2QkNBWDNvdQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: 192.168.10.29:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: lucky
  name: lucky@kubernetes
current-context: lucky@kubernetes
kind: Config
preferences: {}
users:
- name: lucky
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IngtVzlvZllDTVdQanhEY2t2YnFQVm9zM2dFR3BrWDVveDZoTDFldjZwVUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsdWN
reSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJsdWNreWFkbW9uLXRva2VuLWxjeGNiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imx1Y2t5YWRtb24iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1ZWQ4YjIyMy1iNTc5LTQxZmItOGE5NS1jNzQ4MWJlNzlhM2QiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6bHVja3k6bHVja3lhZG1vbiJ9.hv6UsuK4jIgTvYLgp8lp-q0aKbW0osNk8j3mAgaaSC0GEEGOsMTKOq9BAmZiQCMPjSvXuvvzbbSAaxbE4Ej9dnYF7r0Xq0ns9m4aeyrQgxxfc07Uiwqg49NnNaebSscnNIok_4jR2kLl7LvQlOlcHgUfqpswRfgMVA9IXSsbf73LXFHNKiB6-T_jXz3BKeDhrfwI-XGlAuIzjhKQu_DTW4mFBWXVWZc9r9dE9MvotK7s5QMb8Wk-ASSO1Bpyal8VST7kzumC0zrhTmz9xD65PidpIlx9XFBws2pHKROQr3vkdE9SK-sXZuh_XRi-NZizrZS2I3UcazOCkhtpO1iGwA
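The same kubeconfig could have been generated with kubectl config commands instead of editing the file by hand. A sketch, assuming the CA file sits at the kubeadm default path and the token was extracted from the luckyadmon secret as shown above:

kubectl config set-cluster kubernetes --server=https://192.168.10.29:6443 \
    --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true \
    --kubeconfig=/root/lucky-admin.conf
kubectl config set-credentials lucky --token=$TOKEN --kubeconfig=/root/lucky-admin.conf
kubectl config set-context lucky@kubernetes --cluster=kubernetes --user=lucky \
    --kubeconfig=/root/lucky-admin.conf
kubectl config use-context lucky@kubernetes --kubeconfig=/root/lucky-admin.conf

# The new identity is limited to the lucky namespace:
kubectl --kubeconfig=/root/lucky-admin.conf get pods -n lucky        # allowed
kubectl --kubeconfig=/root/lucky-admin.conf get pods -n kube-system  # forbidden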

  

  
