OPA Gatekeeper: Policy and Governance for Kubernetes

1. System Environment

This article is based on Kubernetes 1.22.2 running on Ubuntu 18.04.

OS version          Docker version            Kubernetes (k8s) cluster version   CPU architecture
Ubuntu 18.04.5 LTS Docker version 20.10.14 v1.22.2 x86_64

Kubernetes cluster architecture: k8scludes1 is the master node; k8scludes2 and k8scludes3 are worker nodes.

Server                       OS version           CPU architecture   Processes                                                                                              Role
k8scludes1/192.168.110.128   Ubuntu 18.04.5 LTS   x86_64             docker,kube-apiserver,etcd,kube-scheduler,kube-controller-manager,kubelet,kube-proxy,coredns,calico   k8s master node
k8scludes2/192.168.110.129   Ubuntu 18.04.5 LTS   x86_64             docker,kubelet,kube-proxy,calico                                                                       k8s worker node
k8scludes3/192.168.110.130   Ubuntu 18.04.5 LTS   x86_64             docker,kubelet,kube-proxy,calico                                                                       k8s worker node

2. Preface

With the spread of cloud computing and container technology, Kubernetes has become the de facto standard for deploying and operating modern applications. However, as clusters grow and permissions become more widely distributed, keeping a cluster secure and compliant becomes a pressing problem. OPA Gatekeeper, the Open Policy Agent add-on for Kubernetes, gives Kubernetes clusters powerful policy governance capabilities.

Using OPA Gatekeeper requires a working Kubernetes cluster. For installing and deploying a Kubernetes (k8s) cluster, see the post "Installing and Deploying a Kubernetes (k8s) Cluster on Ubuntu": https://www.cnblogs.com/renshengdezheli/p/17632858.html.

3. Introduction to OPA Gatekeeper

OPA Gatekeeper is an open-source project that integrates Open Policy Agent (OPA) into Kubernetes to provide fine-grained policy control. OPA is a general-purpose policy engine that evaluates policies and makes decisions; these policies define which resources may be accessed and how. By integrating OPA into Kubernetes, we can combine Kubernetes' resource management with OPA's policy governance to build a cloud-native application environment that is both secure and flexible.

Kubernetes allows policy decisions to be decoupled from the API server through admission controller webhooks, which intercept requests after they have been authenticated and authorized but before they are persisted as objects in Kubernetes. Gatekeeper was created so that users can customize admission control through configuration rather than code, and so that they gain visibility into the state of the cluster as a whole, not just the individual object being evaluated at admission time. OPA Gatekeeper builds on this admission controller mechanism; for details on admission controllers, see the post "Admission Controllers: ResourceQuota and ImagePolicyWebhook".

Docker can also use Open Policy Agent (OPA) for access control; for details, see the post "Using Open Policy Agent (OPA) for Access Control in Docker".

4. Installing OPA Gatekeeper on Kubernetes

Create a directory to hold the files.

root@k8scludes1:~# mkdir gatekeeper

root@k8scludes1:~# cd gatekeeper/

The official OPA Gatekeeper installation documentation is at: https://open-policy-agent.github.io/gatekeeper/website/docs/install/
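
Besides the raw manifest used below, the install page linked above also documents a Helm-based installation. Roughly (a sketch only; check the linked page for the chart repository and options matching your Gatekeeper version):

helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper gatekeeper/gatekeeper --namespace gatekeeper-system --create-namespace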

Download the Gatekeeper installation YAML file.

root@k8scludes1:~/gatekeeper# wget https://raw.githubusercontent.com/open-policy-agent/gatekeeper/release-3.8/deploy/gatekeeper.yaml

root@k8scludes1:~/gatekeeper# ls
gatekeeper.yaml

Check which images are needed to deploy Gatekeeper.

root@k8scludes1:~/gatekeeper# grep image gatekeeper.yaml 
        image: openpolicyagent/gatekeeper:v3.8.1
        imagePullPolicy: Always
        image: openpolicyagent/gatekeeper:v3.8.1
        imagePullPolicy: Always

Pull the gatekeeper:v3.8.1 image in advance on the k8s worker nodes.

root@k8scludes2:~# docker pull openpolicyagent/gatekeeper:v3.8.1

root@k8scludes2:~#  docker images | grep gatekeeper
openpolicyagent/gatekeeper                                        v3.8.1    b14dbfb1e27a   3 weeks ago     70.3MB

root@k8scludes3:~# docker pull openpolicyagent/gatekeeper:v3.8.1

root@k8scludes3:~#  docker images | grep gatekeeper
openpolicyagent/gatekeeper                                        v3.8.1    b14dbfb1e27a   3 weeks ago     70.3MB

Change the image pull policy to IfNotPresent.

root@k8scludes1:~/gatekeeper# vim gatekeeper.yaml 

root@k8scludes1:~/gatekeeper# grep image gatekeeper.yaml 
        image: openpolicyagent/gatekeeper:v3.8.1
        imagePullPolicy: IfNotPresent
        image: openpolicyagent/gatekeeper:v3.8.1
        imagePullPolicy: IfNotPresent
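
The edit above was done in vim; the same change can also be made with a one-line sed, which only touches the imagePullPolicy fields shown in the grep output above:

sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/g' gatekeeper.yaml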

In Kubernetes, everything can be treated as a resource. Since Kubernetes 1.7, the API can be extended through CustomResourceDefinitions (CRDs): a CRD adds a new resource type to the Kubernetes API without modifying the Kubernetes source code or building a custom API server, which greatly improves Kubernetes' extensibility. When you create a new CustomResourceDefinition (CRD), the Kubernetes API server creates a new RESTful resource path for each version you specify, and we can use that API path to create objects of the types we define. A CRD's objects can be namespaced or cluster-wide, as determined by the CRD's scope field. As with built-in objects, deleting a namespace deletes all custom objects in that namespace; the CRD definition itself is cluster-wide.
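
As a minimal illustration of the idea (a hypothetical crontabs.example.com resource, not used anywhere else in this article), a CRD manifest looks roughly like this; the scope field decides whether the custom objects are namespaced or cluster-wide:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: crontabs.example.com
spec:
  group: example.com
  scope: Namespaced          # or Cluster
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string

Once applied, the API server serves the new resource at /apis/example.com/v1/namespaces/<namespace>/crontabs, and objects of kind CronTab can be created with kubectl like any built-in resource.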

List the existing CRDs.

root@k8scludes1:~/gatekeeper# kubectl get crd
NAME                                                  CREATED AT
bgpconfigurations.crd.projectcalico.org               2022-04-16T18:41:13Z
bgppeers.crd.projectcalico.org                        2022-04-16T18:41:13Z
......
ipreservations.crd.projectcalico.org                  2022-04-16T18:41:13Z
issuers.cert-manager.io                               2022-04-24T17:07:46Z
kubecontrollersconfigurations.crd.projectcalico.org   2022-04-16T18:41:13Z
networkpolicies.crd.projectcalico.org                 2022-04-16T18:41:13Z
networksets.crd.projectcalico.org                     2022-04-16T18:41:13Z
orders.acme.cert-manager.io                           2022-04-24T17:07:46Z

There are no Gatekeeper CRDs yet.

root@k8scludes1:~/gatekeeper# kubectl get crd | grep gatekeeper

Deploy Gatekeeper.

root@k8scludes1:~/gatekeeper# kubectl apply -f gatekeeper.yaml 
namespace/gatekeeper-system created
resourcequota/gatekeeper-critical-pods created
customresourcedefinition.apiextensions.k8s.io/assign.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/assignmetadata.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/configs.config.gatekeeper.sh created
......
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/gatekeeper-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-validating-webhook-configuration created

After Gatekeeper is installed, the related CRDs are created.

root@k8scludes1:~/gatekeeper# kubectl get crd | grep gatekeeper
assign.mutations.gatekeeper.sh                        2022-05-30T04:07:59Z
assignmetadata.mutations.gatekeeper.sh                2022-05-30T04:07:59Z
configs.config.gatekeeper.sh                          2022-05-30T04:07:59Z
constraintpodstatuses.status.gatekeeper.sh            2022-05-30T04:07:59Z
constrainttemplatepodstatuses.status.gatekeeper.sh    2022-05-30T04:07:59Z
constrainttemplates.templates.gatekeeper.sh           2022-05-30T04:07:59Z
modifyset.mutations.gatekeeper.sh                     2022-05-30T04:08:00Z
mutatorpodstatuses.status.gatekeeper.sh               2022-05-30T04:08:00Z
providers.externaldata.gatekeeper.sh                  2022-05-30T04:08:00Z
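
To confirm that the Gatekeeper controller and audit pods are running and that the admission webhooks have been registered, the following checks can be used (output omitted here; it will vary by cluster):

kubectl get pod -n gatekeeper-system
kubectl get validatingwebhookconfigurations | grep gatekeeper
kubectl get mutatingwebhookconfigurations | grep gatekeeper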

5. Gatekeeper Rules

5.1 Using Gatekeeper to block pods that use images from certain registries

After Gatekeeper is installed in the cluster, you define a set of rules. gatekeeper-tmp.yaml creates a constraint template whose CRD kind is BlacklistImages: it defines a custom resource type BlacklistImages via CRD, together with the rule "pods cannot be created from images whose name starts with docker-fake.io/ or google-gcr-fake.com/".

root@k8scludes1:~/gatekeeper# vim gatekeeper-tmp.yaml 

root@k8scludes1:~/gatekeeper# cat gatekeeper-tmp.yaml 
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: blacklistimages
spec:
  crd:
    spec:
      names:
        kind: BlacklistImages
  targets:
  - rego: |
      package k8strustedimages

      images {
        image := input.review.object.spec.containers[_].image
        #the image does not start with docker-fake.io/ or google-gcr-fake.com/
        not startswith(image, "docker-fake.io/")
        not startswith(image, "google-gcr-fake.com/")
      }

      #if an image starts with docker-fake.io/ or google-gcr-fake.com/, deny it and report "not trusted image!"
      violation[{"msg": msg}] {
        not images
        msg := "not trusted image!"
      }
    target: admission.k8s.gatekeeper.sh

Create the custom resource type BlacklistImages.

root@k8scludes1:~/gatekeeper# kubectl apply -f gatekeeper-tmp.yaml
constrainttemplate.templates.gatekeeper.sh/blacklistimages created

root@k8scludes1:~/gatekeeper# kubectl get crd | grep blacklistimages
blacklistimages.constraints.gatekeeper.sh             2022-05-30T08:18:21Z

Create an instance of the custom resource type BlacklistImages, specifying which resources it applies to (Pod/Deployment/Job).

root@k8scludes1:~/gatekeeper# vim gatekeeper-blacklist.yaml 

root@k8scludes1:~/gatekeeper# cat gatekeeper-blacklist.yaml 
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: BlacklistImages
metadata:
  name: pod-trusted-images
spec:
  match:
    kinds:
    - apiGroups:
      - ""
      #the custom resource type BlacklistImages applies to Pods
      kinds:
      - Pod

Create the instance; pod-trusted-images is an instance of the custom resource type BlacklistImages.

root@k8scludes1:~/gatekeeper# kubectl get BlacklistImages
No resources found

root@k8scludes1:~/gatekeeper# kubectl apply -f gatekeeper-blacklist.yaml 
blacklistimages.constraints.gatekeeper.sh/pod-trusted-images created

root@k8scludes1:~/gatekeeper# kubectl get BlacklistImages
NAME                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
pod-trusted-images               
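
The empty ENFORCEMENT-ACTION column means the default action, deny, is in effect. A constraint can instead set spec.enforcementAction: dryrun, so that violations are only recorded by the audit process rather than rejected at admission. A sketch of the same constraint in dry-run mode (shown for reference only; the rest of this walkthrough keeps the default deny behaviour):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: BlacklistImages
metadata:
  name: pod-trusted-images
spec:
  enforcementAction: dryrun   # record violations instead of denying requests
  match:
    kinds:
    - apiGroups:
      - ""
      kinds:
      - Pod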

Edit the pod manifest to create a pod from the hub.c.163.com/library/nginx:latest image.

root@k8scludes1:~/gatekeeper# vim pod.yaml 

root@k8scludes1:~/gatekeeper# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podtest
  name: podtest
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - image: hub.c.163.com/library/nginx:latest
    imagePullPolicy: IfNotPresent
    name: podtest
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Because the image name complies with the Gatekeeper rule, the pod is created successfully.

root@k8scludes1:~/gatekeeper# kubectl apply -f pod.yaml 
pod/podtest created

root@k8scludes1:~/gatekeeper# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
podtest   1/1     Running   0          9s

Delete the pod.

root@k8scludes1:~/gatekeeper# kubectl delete pod podtest 
pod "podtest" deleted

On the k8s worker nodes, tag the hub.c.163.com/library/nginx:latest image with new names.

root@k8scludes2:~# docker tag hub.c.163.com/library/nginx:latest docker-fake.io/library/nginx:1.12.5

root@k8scludes2:~# docker tag hub.c.163.com/library/nginx:latest google-gcr-fake.com/library/nginx:1.12.5

root@k8scludes2:~# docker images | grep nginx
docker-fake.io/library/nginx                         1.12.5    46102226f2fd   5 years ago     109MB
google-gcr-fake.com/library/nginx                    1.12.5    46102226f2fd   5 years ago     109MB
hub.c.163.com/library/nginx                          latest    46102226f2fd   5 years ago     109MB

root@k8scludes3:~# docker tag hub.c.163.com/library/nginx:latest docker-fake.io/library/nginx:1.12.5

root@k8scludes3:~# docker tag hub.c.163.com/library/nginx:latest google-gcr-fake.com/library/nginx:1.12.5

root@k8scludes3:~# docker images | grep nginx
docker-fake.io/library/nginx                         1.12.5    46102226f2fd   5 years ago     109MB
google-gcr-fake.com/library/nginx                    1.12.5    46102226f2fd   5 years ago     109MB
hub.c.163.com/library/nginx                          latest    46102226f2fd   5 years ago     109MB

Edit the pod manifest to create a pod from the docker-fake.io/library/nginx:1.12.5 image.

root@k8scludes1:~/gatekeeper# vim pod.yaml 

root@k8scludes1:~/gatekeeper# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podtest
  name: podtest
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - image: docker-fake.io/library/nginx:1.12.5
    imagePullPolicy: IfNotPresent
    name: podtest
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Pod creation fails because the image violates the Gatekeeper rule: [pod-trusted-images] not trusted image!

root@k8scludes1:~/gatekeeper# kubectl apply -f pod.yaml 
Error from server (Forbidden): error when creating "pod.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [pod-trusted-images] not trusted image!
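
A manifest can also be checked against the policy without creating anything by using a server-side dry run, which still passes through the admission webhooks (assuming --dry-run=server is available, as it is on Kubernetes 1.22):

kubectl apply -f pod.yaml --dry-run=server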

This time we use the google-gcr-fake.com/library/nginx:1.12.5 image to create the pod. As you can see, because BlacklistImages defines the rule "pods cannot be created from images prefixed with docker-fake.io/ or google-gcr-fake.com/", pods cannot be created from either docker-fake.io/library/nginx:1.12.5 or google-gcr-fake.com/library/nginx:1.12.5.

root@k8scludes1:~/gatekeeper# vim pod.yaml 

root@k8scludes1:~/gatekeeper# grep image: pod.yaml 
  - image: google-gcr-fake.com/library/nginx:1.12.5

root@k8scludes1:~/gatekeeper# kubectl apply -f pod.yaml 
Error from server (Forbidden): error when creating "pod.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [pod-trusted-images] not trusted image!
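
The registry prefixes above are hard-coded in the Rego. A common refinement, not used in the rest of this walkthrough, is to declare parameters in the template's openAPIV3Schema and pass the prefixes from each constraint. A rough sketch (field names follow the Gatekeeper constraint-template format; adapt before use):

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: blacklistimages
spec:
  crd:
    spec:
      names:
        kind: BlacklistImages
      validation:
        # parameters that each BlacklistImages constraint may set
        openAPIV3Schema:
          properties:
            prefixes:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8strustedimages

      violation[{"msg": msg}] {
        image := input.review.object.spec.containers[_].image
        startswith(image, input.parameters.prefixes[_])
        msg := sprintf("image %v comes from a blacklisted registry", [image])
      }

A BlacklistImages constraint would then pass the list via spec.parameters.prefixes, e.g. ["docker-fake.io/", "google-gcr-fake.com/"].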

5.2 Using Gatekeeper to block creation of LoadBalancer-type Services

The aa-tmp.yaml file creates a constraint template whose CRD kind is LBTypeSvcNotAllowed: it defines a custom resource type LBTypeSvcNotAllowed via CRD, together with the rule "LoadBalancer-type Services may not be created".

root@k8scludes1:~/gatekeeper# vim aa-tmp.yaml 

root@k8scludes1:~/gatekeeper# cat aa-tmp.yaml 
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: lbtypesvcnotallowed
spec:
  crd:
    spec:
      names:
        kind: LBTypeSvcNotAllowed
        #listKind: LBTypeSvcNotAllowedList
        #plural: lbtypesvcnotallowed
        #singular: lbtypesvcnotallowed
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package kubernetes.admission
        violation[{"msg": msg}] {
                    input.review.kind.kind = "Service"
                    input.review.operation = "CREATE"
                    input.review.object.spec.type = "LoadBalancer"
                    msg := "LoadBalancer Services are not permitted"
        }

Create the custom resource type LBTypeSvcNotAllowed.

root@k8scludes1:~/gatekeeper# kubectl apply -f aa-tmp.yaml 
constrainttemplate.templates.gatekeeper.sh/lbtypesvcnotallowed created

root@k8scludes1:~/gatekeeper# kubectl get LBTypeSvcNotAllowed
No resources found

root@k8scludes1:~/gatekeeper# kubectl get crd | grep lbtypesvcnotallowed
lbtypesvcnotallowed.constraints.gatekeeper.sh         2022-05-30T09:14:53Z

Create an instance of the custom resource type LBTypeSvcNotAllowed; bb-contraint.yaml applies LBTypeSvcNotAllowed to Services in the bb namespace.

root@k8scludes1:~/gatekeeper# vim bb-contraint.yaml 

root@k8scludes1:~/gatekeeper# cat bb-contraint.yaml 
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: LBTypeSvcNotAllowed
metadata:
  name: deny-lb-type-svc-dev-ns
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Service"]
    namespaces:
      - "bb"

Create the instance; deny-lb-type-svc-dev-ns is an instance of the custom resource type LBTypeSvcNotAllowed.

root@k8scludes1:~/gatekeeper# kubectl apply -f bb-contraint.yaml 
lbtypesvcnotallowed.constraints.gatekeeper.sh/deny-lb-type-svc-dev-ns created

root@k8scludes1:~/gatekeeper# kubectl get LBTypeSvcNotAllowed
NAME                      ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
deny-lb-type-svc-dev-ns                        0

Create the bb namespace.

root@k8scludes1:~/gatekeeper# kubectl create ns bb
namespace/bb created

root@k8scludes1:~/gatekeeper# kubectl get pod -n bb
No resources found in bb namespace.

Edit the pod manifest to create an nginx pod.

root@k8scludes1:~/gatekeeper# vim pod.yaml 

root@k8scludes1:~/gatekeeper# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podtest
  name: podtest
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - image: hub.c.163.com/library/nginx:latest
    imagePullPolicy: IfNotPresent
    name: podtest
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create a pod in the bb namespace from the hub.c.163.com/library/nginx:latest image.

root@k8scludes1:~/gatekeeper# kubectl apply -f pod.yaml -n bb
pod/podtest created

root@k8scludes1:~/gatekeeper# kubectl get pod -n bb
NAME      READY   STATUS    RESTARTS   AGE
podtest   1/1     Running   0          18s

A ClusterIP-type Service is created successfully. For more on Services, see the post "Kubernetes (k8s) Services: service discovery and service publishing".

root@k8scludes1:~/gatekeeper# kubectl expose --name=bb-nginxsvc pod podtest -n bb --port=80
service/bb-nginxsvc exposed

root@k8scludes1:~/gatekeeper# kubectl get svc -n bb
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
bb-nginxsvc   ClusterIP   10.109.160.157   <none>        80/TCP    7s

Delete the Service.

root@k8scludes1:~/gatekeeper# kubectl delete svc bb-nginxsvc -n bb
service "bb-nginxsvc" deleted

A NodePort-type Service is created successfully.

root@k8scludes1:~/gatekeeper# kubectl expose pod podtest --name=bb-nginxsvc --port=80 -n bb --type=NodePort
service/bb-nginxsvc exposed

root@k8scludes1:~/gatekeeper# kubectl get svc -n bb
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
bb-nginxsvc   NodePort   10.106.79.112   <none>        80:31311/TCP   21s

Delete the Service.

root@k8scludes1:~/gatekeeper# kubectl delete svc bb-nginxsvc -n bb
service "bb-nginxsvc" deleted

Creating a LoadBalancer-type Service fails because it violates the Gatekeeper rule: LoadBalancer Services are not permitted.

In the bb namespace, LoadBalancer-type Services cannot be created while Services of other types can; in all other namespaces, Services of every type can still be created.

root@k8scludes1:~/gatekeeper# kubectl expose pod podtest --name=bb-nginxsvc --port=80 -n bb --type=LoadBalancer
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [deny-lb-type-svc-dev-ns] LoadBalancer Services are not permitted
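
Gatekeeper's audit controller also periodically re-evaluates existing resources against the constraints; recorded violations are reflected in the TOTAL-VIOLATIONS column seen earlier and can be inspected in the constraint's status, for example:

kubectl get lbtypesvcnotallowed deny-lb-type-svc-dev-ns -o yaml
kubectl describe blacklistimages pod-trusted-images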

Delete the pod.

root@k8scludes1:~/gatekeeper# kubectl get all -n bb
NAME          READY   STATUS    RESTARTS   AGE
pod/podtest   1/1     Running   0          11m

root@k8scludes1:~/gatekeeper# kubectl delete pod podtest -n bb
pod "podtest" deleted

6. Summary

OPA Gatekeeper brings powerful policy management to Kubernetes, letting you enforce custom policies that control how resources are accessed and used. With OPA Gatekeeper, you can help keep your Kubernetes cluster secure and compliant.
