ResourceQuota and LimitRange: A Practical Guide
Goal: control resource consumption within a specific namespace, in order to achieve fair use of the cluster and keep costs under control.
The features to implement are:
- Limit the compute resource usage of running Pods
- Limit the number of persistent volume claims to control access to storage
- Limit the number of load balancers to control cost
- Prevent abuse of node ports
- Provide default compute resource Requests so that the system can make better scheduling decisions
1. Create a namespace
[root@t71 quota-example]# vim namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: quota-example
[root@t71 quota-example]# kubectl create -f namespace.yaml
2. Set a ResourceQuota that limits object counts
[root@t71 quota-example]# vim object-counts.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    persistentvolumeclaims: "2"   # persistent volume claims
    services.loadbalancers: "2"   # load balancer Services
    services.nodeports: "0"       # NodePort Services
[root@t71 quota-example]# kubectl create -f object-counts.yaml --namespace=quota-example
resourcequota/object-counts created
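With this quota active, the first two PersistentVolumeClaims in the namespace are admitted and any further claim is rejected by the quota system. A minimal sketch of a claim that counts against `persistentvolumeclaims` (the claim name and storage size are illustrative, not from the original setup):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim            # hypothetical name, for illustration only
  namespace: quota-example
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```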
3. Set a ResourceQuota that limits compute resources
[root@t71 quota-example]# vim compute-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
[root@t71 quota-example]# kubectl create -f compute-resources.yaml --namespace=quota-example
resourcequota/compute-resources created
The quota system automatically prevents the namespace from holding more than 4 Pods in a non-terminal state at the same time. Because this quota limits the total CPU and memory Requests and Limits, it also forces every container in the namespace to explicitly define its CPU and memory Requests and Limits, otherwise Pod creation is rejected.
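For example, a Pod admitted under this quota has to carry an explicit resources stanza along these lines (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo            # hypothetical name
  namespace: quota-example
spec:
  containers:
  - name: app
    image: nginx              # illustrative image
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 200m
        memory: 512Mi
```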
4. Configure default Requests and Limits
Use a LimitRange to provide default resource settings for all Pods in the namespace.
[root@t71 quota-example]# vim limits.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - default:
      cpu: 200m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 256Mi
    type: Container
[root@t71 quota-example]# kubectl create -f limits.yaml --namespace=quota-example
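Once the LimitRange is in place, a container submitted without any resources stanza is mutated at admission time, so its effective configuration becomes equivalent to:

```yaml
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 200m
    memory: 512Mi
```

This is what allows Pods with no explicit resource settings to pass the compute-resources quota check from step 3.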
5. Specify ResourceQuota scopes
Suppose we do not want to configure a default compute resource quota for a namespace, but instead want to limit the total number of BestEffort-QoS Pods running in it. For example, part of the cluster's resources can be dedicated to services whose QoS is not BestEffort, while idle resources run BestEffort services; this prevents all of the cluster's resources from being exhausted by a flood of BestEffort Pods. This can be achieved by creating two ResourceQuota objects.
- 5.1 Create a namespace named quota-scopes:
[root@t71 quota-example]# kubectl create namespace quota-scopes
namespace/quota-scopes created
- 5.2 Create a ResourceQuota named best-effort with scope BestEffort:
[root@t71 quota-example]# vim best-effort.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: best-effort
spec:
  hard:
    pods: "10"
  scopes:
  - BestEffort
[root@t71 quota-example]# kubectl create -f best-effort.yaml --namespace=quota-scopes
resourcequota/best-effort created
- 5.3 Then create a ResourceQuota named not-best-effort with scope NotBestEffort:
[root@t71 quota-example]# vim not-best-effort.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: not-best-effort
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
  scopes:
  - NotBestEffort
[root@t71 quota-example]# kubectl create -f not-best-effort.yaml --namespace=quota-scopes
resourcequota/not-best-effort created
- 5.4 Check the created quotas:
[root@t71 quota-example]# kubectl get quota --namespace=quota-scopes
NAME              CREATED AT
best-effort       2019-04-02T11:27:33Z
not-best-effort   2019-04-02T11:31:07Z
[root@t71 quota-example]# kubectl describe quota --namespace=quota-scopes
Name:       best-effort
Namespace:  quota-scopes
Scopes:     BestEffort
 * Matches all pods that do not have resource requirements set. These pods have a best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      0     10

Name:       not-best-effort
Namespace:  quota-scopes
Scopes:     NotBestEffort
 * Matches all pods that have at least one resource requirement set. These pods have a burstable or guaranteed quality of service.
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     2
limits.memory    0     2Gi
pods             0     4
requests.cpu     0     1
requests.memory  0     1Gi
- 5.5 Create two Deployments
- 5.5.1 quota-best-effort.yaml
[root@t71 quota-example]# vim quota-best-effort.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quota-deploy
  namespace: quota-scopes
spec:
  replicas: 8
  selector:
    matchLabels:
      app: centos
  template:
    metadata:
      labels:
        app: centos
    spec:
      containers:
      - name: centos
        image: centos:7.5.1804
        command: ["/usr/sbin/init"]
- 5.5.2 quota-not-best-effort.yaml
[root@t71 quota-example]# vim quota-not-best-effort.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quota-deploy-not
  namespace: quota-scopes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: centos-not
  template:
    metadata:
      labels:
        app: centos-not
    spec:
      containers:
      - name: centos
        image: centos:7.5.1804
        command: ["/usr/sbin/init"]
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 200m
            memory: 512Mi
Create both Deployments (they carry their namespace in metadata, so no --namespace flag is needed), then describe the quotas again:
[root@t71 quota-example]# kubectl create -f quota-best-effort.yaml
[root@t71 quota-example]# kubectl create -f quota-not-best-effort.yaml
[root@t71 quota-example]# kubectl describe quota --namespace=quota-scopes
Name:       best-effort
Namespace:  quota-scopes
Scopes:     BestEffort
 * Matches all pods that do not have resource requirements set. These pods have a best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      8     10

Name:       not-best-effort
Namespace:  quota-scopes
Scopes:     NotBestEffort
 * Matches all pods that have at least one resource requirement set. These pods have a burstable or guaranteed quality of service.
Resource         Used   Hard
--------         ----   ----
limits.cpu       400m   2
limits.memory    1Gi    2Gi
pods             2      4
requests.cpu     200m   1
requests.memory  512Mi  1Gi
ResourceQuota scopes provide a mechanism for partitioning sets of resources. This makes it easier for cluster administrators to monitor and restrict how different classes of objects consume each kind of resource, while offering greater flexibility and convenience in allocating and limiting resources.