Examples of Kubernetes resource yaml files

Pod yaml
 
Node labels: pin a Pod to specific nodes with the node selector, nodeSelector.
Label the node:
[root@k8s-master1 data]# kubectl label node k8s-node1 app=mynode
[root@k8s-master1 data]# cat pod-labes.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web
  name: myapp
  namespace: default
spec:
  nodeSelector:
    app: mynode
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        cpu: 500m
        memory: 1500Mi
      limits:
        cpu: 500m
        memory: 1500Mi
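A quick way to verify placement (assuming the Pod above was applied) is to check the NODE column:
kubectl apply -f pod-labes.yaml
kubectl get pod myapp -o wide   # NODE should show k8s-node1, the node labeled app=mynode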
 
 
nodeAffinity: node affinity is similar to nodeSelector; it constrains which nodes a Pod can be scheduled to based on node labels.
Compared with nodeSelector:
  • Supports richer matching logic, not just exact string equality
  • Scheduling is split into hard and soft policies
    • Hard (required): must be satisfied
    • Soft (preferred): satisfied when possible, but not guaranteed
  • Operators: In, NotIn, Exists, DoesNotExist, Gt, Lt
  • Affinity: In
  • Anti-affinity: NotIn, DoesNotExist (see the sketch below)
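As a minimal anti-affinity sketch (an assumed snippet reusing the app=mynode label from above, not a manifest from this post), NotIn keeps a Pod off matching nodes:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: app
            operator: NotIn # anti-affinity: never schedule onto nodes labeled app=mynode
            values:
            - mynode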
 
 
 
 
 
Policies:
requiredDuringSchedulingIgnoredDuringExecution (hard policy): the node must satisfy the rule.
Example: the Pod must run on a node labeled app=mynode.
[root@k8s-master1 data]# cat pod-labes-nodeAffinity-required.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web-required
  name: myapp-required
  namespace: default
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: app
            operator: In # the operator
            values:
            - mynode
  containers:
  - image: nginx
    name: nginx-required
    resources:
      requests:
        cpu: 500m
        memory: 1500Mi
      limits:
        cpu: 500m
        memory: 1500Mi
 
 
preferredDuringSchedulingIgnoredDuringExecution (soft policy): not mandatory.
Example: prefer nodes labeled group=ai to run this Pod; if no node matches the label, the scheduler assigns one automatically.
[root@k8s-master1 data]# cat pod-labes-nodeAffinity-preferred.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web-preferred
  name: myapp-preferred
  namespace: default
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1 # weight from 1 to 100; the higher the value, the more likely the Pod lands on a matching node
        preference:
          matchExpressions:
          - key: group
            operator: In
            values:
            - ai
  containers:
  - image: nginx
    name: nginx-preferred
    resources:
      requests:
        cpu: 500m
        memory: 1500Mi
      limits:
        cpu: 500m
        memory: 1500Mi
 
Add a role (ROLES) to a node by labeling it
 
Label the master node with the master role and the worker nodes with the node role:
kubectl label node k8s-master1 node-role.kubernetes.io/master=
kubectl label node k8s-node1 node-role.kubernetes.io/node=
kubectl label node k8s-node2 node-role.kubernetes.io/node=
 
To delete a node label, replace the trailing = with a -:
kubectl label node k8s-master1 node-role.kubernetes.io/master-
kubectl label node k8s-node1 node-role.kubernetes.io/node-
kubectl label node k8s-node2 node-role.kubernetes.io/node-
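To verify, list the nodes; the ROLES column reflects the node-role.kubernetes.io/* labels:
kubectl get node   # ROLES shows master/node after labeling, <none> after deletion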
 
 
 
 
Taints: keep Pods from being scheduled onto particular nodes
Use cases:
  • Dedicated nodes
  • Nodes with special hardware
  • Taint-based eviction
 
 
Set a taint:
kubectl taint node nodename key=value:[effect]
Example: [root@k8s-master1 data]# kubectl taint node k8s-node1 gpu=node:NoExecute
Remove a taint:
kubectl taint node nodename key=value:[effect]-
Example: [root@k8s-master1 data]# kubectl taint node k8s-node1 gpu=node:NoExecute-
View taints:
kubectl describe node nodename | grep Taint
Possible values for effect:
NoSchedule: Pods will never be scheduled onto the node
PreferNoSchedule: the scheduler tries to avoid the node
NoExecute: not only blocks new scheduling, but also evicts Pods already running on the node
 
 
 
Tolerations allow Pods to be scheduled onto nodes that have taints. Note: a toleration is permission, not a requirement.
Example: allow this Pod to run on nodes carrying the taint gpu=node:NoExecute
[root@k8s-master1 data]# cat pod-labes-taint.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web-taint
  name: myapp-taint
  namespace: default
spec:
  tolerations:
  - key: gpu # the taint's key, e.g. the gpu taint set earlier
    operator: "Equal" # corresponds to the = used when setting the taint
    value: "node" # the taint's value
    effect: "NoExecute" # the taint's effect
  containers:
  - image: nginx
    name: nginx-taint
    resources:
      requests:
        cpu: 500m
        memory: 1500Mi
      limits:
        cpu: 500m
        memory: 1500Mi
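A variant sketch (assumed, not from the original post): with operator Exists the toleration matches the gpu taint whatever its value, and for NoExecute an optional tolerationSeconds bounds how long the Pod may keep running after the taint appears:
  tolerations:
  - key: gpu
    operator: "Exists" # matches a gpu taint with any value
    effect: "NoExecute"
    tolerationSeconds: 3600 # optional: evict this Pod 3600s after the taint is added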
 
Create a Pod on a node specified by name: nodeName
[root@k8s-master1 data]# cat pod-nodename.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web-nodename
  name: myapp-nodename
  namespace: default
spec:
  nodeName: k8s-node2
  containers:
  - image: nginx
    name: nginx-nodename
    resources:
      requests:
        cpu: 500m
        memory: 1500Mi
      limits:
        cpu: 500m
        memory: 1500Mi
 
 
Deployment yaml
Relationship between Pods and controllers
  • Controllers: objects that manage and run containers on the cluster
  • Associated with Pods via label-selector
  • Through controllers, Pods get application operations: maintenance, scaling, rolling upgrades, etc.
Deployment features and use cases
  • Deploys stateless applications
  • Manages Pods and ReplicaSets
  • Supports rollout, replica configuration, rolling upgrades, and rollback
  • Provides declarative updates, e.g. updating only to a new image
(Use cases: web services, microservices)
 
Deploy an nginx Pod with a Deployment
[root@k8s-master1 data]# cat deployment-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx-deploy
        image: nginx
 
 
 
 
Publish the Pods just created through a Service with the following command:
kubectl expose --name nginx-deployment-service deployment nginx-deployment --port=80 --target-port=80 --type=NodePort
--name nginx-deployment-service   the Service's name
--port 80                         the port the Service exposes
--target-port 80                  the Pod's port
--type NodePort                   NodePort type: each node gets a randomly allocated port that maps to the Service's port 80
 
 
View the Service; you can then access it at any node's IP on port 32302:
[root@k8s-master1 data]# kubectl get svc
NAME                       TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
nginx-deployment-service   NodePort   10.0.0.19    <none>        80:32302/TCP   2m43s
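For example (<node-ip> is a placeholder for any node's IP):
curl http://<node-ip>:32302   # reaches the nginx Pods through the NodePort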
 
Upgrade, rollback, delete
Command-line operations:
Upgrade
kubectl set image deployment/nginx-deployment nginx-deploy=nginx:1.15
kubectl rollout status deployment/nginx-deployment   # check the rollout status
Rollback
kubectl rollout history deployment/nginx-deployment   # list rollout revisions
kubectl rollout undo deployment/nginx-deployment   # roll back to the previous revision by default
kubectl rollout undo deployment/nginx-deployment --revision=1   # roll back to a specific revision
 
 
If you manage the Deployment through its YAML file, just edit the file and run kubectl apply, as in the sketch below.
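For example, assuming the deployment-nginx.yaml shown earlier, change the image field and re-apply:
      containers:
      - name: nginx-deploy
        image: nginx:1.15   # was: nginx
kubectl apply -f deployment-nginx.yaml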
 
Service yaml
Why Services exist
  • Prevent losing track of Pods (service discovery)
  • Define an access policy for a set of Pods (load balancing)

Relationship with Pods
  • Associated via label-selector
  • A Service load-balances the Pods (layer 4, TCP/IP)
 
 
 
Three types
  • ClusterIP: for use inside the cluster
  • NodePort: exposes the application outside the cluster
  • LoadBalancer: exposes the application externally through a public cloud load balancer
ClusterIP type
A standard Service yaml file (the default type is ClusterIP):
[root@k8s-master1 data]# cat server-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
 
 
[root@k8s-master1 data]# kubectl get svc
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
web-service   ClusterIP   10.0.0.189   <none>        80/TCP    113s
 
 
NodePort type yaml
 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-service-nodeport
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 32767 # specify the node port; if omitted, the system allocates one automatically
  selector:
    app: web
  type: NodePort # specify the type; if omitted, the default is ClusterIP
[root@k8s-master1 data]# kubectl get svc web-service-nodeport -o wide
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE     SELECTOR
web-service-nodeport   NodePort   10.0.0.76    <none>        80:32767/TCP   3m10s   app=web
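For completeness, a LoadBalancer sketch (the name web-service-lb is hypothetical; this type needs a cloud provider that provisions the external load balancer):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-service-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web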
 
 
Ingress configuration
Relationship between Pods and Ingress
  • Associated through a Service
  • Load balancing for Pods is implemented by an Ingress Controller
    • Supports layer 4 (TCP/IP) and layer 7 (HTTP)
How Ingress works:
  • Deploy an ingress controller
  • Create ingress rules
There are many ingress controller implementations; here we use the officially maintained NGINX controller:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml
 
vim deploy.yaml
Replace the image:
k8s.gcr.io/ingress-nginx/controller:v0.41.2@sha256:1f4f402b9c14f3ae92b11ada1dfe9893a88f0faeb0b2f4b903e2c67a0c3bf0de
## replace with
registry.cn-beijing.aliyuncs.com/lingshiedu/ingress-nginx-controller:0.41.2
## k8s.gcr.io images cannot be pulled from inside China, so use the Aliyun mirror instead
Expose the host network
# Recommended: expose the host network directly with hostNetwork: true (see the sketch below)
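A sketch of roughly where the setting goes in deploy.yaml, inside the controller Deployment's Pod template (the dnsPolicy line is a commonly paired setting, not from the original post):
spec:
  template:
    spec:
      hostNetwork: true # bind the controller directly to the host's ports 80/443
      dnsPolicy: ClusterFirstWithHostNet # keep in-cluster DNS working with hostNetwork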
 
 
 
kubectl apply -f deploy.yaml
Check the ingress-controller Pod:
kubectl get pod -n ingress-nginx
 
 
Configure an ingress rule
 
[root@k8s-master1 data]#cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myweb-ingress
spec:
  rules:
  - host: example.ctnrs.com # the domain, like server_name in nginx
    http:
      paths:
      - path: / # like location in nginx
        backend: # the backend
          serviceName: web # the Service's name
          servicePort: 80 # the Service's port
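To test the rule (<node-ip> is a placeholder; this assumes the controller runs with hostNetwork as configured above):
echo "<node-ip> example.ctnrs.com" >> /etc/hosts   # or create a real DNS record
curl http://example.ctnrs.com/   # should reach the web Service through the ingress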
 
The following error may occur:
[root@k8s-master1 data]# kubectl apply -f ingress.yaml
Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: context deadline exceeded
 
failurePolicy defines how unrecognized errors and timeout errors from the admission webhook are handled. Allowed values are Ignore and Fail.
Ignore means an error calling the webhook is ignored and the API request is allowed to continue. Fail means an error calling the webhook causes admission to fail and the API request to be rejected. You can dynamically configure which resources are handled by which admission webhooks via a ValidatingWebhookConfiguration or a MutatingWebhookConfiguration. Here, set the policy to Ignore.
 
[root@k8s-master1 data]# kubectl get ValidatingWebhookConfiguration/ingress-nginx-admission -n ingress-nginx
NAME                      WEBHOOKS   AGE
ingress-nginx-admission   1          5h22m
[root@k8s-master1 data]# kubectl edit ValidatingWebhookConfiguration/ingress-nginx-admission -n ingress-nginx
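In the editor, find the webhook entry and change its failurePolicy (a sketch of just the relevant fields; the webhook name comes from the error message above):
webhooks:
- name: validate.nginx.ingress.kubernetes.io
  failurePolicy: Ignore   # was: Fail
After saving, kubectl apply -f ingress.yaml should succeed.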