3. Kubernetes Core Hands-On

7 Kubernetes Core Hands-On

7.1 Resource creation methods

  • Command line
  • YAML

7.2 namespace

Namespaces isolate resources within the cluster.
Command-line approach:

kubectl create ns hello
kubectl delete ns hello

YAML approach:

apiVersion: v1
kind: Namespace
metadata:
  name: hello

Create it with kubectl apply -f xx.yaml

7.3 pod

A group of running containers; the Pod is the smallest deployable unit for applications in Kubernetes.

7.3.1 Creating a container with kubectl run

kubectl run mynginx --image=nginx

7.3.2 Creating a container with YAML

One service per pod:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mynginx
  name: mynginx
  namespace: default
spec:
  containers:
  - image: nginx
    name: mynginx

Multiple services in one pod:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: apps
  name: apps
  namespace: default
spec:
  containers:
  - image: nginx
    name: nginx
  - image: tomcat:8.5.68
    name: tomcat

Common commands

# List pods in the default namespace
kubectl get pod

# Show a pod's details
kubectl describe pod <your-pod-name>

# Delete a pod
kubectl delete pod <your-pod-name>

# View a pod's logs
kubectl logs <your-pod-name>

# k8s assigns every pod an IP; view the assigned IPs
# Any machine in the cluster, and any application, can reach a pod via its assigned IP
kubectl get pod -owide

7.4 deployment

A Deployment manages pods, giving them multiple replicas, self-healing, scaling, and other capabilities.

# Create pods in two different ways, then delete them and compare the behavior
kubectl run mynginx --image=nginx
kubectl create deployment mytomcat --image=tomcat:8.5.68

Conclusion: a pod created with kubectl run has no self-healing, while one created with kubectl create deployment is automatically recreated.

8 Replicas and Scaling

8.1 Multiple replicas

Command-line approach:

kubectl create deployment my-dep --image=nginx --replicas=3

YAML approach:

vi my-dep.yaml

kubectl apply -f my-dep.yaml

YAML content:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx

Common commands

The pod commands from section 7.3 still apply here. Note that deleting an individual pod of a Deployment only triggers its recreation; to remove the workload, delete the Deployment itself:

# Delete by deployment
kubectl delete deploy <your-deployment-name>

8.2 Scaling

Scale up or down:

# Option 1
kubectl scale --replicas=5 deployment/my-dep

# Option 2
kubectl edit deployment my-dep
# modify the replicas field

8.3 Self-healing & Failover

Self-healing: when a pod goes down for some reason, k8s automatically restarts it.
Failover: when the node hosting a pod goes down, the corresponding pods are redeployed on other nodes; when the failed node recovers, its stale pods are deleted.
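Self-healing restarts are triggered when a container exits or keeps failing its health check. As a hedged sketch (not part of the original manifests), a liveness probe could be added to the nginx Pod spec from section 7.3; the path, port, and timings here are illustrative assumptions:

```yaml
spec:
  containers:
  - image: nginx
    name: mynginx
    livenessProbe:            # kubelet restarts the container when this probe keeps failing
      httpGet:
        path: /               # assumed: nginx serves its default page at /
        port: 80
      initialDelaySeconds: 5  # wait before the first probe
      periodSeconds: 10       # probe every 10 seconds
      failureThreshold: 3     # restart after 3 consecutive failures
```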

8.4 Rolling updates

Check the image version currently used by the pods:

kubectl get deployment my-dep -oyaml

Rolling update
Option 1: --record records the rollout revision info

kubectl set image deployment/my-dep nginx=nginx:1.16.1 --record

image
Option 2:
Edit the version tag of the image field:

kubectl edit deployment/my-dep

Process: a new pod is created first; once it is Running, one old pod is deleted. This repeats until all old-version pods have been replaced.
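The pace of this replacement can be tuned through the Deployment's update strategy. A hedged sketch for my-dep (these maxSurge/maxUnavailable values are illustrative, not taken from the earlier manifest):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most one extra pod beyond replicas during the update
      maxUnavailable: 0   # never delete an old pod before its replacement is Running
```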

8.5 Rollback

View rollout history:

kubectl rollout history deployment/my-dep

image
View details of a specific revision:

kubectl rollout history deployment/my-dep --revision=2

Roll back to the previous revision:

kubectl rollout undo deployment/my-dep

Roll back to a specific revision:

kubectl rollout undo deployment/my-dep --to-revision=1

image

8.6 More workload types

Workload types:
Deployment: stateless application deployment (e.g. microservices); provides multiple replicas and related features
StatefulSet: stateful application deployment (e.g. Redis, MySQL); provides stable storage, network identity, etc.
DaemonSet: daemon-set deployment for node-level agents (e.g. log collectors); runs one copy on every node
Job/CronJob: one-off / scheduled tasks (e.g. a garbage-cleanup component); can run at specified times
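As a sketch of the last category, a minimal CronJob that runs a cleanup command every night at 02:00; the name, image, and command are illustrative assumptions (on clusters older than v1.21, the apiVersion would be batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup           # illustrative name
spec:
  schedule: "0 2 * * *"   # standard cron syntax: daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cleanup
            image: busybox
            command: ["sh", "-c", "rm -rf /tmp/*"]   # illustrative cleanup task
          restartPolicy: OnFailure
```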

9 Service

9.1 An abstraction for exposing a group of Pods as a network service

9.1.1 Without an explicit type (the default type is ClusterIP)

Option 1:

# Expose the deployment. --port: the port the service exposes; --target-port=80: the port inside the pod; my-dep: the deployment name; my-dep-service: the service name
kubectl expose deployment my-dep --name=my-dep-service --port=8000 --target-port=80

Option 2:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep-service
spec:
  selector:
    app: my-dep
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80

Select pods by label:
kubectl get pod -l app=my-dep

View the IP and port a service exposes:

# List all services with their exposed IPs and ports
kubectl get service
# View a specific service
kubectl get service <your-service-name>
# Delete a specific service
kubectl delete service <your-service-name>

9.1.2 With an explicit type

9.1.2.1 ClusterIP

Option 1:

# Expose the deployment. --port: the port the service exposes; --target-port=80: the port inside the pod; my-dep: the deployment name; my-dep-service: the service name
kubectl expose deployment my-dep --name=my-dep-service --port=8000 --target-port=80 --type=ClusterIP

Option 2:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep-service
spec:
  selector:
    app: my-dep
  type: ClusterIP
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80

9.1.2.2 NodePort

Option 1:

# Expose the deployment. --port: the port the service exposes; --target-port=80: the port inside the pod; my-dep: the deployment name; my-dep-service: the service name
kubectl expose deployment my-dep --name=my-dep-service --port=8000 --target-port=80 --type=NodePort

Option 2:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep-service
spec:
  selector:
    app: my-dep
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80

NodePort values are allocated from the range 30000-32767.
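Instead of letting k8s pick a random port from that range, the node port can be pinned explicitly. A hedged sketch of the ports section (30080 is an arbitrary illustrative value):

```yaml
spec:
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
    nodePort: 30080   # must lie within 30000-32767, otherwise the apply is rejected
```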

9 Ingress

9.1.1 Installation

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml

# Modify the image
vi deploy.yaml
# 1. Change the value of image from k8s.gcr.io/ingress-nginx/controller:v0.46.0@sha256:52f0058bed0a17ab0fb35628ba97e8d52b5d32299fbc03cc0f6c7b9ff036b61a to the following:
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0
# 2. One level above the modified image field, add hostNetwork: true (see Figure 1 below)

# 3. Find secretName and change ingress-nginx-admission to ingress-nginx-admission-token

# Install
kubectl apply -f deploy.yaml

# Check the installation result
kubectl get pod,svc -n ingress-nginx

Figure 1:
image

If you cannot download the file, or prefer not to edit it yourself, use the manifest below (it already contains the modified image value and the other changes):

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission-token
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
      - v1beta1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1beta1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
kubectl apply -f deploy.yaml

image

9.1.2 Check the status

kubectl get pod -n ingress-nginx

We can see the ingress-nginx-controller-78965bffd5-qp6k8 pod is stuck in ContainerCreating, so a fix is needed.
See also this issue: https://github.com/kubernetes/ingress-nginx/issues/5932
image

kubectl get secret -A | grep ingress-nginx

image

9.1.2.1 Fix via the command line:

# Copy your own suffix (your equivalent of -mh7zg) and append it at the position shown in the image below
kubectl edit deployment ingress-nginx-controller -n ingress-nginx

image
After the change:
image
Checking again, the status is now fine:

kubectl get pod -n ingress-nginx

image

9.1.2.2 Fix via the dashboard:

Find the Deployments section
image
Edit
image
Update
image

9.1.3 Verification

kubectl get svc -A

image

Try the following URLs; if the page below is displayed, the installation succeeded.
http://<any-cluster-node-ip>:31483
https://<any-cluster-node-ip>:30893
image

9.2 Usage

Some advanced Ingress features: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/

9.2.1 Test environment setup

Apply the following YAML to prepare the test environment:

vi test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000

Deploy:

kubectl apply -f test.yaml

Check that the deployment has finished:

kubectl get pod

image

kubectl get svc

image

Access it:
image

9.2.2 Access by domain name

vi ingress-rule.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.hg.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.hg.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"  # requests are forwarded to the service below; it must be able to handle this path, otherwise it returns 404
        backend:
          service:
            name: nginx-demo  ## for a Java app, e.g., use path rewriting to strip the nginx prefix
            port:
              number: 8000
kubectl apply -f ingress-rule.yaml

If the apply command reports an error:
image
Check:

kubectl get validatingwebhookconfigurations

image

Delete:

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission-token

Re-apply:

kubectl apply -f ingress-rule.yaml

Check the ingress status:

kubectl get ingress

image
Add the two domains to the hosts file of the machine you access from, then open them in a browser:
image
image

Edit the corresponding ingress:

kubectl edit ingress <ingress-name>
kubectl edit ingress ingress-host-bar

9.2.3 Path rewriting

vi ingress-rule.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.hg.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.hg.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx(/|$)(.*)"  # requests are forwarded to the service below; it must be able to handle this path, otherwise it returns 404
        backend:
          service:
            name: nginx-demo  ## for a Java app, e.g., use path rewriting to strip the nginx prefix
            port:
              number: 8000

Deploy:

kubectl apply -f ingress-rule.yaml

Check:

kubectl get ingress

Access in a browser:
image
Effect: the response is the same as before without the /nginx prefix; the ingress strips /nginx while processing the request.

9.2.4 Rate limiting

vi ingress-limit-rate.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "haha.hg.com"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

Deploy:

kubectl apply -f ingress-limit-rate.yaml

Check the status:

kubectl get ingress

image
Open it in a browser; when refreshing rapidly:
image

10 Storage Abstraction

10.1 Environment setup

10.1.1 All nodes

yum install -y nfs-utils

10.1.2 Master node

# NFS server (on the master node)
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

mkdir -p /nfs/data
systemctl enable rpcbind --now
systemctl enable nfs-server --now
# apply the export configuration
exportfs -r

10.1.3 Worker nodes

# List the shares the NFS server exports
showmount -e <nfs-server-ip>

# Mount the NFS server's shared directory at the local path /nfs/data
mkdir -p /nfs/data

mount -t nfs <nfs-server-ip>:/nfs/data /nfs/data
# Write a test file
echo "hello nfs server" > /nfs/data/test.txt

Permanent mount:

vi /etc/fstab
# add this line
192.168.68.210:/nfs/data /nfs/data nfs defaults        0 0

The new file is also visible on the NFS server side.

10.1.4 Mounting data the native way

vi mount.yaml
  • Change the NFS server IP to your own:
    server: 192.168.68.204
  • Create the directory used as the path:
    path: /nfs/data/nginx-pv
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-pv-demo
  name: nginx-pv-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv-demo
  template:
    metadata:
      labels:
        app: nginx-pv-demo
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          nfs:
            server: 192.168.68.204
            path: /nfs/data/nginx-pv
kubectl get pod

image
Note: if a pod stays stuck in creating, run kubectl describe pod <pod-name>; the usual cause is that the target directory was not created.
Verification

# Open a shell inside the pod's container
kubectl exec -it nginx-pv-demo-5b74676b65-cb9mh -- /bin/bash

# Inspect the directory nginx serves pages from
ls /usr/share/nginx/html/

image
The directory turns out to be empty.
Open a new session and, under /nfs/data/nginx-pv on the NFS host (or on any server that mounts the share), create an index.html file with some content:

echo 111 > /nfs/data/nginx-pv/index.html

Then, back inside the container, check whether the new file appears:

ls /usr/share/nginx/html/

image

cat /usr/share/nginx/html/index.html

image
Switching to the other pod shows the same result.

10.2 PV&PVC

PV: Persistent Volume, stores the data an application needs to persist at a designated location.
PVC: Persistent Volume Claim, declares the persistent-volume spec (size, access mode) an application needs.

10.2.1 Creating a PV pool

Static provisioning

mkdir -p /nfs/data/{01,02,03}

Create the PVs:

vi pv.yaml
  • Change the NFS server IP to your own:
    server: <your-nfs-server-ip>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01
    server: 192.168.68.204
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 192.168.68.204
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 192.168.68.204

Deploy:

kubectl apply -f pv.yaml

Check:

kubectl get pv

image

10.2.2 PVC creation and binding

10.2.2.1 Creating a PVC

vi pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs

Deploy:

kubectl apply -f pvc.yaml

Check:

kubectl get pvc

image
Test deletion:

kubectl delete -f pvc.yaml

Check:

kubectl get pvc

image
Deploy again:

kubectl apply -f pvc.yaml

Check:

kubectl get pvc

image
Conclusion: a PV larger than the requested 200Mi is selected. A PV in use shows status Bound; once the PVC that used it is deleted, the PV is released.
image

10.2.2.2 Creating a Pod bound to the PVC

vi pod-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: nginx-pvc

Deploy:

kubectl apply -f pod-pvc.yaml

Check the pods:

kubectl get pod

image

Check the PVC/PV binding:

kubectl get pvc,pv

image
Verification
Open a shell in one of the started pods:

# Enter the container; replace the pod name with your own
kubectl exec -it nginx-deploy-pvc-79fc8558c7-c8mh7 -- /bin/bash

## List the directory contents inside the container
ls /usr/share/nginx/html/

image
The directory is empty for now.
From the binding we know the claim uses pv03-3gi, which maps to /nfs/data/03 on the NFS server (or on any host that mounts the share). Create an index.html file in that directory:

echo 333 > /nfs/data/03/index.html

Check again; index.html is now present:
image
The other pod shows the same content:
image

10.3 ConfigMap

Extracts application configuration out of pods; the mounted config can then be updated automatically.

10.3.1 Redis example

10.3.1.1 Create the redis.conf configuration file

vi redis.conf

Content:

appendonly yes

10.3.1.2 Turn the config file into a ConfigMap

Option 1: command line

# Create the config; redis.conf is stored in k8s etcd
kubectl create cm redis-conf --from-file=redis.conf

image
Delete a ConfigMap:

# kubectl delete cm <configmap-name>
kubectl delete cm redis-conf

Option 2: YAML

vi cm.yaml
apiVersion: v1
data:    # data holds the actual content; key: defaults to the file name, value: the file's contents
  redis.conf: |
    appendonly yes
kind: ConfigMap
metadata:
  name: redis-conf
  namespace: default

Deploy:

kubectl apply -f cm.yaml

image

10.3.1.3 Create the pod

vi redis-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    command:
      - redis-server
      - "/redis-master/redis.conf"  # path inside the redis container
    ports:
    - containerPort: 6379
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: redis-conf
        items:
        - key: redis.conf
          path: redis.conf

Check:

kubectl get pod

image

10.3.1.4 Check the default configuration

kubectl exec -it redis -- redis-cli

image

10.3.1.5 Modify the ConfigMap

kubectl edit cm redis-conf
    maxmemory 2mb
    maxmemory-policy allkeys-lru

image

10.3.1.6 Check whether the configuration updated

kubectl exec -it redis -- redis-cli

image
Check whether the mounted file content has been updated: after editing the ConfigMap, the config file inside the Pod changes accordingly.
The runtime config values, however, are unchanged, because the Pod must be restarted for the process to pick up the updated values from the associated ConfigMap.
Reason: the middleware our Pod runs has no hot-reload capability of its own.

Restart the pod:

kubectl replace --force -f redis-pod.yaml

Check whether the configuration updated:

kubectl exec -it redis -- redis-cli

image

10.4 Secret

The Secret object type is used to hold sensitive information such as passwords, OAuth tokens, and SSH keys. Putting this information in a Secret is safer and more flexible than placing it in a Pod definition or a container image.
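Besides the docker-registry type shown below, plain key-value data can be stored as an Opaque Secret. A minimal hedged sketch; the name, key, and value here are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-password       # illustrative name
type: Opaque
data:
  password: MTIzNDU2      # values under data must be base64-encoded ("123456" here)
```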

10.4.1 Command-line approach

kubectl create secret docker-registry <secret-name> \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>

10.4.2 Create a pod using the Secret

apiVersion: v1
kind: Pod
metadata:
  name: private-nginx
spec:
  containers:
  - name: private-nginx
    image: xx/xx:v1.0
  imagePullSecrets:
  - name: <secret-name>
posted @ 2022-11-17 10:58 by 尐海爸爸