k8s core concepts: Controller, Service, Secret, ConfigMap
1. Controller
1. What is a Controller
A Controller is an object on the cluster that manages and runs Pods.
2. Relationship between Controllers and Pods
Pods rely on Controllers for operational tasks such as scaling and rolling upgrades. A Pod and its Controller are associated through labels: the Controller's label selector matches the Pod's labels.
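This association can be seen in any controller manifest: the controller's selector must match the labels in its Pod template. A minimal, hypothetical fragment (names are illustrative, not from this cluster):

# The controller only manages Pods whose labels match its selector
spec:
  selector:
    matchLabels:
      app: web          # selector of the controller
  template:
    metadata:
      labels:
        app: web        # Pods created from this template carry the matching label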
3. ReplicationController and ReplicaSet
1. ReplicationController
Once an RC is defined and submitted to the k8s cluster, the ControllerManager component on the master node is notified; it periodically checks the Pods alive in the system, and whenever their number drops below or rises above the desired replica count it scales them accordingly.
2. ReplicaSet
Essentially the same as RC. Because the name "ReplicationController" collided with a module of the same name in the k8s code base, RC was upgraded to ReplicaSet (an enhanced RC) in k8s v1.2. The difference from RC: RS supports set-based label selectors, while RC only supports equality-based label selectors.
The official recommendation is not to bypass RC/RS and create Pods directly; even for a single replica, it is strongly recommended to define the Pod through a controller. Likewise, do not use RS directly; instead, create the RS and Pods through a Deployment.
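As an illustration of the set-based selector that only RS (not RC) supports, here is a minimal, hypothetical ReplicaSet fragment (name and labels are made up for the example):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend                  # example name
spec:
  replicas: 2
  selector:
    matchExpressions:             # set-based selector: "tier" must be one of the listed values
    - key: tier
      operator: In
      values: [frontend, cache]
  template:
    metadata:
      labels:
        tier: frontend            # matches the selector above
    spec:
      containers:
      - name: nginx
        image: nginx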
4. Deployment controller
Deployment was introduced in k8s 1.2 to better solve Pod orchestration; internally, a Deployment uses a ReplicaSet.
1. Deploys stateless applications: all Pods are considered identical, there is no ordering requirement, it does not matter which node they run on, and they can be scaled up and down freely
2. Manages Pods and ReplicaSets
3. Handles deployment, rolling upgrades, etc.
4. Typical uses: web services, distributed services, etc.
5. Deploying a stateless application with a Deployment
1. Export the yml file and view its contents

[root@k8smaster1 ~]# cat web2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
2. Create the resource
[root@k8smaster1 ~]# kubectl apply -f web2.yml
deployment.apps/web created
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
web-5dcb957ccc-mjvjn   1/1     Running   0          9s    10.244.2.18   k8snode2   <none>           <none>
3. Expose the port
[root@k8smaster1 ~]# kubectl expose deployment web --port=80 --type=NodePort --target-port=80 --name=web3 -o yaml > web3.yml
[root@k8smaster1 ~]# cat web3.yml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-01-16T02:52:51Z"
  labels:
    app: web
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl
    operation: Update
    time: "2022-01-16T02:52:51Z"
  name: web3
  namespace: default
  resourceVersion: "1114657"
  selfLink: /api/v1/namespaces/default/services/web3
  uid: f7cc35a4-4a4e-403a-b890-bf13673792c9
spec:
  clusterIP: 10.101.240.157
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30445
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
[root@k8smaster1 ~]# kubectl apply -f web3.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/web3 configured
[root@k8smaster1 ~]# kubectl get pods,svc -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
pod/web-5dcb957ccc-mjvjn   1/1     Running   0          3m53s   10.244.2.18   k8snode2   <none>           <none>

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        7d    <none>
service/web3         NodePort    10.101.240.157   <none>        80:30445/TCP   35s   app=web
6. Upgrade, rollback, and dynamic scaling
(1) Check the current version
[root@k8smaster1 ~]# kubectl exec -it web-5dcb957ccc-mjvjn bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@web-5dcb957ccc-mjvjn:/# nginx -v
nginx version: nginx/1.21.5
(2) Upgrade the application
[root@k8smaster1 ~]# kubectl set image deployment web nginx=nginx:1.15
deployment.apps/web image updated
(3) Check the upgrade status: you can see that a new Pod was started for the upgrade, so the Pod's unique ID changed. This is essentially a rolling release: with multiple replicas, part of the Pods are stopped first and then upgraded (to be tested later).
[root@k8smaster1 ~]# kubectl rollout status deployment web
deployment "web" successfully rolled out
[root@k8smaster1 ~]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
web-bbcf684cb-p9j2t   1/1     Running   0          114s
[root@k8smaster1 ~]# kubectl exec -it web-bbcf684cb-p9j2t bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@web-bbcf684cb-p9j2t:/# nginx -v
nginx version: nginx/1.15.12
There are two upgrade strategies: Recreate stops all old-version Pods first and only then deploys the new version; RollingUpdate is a rolling release that stops part of the Pods, starts new ones to replace them, deletes the stopped ones, and repeats this until the remaining Pods are rolled over.
[root@k8smaster01 ~]# kubectl explain deploy.spec.strategy.type
KIND:     Deployment
VERSION:  apps/v1

FIELD:    type <string>

DESCRIPTION:
     Type of deployment. Can be "Recreate" or "RollingUpdate". Default is
     RollingUpdate.
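For RollingUpdate, the pace of the rollout can be tuned with maxSurge and maxUnavailable. A minimal sketch (the values are illustrative and not part of the original web2.yml):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 Pod above the desired replica count during the rollout
      maxUnavailable: 1      # at most 1 Pod may be unavailable at any moment during the rollout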
(4) View the rollout history and roll back to a specific revision
[root@k8smaster1 ~]# kubectl rollout history deployment web
deployment.apps/web
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
[root@k8smaster1 ~]# kubectl rollout undo deployment web --to-revision=1
deployment.apps/web rolled back
[root@k8smaster1 ~]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
web-5dcb957ccc-7hns9   1/1     Running   0          25s
[root@k8smaster1 ~]# kubectl exec -it web-5dcb957ccc-7hns9 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@web-5dcb957ccc-7hns9:/# nginx -v
nginx version: nginx/1.21.5
(5) Roll back to the previous revision: you can see that a new Pod is started first and the original one is then terminated, again in a rolling fashion
[root@k8smaster1 ~]# kubectl rollout undo deployment web
deployment.apps/web rolled back
[root@k8smaster1 ~]# kubectl get pods
NAME                   READY   STATUS        RESTARTS   AGE
web-5dcb957ccc-7hns9   1/1     Terminating   0          110s
web-bbcf684cb-hxhf7    1/1     Running       0          17s
[root@k8smaster1 ~]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
web-bbcf684cb-hxhf7   1/1     Running   0          23s
(6) Dynamic scaling
[root@k8smaster1 ~]# kubectl scale deployment web --replicas=10
deployment.apps/web scaled
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                  READY   STATUS              RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
web-bbcf684cb-7lkcf   1/1     Running             0          8s     10.244.1.23   k8snode1   <none>           <none>
web-bbcf684cb-bdpck   0/1     ContainerCreating   0          8s     <none>        k8snode1   <none>           <none>
web-bbcf684cb-blqn8   0/1     ContainerCreating   0          8s     <none>        k8snode2   <none>           <none>
web-bbcf684cb-d22w9   0/1     ContainerCreating   0          8s     <none>        k8snode2   <none>           <none>
web-bbcf684cb-hxhf7   1/1     Running             0          119s   10.244.1.21   k8snode1   <none>           <none>
web-bbcf684cb-ls88v   0/1     ContainerCreating   0          8s     <none>        k8snode1   <none>           <none>
web-bbcf684cb-qnm98   0/1     ContainerCreating   0          8s     <none>        k8snode2   <none>           <none>
web-bbcf684cb-rswzl   0/1     ContainerCreating   0          8s     <none>        k8snode2   <none>           <none>
web-bbcf684cb-w9ctz   0/1     ContainerCreating   0          8s     <none>        k8snode2   <none>           <none>
web-bbcf684cb-wgwd5   1/1     Running             0          8s     10.244.1.22   k8snode1   <none>           <none>
7. StatefulSet: deploying stateful applications
Stateful applications: each Pod runs independently; Pod start order and uniqueness are preserved; each Pod has a unique network identifier and persistent storage; ordering matters (for example MySQL primary/replica); hostnames are fixed. Scaling and upgrades are also performed in order.
Prerequisite: a headless Service, i.e. a Service whose clusterIP is None.
1. View the description
[root@k8smaster1 ~]# kubectl explain statefulsets
KIND:     StatefulSet
VERSION:  apps/v1

DESCRIPTION:
     StatefulSet represents a set of pods with consistent identities.
     Identities are defined as:
      - Network: A single stable DNS and hostname.
      - Storage: As many VolumeClaims as requested.
     The StatefulSet guarantees that a given network identity will always map
     to the same storage identity.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>

   spec <Object>
     Spec defines the desired identities of pods in this set.

   status       <Object>
     Status is the current status of Pods in this StatefulSet. This data may
     be out of date by some window of time.
2. Write sts.yml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: nginx
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
3. Create the resources:
[root@k8smaster1 ~]# kubectl apply -f sts.yml
service/nginx created
statefulset.apps/nginx-statefulset created
4. Check the result
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
nginx-statefulset-0   1/1     Running   0          68s   10.244.1.26   k8snode1   <none>           <none>
nginx-statefulset-1   1/1     Running   0          65s   10.244.2.25   k8snode2   <none>           <none>
nginx-statefulset-2   1/1     Running   0          59s   10.244.2.26   k8snode2   <none>           <none>
[root@k8smaster1 ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE    SELECTOR
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d2h   <none>
nginx        ClusterIP   None         <none>        80/TCP    74s    app=nginx
You can see:
Each Pod has a unique name, generated as the StatefulSet name + "-" + an ordinal index; and there is a headless Service whose ClusterIP is None.
Check the StatefulSet again and look at the hostname (the hostname is also fixed):
[root@k8smaster1 ~]# kubectl get statefulsets -o wide
NAME                READY   AGE     CONTAINERS   IMAGES
nginx-statefulset   3/3     9m35s   nginx        nginx:latest
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
nginx-statefulset-0   1/1     Running   0          9m48s   10.244.1.26   k8snode1   <none>           <none>
nginx-statefulset-1   1/1     Running   0          9m45s   10.244.2.25   k8snode2   <none>           <none>
nginx-statefulset-2   1/1     Running   0          9m39s   10.244.2.26   k8snode2   <none>           <none>
[root@k8smaster1 ~]# kubectl exec nginx-statefulset-0 -- hostname
nginx-statefulset-0
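Besides the fixed hostname, the headless Service gives every Pod a stable DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. A quick way to check this from inside the cluster (a sketch; the temporary dns-test Pod is an assumption and was not run in the original session):

# Resolve the per-Pod DNS record of the first StatefulSet Pod from a throwaway busybox Pod
kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup nginx-statefulset-0.nginx.default.svc.cluster.local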
8. DaemonSet: deploying daemon processes
A DaemonSet guarantees that one copy of a Pod runs on every node; it is commonly used to deploy cluster-wide logging, monitoring, or other system-management agents. A newly added node automatically runs a copy of the Pod as well.
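If the daemon should only run on part of the nodes, a nodeSelector can be added to the Pod template. A hypothetical fragment (the role=logging label is an assumed example and is not used in the ds.yaml below):

spec:
  template:
    spec:
      nodeSelector:
        role: logging      # only nodes carrying this label will run the daemon Pod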
1. Create ds.yaml with the following content:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-test
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: logs
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: varlog
          mountPath: /tmp/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
2. Create the resource
[root@k8smaster1 ~]# kubectl apply -f ds.yaml
daemonset.apps/ds-test created
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
ds-test-hgpcn   1/1     Running   0          14s   10.244.2.27   k8snode2   <none>           <none>
ds-test-zphv6   1/1     Running   0          14s   10.244.1.27   k8snode1   <none>           <none>
You can see that each node runs one Pod.
3. View the details

[root@k8smaster1 log]# kubectl describe pod ds-test-zphv6
Name:         ds-test-zphv6
Namespace:    default
Priority:     0
Node:         k8snode1/192.168.13.104
Start Time:   Sun, 16 Jan 2022 00:44:02 -0500
Labels:       app=filebeat
              controller-revision-hash=9fbd55487
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           10.244.1.27
IPs:
  IP:           10.244.1.27
Controlled By:  DaemonSet/ds-test
Containers:
  logs:
    Container ID:   docker://c5d7d5b970210a2c2a03fc264f8e1e77b95ef89d3095467fa54300c2305f663c
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 16 Jan 2022 00:44:04 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/log from varlog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5r9hq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:
  default-token-5r9hq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5r9hq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m3s   default-scheduler  Successfully assigned default/ds-test-zphv6 to k8snode1
  Normal  Pulling    3m49s  kubelet, k8snode1  Pulling image "nginx"
  Normal  Pulled     3m48s  kubelet, k8snode1  Successfully pulled image "nginx"
  Normal  Created    3m48s  kubelet, k8snode1  Created container logs
  Normal  Started    3m48s  kubelet, k8snode1  Started container logs
4. Check the directory mount:
[root@k8smaster1 log]# kubectl exec -it ds-test-zphv6 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@ds-test-zphv6:/# ls /tmp/log/
anaconda cron-20211222 maillog-20211222 sa tallylog vmware-vmsvc-root.3.log audit cron-20211226 maillog-20211226 samba tuned vmware-vmsvc-root.4.log boot.log cron-20220102 maillog-20220102 secure vmware-network.1.log vmware-vmsvc-root.5.log boot.log-20211228 cron-20220109 maillog-20220109 secure-20211222 vmware-network.2.log vmware-vmsvc-root.6.log boot.log-20211231 cups messages secure-20211226 vmware-network.3.log vmware-vmsvc-root.log boot.log-20220103 dmesg messages-20211222 secure-20220102 vmware-network.4.log vmware-vmtoolsd-root.log boot.log-20220105 dmesg.old messages-20211226 secure-20220109 vmware-network.5.log wtmp boot.log-20220107 firewalld messages-20220102 speech-dispatcher vmware-network.6.log yum.log boot.log-20220108 gdm messages-20220109 spooler vmware-network.7.log yum.log-20211224 boot.log-20220109 glusterfs ntpstats spooler-20211222 vmware-network.8.log yum.log-20220101 btmp grubby pluto spooler-20211226 vmware-network.9.log btmp-20220101 grubby_prune_debug pods spooler-20220102 vmware-network.log chrony lastlog ppp spooler-20220109 vmware-vgauthsvc.log.0 containers libvirt qemu-ga sssd vmware-vmsvc-root.1.log cron maillog rhsm swtpm vmware-vmsvc-root.2.log
root@ds-test-zphv6:/# exit
exit
[root@k8smaster1 log]# ls /var/log/
anaconda cron-20211222 maillog-20211222 sa tallylog vmware-vmsvc-root.3.log audit cron-20211226 maillog-20211226 samba tuned vmware-vmsvc-root.4.log boot.log cron-20220102 maillog-20220102 secure vmware-network.1.log vmware-vmsvc-root.5.log boot.log-20211222 cron-20220109 maillog-20220109 secure-20211222 vmware-network.2.log vmware-vmsvc-root.6.log boot.log-20211223 cups messages secure-20211226 vmware-network.3.log vmware-vmsvc-root.log boot.log-20211228 dmesg messages-20211222 secure-20220102 vmware-network.4.log vmware-vmtoolsd-root.log boot.log-20211231 dmesg.old messages-20211226 secure-20220109 vmware-network.5.log wtmp boot.log-20220103 firewalld messages-20220102 speech-dispatcher vmware-network.6.log yum.log boot.log-20220105 gdm messages-20220109 spooler vmware-network.7.log yum.log-20211224 boot.log-20220108 glusterfs ntpstats spooler-20211222 vmware-network.8.log yum.log-20220101 btmp grubby pluto spooler-20211226 vmware-network.9.log btmp-20220101 grubby_prune_debug pods spooler-20220102 vmware-network.log chrony lastlog ppp spooler-20220109 vmware-vgauthsvc.log.0 containers libvirt qemu-ga sssd vmware-vmsvc-root.1.log cron maillog rhsm swtpm vmware-vmsvc-root.2.log
9. Job: one-off tasks
A Job handles short-lived, one-off batch tasks, i.e. tasks that run only once; it guarantees that one or more Pods of the batch task complete successfully.
1. Create job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
2. Create the resource
[root@k8smaster1 ~]# kubectl create -f job.yaml
job.batch/pi created
[root@k8smaster1 ~]# kubectl get jobs
NAME   COMPLETIONS   DURATION   AGE
pi     1/1           96s        14m
[root@k8smaster1 ~]# kubectl get pods
NAME       READY   STATUS      RESTARTS   AGE
pi-ppc7n   0/1     Completed   0          44m
[root@k8smaster1 ~]# kubectl logs pi-ppc7n
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
10. CronJob: periodic scheduled tasks
1. Create the yml file cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
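The schedule field uses standard cron syntax, so "*/1 * * * *" runs the Job once every minute. For reference:

# minute  hour  day-of-month  month  day-of-week
#  */1     *         *          *         *        -> every minute
# "0 3 * * 0" would mean 03:00 every Sunday (illustrative, not used here)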
2. Delete the previous Job, then apply cronjob.yaml to create the CronJob
[root@k8smaster1 ~]# kubectl delete -f job.yaml
job.batch "pi" deleted
3. View the logs
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME                     READY   STATUS      RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
hello-1642318260-c58sn   0/1     Completed   0          2m37s   10.244.1.43   k8snode1   <none>           <none>
hello-1642318320-5r8kq   0/1     Completed   0          97s     10.244.2.35   k8snode2   <none>           <none>
hello-1642318380-kxw7q   0/1     Completed   0          37s     10.244.1.45   k8snode1   <none>           <none>
[root@k8smaster1 ~]# kubectl logs hello-1642318320-5r8kq
Sun Jan 16 07:32:12 UTC 2022
Hello from the Kubernetes cluster
2. Service
A Service prevents Pods from becoming unreachable (losing track of their changing IPs) by providing service discovery, similar to the registry in a microservice architecture. It defines an access policy for a group of Pods: it gives a set of Pods with the same functionality a single entry address and load-balances requests across the backend Pods.
A Service selects the Pods it fronts through a selector: labels and the selector establish the association, and the Service then load-balances traffic across those Pods.
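To see which Pod IPs a Service is currently load-balancing across, you can list its Endpoints object (shown here for the web Service used in the test below; the command is standard kubectl, the output is not reproduced):

# The Endpoints object lists the PodIP:port pairs selected by the Service's label selector
kubectl get endpoints web -o wide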
1. Common Service types
ClusterIP: for access inside the cluster (this is also the default)
NodePort: for exposing a port externally
LoadBalancer: for external access to the application, on public clouds
2. Tests:
(1) NodePort was already tested above
(2) Test ClusterIP
1》 Create service.yml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: web
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
status:
  loadBalancer: {}
2》 Create it and view the svc
[root@k8smaster1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   7d6h
web          ClusterIP   10.108.161.219   <none>        80/TCP    6s
3》 Start an nginx
[root@k8smaster1 ~]# kubectl create deployment web --image=nginx
deployment.apps/web created
[root@k8smaster1 ~]# kubectl get pods
NAME                   READY   STATUS              RESTARTS   AGE
web-5dcb957ccc-wf2kt   0/1     ContainerCreating   0          8s
[root@k8smaster1 ~]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
web-5dcb957ccc-wf2kt   1/1     Running   0          2m18s   10.244.2.97   k8snode2   <none>           <none>
4》 From a cluster node, access port 80 via the Service's virtual (cluster) IP
[root@k8snode1 ~]# curl 10.105.134.45
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
3. Secret
A Secret stores sensitive (encoded) data in etcd and lets Pod containers access it, for example by mounting it as a volume. A typical use case is credentials.
Secrets solve the problem of configuring passwords, tokens, keys, and other sensitive data without exposing them in images or Pod specs. A Secret can be consumed as a volume or as environment variables.
Secrets come in three types:
- Service Account: used to access the Kubernetes API; created automatically by Kubernetes and automatically mounted into Pods under /run/secrets/kubernetes.io/serviceaccount;
- Opaque: a Secret in base64-encoded form, used to store passwords, keys, etc.;
- kubernetes.io/dockerconfigjson: used to store credentials for a private docker registry.
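For the third type, the registry credential Secret is usually created with kubectl rather than written by hand. A hedged sketch (the registry address, user name, password, and Secret name below are placeholders, not values from this cluster):

# Hypothetical example: create a dockerconfigjson Secret for a private registry
kubectl create secret docker-registry myregistrykey \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword
# Pods can then pull from the private registry by referencing it:
#   spec.imagePullSecrets: [{name: myregistrykey}]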
Tests:
1. ServiceAccount
View the one created by default:
[root@k8smaster1 ~]# kubectl create deployment web --image=nginx
deployment.apps/web created
[root@k8smaster1 ~]# kubectl get pods
NAME                   READY   STATUS              RESTARTS   AGE
web-5dcb957ccc-sdhjk   0/1     ContainerCreating   0          5s
[root@k8smaster1 ~]# kubectl exec -it web-5dcb957ccc-sdhjk -- bash
root@web-5dcb957ccc-sdhjk:/# ls /run/secrets/kubernetes.io/serviceaccount
ca.crt  namespace  token
2. Opaque
1. Compute the base64-encoded strings
[root@k8snode1 ~]# echo -n "admin" | base64
YWRtaW4=
[root@k8snode1 ~]# echo -n "1f2d1e2e67df" | base64
MWYyZDFlMmU2N2Rm
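The reverse direction can be checked with base64 -d, which decodes the stored value back to plaintext:

# decode the stored value back to the original string
echo -n "YWRtaW4=" | base64 -d     # prints: admin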
2. Create the secrets.yml file
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: MWYyZDFlMmU2N2Rm
  username: YWRtaW4=
3. Create the Secret
[root@k8smaster1 ~]# kubectl create -f secrets.yml
secret/mysecret created
[root@k8smaster1 ~]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-5r9hq   kubernetes.io/service-account-token   3      7d6h
mysecret              Opaque                                2      10s
4. Usage
1》 As environment variables
Create secret-var.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
Create it and check:
[root@k8smaster1 ~]# kubectl delete pods --all          # delete all pods
pod "web-5dcb957ccc-wf2kt" deleted
[root@k8smaster1 ~]# kubectl delete deployment --all
deployment.apps "web" deleted
[root@k8smaster1 ~]# kubectl apply -f secret-var.yaml   # create the pod
pod/mypod created
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
mypod   1/1     Running   0          11s   10.244.2.98   k8snode2   <none>           <none>
[root@k8smaster1 ~]# kubectl exec -it mypod -- bash     # enter the pod's first container
root@mypod:/# echo $SECRET_USERNAME                     # check the environment variables
admin
root@mypod:/# echo $SECRET_PASSWORD
1f2d1e2e67df
root@mypod:/# exit
exit
2》 Using a volume: the Secret's keys are mounted into a directory inside the container, and the decoded values can then be read as files in that directory
Create secret-vol.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
Create it and check:
[root@k8smaster1 ~]# kubectl apply -f secret-vol.yaml
pod/mypod created
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
mypod   1/1     Running   0          6s    10.244.1.82   k8snode1   <none>           <none>
[root@k8smaster1 ~]# kubectl exec -it mypod -- bash
root@mypod:/# ls /etc/foo/
password  username
root@mypod:/# cd /etc/foo/
root@mypod:/etc/foo# cat username
admin
4. ConfigMap
A ConfigMap stores configuration data as key-value pairs; it can hold individual properties or whole configuration files. ConfigMaps are very similar to Secrets, but they are more convenient for handling strings that contain no sensitive information.
1. Creating a ConfigMap
You can use kubectl create configmap to create a ConfigMap from files, directories, or key-value strings.
(1) Create a ConfigMap from key-value strings
[root@k8smaster1 ~]# kubectl create configmap special-config --from-literal=special.how=very
configmap/special-config created
[root@k8smaster1 ~]# kubectl get configmap special-config -o go-template='{{.data}}'
map[special.how:very]
Multiple key-value pairs can be passed like this:
kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
(2) Create from an env file
[root@k8smaster1 ~]# echo -e "a=b\nc=d" | tee config.env
a=b
c=d
[root@k8smaster1 ~]# kubectl create configmap special-config2 --from-env-file=config.env
configmap/special-config2 created
[root@k8smaster1 ~]# kubectl describe cm special-config2
Name:         special-config2
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
a:
----
b
c:
----
d
Events:  <none>
[root@k8smaster1 ~]# kubectl get configmap special-config2 -o go-template='{{.data}}'
map[a:b c:d]
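ConfigMaps can also be created from a file or a whole directory with --from-file, where each file name becomes a key and the file content becomes the value. A hedged sketch (nginx.conf and conf-dir are placeholder paths, not files from this cluster):

# Hypothetical examples: one key per file
kubectl create configmap nginx-conf --from-file=nginx.conf
kubectl create configmap app-confs --from-file=./conf-dir/    # one entry per file in the directory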
(3) Create from a yml file
1》 Create myconfig.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
  namespace: default
data:
  special.level: info
  special.type: hello
2》 Create it
[root@k8smaster1 ~]# kubectl apply -f myconfig.yaml
configmap/myconfig created
[root@k8smaster1 ~]# kubectl get cm
NAME              DATA   AGE
myconfig          2      4s
special-config    1      5m27s
special-config2   2      3m50s
2. Using a ConfigMap
(1) As environment variables
Create config-var.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: busybox
    image: busybox
    command: [ "/bin/sh", "-c", "echo $(LEVEL) $(TYPE)" ]
    env:
    - name: LEVEL
      valueFrom:
        configMapKeyRef:
          name: myconfig
          key: special.level
    - name: TYPE
      valueFrom:
        configMapKeyRef:
          name: myconfig
          key: special.type
  restartPolicy: Never
Create it and check:
[root@k8smaster1 ~]# kubectl apply -f config-var.yaml
pod/mypod created
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME    READY   STATUS      RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
mypod   0/1     Completed   0          7s    10.244.2.99   k8snode2   <none>           <none>
[root@k8smaster1 ~]# kubectl logs mypod
info hello
(2) As a mounted volume
The ConfigMap is mounted directly into the Pod at /etc/config; each key-value pair becomes a file, with the key as the file name and the value as the content.
Create cm.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    command: [ "/bin/sh","-c","cat /etc/config/special.level" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: special-config2
  restartPolicy: Never
Create it and view the logs
[root@k8smaster1 ~]# kubectl apply -f cm.yaml
pod/mypod created
[root@k8smaster1 ~]# kubectl logs mypod
b
Supplement: Service ports
targetPort - the port the container listens on.
port - the port exposed by the Service; it can be accessed within the cluster.
nodePort - the port the cluster exposes to the outside world, allowing external clients to reach the Pod/Container.
1. The yaml of an example svc:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubesphere.io/alias-name: mynginx-svc
    kubesphere.io/creator: admin
    kubesphere.io/description: mynginx-svc
  creationTimestamp: "2022-02-11T07:38:59Z"
  labels:
    app: mynginx-svc
  name: mynginx-svc
  namespace: default
  resourceVersion: "710977"
  selfLink: /api/v1/namespaces/default/services/mynginx-svc
  uid: faf5c2bc-b358-43d5-b938-60085d672371
spec:
  clusterIP: 10.1.222.4
  clusterIPs:
  - 10.1.222.4
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: port-http
    nodePort: 31689
    port: 81
    protocol: TCP
    targetPort: 80
  selector:
    app: mynginx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
2. The resulting port mapping: external clients reach the Service at <nodeIP>:31689 (nodePort); inside the cluster the Service listens on 10.1.222.4:81 (port); both paths are forwarded to the container's port 80 (targetPort).
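A quick sketch of the two access paths implied by this mapping (the node IP 192.168.13.104 is borrowed from the earlier DaemonSet example purely as an illustration):

# inside the cluster: Service clusterIP + port
curl 10.1.222.4:81
# from outside the cluster: <any node IP>:nodePort (node IP is illustrative)
curl 192.168.13.104:31689
# both paths end up on the container's targetPort 80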