Exam Questions
1. Create a ClusterRole named deployment-clusterrole that only allows the create verb on Deployment, DaemonSet, and StatefulSet resources.
Create a ServiceAccount named cicd-token in the namespace app-team1, and bind the ClusterRole from the previous step to that ServiceAccount.
# Create the ClusterRole
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,daemonsets,statefulsets
# Create the namespace app-team1
kubectl create ns app-team1
# Create the ServiceAccount
kubectl create serviceaccount cicd-token -n app-team1
# Bind deployment-clusterrole to cicd-token (a RoleBinding limits the permission to the app-team1 namespace)
kubectl create rolebinding deployment-rolebinding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token --namespace=app-team1
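For reference, the imperative commands above are roughly equivalent to this declarative sketch (the resource names match the task; the manifest itself is not required on the exam):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "statefulsets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-rolebinding
  namespace: app-team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployment-clusterrole
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1
```

You can check the result with `kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1`, which should print yes.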
Reference: https://kubernetes.io/zh/docs/reference/access-authn-authz/rbac/
https://kubernetes.io/zh/docs/reference/access-authn-authz/rbac/#kubectl-create-clusterrole
2. Mark the node ek8s-node-1 as unschedulable, then evict all Pods running on it so they are rescheduled onto other nodes.
kubectl config use-context ek8s
kubectl cordon ek8s-node-1 # mark the node unschedulable
kubectl drain ek8s-node-1 --delete-emptydir-data --ignore-daemonsets --force
Reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain
3. The existing Kubernetes cluster is running version 1.21.0. Upgrade all control-plane components on the master node, and only those, to version 1.22.0; also upgrade kubelet and kubectl on the master node.
kubectl cordon k8s-master
kubectl drain k8s-master --delete-emptydir-data --ignore-daemonsets --force
yum install -y kubeadm-1.22.0
kubeadm upgrade plan
kubeadm upgrade apply v1.22.0
yum install -y kubelet-1.22.0 kubectl-1.22.0
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet
kubectl uncordon k8s-master
Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
4. Create a snapshot of the etcd instance at https://127.0.0.1:2379 and save it to /srv/data/etcd-snapshot.db. If the snapshot operation hangs, press Ctrl+C to abort and retry.
Then restore from an existing snapshot: /var/lib/backup/etcd-snapshot-previous.db
The certificates for running etcdctl are located at:
CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key
Note: you need to install the etcdctl command yourself.
# Backup (use the certificate paths given in the task, not the cluster defaults)
ETCDCTL_API=3 etcdctl --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key --endpoints=https://127.0.0.1:2379 snapshot save /srv/data/etcd-snapshot.db
# Optionally verify the snapshot
ETCDCTL_API=3 etcdctl snapshot status /srv/data/etcd-snapshot.db --write-out=table
# Restore
mkdir /opt/backup/ -p
cd /etc/kubernetes/manifests && mv kube-* /opt/backup # move the static Pod manifests aside to stop the control plane
ETCDCTL_API=3 etcdctl snapshot restore /var/lib/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-restore # restore is a local operation, so no endpoint or certificates are needed
vim etcd.yaml
# Change the etcd data volume's hostPath from /var/lib/etcd to /var/lib/etcd-restore
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd-restore
# Restore the control-plane components
mv /opt/backup/* /etc/kubernetes/manifests
systemctl restart kubelet
Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/configure-upgrade-etcd/
5. Create a NetworkPolicy named all-port-from-namespace that allows Pods in the internal namespace to access port 9000 of Pods in that same namespace.
Pods outside the internal namespace must not have access,
and Pods that do not listen on port 9000 must not be reachable.
kubectl create -f nwp.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: all-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - port: 9000
      protocol: TCP
Reference: https://kubernetes.io/zh/docs/concepts/services-networking/network-policies/
6. Reconfigure the existing Deployment front-end: add a port configuration named http to the container named nginx, exposing container port 80. Then create a Service
named front-end-svc that exposes the http port of that Deployment, with Service type NodePort.
$ kubectl edit deploy front-end
# Add the following under the container named nginx:
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
Create the Service:
kubectl expose deploy front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort
Extension: the same in the sharenfs namespace, for a Deployment named kubepi:
kubectl expose deployment kubepi -n sharenfs --name=front-end-svc --port=80 --target-port=http --type=NodePort
Reference: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
7. Create an Ingress named pong in the ing-internal namespace, proxying the Service hi on port 5678 under the path /hi.
Verification: curl -kL <INTERNAL_IP>/hi should return hi
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
Reference: https://kubernetes.io/zh/docs/concepts/services-networking/ingress/
8. Scale the Deployment named loadbalancer to 6 replicas.
kubectl config use-context k8s
kubectl scale --replicas=6 deployment loadbalancer
Alternatively, run $ kubectl edit deployment loadbalancer and change the replica count in place.
9. Create a Pod named nginx-kusc00401 with image nginx, scheduled onto a node carrying the label disk=spinning.
kubectl run nginx --image=nginx --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx-kusc00401
spec:
  nodeSelector:
    disk: spinning
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
-----------------------------------
$ vim pod-ns.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
  labels:
    role: nginx-kusc00401
spec:
  nodeSelector:
    disk: spinning
  containers:
  - name: nginx
    image: nginx
$ kubectl create -f pod-ns.yaml
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/assign-pods-nodes/
10. Check how many nodes in the cluster are in Ready state, excluding nodes that carry a NoSchedule taint. Write the resulting number to /opt/KUSC00402/kusc00402.txt
$ kubectl config use-context k8s
$ kubectl get node | grep -w Ready # count these lines as A
$ kubectl describe node | grep Taint | grep NoSchedule # count these lines as B
# Write x = A - B to /opt/KUSC00402/kusc00402.txt
$ echo x >> /opt/KUSC00402/kusc00402.txt
grep -w matches whole words only, so NotReady lines are not counted.
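The A - B arithmetic can also be scripted. The listings below are hypothetical sample output standing in for the real `kubectl get node` and `kubectl describe node | grep Taint` commands, just to show the counting logic:

```shell
# Hypothetical stand-ins for kubectl output on a real cluster.
nodes='NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   10d   v1.21.0
k8s-node-1   Ready      <none>   10d   v1.21.0
k8s-node-2   NotReady   <none>   10d   v1.21.0'

taints='Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>'

A=$(printf '%s\n' "$nodes" | grep -cw Ready)        # -w: "NotReady" is not counted
B=$(printf '%s\n' "$taints" | grep -c NoSchedule)
echo $((A - B))                                     # here: 2 - 1 = 1
```

On the exam you would pipe the real kubectl output through the same grep -c filters instead of the sample variables.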
11. Create a Pod named kucc1 containing 1-4 containers; here it is four: nginx + redis + memcached + consul
kubectl run kucc1 --image=nginx --dry-run=client -o yaml ## generate the YAML skeleton, then edit it
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kucc1
  name: kucc1
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: consul
    name: consul
  - image: memcached
    name: memcached
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
12. Create a PV named app-config, size 2Gi, access mode ReadWriteMany, volume type hostPath, path /srv/app-config
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/srv/app-config"
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
13. Create a PVC named pv-volume with storageClass csi-hostpath-sc and size 10Mi;
then create a Pod named web-server, image nginx, that mounts the PVC at /usr/share/nginx/html, with access mode ReadWriteOnce.
Afterwards, use kubectl edit or kubectl patch to resize the PVC to 70Mi, recording the change.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: pv-volume
Resize:
Option 1, patch:
kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage": "70Mi"}}}}' --record
Option 2, edit:
kubectl edit pvc pv-volume
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
14. Monitor the logs of the Pod named foobar, filter for lines containing unable-access-website, and write them to /opt/KUTR00101/foobar
$ kubectl config use-context k8s
$ kubectl logs foobar | grep unable-access-website > /opt/KUTR00101/foobar
15. Add a sidecar container named busybox, with image busybox, to the existing Pod legacy-app. The sidecar's command is /bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log'.
The sidecar must share a volume named logs with the existing container, mounted at /var/log/
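No worked answer was given for this one; the sketch below shows the approach under the assumption that the existing container in legacy-app is named count (read the real name from the exported spec). Containers cannot be added to a running Pod, so export, edit, delete, and recreate it:

```yaml
# kubectl get pod legacy-app -o yaml > legacy-app.yaml
# Then add the sidecar and the shared logs volume, roughly:
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count              # hypothetical name; keep the exported container spec as-is
    image: busybox           # whatever image the exported spec contains
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: busybox            # the new sidecar
    image: busybox
    command: ["/bin/sh", "-c", "tail -n+1 -f /var/log/legacy-app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
# kubectl delete pod legacy-app && kubectl create -f legacy-app.yaml
```

If the exported Pod already defines a logs volume, reuse it instead of adding an emptyDir.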
Reference: https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/
16. Find the Pods with the label name=cpu-user, identify the one using the most CPU, and write its name into the existing file /opt/KUTR00401/KUTR00401.txt
(note: no namespace is specified, so use -A to search all namespaces)
$ kubectl config use-context k8s
$ kubectl top po -A -l name=cpu-user
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system coredns-54d67798b7-hl8xc 7m 8Mi
kube-system coredns-54d67798b7-m4m2q 6m 8Mi
# Use the actual Pod names here; pick the Pod with the largest value in the CPU column. A bare value such as 1, 2, or 3 means whole cores and is larger than any value with an m suffix, since 1 CPU = 1000m. Also be sure to use >> rather than >, because the file already exists.
$ echo "coredns-54d67798b7-hl8xc" >> /opt/KUTR00401/KUTR00401.txt
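The cores-versus-millicores comparison can be automated. The CPU values below are hypothetical stand-ins for the CPU(cores) column of `kubectl top`:

```shell
# Normalize every value to millicores, then pick the largest original value.
max=$(printf '%s\n' 7m 6m 1 250m |
  awk '{ v = $1
         if (v ~ /m$/) { sub(/m$/, "", v); mc = v + 0 }  # already millicores
         else          { mc = v * 1000 }                 # whole cores -> millicores
         print mc, $1 }' |
  sort -k1,1nr | head -n1 | awk '{ print $2 }')
echo "$max"   # prints 1, since 1 core = 1000m beats 250m, 7m, 6m
```

The same pipeline applied to the real `kubectl top po -A -l name=cpu-user` output (sorting on the CPU column) picks the Pod to write into the file.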
17. A node named wk8s-node-0 is in NotReady state. Bring it back to a healthy state, and make sure the fix persists across reboots.
$ ssh wk8s-node-0
$ sudo -i
# systemctl status kubelet
# systemctl start kubelet
# systemctl enable kubelet