CKA Exam Questions

3.1.1 Question 1: RBAC — submission guidelines. Docs: https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/rbac/
For the first question (RBAC), submit it as described below; the standard solution steps I provide are:
Solution:
Run the context switch during the exam; it is not needed in the practice environment.

root@master1:~# kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
clusterrole.rbac.authorization.k8s.io/deployment-clusterrole created
root@master1:~# kubectl create ns app-team1
namespace/app-team1 created
root@master1:~# kubectl create sa cicd-token -n app-team1
serviceaccount/cicd-token created
root@master1:~# kubectl create clusterrolebinding chenxi -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
clusterrolebinding.rbac.authorization.k8s.io/chenxi created
root@master1:~# kubectl create rolebinding chenxi -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
rolebinding.rbac.authorization.k8s.io/chenxi created
root@master1:~# kubectl describe rolebinding chenxi -n app-team1
Name:         chenxi
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  deployment-clusterrole
Subjects:
  Kind            Name        Namespace
  ----            ----        ---------
  ServiceAccount  cicd-token  app-team1
 
 
 
 
[student@node-1]$ kubectl config use-context k8s
[student@node-1]$ kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
[student@node-1]$ kubectl create serviceaccount cicd-token -n app-team1
# The question says "limited to namespace app-team1", so create a RoleBinding. If it did not, you would create a ClusterRoleBinding instead.
[student@node-1]$ kubectl create rolebinding cicd-token-binding -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
# The RoleBinding name cicd-token-binding is arbitrary because the question does not specify one; if it did, you would have to use that exact name.
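A quick sanity check of the binding (optional; names as in the solution above):

[student@node-1]$ kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1    # expect: yes
[student@node-1]$ kubectl auth can-i create pods --as=system:serviceaccount:app-team1:cicd-token -n app-team1    # expect: no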

  

3.1.2 Question 2: Node maintenance — submission guidelines

For the second question (node maintenance), submit it as described below.

[student@node-1]$ kubectl config use-context ek8s
[student@node-1]$ kubectl cordon ek8s-node-1    # mark the node as unschedulable
[student@node-1]$ kubectl drain ek8s-node-1 --delete-emptydir-data --ignore-daemonsets --force
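A quick check that the node really is ready for maintenance (optional, not required by the question):

[student@node-1]$ kubectl get node ek8s-node-1    # STATUS should include SchedulingDisabled
[student@node-1]$ kubectl get pods -A -o wide --field-selector spec.nodeName=ek8s-node-1    # only DaemonSet-managed Pods should remain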

  

3.1.3 Question 3: Kubernetes version upgrade — submission guidelines. Search the official docs for kubeadm-upgrade: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
For the third question (Kubernetes version upgrade), submit it as described below; the standard solution steps I provide are:
Solution:
Run the context switch during the exam; it is not needed in the practice environment.
root@master1:~# kubectl get node    # check the current node versions
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   15h   v1.23.1
node1     Ready    <none>                 15h   v1.23.1
root@master1:~# kubectl cordon master1
node/master1 cordoned
root@master1:~# kubectl drain master1 --delete-emptydir-data --ignore-daemonsets --force
node/master1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-zhb6k, kube-system/kube-proxy-l9fdg
evicting pod kube-system/coredns-65c54cc984-4dkqh
evicting pod kube-system/calico-kube-controllers-677cd97c8d-qnpr9
evicting pod kube-system/coredns-65c54cc984-2xqz8
pod/calico-kube-controllers-677cd97c8d-qnpr9 evicted
pod/coredns-65c54cc984-4dkqh evicted
pod/coredns-65c54cc984-2xqz8 evicted
node/master1 drained
 
# On the master node, check which kubeadm package versions are available:
root@master1:/home/chenxi# apt-cache show kubeadm | grep 1.23.2
Version: 1.23.2-00
Filename: pool/kubeadm_1.23.2-00_amd64_f3593ab00d33e8c0a19e24c7a8c81e74a02e601d0f1c61559a5fb87658b53563.deb
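Note: kubeadm on this host is still v1.23.1, which is why the apply below needs --force. The usual order (the one in the kubeadm-upgrade docs and in the exam answer further down) is to upgrade the kubeadm package first; a minimal sketch:

root@master1:~# apt-get update && apt-get install -y kubeadm=1.23.2-00
root@master1:~# kubeadm version        # should now report v1.23.2
root@master1:~# kubeadm upgrade plan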
 
root@master1:~# kubeadm upgrade apply v1.23.2 --etcd-upgrade=false --force
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1121 13:01:19.726418  696317 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.23.2"
[upgrade/versions] Cluster version: v1.23.17
[upgrade/versions] kubeadm version: v1.23.1
[upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set:
 
    - Specified version to upgrade to "v1.23.2" is higher than the kubeadm version "v1.23.1". Upgrade kubeadm first using the tool you used to install kubeadm
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.23.2"...
Static pod: kube-apiserver-master1 hash: 3c8f61a122c8e355df03d157fa6c23fc
Static pod: kube-controller-manager-master1 hash: 5a2921269046b06a9e27540a966d9134
Static pod: kube-scheduler-master1 hash: d7cc8771deae6f604bf4c846a40e8638
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2256149495"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-11-21-13-07-48/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master1 hash: 3c8f61a122c8e355df03d157fa6c23fc
Static pod: kube-apiserver-master1 hash: 3c8f61a122c8e355df03d157fa6c23fc
Static pod: kube-apiserver-master1 hash: 3c8f61a122c8e355df03d157fa6c23fc
Static pod: kube-apiserver-master1 hash: 6f15f917043f6e456a012e8b45f57c03
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-11-21-13-07-48/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master1 hash: 5a2921269046b06a9e27540a966d9134
(... the same line repeats while waiting for the kubelet to restart the component ...)
Static pod: kube-controller-manager-master1 hash: ac2cd7a075ba83f2bae5ad1f8f5516a9
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-11-21-13-07-48/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master1 hash: d7cc8771deae6f604bf4c846a40e8638
(... the same line repeats while waiting for the kubelet to restart the component ...)
Static pod: kube-scheduler-master1 hash: d26a55167803c084a5cb882c2d5bfba7
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm up
grade will handle this transition transparently.[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.23.2". Enjoy!
 
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@master1:~# apt-get install kubelet=1.23.2-00
Reading package lists... Done
Building dependency tree      
Reading state information... Done
kubelet is already the newest version (1.23.2-00).
0 upgraded, 0 newly installed, 0 to remove and 55 not upgraded.
root@master1:~# apt-get install kubectl=1.23.2-00
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl
1 upgraded, 0 newly installed, 0 to remove and 55 not upgraded.
Need to get 8,929 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 kubectl amd64 1.23.2-00 [8,929 kB]
Fetched 8,929 kB in 6s (1,602 kB/s)
(Reading database ... 88009 files and directories currently installed.)
Preparing to unpack .../kubectl_1.23.2-00_amd64.deb ...
Unpacking kubectl (1.23.2-00) over (1.23.1-00) ...
Setting up kubectl (1.23.2-00) ...
root@master1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.2", GitCommit:"9d142434e3af351a628bffee3939e64c681afa4d", GitTreeState:"clean", BuildDate:"2022-01-19T17:35:46Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.2", GitCommit:"9d142434e3af351a628bffee3939e64c681afa4d", GitTreeState:"clean", BuildDate:"2022-01-19T17:29:16Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
root@master1:~# kubelet --version
Kubernetes v1.23.2
On the worker node:
root@node1:/home/chenxi# apt-get install kubelet=1.23.2-00
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following held packages will be changed:
  kubelet
The following packages will be upgraded:
  kubelet
1 upgraded, 0 newly installed, 0 to remove and 59 not upgraded.
Need to get 19.5 MB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.23.2-00 [19.5 MB]
Fetched 19.5 MB in 6s (3,138 kB/s)                                                                                                                                                                             
(Reading database ... 88305 files and directories currently installed.)
Preparing to unpack .../kubelet_1.23.2-00_amd64.deb ...
Unpacking kubelet (1.23.2-00) over (1.23.1-00) ...
Setting up kubelet (1.23.2-00) ...
root@node1:/home/chenxi# ^C
root@node1:/home/chenxi#  apt-get install kubelet=1.23.2-00
Reading package lists... Done
Building dependency tree      
Reading state information... Done
kubelet is already the newest version (1.23.2-00).
0 upgraded, 0 newly installed, 0 to remove and 59 not upgraded.
 
Check the Pod status:
root@master1:/home/chenxi# kubectl get pod -n kube-system -w
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-677cd97c8d-mtlz6   1/1     Running   0             28m
calico-node-nrtpb                          1/1     Running   1 (15h ago)   15h
calico-node-zhb6k                          1/1     Running   1 (15h ago)   15h
coredns-65c54cc984-ht4fs                   1/1     Running   0             28m
coredns-65c54cc984-wfc4s                   1/1     Running   0             28m
etcd-master1                               1/1     Running   1 (15h ago)   15h
kube-apiserver-master1                     1/1     Running   0             7m19s
kube-controller-manager-master1            1/1     Running   0             6m30s
kube-proxy-dtkxb                           1/1     Running   0             5m58s
kube-proxy-ngc6q                           1/1     Running   0             5m55s
kube-scheduler-master1                     1/1     Running   0             6m15s
Restore scheduling on master1:
root@master1:/home/chenxi# kubectl uncordon master1
node/master1 uncordoned
root@master1:/home/chenxi# kubectl get node    # check after the upgrade
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   15h   v1.23.2
node1     Ready    <none>                 15h   v1.23.2
 
 
 
[student@node-1]$ kubectl config use-context mk8s
Start:
[student@node-1]$ kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   38d   v1.23.1
node-1     Ready    <none>                 38d   v1.23.1
# cordon disables scheduling: the node is marked SchedulingDisabled, new Pods are no longer scheduled onto it, but Pods already on the node are unaffected.
# drain evicts the node: it first evicts the Pods on the node (they are recreated on other nodes) and then marks the node SchedulingDisabled.
[student@node-1]$ kubectl cordon master01
[student@node-1]$ kubectl drain master01 --delete-emptydir-data --ignore-daemonsets --force
# ssh to the master node and switch to root
[student@node-1]$ ssh master01
[student@master01]$ sudo -i
[root@master01]# apt-cache show kubeadm | grep 1.23.2
[root@master01]# apt-get update
[root@master01]# apt-get install kubeadm=1.23.2-00
# Verify the upgrade plan
[root@master01]# kubeadm upgrade plan
# Upgrade everything except etcd; answer y when prompted.
[root@master01]# kubeadm upgrade apply v1.23.2 --etcd-upgrade=false
Upgrade kubelet:
[root@master01]# apt-get install kubelet=1.23.2-00
[root@master01]# kubelet --version
Upgrade kubectl:
[root@master01]# apt-get install kubectl=1.23.2-00
[root@master01]# kubectl version
# Exit root, back to student@master01
[root@master01]# exit
# Exit master01, back to student@node-1
[student@master01]$ exit
[student@node-1]$
Do not type exit more times than needed, or you will drop out of the exam environment.
Restore scheduling on master01:
[student@node-1]$ kubectl uncordon master01
Check that master01 is Ready:
[student@node-1]$ kubectl get node
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   38d   v1.23.2
node-1     Ready    <none>                 38d   v1.23.1
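One optional extra step: the kubeadm-upgrade docs also restart the kubelet after installing the new kubelet package. If master01 keeps reporting the old VERSION, run these standard systemd commands on master01 as root before exiting:

[root@master01]# systemctl daemon-reload
[root@master01]# systemctl restart kubelet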

  

3.1.4 Question 4: etcd backup and restore — submission guidelines. Search the official docs for the keyword upgrade-etcd: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/
Backup
root@master1:~# mkdir  /srv/data
root@master1:~# sudo ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /srv/data/etcd-snapshot.db
{"level":"info","ts":1700573892.0990024,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/srv/data/etcd-snapshot.db.part"}
{"level":"info","ts":"2023-11-21T13:38:12.104Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1700573892.104788,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":"2023-11-21T13:38:12.165Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1700573892.1732032,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"4.1 MB","took":0.07412492}
{"level":"info","ts":1700573892.1735168,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/srv/data/etcd-snapshot.db"}
Snapshot saved at /srv/data/etcd-snapshot.db
root@master1:~# ls /srv/data/
etcd-snapshot.db
 
Restore
 
root@master1:~# sudo etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot restore /srv/data/etcd-snapshot.db
{"level":"info","ts":1700574115.1576464,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"/srv/data/etcd-snapshot.db","wal-dir":"default.etcd/member/wal","data-dir":"default.etcd","snap-dir":"default.etcd/member/snap"}
{"level":"info","ts":1700574115.1802545,"caller":"mvcc/kvstore.go:380","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":75402}
{"level":"info","ts":1700574115.1919398,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"0","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":1700574115.200254,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"/srv/data/etcd-snapshot.db","wal-dir":"default.etcd/member/wal","data-dir":"default.etcd","snap-dir":"default.etcd/member/snap"}
 
 
 
 
 
Backup:
# If you run export ETCDCTL_API=3 you only set it once; if you instead write ETCDCTL_API=3 inline, it must be prefixed to every etcdctl command below.
# If you get "permission denied", you lack privileges; prefix the command with sudo.
student@node-1:~$ export ETCDCTL_API=3
student@node-1:~$ sudo ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN000601/ca.crt --cert=/opt/KUIN000601/etcd-client.crt --key=/opt/KUIN000601/etcd-client.key snapshot save /srv/data/etcd-snapshot.db
Restore:
# export is a shell builtin, so it is not run with sudo; sudo is still needed on the etcdctl command itself.
student@node-1:~$ export ETCDCTL_API=3
student@node-1:~$ sudo ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN000601/ca.crt --cert=/opt/KUIN000601/etcd-client.crt --key=/opt/KUIN000601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db
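To sanity-check a snapshot file before or after restoring (optional; the path is the one used in the backup step above, and --write-out=table is just formatting):

student@node-1:~$ sudo ETCDCTL_API=3 etcdctl --write-out=table snapshot status /srv/data/etcd-snapshot.db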

NetworkPolicy — search the official docs for network-policy: https://kubernetes.io/docs/concepts/services-networking/network-policies/

In the existing namespace my-app, create a NetworkPolicy named test-network-policy. Make sure the policy allows Pods in the my-app namespace to connect to port 9000 of Pods in the namespace named echo, does not allow access to Pods that are not listening on port 9000, and does not allow access from Pods that are not in the my-app namespace.

 

root@master1:/home/chenxi# kubectl create ns echo    # create the namespace
namespace/echo created
root@master1:/home/chenxi# kubectl label ns echo project=echo    # label the namespace
namespace/echo labeled
root@master1:/home/chenxi# kubectl create ns my-app
namespace/my-app created
root@master1:/home/chenxi# cat 1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: my-app
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
    - Ingress # inbound traffic
  ingress:
    - from:
        - namespaceSelector: # select the source namespaces
            matchLabels:
              project: echo  # match the namespace by this label
      ports:  # allowed ports
        - protocol: TCP
          port: 9000
root@master1:/home/chenxi# kubectl apply -f 1.yaml
networkpolicy.networking.k8s.io/test-network-policy created
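A quick way to read the policy back and confirm the namespace selector and port (resource names as created above):

root@master1:/home/chenxi# kubectl describe networkpolicy test-network-policy -n my-app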

   

Question 6: Expose an application with a Service. Reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/
Reconfigure the existing deployment front-end: add a port configuration named http to the container called nginx, exposing port 80. Then create a Service named front-end-svc that exposes the deployment's http port, with Service type NodePort.
kubectl config use-context k8s    # switch cluster
root@master1:/home/chenxi# cat 2.yaml
apiVersion: apps/v1  # API version
kind: Deployment   # resource type
metadata:   # metadata
  name: front-end   # name of the Deployment
  namespace: default  # namespace it lives in
  labels: # labels
    dev: deployment-test
spec: # desired state of the Deployment
  minReadySeconds: 4 # seconds a new Pod must be ready before it counts as available
  revisionHistoryLimit: 5 # number of old revisions to keep
  replicas: 3 # number of Pods
  strategy: # update strategy
    rollingUpdate: # rolling-update settings
      maxSurge: 2 # how many Pods may be created above the desired count
      maxUnavailable: 1 # maximum number of unavailable Pods
  selector: # label selector
    matchLabels: # labels to select on
      dev: deployment-test # label key and value
  template: # Pod template
    metadata: # metadata
      labels: # labels
        dev: deployment-test # label key and value
    spec: # desired state of the Pod
      containers: # container definitions
      - name: web  # container name
        image: nginx:1.9.1 # image to run
        imagePullPolicy: IfNotPresent   # image pull policy
        #ports: # port settings
        #- name: web # port name
        #  containerPort: 80 # container port
root@master1:/home/chenxi# kubectl apply -f 2.yaml
deployment.apps/front-end created
root@master1:/home/chenxi# kubectl edit deployment front-end
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
  creationTimestamp: "2023-11-22T11:08:08Z"
  generation: 1
  labels:
    dev: deployment-test
  name: front-end
  namespace: default
  resourceVersion: "101920"
  uid: cbb54632-3f1f-4752-8014-2e9f9470d03e
spec:
  minReadySeconds: 4
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      dev: deployment-test
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        dev: deployment-test
    spec:
      containers:
      - image: nginx:1.9.1
        imagePullPolicy: IfNotPresent
        name: web
        ports:   # <-- add this port block
        - name: http
          containerPort: 80
root@master1:/home/chenxi# kubectl expose deployment front-end --port=80 --target-port=http --type=NodePort --name=front-end-svc
service/front-end-svc exposed
root@master1:/home/chenxi# kubectl describe svc front-end-svc
Name:                     front-end-svc
Namespace:                default
Labels:                   dev=deployment-test
Annotations:              <none>
Selector:                 dev=deployment-test
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.115.119
IPs:                      10.104.115.119
Port:                     <unset>  80/TCP
TargetPort:               http/TCP
NodePort:                 <unset>  31688/TCP
Endpoints:                10.244.166.132:80,10.244.166.133:80,10.244.166.134:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
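Optionally confirm that the Service resolves the named http port to the Pods; the NodePort 31688 and the node1 hostname below come from this demo environment:

root@master1:/home/chenxi# kubectl get endpoints front-end-svc    # should list the three Pod IPs on port 80
root@master1:/home/chenxi# curl -s http://node1:31688/ | head -n 4    # assumes node1 resolves from master1, as in this lab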

  

Question 7: Ingress. Reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/
Create a new nginx Ingress resource named pong in the namespace ing-internal that exposes the service hello on port 5678 at the path /hello. Availability can be checked with curl -kL <internal_IP>/hello.
kubectl config use-context k8s    # switch cluster
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
root@master1:/home/chenxi# kubectl create ns ing-internal
namespace/ing-internal created
root@master1:/home/chenxi# cat 14.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: nginx-example
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
  namespace: ing-internal
spec:
  controller: k8s.io/ingress-nginx
root@master1:/home/chenxi# kubectl apply  -f 14.yaml
ingressclass.networking.k8s.io/nginx-example created
root@master1:/home/chenxi# cat 3.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
spec:
  ingressClassName: nginx-example
  rules:
    - http:
        paths:
          - path: /hello
            pathType: Prefix
            backend:
              service:
                name: hello
                port:
                  number: 5678
root@master1:/home/chenxi# kubectl apply  -f 3.yaml
ingress.networking.k8s.io/pong configured
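Once the ingress-nginx controller has assigned an address, the path can be tested exactly as the question suggests (the hello Service must already exist in ing-internal for the curl to return content):

root@master1:/home/chenxi# kubectl get ingress pong -n ing-internal    # wait until ADDRESS is populated
root@master1:/home/chenxi# curl -kL <internal_IP>/hello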

  

Question 8: Scale the number of Pods
 
kubectl scale deployment loadbalancer --replicas=5
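A quick check afterwards (assuming the deployment lives in the current namespace, as the question implies):

kubectl get deployment loadbalancer    # READY should reach 5/5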

  

Question 9: nodeSelector
root@master1:/home/chenxi# kubectl label nodes node1 disk=ssd
node/node1 labeled
 
root@master1:/home/chenxi# cat 18.yaml
apiVersion: v1
kind: Pod
metadata:
 name: nginx-kusc00401
spec:
 containers:
 - name: nginx
   image: nginx
 nodeSelector:
   disk: ssd #disk=ssd
root@master1:/home/chenxi# kubectl apply -f 18.yaml
pod/nginx-kusc00401 created
root@master1:/home/chenxi# kubectl get pod -owide
NAME                         READY   STATUS    RESTARTS   AGE    IP               NODE    NOMINATED NODE   READINESS GATES
demo-764c97f6fd-bjf87        1/1     Running   0          26m    10.244.166.138   node1   <none>           <none>
front-end-57596bcb76-56jfp   1/1     Running   0          136m   10.244.166.133   node1   <none>           <none>
front-end-57596bcb76-fdwnx   1/1     Running   0          136m   10.244.166.132   node1   <none>           <none>
front-end-57596bcb76-n4kgg   1/1     Running   0          136m   10.244.166.134   node1   <none>           <none>
nginx-kusc00401              1/1     Running   0          16s    10.244.166.139   node1   <none>           <none>
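An alternative check that prints only the node the Pod was scheduled to (standard kubectl jsonpath):

root@master1:/home/chenxi# kubectl get pod nginx-kusc00401 -o jsonpath='{.spec.nodeName}{"\n"}'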

  

Count the number of Ready nodes
kubectl config use-context k8s
kubectl describe node $(kubectl get nodes|grep Ready|awk '{print $1}') |grep Taint|grep -vc NoSchedule > /opt/KUSC00402/kusc00402.txt
Or:
root@master1:/home/chenxi# kubectl get nodes | grep Ready | wc -l    # number of Ready nodes
2
root@master1:/home/chenxi# kubectl describe node | grep Taint | grep -vc NoSchedule    # nodes without a NoSchedule taint
1
Here only 1 of the 2 Ready nodes carries no NoSchedule taint, so the answer is 1.
root@master1:/home/chenxi# echo "1"> /opt/kusc00402.txt
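For reference, the same number can be produced in a single pipeline (same logic: describe only the Ready nodes and count those without a NoSchedule taint; a sketch, not required by the question):

kubectl describe node $(kubectl get nodes --no-headers | awk '$2=="Ready"{print $1}') | grep -i taints | grep -vc NoSchedule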

Create a Pod that runs multiple containers

kubectl config use-context k8s
apiVersion: v1
kind: Pod
metadata:
 name: kucc4
spec:
 containers:
 - name: nginx
   image: nginx
 - name: redis
   image: redis
 - name: memcached
   image: memcached
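Saved to a file (the file name, e.g. kucc4.yaml, is arbitrary since the question only fixes the Pod name), the manifest can be applied and checked like this:

kubectl apply -f kucc4.yaml
kubectl get pod kucc4    # READY should show 3/3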

  

 