Kubernetes Introduction and Installation - v1.26.3
Introduction to Kubernetes
Kubernetes, also known as K8s, is an open-source system for automating the deployment, scaling, and management of containerized applications.
Worker nodes host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple machines, and a cluster usually runs multiple nodes, providing fault tolerance and high availability.
Control Plane Components
Control plane components make global decisions for the cluster (for example, resource scheduling) and detect and respond to cluster events (for example, starting a new Pod when a Deployment's replicas field is unsatisfied). Control plane components can run on any node in the cluster. For simplicity, however, setup scripts typically start all control plane components on the same master node and do not run user containers on that master node.
kube-apiserver
kube-apiserver is the Kubernetes control plane component that exposes the Kubernetes API and handles incoming requests. The API server is the front end of the Kubernetes control plane. It validates and configures the data for API objects, including pods, services, replicationcontrollers, and others. The API server services REST operations and provides the front end to the cluster's shared state through which all other components interact. Its main responsibilities:
- Implements authentication based on token files, client certificates, and HTTP basic auth.
- Implements policy-based account authorization and admission control.
- Clients call the Kubernetes API remotely through the API server to dispatch management tasks such as creating, deleting, updating, and querying internal Kubernetes resources; a minimal example is shown below.
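As a quick illustration of the REST front end, kubectl can issue raw calls against the API server using the credentials from its kubeconfig (the paths below are standard core-API endpoints):
root@server-101:~# kubectl get --raw /healthz
root@server-101:~# kubectl get --raw /api/v1/namespaces/default/pods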
kube-scheduler
kube-scheduler is the control plane component that watches for newly created Pods with no assigned node and selects a node for them to run on. Factors taken into account for scheduling decisions include the resource requirements of individual Pods and Pod collections, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
For each Pod in the pending list, the scheduling algorithm selects the most suitable Node from the list of available Nodes and writes the binding information into etcd.
Stage one: predicates (filtering):
- NoDiskConflict: whether the volumes the Pod needs conflict with volumes already present on the node.
- PodFitsResources: whether the candidate node's resources satisfy the candidate Pod's requirements.
- PodSelectorMatches: whether the candidate node carries the labels specified by the candidate Pod's label selector.
- MatchInterPodAffinity: filtering based on inter-pod affinity.
- PodToleratesNodeTaints: based on the relationship between taints and tolerations, whether the Pod can be scheduled onto the node, i.e. whether the Pod tolerates the node's taints.
Stage two: priorities (scoring):
- LeastRequestedPriority: prefer the candidate node with the lowest resource consumption (CPU + memory).
- CalculateNodeLabelPriority: prefer nodes that carry the specified labels.
- BalancedResourceAllocation: prefer the candidate node with the most balanced utilization across resources.
- TaintTolerationPriority: match the Pod's toleration list against node taints when placing the Pod (see the sketch below).
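A minimal sketch of the taint/toleration interaction described above (the node name and taint key/value are illustrative assumptions):
root@server-101:~# kubectl taint nodes server-102 dedicated=infra:NoSchedule
# Pods without a matching toleration are now filtered out of server-102 at the predicate stage.
# A Pod that should still land there needs a toleration such as:
#   tolerations:
#   - key: "dedicated"
#     operator: "Equal"
#     value: "infra"
#     effect: "NoSchedule"
root@server-101:~# kubectl taint nodes server-102 dedicated=infra:NoSchedule-    # remove the taint again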
kube-controller-manager
kube-controller-manager is the control plane component that runs controller processes. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
These controllers include:
- Node controller: responsible for noticing and responding when nodes fail
- Job controller: watches Job objects that represent one-off tasks, then creates Pods to run those tasks to completion
- EndpointSlice controller: populates EndpointSlice objects (to provide the link between Services and Pods)
- ServiceAccount controller: creates the default ServiceAccount for new namespaces
kube-controller-manager implements multi-node high availability through the --leader-elect=true startup parameter, which automatically elects a leader. The mechanism is a distributed lock: whichever node acquires the lock first becomes the leader (the lock holder identity is derived from the hostname). The leader must periodically renew the lock it holds; if the renewal times out, a new leader election is triggered. The election state can be inspected as shown below.
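In this version range, leader election is backed by a Lease object by default, so the current lock holder can be inspected with kubectl:
root@server-101:~# kubectl get leases -n kube-system
root@server-101:~# kubectl describe lease kube-controller-manager -n kube-system    # holderIdentity is the current leader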
Pod high-availability mechanism (the related kube-controller-manager flags are sketched below):
- node monitor period: 5s
- node monitor grace period: 40s
- pod eviction timeout: 5m
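These intervals correspond to kube-controller-manager startup flags; on a kubeadm cluster they would be set in the static pod manifest (a sketch; note that on recent versions Pod eviction is taint-driven, so --pod-eviction-timeout may be ignored in some configurations):
# /etc/kubernetes/manifests/kube-controller-manager.yaml, under the container command:
- --node-monitor-period=5s
- --node-monitor-grace-period=40s
- --pod-eviction-timeout=5m0s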
cloud-controller-manager
A Kubernetes control plane component that embeds cloud-provider-specific control logic. The cloud controller manager lets you link your cluster to your cloud provider's API, and separates the components that interact with that cloud platform from the components that only interact with your cluster.
cloud-controller-manager only runs controllers that are specific to your cloud provider. Consequently, if you run Kubernetes in your own environment, or in a learning environment on your local machine, the cluster does not need a cloud controller manager.
Similar to kube-controller-manager, cloud-controller-manager combines several logically independent control loops into a single binary that runs as a single process. You can scale it horizontally (run more than one replica) to improve performance or fault tolerance.
The following controllers all depend on cloud platform drivers:
- Node controller: checks the cloud provider to determine whether a node has been deleted after it stops responding
- Route controller: sets up routes in the underlying cloud infrastructure
- Service controller: creates, updates, and deletes cloud provider load balancers
Node Components
A set of worker machines, called nodes, runs the containerized applications. Every cluster has at least one worker node.
kubelet
kubelet runs on every node in the cluster. It makes sure that containers are running in Pods. The kubelet takes a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.
The kubelet is the agent component running on each worker node; it watches the Pods that have been assigned to its node. Its functions include:
- Report the node's status information to the master
- Accept instructions and create the containers in Pods
- Prepare the data volumes required by Pods
- Return the running status of Pods
- Perform container health checks on the node
kube-proxy
kube-proxy is a network proxy that runs on each node in the cluster, implementing part of the Kubernetes Service concept. kube-proxy maintains network rules on the node that allow network sessions from inside or outside the cluster to communicate with Pods. If the operating system provides a packet-filtering layer, kube-proxy uses it to implement the network rules; otherwise, kube-proxy forwards the traffic itself.
IPVS is more efficient than iptables. Using IPVS mode requires the ipvsadm and ipset packages to be installed, and the ip_vs kernel module to be loaded, on the nodes that run kube-proxy. When kube-proxy starts in IPVS proxy mode, it verifies that the IPVS modules are available on the node; if they are not, kube-proxy falls back to iptables proxy mode.
In IPVS mode, kube-proxy watches Kubernetes Service and Endpoints objects, calls the host kernel's netlink interface to create IPVS rules accordingly, and periodically synchronizes the IPVS rules with the Service and Endpoints objects to keep the IPVS state consistent with the desired state. When a service is accessed, traffic is redirected to one of the backend Pods. IPVS uses a hash table as its underlying data structure and works in kernel space, which means it redirects traffic faster and performs better when synchronizing proxy rules. In addition, IPVS offers more load-balancing algorithm options, such as rr (round robin), lc (least connections), dh (destination hashing), sh (source hashing), sed (shortest expected delay), and nq (never queue). A sketch of enabling and inspecting IPVS mode follows.
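On a kubeadm cluster, the proxy mode is set in the kube-proxy ConfigMap, and the resulting rules can be inspected with ipvsadm (a sketch assuming the standard kubeadm kube-proxy DaemonSet):
root@server-101:~# kubectl edit configmap kube-proxy -n kube-system    # set mode: "ipvs"
root@server-101:~# kubectl rollout restart daemonset kube-proxy -n kube-system
root@server-101:~# ipvsadm -Ln    # list IPVS virtual services and their backend Pods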
kubectl
kubectl is a command-line client tool for managing a Kubernetes cluster. kubectl looks for a configuration file named config in the $HOME/.kube directory. You can point it at a different kubeconfig file by setting the KUBECONFIG environment variable or the --kubeconfig flag, as shown below.
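For example, with the kubeadm-generated admin kubeconfig:
root@server-101:~# kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
root@server-101:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@server-101:~# kubectl get nodes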
Container Runtime
The container runtime is the software responsible for running containers. Kubernetes supports many container runtimes, such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).
Addons
DNS
Although the other addons are not strictly required, almost all Kubernetes clusters should have cluster DNS, because many examples depend on it. Cluster DNS is a DNS server that works alongside the other DNS servers in your environment and serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS search lists. Service-name resolution can be verified as shown below.
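A quick in-cluster resolution check (a sketch; the busybox image tag is an assumption):
root@server-101:~# kubectl run dnstest -it --rm --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local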
Networking and network policy
- Calico is a networking and network policy provider. Calico supports a flexible set of networking options, so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, Pods, and (when using Istio and Envoy) applications at the service mesh layer.
- Flannel is an overlay network provider that can be used with Kubernetes.
Dashboard
Dashboard is a general-purpose, web-based user interface for Kubernetes clusters. It lets users manage the applications running in the cluster, manage the cluster itself, and troubleshoot problems. The Dashboard can show an overview of the applications running in the cluster, create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on), scale a Deployment, initiate a rolling update, delete Pods, or deploy a new application with a wizard.
Kubernetes Installation
Installing containerd
https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/#containerd
Kubernetes releases before v1.24 included a direct integration with Docker Engine through a component named dockershim. As of v1.24, dockershim has been removed from the Kubernetes project. containerd is the recommended container runtime for Kubernetes. Note that the kubelet and containerd cgroup drivers must match; if the init system is systemd, systemd is the recommended cgroup driver, as in the sketch below.
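The two settings that must agree (a sketch; kubeadm has defaulted the kubelet to the systemd cgroup driver since v1.22):
# containerd side, /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
# kubelet side, KubeletConfiguration:
cgroupDriver: systemd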
Install with apt/yum
Check repository versions
# Aliyun mirrors: https://developer.aliyun.com/mirror/?serviceType=&tag=&keyword=ubuntu
root@server-101:~# apt-cache madison containerd
containerd | 1.6.12-0ubuntu1~22.04.1 | http://mirrors.aliyun.com/ubuntu jammy-updates/main amd64 Packages
containerd | 1.5.9-0ubuntu3.1 | http://mirrors.aliyun.com/ubuntu jammy-security/main amd64 Packages
containerd | 1.5.9-0ubuntu3 | http://mirrors.aliyun.com/ubuntu jammy/main amd64 Packages
Install containerd
root@server-101:~# apt install containerd=1.6.12-0ubuntu1~22.04.1
Inspect the service file
root@server-101:~# cat /lib/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
Verify the runc environment
root@server-101:~# which runc
/usr/sbin/runc
root@server-101:~# runc -v
runc version 1.1.4-0ubuntu1~22.04.1
spec: 1.0.2-dev
go: go1.18.1
libseccomp: 2.5.3
root@server-101:~# which containerd
/usr/bin/containerd
root@server-101:~# containerd -v
containerd github.com/containerd/containerd 1.6.12-0ubuntu1~22.04.1
Generate the containerd configuration file
root@server-101:~# containerd --help | grep path
--config value, -c value path to the configuration file (default: "/etc/containerd/config.toml")
root@server-101:~# mkdir -p /etc/containerd
root@server-101:~# containerd config default > /etc/containerd/config.toml
root@server-101:~# systemctl enable --now containerd
# /etc/containerd/config.toml
root@server-101:~# containerd config default
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2
[cgroup]
path = ""
[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0
[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_ca = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0
[metrics]
address = ""
grpc_histogram = false
[plugins]
[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = false
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
enable_selinux = false
enable_tls_streaming = false
enable_unprivileged_icmp = false
enable_unprivileged_ports = false
ignore_image_defined_volumes = false
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = "registry.k8s.io/pause:3.6"
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = "4h0m0s"
stream_server_address = "127.0.0.1"
stream_server_port = "0"
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = ""
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
ip_pref = ""
max_conf_num = 1
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
disable_snapshot_annotations = true
discard_unpacked_layers = false
ignore_rdt_not_enabled_errors = false
no_pivot = false
snapshotter = "overlayfs"
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = false
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = "node"
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""
[plugins."io.containerd.grpc.v1.cri".registry.auths]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.headers]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""
[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"
[plugins."io.containerd.internal.v1.restart"]
interval = "10s"
[plugins."io.containerd.internal.v1.tracing"]
sampling_ratio = 1.0
service_name = "containerd"
[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"
[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false
[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false
[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]
sched_core = false
[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]
[plugins."io.containerd.service.v1.tasks-service"]
rdt_config_file = ""
[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
discard_blocks = false
fs_options = ""
fs_type = ""
pool_name = ""
root_path = ""
[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.overlayfs"]
root_path = ""
upperdir_label = false
[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""
[plugins."io.containerd.tracing.processor.v1.otlp"]
endpoint = ""
insecure = false
protocol = ""
[proxy_plugins]
[stream_processors]
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"
[timeouts]
"io.containerd.timeout.bolt.open" = "0s"
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"
[ttrpc]
address = ""
gid = 0
uid = 0
Modify the containerd configuration file
restrict_oom_score_adj = false # 60
sandbox_image = "registry.k8s.io/pause:3.6" # 61: change the sandbox image registry
selinux_category_range = 1024 # 62
[plugins."io.containerd.grpc.v1.cri".registry.mirrors] # 153
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming] # 155
ShimCgroup = ""
SystemdCgroup = true # 125: must match the kubelet cgroup driver
# https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/#cgroup-drivers
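The same edits can be applied non-interactively (a sketch; the replacement registry matches the pull scripts used later in this document):
root@server-101:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
root@server-101:~# sed -i 's#registry.k8s.io/pause:3.6#registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9#' /etc/containerd/config.toml
root@server-101:~# systemctl restart containerd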
Verify the containerd service
root@server-101:~# ctr images pull docker.io/library/hello-world:latest
root@server-101:~# ctr images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/hello-world:latest application/vnd.docker.distribution.manifest.list.v2+json sha256:4e83453afed1b4fa1a3500525091dbfca6ce1e66903fd4c01ff015dbcb1ba33e 6.9 KiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/riscv64,linux/s390x,windows/amd64 -
root@server-101:~# ctr -n k8s.io images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
root@server-101:~# ctr run -d --net-host docker.io/library/hello-world:latest test
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
Install from binaries
Install containerd
root@server-101:/opt# wget https://github.com/containerd/containerd/releases/download/v1.6.20/containerd-1.6.20-linux-amd64.tar.gz
root@server-101:/opt# tar -xvf containerd-1.6.20-linux-amd64.tar.gz
bin/
bin/containerd-shim
bin/containerd-shim-runc-v1
bin/containerd-stress
bin/containerd
bin/ctr
bin/containerd-shim-runc-v2
root@server-101:/opt# cp bin/* /usr/bin/
# Copy the service file from the apt installation of containerd, then start containerd
root@server-101:/opt# mkdir -p /etc/containerd
root@server-101:/opt# containerd config default > /etc/containerd/config.toml
root@server-101:/opt# cp containerd.service /lib/systemd/system/
root@server-101:/opt# systemctl daemon-reload
root@server-101:/opt# systemctl enable --now containerd
Modify the containerd configuration file
restrict_oom_score_adj = false # 60
sandbox_image = "registry.k8s.io/pause:3.6" # 61: change the sandbox image registry
selinux_category_range = 1024 # 62
[plugins."io.containerd.grpc.v1.cri".registry.mirrors] # 153
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming] # 155
ShimCgroup = ""
SystemdCgroup = true # 125: must match the kubelet cgroup driver
# https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/#cgroup-drivers
Install runc
root@server-101:/opt# wget https://github.com/opencontainers/runc/releases/download/v1.1.5/runc.amd64
root@server-101:/opt# chmod +x runc.amd64
root@server-101:/opt# mv runc.amd64 /usr/bin/runc
Install the CNI network plugins
root@server-101:/opt# wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
root@server-101:/opt# mkdir -p /opt/cni/bin
root@server-101:/opt# tar -xf cni-plugins-linux-amd64-v1.2.0.tgz -C /opt/cni/bin
Verify the containerd service
root@server-101:/opt# ctr images pull docker.io/library/hello-world:latest
root@server-101:/opt# ctr run -d docker.io/library/hello-world:latest test
Managing containers with nerdctl
Install nerdctl
root@server-101:~# wget https://github.com/containerd/nerdctl/releases/download/v1.3.0/nerdctl-1.3.0-linux-amd64.tar.gz
root@server-101:~# tar -xf nerdctl-1.3.0-linux-amd64.tar.gz -C /usr/bin/
Add the configuration file
root@server-101:~# nerdctl -h | grep Config
Config file ($NERDCTL_TOML): /etc/nerdctl/nerdctl.toml
root@server-101:~# mkdir -p /etc/nerdctl
root@server-101:~# cat /etc/nerdctl/nerdctl.toml
namespace = "k8s.io"
debug = false
debug_full = false
insecure_registry = true
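With namespace = "k8s.io", nerdctl operates in the same containerd namespace that the Kubernetes CRI integration uses, so it sees the images and containers created for Pods (equivalent to passing -n k8s.io to ctr):
root@server-101:~# nerdctl ps -a
root@server-101:~# nerdctl images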
Test nerdctl
root@server-101:~# nerdctl run docker.io/library/hello-world:latest
Hello from Docker!
...
Overriding the sandbox (pause) image
// TODO
Kubernetes Environment Preparation
System tuning
Disable swap
root@server-101:~# swapoff -a
root@server-101:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/aa81c44e-ec1a-44e3-bf85-b372061fa3ab / ext4 defaults 0 1
# /swap.img none swap sw 0 0
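The swap entry can also be commented out in place (a sketch; this assumes every fstab line mentioning swap should be disabled):
root@server-101:~# sed -i '/swap/ s/^/#/' /etc/fstab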
Load the required kernel modules
root@server-101:~# vim /etc/modules-load.d/modules.conf
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
ip_vs
ip_vs_lc
ip_vs_lblc
ip_vs_lblcr
ip_vs_rr
ip_vs_wrr
ip_vs_sh
ip_vs_dh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_tables
ip_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
xt_set
br_netfilter
nf_conntrack
overlay
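To load the modules immediately without rebooting (systemd-modules-load reads /etc/modules-load.d/ at boot):
root@server-101:~# systemctl restart systemd-modules-load.service
root@server-101:~# lsmod | grep -e ip_vs -e br_netfilter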
Adjust kernel parameters
root@server-101:~# vim /etc/sysctl.conf
net.ipv4.ip_forward=1
vm.max_map_count=262144
kernel.pid_max=4194303
fs.file-max=1000000
net.ipv4.tcp_max_tw_buckets=6000
net.netfilter.nf_conntrack_max=2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
Adjust the resource limits configuration
root@server-101:~# vim /etc/security/limits.conf
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
* hard msgqueue 8192000
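The new limits apply to fresh login sessions; after re-login they can be checked with the shell's ulimit builtin:
root@server-101:~# ulimit -n    # open files (nofile)
root@server-101:~# ulimit -u    # max user processes (nproc)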
Reboot and verify
root@server-101:~# lsmod | grep br_netfilter
br_netfilter 32768 0
bridge 307200 1 br_netfilter
root@server-101:~# lsmod | grep nf_conntrack
nf_conntrack 172032 2 nf_nat,ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 5 nf_conntrack,nf_nat,btrfs,raid456,ip_vs
root@server-101:~# sysctl -p
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
kernel.pid_max = 4194303
fs.file-max = 1000000
net.ipv4.tcp_max_tw_buckets = 6000
net.netfilter.nf_conntrack_max = 2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
Deploying Kubernetes with kubeadm
Configure the package repository
# Aliyun mirrors: https://developer.aliyun.com/mirror/?serviceType=&tag=&keyword=kubernetes
root@server-101:~# apt-get update && apt-get install -y apt-transport-https
root@server-101:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
root@server-101:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
root@server-101:~# apt-get update
Install the tool packages
# The kubeadm, kubectl, and kubelet versions should stay in sync
root@server-101:~# apt-cache madison kubeadm | head -5
kubeadm | 1.27.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.27.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
root@server-101:~# apt install kubeadm=1.26.3-00 kubectl=1.26.3-00 kubelet=1.26.3-00
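Optionally pin the versions so routine apt upgrades do not move the cluster tooling (holding these packages is recommended by the official install docs):
root@server-101:~# apt-mark hold kubeadm kubectl kubelet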
Download the Kubernetes images
root@server-101:~# kubeadm config images list --kubernetes-version v1.26.3
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
root@server-101:~# kubeadm config images list --kubernetes-version v1.26.3 > images-down.sh
# Edit the script: replace the registry with the Aliyun mirror and prefix each image with "nerdctl pull"
root@server-101:~# cat images-down.sh
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
root@server-101:~# sh images-down.sh
# Or:
root@server-101:~# kubeadm config images pull --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers" --kubernetes-version=v1.26.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
Initialize the cluster
root@server-101:~# kubeadm init --apiserver-advertise-address=172.31.7.101 --apiserver-bind-port=6443 --kubernetes-version=v1.26.3 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=cluster.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
Set up kubectl credentials
root@server-101:~# mkdir -p $HOME/.kube
root@server-101:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@server-101:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check status
root@server-101:~# kubectl get node
NAME STATUS ROLES AGE VERSION
server-101 Ready control-plane 4m23s v1.26.3
root@server-101:~# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-567c556887-56d5t 1/1 Running 0 4m20s
coredns-567c556887-j9pgs 1/1 Running 0 4m20s
etcd-server-101 1/1 Running 0 4m33s
kube-apiserver-server-101 1/1 Running 0 4m33s
kube-controller-manager-server-101 1/1 Running 0 4m34s
kube-proxy-nrpr9 1/1 Running 0 4m19s
kube-scheduler-server-101 1/1 Running 0 4m33s
Add worker nodes
Prepare the nodes
# Install containerd on each node and repeat the steps in 2.1, 2.2.1, and 2.2.2
Get the join command on the master node
root@server-101:~# kubeadm token create --print-join-command
kubeadm join 172.31.7.101:6443 --token 1kozut.ae84vnzwdjnul2tk --discovery-token-ca-cert-hash sha256:ca06d710e4ce8bc3cb5320e7984552a3bb8679ff97fd3ccb32e11edf81b695e0
Run the join command on each node
root@server-102:~# kubeadm join 172.31.7.101:6443 --token 1kozut.ae84vnzwdjnul2tk --discovery-token-ca-cert-hash sha256:ca06d710e4ce8bc3cb5320e7984552a3bb8679ff97fd3ccb32e11edf81b695e0
root@server-103:~# kubeadm join 172.31.7.101:6443 --token 1kozut.ae84vnzwdjnul2tk --discovery-token-ca-cert-hash sha256:ca06d710e4ce8bc3cb5320e7984552a3bb8679ff97fd3ccb32e11edf81b695e0
Check status
root@server-101:~# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
server-101 Ready control-plane 24m v1.26.3 172.31.7.101 <none> Ubuntu 22.04.2 LTS 5.15.0-69-generic containerd://1.6.20
server-102 NotReady <none> 4m20s v1.26.3 172.31.7.102 <none> Ubuntu 22.04.2 LTS 5.15.0-69-generic containerd://1.6.20
server-103 NotReady <none> 3m53s v1.26.3 172.31.7.103 <none> Ubuntu 22.04.2 LTS 5.15.0-69-generic containerd://1.6.20
Deploy a network plugin
https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/addons/
Flannel network plugin
# Reference: https://github.com/flannel-io/flannel#deploying-flannel-with-kubectl
root@server-101:~# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# Set the custom pod CIDR (line 91 of kube-flannel.yml)
"Network": "10.100.0.0/16", # 91
# Deploy the network plugin
root@server-101:~# kubectl apply -f kube-flannel.yml
root@server-101:~# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
server-101 Ready control-plane 36m v1.26.3 172.31.7.101 <none> Ubuntu 22.04.2 LTS 5.15.0-69-generic containerd://1.6.20
server-102 Ready <none> 16m v1.26.3 172.31.7.102 <none> Ubuntu 22.04.2 LTS 5.15.0-69-generic containerd://1.6.20
server-103 Ready <none> 15m v1.26.3 172.31.7.103 <none> Ubuntu 22.04.2 LTS 5.15.0-69-generic containerd://1.6.20
root@server-101:~# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-567c556887-56d5t 1/1 Running 0 36m
coredns-567c556887-j9pgs 1/1 Running 0 36m
etcd-server-101 1/1 Running 0 36m
kube-apiserver-server-101 1/1 Running 0 36m
kube-controller-manager-server-101 1/1 Running 0 36m
kube-proxy-nrpr9 1/1 Running 0 36m
kube-proxy-z4d97 1/1 Running 0 16m
kube-proxy-zjrrf 1/1 Running 0 16m
kube-scheduler-server-101 1/1 Running 0 36m
Calico network plugin
# Reference: https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico
root@server-101:~# wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico.yaml
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "10.100.0.0/16" # 4605 修改自定义podCIDR
# Pin the BGP session to the IP of a specific interface; the default is the server's first-found NIC. See https://projectcalico.docs.tigera.io/reference/node/configuration
- name: IP_AUTODETECTION_METHOD
value: "interface=ens33" # 4570
# Deploy the network plugin
root@server-101:~# kubectl apply -f calico.yaml
root@server-101:~# kubectl get node
NAME STATUS ROLES AGE VERSION
server-101 Ready control-plane 40m v1.26.3
server-102 Ready <none> 20m v1.26.3
server-103 Ready <none> 19m v1.26.3
root@server-101:~# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5857bf8d58-9bgfc 1/1 Running 0 70m
calico-node-4l6sm 1/1 Running 0 70m
calico-node-ghwjj 1/1 Running 0 70m
calico-node-gjj9h 1/1 Running 0 70m
coredns-567c556887-56d5t 1/1 Running 0 106m
coredns-567c556887-j9pgs 1/1 Running 0 106m
etcd-server-101 1/1 Running 0 106m
kube-apiserver-server-101 1/1 Running 0 106m
kube-controller-manager-server-101 1/1 Running 0 106m
kube-proxy-nrpr9 1/1 Running 0 106m
kube-proxy-z4d97 1/1 Running 0 86m
kube-proxy-zjrrf 1/1 Running 0 85m
kube-scheduler-server-101 1/1 Running 0 106m
Test Kubernetes cluster services
Create a namespace and Pods
# Create the myserver namespace
root@server-101:~/nginx-tomcat-case# kubectl create namespace myserver
namespace/myserver created
root@server-101:~/nginx-tomcat-case# kubectl apply -f nginx.yaml -f tomcat.yaml
Check status
root@server-101:~/nginx-tomcat-case# kubectl get pod -o wide -n myserver
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myserver-nginx-deployment-596d5d9799-b4xfn 1/1 Running 0 69s 10.100.242.1 server-103 <none> <none>
myserver-tomcat-app1-deployment-6bb596979f-p92l4 1/1 Running 0 69s 10.100.121.129 server-102 <none> <none>
root@server-101:~/nginx-tomcat-case# kubectl get service -o wide -n myserver
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
myserver-nginx-service NodePort 10.200.214.224 <none> 80:30004/TCP,443:30443/TCP 75s app=myserver-nginx-selector
myserver-tomcat-app1-service NodePort 10.200.173.181 <none> 80:30005/TCP 75s app=myserver-tomcat-app1-selector
Test access
root@server-101:~# curl -kv 172.31.7.102:30005
root@server-101:~# curl -kv 172.31.7.103:30004
nginx.yaml
# nginx.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: myserver-nginx-deployment-label
name: myserver-nginx-deployment
namespace: myserver
spec:
replicas: 1
selector:
matchLabels:
app: myserver-nginx-selector
template:
metadata:
labels:
app: myserver-nginx-selector
spec:
containers:
- name: myserver-nginx-container
image: nginx
#command: ["/apps/tomcat/bin/run_tomcat.sh"]
#imagePullPolicy: IfNotPresent
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
name: http
- containerPort: 443
protocol: TCP
name: https
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
---
kind: Service
apiVersion: v1
metadata:
labels:
app: myserver-nginx-service-label
name: myserver-nginx-service
namespace: myserver
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
nodePort: 30004
- name: https
port: 443
protocol: TCP
targetPort: 443
nodePort: 30443
selector:
app: myserver-nginx-selector
tomcat.yaml
# tomcat.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: myserver-tomcat-app1-deployment-label
name: myserver-tomcat-app1-deployment
namespace: myserver
spec:
replicas: 1
selector:
matchLabels:
app: myserver-tomcat-app1-selector
template:
metadata:
labels:
app: myserver-tomcat-app1-selector
spec:
containers:
- name: myserver-tomcat-app1-container
image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
#command: ["/apps/tomcat/bin/run_tomcat.sh"]
#imagePullPolicy: IfNotPresent
imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
# resources:
# limits:
# cpu: 2
# memory: 2Gi
# requests:
# cpu: 500m
# memory: 1Gi
---
kind: Service
apiVersion: v1
metadata:
labels:
app: myserver-tomcat-app1-service-label
name: myserver-tomcat-app1-service
namespace: myserver
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
nodePort: 30005
selector:
app: myserver-tomcat-app1-selector
Official Kubernetes Dashboard
Deploy the dashboard
# https://kubernetes.io/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard/
# https://kuboard.cn/
# https://kubesphere.io/zh/
root@server-101:~/dashboard-v2.7.0# kubectl apply -f dashboard-v2.7.0.yaml -f admin-user.yaml -f admin-secret.yaml
Check status
root@server-101:~/dashboard-v2.7.0# kubectl get pod -o wide -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dashboard-metrics-scraper-7bc864c59-7gj99 1/1 Running 0 5m7s 10.100.121.130 server-102 <none> <none>
kubernetes-dashboard-6c7ccbcf87-hwstt 1/1 Running 0 5m7s 10.100.242.2 server-103 <none> <none>
root@server-101:~/dashboard-v2.7.0# kubectl get svc -o wide -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dashboard-metrics-scraper ClusterIP 10.200.41.128 <none> 8000/TCP 5m18s k8s-app=dashboard-metrics-scraper
kubernetes-dashboard NodePort 10.200.49.131 <none> 443:30000/TCP 5m18s k8s-app=kubernetes-dashboard
Get the login token
root@server-101:~/dashboard-v2.7.0# kubectl get secret -A | grep -e admin -e NAMESPACE
NAMESPACE NAME TYPE DATA AGE
kubernetes-dashboard dashboard-admin-user kubernetes.io/service-account-token 3 4m1s
root@server-101:~/dashboard-v2.7.0# kubectl describe secret -n kubernetes-dashboard dashboard-admin-user
Name: dashboard-admin-user
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 7b8bd13b-729a-41f6-a45a-c36a9a372404
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1099 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjdfYkQwLUZkWDd4MEZvbF9TU0lrM0didGpNZTVRMTFRMlZ0eWpLNGZWZVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2I4YmQxM2ItNzI5YS00MWY2LWE0NWEtYzM2YTlhMzcyNDA0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmFkbWluLXVzZXIifQ.Ra6EGDeNSjnjUn9Iml7JXRpYpqzSbygQMX5WGO1dRumcmJCWI9egkqpiz7-_H6UxvsN4LFvUOqGi3Zozc29T5386_xCCoJCTimcWWNi7bZt0rHJl_sHQdkIhP24Y9fapz3hFEAzLarKLrIBf-xTN-HA1BfX8xhRNnTVY9odo5uRYrKzNML_1rECvhorNG3lwJ-PvqsCZJwyDRpmONhc1cBpPfdjBmxHSeeozRzPZNRQ6oHOrFR90REUrJjE77vAuCwjR8isNq-ri6AJc0WEG9KVcWjRpTlogwqexqN2wpsAfzrTNcpHuzGJS0mWH-8-wL88zfPGHi2rBSzK6zFBjEg
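On v1.24 and later, a short-lived token can also be minted directly for the ServiceAccount without creating a Secret, using kubectl's built-in TokenRequest support:
root@server-101:~/dashboard-v2.7.0# kubectl -n kubernetes-dashboard create token admin-user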
Access the dashboard
# Open in a browser
# https://server-103:30000
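Alternatively, the dashboard Service can be reached without the NodePort by tunneling through the API server (a sketch; port-forward binds to localhost on the machine where it runs):
root@server-101:~# kubectl port-forward -n kubernetes-dashboard svc/kubernetes-dashboard 8443:443
# then browse to https://localhost:8443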
dashboard-v2.7.0.yaml
# dashboard-v2.7.0.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 30000
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.8
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
admin-user.yaml
# admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
admin-secret.yaml
# admin-secret.yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: dashboard-admin-user
namespace: kubernetes-dashboard
annotations:
kubernetes.io/service-account.name: "admin-user"