Kubernetes (Part 1)
1. Kubernetes component overview
Official docs: https://kubernetes.io/zh-cn/docs/concepts/overview/components/

Kubernetes components fall into three groups: control plane components, node components, and addons.
Control Plane Components
https://kubernetes.io/zh-cn/docs/concepts/overview/components/#control-plane-components
Control plane components make global decisions for the cluster (for example, scheduling), and detect and respond to cluster events (for example, starting a new pod when a Deployment's replicas field is not satisfied).
Control plane components can run on any node in the cluster. For simplicity, however, setup scripts typically start all control plane components on the same machine and do not run user containers on that machine.
kube-apiserver
https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/
The Kubernetes API server exposes HTTPS REST endpoints for create/read/update/delete and watch operations on Kubernetes resource objects such as pods, services, and replicationcontrollers. The API server services REST operations and provides the frontend to the cluster's shared state, through which all other components interact.
- The secure port defaults to 6443 and can be changed with the --secure-port startup flag
- The bind address defaults to a non-localhost address and is set with the --bind-address startup flag
- This port receives external HTTPS requests from clients, the dashboard, and so on
- It performs authentication based on token files, client certificates, or HTTP basic auth
- It performs policy-based authorization
etcd
https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/
A consistent and highly available key-value store used as the backing database for all Kubernetes cluster data. etcd supports clustering; in production, provide a regular backup mechanism for etcd data.

kube-scheduler
The Kubernetes scheduler is a control plane process that assigns Pods to nodes. Using its scheduling algorithm, it selects the most suitable node from the available node list for each Pod in the pending list and writes the binding into etcd.
The kubelet on the node watches the API server for the binding information produced by the scheduler, fetches the corresponding Pod manifest, pulls the image, and starts the container.
Factors considered in scheduling decisions include individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
Strategies:
- Prefer the candidate node with the lowest resource consumption (CPU + memory)
- Prefer nodes carrying a specified label
- Prefer the candidate node whose resource usage is most evenly balanced
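The "lowest resource consumption" strategy above can be sketched as a toy scoring pass. The node names and usage percentages below are made up for illustration; the real scheduler uses scoring plugins (e.g. NodeResourcesFit), not this script.

```shell
# Hypothetical node data: name cpu_used mem_used (percent)
nodes="node1 60 70
node2 30 20
node3 50 40"

# Score = cpu_used + mem_used; pick the node with the lowest total.
best_node=$(echo "$nodes" | awk '{print $2+$3, $1}' | sort -n | head -1 | awk '{print $2}')
echo "$best_node"
```

With these numbers, node2 (30+20=50) wins over node3 (90) and node1 (130).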
kube-controller-manager
https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/
kube-controller-manager is the control plane component that runs controller processes.
The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes. In robotics and automation, a control loop is a non-terminating loop that regulates the state of a system. In Kubernetes, each controller is a control loop that watches the shared state of the cluster through the API server and makes changes attempting to move the current state toward the desired state.
The controller manager bundles several sub-controllers (replication controller, node controller, namespace controller, service account controller, and so on). As the cluster's internal management and control center, it manages Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a node goes down unexpectedly, the controller manager detects it promptly and runs an automated repair flow, keeping the cluster's pod replicas in the desired state.
cloud-controller-manager
A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API and separates the components that interact with that cloud platform from those that only interact with your cluster. cloud-controller-manager runs only controllers that are specific to your cloud provider, so if you run Kubernetes on-premises, or in a learning environment on your own machine, the cluster does not need a cloud controller manager.
Like kube-controller-manager, cloud-controller-manager combines several logically independent control loops into a single binary that you run as a single process. You can scale it horizontally (run more than one replica) to improve performance or fault tolerance.
The following controllers all depend on the cloud provider:
- Node controller: checks the cloud provider to determine whether a node has been deleted after it stops responding
- Route controller: sets up routes in the underlying cloud infrastructure
- Service controller: creates, updates, and deletes cloud provider load balancers
Node Components
https://kubernetes.io/zh-cn/docs/concepts/overview/components/#node-components
Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.
kubelet
The kubelet runs on every node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs provided through various mechanisms and ensures that the containers described in them are running and healthy. The kubelet does not manage containers that were not created by Kubernetes. Its responsibilities:
- Report the node's status to the master
- Receive instructions and create containers in Pods
- Prepare the data volumes a pod needs
- Report pod status
- Run container health checks on the node
kube-proxy
kube-proxy is a network proxy that runs on each node in the cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on the node; these rules allow network sessions from inside or outside the cluster to communicate with Pods.
If the operating system provides a packet filtering layer, kube-proxy uses it to implement the rules; otherwise kube-proxy forwards the traffic itself.
- The proxy reflects the Services defined in the Kubernetes API on each node and can perform simple TCP, UDP, and SCTP stream forwarding, or round-robin TCP, UDP, and SCTP forwarding across a set of backends. You configure the proxy by creating a Service through the apiserver; in practice kube-proxy implements Service access by maintaining network rules on the host and performing connection forwarding.
- kube-proxy runs on every node, watches the API server for changes to Service objects, and translates them into iptables or IPVS rules for forwarding.
- Depending on the version, kube-proxy supports three modes:
  - UserSpace: used before Kubernetes 1.1; deprecated since 1.2
  - iptables: supported since 1.1, the default since 1.2
  - IPVS: introduced in 1.9, stable in 1.11; requires the ipvsadm and ipset packages and the ip_vs kernel module
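A rough way to check the IPVS prerequisite mentioned above is to look for the ip_vs module in `lsmod`. The snippet mocks the lsmod output so the logic can be tested; on a real node you would pipe `lsmod` itself.

```shell
# Mocked `lsmod` output (hypothetical; replace with: lsmod)
mock_lsmod="Module                  Size  Used by
ip_vs                 155648  0
nf_conntrack          139264  1 ip_vs"

# If ip_vs is loaded, kube-proxy could run in IPVS mode; otherwise fall back to iptables.
if echo "$mock_lsmod" | awk '{print $1}' | grep -qx ip_vs; then
  proxy_mode="ipvs"
else
  proxy_mode="iptables"
fi
echo "$proxy_mode"
```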
Container Runtime
The container runtime is the software responsible for running containers.
Kubernetes supports many container runtimes, such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).
Addons
https://kubernetes.io/zh-cn/docs/concepts/overview/components/#addons
Addons use Kubernetes resources (DaemonSet, Deployment, and so on) to implement cluster features. Because they provide cluster-level features, namespaced addon resources belong in the kube-system namespace.
DNS
Cluster DNS is a DNS server that works alongside the other DNS servers in your environment and serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS search list.
Dashboard
The Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It lets users manage and troubleshoot both the applications running in the cluster and the cluster itself.
Container resource monitoring
Container resource monitoring records common time-series metrics about containers in a central database and provides a UI for browsing that data.
Cluster-level logging
A cluster-level logging mechanism saves container logs to a central log store that provides a search and browsing interface.
kubectl
https://kubernetes.io/zh-cn/docs/reference/kubectl/cheatsheet/
kubectl is a command-line client tool for managing Kubernetes clusters.
2. Installing and using containerd
Installing containerd from the binary release
# Install containerd
cd /opt
wget https://github.com/containerd/containerd/releases/download/v1.6.10/containerd-1.6.10-linux-amd64.tar.gz
tar -xvf containerd-1.6.10-linux-amd64.tar.gz -C /usr/local

## Set up the systemd unit
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
cp containerd.service /lib/systemd/system/containerd.service
systemctl daemon-reload
systemctl enable --now containerd

## Use a domestic registry mirror, enable the systemd cgroup driver, add a Docker Hub mirror
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#g' /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sed -ri '153a\\t[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\
\n\t\tendpoint=["https://9916w1ow.mirror.aliyuncs.com"]' \
/etc/containerd/config.toml
systemctl restart containerd

# Install runc
wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc

# Install the CNI plugins
wget https://github.com/containernetworking/plugins/releases/download/v1.1.0/cni-plugins-linux-amd64-v1.1.0.tgz
mkdir -p /opt/cni/bin
tar -xvf cni-plugins-linux-amd64-v1.1.0.tgz -C /opt/cni/bin
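The two sed substitutions above can be sanity-checked without touching a real node by running them against a minimal sample of config.toml (the two lines below are a made-up excerpt; the real file comes from `containerd config default`):

```shell
# Write a tiny sample containing the lines the seds target
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
sandbox_image = "registry.k8s.io/pause:3.6"
SystemdCgroup = false
EOF

# Run the same substitutions used above
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#g' "$tmp"
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' "$tmp"

# Both edits should now be in place
grep -q 'pause:3.7' "$tmp" && grep -q 'SystemdCgroup = true' "$tmp" && echo OK
```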
Check the version
[root@containerd opt]#ctr version
Client:
  Version:  v1.6.10
  Revision: 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
  Go version: go1.18.8

Server:
  Version:  v1.6.10
  Revision: 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
  UUID: 5507c691-b7c5-4c2f-b6be-564258798922
Pull an image
[root@containerd opt]#ctr images pull docker.io/library/alpine:latest
docker.io/library/alpine:latest:                                                  resolved
index-sha256:8914eb54f968791faf6a8638949e480fef81e697984fba772b3976835194c6d4:    done
manifest-sha256:c0d488a800e4127c334ad20d61d7bc21b4097540327217dfab52262adc02380c: done
layer-sha256:c158987b05517b6f2c5913f3acef1f2182a32345a304fe357e3ace5fadcad715:    done
config-sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da:   done
elapsed: 21.5s  total: 3.0 Mi (143.0 KiB/s)
unpacking linux/amd64 sha256:8914eb54f968791faf6a8638949e480fef81e697984fba772b3976835194c6d4...
done: 159.250483ms

# Run a container
[root@containerd opt]#ctr run -t --net-host docker.io/library/alpine:latest container1 sh
/ # ping www.baidu.com
PING www.baidu.com (183.232.231.174): 56 data bytes
64 bytes from 183.232.231.174: seq=0 ttl=128 time=57.729 ms
64 bytes from 183.232.231.174: seq=1 ttl=128 time=95.215 ms
64 bytes from 183.232.231.174: seq=2 ttl=128 time=79.312 ms
The containerd.service unit file
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
Using containerd
ctr commands
ctr is containerd's default client tool.
# Check the containerd version
[root@containerd opt]#ctr -v
ctr github.com/containerd/containerd v1.6.10
- List images
  ctr images list   (or: ctr i ls)
  # If the images live in a non-default namespace, specify it
  ~]# ctr namespaces list   (or: ctr ns list)
  NAME   LABELS
  k8s.io
  ~]# ctr -n k8s.io images list
- Tag an image
  ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
- Delete an image
  ctr -n k8s.io images rm k8s.gcr.io/pause:3.2
- Pull an image
  ctr -n k8s.io images pull -k k8s.gcr.io/pause:3.2
- Export an image
  ctr -n k8s.io images export pause.tar k8s.gcr.io/pause:3.2
- Import an image
  # build and commit of images are not supported
  ctr -n k8s.io i import pause.tar
- Run a container
  ctr -n k8s.io run --null-io --net-host -d \
    --env PASSWORD=$drone_password \
    --mount type=bind,src=/etc,dst=/host-etc,options=rbind:rw \
    --mount type=bind,src=/root/.kube,dst=/root/.kube,options=rbind:rw \
    $image sysreport bash /sysreport/run.sh
  # --null-io: redirect the container's stdout to /dev/null
  # --net-host: use the host network
  # -d: return to the shell once the task starts; without it, ctr waits for input and attaches it to the container
- List containers
  ctr containers list   (or: ctr c ls)
  # Specify the namespace if needed
  ctr -n k8s.io c ls
- List tasks (running containers)
  ctr -n k8s.io tasks list
- Stop a container
  ctr -n k8s.io tasks kill -a -s 9 {id}
- Delete a container
  ctr -n k8s.io containers rm {container}
nerdctl commands
nerdctl is the recommended client: its syntax matches the docker CLI, giving the same experience as docker, and in addition it
- is containerd-namespace aware; nerdctl can manage not only Docker-style containers but also local Kubernetes pods
- can convert Docker Image Manifest images to OCI and eStargz images
- supports OCIcrypt (image encryption)
- supports docker-compose workflows (nerdctl compose up)
Install nerdctl
https://github.com/containerd/nerdctl
# Download
wget https://github.com/containerd/nerdctl/releases/download/v1.0.0/nerdctl-1.0.0-linux-amd64.tar.gz
tar -xvf nerdctl-1.0.0-linux-amd64.tar.gz
cp nerdctl /usr/bin/

Check the version
[root@containerd opt]#nerdctl version
WARN[0000] unable to determine buildctl version: exec: "buildctl": executable file not found in $PATH
Client:
 Version:	v1.0.0
 OS/Arch:	linux/amd64
 Git commit:	c00780a1f5b905b09812722459c54936c9e070e6
 buildctl:
  Version:

Server:
 containerd:
  Version:	v1.6.10
  GitCommit:	770bd0108c32f3fb5c73ae1264f7e503fe7b2661
 runc:
  Version:	1.1.4
  GitCommit:	v1.1.4-0-g5fd4c4d1
Create and run an nginx container
Pull the image
[root@containerd opt]#nerdctl pull nginx
docker.io/library/nginx:latest:                                                   resolved
index-sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286:    done
manifest-sha256:9a821cadb1b13cb782ec66445325045b2213459008a41c72d8d87cde94b33c8c: done
config-sha256:1403e55ab369cd1c8039c34e6b4d47ca40bbde39c371254c7cba14756f472f52:   done
layer-sha256:5f63362a3fa390a685ae42e1936feeca3e4fba185bdc46fb66cf184036611f7d:    done
layer-sha256:3f4ca61aafcd4fc07267a105067db35c0f0ac630e1970f3cd0c7bf552780e985:    done
layer-sha256:50c68654b16f458108a537c9842c609f647a022fbc5a9b6bde1ffb60b77c2349:    done
layer-sha256:3ed295c083ec7246873f1b98bbc7b634899c99f6d2d901e2f9f5220d871830dd:    done
layer-sha256:40b838968eeab5abc9fb941a8e3ee1377660bb02672153cada52bc9d4e0595b7:    done
layer-sha256:88d3ab68332da2aa6cc8d83c9dfe95905dc899d9b8fb302ebae2bf9a6b167c40:    done
elapsed: 55.8s  total: 54.2 M (995.5 KiB/s)
[root@containerd opt]#nerdctl images
REPOSITORY   TAG      IMAGE ID       CREATED          PLATFORM      SIZE        BLOB SIZE
alpine       latest   8914eb54f968   23 hours ago     linux/amd64   7.0 MiB     3.2 MiB
nginx        latest   0047b729188a   53 seconds ago   linux/amd64   146.5 MiB   54.2 MiB

Create the container
[root@containerd opt]#nerdctl run -d -p 80:80 --name=nginx nginx:latest
a7f66c4bf984f3dd2313f9bb536a0a9024eb02782ce715689b3bb008f53c387b
[root@containerd opt]#nerdctl ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED         STATUS   PORTS                NAMES
a7f66c4bf984   docker.io/library/nginx:latest   "/docker-entrypoint.…"   6 minutes ago   Up       0.0.0.0:80->80/tcp   nginx

Access it in a browser

3. Deploying a single-master Kubernetes v1.24.x with kubeadm and containerd
Install containerd
# Install containerd
cd /opt
wget https://github.com/containerd/containerd/releases/download/v1.6.10/containerd-1.6.10-linux-amd64.tar.gz
tar -xvf containerd-1.6.10-linux-amd64.tar.gz -C /usr/local

## Set up the systemd unit
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
cp containerd.service /lib/systemd/system/containerd.service
systemctl daemon-reload
systemctl enable --now containerd

## Use a domestic registry mirror, enable the systemd cgroup driver, add a Docker Hub mirror
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#g' /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sed -ri '153a\\t[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\
\n\t\tendpoint=["https://9916w1ow.mirror.aliyuncs.com"]' \
/etc/containerd/config.toml
systemctl restart containerd

# Install runc
wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc

# Install nerdctl
wget https://github.com/containerd/nerdctl/releases/download/v1.0.0/nerdctl-1.0.0-linux-amd64.tar.gz
tar -xvf nerdctl-1.0.0-linux-amd64.tar.gz
cp nerdctl /usr/bin/
Install kubeadm
# Configure a domestic package mirror
apt update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
deb https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial main
EOF
apt update

# List available versions
apt-cache madison kubeadm

# Install kubeadm, kubelet, and kubectl 1.24.9
apt install -y kubeadm=1.24.9-00 kubelet=1.24.9-00 kubectl=1.24.9-00
Check the kubeadm version
# Check the version, output as JSON
[root@k8s-master ~]#kubeadm version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "24",
    "gitVersion": "v1.24.9",
    "gitCommit": "9710807c82740b9799453677c977758becf0acbb",
    "gitTreeState": "clean",
    "buildDate": "2022-12-08T10:13:36Z",
    "goVersion": "go1.18.9",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
Initializing the cluster with kubeadm
1. Pull the images
Pull the images for the target Kubernetes version ahead of time to shorten installation. The defaults point at Google's registry, which is not directly reachable from mainland China; pulling from the Aliyun mirror beforehand avoids deployment failures caused by image pull errors.
List the images required for a given Kubernetes version
[root@k8s-master ~]#kubeadm config images list --kubernetes-version v1.24.9
registry.k8s.io/kube-apiserver:v1.24.9
registry.k8s.io/kube-controller-manager:v1.24.9
registry.k8s.io/kube-scheduler:v1.24.9
registry.k8s.io/kube-proxy:v1.24.9
registry.k8s.io/pause:3.7
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.8.6
Pull the images from the domestic mirror
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.9
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.9
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.9
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.9
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
# Note: the Aliyun mirror hosts coredns without the nested coredns/ path
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
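If you do not pass `--image-repository` to kubeadm later, it still expects the registry.k8s.io names, so the mirrored images can be retagged. A small helper that only prints the `nerdctl tag` commands (the image list passed in is a subset for illustration):

```shell
# Print retag commands mapping Aliyun-mirror names back to registry.k8s.io
retag_cmds() {
  local mirror="registry.cn-hangzhou.aliyuncs.com/google_containers"
  local img
  for img in "$@"; do
    echo "nerdctl tag ${mirror}/${img} registry.k8s.io/${img}"
  done
}
retag_cmds kube-apiserver:v1.24.9 kube-proxy:v1.24.9 pause:3.7
```

Piping the output into `sh` would execute the retags; printing first makes the mapping easy to review.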
Check the images
[root@k8s-master ~]#nerdctl images
REPOSITORY                                                                    TAG       IMAGE ID       CREATED          PLATFORM      SIZE        BLOB SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.5.6-0   dd75ec974b0a   33 seconds ago   linux/amd64   288.9 MiB   97.8 MiB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.24.9   a6291f66504b   3 minutes ago    linux/amd64   127.1 MiB   32.3 MiB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.24.9   5d5b724bba53   3 minutes ago    linux/amd64   117.2 MiB   29.6 MiB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.24.9   2f0df9a9723a   2 minutes ago    linux/amd64   110.1 MiB   37.7 MiB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.24.9   3c4d859c18e6   3 minutes ago    linux/amd64   52.0 MiB    14.8 MiB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.7       bb6ed397957e   2 minutes ago    linux/amd64   696.0 KiB   304.0 KiB
2. System tuning
# Disable the firewall
systemctl disable firewalld && systemctl stop firewalld

# Add the IP and hostname to /etc/hosts
cat >> /etc/hosts <<EOF
`hostname -I|awk '{print $1}'` `hostname`
EOF

# Kernel parameter tuning
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 0
EOF
sudo sysctl --system

# Disable swap
# Comment out the swap line in /etc/fstab
sed -ri 's/(^[^#]*swap)/#\1/' /etc/fstab
echo 'swapoff -a' >> /etc/profile
swapoff -a

# Update grub
sed -i '/GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"/c GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 cgroup_enable=memory swapaccount=1"' /etc/default/grub
update-grub
reboot
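The swap-disabling sed above can be checked safely against a sample fstab before running it on the real file (the UUID and paths below are made up):

```shell
# Build a sample fstab
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Same regex as used above: prefix '#' to any uncommented line containing "swap"
sed -ri 's/(^[^#]*swap)/#\1/' "$tmp"
cat "$tmp"
```

The root filesystem line is untouched while the swap line gains a leading `#`.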
3. Run kubeadm init
kubeadm init --apiserver-advertise-address=10.0.0.12 \
  --apiserver-bind-port=6443 \
  --kubernetes-version=v1.24.9 \
  --pod-network-cidr=10.100.0.0/16 \
  --service-cidr=10.200.0.0/16 \
  --service-dns-domain=cluster.local \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --ignore-preflight-errors=swap

# Parameter notes
kubeadm init --apiserver-advertise-address=10.0.0.12 \   # listen address (local IP)
  --apiserver-bind-port=6443 \                           # listen port, default 6443
  --kubernetes-version=v1.24.9 \                         # Kubernetes version
  --pod-network-cidr=10.100.0.0/16 \                     # pod network CIDR
  --service-cidr=10.200.0.0/16 \                         # service network CIDR
  --service-dns-domain=cluster.local \                   # DNS domain, default
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \   # image registry, set to Aliyun
  --ignore-preflight-errors=swap \                       # ignore preflight errors
  #--control-plane-endpoint 10.0.0.12 \
  #--upload-certs   # can also be obtained later with: kubeadm init phase upload-certs --upload-certs
Output on successful initialization
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.12:6443 --token r27by4.716qvglmj85z4sgp \
	--discovery-token-ca-cert-hash sha256:e0a5b267f55eb28c130f4af654d3bf27c454fe97633d8f5311757937be093925
4. Set up the kubeconfig file
The kubeconfig file contains the kube-apiserver address and the credentials for it
# Run the commands printed by kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl completion (bash)
source <(kubectl completion bash)   # enable completion in the current shell; requires the bash-completion package
echo "source <(kubectl completion bash)" >> ~/.bashrc   # enable completion permanently in your bash shell
Check the status
[root@k8s-master ~]#kubectl get node
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   2m30s   v1.24.9
5. Install the network plugin (calico)
https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/addons/
https://projectcalico.docs.tigera.io/getting-started/kubernetes/installation/config-options
# Download the calico manifest
curl https://raw.githubusercontent.com/projectcalico/calico/v3.23.1/manifests/calico-etcd.yaml -O
The file calico-etcd.yaml can be used as a reference
# Set the actual pod network CIDR
sed -i -e 's/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/g' \
       -e 's@#   value: "192.168.0.0/16"@  value: "10.100.0.0/16"@g' \
       calico-etcd.yaml

# List the images calico needs
[root@k8s-master opt]#grep "image:" calico-etcd.yaml
          image: docker.io/calico/cni:v3.23.1
          image: docker.io/calico/node:v3.23.1
          image: docker.io/calico/node:v3.23.1
          image: docker.io/calico/kube-controllers:v3.23.1

# Pull the images
nerdctl pull docker.io/calico/cni:v3.23.1
nerdctl pull docker.io/calico/node:v3.23.1
nerdctl pull docker.io/calico/kube-controllers:v3.23.1

# Install calico
kubectl apply -f calico-etcd.yaml
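The CALICO_IPV4POOL_CIDR substitution can be rehearsed on a two-line excerpt of the manifest (the indentation below is an assumption about the manifest layout, not copied from the real file):

```shell
# Minimal excerpt mimicking the commented-out CIDR block in calico-etcd.yaml
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF

# Uncomment the variable and swap in the pod CIDR used by kubeadm init
sed -i -e 's/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/g' \
       -e 's@#   value: "192.168.0.0/16"@  value: "10.100.0.0/16"@g' "$tmp"
cat "$tmp"
```

The CIDR here must match `--pod-network-cidr` (10.100.0.0/16 in this deployment).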
Check the status
# Check pod status
[root@k8s-master opt]#kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-56cdb7c587-jcsvg   0/1     Pending   0          32s
kube-system   calico-node-q4qlr                          1/1     Running   0          32s
kube-system   coredns-7f74c56694-fdmzz                   1/1     Running   0          17m
kube-system   coredns-7f74c56694-xbsrf                   1/1     Running   0          17m
kube-system   etcd-k8s-master                            1/1     Running   0          17m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          17m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          17m
kube-system   kube-proxy-kg9jb                           1/1     Running   0          17m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          17m

# Check node status
[root@k8s-master opt]#kubectl get node
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   17m   v1.24.9
4. Deploying Harbor with HTTPS (SAN-signed certificates)
Install docker
Version 20.10.10
#!/bin/bash
# docker version
docker_version=5:20.10.10~3-0
apt update

# Install dependencies
apt install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  lsb-release \
  software-properties-common

# Install the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=$(dpkg --print-architecture)] http://mirrors.aliyun.com/docker-ce/linux/ubuntu \
  $(lsb_release -cs) stable"
apt update
# apt-cache madison docker-ce docker-ce-cli
apt -y install docker-ce=${docker_version}~ubuntu-$(lsb_release -cs) \
  docker-ce-cli=${docker_version}~ubuntu-$(lsb_release -cs)

# Disable the firewall
systemctl disable firewalld && systemctl stop firewalld

# Add the IP and hostname to /etc/hosts
cat >> /etc/hosts <<EOF
`hostname -I|awk '{print $1}'` `hostname`
EOF

# Kernel parameter tuning
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

# Set docker's cgroup driver
# docker's default cgroup driver is cgroupfs (check with `docker info`).
# If cgroupDriver is not set under KubeletConfiguration, kubeadm defaults to
# systemd, so docker's cgroup driver must be changed to systemd.
# Also configure Docker Hub mirrors.
cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://ung2thfc.mirror.aliyuncs.com",
  "https://registry.docker-cn.com",
  "http://hub-mirror.c.163.com",
  "https://docker.mirrors.ustc.edu.cn"]
}
EOF
# systemctl daemon-reload
# systemctl restart docker

# Disable swap
# Comment out the swap line in /etc/fstab
sed -ri 's/(^[^#]*swap)/#\1/' /etc/fstab
echo 'swapoff -a' >> /etc/profile
swapoff -a

# Update grub
sed -i '/GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"/c GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 cgroup_enable=memory swapaccount=1"' /etc/default/grub
update-grub
reboot
Check docker info
[root@harbor1 opt]#docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.6.3-docker)
  scan: Docker Scan (Docker Inc., v0.23.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.10
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 5b842e528e99d4d4c1686467debf2bd4b88ecd86
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-122-generic
 Operating System: Ubuntu 20.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.81GiB
 Name: harbor1
 ID: XWJA:OYV5:V76Y:UWRX:SASF:R32R:RXXO:S2LL:N75G:N3D4:ESHY:VLL6
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://ung2thfc.mirror.aliyuncs.com/
  https://registry.docker-cn.com/
  http://hub-mirror.c.163.com/
  https://docker.mirrors.ustc.edu.cn/
 Live Restore Enabled: false
Install docker-compose
# Download the binary
wget https://github.com/docker/compose/releases/download/v2.12.0/docker-compose-linux-x86_64
chmod a+x docker-compose-linux-x86_64
mv docker-compose-linux-x86_64 /usr/bin/docker-compose
Check the docker-compose version
[root@harbor1 opt]#docker-compose version
Docker Compose version v2.12.0
Install harbor
Reference: https://goharbor.io/docs/2.6.0/install-config/download-installer/
# Issue certificates; see https://goharbor.io/docs/2.6.0/install-config/configure-https/
mkdir -p /apps/harbor/certs
cd /apps/harbor/certs

## Self-signed CA
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -sha512 -days 3650 \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=chu.com" \
  -key ca.key \
  -out ca.crt

## Server certificate request for the domain
openssl genrsa -out chu.net.key 4096
openssl req -sha512 -new \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=chu.net" \
  -key chu.net.key \
  -out chu.net.csr

## Prepare the signing extensions
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=chu.com
DNS.2=harbor.chu.net
DNS.3=harbor.chu.local
EOF

## Sign the certificate with the self-signed CA
openssl x509 -req -sha512 -days 3650 \
  -extfile v3.ext \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -in chu.net.csr \
  -out chu.net.crt

# Install harbor
cd /opt
# Download the offline installer
wget https://github.com/goharbor/harbor/releases/download/v2.6.2/harbor-offline-installer-v2.6.2.tgz
tar xvf harbor-offline-installer-v2.6.2.tgz -C /apps
cd /apps/harbor
#egrep -v '^\s*#|^$' harbor.yml.tmpl > harbor.yml
cp harbor.yml.tmpl harbor.yml
# Adjust hostname, harbor_admin_password, database, etc. as needed
sed -i -e "s/hostname: reg.mydomain.com/hostname: harbor.chu.net/g" \
  -e "s#certificate: /your/certificate/path#certificate: /apps/harbor/certs/chu.net.crt#g" \
  -e "s#private_key: /your/private/key/path#private_key: /apps/harbor/certs/chu.net.key#g" \
  harbor.yml

# Install
./install.sh --with-trivy --with-chartmuseum
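The SAN signing flow above can be condensed into a quick self-check, using 2048-bit keys, short-form subjects, and a single DNS entry so it runs fast in a scratch directory; the names mirror the ones used in this section:

```shell
# Scratch directory so real certs are untouched
dir=$(mktemp -d); cd "$dir"

# Mini CA
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -nodes -sha512 -days 365 -subj "/CN=chu.com" -key ca.key -out ca.crt

# Server key + CSR
openssl genrsa -out chu.net.key 2048 2>/dev/null
openssl req -sha512 -new -subj "/CN=chu.net" -key chu.net.key -out chu.net.csr

# SAN extension with the harbor hostname
cat > v3.ext <<'EOF'
subjectAltName = @alt_names
[alt_names]
DNS.1=harbor.chu.net
EOF

# Sign and then verify the SAN landed in the certificate
openssl x509 -req -sha512 -days 365 -extfile v3.ext \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -in chu.net.csr -out chu.net.crt 2>/dev/null
san=$(openssl x509 -in chu.net.crt -noout -text | grep 'DNS:')
echo "$san"
```

If the `DNS:` line is missing, docker clients will reject the certificate for the harbor hostname even though the CN matches.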
Open the web UI

Inspect the certificate

Username: admin, password: Harbor12345

The home page

Create a project

Notes:
- When a project is set to public, anyone can pull the images under it; command-line users can pull from it without running "docker login".
- Pushing images requires logging in with "docker login" first.
Push an image
Configure the docker client
# Create a directory for the registry's domain on the docker client
mkdir /etc/docker/certs.d/harbor.chu.net -p

# Copy the harbor server's certificate to the client directory
scp 10.0.0.101:/apps/harbor/certs/chu.net.crt /etc/docker/certs.d/harbor.chu.net/

# Add a hosts entry
echo "10.0.0.101 harbor.chu.net" >> /etc/hosts
Log in to harbor
[root@docker ~]#docker login harbor.chu.net
Username: admin
Password:        # Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
Tag the image (that is, rename it) to match the harbor repository format: Harbor IP(:port)/project/image:tag, or domain/project/image:tag. Otherwise the image cannot be pushed to the harbor registry.
[root@docker ~]#docker images
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
nginx        latest    605c77e624dd   12 months ago   141MB
[root@docker ~]#docker tag nginx:latest harbor.chu.net/test/nginx:v1
[root@docker ~]#docker images
REPOSITORY                  TAG       IMAGE ID       CREATED         SIZE
nginx                       latest    605c77e624dd   12 months ago   141MB
harbor.chu.net/test/nginx   v1        605c77e624dd   12 months ago   141MB
Push the image to the harbor registry
[root@docker ~]#docker push harbor.chu.net/test/nginx:v1
The push refers to repository [harbor.chu.net/test/nginx]
d874fd2bc83b: Pushed
32ce5f6a5106: Pushed
f1db227348d0: Pushed
b8d6e692a25e: Pushed
e379e8aedd4d: Pushed
2edcec3590a4: Pushed
v1: digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3 size: 1570
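The required name format can be captured in a tiny helper; `harbor.chu.net` and the `test` project come from this section's example, everything else is just string assembly:

```shell
# Build a harbor-compatible image reference: <registry>/<project>/<image>:<tag>
harbor_ref() {
  local registry="$1" project="$2" image="$3" tag="$4"
  echo "${registry}/${project}/${image}:${tag}"
}

harbor_ref harbor.chu.net test nginx v1
```

The output is exactly the name used in the `docker tag` / `docker push` commands above.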
View the image in the web UI

Pull the image
Configure the docker client
# Create a directory for the registry's domain on the docker client
mkdir /etc/docker/certs.d/harbor.chu.net -p

# Copy the harbor server's certificate to the client directory
scp 10.0.0.101:/apps/harbor/certs/chu.net.crt /etc/docker/certs.d/harbor.chu.net/

# Add a hosts entry
echo "10.0.0.101 harbor.chu.net" >> /etc/hosts
Pull the image
[root@docker2 ~]#docker pull harbor.chu.net/test/nginx:v1
v1: Pulling from test/nginx
a2abf6c4d29d: Pull complete
a9edb18cadd1: Pull complete
589b7251471a: Pull complete
186b1aaa4aa6: Pull complete
b4df32aa5a72: Pull complete
a0bcbecc962e: Pull complete
Digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3
Status: Downloaded newer image for harbor.chu.net/test/nginx:v1
harbor.chu.net/test/nginx:v1
[root@docker2 ~]#docker images
REPOSITORY                  TAG       IMAGE ID       CREATED         SIZE
harbor.chu.net/test/nginx   v1        605c77e624dd   12 months ago   141MB
5. Deploying haproxy and keepalived for high-availability load balancing
Host inventory
ha1     10.0.0.12
ha2     10.0.0.22
vip     10.0.0.100
harbor  10.0.0.101
Install keepalived
- Install the package
  apt update
  apt install keepalived -y
- Create the configuration from the sample
  cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
/etc/keepalived/keepalived.conf
Reference configuration:

! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id 10.0.0.12
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}

# Define the check script before it is referenced
vrrp_script chk_haproxy {
    script "/etc/keepalived/chk_haproxy.sh"
    interval 1
    timeout 2
    weight -30
    fall 3
    rise 5
}

vrrp_instance haproxy {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100        # 100 on the master, 80 on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:1
    }
    track_script {
        chk_haproxy
    }
}
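Why weight -30 is enough to force a failover with these priorities: when chk_haproxy fails on the master, keepalived subtracts the weight from its priority, dropping it below the backup's 80, so the backup wins the VRRP election and takes the VIP. The arithmetic, using the numbers from the configuration above:

```shell
# Effective priority of the master while its check script is failing
master_priority=100
backup_priority=80
check_weight=-30

effective=$((master_priority + check_weight))
[ "$effective" -lt "$backup_priority" ] && echo "VIP moves to the backup"
```

Pick the weight so that master_priority + weight < backup_priority; a weight of, say, -10 would not trigger a failover here.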
Write the chk_haproxy script
[root@ha1 ~]#vim /etc/keepalived/chk_haproxy.sh
#!/bin/bash
/usr/bin/killall -0 haproxy

# Make it executable
[root@ha1 ~]# chmod a+x /etc/keepalived/chk_haproxy.sh
Restart the service
systemctl restart keepalived
Check the VIP
[root@ha1 ~]#ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.12  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe18:2919  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:18:29:19  txqueuelen 1000  (Ethernet)
        RX packets 127956  bytes 183307497 (183.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16512  bytes 1358344 (1.3 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:18:29:19  txqueuelen 1000  (Ethernet)

# After stopping keepalived on the master, check the backup
[root@ha2 ~]#ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.22  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe87:9f48  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:87:9f:48  txqueuelen 1000  (Ethernet)
        RX packets 185422  bytes 264231220 (264.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25356  bytes 1801567 (1.8 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:87:9f:48  txqueuelen 1000  (Ethernet)
Install haproxy
- Install the package
  apt update
  apt install haproxy -y
Edit the configuration file
#vim /etc/haproxy/haproxy.cfg
....
# Append at the end
listen stats
  mode http
  bind 0.0.0.0:9999
  stats enable
  log global
  stats uri /haproxy-status
  stats auth haadmin:123456

listen harbor-80
  bind 10.0.0.100:80
  mode tcp
  balance source   # source-IP hash
  log global
  server harbor1 10.0.0.101:80 check inter 3s fall 3 rise 5

listen harbor-443
  bind 10.0.0.100:443
  mode tcp
  balance source   # source-IP hash
  log global
  server harbor1 10.0.0.101:443 check inter 3s fall 3 rise 5
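What `balance source` buys you, sketched as a toy: hash the client IP and map it onto the backend list, so a given client consistently lands on the same server. The config above has only one harbor backend; `harbor2` below is hypothetical, and `cksum` stands in for haproxy's internal hash.

```shell
# Pick a backend for a client IP by hashing the IP over the backend list
backends="harbor1 harbor2"

pick_backend() {
  local ip="$1"
  set -- $backends          # backends become positional parameters
  local n=$#
  local h=$(printf '%s' "$ip" | cksum | awk '{print $1}')
  local idx=$(( h % n + 1 ))
  eval echo \${$idx}        # print the chosen backend name
}

pick_backend 10.0.0.55
```

Because the hash only depends on the source IP, repeated requests from the same client hit the same backend, which keeps harbor login sessions sticky.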
Enable kernel parameters
# Restrict ARP responses: arp_ignore
# 0: default, respond using any local address configured on any interface
# 1: respond only if the target IP is configured on the interface that received the request
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore

# Restrict ARP announcements: arp_announce
# 0: default, announce all addresses of all interfaces on every network
# 1: avoid announcing addresses on networks they are not directly connected to
# 2: always avoid announcing addresses outside their own network
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce

echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf   # let haproxy bind the VIP even when it is not local
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf         # enable IPv4 forwarding

# Apply the kernel changes
sysctl -p
Restart the service
systemctl restart haproxy
Check the listening ports
On the master:
[root@ha1 ~]#netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9999            0.0.0.0:*               LISTEN      14197/haproxy
tcp        0      0 0.0.0.0:37871           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/init
tcp        0      0 10.0.0.100:80           0.0.0.0:*               LISTEN      14197/haproxy
tcp        0      0 0.0.0.0:48405           0.0.0.0:*               LISTEN      817/rpc.mountd
tcp        0      0 0.0.0.0:52757           0.0.0.0:*               LISTEN      817/rpc.mountd
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      815/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      886/sshd: /usr/sbin
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2557/sshd: root@pts
tcp        0      0 10.0.0.100:443          0.0.0.0:*               LISTEN      14197/haproxy
tcp        0      0 0.0.0.0:48155           0.0.0.0:*               LISTEN      817/rpc.mountd
tcp        0      0 0.0.0.0:2049            0.0.0.0:*               LISTEN      -
tcp6       0      0 :::111                  :::*                    LISTEN      1/init
tcp6       0      0 :::22                   :::*                    LISTEN      886/sshd: /usr/sbin
tcp6       0      0 ::1:6010                :::*                    LISTEN      2557/sshd: root@pts
tcp6       0      0 :::40575                :::*                    LISTEN      817/rpc.mountd
tcp6       0      0 :::2049                 :::*                    LISTEN      -
tcp6       0      0 :::53825                :::*                    LISTEN      817/rpc.mountd
tcp6       0      0 :::39107                :::*                    LISTEN      817/rpc.mountd
tcp6       0      0 :::40197                :::*                    LISTEN      -

On the backup:
[root@ha2 ~]#netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:52585           0.0.0.0:*               LISTEN      808/rpc.mountd
tcp        0      0 0.0.0.0:58827           0.0.0.0:*               LISTEN      808/rpc.mountd
tcp        0      0 0.0.0.0:9999            0.0.0.0:*               LISTEN      17951/haproxy
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/init
tcp        0      0 10.0.0.100:80           0.0.0.0:*               LISTEN      17951/haproxy
tcp        0      0 0.0.0.0:35155           0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      806/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      884/sshd: /usr/sbin
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2533/sshd: root@pts
tcp        0      0 10.0.0.100:443          0.0.0.0:*               LISTEN      17951/haproxy
tcp        0      0 0.0.0.0:47707           0.0.0.0:*               LISTEN      808/rpc.mountd
tcp        0      0 0.0.0.0:2049            0.0.0.0:*               LISTEN      -
tcp6       0      0 :::46415                :::*                    LISTEN      -
tcp6       0      0 :::111                  :::*                    LISTEN      1/init
tcp6       0      0 :::48851                :::*                    LISTEN      808/rpc.mountd
tcp6       0      0 :::50229                :::*                    LISTEN      808/rpc.mountd
tcp6       0      0 :::22                   :::*                    LISTEN      884/sshd: /usr/sbin
tcp6       0      0 ::1:6010                :::*                    LISTEN      2533/sshd: root@pts
tcp6       0      0 :::2049                 :::*                    LISTEN      -
tcp6       0      0 :::52067                :::*                    LISTEN      808/rpc.mountd
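Rather than scanning the netstat output by eye, the expected listeners can be asserted with a small helper. A minimal sketch (`check_listen` is a hypothetical helper; on the real hosts you would feed it `netstat -ntlp` instead of the sample line):

```shell
# check_listen ADDR:PORT -- succeed when the address appears as a LISTEN
# socket in netstat-style output read from stdin
check_listen() {
  grep "LISTEN" | grep -q "$1 "
}

# a sample row shaped like the outputs above
sample='tcp        0      0 10.0.0.100:80           0.0.0.0:*               LISTEN      14197/haproxy'
echo "$sample" | check_listen "10.0.0.100:80" && echo "80 ok"
```

On ha1/ha2 the same check becomes, for example, `netstat -ntlp | check_listen 10.0.0.100:443`.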
Check the web page
Log in to the status page
Username: haadmin, password: 123456
The status page is displayed
Log in to harbor
Log in to the harbor registry through the VIP or the domain name


High-availability verification
Shut down the primary host (10.0.0.12)
The backup now holds the VIP 10.0.0.100:
[root@ha2 ~]#ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.22  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe87:9f48  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:87:9f:48  txqueuelen 1000  (Ethernet)
        RX packets 198125  bytes 270867171 (270.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 33435  bytes 6558265 (6.5 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:87:9f:48  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 176  bytes 17946 (17.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 176  bytes 17946 (17.9 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

The backup now listens on ports 80/443/9999:
[root@ha2 ~]#netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:52585           0.0.0.0:*               LISTEN      808/rpc.mountd
tcp        0      0 0.0.0.0:58827           0.0.0.0:*               LISTEN      808/rpc.mountd
tcp        0      0 0.0.0.0:9999            0.0.0.0:*               LISTEN      17951/haproxy
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/init
tcp        0      0 10.0.0.100:80           0.0.0.0:*               LISTEN      17951/haproxy
tcp        0      0 0.0.0.0:35155           0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      806/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      884/sshd: /usr/sbin
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2533/sshd: root@pts
tcp        0      0 10.0.0.100:443          0.0.0.0:*               LISTEN      17951/haproxy
tcp        0      0 0.0.0.0:47707           0.0.0.0:*               LISTEN      808/rpc.mountd
tcp        0      0 0.0.0.0:2049            0.0.0.0:*               LISTEN      -
tcp6       0      0 :::46415                :::*                    LISTEN      -
tcp6       0      0 :::111                  :::*                    LISTEN      1/init
tcp6       0      0 :::48851                :::*                    LISTEN      808/rpc.mountd
tcp6       0      0 :::50229                :::*                    LISTEN      808/rpc.mountd
tcp6       0      0 :::22                   :::*                    LISTEN      884/sshd: /usr/sbin
tcp6       0      0 ::1:6010                :::*                    LISTEN      2533/sshd: root@pts
tcp6       0      0 :::2049                 :::*                    LISTEN      -
tcp6       0      0 :::52067                :::*                    LISTEN      808/rpc.mountd
Log in to the harbor registry through the VIP or the domain name.
The harbor registry login works normally.
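The failover can also be watched from any client while the primary reboots: if the switchover works, at most a few probes fail before the backup takes over the VIP. A minimal sketch (`probe_status` is a hypothetical helper; the VIP URL comes from the setup above):

```shell
# probe_status URL -- print "up <http-code>" when the endpoint answers within
# two seconds, otherwise "down" (also "down" when curl is unavailable)
probe_status() {
  command -v curl >/dev/null 2>&1 || { echo down; return; }
  code=$(curl -so /dev/null -m 2 -w '%{http_code}' "$1" 2>/dev/null)
  if [ -n "$code" ] && [ "$code" != "000" ]; then echo "up $code"; else echo down; fi
}

# poll the VIP once per second during the failover test:
#   while sleep 1; do echo "$(date +%T) $(probe_status http://10.0.0.100/)"; done
```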
