k8s Cluster Installation Study Notes, Part 1 — Kubernetes Installation and Deployment
Overview:
System initialization
Kubeadm deployment
Setting up Harbor
System initialization
Install dependency packages
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
hostnamectl set-hostname <hostname>
Add hosts entries on the master and on every node (or use DNS resolution in larger environments):
$ cat >> /etc/hosts << EOF
192.168.31.61 k8s-master
192.168.31.62 k8s-node1
192.168.31.63 k8s-node2
EOF
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
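The kubelet refuses to start while swap is enabled, so it is worth confirming the setting took effect. A minimal check, reading the kernel's own accounting (expect 0 after `swapoff -a`):

```shell
# Total swap in kB as seen by the kernel; 0 means swap is fully off
awk '/^SwapTotal:/{print $2}' /proc/meminfo
```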
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1 # pass bridged traffic through iptables
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0 # removed in newer kernels; delete this line if sysctl reports an unknown key
vm.swappiness=0 # avoid swap; it is only used when the system is close to OOM
vm.overcommit_memory=1 # do not check whether enough physical memory is available
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1 # disable IPv6
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
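To confirm a parameter was actually applied, read it back from procfs. Shown here for ip_forward only; the net.bridge.* keys exist only after the br_netfilter module is loaded (done in a later step):

```shell
# Live value straight from the kernel; expect 1 on a configured node
cat /proc/sys/net/ipv4/ip_forward
```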
Synchronize the system time
# Set the system timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Keep the hardware clock in UTC
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
mkdir /var/log/journal # directory for persistent logs
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Cap total disk usage at 10G
SystemMaxUse=10G
# Cap a single log file at 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installation, check that the matching kernel menuentry in /boot/grub2/grub.cfg
# contains an initrd16 line; if it does not, install again.
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default
grub2-set-default 'CentOS Linux (5.4.144-1.el7.elrepo.x86_64) 7 (Core)'
cat /proc/version
Kubeadm deployment
Install Docker/kubeadm/kubelet on all nodes
In this version (v1.18), Kubernetes uses Docker as its default container runtime (CRI), so install Docker first.
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
  --add-repo \
  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum update -y && yum install -y docker-ce

## Create the /etc/docker directory
mkdir /etc/docker

# Configure the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://3qidreil.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

# Restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
Add the Aliyun YUM repository
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet and kubectl
Since versions change frequently, pin the version here
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet
All of the steps so far should be executed on every node (master and workers).
Deploy the Kubernetes master
Run on 192.168.31.61 (the master)
Initialization method 1 (recommended: convenient, images are downloaded automatically)
$ kubeadm init \
  --apiserver-advertise-address=192.168.44.146 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
The default image registry k8s.gcr.io is unreachable from mainland China, so the Aliyun mirror registry is specified instead.
--apiserver-advertise-address=x.x.x.x: kubeadm advertises the address of the default network interface (usually eth0, an internal IP) as the master's address; set this flag to use a different interface.
Initialization note: since Kubernetes v1.24, using Docker requires additionally installing cri-dockerd, otherwise initialization fails. The errors below appeared because this run used the latest v1.27.2; they were not seen in earlier runs with v1.18.0.
If you are installing 1.24+, run the following steps:
# Remove the previously installed Docker to avoid conflicts
yum -y remove docker docker-common
# Switch to mirror repositories
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y docker-ce
systemctl start docker
systemctl enable docker
# Check and verify
docker info
If Docker fails to start after installation with an error like this:
failed to start daemon: error initializing graphdriver: overlay2: unknown option overlay2.override_kernel_check: overlay2
Fix 1:
Add a disk whose filesystem supports overlay2 storage:
1. Check whether the filesystem has ftype enabled (1 = enabled, 0 = disabled)
[root@k8s-master docker]# xfs_info / | grep ftype
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
If ftype is not enabled, add a new disk dedicated to Docker:
[root@k8s-master ~]# fdisk -l
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   209715199   104344576   8e  Linux LVM
Disk /dev/sdb: 42.9 GB, 42949672960 bytes, 83886080 sectors
...
Disk /dev/mapper/centos-root: 106.8 GB, 106845700096 bytes, 208683008 sectors
...
[root@k8s-master ~]# mkfs.xfs -n ftype=1 /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=2621440 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10485760, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@k8s-master ~]# blkid /dev/sdb
/dev/sdb: UUID="05293869-8bac-4f8d-8d38-e5094538cf5f" TYPE="xfs"
[root@k8s-master ~]# cat /etc/fstab
/dev/mapper/centos-root /     xfs  defaults  0 0
UUID=f0a19dd7-d67c-4e3d-803c-7da78fb04021 /boot xfs  defaults  0 0
UUID=05293869-8bac-4f8d-8d38-e5094538cf5f /data xfs  defaults  0 0
# Mount it
[root@k8s-master ~]# mount -a
[root@k8s-master ~]# xfs_info /data | grep ftype
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
# Create a directory for Docker to use
mkdir /data/docker
# Edit the Docker config file
vim /etc/docker/daemon.json
{
  "data-root": "/data/docker",
  "registry-mirrors": ["https://3qidreil.mirror.aliyuncs.com"]
}
# Start Docker
systemctl start docker
Fix 2:
Reinstall the system with the partitions formatted as ext4.
Note: possible initialization error 1:
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: time="2023-06-15T09:31:33+08:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Fix
[root@k8s-master ~]# rm -rf /etc/containerd/config.toml
[root@k8s-master ~]# systemctl restart containerd
Then run the initialization again, for example:
kubeadm init \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.27.2 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
Note: possible initialization error 2:
...
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Following the suggested command, journalctl -xeu kubelet shows many errors like:
Jun 15 10:23:49 k8s-master kubelet[4567]: E0615 10:23:49.335726 4567 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 74.125.204.82:443: i/o timeout"
Jun 15 10:23:49 k8s-master kubelet[4567]: E0615 10:23:49.335864 4567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 74.125.204.82:443: i/o timeout" pod="kube-system/kube-scheduler-k8s-master"
The logs show the failure is caused by being unable to pull pause:3.6 from registry.k8s.io.
Fix: edit the containerd config file
# Generate the default config file
containerd config default > /etc/containerd/config.toml
# Remember to restart the service after changing config files
systemctl daemon-reload && systemctl restart containerd

vim /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false

vim /etc/containerd/config.toml
# Change the sandbox image to an address reachable from China (Aliyun)
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
# Save, then restart the service
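Instead of editing config.toml interactively, the same change can be scripted. A sketch with sed (the replacement image and tag are the ones used above; the path assumes containerd's default layout and that config.toml was already generated):

```shell
# Rewrite the sandbox_image line in place, show the result, restart containerd
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
systemctl restart containerd
```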
Then re-initialize
# Reset kubeadm
kubeadm reset
# Initialize
kubeadm init \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.27.2 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --ignore-preflight-errors=all
Initialization method 2 (advantage: speed)
On a machine that can reach the internet, pull the images and package them, then initialize manually
Upload the package to the server
Write the image-loading script
vim load-images.sh
ls /root/kubeadm-basic.images > /tmp/image-list.txt
cd /root/kubeadm-basic.images
for i in $(cat /tmp/image-list.txt)
do
    docker load -i $i
done
rm -f /tmp/image-list.txt
$ chmod a+x load-images.sh
$ ./load-images.sh
# Generate the default init configuration template
kubeadm config print init-defaults > kubeadm-config.yaml
After editing it looks like this:
localAPIEndpoint:
advertiseAddress: 192.168.66.10
kubernetesVersion: v1.15.1
networking:
podSubnet: "10.244.0.0/16" # the pod network CIDR
serviceSubnet: 10.96.0.0/12
# and append the following section:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
SupportIPVSProxyMode: true
mode: ipvs # switch the default proxy mode to ipvs
# Run the initialization
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
# --experimental-upload-certs uploads the control-plane certificates automatically (renamed to --upload-certs in v1.16+)
The log above (kubeadm-init.log) shows that the following steps are still needed:
Set up the kubectl command-line tool:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl is now usable
# Check the nodes
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 4m13s v1.15.1
The node shows NotReady above because the flannel network plugin has not been deployed yet.
Deploy the flannel network plugin
mkdir -p install-k8s/core
mv kubeadm-init.log kubeadm-config.yaml install-k8s/core
cd install-k8s/
mkdir -p plugin/flannel
cd plugin/flannel
# If this URL is unreachable, download it from a machine with internet access, then upload it to the server
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Create the resources from flannel's manifest
kubectl create -f kube-flannel.yml
# Check pod status (-n kube-system selects the namespace where system components
# are installed by default; it must be specified, otherwise the default namespace is used)
kubectl get pod -n kube-system
# The flannel component should now be running
# Check the nodes again (give it a moment)
kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   4m13s   v1.15.1
Join Kubernetes nodes
Run on 192.168.1.12/13 (the nodes).
To add new nodes to the cluster, run the kubeadm join command printed in kubeadm-init.log:
$ kubeadm join 192.168.1.11:6443 --token esce21.q6hetwm8si29qxwn \
    --discovery-token-ca-cert-hash sha256:00603a05805807501d7181c3d60b478788408cfe6cedefedb1f97569708be9c5
Note: with v1.27.2 in this run, adding a node with the command above may report the following error:
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: time="2023-06-15T15:15:26+08:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Fix (same as on the master — remove the stale config and restart containerd):
[root@k8s-node1 ~]# rm -rf /etc/containerd/config.toml
[root@k8s-node1 ~]# systemctl restart containerd
Then re-run kubeadm join.
Checking pod status, the newly joined nodes are still initializing...
kubectl get pod -n kube-system
Check again after a while and the new nodes show Ready as well
kubectl get nodes
The default token is valid for 24 hours; once it expires it can no longer be used and a new one must be created:
kubeadm token create --print-join-command
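If you ever need to rebuild the join command by hand instead, the --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA's public key. A sketch assuming the default kubeadm certificate path:

```shell
# Prints the hex digest to pass as sha256:<digest>
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'
```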
Note: after kubeadm join, running kubectl get pod -n kube-system on the master may show containers stuck in ContainerCreating:
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS              RESTARTS   AGE
coredns-7bdc4cb885-pdb7z             1/1     Running             0          3h57m
coredns-7bdc4cb885-sc9lr             1/1     Running             0          3h57m
etcd-k8s-master                      1/1     Running             0          3h57m
kube-apiserver-k8s-master            1/1     Running             0          3h57m
kube-controller-manager-k8s-master   1/1     Running             2          3h57m
kube-proxy-8zpvw                     0/1     ContainerCreating   0          12m
kube-proxy-9vfs9                     1/1     Running             0          3h57m
kube-proxy-bxjsz                     0/1     ContainerCreating   0          12m
kube-scheduler-k8s-master            1/1     Running             2          3h57m
Inspect the pod details
kubectl describe pod kube-proxy-8zpvw -n kube-system
# Error message
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "registry.k8s.io/pause:3.6": failed to pull image "registry.k8s.io/pause:3.6": failed to pull and unpack image "registry.k8s.io/pause:3.6": failed to resolve reference "registry.k8s.io/pause:3.6": failed to do request: Head "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6": dial tcp 142.251.8.82:443: i/o timeout
Apply the same fix used on the master for /etc/containerd/config.toml:
[root@k8s-node1 ~]# containerd config default > /etc/containerd/config.toml
[root@k8s-node1 ~]# systemctl daemon-reload && systemctl restart containerd
[root@k8s-node1 ~]# vim /etc/crictl.yaml
...
[root@k8s-node1 ~]# vim /etc/containerd/config.toml
...
[root@k8s-node1 ~]# systemctl daemon-reload && systemctl restart containerd
Check again a little later and everything is normal
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7bdc4cb885-pdb7z             1/1     Running   0          4h12m
coredns-7bdc4cb885-sc9lr             1/1     Running   0          4h12m
etcd-k8s-master                      1/1     Running   0          4h12m
kube-apiserver-k8s-master            1/1     Running   0          4h12m
kube-controller-manager-k8s-master   1/1     Running   2          4h12m
kube-proxy-8zpvw                     1/1     Running   0          26m
kube-proxy-9vfs9                     1/1     Running   0          4h12m
kube-proxy-bxjsz                     1/1     Running   0          27m
kube-scheduler-k8s-master            1/1     Running   2          4h12m
Check the node info again (nodes are not namespaced, so -n is unnecessary)
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   4h15m   v1.27.2
k8s-node1    Ready    <none>          30m     v1.27.3
k8s-node2    Ready    <none>          29m     v1.27.3
All normal.
Connecting k8s to a Harbor registry
Setting up Harbor
curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
Make docker-compose executable
sudo chmod +x /usr/local/bin/docker-compose
Verify that docker-compose installed correctly
$ docker-compose --version
docker-compose version 1.25.0, build 0a186604
wget https://github.com/goharbor/harbor/releases/download/v2.0.1/harbor-offline-installer-v2.0.1.tgz
tar -xzf harbor-offline-installer-v2.0.1.tgz
mkdir /opt/harbor
mv harbor/* /opt/harbor
cd /opt/harbor
Edit the Harbor configuration; if harbor.yml does not exist, copy it from harbor.yml.tmpl
vi harbor.yml
# Change hostname and port
hostname: 192.168.1.1   # IP/domain used to reach the Harbor service
port: 85
Install Harbor
./prepare
./install.sh
# To support storing Helm chart packages, add this flag instead:
./install.sh --with-chartmuseum
Access
Browse to 192.168.1.1:85. The default credentials are admin/Harbor12345; they can be changed in the config file.
Start and stop Harbor
docker-compose up -d     # start
docker-compose stop      # stop
docker-compose restart   # restart
Verify by pushing an image
First add the registry to the Docker configuration
$ vim /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.1.1:85"]
}
$ systemctl restart docker
docker pull library/nginx:latest
Re-tag the image
docker tag library/nginx:latest 192.168.1.1:85/library/nginx:latest
Log in to the Harbor registry
[root@k8s-n1 harbor]# docker login 192.168.1.1:85
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Push the image (the push command can also be found in the Harbor UI)
[root@k8s-n1 harbor]# docker push 192.168.1.1:85/library/nginx:latest
The push refers to repository [192.168.1.1:85/library/nginx]
6c7de695ede3: Pushed
2f4accd375d9: Pushed
ffc9b21953f4: Pushed
latest: digest: sha256:8269a7352a7dad1f8b3dc83284f195bac72027dd50279422d363d49311ab7d9b size: 948
On each node that will pull from Harbor, add the same insecure-registry entry:
$ vim /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.1.1:85"]
}
$ systemctl restart docker
Pull a test image
docker pull 192.168.1.1:85/library/nginx:latest
Test connectivity between k8s and Harbor
Create a deployment controller to manage the containers
kubectl run nginx-deployment --image=192.168.1.1:85/library/nginx:latest --port=80 --replicas=1
# Note: kubectl run dropped --replicas in v1.18+; use kubectl create deployment there instead
--image: use the image from the Harbor registry
--port: specify the container port (optional; a default is used if omitted)
--replicas=1: keep one replica; if a pod is deleted, a new one is created automatically to maintain the count
# List the deployments
kubectl get deployment
# A deployment manages a ReplicaSet (rs); its output looks similar
kubectl get rs
# Check pod status
kubectl get pod
# Check pod status with container info (start time, IP, and the node it runs on)
kubectl get pod -o wide
# If a pod runs on node A, log in to node A and inspect the container with docker
docker ps -a | grep nginx
Delete a pod
kubectl get pod
kubectl delete pod nginx-deployment-85756b779-shc41
Scale up the replica count (to spread the load)
kubectl get deployment
kubectl scale --replicas=3 deployment/nginx-deployment
Test the Kubernetes cluster
Create a pod in the Kubernetes cluster and verify that it runs normally:
$ kubectl create deployment nginx-deployment --image=nginx
$ kubectl expose deployment nginx-deployment --port=80 --type=NodePort
$ kubectl get pod,svc
kubectl expose deployment nginx-deployment --port=8080 --target-port=80
kubectl get svc  # note the service IP (the NodeIP used below)
# Access
curl NodeIP:8080  # requests round-robin across the three replicas
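For the service created with --type=NodePort above, the node port appears in the PORT(S) column as 80:3XXXX/TCP. A small awk sketch to pull it out of the `kubectl get svc` output (service name assumed to be nginx-deployment):

```shell
# Second piece of PORT(S) ("80:31234/TCP" -> "31234")
kubectl get svc nginx-deployment | awk 'NR==2 {split($5, p, "[:/]"); print p[2]}'
```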
Note: creating pods with deployments as above is only a test workflow for beginners; in real use, resources are almost always created from resource manifests.
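As a starting point for the manifest approach, here is a minimal sketch of the same nginx deployment plus a NodePort service (the image path and names mirror the ones used above; adjust them to your registry):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.1.1:85/library/nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```

Apply it with `kubectl apply -f nginx.yaml` and check the result with `kubectl get pod,svc`.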