Ceph cluster status: pgs not scrubbed in time
Summary:
Check the cluster status: ``` # ceph -s cluster: id: 83738b81-56e4-4d34-bdc2-3a60d789d224 health: HEALTH_WARN 75 pgs not scrubbed in time services: mon: 3 daemons, quoru
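For context, a minimal sketch of how this warning is usually cleared, assuming admin access to the cluster; the PG id and interval below are illustrative, not values from the article:

```
# List the PGs that are behind on scrubbing
ceph health detail | grep 'not scrubbed'

# Kick off a scrub for one affected PG (replace 1.5f with a real PG id)
ceph pg scrub 1.5f

# Or widen the scrub window so the warning threshold is not hit (example value, in seconds)
ceph config set osd osd_scrub_max_interval 1209600
```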
npc client installation and configuration
Summary:
``` D:\app\windows_amd64_client\npc.exe install -server=xxx.xxx.xxx.xxx:8069 -vkey=07sdz23ykge63p21 -type=tcp D:\app\windows_amd64_client\npc.exe star
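A hedged sketch of the matching service-control subcommands on the same client; these follow the upstream nps/npc documentation and should be checked against your client version:

```
:: Start the installed service
D:\app\windows_amd64_client\npc.exe start
:: Stop it
D:\app\windows_amd64_client\npc.exe stop
:: Restart after changing -server or -vkey
D:\app\windows_amd64_client\npc.exe restart
:: Remove the service
D:\app\windows_amd64_client\npc.exe uninstall
```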
calico.yaml
Summary:
``` # Source: calico/templates/calico-kube-controllers.yaml # This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autosc
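Assuming the manifest is applied as-is, a short verification sequence; the label selectors are the ones the stock calico.yaml ships with:

```
kubectl apply -f calico.yaml
# Wait for the per-node agents and the controller to come up
kubectl -n kube-system get pods -l k8s-app=calico-node -w
kubectl -n kube-system get pods -l k8s-app=calico-kube-controllers
```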
coredns.yaml
Summary:
``` apiVersion: v1 kind: ServiceAccount metadata: name: coredns namespace: kube-system apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole meta
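A quick way to check that the deployed CoreDNS is answering queries, assuming the default kube-system deployment and cluster.local domain:

```
kubectl apply -f coredns.yaml
kubectl -n kube-system get pods -l k8s-app=kube-dns
# Resolve a built-in service name from a throwaway pod
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```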
k8s: pushing a local directory to an empty project repository
Summary:
``` git init git checkout -b main git remote add origin http://gitlab.wjl.net/root/xxxx.git git add . git commit -m "Initial commit" git push -u origi
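The excerpt is cut mid-command; the complete sequence it sketches, with the article's own remote URL and GitLab's main default branch assumed:

```
git init
git checkout -b main
git remote add origin http://gitlab.wjl.net/root/xxxx.git
git add .
git commit -m "Initial commit"
git push -u origin main
```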
k8s: pushing code to GitLab fails with error: RPC failed; result=22, HTTP code = 413 fatal: The remote end hung up unexpectedly
Summary:
``` # git push -u origin main Username for 'http://gitlab.wjl.net': root Password for 'http://root@gitlab.wjl.net': Counting objects: 1032, done. Delt
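HTTP 413 comes from the reverse proxy in front of GitLab rejecting the push payload as too large. Two commonly used fixes, sketched here as examples; the size values are placeholders:

```
# Server side (Omnibus GitLab): in /etc/gitlab/gitlab.rb set
#   nginx['client_max_body_size'] = "512m"
# then apply with: gitlab-ctl reconfigure

# Client side: enlarge git's HTTP post buffer (500 MB here, as an example)
git config http.postBuffer 524288000
git push -u origin main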
Script to restart Harbor
Summary:
``` #!/bin/bash WORK_DIR=/root/harbor for STOP_COUNT_NUM in {1..10} do cd $WORK_DIR docker-compose stop if [ ! $? -eq 0 ]; then echo " STOP ERROR 正在进行
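A completed version of the retry loop the excerpt starts, as a sketch; the retry count and sleep are illustrative:

```
#!/bin/bash
# Restart Harbor via docker-compose, retrying the stop up to 10 times
WORK_DIR=/root/harbor
cd "$WORK_DIR" || exit 1
for STOP_COUNT_NUM in {1..10}; do
    docker-compose stop
    if [ $? -eq 0 ]; then
        break
    fi
    echo "STOP ERROR: retry attempt ${STOP_COUNT_NUM}"
    sleep 3
done
docker-compose start
```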
K8S persistent storage with ceph-csi: RBD
Summary:
There are generally two ways to integrate Kubernetes with Ceph: run the Ceph components inside Kubernetes, for example with Rook, or integrate Kubernetes with an external Ceph cluster. Ceph version: ``` [root@master ~]# ceph -v ceph version 14.2.22 (ca745980650
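Before wiring up ceph-csi, the external cluster needs a pool and a client identity; a minimal sketch following the upstream ceph-csi RBD walkthrough (the pool name kubernetes is an assumption):

```
# On the Ceph side: create and initialize an RBD pool for Kubernetes
ceph osd pool create kubernetes 64 64
rbd pool init kubernetes

# Create a cephx user scoped to that pool for the CSI driver
ceph auth get-or-create client.kubernetes \
  mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'
```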
Deploying a Ceph cluster with ceph-deploy (nautilus 14.2.22)
Summary:
## Planning

| Hostname | IP address | OS | Ceph version | Ceph disk | Size | Components | Role |
| --- | --- | --- | --- | --- | --- | --- | --- |
| master | 192.168.1.60 | CentOS7.9 | ceph-15.2.10 | sdb | 100G | OSD、MOD
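The deployment itself follows the standard ceph-deploy flow; a hedged outline using the master host from the table (node names beyond master are assumptions):

```
# From the admin node, assuming passwordless SSH to all hosts
ceph-deploy new master node1 node2      # write the initial ceph.conf and monmap
ceph-deploy install master node1 node2  # install the ceph packages
ceph-deploy mon create-initial          # bootstrap the monitors
ceph-deploy admin master node1 node2    # push the admin keyring
ceph-deploy osd create --data /dev/sdb master  # one OSD on the 100G sdb, per the table
```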
CentOS7 k8s v1.23.15 three-node full binary deployment
Summary:
| Hostname | IP address | Pod CIDR | Service CIDR |
| --- | --- | --- | --- |
| master | 192.168.10.10 | 172.16.0.0/12 | 10.96.0.0/16 |
| node1 | 192.168.10.20 | 172.16.0.0/12 | 10.
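Before any binaries are laid down, every node needs the usual CentOS 7 prep plus consistent name resolution for the addresses in the table; a small sketch to run on each host (only the hosts visible in the truncated table are listed):

```
# Standard prep for a binary Kubernetes install on CentOS 7 (illustrative)
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

cat >> /etc/hosts <<'EOF'
192.168.10.10 master
192.168.10.20 node1
EOF
```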
Pipeline SpringBoot-deploy-CD
Summary:
``` pipeline { agent { kubernetes { cloud 'kubernetes' yaml ''' apiVersion: v1 kind: Pod spec: imagePullSecrets: - name: harbor-admin containers: - na
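The pod template here is YAML embedded in Groovy, where a casing slip such as Kind: instead of kind: silently breaks pod scheduling; Jenkins' documented declarative linter catches this before a run. A hedged sketch, with the URL and credentials as placeholders:

```
# Validate a Jenkinsfile against the declarative pipeline linter
curl -sS -X POST -u USER:API_TOKEN \
  -F "jenkinsfile=<Jenkinsfile" \
  http://JENKINS_URL/pipeline-model-converter/validate
```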
Mac SSH frequently times out and disconnects: client_loop: send disconnect: Broken pipe
Summary:
Fixed in one step! ``` cat /etc/ssh/ssh_config Host * SendEnv LANG LC_* IPQoS=throughput TCPKeepAlive yes AddKeysToAgent yes UseKeychain yes ServerAliveInterval 15
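The same keep-alive fix can be applied per-user without touching the system-wide /etc/ssh/ssh_config; a sketch, with the interval values as examples:

```
# ~/.ssh/config: send a keep-alive every 15s, give up after 4 missed replies
cat >> ~/.ssh/config <<'EOF'
Host *
    ServerAliveInterval 15
    ServerAliveCountMax 4
    TCPKeepAlive yes
EOF
```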
k8s: pods, PVCs, and PVs that cannot be deleted
Summary:
The usual deletion order is: delete the pod first, then the PVC, and finally the PV. But sometimes the PV gets stuck in the "Terminating" state and cannot be deleted. 1. Check which pod is using the PVC: ``` [root@hadoop03 storageclass]# kubectl describe pvc PVC-NAME | grep Mo
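When the PV sits in Terminating after its pod and PVC are gone, clearing the protection finalizers is the usual last resort; a hedged sketch with placeholder names:

```
# Find what still references the claim
kubectl describe pvc PVC-NAME | grep Mounted

# Strip the finalizers that keep the objects stuck in Terminating
kubectl patch pvc PVC-NAME -p '{"metadata":{"finalizers":null}}'
kubectl patch pv PV-NAME -p '{"metadata":{"finalizers":null}}'
```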
Force-deleting a namespace
Summary:
I had previously deployed a monitoring stack. Since this is a local test environment without much spare capacity, I wanted to delete the monitoring namespace. ``` [root@k
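The usual force-delete trick is emptying spec.finalizers through the namespace's finalize subresource; a sketch assuming jq is available:

```
# Clear the finalizers on the stuck namespace via the finalize API
kubectl get ns monitoring -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw /api/v1/namespaces/monitoring/finalize -f -
```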
StatefulSet deployment of postgresql fails with initdb: error: directory "/var/lib/postgresql/data" exists but is not empty & Back-off restarting failed container
Summary:
The container keeps restarting: ``` [root@k8s-master01 sonarqube]# kubectl get pod -n ops NAME READY STATUS RESTARTS AGE gitlab-0 1/1 Running 0 170m pgsql-0 0/1 CrashLoopBac
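This initdb error is the classic symptom of mounting a volume (with its lost+found directory) directly at PGDATA; pointing PGDATA at a subdirectory below the mount is the standard fix for the official postgres image. A hedged sketch, with the statefulset name taken from the excerpt:

```
# Point PGDATA below the mount so initdb sees an empty directory
kubectl -n ops set env statefulset/pgsql PGDATA=/var/lib/postgresql/data/pgdata

# Recreate the pod; any half-initialized data left on the volume may also need clearing
kubectl -n ops delete pod pgsql-0
```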