All-in-One Persistence

Prerequisites for this lab: Kubernetes v1.23.6 with the cluster in Ready state; persistence is provided by NFS dynamic provisioning.
PS: keep the nodes time-synchronized. Clock skew will affect the final results; if time synchronization is not yet set up, install chrony to synchronize the clocks (a minimal example is shown below).
Main contents of the lab: NFS dynamic provisioning; Prometheus deployment and persistence; Grafana deployment and persistence; MySQL server deployment, persistence, and adding it to the monitoring stack.
The lab uses three hosts, each with 2 CPUs, 2 GB RAM and a 20 GB disk; the master gets an extra 20 GB disk for NFS.
The firewall is disabled and SELinux is disabled.
Since most of the YAML files are very similar, not every line is annotated individually.
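A minimal sketch of setting up chrony on every node, assuming a CentOS/RHEL-style host with yum (matching the other commands in this lab) and the distribution's default time servers:

yum install chrony -y            # install on every node
systemctl enable --now chronyd   # start the service and enable it at boot
chronyc sources -v               # verify the node is syncing against a time source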

NFS dynamic provisioning

yum install nfs-utils -y                           # on every node in the cluster
# Add a 20 GB disk to the master as NFS storage and format it as ext4
# After adding the disk, rescan the SCSI bus so it becomes visible
echo "- - -" > /sys/class/scsi_host/host<N>/scan   # <N> is the host adapter number
lsblk                                              # confirm the new disk shows up
# LVM setup
pvcreate /dev/sdb
vgcreate nfs /dev/sdb
lvcreate -L +19G -n data01 nfs                     # carve out a 19 GB logical volume
mkfs.ext4 /dev/mapper/nfs-data01                   # format it
mkdir /nfs                                         # create the mount point
mount /dev/mapper/nfs-data01 /nfs/                 # mount it manually
blkid                                              # look up the UUID
vi /etc/fstab                                      # add an entry so it mounts at boot (example below)
df -h                                              # confirm it is mounted; if not, re-check the steps above
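A sketch of the fstab entry, assuming the UUID reported by blkid for /dev/mapper/nfs-data01 (replace the placeholder with the actual value):

# /etc/fstab
UUID=<uuid-from-blkid>  /nfs  ext4  defaults  0 0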

vi /etc/exports                     # edit the exports file
/nfs  *(rw,no_root_squash)

systemctl start nfs-server          # start the service
systemctl enable --now nfs-server   # enable it at boot
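If /etc/exports is edited again later while nfs-server is already running, the export table can be refreshed without restarting the service:

exportfs -ra        # re-export everything in /etc/exports
exportfs -v         # show what is currently exported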

Log in to the other nodes and check that the NFS export is visible from there:

showmount -e master     # query the exports published by the master
Export list for master:
/nfs *

kubectl create ns nfs    # create the namespace
kubectl get ns           # confirm the namespace exists

The nfs-deployment.yaml file

apiVersion: apps/v1
kind: Deployment   # controller type
metadata:
  name: nfs-client-provisioner   # name
  labels:
    app: nfs-client-provisioner
  namespace: nfs
spec:
  replicas: 1   # number of replicas
  strategy:
    type: Recreate   # update strategy
  selector:
    matchLabels:
      app: nfs-client-provisioner   # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image:    # fill in the nfs-client-provisioner image here
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes   # mount path inside the container
          env:
            - name: PROVISIONER_NAME
              value: nfs-client   # provisioner name; the StorageClass created later must reference the same value
            - name: NFS_SERVER
              value: 192.168.10.134   # NFS server address
            - name: NFS_PATH
              value: /nfs   # exported NFS path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.10.134   # NFS server address
            path: /nfs   # exported NFS path

Build the nfs-sc.yaml file

## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to keep (archive) the data when a PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner   # keep the name and labels consistent with the earlier file
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1   # number of replicas
  strategy:
    type: Recreate   # update strategy
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/repoa/repo:1.0   # provisioner image
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes   # mount path inside the container
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner   # must match the provisioner field of the StorageClass
            - name: NFS_SERVER
              value: master   ## your NFS server address; the hostname must be resolvable on every node, otherwise use the IP
            - name: NFS_PATH
              value: /nfs   ## directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: master   ## your NFS server address
            path: /nfs   ## directory exported by the NFS server
---
apiVersion: v1
kind: ServiceAccount   # resource kind
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs   # namespace
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1   # API version
metadata:
  name: nfs-client-provisioner-runner   # name
rules:   # permissions granted to the provisioner
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding   # resource kind
apiVersion: rbac.authorization.k8s.io/v1   # API version
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io   # API group
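Apply the file and confirm that the provisioner pod and the default StorageClass exist (a sketch; the file name follows the heading above, and nfs-sc.yaml already contains the provisioner Deployment):

kubectl apply -f nfs-sc.yaml
kubectl get pod -n nfs          # the nfs-client-provisioner pod should be Running
kubectl get sc                  # nfs-storage should be listed and marked (default)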

Build the nfs-pvc.yaml file

apiVersion: v1
kind: PersistentVolumeClaim   # resource kind
metadata:
  name: nfs-pvc01
spec:
  accessModes:
    - ReadWriteOnce   # access mode
  resources:
    requests:
      storage: 5Gi   # requested size
  storageClassName: nfs-storage   # must match the StorageClass name above

Apply the YAML files built above, then check the resulting state:

kubectl get pod -A        # check pod status
kubectl get pvc           # check the PVCs; the normal state is Bound
kubectl get deployment    # check the Deployments
kubectl get svc -A        # check the Services

At this point NFS dynamic provisioning is in place. If anything is wrong, re-check the states above.
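One more way to confirm that dynamic provisioning really worked is to look at the PV created for the claim and at the backing directory on the NFS share. The directory name shown is how nfs-subdir-external-provisioner names its per-PV subdirectories (namespace-pvcName-pvName); treat the exact name as illustrative:

kubectl get pv                  # a PV bound to nfs-pvc01 should exist
ls /nfs/                        # on the master: one subdirectory per provisioned PV,
                                # e.g. default-nfs-pvc01-pvc-<uid>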

Prometheus persistence

For Prometheus, persistence mainly covers the configuration file and the time-series data directory. To find out which paths matter, you can pull the Prometheus image with Docker, run it, and use docker inspect to look at the entrypoint, arguments, and volumes; that is where the config and data paths below come from, so it is not discussed in detail here. Note that the image is already referenced in the Deployment below, so no separate installation step is needed.
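A minimal sketch of that inspection, assuming Docker is available on one of the hosts (the container name is an arbitrary choice):

docker pull prom/prometheus
docker run -d --name prom-peek prom/prometheus
docker inspect prom-peek          # look at .Config.Entrypoint, .Config.Cmd and .Config.Volumes
docker rm -f prom-peek            # clean up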

prometheus-deployment.yaml

apiVersion: apps/v1   # API version
kind: Deployment
metadata:
  name: prometheus-deployment   # name
  labels:   # labels
    app: prometheus
spec:
  replicas: 3   # number of replicas (note: they all share the single RWO PVC below; 1 is the safer choice for this lab)
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus
        command: ["/bin/prometheus"]   # the container entrypoint, taken from docker inspect
        args:   # the command-line arguments, also taken from docker inspect
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.path=/prometheus
        - --web.console.libraries=/usr/share/prometheus/console_libraries
        - --web.console.templates=/usr/share/prometheus/consoles
        ports:
        - containerPort: 9090   # Prometheus web/API port
        volumeMounts:   # volume mounts
        - name: prometheus-config   # configuration file mount
          mountPath: "/etc/prometheus"
        - name: prometheus-pvc   # data directory mount (PVC)
          mountPath: "/prometheus"
      volumes:
      - name: prometheus-config
        configMap:   # the configuration file is mounted from a ConfigMap
          name: prometheus-configmap
      - name: prometheus-pvc
        persistentVolumeClaim:
          claimName: pro-pvc   # must match the PVC name below
---
apiVersion: v1   # API version
kind: ConfigMap
metadata:
  name: prometheus-configmap   # name referenced by the Deployment above
data:
  prometheus.yml: |   # the entire prometheus.yml configuration file
    # my global config
    global:
      scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
      evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).
    
    # Alertmanager configuration
    alerting:
      alertmanagers:
        - static_configs:
            - targets:
              # - alertmanager:9093
    
    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    rule_files:
      # - "first_rules.yml"
      # - "second_rules.yml"
    
    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: "prometheus"
    
        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.
    
        static_configs:
          - targets: ["localhost:9090"]

pro-pvc and the prometheus Service

apiVersion: v1
kind: PersistentVolumeClaim   # resource kind
metadata:
  name: pro-pvc   # must match the claimName in the Deployment
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem   # mounted as a filesystem
  resources:
    requests:
      storage: 5Gi   # requested size
  storageClassName: nfs-storage   # must match the StorageClass name
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: NodePort   # expose a node port so the web UI can be reached from outside the cluster
  selector:
    app: prometheus   # must match the pod labels of the Deployment above
  ports:
    - protocol: TCP
      port: 9090
      targetPort: 9090
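After applying these files, a quick sanity check (a sketch, using the names defined above):

kubectl get pod -l app=prometheus        # the Prometheus pods should be Running
kubectl get pvc pro-pvc                  # should be Bound
kubectl get svc prometheus               # note the NodePort to use in the browser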
      

Grafana persistence

For Grafana, persistence mainly covers its data/configuration directory. As with Prometheus above, you can pull the image with Docker, run it, and inspect it to find the paths to persist, so this is not described again. Once the YAML below has been applied, check that the pod is running; you can also test access through the Service's node port, as sketched after this paragraph.
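A sketch of that access check, using the Service named grafana defined below (the node port is assigned by the cluster; the node IP placeholder is yours to fill in):

kubectl get pod -l app=grafana                 # should be Running
kubectl get svc grafana                        # read the NodePort, e.g. 3000:3xxxx/TCP
curl -I http://<node-ip>:<nodeport>/login      # the Grafana login page should answer with HTTP 200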

grafana-deployment

apiVersion: apps/v1
kind: Deployment  
metadata:
  name: grafana-deployment
  labels:
    app: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name:  grafana
        image: grafana/grafana
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: grafana-pvc
          mountPath: "/var/lib/grafana"
      volumes:
      - name: grafana-pvc
        persistentVolumeClaim:
          claimName: gr-pvc
 
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000

gr-pvc

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gr-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-storage

MySQL persistence

For MySQL, persistence is mainly about the data directory; the relevant paths can again be found by inspecting the image with Docker, so that is not repeated here. Note that MySQL uses a StatefulSet controller rather than a Deployment, and the manifest differs from a Deployment in several places, so read it carefully.

mysql-statefulset

apiVersion: apps/v1
kind: StatefulSet   # controller type
metadata:
  name: mysql-statefulset
  labels:
    app: mysql
spec:
  serviceName: "mysql"
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql/mysql-server   # image; it generates a random initial root password unless MYSQL_ROOT_PASSWORD is set
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-pvc
          mountPath: "/var/lib/mysql"   # MySQL data directory
      - name: mysql-exporter   # note: a second container is added here for the Prometheus exporter
        image: prom/mysqld-exporter
        env:
        - name: DATA_SOURCE_NAME
          value: "root:123456@(localhost:3306)/"   # user and password; must match the password set inside MySQL (see below)
        ports:
        - containerPort: 9104   # exporter port
  volumeClaimTemplates:
  - metadata:
      name: mysql-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
    - name: mysql
      protocol: TCP
      port: 3306
      targetPort: 3306
    - name: mysql-exporter
      port: 9104
      targetPort: 9104

At this point, check the pod status, then exec into the MySQL container and change the root password, write some test data, delete the MySQL pod so the StatefulSet recreates it, and check whether the data is still there. If it is, persistence works; if not, re-check the configuration of the files above. A sketch of this test follows.
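A rough sketch of that test, assuming the pod name mysql-statefulset-0 (StatefulSet pods are named name-ordinal), the password 123456 expected by the exporter above, and an arbitrary demo database/table:

# log in with the initial password (printed in the mysql container log by mysql/mysql-server),
# then set the root password the exporter expects and write some data
kubectl logs mysql-statefulset-0 -c mysql | grep -i password
kubectl exec -it mysql-statefulset-0 -c mysql -- mysql -uroot -p
#   ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
#   CREATE DATABASE demo; USE demo; CREATE TABLE t (id INT); INSERT INTO t VALUES (1);

# recreate the pod and check that the data survived
kubectl delete pod mysql-statefulset-0
kubectl exec -it mysql-statefulset-0 -c mysql -- mysql -uroot -p123456 -e "SELECT * FROM demo.t;"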

Prometheus monitoring

Most of this part has to be defined yourself: install the Prometheus node exporter and adjust some configuration. Note that the node exporter must be installed and running on every node.

wget https://github.com/prometheus/node_exporter/releases/download/v1.4.0/node_exporter-1.4.0.linux-amd64.tar.gz
scp node_exporter-1.4.0.linux-amd64.tar.gz node01:~/
scp node_exporter-1.4.0.linux-amd64.tar.gz node02:~/
tar xf node_exporter-1.4.0.linux-amd64.tar.gz
cd node_exporter-1.4.0.linux-amd64
nohup ./node_exporter &        # run it in the background (on every node)
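nohup does not survive a reboot; an alternative sketch is to run the binary as a systemd service (the install path and unit name are my own choices, not from the original write-up):

cp node_exporter-1.4.0.linux-amd64/node_exporter /usr/local/bin/
cat > /etc/systemd/system/node_exporter.service <<'EOF'
[Unit]
Description=Prometheus node_exporter
After=network.target

[Service]
ExecStart=/usr/local/bin/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now node_exporter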

Now edit the configuration. Since the Prometheus configuration already lives in the ConfigMap inside the deployment file, all that is needed is to add the new scrape jobs there.

    scrape_configs:
      - job_name: "prometheus"
        static_configs:                 # existing job, shown for context
          - targets: ["localhost:9090"]
      - job_name: "Node"                # new job: the node exporters
        static_configs:
          - targets: ["192.168.10.134:9100","192.168.10.135:9100","192.168.10.136:9100"]
      - job_name: "Mysql"               # new job: the mysqld exporter, reached through the mysql Service
        static_configs:
          - targets: ["mysql:9104"]

Then look up the Service and its node port and open the Prometheus web UI to check whether the new jobs show up under Targets (by default they are not there). Prometheus really does need synchronized clocks, otherwise the results of the lab will be off.
The jobs we added are now visible in Prometheus.
Next, open Grafana. The default user and password are both admin; the first login forces a password change. Then add a data source and select Prometheus.
You can import a dashboard template to see the result; remember to pick Prometheus as the data source. If the data does not display properly, check whether Prometheus itself is healthy.
