11 Prometheus Monitoring System (Repost)

Prometheus and Ceph

The Ceph Dashboard provides some monitoring capability, but finer-grained and custom monitoring is beyond it. Prometheus is a new-generation monitoring system that offers far more complete metrics and alerting. A complete Prometheus setup usually consists of:

  • exporter: the monitoring agent that exposes metrics for collection; the Ceph mgr ships one by default, with built-in metrics
  • prometheus: the monitoring server; it stores the metric data and provides querying, monitoring, alerting, and display;
  • grafana: pulls metrics from Prometheus and renders them through dashboard templates

[Figure: exporter / prometheus / grafana monitoring architecture]

The exporter client

Rook enables the exporter by default: the rook-ceph-mgr service exposes port 9283 as the agent endpoint. The service type defaults to ClusterIP; change it to NodePort to allow external access, as follows:

[root@m1 ~]# kubectl -n rook-ceph get svc -l app=rook-ceph-mgr
NAME                                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
rook-ceph-mgr                            ClusterIP   10.68.82.138   <none>        9283/TCP         3d
rook-ceph-mgr-dashboard                  ClusterIP   10.68.153.82   <none>        8443/TCP         3d
rook-ceph-mgr-dashboard-external-https   NodePort    10.68.41.90    <none>        8443:35832/TCP   155m

[root@m1 ~]# kubectl -n rook-ceph  edit svc rook-ceph-mgr
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-11-24T03:24:57Z"
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  name: rook-ceph-mgr
  namespace: rook-ceph
  ownerReferences:
  - apiVersion: ceph.rook.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: CephCluster
    name: rook-ceph
    uid: 8279c0cb-e44f-4af6-8689-115025bb2940
  resourceVersion: "406301"
  uid: 88c71e2d-9a07-441f-a3ad-25f00e551806
spec:
  clusterIP: 10.68.82.138
  clusterIPs:
  - 10.68.82.138
  ports:
  - name: http-metrics
    port: 9283
    protocol: TCP
    targetPort: 9283
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

[root@m1 ~]# kubectl -n rook-ceph get svc -l app=rook-ceph-mgr
NAME                                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
rook-ceph-mgr                            NodePort    10.68.82.138   <none>        9283:38769/TCP   3d
rook-ceph-mgr-dashboard                  ClusterIP   10.68.153.82   <none>        8443/TCP         3d
rook-ceph-mgr-dashboard-external-https   NodePort    10.68.41.90    <none>        8443:35832/TCP   156m
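
If you prefer a non-interactive change, the same edit can be applied with a one-line patch (equivalent to the kubectl edit above; the node port is assigned randomly unless you set nodePort explicitly):

kubectl -n rook-ceph patch svc rook-ceph-mgr -p '{"spec":{"type":"NodePort"}}'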

After the change, open the agent's metrics URL in a browser and you can see the monitoring data reported by the mgr exporter. It is very detailed and covers most of the metrics and data you will need:

[root@m1 ~]# curl http://m1:38769/metrics
# HELP ceph_health_status Cluster health status
# TYPE ceph_health_status untyped
ceph_health_status 1.0
# HELP ceph_mon_quorum_status Monitors in quorum
# TYPE ceph_mon_quorum_status gauge
ceph_mon_quorum_status{ceph_daemon="mon.a"} 1.0
ceph_mon_quorum_status{ceph_daemon="mon.b"} 1.0
ceph_mon_quorum_status{ceph_daemon="mon.c"} 1.0
# HELP ceph_fs_metadata FS Metadata
# TYPE ceph_fs_metadata untyped
ceph_fs_metadata{data_pools="4",fs_id="1",metadata_pool="3",name="myfs"} 1.0
# HELP ceph_mds_metadata MDS Metadata
# TYPE ceph_mds_metadata untyped
......

Browse to: http://m1:38769/metrics

[Screenshot: mgr exporter metrics in the browser]

Deploying the Prometheus Operator

Running Prometheus in containers on Kubernetes by hand is complex work, involving the configuration and integration of several components. Fortunately, the community provides an Operator we can use directly. Deploy it as follows:

[root@m1 ceph]# kubectl apply -f https://ghproxy.com/https://raw.githubusercontent.com/coreos/prometheus-operator/v0.40.0/bundle.yaml
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
serviceaccount/prometheus-operator created
service/prometheus-operator created

After this runs, a prometheus-operator pod is deployed automatically. This pod is the Operator controller; it manages the Prometheus server instances we create in the next step.

[root@m1 ceph]# kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
prometheus-operator-7ccf6dfc8-sdhdp   1/1     Running   0          81s
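
Before continuing, you can sanity-check that the Operator's CRDs from the bundle above were registered:

kubectl get crd | grep monitoring.coreos.com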

Deploying Prometheus Instances

The manifests used below ship with the Rook examples, in the monitoring/ directory:

[root@m1 ceph]# cd monitoring/

[root@m1 monitoring]# kubectl apply -f prometheus.yaml 
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus-rules created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
prometheus.monitoring.coreos.com/rook-prometheus created

[root@m1 monitoring]# kubectl apply -f prometheus-service.yaml 
service/rook-prometheus created

[root@m1 monitoring]# kubectl apply -f service-monitor.yaml 
servicemonitor.monitoring.coreos.com/rook-ceph-mgr created
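
For reference, service-monitor.yaml defines a small ServiceMonitor object that tells Prometheus to scrape the rook-ceph-mgr service we exposed earlier. A minimal sketch (field values assumed to match the Rook example manifest):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rook-ceph-mgr
  namespace: rook-ceph
spec:
  namespaceSelector:
    matchNames:
    - rook-ceph
  selector:
    matchLabels:
      app: rook-ceph-mgr
      rook_cluster: rook-ceph
  endpoints:
  - port: http-metrics   # the named 9283 port on the rook-ceph-mgr service
    path: /metrics
    interval: 5s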

  • View the deployment:

[root@m1 monitoring]# kubectl -n rook-ceph get pods -l app=prometheus
NAME                           READY   STATUS    RESTARTS   AGE
prometheus-rook-prometheus-0   3/3     Running   1          100s

[root@m1 monitoring]# kubectl -n rook-ceph get svc | grep prom
prometheus-operated                      ClusterIP   None            <none>        9090/TCP            2m42s
rook-prometheus                          NodePort    10.68.99.134    <none>        9090:30900/TCP      2m34s

[root@m1 monitoring]# kubectl -n rook-ceph get ServiceMonitor
NAME            AGE
rook-ceph-mgr   3m19s

Prometheus Console

Prometheus serves its console on port 9090. The prometheus-service.yaml applied above exposed it as a NodePort service, so the console is reachable from outside the cluster on port 30900, as follows:

[root@m1 monitoring]# kubectl -n rook-ceph get svc | grep rook-prom
rook-prometheus                          NodePort    10.68.99.134    <none>        9090:30900/TCP      6m

Open the console and you can query the metrics.
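
For example, a few queries built from the metrics we saw at the exporter (illustrative; paste them into the query box):

# Overall cluster health (0 = HEALTH_OK, 1 = HEALTH_WARN, 2 = HEALTH_ERR)
ceph_health_status

# Which monitors are in quorum
ceph_mon_quorum_status

# Raw capacity utilization ratio of the whole cluster
ceph_cluster_total_used_raw_bytes / ceph_cluster_total_bytes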

[Screenshots: querying metrics in the Prometheus console]

Installing the grafana server

Prometheus stores the monitoring data reported by the exporters in one place, and you can query it with PromQL and graph the results. Prometheus's built-in graphing UI is rather crude, though, and clearly cannot meet day-to-day needs. Fortunately, Grafana is the expert in this area, so we use Grafana for display; it supports Prometheus as a backend data source. Install it as follows:

[root@node0 prometheus]# yum install https://mirrors.cloud.tencent.com/grafana/yum/rpm/grafana-9.2.3-1.x86_64.rpm

[root@m1 monitoring]# systemctl enable grafana-server --now
Created symlink from /etc/systemd/system/multi-user.target.wants/grafana-server.service to /usr/lib/systemd/system/grafana-server.service.

[root@m1 monitoring]# ss -tnlp | grep grafana
LISTEN     0      32768     [::]:3000                  [::]:*                   users:(("grafana-server",pid=99022,fd=8))
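
Grafana also exposes a small health endpoint, which is a quick way to confirm the server is serving requests (it returns a short JSON payload including the database status):

curl http://m1:3000/api/health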

Configuring the grafana data source

Grafana is an open-source display tool whose data relies on backend data sources (Data Sources), so a data source must be configured for the integration. Here we add Prometheus as a data source in Grafana, which uses Prometheus's port 9090 (reachable externally through the NodePort 30900 above).

Open: http://m1:3000/

[Screenshots: adding the Prometheus data source in Grafana]
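
The screenshots above configure the data source through the UI. Alternatively, Grafana can load it from a provisioning file at startup; a minimal sketch, assuming Grafana reaches Prometheus through the NodePort exposed earlier:

# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  url: http://m1:30900   # the rook-prometheus NodePort; adjust for your environment
  isDefault: true

Restart grafana-server after adding the file so it is picked up.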

Configuring Grafana dashboards

How does Grafana display the Prometheus data? That would normally mean writing PromQL queries yourself, but fortunately many predefined dashboard JSON templates already exist (for example, the Ceph cluster, pool, and OSD dashboards published on grafana.com) that we can use directly: download the JSON and import it into Grafana.

[Screenshot: importing a dashboard JSON into Grafana]

Grafana monitoring views

[Screenshot: Grafana Ceph overview dashboard]

Cluster monitoring view

[Screenshot: cluster dashboard]

Pool monitoring view

[Screenshot: pool dashboard]

OSD monitoring view

[Screenshot: OSD dashboard]

Configuring Prometheus Alerts

Create the RBAC rules to enable monitoring.

[root@m1 monitoring]# kubectl create -f rbac.yaml
role.rbac.authorization.k8s.io/rook-ceph-monitor created
rolebinding.rbac.authorization.k8s.io/rook-ceph-monitor created
role.rbac.authorization.k8s.io/rook-ceph-metrics created
rolebinding.rbac.authorization.k8s.io/rook-ceph-metrics created

Make the following changes to your CephCluster object (e.g., cluster.yaml).

[root@m1 monitoring]# vim ../cluster.yaml 

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
[...]
spec:
[...]
  monitoring:
    enabled: true   # enable monitoring; defaults to false
    rulesNamespace: "rook-ceph"
[...]

(Where rook-ceph is the CephCluster name / namespace)

Deploy or update the CephCluster object.

[root@m1 monitoring]# kubectl apply -f ../cluster.yaml
cephcluster.ceph.rook.io/rook-ceph configured
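
Once monitoring is enabled, you can list the monitoring objects present in the namespace (the ServiceMonitor created earlier, plus any PrometheusRule objects added next):

kubectl -n rook-ceph get servicemonitor,prometheusrule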

Adding alert rules to the cluster

[root@m1 monitoring]# cat prometheus-ceph-v14-rules.yaml 
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: rook-prometheus
    role: alert-rules
  name: prometheus-ceph-rules
  namespace: rook-ceph
spec:
  groups:
  - name: ceph.rules
    rules:
    - expr: |
        kube_node_status_condition{condition="Ready",job="kube-state-metrics",status="true"} * on (node) group_right() max(label_replace(ceph_disk_occupation{job="rook-ceph-mgr"},"node","$1","exported_instance","(.*)")) by (node)
      record: cluster:ceph_node_down:join_kube
    - expr: |
        avg(max by(instance) (label_replace(label_replace(ceph_disk_occupation{job="rook-ceph-mgr"}, "instance", "$1", "exported_instance", "(.*)"), "device", "$1", "device", "/dev/(.*)") * on(instance, device) group_right() (irate(node_disk_read_time_seconds_total[1m]) + irate(node_disk_write_time_seconds_total[1m]) / (clamp_min(irate(node_disk_reads_completed_total[1m]), 1) + irate(node_disk_writes_completed_total[1m])))))
      record: cluster:ceph_disk_latency:join_ceph_node_disk_irate1m
  - name: telemeter.rules
    rules:
    - expr: |
        count(ceph_osd_metadata{job="rook-ceph-mgr"})
      record: job:ceph_osd_metadata:count
    - expr: |
        count(kube_persistentvolume_info * on (storageclass)  group_left(provisioner) kube_storageclass_info {provisioner=~"(.*rbd.csi.ceph.com)|(.*cephfs.csi.ceph.com)"})
      record: job:kube_pv:count
    - expr: |
        sum(ceph_pool_rd{job="rook-ceph-mgr"}+ ceph_pool_wr{job="rook-ceph-mgr"})
      record: job:ceph_pools_iops:total
    - expr: |
        sum(ceph_pool_rd_bytes{job="rook-ceph-mgr"}+ ceph_pool_wr_bytes{job="rook-ceph-mgr"})
      record: job:ceph_pools_iops_bytes:total
    - expr: |
        count(count(ceph_mon_metadata{job="rook-ceph-mgr"} or ceph_osd_metadata{job="rook-ceph-mgr"} or ceph_rgw_metadata{job="rook-ceph-mgr"} or ceph_mds_metadata{job="rook-ceph-mgr"} or ceph_mgr_metadata{job="rook-ceph-mgr"}) by(ceph_version))
      record: job:ceph_versions_running:count
  - name: ceph-mgr-status
    rules:
    - alert: CephMgrIsAbsent
      annotations:
        description: Ceph Manager has disappeared from Prometheus target discovery.
        message: Storage metrics collector service not available anymore.
        severity_level: critical
        storage_type: ceph
      expr: |
        absent(up{job="rook-ceph-mgr"} == 1)
      for: 5m
      labels:
        severity: critical
    - alert: CephMgrIsMissingReplicas
      annotations:
        description: Ceph Manager is missing replicas.
        message: Storage metrics collector service doesn't have required no of replicas.
        severity_level: warning
        storage_type: ceph
      expr: |
        sum(up{job="rook-ceph-mgr"}) < 1
      for: 5m
      labels:
        severity: warning
  - name: ceph-mds-status
    rules:
    - alert: CephMdsMissingReplicas
      annotations:
        description: Minimum required replicas for storage metadata service not available.
          Might affect the working of storage cluster.
        message: Insufficient replicas for storage metadata service.
        severity_level: warning
        storage_type: ceph
      expr: |
        sum(ceph_mds_metadata{job="rook-ceph-mgr"} == 1) < 2
      for: 5m
      labels:
        severity: warning
  - name: quorum-alert.rules
    rules:
    - alert: CephMonQuorumAtRisk
      annotations:
        description: Storage cluster quorum is low. Contact Support.
        message: Storage quorum at risk
        severity_level: error
        storage_type: ceph
      expr: |
        count(ceph_mon_quorum_status{job="rook-ceph-mgr"} == 1) <= ((count(ceph_mon_metadata{job="rook-ceph-mgr"}) % 2) + 1)
      for: 15m
      labels:
        severity: critical
    - alert: CephMonHighNumberOfLeaderChanges
      annotations:
        description: Ceph Monitor {{ $labels.ceph_daemon }} on host {{ $labels.hostname
          }} has seen {{ $value | printf "%.2f" }} leader changes per minute recently.
        message: Storage Cluster has seen many leader changes recently.
        severity_level: warning
        storage_type: ceph
      expr: |
        (ceph_mon_metadata{job="rook-ceph-mgr"} * on (ceph_daemon) group_left() (rate(ceph_mon_num_elections{job="rook-ceph-mgr"}[5m]) * 60)) > 0.95
      for: 5m
      labels:
        severity: warning
  - name: ceph-node-alert.rules
    rules:
    - alert: CephNodeDown
      annotations:
        description: Storage node {{ $labels.node }} went down. Please check the node
          immediately.
        message: Storage node {{ $labels.node }} went down
        severity_level: error
        storage_type: ceph
      expr: |
        cluster:ceph_node_down:join_kube == 0
      for: 30s
      labels:
        severity: critical
  - name: osd-alert.rules
    rules:
    - alert: CephOSDCriticallyFull
      annotations:
        description: Utilization of storage device {{ $labels.ceph_daemon }} of device_class
          type {{$labels.device_class}} has crossed 80% on host {{ $labels.hostname
          }}. Immediately free up some space or add capacity of type {{$labels.device_class}}.
        message: Back-end storage device is critically full.
        severity_level: error
        storage_type: ceph
      expr: |
        (ceph_osd_metadata * on (ceph_daemon) group_right(device_class) (ceph_osd_stat_bytes_used / ceph_osd_stat_bytes)) >= 0.80
      for: 40s
      labels:
        severity: critical
    - alert: CephOSDNearFull
      annotations:
        description: Utilization of storage device {{ $labels.ceph_daemon }} of device_class
          type {{$labels.device_class}} has crossed 75% on host {{ $labels.hostname
          }}. Immediately free up some space or add capacity of type {{$labels.device_class}}.
        message: Back-end storage device is nearing full.
        severity_level: warning
        storage_type: ceph
      expr: |
        (ceph_osd_metadata * on (ceph_daemon) group_right(device_class) (ceph_osd_stat_bytes_used / ceph_osd_stat_bytes)) >= 0.75
      for: 40s
      labels:
        severity: warning
    - alert: CephOSDDiskNotResponding
      annotations:
        description: Disk device {{ $labels.device }} not responding, on host {{ $labels.host
          }}.
        message: Disk not responding
        severity_level: error
        storage_type: ceph
      expr: |
        label_replace((ceph_osd_in == 1 and ceph_osd_up == 0),"disk","$1","ceph_daemon","osd.(.*)") + on(ceph_daemon) group_left(host, device) label_replace(ceph_disk_occupation,"host","$1","exported_instance","(.*)")
      for: 1m
      labels:
        severity: critical
    - alert: CephOSDDiskUnavailable
      annotations:
        description: Disk device {{ $labels.device }} not accessible on host {{ $labels.host
          }}.
        message: Disk not accessible
        severity_level: error
        storage_type: ceph
      expr: |
        label_replace((ceph_osd_in == 0 and ceph_osd_up == 0),"disk","$1","ceph_daemon","osd.(.*)") + on(ceph_daemon) group_left(host, device) label_replace(ceph_disk_occupation,"host","$1","exported_instance","(.*)")
      for: 1m
      labels:
        severity: critical
    - alert: CephDataRecoveryTakingTooLong
      annotations:
        description: Data recovery has been active for too long. Contact Support.
        message: Data recovery is slow
        severity_level: warning
        storage_type: ceph
      expr: |
        ceph_pg_undersized > 0
      for: 2h
      labels:
        severity: warning
    - alert: CephPGRepairTakingTooLong
      annotations:
        description: Self heal operations taking too long. Contact Support.
        message: Self heal problems detected
        severity_level: warning
        storage_type: ceph
      expr: |
        ceph_pg_inconsistent > 0
      for: 1h
      labels:
        severity: warning
  - name: persistent-volume-alert.rules
    rules:
    - alert: PersistentVolumeUsageNearFull
      annotations:
        description: PVC {{ $labels.persistentvolumeclaim }} utilization has crossed
          75%. Free up some space or expand the PVC.
        message: PVC {{ $labels.persistentvolumeclaim }} is nearing full. Data deletion
          or PVC expansion is required.
        severity_level: warning
        storage_type: ceph
      expr: |
        (kubelet_volume_stats_used_bytes * on (namespace,persistentvolumeclaim) group_left(storageclass, provisioner) (kube_persistentvolumeclaim_info * on (storageclass)  group_left(provisioner) kube_storageclass_info {provisioner=~"(.*rbd.csi.ceph.com)|(.*cephfs.csi.ceph.com)"})) / (kubelet_volume_stats_capacity_bytes * on (namespace,persistentvolumeclaim) group_left(storageclass, provisioner) (kube_persistentvolumeclaim_info * on (storageclass)  group_left(provisioner) kube_storageclass_info {provisioner=~"(.*rbd.csi.ceph.com)|(.*cephfs.csi.ceph.com)"})) > 0.75
      for: 5s
      labels:
        severity: warning
    - alert: PersistentVolumeUsageCritical
      annotations:
        description: PVC {{ $labels.persistentvolumeclaim }} utilization has crossed
          85%. Free up some space or expand the PVC immediately.
        message: PVC {{ $labels.persistentvolumeclaim }} is critically full. Data
          deletion or PVC expansion is required.
        severity_level: error
        storage_type: ceph
      expr: |
        (kubelet_volume_stats_used_bytes * on (namespace,persistentvolumeclaim) group_left(storageclass, provisioner) (kube_persistentvolumeclaim_info * on (storageclass)  group_left(provisioner) kube_storageclass_info {provisioner=~"(.*rbd.csi.ceph.com)|(.*cephfs.csi.ceph.com)"})) / (kubelet_volume_stats_capacity_bytes * on (namespace,persistentvolumeclaim) group_left(storageclass, provisioner) (kube_persistentvolumeclaim_info * on (storageclass)  group_left(provisioner) kube_storageclass_info {provisioner=~"(.*rbd.csi.ceph.com)|(.*cephfs.csi.ceph.com)"})) > 0.85
      for: 5s
      labels:
        severity: critical
  - name: cluster-state-alert.rules
    rules:
    - alert: CephClusterErrorState
      annotations:
        description: Storage cluster is in error state for more than 10m.
        message: Storage cluster is in error state
        severity_level: error
        storage_type: ceph
      expr: |
        ceph_health_status{job="rook-ceph-mgr"} > 1
      for: 10m
      labels:
        severity: critical
    - alert: CephClusterWarningState
      annotations:
        description: Storage cluster is in warning state for more than 10m.
        message: Storage cluster is in degraded state
        severity_level: warning
        storage_type: ceph
      expr: |
        ceph_health_status{job="rook-ceph-mgr"} == 1
      for: 10m
      labels:
        severity: warning
    - alert: CephOSDVersionMismatch
      annotations:
        description: There are {{ $value }} different versions of Ceph OSD components
          running.
        message: There are multiple versions of storage services running.
        severity_level: warning
        storage_type: ceph
      expr: |
        count(count(ceph_osd_metadata{job="rook-ceph-mgr"}) by (ceph_version)) > 1
      for: 10m
      labels:
        severity: warning
    - alert: CephMonVersionMismatch
      annotations:
        description: There are {{ $value }} different versions of Ceph Mon components
          running.
        message: There are multiple versions of storage services running.
        severity_level: warning
        storage_type: ceph
      expr: |
        count(count(ceph_mon_metadata{job="rook-ceph-mgr"}) by (ceph_version)) > 1
      for: 10m
      labels:
        severity: warning
  - name: cluster-utilization-alert.rules
    rules:
    - alert: CephClusterNearFull
      annotations:
        description: Storage cluster utilization has crossed 75% and will become read-only
          at 85%. Free up some space or expand the storage cluster.
        message: Storage cluster is nearing full. Data deletion or cluster expansion
          is required.
        severity_level: warning
        storage_type: ceph
      expr: |
        ceph_cluster_total_used_raw_bytes / ceph_cluster_total_bytes > 0.75
      for: 5s
      labels:
        severity: warning
    - alert: CephClusterCriticallyFull
      annotations:
        description: Storage cluster utilization has crossed 80% and will become read-only
          at 85%. Free up some space or expand the storage cluster immediately.
        message: Storage cluster is critically full and needs immediate data deletion
          or cluster expansion.
        severity_level: error
        storage_type: ceph
      expr: |
        ceph_cluster_total_used_raw_bytes / ceph_cluster_total_bytes > 0.80
      for: 5s
      labels:
        severity: critical
    - alert: CephClusterReadOnly
      annotations:
        description: Storage cluster utilization has crossed 85% and will become read-only
          now. Free up some space or expand the storage cluster immediately.
        message: Storage cluster is read-only now and needs immediate data deletion
          or cluster expansion.
        severity_level: error
        storage_type: ceph
      expr: |
        ceph_cluster_total_used_raw_bytes / ceph_cluster_total_bytes >= 0.85
      for: 0s
      labels:
        severity: critical
[root@m1 monitoring]# kubectl apply -f prometheus-ceph-v14-rules.yaml
prometheusrule.monitoring.coreos.com/prometheus-ceph-rules created
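
You can confirm the rule object exists, then open Status -> Rules in the Prometheus console to verify the groups were loaded:

kubectl -n rook-ceph get prometheusrule prometheus-ceph-rules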

Once this is configured, open the Prometheus console and the alert rules and their state are visible, as below:

  • Alert details

[Screenshot: firing alerts in the Prometheus console]

  • Alert configuration

[Screenshot: configured alert rules]
