13 Ceph Administration and Monitoring

Overview of the Ceph mgr Component

  • ceph-mgr was introduced in Ceph 12.x and is the component responsible for managing the Ceph cluster.

  • The Ceph Manager daemon (ceph-mgr) tracks runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load.

  • The Ceph Manager daemon also hosts Python-based modules that manage and expose Ceph cluster information, including the web-based Ceph Dashboard and a REST API.

  • Multiple mgr daemons can be deployed; they are highly available by default: one is active and the others are standby.

mgr Features and Functionality

mgr Features

The dashboard provides the following features:

  • Multi-User and Role Management: The dashboard supports multiple user
    accounts with different permissions (roles). The user accounts and roles
    can be modified on both the command line and via the WebUI.
    See User and Role Management for details.
  • Single Sign-On (SSO): the dashboard supports authentication
    via an external identity provider using the SAML 2.0 protocol. See
    Enabling Single Sign-On (SSO) for details.
  • SSL/TLS support: All HTTP communication between the web browser and the
    dashboard is secured via SSL. A self-signed certificate can be created with
    a built-in command, but it’s also possible to import custom certificates
    signed and issued by a CA. See SSL/TLS Support for details.
  • Auditing: the dashboard backend can be configured to log all PUT, POST
    and DELETE API requests in the Ceph audit log. See Auditing API Requests
    for instructions on how to enable this feature.
  • Internationalization (I18N): the dashboard can be used in different
    languages that can be selected at run-time.

mgr Functionality

Currently, Ceph Dashboard is capable of monitoring and managing the following
aspects of your Ceph cluster:

  • Overall cluster health: Display overall cluster status, performance
    and capacity metrics.
  • Embedded Grafana Dashboards: Ceph Dashboard is capable of embedding
    Grafana dashboards in many locations, to display additional information
    and performance metrics gathered by the Prometheus Module. See
    Enabling the Embedding of Grafana Dashboards for details on how to configure this functionality.
  • Cluster logs: Display the latest updates to the cluster’s event and
    audit log files. Log entries can be filtered by priority, date or keyword.
  • Hosts: Display a list of all hosts associated to the cluster, which
    services are running and which version of Ceph is installed.
  • Performance counters: Display detailed service-specific statistics for
    each running service.
  • Monitors: List all MONs, their quorum status, open sessions.
  • Monitoring: Enables creation, re-creation, editing and expiration of
    Prometheus’ Silences, lists the alerting configuration of Prometheus and
    currently firing alerts. Also shows notifications for firing alerts. Needs
    configuration.
  • Configuration Editor: Display all available configuration options,
    their description, type and default values and edit the current values.
  • Pools: List all Ceph pools and their details (e.g. applications,
    placement groups, replication size, EC profile, CRUSH ruleset, etc.)
  • OSDs: List all OSDs, their status and usage statistics as well as
    detailed information like attributes (OSD map), metadata, performance
    counters and usage histograms for read/write operations. Mark OSDs
    up/down/out, purge and reweight OSDs, perform scrub operations, modify
    various scrub-related configuration options, select different profiles to
    adjust the level of backfilling activity.
  • iSCSI: List all hosts that run the TCMU runner service, display all
    images and their performance characteristics (read/write ops, traffic).
    Create, modify and delete iSCSI targets (via ceph-iscsi). See
    Enabling iSCSI Management for instructions on how to configure this
    feature.
  • RBD: List all RBD images and their properties (size, objects, features).
    Create, copy, modify and delete RBD images. Define various I/O or bandwidth
    limitation settings on a global, per-pool or per-image level. Create, delete
    and rollback snapshots of selected images, protect/unprotect these snapshots
    against modification. Copy or clone snapshots, flatten cloned images.
  • RBD mirroring: Enable and configure RBD mirroring to a remote Ceph server.
    Lists all active sync daemons and their status, pools and RBD images including
    their synchronization state.
  • CephFS: List all active filesystem clients and associated pools,
    including their usage statistics.
  • Object Gateway: List all active object gateways and their performance
    counters. Display and manage (add/edit/delete) object gateway users and their
    details (e.g. quotas) as well as the users’ buckets and their details (e.g.
    owner, quotas). See Enabling the Object Gateway Management Frontend for configuration
    instructions.
  • NFS: Manage NFS exports of CephFS filesystems and RGW S3 buckets via NFS
    Ganesha. See NFS-Ganesha Management for details on how to
    enable this functionality.
  • Ceph Manager Modules: Enable and disable all Ceph Manager modules, change
    the module-specific configuration settings.

Deploying the Ceph Dashboard

Install the Ceph Dashboard Package

[root@node0 csi]# yum install --nogpgcheck ceph-mgr-dashboard

Enable the Dashboard


[root@node0 csi]# ceph mgr module enable dashboard
Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement

[root@node0 csi]# ceph -h | grep modu
......
mgr module ls                                                              
......

# List the enabled modules
[root@node0 csi]# ceph mgr module ls | grep enable -A 5
    "enabled_modules": [
        "iostat",
        "restful"
    ],


[root@node0 csi]# ceph mgr module enable dashboard --force

# List the enabled modules again; dashboard now appears
[root@node0 csi]# ceph mgr module ls | grep enable -A 5
    "enabled_modules": [
        "dashboard",
        "iostat",
        "restful"
    ],

Generate a Self-Signed Certificate

[root@node0 csi]# ceph dashboard create-self-signed-cert
Self-signed certificate created

Set the Dashboard IP Address and Ports

[root@node0 csi]# ceph config set mgr mgr/dashboard/server_addr 192.168.100.130

[root@node0 csi]# ceph config set mgr mgr/dashboard/server_port 8080

[root@node0 csi]# ceph config set mgr mgr/dashboard/ssl_server_port 8443

View Dashboard Information

[root@node0 csi]# ceph mgr services
{
    "dashboard": "https://node0:8443/"
}

Create a User and Set Permissions

[root@node0 csi]# cat password.txt 
cephpassword

[root@node0 csi]# ceph dashboard ac-user-create cephadmin -i password.txt administrator
{"username": "cephadmin", "lastUpdate": 1667901632, "name": null, "roles": ["administrator"], "password": "$2b$12$bsStG6DQdqulVM0TspdsZehDCH1lOFRM8m20jkP1wCP5W96YPka5y", "email": null}
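
The command returns the new user as a JSON blob; note that the password is stored only as a bcrypt hash, never in plain text. A small Python sketch, parsing the exact output shown above, makes this easy to verify:

```python
import json

# Output of `ceph dashboard ac-user-create`, copied from the run above.
raw = ('{"username": "cephadmin", "lastUpdate": 1667901632, "name": null, '
       '"roles": ["administrator"], "password": '
       '"$2b$12$bsStG6DQdqulVM0TspdsZehDCH1lOFRM8m20jkP1wCP5W96YPka5y", '
       '"email": null}')

user = json.loads(raw)

# The password field is a bcrypt hash (the "$2b$" prefix), not the
# plain-text password from password.txt.
assert user["password"].startswith("$2b$")
print(user["username"], user["roles"])
```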

Access the Dashboard

Visit https://192.168.100.130:8443/

  • View the site certificate

img

  • Log in to the Dashboard

img

img

Using the Dashboard

Log in to the Dashboard and browse through each page and component.

Using the Manager Modules

Manager Modules

Alerts module

The alerts module can send simple alert messages about cluster health
via e-mail. In the future, it will support other notification methods
as well.

Enabling

The alerts module is enabled with:

ceph mgr module enable alerts

Configuration

To configure SMTP, all of the following config options must be set:

ceph config set mgr mgr/alerts/smtp_host <smtp-server>
ceph config set mgr mgr/alerts/smtp_destination <email-address-to-send-to>
ceph config set mgr mgr/alerts/smtp_sender <from-email-address>

By default, the module will use SSL and port 465. To change that:

ceph config set mgr mgr/alerts/smtp_ssl false   # if not SSL
ceph config set mgr mgr/alerts/smtp_port <port-number>  # if not 465

To authenticate to the SMTP server, you must set the user and password:

ceph config set mgr mgr/alerts/smtp_user <username>
ceph config set mgr mgr/alerts/smtp_password <password>

By default, the name in the From: line is simply Ceph. To
change that (e.g., to identify which cluster this is):

ceph config set mgr mgr/alerts/smtp_from_name 'Ceph Cluster Foo'

By default, the module will check the cluster health once per minute
and, if there is a change, send a message. To change that frequency:

ceph config set mgr mgr/alerts/interval <interval>   # e.g., "5m" for 5 minutes
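
The interval accepts shorthand such as "5m". As an illustrative sketch of that conversion (the unit table is an assumption for demonstration, not the module's actual parser), turning such a string into seconds looks like:

```python
# Hypothetical helper: convert an interval string such as "5m" or "1h"
# into seconds. A bare number is treated as seconds.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def interval_seconds(spec: str) -> int:
    spec = spec.strip()
    if spec[-1].isdigit():          # bare number: already in seconds
        return int(spec)
    return int(spec[:-1]) * UNITS[spec[-1]]

print(interval_seconds("5m"))   # 300
```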

Commands

To force an alert to be sent immediately:

ceph alerts send

Zabbix Module

The Zabbix module actively sends information to a Zabbix server like:

  • Ceph status
  • I/O operations
  • I/O bandwidth
  • OSD status
  • Storage utilization

Requirements

The module requires that the zabbix_sender executable is present on all machines running ceph-mgr. It can be installed on most distributions using the package manager.

Dependencies

Installing zabbix_sender can be done under Ubuntu or CentOS using either apt or dnf.

On Ubuntu Xenial:

apt install zabbix-agent

On Fedora:

dnf install zabbix-sender

Enabling

You can enable the zabbix module with:

ceph mgr module enable zabbix

Configuration

Two configuration keys are vital for the module to work:

  • zabbix_host
  • identifier (optional)

The parameter zabbix_host controls the hostname of the Zabbix server to which zabbix_sender will send the items. This can be an IP address if required by your installation.

The identifier parameter controls the identifier/hostname to use as source when sending items to Zabbix. This should match the name of the Host in your Zabbix server.

When the identifier parameter is not configured, the ceph-<fsid> of the cluster
will be used when sending data to Zabbix.

This would, for example, be ceph-c4d32a99-9e80-490f-bd3a-1d22d8a7d354

Additional configuration keys which can be configured and their default values:

  • zabbix_port: 10051
  • zabbix_sender: /usr/bin/zabbix_sender
  • interval: 60

Configuration keys

Configuration keys can be set on any machine with the proper cephx credentials,
these are usually Monitors where the client.admin key is present.

ceph zabbix config-set <key> <value>

For example:

ceph zabbix config-set zabbix_host zabbix.localdomain
ceph zabbix config-set identifier ceph.eu-ams02.local

The current configuration of the module can also be shown:

ceph zabbix config-show

[root@node0 ceph-deploy]# ceph zabbix config-show
{"zabbix_port": 10051, "zabbix_host": "None", "identifier": "", "zabbix_sender": "/usr/bin/zabbix_sender", "interval": 60}
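
The config-show output above is plain JSON, so it can be inspected programmatically. A small sketch, using that exact output, checks whether zabbix_host has actually been configured (at this point it is still the literal string "None"):

```python
import json

# Output of `ceph zabbix config-show`, copied from the run above.
raw = ('{"zabbix_port": 10051, "zabbix_host": "None", "identifier": "", '
       '"zabbix_sender": "/usr/bin/zabbix_sender", "interval": 60}')

cfg = json.loads(raw)

# "None" (a string) or "" means no Zabbix server has been configured yet,
# which matches the "No Zabbix server configured" health warning.
configured = cfg["zabbix_host"] not in ("None", "")
print("zabbix_host configured:", configured)
```
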
Template

A template (in XML format) to be used on the Zabbix server can be found in the source directory of the module.

This template contains all items and a few triggers. You can customize the triggers afterwards to fit your needs.

Multiple Zabbix servers

It is possible to instruct the zabbix module to send data to multiple Zabbix servers.

The zabbix_host parameter can be set to multiple hostnames separated by commas.
Hostnames (or IP addresses) can be followed by a colon and a port number. If a port
number is not present, the module will use the port number defined in zabbix_port.

For example:

ceph zabbix config-set zabbix_host "zabbix1,zabbix2:2222,zabbix3:3333"
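
The way such a value is split into (host, port) pairs can be sketched as follows. This is an illustrative re-implementation of the rule described above, not the module's actual code:

```python
# Each comma-separated entry may carry an optional ":port"; entries
# without one fall back to the configured zabbix_port (default 10051).
DEFAULT_PORT = 10051

def parse_zabbix_hosts(value: str, default_port: int = DEFAULT_PORT):
    targets = []
    for entry in value.split(","):
        host, sep, port = entry.strip().partition(":")
        targets.append((host, int(port) if sep else default_port))
    return targets

print(parse_zabbix_hosts("zabbix1,zabbix2:2222,zabbix3:3333"))
```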

Manually sending data

If needed, the module can be asked to send data immediately instead of waiting for
the interval.

This can be done with this command:

ceph zabbix send

The module will now send its latest data to the Zabbix server.

Influx Module

The influx module continuously collects and sends time series data to an
influxdb database.

The influx module was introduced in the 13.x Mimic release.

Enabling

To enable the module, use the following command:

ceph mgr module enable influx

If you wish to subsequently disable the module, you can use the equivalent
disable command:

ceph mgr module disable influx

Configuration

ceph config set mgr mgr/influx/<key> <value>

The most important settings are hostname, username and password. For example, a typical configuration might look like this:

ceph config set mgr mgr/influx/hostname influx.mydomain.com
ceph config set mgr mgr/influx/username admin123
ceph config set mgr mgr/influx/password p4ssw0rd

iostat

This module shows the current throughput and IOPS done on the Ceph cluster.

Enabling

To check if the iostat module is enabled, run:

ceph mgr module ls

The module can be enabled with:

ceph mgr module enable iostat

To execute the module, run:

ceph iostat
[root@node0 ceph-deploy]# ceph iostat
+------------+------------+------------+------------+------------+------------+
|       Read |      Write |      Total |  Read IOPS | Write IOPS | Total IOPS |
+------------+------------+------------+------------+------------+------------+
|      0 B/s |      0 B/s |      0 B/s |          0 |          0 |          0 |
|      0 B/s |      0 B/s |      0 B/s |          0 |          0 |          0 |
|      0 B/s |      0 B/s |      0 B/s |          0 |          0 |          0 |
|      0 B/s |      0 B/s |      0 B/s |          0 |          0 |          0 |

To change the frequency at which the statistics are printed, use the -p
option:

ceph iostat -p <period in seconds>

For example, use the following command to print the statistics every 5 seconds:

ceph iostat -p 5
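
The rate columns in the table above ("0 B/s" and so on) use binary units. The formatter below reproduces that style as an approximation; the exact rounding used by ceph iostat may differ:

```python
# Hypothetical formatter mimicking the rate columns printed by
# `ceph iostat`: bytes stay integral, larger units get one decimal place.
def fmt_rate(bytes_per_sec: float) -> str:
    units = ["B/s", "KiB/s", "MiB/s", "GiB/s", "TiB/s"]
    value = float(bytes_per_sec)
    for unit in units:
        if value < 1024 or unit == units[-1]:
            if unit == "B/s":
                return f"{int(value)} {unit}"
            return f"{value:.1f} {unit}"
        value /= 1024

print(fmt_rate(0))                  # 0 B/s
print(fmt_rate(5 * 1024 * 1024))    # 5.0 MiB/s
```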

Crash Module

The crash module collects information about daemon crashdumps and stores
it in the Ceph cluster for later analysis.

Daemon crashdumps are dumped in /var/lib/ceph/crash by default; this can
be configured with the option ‘crash dir’. Crash directories are named by
time and date and a randomly-generated UUID, and contain a metadata file
‘meta’ and a recent log file, with a “crash_id” that is the same.
This module allows the metadata about those dumps to be persisted in
the monitors’ storage.

Enabling

The crash module is enabled with:

ceph mgr module enable crash

Commands

ceph crash post -i <metafile>

Save a crash dump. The metadata file is a JSON blob stored in the crash
dir as meta. As usual, the ceph command can be invoked with -i -,
and will read from stdin.

ceph crash rm <crashid>

Remove a specific crash dump.

ceph crash ls

List the timestamp/uuid crashids for all new and archived crash info.

ceph crash ls-new

List the timestamp/uuid crashids for all new crash info.

ceph crash stat

Show a summary of saved crash info grouped by age.

ceph crash info <crashid>

Show all details of a saved crash.

ceph crash prune <keep>

Remove saved crashes older than ‘keep’ days. <keep> must be an integer.

ceph crash archive <crashid>

Archive a crash report so that it is no longer considered for the RECENT_CRASH health check and does not appear in the crash ls-new output (it will still appear in the crash ls output).

ceph crash archive-all

Archive all new crash reports.

Options

  • mgr/crash/warn_recent_interval [default: 2 weeks] controls what constitutes “recent” for the purposes of raising the RECENT_CRASH health warning.
  • mgr/crash/retain_interval [default: 1 year] controls how long crash reports are retained by the cluster before they are automatically purged.
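
The "recent" test behind the RECENT_CRASH warning can be sketched as follows. The timestamps are illustrative, and the real module also honors archiving (crash archive) when deciding what to warn about:

```python
from datetime import datetime, timedelta

# A crash counts as "recent" if it is newer than warn_recent_interval
# (default two weeks) and has not been archived.
WARN_RECENT_INTERVAL = timedelta(weeks=2)

def recent_crashes(crashes, now):
    return [c for c in crashes
            if not c.get("archived")
            and now - c["timestamp"] <= WARN_RECENT_INTERVAL]

now = datetime(2022, 11, 19)
crashes = [
    {"crash_id": "a", "timestamp": datetime(2022, 11, 18)},
    {"crash_id": "b", "timestamp": datetime(2022, 10, 1)},            # too old
    {"crash_id": "c", "timestamp": datetime(2022, 11, 17), "archived": True},
]
print([c["crash_id"] for c in recent_crashes(crashes, now)])
```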

Insights Module

The insights module collects and exposes system information to the Insights Core
data analysis framework. It is intended to replace explicit interrogation of
Ceph CLIs and daemon admin sockets, reducing the API surface that Insights
depends on. The insights report contains the following:

  • Health reports. In addition to reporting the current health of the
    cluster, the insights module reports a summary of the last 24 hours of health
    checks. This feature is important for catching cluster health issues that are
    transient and may not be present at the moment the report is generated. Health
    checks are deduplicated to avoid unbounded data growth.
  • Crash reports. A summary of any daemon crashes in the past 24 hours is
    included in the insights report. Crashes are reported as the number of crashes
    per daemon type (e.g. ceph-osd) within the time window. Full details of a
    crash may be obtained using the crash module.
  • Software version, storage utilization, cluster maps, placement group summary,
    monitor status, cluster configuration, and OSD metadata.

Enabling

The insights module is enabled with:

ceph mgr module enable insights

Commands

ceph insights

Generate the full report.

ceph insights prune-health <hours>

Remove historical health data older than <hours>. Passing 0 for <hours> will
clear all health data.

This command is useful for cleaning the health history before automated nightly
reports are generated, which may contain spurious health checks accumulated
while performing system maintenance, or other health checks that have been
resolved. There is no need to prune health data to reclaim storage space;
garbage collection is performed regularly to remove old health data from
persistent storage.

Enable the Prometheus Module

Prometheus Module

Provides a Prometheus exporter to pass on Ceph performance counters
from the collection point in ceph-mgr. Ceph-mgr receives MMgrReport
messages from all MgrClient processes (mons and OSDs, for instance)
with performance counter schema data and actual counter data, and keeps
a circular buffer of the last N samples. This module creates an HTTP
endpoint (like all Prometheus exporters) and retrieves the latest sample
of every counter when polled (or “scraped” in Prometheus terminology).
The HTTP path and query parameters are ignored; all extant counters
for all reporting entities are returned in text exposition format.
(See the Prometheus documentation.)

Enabling prometheus output

The prometheus module is enabled with:

ceph mgr module enable prometheus

Configuration

Note

The Prometheus manager module needs to be restarted for configuration changes to be applied.

By default the module will accept HTTP requests on port 9283 on all IPv4
and IPv6 addresses on the host. The port and listen address are both
configurable with ceph config-key set, with keys
mgr/prometheus/server_addr and mgr/prometheus/server_port. This port
is registered with Prometheus’s registry.

ceph config set mgr mgr/prometheus/server_addr 0.0.0.0
ceph config set mgr mgr/prometheus/server_port 9283
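
The endpoint serves plain text in the Prometheus exposition format described above. A minimal parser for that format looks like this; the sample lines are illustrative, not real module output:

```python
# Parse Prometheus text exposition format into {metric-with-labels: value}.
def parse_metrics(text: str):
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip HELP/TYPE comment lines
            continue
        name_and_labels, _, value = line.rpartition(" ")
        samples[name_and_labels] = float(value)
    return samples

sample = """\
# HELP ceph_health_status Cluster health status
# TYPE ceph_health_status untyped
ceph_health_status 0.0
ceph_osd_up{ceph_daemon="osd.0"} 1.0
"""
print(parse_metrics(sample))
```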

Enable the Prometheus Module on the Ceph Cluster

Once the module is enabled, it listens on port 9283 on the node running the active mgr service.

[root@node0 ceph-deploy]# ceph mgr module enable prometheus
[root@node0 ceph-deploy]# ceph mgr module ls | less
{
    "always_on_modules": [
        "balancer",
        "crash",
        "devicehealth",
        "orchestrator_cli",
        "progress",
        "rbd_support",
        "status",
        "volumes"
    ],
    "enabled_modules": [
        "alerts",
        "dashboard",
        "iostat",
        "prometheus",
        "restful",
        "zabbix"
    ],

View the Monitoring Data

Visit http://<server-IP>:9283/

img

img

Install the Prometheus Server

Configure the Prometheus yum Repository

[root@node0 ceph-deploy]# cd /etc/yum.repos.d/

[root@node0 yum.repos.d]# cat prometheus.repo 
[prometheus]
name=prometheus
baseurl=https://packagecloud.io/prometheus-rpm/release/el/$releasever/$basearch
repo_gpgcheck=1
enabled=1
gpgkey=https://packagecloud.io/prometheus-rpm/release/gpgkey
       https://raw.githubusercontent.com/lest/prometheus-rpm/master/RPM-GPG-KEY-prometheus-rpm
gpgcheck=1
metadata_expire=300

Install the Package

[root@node0 yum.repos.d]# yum install -y prometheus
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
prometheus/7/x86_64/signature                                                                                                                                       |  833 B  00:00:00     
prometheus/7/x86_64/signature                                                                                                                                       | 1.8 kB  00:00:00 !!! 
Resolving Dependencies
--> Running transaction check
---> Package prometheus.x86_64 0:1.8.2-1.el7.centos will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================================================================================================================
 Package                                     Arch                                    Version                                             Repository                                   Size
===========================================================================================================================================================================================
Installing:
 prometheus                                  x86_64                                  1.8.2-1.el7.centos                                  prometheus                                   11 M

Transaction Summary
===========================================================================================================================================================================================
Install  1 Package

Total size: 11 M
Installed size: 50 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : prometheus-1.8.2-1.el7.centos.x86_64                                                                                                                                    1/1 
  Verifying  : prometheus-1.8.2-1.el7.centos.x86_64                                                                                                                                    1/1 

Installed:
  prometheus.x86_64 0:1.8.2-1.el7.centos                                                                                                                                                   

Complete!

Service Configuration

prometheus.yml

global:
  scrape_interval:     15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'node'
    file_sd_configs:
      - files:
        - node_targets.yml
  - job_name: 'ceph'
    honor_labels: true
    file_sd_configs:
      - files:
        - ceph_targets.yml

ceph_targets.yml

[
    {
        "targets": [ "senta04.mydomain.com:9283" ],
        "labels": {}
    }
]

node_targets.yml

[
    {
        "targets": [ "senta04.mydomain.com:9100" ],
        "labels": {
            "instance": "senta04"
        }
    }
]
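
Since file_sd target files are plain JSON, they can also be generated from inventory data instead of written by hand. A minimal sketch, using the example host from above:

```python
import json

# Build a Prometheus file_sd target list like ceph_targets.yml /
# node_targets.yml above.
def file_sd_entry(targets, labels=None):
    return [{"targets": targets, "labels": labels or {}}]

doc = file_sd_entry(["senta04.mydomain.com:9283"])
print(json.dumps(doc, indent=4))
```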

Prometheus Configuration

Edit the Prometheus Configuration File

[root@node0 yum.repos.d]# cd /etc/prometheus/

[root@node0 prometheus]# ls
prometheus.yml
[root@node0 prometheus]# cp prometheus.yml prometheus.yml.bak

[root@node0 prometheus]# vim prometheus.yml
# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  #external_labels:
  #    monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
#rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'ceph-exporter'
    honor_labels: true
    static_configs:
    # the active mgr is currently on node0, so use its address here
    - targets: ['node0:9283']
      labels:
        instance: ceph-exporter 

  • Check the Ceph cluster status

[root@node0 prometheus]# ceph -s
  cluster:
    id:     97702c43-6cc2-4ef8-bdb5-855cfa90a260
    health: HEALTH_ERR
            Module 'dashboard' has failed: error('No socket could be created',)
            No Zabbix server configured
            application not enabled on 1 pool(s)
 
  services:
    mon: 3 daemons, quorum node0,node1,node2 (age 9d)
    mgr: node0(active, since 18m), standbys: node2, node1   # the active mgr is on node0
    mds: cephfs-demo:1 {0=node1=up:active} 2 up:standby
    osd: 6 osds: 6 up (since 9d), 6 in (since 3w)
    rgw: 2 daemons active (node0, node1)
 
  task status:
 
  data:
    pools:   10 pools, 360 pgs
    objects: 1.14k objects, 2.7 GiB
    usage:   14 GiB used, 285 GiB / 300 GiB avail
    pgs:     360 active+clean

Start the Service

[root@node0 prometheus]# systemctl start prometheus
[root@node0 prometheus]# ss -tnlp | grep prome
LISTEN     0      128       [::]:9090                  [::]:*                   users:(("prometheus",pid=1726607,fd=3))

View Prometheus

Visit http://node0:9090/

img

img

Configure Grafana Dashboards

Install Grafana

[root@node0 prometheus]# yum install https://mirrors.cloud.tencent.com/grafana/yum/rpm/grafana-9.2.3-1.x86_64.rpm

Start the Service

[root@node0 ~]# systemctl enable grafana-server --now
Created symlink from /etc/systemd/system/multi-user.target.wants/grafana-server.service to /usr/lib/systemd/system/grafana-server.service.

[root@node0 ~]# ss -tnlp | grep grafana
LISTEN     0      128       [::]:3000                  [::]:*                   users:(("grafana-server",pid=1906441,fd=14))

[root@node0 ~]# systemctl status grafana-server
● grafana-server.service - Grafana instance
   Loaded: loaded (/usr/lib/systemd/system/grafana-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2022-11-19 18:48:42 CST; 17s ago
     Docs: http://docs.grafana.org
 Main PID: 1906441 (grafana-server)
   CGroup: /system.slice/grafana-server.service
           └─1906441 /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini --pidfile=/...

Nov 19 18:48:42 node0 grafana-server[1906441]: logger=infra.usagestats.collector t=2022-1...=2
Nov 19 18:48:42 node0 grafana-server[1906441]: logger=server t=2022-11-19T18:48:42.222277...41
Nov 19 18:48:42 node0 grafana-server[1906441]: logger=provisioning.alerting t=2022-11-19T...g"
Nov 19 18:48:42 node0 grafana-server[1906441]: logger=provisioning.alerting t=2022-11-19T...g"
Nov 19 18:48:42 node0 systemd[1]: Started Grafana instance.
Nov 19 18:48:42 node0 grafana-server[1906441]: logger=grafanaStorageLogger t=2022-11-19T1...g"
Nov 19 18:48:42 node0 grafana-server[1906441]: logger=http.server t=2022-11-19T18:48:42.2...t=
Nov 19 18:48:42 node0 grafana-server[1906441]: logger=ngalert t=2022-11-19T18:48:42.23439...p"
Nov 19 18:48:42 node0 grafana-server[1906441]: logger=ticker t=2022-11-19T18:48:42.234578...00
Nov 19 18:48:42 node0 grafana-server[1906441]: logger=ngalert.multiorg.alertmanager t=202...r"
Hint: Some lines were ellipsized, use -l to show in full.

Access Grafana

http://node0:3000/

img

Log in with admin/admin and add a data source

img

Configure the Prometheus Data Source

img

img

Grafana Walkthrough

Import a Dashboard Template

img

img

img

View the Data

img

posted @ evescn