10 RGW High-Availability Cluster

Scaling Out the RGW Cluster


Both node0 and node1 need to run an RGW daemon.

The cluster currently has only one RGW daemon, deployed on node0:

[root@node0 ceph-deploy]# ceph -s
  cluster:
    id:     97702c43-6cc2-4ef8-bdb5-855cfa90a260
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node0,node1,node2 (age 9d)
    mgr: node1(active, since 12d), standbys: node2, node0
    mds: cephfs-demo:1 {0=node1=up:active} 2 up:standby
    osd: 6 osds: 6 up (since 5d), 6 in (since 12d)
    rgw: 1 daemon active (node0)    # only one RGW node
 
  task status:
 
  data:
    pools:   9 pools, 352 pgs
    objects: 534 objects, 655 MiB
    usage:   8.4 GiB used, 292 GiB / 300 GiB avail
    pgs:     352 active+clean

Add node1 as an RGW node to the Ceph cluster

A newly added RGW node listens on the default port, 7480.

[root@node0 ceph-deploy]# ceph-deploy rgw create node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy rgw create node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('node1', 'rgw.node1')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5bf4168ea8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x7f5bf49bb0c8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts node1:rgw.node1
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][WARNIN] rgw keyring does not exist yet, creating one
[node1][DEBUG ] create a keyring file
[node1][DEBUG ] create path recursively if it doesn't exist
[node1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.node1 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.node1/keyring
[node1][INFO  ] Running command: systemctl enable ceph-radosgw@rgw.node1
[node1][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.node1.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[node1][INFO  ] Running command: systemctl start ceph-radosgw@rgw.node1
[node1][INFO  ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host node1 and default port 7480
  • Test the RGW service on node1
[root@node0 ceph-deploy]# curl node1:7480
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Owner>
        <ID>anonymous</ID>
        <DisplayName></DisplayName>
    </Owner>
    <Buckets></Buckets>
</ListAllMyBucketsResult>
  • Check the cluster status
[root@node0 ceph-deploy]# ceph -s
  cluster:
    id:     97702c43-6cc2-4ef8-bdb5-855cfa90a260
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node0,node1,node2 (age 9d)
    mgr: node1(active, since 12d), standbys: node2, node0
    mds: cephfs-demo:1 {0=node1=up:active} 2 up:standby
    osd: 6 osds: 6 up (since 5d), 6 in (since 12d)
    rgw: 2 daemons active (node0, node1)    # node1 has joined the RGW service
 
  task status:
 
  data:
    pools:   9 pools, 352 pgs
    objects: 534 objects, 655 MiB
    usage:   8.4 GiB used, 292 GiB / 300 GiB avail
    pgs:     352 active+clean

Switch the node1 RGW service to port 80

  • Edit the configuration
[root@node0 ceph-deploy]# cat ceph.conf
[global]
fsid = 97702c43-6cc2-4ef8-bdb5-855cfa90a260
public_network = 192.168.100.0/24
cluster_network = 192.168.100.0/24
mon_initial_members = node0
mon_host = 192.168.100.130
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon_max_pg_per_osd=1000
mon_allow_pool_delete = true

[client.rgw.node0]
rgw_frontends = "civetweb port=80"

# new section for node1
[client.rgw.node1]
rgw_frontends = "civetweb port=80"

[osd]
osd crush update on start = false
  • Push the configuration file to the Ceph cluster
[root@node0 ceph-deploy]# ceph-deploy --overwrite-conf config push node0 node1 node2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf config push node0 node1 node2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : push
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fac506283b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['node0', 'node1', 'node2']
[ceph_deploy.cli][INFO  ]  func                          : <function config at 0x7fac50643c80>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.config][DEBUG ] Pushing config to node0
[node0][DEBUG ] connected to host: node0 
[node0][DEBUG ] detect platform information from remote host
[node0][DEBUG ] detect machine type
[node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to node1
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to node2
[node2][DEBUG ] connected to host: node2 
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  • Restart the radosgw service across the cluster
[root@node0 ceph-deploy]# ansible all -m shell -a "systemctl restart ceph-radosgw.target"
node2 | CHANGED | rc=0 >>

node1 | CHANGED | rc=0 >>

node0 | CHANGED | rc=0 >>
  • Verify that the service port changed
[root@node0 ceph-deploy]# curl node1:7480
curl: (7) Failed connect to node1:7480; Connection refused

[root@node0 ceph-deploy]# curl node1:80
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Owner>
        <ID>anonymous</ID>
        <DisplayName></DisplayName>
    </Owner>
    <Buckets></Buckets>
</ListAllMyBucketsResult>
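With more nodes to verify, the per-port curl checks above can be scripted. The sketch below is a hypothetical helper (not part of ceph-deploy or Ceph); it assumes bash's `/dev/tcp` redirection and the coreutils `timeout` command.

```shell
# check_endpoints: report whether each host:port accepts TCP connections.
# Hypothetical helper; relies on bash's /dev/tcp and coreutils `timeout`.
check_endpoints() {
  for ep in "$@"; do
    host=${ep%:*}
    port=${ep##*:}
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
      echo "$ep open"
    else
      echo "$ep closed"
    fi
  done
}

# nothing listens on localhost port 1, so this reports "closed"
check_endpoints 127.0.0.1:1
```

Running it as `check_endpoints node0:80 node1:80 node0:7480 node1:7480` would confirm the port switch on both nodes in one pass.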

High-Availability Overview and Preparation

  • The cluster now runs radosgw on two nodes, node0 and node1.
  • How do we load-balance client access to radosgw across them?
  • We will configure haproxy + keepalived to provide load balancing and failover.

Environment

haproxy + keepalived will provide the RGW high-availability layer.

Hostname  IP address       Port  Software                VIP + port
node0     192.168.100.130  81    rgw+haproxy+keepalived  192.168.100.100:80 (floating virtual IP)
node1     192.168.100.131  81    rgw+haproxy+keepalived

Change the radosgw port to 81 so that haproxy can take port 80

# edit the configuration file
[root@node0 ceph-deploy]# cat ceph.conf
[global]
fsid = 97702c43-6cc2-4ef8-bdb5-855cfa90a260
public_network = 192.168.100.0/24
cluster_network = 192.168.100.0/24
mon_initial_members = node0
mon_host = 192.168.100.130
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon_max_pg_per_osd=1000
mon_allow_pool_delete = true

[client.rgw.node0]
rgw_frontends = "civetweb port=81"  # change the service port

[client.rgw.node1]
rgw_frontends = "civetweb port=81"  # change the service port

[osd]
osd crush update on start = false

# push the configuration file to the Ceph cluster
[root@node0 ceph-deploy]# ceph-deploy --overwrite-conf config push node0 node1 node2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf config push node0 node1 node2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : push
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ff26c86e3b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['node0', 'node1', 'node2']
[ceph_deploy.cli][INFO  ]  func                          : <function config at 0x7ff26c889c80>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.config][DEBUG ] Pushing config to node0
[node0][DEBUG ] connected to host: node0 
[node0][DEBUG ] detect platform information from remote host
[node0][DEBUG ] detect machine type
[node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to node1
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to node2
[node2][DEBUG ] connected to host: node2 
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
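The port edit above is a simple substitution, so it can be scripted. A hedged sketch, run here against a scratch copy rather than the live file; it assumes the exact quoted form `rgw_frontends = "civetweb port=80"` used in this ceph.conf:

```shell
# Sketch only: switch every civetweb frontend from port 80 to 81 in a scratch
# copy of ceph.conf; assumes the exact quoted rgw_frontends form shown above.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[client.rgw.node0]
rgw_frontends = "civetweb port=80"

[client.rgw.node1]
rgw_frontends = "civetweb port=80"
EOF

sed -i 's/civetweb port=80/civetweb port=81/g' "$conf"
grep rgw_frontends "$conf"
```

After editing the real file the same config push and radosgw restart still apply.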

Restart the radosgw service across the cluster

[root@node0 ceph-deploy]# ansible all -m shell -a "systemctl restart ceph-radosgw.target"
node2 | CHANGED | rc=0 >>

node1 | CHANGED | rc=0 >>

node0 | CHANGED | rc=0 >>

Verify that the service port changed

# check ports 80 and 81
[root@node0 ceph-deploy]# ss -tnlp | grep 80

[root@node0 ceph-deploy]# ss -tnlp | grep 81
LISTEN     0      128          *:81                       *:*                   users:(("radosgw",pid=60280,fd=44))

# node0
[root@node0 ceph-deploy]# curl node0:80
curl: (7) Failed connect to node0:80; Connection refused

[root@node0 ceph-deploy]# curl node0:81
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Owner>
        <ID>anonymous</ID>
        <DisplayName></DisplayName>
    </Owner>
    <Buckets></Buckets>
</ListAllMyBucketsResult>


# node1
[root@node0 ceph-deploy]# curl node1:80
curl: (7) Failed connect to node1:80; Connection refused

[root@node0 ceph-deploy]# curl node1:81
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Owner>
        <ID>anonymous</ID>
        <DisplayName></DisplayName>
    </Owner>
    <Buckets></Buckets>
</ListAllMyBucketsResult>

Configure keepalived for High Availability

Install keepalived

  • Ansible inventory configuration
[root@node0 ceph-deploy]# cat /etc/ansible/hosts
......
[ceph]
node1
node2

[all]
node0
node1
node2

[rgw]
node0
node1
  • Install the keepalived package
[root@node0 ceph-deploy]# ansible rgw -m shell -a "yum install keepalived -y"

Edit the Configuration Files

  • Edit the keepalived configuration on node0
[root@node0 ceph-deploy]# cd /etc/keepalived/
[root@node0 keepalived]# ls -lh
total 4.0K
-rw-r--r-- 1 root root 3.6K Oct  1  2020 keepalived.conf

# back up the configuration file
[root@node0 keepalived]# cp keepalived.conf{,.bak}

# edit the configuration file
[root@node0 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 1
    weight -20
}

vrrp_instance RGW {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.100/24
    }
    track_script {
        chk_haproxy
    }
}
  • Copy the configuration to node1
[root@node0 keepalived]# scp ./keepalived.conf node1:/etc/keepalived/
keepalived.conf 
  • Edit the configuration on node1
# connect to node1
[root@node0 keepalived]# ssh node1
Last login: Thu Nov  3 14:40:47 2022 from node0
[root@node1 ~]# cd /etc/keepalived/

# edit the configuration file
[root@node1 keepalived]# cat keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 1
    weight -20
}

vrrp_instance RGW {
    state BACKUP            # role changed
    interface ens33
    virtual_router_id 51
    priority 90             # priority lowered
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.100/24
    }
    track_script {
        chk_haproxy
    }
}
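The priorities and the script weight interact like this: while `killall -0 haproxy` succeeds, each node keeps its base priority; when the check fails on a node, keepalived adds the (negative) weight to it. A small sketch of that arithmetic in plain shell (this is the VRRP rule, not keepalived code):

```shell
# effective_priority BASE WEIGHT CHECK_OK: a failing track_script applies its
# weight (-20 here) to the node's base VRRP priority.
effective_priority() {
  base=$1; weight=$2; check_ok=$3
  if [ "$check_ok" -eq 1 ]; then
    echo "$base"
  else
    echo $((base + weight))
  fi
}

master=$(effective_priority 100 -20 0)   # haproxy dead on the MASTER
backup=$(effective_priority 90  -20 1)   # haproxy alive on the BACKUP
echo "master=$master backup=$backup"     # master=80 backup=90
[ "$backup" -gt "$master" ] && echo "VIP fails over to the BACKUP"
```

This is also why the service logs below show priorities dropping from 100 to 80 and 90 to 70: haproxy is not installed yet, so the check fails on both nodes.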

Start the Service

# leave node1
[root@node1 keepalived]# exit

# start and enable the keepalived service
[root@node0 keepalived]# ansible rgw -m shell -a "systemctl enable keepalived --now"
node1 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
node0 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

# check the keepalived service status
[root@node0 keepalived]# ansible rgw -m shell -a "systemctl status keepalived"
node1 | CHANGED | rc=0 >>
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-11-03 14:53:03 CST; 2s ago
  Process: 43266 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 43267 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─43267 /usr/sbin/keepalived -D
           ├─43268 /usr/sbin/keepalived -D
           └─43269 /usr/sbin/keepalived -D

Nov 03 14:53:03 node1 Keepalived_vrrp[43269]: SECURITY VIOLATION - scripts are being executed but script_security not enabled.
Nov 03 14:53:03 node1 Keepalived_vrrp[43269]: VRRP_Instance(RGW) removing protocol VIPs.
Nov 03 14:53:03 node1 Keepalived_vrrp[43269]: VRRP_Instance(RGW) removing protocol iptable drop rule
Nov 03 14:53:03 node1 Keepalived_vrrp[43269]: Using LinkWatch kernel netlink reflector...
Nov 03 14:53:03 node1 Keepalived_vrrp[43269]: VRRP_Instance(RGW) Entering BACKUP STATE
Nov 03 14:53:03 node1 Keepalived_vrrp[43269]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Nov 03 14:53:03 node1 Keepalived_vrrp[43269]: /usr/bin/killall -0 haproxy exited with status 1
Nov 03 14:53:04 node1 Keepalived_vrrp[43269]: VRRP_Instance(RGW) Changing effective priority from 90 to 70
Nov 03 14:53:04 node1 Keepalived_vrrp[43269]: /usr/bin/killall -0 haproxy exited with status 1
Nov 03 14:53:05 node1 Keepalived_vrrp[43269]: /usr/bin/killall -0 haproxy exited with status 1
node0 | CHANGED | rc=0 >>
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-11-03 14:53:03 CST; 2s ago
  Process: 52430 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 52431 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─52431 /usr/sbin/keepalived -D
           ├─52432 /usr/sbin/keepalived -D
           └─52433 /usr/sbin/keepalived -D

Nov 03 14:53:03 node0 Keepalived_vrrp[52433]: SECURITY VIOLATION - scripts are being executed but script_security not enabled.
Nov 03 14:53:03 node0 Keepalived_vrrp[52433]: VRRP_Instance(RGW) removing protocol VIPs.
Nov 03 14:53:03 node0 Keepalived_vrrp[52433]: VRRP_Instance(RGW) removing protocol iptable drop rule
Nov 03 14:53:03 node0 Keepalived_vrrp[52433]: Using LinkWatch kernel netlink reflector...
Nov 03 14:53:03 node0 Keepalived_vrrp[52433]: VRRP_Instance(RGW) Entering BACKUP STATE
Nov 03 14:53:03 node0 Keepalived_vrrp[52433]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Nov 03 14:53:03 node0 Keepalived_vrrp[52433]: /usr/bin/killall -0 haproxy exited with status 1
Nov 03 14:53:04 node0 Keepalived_vrrp[52433]: VRRP_Instance(RGW) Changing effective priority from 100 to 80
Nov 03 14:53:04 node0 Keepalived_vrrp[52433]: /usr/bin/killall -0 haproxy exited with status 1
Nov 03 14:53:05 node0 Keepalived_vrrp[52433]: /usr/bin/killall -0 haproxy exited with status 1

Check IP Addresses

[root@node0 keepalived]# ip a l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:81:75:65 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.130/24 brd 192.168.100.255 scope global noprefixroute dynamic ens33
       valid_lft 1147sec preferred_lft 1147sec
    inet 192.168.100.100/24 scope global secondary ens33    # a new IP address has been bound
       valid_lft forever preferred_lft forever
    inet6 fe80::ea04:47f0:b11e:9e2/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::cad3:6b55:3459:c179/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Configure haproxy Load Balancing

Install haproxy

[root@node0 ~]# ansible rgw -m shell -a "yum install -y haproxy"

Edit the Configuration File

[root@node0 ceph-deploy]# cat /etc/haproxy/haproxy.cfg 
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend http_web *:80
    mode                        http
    default_backend             rgw

backend rgw
    balance     roundrobin
    mode        http
    server  node0 192.168.100.130:81 check
    server  node1 192.168.100.131:81 check
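With more RGW nodes, the server lines in the backend can be generated instead of typed. `gen_backend` below is a hypothetical sketch, not a haproxy tool, and it assumes every RGW instance listens on port 81:

```shell
# gen_backend: emit a haproxy backend section from "name ip" pairs on stdin.
# Hypothetical sketch; assumes all RGW instances listen on port 81.
gen_backend() {
  printf 'backend rgw\n'
  printf '    balance     roundrobin\n'
  printf '    mode        http\n'
  while read -r name ip; do
    printf '    server  %s %s:81 check\n' "$name" "$ip"
  done
}

printf 'node0 192.168.100.130\nnode1 192.168.100.131\n' | gen_backend
```

`haproxy -c -f /etc/haproxy/haproxy.cfg` validates the assembled file before a restart.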

Copy the Configuration File to node1

[root@node0 ceph-deploy]# scp /etc/haproxy/haproxy.cfg node1:/etc/haproxy/
haproxy.cfg 

Start the haproxy Service

# start and enable the service
[root@node0 ceph-deploy]# ansible rgw -m shell -a "systemctl enable haproxy --now"
node1 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
node0 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.


# check the service status
[root@node0 ceph-deploy]# ansible rgw -m shell -a "systemctl status haproxy"
node1 | CHANGED | rc=0 >>
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-11-03 15:58:25 CST; 6s ago
 Main PID: 52429 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─52429 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─52430 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─52431 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Nov 03 15:58:25 node1 systemd[1]: Started HAProxy Load Balancer.
Nov 03 15:58:25 node1 haproxy-systemd-wrapper[52429]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
node0 | CHANGED | rc=0 >>
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-11-03 15:58:25 CST; 6s ago
 Main PID: 61968 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─61968 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─61969 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─61970 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Nov 03 15:58:25 node0 systemd[1]: Started HAProxy Load Balancer.
Nov 03 15:58:25 node0 haproxy-systemd-wrapper[61968]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Check the RGW Service

# check listening ports
[root@node0 ceph-deploy]# ss -tnpl | grep '*:80'
LISTEN     0      128          *:80                       *:*                   users:(("haproxy",pid=62425,fd=5))

# radosgw via haproxy on node0
[root@node0 ceph-deploy]# curl node0:80
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Owner>
        <ID>anonymous</ID>
        <DisplayName></DisplayName>
    </Owner>
    <Buckets></Buckets>
</ListAllMyBucketsResult>


# radosgw via haproxy on node1
[root@node0 ceph-deploy]# curl node1:80
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Owner>
        <ID>anonymous</ID>
        <DisplayName></DisplayName>
    </Owner>
    <Buckets></Buckets>
</ListAllMyBucketsResult>
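Both curls above now land on haproxy, which rotates requests across the two backends per the `balance roundrobin` directive. A toy simulation of that selection order in plain shell (not haproxy code):

```shell
# pick_server REQ_NUM SERVER...: 1-based roundrobin selection, the same
# rotation haproxy's `balance roundrobin` applies across healthy servers.
pick_server() {
  req=$1; shift
  idx=$(( (req - 1) % $# + 1 ))
  eval "echo \${$idx}"
}

# requests 1..4 alternate: node0, node1, node0, node1
for req in 1 2 3 4; do
  echo "request $req -> $(pick_server "$req" node0 node1)"
done
```

Real haproxy also skips servers whose `check` probe is failing, so a dead RGW drops out of the rotation automatically.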

Point Clients at the VIP

S3 client configuration

# edit the .s3cfg configuration file
[root@node0 ceph-deploy]# vim /root/.s3cfg
......
#host_base = 192.168.100.130
#host_bucket = 192.168.100.130:80/%(bucket)s
host_base = 192.168.100.100     # changed to the keepalived VIP
host_bucket = 192.168.100.100:80/%(bucket)s # changed to the keepalived VIP
......

# list buckets
[root@node0 ceph-deploy]# s3cmd ls
2022-10-21 01:39  s3://ceph-s3-bucket
2022-10-21 03:16  s3://s3cmd-demo
2022-10-21 06:46  s3://swift-demo

# create a bucket to test write access
[root@node0 ceph-deploy]# s3cmd mb s3://test-1
Bucket 's3://test-1/' created
[root@node0 ceph-deploy]# s3cmd ls
2022-10-21 01:39  s3://ceph-s3-bucket
2022-10-21 03:16  s3://s3cmd-demo
2022-10-21 06:46  s3://swift-demo
2022-11-03 08:36  s3://test-1

Swift client configuration

[root@node0 ceph-deploy]# cat swift_source.sh
# export ST_AUTH=http://192.168.100.130:80/auth
export ST_AUTH=http://192.168.100.100:80/auth   # changed to the keepalived VIP
export ST_USER=ceph-s3-user:swift
export ST_KEY=Gk1Br59ysIOh5tnwBQVqDMAHlspQCvHYixoz4Erz

# list buckets
[root@node0 ceph-deploy]# source swift_source.sh 
[root@node0 ceph-deploy]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo
test-1

# create a bucket to test write access
[root@node0 ceph-deploy]# swift post test-2
[root@node0 ceph-deploy]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo
test-1
test-2

Delete the Test Buckets

[root@node0 ceph-deploy]# s3cmd rb s3://test-1
Bucket 's3://test-1/' removed
[root@node0 ceph-deploy]# s3cmd ls
2022-10-21 01:39  s3://ceph-s3-bucket
2022-10-21 03:16  s3://s3cmd-demo
2022-10-21 06:46  s3://swift-demo
2022-11-03 08:39  s3://test-2

[root@node0 ceph-deploy]# swift delete test-2
test-2
[root@node0 ceph-deploy]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo

RGW High-Availability Cluster Tests

Check where the keepalived VIP is bound

By default the keepalived VIP sits on node0:

[root@node0 ceph-deploy]# ip a l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:81:75:65 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.130/24 brd 192.168.100.255 scope global noprefixroute dynamic ens33
       valid_lft 1645sec preferred_lft 1645sec
    inet 192.168.100.100/24 scope global secondary ens33    # keepalived VIP
       valid_lft forever preferred_lft forever
    inet6 fe80::ea04:47f0:b11e:9e2/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::cad3:6b55:3459:c179/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Stop the haproxy Service

[root@node0 ceph-deploy]# systemctl stop haproxy

Watch the VIP Fail Over

[root@node0 ceph-deploy]# ssh node1
Last login: Thu Nov  3 16:21:47 2022 from node0

# the keepalived VIP has moved to node1
[root@node1 ~]# ip a l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ce:d5:dc brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.131/24 brd 192.168.100.255 scope global noprefixroute dynamic ens33
       valid_lft 1729sec preferred_lft 1729sec
    inet 192.168.100.100/24 scope global secondary ens33    # keepalived VIP
       valid_lft forever preferred_lft forever
    inet6 fe80::ea04:47f0:b11e:9e2/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::cad3:6b55:3459:c179/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::7dd2:fcda:997a:42ec/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
[root@node1 ~]# exit
logout
Connection to node1 closed.
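Instead of ssh'ing to each node to see where the VIP landed, a small poll can watch for it locally. `wait_for_vip` is a hypothetical helper, not part of keepalived; it assumes the iproute2 `ip` command:

```shell
# wait_for_vip VIP [TRIES]: poll the local interfaces until the VRRP VIP
# appears (or TRIES seconds pass). Hypothetical helper built on iproute2.
wait_for_vip() {
  vip=$1
  tries=${2:-10}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if ip -o addr show | grep -q "inet $vip/"; then
      echo "VIP $vip present"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "VIP $vip absent"
  return 1
}

wait_for_vip 192.168.100.100 3 || true   # absent unless this host holds the VIP
```

Run on node1 right after stopping haproxy on node0, it reports when the failover has completed.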

Test Client Access

# check that the VIP answers ping
[root@node0 ceph-deploy]# ping 192.168.100.100
PING 192.168.100.100 (192.168.100.100) 56(84) bytes of data.
64 bytes from 192.168.100.100: icmp_seq=1 ttl=64 time=0.285 ms
64 bytes from 192.168.100.100: icmp_seq=2 ttl=64 time=0.499 ms
^C
--- 192.168.100.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.285/0.392/0.499/0.107 ms

# client access
[root@node0 ceph-deploy]# s3cmd ls
2022-10-21 01:39  s3://ceph-s3-bucket
2022-10-21 03:16  s3://s3cmd-demo
2022-10-21 06:46  s3://swift-demo

[root@node0 ceph-deploy]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo

Restore the haproxy Service

[root@node0 ceph-deploy]# systemctl start haproxy

# check the VIP location again; it fails back to node0
[root@node0 ceph-deploy]# ip a l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:81:75:65 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.130/24 brd 192.168.100.255 scope global noprefixroute dynamic ens33
       valid_lft 1467sec preferred_lft 1467sec
    inet 192.168.100.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ea04:47f0:b11e:9e2/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::cad3:6b55:3459:c179/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

# test client access again
[root@node0 ceph-deploy]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo
[root@node0 ceph-deploy]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph.conf.bak         ceph.mon.keyring  get-pip.py  s3client.py
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph-deploy-ceph.log  crushmap          rdb         swift_source.sh
[root@node0 ceph-deploy]# s3cmd ls
2022-10-21 01:39  s3://ceph-s3-bucket
2022-10-21 03:16  s3://s3cmd-demo
2022-10-21 06:46  s3://swift-demo
posted @ evescn