Ceph (Part 3)
一、Object Storage Gateway: RadosGW
1.1、RadosGW Overview
1.2、RadosGW Storage Characteristics
1.2.1、Bucket Features
1.2.2、Bucket Naming Rules
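The source leaves this section without prose, but RGW by default follows the common S3-compatible bucket naming rules: 3 to 63 characters; lowercase letters, digits, dots, and hyphens only; the name must begin and end with a lowercase letter or digit. A hypothetical validator (not part of any Ceph tooling) sketching those rules:

```python
import re

# Common S3-compatible bucket naming rules, which RGW follows by default:
# 3-63 chars; lowercase letters, digits, dots, hyphens; must start and end
# with a lowercase letter or digit. (Illustrative helper, an assumption of
# the defaults, not Ceph code.)
_BUCKET_RE = re.compile(r'^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$')

def is_valid_bucket_name(name: str) -> bool:
    return bool(_BUCKET_RE.match(name))

print(is_valid_bucket_name("images"))     # lowercase, within length -> True
print(is_valid_bucket_name("My_Bucket"))  # uppercase and underscore -> False
print(is_valid_bucket_name("ab"))         # shorter than 3 characters -> False
```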
1.3、Object Storage Access Comparison
1.4、Deploying the RadosGW Service
1.4.1、Install and Initialize the radosgw Service
Ubuntu:
root@ceph-mgr1:~# apt install radosgw
CentOS:
[root@ceph-mgr1 ~]# yum install ceph-radosgw
[root@ceph-mgr2 ~]# yum install ceph-radosgw
# On the ceph-deploy server, initialize ceph-mgr1 and ceph-mgr2 as radosgw services:
[cephadmin@ceph-deploy ~]$ cd ceph-cluster/
[cephadmin@ceph-deploy ceph-cluster]$ ceph-deploy rgw create ceph-mgr2
[cephadmin@ceph-deploy ceph-cluster]$ ceph-deploy rgw create ceph-mgr1
1.4.2、Verify the radosgw Service Status
1.4.3、Verify the radosgw Service Processes
1.4.4、radosgw Storage Pool Types
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool ls
device_health_metrics
myrbd1
cephfs-metadata
cephfs-data
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
mypool
rbd-data1
# View the default radosgw zone information:
cephadmin@ceph-deploy:~/ceph-cluster$ radosgw-admin zone get --rgw-zone=default --rgw-zonegroup=default
# RGW pool descriptions:
.rgw.root: contains the realm information, such as zones and zonegroups.
default.rgw.log: stores log entries for the various RGW operations.
default.rgw.control: system control pool; used to notify the other RGW instances to refresh their caches when data is updated.
default.rgw.meta: metadata pool; stores different rados objects in separate namespaces, including users.uid (user UIDs and their bucket mappings), users.keys (user access keys), users.email (user email addresses), users.swift (the users' Swift subusers), and root (bucket metadata).
default.rgw.buckets.index: stores the bucket-to-object index.
default.rgw.buckets.data: stores the object data.
default.rgw.buckets.non-ec: extra-data pool (data_extra_pool); stores additional information about objects.
default.rgw.users.uid: stores user information.
default.rgw.data.root: stores bucket metadata (corresponding to the RGWBucketInfo structure), such as the bucket name, bucket ID, and data_pool.
1.4.5、RGW Storage Pool Roles
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd lspools
1 device_health_metrics
2 myrbd1
3 cephfs-metadata
4 cephfs-data
5 .rgw.root
6 default.rgw.log
7 default.rgw.control
8 default.rgw.meta
11 mypool
12 rbd-data1
1.4.6、Verify the RGW Zone Information
cephadmin@ceph-deploy:~/ceph-cluster$ radosgw-admin zone get --rgw-zone=default
{
    "id": "cf74e241-35db-4df3-bf31-22299b791ed4",
    "name": "default",
    "domain_root": "default.rgw.meta:root",
    "control_pool": "default.rgw.control",
    "gc_pool": "default.rgw.log:gc",
    "lc_pool": "default.rgw.log:lc",
    "log_pool": "default.rgw.log",
    "intent_log_pool": "default.rgw.log:intent",
    "usage_log_pool": "default.rgw.log:usage",
    "roles_pool": "default.rgw.meta:roles",
    "reshard_pool": "default.rgw.log:reshard",
    "user_keys_pool": "default.rgw.meta:users.keys",
    "user_email_pool": "default.rgw.meta:users.email",
    "user_swift_pool": "default.rgw.meta:users.swift",
    "user_uid_pool": "default.rgw.meta:users.uid",
    "otp_pool": "default.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "default.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "default.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "default.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "realm_id": "",
    "notif_pool": "default.rgw.log:notif"
}
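To see at a glance which rados pools the zone touches, the JSON above can be walked programmatically. A minimal sketch (the embedded dict is only a subset of the `radosgw-admin zone get` output shown above; treating keys ending in `_pool` as pool references, and splitting `pool:namespace` strings, is an assumption based on that output's shape):

```python
import json

# Subset of the `radosgw-admin zone get --rgw-zone=default` output above.
zone = json.loads("""
{
  "name": "default",
  "control_pool": "default.rgw.control",
  "gc_pool": "default.rgw.log:gc",
  "log_pool": "default.rgw.log",
  "user_uid_pool": "default.rgw.meta:users.uid",
  "placement_pools": [
    {
      "key": "default-placement",
      "val": {
        "index_pool": "default.rgw.buckets.index",
        "storage_classes": {"STANDARD": {"data_pool": "default.rgw.buckets.data"}},
        "data_extra_pool": "default.rgw.buckets.non-ec"
      }
    }
  ]
}
""")

def rados_pools(zone: dict) -> set:
    """Collect the distinct rados pool names; 'pool:namespace' strings are
    split so only the pool part is kept."""
    pools = set()
    for key, value in zone.items():
        if key.endswith("_pool") and isinstance(value, str):
            pools.add(value.split(":")[0])
    for placement in zone.get("placement_pools", []):
        val = placement["val"]
        pools.add(val["index_pool"].split(":")[0])
        pools.add(val["data_extra_pool"].split(":")[0])
        for sc in val["storage_classes"].values():
            pools.add(sc["data_pool"].split(":")[0])
    return pools

print(sorted(rados_pools(zone)))
```

Note how several logical pools (gc, lc, intent, usage) share one rados pool, `default.rgw.log`, differing only by namespace.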
1.4.7、Access the radosgw Service
1.5、radosgw Service High-Availability Configuration
1.5.1、radosgw HTTP High Availability
1.5.1.1、Customize the HTTP Port
root@ceph-mgr2:~# vim /etc/ceph/ceph.conf
# Append a custom configuration for the current node at the end of the file:
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = civetweb port=9900
# Restart the service:
root@ceph-mgr2:~# systemctl restart ceph-radosgw@rgw.ceph-mgr2.service
1.5.1.2、Implement High Availability
[root@ceph-client ~]# apt install keepalived
[root@ceph-client ~]# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
[root@ceph-client ~]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
acassen
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER
interface ens3
garp_master_delay 10
smtp_alert
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.247.8.221 dev ens3 label ens3:0
}
}
[root@ceph-client ~]# systemctl restart keepalived.service
# Install haproxy:
apt install haproxy
# Edit the configuration file:
vim /etc/haproxy/haproxy.cfg
# Add the following:
listen ceph-rgw-7480
bind 10.247.8.221:80
mode tcp
server rgw1 10.247.8.205:7480 check inter 2s fall 3 rise 3
server rgw2 10.247.8.206:7480 check inter 2s fall 3 rise 3
# Check the configuration file:
haproxy -f /etc/haproxy/haproxy.cfg
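The `check inter 2s fall 3 rise 3` options mean: probe each backend every 2 seconds, mark it down only after 3 consecutive failed checks, and bring it back only after 3 consecutive successful ones. A hypothetical state machine (illustrative Python, not haproxy code) showing that hysteresis, which prevents a single dropped probe from flapping a healthy RGW in and out of rotation:

```python
class HealthCheck:
    """Minimal model of haproxy's fall/rise check hysteresis (illustrative)."""

    def __init__(self, fall: int = 3, rise: int = 3):
        self.fall, self.rise = fall, rise
        self.up = True    # haproxy assumes a server starts up
        self.streak = 0   # consecutive results opposing the current state

    def observe(self, ok: bool) -> bool:
        """Feed one check result; return whether the server is considered up."""
        if ok == self.up:
            self.streak = 0              # result agrees with the current state
        else:
            self.streak += 1
            threshold = self.fall if self.up else self.rise
            if self.streak >= threshold:
                self.up = not self.up    # flip only after enough opposing checks
                self.streak = 0
        return self.up

hc = HealthCheck(fall=3, rise=3)
for ok in [True, False, False]:
    hc.observe(ok)
print(hc.up)              # two failures are not enough to mark it down -> True
print(hc.observe(False))  # third consecutive failure -> False
```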
Edit the hosts file under C:\Windows\System32\drivers\etc and add the following entry:
10.247.8.221 rgw.xiaonuo.net
1.5.2、radosgw HTTPS
1.5.2.1、Self-Signed Certificate
[root@ceph-mgr2 ~]# cd /etc/ceph/
[root@ceph-mgr2 ceph]# mkdir certs
[root@ceph-mgr2 ceph]# cd certs/
[root@ceph-mgr2 certs]# openssl genrsa -out civetweb.key 2048
[root@ceph-mgr2 certs]# openssl req -new -x509 -key civetweb.key -out civetweb.crt -subj "/CN=rgw.xiaonuo.net"
[root@ceph-mgr2 certs]# cat civetweb.key civetweb.crt > civetweb.pem
[root@ceph-mgr2 certs]# tree
1.5.2.2、SSL Configuration
[root@ceph-mgr2 certs]# vim /etc/ceph/ceph.conf
[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = "civetweb port=9900+9443s ssl_certificate=/etc/ceph/certs/civetweb.pem"
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = "civetweb port=9900+9443s ssl_certificate=/etc/ceph/certs/civetweb.pem"
root@ceph-mgr2:/etc/ceph/certs# scp * 10.247.8.205:/etc/ceph/certs/
root@ceph-mgr2:/etc/ceph# scp ceph.conf 10.247.8.205:/etc/ceph/
root@docker:~# vim /etc/haproxy/haproxy.cfg
listen ceph-rgw-80
bind 10.247.8.221:80
mode tcp
server rgw1 10.247.8.205:9900 check inter 2s fall 3 rise 3
server rgw2 10.247.8.206:9900 check inter 2s fall 3 rise 3
listen ceph-rgw-443
bind 10.247.8.221:443
mode tcp
server rgw1 10.247.8.205:9443 check inter 2s fall 3 rise 3
server rgw2 10.247.8.206:9443 check inter 2s fall 3 rise 3
1.5.2.3、Verify the HTTPS Ports
1.6、Client (s3cmd) Data Read/Write Test
1.6.1、RGW Server Configuration
root@ceph-mgr2:~# cat /etc/ceph/ceph.conf
[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = civetweb port=9900
rgw_dns_name = rgw.xiaonuo.net
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = civetweb port=9900
rgw_dns_name = rgw.xiaonuo.net
1.6.2、Create an RGW Account
cephadmin@ceph-deploy:~/ceph-cluster$ radosgw-admin user create --uid="user1" --display-name="user1"
{
    "user_id": "user1",
    "display_name": "user1",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "user1",
            "access_key": "XXN0K6HNTK2MVODOVNL3",
            "secret_key": "JUzz9B5l7rk9EBnn8yCSPat7rroBf67BvEOlBq6X"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
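The access_key/secret_key pair above is what S3 clients such as s3cmd need. When scripting, the keys can be pulled out of the `radosgw-admin` JSON output; a small sketch (the JSON is embedded here for illustration instead of being captured from a live cluster, e.g. via `subprocess` around `radosgw-admin user info --uid=user1`):

```python
import json

# Trimmed-down radosgw-admin output for user1, embedded for illustration.
raw = """
{
  "user_id": "user1",
  "keys": [
    {
      "user": "user1",
      "access_key": "XXN0K6HNTK2MVODOVNL3",
      "secret_key": "JUzz9B5l7rk9EBnn8yCSPat7rroBf67BvEOlBq6X"
    }
  ]
}
"""

def s3_credentials(raw_json: str) -> tuple:
    """Return (access_key, secret_key) from radosgw-admin user JSON."""
    info = json.loads(raw_json)
    key = info["keys"][0]  # first S3 key pair of the user
    return key["access_key"], key["secret_key"]

access, secret = s3_credentials(raw)
print(access)  # XXN0K6HNTK2MVODOVNL3
```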
1.6.3、Install the s3cmd Client
cephadmin@ceph-deploy:~/ceph-cluster$ sudo apt-cache madison s3cmd
cephadmin@ceph-deploy:~/ceph-cluster$ sudo apt install s3cmd
1.6.4、Configure the s3cmd Client Environment
1.6.4.1、Add DNS Resolution for the s3cmd Client
cephadmin@ceph-deploy:~/ceph-cluster$ vim /etc/hosts
10.247.8.206 rgw.xiaonuo.net    # resolve the domain name to the RGW gateway or the load balancer
1.6.4.2、Configure the Command Environment
root@ceph-deploy:/home/cephadmin/ceph-cluster# s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: XXN0K6HNTK2MVODOVNL3
Secret Key: JUzz9B5l7rk9EBnn8yCSPat7rroBf67BvEOlBq6X
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: rgw.xiaonuo.net:9900
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: rgw.xiaonuo.net:9900/%(bucket)s
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]:No
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
New settings:
Access Key: XXN0K6HNTK2MVODOVNL3
Secret Key: JUzz9B5l7rk9EBnn8yCSPat7rroBf67BvEOlBq6X
Default Region: US
S3 Endpoint: rgw.xiaonuo.net:9900
DNS-style bucket+hostname:port template for accessing a bucket: rgw.xiaonuo.net:9900/%(bucket)s
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
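The DNS-style template answered above uses Python's `%(...)s` dict interpolation, which is why the placeholder must be spelled exactly `%(bucket)s`: a misspelling such as `%(bucker)` fails at expansion time instead of producing a URL. A quick illustration (the template strings are the ones from this section's configuration):

```python
# How s3cmd expands its bucket/host template via Python %-interpolation.
template = "rgw.xiaonuo.net:9900/%(bucket)s"
print(template % {"bucket": "images"})   # rgw.xiaonuo.net:9900/images

# A misspelled placeholder raises KeyError at expansion time:
try:
    "rgw.xiaonuo.net:9900/%(bucker)s" % {"bucket": "images"}
except KeyError as exc:
    print("missing key:", exc)           # missing key: 'bucker'
```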
1.6.5、Create a Bucket to Verify Permissions
root@ceph-deploy:/home/cephadmin/ceph-cluster# s3cmd mb s3://mybucket
Bucket 's3://mybucket/' created
root@ceph-deploy:/home/cephadmin/ceph-cluster# s3cmd mb s3://css
Bucket 's3://css/' created
root@ceph-deploy:/home/cephadmin/ceph-cluster# s3cmd mb s3://images
Bucket 's3://images/' created
1.6.6、Verify Uploading Data
# Put file into bucket: s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
root@ceph-deploy:~# wget https://img1.jcloudcs.com/portal/brand/2021/fl1-2.jpg
# Upload the file:
root@ceph-deploy:~# s3cmd put fl1-2.jpg s3://test-s3cmd/
upload: 'fl1-2.jpg' -> 's3://test-s3cmd/fl1-2.jpg' [1 of 1]
 1294719 of 1294719 100% in 0s 13.26 MB/s done
root@ceph-deploy:~# s3cmd put fl1-2.jpg s3://images/jpg/
upload: 'fl1-2.jpg' -> 's3://images/jpg/fl1-2.jpg' [1 of 1]
 1294719 of 1294719 100% in 0s 16.05 MB/s done
# Verify the data:
root@ceph-deploy:~# s3cmd ls s3://images/
                       DIR s3://images/jpg/
2022-07-01 07:56   1294719 s3://images/fl1-2.jpg
root@ceph-deploy:~# s3cmd ls s3://images/jpg/
2022-07-01 07:52   1294719 s3://images/jpg/fl1-2.jpg
1.6.7、Verify Downloading Files
# Get file from bucket: s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
root@ceph-deploy:~# s3cmd get s3://images/fl1-2.jpg /opt/
download: 's3://images/fl1-2.jpg' -> '/opt/fl1-2.jpg' [1 of 1]
 1294719 of 1294719 100% in 0s 40.26 MB/s done
root@ceph-deploy:~# ls /opt/
fl1-2.jpg
1.6.8、Delete Files
# Delete file from bucket (alias for del): s3cmd rm s3://BUCKET/OBJECT
# Check the current files:
root@ceph-deploy:~# s3cmd ls s3://images/
                       DIR s3://images/jpg/
2022-07-01 07:56   1294719 s3://images/fl1-2.jpg
# Delete the file:
root@ceph-deploy:~# s3cmd rm s3://images/fl1-2.jpg
delete: 's3://images/fl1-2.jpg'
# Verify it has been deleted:
root@ceph-deploy:~# s3cmd ls s3://images/
                       DIR s3://images/jpg/