Ceph object storage: using the s3cmd client, and a short-video business case based on a load balancer

  Testing data reads and writes with the s3cmd client

  s3cmd GitHub repository: https://github.com/s3tools/s3cmd

  RGW server configuration

  In a real production environment, the configuration parameters of RGW1 and RGW2 are identical.

root@ceph-mgr1:~# cat /etc/ceph/ceph.conf 
[global]
fsid = 5372c074-edf7-45dd-b635-16422165c17c
public_network = 192.168.100.0/24
cluster_network = 172.16.100.0/24
mon_initial_members = ceph-mon1,ceph-mon2,ceph-mon3
mon_host = 192.168.100.35,192.168.100.36,192.168.100.37
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_dns_name = rgw.cncf.net

[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_dns_name = rgw.cncf.net

 

  Verifying the load balancer configuration

  View the haproxy configuration:

root@haproxyA:~# vim /etc/haproxy/haproxy.cfg
listen ceph-radosgw-7480
   bind :80
   mode tcp
   server ceph-mgr1 192.168.100.38:7480 check inter 3s fall 3 rise 2
   server ceph-mgr2 192.168.100.39:7480 check inter 3s fall 3 rise 2

 

  After adding a hosts entry on the client that resolves the RGW domain name to the haproxy address, the service can be accessed from a browser.

 

  Installing the s3cmd client

  s3cmd is a command-line client that accesses the Ceph RGW to create buckets, upload and download files, and manage data in object storage.

  CentOS:

[root@ansible yum.repos.d]# yum install s3cmd

 

  Ubuntu:

root@ceph-deploy:~# apt-cache madison s3cmd
root@ceph-deploy:~# apt install s3cmd

 

  Configuring the s3cmd client environment

  On the client, configure name resolution for the RGW endpoint:

[root@ansible ~]# vim /etc/hosts
192.168.100.20 rgw.cncf.net   # IP address of the haproxy load balancer in front of RGW

 

  Run s3cmd --configure to set up the client interactively; this generates a configuration file in the current user's home directory.

 

[root@ansible ~]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: 45CMIRWTFQY9DGJX7W1Z      # enter the object gateway user's access key
Secret Key: EyFmlD51WWfCGbtxFYZcygwDc48QWMYyKs13nuDD     # enter the object gateway user's secret key
Default Region [US]:    # region option; press Enter to accept the default

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: rgw.cncf.net    # enter the RGW domain name; if the default RGW port was not changed (no load balancer on port 80), use rgw-domain:7480

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: rgw.cncf.net/%(bucket)  # bucket template for the RGW; the %(bucket)s.s3.amazonaws.com style or the s3.amazonaws.com/%(bucket) style also works

Encryption password:   # press Enter; no encryption needed

Path to GPG program [/usr/bin/gpg]:  # press Enter; path to the gpg binary (default /usr/bin/gpg), used to encrypt files before upload


When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No    # whether to use HTTPS; enter No to disable it

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:    # press Enter; set this only if an HTTP proxy is required

New settings:
  Access Key: 45CMIRWTFQY9DGJX7W1Z
  Secret Key: EyFmlD51WWfCGbtxFYZcygwDc48QWMYyKs13nuDD
  Default Region: US
  S3 Endpoint: rgw.cncf.net
  DNS-style bucket+hostname:port template for accessing a bucket: rgw.cncf.net/%(bucket)
  Encryption password: 
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name: 
  HTTP Proxy server port: 0
  
Test access with supplied credentials? [Y/n] Y    # test the connection to the RGW
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)   # test succeeded

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y   # save the configuration
Configuration saved to '/root/.s3cfg'   # the s3cmd config file is saved in the current user's home directory

 

  Verify the s3cmd configuration file

[root@ansible ~]# egrep "access|secret|bucket" .s3cfg 
access_key = 45CMIRWTFQY9DGJX7W1Z
access_token = 
bucket_location = US
host_bucket = rgw.cncf.net/%(bucket)
secret_key = EyFmlD51WWfCGbtxFYZcygwDc48QWMYyKs13nuDD
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
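As an alternative to the interactive wizard, the same settings can also be written non-interactively, which is convenient for automation with tools such as ansible. This is a minimal sketch using the example keys and endpoint from the walkthrough above; the option names (`access_key`, `secret_key`, `host_base`, `host_bucket`, `use_https`) are standard `.s3cfg` keys, and the target path `/tmp/s3cfg-demo` is illustrative (normally you would write `~/.s3cfg`).

```shell
# Non-interactive alternative to `s3cmd --configure`: write the config file
# directly. Keys and endpoint are the example values from this walkthrough.
cat > /tmp/s3cfg-demo <<'EOF'
[default]
access_key = 45CMIRWTFQY9DGJX7W1Z
secret_key = EyFmlD51WWfCGbtxFYZcygwDc48QWMYyKs13nuDD
host_base = rgw.cncf.net
host_bucket = rgw.cncf.net/%(bucket)
use_https = False
EOF
grep host_base /tmp/s3cfg-demo
```

Point s3cmd at an alternate config file with the `-c` flag, e.g. `s3cmd -c /tmp/s3cfg-demo ls`.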

 

  Validating data upload with the s3cmd command-line client

  s3cmd usage help:

[root@ansible ~]# s3cmd --help

Commands:
  Make bucket
      s3cmd mb s3://BUCKET
  Remove bucket
      s3cmd rb s3://BUCKET
  List objects or buckets
      s3cmd ls [s3://BUCKET[/PREFIX]]
  List all object in all buckets
      s3cmd la 
  Put file into bucket
      s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
  Get file from bucket
      s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
  Delete file from bucket
      s3cmd del s3://BUCKET/OBJECT
  Delete file from bucket (alias for del)
      s3cmd rm s3://BUCKET/OBJECT
  Restore file from Glacier storage
      s3cmd restore s3://BUCKET/OBJECT
  Synchronize a directory tree to S3 (checks files freshness using size and md5 checksum, unless overridden by options, see below)
      s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR or s3://BUCKET[/PREFIX] s3://BUCKET[/PREFIX]
  Disk usage by buckets
      s3cmd du [s3://BUCKET[/PREFIX]]
  Get various information about Buckets or Files
      s3cmd info s3://BUCKET[/OBJECT]
  Copy object
      s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
  Modify object metadata
      s3cmd modify s3://BUCKET1/OBJECT
  Move object
      s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
  Modify Access control list for Bucket or Files
      s3cmd setacl s3://BUCKET[/OBJECT]
  Modify Bucket Policy
      s3cmd setpolicy FILE s3://BUCKET
  Delete Bucket Policy
      s3cmd delpolicy s3://BUCKET
  Modify Bucket CORS
      s3cmd setcors FILE s3://BUCKET
  Delete Bucket CORS
      s3cmd delcors s3://BUCKET
  Modify Bucket Requester Pays policy
      s3cmd payer s3://BUCKET
  Show multipart uploads
      s3cmd multipart s3://BUCKET [Id]
  Abort a multipart upload
      s3cmd abortmp s3://BUCKET/OBJECT Id
  List parts of a multipart upload
      s3cmd listmp s3://BUCKET/OBJECT Id
  Enable/disable bucket access logging
      s3cmd accesslog s3://BUCKET
  Sign arbitrary string using the secret key
      s3cmd sign STRING-TO-SIGN
  Sign an S3 URL to provide limited public access with expiry
      s3cmd signurl s3://BUCKET/OBJECT <expiry_epoch|+expiry_offset>
  Fix invalid file names in a bucket
      s3cmd fixbucket s3://BUCKET[/PREFIX]
  Create Website from bucket
      s3cmd ws-create s3://BUCKET
  Delete Website
      s3cmd ws-delete s3://BUCKET
  Info about Website
      s3cmd ws-info s3://BUCKET
  Set or delete expiration rule for the bucket
      s3cmd expire s3://BUCKET
  Upload a lifecycle policy for the bucket
      s3cmd setlifecycle FILE s3://BUCKET
  Get a lifecycle policy for the bucket
      s3cmd getlifecycle s3://BUCKET
  Remove a lifecycle policy for the bucket
      s3cmd dellifecycle s3://BUCKET
  Upload a notification policy for the bucket
      s3cmd setnotification FILE s3://BUCKET
  Get a notification policy for the bucket
      s3cmd getnotification s3://BUCKET
  Remove a notification policy for the bucket
      s3cmd delnotification s3://BUCKET
  List CloudFront distribution points
      s3cmd cflist 
  Display CloudFront distribution point parameters
      s3cmd cfinfo [cf://DIST_ID]
  Create CloudFront distribution point
      s3cmd cfcreate s3://BUCKET
  Delete CloudFront distribution point
      s3cmd cfdelete cf://DIST_ID
  Change CloudFront distribution point parameters
      s3cmd cfmodify cf://DIST_ID
  Display CloudFront invalidation request(s) status
      s3cmd cfinvalinfo cf://DIST_ID[/INVAL_ID]

For more information, updates and news, visit the s3cmd website:
http://s3tools.org

 

  Create a bucket to verify permissions

  A bucket is a container for storing objects; before uploading an object of any type, a bucket must be created first.

[root@ansible ~]# s3cmd mb s3://bucket01
Bucket 's3://bucket01/' created

[root@ansible ~]# s3cmd ls
2022-12-16 08:54  s3://bucket01
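`s3cmd mb` fails on invalid bucket names, so it can help to check names first. This is a hedged sketch of the S3-style naming rules that RGW broadly follows (3-63 characters; lowercase letters, digits, dots, and hyphens; starting and ending with an alphanumeric character); the helper function name is illustrative:

```shell
# Rough validator for S3-style bucket names. Note that RGW can relax these
# rules (rgw_relaxed_s3_bucket_names), so treat this as a sanity check only.
is_valid_bucket_name() {
  echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9.-]{1,61}[a-z0-9])?$'
}

is_valid_bucket_name bucket01 && echo "bucket01: ok"
is_valid_bucket_name Bucket01 || echo "Bucket01: rejected (uppercase)"
```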

 

  Verify uploading data

[root@ansible ~]# s3cmd put /etc/hosts s3://bucket01
upload: '/etc/hosts' -> 's3://bucket01/hosts'  [1 of 1]
 186 of 186   100% in    1s   112.99 B/s  done
[root@ansible ~]# s3cmd ls s3://bucket01
2022-12-16 08:59          186  s3://bucket01/hosts
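For single-part uploads, the MD5 reported by the gateway is simply the MD5 checksum of the file, so upload integrity can be spot-checked locally. A minimal sketch (the temp file is illustrative; the `s3cmd info` comparison assumes a live RGW and is shown commented out):

```shell
# Compute a local MD5 to compare against the object's reported checksum.
f=$(mktemp)
printf 'sample payload\n' > "$f"
local_md5=$(md5sum "$f" | awk '{print $1}')
echo "local md5: $local_md5"

# Against a live cluster, compare with the object's MD5 from s3cmd:
#   s3cmd info s3://bucket01/hosts | awk '/MD5/{print $3}'
rm -f "$f"
```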

 

  Verify downloading a file

[root@ansible ~]# s3cmd get s3://bucket01/hosts /tmp/
download: 's3://bucket01/hosts' -> '/tmp/hosts'  [1 of 1]

 

  Delete a file

[root@ansible ~]# s3cmd ls s3://bucket01
2022-12-16 08:59          186  s3://bucket01/hosts
[root@ansible ~]# s3cmd rm s3://bucket01/hosts
delete: 's3://bucket01/hosts'
[root@ansible ~]# s3cmd ls s3://bucket01

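For the short-video case in the title, bulk uploads are usually done with `s3cmd sync` rather than per-file `put`: sync skips files whose size and MD5 already match the remote copy, so re-running it is cheap. A hedged sketch that stages sample clips locally (file names and the bucket path are illustrative; the sync and ACL commands assume a live RGW and are shown commented out):

```shell
# Stage a local tree of clips to be pushed as one batch.
mkdir -p /tmp/videos
printf 'fake-clip-1' > /tmp/videos/clip01.mp4
printf 'fake-clip-2' > /tmp/videos/clip02.mp4
ls /tmp/videos

# Against a live RGW:
#   s3cmd sync /tmp/videos/ s3://bucket01/videos/
#   s3cmd setacl --acl-public s3://bucket01/videos/clip01.mp4   # allow public playback
```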
 

  Connecting to RGW with the s3browser client

  1. Add an account

 

  2. Account configuration

  Account type: select "S3 Compatible Storage";

  Storage endpoint: in REST Endpoint, enter the RGW service address or domain name;

  Access Key ID: enter the RGW account's access key;

  Secret Access Key: enter the RGW account's secret key;

  Finally, click Save changes to save the account configuration.

 

  3. Verify the connection

 

  4. Upload a short video and verify access

 

posted @ 2023-01-29 15:36  PunchLinux