08 Cloud-Native Object Storage (Repost)
Cloud-Native Object Storage
Object Storage Overview
Object storage generally refers to a storage system used for uploading (put) and downloading (get) data, typically static files such as images, video, and audio. Once uploaded, an object cannot be modified in place; to change it, you must download it, modify it locally, and upload it again. Object storage was first offered by AWS with S3. Ceph's object storage supports two interface styles:
- S3-style interface
- Swift-style interface
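To make the model concrete, here is a minimal sketch of the put/get cycle using the s3cmd client that is configured later in this article (the bucket name my-bucket is hypothetical):

# upload (put) an object; once stored it cannot be edited in place
s3cmd put report.pdf s3://my-bucket/report.pdf
# to change it: download (get) it, modify it locally, then upload it again
s3cmd get s3://my-bucket/report.pdf ./report.pdf
s3cmd put ./report.pdf s3://my-bucket/report.pdf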
Ceph object storage has the following features:
- RESTful Interface (objects are uploaded, downloaded, and managed through a RESTful API)
- S3- and Swift-compliant APIs (two API styles are provided: S3 and Swift)
- S3-style subdomains
- Unified S3/Swift namespace (a flat namespace shared between S3 and Swift)
- User management (security: user authentication)
- Usage tracking
- Striped objects (objects are striped into chunks and reassembled on read)
- Cloud solution integration
- Multi-site deployment
- Multi-site replication
Deploying an RGW Cluster
Deploying an RGW object storage cluster with rook is very convenient: a yaml file is provided by default, which creates the pools the object store needs and the rgw cluster instances.
[root@m1 ceph]# cat object.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      compression_mode: none
  dataPool:
    failureDomain: host
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      compression_mode: none
  preservePoolsOnDelete: false
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    instances: 1
    placement:
    annotations:
    labels:
    resources:
  healthCheck:
    bucket:
      disabled: false
      interval: 60s
    livenessProbe:
      disabled: false
Running `kubectl apply -f object.yaml` automatically creates the rgw object storage pods:
[root@m1 ceph]# kubectl -n rook-ceph get pods -l app=rook-ceph-rgw
NAME                                       READY   STATUS    RESTARTS   AGE
rook-ceph-rgw-my-store-a-b85b4576d-2ptc7   1/1     Running   0          38s
At this point rgw has been deployed. By default a single rgw instance is deployed; `ceph -s` shows that the rgw has joined the Ceph cluster:
[root@m1 ceph]# ceph -s
  cluster:
    id:     17a413b5-f140-441a-8b35-feec8ae29521
    health: HEALTH_WARN
            1 daemons have recently crashed

  services:
    mon: 3 daemons, quorum a,b,c (age 97m)
    mgr: a(active, since 96m)
    mds: myfs:2 {0=myfs-d=up:active,1=myfs-a=up:active} 2 up:standby-replay
    osd: 5 osds: 5 up (since 97m), 5 in (since 29h)
    rgw: 1 daemon active (my.store.a)

  task status:

  data:
    pools:   11 pools, 194 pgs
    objects: 758 objects, 1.4 GiB
    usage:   9.4 GiB used, 241 GiB / 250 GiB avail
    pgs:     194 active+clean

  io:
    client: 2.5 KiB/s rd, 4 op/s rd, 0 op/s wr

  progress:
    PG autoscaler decreasing pool 8 PGs from 32 to 8 (60s)
      [............................]
By default RGW creates several pools to store the object store's data, including a metadata pool and a data pool:
[root@m1 ceph]# ceph osd lspools
1 device_health_metrics
2 replicapool
3 myfs-metadata
4 myfs-data0
5 my-store.rgw.control
6 my-store.rgw.meta
7 my-store.rgw.log
8 my-store.rgw.buckets.index
9 my-store.rgw.buckets.non-ec
10 .rgw.root
11 my-store.rgw.buckets.data # data pool
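To confirm that the replication settings requested in the CRD actually landed on these pools, they can be queried directly (a quick sanity check; pool names taken from the listing above):

# each RGW pool should report the replica count from the CRD (size 3 here)
ceph osd pool get my-store.rgw.buckets.data size
ceph osd pool get my-store.rgw.meta size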
RGW High-Availability Cluster
RGW is a stateless http server that handles http put/get requests on port 80. In production, multiple RGW instances are needed for high availability. In the 《Ceph入门到实战》 course we built the RGW high-availability cluster with haproxy+keepalived:
- Multiple RGW instances are deployed in the Ceph cluster
- HAProxy provides load balancing across them
- keepalived provides the VIP and keeps haproxy itself highly available
An RGW cluster built with rook deploys a single RGW instance by default, which cannot satisfy high availability; multiple instances must be deployed to meet the HA requirement.
[root@m1 ceph]# kubectl -n rook-ceph get svc
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
csi-cephfsplugin-metrics   ClusterIP   10.68.55.204    <none>        8080/TCP,8081/TCP   30h
csi-rbdplugin-metrics      ClusterIP   10.68.66.14     <none>        8080/TCP,8081/TCP   30h
rook-ceph-mgr              ClusterIP   10.68.82.138    <none>        9283/TCP            30h
rook-ceph-mgr-dashboard    ClusterIP   10.68.153.82    <none>        8443/TCP            30h
rook-ceph-mon-a            ClusterIP   10.68.231.222   <none>        6789/TCP,3300/TCP   30h
rook-ceph-mon-b            ClusterIP   10.68.163.216   <none>        6789/TCP,3300/TCP   30h
rook-ceph-mon-c            ClusterIP   10.68.61.127    <none>        6789/TCP,3300/TCP   30h
rook-ceph-rgw-my-store     ClusterIP   10.68.3.165     <none>        80/TCP              4m20s
[root@m1 ceph]# curl http://10.68.3.165:80
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>anonymous</ID>
    <DisplayName></DisplayName>
  </Owner>
  <Buckets></Buckets>
</ListAllMyBucketsResult>
# Modify the configuration
[root@m1 ceph]# cat object.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      compression_mode: none
  dataPool:
    failureDomain: host
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      compression_mode: none
  preservePoolsOnDelete: false
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    instances: 2 # set instances to 2 to deploy two rgw services
    placement:
    annotations:
    labels:
    resources:
  healthCheck:
    bucket:
      disabled: false
      interval: 60s
    livenessProbe:
      disabled: false
After applying, two rgw pods are created:
[root@m1 ceph]# kubectl apply -f object.yaml
cephobjectstore.ceph.rook.io/my-store configured
[root@m1 ceph]# kubectl -n rook-ceph get pods -l app=rook-ceph-rgw
NAME                                       READY   STATUS    RESTARTS   AGE
rook-ceph-rgw-my-store-a-b85b4576d-2ptc7   1/1     Running   0          16m
rook-ceph-rgw-my-store-b-c4c4c8fd8-dt658   1/1     Running   0          5s
The service is exposed through its VIP; the VIP maps port 80 to the two backend pods:
[root@m1 ceph]# kubectl -n rook-ceph get svc rook-ceph-rgw-my-store
NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
rook-ceph-rgw-my-store   ClusterIP   10.68.3.165   <none>        80/TCP    93m
[root@m1 ceph]# kubectl -n rook-ceph describe svc rook-ceph-rgw-my-store
Name:              rook-ceph-rgw-my-store
Namespace:         rook-ceph
Labels:            app=rook-ceph-rgw
                   ceph_daemon_id=my-store
                   ceph_daemon_type=rgw
                   rgw=my-store
                   rook_cluster=rook-ceph
                   rook_object_store=my-store
Annotations:       <none>
Selector:          app=rook-ceph-rgw,ceph_daemon_id=my-store,rgw=my-store,rook_cluster=rook-ceph,rook_object_store=my-store
Type:              ClusterIP
IP Families:       <none>
IP:                10.68.3.165
IPs:               10.68.3.165
Port:              http  80/TCP
TargetPort:        8080/TCP
Endpoints:         172.20.3.40:8080,172.20.4.36:8080
Session Affinity:  None
Events:            <none>
[root@m1 ceph]# kubectl -n rook-ceph get pods -l app=rook-ceph-rgw -o wide
NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
rook-ceph-rgw-my-store-a-b85b4576d-2ptc7   1/1     Running   0          93m   172.20.4.36   192.168.100.136   <none>           <none>
rook-ceph-rgw-my-store-b-c4c4c8fd8-dt658   1/1     Running   0          77m   172.20.3.40   192.168.100.137   <none>           <none>
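A quick end-to-end smoke test is to hit the VIP repeatedly; kube-proxy spreads the requests across both endpoints (a minimal sketch using the ClusterIP from above):

# every request through the service VIP should return HTTP 200
for i in $(seq 1 5); do
  curl -s -o /dev/null -w "%{http_code}\n" http://10.68.3.165:80
done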
RGW Advanced Scheduling
As with mon and mds earlier, rgw supports advanced scheduling: nodeAffinity, podAntiAffinity, podAffinity, and tolerations can be used to place the RGW pods on specific nodes.
[root@m1 ceph]# vim object.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      compression_mode: none
  dataPool:
    failureDomain: host
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      compression_mode: none
  preservePoolsOnDelete: false
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    instances: 2
    placement:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-rgw
              operator: In
              values:
              - enabled
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - rook-ceph-rgw
          topologyKey: kubernetes.io/hostname
    annotations:
    labels:
    resources:
  healthCheck:
    bucket:
      disabled: false
      interval: 60s
    livenessProbe:
      disabled: false
Label the nodes:
[root@node-1 ~]# kubectl label node 192.168.100.133 ceph-rgw=enabled
[root@node-1 ~]# kubectl label node 192.168.100.134 ceph-rgw=enabled
[root@node-1 ~]# kubectl label node 192.168.100.135 ceph-rgw=enabled
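Before the operator reschedules the pods, it is worth verifying that the labels were applied (a quick check):

# only nodes carrying the ceph-rgw=enabled label are now scheduling candidates
kubectl get nodes -l ceph-rgw=enabled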
[root@m1 ceph]# kubectl -n rook-ceph get pods -l app=rook-ceph-rgw -o wide
NAME                                       READY   STATUS        RESTARTS   AGE    IP            NODE              NOMINATED NODE   READINESS GATES
rook-ceph-rgw-my-store-a-b85b4576d-2ptc7   0/1     Terminating   0          105m   172.20.4.36   192.168.100.136   <none>           <none>
rook-ceph-rgw-my-store-b-c4c4c8fd8-dt658   1/1     Running       0          89m    172.20.3.40   192.168.100.137   <none>           <none>
[root@m1 ceph]# kubectl -n rook-ceph get pods -l app=rook-ceph-rgw -o wide
NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
rook-ceph-rgw-my-store-a-847c97bc4-qpvqp   0/1     Pending   0          0s    <none>        <none>            <none>           <none>
rook-ceph-rgw-my-store-b-c4c4c8fd8-dt658   1/1     Running   0          89m   172.20.3.40   192.168.100.137   <none>           <none>
[root@m1 ceph]# kubectl -n rook-ceph get pods -l app=rook-ceph-rgw -o wide
NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
rook-ceph-rgw-my-store-a-847c97bc4-qpvqp   1/1     Running   0          25s   172.20.1.51   192.168.100.135   <none>           <none>
rook-ceph-rgw-my-store-b-c4bc6b4b6-rhbqz   1/1     Running   0          4s    172.20.2.64   192.168.100.134   <none>           <none>
Connecting to an External Cluster
Rook can also manage an external RGW object store. The CephObjectStore custom resource registers the external RGW and creates a service that fronts it, so clients inside the kubernetes cluster can reach the external object store simply by accessing the service vip.
[root@m1 ceph]# cat object-external.yaml
#################################################################################################################
# Create an object store with settings for replication in a production environment. A minimum of 3 hosts with
# OSDs are required in this example.
# kubectl create -f object.yaml
#################################################################################################################
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: external-store
  namespace: rook-ceph # namespace:cluster
spec:
  gateway:
    # The port on which **ALL** the gateway(s) are listening on.
    # Passing a single IP from a load-balancer is also valid.
    port: 80
    externalRgwEndpoints:
    - ip: 192.168.39.182
  healthCheck:
    bucket:
      disabled: false
      interval: 60s
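Applying the manifest and checking the result might look like the following (a sketch; the external endpoint above must actually be reachable from the cluster):

kubectl apply -f object-external.yaml
# the external store should appear alongside my-store
kubectl -n rook-ceph get cephobjectstores
# rook creates a service fronting the external RGW
kubectl -n rook-ceph get svc | grep external-store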
Creating a Bucket
Once the Ceph RGW cluster is ready, you can store object data in RGW. Before storing data you need to create a bucket, the logical storage space of an object store. Buckets can be managed through Ceph's native tooling such as radosgw-admin, but in a cloud-native environment the cloud-native way of managing buckets is recommended. What is the cloud-native model? Resources are abstracted into kubernetes-style objects, minimizing the use of native commands for tasks such as creating pools and buckets.
First, create the StorageClass that bucket claims will consume:
[root@m1 ceph]# grep -v "[*^#]" storageclass-bucket-delete.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-delete-bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  region: us-east-1
- Create the StorageClass
[root@m1 ceph]# kubectl apply -f storageclass-bucket-delete.yaml
storageclass.storage.k8s.io/rook-ceph-delete-bucket created
[root@m1 ceph]# kubectl get sc
NAME                      PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block           rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   38h
rook-ceph-delete-bucket   rook-ceph.ceph.rook.io/bucket   Delete          Immediate           false                  24s
rook-cephfs               rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   22h
Request a bucket from the StorageClass with an ObjectBucketClaim; this creates a bucket whose name is generated from the ceph-bkt prefix:
[root@m1 ceph]# grep -v "[*^#]" object-bucket-claim-delete.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-delete-bucket
  additionalConfig:
Check the bucket creation:
[root@m1 ceph]# kubectl apply -f object-bucket-claim-delete.yaml
objectbucketclaim.objectbucket.io/ceph-delete-bucket created
[root@m1 ceph]# kubectl get objectbucketclaim.objectbucket.io
NAME                 AGE
ceph-delete-bucket   40s
[root@m1 ceph]# radosgw-admin bucket list
[
    "rook-ceph-bucket-checker-b0360498-0acf-4464-bddd-bca1bf4ce4b0",
    "ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a"
]
Accessing Object Storage from Containers
After the ObjectBucketClaim is created, the operator automatically generates the bucket's access information, including the access endpoint and the keys required for authentication. These are stored in a configmap and a secret and can be retrieved as follows.
Get the access endpoint:
[root@m1 ceph]# kubectl get configmaps ceph-delete-bucket -o yaml -o jsonpath='{.data.BUCKET_HOST}'
rook-ceph-rgw-my-store.rook-ceph.svc
Get the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Secrets are base64-encoded, so the decoded strings must be retrieved:
[root@m1 ceph]# kubectl get secrets ceph-delete-bucket -o yaml -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
XQW87WB02CK3XP13U5B0
[root@m1 ceph]# kubectl get secrets ceph-delete-bucket -o yaml -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d
O2dMSmOetHducvdfFwn7pCCLb6NHhNoqYaUbgMXP
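For scripting, all three values can be pulled into environment variables in one go (a small convenience sketch using the same configmap and secret):

export AWS_HOST=$(kubectl get configmaps ceph-delete-bucket -o jsonpath='{.data.BUCKET_HOST}')
export AWS_ACCESS_KEY_ID=$(kubectl get secrets ceph-delete-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
export AWS_SECRET_ACCESS_KEY=$(kubectl get secrets ceph-delete-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)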
Installing the s3cmd Client
- Install the toolbox container
[root@m1 ceph]# kubectl apply -f toolbox.yaml
deployment.apps/rook-ceph-tools unchanged
- Install the s3cmd client
# Log in to the container
[root@m1 ceph]# kubectl -n rook-ceph exec -it rook-ceph-tools-77bf5b9b7d-9pq6m -- bash
# Switch to the Aliyun mirror, since the default overseas repos are unreachable
[root@rook-ceph-tools-77bf5b9b7d-9pq6m /]# cd /etc/yum.repos.d/
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# mkdir bak
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# mv * bak
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# yum clean all
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# yum install epel-release
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d/]# yum install -y s3cmd
Configure s3cmd:
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: XQW87WB02CK3XP13U5B0 # enter the access key
Secret Key: O2dMSmOetHducvdfFwn7pCCLb6NHhNoqYaUbgMXP # enter the secret key
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: rook-ceph-rgw-my-store.rook-ceph.svc:80 # enter the endpoint
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: rook-ceph-rgw-my-store.rook-ceph.svc:80/%(bucket) # enter the bucket template
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no # disable HTTPS
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
New settings:
Access Key: XQW87WB02CK3XP13U5B0
Secret Key: O2dMSmOetHducvdfFwn7pCCLb6NHhNoqYaUbgMXP
Default Region: US
S3 Endpoint: rook-ceph-rgw-my-store.rook-ceph.svc:80
DNS-style bucket+hostname:port template for accessing a bucket: rook-ceph-rgw-my-store.rook-ceph.svc:80/%(bucket)
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] Y # confirm
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y # save the settings
Configuration saved to '/root/.s3cfg'
- The generated configuration file looks like this:
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# cat /root/.s3cfg
[default]
access_key = XQW87WB02CK3XP13U5B0 # access key
......
host_base = rook-ceph-rgw-my-store.rook-ceph.svc:80 # endpoint
host_bucket = rook-ceph-rgw-my-store.rook-ceph.svc:80/%(bucket) # bucket template
......
secret_key = O2dMSmOetHducvdfFwn7pCCLb6NHhNoqYaUbgMXP # secret key
......
- List the buckets
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# s3cmd ls
2022-11-26 01:38 s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a
Upload objects:
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# s3cmd put /etc/passwd* s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a
upload: '/etc/passwd' -> 's3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/passwd' [1 of 2]
1168 of 1168 100% in 0s 69.92 KB/s done
upload: '/etc/passwd-' -> 's3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/passwd-' [2 of 2]
1112 of 1112 100% in 0s 125.83 KB/s done
# list the uploaded objects
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# s3cmd ls s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/
2022-11-26 02:03 1168 s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/passwd
2022-11-26 02:03 1112 s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/passwd-
# download an object
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# s3cmd get s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/passwd ./
download: 's3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/passwd' -> './passwd' [1 of 1]
1168 of 1168 100% in 0s 27.12 KB/s done
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# ls -lh
total 32K
-rw-r--r-- 1 root root 2.6K Nov 26 01:51 CentOS-Base.repo
drwxr-xr-x 2 root root 4.0K Nov 26 01:51 bak
-rw-r--r-- 1 root root 1.5K Jun 8 2021 epel-modular.repo
-rw-r--r-- 1 root root 1.6K Jun 8 2021 epel-playground.repo
-rw-r--r-- 1 root root 1.6K Jun 8 2021 epel-testing-modular.repo
-rw-r--r-- 1 root root 1.5K Jun 8 2021 epel-testing.repo
-rw-r--r-- 1 root root 1.4K Jun 8 2021 epel.repo
-rw-r--r-- 1 root root 1.2K Nov 26 02:03 passwd
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# ls -lh | grep passwd
-rw-r--r-- 1 root root 1.2K Nov 26 02:03 passwd
[root@rook-ceph-tools-77bf5b9b7d-9pq6m yum.repos.d]# exit
Other s3cmd operations include:
- put: upload
- get: download
- ls: list
- rm: delete
- info: show details
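Of these, info and rm were not demonstrated above; against the bucket created earlier they would look like this (a sketch reusing the object uploaded previously):

# show metadata of an uploaded object
s3cmd info s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/passwd
# delete an object from the bucket
s3cmd rm s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/passwd-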
Accessing Object Storage from Outside the Cluster
How does a client outside the kubernetes cluster access Ceph object storage? Expose the service to the outside world as a NodePort:
[root@m1 ceph]# cat rgw-external.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-rgw-my-store-external
  namespace: rook-ceph # namespace:cluster
  labels:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph # namespace:cluster
    rook_object_store: my-store
spec:
  ports:
  - name: rgw
    port: 80 # service port mentioned in object store crd
    protocol: TCP
    targetPort: 8080
  selector:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph # namespace:cluster
    rook_object_store: my-store
  sessionAffinity: None
  type: NodePort
- Apply the rgw-external.yaml manifest
[root@m1 ceph]# kubectl apply -f rgw-external.yaml
service/rook-ceph-rgw-my-store-external created
[root@m1 ceph]# kubectl -n rook-ceph get svc
NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
......
rook-ceph-rgw-my-store            ClusterIP   10.68.3.165     <none>        80/TCP         17h
rook-ceph-rgw-my-store-external   NodePort    10.68.242.187   <none>        80:38623/TCP   58s
[root@m1 ceph]# curl http://m1:38623
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>anonymous</ID>
    <DisplayName></DisplayName>
  </Owner>
  <Buckets></Buckets>
</ListAllMyBucketsResult>
- Install s3cmd
[root@m1 ceph]# yum install -y s3cmd
- Configure s3cmd
[root@m1 ceph]# s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: XQW87WB02CK3XP13U5B0
Secret Key: O2dMSmOetHducvdfFwn7pCCLb6NHhNoqYaUbgMXP
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: m1:38623 # enter the endpoint
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: m1:38623/%(bucket) # enter the bucket template
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
New settings:
Access Key: XQW87WB02CK3XP13U5B0
Secret Key: O2dMSmOetHducvdfFwn7pCCLb6NHhNoqYaUbgMXP
Default Region: US
S3 Endpoint: m1:38623
DNS-style bucket+hostname:port template for accessing a bucket: m1:38623/%(bucket)
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
- Verify s3cmd
[root@m1 ceph]# s3cmd ls
2022-11-26 01:38 s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a
[root@m1 ceph]# s3cmd put rgw-external.yaml s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a
upload: 'rgw-external.yaml' -> 's3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/rgw-external.yaml' [1 of 1]
517 of 517 100% in 0s 10.13 KB/s done
[root@m1 ceph]# s3cmd ls s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a
2022-11-26 02:03 1168 s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/passwd
2022-11-26 02:03 1112 s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/passwd-
2022-11-26 02:45 517 s3://ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a/rgw-external.yaml
Creating a Dedicated User
When a bucket is created through an OBC, an access key and secret key are generated automatically, but those keys can only operate on objects within that particular bucket. To use the full object storage api, a user must be created separately. Ceph's radosgw-admin user create is the native way to create and manage users; in the cloud-native era, the cloud-native model is recommended instead: rook provides the CephObjectStoreUser custom resource for user management. After the user is created, a secrets object is generated automatically, from which the access credentials can be retrieved.
- The user from the workflow above only has permissions on its own bucket; it cannot create additional buckets or perform other bucket-level operations
[root@m1 ceph]# s3cmd mb s3://test
ERROR: S3 error: 400 (TooManyBuckets)
- Create a dedicated user with bucket-level permissions
[root@m1 ceph]# grep -v "[*^#]" object-user.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
spec:
  store: my-store
  displayName: "my display name"
[root@m1 ceph]# kubectl apply -f object-user.yaml
cephobjectstoreuser.ceph.rook.io/my-user created
- Inspect the new user's secrets
[root@m1 ceph]# kubectl -n rook-ceph get secrets rook-ceph-object-user-my-store-my-user
NAME                                     TYPE                 DATA   AGE
rook-ceph-object-user-my-store-my-user   kubernetes.io/rook   3      28s
[root@m1 ceph]# kubectl -n rook-ceph get secrets rook-ceph-object-user-my-store-my-user -o yaml
apiVersion: v1
data:
  AccessKey: RTdXM0E1Mk9SUUNGM1ZZUzBDSVE=
  Endpoint: aHR0cDovL3Jvb2stY2VwaC1yZ3ctbXktc3RvcmUucm9vay1jZXBoLnN2Yzo4MA==
  SecretKey: N2ZGOXZDdzNzOHNNWlVtZWwyMWNqanFnNlNocU1zZ3Bzd1RHZW9XUQ==
kind: Secret
metadata:
  ......
  labels:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: my-store
    user: my-user
  name: rook-ceph-object-user-my-store-my-user
  namespace: rook-ceph
  ownerReferences:
  - apiVersion: ceph.rook.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: CephObjectStoreUser
    name: my-user
    uid: 2d42369b-fce0-4d35-982f-af4598e86006
  resourceVersion: "973421"
  uid: 182f1e04-0d2b-4814-96a8-5411dfbaa17e
type: kubernetes.io/rook
- Get the new user's access key and secret key
[root@m1 ceph]# kubectl -n rook-ceph get secrets rook-ceph-object-user-my-store-my-user -o yaml -o jsonpath='{.data.AccessKey}' | base64 -d
E7W3A52ORQCF3VYS0CIQ
[root@m1 ceph]# kubectl -n rook-ceph get secrets rook-ceph-object-user-my-store-my-user -o yaml -o jsonpath='{.data.SecretKey}' | base64 -d
7fF9vCw3s8sMZUmel21cjjqg6ShqMsgpswTGeoWQ
[root@m1 ceph]# kubectl -n rook-ceph get secrets rook-ceph-object-user-my-store-my-user -o yaml -o jsonpath='{.data.Endpoint}' | base64 -d
http://rook-ceph-rgw-my-store.rook-ceph.svc:80
- Update /root/.s3cfg on the host with the new user's access key and secret key
[root@m1 ceph]# vim /root/.s3cfg
[default]
#access_key = XQW87WB02CK3XP13U5B0
access_key = E7W3A52ORQCF3VYS0CIQ
......
#secret_key = O2dMSmOetHducvdfFwn7pCCLb6NHhNoqYaUbgMXP
secret_key = 7fF9vCw3s8sMZUmel21cjjqg6ShqMsgpswTGeoWQ
......
- Verify the new user's permissions
[root@m1 ceph]# s3cmd mb s3://test
Bucket 's3://test/' created
[root@m1 ceph]# s3cmd ls
2022-11-26 02:57 s3://test
[root@m1 ceph]# s3cmd put rgw-external.yaml s3://test/
upload: 'rgw-external.yaml' -> 's3://test/rgw-external.yaml' [1 of 1]
517 of 517 100% in 0s 6.27 KB/s done
[root@m1 ceph]# s3cmd ls s3://test/
2022-11-26 02:57 517 s3://test/rgw-external.yaml
[root@m1 ceph]# radosgw-admin bucket list
[
    "rook-ceph-bucket-checker-b0360498-0acf-4464-bddd-bca1bf4ce4b0",
    "ceph-bkt-b7d89ff6-e2b6-4360-89b5-0c33082fda2a",
    "test" # the newly created bucket
]
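The same user can also be inspected from the Ceph side, for example inside the toolbox pod (a sketch; the RGW uid is assumed to match the CephObjectStoreUser name):

# show the user's keys, quota, and bucket limit as RGW sees them
radosgw-admin user info --uid=my-user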
RGW Cluster Maintenance
RGW runs as container pods in the cluster:
[root@m1 ceph]# kubectl get pods -n rook-ceph -l app=rook-ceph-rgw
NAME                                       READY   STATUS    RESTARTS   AGE
rook-ceph-rgw-my-store-a-847c97bc4-qpvqp   1/1     Running   0          15h
rook-ceph-rgw-my-store-b-c4bc6b4b6-rhbqz   1/1     Running   0          15h
RGW cluster status:
[root@m1 ceph]# ceph -s
  cluster:
    id:     17a413b5-f140-441a-8b35-feec8ae29521
    health: HEALTH_WARN
            1 daemons have recently crashed

  services:
    mon: 3 daemons, quorum a,b,c (age 26m)
    mgr: a(active, since 19h)
    mds: myfs:2 {0=myfs-d=up:active,1=myfs-a=up:active} 2 up:standby-replay
    osd: 5 osds: 5 up (since 26m), 5 in (since 46h)
    rgw: 2 daemons active (my.store.a, my.store.b) # RGW service status

  task status:

  data:
    pools:   11 pools, 177 pgs
    objects: 828 objects, 1.4 GiB
    usage:   9.4 GiB used, 241 GiB / 250 GiB avail
    pgs:     177 active+clean

  io:
    client: 2.8 KiB/s rd, 4 op/s rd, 0 op/s wr
View the RGW logs:
[root@m1 ceph]# kubectl -n rook-ceph logs -f rook-ceph-rgw-my-store-a-847c97bc4-qpvqp
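Since the RGW pods are owned by deployments, routine maintenance is mostly standard kubernetes operations (a sketch of common actions; the pod name comes from the listing above):

# restart a misbehaving RGW instance; the deployment recreates it automatically
kubectl -n rook-ceph delete pod rook-ceph-rgw-my-store-a-847c97bc4-qpvqp
# scale the gateway by editing instances in object.yaml and re-applying
kubectl apply -f object.yaml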