Distributed Storage - Ceph 02
I. Ceph authentication flow, user management, and permission management
1. Authentication flow: cephx authentication only applies between Ceph components; it cannot be extended to non-Ceph components. cephx handles authentication and authorization only; it does not encrypt data in transit.
- The client sends an authentication request to a mon.
- The mon generates a session key, encrypts it with the user's key, and sends it to the client.
- The client decrypts it with its own key to obtain the session key, then requests a ticket.
- The mon validates the session key and issues a ticket.
- The client uses the ticket to access the OSDs.
- The OSD validates the ticket and then returns data.
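In practice the whole exchange is handled inside the client library; the operator only points the client at a keyring. A minimal sketch, assuming the default admin keyring path:
# The cephx handshake (session key + ticket) happens transparently inside the ceph CLI.
ceph --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring -s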
2. Access flow:
Whatever type of Ceph client is used (block device, object storage, or file system), Ceph stores all data as objects in storage pools.
A Ceph user needs access permission on a pool to read and write data.
A Ceph user needs execute permission to run Ceph administrative commands.
3. User management
Ceph user names separate the user type and user ID with a dot, e.g. client.admin.
List user info: ceph auth ls / ceph auth get client.admin
3.1 ceph auth add
ceph auth add client.lyh mon 'allow r' osd 'allow rw pool=mypool'
3.2 ceph auth get-or-create: if the user already exists, it just returns the user name and key in keyring-file format
ceph auth get-or-create client.lyh mon 'allow r' osd 'allow rw pool=mypool'
[client.lyh]
key = AQDp7E9ipaqNJBAAUB+n+3UQjdDf94r+hz4+Sg==
3.3 ceph auth get-or-create-key # returns only the key
ceph auth get-or-create-key client.liuyh mon 'allow r' osd 'allow rw pool=mypool'
AQBB8E9i54O4JRAArgPYQRu+f/Xx82H8Mtx38A==
3.4 ceph auth print-key # get a single user's key
ceph auth print_key client.liuyh
3.5 ceph auth caps # modify a user's capabilities
ceph auth caps client.lyh mon 'allow rw' osd 'allow rw pool=mypool'
updated caps for client.lyh
3.6 ceph auth rm # delete a user
ceph auth rm client.lyh
3.7 Export a user's authentication info
ceph auth get client.liuyh -o ceph.client.liuyh.keyring
3.8 Import a user's authentication info
ceph auth import -i ceph.client.liuyh.keyring
3.9 Merge multiple users into a single keyring file
Create the keyring file: ceph-authtool -C ceph.client.user.keyring
Import a user: ceph-authtool ./ceph.client.user.keyring --import-keyring ./ceph.client.liuyh.keyring
Import another user: sudo ceph-authtool ./ceph.client.user.keyring --import-keyring ./ceph.client.admin.keyring
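To confirm both users ended up in the merged keyring, it can be listed with ceph-authtool, for example:
ceph-authtool -l ./ceph.client.user.keyring   # should show both the client.liuyh and client.admin entries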
II. Ceph RBD block storage usage and mounting with a non-admin user
Create the user: ceph auth add client.lyh mon 'allow r' osd 'allow rwx pool=hulai'
Create the keyring file: ceph auth get client.lyh -o ceph.client.lyh.keyring
Copy it to the client node: scp ceph.client.lyh.keyring root@192.168.1.77:/etc/ceph
Verify: ceph --user lyh -s
Map the image (see the format-and-mount sketch below): sudo rbd --user lyh -p hulai map rbd-lyh
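The write test below targets a mounted filesystem, so the mapped device has to be formatted and mounted first; a minimal sketch, assuming the map command returned /dev/rbd0 and using /ceph-rbd as the mount point (both are assumptions, confirm with rbd showmapped):
rbd --user lyh showmapped            # confirm the device name, e.g. /dev/rbd0
sudo mkfs.xfs /dev/rbd0              # create a filesystem on the freshly mapped image
sudo mkdir -p /ceph-rbd
sudo mount /dev/rbd0 /ceph-rbd       # mount it where the write test below expects it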
Write test data: dd if=/dev/zero of=/ceph-rbd/testfile bs=1M count=50
Unmount and unmap: umount /ceph-rbd/; rbd --user lyh -p hulai unmap rbd-lyh
Delete the image: rbd rm -p hulai --image rbd-lyh
Roll an image back to a snapshot: rbd snap rollback -p hulai --image rbd-hulai --snap rbd-hulai-bak
III. CephFS usage, kernel mount, ceph-fuse mount, and MDS high availability
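For the kernel and ceph-fuse mounts mentioned in the title, a minimal sketch of both, assuming a monitor at 192.168.1.191, a user with CephFS capabilities (client.admin here for simplicity), and /mnt/cephfs as the mount point — all of these are assumptions:
# kernel client: authenticate with the user name and a secret file containing its key
sudo mount -t ceph 192.168.1.191:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# FUSE client: reads /etc/ceph/ceph.client.admin.keyring automatically
sudo ceph-fuse --name client.admin -m 192.168.1.191:6789 /mnt/cephfs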
MDS high availability
Configure a standby MDS for each rank so that if the rank's active MDS fails, another MDS takes over immediately. There are several ways to configure standbys; the common options are:
mds_standby_replay: true or false. When true, replay mode is enabled and the standby keeps replaying the active MDS's journal in real time, so failover is fast if the active MDS dies. When false, the journal is only replayed after a failure, which causes a short interruption.
mds_standby_for_name: make this MDS daemon a standby only for the MDS with the given name.
mds_standby_for_rank: make this MDS daemon a standby only for the given rank number. When multiple CephFS file systems exist, mds_standby_for_fscid can additionally be used to target a specific file system.
mds_standby_for_fscid: specify a CephFS file system ID; it works together with mds_standby_for_rank. If mds_standby_for_rank is set, the daemon is a standby for that rank of the given file system; if not, it is a standby for all ranks of that file system.
Configure high availability (after distributing the configuration, restart the ceph-mds@X service on each node):
[mds.ceph-ceph02]
mds_standby_for_name = ceph-ceph01
mds_standby_replay = true
[mds.ceph-ceph4]
mds_standby_for_name = ceph-ceph03
mds_standby_replay = true
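After restarting the ceph-mds@X services, the active/standby layout can be verified, for example:
ceph mds stat        # shows which daemons are active and which are standby/standby-replay
ceph fs status       # per-rank view of the file system, including standby daemons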
IV. Object storage usage, RadosGW overview and deployment, and web access
Create a user: radosgw-admin user create --uid="user1" --display-name="user1"
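The access key and secret key that s3cmd asks for during configuration are generated by the command above; they can be read back at any time with:
radosgw-admin user info --uid=user1   # the "keys" section contains access_key and secret_key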
Create the initial s3cmd configuration file:
s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: NESIZBY0KEWAFSRF6Z5C
Secret Key: p9Fg4xNEQIrpKw8B6j8a7MaiGxbeGjkqsD6En7x0
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: net.radosgw.com:7480
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: net.radosgw.com:7480%(bucket)
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: 123456
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
New settings:
Access Key: NESIZBY0KEWAFSRF6Z5C
Secret Key: p9Fg4xNEQIrpKw8B6j8a7MaiGxbeGjkqsD6En7x0
Default Region: US
S3 Endpoint: net.radosgw.com:7480
DNS-style bucket+hostname:port template for accessing a bucket: net.radosgw.com:7480%(bucket)
Encryption password: 123456
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Success. Encryption and decryption worked fine :-)
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
Create a bucket: s3cmd mb s3://mybucket
Upload a file: s3cmd put 2019070508503489.png s3://mybucket
List the bucket: s3cmd ls s3://mybucket
Download: s3cmd get s3://mybucket/2019070508503489.png /opt
Delete: s3cmd rm s3://mybucket/2019070508503489.png
V. Ceph CRUSH
Change an OSD's CRUSH weight: ceph osd crush reweight osd.1 0.02
Custom CRUSH rules:
Export the CRUSH map: ceph osd getcrushmap -o ./crushmap
Decompile it to text: crushtool -d ./crushmap > ./crushmap.txt
Add the following configuration:
host ceph-ssdnode1 {
id -103 # do not change unnecessarily
id -104 class hdd # do not change unnecessarily
# weight 0.098
alg straw2
hash 0 # rjenkins1
item osd.0 weight 0.098
}
host ceph-ssdnode2 {
id -105 # do not change unnecessarily
id -106 class hdd # do not change unnecessarily
# weight 0.098
alg straw2
hash 0 # rjenkins1
item osd.3 weight 0.098
}
host ceph-ssdnode3 {
id -107 # do not change unnecessarily
id -108 class hdd # do not change unnecessarily
# weight 0.098
alg straw2
hash 0 # rjenkins1
item osd.6 weight 0.098
}
#magedu bucket
root ssd {
id -127 # do not change unnecessarily
id -11 class hdd # do not change unnecessarily
# weight 1.952
alg straw
hash 0 # rjenkins1
item ceph-ssdnode1 weight 0.488
item ceph-ssdnode2 weight 0.488
item ceph-ssdnode3 weight 0.488
}
#magedu rules
rule magedu_ssd_rule {
id 20
type replicated
min_size 1
max_size 5
step take ssd
step chooseleaf firstn 0 type host
step emit
}
Recompile the map: crushtool -c ./crushmap.txt -o crushmapnew
Import it: ceph osd setcrushmap -i ./crushmapnew
Verify: ceph osd crush rule dump
Create a pool that uses the rule: ceph osd pool create ssdpool 32 32 magedu_ssd_rule
Check the PGs: ceph pg ls-by-pool ssdpool | awk '{print $1,$2,$14}'
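To confirm that objects written to the new pool really land on the hosts under the ssd root, write a test object and check its mapping (object and input file names are arbitrary):
rados -p ssdpool put testobj /etc/hosts      # write a small test object
ceph osd map ssdpool testobj                 # shows the PG and acting OSD set, which should be drawn from osd.0/3/6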
VI. Ceph dashboard and Prometheus monitoring
Install ceph-mgr-dashboard: sudo yum install -y ceph-mgr-dashboard
Enable the module: ceph mgr module enable dashboard
Disable SSL: ceph config set mgr mgr/dashboard/ssl false
Set the dashboard listen address: ceph config set mgr mgr/dashboard/ceph02/server_addr 192.168.1.192
Set the dashboard listen port: ceph config set mgr mgr/dashboard/ceph02/server_port 9090
Set the login credentials:
touch pass.txt
echo "123456" > pass.txt
ceph dashboard set-login-credentials admin -i pass.txt
Monitoring Ceph with Prometheus
Enable the prometheus module on the mgr node: ceph mgr module enable prometheus (port 9283 by default); the Prometheus server configuration is omitted here.
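Once both modules are enabled, the mgr publishes their endpoints, which makes a quick sanity check easy (the address below assumes the mgr node configured above):
ceph mgr services                                    # lists the dashboard and prometheus URLs
curl -s http://192.168.1.192:9283/metrics | head     # raw metrics that the Prometheus server will scrape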
VII. Kubernetes data persistence on Ceph RBD block storage
1. Create the storage pool and distribute the auth files
When Kubernetes uses Ceph for dynamic storage volumes, the kube-controller-manager component must be able to access Ceph, so the Ceph auth files have to be synced to both the master and the worker nodes.
Create the pool:
ceph osd pool create k8s-pool 32 32
Enable the rbd application on the pool:
ceph osd pool application enable k8s-pool rbd
Initialize the pool:
rbd pool init -p k8s-pool
Create the image:
rbd create -p k8s-pool --image rbd-k8s --size 10G --image-format 2 --image-feature layering
Create the user and grant it access:
ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rwx pool=k8s-pool'
Export the user's credentials to a keyring file:
ceph auth get client.k8s -o ceph.client.k8s.keyring
Sync the files to all masters and nodes:
scp ceph.client.k8s.keyring root@192.168.1.34:/etc/ceph
scp ceph.client.k8s.keyring root@192.168.1.36:/etc/ceph
scp ceph.client.k8s.keyring root@192.168.1.37:/etc/ceph
scp ceph.client.k8s.keyring root@192.168.1.72:/etc/ceph
scp ceph.client.k8s.keyring root@192.168.1.73:/etc/ceph
scp ceph.client.k8s.keyring root@192.168.1.74:/etc/ceph
scp ceph.conf root@192.168.1.34:/etc/ceph
scp ceph.conf root@192.168.1.36:/etc/ceph
scp ceph.conf root@192.168.1.37:/etc/ceph
scp ceph.conf root@192.168.1.72:/etc/ceph
scp ceph.conf root@192.168.1.73:/etc/ceph
scp ceph.conf root@192.168.1.74:/etc/ceph
Configure hostname resolution on the nodes (omitted).
2. Mount RBD via the keyring file
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: Always
    name: busybox
    volumeMounts:
    - name: rbd-k8s
      mountPath: /data
  volumes:
  - name: rbd-k8s
    rbd:
      monitors:
      - '192.168.1.191:6789'
      - '192.168.1.192:6789'
      - '192.168.1.193:6789'
      pool: k8s-pool
      image: rbd-k8s
      fsType: xfs
      readOnly: false
      user: k8s
      keyring: /etc/ceph/ceph.client.k8s.keyring
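A minimal way to verify the keyring-based mount, assuming the manifest above is saved as case1-busybox-keyring.yaml (the file name is an arbitrary choice):
kubectl apply -f case1-busybox-keyring.yaml
kubectl exec busybox -- df -h /data     # /data should show the rbd-k8s image mounted as xfs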
3. Mount RBD via a Secret
First base64-encode the key: ceph auth get-key client.k8s | base64
Deploy the Secret (it must be in the same namespace as the application):
[root@192-168-1-34-master01 20210403-rbd-cephfs]# cat case3-secret-client-shijie.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-k8s
type: "kubernetes.io/rbd"
data:
  key: QVFDZW5GTmllMkppSlJBQXVzWUlYQVFaT1lVSDBpWFpKdWNwSmc9PQ==
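Apply the Secret and confirm it was created (file name taken from the cat command above):
kubectl apply -f case3-secret-client-shijie.yaml
kubectl get secret ceph-secret-k8s      # TYPE should be kubernetes.io/rbd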
Deploy a test nginx:
[root@192-168-1-34-master01 20210403-rbd-cephfs]# cat case4-nginx-secret.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-60
  template:
    metadata:
      labels:
        app: ng-deploy-60
    spec:
      containers:
      - name: ng-deploy-60
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        volumeMounts:
        - name: rbd-k8s
          mountPath: /data
      volumes:
      - name: rbd-k8s
        rbd:
          monitors:
          - "192.168.1.191:6789"
          - "192.168.1.192:6789"
          - "192.168.1.193:6789"
          pool: k8s-pool
          image: rbd-k8s
          fsType: xfs
          readOnly: false
          user: k8s
          secretRef:
            name: ceph-secret-k8s
4. Dynamic volumes with a StorageClass
Deploy the admin Secret:
[root@192-168-1-34-master01 20210403-rbd-cephfs]# cat case5-secret-client-admin.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-k8s-admin
type: "kubernetes.io/rbd"
data:
  key: QVFBa2VrUmlLeGs4SWhBQW4zdzFRZEgvYzJqK29nSW85UHJRclE9PQ=
Deploy the StorageClass:
[root@192-168-1-34-master01 20210403-rbd-cephfs]# cat case6-ceph-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class-k8s
  annotations:
    storageclass.kubernetes.io/is-default-class: "false" # set to "true" to make this the default StorageClass
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.191:6789,192.168.1.192:6789,192.168.1.193:6789
  adminId: admin
  adminSecretName: ceph-secret-k8s-admin
  adminSecretNamespace: default
  pool: k8s-pool
  userId: k8s
  userSecretName: ceph-secret-k8s
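Apply the StorageClass and confirm it is registered (file name taken from the cat command above):
kubectl apply -f case6-ceph-storage-class.yaml
kubectl get storageclass ceph-storage-class-k8s    # PROVISIONER should be kubernetes.io/rbd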
Deploy the PVC:
[root@192-168-1-34-master01 20210403-rbd-cephfs]# cat case7-mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-storage-class-k8s
  resources:
    requests:
      storage: '10G'
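Apply the PVC and wait for it to bind; the in-tree rbd provisioner creates the backing image in k8s-pool automatically:
kubectl apply -f case7-mysql-pvc.yaml
kubectl get pvc mysql-data-pvc          # STATUS should change to Bound
rbd ls -p k8s-pool                      # a new kubernetes-dynamic-pvc-... image should appear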
Deploy MySQL to verify:
[root@192-168-1-34-master01 20210403-rbd-cephfs]# cat case8-mysql-single.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6.46
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: magedu123456
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-data-pvc
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: mysql-service-label
  name: mysql-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 33306
  selector:
    app: mysql
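Apply the manifest and check that MySQL starts on the RBD-backed volume; the node IP below (192.168.1.72, one of the nodes listed earlier) is an assumption:
kubectl apply -f case8-mysql-single.yaml
kubectl get pods -l app=mysql -o wide
mysql -h 192.168.1.72 -P 33306 -uroot -pmagedu123456 -e 'show databases;'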
VIII. Kubernetes data sharing and persistence with CephFS
Deploy the admin Secret as in section VII.
[root@192-168-1-34-master01 20210403-rbd-cephfs]# cat case9-nginx-cephfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: magedu-staticdata-cephfs
          mountPath: /usr/share/nginx/html/cephfs
      volumes:
      - name: magedu-staticdata-cephfs
        cephfs:
          monitors:
          - '192.168.1.191:6789'
          - '192.168.1.192:6789'
          - '192.168.1.193:6789'
          path: /
          user: admin
          secretRef:
            name: ceph-secret-k8s-admin
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: ng-deploy-80-service-label
  name: ng-deploy-80-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 33380
  selector:
    app: ng-deploy-80
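Apply the manifest and verify that all three replicas share the same CephFS directory, for example by writing a file from one pod and fetching it through the NodePort (the node IP is an assumption):
kubectl apply -f case9-nginx-cephfs.yaml
POD=$(kubectl get pods -l app=ng-deploy-80 -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- sh -c 'echo cephfs-shared > /usr/share/nginx/html/cephfs/index.html'
curl http://192.168.1.72:33380/cephfs/index.html     # every replica serves the same shared file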
Author: 没有猫的猫奴
Original post: https://www.cnblogs.com/cat1/p/16117680.html
License: Creative Commons Attribution-NonCommercial-NoDerivatives 2.5 China Mainland