s3fs Installation and Configuration

Notes on installing and configuring s3fs.

1. Installation

s3fs can be installed in either of two ways.

Method 1: install over the network with the package manager.

On RHEL and CentOS 7 or newer, s3fs can be installed from the EPEL repository.

[root@nko51 ~]# yum install epel-release
[root@nko51 ~]# yum install s3fs-fuse
[root@nko51 ~]# rpm -qa | grep s3fs
s3fs-fuse-1.90-1.el7.x86_64
[root@nko51 ~]# 


As of March 10, 2022, the s3fs version installed via yum is 1.90-1.
For other distributions, see the official GitHub page.
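
On Debian or Ubuntu (not tested in this post), the package in the default repositories is simply named s3fs, so installation should look roughly like this:

root@client:~# apt install s3fs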

Method 2: build from source.

[root@nko51 ~]# yum install automake fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel -y  # install build dependencies
[root@nko51 ~]# git clone https://github.com/s3fs-fuse/s3fs-fuse.git # get the source; a source tarball can also be downloaded manually from https://github.com/s3fs-fuse/s3fs-fuse/releases
[root@nko51 ~]# cd s3fs-fuse  # build and install
[root@nko51 ~]# ./autogen.sh
[root@nko51 ~]# ./configure
[root@nko51 ~]# make && make install

As of March 10, 2022, the s3fs version installed from source is 1.91.
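
With either installation method, a quick sanity check is to print the installed version:

[root@nko51 ~]# s3fs --version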

2. Mounting a bucket with s3fs

Obtain the AK and SK (access key and secret key)

The user's AK and SK can be obtained from the 赞存 management page, or on the server side with the radosgw-admin command.

[root@4U4N1 ~]# radosgw-admin user info --uid=nko
2022-03-10 10:23:28.031066 7fc6643f58c0  0 WARNING: can't generate connection for zone 70552a0c-7e6f-11ec-abc9-0cc47a88ffbb id cephmetazone: no endpoints defined
{
    "user_id": "nko",
    "display_name": "nko",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "nko",
            "access_key": "WUWB618CCS0UYCEA4V90",
            "secret_key": "fpAk9GaKTmStW1bNwz3hHRwHSHkkCLxMtjvO3mHA"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

[root@4U4N1 ~]# radosgw-admin bucket list --uid=nko
2022-03-10 10:24:28.803719 7f8d8e4368c0  0 WARNING: can't generate connection for zone 70552a0c-7e6f-11ec-abc9-0cc47a88ffbb id cephmetazone: no endpoints defined
[
    "b01",
    "bucket-100w1",
    "bucket-10w1",
    "bucket-1w1"
]
[root@4U4N1 ~]#
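
If the RGW user does not exist yet, it can also be created on the server with radosgw-admin (the uid and display name below are placeholders, not from the environment above); the generated access_key and secret_key pair appears in the command's output:

[root@4U4N1 ~]# radosgw-admin user create --uid=example-user --display-name="example-user"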

Create the credentials file on the client

[root@nko51 ~]# echo "IGAXQ7NGJ6D1CFU4B94C:lgLw8hOb6AFtdPOTAs8TwdXFxJo1ySACMjhRaFBJ" > /etc/passwd-s3fs
[root@nko51 ~]# chmod 600 /etc/passwd-s3fs
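
The file above holds a single default AK:SK pair. If different buckets need different credentials, s3fs also accepts bucket-prefixed lines of the form bucket:AK:SK; the line below reuses the same key pair purely to illustrate the format:

[root@nko51 ~]# echo "bucket-1w1:IGAXQ7NGJ6D1CFU4B94C:lgLw8hOb6AFtdPOTAs8TwdXFxJo1ySACMjhRaFBJ" >> /etc/passwd-s3fs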

Synchronize the client's clock with the server

[root@nko51 ~]# vim /etc/chrony.conf 
server 192.168.40.33
keyfile /etc/chrony.keys
driftfile /var/lib/chrony/chrony.drift
logdir /var/log/chrony
maxupdateskew 100.0
hwclockfile /etc/adjtime
rtcsync
makestep 1 3
[root@nko51 ~]# systemctl restart chronyd
[root@nko51 ~]# systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2022-03-10 10:31:36 CST; 6s ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 8475 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 8472 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 8474 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─8474 /usr/sbin/chronyd

Mar 10 10:31:36 nko51 systemd[1]: Starting NTP client/server...
Mar 10 10:31:36 nko51 chronyd[8474]: chronyd version 3.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS ...6 +DEBUG)
Mar 10 10:31:36 nko51 chronyd[8474]: Frequency 19.736 +/- 4.778 ppm read from /var/lib/chrony/chrony.drift
Mar 10 10:31:36 nko51 systemd[1]: Started NTP client/server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@nko51 ~]# chronyc sources -v
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample             
===============================================================================
^* 192.168.40.33                 3   6    17    65  -7623ns[  +99us] +/-   32ms

The ^* flag indicates that the client's clock has been synchronized with the server.
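
Optionally (not part of the original steps), if the reported offset is still large, chrony can show the current sync status and step the clock immediately instead of slewing it gradually:

[root@nko51 ~]# chronyc tracking   # show the current offset and sync status
[root@nko51 ~]# chronyc makestep   # step the clock immediately if needed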

Mount the bucket

[root@nko51 ~]# mkdir /mnt/1w
[root@nko51 ~]# s3fs bucket-1w1 /mnt/1w -o passwd_file=/etc/passwd-s3fs -o url=http://172.18.0.10 \
-o use_path_request_style \
-o use_cache=/dev/shm \
-o kernel_cache \
-o max_background=1000 \
-o max_stat_cache_size=100000 \
-o multipart_size=64 \
-o parallel_count=30 \
-o multireq_max=30 \
-o dbglevel=warn
[root@nko51 ~]# 
[root@nko51 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   56G  8.2G   47G  15% /
devtmpfs                  16G     0   16G   0% /dev
tmpfs                     16G     0   16G   0% /dev/shm
tmpfs                     16G  9.0M   16G   1% /run
tmpfs                     16G     0   16G   0% /sys/fs/cgroup
/dev/sda1               1014M  148M  867M  15% /boot
tmpfs                    3.2G     0  3.2G   0% /run/user/0
s3fs                      16E     0   16E   0% /mnt/1w

The 16E shown for the mount does not mean the storage actually has 16 EB of capacity; it is simply the maximum size the filesystem can address. s3fs has no way to detect how much data the bucket can really hold, so it reports this value.
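
To make the mount persist across reboots, an /etc/fstab entry can be used; the sketch below mirrors the options of the manual mount above and should be adjusted as needed. The mount can be released with umount (or fusermount -u).

# entry appended to /etc/fstab (a sketch, not part of the original test)
bucket-1w1  /mnt/1w  fuse.s3fs  _netdev,passwd_file=/etc/passwd-s3fs,url=http://172.18.0.10,use_path_request_style  0  0

[root@nko51 ~]# umount /mnt/1w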

3. Read/write testing

After mounting, read and write tests work normally, but listing all files (S3 objects) with ls stalls for quite a long time.
In testing, a bucket with 10,000 objects takes about 15 seconds per ls after being mounted with s3fs; a bucket with 100,000 objects takes about 35 seconds.
For a bucket with 1,000,000 objects, every attempt to obtain the file list, whether with ls, rsync, find, or Python's os.listdir() and os.walk(), ended with the process hanging.
A bucket with 200 million objects can still be read and written normally after mounting with s3fs, but the mount directory contains far too many files to list with ls or similar commands.
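
A simple way to reproduce the listing-time measurements above is to time one full directory listing on the mounted bucket (here /mnt/1w, the 10,000-object bucket mounted earlier):

[root@nko51 ~]# time ls /mnt/1w | wc -l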
