FastDFS Distributed Installation Tutorial

From the System Architecture Department, Ade.Xiao (肖建锋)
If you have innovation-related ideas, are stuck on a hard problem in your current project, or have a component or feature that could be extracted for reuse, feel free to contact me (WeChat 15980530492)

Environment preparation

System software used

Name                   Description
CentOS                 7.x
libfastcommon          common utility functions split out from FastDFS
FastDFS                FastDFS itself
fastdfs-nginx-module   module that connects FastDFS with nginx
nginx                  nginx 1.18.0

IP allocation

Role      IP address
tracker   192.168.174.133
storage   192.168.174.134

Build environment

Configure this on both the tracker and the storage server.

CentOS

yum install git gcc gcc-c++ make automake autoconf libtool pcre pcre-devel zlib zlib-devel openssl-devel wget vim -y

Debian

apt-get -y install git gcc g++ make automake autoconf libtool libpcre3 libpcre3-dev zlib1g zlib1g-dev openssl libssl-dev wget vim

Disk directories

Purpose                     Location
All installation packages   ~/software/fdfs
Data storage location       /usr/local/fastdfs
# for convenience I also put the logs and everything else under /usr/local/fastdfs
mkdir -p /usr/local/fastdfs
mkdir -p ~/software/fdfs

Upload all the files from the tar folder to ~/software/fdfs.
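For example, from the machine that holds the packages (a sketch; the local ~/tar folder name is only an assumption, adjust it to wherever your tarballs live):

scp ~/tar/*.tar.gz root@192.168.174.133:~/software/fdfs/
scp ~/tar/*.tar.gz root@192.168.174.134:~/software/fdfs/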

Installation

Install libfastcommon

  1. Extract the archives
[root@fdfs-tracker-133 fdfs]# cd ~/software/fdfs/
[root@fdfs-tracker-133 fdfs]# ll
total 2036
drwxrwxr-x. 12 root root    4096 Dec 30 22:18 fastdfs-6.07
-rw-r--r--.  1 root root  809381 Jan 13 06:54 fastdfs-6.07.tar.gz
-rw-r--r--.  1 root root   19952 Jan 13 06:54 fastdfs-nginx-module-1.22.tar.gz
drwxrwxr-x.  5 root root     168 Jan 13 06:55 libfastcommon-1.0.45
-rw-r--r--.  1 root root  206348 Jan 13 06:54 libfastcommon-1.0.45.tar.gz
drwxr-xr-x.  8 1001 1001     158 Apr 21  2020 nginx-1.18.0
-rw-r--r--.  1 root root 1039530 Jan 13 06:54 nginx-1.18.0.tar.gz
[root@fdfs-tracker-133 fdfs]# 
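If you still have only the tarballs, extracting them is plain tar (run inside ~/software/fdfs):

tar -zxvf libfastcommon-1.0.45.tar.gz
tar -zxvf fastdfs-6.07.tar.gz
tar -zxvf fastdfs-nginx-module-1.22.tar.gz
tar -zxvf nginx-1.18.0.tar.gz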

  2. Run ./make.sh
[root@fdfs-tracker-133 libfastcommon-1.0.45]# clear
[root@fdfs-tracker-133 libfastcommon-1.0.45]# pwd
/root/software/fdfs/libfastcommon-1.0.45
[root@fdfs-tracker-133 libfastcommon-1.0.45]# ll
total 52
drwxrwxr-x. 2 root root   114 Dec 24 21:25 doc
-rw-rw-r--. 1 root root 12401 Dec 24 21:25 HISTORY
-rw-rw-r--. 1 root root   674 Dec 24 21:25 INSTALL
-rw-rw-r--. 1 root root  1607 Dec 24 21:25 libfastcommon.spec
-rw-rw-r--. 1 root root  7652 Dec 24 21:25 LICENSE
-rwxrwxr-x. 1 root root  3454 Dec 24 21:25 make.sh
drwxrwxr-x. 2 root root   191 Dec 24 21:25 php-fastcommon
-rw-rw-r--. 1 root root  2776 Dec 24 21:25 README
drwxrwxr-x. 3 root root  8192 Jan 13 06:55 src
[root@fdfs-tracker-133 libfastcommon-1.0.45]# ./make.sh 

  3. Run ./make.sh install
[root@fdfs-tracker-133 libfastcommon-1.0.45]# ./make.sh install

The steps above must be executed once on both the tracker and the storage server.

Install FastDFS

  1. Extract the archive and install FastDFS
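FastDFS itself builds the same way as libfastcommon; a minimal sketch, using the extracted fastdfs-6.07 directory shown in the listing earlier:

cd ~/software/fdfs/fastdfs-6.07
./make.sh
./make.sh install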

Install the tracker

  1. Copy the config files to /etc/fdfs
[root@fdfs-tracker-133 conf]# pwd
/root/software/fdfs/fastdfs-6.07/conf
[root@fdfs-tracker-133 conf]# cp ./* /etc/fdfs
[root@fdfs-tracker-133 conf]# cd /etc/fdfs/
[root@fdfs-tracker-133 fdfs]# ll
total 124
-rw-r--r--. 1 root root 23981 Jan 13 06:58 anti-steal.jpg
-rw-r--r--. 1 root root  1909 Jan 13 06:58 client.conf
-rw-r--r--. 1 root root  1909 Jan 13 06:56 client.conf.sample
-rw-r--r--. 1 root root   965 Jan 13 06:58 http.conf
-rw-r--r--. 1 root root 31172 Jan 13 06:58 mime.types
-rw-r--r--. 1 root root 10246 Jan 13 06:58 storage.conf
-rw-r--r--. 1 root root 10246 Jan 13 06:56 storage.conf.sample
-rw-r--r--. 1 root root   620 Jan 13 06:58 storage_ids.conf
-rw-r--r--. 1 root root   620 Jan 13 06:56 storage_ids.conf.sample
-rw-r--r--. 1 root root  9144 Jan 13 22:27 tracker.conf
-rw-r--r--. 1 root root  9138 Jan 13 06:56 tracker.conf.sample
[root@fdfs-tracker-133 fdfs]# 

  2. Create the tracker data directory
mkdir -p /usr/local/fastdfs/tracker
  3. Edit tracker.conf; the main change is base_path = /usr/local/fastdfs/tracker
[root@fdfs-tracker-133 tracker]# vim /etc/fdfs/tracker.conf
# is this config file disabled
# false for enabled
# true for disabled
disabled = false

# bind an address of this host
# empty for bind all addresses of this host
bind_addr =

# the tracker server port
port = 22122

# connect timeout in seconds
# default value is 30
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5

# network timeout in seconds for send and recv
# default value is 30
network_timeout = 60

# the base path to store data and log files
base_path = /usr/local/fastdfs/tracker

# max concurrent connections this server support
# you should set this parameter larger, eg. 10240
# default value is 256
max_connections = 1024

# accept thread count
# default value is 1 which is recommended
# since V4.07
accept_threads = 1

# work thread count
# work threads to deal network io
# default value is 4
# since V2.00
work_threads = 4

# the min network buff size
# default value 8KB
min_buff_size = 8KB

# the max network buff size
# default value 128KB
max_buff_size = 128KB

# the method for selecting group to upload files
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
store_lookup = 2

# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
store_group = group2

# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
# Note: if use_trunk_file set to true, must set store_server to 1 or 2
store_server = 0

# which path (means disk or mount point) of the storage server to upload file
# 0: round robin
# 2: load balance, select the max free space path to upload file
store_path = 0

# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
download_server = 0

# reserved storage space for system or other applications.
# if the free(available) space of any stoarge server in 
# a group <= reserved_storage_space, no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as: reserved_storage_space = 10%
reserved_storage_space = 20%

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info

#unix group name to run this program, 
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user =

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts = *

# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval = 1

# check storage server alive interval seconds
check_active_interval = 120

# thread stack size, should >= 64KB
# default value is 256KB
thread_stack_size = 256KB

# auto adjust when the ip address of the storage server changed
# default value is true
storage_ip_changed_auto_adjust = true

# storage sync file max delay seconds
# default value is 86400 seconds (one day)
# since V2.00
storage_sync_file_max_delay = 86400

# the max time of storage sync a file
# default value is 300 seconds
# since V2.00
storage_sync_file_max_time = 300

# if use a trunk file to store several small files
# default value is false
# since V3.00
use_trunk_file = false 

# the min slot size, should <= 4KB
# default value is 256 bytes
# since V3.00
slot_min_size = 256

# the max slot size, should > slot_min_size
# store the upload file to trunk file when it's size <=  this value
# default value is 16MB
# since V3.00
slot_max_size = 1MB

# the alignment size to allocate the trunk space
# default value is 0 (never align)
# since V6.05
# NOTE: the larger the alignment size, the less likely of disk
#       fragmentation, but the more space is wasted.
trunk_alloc_alignment_size = 256

# if merge contiguous free spaces of trunk file
# default value is false
# since V6.05
trunk_free_space_merge = true

# if delete / reclaim the unused trunk files
# default value is false
# since V6.05
delete_unused_trunk_files = false

# the trunk file size, should >= 4MB
# default value is 64MB
# since V3.00
trunk_file_size = 64MB

# if create trunk file advancely
# default value is false
# since V3.06
trunk_create_file_advance = false

# the time base to create trunk file
# the time format: HH:MM
# default value is 02:00
# since V3.06
trunk_create_file_time_base = 02:00

# the interval of create trunk file, unit: second
# default value is 38400 (one day)
# since V3.06
trunk_create_file_interval = 86400

# the threshold to create trunk file
# when the free trunk file size less than the threshold,
# will create he trunk files
# default value is 0
# since V3.06
trunk_create_file_space_threshold = 20G

# if check trunk space occupying when loading trunk free spaces
# the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: set this parameter to true will slow the loading of trunk spaces 
# when startup. you should set this parameter to true when neccessary.
trunk_init_check_occupying = false

# if ignore storage_trunk.dat, reload from trunk binlog
# default value is false
# since V3.10
# set to true once for version upgrade when your version less than V3.10
trunk_init_reload_from_binlog = false

# the min interval for compressing the trunk binlog file
# unit: second, 0 means never compress
# FastDFS compress the trunk binlog when trunk init and trunk destroy
# recommand to set this parameter to 86400 (one day)
# default value is 0
# since V5.01
trunk_compress_binlog_min_interval = 86400

# the interval for compressing the trunk binlog file
# unit: second, 0 means never compress
# recommand to set this parameter to 86400 (one day)
# default value is 0
# since V6.05
trunk_compress_binlog_interval = 86400

# compress the trunk binlog time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 03:00
# since V6.05
trunk_compress_binlog_time_base = 03:00

# max backups for the trunk binlog file
# default value is 0 (never backup)
# since V6.05
trunk_binlog_max_backups = 7

# if use storage server ID instead of IP address
# if you want to use dual IPs for storage server, you MUST set
# this parameter to true, and configure the dual IPs in the file
# configured by following item "storage_ids_filename", such as storage_ids.conf
# default value is false
# since V4.00
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# this parameter is valid only when use_storage_id set to true
# since V4.00
storage_ids_filename = storage_ids.conf

# id type of the storage server in the filename, values are:
## ip: the ip address of the storage server
## id: the server id of the storage server
# this paramter is valid only when use_storage_id set to true
# default value is ip
# since V4.03
id_type_in_filename = id

# if store slave file use symbol link
# default value is false
# since V4.01
store_slave_file_use_link = false

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time = 00:00

# if compress the old error log by gzip
# default value is false
# since V6.04
compress_old_error_log = false

# compress the error log days before
# default value is 1
# since V6.04
compress_error_log_days_before = 7

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = true

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# HTTP port on this tracker server
http.server_port = 8080

# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval = 30

# check storage HTTP server alive type, values are:
#   tcp : connect to the storge server with HTTP port only, 
#        do not request and get response
#   http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type = tcp

# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri = /status.html


  4. Start the tracker
[root@fdfs-tracker-133 fdfs]# /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf

Check the process:

[root@fdfs-tracker-133 fdfs]# ps -ef | grep tracker
root       1562      1  0 Jan13 ?        00:00:07 /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf
root       2091   1536  0 01:47 pts/0    00:00:00 grep --color=auto tracker
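Optionally confirm that the tracker is listening on port 22122 (ss ships with CentOS 7):

ss -lntp | grep 22122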

Install the storage server

  1. Copy the config files to /etc/fdfs
[root@fdfs-tracker-133 conf]# pwd
/root/software/fdfs/fastdfs-6.07/conf
[root@fdfs-tracker-133 conf]# cp ./* /etc/fdfs
[root@fdfs-tracker-133 conf]# cd /etc/fdfs/
[root@fdfs-tracker-133 fdfs]# ll
total 124
-rw-r--r--. 1 root root 23981 Jan 13 06:58 anti-steal.jpg
-rw-r--r--. 1 root root  1909 Jan 13 06:58 client.conf
-rw-r--r--. 1 root root  1909 Jan 13 06:56 client.conf.sample
-rw-r--r--. 1 root root   965 Jan 13 06:58 http.conf
-rw-r--r--. 1 root root 31172 Jan 13 06:58 mime.types
-rw-r--r--. 1 root root 10246 Jan 13 06:58 storage.conf
-rw-r--r--. 1 root root 10246 Jan 13 06:56 storage.conf.sample
-rw-r--r--. 1 root root   620 Jan 13 06:58 storage_ids.conf
-rw-r--r--. 1 root root   620 Jan 13 06:56 storage_ids.conf.sample
-rw-r--r--. 1 root root  9144 Jan 13 22:27 tracker.conf
-rw-r--r--. 1 root root  9138 Jan 13 06:56 tracker.conf.sample
[root@fdfs-tracker-133 fdfs]# 
  2. Create the storage data directory
[root@fdfs-tracker-133 fdfs]# mkdir -p /usr/local/fastdfs/storage/
  3. Edit storage.conf; the main changes are base_path, tracker_server, group_name, and store_path0:
# is this config file disabled
# false for enabled
# true for disabled
disabled = false

# the name of the group this storage server belongs to
#
# comment or remove this item for fetching from tracker server,
# in this case, use_storage_id must set to true in tracker.conf,
# and storage_ids.conf must be configured correctly.
group_name = ppay

# bind an address of this host
# empty for bind all addresses of this host
bind_addr =

# if bind an address of this host when connect to other servers 
# (this storage server as a client)
# true for binding the address configured by the above parameter: "bind_addr"
# false for binding any address of this host
client_bind = true

# the storage server port
port = 23000

# connect timeout in seconds
# default value is 30
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5

# network timeout in seconds for send and recv
# default value is 30
network_timeout = 60

# the heart beat interval in seconds
# the storage server send heartbeat to tracker server periodically
# default value is 30
heart_beat_interval = 30

# disk usage report interval in seconds
# the storage server send disk usage report to tracker server periodically
# default value is 300
stat_report_interval = 60

# the base path to store data and log files
# NOTE: the binlog files maybe are large, make sure
#       the base path has enough disk space,
#       eg. the disk free space should > 50GB
base_path = /usr/local/fastdfs/storage

# max concurrent connections the server supported,
# you should set this parameter larger, eg. 10240
# default value is 256
max_connections = 1024

# the buff size to recv / send data from/to network
# this parameter must more than 8KB
# 256KB or 512KB is recommended
# default value is 64KB
# since V2.00
buff_size = 256KB

# accept thread count
# default value is 1 which is recommended
# since V4.07
accept_threads = 1

# work thread count
# work threads to deal network io
# default value is 4
# since V2.00
work_threads = 4

# if disk read / write separated
##  false for mixed read and write
##  true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true

# disk reader thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1

# disk writer thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec = 50

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval = 0

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time = 00:00

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time = 23:59

# write to the mark file after sync N files
# default value is 500
write_mark_file_freq = 500

# disk recovery thread count
# default value is 1
# since V6.04
disk_recovery_threads = 3

# store path (disk or mount point) count, default value is 1
store_path_count = 1

# store_path#, based on 0, to configure the store paths to store files
# if store_path0 not exists, it's value is base_path (NOT recommended)
# the paths must be exist.
#
# IMPORTANT NOTE:
#       the store paths' order is very important, don't mess up!!!
#       the base_path should be independent (different) of the store paths

store_path0 = /usr/local/fastdfs/storage
#store_path1 = /home/yuqing/fastdfs2

# subdir_count  * subdir_count directories will be auto created under each 
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path = 256

# tracker_server can ocur more than once for multi tracker servers.
# the value format of tracker_server is "HOST:PORT",
#   the HOST can be hostname or ip address,
#   and the HOST can be dual IPs or hostnames seperated by comma,
#   the dual IPS must be an inner (intranet) IP and an outer (extranet) IP,
#   or two different types of inner (intranet) IPs.
#   for example: 192.168.2.100,122.244.141.46:22122
#   another eg.: 192.168.1.10,172.17.4.21:22122

tracker_server = 192.168.174.133:22122
#tracker_server = 192.168.209.122:22122

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info

#unix group name to run this program, 
#not set (empty) means run by the group of current user
run_by_group =

#unix username to run this program,
#not set (empty) means run by current user
run_by_user =

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts = *

# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributted by hash code
file_distribute_path_mode = 0

# valid when file_distribute_to_path is set to 0 (round robin).
# when the written file count reaches this number, then rotate to next path.
# rotate to the first path (00/00) after the last path (such as FF/FF).
# default value is 100
file_distribute_rotate_count = 100

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes = 0

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval = 1

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval = 1

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval = 300

# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size = 512KB

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority = 10

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix =

# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate = 0

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method = hash

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace = FastDFS

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive = 0

# you can use "#include filename" (not include double quotes) directive to 
# load FastDHT server list, when the filename is a relative path such as 
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf

# if log to access log
# default value is false
# since V4.00
use_access_log = false

# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time = 00:00

# if compress the old access log by gzip
# default value is false
# since V6.04
compress_old_access_log = false

# compress the access log days before
# default value is 1
# since V6.04
compress_access_log_days_before = 7

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time = 00:00

# if compress the old error log by gzip
# default value is false
# since V6.04
compress_old_error_log = false

# compress the error log days before
# default value is 1
# since V6.04
compress_error_log_days_before = 7

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record = false

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = true

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# if compress the binlog files by gzip
# default value is false
# since V6.01
compress_binlog = true

# try to compress binlog time, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 01:30
# since V6.01
compress_binlog_time = 01:30

# if check the mark of store path to prevent confusion
# recommend to set this parameter to true
# if two storage servers (instances) MUST use a same store path for
# some specific purposes, you should set this parameter to false
# default value is true
# since V6.03
check_store_path_mark = true

# use the ip address of this storage server if domain_name is empty,
# else this domain name will ocur in the url redirected by the tracker server
http.domain_name =

# the port of the web server on this storage server
http.server_port = 8888


  4. Start the storage server
[root@fdfs-storage-134 fdfs]# /usr/bin/fdfs_storaged /etc/fdfs/storage.conf
  5. Check the process
[root@fdfs-storage-134 fdfs]# clear
[root@fdfs-storage-134 fdfs]# ps -ef | grep storage
root       1608      1  0 Jan13 ?        00:00:09 /usr/bin/fdfs_storaged /etc/fdfs/storage.conf
root       7139   1536  0 02:04 pts/0    00:00:00 grep --color=auto storage
[root@fdfs-storage-134 fdfs]# 
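Optionally confirm that the storage daemon is listening on port 23000 (the port set in storage.conf):

ss -lntp | grep 23000
# after client.conf is set up (next section), /usr/bin/fdfs_monitor /etc/fdfs/client.conf also shows whether this storage has registered with the tracker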

Configure the client and test

  1. Edit client.conf; the main changes are base_path and tracker_server:
[root@fdfs-storage-134 fdfs]# vim /etc/fdfs/client.conf
# connect timeout in seconds
# default value is 30s
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5

# network timeout in seconds
# default value is 30s
network_timeout = 60

# the base path to store log files
base_path = /usr/local/fastdfs/client

# tracker_server can ocur more than once for multi tracker servers.
# the value format of tracker_server is "HOST:PORT",
#   the HOST can be hostname or ip address,
#   and the HOST can be dual IPs or hostnames seperated by comma,
#   the dual IPS must be an inner (intranet) IP and an outer (extranet) IP,
#   or two different types of inner (intranet) IPs.
#   for example: 192.168.2.100,122.244.141.46:22122
#   another eg.: 192.168.1.10,172.17.4.21:22122

tracker_server = 192.168.174.133:22122
#tracker_server = 192.168.0.197:22122

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker = false

# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V4.05
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V4.05
storage_ids_filename = storage_ids.conf

#HTTP settings
http.tracker_server_port = 80

#use "#include" directive to include HTTP other settiongs
##include http.conf
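
client.conf points base_path at /usr/local/fastdfs/client; assuming that directory does not exist yet, create it before testing:

mkdir -p /usr/local/fastdfs/client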

  2. Upload a test image
[root@fdfs-storage-134 fdfs]# cd /etc/fdfs/
[root@fdfs-storage-134 fdfs]# pwd
/etc/fdfs
[root@fdfs-storage-134 fdfs]# ll
total 128
-rw-r--r--. 1 root root 23981 Jan 13 06:59 anti-steal.jpg
-rw-r--r--. 1 root root  1917 Jan 13 22:33 client.conf
-rw-r--r--. 1 root root  1909 Jan 13 06:58 client.conf.sample
-rw-r--r--. 1 root root   965 Jan 13 06:59 http.conf
-rw-r--r--. 1 root root 31172 Jan 13 06:59 mime.types
-rw-r--r--. 1 root root  3755 Jan 14 00:33 mod_fastdfs.conf
-rw-r--r--. 1 root root 10257 Jan 13 22:32 storage.conf
-rw-r--r--. 1 root root 10246 Jan 13 06:58 storage.conf.sample
-rw-r--r--. 1 root root   620 Jan 13 06:59 storage_ids.conf
-rw-r--r--. 1 root root   620 Jan 13 06:58 storage_ids.conf.sample
-rw-r--r--. 1 root root  9138 Jan 13 06:59 tracker.conf
-rw-r--r--. 1 root root  9138 Jan 13 06:58 tracker.conf.sample
[root@fdfs-storage-134 fdfs]# /usr/bin/fdfs_test /etc/fdfs/client.conf upload ./anti-steal.jpg 
This is FastDFS client test program v6.07

Copyright (C) 2008, Happy Fish / YuQing

FastDFS may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page http://www.fastken.com/ 
for more detail.

[2021-01-14 02:08:05] DEBUG - base_path=/usr/local/fastdfs/client, connect_timeout=5, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

tracker_query_storage_store_list_without_group: 
	server 1. group_name=, ip_addr=192.168.174.134, port=23000

group_name=ppay, ip_addr=192.168.174.134, port=23000
storage_upload_by_filename
group_name=ppay, remote_filename=M00/00/00/wKiuhl__7dWAMHYZAABdreSfEnY980.jpg
source ip address: 192.168.174.134
file timestamp=2021-01-14 02:08:05
file size=23981
file crc32=3835630198
example file url: http://192.168.174.134/ppay/M00/00/00/wKiuhl__7dWAMHYZAABdreSfEnY980.jpg
storage_upload_slave_by_filename
group_name=ppay, remote_filename=M00/00/00/wKiuhl__7dWAMHYZAABdreSfEnY980_big.jpg
source ip address: 192.168.174.134
file timestamp=2021-01-14 02:08:05
file size=23981
file crc32=3835630198
example file url: http://192.168.174.134/ppay/M00/00/00/wKiuhl__7dWAMHYZAABdreSfEnY980_big.jpg
[root@fdfs-storage-134 fdfs]# 

Install the nginx module

  1. Extract fastdfs-nginx-module-1.22.tar.gz
  2. Edit the config file
[root@fdfs-storage-134 fdfs]# cd fastdfs-nginx-module-1.22
[root@fdfs-storage-134 fastdfs-nginx-module-1.22]# ll
total 8
-rw-rw-r--. 1 root root 3036 Nov 18  2019 HISTORY
-rw-rw-r--. 1 root root 2001 Nov 18  2019 INSTALL
drwxrwxr-x. 2 root root  109 Jan 14 00:20 src
[root@fdfs-storage-134 fastdfs-nginx-module-1.22]# cd src/
[root@fdfs-storage-134 src]# ll
total 84
-rw-rw-r--. 1 root root 43507 Nov 18  2019 common.c
-rw-rw-r--. 1 root root  3995 Nov 18  2019 common.h
-rw-rw-r--. 1 root root   836 Jan 14 00:20 config
-rw-rw-r--. 1 root root  3725 Nov 18  2019 mod_fastdfs.conf
-rw-rw-r--. 1 root root 28668 Nov 18  2019 ngx_http_fastdfs_module.c
[root@fdfs-storage-134 src]# pwd
/root/software/fdfs/fastdfs-nginx-module-1.22/src
[root@fdfs-storage-134 src]# vim config 

Change every /usr/local in the file to /usr:

ngx_addon_name=ngx_http_fastdfs_module

if test -n "${ngx_module_link}"; then
    ngx_module_type=HTTP
    ngx_module_name=$ngx_addon_name
    ngx_module_incs="/usr/include"
    ngx_module_libs="-lfastcommon -lfdfsclient"
    ngx_module_srcs="$ngx_addon_dir/ngx_http_fastdfs_module.c"
    ngx_module_deps=
    CFLAGS="$CFLAGS -D_FILE_OFFSET_BITS=64 -DFDFS_OUTPUT_CHUNK_SIZE='256*1024' -DFDFS_MOD_CONF_FILENAME='\"/etc/fdfs/mod_fastdfs.conf\"'"
    . auto/module
else
    HTTP_MODULES="$HTTP_MODULES ngx_http_fastdfs_module"
    NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_fastdfs_module.c"
    CORE_INCS="$CORE_INCS /usr/include"
    CORE_LIBS="$CORE_LIBS -lfastcommon -lfdfsclient"
    CFLAGS="$CFLAGS -D_FILE_OFFSET_BITS=64 -DFDFS_OUTPUT_CHUNK_SIZE='256*1024' -DFDFS_MOD_CONF_FILENAME='\"/etc/fdfs/mod_fastdfs.conf\"'"

Copy the module config to /etc/fdfs:

[root@fdfs-storage-134 src]# cp mod_fastdfs.conf /etc/fdfs/
  3. Install nginx's build dependencies
yum install gcc-c++
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl openssl-devel
tar -zxvf nginx-1.18.0.tar.gz
mkdir /var/temp/nginx -p
  4. Extract nginx
  5. Enter the extracted directory and run configure
[root@fdfs-storage-134 nginx-1.18.0]# ll
total 768
drwxr-xr-x. 6 1001 1001   4096 Jan 14 00:09 auto
-rw-r--r--. 1 1001 1001 302863 Apr 21  2020 CHANGES
-rw-r--r--. 1 1001 1001 462213 Apr 21  2020 CHANGES.ru
drwxr-xr-x. 2 1001 1001    168 Jan 14 00:09 conf
-rwxr-xr-x. 1 1001 1001   2502 Apr 21  2020 configure
drwxr-xr-x. 4 1001 1001     72 Jan 14 00:09 contrib
drwxr-xr-x. 2 1001 1001     40 Jan 14 00:09 html
-rw-r--r--. 1 1001 1001   1397 Apr 21  2020 LICENSE
-rw-r--r--. 1 root root    355 Jan 14 00:28 Makefile
drwxr-xr-x. 2 1001 1001     21 Jan 14 00:09 man
drwxr-xr-x. 4 root root    187 Jan 14 00:29 objs
-rw-r--r--. 1 1001 1001     49 Apr 21  2020 README
drwxr-xr-x. 9 1001 1001     91 Jan 14 00:09 src
[root@fdfs-storage-134 nginx-1.18.0]# pwd
/root/software/fdfs/nginx-1.18.0
[root@fdfs-storage-134 nginx-1.18.0]# 

./configure \
  --prefix=/usr/local/nginx   \
  --pid-path=/var/run/nginx/nginx.pid  \
  --lock-path=/var/lock/nginx.lock \
  --user=nginx \
  --group=nginx \
  --with-http_ssl_module \
  --with-http_flv_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --http-client-body-temp-path=/var/temp/nginx/client/ \
  --http-proxy-temp-path=/var/temp/nginx/proxy/ \
  --http-fastcgi-temp-path=/var/temp/nginx/fcgi/ \
  --http-uwsgi-temp-path=/var/temp/nginx/uwsgi \
  --http-scgi-temp-path=/var/temp/nginx/scgi \
  --add-module=/root/software/fdfs/fastdfs-nginx-module-1.22/src

Note: the path on the last line must be the absolute path to fastdfs-nginx-module-1.22/src.
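
configure only generates the Makefile; then build and install nginx with the standard steps (run in the nginx-1.18.0 source directory):

make
make install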

  6. Configure /etc/fdfs/mod_fastdfs.conf

Items to change:

base_path=/usr/local/fastdfs/temp

store_path0=/usr/local/fastdfs/storage

tracker_server=192.168.174.133:22122

group_name=ppay

url_have_group_name = true
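
mod_fastdfs.conf uses base_path=/usr/local/fastdfs/temp for its logs; assuming that directory does not exist yet, create it first:

mkdir -p /usr/local/fastdfs/temp

The full mod_fastdfs.conf after the edits: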

# connect timeout in seconds
# default value is 30s
connect_timeout=2

# network recv and send timeout in seconds
# default value is 30s
network_timeout=30

# the base path to store log files
base_path=/usr/local/fastdfs/temp

# if load FastDFS parameters from tracker server
# since V1.12
# default value is false
load_fdfs_parameters_from_tracker=true

# storage sync file max delay seconds
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.12
# default value is 86400 seconds (one day)
storage_sync_file_max_delay = 86400

# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V1.13
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.13
storage_ids_filename = storage_ids.conf

# FastDFS tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
# valid only when load_fdfs_parameters_from_tracker is true
tracker_server=192.168.174.133:22122

# the port of the local storage server
# the default value is 23000
storage_server_port=23000

# the group name of the local storage server
group_name=ppay

# if the url / uri including the group name
# set to false when uri like /M00/00/00/xxx
# set to true when uri like ${group_name}/M00/00/00/xxx, such as group1/M00/xxx
# default value is false
url_have_group_name = true

# path(disk or mount point) count, default value is 1
# must same as storage.conf
store_path_count=1

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
# must same as storage.conf
store_path0=/usr/local/fastdfs/storage
#store_path1=/home/yuqing/fastdfs1

# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

# set the log filename, such as /usr/local/apache2/logs/mod_fastdfs.log
# empty for output to stderr (apache and nginx error_log file)
log_filename=

# response mode when the file not exist in the local file system
## proxy: get the content from other storage server, then send to client
## redirect: redirect to the original storage server (HTTP Header is Location)
response_mode=proxy

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# this paramter used to get all ip address of the local host
# default values is empty
if_alias_prefix=

# use "#include" directive to include HTTP config file
# NOTE: #include is an include directive, do NOT remove the # before include
#include http.conf


# if support flv
# default value is false
# since v1.15
flv_support = true

# flv file extension name
# default value is flv
# since v1.15
flv_extension = flv


# set the group count
# set to none zero to support multi-group on this storage server
# set to 0  for single group only
# groups settings section as [group1], [group2], ..., [groupN]
# default value is 0
# since v1.14
group_count = 0

# group settings for group #1
# since v1.14
# when support multi-group on this storage server, uncomment following section
#[group1]
#group_name=group1
#storage_server_port=23000
#store_path_count=2
#store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs1

# group settings for group #2
# since v1.14
# when support multi-group, uncomment following section as neccessary
#[group2]
#group_name=group2
#storage_server_port=23000
#store_path_count=1
#store_path0=/home/yuqing/fastdfs

  7. Edit nginx.conf and configure the location block
[root@fdfs-storage-134 fdfs]# cd /usr/local/nginx/conf/
[root@fdfs-storage-134 conf]# ll
total 68
-rw-r--r--. 1 root root 1077 Jan 14 00:29 fastcgi.conf
-rw-r--r--. 1 root root 1077 Jan 14 00:29 fastcgi.conf.default
-rw-r--r--. 1 root root 1007 Jan 14 00:29 fastcgi_params
-rw-r--r--. 1 root root 1007 Jan 14 00:29 fastcgi_params.default
-rw-r--r--. 1 root root 2837 Jan 14 00:29 koi-utf
-rw-r--r--. 1 root root 2223 Jan 14 00:29 koi-win
-rw-r--r--. 1 root root 5231 Jan 14 00:29 mime.types
-rw-r--r--. 1 root root 5231 Jan 14 00:29 mime.types.default
-rw-r--r--. 1 root root 2620 Jan 14 00:43 nginx.conf
-rw-r--r--. 1 root root 2656 Jan 14 00:29 nginx.conf.default
-rw-r--r--. 1 root root  636 Jan 14 00:29 scgi_params
-rw-r--r--. 1 root root  636 Jan 14 00:29 scgi_params.default
-rw-r--r--. 1 root root  664 Jan 14 00:29 uwsgi_params
-rw-r--r--. 1 root root  664 Jan 14 00:29 uwsgi_params.default
-rw-r--r--. 1 root root 3610 Jan 14 00:29 win-utf

Configure location /ppay/M00

Note!

The server port must match the port number configured in storage.conf!

Set user root; otherwise nginx -t will fail.

[root@fdfs-storage-134 conf]# cat nginx.conf

user  root;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location /ppay/M00 {
	    	ngx_fastdfs_module;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}

  8. Test the nginx configuration file
[root@fdfs-storage-134 fdfs]# /usr/local/nginx/sbin/nginx -t
ngx_http_fastdfs_set pid=7219
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
  9. Start nginx
[root@fdfs-storage-134 fdfs]# /usr/local/nginx/sbin/nginx 
[root@fdfs-storage-134 fdfs]# ps -ef | grep nginx
root       6909      1  0 00:44 ?        00:00:00 nginx: master process ../sbin/nginx
root       6910   6909  0 00:44 ?        00:00:00 nginx: worker process
root       7222   1536  0 02:25 pts/0    00:00:00 grep --color=auto nginx
[root@fdfs-storage-134 fdfs]# 

  10. Test

Open http://192.168.174.134/ppay/M00/00/00/wKiuhl__8kyAecZYAABdreSfEnY355_big.jpg in a browser.
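
Or check from the command line (the file id below is from my own upload; use the one printed by your fdfs_test run):

curl -I http://192.168.174.134/ppay/M00/00/00/wKiuhl__8kyAecZYAABdreSfEnY355_big.jpg
# an HTTP/1.1 200 OK response means nginx + fastdfs-nginx-module is serving the file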

Spring Boot integration

  1. Add the dependency to pom.xml
<dependency>
	<groupId>com.github.tobato</groupId>
	<artifactId>fastdfs-client</artifactId>
	<version>1.25.2-RELEASE</version>
</dependency>
  2. Configure application.yml
fdfs: 
  soTimeout: 1500
  connectTimeout: 600
  trackerList: 192.168.174.133:22122  # tracker address; the trackerList parameter supports multiple trackers
  3. Modify the startup class
  • The @EnableMBeanExport(registration = RegistrationPolicy.IGNORE_EXISTING) annotation works around duplicate JMX bean registration
  • The @Import(FdfsClientConfig.class) annotation is all you need to get a FastDFS Java client with a connection pool
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.EnableMBeanExport;
import org.springframework.context.annotation.Import;
import org.springframework.jmx.support.RegistrationPolicy;

import com.github.tobato.fastdfs.FdfsClientConfig;

// work around duplicate JMX bean registration
@EnableMBeanExport(registration = RegistrationPolicy.IGNORE_EXISTING)
// a single @Import(FdfsClientConfig.class) gives you a FastDFS Java client with a connection pool
@Import(FdfsClientConfig.class)
@SpringBootApplication
public class TestFastDfsApplication {

	public static void main(String[] args) {
		SpringApplication.run(TestFastDfsApplication.class, args);
	}
}
  4. Utility class
  • Based on the FastFileStorageClient interface
import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.io.FilenameUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.web.multipart.MultipartFile;

import com.github.tobato.fastdfs.domain.StorePath;
import com.github.tobato.fastdfs.service.FastFileStorageClient;

@Component
public class FastDFSClientWrapper {

    @Autowired
    private FastFileStorageClient storageClient;
    
    public String uploadFile(MultipartFile file) throws IOException {
        StorePath storePath = storageClient.uploadFile(file.getInputStream(), file.getSize(),
                FilenameUtils.getExtension(file.getOriginalFilename()), null);
        return getResAccessUrl(storePath);
    }

    // build the full access URL of the uploaded file
    private String getResAccessUrl(StorePath storePath) {
        // base address of the nginx that serves this storage group (see the test URL above)
        String fileUrl = "http://192.168.174.134" + "/" + storePath.getFullPath();
        return fileUrl;
    }
}
  5. Controller exposing the upload endpoint
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class MyController {

    @Autowired
    private FastDFSClientWrapper dfsClient;

    // upload a file and return its access URL
    @RequestMapping(value = "/upload", method = RequestMethod.POST)
    public String upload(MultipartFile file, HttpServletRequest request, HttpServletResponse response) throws Exception {
        String fileUrl = dfsClient.uploadFile(file);
        return fileUrl;
    }
}
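
A quick way to exercise the endpoint (a sketch; it assumes the Spring Boot app runs on its default port 8080 and a local test.jpg exists):

curl -F "file=@test.jpg" http://localhost:8080/upload
# the response body is the file URL returned by FastDFSClientWrapper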