SeaweedFS replication, backup, restore, and cleanup/deletion
Backup
1. Back up the stored files directly
For example, if SeaweedFS stores its volume files under data/seaweedfs_volume/, you can simply back up the files in that directory.
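For instance, a minimal snapshot of that directory could be taken with tar. This is only a sketch: it assumes the data/seaweedfs_volume/ path above and that the volume server is stopped (or idle), since copying .dat/.idx files while they are being written can produce an inconsistent backup.

# Archive the whole volume directory into a timestamped tarball (stop the volume server first)
tar -czf seaweedfs_volume_$(date +%Y%m%d_%H%M%S).tar.gz data/seaweedfs_volume/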
2. Export files with the weed backup command
Backup command: ./weed backup -server=127.0.0.1:19333 -dir=. -volumeId=26. To back up multiple volumes, use a script:
#!/bin/bash
for i in $(seq 1 200)
do
    docker exec -it seaweedfs_master weed backup -server=127.0.0.1:19333 -dir=. -volumeId=$i
done
Retrieving the backed-up files
1. If SeaweedFS runs in Docker, the backup files end up inside the container; retrieve them through a container directory mapping (or docker cp), as shown in the sketch below.
2. Alternatively, run an external weed binary outside the container, also shown below.
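A minimal sketch of both options. The container name seaweedfs_master matches the script above; the /data path inside the container and the host destination directory are assumptions, adjust them to your setup.

# Option 1: copy the backup files out of the Docker container
# (or start the container with -v /host/backup:/data so the files land on the host directly)
docker cp seaweedfs_master:/data/26.idx /data/backup/seaweedfs/
docker cp seaweedfs_master:/data/26.dat /data/backup/seaweedfs/

# Option 2: run an external weed binary on the host and write the backup to a host directory
./weed backup -server=127.0.0.1:19333 -dir=/data/backup/seaweedfs -volumeId=26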
Restoring files
A backup consists of two files per volume, e.g. 26.idx and 26.dat. Place them in the storage directory of the SeaweedFS instance you are restoring to and restart SeaweedFS to complete the restore.
If you rename 26.idx to 4999.idx (and 26.dat to 4999.dat accordingly), then when accessing files, change the volume id in the URL from http://ip:19001/26,... to http://ip:19001/4999,... and it will work.
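A minimal restore sketch, assuming the target volume server keeps its data under /data/seaweedfs_volume (a hypothetical path):

# Copy both the index and the data file of the backed-up volume into the storage directory
cp 26.idx 26.dat /data/seaweedfs_volume/

# Optional: remap the volume to a different id by renaming both files consistently
# mv /data/seaweedfs_volume/26.idx /data/seaweedfs_volume/4999.idx
# mv /data/seaweedfs_volume/26.dat /data/seaweedfs_volume/4999.dat

# Restart SeaweedFS (the volume server) so it loads the restored volume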
Scripted SeaweedFS backup to a local directory
Usage: sh seaweedfs_backup.sh <source directory> <backup destination directory>
#!/bin/bash
seaweedfs_dir=$1
echo "`date "+%Y-%m-%d %H:%M:%S"` seaweedfs_dir: ${seaweedfs_dir}" >> /seaweedfs_backup.log
seaweedfs_backup_base_dir=$2
seaweedfs_backup_detail_dir=`date "+%Y_%m_%d_%H_%M_%S"`
seaweedfs_backup_dir=${seaweedfs_backup_base_dir}/${seaweedfs_backup_detail_dir}
echo "`date "+%Y-%m-%d %H:%M:%S"` seaweedfs_backup_dir: ${seaweedfs_backup_dir}" >> /seaweedfs_backup.log
# Create the backup directory if it does not exist yet
if [ -e ${seaweedfs_backup_dir} ]
then
    echo "`date "+%Y-%m-%d %H:%M:%S"` ${seaweedfs_backup_dir} directory already exists" >> /seaweedfs_backup.log
else
    echo "`date "+%Y-%m-%d %H:%M:%S"` ${seaweedfs_backup_dir} directory does not exist, creating it" >> /seaweedfs_backup.log
    mkdir ${seaweedfs_backup_dir}
fi
# Back up only selected files (here: volume files whose names start with 7)
cp ${seaweedfs_dir}/7* ${seaweedfs_backup_dir}/.
file_num=`ls ${seaweedfs_dir}/7* | wc -l`
file_backup_num=`ls ${seaweedfs_backup_dir}/* | wc -l`
echo "`date "+%Y-%m-%d %H:%M:%S"` backed up: ${file_num} source files, ${file_backup_num} new files" >> /seaweedfs_backup.log
echo "" >> /seaweedfs_backup.log
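For example, assuming the volume files live under /data/seaweedfs_volume and backups should go to /data/backup/seaweedfs (both paths are hypothetical), an invocation and a matching hourly cron entry might look like:

sh seaweedfs_backup.sh /data/seaweedfs_volume /data/backup/seaweedfs
# or via crontab, once per hour:
# 0 * * * * sh /data/backup/seaweedfs/seaweedfs_backup.sh /data/seaweedfs_volume /data/backup/seaweedfs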
Scripted SeaweedFS backup from a remote host to local disk, keeping only the 3 most recent backups
Add to crontab: 0 * * * * bash /data/backup/seaweedfs/seaweedfs_data_backup.sh
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin

echo "`date "+%Y-%m-%d %H:%M:%S"` backup started" >> /data/backup/seaweedfs/seaweedfs_data_backup.log

# Backup destination: a fixed base directory plus a subdirectory named by timestamp
seaweedfs_backup_base_dir=/data/backup/seaweedfs/seaweedfs_data_backup
seaweedfs_backup_detail_dir=`date "+%Y_%m_%d_%H_%M_%S"`
seaweedfs_backup_dir=${seaweedfs_backup_base_dir}/${seaweedfs_backup_detail_dir}
echo "`date "+%Y-%m-%d %H:%M:%S"` seaweedfs_backup_dir: ${seaweedfs_backup_dir}" >> /data/backup/seaweedfs/seaweedfs_data_backup.log

# Create the directory if it does not exist
if [ -e ${seaweedfs_backup_dir} ]
then
    echo "`date "+%Y-%m-%d %H:%M:%S"` ${seaweedfs_backup_dir} directory already exists" >> /data/backup/seaweedfs/seaweedfs_data_backup.log
else
    echo "`date "+%Y-%m-%d %H:%M:%S"` ${seaweedfs_backup_dir} directory does not exist, creating it" >> /data/backup/seaweedfs/seaweedfs_data_backup.log
    mkdir ${seaweedfs_backup_dir}
fi

# Copy the SeaweedFS data files from the remote host into the backup directory
sshpass -p 'IP1-ssh-password' scp root@IP1:/seaweedfs_data/7* ${seaweedfs_backup_dir}/.
file_backup_num=`ls ${seaweedfs_backup_dir}/* | wc -l`
echo "`date "+%Y-%m-%d %H:%M:%S"` backed up ${file_backup_num} new files" >> /data/backup/seaweedfs/seaweedfs_data_backup.log
echo "`date "+%Y-%m-%d %H:%M:%S"` backup finished" >> /data/backup/seaweedfs/seaweedfs_data_backup.log

echo "`date "+%Y-%m-%d %H:%M:%S"` expired-backup cleanup started" >> /data/backup/seaweedfs/seaweedfs_data_backup.log
# Delete old backups:
# list all existing backup directories, sorted newest first
old_dirs=`ls -lt /data/backup/seaweedfs/seaweedfs_data_backup | grep -v total | awk '{print $9}'`
printf "`date "+%Y-%m-%d %H:%M:%S"` directories found:\n${old_dirs}\n" >> /data/backup/seaweedfs/seaweedfs_data_backup.log

# Keep only the 3 newest backup directories
num=0
for one_dir in ${old_dirs}
do
    ((num++))
    echo "`date "+%Y-%m-%d %H:%M:%S"` processing directory num: ${num} one_dir: ${one_dir}" >> /data/backup/seaweedfs/seaweedfs_data_backup.log
    if [ ${num} -le 3 ]
    then
        echo "`date "+%Y-%m-%d %H:%M:%S"` directory ${one_dir} is within the 3 newest, keeping it" >> /data/backup/seaweedfs/seaweedfs_data_backup.log
        continue
    fi
    echo "`date "+%Y-%m-%d %H:%M:%S"` directory ${one_dir} is older than the 3 newest, deleting it" >> /data/backup/seaweedfs/seaweedfs_data_backup.log
    rm -rf /data/backup/seaweedfs/seaweedfs_data_backup/${one_dir}
done
echo "`date "+%Y-%m-%d %H:%M:%S"` expired-backup cleanup finished" >> /data/backup/seaweedfs/seaweedfs_data_backup.log

echo "`date "+%Y-%m-%d %H:%M:%S"` finish..." >> /data/backup/seaweedfs/seaweedfs_data_backup.log
echo "" >> /data/backup/seaweedfs/seaweedfs_data_backup.log
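As a side note on the retention step, the same keep-the-3-newest logic can be expressed more compactly. This is only a sketch and assumes the backup directories sort correctly by modification time and contain no spaces in their names:

# List backup directories newest first, skip the first 3, delete the rest
ls -dt /data/backup/seaweedfs/seaweedfs_data_backup/*/ | tail -n +4 | xargs -r rm -rf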
Cleaning up the related log files
50 * * * * sh /data/backup/seaweedfs/log_clean.sh
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin

# Clean up the log_clean.log file itself
log_clean_file_size=`ls -la /data/backup/seaweedfs/log_clean.log | awk '{print $5}'`
# Delete the file if it is larger than 30 MB
if [ $log_clean_file_size -ge 31457280 ]
then
    rm -rf /data/backup/seaweedfs/log_clean.log
    printf "`date "+%Y-%m-%d %H:%M:%S"` log_clean.log size $log_clean_file_size is over 30 MB, deleting the file\n" >> /data/backup/seaweedfs/log_clean.log
else
    printf "`date "+%Y-%m-%d %H:%M:%S"` log_clean.log size $log_clean_file_size is under 30 MB, nothing to do\n" >> /data/backup/seaweedfs/log_clean.log
fi

# Clean up the other log files
base_dir=/data/backup/seaweedfs
log_files=`ls /data/backup/seaweedfs | grep \.log | grep -v clean`
for one_file in ${log_files}
do
    one_file_full=${base_dir}/${one_file}
    one_file_full_size=`ls -la ${one_file_full} | awk '{print $5}'`
    echo "`date "+%Y-%m-%d %H:%M:%S"` file ${one_file_full} size ${one_file_full_size}" >> /data/backup/seaweedfs/log_clean.log
    if [ ${one_file_full_size} -ge 31457280 ]
    then
        echo ${one_file_full} | grep log >/dev/null 2>&1
        if [ $? -eq 0 ]; then
            rm -rf ${one_file_full}
            printf "`date "+%Y-%m-%d %H:%M:%S"` file ${one_file_full} size ${one_file_full_size} is over 30 MB, deleting it\n" >> /data/backup/seaweedfs/log_clean.log
        else
            printf "`date "+%Y-%m-%d %H:%M:%S"` file ${one_file_full} size ${one_file_full_size} does not contain 'log', not deleting it\n" >> /data/backup/seaweedfs/log_clean.log
        fi
    else
        printf "`date "+%Y-%m-%d %H:%M:%S"` file ${one_file_full} size ${one_file_full_size} is under 30 MB, not deleting it\n" >> /data/backup/seaweedfs/log_clean.log
    fi
done
printf "\n" >> /data/backup/seaweedfs/log_clean.log
Periodically deleting collections older than n days
clean_seaweedfs.sh
#!/bin/bash
current_date=$(date +%Y%m%d)
# Collections are named <prefix><yyyymmdd>; delete the ones dated 30 days ago
del_date=$(date -d "-30 days" +%Y%m%d)
echo "del_date $del_date"
array_name=("1${del_date}" "2${del_date}" "3${del_date}")
for element in "${array_name[@]}"; do
    echo "$element"
    # Call the master's /col/delete endpoint to drop the whole collection
    url="http://ip:9333/col/delete?collection=${element}"
    response=$(curl -s "$url")
    echo "Response: $response"
done
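To make the deletion periodic, a cron entry similar to the backup jobs above could be added; the schedule and log path here are assumptions:

# Run the collection cleanup every day at 02:00
0 2 * * * bash /data/backup/seaweedfs/clean_seaweedfs.sh >> /data/backup/seaweedfs/clean_seaweedfs.log 2>&1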