Implementing Chunked Upload for Large Files
How It Works
Uploading a large file is a problem any serious developer thinks about at some point. The solutions boil down to making the payload smaller: either compress the file, or split the large file into chunks and upload them separately. Below we walk through the chunking approach.
Implementation: Frontend Code
(1) Create a Vue project
(2) Write a simple page
<input type="file" @change="getFile" />
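For context, a slightly fuller template sketch (an assumption, not from the original) that also renders the per-chunk progress set up in the later steps:
<template>
  <div>
    <input type="file" @change="getFile" />
    <!-- One line per chunk; percentage is filled in during upload (see the progress step at the end) -->
    <div v-for="(item, index) in fileChunkList" :key="index">
      chunk {{ index }}: {{ item.percentage }}%
    </div>
  </div>
</template>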
(3) Get the selected file
async getFile(e) {
  // 1. Grab the file chosen in the input
  let file = e.target.files;
  this.currFile = file[0];
},
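The methods in this component rely on a few reactive fields. A minimal data() sketch, where DefaultChunkSize is an assumed value used when slicing:
data() {
  return {
    currFile: null,                      // the file picked in the input
    fileChunkList: [],                   // { chunk, size, name, percentage } per slice
    DefaultChunkSize: 5 * 1024 * 1024,   // assumed 5 MB per chunk
  };
},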
(4) Split the large file into chunks
// Slice the file into chunks and compute its MD5 hash
// (requires the spark-md5 package: import SparkMD5 from "spark-md5")
getFileChunk(file, chunkSize) {
  let that = this;
  return new Promise((resolve) => {
    let blobSlice =
        File.prototype.slice ||
        File.prototype.mozSlice ||
        File.prototype.webkitSlice,
      chunks = Math.ceil(file.size / chunkSize),
      currentChunk = 0,
      spark = new SparkMD5.ArrayBuffer(),
      fileReader = new FileReader();
    // FileReader turns each chunk into an ArrayBuffer so it can be fed to SparkMD5.
    // The resulting hash uniquely identifies the file content; uploadChunks later embeds
    // it in each chunk's field name so the server can tell which file a chunk belongs to.
    fileReader.onload = function (e) {
      const chunk = e.target.result;
      spark.append(chunk);
      currentChunk++;
      if (currentChunk < chunks) {
        loadNext();
      } else {
        let fileHash = spark.end();
        resolve({ fileHash });
      }
    };
    fileReader.onerror = function () {
      console.warn("oops, something went wrong.");
    };
    function loadNext() {
      // Slice the next chunk off the file
      let start = currentChunk * chunkSize,
        end = start + chunkSize >= file.size ? file.size : start + chunkSize;
      let chunk = blobSlice.call(file, start, end);
      that.fileChunkList.push({
        chunk,
        size: chunk.size,
        name: that.currFile.name,
        percentage: 0,
      });
      fileReader.readAsArrayBuffer(chunk);
    }
    loadNext();
  });
},
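One way to tie the pieces together (a sketch, not shown in the original) is to extend getFile so that, once the hash resolves, it starts the upload from step (5):
async getFile(e) {
  this.currFile = e.target.files[0];
  this.fileChunkList = [];
  // Slice the file and compute its hash, then upload the chunks
  const { fileHash } = await this.getFileChunk(this.currFile, this.DefaultChunkSize);
  this.uploadChunks(fileHash);
},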
(5) Upload the chunks and request the merge (the client-side code for this step is shown together with the koa-body setup in backend step (2) below)
Implementation: Backend Code
(1) Scaffold a Node.js project with koa-generator and stub out the upload and merge endpoints
npm install -g koa-generator
koa2 <project-name>
npm install
router.post("/upload", async (ctx) => {
ctx.set("Content-Type", "application/json");
ctx.body = JSON.stringify({ data: { code: 2000, }, message: "successful!", });
});
router.post("/mergeChunk", async (ctx) => {
ctx.set("Content-Type", "application/json");
ctx.body = JSON.stringify({ data: { code: 2000, }, message: "successful!", });
});
Cross-origin requests from the Vue dev server may be blocked, so configure CORS:
const cors = require('koa2-cors');
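Then register it in the generated app.js (koa2-cors exposes a middleware factory; pass options to restrict origins if needed):
// Enable CORS so the Vue dev server (a different origin) can reach the API
app.use(cors());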
(2) Use koa-body to receive the file chunks
const koaBody = require("koa-body")
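Register it in app.js as well so multipart bodies are parsed. This is a sketch assuming a koa-body version whose default export is the middleware factory (v4/v5), with formidable handling the temporary files:
app.use(
  koaBody({
    multipart: true, // parse multipart/form-data so chunks appear in ctx.request.files
    formidable: {
      maxFileSize: 10 * 1024 * 1024 * 1024, // raise the per-file size limit for large uploads
    },
  })
);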
Back on the frontend, uploadChunks sends one request per chunk and, once they all succeed, asks the server to merge them (uploadFile and mergeChunks come from the axios helper module shown next):
// Upload every chunk, then request the merge
uploadChunks(fileHash) {
  let that = this;
  const requests = this.fileChunkList.map((item, index) => {
    const formData = new FormData();
    // The field name encodes the original file name, the file hash and the chunk index
    formData.append(
      `${this.currFile.name}-${fileHash}-${index}`,
      item.chunk
    );
    // Upload this chunk; onUploadProgress(item) returns a progress callback bound to it
    return uploadFile("/upload", formData, that.onUploadProgress(item));
  });
  Promise.all(requests).then(() => {
    // All chunks are uploaded: ask the server to merge them
    mergeChunks("/mergeChunks", {
      size: that.DefaultChunkSize,
      filename: that.currFile.name,
    });
  });
},
// Axios request helpers used by uploadChunks above
import axios from "axios";

const baseURL = 'http://localhost:3001';

// POST one chunk as multipart/form-data, reporting progress via onUploadProgress
export const uploadFile = (url, formData, onUploadProgress = () => {}) => {
  return axios({
    method: 'post',
    url,
    baseURL,
    headers: {
      'Content-Type': 'multipart/form-data'
    },
    data: formData,
    onUploadProgress
  });
};

// Ask the server to merge the uploaded chunks
export const mergeChunks = (url, data) => {
  return axios({
    method: 'post',
    url,
    baseURL,
    headers: {
      'Content-Type': 'application/json'
    },
    data
  });
};
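On the server side, the /upload stub from step (1) still needs to persist each chunk somewhere the merge step can find it. A minimal sketch, assuming fs-extra is installed, an upload/chunks directory chosen for this example, and a koa-body/formidable version where each received file exposes its temporary location as path (newer versions use filepath):
const fs = require("fs-extra");
const path = require("path");

// Where chunks are collected before merging (an assumed layout, not from the article)
const UPLOAD_DIR = path.resolve(__dirname, "../upload");
const CHUNK_DIR = path.resolve(UPLOAD_DIR, "chunks");

router.post("/upload", async (ctx) => {
  const files = ctx.request.files || {};
  await fs.ensureDir(CHUNK_DIR);
  // The frontend names each field `${filename}-${fileHash}-${index}`,
  // so the field name itself works as the chunk file name
  for (const [fieldName, file] of Object.entries(files)) {
    await fs.move(file.path || file.filepath, path.resolve(CHUNK_DIR, fieldName), {
      overwrite: true,
    });
  }
  ctx.set("Content-Type", "application/json");
  ctx.body = JSON.stringify({ data: { code: 2000 }, message: "chunk received!" });
});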
(3) Handle the merge request
// Merge endpoint
router.post("/mergeChunks", async (ctx) => {
  const { filename, size } = ctx.request.body;
  // Merge the chunks on disk (see the mergeFileChunk sketch below)
  await mergeFileChunk(filename, size);
  // Build the response
  ctx.set("Content-Type", "application/json");
  ctx.body = JSON.stringify({
    data: {
      code: 2000,
      filename,
      size,
    },
    message: "merge chunks successful!",
  });
});
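mergeFileChunk is called above but never shown. A sketch of one common implementation, reusing the constants and assumptions from the upload sketch (fs-extra, UPLOAD_DIR, CHUNK_DIR, and field names ending in the chunk index):
// Stream one chunk into the target file at the given byte offset, then remove the chunk
const pipeStream = (chunkPath, writeStream) =>
  new Promise((resolve) => {
    const readStream = fs.createReadStream(chunkPath);
    readStream.on("end", async () => {
      await fs.unlink(chunkPath);
      resolve();
    });
    readStream.pipe(writeStream);
  });

// Merge every chunk belonging to `filename`; `size` is the chunk size sent by the client
async function mergeFileChunk(filename, size) {
  let chunkNames = await fs.readdir(CHUNK_DIR);
  chunkNames = chunkNames
    .filter((name) => name.startsWith(filename))
    // The chunk index is the last "-" separated part of the field name
    .sort((a, b) => Number(a.split("-").pop()) - Number(b.split("-").pop()));
  await Promise.all(
    chunkNames.map((name, index) =>
      pipeStream(
        path.resolve(CHUNK_DIR, name),
        // Every chunk except possibly the last is exactly `size` bytes,
        // so chunk i starts at byte offset i * size
        fs.createWriteStream(path.resolve(UPLOAD_DIR, filename), { start: index * size })
      )
    )
  );
}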
(4) Render a progress bar for each chunk
Note: each chunk object must be given a percentage field at slice time (as done in getFileChunk above). Otherwise Vue's reactivity will not track a percentage property added later, and the progress display will not update.
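The onUploadProgress(item) helper referenced in uploadChunks can then simply write to that field. A sketch:
// Returns an axios onUploadProgress callback bound to one chunk item
onUploadProgress(item) {
  return (e) => {
    // e.loaded / e.total is the progress of this single chunk's request
    item.percentage = parseInt(String((e.loaded / e.total) * 100));
  };
},
An overall progress value can also be derived from the per-chunk percentages, for example (an optional extra, not from the original):
computed: {
  // Overall progress: loaded bytes across all chunks divided by the total file size
  totalPercentage() {
    if (!this.currFile) return 0;
    const loaded = this.fileChunkList
      .map((item) => item.size * (item.percentage / 100))
      .reduce((acc, cur) => acc + cur, 0);
    return parseInt(((loaded / this.currFile.size) * 100).toFixed(2));
  },
},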