Transferring large files over HTTP
Sometimes the file to be transferred is larger than the machine's memory. Reading the whole file into a byte array and sending it in a single conventional HTTP request clearly will not work. In that case we can use one of the following approaches.
1 Keep-alive + chunked transfer
Use two features introduced in HTTP/1.1: persistent (keep-alive) connections and chunked transfer encoding.
Persistent connection: the HTTP connection normally stays open until all data chunks have been sent. It allows multiple HTTP request/response exchanges over the same TCP connection, instead of opening a new connection for every request/response.
Chunked transfer: chunked transfer encoding (Transfer-Encoding: chunked) is a data transfer mechanism defined in HTTP/1.1. It lets the client split the data into a series of chunks and send them without knowing the total content length in advance. This is especially useful for sending large files or data streams, because the server can start processing data without waiting for the entire payload to arrive.
Request example
POST /upload HTTP/1.1                     // request line
Host: example.com                         // request header
Transfer-Encoding: chunked                // request header: declares that the body is sent with chunked transfer encoding
Content-Type: application/octet-stream    // request header
Connection: keep-alive                    // request header

20\r\n                                    // request body starts here with two chunks; each chunk begins with its size in hex, then CRLF, then the chunk data
This is the first chunk of data.\r\n
1D\r\n
And this is the second chunk.\r\n
0\r\n                                     // end marker: a zero-length chunk
\r\n
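To make the wire format above concrete, the following is a minimal sketch (not part of the article's code; example.com, port 80 and /upload are placeholders) that writes the same headers and chunked body by hand over a raw socket:

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ChunkedWireDemo {

    public static void main(String[] args) throws Exception {
        // placeholder host/port/path; any HTTP/1.1 server will accept the connection
        try (Socket socket = new Socket("example.com", 80);
             OutputStream out = socket.getOutputStream()) {
            String headers = "POST /upload HTTP/1.1\r\n"
                    + "Host: example.com\r\n"
                    + "Transfer-Encoding: chunked\r\n"
                    + "Content-Type: application/octet-stream\r\n"
                    + "Connection: keep-alive\r\n"
                    + "\r\n";
            out.write(headers.getBytes(StandardCharsets.US_ASCII));
            // two data chunks, exactly as in the request example above
            writeChunk(out, "This is the first chunk of data.".getBytes(StandardCharsets.US_ASCII));
            writeChunk(out, "And this is the second chunk.".getBytes(StandardCharsets.US_ASCII));
            // terminating zero-length chunk followed by a final CRLF
            out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
            out.flush();
        }
    }

    // each chunk on the wire: size in hex, CRLF, chunk data, CRLF
    private static void writeChunk(OutputStream out, byte[] data) throws Exception {
        out.write((Integer.toHexString(data.length) + "\r\n").getBytes(StandardCharsets.US_ASCII));
        out.write(data);
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
    }
}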
Chunked transfer is usually combined with a persistent connection, which helps improve overall transfer efficiency, but keep-alive is not strictly required: all chunks of a single request always travel over the same TCP connection. The keep-alive setting only decides whether that connection is kept open and reused for subsequent requests, or torn down so that a new TCP connection has to be established for the next one.
Code example
Caller (client side)
public void upload(String localFile) {
    String targetUrl = "http://xxx:8080/binary-upload/upload";
    File file = new File(localFile);
    try {
        FileInputStream fileInputStream = new FileInputStream(file);
        URL url = new URL(targetUrl);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        // Enable input and output streams
        connection.setDoInput(true);
        connection.setDoOutput(true);
        // Disable caching
        connection.setUseCaches(false);
        // Set HTTP request headers
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Connection", "Keep-Alive");
        connection.setRequestProperty("Cache-Control", "no-cache");
        connection.setRequestProperty("Content-Type", "application/octet-stream");
        connection.setRequestProperty("Transfer-Encoding", "chunked");
        connection.setRequestProperty("fileName", file.getName());
        // Set ChunkedStreamingMode to avoid OutOfMemoryError for large files
        connection.setChunkedStreamingMode(1024);
        // Open output stream
        DataOutputStream outputStream = new DataOutputStream(connection.getOutputStream());
        // Write file data in chunks
        byte[] buffer = new byte[1024 * 1024];
        int bytesRead;
        while ((bytesRead = fileInputStream.read(buffer)) != -1) {
            outputStream.write(buffer, 0, bytesRead);
        }
        // Flush output stream
        outputStream.flush();
        outputStream.close();
        fileInputStream.close();
        // Check server's response
        int responseCode = connection.getResponseCode();
        System.out.println("Server's response code: " + responseCode);
        // Close the connection
        connection.disconnect();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
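For comparison, the same streaming upload can be sketched with java.net.http.HttpClient (Java 11+). This is only an illustration, not the article's code: the URL reuses the one above and the local file path is made up. BodyPublishers.ofInputStream reports an unknown content length, so under HTTP/1.1 the client streams the body as chunks instead of buffering the whole file in memory (Transfer-Encoding is a restricted header here and is managed by the client itself):

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.UncheckedIOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class HttpClientUploader {

    public static void main(String[] args) throws Exception {
        Path file = Path.of("e:/big-file.bin");   // hypothetical local file
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)
                .build();
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://xxx:8080/binary-upload/upload"))
                .header("Content-Type", "application/octet-stream")
                .header("fileName", file.getFileName().toString())
                // unknown length -> the body is streamed rather than held in memory
                .POST(HttpRequest.BodyPublishers.ofInputStream(() -> {
                    try {
                        return new FileInputStream(file.toFile());
                    } catch (FileNotFoundException e) {
                        throw new UncheckedIOException(e);
                    }
                }))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Server's response code: " + response.statusCode());
    }
}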
The server-side endpoint that receives the upload
@RestController
@RequestMapping("/binary-upload")
public class BinaryUploadController {

    private static final Logger LOGGER = LoggerFactory.getLogger(BinaryUploadController.class);

    @PostMapping("/upload")
    public Object upload(HttpServletRequest request) {
        String fileName = request.getHeader("fileName");
        LOGGER.info("upload");
        // Target path where the uploaded file is stored
        String storagePath = "e:/" + fileName;
        // try-with-resources makes sure every stream is closed
        try (InputStream inputStream = request.getInputStream();
             // Open the file output stream in append mode
             FileOutputStream fileOutputStream = new FileOutputStream(storagePath, true);
             BufferedOutputStream outputStream = new BufferedOutputStream(fileOutputStream)) {
            byte[] buffer = new byte[1024 * 1024];
            int i = 0;
            int bytesRead;
            while ((bytesRead = inputStream.read(buffer)) != -1) {
                if (i++ % 1000 == 0) {
                    LOGGER.info("blocks written: {}", i);
                }
                outputStream.write(buffer, 0, bytesRead);
            }
            // Make sure all buffered data is written to the file
            outputStream.flush();
            return new ResponseEntity<>("File uploaded and appended successfully.", HttpStatus.OK);
        } catch (IOException e) {
            e.printStackTrace();
            return new ResponseEntity<>("Error uploading file.", HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }
}
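One detail worth noting: the servlet container decodes the chunked encoding transparently, so request.getInputStream() already yields the reassembled file bytes. As a sketch only (assuming java.nio.file.Files, Paths and StandardCopyOption are imported, and choosing to overwrite rather than append), the write loop above could be replaced by a single Files.copy call:

try (InputStream inputStream = request.getInputStream()) {
    // Files.copy streams the decoded request body straight to disk with an internal buffer;
    // REPLACE_EXISTING avoids appending to a leftover file from an earlier run
    Files.copy(inputStream, Paths.get("e:/", fileName), StandardCopyOption.REPLACE_EXISTING);
}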
In actual tests the upload speed is acceptable: a 20 GB file takes roughly 5 minutes.
However, this approach sends the HTTP request to the /binary-upload endpoint from local code through the Java API. In other words, a local upload tool is required to build the request for us, for example setting the Transfer-Encoding: chunked request header and the other parameters.