The ffmpeg command line

The ffmpeg installation directory contains a file called ffmpeg-all.html, which is enormously long. Below are excerpts from it.

ffmpeg [global_options] {[input_file_options] -i input_url} ... {[output_file_options] output_url} ...

Both the inputs and the outputs of ffmpeg can be URLs, not just files. When pushing a stream (reading from a file and writing to a URL), you normally want the -re option, otherwise ffmpeg may push at tens of thousands of fps. When reading from a URL, ffmpeg simply blocks until the other side sends data.
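For example, to push a local file to an RTMP server in real time, something like the following should work (the server URL is a placeholder, and the input is assumed to be H.264/AAC so that -c copy fits the FLV container):
ffmpeg -re -i 1.mp4 -c copy -f flv rtmp://example.com/live/stream
-re makes ffmpeg read the input at its native frame rate instead of as fast as possible.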

ffmpeg can turn multiple inputs into multiple outputs. Each input may contain several streams, e.g. multiple audio tracks. Selecting which streams from which inputs will go into which output is either done automatically or with the -map option. To refer to input files, you must use their indices (0-based). E.g. the first input file is 0, the second is 1, etc. Similarly, streams within a file are referred to by their indices. E.g. 2:3 refers to the fourth stream in the third input file.

Options are applied to the next specified file. Therefore, order is important, and you can have the same option on the command line multiple times.

Do not mix input and output files – first specify all input files, then all output files.
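A small sketch of how option placement matters (in.mp4 is a made-up file name):
ffmpeg -ss 30 -i in.mp4 -t 10 out.mp4
Here -ss 30 comes before -i, so it is an input option that seeks the input to 30 seconds; -t 10 comes before the output file, so it is an output option that limits the output to 10 seconds.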

-b:v 1M sets the video bitrate of the output file to 1 Mbit/s.

The -vn / -an / -sn / -dn options can be used to skip inclusion of video, audio, subtitle and data streams respectively. For example, to turn an .mp4 into an audio-only .m4a: ffmpeg -i 1.mp4 -vn 1.m4a

ffprobe filename shows the stream information inside a file.

ffmpeg -i A.avi -i B.mp4 out1.mkv out2.wav -map 1:a -c:a copy out3.mov

out1.mkv can hold both audio and video; ffmpeg automatically selects the highest-resolution video stream and the audio stream with the most channels from A.avi and B.mp4.
out2.wav can only hold audio, so ffmpeg ignores the video streams and picks the audio stream with the most channels.
out3.mov uses -map 1:a, so ffmpeg takes all audio streams from input file #1 (B.mp4). -c:a copy means codec:audio copy; codec is short for coder/decoder.
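If you prefer to pick streams explicitly instead of relying on automatic selection, a sketch with the same two inputs:
ffmpeg -i A.avi -i B.mp4 -map 0:v:0 -map 1:a:0 -c copy out4.mkv
This takes the first video stream of A.avi and the first audio stream of B.mp4, copying both without re-encoding.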

Try it yourself: ffmpeg -i 1.mp4 -map 0:0 -t 10 t.mp4 and ffmpeg -i 1.mp4 -map 0:1 -t 10 t2.mp4 produce a video-only file and an audio-only file respectively (assuming the usual layout where stream #0 is video and #1 is audio). -t duration: the format of duration is hours:minutes:seconds with an optional fractional second part, and you may give only some of the fields, e.g. -t 10.1 means 10.1 seconds.
-to stop_time stops writing at stop_time; -ss position seeks to a start position. Their time format seems to be the same as duration; for details search ffmpeg-all.html for "Time duration".

Congratulations, you can now cut ads out of recordings. :-)
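A sketch of how that might look (file names and time points are made up): keep the parts you want with -ss/-to, then join them with the concat demuxer. Note that with -c copy the cut points snap to keyframes, so they may be slightly off.
ffmpeg -i show.mp4 -ss 0 -to 00:05:00 -c copy part1.mp4
ffmpeg -i show.mp4 -ss 00:07:30 -c copy part2.mp4
Then put the lines file 'part1.mp4' and file 'part2.mp4' into list.txt and run:
ffmpeg -f concat -safe 0 -i list.txt -c copy no-ads.mp4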

-fs limit_size sets the file size limit in bytes; I have verified that limit_size also accepts forms like 10KB and 1MB.

ffmpeg -h, -h long or -h full prints the help. You can redirect it to a file: ffmpeg -h full > ffmpeg-help.txt (stdout is redirected).

-formats lists the supported formats; the important part is each format's name, which you will need later. D means it can demux, E means it can mux. Packing multiple streams into one file is called multiplexing (mux); pulling a stream back out of it is demuxing (demux).

-codecs lists the supported codecs. D = can decode, E = can encode, V = video, A = audio, S = subtitle, I = intra-frame-only, L = lossy, S = lossless (in a different column from the subtitle S). WinZip-style compression is lossless: after decompression you get back a file identical to the original. Video codecs are all lossy; if the eye cannot tell the difference, that is already impressive. Lossless audio does exist, but that is mostly for people showing off their ears; in practice human ears cannot hear the difference either. Intra-frame-only is like a series of JPEG pictures, so the compression ratio is low.

-protocols lists the supported protocols, e.g. http and the whole rt*p family (rtp, rtmp... there are a lot of them).

-report Dump full command line and log output to a file named program-YYYYMMDD-HHMMSS.log in the current directory. This file can be useful for bug reports. It also implies -loglevel debug. If it used log4j or log4c++ instead, the logging code might well end up bigger than the audio/video conversion code. As if logging could solve the "my Java program only runs on JDK 1.8 on an Intel i3" problem... or so it seems. If anyone can pull that off without checking %PROCESSOR_IDENTIFIER%, I am genuinely impressed.

-hide_banner All FFmpeg tools will normally show a copyright notice, build options and library versions. This option can be used to suppress printing this information. Python has popen, which can run ffmpeg.exe and read its stdout and stderr. Of course Python relies on C, and C on the operating system, to do this; Python just makes it less work.

for %%f in (*.mp4) do ffmpeg -i "%%f" -vn -c:a copy "%%~nf.m4a" converts every .mp4 in the current directory into a .m4a file. You can create a new text file, rename its extension to .bat or .cmd, paste this line in and run it. You can also paste it straight into a cmd window, in which case every %% must be changed to a single %.
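The rough bash equivalent on Linux would be (a sketch, using shell parameter expansion in place of %%~nf):
for f in *.mp4; do ffmpeg -i "$f" -vn -c:a copy "${f%.mp4}.m4a"; done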

I humbly think there could be an open-source project called file-name: a tiny .exe on Windows and a shell script on Linux, with one unified syntax that uses something readable like basename instead of cryptic characters. Or just a very small shell.

-f fmt (input/output) Force input or output file format. The format is normally auto detected for input files and guessed from the file extension for output files, so this option is not needed in most cases.

-y (global) Overwrite output files without asking. The y is for "yes".
-n (global) Do not overwrite output files, and exit immediately if a specified output file already exists.

-c:v libx264 -c:a copy uses libx264 as the video codec and copies the audio as-is. You can also write -vcodec and -acodec respectively.
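A common re-encoding sketch (the CRF and preset values are just examples):
ffmpeg -i 1.mp4 -c:v libx264 -crf 23 -preset medium -c:a copy out.mp4
For libx264, -crf sets quality-based VBR (lower means better quality and a bigger file) and -preset trades encoding speed against compression efficiency.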

-s 720x480 sets the video resolution to 720x480. Usually you do not need to change the resolution; changing the bitrate is enough, e.g. -b:v 1M. For audio it is, naturally, -b:a.
When specifying the video frame size you can use abbreviations such as hd720, hd1080, 2k and 4k; 1080p, however, is not accepted.

-stdin Enable interaction on standard input.

-vframes number (output) or -frames:v Set the number of video frames to output.
-r fps Set the frame rate (frames per second).
-aspect Set the video display aspect ratio specified by aspect.
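Two small sketches: grab the first frame as a picture, or force 30 fps on the output.
ffmpeg -i 1.mp4 -frames:v 1 first-frame.png
ffmpeg -i 1.mp4 -r 30 out.mp4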

-hwaccels lists the supported hardware acceleration methods, e.g. cuda, dxva2, qsv, d3d11va, opencl, vulkan. "Supported" means ffmpeg supports them, not that I have them installed.
-hwaccel Use hardware acceleration to decode the matching stream(s). qsv Use the Intel QuickSync Video acceleration for video transcoding. Unlike most other values, this option does not enable accelerated decoding (that is used automatically whenever a qsv decoder is selected), but accelerated transcoding, without copying the frames into the system memory. For it to work, both the decoder and the encoder must support QSV acceleration and no filters must be used.
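Assuming your build and hardware actually have QSV (mine may not), a full-QSV transcode might look like:
ffmpeg -hwaccel qsv -c:v h264_qsv -i 1.mp4 -c:v hevc_qsv out.mp4
Both the h264_qsv decoder and the hevc_qsv encoder must be selected for the frames to stay out of system memory, as the description above says.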

-aq q (output) Set the audio quality (codec-specific, VBR). This is an alias for -q:a. VBR is variable bitrate, CBR is constant bitrate; that is all you need to know here.

-acodec codec (input/output) Set the audio codec. This is an alias for -codec:a.

A preset file contains a sequence of option=value pairs, one for each line, specifying a sequence of options which would be awkward to specify on the command line.
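A sketch of what that might look like (the file name and values are made up; -fpre loads a preset file for the output): create a file named x264-small.ffpreset containing, one option=value pair per line:
crf=28
preset=slower
Then: ffmpeg -i 1.mp4 -c:v libx264 -fpre x264-small.ffpreset out.mp4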

You can extract images from a video, or create a video from many images:
For extracting images from a video: ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
For creating a video from many images: ffmpeg -f image2 -framerate 12 -i foo-%03d.jpeg -s WxH foo.avi

You can output to a raw YUV420P file: ffmpeg -i mydivx.avi hugefile.yuv
You can input from a raw YUV420P file: ffmpeg -i /tmp/test.yuv /tmp/out.avi
You can use YUV files as input: ffmpeg -i /tmp/test%d.Y /tmp/out.mpg

ffmpeg also ships with a GIF image/animation encoder.

ffmpeg -i 1.mp4 b.mp3 defaults to 128 kbps; add -b:a 160k and you are living comfortably.

libx265 H.265/HEVC encoder wrapper. This encoder requires the presence of the libx265 headers and library during configuration. You need to explicitly configure the build with --enable-libx265. Running ffmpeg with no command-line arguments shows how it was configured; the build I have, for example, was configured with --enable-libx264 --enable-libx265.
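If libx265 is enabled in your build, an HEVC encode might look like this (the CRF value is just an example; for x265, too, lower CRF means higher quality):
ffmpeg -i 1.mp4 -c:v libx265 -crf 28 -c:a copy out-hevc.mp4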

h264_mp4toannexb

Convert an H.264 bitstream from length prefixed mode to start code prefixed mode (as defined in the Annex B of the ITU-T H.264 specification) This is required by some streaming formats, typically the MPEG-2 transport stream format (muxer mpegts). For example to remux an MP4 file containing an H.264 stream to mpegts format with ffmpeg, you can use the command:
ffmpeg -i INPUT.mp4 -codec copy -bsf:v h264_mp4toannexb OUTPUT.ts
Please note that this filter is auto-inserted for MPEG-TS (muxer mpegts) and raw H.264 (muxer h264) output formats.

hevc_metadata Modify metadata embedded in an HEVC stream.

hevc_mp4toannexb Convert an HEVC/H.265 bitstream from length prefixed mode to start code prefixed mode (as defined in the Annex B of the ITU-T H.265 specification).

mpegts MPEG-2 transport stream demuxer/muxer.

flv Adobe Flash Video Format muxer.

dash Dynamic Adaptive Streaming over HTTP (DASH) muxer that creates segments and manifest files according to the MPEG-DASH standard ISO/IEC 23009-1:2014.

hls Apple HTTP Live Streaming muxer that segments MPEG-TS according to the HTTP Live Streaming (HLS) specification. It creates a playlist file, and one or more segment files. The output filename specifies the playlist filename.
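A minimal HLS sketch (segment length and file names are arbitrary; the input codecs are assumed to be TS-friendly, e.g. H.264/AAC, since -c copy does no re-encoding):
ffmpeg -i 1.mp4 -c copy -f hls -hls_time 10 -hls_list_size 0 playlist.m3u8
This writes playlist.m3u8 plus numbered .ts segments of roughly 10 seconds each; -hls_list_size 0 keeps every segment in the playlist.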

The M3U8 playlists describing the segments can be remote HTTP resources or local files, accessed using the standard file protocol. The nested protocol is declared by specifying "+proto" after the hls URI scheme name, where proto is either "file" or "http".
hls+http://host/path/to/remote/resource.m3u8
hls+file://path/to/local/resource.m3u8

mov, mp4, ismv MOV/MP4/ISMV (Smooth Streaming) muxer. The mov/mp4/ismv muxer supports fragmentation.
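For example, to remux into a fragmented MP4 (a sketch; frag_keyframe+empty_moov is the commonly used flag combination for streaming-style playback):
ffmpeg -i 1.mp4 -c copy -movflags frag_keyframe+empty_moov fragmented.mp4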

bluray Read BluRay playlist.

crypto AES-encrypted stream reading protocol.

data Data in-line in the URI. See http://en.wikipedia.org/wiki/Data_URI_scheme.

For example, to convert a GIF file given inline with ffmpeg:
ffmpeg -i "data:image/gif;base64,R0lGODdhCAAIAMIEAAAAAAAA//8AAP//AP///////////////ywAAAAACAAIAAADF0gEDLojDgdGiJdJqUX02iB4E8Q9jUMkADs=" smiley.png

mmst MMS (Microsoft Media Server) protocol over TCP. mmsh MMS (Microsoft Media Server) protocol over HTTP.

pipe UNIX pipe access protocol.

The Real-Time Messaging Protocol (RTMP) is used for streaming multimedia content across a TCP/IP network. The required syntax is:
rtmp://[username:password@]server[:port][/app][/instance][/playpath]
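For example, to dump an RTMP stream to a local file (the URL is a placeholder):
ffmpeg -i rtmp://example.com/live/stream -c copy dump.flv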

rtmpe Encrypted Real-Time Messaging Protocol. rtmps Real-Time Messaging Protocol over a secure SSL connection. rtmpt Real-Time Messaging Protocol tunneled through HTTP. rtmpte Encrypted Real-Time Messaging Protocol tunneled through HTTP. rtmpts Real-Time Messaging Protocol tunneled through HTTPS.

libsmbclient permits one to manipulate CIFS/SMB network resources. Following syntax is required.
smb://[[domain:]user[:password@]]server[/share[/path[/file]]]

Secure File Transfer Protocol via libssh. Read from or write to remote resources using SFTP protocol.
Following syntax is required.
sftp://[user[:password]@]server[:port]/path/to/remote/resource.mpeg

Real-time Transport Protocol. The required syntax for an RTP URL is: rtp://hostname[:port][?option=val...]

RTSP is not technically a protocol handler in libavformat, it is a demuxer and muxer. The demuxer supports both normal RTSP (with data transferred over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with data transferred over RDT).

Secure Real-time Transport Protocol.

tee Writes the output to multiple protocols. The individual outputs are separated by |. The letter T even looks like a tee pipe fitting.
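A tee sketch that archives to a file and pushes to RTMP at the same time (the URL is a placeholder; the tee muxer needs explicit -map because it cannot auto-select streams):
ffmpeg -i 1.mp4 -c copy -map 0 -f tee "archive.mkv|[f=flv]rtmp://example.com/live/stream"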

tcp, udp, ZeroMQ.
