An Analysis of FFmpeg's Core Architecture
[Date: 2016-07] [Status: Open]
[Keywords: ffmpeg, libavcodec, libavformat]
I have been working with FFmpeg for several years. The two libraries I use most are libavcodec and libavformat; occasionally I also use libswresample (mainly for converting raw PCM audio, e.g. between different channel counts, sample rates, and sample/quantization bit depths) and libswscale (for processing raw video data, e.g. scaling and converting between chroma formats and bit depths).
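As a quick illustration of the kind of conversion libswscale performs, here is a minimal sketch that scales a YUV420P picture to RGB24. The sizes are placeholders and the destination buffers are assumed to have been allocated by the caller (e.g. with av_image_alloc()).

#include <libswscale/swscale.h>

/* Convert one YUV420P picture (src_w x src_h) into an RGB24 picture (dst_w x dst_h). */
static int convert_picture(const uint8_t *src_data[4], const int src_linesize[4],
                           int src_w, int src_h,
                           uint8_t *dst_data[4], int dst_linesize[4],
                           int dst_w, int dst_h)
{
    struct SwsContext *sws = sws_getContext(src_w, src_h, AV_PIX_FMT_YUV420P,
                                            dst_w, dst_h, AV_PIX_FMT_RGB24,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return -1;
    /* Process all src_h rows starting at row 0. */
    sws_scale(sws, src_data, src_linesize, 0, src_h, dst_data, dst_linesize);
    sws_freeContext(sws);
    return 0;
}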
The main mechanisms of libavformat
libavformat handles parsing of multimedia files and streams (both referred to internally in FFmpeg as URLs): reading data, probing the container format, and reading packets; it also handles writing (generating) multimedia files. Its main structures are the following:
- AVFormatContext
The most central structure. For each URL it holds a demuxer/muxer, an array of AVStreams, and an AVIOContext.
- AVStream
Records a stream contained in the media file, e.g. an audio, video, or data stream and its type.
- AVInputFormat (demuxer)
The demuxer. libavformat contains many of these, one per supported container format. Its externally visible interface is listed below (a sketch of how a demuxer fills this struct follows the list of callbacks):
int(* read_probe )(AVProbeData *)
Tell if a given file has a chance of being parsed as this format.
int(* read_header )(struct AVFormatContext *)
Read the format header and initialize the AVFormatContext structure.
int(* read_packet )(struct AVFormatContext *, AVPacket *pkt)
Read one packet and put it in 'pkt'.
int(* read_close )(struct AVFormatContext *)
Close the stream.
int(* read_seek )(struct AVFormatContext *, int stream_index, int64_t timestamp, int flags)
Seek to a given timestamp relative to the frames in stream component stream_index.
int64_t(* read_timestamp )(struct AVFormatContext *s, int stream_index, int64_t *pos, int64_t pos_limit)
Get the next timestamp in stream[stream_index].time_base units.
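As a purely illustrative sketch (the format name and the foo_* callbacks are made up; field placement follows the avformat.h of FFmpeg versions current when this was written), a demuxer inside libavformat registers itself roughly like this:

#include <libavformat/avformat.h>

/* Hypothetical "foo" demuxer; the callbacks only hint at what real ones do. */
static int foo_probe(AVProbeData *p)
{
    return 0; /* inspect p->buf and return a score such as AVPROBE_SCORE_MAX */
}

static int foo_read_header(AVFormatContext *s)
{
    return 0; /* create AVStreams with avformat_new_stream() and fill their parameters */
}

static int foo_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    return 0; /* read one packet from s->pb into pkt */
}

AVInputFormat ff_foo_demuxer = {
    .name        = "foo",
    .long_name   = "foo container (illustrative example)",
    .read_probe  = foo_probe,
    .read_header = foo_read_header,
    .read_packet = foo_read_packet,
};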
- AVOutputFormat (muxer)
The muxer. Its externally visible interface is as follows (a minimal muxing sketch follows the list of callbacks):
int(* write_header )(struct AVFormatContext *)
int(* write_packet )(struct AVFormatContext *, AVPacket *pkt)
int(* write_trailer )(struct AVFormatContext *)
int(* interleave_packet )(struct AVFormatContext *, AVPacket *out, AVPacket *in, int flush)
Currently only used to set pixel format if not YUV420P.
int(* query_codec )(enum AVCodecID id, int std_compliance)
Test if the given codec can be stored in this container.
void(* get_output_timestamp )(struct AVFormatContext *s, int stream, int64_t *dts, int64_t *wall)
int(* control_message )(struct AVFormatContext *s, int type, void *data, size_t data_size)
Allows sending messages from application to device.
int(* write_uncoded_frame )(struct AVFormatContext *, int stream_index, AVFrame **frame, unsigned flags)
Write an uncoded AVFrame.
int(* get_device_list )(struct AVFormatContext *s, struct AVDeviceInfoList *device_list)
Returns the device list with its properties.
int(* create_device_capabilities )(struct AVFormatContext *s, struct AVDeviceCapabilitiesQuery *caps)
Initialize device capabilities submodule.
int(* free_device_capabilities )(struct AVFormatContext *s, struct AVDeviceCapabilitiesQuery *caps)
Free device capabilities submodule.
int(* init )(struct AVFormatContext *)
Initialize format.
void(* deinit )(struct AVFormatContext *)
Deinitialize format.
int(* check_bitstream )(struct AVFormatContext *, const AVPacket *pkt)
Set up any necessary bitstream filtering and extract any extra data needed for the global header.
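An application never calls these callbacks directly; it drives them through avformat_write_header(), av_interleaved_write_frame(), and av_write_trailer(). A minimal sketch with error handling omitted (the file name and container are placeholders; on FFmpeg versions before 4.0, av_register_all() must be called once at startup):

#include <libavformat/avformat.h>

static int mux_sketch(void)
{
    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, NULL, "out.mp4");

    AVStream *st = avformat_new_stream(oc, NULL);
    st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;   /* ... plus codec_id, width, height, ... */

    avio_open(&oc->pb, "out.mp4", AVIO_FLAG_WRITE);  /* open the output AVIOContext */
    avformat_write_header(oc, NULL);                 /* -> AVOutputFormat.write_header */

    /* for every encoded AVPacket pkt:
     *     av_interleaved_write_frame(oc, &pkt);        -> interleave_packet / write_packet */

    av_write_trailer(oc);                            /* -> AVOutputFormat.write_trailer */
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}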
- AVIOContext and URLProtocol (protocol handling)
These implement the input/output layer and depend on the concrete protocol, e.g. http, tcp, udp, rtp, rtsp, and so on.
The URLProtocol interface is as follows (a custom-I/O sketch follows the struct):
typedef struct URLProtocol {
    const char *name;
    int     (*url_open)( URLContext *h, const char *url, int flags);
    /**
     * This callback is to be used by protocols which open further nested
     * protocols. options are then to be passed to ffurl_open()/ffurl_connect()
     * for those nested protocols.
     */
    int     (*url_open2)(URLContext *h, const char *url, int flags, AVDictionary **options);
    int     (*url_accept)(URLContext *s, URLContext **c);
    int     (*url_handshake)(URLContext *c);
    /**
     * Read data from the protocol.
     * If data is immediately available (even less than size), EOF is
     * reached or an error occurs (including EINTR), return immediately.
     * Otherwise:
     * In non-blocking mode, return AVERROR(EAGAIN) immediately.
     * In blocking mode, wait for data/EOF/error with a short timeout (0.1s),
     * and return AVERROR(EAGAIN) on timeout.
     * Checking interrupt_callback, looping on EINTR and EAGAIN and until
     * enough data has been read is left to the calling function; see
     * retry_transfer_wrapper in avio.c.
     */
    int     (*url_read)( URLContext *h, unsigned char *buf, int size);
    int     (*url_write)(URLContext *h, const unsigned char *buf, int size);
    int64_t (*url_seek)( URLContext *h, int64_t pos, int whence);
    int     (*url_close)(URLContext *h);
    int     (*url_read_pause)(URLContext *h, int pause);
    int64_t (*url_read_seek)(URLContext *h, int stream_index,
                             int64_t timestamp, int flags);
    int     (*url_get_file_handle)(URLContext *h);
    int     (*url_get_multi_file_handle)(URLContext *h, int **handles,
                                         int *numhandles);
    int     (*url_shutdown)(URLContext *h, int flags);
    int priv_data_size;
    const AVClass *priv_data_class;
    int flags;
    int     (*url_check)(URLContext *h, int mask);
    int     (*url_open_dir)(URLContext *h);
    int     (*url_read_dir)(URLContext *h, AVIODirEntry **next);
    int     (*url_close_dir)(URLContext *h);
    int     (*url_delete)(URLContext *h);
    int     (*url_move)(URLContext *h_src, URLContext *h_dst);
    const char *default_whitelist;
} URLProtocol;
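Application code normally does not implement a URLProtocol (the protocol list is compiled into libavformat); it either lets avio_open() pick a protocol from the URL scheme, or supplies its own I/O callbacks through avio_alloc_context(). A hedged sketch of the latter, where mem_src and read_from_memory are made-up names:

#include <libavformat/avformat.h>
#include <libavutil/mem.h>
#include <string.h>

/* Hypothetical in-memory source passed as the opaque pointer. */
struct mem_src { const uint8_t *data; size_t size, pos; };

static int read_from_memory(void *opaque, uint8_t *buf, int buf_size)
{
    struct mem_src *m = opaque;
    size_t remaining = m->size - m->pos;
    if (remaining == 0)
        return AVERROR_EOF;
    int n = (size_t)buf_size < remaining ? buf_size : (int)remaining;
    memcpy(buf, m->data + m->pos, n);
    m->pos += n;
    return n;
}

/* Build an AVIOContext over the memory source; assign it to
 * AVFormatContext.pb before calling avformat_open_input(&ctx, NULL, NULL, NULL). */
static AVIOContext *make_memory_io(struct mem_src *m)
{
    unsigned char *iobuf = av_malloc(4096);
    if (!iobuf)
        return NULL;
    return avio_alloc_context(iobuf, 4096, 0 /* read-only */, m,
                              read_from_memory, NULL, NULL);
}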
- AVPacket
The data packet produced by demuxing (or fed to muxing), usually containing one frame of video or a segment of audio data.
In my own work the demuxer side is what I use most. The typical FFmpeg flow is: the demuxer's read_probe function is first used to determine which container format the URL holds; then read_header is called to read the container header and perform the basic initialization; after that, packets are read normally via read_packet and read_timestamp; finally, when reading is finished, read_close is called to deinitialize. In addition, read_seek can be used to implement seeking (fast forward / rewind) within the media file.
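From the application side these callbacks are reached through the public API; a minimal sketch with error handling mostly omitted (url is whatever file or stream you want to open):

#include <libavformat/avformat.h>

static int demux_sketch(const char *url)
{
    AVFormatContext *ic = NULL;
    AVPacket pkt;

    /* On FFmpeg versions before 4.0, call av_register_all() once at startup. */
    if (avformat_open_input(&ic, url, NULL, NULL) < 0)  /* read_probe + read_header */
        return -1;
    avformat_find_stream_info(ic, NULL);                /* fills the AVStream parameters */

    while (av_read_frame(ic, &pkt) >= 0) {              /* -> AVInputFormat.read_packet */
        /* pkt.stream_index tells which AVStream this packet belongs to */
        av_packet_unref(&pkt);
    }

    /* optional: av_seek_frame(ic, stream_index, timestamp, flags);  -> read_seek */
    avformat_close_input(&ic);                          /* -> read_close */
    return 0;
}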
The main mechanisms of libavcodec
libavcodec, usually together with libavformat, implements decoding, encoding, and audio/video parsing (splitting or assembling the packets of a single elementary format). Its main structures are:
- AVCodecContext
AVCodecContext is the most central structure in libavcodec. It holds the encoder/decoder type, an AVCodec, an AVHWAccel, and various other codec parameters.
- AVCodec
This structure is the external wrapper around an encoder/decoder. Its interface includes:
void(* init_static_data )(struct AVCodec *codec)
Initialize codec static data, called from avcodec_register().
int(* init )(AVCodecContext *)
int(* encode_sub )(AVCodecContext *, uint8_t *buf, int buf_size, const struct AVSubtitle *sub)
int(* encode2 )(AVCodecContext *avctx, AVPacket *avpkt, const AVFrame *frame, int *got_packet_ptr)
Encode data to an AVPacket.
int(* decode )(AVCodecContext *, void *outdata, int *outdata_size, AVPacket *avpkt)
int(* close )(AVCodecContext *)
int(* send_frame )(AVCodecContext *avctx, const AVFrame *frame)
Decode/encode API with decoupled packet/frame dataflow.
int(* send_packet )(AVCodecContext *avctx, const AVPacket *avpkt)
int(* receive_frame )(AVCodecContext *avctx, AVFrame *frame)
int(* receive_packet )(AVCodecContext *avctx, AVPacket *avpkt)
void(* flush )(AVCodecContext *)
Flush buffers.
Frame-level threading support functions
int(* init_thread_copy )(AVCodecContext *)
If defined, called on thread contexts when they are created.
int(* update_thread_context )(AVCodecContext *dst, const AVCodecContext *src)
Copy necessary context variables from a previous thread context to the current one.
- AVHWAccel
This is the structure for hardware-accelerated decoding; it only works when the required hardware is present. Its common interface is as follows:
int(* alloc_frame )(AVCodecContext *avctx, AVFrame *frame)
Allocate a custom buffer.
int(* start_frame )(AVCodecContext *avctx, const uint8_t *buf, uint32_t buf_size)
Called at the beginning of each frame or field picture.
int(* decode_slice )(AVCodecContext *avctx, const uint8_t *buf, uint32_t buf_size)
Callback for each slice.
int(* end_frame )(AVCodecContext *avctx)
Called at the end of each frame or field picture.
void(* decode_mb )(struct MpegEncContext *s)
Called for every Macroblock in a slice.
int(* init )(AVCodecContext *avctx)
Initialize the hwaccel private data.
int(* uninit )(AVCodecContext *avctx)
Uninitialize the hwaccel private data.
- AVCodecParserContext and AVCodecParser
These implement the parser for a specific audio or video format, e.g. h264 or aac. Their common external interface is:
int(* parser_init )(AVCodecParserContext *s)
int(* parser_parse )(AVCodecParserContext *s, AVCodecContext *avctx, const uint8_t **poutbuf, int *poutbuf_size, const uint8_t *buf, int buf_size)
void(* parser_close )(AVCodecParserContext *s)
int(* split )(AVCodecContext *avctx, const uint8_t *buf, int buf_size)
- AVFrame
This holds the raw data produced by audio/video decoding, or the input data of an encoder. The exact layout of the stored audio/video data is given by AVFrame::format (an AVSampleFormat for audio, an AVPixelFormat for video).
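For instance, a decoded YUV420P video frame keeps its three planes in data[0..2], each with its own linesize; a small sketch of walking the luma plane (only this one pixel format is handled):

#include <stdint.h>
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>

/* Sum all luma samples of a decoded YUV420P frame, honoring linesize padding. */
static uint64_t sum_luma(const AVFrame *frame)
{
    uint64_t sum = 0;
    if (frame->format != AV_PIX_FMT_YUV420P)
        return 0;
    for (int y = 0; y < frame->height; y++) {
        const uint8_t *row = frame->data[0] + y * frame->linesize[0];
        for (int x = 0; x < frame->width; x++)
            sum += row[x];
    }
    return sum;
}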
What I use in libavcodec are the decoders and the parsers; and of course, to get at the raw audio/video data you also need a working knowledge of the AVFrame structure.
The usual decoder call sequence is: initialize the decoder through AVCodec's init, decode data with the decode function, deinitialize the decoder with close, and, whenever necessary (e.g. at the end of decoding or when switching streams), call flush to drop the decoder's internally buffered data.
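Applications reach these callbacks through the public API. A minimal sketch using the decoupled send/receive API introduced in FFmpeg 3.1 (error handling trimmed; the codec context is assumed to have been opened from the demuxer's stream parameters with avcodec_find_decoder()/avcodec_alloc_context3()/avcodec_open2(), which is what ends up calling AVCodec.init):

#include <libavcodec/avcodec.h>

/* Feed one compressed packet to an opened decoder and hand every
 * resulting frame to a caller-supplied callback. */
static int decode_packet(AVCodecContext *dec, const AVPacket *pkt,
                         void (*on_frame)(const AVFrame *))
{
    AVFrame *frame = av_frame_alloc();
    int ret = avcodec_send_packet(dec, pkt);       /* -> AVCodec.send_packet / decode */

    while (ret >= 0) {
        ret = avcodec_receive_frame(dec, frame);   /* -> AVCodec.receive_frame */
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
            ret = 0;
            break;
        }
        if (ret < 0)
            break;                                 /* real decoding error */
        on_frame(frame);
    }

    av_frame_free(&frame);
    return ret;
}

Tearing the decoder down with avcodec_free_context() corresponds to close, and avcodec_flush_buffers() corresponds to flush.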
Of course, many audio/video bitstreams have to go through a parser before they can be fed to the decoder; this is what parser_parse is called for.
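At the application level the parser is driven through av_parser_init() and av_parser_parse2(); a rough sketch of splitting a raw elementary-stream buffer (e.g. an H.264 Annex B byte stream) into decodable packets, with buffer management simplified:

#include <libavcodec/avcodec.h>

static void parse_buffer(AVCodecContext *dec, const uint8_t *data, int size)
{
    AVCodecParserContext *parser = av_parser_init(dec->codec_id);  /* -> parser_init */
    AVPacket pkt;
    if (!parser)
        return;
    av_init_packet(&pkt);

    while (size > 0) {
        uint8_t *out = NULL;
        int out_size = 0;
        int used = av_parser_parse2(parser, dec, &out, &out_size,  /* -> parser_parse */
                                    data, size,
                                    AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        data += used;
        size -= used;
        if (out_size > 0) {
            pkt.data = out;   /* one complete packet's worth of bitstream */
            pkt.size = out_size;
            /* hand pkt to the decoder here, e.g. with the decode sketch above */
        }
    }
    av_parser_close(parser);                                       /* -> parser_close */
}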
There are many more details in the implementation of hardware decoding; if you are interested, see "Hardware acceleration introduction with FFmpeg".
----------------------------------------------------------------------------------------------------------------------------
Author: Tocy  e-mail: zyvj@qq.com
Copyright @2015-2020. Not for commercial use; when reposting, please credit the original URL. All rights reserved.