FFmpeg Remuxing

Demuxing and re-muxing involve calls to quite a few APIs; the main data structures and functions are listed below.

AVFormatContext: the format context. For the output side it is allocated and filled by avformat_alloc_output_context2(&oc, NULL, NULL, filename).

Role: AVFormatContext is the core structure for both muxing and demuxing; it holds all the information about the file being read or written. As with most libavformat structures, its size is not part of the public ABI, so it cannot be allocated on the stack or directly with av_malloc(). To create an AVFormatContext, use avformat_alloc_context(), or one of the functions that allocate it internally, such as avformat_open_input(). In other words, although the fields of an AVFormatContext may be accessed directly, the structure itself must be created and destroyed through the dedicated APIs.
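A minimal sketch of that lifecycle (the function name open_demuxer and its url parameter are just for illustration):

#include <stdio.h>
#include <libavformat/avformat.h>

/* Sketch only: open a file, print the detected container, free everything. */
static int open_demuxer(const char *url)
{
    AVFormatContext *fmt_ctx = NULL;   /* never declare this on the stack */
    int ret;

    /* avformat_open_input() allocates the context when *fmt_ctx is NULL
     * and fills it with information read from the file header. */
    if ((ret = avformat_open_input(&fmt_ctx, url, NULL, NULL)) < 0)
        return ret;                    /* the context is freed on failure */

    printf("container format: %s\n", fmt_ctx->iformat->name);

    avformat_close_input(&fmt_ctx);    /* closes the file and frees the context */
    return 0;
}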

nb_streams: a field of AVFormatContext that gives the number of streams in the media file. In FFmpeg a media file can contain several streams, e.g. video, audio and subtitle streams, and each stream carries a different kind of media data.
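For example, assuming fmt_ctx has already been opened as above, the streams can be listed like this (sketch only; av_get_media_type_string() comes from libavutil/avutil.h):

/* List every stream in an already-opened AVFormatContext *fmt_ctx. */
for (unsigned int i = 0; i < fmt_ctx->nb_streams; i++) {
    AVStream *st = fmt_ctx->streams[i];
    const char *type = av_get_media_type_string(st->codecpar->codec_type);
    printf("stream #%u: %s\n", i, type ? type : "unknown");
}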

AVOutputFormat: a standalone structure that describes the output container format.

Role: the input (AVInputFormat) or output (AVOutputFormat) format. The input format is either detected automatically by FFmpeg's probing machinery or set explicitly by the user, whereas the output format must always be set by the user. When reading a live audio/video stream you can rely on FFmpeg's built-in probing, or set the format yourself if you already know how the stream is encapsulated; for output there is no probing, so the format always has to be chosen explicitly.
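A sketch of both sides (the udp:// address and the output file name are only placeholders; "mpegts" names the MPEG-TS demuxer):

/* Output side: pick the muxer explicitly, or let FFmpeg guess it from the
 * file name (this is what avformat_alloc_output_context2() does internally). */
const AVOutputFormat *ofmt = av_guess_format(NULL, "out.mp4", NULL);

/* Input side: force a demuxer instead of relying on probing, e.g. when the
 * application already knows it is receiving an MPEG-TS live stream. */
const AVInputFormat *ifmt = av_find_input_format("mpegts");
AVFormatContext *ic = NULL;
int ret = avformat_open_input(&ic, "udp://127.0.0.1:1234", ifmt, NULL);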

AVPacket: the structure that holds compressed data. It is normally produced by a demuxer and then fed to a decoder as input, or receives the output of an encoder and is then passed to a muxer. It is allocated with av_packet_alloc().

Role: AVPacket is one of the most important structures in FFmpeg. It holds the data that has been demuxed but not yet decoded (i.e. still compressed), together with additional information about that data: presentation timestamp (pts), decoding timestamp (dts), duration, the index of the stream it belongs to, and so on.

For video, an AVPacket usually contains one compressed frame, while for audio it may contain several. A packet may also be empty, carrying no compressed data at all, only side data (extra information about the packet supplied by the container, for example stream parameters updated at the end of encoding). The size of AVPacket is part of the public ABI, which is rare among FFmpeg structures and shows how central AVPacket is: it may be allocated on the stack (e.g. with the declaration AVPacket packet;), and no new fields will be added to it unless libavcodec and libavformat change substantially.
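A sketch of the two ways of obtaining a packet; the heap variant is the one the remuxing example at the end of this post uses:

/* Because sizeof(AVPacket) is part of the public ABI, a stack declaration
 * such as the following is legal: */
AVPacket stack_pkt;
(void)stack_pkt;               /* shown only to illustrate the ABI point */

/* The heap API is nevertheless the idiomatic choice in current FFmpeg: */
AVPacket *pkt = av_packet_alloc();
if (!pkt)
    return AVERROR(ENOMEM);
/* ... av_read_frame(ifmt_ctx, pkt), av_packet_unref(pkt), ... */
av_packet_free(&pkt);          /* unreferences the data and frees the struct */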

avformat_open_input: opens the input stream of a file; the AVFormatContext is created automatically by avformat_open_input.

Role: Open an input stream and read the header. The codecs are not opened. The stream must be closed with avformat_close_input().

avformat_find_stream_info: used together with avformat_open_input. After avformat_open_input has opened the input stream and read the header (that is, filled in the AVFormatContext), avformat_find_stream_info reads packets from the context to obtain information about the streams, i.e. about the compressed data.

Role: Read packets of a media file to get stream information. This is useful for file formats with no headers such as MPEG. This function also computes the real framerate in case of MPEG-2 repeat frame mode. The logical file position is not changed by this function; examined packets may be buffered for later processing.

av_dump_format: prints information about the media file.
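Putting the three input-side calls together, roughly as the full example at the end of this post does (in_filename is assumed to hold the input path):

AVFormatContext *ifmt_ctx = NULL;
int ret;

if ((ret = avformat_open_input(&ifmt_ctx, in_filename, NULL, NULL)) < 0) {
    fprintf(stderr, "Could not open input file '%s'\n", in_filename);
    return ret;
}

if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
    fprintf(stderr, "Failed to retrieve input stream information\n");
    avformat_close_input(&ifmt_ctx);
    return ret;
}

/* Last argument 0 means "treat the context as input" when printing. */
av_dump_format(ifmt_ctx, 0, in_filename, 0);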

avformat_alloc_output_context2: allocates an output format context, guessing the output format from the output file name.

Role: Allocate an AVFormatContext for an output format.

avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx) {
   fprintf(stderr, "Could not create output context\n");
}

FFmpeg memory allocation reference: allocating and freeing memory

av_calloc: a thin wrapper around av_mallocz()

void *av_calloc(size_t nmemb, size_t size)
{
    if (size <= 0 || nmemb >= INT_MAX / size)
        return NULL;
    return av_mallocz(nmemb * size);
}

AVStream: the structure that stores the information of each video/audio stream.
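A typical use of AVStream is to locate one stream and read its parameters, for example with av_find_best_stream(); a sketch, assuming ifmt_ctx is an opened input context:

/* Pick the "best" video stream (the decoder argument is NULL because no
 * decoding is needed here). */
int video_idx = av_find_best_stream(ifmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
if (video_idx >= 0) {
    AVStream *st = ifmt_ctx->streams[video_idx];
    printf("video stream #%d: %dx%d, time_base %d/%d\n",
           video_idx,
           st->codecpar->width, st->codecpar->height,
           st->time_base.num, st->time_base.den);
}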

avformat_new_stream: adds a new stream to a media file; usually used together with avformat_alloc_output_context2.

Role: Add a new stream to a media file. When demuxing, it is called by the demuxer in read_header(). If the flag AVFMTCTX_NOHEADER is set in s.ctx_flags, then it may also be called in read_packet().

When muxing, should be called by the user before avformat_write_header(). User is required to call avformat_free_context() to clean up the allocation by avformat_new_stream().

AVCodecParameters: This struct describes the properties of an encoded stream.

avcodec_parameters_copy: Copy the contents of src to dst. Prototype: int avcodec_parameters_copy(AVCodecParameters *dst, const AVCodecParameters *src)
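On the muxing side the two are normally used together; the following sketch is essentially what the per-stream loop in the full example below does (in_stream and ofmt_ctx are assumed to exist already):

AVStream *out_stream = avformat_new_stream(ofmt_ctx, NULL);
if (!out_stream)
    return AVERROR(ENOMEM);

/* Copy the codec parameters from the input stream so the output stream is
 * described identically; no re-encoding takes place when remuxing. */
ret = avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
if (ret < 0)
    return ret;

/* Reset the codec tag so the output container can pick its own. */
out_stream->codecpar->codec_tag = 0;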

avio_open2: opens an FFmpeg input/output file, similar to the Windows CreateFile function, in preparation for the file writing that follows.
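Note that the full example below actually uses the simpler avio_open(); a sketch of the avio_open2() variant, which additionally takes an interrupt callback and an options dictionary (both may be NULL):

/* ofmt_ctx->pb is the I/O context the muxer writes to; the last two arguments
 * (interrupt callback and options) are optional and can be NULL. */
ret = avio_open2(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE, NULL, NULL);
if (ret < 0) {
    fprintf(stderr, "Could not open output file '%s'\n", out_filename);
    return ret;
}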

Writing the output container involves three steps: avformat_write_header() writes the file header, av_write_frame() / av_interleaved_write_frame() writes the audio/video packets, and av_write_trailer() writes the file trailer. The main loop below (an excerpt from the full example) covers the last two steps; avformat_write_header() is called once before entering the loop.

while (1) {
	AVStream *in_stream, *out_stream;

	ret = av_read_frame(ifmt_ctx, pkt);
	if (ret < 0)
		break;

	in_stream  = ifmt_ctx->streams[pkt->stream_index];
	if (pkt->stream_index >= stream_mapping_size ||
		stream_mapping[pkt->stream_index] < 0) {
		av_packet_unref(pkt);
		continue;
	}

	pkt->stream_index = stream_mapping[pkt->stream_index];
	out_stream = ofmt_ctx->streams[pkt->stream_index];
	log_packet(ifmt_ctx, pkt, "in");

	/* copy packet */
	av_packet_rescale_ts(pkt, in_stream->time_base, out_stream->time_base);
	pkt->pos = -1;
	log_packet(ofmt_ctx, pkt, "out");

	ret = av_interleaved_write_frame(ofmt_ctx, pkt);
	/* pkt is now blank (av_interleaved_write_frame() takes ownership of
	 * its contents and resets pkt), so that no unreferencing is necessary.
	 * This would be different if one used av_write_frame(). */
	if (ret < 0) {
		fprintf(stderr, "Error muxing packet\n");
		break;
	}
}

av_write_trailer(ofmt_ctx);

 

What this code does: in the main loop the program reads each packet, rescales its timestamps, and writes it to the output file. av_read_frame reads a packet from the input file, av_packet_rescale_ts converts the timestamps, and av_interleaved_write_frame writes the packet.

Since streams holds multiple streams, the packet's stream in ifmt_ctx (video or audio) is looked up first; its index is then mapped through stream_mapping to the index of the corresponding stream in ofmt_ctx, which finally yields out_stream.
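av_packet_rescale_ts() is roughly equivalent to rescaling pts, dts and duration by hand; a simplified sketch of what it does (in_tb and out_tb stand for in_stream->time_base and out_stream->time_base; AV_NOPTS_VALUE handling is omitted):

/* Simplified equivalent of av_packet_rescale_ts(pkt, in_tb, out_tb). */
pkt->pts      = av_rescale_q_rnd(pkt->pts, in_tb, out_tb,
                                 AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
pkt->dts      = av_rescale_q_rnd(pkt->dts, in_tb, out_tb,
                                 AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
pkt->duration = av_rescale_q(pkt->duration, in_tb, out_tb);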

Cleaning up and releasing resources:

end:
    av_packet_free(&pkt);

    avformat_close_input(&ifmt_ctx);

    /* close output */
    if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
        avio_closep(&ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);

    av_freep(&stream_mapping);

    if (ret < 0 && ret != AVERROR_EOF) {
        fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
        return 1;
    }

    return 0;
}

  

Finally, the complete example code.

It shows how to perform a basic remux with the FFmpeg libraries: opening the input and output files, copying streams and codec parameters, and reading and writing packets.

/*
 * Copyright (c) 2013 Stefano Sabatini
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */
 
/**
 * @file
 * libavformat/libavcodec demuxing and muxing API example.
 *
 * Remux streams from one container format to another.
 * @example remuxing.c
 */
 
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
 
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
{
    AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
 
    printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
           tag,
           av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
           av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
           av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
           pkt->stream_index);
}
 
int main(int argc, char **argv)
{
    const AVOutputFormat *ofmt = NULL;
    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    AVPacket *pkt = NULL;
    const char *in_filename, *out_filename;
    int ret, i;
    int stream_index = 0;
    int *stream_mapping = NULL;
    int stream_mapping_size = 0;
 
    if (argc < 3) {
        printf("usage: %s input output\n"
               "API example program to remux a media file with libavformat and libavcodec.\n"
               "The output format is guessed according to the file extension.\n"
               "\n", argv[0]);
        return 1;
    }
 
    in_filename  = argv[1];
    out_filename = argv[2];
 
    pkt = av_packet_alloc();
    if (!pkt) {
        fprintf(stderr, "Could not allocate AVPacket\n");
        return 1;
    }
 
    if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
        fprintf(stderr, "Could not open input file '%s'", in_filename);
        goto end;
    }
 
    if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
        fprintf(stderr, "Failed to retrieve input stream information");
        goto end;
    }
 
    av_dump_format(ifmt_ctx, 0, in_filename, 0);
 
    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
    if (!ofmt_ctx) {
        fprintf(stderr, "Could not create output context\n");
        ret = AVERROR_UNKNOWN;
        goto end;
    }
 
    stream_mapping_size = ifmt_ctx->nb_streams;
    stream_mapping = av_calloc(stream_mapping_size, sizeof(*stream_mapping));
    if (!stream_mapping) {
        ret = AVERROR(ENOMEM);
        goto end;
    }
 
    ofmt = ofmt_ctx->oformat;  // The output container format.
 
    for (i = 0; i < ifmt_ctx->nb_streams; i++) {
        AVStream *out_stream;
        AVStream *in_stream = ifmt_ctx->streams[i];
        AVCodecParameters *in_codecpar = in_stream->codecpar;
 
        if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
            in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
            in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
            stream_mapping[i] = -1;
            continue;
        }
 
        stream_mapping[i] = stream_index++;
 
        out_stream = avformat_new_stream(ofmt_ctx, NULL);
        if (!out_stream) {
            fprintf(stderr, "Failed allocating output stream\n");
            ret = AVERROR_UNKNOWN;
            goto end;
        }
 
        ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
        if (ret < 0) {
            fprintf(stderr, "Failed to copy codec parameters\n");
            goto end;
        }
        out_stream->codecpar->codec_tag = 0;
    }
    av_dump_format(ofmt_ctx, 0, out_filename, 1);
 
    if (!(ofmt->flags & AVFMT_NOFILE)) {
        ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);  // pb: I/O context.
        if (ret < 0) {
            fprintf(stderr, "Could not open output file '%s'", out_filename);
            goto end;
        }
    }
 
    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
        fprintf(stderr, "Error occurred when opening output file\n");
        goto end;
    }
 
    while (1) {
        AVStream *in_stream, *out_stream;
 
        ret = av_read_frame(ifmt_ctx, pkt);
        if (ret < 0)
            break;
 
        in_stream = ifmt_ctx->streams[pkt->stream_index];
        if (pkt->stream_index >= stream_mapping_size ||
            stream_mapping[pkt->stream_index] < 0) {
            av_packet_unref(pkt);
            continue;
        }
 
        pkt->stream_index = stream_mapping[pkt->stream_index];
        out_stream = ofmt_ctx->streams[pkt->stream_index];
        log_packet(ifmt_ctx, pkt, "in");
 
        /* copy packet */
        av_packet_rescale_ts(pkt, in_stream->time_base, out_stream->time_base);
        pkt->pos = -1;
        log_packet(ofmt_ctx, pkt, "out");
 
        ret = av_interleaved_write_frame(ofmt_ctx, pkt);
        /* pkt is now blank (av_interleaved_write_frame() takes ownership of
         * its contents and resets pkt), so that no unreferencing is necessary.
         * This would be different if one used av_write_frame(). */
        if (ret < 0) {
            fprintf(stderr, "Error muxing packet\n");
            break;
        }
    }
 
    av_write_trailer(ofmt_ctx);
end:
    av_packet_free(&pkt);
 
    avformat_close_input(&ifmt_ctx);
 
    /* close output */
    if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
        avio_closep(&ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);
 
    av_freep(&stream_mapping);
 
    if (ret < 0 && ret != AVERROR_EOF) {
        fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
        return 1;
    }
 
    return 0;
}

  

 
