How to get input device names in FFmpeg

I wrote this article back in 2019 during my internship.
In short, FFmpeg actually does have the right interfaces for this: avdevice_list_input_sources can query the device names, and the comments in avdevice.h even show that there are APIs which, given a device name, return the resolutions, formats and other capabilities the device supports.
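
For reference, these are roughly the declarations involved, as they appear in libavdevice/avdevice.h around the FFmpeg 4.x releases (the exact const qualifiers differ between versions, so treat this as a sketch rather than the authoritative prototypes):

int avdevice_list_devices(struct AVFormatContext *s, AVDeviceInfoList **device_list);

int avdevice_list_input_sources(struct AVInputFormat *device, const char *device_name,
                                AVDictionary *device_options, AVDeviceInfoList **device_list);

void avdevice_free_list_devices(AVDeviceInfoList **device_list);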

But of the approaches you find online, one is to do something like this:

AVFormatContext *pFormatCtx = avformat_alloc_context();
AVDictionary *options = NULL;
av_dict_set(&options, "list_devices", "true", 0);
AVInputFormat *iformat = av_find_input_format("dshow");
puts("Device Option Info======");
/* the open call "fails", but as a side effect dshow logs the device list */
avformat_open_input(&pFormatCtx, "video=dummy", iformat, &options);
av_dict_free(&options);
avformat_close_input(&pFormatCtx);  /* safe even if the open failed */

which makes FFmpeg print the device information to its log (av_log, so in practice it ends up on stderr rather than stdout), and you then parse that text to get the names; it is the same list you see on the command line with ffmpeg -list_devices true -f dshow -i dummy.
The other is to query the devices directly through the COM interfaces; see https://blog.csdn.net/jhqin/article/details/5929796

Both of these boil down to fetching the information through COM. FFmpeg calls exactly the same functions; it just never exposes an interface that returns the data, and prints it instead. In other words, because part of the dshow code hasn't been finished upstream, we are stuck with this awkward workaround.

So what would the proper solution be?
Implementing the corresponding interface ourselves, of course...

Looking at dshow.c, you can see:

AVInputFormat ff_dshow_demuxer = {
    .name           = "dshow",
    .long_name      = NULL_IF_CONFIG_SMALL("DirectShow capture"),
    .priv_data_size = sizeof(struct dshow_ctx),
    .read_header    = dshow_read_header,
    .read_packet    = dshow_read_packet,
    .read_close     = dshow_read_close,
    .flags          = AVFMT_NOFILE,
    .priv_class     = &dshow_class,
};

Only these fields are filled in. So what does the complete AVInputFormat interface look like?

typedef struct AVInputFormat {
    /**
     * A comma separated list of short names for the format. New names
     * may be appended with a minor bump.
     */
    const char *name;

    /**
     * Descriptive name for the format, meant to be more human-readable
     * than name. You should use the NULL_IF_CONFIG_SMALL() macro
     * to define it.
     */
    const char *long_name;

    /**
     * Can use flags: AVFMT_NOFILE, AVFMT_NEEDNUMBER, AVFMT_SHOW_IDS,
     * AVFMT_GENERIC_INDEX, AVFMT_TS_DISCONT, AVFMT_NOBINSEARCH,
     * AVFMT_NOGENSEARCH, AVFMT_NO_BYTE_SEEK, AVFMT_SEEK_TO_PTS.
     */
    int flags;

    /**
     * If extensions are defined, then no probe is done. You should
     * usually not use extension format guessing because it is not
     * reliable enough
     */
    const char *extensions;

    const struct AVCodecTag * const *codec_tag;

    const AVClass *priv_class; ///< AVClass for the private context

    /**
     * Comma-separated list of mime types.
     * It is used check for matching mime types while probing.
     * @see av_probe_input_format2
     */
    const char *mime_type;

    /*****************************************************************
     * No fields below this line are part of the public API. They
     * may not be used outside of libavformat and can be changed and
     * removed at will.
     * New public fields should be added right above.
     *****************************************************************
     */
    struct AVInputFormat *next;

    /**
     * Raw demuxers store their codec ID here.
     */
    int raw_codec_id;

    /**
     * Size of private data so that it can be allocated in the wrapper.
     */
    int priv_data_size;

    /**
     * Tell if a given file has a chance of being parsed as this format.
     * The buffer provided is guaranteed to be AVPROBE_PADDING_SIZE bytes
     * big so you do not have to check for that unless you need more.
     */
    int (*read_probe)(AVProbeData *);

    /**
     * Read the format header and initialize the AVFormatContext
     * structure. Return 0 if OK. 'avformat_new_stream' should be
     * called to create new streams.
     */
    int (*read_header)(struct AVFormatContext *);

    /**
     * Read one packet and put it in 'pkt'. pts and flags are also
     * set. 'avformat_new_stream' can be called only if the flag
     * AVFMTCTX_NOHEADER is used and only in the calling thread (not in a
     * background thread).
     * @return 0 on success, < 0 on error.
     *         When returning an error, pkt must not have been allocated
     *         or must be freed before returning
     */
    int (*read_packet)(struct AVFormatContext *, AVPacket *pkt);

    /**
     * Close the stream. The AVFormatContext and AVStreams are not
     * freed by this function
     */
    int (*read_close)(struct AVFormatContext *);

    /**
     * Seek to a given timestamp relative to the frames in
     * stream component stream_index.
     * @param stream_index Must not be -1.
     * @param flags Selects which direction should be preferred if no exact
     *              match is available.
     * @return >= 0 on success (but not necessarily the new offset)
     */
    int (*read_seek)(struct AVFormatContext *,
                     int stream_index, int64_t timestamp, int flags);

    /**
     * Get the next timestamp in stream[stream_index].time_base units.
     * @return the timestamp or AV_NOPTS_VALUE if an error occurred
     */
    int64_t (*read_timestamp)(struct AVFormatContext *s, int stream_index,
                              int64_t *pos, int64_t pos_limit);

    /**
     * Start/resume playing - only meaningful if using a network-based format
     * (RTSP).
     */
    int (*read_play)(struct AVFormatContext *);

    /**
     * Pause playing - only meaningful if using a network-based format
     * (RTSP).
     */
    int (*read_pause)(struct AVFormatContext *);

    /**
     * Seek to timestamp ts.
     * Seeking will be done so that the point from which all active streams
     * can be presented successfully will be closest to ts and within min/max_ts.
     * Active streams are all streams that have AVStream.discard < AVDISCARD_ALL.
     */
    int (*read_seek2)(struct AVFormatContext *s, int stream_index, int64_t min_ts, int64_t ts, int64_t max_ts, int flags);

    /**
     * Returns device list with it properties.
     * @see avdevice_list_devices() for more details.
     */
    int (*get_device_list)(struct AVFormatContext *s, struct AVDeviceInfoList *device_list);

    /**
     * Initialize device capabilities submodule.
     * @see avdevice_capabilities_create() for more details.
     */
    int (*create_device_capabilities)(struct AVFormatContext *s, struct AVDeviceCapabilitiesQuery *caps);

    /**
     * Free device capabilities submodule.
     * @see avdevice_capabilities_free() for more details.
     */
    int (*free_device_capabilities)(struct AVFormatContext *s, struct AVDeviceCapabilitiesQuery *caps);
} AVInputFormat;

See that? get_device_list and create_device_capabilities are still unimplemented. These two function pointers are exactly what would let FFmpeg's dshow report the input device names and, for each device name, the parameters the device supports.
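
For context, these two callbacks fill in the following structures, declared in libavdevice/avdevice.h (this is the 4.x-era layout; newer releases add more fields, so check your own headers):

typedef struct AVDeviceInfo {
    char *device_name;          /**< device name, format depends on device */
    char *device_description;   /**< human friendly name */
} AVDeviceInfo;

typedef struct AVDeviceInfoList {
    AVDeviceInfo **devices;     /**< list of autodetected devices */
    int nb_devices;             /**< number of autodetected devices */
    int default_device;         /**< index of default device or -1 if no default */
} AVDeviceInfoList;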

The next step is therefore simple: implement these two callbacks. Since the existing code can already print the list to the log, the COM wrapper functions are all in place; they just don't return the data. We make a copy of the relevant function, change it so that it returns the data, and hook it up to the interface.

Somewhere around line 1073 of libavdevice/dshow.c, find a free spot and add a function:

static int dshow_get_device_list(AVFormatContext *avctx, AVDeviceInfoList *device_list)
{
    IEnumMoniker *classenum = NULL;
    ICreateDevEnum *devenum = NULL;
    IMoniker *m = NULL;
    int r = 0;
    const GUID *device_guid[2] = { &CLSID_VideoInputDeviceCategory,
                                   &CLSID_AudioInputDeviceCategory };


    device_list->nb_devices = 0;
    device_list->devices = NULL;
    r = CoInitialize(0);
    if (FAILED(r))   /* S_FALSE only means COM was already initialized on this thread */
        return AVERROR(EIO);

    r = CoCreateInstance(&CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                         &IID_ICreateDevEnum, (void **) &devenum);
    if (r != S_OK) {
        av_log(avctx, AV_LOG_ERROR, "Could not enumerate system devices.\n");
        r = AVERROR(EIO);
        goto fail2;
    }

    /* first pass: video input devices; second pass: audio input devices */
    for (int sourcetype = 0; sourcetype < 2; sourcetype++) {
        r = ICreateDevEnum_CreateClassEnumerator(devenum, device_guid[sourcetype],
                                                 (IEnumMoniker **) &classenum, 0);
        if (r != S_OK) {
            av_log(avctx, AV_LOG_ERROR, "Could not enumerate dshow devices (or none found).\n");
            r = AVERROR(EIO);
            goto fail2;
        }

        while (IEnumMoniker_Next(classenum, 1, &m, NULL) == S_OK) {
            IPropertyBag *bag = NULL;
            char *friendly_name = NULL;
            char *unique_name = NULL;
            IBindCtx *bind_ctx = NULL;
            LPOLESTR olestr = NULL;
            LPMALLOC co_malloc = NULL;
            VARIANT var;
            int i;

            VariantInit(&var);

            r = CoGetMalloc(1, &co_malloc);
            if (r != S_OK)
                goto fail1;
            r = CreateBindCtx(0, &bind_ctx);
            if (r != S_OK)
                goto fail1;
            /* GetDisplayname works for both video and audio, DevicePath doesn't */
            r = IMoniker_GetDisplayName(m, bind_ctx, NULL, &olestr);
            if (r != S_OK)
                goto fail1;
            unique_name = dup_wchar_to_utf8(olestr);
            /* replace ':' with '_' since we use : to delineate between sources */
            for (i = 0; i < strlen(unique_name); i++) {
                if (unique_name[i] == ':')
                    unique_name[i] = '_';
            }

            r = IMoniker_BindToStorage(m, 0, 0, &IID_IPropertyBag, (void *) &bag);
            if (r != S_OK)
                goto fail1;

            var.vt = VT_BSTR;
            r = IPropertyBag_Read(bag, L"FriendlyName", &var, NULL);
            if (r != S_OK)
                goto fail1;
            friendly_name = dup_wchar_to_utf8(var.bstrVal);
            /* prefix the friendly name with "video="/"audio=" so the description
             * can later be passed straight to avformat_open_input */
            char *friendly_name2 = av_malloc(strlen(friendly_name) + 7);
            strcpy(friendly_name2, sourcetype == 0 ? "video=" : "audio=");
            strcat(friendly_name2, friendly_name);
            av_free(friendly_name);

            /* append the new entry to the device list */
            device_list->nb_devices += 1;
            device_list->devices = av_realloc(device_list->devices,
                                              device_list->nb_devices * sizeof(AVDeviceInfo *));

            AVDeviceInfo *newDevice = av_malloc(sizeof(AVDeviceInfo));
            newDevice->device_description = friendly_name2;
            newDevice->device_name = unique_name;

            device_list->devices[device_list->nb_devices - 1] = newDevice;
    fail1:
            VariantClear(&var);
            if (olestr && co_malloc)
                IMalloc_Free(co_malloc, olestr);
            if (bind_ctx)
                IBindCtx_Release(bind_ctx);
            if (bag)
                IPropertyBag_Release(bag);
            IMoniker_Release(m);
        }
        /* release this category's enumerator before moving on to the next one */
        IEnumMoniker_Release(classenum);
        classenum = NULL;
    }
    
fail2:
    if (devenum)
        ICreateDevEnum_Release(devenum);
    if (classenum)
        IEnumMoniker_Release(classenum);

    CoUninitialize();
    /* on success, report the number of devices found, as the public
     * avdevice_list_devices() API documents; keep negative error codes */
    return r < 0 ? r : device_list->nb_devices;
}

Then scroll to the bottom of dshow.c and add one line after .read_close, so that it becomes:

AVInputFormat ff_dshow_demuxer = {
    .name           = "dshow",
    .long_name      = NULL_IF_CONFIG_SMALL("DirectShow capture"),
    .priv_data_size = sizeof(struct dshow_ctx),
    .read_header    = dshow_read_header,
    .read_packet    = dshow_read_packet,
    .read_close     = dshow_read_close,
    .get_device_list= dshow_get_device_list, // this line is the new addition
    .flags          = AVFMT_NOFILE,
    .priv_class     = &dshow_class,
};

So how do we use it? Here is a sample function that fetches the audio and video device names and puts them into two separate vectors:

void GetDeviceList(vector<string> *audio, vector<string> *video)
{
	AVDeviceInfoList *list = nullptr;
	avdevice_list_input_sources(NULL, "dshow", NULL, &list);
	if (!list) return;
	for (int i = 0; i < list->nb_devices; i++) {
		if (video && memcmp(list->devices[i]->device_description, "video", 5) == 0) {
			video->push_back(list->devices[i]->device_description);
		}
		else if (audio && memcmp(list->devices[i]->device_description, "audio", 5) == 0) {
			audio->push_back(list->devices[i]->device_description);
		}
	}
	avdevice_free_list_devices(&list);
}
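
And here is a minimal sketch of driving that helper from a main(), assuming the FFmpeg headers are wrapped in extern "C" for C++; the file layout and names are just illustrative:

#include <cstdio>
#include <string>
#include <vector>
extern "C" {
#include <libavdevice/avdevice.h>
}
using namespace std;

void GetDeviceList(vector<string> *audio, vector<string> *video); // the helper above

int main()
{
    avdevice_register_all();  // registers dshow along with the other device demuxers

    vector<string> audio_devs, video_devs;
    GetDeviceList(&audio_devs, &video_devs);

    for (const string &name : video_devs)
        printf("video device: %s\n", name.c_str());
    for (const string &name : audio_devs)
        printf("audio device: %s\n", name.c_str());
    return 0;
}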