How Android fetches a video thumbnail: grabbing the first frame of a video

Source: 51CTO; author: killads

https://blog.51cto.com/u_14731/6959356

1. Preface

After recording a video on a phone, you can open the Gallery (or a similar app) and see the video's first frame without ever pressing play. That image is the video thumbnail.

So how exactly does Android obtain it? This article walks through the whole path.

2. How the video thumbnail is obtained

1) First, let's see how the app triggers it.

From the app side it is trivial: just call createVideoThumbnail.
Let's look at how that function is actually implemented.


frameworks/base/media/java/android/media/ThumbnailUtils.java


// Input: the video file path and a thumbnail kind; output: a Bitmap
public static Bitmap createVideoThumbnail(String filePath, int kind) {
    Bitmap bitmap = null;
    // First create a MediaMetadataRetriever
    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    try {
        retriever.setDataSource(filePath); // setDataSource
        bitmap = retriever.getFrameAtTime(-1); // grab the cover frame
        // There are two ways to grab a cover frame: getFrameAtTime and
        // getImageAtIndex. The former decodes a frame by timestamp, the
        // latter by frame index.
    } catch (IllegalArgumentException ex) {
        // Assume this is a corrupt video file
    } catch (RuntimeException ex) {
        // Assume this is a corrupt video file.
    } finally {
        try {
            retriever.release();
        } catch (RuntimeException ex) {
            // Ignore failures while cleaning up.
        }
    }

    if (bitmap == null) return null;

    if (kind == Images.Thumbnails.MINI_KIND) { // MINI_KIND thumbnails get scaled down
        // Scale down the bitmap if it's too large.
        int width = bitmap.getWidth();
        int height = bitmap.getHeight();
        int max = Math.max(width, height);
        if (max > 512) {
            float scale = 512f / max;
            int w = Math.round(scale * width);
            int h = Math.round(scale * height);
            bitmap = Bitmap.createScaledBitmap(bitmap, w, h, true);
        }
    } else if (kind == Images.Thumbnails.MICRO_KIND) {
        bitmap = extractThumbnail(bitmap,
                TARGET_SIZE_MICRO_THUMBNAIL,
                TARGET_SIZE_MICRO_THUMBNAIL,
                OPTIONS_RECYCLE_INPUT);
    }
    return bitmap;
}
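The MINI_KIND branch above caps the thumbnail's longer edge at 512 pixels while preserving the aspect ratio. A minimal sketch of just that arithmetic (ThumbScale is a hypothetical helper, not part of the framework):

```java
public class ThumbScale {
    // Mirror of the MINI_KIND scaling rule: if the longer edge exceeds
    // 512 px, scale both edges by 512/max, rounding to the nearest int.
    public static int[] scaled(int width, int height) {
        int max = Math.max(width, height);
        if (max <= 512) {
            return new int[] { width, height }; // small enough, keep as-is
        }
        float scale = 512f / max;
        return new int[] { Math.round(scale * width), Math.round(scale * height) };
    }

    public static void main(String[] args) {
        int[] wh = scaled(1920, 1080);
        System.out.println(wh[0] + "x" + wh[1]); // prints 512x288
    }
}
```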

The MediaMetadataRetriever above lives in /frameworks/base/media/java/android/media/MediaMetadataRetriever.java.

Via JNI, its calls all end up in /frameworks/av/media/libmedia/mediametadataretriever.cpp. That file is really just a Binder proxy (Bp); the actual implementation sits behind mRetriever.

2) So let's see where mRetriever comes from:

frameworks/av/media/libmedia/mediametadataretriever.cpp
MediaMetadataRetriever::MediaMetadataRetriever()
{
    ALOGV("constructor");
    const sp<IMediaPlayerService> service(getService());
    if (service == 0) {
        ALOGE("failed to obtain MediaMetadataRetrieverService");
        return;
    }
    // create the MediaMetadataRetriever from the media player service
    sp<IMediaMetadataRetriever> retriever(service->createMetadataRetriever());
    if (retriever == 0) {
        ALOGE("failed to create IMediaMetadataRetriever object from server");
    }
    mRetriever = retriever;
}


/frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp
sp<IMediaMetadataRetriever> MediaPlayerService::createMetadataRetriever()
{
    pid_t pid = IPCThreadState::self()->getCallingPid();
    sp<MetadataRetrieverClient> retriever = new MetadataRetrieverClient(pid);
    ALOGV("Create new media retriever from pid %d", pid);
    return retriever;
}

Putting the above together: the Bn (Binder native) counterpart of the MediaMetadataRetriever Bp is MetadataRetrieverClient, so the actual method implementations live there.

3) Next we'll skip over the JNI and Binder plumbing and go straight to the implementation in MetadataRetrieverClient.

frameworks/av/media/libmediaplayerservice/MetadataRetrieverClient.cpp
status_t MetadataRetrieverClient::setDataSource(int fd, int64_t offset, int64_t length)
{
    ALOGV("setDataSource fd=%d, offset=%" PRId64 ", length=%" PRId64 "", fd, offset, length);
    Mutex::Autolock lock(mLock);
    struct stat sb;
    int ret = fstat(fd, &sb);
    if (ret != 0) {
        ALOGE("fstat(%d) failed: %d, %s", fd, ret, strerror(errno));
        return BAD_VALUE;
    }


    if (offset >= sb.st_size) {
        ALOGE("offset (%" PRId64 ") bigger than file size (%" PRIu64 ")", offset, sb.st_size);
        return BAD_VALUE;
    }
    if (offset + length > sb.st_size) {
        length = sb.st_size - offset;
        ALOGV("calculated length = %" PRId64 "", length);
    }
    // First determine the player type from the file content
    player_type playerType =
        MediaPlayerFactory::getPlayerType(NULL /* client */,
                                          fd,
                                          offset,
                                          length);
    ALOGV("player type = %d", playerType);
    // Then create a retriever matching that player type; in the common
    // case this is a StagefrightMetadataRetriever
    sp<MediaMetadataRetrieverBase> p = createRetriever(playerType);
    if (p == NULL) {
        return NO_INIT;
    }
    // Forward setDataSource to the StagefrightMetadataRetriever
    status_t status = p->setDataSource(fd, offset, length);
    if (status == NO_ERROR) mRetriever = p; // keep the retriever in mRetriever
    return status;
}
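Before choosing a player, setDataSource validates the (offset, length) window against the file size: an offset at or past EOF is rejected, and a window that runs past EOF is clamped. That check can be sketched standalone (SourceWindow and clampWindow are made-up names, not framework API):

```java
public class SourceWindow {
    // Returns the clamped length, or -1 if the offset is invalid,
    // mirroring the checks in MetadataRetrieverClient::setDataSource
    // (-1 plays the role of BAD_VALUE in the framework code).
    public static long clampWindow(long offset, long length, long fileSize) {
        if (offset >= fileSize) {
            return -1; // offset beyond EOF: reject
        }
        if (offset + length > fileSize) {
            return fileSize - offset; // shrink the window to end at EOF
        }
        return length;
    }

    public static void main(String[] args) {
        System.out.println(clampWindow(100, 1000, 500)); // prints 400
    }
}
```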




sp<IMemory> MetadataRetrieverClient::getFrameAtTime(
        int64_t timeUs, int option, int colorFormat, bool metaOnly)
{
    ALOGV("getFrameAtTime: time(%" PRId64 "us) option(%d) colorFormat(%d), metaOnly(%d)",
            timeUs, option, colorFormat, metaOnly);
    Mutex::Autolock lock(mLock);
    Mutex::Autolock glock(sLock);
    if (mRetriever == NULL) {
        ALOGE("retriever is not initialized");
        return NULL;
    }
    // mRetriever is the StagefrightMetadataRetriever
    sp<IMemory> frame = mRetriever->getFrameAtTime(timeUs, option, colorFormat, metaOnly);
    if (frame == NULL) {
        ALOGE("failed to capture a video frame");
        return NULL;
    }
    return frame;
}

4) And now we find that the MetadataRetrieverClient methods are in turn implemented by StagefrightMetadataRetriever... layers upon layers of wrapping!

OK, let's keep going into StagefrightMetadataRetriever...

/frameworks/av/media/libstagefright/StagefrightMetadataRetriever.cpp
status_t StagefrightMetadataRetriever::setDataSource(
        int fd, int64_t offset, int64_t length) {
    fd = dup(fd);


    ALOGV("setDataSource(%d, %" PRId64 ", %" PRId64 ")", fd, offset, length);
    AVUtils::get()->printFileName(fd);


    clearMetadata();
    mSource = new FileSource(fd, offset, length); // create mSource


    status_t err;
    if ((err = mSource->initCheck()) != OK) {
        mSource.clear();
        return err;
    }


    mExtractor = MediaExtractorFactory::Create(mSource); // create the extractor


    if (mExtractor == NULL) {
        mSource.clear();
        return UNKNOWN_ERROR;
    }
    return OK;
}


sp<IMemory> StagefrightMetadataRetriever::getFrameAtTime(
        int64_t timeUs, int option, int colorFormat, bool metaOnly) {
    ALOGV("getFrameAtTime: %" PRId64 " us option: %d colorFormat: %d, metaOnly: %d",
            timeUs, option, colorFormat, metaOnly);


    sp<IMemory> frame;
    status_t err = getFrameInternal( // delegate to getFrameInternal
            timeUs, 1, option, colorFormat, metaOnly, &frame, NULL /*outFrames*/);
    return (err == OK) ? frame : NULL;
}


status_t StagefrightMetadataRetriever::getFrameInternal(
        int64_t timeUs, int numFrames, int option, int colorFormat, bool metaOnly,
        sp<IMemory>* outFrame, std::vector<sp<IMemory> >* outFrames) {
    if (mExtractor.get() == NULL) {
        ALOGE("no extractor.");
        return NO_INIT;
    }


    sp<MetaData> fileMeta = mExtractor->getMetaData(); // the file-level metadata


    if (fileMeta == NULL) {
        ALOGE("extractor doesn't publish metadata, failed to initialize?");
        return NO_INIT;
    }


    int32_t drm = 0;
    if (fileMeta->findInt32(kKeyIsDRM, &drm) && drm != 0) {
        ALOGE("frame grab not allowed.");
        return ERROR_DRM_UNKNOWN;
    }


    size_t n = mExtractor->countTracks(); // how many tracks the file contains
    size_t i;
    for (i = 0; i < n; ++i) {
        sp<MetaData> meta = mExtractor->getTrackMetaData(i); // fetch each track's metadata in turn


        if (meta == NULL) {
            continue;
        }


        const char *mime;
        CHECK(meta->findCString(kKeyMIMEType, &mime)); // this track's MIME type

        if (!strncasecmp(mime, "video/", 6)) {
            break; // found the video track; its index is i
        }
    }


    if (i == n) { // no video track found
        ALOGE("no video track found.");
        return INVALID_OPERATION;
    }


    // fetch the video track's metadata, including the extensive metadata
    sp<MetaData> trackMeta = mExtractor->getTrackMetaData(
            i, MediaExtractor::kIncludeExtensiveMetaData);


    if (metaOnly) { // usually false
        if (outFrame != NULL) {
            *outFrame = FrameDecoder::getMetadataOnly(trackMeta, colorFormat);
            if (*outFrame != NULL) {
                return OK;
            }
        }
        return UNKNOWN_ERROR;
    }


    sp<IMediaSource> source = mExtractor->getTrack(i); // the video track itself


    if (source.get() == NULL) {
        ALOGV("unable to instantiate video track.");
        return UNKNOWN_ERROR;
    }


    const void *data;
    uint32_t type;
    size_t dataSize;
    if (fileMeta->findData(kKeyAlbumArt, &type, &data, &dataSize)
            && mAlbumArt == NULL) {
        mAlbumArt = MediaAlbumArt::fromData(dataSize, data);
    }


    const char *mime;
    CHECK(trackMeta->findCString(kKeyMIMEType, &mime));


    Vector<AString> matchingCodecs;
    MediaCodecList::findMatchingCodecs( // look up usable decoders in the MediaCodecList
            mime,
            false, /* encoder */
            0/*MediaCodecList::kPreferSoftwareCodecs*/,
            &matchingCodecs);


    for (size_t i = 0; i < matchingCodecs.size(); ++i) { // try each matching decoder in turn
        const AString &componentName = matchingCodecs[i];
        VideoFrameDecoder decoder(componentName, trackMeta, source);
        // create the decoder, init() it, then pull one frame with extractFrame()
        if (decoder.init(timeUs, numFrames, option, colorFormat) == OK) {
            if (outFrame != NULL) {
                *outFrame = decoder.extractFrame();
                if (*outFrame != NULL) {
                    return OK;
                }
            } else if (outFrames != NULL) {
                status_t err = decoder.extractFrames(outFrames);
                if (err == OK) {
                    return OK;
                }
            }
        }
        ALOGV("%s failed to extract frame, trying next decoder.", componentName.c_str());
    }


    ALOGE("all codecs failed to extract frame.");
    return UNKNOWN_ERROR;
}
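The track scan above simply picks the first track whose MIME type starts with "video/", compared case-insensitively (that is what strncasecmp(mime, "video/", 6) does). The same rule in isolation (TrackPicker and findVideoTrack are made-up names):

```java
import java.util.List;

public class TrackPicker {
    // Returns the index of the first track whose MIME type starts with
    // "video/" (case-insensitive), or -1 if there is none -- the same
    // selection rule as the strncasecmp loop in getFrameInternal.
    public static int findVideoTrack(List<String> mimes) {
        for (int i = 0; i < mimes.size(); i++) {
            // regionMatches with ignoreCase=true compares the first 6 chars
            if (mimes.get(i).regionMatches(true, 0, "video/", 0, 6)) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(findVideoTrack(List.of("audio/mp4a-latm", "Video/AVC"))); // prints 1
    }
}
```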

To recap: StagefrightMetadataRetriever creates the extractor in setDataSource. Then, in getFrameInternal, it reads the file-level metadata, loops over every track to find the video track, and uses the track metadata to look up all decoders that can handle it. Finally it tries each candidate decoder in turn to decode a single frame.

The actual decoding happens in FrameDecoder::extractFrame.

5) Let's look at how that is implemented.

/frameworks/av/media/libstagefright/FrameDecoder.cpp


status_t FrameDecoder::init(
        int64_t frameTimeUs, size_t numFrames, int option, int colorFormat) {
    if (!getDstColorFormat(
            (android_pixel_format_t)colorFormat, &mDstFormat, &mDstBpp)) {
        return ERROR_UNSUPPORTED;
    }


    sp<AMessage> videoFormat = onGetFormatAndSeekOptions(
            frameTimeUs, numFrames, option, &mReadOptions);
    if (videoFormat == NULL) {
        ALOGE("video format or seek mode not supported");
        return ERROR_UNSUPPORTED;
    }


    status_t err;
    sp<ALooper> looper = new ALooper;
    looper->start();
    // create the decoder from the component name; the decoder is a MediaCodec
    sp<MediaCodec> decoder = MediaCodec::CreateByComponentName(
            looper, mComponentName, &err);
    if (decoder.get() == NULL || err != OK) {
        ALOGW("Failed to instantiate decoder [%s]", mComponentName.c_str());
        return (decoder.get() == NULL) ? NO_MEMORY : err;
    }


    err = decoder->configure( // first configure the decoder
            videoFormat, NULL /* surface */, NULL /* crypto */, 0 /* flags */);
    if (err != OK) {
        ALOGW("configure returned error %d (%s)", err, asString(err));
        decoder->release(); // release on error
        return err;
    }


    err = decoder->start(); // then start the decoder
    if (err != OK) {
        ALOGW("start returned error %d (%s)", err, asString(err));
        decoder->release(); // release on error
        return err;
    }


    err = mSource->start(); // mSource is the video track's media source
    if (err != OK) {
        ALOGW("source failed to start: %d (%s)", err, asString(err));
        decoder->release();
        return err;
    }
    mDecoder = decoder; // keep the decoder in mDecoder


    return OK;
}


sp<IMemory> FrameDecoder::extractFrame(FrameRect *rect) {
    status_t err = onExtractRect(rect); // first validate/advance the target rect
    if (err == OK) {
        err = extractInternal(); // then run the actual decode loop
    }
    if (err != OK) {
        return NULL;
    }

    return mFrames.size() > 0 ? mFrames[0] : NULL; // the frame handed back to the caller
}


status_t ImageDecoder::onExtractRect(FrameRect *rect) {
    if (rect == NULL) {
        if (mTilesDecoded > 0) {
            return ERROR_UNSUPPORTED;
        }
        mTargetTiles = mGridRows * mGridCols;
        return OK;
    }


    if (mTileWidth <= 0 || mTileHeight <=0) {
        return ERROR_UNSUPPORTED;
    }


    int32_t row = mTilesDecoded / mGridCols;
    int32_t expectedTop = row * mTileHeight;
    int32_t expectedBot = (row + 1) * mTileHeight;
    if (expectedBot > mHeight) {
        expectedBot = mHeight;
    }
    if (rect->left != 0 || rect->top != expectedTop
            || rect->right != mWidth || rect->bottom != expectedBot) {
        ALOGE("currently only support sequential decoding of slices");
        return ERROR_UNSUPPORTED;
    }


    // advance one row
    mTargetTiles = mTilesDecoded + mGridCols;
    return OK;
}




status_t FrameDecoder::extractInternal() {
    status_t err = OK;
    bool done = false;
    size_t retriesLeft = kRetryCount;
    do {
        size_t index;
        int64_t ptsUs = 0ll;
        uint32_t flags = 0;


        while (mHaveMoreInputs) {
            // dequeue an input buffer from the decoder; this yields the buffer's index
            err = mDecoder->dequeueInputBuffer(&index, 0);
            if (err != OK) {
                ALOGV("Timed out waiting for input");
                if (retriesLeft) {
                    err = OK;
                }
                break;
            }
            sp<MediaCodecBuffer> codecBuffer;
            // then call getInputBuffer to obtain the actual codec buffer for that index
            err = mDecoder->getInputBuffer(index, &codecBuffer);
            if (err != OK) {
                ALOGE("failed to get input buffer %zu", index);
                break;
            }


            MediaBufferBase *mediaBuffer = NULL;
            // read one access unit of compressed data from the video track
            err = mSource->read(&mediaBuffer, &mReadOptions);
            mReadOptions.clearSeekTo();
            if (err != OK) {
                ALOGW("Input Error or EOS");
                mHaveMoreInputs = false;
                if (!mFirstSample && err == ERROR_END_OF_STREAM) {
                    err = OK;
                    flags |= MediaCodec::BUFFER_FLAG_EOS;
                    mHaveMoreInputs = true;
                }
                break;
            }
            // make sure codecBuffer is large enough, then copy mediaBuffer's data into it
            if (mediaBuffer->range_length() > codecBuffer->capacity()) {
                ALOGE("buffer size (%zu) too large for codec input size (%zu)",
                        mediaBuffer->range_length(), codecBuffer->capacity());
                mHaveMoreInputs = false;
                err = BAD_VALUE;
            } else {
                codecBuffer->setRange(0, mediaBuffer->range_length());


                CHECK(mediaBuffer->meta_data().findInt64(kKeyTime, &ptsUs));
                memcpy(codecBuffer->data(),
                        (const uint8_t*)mediaBuffer->data() + mediaBuffer->range_offset(),
                        mediaBuffer->range_length());


                onInputReceived(codecBuffer, mediaBuffer->meta_data(), mFirstSample, &flags);
                mFirstSample = false;
            }


            mediaBuffer->release(); // release the media buffer


            if (mHaveMoreInputs) {
                ALOGV("QueueInput: size=%zu ts=%" PRId64 " us flags=%x",
                        codecBuffer->size(), ptsUs, flags);


                err = mDecoder->queueInputBuffer( // feed the compressed data to the decoder
                        index,
                        codecBuffer->offset(),
                        codecBuffer->size(),
                        ptsUs,
                        flags);


                if (flags & MediaCodec::BUFFER_FLAG_EOS) {
                    mHaveMoreInputs = false;
                }
            }
        }


        while (err == OK) {
            size_t offset, size;
            // wait for a decoded buffer:
            // dequeue an output buffer from the decoder, obtaining its index
            err = mDecoder->dequeueOutputBuffer(
                    &index,
                    &offset,
                    &size,
                    &ptsUs,
                    &flags,
                    kBufferTimeOutUs);


            if (err == INFO_FORMAT_CHANGED) {
                ALOGV("Received format change");
                err = mDecoder->getOutputFormat(&mOutputFormat);
            } else if (err == INFO_OUTPUT_BUFFERS_CHANGED) {
                ALOGV("Output buffers changed");
                err = OK;
            } else {
                if (err == -EAGAIN /* INFO_TRY_AGAIN_LATER */ && --retriesLeft > 0) {
                    err = OK;
                } else if (err == OK) {
                    sp<MediaCodecBuffer> videoFrameBuffer;
                    // fetch the decoded output buffer for that index
                    err = mDecoder->getOutputBuffer(index, &videoFrameBuffer);
                    if (err != OK) {
                        ALOGE("failed to get output buffer %zu", index);
                        break;
                    }
                    // hand the decoded frame to onOutputReceived (color conversion and copy-out)
                    err = onOutputReceived(videoFrameBuffer, mOutputFormat, ptsUs, &done);
                    mDecoder->releaseOutputBuffer(index); // return the buffer to the decoder
                } else {
                    ALOGW("Received error %d (%s) instead of output", err, asString(err));
                    done = true;
                }
                break;
            }
        }
    } while (err == OK && !done);


    if (err != OK) {
        ALOGE("failed to get video frame (err %d)", err);
    }


    return err;
}


// Finally, what happens once the decoder has produced a frame:
status_t VideoFrameDecoder::onOutputReceived(
        const sp<MediaCodecBuffer> &videoFrameBuffer,
        const sp<AMessage> &outputFormat,
        int64_t timeUs, bool *done) {
    bool shouldOutput = (mTargetTimeUs < 0ll) || (timeUs >= mTargetTimeUs);


    // If this is not the target frame, skip color convert.
    if (!shouldOutput) {
        *done = false;
        return OK;
    }


    *done = (++mNumFramesDecoded >= mNumFrames);


    if (outputFormat == NULL) {
        return ERROR_MALFORMED;
    }


    int32_t width, height, stride, slice_height;
    CHECK(outputFormat->findInt32("width", &width));
    CHECK(outputFormat->findInt32("height", &height));
    CHECK(outputFormat->findInt32("stride", &stride));
    CHECK(outputFormat->findInt32("slice-height", &slice_height));


    int32_t crop_left, crop_top, crop_right, crop_bottom;
    if (!outputFormat->findRect("crop", &crop_left, &crop_top, &crop_right, &crop_bottom)) {
        crop_left = crop_top = 0;
        crop_right = width - 1;
        crop_bottom = height - 1;
    }


    sp<IMemory> frameMem = allocVideoFrame( // allocate the output memory
            trackMeta(),
            (crop_right - crop_left + 1),
            (crop_bottom - crop_top + 1),
            0,
            0,
            dstBpp());
    addFrame(frameMem); // append frameMem to mFrames
    VideoFrame* frame = static_cast<VideoFrame*>(frameMem->pointer());


    int32_t srcFormat;
    CHECK(outputFormat->findInt32("color-format", &srcFormat));
    // Decide, from the decoder's output format and the format the app requested,
    // whether a color conversion is needed. Details omitted here.
    ColorConverter converter((OMX_COLOR_FORMATTYPE)srcFormat, dstFormat());


    if (converter.isValid()) {
        converter.convert(
                (const uint8_t *)videoFrameBuffer->data(),
                stride, slice_height,
                crop_left, crop_top, crop_right, crop_bottom,
                frame->getFlattenedData(),
                frame->mWidth,
                frame->mHeight,
                crop_left, crop_top, crop_right, crop_bottom);
        return OK;
    }


    ALOGE("Unable to convert from format 0x%08x to 0x%08x",
                srcFormat, dstFormat());
    return ERROR_UNSUPPORTED;
}
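Note that the crop rect is inclusive on both ends, which is why allocVideoFrame receives (crop_right - crop_left + 1) and (crop_bottom - crop_top + 1), and why the default crop is (0, 0, width - 1, height - 1). A tiny check of that arithmetic (CropRect is a hypothetical helper):

```java
public class CropRect {
    // Width/height of an inclusive crop rect, as used by allocVideoFrame above.
    public static int[] cropSize(int left, int top, int right, int bottom) {
        return new int[] { right - left + 1, bottom - top + 1 };
    }

    public static void main(String[] args) {
        // Default crop for a 1280x720 frame: (0, 0, 1279, 719)
        int[] wh = cropSize(0, 0, 1279, 719);
        System.out.println(wh[0] + "x" + wh[1]); // prints 1280x720
    }
}
```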

And that's the entire flow for obtaining a video thumbnail on Android. A flow diagram would make it even more intuitive; I'll draw one when I find the time. Did you manage to follow it all?



Posted 2024-01-23 by 小驰行动派