Android 12 (S) MultiMedia Learning (9): MediaCodec

In this section we'll look at how MediaCodec works. The relevant source file:

http://aospxref.com/android-12.0.0_r3/xref/frameworks/av/media/libstagefright/MediaCodec.cpp

 

1. Creating the MediaCodec object

MediaCodec exposes two static factory methods for creating a MediaCodec object: CreateByType and CreateByComponentName. Let's look at each in turn.

CreateByType takes a MIME type plus a flag indicating whether an encoder is wanted. It queries MediaCodecList for matching codecs, creates a MediaCodec object, and finally initializes that object with the component name that was found.

// static
sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const AString &mime, bool encoder, status_t *err, pid_t pid,
        uid_t uid) {
    sp<AMessage> format;
    return CreateByType(looper, mime, encoder, err, pid, uid, format);
}

sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const AString &mime, bool encoder, status_t *err, pid_t pid,
        uid_t uid, sp<AMessage> format) {
    Vector<AString> matchingCodecs;

    MediaCodecList::findMatchingCodecs(
            mime.c_str(),
            encoder,
            0,
            format,
            &matchingCodecs);

    if (err != NULL) {
        *err = NAME_NOT_FOUND;
    }
    for (size_t i = 0; i < matchingCodecs.size(); ++i) {
        sp<MediaCodec> codec = new MediaCodec(looper, pid, uid);
        AString componentName = matchingCodecs[i];
        status_t ret = codec->init(componentName);
        if (err != NULL) {
            *err = ret;
        }
        if (ret == OK) {
            return codec;
        }
        ALOGD("Allocating component '%s' failed (%d), try next one.",
                componentName.c_str(), ret);
    }
    return NULL;
}

CreateByComponentName is similar to the method above; since the caller supplies the componentName directly, the MediaCodecList lookup step is not needed.

// static
sp<MediaCodec> MediaCodec::CreateByComponentName(
        const sp<ALooper> &looper, const AString &name, status_t *err, pid_t pid, uid_t uid) {
    sp<MediaCodec> codec = new MediaCodec(looper, pid, uid);

    const status_t ret = codec->init(name);
    if (err != NULL) {
        *err = ret;
    }
    return ret == OK ? codec : NULL; // NULL deallocates codec.
}

init

status_t MediaCodec::init(const AString &name) {
    // save the component name
    mInitName = name;

    mCodecInfo.clear();

    bool secureCodec = false;
    const char *owner = "";
    // look up the codecInfo for this component name in MediaCodecList
    if (!name.startsWith("android.filter.")) {
        status_t err = mGetCodecInfo(name, &mCodecInfo);
        if (err != OK) {
            mCodec = NULL;  // remove the codec.
            return err;
        }
        if (mCodecInfo == nullptr) {
            ALOGE("Getting codec info with name '%s' failed", name.c_str());
            return NAME_NOT_FOUND;
        }
        secureCodec = name.endsWith(".secure");
        Vector<AString> mediaTypes;
        mCodecInfo->getSupportedMediaTypes(&mediaTypes);
        for (size_t i = 0; i < mediaTypes.size(); ++i) {
            if (mediaTypes[i].startsWith("video/")) {
                mIsVideo = true;
                break;
            }
        }
        // get the owner name
        owner = mCodecInfo->getOwnerName();
    }
    // create a CodecBase object based on the owner
    mCodec = mGetCodecBase(name, owner);
    if (mCodec == NULL) {
        ALOGE("Getting codec base with name '%s' (owner='%s') failed", name.c_str(), owner);
        return NAME_NOT_FOUND;
    }
    // if the codecInfo said this is video, create a dedicated looper for it
    if (mIsVideo) {
        // video codec needs dedicated looper
        if (mCodecLooper == NULL) {
            mCodecLooper = new ALooper;
            mCodecLooper->setName("CodecLooper");
            mCodecLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
        }

        mCodecLooper->registerHandler(mCodec);
    } else {
        mLooper->registerHandler(mCodec);
    }

    mLooper->registerHandler(this);
    // register a callback with the CodecBase
    mCodec->setCallback(
            std::unique_ptr<CodecBase::CodecCallback>(
                    new CodecCallback(new AMessage(kWhatCodecNotify, this))));
    // get the CodecBase's BufferChannel
    mBufferChannel = mCodec->getBufferChannel();
    // register a callback with the BufferChannel
    mBufferChannel->setCallback(
            std::unique_ptr<CodecBase::BufferCallback>(
                    new BufferCallback(new AMessage(kWhatCodecNotify, this))));

    sp<AMessage> msg = new AMessage(kWhatInit, this);
    if (mCodecInfo) {
        msg->setObject("codecInfo", mCodecInfo);
        // name may be different from mCodecInfo->getCodecName() if we stripped
        // ".secure"
    }
    msg->setString("name", name);

    // ......
    sp<AMessage> response;
    status_t err = PostAndAwaitResponse(msg, &response);

    return err;
}

The init method does the following:

1. Get the codecInfo for the componentName from MediaCodecList (mGetCodecInfo is a function pointer assigned in the constructor), inspect the codecInfo's media types to decide whether this MediaCodec instance is for video or audio, and fetch the component's owner. Why look up the owner? Android currently has two codec frameworks, OMX and Codec 2.0, and the component owner marks which framework the component belongs to.

//static
sp<CodecBase> MediaCodec::GetCodecBase(const AString &name, const char *owner) {
    if (owner) {
        if (strcmp(owner, "default") == 0) {
            return new ACodec;
        } else if (strncmp(owner, "codec2", 6) == 0) {
            return CreateCCodec();
        }
    }

    if (name.startsWithIgnoreCase("c2.")) {
        return CreateCCodec();
    } else if (name.startsWithIgnoreCase("omx.")) {
        // at this time only ACodec specifies a mime type.
        return new ACodec;
    } else if (name.startsWithIgnoreCase("android.filter.")) {
        return new MediaFilter;
    } else {
        return NULL;
    }
}

The code that creates the CodecBase is short, so it is shown in full above. Note that there are two decision mechanisms: the choice can be made from the owner, or from the prefix of the component name.

2. Give the CodecBase a looper: a video codec gets a newly created dedicated looper, an audio codec reuses the looper passed down from above, and MediaCodec itself also uses the looper passed down from above.

3. Register a callback with the CodecBase. The registered object is a CodecCallback whose AMessage target is the MediaCodec object, so callback messages emitted by the CodecBase are relayed through CodecCallback to MediaCodec for handling.

4. Get the CodecBase's BufferChannel.

5. Register a callback with the BufferChannel. The registered object is a BufferCallback, and the mechanism is the same as for the CodecBase callback.

6. Send a kWhatInit message, handled in onMessageReceived, which repacks the codecInfo and componentName to initialize the CodecBase object:

            setState(INITIALIZING);

            sp<RefBase> codecInfo;
            (void)msg->findObject("codecInfo", &codecInfo);
            AString name;
            CHECK(msg->findString("name", &name));

            sp<AMessage> format = new AMessage;
            if (codecInfo) {
                format->setObject("codecInfo", codecInfo);
            }
            format->setString("componentName", name);

            mCodec->initiateAllocateComponent(format);

At this point the creation of MediaCodec is complete. How the CodecBase is created and initialized will be studied separately.

2. configure

The configure code is long but straightforward, so only a small part is shown here:

    sp<AMessage> msg = new AMessage(kWhatConfigure, this);
    msg->setMessage("format", format);
    msg->setInt32("flags", flags);
    msg->setObject("surface", surface);

    if (crypto != NULL || descrambler != NULL) {
        if (crypto != NULL) {
            msg->setPointer("crypto", crypto.get());
        } else {
            msg->setPointer("descrambler", descrambler.get());
        }
        if (mMetricsHandle != 0) {
            mediametrics_setInt32(mMetricsHandle, kCodecCrypto, 1);
        }
    } else if (mFlags & kFlagIsSecure) {
        ALOGW("Crypto or descrambler should be given for secure codec");
    }
    err = PostAndAwaitResponse(msg, &response);

This method does two things:

1. Parse the parameters out of the incoming format and store them in MediaCodec.

2. Repack the format, surface, crypto, and related information into a message handled in onMessageReceived:

        case kWhatConfigure:
        {
            sp<RefBase> obj;
            CHECK(msg->findObject("surface", &obj));

            sp<AMessage> format;
            CHECK(msg->findMessage("format", &format));
            // setSurface
            if (obj != NULL) {
                if (!format->findInt32(KEY_ALLOW_FRAME_DROP, &mAllowFrameDroppingBySurface)) {
                    // allow frame dropping by surface by default
                    mAllowFrameDroppingBySurface = true;
                }

                format->setObject("native-window", obj);
                status_t err = handleSetSurface(static_cast<Surface *>(obj.get()));
                if (err != OK) {
                    PostReplyWithError(replyID, err);
                    break;
                }
            } else {
                // we are not using surface so this variable is not used, but initialize sensibly anyway
                mAllowFrameDroppingBySurface = false;

                handleSetSurface(NULL);
            }

            uint32_t flags;
            CHECK(msg->findInt32("flags", (int32_t *)&flags));
            if (flags & CONFIGURE_FLAG_USE_BLOCK_MODEL) {
                if (!(mFlags & kFlagIsAsync)) {
                    PostReplyWithError(replyID, INVALID_OPERATION);
                    break;
                }
                mFlags |= kFlagUseBlockModel;
            }
            mReplyID = replyID;
            setState(CONFIGURING);
            // fetch the crypto object
            void *crypto;
            if (!msg->findPointer("crypto", &crypto)) {
                crypto = NULL;
            }
            // hand the crypto object to the BufferChannel
            mCrypto = static_cast<ICrypto *>(crypto);
            mBufferChannel->setCrypto(mCrypto);
            // fetch the descrambler
            void *descrambler;
            if (!msg->findPointer("descrambler", &descrambler)) {
                descrambler = NULL;
            }
            // hand the descrambler to the BufferChannel
            mDescrambler = static_cast<IDescrambler *>(descrambler);
            mBufferChannel->setDescrambler(mDescrambler);

            // check the flags to see whether this is an encoder
            format->setInt32("flags", flags);
            if (flags & CONFIGURE_FLAG_ENCODE) {
                format->setInt32("encoder", true);
                mFlags |= kFlagIsEncoder;
            }

            // extract the CSD buffers
            extractCSD(format);

            // check whether tunnel mode is requested
            int32_t tunneled;
            if (format->findInt32("feature-tunneled-playback", &tunneled) && tunneled != 0) {
                ALOGI("Configuring TUNNELED video playback.");
                mTunneled = true;
            } else {
                mTunneled = false;
            }

            int32_t background = 0;
            if (format->findInt32("android._background-mode", &background) && background) {
                androidSetThreadPriority(gettid(), ANDROID_PRIORITY_BACKGROUND);
            }
            // invoke the CodecBase's configure path
            mCodec->initiateConfigureComponent(format);
            break;
        }

configure is critically important: player features such as whether a surface is used, tunnel mode, secure playback, and whether the codec acts as an encoder are all set up here.

3. start

After configuration the MediaCodec state is set to CONFIGURED, and playback can begin.

      setState(STARTING);
      mCodec->initiateStart();

The start method is simple: set the state to STARTING and call the CodecBase's start method. Presumably, once the CodecBase starts successfully a callback sets the state to STARTED.

4. setCallback

setCallback really belongs before start, because the upper layer can only use MediaCodec properly once a callback is set. The callback forwards events that the lower layers deliver to MediaCodec up to the next layer, which handles events such as CB_INPUT_AVAILABLE.

The method is simple:

      sp<AMessage> callback;
      CHECK(msg->findMessage("callback", &callback));
      mCallback = callback;

5. Getting buffers from the upper layer

Two pairs of methods are involved: getInputBuffers / getOutputBuffers and getInputBuffer / getOutputBuffer.

getInputBuffers / getOutputBuffers fetch the decoder's entire input or output buffer array in one call. The buffers created in the CodecBase are managed by the BufferChannel, so these end up calling the BufferChannel's getInputBufferArray / getOutputBufferArray methods:

status_t MediaCodec::getInputBuffers(Vector<sp<MediaCodecBuffer> > *buffers) const {
    sp<AMessage> msg = new AMessage(kWhatGetBuffers, this);
    msg->setInt32("portIndex", kPortIndexInput);
    msg->setPointer("buffers", buffers);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}

    case kWhatGetBuffers:
    {
        sp<AReplyToken> replyID;
        CHECK(msg->senderAwaitsResponse(&replyID));
        if (!isExecuting() || (mFlags & kFlagIsAsync)) {
            PostReplyWithError(replyID, INVALID_OPERATION);
            break;
        } else if (mFlags & kFlagStickyError) {
            PostReplyWithError(replyID, getStickyError());
            break;
        }

        int32_t portIndex;
        CHECK(msg->findInt32("portIndex", &portIndex));

        Vector<sp<MediaCodecBuffer> > *dstBuffers;
        CHECK(msg->findPointer("buffers", (void **)&dstBuffers));

        dstBuffers->clear();
        if (portIndex != kPortIndexInput || !mHaveInputSurface) {
            if (portIndex == kPortIndexInput) {
                mBufferChannel->getInputBufferArray(dstBuffers);
            } else {
                mBufferChannel->getOutputBufferArray(dstBuffers);
            }
        }

        (new AMessage)->postReply(replyID);
        break;
    }

getInputBuffer / getOutputBuffer look up a buffer by index in MediaCodec's buffer lists; the elements of those lists are added by the CodecBase through its callback methods.

status_t MediaCodec::getOutputBuffer(size_t index, sp<MediaCodecBuffer> *buffer) {
    sp<AMessage> format;
    return getBufferAndFormat(kPortIndexOutput, index, buffer, &format);
}

status_t MediaCodec::getBufferAndFormat(
        size_t portIndex, size_t index,
        sp<MediaCodecBuffer> *buffer, sp<AMessage> *format) {

    if (buffer == NULL) {
        ALOGE("getBufferAndFormat - null MediaCodecBuffer");
        return INVALID_OPERATION;
    }

    if (format == NULL) {
        ALOGE("getBufferAndFormat - null AMessage");
        return INVALID_OPERATION;
    }

    buffer->clear();
    format->clear();

    if (!isExecuting()) {
        ALOGE("getBufferAndFormat - not executing");
        return INVALID_OPERATION;
    }

    Mutex::Autolock al(mBufferLock);

    std::vector<BufferInfo> &buffers = mPortBuffers[portIndex];
    if (index >= buffers.size()) {
        return INVALID_OPERATION;
    }

    const BufferInfo &info = buffers[index];
    if (!info.mOwnedByClient) {
        return INVALID_OPERATION;
    }

    *buffer = info.mData;
    *format = info.mData->format();

    return OK;
}

6. How buffers are processed

Next, let's look at how input and output buffers are processed.

kPortIndexInput

The BufferChannel calls BufferCallback's onInputBufferAvailable method to add an input buffer to the queue:

void BufferCallback::onInputBufferAvailable(
        size_t index, const sp<MediaCodecBuffer> &buffer) {
    sp<AMessage> notify(mNotify->dup());
    notify->setInt32("what", kWhatFillThisBuffer);
    notify->setSize("index", index);
    notify->setObject("buffer", buffer);
    notify->post();
}

The handling in onMessageReceived is not too long; it does five things:

    case kWhatFillThisBuffer:
    {
        // add the buffer to mPortBuffers and its index to mAvailPortBuffers
        /* size_t index = */updateBuffers(kPortIndexInput, msg);
        
        // when flushing/stopping/releasing, clear the available indices and discard the buffer contents
        if (mState == FLUSHING
                || mState == STOPPING
                || mState == RELEASING) {
            returnBuffersToCodecOnPort(kPortIndexInput);
            break;
        }
        // if there are CSD buffers, write them to the decoder first, then clear them;
        // they may be set again after the next seek/flush
        if (!mCSD.empty()) {
            ssize_t index = dequeuePortBuffer(kPortIndexInput);
            CHECK_GE(index, 0);

            status_t err = queueCSDInputBuffer(index);

            if (err != OK) {
                ALOGE("queueCSDInputBuffer failed w/ error %d",
                      err);

                setStickyError(err);
                postActivityNotificationIfPossible();

                cancelPendingDequeueOperations();
            }
            break;
        }
        // process buffers in mLeftover first (not encountered yet in this walkthrough)
        if (!mLeftover.empty()) {
            ssize_t index = dequeuePortBuffer(kPortIndexInput);
            CHECK_GE(index, 0);

            status_t err = handleLeftover(index);
            if (err != OK) {
                setStickyError(err);
                postActivityNotificationIfPossible();
                cancelPendingDequeueOperations();
            }
            break;
        }
        // in async mode (a callback is set), call onInputBufferAvailable to notify the
        // upper layer; otherwise wait for a synchronous call
        if (mFlags & kFlagIsAsync) {
            if (!mHaveInputSurface) {
                if (mState == FLUSHED) {
                    mHavePendingInputBuffers = true;
                } else {
                    onInputBufferAvailable();
                }
            }
        } else if (mFlags & kFlagDequeueInputPending) {
            CHECK(handleDequeueInputBuffer(mDequeueInputReplyID));

            ++mDequeueInputTimeoutGeneration;
            mFlags &= ~kFlagDequeueInputPending;
            mDequeueInputReplyID = 0;
        } else {
            postActivityNotificationIfPossible();
        }
        break;
    }    

1. Call updateBuffers to store the delivered input buffer in mPortBuffers[kPortIndexInput] and its index in mAvailPortBuffers.

2. Check whether the current state requires discarding all buffers.

3. If there are CSD buffers, write them to the decoder first.

4. Finish processing the buffers in mLeftover first (not encountered yet).

5. If a callback is set (asynchronous mode), call onInputBufferAvailable to notify the upper layer; otherwise wait for a synchronous call.

void MediaCodec::onInputBufferAvailable() {
    int32_t index;
    // loop until no indices are left in mAvailPortBuffers
    while ((index = dequeuePortBuffer(kPortIndexInput)) >= 0) {
        sp<AMessage> msg = mCallback->dup();
        msg->setInt32("callbackID", CB_INPUT_AVAILABLE);
        msg->setInt32("index", index);
        // notify the upper layer
        msg->post();
    }
}

ssize_t MediaCodec::dequeuePortBuffer(int32_t portIndex) {
    CHECK(portIndex == kPortIndexInput || portIndex == kPortIndexOutput);

    // get the first available index in mAvailPortBuffers, then fetch the
    // buffer at that position in mPortBuffers
    BufferInfo *info = peekNextPortBuffer(portIndex);
    if (!info) {
        return -EAGAIN;
    }

    List<size_t> *availBuffers = &mAvailPortBuffers[portIndex];
    size_t index = *availBuffers->begin();
    CHECK_EQ(info, &mPortBuffers[portIndex][index]);
    // erase the first index
    availBuffers->erase(availBuffers->begin());
    // mOwnedByClient will be studied in the CodecBase
    CHECK(!info->mOwnedByClient);
    {
        Mutex::Autolock al(mBufferLock);
        info->mOwnedByClient = true;

        // set image-data
        if (info->mData->format() != NULL) {
            sp<ABuffer> imageData;
            if (info->mData->format()->findBuffer("image-data", &imageData)) {
                info->mData->meta()->setBuffer("image-data", imageData);
            }
            int32_t left, top, right, bottom;
            if (info->mData->format()->findRect("crop", &left, &top, &right, &bottom)) {
                info->mData->meta()->setRect("crop-rect", left, top, right, bottom);
            }
        }
    }
    // return the index
    return index;
}

onInputBufferAvailable notifies the upper layer of every input buffer index in the queue at once. With an index in hand, the upper layer can fetch the buffer via getInputBuffer, fill it, and finally call queueInputBuffer to hand it to the decoder. Let's see how the write happens.

status_t MediaCodec::queueInputBuffer(
        size_t index,
        size_t offset,
        size_t size,
        int64_t presentationTimeUs,
        uint32_t flags,
        AString *errorDetailMsg) {
    if (errorDetailMsg != NULL) {
        errorDetailMsg->clear();
    }

    sp<AMessage> msg = new AMessage(kWhatQueueInputBuffer, this);
    msg->setSize("index", index);
    msg->setSize("offset", offset);
    msg->setSize("size", size);
    msg->setInt64("timeUs", presentationTimeUs);
    msg->setInt32("flags", flags);
    msg->setPointer("errorDetailMsg", errorDetailMsg);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}

queueInputBuffer packs the index, pts, flags, size, and other information into a message; the actual processing happens in onMessageReceived:

        case kWhatQueueInputBuffer:
        {
            sp<AReplyToken> replyID;
            CHECK(msg->senderAwaitsResponse(&replyID));

            if (!isExecuting()) {
                PostReplyWithError(replyID, INVALID_OPERATION);
                break;
            } else if (mFlags & kFlagStickyError) {
                PostReplyWithError(replyID, getStickyError());
                break;
            }

            status_t err = UNKNOWN_ERROR;
            // if mLeftover is not empty, append the message to it first
            if (!mLeftover.empty()) {
                mLeftover.push_back(msg);
                size_t index;
                msg->findSize("index", &index);
                err = handleLeftover(index);
            } else {
                // otherwise process it directly in onQueueInputBuffer
                err = onQueueInputBuffer(msg);
            }

            PostReplyWithError(replyID, err);
            break;
        }

There are two processing paths: either the message joins the mLeftover queue and is processed by handleLeftover, or it goes straight to onQueueInputBuffer. Since we haven't encountered mLeftover yet, let's first look at how onQueueInputBuffer handles it.

status_t MediaCodec::onQueueInputBuffer(const sp<AMessage> &msg) {
    size_t index;
    size_t offset;
    size_t size;
    int64_t timeUs;
    uint32_t flags;
    CHECK(msg->findSize("index", &index));
    CHECK(msg->findInt64("timeUs", &timeUs));
    CHECK(msg->findInt32("flags", (int32_t *)&flags));
    std::shared_ptr<C2Buffer> c2Buffer;
    sp<hardware::HidlMemory> memory;
    sp<RefBase> obj;
    // c2buffer / memory are used by queueCSDInputBuffer / queueEncryptedBuffer
    if (msg->findObject("c2buffer", &obj)) {
        CHECK(obj);
        c2Buffer = static_cast<WrapperObject<std::shared_ptr<C2Buffer>> *>(obj.get())->value;
    } else if (msg->findObject("memory", &obj)) {
        CHECK(obj);
        memory = static_cast<WrapperObject<sp<hardware::HidlMemory>> *>(obj.get())->value;
        CHECK(msg->findSize("offset", &offset));
    } else {
        CHECK(msg->findSize("offset", &offset));
    }
    const CryptoPlugin::SubSample *subSamples;
    size_t numSubSamples;
    const uint8_t *key;
    const uint8_t *iv;
    CryptoPlugin::Mode mode = CryptoPlugin::kMode_Unencrypted;

    CryptoPlugin::SubSample ss;
    CryptoPlugin::Pattern pattern;

    if (msg->findSize("size", &size)) {
        if (hasCryptoOrDescrambler()) {
            ss.mNumBytesOfClearData = size;
            ss.mNumBytesOfEncryptedData = 0;

            subSamples = &ss;
            numSubSamples = 1;
            key = NULL;
            iv = NULL;
            pattern.mEncryptBlocks = 0;
            pattern.mSkipBlocks = 0;
        }
    } else if (!c2Buffer) {
        if (!hasCryptoOrDescrambler()) {
            return -EINVAL;
        }

        CHECK(msg->findPointer("subSamples", (void **)&subSamples));
        CHECK(msg->findSize("numSubSamples", &numSubSamples));
        CHECK(msg->findPointer("key", (void **)&key));
        CHECK(msg->findPointer("iv", (void **)&iv));
        CHECK(msg->findInt32("encryptBlocks", (int32_t *)&pattern.mEncryptBlocks));
        CHECK(msg->findInt32("skipBlocks", (int32_t *)&pattern.mSkipBlocks));

        int32_t tmp;
        CHECK(msg->findInt32("mode", &tmp));

        mode = (CryptoPlugin::Mode)tmp;

        size = 0;
        for (size_t i = 0; i < numSubSamples; ++i) {
            size += subSamples[i].mNumBytesOfClearData;
            size += subSamples[i].mNumBytesOfEncryptedData;
        }
    }

    if (index >= mPortBuffers[kPortIndexInput].size()) {
        return -ERANGE;
    }
    // fetch the buffer at this index in mPortBuffers[kPortIndexInput]
    BufferInfo *info = &mPortBuffers[kPortIndexInput][index];
    sp<MediaCodecBuffer> buffer = info->mData;

    if (c2Buffer || memory) {
        sp<AMessage> tunings;
        CHECK(msg->findMessage("tunings", &tunings));
        onSetParameters(tunings);

        status_t err = OK;
        if (c2Buffer) {
            err = mBufferChannel->attachBuffer(c2Buffer, buffer);
        } else if (memory) {
            err = mBufferChannel->attachEncryptedBuffer(
                    memory, (mFlags & kFlagIsSecure), key, iv, mode, pattern,
                    offset, subSamples, numSubSamples, buffer);
        } else {
            err = UNKNOWN_ERROR;
        }

        if (err == OK && !buffer->asC2Buffer()
                && c2Buffer && c2Buffer->data().type() == C2BufferData::LINEAR) {
            C2ConstLinearBlock block{c2Buffer->data().linearBlocks().front()};
            if (block.size() > buffer->size()) {
                C2ConstLinearBlock leftover = block.subBlock(
                        block.offset() + buffer->size(), block.size() - buffer->size());
                sp<WrapperObject<std::shared_ptr<C2Buffer>>> obj{
                    new WrapperObject<std::shared_ptr<C2Buffer>>{
                        C2Buffer::CreateLinearBuffer(leftover)}};
                msg->setObject("c2buffer", obj);
                mLeftover.push_front(msg);
                // Not sending EOS if we have leftovers
                flags &= ~BUFFER_FLAG_EOS;
            }
        }

        offset = buffer->offset();
        size = buffer->size();
        if (err != OK) {
            return err;
        }
    }

    if (buffer == nullptr || !info->mOwnedByClient) {
        return -EACCES;
    }

    if (offset + size > buffer->capacity()) {
        return -EINVAL;
    }
    // pack the given offset and pts into the buffer
    buffer->setRange(offset, size);
    buffer->meta()->setInt64("timeUs", timeUs);
    if (flags & BUFFER_FLAG_EOS) {
        // on EOS, also set the corresponding flag in the buffer
        buffer->meta()->setInt32("eos", true);
    }
    // for a CSD buffer, raise the corresponding flag to notify the codec
    if (flags & BUFFER_FLAG_CODECCONFIG) {
        buffer->meta()->setInt32("csd", true);
    }
    // (not entirely clear yet what this flag is for)
    if (mTunneled) {
        TunnelPeekState previousState = mTunnelPeekState;
        switch(mTunnelPeekState){
            case TunnelPeekState::kEnabledNoBuffer:
                buffer->meta()->setInt32("tunnel-first-frame", 1);
                mTunnelPeekState = TunnelPeekState::kEnabledQueued;
                break;
            case TunnelPeekState::kDisabledNoBuffer:
                buffer->meta()->setInt32("tunnel-first-frame", 1);
                mTunnelPeekState = TunnelPeekState::kDisabledQueued;
                break;
            default:
                break;
        }
    }

    status_t err = OK;
    if (hasCryptoOrDescrambler() && !c2Buffer && !memory) {
        AString *errorDetailMsg;
        CHECK(msg->findPointer("errorDetailMsg", (void **)&errorDetailMsg));
        // Notify mCrypto of video resolution changes
        if (mTunneled && mCrypto != NULL) {
            int32_t width, height;
            if (mInputFormat->findInt32("width", &width) &&
                mInputFormat->findInt32("height", &height) && width > 0 && height > 0) {
                if (width != mTunneledInputWidth || height != mTunneledInputHeight) {
                    mTunneledInputWidth = width;
                    mTunneledInputHeight = height;
                    mCrypto->notifyResolution(width, height);
                }
            }
        }
        // queue an encrypted buffer
        err = mBufferChannel->queueSecureInputBuffer(
                buffer,
                (mFlags & kFlagIsSecure),
                key,
                iv,
                mode,
                pattern,
                subSamples,
                numSubSamples,
                errorDetailMsg);
        if (err != OK) {
            mediametrics_setInt32(mMetricsHandle, kCodecQueueSecureInputBufferError, err);
            ALOGW("Log queueSecureInputBuffer error: %d", err);
        }
    } else {
        // queue a normal buffer
        err = mBufferChannel->queueInputBuffer(buffer);
        if (err != OK) {
            mediametrics_setInt32(mMetricsHandle, kCodecQueueInputBufferError, err);
            ALOGW("Log queueInputBuffer error: %d", err);
        }
    }

    if (err == OK) {
        // synchronization boundary for getBufferAndFormat
        Mutex::Autolock al(mBufferLock);
        // change the BufferInfo's owner
        info->mOwnedByClient = false;
        info->mData.clear();
        // record the queued buffer's pts and the time it was queued
        statsBufferSent(timeUs, buffer);
    }

    return err;
}

onQueueInputBuffer is long, mainly because many different callers reach it: queueInputBuffer here, plus queueCSDInputBuffer, queueSecureInputBuffer, and others. After a number of checks it ultimately calls the BufferChannel's queueInputBuffer or queueSecureInputBuffer.

That completes the processing of one input buffer.

 

kPortIndexOutput

The BufferChannel invokes the callback onOutputBufferAvailable:

void BufferCallback::onOutputBufferAvailable(
        size_t index, const sp<MediaCodecBuffer> &buffer) {
    sp<AMessage> notify(mNotify->dup());
    notify->setInt32("what", kWhatDrainThisBuffer);
    notify->setSize("index", index);
    notify->setObject("buffer", buffer);
    notify->post();
}

Again, the processing happens in onMessageReceived:

    case kWhatDrainThisBuffer:
    {
        // add the output buffer to the queue
        /* size_t index = */updateBuffers(kPortIndexOutput, msg);

        if (mState == FLUSHING
                || mState == STOPPING
                || mState == RELEASING) {
            returnBuffersToCodecOnPort(kPortIndexOutput);
            break;
        }

        if (mFlags & kFlagIsAsync) {
            sp<RefBase> obj;
            CHECK(msg->findObject("buffer", &obj));
            sp<MediaCodecBuffer> buffer = static_cast<MediaCodecBuffer *>(obj.get());

            // In asynchronous mode, output format change is processed immediately.
            // if the output format changed, update it
            handleOutputFormatChangeIfNeeded(buffer);
            // asynchronously notify the upper layer to handle the output buffer
            onOutputBufferAvailable();
        } else if (mFlags & kFlagDequeueOutputPending) {
            CHECK(handleDequeueOutputBuffer(mDequeueOutputReplyID));

            ++mDequeueOutputTimeoutGeneration;
            mFlags &= ~kFlagDequeueOutputPending;
            mDequeueOutputReplyID = 0;
        } else {
            postActivityNotificationIfPossible();
        }

        break;
    }

It's the familiar sequence:

1. Add the output buffer and its index to the queues.

2. Update the output format if it has changed.

3. Call onOutputBufferAvailable to notify the upper layer asynchronously.

void MediaCodec::onOutputBufferAvailable() {
    int32_t index;
    while ((index = dequeuePortBuffer(kPortIndexOutput)) >= 0) {
        const sp<MediaCodecBuffer> &buffer =
            mPortBuffers[kPortIndexOutput][index].mData;
        sp<AMessage> msg = mCallback->dup();
        msg->setInt32("callbackID", CB_OUTPUT_AVAILABLE);
        msg->setInt32("index", index);
        msg->setSize("offset", buffer->offset());
        msg->setSize("size", buffer->size());

        int64_t timeUs;
        CHECK(buffer->meta()->findInt64("timeUs", &timeUs));

        msg->setInt64("timeUs", timeUs);

        int32_t flags;
        CHECK(buffer->meta()->findInt32("flags", &flags));

        msg->setInt32("flags", flags);

        // record when the output buffer was handed upward and its pts
        statsBufferReceived(timeUs, buffer);

        msg->post();
    }
}

After the upper layer receives an output buffer and finishes AV sync, it decides whether to render or drop the frame, calling renderOutputBufferAndRelease or releaseOutputBuffer respectively:

status_t MediaCodec::renderOutputBufferAndRelease(size_t index, int64_t timestampNs) {
    sp<AMessage> msg = new AMessage(kWhatReleaseOutputBuffer, this);
    msg->setSize("index", index);
    msg->setInt32("render", true);
    msg->setInt64("timestampNs", timestampNs);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}
case kWhatReleaseOutputBuffer:
{
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    if (!isExecuting()) {
        PostReplyWithError(replyID, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(replyID, getStickyError());
        break;
    }

    status_t err = onReleaseOutputBuffer(msg);

    PostReplyWithError(replyID, err);
    break;
}
status_t MediaCodec::onReleaseOutputBuffer(const sp<AMessage> &msg) {
    size_t index;
    CHECK(msg->findSize("index", &index));

    int32_t render;
    if (!msg->findInt32("render", &render)) {
        render = 0;
    }

    if (!isExecuting()) {
        return -EINVAL;
    }

    if (index >= mPortBuffers[kPortIndexOutput].size()) {
        return -ERANGE;
    }

    BufferInfo *info = &mPortBuffers[kPortIndexOutput][index];

    if (info->mData == nullptr || !info->mOwnedByClient) {
        return -EACCES;
    }

    // synchronization boundary for getBufferAndFormat
    sp<MediaCodecBuffer> buffer;
    {
        Mutex::Autolock al(mBufferLock);
        info->mOwnedByClient = false;
        buffer = info->mData;
        info->mData.clear();
    }

    if (render && buffer->size() != 0) {
        int64_t mediaTimeUs = -1;
        buffer->meta()->findInt64("timeUs", &mediaTimeUs);

        int64_t renderTimeNs = 0;
        if (!msg->findInt64("timestampNs", &renderTimeNs)) {
            // use media timestamp if client did not request a specific render timestamp
            ALOGV("using buffer PTS of %lld", (long long)mediaTimeUs);
            renderTimeNs = mediaTimeUs * 1000;
        }

        if (mSoftRenderer != NULL) {
            std::list<FrameRenderTracker::Info> doneFrames = mSoftRenderer->render(
                    buffer->data(), buffer->size(), mediaTimeUs, renderTimeNs,
                    mPortBuffers[kPortIndexOutput].size(), buffer->format());

            // if we are running, notify rendered frames
            if (!doneFrames.empty() && mState == STARTED && mOnFrameRenderedNotification != NULL) {
                sp<AMessage> notify = mOnFrameRenderedNotification->dup();
                sp<AMessage> data = new AMessage;
                if (CreateFramesRenderedMessage(doneFrames, data)) {
                    notify->setMessage("data", data);
                    notify->post();
                }
            }
        }
        status_t err = mBufferChannel->renderOutputBuffer(buffer, renderTimeNs);

        if (err == NO_INIT) {
            ALOGE("rendering to non-initilized(obsolete) surface");
            return err;
        }
        if (err != OK) {
            ALOGI("rendring output error %d", err);
        }
    } else {
        mBufferChannel->discardBuffer(buffer);
    }

    return OK;
}

As we can see, rendering ultimately goes through the BufferChannel's renderOutputBuffer.

That completes the processing of one output buffer.

7. flush

case kWhatFlush:
{
    if (!isExecuting()) {
        PostReplyWithError(msg, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(msg, getStickyError());
        break;
    }

    if (mReplyID) {
        mDeferredMessages.push_back(msg);
        break;
    }
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    mReplyID = replyID;
    // TODO: skip flushing if already FLUSHED
    setState(FLUSHING);
    // invoke the CodecBase's signalFlush
    mCodec->signalFlush();
    // discard all buffers
    returnBuffersToCodec();
    TunnelPeekState previousState = mTunnelPeekState;
    mTunnelPeekState = TunnelPeekState::kEnabledNoBuffer;
    ALOGV("TunnelPeekState: %s -> %s",
          asString(previousState),
          asString(TunnelPeekState::kEnabledNoBuffer));
    break;
}

The flush handler first sets the state to FLUSHING, then calls the CodecBase's signalFlush method (once the call completes, a callback presumably sets the state to FLUSHED) and discards all buffers. The discard has two parts:

First, calling the BufferChannel's discardBuffer method to return buffers to the decoder; second, clearing the available indices held by MediaCodec.

 

MediaCodec has no pause or resume methods! pause and resume must be implemented by the player. With the basic working principles now clear, we'll leave the remaining methods for later.

 

posted @ 2022-04-08 16:53  青山渺渺