Having covered the audio and video processing flows, the next topic is audio/video synchronization. OpenCORE's approach is to set up a master clock that both audio and video use as their output reference. In Stagefright, by contrast, audio output is driven by a callback function, and video is synchronized against the audio timestamps. The details are as follows:
(1) When the callback drives AudioPlayer to read decoded data, AudioPlayer obtains two timestamps: mPositionTimeMediaUs and mPositionTimeRealUs
size_t AudioPlayer::fillBuffer(void *data, size_t size) { ...
    mSource->read(&mInputBuffer, ...);
    mInputBuffer->meta_data()->findInt64(kKeyTime, &mPositionTimeMediaUs);   // timestamp stored in the data
    mPositionTimeRealUs = ((mNumFramesPlayed + size_done / mFrameSize) * 1000000) / mSampleRate;  // actual playback time
... }
mPositionTimeMediaUs is the timestamp carried in the data itself; mPositionTimeRealUs is the actual time at which that data is played back, derived from the number of frames played and the sample rate.
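As a quick sanity check on that formula, here is a minimal standalone sketch of the same arithmetic (the helper name below is invented for illustration and is not part of AudioPlayer): the number of PCM frames already handed to the audio output, scaled by the sample rate, gives the real playback position in microseconds.

#include <cstdint>
#include <cstdio>

// Hypothetical helper mirroring how mPositionTimeRealUs is computed:
// frames played so far, converted to microseconds at the given sample rate.
static int64_t framesToRealTimeUs(int64_t framesPlayed, int32_t sampleRate) {
    return (framesPlayed * 1000000LL) / sampleRate;
}

int main() {
    // 441000 frames at 44100 Hz correspond to 10 seconds (10,000,000 us) of audio.
    printf("%lld us\n", (long long)framesToRealTimeUs(441000, 44100));
    return 0;
}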
(2) Video in Stagefright then uses the difference between these two timestamps obtained from AudioPlayer as its playback reference
void AwesomePlayer::onVideoEvent() { ...
    mVideoSource->read(&mVideoBuffer, ...);
    mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs);        // the video frame's timestamp
    mAudioPlayer->getMediaTimeMapping(&realTimeUs, &mediaTimeUs);   // anchor pair from the audio side
    mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;
    nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;               // current position on the media timeline
    latenessUs = nowUs - timeUs;                                    // how late this frame is
... }
AwesomePlayer fetches realTimeUs (i.e. mPositionTimeRealUs) and mediaTimeUs (i.e. mPositionTimeMediaUs) from AudioPlayer and computes their difference, mTimeSourceDeltaUs.
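The mapping can be summarized in a minimal standalone sketch (the struct and function below are illustrative only, not Stagefright API): the audio side supplies a (real time, media time) pair, and the video side shifts the current system time onto the media timeline to measure how late a frame is.

#include <cstdint>

// Hypothetical anchor pair, like the one returned by AudioPlayer::getMediaTimeMapping().
struct TimeMapping {
    int64_t realTimeUs;   // when the audio data was actually played (mPositionTimeRealUs)
    int64_t mediaTimeUs;  // timestamp carried inside that audio data (mPositionTimeMediaUs)
};

// Returns how late (positive) or early (negative) a video frame with timestamp
// frameTimeUs is, given the current real time nowRealUs.
static int64_t videoLatenessUs(const TimeMapping &m, int64_t nowRealUs, int64_t frameTimeUs) {
    int64_t deltaUs    = m.realTimeUs - m.mediaTimeUs;  // mTimeSourceDeltaUs
    int64_t nowMediaUs = nowRealUs - deltaUs;           // nowUs in onVideoEvent()
    return nowMediaUs - frameTimeUs;                    // latenessUs
}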
(3) Finally, the video data is scheduled for rendering
void AwesomePlayer::onVideoEvent() { ...
    if (latenessUs > 40000) {   // more than 40 ms late: drop this frame
        mVideoBuffer->release(); mVideoBuffer = NULL; postVideoEvent_l(); return;
    }
    if (latenessUs < -10000) { postVideoEvent_l(10000); return; }   // more than 10 ms early: retry in 10 ms
    mVideoRenderer->render(mVideoBuffer);
... }
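The scheduling policy above can be restated as a small standalone sketch (the enum and function are invented for illustration, not Stagefright code): frames more than 40 ms behind the audio clock are dropped, frames more than 10 ms ahead are postponed by 10 ms, and everything in between is rendered immediately.

#include <cstdint>

enum class FrameAction { kDrop, kRetryIn10Ms, kRenderNow };

// Hypothetical decision function mirroring the thresholds in AwesomePlayer::onVideoEvent().
static FrameAction decideFrameAction(int64_t latenessUs) {
    if (latenessUs > 40000) {
        return FrameAction::kDrop;        // more than 40 ms late: drop the frame
    }
    if (latenessUs < -10000) {
        return FrameAction::kRetryIn10Ms; // more than 10 ms early: check again in 10 ms
    }
    return FrameAction::kRenderNow;       // close enough to the audio clock: render now
}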