A Quick Read-Through of MPEG4Writer.cpp: Camera Recording on Android Qualcomm Platforms
First, let me state up front that I have not been working with Android for long; I only dug into the Android framework code of MPEG4Writer.cpp because I enjoy Android development and a project needed it. Here I simply want to record the results of the past few days of reading through the code, if only to have something to show for it. Some of this knowledge I owe to pointers from other experts online; the post I referenced is here: http://m.blog.csdn.net/blog/liwendovo/8478259.
I will not re-summarize the following content and figures myself; they are pasted directly from the post linked above:
--------------------------------------------------------------------------------------------------------------
The container (muxing) flow for video recording on Android has three main steps:
1) When recording starts, write the file header.
2) While recording, write the audio/video track data chunks in real time.
3) When recording ends, write the index information and update the header fields.
The index describes the characteristics of the audio/video tracks and changes as track data is stored, so the usual approach is to place the index after the media streams in the recorded file and write it only after the stream data is complete (recording has ended). As you can see, the mdat box that stores the audio/video data sits in second place, while the moov box that indexes it comes last; this differs from the usual MP4 box order and is a direct consequence of recording. Since the size of moov depends on mdat, and the recording duration is not known in advance, mdat has to be written first, with moov written at the very end to complete the container.
Recording on today's Android systems produces MP4 or 3GP files, and under the hood this is done by the MPEG4Writer muxer class: it packages the encoded audio/video tracks according to the MPEG-4 specification, fills in the various parameters, and assembles a complete MP4 file. MPEG4Writer's muxing work is carried out by two kinds of threads: a writer thread (WriterThread) that writes audio/video data into the container file, and track threads (TrackThread) that read and process the audio/video data. There are normally two track threads, one reading video track data and one reading audio track data, while there is a single writer thread responsible for writing the Chunk-packaged data from the track threads into the container file.
As shown in Figure 3 (in the referenced post), a track thread fetches data one frame (Sample) at a time and collects the per-frame and environment information into an in-memory trak table. The information maintained includes the chunk file offsets stco (Chunk Offset), the sample-to-chunk mapping stsc (Sample-to-Chunk), the key frames stss (Sync Sample), and the per-frame durations stts (Time-to-Sample), all of which are tied closely to each frame. As the figure shows, each trak table is maintained by its own thread, and the trak tables are written into the container file when recording ends. The frame data itself is first buffered in a linked list; once enough frames have accumulated, the track thread packages them into a Chunk and notifies the writer thread to write it into the container file. On receiving the chunk-ready notification, the writer thread immediately scans the chunk lists (one per track thread, so normally two: one audio, one video), writes the first chunk it finds into the container file, and records the write offset into the stco entry of the corresponding trak table (the other trak data is updated by the track threads). Audio and video chunks are stored in the same mdat box, ordered by the time they were added to the chunk lists. When recording finishes, the recording application calls MPEG4Writer's stop method, and at that point the audio and video trak tables are written into moov.
------------------------------------------------------------------------------------------------------------------------------------------------
Having read the above, you should have a rough picture of how recorded video gets containerized on Android; the file extensions we commonly see, such as .mp4 and .mkv, simply denote different container formats.
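To make the ftyp/mdat/moov ordering concrete, here is a small hedged sketch (a standalone C++ program; it assumes a seekable local file and handles only 32-bit box sizes, ignoring the special size values 0 and 1) that walks the top-level boxes of an MP4 file and prints them. On a freshly recorded clip you would typically see ftyp first, then mdat (possibly preceded by a free placeholder), and moov last.
#include <cstdint>
#include <cstdio>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s file.mp4\n", argv[0]); return 1; }
    FILE *fp = fopen(argv[1], "rb");
    if (fp == NULL) { perror("fopen"); return 1; }
    unsigned char h[8]; // every box starts with [4-byte size][4-byte type]
    while (fread(h, 1, 8, fp) == 8) {
        uint32_t size = (uint32_t(h[0]) << 24) | (uint32_t(h[1]) << 16)
                      | (uint32_t(h[2]) << 8)  |  uint32_t(h[3]);
        printf("box '%c%c%c%c', %u bytes\n", h[4], h[5], h[6], h[7], size);
        if (size < 8) break;                 // sizes 0/1 signal special cases
        fseek(fp, long(size) - 8, SEEK_CUR); // skip the payload (use fseeko for >2 GB)
    }
    fclose(fp);
    return 0;
}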
The read-through of MPEG4Writer.cpp follows.
The MPEG4Writer.cpp constructor initializes a number of members; fd is the file descriptor of the recording output file passed in.
MPEG4Writer::MPEG4Writer(int fd)
    : mFd(dup(fd)),
ifReRecording(false),
mInitCheck(mFd < 0? NO_INIT: OK),
mIsRealTimeRecording(true),
mUse4ByteNalLength(true),
mUse32BitOffset(true),
mIsFileSizeLimitExplicitlyRequested(false),
mPaused(false),
mStarted(false),
mWriterThreadStarted(false),
mOffset(0),
mMdatOffset(0),
mEstimatedMoovBoxSize(0),
mInterleaveDurationUs(1000000),
mLatitudex10000(0),
mLongitudex10000(0),
mAreGeoTagsAvailable(false),
mStartTimeOffsetMs(-1),
mHFRRatio(1) {
ALOGD("*** MPEG4Writer(int fd):mFd is:%d",mFd);
}
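Note the mFd(dup(fd)) in the initializer list: the writer duplicates the caller's descriptor so that it owns an independent copy, and the caller is free to close its own. A small runnable sketch of that semantics (the file path is an illustrative assumption):
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("/tmp/demo.bin", O_CREAT | O_TRUNC | O_RDWR, 0644);
    int owned = dup(fd);                 // what the constructor does with mFd
    close(fd);                           // the caller closes its descriptor...
    ssize_t n = write(owned, "data", 4); // ...the dup()ed copy still works
    printf("wrote %zd bytes via the dup()ed fd\n", n);
    close(owned);
    return 0;
}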
When the application calls MediaRecorder.start(), the call travels down into the framework and reaches the start method of MPEG4Writer.cpp. In this method, writeFtypBox(param) writes the header information of the recording file; startWriterThread() starts the writer thread that muxes the container file; and startTracks(param) starts the threads that read the media data, i.e. the track threads described in the quoted text above.
status_t MPEG4Writer::start(MetaData *param) {
......
//After pause(), when start() is called again, mStarted is still true; clear mPaused and restart the tracks
if (mStarted) {
if (mPaused) {
mPaused = false;
return startTracks(param);
}
return OK;
}
......
writeFtypBox(param); // write the container file header information
mFreeBoxOffset = mOffset;
......
mOffset = mMdatOffset;
lseek64(mFd, mMdatOffset, SEEK_SET); // move the file pointer to mMdatOffset
status_t err = startWriterThread(); // start the writer thread
if (err != OK) { return err; }
err = startTracks(param); // start the track threads
if (err != OK) { return err; }
mStarted = true;
return OK;
}
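As a rough illustration of what writeFtypBox(param) puts at the start of the file: an ftyp box is [size]["ftyp"][major brand][minor version][compatible brands...] per ISO/IEC 14496-12. The sketch below writes such a box; the brands (isom/mp42) and the output path are illustrative assumptions, since the brands MPEG4Writer actually emits depend on whether MP4 or 3GP output was requested.
#include <arpa/inet.h> // htonl
#include <cstdint>
#include <cstdio>
#include <cstring>

static void put32(uint8_t *p, uint32_t v) { v = htonl(v); memcpy(p, &v, 4); }

int main() {
    uint8_t box[24];
    put32(box, sizeof(box));     // box size, big-endian
    memcpy(box + 4, "ftyp", 4);  // box type
    memcpy(box + 8, "isom", 4);  // major brand (assumption)
    put32(box + 12, 0);          // minor version
    memcpy(box + 16, "isom", 4); // compatible brands (assumptions)
    memcpy(box + 20, "mp42", 4);
    FILE *fp = fopen("/tmp/ftyp.bin", "wb"); // illustrative path
    if (fp != NULL) {
        fwrite(box, 1, sizeof(box), fp);
        fclose(fp);
    }
    return 0;
}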
Next, startWriterThread(). This function actually creates the new thread, which then runs the logic in ThreadWrapper.
status_t MPEG4Writer::startWriterThread() {
ALOGV("****** startWriterThread");
mDone = false;
mIsFirstChunk = true;
mDriftTimeUs = 0;
for (List<Track *>::iterator it = mTracks.begin();
it != mTracks.end(); ++it) {
ChunkInfo info;
info.mTrack = *it;
info.mPrevChunkTimestampUs = 0;
info.mMaxInterChunkDurUs = 0;
mChunkInfos.push_back(info); // one ChunkInfo (pending chunk list) per track
}
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
pthread_create(&mThread, &attr,ThreadWrapper, this);
pthread_attr_destroy(&attr);
mWriterThreadStarted = true;
return OK;
}
Next, the ThreadWrapper() function. It simply casts the opaque me pointer back to the MPEG4Writer instance; the real work happens in threadFunc().
void *MPEG4Writer::ThreadWrapper(void *me) {
MPEG4Writer *writer = static_cast<MPEG4Writer *>(me);
writer->threadFunc();
return NULL;
}
Now look at threadFunc(). It loops on the mDone flag, continually checking whether a data chunk is ready to write. The track threads keep writing the data they read into a buffer; once a certain amount accumulates it becomes a chunk, and the writer thread is notified through the condition variable mChunkReadyCondition to scan the chunk lists and write whatever chunk it finds into the data region of the container file. Before writing, of course, it first checks whether there really is data to write.
void MPEG4Writer::threadFunc() {
prctl(PR_SET_NAME, (unsigned long)"MPEG4Writer", 0, 0, 0);
Mutex::Autolock autoLock(mLock);
while (!mDone) {
Chunk chunk;
bool chunkFound = false;
while (!mDone && !(chunkFound = findChunkToWrite(&chunk))) {
mChunkReadyCondition.wait(mLock);
}
// In real time recording mode, write without holding the lock in order
// to reduce the blocking time for media track threads.
// Otherwise, hold the lock until the existing chunks get written to the
// file.
if (chunkFound) {
if (mIsRealTimeRecording) {
mLock.unlock();
}
writeChunkToFile(&chunk);
if (mIsRealTimeRecording) {
mLock.lock();
}
}
}
writeAllChunks(); // This runs after the while (!mDone) loop. When the application calls MediaRecorder.stop(), mDone becomes true, and this flushes all remaining buffered data into the container file at the end of recording (I did not trace into the details).
}
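The hand-off between the track threads and the writer thread is a textbook producer/consumer arrangement. The standalone sketch below (plain pthreads; every name is an illustrative stand-in, not a real MPEG4Writer member) mirrors the shape of the code above: the producer queues a chunk and signals, the consumer waits on the condition variable in a loop, and, as in real-time recording mode, the actual write happens with the lock released.
#include <pthread.h>
#include <cstdio>
#include <list>

static pthread_mutex_t gLock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t gChunkReady = PTHREAD_COND_INITIALIZER; // like mChunkReadyCondition
static std::list<int> gChunks;                                // stand-in for the chunk lists
static bool gDone = false;                                    // like mDone

static void *writerThread(void *) {
    pthread_mutex_lock(&gLock);
    while (!gDone) {
        while (!gDone && gChunks.empty()) {
            pthread_cond_wait(&gChunkReady, &gLock); // sleep until a chunk is queued
        }
        if (!gChunks.empty()) {
            int chunk = gChunks.front();
            gChunks.pop_front();
            pthread_mutex_unlock(&gLock); // real-time mode: write without the lock
            printf("writer: wrote chunk %d\n", chunk);
            pthread_mutex_lock(&gLock);
        }
    }
    // The real code drains any leftovers via writeAllChunks() after the loop.
    pthread_mutex_unlock(&gLock);
    return NULL;
}

int main() {
    pthread_t th;
    pthread_create(&th, NULL, writerThread, NULL);
    for (int i = 0; i < 3; ++i) { // the "track thread" producing chunks
        pthread_mutex_lock(&gLock);
        gChunks.push_back(i);
        pthread_cond_signal(&gChunkReady); // like bufferChunk() signalling
        pthread_mutex_unlock(&gLock);
    }
    pthread_mutex_lock(&gLock);
    gDone = true;
    pthread_cond_signal(&gChunkReady);
    pthread_mutex_unlock(&gLock);
    pthread_join(th, NULL);
    return 0;
}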
Next, writeChunkToFile(&chunk). The track threads read data in units of frames (Samples), so writing a chunk into the container file also proceeds sample by sample: the whole list is traversed and each sample is written out. The actual write is addSample_l(*it), or addLengthPrefixedSample_l(*it) for AVC tracks.
void MPEG4Writer::writeChunkToFile(Chunk* chunk) {
ALOGV("******writeChunkToFile: %lld from %s track",
chunk->mTimeStampUs, chunk->mTrack->isAudio()? "audio": "video");
int32_t isFirstSample = true;
while (!chunk->mSamples.empty()) {
List<MediaBuffer *>::iterator it = chunk->mSamples.begin();
off64_t offset = chunk->mTrack->isAvc()
? addLengthPrefixedSample_l(*it)
:addSample_l(*it);
if (isFirstSample) {
chunk->mTrack->addChunkOffset(offset);
isFirstSample = false;
}
(*it)->release();
(*it) = NULL;
chunk->mSamples.erase(it);
}
chunk->mSamples.clear();
}
Now addSample_l(*it): a plain write() call, where mFd is the file descriptor obtained from the recording path set by the upper layer.
off64_t MPEG4Writer::addSample_l(MediaBuffer *buffer) {
off64_t old_offset = mOffset;
::write(mFd,
(const uint8_t *)buffer->data() + buffer->range_offset(),
buffer->range_length());
mOffset += buffer->range_length();
return old_offset;
}
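For AVC tracks, the branch in writeChunkToFile above calls addLengthPrefixedSample_l(*it) instead. In MP4, AVC samples are stored with each NAL unit prefixed by its big-endian length (4 bytes here, since mUse4ByteNalLength is true) in place of the Annex-B start code; this is also why threadEntry() later strips start codes with StripStartcode and adds 4 (or 2) to sampleSize. A minimal standalone sketch of that framing (the helper name, payload bytes, and path are illustrative assumptions):
#include <cstdint>
#include <cstdio>

// Illustrative helper, not the framework function: write a 4-byte
// big-endian NAL length, as MP4 AVC framing requires.
static void write4ByteNalLength(FILE *fp, uint32_t nalSize) {
    uint8_t len[4] = {
        (uint8_t)(nalSize >> 24), (uint8_t)(nalSize >> 16),
        (uint8_t)(nalSize >> 8), (uint8_t)(nalSize),
    };
    fwrite(len, 1, 4, fp);
}

int main() {
    const uint8_t nal[] = { 0x65, 0x88, 0x84, 0x00 }; // fake NAL payload (assumption)
    FILE *fp = fopen("/tmp/sample.bin", "wb");        // illustrative path
    if (fp != NULL) {
        write4ByteNalLength(fp, sizeof(nal)); // why sampleSize += 4 in threadEntry()
        fwrite(nal, 1, sizeof(nal), fp);      // the payload follows the length
        fclose(fp);
    }
    return 0;
}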
At this point the writer-thread side of the container file is largely covered; next, the track threads.
---------------------------------------------------------------------------------------------------------------------------------
startTracks(param) starts the track threads. During recording there are two of them, one for the video track and one for the audio track; startTracks(param) starts both from within a for loop.
status_t MPEG4Writer::startTracks(MetaData *params) {
if (mTracks.empty()) {
ALOGE("No source added");
return INVALID_OPERATION;
}
for (List<Track *>::iterator it = mTracks.begin();
it != mTracks.end(); ++it) {
status_t err =(*it)->start(params);
if (err != OK) {
for (List<Track *>::iterator it2 = mTracks.begin();
it2 != it; ++it2) {
(*it2)->stop();
}
return err;
}
}
return OK;
}
(*it)->start(params) ends up in status_t MPEG4Writer::Track::start(MetaData *params). Here too a new thread is created, and the track thread's work runs inside it.
status_t MPEG4Writer::Track::start(MetaData *params) {
......
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
mDone = false;
mStarted = true;
mTrackDurationUs = 0;
mReachedEOS = false;
mEstimatedTrackSizeBytes = 0;
mMdatSizeBytes = 0;
mMaxChunkDurationUs = 0;
pthread_create(&mThread, &attr,ThreadWrapper, this); // run ThreadWrapper on the new thread
pthread_attr_destroy(&attr);
mHFRRatio = ExtendedUtils::HFR::getHFRRatio(mMeta);
return OK;
}
Next, this ThreadWrapper function; once again the real work is delegated, this time to threadEntry().
void *MPEG4Writer::Track::ThreadWrapper(void *me) {
Track *track = static_cast<Track *>(me);
status_t err = track->threadEntry();
return (void *) err;
}
Now track->threadEntry(), arguably the most important function here. I have not understood every detail; what follows is the general picture.
status_t MPEG4Writer::Track::threadEntry() {
......
int64_t lastTimestampUs = 0; // Previous sample time stamp
int64_t lastDurationUs = 0; // Between the previous two samples
int64_t currDurationTicks = 0; // Timescale based ticks
int64_t lastDurationTicks = 0; // Timescale based ticks
int32_t sampleCount = 1; // Sample count in the current stts table entry
uint32_t previousSampleSize = 0; // Size of the previous sample
int64_t previousPausedDurationUs = 0;
int64_t timestampUs = 0;
int64_t cttsOffsetTimeUs = 0;
int64_t currCttsOffsetTimeTicks = 0; // Timescale based ticks
int64_t lastCttsOffsetTimeTicks = -1; // Timescale based ticks
int32_t cttsSampleCount = 0; // Sample count in the current ctts table entry
uint32_t lastSamplesPerChunk = 0;
sp<MetaData> meta_data;
status_t err = OK;
MediaBuffer *buffer;
while (!mDone && (err = mSource->read(&buffer)) == OK) {
//mSource->read(&buffer) calls the corresponding interface in CameraSource.cpp to read the video data
if (buffer->range_length() == 0) {
buffer->release();
buffer = NULL;
++nZeroLengthFrames;
continue;
}
// If the codec specific data has not been received yet, delay pause.
// After the codec specific data is received, discard what we received
// when the track is to be paused.
if (mPaused && !mResumed) {
buffer->release();
buffer = NULL;
continue;
}
++count;
int32_t isCodecConfig; // set when the buffer carries codec-specific data (e.g. AVC SPS/PPS) rather than a media frame
if (buffer->meta_data()->findInt32(kKeyIsCodecConfig, &isCodecConfig)
&& isCodecConfig) {
CHECK(!mGotAllCodecSpecificData);
if (mIsAvc) {
status_t err = makeAVCCodecSpecificData(
(const uint8_t *)buffer->data()
+ buffer->range_offset(),
buffer->range_length());
CHECK_EQ((status_t)OK, err);
} else if (mIsMPEG4) {
mCodecSpecificDataSize = buffer->range_length();
mCodecSpecificData = malloc(mCodecSpecificDataSize);
memcpy(mCodecSpecificData,
(const uint8_t *)buffer->data()
+ buffer->range_offset(),
buffer->range_length());
}
buffer->release();
buffer = NULL;
mGotAllCodecSpecificData = true;
continue;
}
// Make a deep copy of the MediaBuffer and Metadata and release
// the original as soon as we can
MediaBuffer *copy = new MediaBuffer(buffer->range_length());
memcpy(copy->data(), (uint8_t *)buffer->data() + buffer->range_offset(),
buffer->range_length());
copy->set_range(0, buffer->range_length());
meta_data = new MetaData(*buffer->meta_data().get());
buffer->release();
buffer = NULL;
if (mIsAvc) StripStartcode(copy);
size_t sampleSize = copy->range_length();
if (mIsAvc) {
if (mOwner->useNalLengthFour()) {
sampleSize += 4;
} else {
sampleSize += 2;
}
}
// Max file size or duration handling
mMdatSizeBytes += sampleSize;
updateTrackSizeEstimate();
/* Anyone who has implemented video recording will know this part: when the application configures MediaRecorder with a maximum file size or a maximum segment duration, it must listen for callbacks from the lower layers in order to implement loop recording, and those callbacks originate here.
exceedsFileSizeLimit() checks whether the recorded file has reached the configured size and decides whether to notify the upper layer; likewise, exceedsFileDurationLimit() checks whether the configured segment duration has been reached.
*/
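// (Hedged note based on generic AOSP sources of this era, not verified against
// this exact vendor tree: exceedsFileSizeLimit() adds mEstimatedMoovBoxSize to
// each track's estimated byte count and compares the sum with the limit set via
// MediaRecorder.setMaxFileSize(), while exceedsFileDurationLimit() compares the
// track duration with the limit from MediaRecorder.setMaxDuration().)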
if (mOwner->exceedsFileSizeLimit()) {
mOwner->notify(MEDIA_RECORDER_EVENT_INFO,MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED, 0);
break;
}
if (mOwner->exceedsFileDurationLimit()) {
mOwner->notify(MEDIA_RECORDER_EVENT_INFO, MEDIA_RECORDER_INFO_MAX_DURATION_REACHED, 0);
break;
}
int32_t isSync = false;
meta_data->findInt32(kKeyIsSyncFrame, &isSync);
CHECK(meta_data->findInt64(kKeyTime, &timestampUs));
if (mStszTableEntries->count() == 0) {
mFirstSampleTimeRealUs = systemTime() / 1000;
mStartTimestampUs = timestampUs;
mOwner->setStartTimestampUs(mStartTimestampUs);
previousPausedDurationUs = mStartTimestampUs;
}
if (mResumed) {
mResumed = false;
}
timestampUs -= previousPausedDurationUs;
CHECK_GE(timestampUs, 0ll);
if (!mIsAudio) {
/*
* Composition time: timestampUs
* Decoding time: decodingTimeUs
* Composition time offset = composition time - decoding time
*/
int64_t decodingTimeUs;
CHECK(meta_data->findInt64(kKeyDecodingTime, &decodingTimeUs));
decodingTimeUs -= previousPausedDurationUs;
cttsOffsetTimeUs =
timestampUs - decodingTimeUs;
CHECK_GE(kMaxCttsOffsetTimeUs, decodingTimeUs - timestampUs);
timestampUs = decodingTimeUs;
ALOGV("decoding time: %lld and ctts offset time: %lld",
timestampUs, cttsOffsetTimeUs);
// Update ctts box table if necessary
currCttsOffsetTimeTicks =
(cttsOffsetTimeUs * mTimeScale + 500000LL) / 1000000LL;
CHECK_LE(currCttsOffsetTimeTicks, 0x0FFFFFFFFLL);
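// Illustrative numbers (an assumption, for intuition only): with mTimeScale =
// 90000 and cttsOffsetTimeUs = 33366, the result is
// (33366 * 90000 + 500000) / 1000000 = 3003 ticks, i.e. the microsecond offset
// rounded to the nearest timescale tick.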
if (mStszTableEntries->count() == 0) {
// Force the first ctts table entry to have one single entry
// so that we can do adjustment for the initial track start
// time offset easily in writeCttsBox().
lastCttsOffsetTimeTicks = currCttsOffsetTimeTicks;
addOneCttsTableEntry(1, currCttsOffsetTimeTicks);
cttsSampleCount = 0; // No sample in ctts box is pending
} else {
if (currCttsOffsetTimeTicks != lastCttsOffsetTimeTicks) {
addOneCttsTableEntry(cttsSampleCount, lastCttsOffsetTimeTicks);
lastCttsOffsetTimeTicks = currCttsOffsetTimeTicks;
cttsSampleCount = 1; // One sample in ctts box is pending
} else {
++cttsSampleCount;
}
}
// Update ctts time offset range
if (mStszTableEntries->count() == 0) {
mMinCttsOffsetTimeUs = currCttsOffsetTimeTicks;
mMaxCttsOffsetTimeUs = currCttsOffsetTimeTicks;
} else {
if (currCttsOffsetTimeTicks > mMaxCttsOffsetTimeUs) {
mMaxCttsOffsetTimeUs = currCttsOffsetTimeTicks;
} else if (currCttsOffsetTimeTicks < mMinCttsOffsetTimeUs) {
mMinCttsOffsetTimeUs = currCttsOffsetTimeTicks;
}
}
}
if (mOwner->isRealTimeRecording()) {
if (mIsAudio) {
updateDriftTime(meta_data);
}
}
CHECK_GE(timestampUs, 0ll);
ALOGV("%s media time stamp: %lld and previous paused duration %lld",
mIsAudio? "Audio": "Video", timestampUs, previousPausedDurationUs);
if (timestampUs > mTrackDurationUs) {
mTrackDurationUs = timestampUs;
}
// We need to use the time scale based ticks, rather than the
// timestamp itself to determine whether we have to use a new
// stts entry, since we may have rounding errors.
// The calculation is intended to reduce the accumulated
// rounding errors.
currDurationTicks =
((timestampUs * mTimeScale + 500000LL) / 1000000LL -
(lastTimestampUs * mTimeScale + 500000LL) / 1000000LL);
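// Illustrative numbers (an assumption, for intuition only): at ~30 fps with
// mTimeScale = 90000, timestamps of 33333 us and 66666 us round to 3000 and
// 6000 ticks, giving currDurationTicks = 3000; differencing in ticks rather
// than in microseconds keeps rounding errors from accumulating across stts
// entries.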
if (currDurationTicks < 0ll) {
ALOGE("timestampUs %lld < lastTimestampUs %lld for %s track",
timestampUs, lastTimestampUs, mIsAudio? "Audio": "Video");
err = UNKNOWN_ERROR;
mSource->notifyError(err);
return err;
}
mStszTableEntries->add(htonl(sampleSize));
if (mStszTableEntries->count() > 2) {
// Force the first sample to have its own stts entry so that
// we can adjust its value later to maintain the A/V sync.
if (mStszTableEntries->count() == 3 || currDurationTicks != lastDurationTicks) {
addOneSttsTableEntry(sampleCount, lastDurationTicks);
sampleCount = 1;
} else {
++sampleCount;
}
}
if (mSamplesHaveSameSize) {
if (mStszTableEntries->count() >= 2 && previousSampleSize != sampleSize) {
mSamplesHaveSameSize = false;
}
previousSampleSize = sampleSize;
}
ALOGV("%s timestampUs/lastTimestampUs: %lld/%lld",
mIsAudio? "Audio": "Video", timestampUs, lastTimestampUs);
lastDurationUs = timestampUs - lastTimestampUs;
lastDurationTicks = currDurationTicks;
lastTimestampUs = timestampUs;
if (isSync != 0) {
addOneStssTableEntry(mStszTableEntries->count());
}
if (mTrackingProgressStatus) {
if (mPreviousTrackTimeUs <= 0) {
mPreviousTrackTimeUs = mStartTimestampUs;
}
trackProgressStatus(timestampUs);
}
// use File write in separate thread for video only recording
if (!hasMultipleTracks && mIsAudio) {
off64_t offset = mIsAvc? mOwner->addLengthPrefixedSample_l(copy)
: mOwner->addSample_l(copy);
uint32_t count = (mOwner->use32BitFileOffset()
? mStcoTableEntries->count()
: mCo64TableEntries->count());
if (count == 0) {
addChunkOffset(offset);
}
copy->release();
copy = NULL;
continue;
}
mChunkSamples.push_back(copy); // append the sample just read to the tail of the pending chunk's sample list
......
}
if (isTrackMalFormed()) {
err = ERROR_MALFORMED;
}
mOwner->trackProgressStatus(mTrackId, -1, err);
// Last chunk
if (!hasMultipleTracks && mIsAudio) {
addOneStscTableEntry(1, mStszTableEntries->count());
} else if (!mChunkSamples.empty()) {
addOneStscTableEntry(++nChunks, mChunkSamples.size());
bufferChunk(timestampUs);
}
// We don't really know how long the last frame lasts, since
// there is no frame time after it, just repeat the previous
// frame's duration.
if (mStszTableEntries->count() == 1) {
lastDurationUs = 0; // A single sample's duration
lastDurationTicks = 0;
} else {
++sampleCount; // Count for the last sample
}
if (mStszTableEntries->count() <= 2) {
addOneSttsTableEntry(1, lastDurationTicks);
if (sampleCount - 1 > 0) {
addOneSttsTableEntry(sampleCount - 1, lastDurationTicks);
}
} else {
addOneSttsTableEntry(sampleCount, lastDurationTicks);
}
// The last ctts box may not have been written yet, and this
// is to make sure that we write out the last ctts box.
if (currCttsOffsetTimeTicks == lastCttsOffsetTimeTicks) {
if (cttsSampleCount > 0) {
addOneCttsTableEntry(cttsSampleCount, lastCttsOffsetTimeTicks);
}
}
mTrackDurationUs += lastDurationUs;
mReachedEOS = true;
sendTrackSummary(hasMultipleTracks);
ALOGI("Received total/0-length (%d/%d) buffers and encoded %d frames. - %s",
count, nZeroLengthFrames, mStszTableEntries->count(), mIsAudio? "audio": "video");
if (mIsAudio) {
ALOGI("Audio track drift time: %lld us", mOwner->getDriftTimeUs());
}
if (err == ERROR_END_OF_STREAM) {
return OK;
}
return err;
}
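One elided piece worth sketching: after mChunkSamples.push_back(copy), once the interleave duration is exceeded the track thread calls bufferChunk(timestampUs), which is what wakes the writer thread seen earlier. The following is adapted from AOSP's MPEG4Writer of roughly this era (an assumption; this exact vendor tree may differ):
// The track thread wraps its buffered samples into a Chunk, hands it to the
// owner, and clears its local list.
void MPEG4Writer::Track::bufferChunk(int64_t timestampUs) {
    Chunk chunk(this, timestampUs, mChunkSamples);
    mOwner->bufferChunk(chunk);
    mChunkSamples.clear();
}

// The owner appends the chunk to the matching track's ChunkInfo list and
// signals the condition variable that threadFunc() is waiting on.
void MPEG4Writer::bufferChunk(const Chunk& chunk) {
    Mutex::Autolock autolock(mLock);
    for (List<ChunkInfo>::iterator it = mChunkInfos.begin();
         it != mChunkInfos.end(); ++it) {
        if (chunk.mTrack == it->mTrack) {
            it->mChunks.push_back(chunk);
            mChunkReadyCondition.signal();
            return;
        }
    }
}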
------------------------------------------------------------------------------------------------------------
Finally, the operations when recording ends. When recording stops, the application calls MediaRecorder's stop(), reset(), and release() methods in turn; the corresponding operations in MPEG4Writer.cpp are below.
status_t MPEG4Writer::Track::stop() {
if (!mStarted) {
ALOGE("Stop() called but track is not started");
return ERROR_END_OF_STREAM;
}
if (mDone) {
return OK;
}
mDone = true;
void *dummy;
pthread_join(mThread, &dummy); // wait for mThread to finish
status_t err = (status_t) dummy;
ALOGD("Stopping %s track source", mIsAudio? "Audio": "Video");
{
status_t status = mSource->stop();
if (err == OK && status != OK && status != ERROR_END_OF_STREAM) {
err = status;
}
}
ALOGD("%s track stopped", mIsAudio? "Audio": "Video");
return err;
}
reset() stops the track threads and the writer thread and writes the trailing container metadata, among other things:
status_t MPEG4Writer::reset() {
if (mInitCheck != OK) {
return OK;
} else {
if (!mWriterThreadStarted ||
!mStarted) {
if (mWriterThreadStarted) {
stopWriterThread();
}
release();
return OK;
}
}
status_t err = OK;
int64_t maxDurationUs = 0;
int64_t minDurationUs = 0x7fffffffffffffffLL;
for (List<Track *>::iterator it = mTracks.begin();
it != mTracks.end(); ++it) {
status_t status = (*it)->stop(); // stop the track thread
if (err == OK && status != OK) {
err = status;
}
int64_t durationUs = (*it)->getDurationUs();
if (durationUs > maxDurationUs) {
maxDurationUs = durationUs;
}
if (durationUs < minDurationUs) {
minDurationUs = durationUs;
}
}
if (mTracks.size() > 1) {
ALOGD("Duration from tracks range is [%lld, %lld] us",
minDurationUs, maxDurationUs);
}
stopWriterThread(); // stop the container file's writer thread
// Do not write out movie header on error.
if (err != OK) {
release();
return err;
}
// Fix up the size of the 'mdat' chunk.
if (mUse32BitOffset) {
lseek64(mFd, mMdatOffset, SEEK_SET);
uint32_t size = htonl(static_cast<uint32_t>(mOffset - mMdatOffset));
::write(mFd, &size, 4);
} else {
lseek64(mFd, mMdatOffset + 8, SEEK_SET);
uint64_t size = mOffset - mMdatOffset;
size = hton64(size);
::write(mFd, &size, 8);
}
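// Box layout note (ISO/IEC 14496-12): a box starts with a 4-byte size and a
// 4-byte type; when the 32-bit size field is set to 1, an 8-byte "largesize"
// follows the type. That is why the 64-bit branch patches the size at
// mMdatOffset + 8, while the 32-bit branch patches the size field itself.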
lseek64(mFd, mOffset, SEEK_SET);
// Construct moov box now
mMoovBoxBufferOffset = 0;
mWriteMoovBoxToMemory = mStreamableFile;
if (mWriteMoovBoxToMemory) {
// There is no need to allocate in-memory cache
// for moov box if the file is not streamable.
mMoovBoxBuffer = (uint8_t *) malloc(mEstimatedMoovBoxSize);
CHECK(mMoovBoxBuffer != NULL);
}
writeMoovBox(maxDurationUs); // write the trailing container metadata (the moov box)
// mWriteMoovBoxToMemory could be set to false in
// MPEG4Writer::write() method
if (mWriteMoovBoxToMemory) {
mWriteMoovBoxToMemory = false;
// Content of the moov box is saved in the cache, and the in-memory
// moov box needs to be written to the file in a single shot.
CHECK_LE(mMoovBoxBufferOffset + 8, mEstimatedMoovBoxSize);
// Moov box
lseek64(mFd, mFreeBoxOffset, SEEK_SET);
mOffset = mFreeBoxOffset;
write(mMoovBoxBuffer, 1, mMoovBoxBufferOffset);
// Free box
lseek64(mFd, mOffset, SEEK_SET);
writeInt32(mEstimatedMoovBoxSize - mMoovBoxBufferOffset);
write("free", 4);
} else {
ALOGI("The mp4 file will not be streamable.");
}
// Free in-memory cache for moov box
if (mMoovBoxBuffer != NULL) {
free(mMoovBoxBuffer);
mMoovBoxBuffer = NULL;
mMoovBoxBufferOffset = 0;
}
CHECK(mBoxes.empty());
release();
return err;
}
release() closes the file descriptor:
void MPEG4Writer::release() {
ALOGD("***** release()!!!");
close(mFd);
mFd = -1;
mInitCheck = NO_INIT;
mStarted = false;
}
With that, the overall flow of muxing a recorded file is complete: the track threads keep reading data, and once enough accumulates to form a chunk, the writer thread is notified to check for writable chunks and, if there are any, writes them out. The container file must also get its proper header and trailing metadata.
Given my limited ability, this was only a quick read-through to get a general sense of the flow. There are bound to be mistakes; corrections are welcome.