Android 5.1 Audio Framework Notes (Part 1)
2017-12-21 16:12 wulizhi

When the application layer plays music, a typical playback sequence is:
MediaPlayer player = new MediaPlayer();  // create a MediaPlayer instance
player.setDataSource(...);               // set the audio source
player.prepare();                        // prepare for playback
player.start();                          // start playback
This article walks through the playback flow using STREAM_MUSIC with deep-buffer playback as the example.
We start tracing directly from player.start(). In the native layer, this maps to MediaPlayer::start() in mediaplayer.cpp:
status_t MediaPlayer::start() {
    ...
    if ((mPlayer != 0) && (mCurrentState & (MEDIA_PLAYER_PREPARED |
            MEDIA_PLAYER_PLAYBACK_COMPLETE | MEDIA_PLAYER_PAUSED))) {
        mPlayer->setLooping(mLoop);                     // set whether to loop playback
        mPlayer->setVolume(mLeftVolume, mRightVolume);  // set left/right channel volume
        mPlayer->setAuxEffectSendLevel(mSendLevel);
        mCurrentState = MEDIA_PLAYER_STARTED;
        ret = mPlayer->start();                         // start playback
    }
    ...
}
Here mPlayer is a MediaPlayerService::Client object; let's look at its start() implementation:
status_t MediaPlayerService::Client::start()
{
    ALOGV("[%d] start", mConnId);
    sp<MediaPlayerBase> p = getPlayer();
    if (p == 0) return UNKNOWN_ERROR;
    p->setLooping(mLoop);
    return p->start();
}
It directly calls the start() function of the player instance; here we take StagefrightPlayer as the example:
status_t StagefrightPlayer::start() {
    ALOGV("start");

    return mPlayer->play();
}
It delegates to AwesomePlayer::play() to do the work:
status_t AwesomePlayer::play() {
    return play_l();
}

status_t AwesomePlayer::play_l() {
    ...
    if (mAudioPlayer == NULL) {
        createAudioPlayer_l();   // create the AudioPlayer
    }

    if (mVideoSource == NULL) {
        status_t err = startAudioPlayer_l(
                false /* sendErrorNotification */);

        if ((err != OK) && mOffloadAudio) {
            err = fallbackToSWDecoder();
        }
    }
    ...
}
The full function is fairly long, but it essentially does two things: it calls createAudioPlayer_l() to create the AudioPlayer, and startAudioPlayer_l() to start it.
Let's first look at createAudioPlayer_l(). It creates an AudioPlayer object, saves mAudioSink (the MediaPlayerService::AudioOutput) into AudioPlayer::mAudioSink, saves the AwesomePlayer instance into AudioPlayer::mObserver, and saves mAudioSource (the OMXCodec) into AudioPlayer::mSource:
void AwesomePlayer::createAudioPlayer_l()
{
    mAudioPlayer = new AudioPlayer(mAudioSink, flags, this);
    mAudioPlayer->setSource(mAudioSource);

    mTimeSource = mAudioPlayer;
}
Now look at startAudioPlayer_l(), which calls AudioPlayer's start method:
status_t AwesomePlayer::startAudioPlayer_l(bool sendErrorNotification) {
    err = mAudioPlayer->start(true /* sourceAlreadyStarted */);
}
status_t AudioPlayer::start(bool sourceAlreadyStarted) {
    /* Read the first packet of data from the OMXCodec */
    do {
        mFirstBufferResult = mSource->read(&mFirstBuffer, &options);
    } while (mFirstBufferResult == -EAGAIN);

    /* Open the AudioOutput object and register a callback; once AudioOutput
       has drained its buffer, it asks AudioPlayer for more data through
       this callback */
    status_t err = mAudioSink->open(
            mSampleRate, numChannels, channelMask, audioFormat,
            DEFAULT_AUDIOSINK_BUFFERCOUNT,
            &AudioPlayer::AudioSinkCallback,
            this,
            (audio_output_flags_t)flags,
            useOffload() ? &offloadInfo : NULL);

    err = mAudioSink->start();
}
This calls MediaPlayerService::AudioOutput::open(), which performs some playback initialization: it registers the callback, creates the AudioTrack, and sets the left/right channel volume:
status_t MediaPlayerService::AudioOutput::open(
        uint32_t sampleRate, int channelCount, audio_channel_mask_t channelMask,
        audio_format_t format, int bufferCount,
        AudioCallback cb, void *cookie,
        audio_output_flags_t flags,
        const audio_offload_info_t *offloadInfo)
{
    if (mCallback != NULL) {
        newcbd = new CallbackData(this);
        /* Create the AudioTrack and register the callback, which is
           MediaPlayerService::AudioOutput::CallbackWrapper */
        t = new AudioTrack(
                mStreamType,    /* set to STREAM_MUSIC when the AudioOutput was created */
                sampleRate,
                format,
                channelMask,
                frameCount,
                flags,
                CallbackWrapper,
                newcbd,
                0,              // notification frames
                mSessionId,
                AudioTrack::TRANSFER_CALLBACK,
                offloadInfo,
                mUid,
                mPid,
                mAttributes);
    }

    ALOGV("setVolume");
    t->setVolume(mLeftVolume, mRightVolume);  /* call AudioTrack's setVolume method */
    mTrack = t;  /* store the newly created AudioTrack in MediaPlayerService::AudioOutput::mTrack */
}
So after MediaPlayerService::AudioOutput::open() creates the AudioTrack, it assigns it to the mTrack member.
Next, look at MediaPlayerService::AudioOutput::start():
status_t MediaPlayerService::AudioOutput::start()
{
    ALOGV("start");
    if (mCallbackData != NULL) {
        mCallbackData->endTrackSwitch();
    }
    if (mTrack != 0) {
        mTrack->setVolume(mLeftVolume, mRightVolume);  // set left/right channel volume
        mTrack->setAuxEffectSendLevel(mSendLevel);
        return mTrack->start();                        // call AudioTrack::start()
    }
    return NO_INIT;
}
As we can see, it ultimately calls AudioTrack's start method to do the work.
Let's go back to where the AudioTrack is created:
AudioTrack::AudioTrack(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        int sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes)
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0)
{
    mStatus = set(streamType, sampleRate, format, channelMask,
            frameCount, flags, cbf, user, notificationFrames,
            0 /*sharedBuffer*/, false /*threadCanCallJava*/, sessionId, transferType,
            offloadInfo, uid, pid, pAttributes);
}


status_t AudioTrack::set(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        const sp<IMemory>& sharedBuffer,
        bool threadCanCallJava,
        int sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes)
{
    /* If the callback is not NULL, create the AudioTrack thread, pass the
       AudioTrack itself as the thread's mReceiver member, and run the thread */
    if (cbf != NULL) {
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
    }

    /* Create the IAudioTrack */
    status_t status = createTrack_l();
}
So while creating the AudioTrack, an AudioTrack thread is also created, along with an IAudioTrack. What exactly this IAudioTrack is will be analyzed later.
First, look at the AudioTrack thread's loop function, AudioTrack::AudioTrackThread::threadLoop():
bool AudioTrack::AudioTrackThread::threadLoop()
{
    nsecs_t ns = mReceiver.processAudioBuffer();
}
It calls the processAudioBuffer() method of its mReceiver member, which is the AudioTrack reference passed in when the AudioTrackThread was created above.
In processAudioBuffer(), the callback is invoked to handle various events; these events are exchanged between the native AudioTrack and the Java AudioTrack:
enum event_type {
    EVENT_MORE_DATA = 0,        // Request to write more data to buffer.
                                // If this event is delivered but the callback handler
                                // does not want to write more data, the handler must explicitly
                                // ignore the event by setting frameCount to zero.
    EVENT_UNDERRUN = 1,         // Buffer underrun occurred.
    EVENT_LOOP_END = 2,         // Sample loop end was reached; playback restarted from
                                // loop start if loop count was not 0.
    EVENT_MARKER = 3,           // Playback head is at the specified marker position
                                // (See setMarkerPosition()).
    EVENT_NEW_POS = 4,          // Playback head is at a new position
                                // (See setPositionUpdatePeriod()).
    EVENT_BUFFER_END = 5,       // Playback head is at the end of the buffer.
                                // Not currently used by android.media.AudioTrack.
    EVENT_NEW_IAUDIOTRACK = 6,  // IAudioTrack was re-created, either due to re-routing and
                                // voluntary invalidation by mediaserver, or mediaserver crash.
    EVENT_STREAM_END = 7,       // Sent after all the buffers queued in AF and HW are played
                                // back (after stop is called)
    EVENT_NEW_TIMESTAMP = 8,    // Delivered periodically and when there's a significant change
                                // in the mapping from frame position to presentation time.
                                // See AudioTimestamp for the information included with event.
};
The most important of these is EVENT_MORE_DATA: the AudioTrackThread sends this event through the callback to request more audio data.
nsecs_t AudioTrack::processAudioBuffer()
{
    while (mRemainingFrames > 0) {
        Buffer audioBuffer;
        audioBuffer.frameCount = mRemainingFrames;
        /* Obtain a buffer */
        status_t err = obtainBuffer(&audioBuffer, requested, NULL, &nonContig);
        size_t reqSize = audioBuffer.size;
        /* Send EVENT_MORE_DATA through the callback to request more data */
        mCbf(EVENT_MORE_DATA, mUserData, &audioBuffer);
        size_t writtenSize = audioBuffer.size;
    }
}
In AudioTrack::processAudioBuffer(), obtainBuffer() is first called to get an audioBuffer (the buffer allocation process will be analyzed separately later), and then the EVENT_MORE_DATA event is sent through the callback.
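The pull model here — the track thread owns the buffer and asks the client to fill it — can be sketched in isolation. This is a hypothetical standalone sketch, not the AOSP API: pumpAudio stands in for processAudioBuffer's loop, and the callback plays the role of EVENT_MORE_DATA.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Illustrative sketch (not the AOSP API). The "track thread" owns a scratch
// buffer and repeatedly asks the client callback to fill it; returning 0
// means the client has no more data, like a handler ignoring EVENT_MORE_DATA.
struct Buffer {
    void*  raw;
    size_t size;   // capacity of the buffer handed to the callback
};

using MoreDataCb = size_t (*)(Buffer* buf, void* user);

// Drain the callback until it stops producing; return total bytes pulled.
size_t pumpAudio(MoreDataCb cb, void* user, size_t chunk) {
    std::vector<char> scratch(chunk);
    size_t total = 0;
    for (;;) {
        Buffer b{scratch.data(), scratch.size()};
        size_t written = cb(&b, user);   // the "EVENT_MORE_DATA" request
        if (written == 0) break;         // client declined: stop pumping
        total += written;
    }
    return total;
}

// A toy client that produces a fixed number of silent bytes.
struct Source { size_t remaining; };

size_t srcCb(Buffer* b, void* user) {
    Source* s = static_cast<Source*>(user);
    size_t n = s->remaining < b->size ? s->remaining : b->size;
    std::memset(b->raw, 0, n);           // "fill" the buffer with silence
    s->remaining -= n;
    return n;
}
```

With a 1000-byte Source and a 256-byte chunk, pumpAudio pulls four chunks (256, 256, 256, 232) before the callback returns 0.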
As mentioned above, the callback passed in here is MediaPlayerService::AudioOutput::CallbackWrapper().
void MediaPlayerService::AudioOutput::CallbackWrapper(
        int event, void *cookie, void *info) {
    CallbackData *data = (CallbackData*)cookie;
    AudioOutput *me = data->getOutput();  /* getOutput() returns mData; when the CallbackData
                                             was created, the AudioOutput's this pointer was
                                             passed in and stored in mData */
    switch(event) {
    /* me is the AudioOutput that created the CallbackData, i.e. MediaPlayerService::AudioOutput;
       its mCallback is the AudioPlayer::AudioSinkCallback() passed in at open() time */
    case AudioTrack::EVENT_MORE_DATA: {
        size_t actualSize = (*me->mCallback)(
                me, buffer->raw, buffer->size, me->mCallbackCookie,
                CB_EVENT_FILL_BUFFER);

        if ((me->mFlags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) == 0 &&
                actualSize == 0 && buffer->size > 0 && me->mNextOutput == NULL) {
            // We've reached EOS but the audio track is not stopped yet,
            // keep playing silence.

            memset(buffer->raw, 0, buffer->size);
            actualSize = buffer->size;
        }

        buffer->size = actualSize;
    } break;
    }
}
Finally, AudioPlayer::AudioSinkCallback() is invoked with the CB_EVENT_FILL_BUFFER event; the buffer passed in is the AudioBuffer allocated earlier, with buffer->raw pointing to its start and buffer->size giving its size.
Back in AudioPlayer.cpp, look at AudioPlayer::AudioSinkCallback():
size_t AudioPlayer::AudioSinkCallback(
        MediaPlayerBase::AudioSink * /* audioSink */,
        void *buffer, size_t size, void *cookie,
        MediaPlayerBase::AudioSink::cb_event_t event) {
    AudioPlayer *me = (AudioPlayer *)cookie;

    switch(event) {
    case MediaPlayerBase::AudioSink::CB_EVENT_FILL_BUFFER:
        return me->fillBuffer(buffer, size);
    }
}


size_t AudioPlayer::fillBuffer(void *data, size_t size) {
    while (size_remaining > 0) {
        if (mInputBuffer == NULL) {
            status_t err;

            /* For the first packet: AudioPlayer::start() already read it
               into mFirstBuffer, so consume it from there */
            if (mIsFirstBuffer) {
                mInputBuffer = mFirstBuffer;
                mFirstBuffer = NULL;
            } else {
                /* Read data from the source into mInputBuffer */
                err = mSource->read(&mInputBuffer, &options);
            }
        }

        /* Copy the data into the AudioBuffer */
        memcpy((char *)data + size_done,
               (const char *)mInputBuffer->data() + mInputBuffer->range_offset(),
               copy);
    }
}
AudioPlayer::fillBuffer() reads data and copies it into the AudioBuffer; for the first packet, it fills in the data that was already read ahead in AudioPlayer::start().
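The fill order described above — consume the prefetched first packet, then keep reading from the source — can be sketched as follows. All names here (FillState, the deque standing in for mSource->read()) are illustrative, not the AOSP implementation, and the real code also keeps the unconsumed tail of a packet across calls, which is omitted for brevity.

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <deque>
#include <vector>

// Illustrative sketch of the fill order only (assumed names, not AOSP code):
// drain the prefetched "first packet" before pulling from the source,
// mirroring AudioPlayer::fillBuffer's mFirstBuffer handling.
struct FillState {
    std::vector<char>             firstBuffer;  // prefetched in start()
    std::deque<std::vector<char>> source;       // stands in for mSource->read()
};

size_t fillBuffer(FillState& st, char* dst, size_t size) {
    size_t done = 0;
    while (done < size) {
        std::vector<char> pkt;
        if (!st.firstBuffer.empty()) {           // first packet, if still unconsumed
            pkt.swap(st.firstBuffer);
        } else if (!st.source.empty()) {         // otherwise read from the decoder
            pkt = std::move(st.source.front());
            st.source.pop_front();
        } else {
            break;                               // end of stream
        }
        size_t copy = std::min(pkt.size(), size - done);
        std::memcpy(dst + done, pkt.data(), copy);
        done += copy;
        // a real implementation keeps any unconsumed tail of pkt; omitted here
    }
    return done;
}
```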
Here mSource is the OMXCodec passed in (as mAudioSource) when the AudioPlayer was created; the interface for reading audio data is OMXCodec::read(), which will be analyzed later when summarizing the buffer exchange process.
To summarize the flow so far: in its thread loop, AudioTrack pulls decoded data from the OMXCodec.
Returning to AudioTrack creation: besides the AudioTrack thread, an IAudioTrack is also created, in AudioTrack::createTrack_l(). The most important thing this function does is call audioFlinger->createTrack() to allocate a shared buffer used to exchange audio data between AudioTrack and AudioFlinger. Let's walk through its main flow:
status_t AudioTrack::createTrack_l()
{
    /* Open the output stream; this eventually reaches adev_open_output_stream()
       in the HAL and obtains a handle for operating the output device */
    status_t status = AudioSystem::getOutputForAttr(attr, &output,
            (audio_session_t)mSessionId, &streamType,
            mSampleRate, mFormat, mChannelMask,
            mFlags, mOffloadInfo);

    size_t temp = frameCount;   // temp may be replaced by a revised value of frameCount,
                                // but we will still need the original value also
    /* Call AudioFlinger::createTrack() to create the track and allocate the buffer */
    sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                      mSampleRate,
                                                      // AudioFlinger only sees 16-bit PCM
                                                      mFormat == AUDIO_FORMAT_PCM_8_BIT &&
                                                          !(mFlags & AUDIO_OUTPUT_FLAG_DIRECT) ?
                                                              AUDIO_FORMAT_PCM_16_BIT : mFormat,
                                                      mChannelMask,
                                                      &temp,
                                                      &trackFlags,
                                                      mSharedBuffer,
                                                      output,
                                                      tid,
                                                      &mSessionId,
                                                      mClientUid,
                                                      &status);

    /* track here is the TrackHandle returned by AudioFlinger::createTrack().
       AudioFlinger::TrackHandle::getCblk() calls mTrack->getCblk(); mTrack is the
       track passed in when the TrackHandle was constructed, i.e. the
       AudioFlinger::PlaybackThread::Track created in
       AudioFlinger::PlaybackThread::createTrack_l(). It inherits from TrackBase,
       whose getCblk() returns mCblkMemory -- as we will see, the handle of the
       buffer allocated in createTrack. So here iMem == mCblkMemory. */
    sp<IMemory> iMem = track->getCblk();
    void *iMemPointer = iMem->pointer();
    mAudioTrack = track;    // the TrackHandle
    mCblkMemory = iMem;
    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);  /* cblk points to the start of the region */
    mCblk = cblk;

    frameCount = temp;

    // Starting address of buffers in shared memory. If there is a shared buffer, buffers
    // is the value of pointer() for the shared buffer, otherwise buffers points
    // immediately after the control block. This address is for the mapping within client
    // address space. AudioFlinger::TrackBase::mBuffer is for the server address space.
    /* The actual allocation is the data buffer plus sizeof(audio_track_cblk_t);
       the audio_track_cblk_t structure stores the buffer's bookkeeping information */
    void* buffers;
    if (mSharedBuffer == 0) {
        buffers = (char*)cblk + sizeof(audio_track_cblk_t);
    } else {
        buffers = mSharedBuffer->pointer();
    }

    /* An AudioTrackClientProxy manages cblk and communicates with the server side */
    // update proxy
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
}
In this function, once the buffer has been allocated, its IMemory handle is stored in mCblkMemory and cblk is pointed at the start of the shared region.
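The layout described here — one shared allocation holding a control block with the data buffer immediately after it — comes down to plain pointer arithmetic, which can be illustrated as below. cblk_t is a stand-in for audio_track_cblk_t, not the real structure:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// cblk_t is a stand-in for audio_track_cblk_t; the fields are illustrative.
struct cblk_t {
    uint32_t front;   // server read position
    uint32_t rear;    // client write position
    uint32_t flags;
};

struct SharedRegion {
    cblk_t* cblk;     // control block at the start of the allocation
    char*   buffers;  // audio data begins right after the control block
};

// Derive both pointers from the base of a single allocation, the way the
// client does with the address returned by iMem->pointer().
SharedRegion mapRegion(void* base) {
    SharedRegion r;
    r.cblk    = static_cast<cblk_t*>(base);
    r.buffers = reinterpret_cast<char*>(base) + sizeof(cblk_t);
    return r;
}
```

The same base address thus yields both the control block and the data area, which is exactly why `buffers = (char*)cblk + sizeof(audio_track_cblk_t)` works in the non-shared-buffer case.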
Now look at the implementation of AudioFlinger::createTrack():
sp<IAudioTrack> AudioFlinger::createTrack(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t *frameCount,
        IAudioFlinger::track_flags_t *flags,
        const sp<IMemory>& sharedBuffer,
        audio_io_handle_t output,
        pid_t tid,
        int *sessionId,
        int clientUid,
        status_t *status)
{
    track = thread->createTrack_l(client, streamType, sampleRate, format,
            channelMask, frameCount, sharedBuffer, lSessionId, flags, tid, clientUid, &lStatus);

    setAudioHwSyncForSession_l(thread, (audio_session_t)lSessionId);
}
This function mainly calls AudioFlinger::PlaybackThread::createTrack_l() to do the actual work:
// PlaybackThread::createTrack_l() must be called with AudioFlinger::mLock held
sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
        const sp<AudioFlinger::Client>& client,
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t *pFrameCount,
        const sp<IMemory>& sharedBuffer,
        int sessionId,
        IAudioFlinger::track_flags_t *flags,
        pid_t tid,
        int uid,
        status_t *status)
{
    track = new Track(this, client, streamType, sampleRate, format,
            channelMask, frameCount, NULL, sharedBuffer,
            sessionId, uid, *flags, TrackBase::TYPE_DEFAULT);
}
// ----------------------------------------------------------------------------

// Track constructor must be called with AudioFlinger::mLock and ThreadBase::mLock held
AudioFlinger::PlaybackThread::Track::Track(
        PlaybackThread *thread,
        const sp<Client>& client,
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        void *buffer,
        const sp<IMemory>& sharedBuffer,
        int sessionId,
        int uid,
        IAudioFlinger::track_flags_t flags,
        track_type type)
    : TrackBase(thread, client, sampleRate, format, channelMask, frameCount,
                (sharedBuffer != 0) ? sharedBuffer->pointer() : buffer,
                sessionId, uid, flags, true /*isOut*/,
                (type == TYPE_PATCH) ? (buffer == NULL ? ALLOC_LOCAL : ALLOC_NONE) : ALLOC_CBLK,
                type)
{
}
AudioFlinger::PlaybackThread::createTrack_l() creates an AudioFlinger::PlaybackThread::Track object, which inherits from TrackBase and therefore invokes the TrackBase constructor as part of its own construction:
AudioFlinger::ThreadBase::TrackBase::TrackBase(
        ThreadBase *thread,
        const sp<Client>& client,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        void *buffer,
        int sessionId,
        int clientUid,
        IAudioFlinger::track_flags_t flags,
        bool isOut,
        alloc_type alloc,
        track_type type)
{
    /* The allocation size is frameCount * frameSize + sizeof(audio_track_cblk_t) */
    size_t size = sizeof(audio_track_cblk_t);
    size_t bufferSize = (((buffer == NULL) && audio_is_linear_pcm(format)) ?
            roundup(frameCount) : frameCount) * mFrameSize;
    if (buffer == NULL && alloc == ALLOC_CBLK) {
        size += bufferSize;
    }

    /* Allocate the buffer */
    if (client != 0) {
        mCblkMemory = client->heap()->allocate(size);
    }

    if (mCblk != NULL) {
        new(mCblk) audio_track_cblk_t();
        switch (alloc) {
        case ALLOC_CBLK:
            /* Initialize the buffer */
            if (buffer == NULL) {
                mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
                memset(mBuffer, 0, bufferSize);
            } else {
                mBuffer = buffer;
            }
            break;
        }
    }
}
Let's look at the implementation behind mCblkMemory = client->heap()->allocate(size):
sp<MemoryDealer> AudioFlinger::Client::heap() const
{
    return mMemoryDealer;
}
client->heap() returns mMemoryDealer, so the call is really mMemoryDealer->allocate(size). When AudioFlinger::Client::Client is constructed, mMemoryDealer is given 1 MB + 4 KB of space — enough for up to 32 tracks, with 8 buffers per track at 4 KB per buffer.
AudioFlinger::Client::Client(const sp<AudioFlinger>& audioFlinger, pid_t pid)
    : RefBase(),
      mAudioFlinger(audioFlinger),
      // FIXME should be a "k" constant not hard-coded, in .h or ro. property, see 4 lines below
      mMemoryDealer(new MemoryDealer(1028*1024, "AudioFlinger::Client")),  // 1MB + 1 more 4k page
      mPid(pid),
      mTimedTrackCount(0)
{
    // 1 MB of address space is good for 32 tracks, 8 buffers each, 4 KB/buffer
}
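The sizing comment in this constructor is easy to verify: 32 tracks times 8 buffers times 4 KB per buffer is exactly 1 MB, and the 1028*1024 bytes actually passed to MemoryDealer is that 1 MB plus one extra 4 KB page:

```cpp
#include <cassert>
#include <cstddef>

// Numbers taken from the constructor above; the tracks/buffers breakdown is
// from its comment.
constexpr std::size_t kTracks     = 32;
constexpr std::size_t kBuffers    = 8;
constexpr std::size_t kBufferSize = 4 * 1024;     // 4 KB per buffer
constexpr std::size_t kHeapSize   = 1028 * 1024;  // as passed to MemoryDealer

constexpr std::size_t kUsed = kTracks * kBuffers * kBufferSize;

static_assert(kUsed == 1024 * 1024, "32 tracks x 8 buffers x 4 KB == 1 MB");
static_assert(kHeapSize - kUsed == 4 * 1024, "one extra 4 KB page of slack");
```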
MemoryHeapBase::MemoryHeapBase(size_t size, uint32_t flags, char const * name)
    : mFD(-1), mSize(0), mBase(MAP_FAILED), mFlags(flags),
      mDevice(0), mNeedUnmap(false), mOffset(0)
{
    int fd = ashmem_create_region(name == NULL ? "MemoryHeapBase" : name, size);
}
This 1 MB region is allocated through Android's ashmem anonymous shared memory mechanism, which is why AudioTrack and AudioFlinger can share and pass data even though they live in different processes.
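ashmem itself is Android-specific, but the property it provides — one mapping visible from two processes — can be demonstrated with a standard POSIX shared mapping. This is an analogy, not how AudioFlinger does it: a MAP_SHARED anonymous mapping set up before fork() is visible to both parent and child, so the child's write is observed by the parent.

```cpp
#include <cassert>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

// POSIX analogy for ashmem: a MAP_SHARED anonymous mapping survives fork(),
// so the parent observes the child's write -- the same property ashmem gives
// AudioTrack and AudioFlinger across a process boundary.
int sharedCounterDemo() {
    void* mem = mmap(nullptr, sizeof(int), PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return -1;
    int* counter = static_cast<int*>(mem);
    *counter = 0;

    pid_t pid = fork();
    if (pid == 0) {                // child: the "producer" writes
        *counter = 42;
        _exit(0);
    }
    waitpid(pid, nullptr, 0);      // parent: the "consumer" reads after the child exits
    int seen = *counter;
    munmap(mem, sizeof(int));
    return seen;
}
```

With a private (MAP_PRIVATE) mapping, the parent would still see 0; the shared mapping is what makes the cross-process handoff work.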
This concludes the AudioTrack::createTrack_l() flow.
Returning to the earlier thread: MediaPlayerService::AudioOutput::start() calls AudioTrack::start() to start playback:
status_t AudioTrack::start()
{
    AutoMutex lock(mLock);

    /* Already ACTIVE: nothing to do */
    if (mState == STATE_ACTIVE) {
        return INVALID_OPERATION;
    }

    mInUnderrun = true;

    /* If we were stopping, stay in the stopping state; otherwise become ACTIVE */
    State previousState = mState;
    if (previousState == STATE_PAUSED_STOPPING) {
        mState = STATE_STOPPING;
    } else {
        mState = STATE_ACTIVE;
    }
    (void) updateAndGetPosition_l();
    /* If we were stopped, reset the position to 0 and play from the beginning */
    if (previousState == STATE_STOPPED || previousState == STATE_FLUSHED) {
        // reset current position as seen by client to 0
        mPosition = 0;
        // For offloaded tracks, we don't know if the hardware counters are really zero here,
        // since the flush is asynchronous and stop may not fully drain.
        // We save the time when the track is started to later verify whether
        // the counters are realistic (i.e. start from zero after this time).
        mStartUs = getNowUs();

        // force refresh of remaining frames by processAudioBuffer() as last
        // write before stop could be partial.
        mRefreshRemaining = true;

        // for static track, clear the old flags when start from stopped state
        if (mSharedBuffer != 0)
            android_atomic_and(
                    ~(CBLK_LOOP_CYCLE | CBLK_LOOP_FINAL | CBLK_BUFFER_END),
                    &mCblk->mFlags);
    }
    mNewPosition = mPosition + mUpdatePeriod;
    int32_t flags = android_atomic_and(~CBLK_DISABLED, &mCblk->mFlags);

    /* Is there a callback thread? An app driving AudioTrack directly has none,
       but a system player such as AudioPlayer does */
    sp<AudioTrackThread> t = mAudioTrackThread;
    if (t != 0) {
        if (previousState == STATE_STOPPING) {
            /* interrupt */
            mProxy->interrupt();
        } else {
            /* resume playback */
            t->resume();
        }
    } else {
        /* Save the current thread priority so it can be restored on stop */
        mPreviousPriority = getpriority(PRIO_PROCESS, 0);
        get_sched_policy(0, &mPreviousSchedulingGroup);
        /* Raise the thread priority to ANDROID_PRIORITY_AUDIO */
        androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
    }

    status_t status = NO_ERROR;
    if (!(flags & (CBLK_INVALID | CBLK_STREAM_FATAL_ERROR))) {
        /* If the shared cblk is usable, call the server-side start */
        status = mAudioTrack->start();
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;
        }
    }
    if (flags & (CBLK_INVALID | CBLK_STREAM_FATAL_ERROR)) {
        status = restoreTrack_l("start");
    }

    if (status != NO_ERROR) {
        ALOGE("start() status %d", status);
        mState = previousState;
        if (t != 0) {
            if (previousState != STATE_STOPPING) {
                t->pause();
            }
        } else {
            setpriority(PRIO_PROCESS, 0, mPreviousPriority);
            set_sched_policy(0, mPreviousSchedulingGroup);
        }
    }

    return status;
}
There are two important steps in AudioTrack::start(): calling AudioTrack::AudioTrackThread::resume(), and calling AudioFlinger::TrackHandle::start(), which forwards to AudioFlinger::PlaybackThread::Track::start().
First, look at AudioTrack::AudioTrackThread::resume():
void AudioTrack::AudioTrackThread::resume()
{
    AutoMutex _l(mMyLock);
    mIgnoreNextPausedInt = true;
    /* After the AudioTrackThread is created, mPaused is true and mPausedInt is false */
    if (mPaused || mPausedInt) {
        mPaused = false;
        mPausedInt = false;
        /* Signal AudioTrack::AudioTrackThread::threadLoop() */
        mMyCond.signal();
    }
}

bool AudioTrack::AudioTrackThread::threadLoop()
{
    /* At the start of playback mPaused is true, so threadLoop() sleeps here.
       When AudioTrack::AudioTrackThread::resume() signals the condition,
       threadLoop() wakes up and returns; it is then re-run and re-checks
       mPaused, going back to sleep if it is still true, which guards
       against spurious wakeups. */
    {
        AutoMutex _l(mMyLock);
        if (mPaused) {
            mMyCond.wait(mMyLock);
            // caller will check for exitPending()
            return true;
        }
        if (mIgnoreNextPausedInt) {
            mIgnoreNextPausedInt = false;
            mPausedInt = false;
        }
    }
}
The essential job of AudioTrack::AudioTrackThread::resume() is to clear the paused state and wake up the AudioTrack::AudioTrackThread::threadLoop() thread.
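The pause/resume handshake here — start paused, clear the flag under the lock, signal, and re-check the flag after waking to guard against spurious wakeups — can be sketched with std::condition_variable in place of Android's Condition. PausableWorker and its members are illustrative names, not AOSP code:

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Illustrative sketch: the worker starts paused (like AudioTrackThread) and
// its loop waits on a predicate, which re-checks the flag after every wakeup
// exactly as threadLoop() re-checks mPaused.
class PausableWorker {
public:
    void run(int iterations) {
        mThread = std::thread([this, iterations] {
            for (int i = 0; i < iterations; ++i) {
                std::unique_lock<std::mutex> lk(mLock);
                mCond.wait(lk, [this] { return !mPaused; });  // predicate re-check
                ++mWork;  // stands in for one processAudioBuffer() pass
            }
        });
    }
    void resume() {  // clear the pause flag under the lock, then signal
        std::lock_guard<std::mutex> lk(mLock);
        mPaused = false;
        mCond.notify_one();
    }
    void join() { mThread.join(); }
    int  work() const { return mWork; }

private:
    std::mutex              mLock;
    std::condition_variable mCond;
    bool                    mPaused = true;  // paused until resume() is called
    std::atomic<int>        mWork{0};
    std::thread             mThread;
};
```

Because the flag is cleared under the same lock the wait uses, resume() is race-free regardless of whether the worker has reached the wait yet.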
Now look at AudioFlinger::PlaybackThread::Track::start(). It mainly does two things: it adds the current track to the active track list, and it broadcasts a signal to wake up AudioFlinger::PlaybackThread::threadLoop():
status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event __unused,
                                                    int triggerSession __unused)
{
    status = playbackThread->addTrack_l(this);
}


// addTrack_l() must be called with ThreadBase::mLock held
status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track)
{
    /* Add the current track to the active track list */
    if (mActiveTracks.indexOf(track) < 0) {
        mActiveTracks.add(track);
        mWakeLockUids.add(track->uid());
        mActiveTracksGeneration++;
        mLatestActiveTrack = track;
    }

    /* Broadcast to wake the AudioFlinger thread from sleep and start playing data */
    onAddNewTrack_l();
    return status;
}

void AudioFlinger::PlaybackThread::onAddNewTrack_l()
{
    broadcast_l();
}

void AudioFlinger::PlaybackThread::broadcast_l()
{
    mSignalPending = true;
    /* Broadcast to wake the AudioFlinger thread */
    mWaitWorkCV.broadcast();
}
At this point, the entire playback setup is complete: both AudioTrack::AudioTrackThread::threadLoop() and AudioFlinger::PlaybackThread::threadLoop() have been woken from sleep, and audio playback begins.