Notes on 《深入理解Android(卷1)》, Part 5: Chapter 6, Understanding Binder in Depth
2013-01-04 09:10 平淡
My brief summary:
The analysis is organized around MediaServer's main() function [----> Main_mediaserver.cpp::main]:

int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());        // (1) obtain a ProcessState instance
    sp<IServiceManager> sm = defaultServiceManager();   // (2) call defaultServiceManager to get an IServiceManager
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();                         // initialize AudioFlinger, the core of the audio system
    MediaPlayerService::instantiate();                   // (3) the multimedia MediaPlayer service; our main entry point
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();             // (4) create a thread pool
    IPCThreadState::self()->joinThreadPool();            // (5) add the current thread to the thread pool created above
}

1. sp<ProcessState> proc(ProcessState::self()); // obtain the ProcessState instance
   What it does: a. opens the binder device; b. maps memory for receiving data.
2. sp<IServiceManager> sm = defaultServiceManager(); // returns an IServiceManager object through which we can interact with ServiceManager
   Flow: defaultServiceManager -----> interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL));
   where getContextObject(NULL) -----> getStrongProxyForHandle(0) ----> returns BpBinder(0).
   So interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL)) becomes interface_cast<IServiceManager>(BpBinder(0)),
   which in turn ----> IServiceManager::asInterface(BpBinder(0)).
   IServiceManager uses two important macros:
   DECLARE_META_INTERFACE: declares the asInterface function.
   IMPLEMENT_META_INTERFACE: implements asInterface ----> returns new BpServiceManager(BpBinder(0)).
   Net effect: a. a BpBinder object with handle 0 is obtained; b. a BpServiceManager object is obtained (it derives from IServiceManager and implements IServiceManager's business functions), and its mRemote is that BpBinder.
3. MediaPlayerService::instantiate() (which calls addService(name, new MediaPlayerService()))
   ----> addService ----> BpBinder->transact ----> IPCThreadState::self()->transact
   IPCThreadState::self(): returns the calling thread's IPCThreadState object.
   IPCThreadState::transact():
   a. writeTransactionData: writes the request into mOut.
   b. waitForResponse: sends the request and receives the reply.
      b1. talkWithDriver: exchanges data with the binder driver.
      b2. cmd = mIn.readInt32(): reads the reply ----> executeCommand (two cases are of interest):
          b21. case BR_TRANSACTION: dispatches to the BBinder object (in practice, the object that implements BnServiceXXX).
          b22. case BR_SPAWN_LOOPER: creates a new thread for communicating with Binder.
4. ProcessState::self()->startThreadPool();
   ----> spawnPooledThread(true) ----> new PoolThread(isMain) ----> IPCThreadState::self()->joinThreadPool(mIsMain)
   So both the main thread's IPCThreadState and the newly created thread call this function.
5. IPCThreadState::self()->joinThreadPool();
   The main IPCThreadState thread and the newly created thread are both in talkWithDriver; through joinThreadPool they read the binder device and check whether requests have arrived.
Now for the detailed analysis.
Binder is an IPC mechanism provided by the Android system. Android can be viewed as a client/server architecture built on Binder communication; like a network, Binder ties the different parts of the system together.
Knowledge point 1: besides the Client and Server of an ordinary C/S architecture, there is also a global ServiceManager, whose job is to manage the system's various services (Service). The relationship among the three is shown in the figure below:
From the figure:
(1) A Server process must first register its Service with ServiceManager, so the Server is a client of ServiceManager.
(2) Before a Client can use a Service, it must first query ServiceManager for that Service's information, so the Client is also a client of ServiceManager.
(3) Using the information it obtained, the Client establishes a communication channel to the Server process hosting the Service and can then interact with the Service directly. So the Client is a client of the Service.
(4) The important point is that all interactions among the three are carried over Binder, so examining the relationship between any two of them reveals how Binder works.
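To make the triangle concrete, here is a minimal client-side sketch; it assumes a native process that links libbinder, and "media.player" is simply the name MediaPlayerService registers later in this chapter. Treat it as an illustration, not the framework's own code.

#include <binder/IServiceManager.h>
#include <utils/String16.h>

using namespace android;

int main()
{
    // Client -> ServiceManager: ask the context manager (handle 0) for a service by name.
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("media.player"));

    // Client -> Service: 'binder' is a BpBinder whose handle points at the Server process;
    // wrapping it in the matching Bp##INTERFACE proxy lets the Client call the Service directly.
    // (Server -> ServiceManager is the mirror image: the Server calls sm->addService(name, service).)
    return binder != NULL ? 0 : 1;
}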
Knowledge point 2: Dissecting MediaServer
1. MediaServer hosts several important services:
AudioFlinger: the core service of the audio system.
AudioPolicyService: the audio system's service in charge of audio policy.
MediaPlayerService: the key service of the multimedia system.
CameraService: the service for camera and photo functionality.
2. MediaServer's entry function [----> Main_mediaserver.cpp]
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());        // (1) obtain a ProcessState instance
    sp<IServiceManager> sm = defaultServiceManager();   // (2) call defaultServiceManager to get an IServiceManager
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();                         // initialize AudioFlinger, the core of the audio system
    MediaPlayerService::instantiate();                   // (3) the multimedia MediaPlayer service; our main entry point
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();             // (4) create a thread pool
    IPCThreadState::self()->joinThreadPool();            // (5) add the current thread to the thread pool created above
}
(1) The one and only ProcessState (each process has exactly one ProcessState)
[---->Main_mediaserver.cpp]
sp<ProcessState> proc(ProcessState::self());   // (1) obtain a ProcessState instance
(1.1) The singleton ProcessState [----> ProcessState.cpp]
sp<ProcessState> ProcessState::self()
{
    // gProcess is a global defined in Static.cpp; it is NULL the first time through.
    if (gProcess != NULL) return gProcess;

    AutoMutex _l(gProcessMutex);
    // Create a ProcessState object and store it in gProcess.
    if (gProcess == NULL) gProcess = new ProcessState;
    return gProcess;
}
Note: self() uses the singleton pattern. Together with the name ProcessState, this makes it clear that each process has exactly one ProcessState object.
(1.2) ProcessState's constructor [----> ProcessState.cpp] (this is where the Binder device is opened)
ProcessState::ProcessState()
    : mDriverFD(open_driver())   // the Binder device is opened right here
    , mVMStart(MAP_FAILED)       // start address of the mapped memory
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // availabla).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            LOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }
    if (mDriverFD < 0) {
        // Need to run without the driver, starting our own thread pool.
    }
}
(1.3) Opening the Binder device [----> ProcessState.cpp]
open_driver opens /dev/binder, a virtual device that Android adds to the kernel specifically for inter-process communication. Its implementation:
static int open_driver()
{
    if (gSingleProcess) {
        return -1;
    }

    int fd = open("/dev/binder", O_RDWR);   // open the /dev/binder device
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers;
#if defined(HAVE_ANDROID_OS)
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
#else
        status_t result = -1;
        errno = EPERM;
#endif
        if (result == -1) {
            LOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            LOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
#if defined(HAVE_ANDROID_OS)
        size_t maxThreads = 15;
        // Tell the binder driver, via ioctl, that this fd supports at most 15 threads.
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            LOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
#endif
    } else {
        LOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}
That completes the analysis of ProcessState::self(). To summarize:
(a) It opens /dev/binder, which establishes a channel to the kernel's Binder driver.
(b) It calls mmap on the returned fd, so the Binder driver sets aside a chunk of memory for receiving data.
(c) Because ProcessState is a singleton, the device is opened only once per process.
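For reference, the size passed to mmap above is BINDER_VM_SIZE. In the ProcessState.cpp of this era it is defined roughly as below; treat the exact value as version-dependent rather than authoritative.

#define BINDER_VM_SIZE ((1*1024*1024) - (4096*2))   // about 1 MB minus two pages of receive buffer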
(2) The teleportation trick ---- defaultServiceManager [----> IServiceManager.cpp]
This function returns an IServiceManager object, and through that object we can interact with another process, ServiceManager. That is the "teleportation".
(2.1) Setting up the trick
Which functions does defaultServiceManager call, and what exactly is the IServiceManager it returns? The source:
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {   // again the singleton pattern
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }
    return gDefaultServiceManager;
}
So it calls ProcessState's getContextObject function, passing NULL. Let's follow getContextObject.
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    // caller is NULL here; the return type is IBinder.
    // supportsProcesses() reports whether open_driver managed to open the device;
    // a real device always supports processes.
    if (supportsProcesses()) {
        return getStrongProxyForHandle(0);   // the branch taken on a real device
    } else {
        return getContextObject(String16("default"), caller);
    }
}

The source of getStrongProxyForHandle:

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);

    // Look up the resource entry for this handle. If lookupHandleLocked finds no matching
    // entry, it creates a new one and returns it; the new entry's contents still need filling in.
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            // A freshly created entry has binder == NULL, so we take this branch.
            // Note that handle is 0 here.
            b = new BpBinder(handle);   // create a BpBinder
            e->binder = b;              // fill in the entry
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;   // returns BpBinder(handle); again, handle is 0
}
(2.2) The trick's prop ---- BpBinder
Every magic trick needs props, and the prop for this teleportation trick is BpBinder. What is it? It has a twin brother, BBinder. The Binder family tree is shown in the figure below:
(a) BpBinder is the proxy class the client uses to interact with the Server; the "p" stands for Proxy.
(b) BBinder sits at the other end; it is the destination the Proxy talks to. If the Proxy represents the client, BBinder represents the server. BpBinder and BBinder are paired one-to-one: a BpBinderA can only interact with its corresponding BBinderA.
Questions:
(a) Why is a BBinder not created here?
Because we are ServiceManager's client, so naturally we use the proxy end to interact with ServiceManager.
(b) How does a BpBinder identify the BBinder it corresponds to?
The Binder system identifies the corresponding BBinder by handle.
Note: the handle we pass to BpBinder's constructor is 0. That 0 has a special meaning throughout the Binder system ---- 0 denotes the BBinder that belongs to ServiceManager.
Now look at BpBinder's constructor:
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)   // handle is 0
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    LOGV("Creating BpBinder %p handle %d\n", this, mHandle);
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    // Another important object appears here: IPCThreadState.
    IPCThreadState::self()->incWeakHandle(handle);
}
Judging from the code, neither BpBinder nor BBinder touches the /dev/binder device anywhere. They really are just props.
Recall how the BpBinder prop came on stage:
gDefaultServiceManager = interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL));
which then becomes
gDefaultServiceManager = interface_cast<IServiceManager>(new BpBinder(0));
Now look at interface_cast; it is really a piece of sleight of hand.
(2.3) Sleight of hand ---- interface_cast
At first glance, doesn't interface_cast look like a type cast? Wrong!!! (This is examined further under "(b) Hooking business to communication" below.)
The definition of interface_cast [----> IInterface.h]:
template<typename INTERFACE> inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj) { return INTERFACE::asInterface(obj); }
So interface_cast<IServiceManager>() is equivalent to:
inline sp< IServiceManager > interface_cast(const sp<IBinder>& obj) { return IServiceManager::asInterface(obj); }
Which hands us over to IServiceManager; at least the sleight of hand has been exposed.
(2.4) Through the fog ---- IServiceManager
Question: BpBinder and BBinder are about communication, so how does the business-layer logic get built so neatly on top of the Binder mechanism?
Answer: IServiceManager is the place to see it.
(a) Defining the business logic
IServiceManager defines the services that ServiceManager provides. It is declared in [----> IServiceManager.h]:
class IServiceManager : public IInterface
{
public:
    DECLARE_META_INTERFACE(ServiceManager);   // an absolutely crucial macro!!!

    // The business functions provided by ServiceManager follow.
    /**
     * Retrieve an existing service, blocking for a few seconds
     * if it doesn't yet exist.
     */
    virtual sp<IBinder>         getService( const String16& name) const = 0;

    /**
     * Retrieve an existing service, non-blocking.
     */
    virtual sp<IBinder>         checkService( const String16& name) const = 0;

    /**
     * Register a service.
     */
    virtual status_t            addService( const String16& name,
                                            const sp<IBinder>& service) = 0;

    /**
     * Return list of all existing services.
     */
    virtual Vector<String16>    listServices() = 0;

    enum {
        GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION,
        CHECK_SERVICE_TRANSACTION,
        ADD_SERVICE_TRANSACTION,
        LIST_SERVICES_TRANSACTION,
    };
};
(b) Hooking business to communication
Android uses the DECLARE_META_INTERFACE and IMPLEMENT_META_INTERFACE macros to bolt business and communication firmly together. Both macros are defined in [----> IInterface.h].
First, the definition of DECLARE_META_INTERFACE:
#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const android::String16 descriptor;                          \
    static android::sp<I##INTERFACE> asInterface(                       \
            const android::sp<android::IBinder>& obj);                  \
    virtual const android::String16& getInterfaceDescriptor() const;    \
    I##INTERFACE();                                                     \
    virtual ~I##INTERFACE();
Substituting in DECLARE_META_INTERFACE(ServiceManager), the declaration expands to:

static const android::String16 descriptor;
static android::sp<IServiceManager> asInterface(
        const android::sp<android::IBinder>& obj);
virtual const android::String16& getInterfaceDescriptor() const;
IServiceManager();
virtual ~IServiceManager();
Next, the definition of IMPLEMENT_META_INTERFACE:
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const android::String16 I##INTERFACE::descriptor(NAME);             \
    const android::String16&                                            \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    android::sp<I##INTERFACE> I##INTERFACE::asInterface(                \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<I##INTERFACE> intr;                                 \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }
IServiceManager.cpp contains the line:
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");
Expanding this macro gives:

const android::String16 IServiceManager::descriptor("android.os.IServiceManager");

// Implementation of getInterfaceDescriptor
const android::String16& IServiceManager::getInterfaceDescriptor() const
{
    // returns the descriptor string, i.e. "android.os.IServiceManager"
    return IServiceManager::descriptor;
}

// Implementation of asInterface
android::sp<IServiceManager> IServiceManager::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            // obj is the BpBinder(0) we just created
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}

IServiceManager::IServiceManager() { }
IServiceManager::~IServiceManager() { }
In asInterface, the line intr = new BpServiceManager(obj); tells the whole story: asInterface uses the BpBinder object as a constructor argument to create a new BpServiceManager.
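To see the two macros from a user's point of view, here is a sketch of how a brand-new interface would typically be declared and wired up. IHello, BpHello, SAY_HELLO and the descriptor string "demo.IHello" are all hypothetical; only the macro mechanics come from the text above.

// IHello.h -- declare the interface (hypothetical example)
#include <binder/IInterface.h>
#include <binder/Parcel.h>

using namespace android;

class IHello : public IInterface {
public:
    DECLARE_META_INTERFACE(Hello);      // declares descriptor, asInterface(), ctor/dtor
    virtual status_t sayHello() = 0;    // a business function
    enum { SAY_HELLO = IBinder::FIRST_CALL_TRANSACTION };
};

// IHello.cpp -- the proxy plus the glue generated by the macro
class BpHello : public BpInterface<IHello> {
public:
    BpHello(const sp<IBinder>& impl) : BpInterface<IHello>(impl) {}
    virtual status_t sayHello() {
        Parcel data, reply;
        data.writeInterfaceToken(IHello::getInterfaceDescriptor());
        return remote()->transact(SAY_HELLO, data, &reply);   // hand the work to BpBinder
    }
};

// Generates IHello::asInterface(), which returns new BpHello(obj) for a remote binder.
IMPLEMENT_META_INTERFACE(Hello, "demo.IHello");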
Now look at the IServiceManager family tree:
From the family tree:
(1) IServiceManager, BpServiceManager and BnServiceManager are all tied to the business logic.
(2) BnServiceManager derives from both IServiceManager and BBinder, so it can take part in Binder communication directly.
(3) BpServiceManager derives from BpInterface, but it has no inheritance relationship with BpBinder.
(4) BnServiceManager is an abstract class; its business functions must ultimately be implemented by subclasses.
Question: if BpServiceManager has no direct relationship with BpBinder, how does it interact with Binder?
Answer: the mRemote member of its ancestor class BpRefBase is exactly that BpBinder!!!
Don't believe it? Let's trace it:
The BpServiceManager class [----> IServiceManager.cpp]:
class BpServiceManager : public BpInterface<IServiceManager>
{
public:
    // impl is an IBinder; in fact it is our BpBinder object. Ignore that for now,
    // and note that the base-class BpInterface constructor is invoked here.
    BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }
    ……
};
BpInterface is implemented as follows [----> IInterface.h]:
template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)   // base-class constructor
{
}
[----> Binder.cpp, class BpRefBase]

BpRefBase::BpRefBase(const sp<IBinder>& o)
    // mRemote ends up being the BpBinder(0) that was new'ed earlier
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}
The end result: BpServiceManager has a member variable, mRemote, that points to the BpBinder.
With that, the trick is over. To summarize, defaultServiceManager leaves us with two key objects:
(a) a BpBinder object whose handle value is 0;
(b) a BpServiceManager object whose mRemote is that BpBinder.
BpServiceManager implements IServiceManager's business functions, and now it also has BpBinder as its communication representative, so the remaining work becomes much simpler.
Next, the registration of MediaPlayerService shows how the inside of a business function actually works.
(3) Registering MediaPlayerService
(3.1) Work at the business layer
Back in MediaServer, look at MediaPlayerService::instantiate():
void MediaPlayerService::instantiate()
{
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
As analyzed earlier, defaultServiceManager returns a BpServiceManager object, a subclass of IServiceManager. Here is the source of its addService:
virtual status_t addService(const String16& name, const sp<IBinder>& service)
{
    // A Parcel can be thought of as a data packet.
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    // remote() returns the mRemote member, i.e. the BpBinder object.
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
addService is a business-layer function: it packs the request into data and then calls BpBinder's transact, handing the communication work over to BpBinder.
At this point the business layer's job is clear: pack up the request and pass it to the communication layer.
(3.2) Work at the communication layer
We already know that BpBinder is only a prop: nothing in its source touches the Binder device. So how does the interaction happen? The secret is in the transact function. Here is BpBinder::transact:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        // BpBinder hands its transact work over to IPCThreadState::transact!!!
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
(3.2.1) A closer look at IPCThreadState
(a) The IPCThreadState::self() function
[---->IPCThreadState.cpp]
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {   // false the first time through
restart:
        const pthread_key_t k = gTLS;
        /*
          TLS is short for thread-local storage.
          Every thread has its own TLS, and threads do not share it.
          pthread_getspecific / pthread_setspecific read and write its contents.
          Here we fetch the IPCThreadState object stored in this thread's TLS.
        */
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        // new an object; the constructor calls pthread_setspecific.
        return new IPCThreadState;
    }

    if (gShutdown) return NULL;

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        if (pthread_key_create(&gTLS, threadDestructor) != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
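If the pthread thread-local-storage pattern used above is unfamiliar, it boils down to the following self-contained sketch. These are standard POSIX calls and have nothing to do with Binder itself; the example merely mirrors the gTLS / pthread_getspecific / pthread_setspecific usage in IPCThreadState::self().

#include <pthread.h>
#include <stdio.h>

static pthread_key_t gKey;

static void destructor(void* value) { delete static_cast<int*>(value); }

static void* worker(void*)
{
    // Each thread sees its own slot for gKey: set it once...
    pthread_setspecific(gKey, new int(42));
    // ...and any later call in the same thread gets the same pointer back.
    int* mine = static_cast<int*>(pthread_getspecific(gKey));
    printf("thread %p -> %d\n", (void*)pthread_self(), *mine);
    return NULL;
}

int main()
{
    pthread_key_create(&gKey, destructor);   // plays the role of gTLS in IPCThreadState::self()
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}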
Next, IPCThreadState's constructor:
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(androidGetTid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);   // store ourselves in the thread-local storage
    clearCaller();
    // mIn and mOut are two Parcels; think of them as the receive and send command buffers.
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
Every thread has its own IPCThreadState, and every IPCThreadState has an mIn and an mOut. mIn receives data coming from the Binder device; mOut holds data to be sent to the Binder device.
(b) The IPCThreadState::transact function
// note: handle is 0 here, and it identifies the destination of the communication
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        /*
          The first argument, BC_TRANSACTION, is the command code an application uses
          when sending a message to the binder device; the command codes the binder device
          uses to reply to the application start with BR_.
          The command codes are defined in binder_module.h.
        */
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
#if 0
        if (code == 4) { // relayout
            LOGI(">>>>>> CALLING transaction 4");
        } else {
            LOGI(">>>>>> CALLING transaction %d", code);
        }
#endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
#if 0
        if (code == 4) { // relayout
            LOGI("<<<<<< RETURNING transaction 4");
        } else {
            LOGI("<<<<<< RETURNING transaction %d", code);
        }
#endif
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}
The flow of this function is simple: send the data, then wait for the result. But pay attention to the role the handle parameter plays here.
Here is writeTransactionData:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    // The handle value goes into target; it identifies the destination,
    // and 0 is the mark of ServiceManager.
    tr.target.handle = handle;
    // code is the command code, used for switch/case dispatch on the other side.
    tr.code = code;
    tr.flags = binderFlags;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = statusBuffer;
        tr.offsets_size = 0;
        tr.data.ptr.offsets = NULL;
    } else {
        return (mLastError = err);
    }

    // Write the command into mOut.
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
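The binder_transaction_data structure being filled in above comes from the kernel's binder header. A simplified sketch of its layout helps in reading this and the later code; field names follow the binder.h of this era, and exact types vary across kernel versions, so treat this as an approximation rather than the authoritative definition.

struct binder_transaction_data {
    union {
        size_t handle;     /* destination when sent through a proxy (here 0 = ServiceManager) */
        void*  ptr;        /* destination when delivered to a local BBinder */
    } target;
    void*        cookie;   /* opaque pointer carried with the target (the BBinder, on the receiving side) */
    unsigned int code;     /* transaction code, e.g. ADD_SERVICE_TRANSACTION */
    unsigned int flags;    /* e.g. TF_ONE_WAY, TF_ACCEPT_FDS, TF_STATUS_CODE */
    pid_t        sender_pid;
    uid_t        sender_euid;
    size_t       data_size;     /* bytes of transaction data */
    size_t       offsets_size;  /* bytes of offsets to flattened binder objects */
    union {
        struct {
            const void* buffer;   /* the flattened Parcel data */
            const void* offsets;  /* offsets of binder objects inside buffer */
        } ptr;
        uint8_t buf[8];
    } data;
};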
With that, the request built in addService has been written into mOut. Next comes sending the request and receiving the reply, which is implemented in waitForResponse:
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        // talkWithDriver!!! Does the name need any more explanation?
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                LOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            // the executeCommand function
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
OK, the request has been sent. Assume a reply comes back right away, and look at the source of executeCommand:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch (cmd) {
    case BR_ERROR:
        result = mIn.readInt32();
        break;

    case BR_OK:
        break;

    case BR_ACQUIRE:
        refs = (RefBase::weakref_type*)mIn.readInt32();
        obj = (BBinder*)mIn.readInt32();
        LOG_ASSERT(refs->refBase() == obj,
                   "BR_ACQUIRE: object %p does not match cookie %p (expected %p)",
                   refs, obj, refs->refBase());
        obj->incStrong(mProcess.get());
        IF_LOG_REMOTEREFS() {
            LOG_REMOTEREFS("BR_ACQUIRE from driver on %p", obj);
            obj->printRefs();
        }
        mOut.writeInt32(BC_ACQUIRE_DONE);
        mOut.writeInt32((int32_t)refs);
        mOut.writeInt32((int32_t)obj);
        break;

    case BR_RELEASE:
        refs = (RefBase::weakref_type*)mIn.readInt32();
        obj = (BBinder*)mIn.readInt32();
        LOG_ASSERT(refs->refBase() == obj,
                   "BR_RELEASE: object %p does not match cookie %p (expected %p)",
                   refs, obj, refs->refBase());
        IF_LOG_REMOTEREFS() {
            LOG_REMOTEREFS("BR_RELEASE from driver on %p", obj);
            obj->printRefs();
        }
        mPendingStrongDerefs.push(obj);
        break;

    case BR_INCREFS:
        refs = (RefBase::weakref_type*)mIn.readInt32();
        obj = (BBinder*)mIn.readInt32();
        refs->incWeak(mProcess.get());
        mOut.writeInt32(BC_INCREFS_DONE);
        mOut.writeInt32((int32_t)refs);
        mOut.writeInt32((int32_t)obj);
        break;

    case BR_DECREFS:
        refs = (RefBase::weakref_type*)mIn.readInt32();
        obj = (BBinder*)mIn.readInt32();
        // NOTE: This assertion is not valid, because the object may no
        // longer exist (thus the (BBinder*)cast above resulting in a different
        // memory address).
        //LOG_ASSERT(refs->refBase() == obj,
        //           "BR_DECREFS: object %p does not match cookie %p (expected %p)",
        //           refs, obj, refs->refBase());
        mPendingWeakDerefs.push(refs);
        break;

    case BR_ATTEMPT_ACQUIRE:
        refs = (RefBase::weakref_type*)mIn.readInt32();
        obj = (BBinder*)mIn.readInt32();
        {
            const bool success = refs->attemptIncStrong(mProcess.get());
            LOG_ASSERT(success && refs->refBase() == obj,
                       "BR_ATTEMPT_ACQUIRE: object %p does not match cookie %p (expected %p)",
                       refs, obj, refs->refBase());
            mOut.writeInt32(BC_ACQUIRE_RESULT);
            mOut.writeInt32((int32_t)success);
        }
        break;

    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            LOG_ASSERT(result == NO_ERROR, "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;

            int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
            if (gDisableBackgroundScheduling) {
                if (curPrio > ANDROID_PRIORITY_NORMAL) {
                    // We have inherited a reduced priority from the caller, but do not
                    // want to run in that state in this process.  The driver set our
                    // priority already (though not our scheduling class), so bounce
                    // it back to the default before invoking the transaction.
                    setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                }
            } else {
                if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                    // We want to use the inherited priority from the caller.
                    // Ensure this thread is in the background scheduling class,
                    // since the driver won't modify scheduling classes for us.
                    // The scheduling group is reset to default by the caller
                    // once this method returns after the transaction is complete.
                    androidSetThreadSchedulingGroup(mMyThreadId, ANDROID_TGROUP_BG_NONINTERACT);
                }
            }

            //LOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BR_TRANSACTION thr " << (void*)pthread_self()
                    << " / obj " << tr.target.ptr << " / code "
                    << TypeCode(tr.code) << ": " << indent << buffer
                    << dedent << endl
                    << "Data addr = "
                    << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
                    << ", offsets addr="
                    << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
            }

            if (tr.target.ptr) {
                // b here is in fact the object that implements BnServiceXXX.
                sp<BBinder> b((BBinder*)tr.cookie);
                const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            } else {
                // the_context_object is a global defined in IPCThreadState.cpp;
                // it can be set via setTheContextObject.
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            }

            //LOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
        }
        break;

    case BR_DEAD_BINDER:
        {
            BpBinder *proxy = (BpBinder*)mIn.readInt32();
            proxy->sendObituary();
            mOut.writeInt32(BC_DEAD_BINDER_DONE);
            mOut.writeInt32((int32_t)proxy);
        }
        break;

    case BR_CLEAR_DEATH_NOTIFICATION_DONE:
        {
            BpBinder *proxy = (BpBinder*)mIn.readInt32();
            proxy->getWeakRefs()->decWeak(proxy);
        }
        break;

    case BR_FINISHED:
        result = TIMED_OUT;
        break;

    case BR_NOOP:
        break;

    case BR_SPAWN_LOOPER:
        // On this command the driver tells us to create a new thread for Binder communication.
        mProcess->spawnPooledThread(false);
        break;

    default:
        printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }

    if (result != NO_ERROR) {
        mLastError = result;
    }

    return result;
}
How the code actually talks to the binder device can be seen in talkWithDriver. Requests are not sent and received with write and read, but with the ioctl function. [----> IPCThreadState.cpp]
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    LOG_ASSERT(mProcess->mDriverFD >= 0, "Binder driver is not opened");

    // binder_write_read is the structure used to exchange data with the binder device.
    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    // Fill in the request command.
    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        // Fill in the receive-buffer information; incoming data will land directly in mIn.
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        // Send/receive the request via ioctl.
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
            << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}
That completes the analysis of MediaPlayerService's registration. Now for startThreadPool and joinThreadPool.
(4) Sweeping up the leaves ---- startThreadPool and joinThreadPool
(4.1) Creating the workforce ---- startThreadPool
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    // If the thread pool has already been started, this function is a no-op.
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);   // note: the argument is true
    }
}
Now look at spawnPooledThread:
void ProcessState::spawnPooledThread(bool isMain)
{
    // isMain, as passed in above, is true.
    if (mThreadPoolStarted) {
        int32_t s = android_atomic_add(1, &mThreadPoolSeq);
        char buf[32];
        sprintf(buf, "Binder Thread #%d", s);
        LOGV("Spawning new pooled thread, name=%s\n", buf);
        sp<Thread> t = new PoolThread(isMain);
        t->run(buf);
    }
}
PoolThread is a small Thread subclass defined in ProcessState.cpp:
class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        // The new thread simply joins the thread pool; returning false means threadLoop
        // will not be called again -- the real looping happens inside joinThreadPool.
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

    const bool mIsMain;
};
(4.2) All threads converge ---- joinThreadPool
Both the main thread's IPCThreadState and the newly created thread call this function. The implementation:
void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n",
                   (void*)pthread_self(), getpid());

    // isMain is true here, so the corresponding request is written into mOut.
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    // This thread may have been spawned by a thread that was in the background
    // scheduling group, so first we will make sure it is in the default/foreground
    // one to avoid performing an initial transaction in the background.
    androidSetThreadSchedulingGroup(mMyThreadId, ANDROID_TGROUP_DEFAULT);

    status_t result;
    do {
        int32_t cmd;

        // When we've cleared the incoming command queue, process any pending derefs
        if (mIn.dataPosition() >= mIn.dataSize()) {
            size_t numPending = mPendingWeakDerefs.size();
            if (numPending > 0) {
                for (size_t i = 0; i < numPending; i++) {
                    RefBase::weakref_type* refs = mPendingWeakDerefs[i];
                    refs->decWeak(mProcess.get());
                }
                mPendingWeakDerefs.clear();
            }

            // Handle BBinder objects that have already died.
            numPending = mPendingStrongDerefs.size();
            if (numPending > 0) {
                for (size_t i = 0; i < numPending; i++) {
                    BBinder* obj = mPendingStrongDerefs[i];
                    obj->decStrong(mProcess.get());
                }
                mPendingStrongDerefs.clear();
            }
        }

        // Send pending commands and read the next request.
        // now get the next command to be processed, waiting if necessary
        result = talkWithDriver();
        if (result >= NO_ERROR) {
            size_t IN = mIn.dataAvail();
            if (IN < sizeof(int32_t)) continue;
            cmd = mIn.readInt32();
            IF_LOG_COMMANDS() {
                alog << "Processing top-level Command: "
                    << getReturnString(cmd) << endl;
            }
            result = executeCommand(cmd);   // handle the message
        }

        // After executing the command, ensure that the thread is returned to the
        // default cgroup before rejoining the pool. The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace. Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        androidSetThreadSchedulingGroup(mMyThreadId, ANDROID_TGROUP_DEFAULT);

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
                   (void*)pthread_self(), getpid(), (void*)result);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
So the main IPCThreadState thread and the newly created thread are both in talkWithDriver: through joinThreadPool they read the binder device and check whether any requests have arrived.
It follows that the binder device supports multithreaded operation, and the driver must do its own synchronization internally.
Note: the MediaServer process registers four services in total.
Restating this at the conceptual level:
a. Binder is the communication mechanism.
b. Business logic can be built on top of Binder communication.
Binder feels complicated largely because Android, through layer upon layer of wrapping, cleverly welds communication and business logic together, as shown in the figure:
Knowledge point 3: ServiceManager, the manager of all services
1. How ServiceManager works
defaultServiceManager returns a BpServiceManager object, through which requests can be sent to the destination whose handle is 0. According to the IServiceManager family tree, there should be a class derived from BnServiceManager that handles these requests arriving from afar ---- yet no such class exists in the source! There is, however, a program that does BnServiceManager's job: ServiceManager itself, whose code lives in Service_manager.c. Its entry function:
int main(int argc, char **argv)
{
    struct binder_state *bs;
    // BINDER_SERVICE_MANAGER is NULL -- a magic number.
    void *svcmgr = BINDER_SERVICE_MANAGER;

    // (1) open the binder device
    bs = binder_open(128*1024);

    // (2) become the manager, i.e. take handle 0 for itself
    if (binder_become_context_manager(bs)) {
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    // (3) process requests coming from clients
    binder_loop(bs, svcmgr_handler);
    return 0;
}
(1) Opening the binder device
binder_open does two things: a. it opens the binder device; b. it maps memory.
Implementation:
struct binder_state *binder_open(unsigned mapsize)
{
    struct binder_state *bs;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return 0;
    }

    bs->fd = open("/dev/binder", O_RDWR);   // open the binder device
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n", strerror(errno));
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);   // memory mapping
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n", strerror(errno));
        goto fail_map;
    }

    /* TODO: check version */

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return 0;
}
(2) ServiceManager becomes the manager; the implementation is [----> Binder.c]:
int binder_become_context_manager(struct binder_state *bs) { return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0); }
(3) Handling requests from clients: binder_loop, implemented in Binder.c.
/*
  The binder_handler argument is a function pointer. binder_loop reads requests,
  parses them, and finally calls binder_handler to do the actual processing.
*/
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        // A request has arrived; hand it to binder_parse,
        // which will eventually call func to process it.
        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            LOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
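The binder_write helper called above, which pushes BC_ENTER_LOOPER down before the read loop starts, is essentially a write-only wrapper around the same BINDER_WRITE_READ ioctl. The sketch below is reconstructed from the surrounding code in the same Binder.c, so treat the details as approximate:

int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;   /* only the write half is used */
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;

    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    }
    return res;
}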
(4) Processing via func
The func argument that main passes to binder_loop is svcmgr_handler:
int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    unsigned len;
    void *ptr;
    uint32_t strict_policy;

//    LOGI("target=%p code=%d pid=%d uid=%d\n",
//         txn->target, txn->code, txn->sender_pid, txn->sender_euid);

    // svcmgr_handle is the magic number mentioned above (NULL);
    // check whether the target really is ourselves.
    if (txn->target != svcmgr_handle)
        return -1;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s));
        return -1;
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:    // get a service's information; the service is named by a string
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);   // s is the service name as a string
        ptr = do_find_service(bs, s, len);
        if (!ptr)
            break;
        bio_put_ref(reply, ptr);
        return 0;

    case SVC_MGR_ADD_SERVICE:    // corresponds to an addService request
        s = bio_get_string16(msg, &len);
        ptr = bio_get_ref(msg);
        if (do_add_service(bs, s, len, ptr, txn->sender_euid))
            return -1;
        break;

    // Return the names of all services currently registered in the system.
    case SVC_MGR_LIST_SERVICES: {
        unsigned n = bio_get_uint32(msg);

        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        LOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
2. Registering a service
The switch/case above implements the business functions defined in IServiceManager. The key one is do_add_service, which finally carries out the addService request. [----> Service_manager.c]:
int do_add_service(struct binder_state *bs,
                   uint16_t *s, unsigned len,
                   void *ptr, unsigned uid)
{
    struct svcinfo *si;
//    LOGI("add_service('%s',%p) uid=%d\n", str8(s), ptr, uid);

    if (!ptr || (len == 0) || (len > 127))
        return -1;

    // svc_can_register checks the registering process's uid and the service name.
    if (!svc_can_register(uid, s)) {
        LOGE("add_service('%s',%p) uid=%d - PERMISSION DENIED\n",
             str8(s), ptr, uid);
        return -1;
    }

    si = find_svc(s, len);
    if (si) {
        if (si->ptr) {
            LOGE("add_service('%s',%p) uid=%d - ALREADY REGISTERED\n",
                 str8(s), ptr, uid);
            return -1;
        }
        si->ptr = ptr;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            LOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n",
                 str8(s), ptr, uid);
            return -1;
        }
        // ptr is the key piece of data; its type is void*.
        si->ptr = ptr;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = svcinfo_death;   // callback invoked when the service exits
        si->death.ptr = si;
        // svclist is a linked list holding everything currently registered with ServiceManager.
        si->next = svclist;
        svclist = si;
    }

    binder_acquire(bs, ptr);
    // binder_link_to_death arranges cleanup for when the service process exits:
    // whenever a service process dies, ServiceManager is notified by the binder device.
    binder_link_to_death(bs, ptr, &si->death);
    return 0;
}
(1) The svc_can_register function ----> who may register a service
int svc_can_register(unsigned uid, uint16_t *name)
{
    unsigned n;

    // Processes running as root or as the system user have enough privilege to register anything.
    if ((uid == 0) || (uid == AID_SYSTEM))
        return 1;

    for (n = 0; n < sizeof(allowed) / sizeof(allowed[0]); n++)
        if ((uid == allowed[n].uid) && str16eq(name, allowed[n].name))
            return 1;

    return 0;
}
The allowed array governs processes whose privileges fall short of root and system. Its definition:
static struct {
    unsigned uid;
    const char *name;
} allowed[] = {
#ifdef LVMX
    { AID_MEDIA, "com.lifevibes.mx.ipc" },
#endif
    { AID_MEDIA, "media.audio_flinger" },
    { AID_MEDIA, "media.player" },
    { AID_MEDIA, "media.camera" },
    { AID_MEDIA, "media.audio_policy" },
    { AID_NFC,   "nfc" },
    { AID_RADIO, "radio.phone" },
    { AID_RADIO, "radio.sms" },
    { AID_RADIO, "radio.phonesubinfo" },
    { AID_RADIO, "radio.simphonebook" },
    { AID_RADIO, "radio.vt" },
    /* TODO: remove after phone services are updated: */
    { AID_RADIO, "phone" },
    { AID_RADIO, "sip" },
    { AID_RADIO, "isms" },
    { AID_RADIO, "iphonesubinfo" },
    { AID_RADIO, "simphonebook" },
};
So if your Server process runs as neither root nor system, remember to add a matching entry to allowed, as illustrated below.
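For example, allowing the media UID to register one more service would mean adding a line like the following to the table above; the service name here is purely hypothetical.

    { AID_MEDIA, "media.my_new_service" },   /* hypothetical entry: let AID_MEDIA register this name */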
ServiceManager, then, really just stores some information about services. What is the point of its existence?
Its value:
a. ServiceManager manages all of the system's services in one place and can enforce permission control; not every process is allowed to register a service.
b. ServiceManager lets a service be looked up by its string name.
c. For all kinds of reasons, a Server process may die at any time. If every Client had to monitor for that itself, the burden would be enormous. With a central authority, a Client only needs to query ServiceManager to keep track of what is happening and obtain up-to-date information. That is probably ServiceManager's biggest reason for existing.
Next, MediaPlayerService and one of its Clients show how request data travels from the communication layer to the business layer and gets processed there. Before a Client can use a Service, it must first deal with ServiceManager and obtain the Service's information through getService. An example is getMediaPlayerService in IMediaDeathNotifier.cpp:
/*static*/ const sp<IMediaPlayerService>&
IMediaDeathNotifier::getMediaPlayerService()
{
    LOGV("getMediaPlayerService");
    Mutex::Autolock _l(sServiceLock);
    if (sMediaPlayerService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            // Ask ServiceManager for the service's information; this returns a BpBinder.
            binder = sm->getService(String16("media.player"));
            if (binder != 0) {
                break;
            }
            LOGW("Media player service not published, waiting...");
            // If the service has not yet been registered with ServiceManager,
            // keep waiting until it is.
            usleep(500000); // 0.5 s
        } while(true);

        if (sDeathNotifier == NULL) {
            sDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(sDeathNotifier);
        // interface_cast turns this binder into a BpMediaPlayerService;
        // the handle inside binder necessarily identifies the destination, MediaPlayerService.
        sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
    }
    LOGE_IF(sMediaPlayerService == 0, "no media player service!?");
    return sMediaPlayerService;
}
With a BpMediaPlayerService in hand, the Client can use any business function that IMediaPlayerService provides.
Each of those calls packs its request data and sends it to the Binder driver, and the handle held by the BpBinder identifies the processing end on the other side. The flow is:
a. the communication layer receives the request;
b. the request is handed to the business layer for processing.
Let's walk through this process; a client-side usage sketch follows.
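Putting the client side together, a caller that wants MediaPlayerService typically does something like the sketch below. It simply mirrors the lookup in getMediaPlayerService above and the createMediaRecorder(pid) call dispatched in BnMediaPlayerService::onTransact later; header names are assumptions and error handling is omitted.

#include <binder/IServiceManager.h>
#include <media/IMediaPlayerService.h>
#include <unistd.h>   // getpid

using namespace android;

void exampleClientCall()
{
    // Look up the "media.player" service and wrap it in the business proxy.
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("media.player"));
    sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(binder);

    // This call runs in the client process, but travels through
    // BpMediaPlayerService -> BpBinder(handle) -> IPCThreadState::transact -> binder driver,
    // and lands in MediaServer's BnMediaPlayerService::onTransact (analyzed next).
    sp<IMediaRecorder> recorder = service->createMediaRecorder(getpid());
}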
(5) MediaPlayerService lives inside the MediaServer process, which has two threads sitting in talkWithDriver. Suppose one of them receives the request; it will eventually process it through a call to executeCommand.
The code:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch (cmd) {
    case BR_ERROR:
        result = mIn.readInt32();
        break;
    …………
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            LOG_ASSERT(result == NO_ERROR, "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            …………
            Parcel reply;
            …………
            if (tr.target.ptr) {
                // Here b is MediaPlayerService itself, so we land directly in the business layer.
                sp<BBinder> b((BBinder*)tr.cookie);
                // (1) look at this transact call
                const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            } else {
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            }

            //LOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
        }
        break;
    …………
    }
MediaPlayerService inherits from BnMediaPlayerService, and BnMediaPlayerService in turn inherits from BBinder and IMediaPlayerService. With that in mind, look at point (1) marked in the code above.
(1) The transact called on b is inherited from BBinder:
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            // call the subclass's onTransact
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}
Here is the onTransact implemented by BnMediaPlayerService [----> IMediaPlayerService.cpp]:
status_t BnMediaPlayerService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        …………
        case CREATE_MEDIA_RECORDER: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            pid_t pid = data.readInt32();   // read the corresponding argument out of the request data
            // the subclass must implement createMediaRecorder
            sp<IMediaRecorder> recorder = createMediaRecorder(pid);
            reply->writeStrongBinder(recorder->asBinder());
            return NO_ERROR;
        } break;
        case CREATE_METADATA_RETRIEVER: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            pid_t pid = data.readInt32();
            // the subclass must implement createMetadataRetriever
            sp<IMediaMetadataRetriever> retriever = createMetadataRetriever(pid);
            reply->writeStrongBinder(retriever->asBinder());
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}
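For comparison with the Bn side above, the matching proxy-side business function in IMediaPlayerService.cpp looks roughly like the sketch below; it is the mirror image of the CREATE_MEDIA_RECORDER branch (writeInt32 pairs with readInt32, readStrongBinder pairs with writeStrongBinder). Treat it as an approximation, since the real code differs in detail across versions.

virtual sp<IMediaRecorder> createMediaRecorder(pid_t pid)
{
    Parcel data, reply;
    data.writeInterfaceToken(IMediaPlayerService::getInterfaceDescriptor());
    data.writeInt32(pid);                                  // matched by data.readInt32() in onTransact
    remote()->transact(CREATE_MEDIA_RECORDER, data, &reply);
    return interface_cast<IMediaRecorder>(reply.readStrongBinder());
}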
Both functions are implemented in MediaPlayerService.cpp:
sp<IMediaRecorder> MediaPlayerService::createMediaRecorder(pid_t pid)
{
    sp<MediaRecorderClient> recorder = new MediaRecorderClient(this, pid);
    wp<MediaRecorderClient> w = recorder;
    Mutex::Autolock lock(mLock);
    mMediaRecorderClients.add(w);
    LOGV("Create new media recorder client from pid %d", pid);
    return recorder;
}

sp<IMediaMetadataRetriever> MediaPlayerService::createMetadataRetriever(pid_t pid)
{
    sp<MetadataRetrieverClient> retriever = new MetadataRetrieverClient(pid);
    LOGV("Create new media retriever from pid %d", pid);
    return retriever;
}
over~