Please credit the source and the author's contact information when reprinting.
Article source: http://www.limodev.cn/blog
Author contact: 李先静 <xianjimli at hotmail dot com>
o The IBinder interface
The IBinder interface is an abstraction of objects that can cross process boundaries. An ordinary object is accessible only within the current process; if you want an object to be accessible from other processes, it must implement the IBinder interface. An IBinder may refer to a local object or to a remote one, and the caller does not need to care which it is.
transact is one of the more important functions in the IBinder interface. Its prototype is:
virtual status_t transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0) = 0;
The basic IPC model in android follows a client/server (C/S) architecture.
If the IBinder refers to a client-side proxy, transact merely forwards the request to the server; the transact of the server-side IBinder performs the actual service.
o The client side
BpBinder is the proxy, in the current process, for a remote object, and it implements the IBinder interface. Its transact function is implemented as follows:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
Parameters (a minimal usage sketch follows the list):
code is the ID of the request.
data holds the request arguments.
reply receives the returned result.
flags carries extra options such as FLAG_ONEWAY; it is usually 0.
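To make the parameters concrete, here is a minimal sketch of driving transact by hand on some sp<IBinder>. The request code MY_REQUEST_CODE and the int32 argument/result layout are invented for illustration; real interfaces hide this marshalling behind proxy classes such as BpServiceManager below.

// Sketch only: MY_REQUEST_CODE and the parcel layout are hypothetical.
enum { MY_REQUEST_CODE = IBinder::FIRST_CALL_TRANSACTION };

status_t callRemote(const sp<IBinder>& binder, int32_t arg, int32_t* out)
{
    Parcel data, reply;
    data.writeInt32(arg);                                   // request arguments
    status_t err = binder->transact(MY_REQUEST_CODE, data, &reply, 0 /* flags */);
    if (err == NO_ERROR && out != NULL) *out = reply.readInt32();  // returned result
    return err;
}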
transact simply calls IPCThreadState::self()->transact(). In IPCThreadState::transact:
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                LOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
Here transact sends the request to the server through the kernel module; after the server has handled it, the result travels back along the same path to the caller. This also shows that a request is synchronous: the caller blocks until the result comes back (unless FLAG_ONEWAY is used).
With a thin wrapper on top of BpBinder we obtain the same interface as the service object, so the caller does not need to care whether the object it is calling is remote or local. Take ServiceManager as an example:
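As a contrast, here is a minimal sketch of a fire-and-forget call; the request code MY_ONEWAY_CODE is hypothetical. With FLAG_ONEWAY set, the TF_ONE_WAY branch above is taken and transact returns without waiting for a reply.

// Sketch only: MY_ONEWAY_CODE is made up for illustration.
void notifyRemote(const sp<IBinder>& binder, int32_t event)
{
    Parcel data;
    data.writeInt32(event);
    // No reply Parcel and FLAG_ONEWAY: the call does not wait for the server.
    binder->transact(MY_ONEWAY_CODE, data, NULL, IBinder::FLAG_ONEWAY);
}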
(frameworks/base/libs/utils/IServiceManager.cpp)
class BpServiceManager : public BpInterface<IServiceManager>
{
public:
    BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }
    ...
    virtual status_t addService(const String16& name, const sp<IBinder>& service)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        data.writeStrongBinder(service);
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readInt32() : err;
    }
    ...
};
BpServiceManager implements both the IServiceManager and the IBinder interface, so a caller can treat a BpServiceManager object either as an IServiceManager or as an IBinder. When the caller uses it as an IServiceManager, every request is just a wrapper around BpBinder::transact. This wrapping is what frees the caller from caring whether the IServiceManager object is local or remote.
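The same pattern applies to any interface. As a sketch, a hypothetical IHello interface (the interface, its SAY_HELLO transaction code and its sayHello method are all invented for illustration) would get a proxy along these lines:

// Sketch only: IHello, SAY_HELLO and sayHello() are hypothetical.
class BpHello : public BpInterface<IHello> {
public:
    BpHello(const sp<IBinder>& impl) : BpInterface<IHello>(impl) { }

    virtual status_t sayHello(const String16& name)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IHello::getInterfaceDescriptor());
        data.writeString16(name);
        // remote() is the underlying BpBinder; the request code belongs to
        // the (hypothetical) IHello protocol.
        status_t err = remote()->transact(SAY_HELLO, data, &reply);
        return err == NO_ERROR ? reply.readInt32() : err;
    }
};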
Clients create the BpServiceManager object through the defaultServiceManager function:
(frameworks/base/libs/utils/IServiceManager.cpp)
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }

    return gDefaultServiceManager;
}
It first creates a Binder object via ProcessState::self()->getContextObject(NULL), then wraps that Binder object into an IServiceManager object through interface_cast together with IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager"). In principle this is equivalent to creating a BpServiceManager object.
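interface_cast itself is a thin template. Roughly (a simplified sketch of what is in IInterface.h, not a verbatim copy), it just delegates to the asInterface function generated by IMPLEMENT_META_INTERFACE, and asInterface builds the Bp object around the IBinder:

// Simplified sketch of interface_cast (see IInterface.h for the real code):
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    // asInterface() is generated by IMPLEMENT_META_INTERFACE; for a remote
    // binder it returns new Bp##INTERFACE(obj), e.g. a BpServiceManager.
    return INTERFACE::asInterface(obj);
}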
ProcessState::self()->getContextObject calls ProcessState::getStrongProxyForHandle to create the proxy object:
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
If handle is NULL (i.e. 0), it refers by default to the context_manager object, and the context_manager is in fact the ServiceManager.
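In other words (a two-line sketch of the equivalence described above):

// Handle 0 is the context manager, i.e. the ServiceManager, so this proxy
// talks to the ServiceManager process:
sp<IBinder> sm = ProcessState::self()->getContextObject(NULL);  // BpBinder(0)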
o The server side
The server side must implement the IBinder interface as well. The BBinder class provides a partial default implementation of IBinder; its transact is implemented as follows:
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}
The PING_TRANSACTION request checks whether the object is still alive; here the return value of pingBinder is simply handed back to the caller. All other requests are passed to onTransact. onTransact is a protected virtual function declared in BBinder, and subclasses are expected to implement it. For example, the implementation in CameraService looks like this:
status_t CameraService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // permission checks...
    switch (code) {
        case BnCameraService::CONNECT:
            IPCThreadState* ipc = IPCThreadState::self();
            const int pid = ipc->getCallingPid();
            const int self_pid = getpid();
            if (pid != self_pid) {
                // we're called from a different process, do the real check
                if (!checkCallingPermission(
                        String16("android.permission.CAMERA"))) {
                    const int uid = ipc->getCallingUid();
                    LOGE("Permission Denial: "
                         "can't use the camera pid=%d, uid=%d", pid, uid);
                    return PERMISSION_DENIED;
                }
            }
            break;
    }

    status_t err = BnCameraService::onTransact(code, data, reply, flags);

    LOGD("+++ onTransact err %d code %d", err, code);

    if (err == UNKNOWN_TRANSACTION || err == PERMISSION_DENIED) {
        // the 'service' command interrogates this binder for its name, and then supplies it
        // even for the debugging commands.  that means we need to check for it here, using
        // ISurfaceComposer (since we delegated the INTERFACE_TRANSACTION handling to
        // BnSurfaceComposer before falling through to this code).

        LOGD("+++ onTransact code %d", code);

        CHECK_INTERFACE(ICameraService, data, reply);

        switch (code) {
            case 1000: {
                if (gWeakHeap != 0) {
                    sp<IMemoryHeap> h = gWeakHeap.promote();
                    IMemoryHeap *p = gWeakHeap.unsafe_get();
                    LOGD("CHECKING WEAK REFERENCE %p (%p)", h.get(), p);
                    if (h != 0) h->printRefs();
                    bool attempt_to_delete = data.readInt32() == 1;
                    if (attempt_to_delete) {
                        // NOT SAFE!
                        LOGD("DELETING WEAK REFERENCE %p (%p)", h.get(), p);
                        if (p) delete p;
                    }
                    return NO_ERROR;
                }
            } break;
            default:
                break;
        }
    }
    return err;
}
As you can see, onTransact on the server side is a request-dispatch function: it handles each request according to its request code (code).
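To make the dispatching pattern explicit, here is a minimal sketch of the server-side counterpart of the hypothetical IHello proxy above (again, IHello, SAY_HELLO and sayHello are invented; CHECK_INTERFACE is the same macro used in the CameraService code):

// Sketch only: IHello, SAY_HELLO and sayHello() are hypothetical.
class BnHello : public BnInterface<IHello> {
public:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags = 0)
    {
        switch (code) {
            case SAY_HELLO: {
                CHECK_INTERFACE(IHello, data, reply);
                String16 name = data.readString16();   // unmarshal the argument
                reply->writeInt32(sayHello(name));     // marshal the result
                return NO_ERROR;
            }
            default:
                // unknown codes fall back to the default implementation
                return BBinder::onTransact(code, data, reply, flags);
        }
    }
};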
o The message loop
The server (any process can act as a server) has a thread that listens for requests from clients and handles them in a loop.
If the requests are to be handled in the main thread, simply call the following function:
IPCThreadState::self()->joinThreadPool(true);
If you want to handle requests on a thread other than the main thread, do it like this:
sp<ProcessState> proc = ProcessState::self();
if (proc->supportsProcesses()) {
    LOGV("App process: starting thread pool.\n");
    proc->startThreadPool();
}
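Putting the pieces together, a native service process typically registers its service and then enters the message loop. The sketch below follows the usual pattern of the native system servers; the HelloService class and the service name "hello" are hypothetical.

// Sketch of a typical native server main(), assuming a hypothetical
// HelloService (a BnHello subclass) registered under the name "hello".
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    sm->addService(String16("hello"), new HelloService());
    ProcessState::self()->startThreadPool();      // spawn binder worker thread(s)
    IPCThreadState::self()->joinThreadPool();     // and serve from the main thread too
    return 0;
}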
How startThreadPool works:
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        int32_t s = android_atomic_add(1, &mThreadPoolSeq);
        char buf[32];
        sprintf(buf, "Binder Thread #%d", s);
        LOGV("Spawning new pooled thread, name=%s\n", buf);
        sp<Thread> t = new PoolThread(isMain);
        t->run(buf);
    }
}
This creates a PoolThread object, which in effect creates a thread. Every thread class must implement the threadLoop virtual function; PoolThread's threadLoop is implemented as follows:
virtual bool threadLoop()
{
    IPCThreadState::self()->joinThreadPool(mIsMain);
    return false;
}
In short, the code above creates a thread and calls IPCThreadState::self()->joinThreadPool in that thread.
Now let's look at the implementation of joinThreadPool:
do {
    ...
    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) continue;

        cmd = mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        result = executeCommand(cmd);
    }
    ...
} while (...);
Inside its loop the function repeatedly performs the following actions:
- talkWithDriver reads requests and writes back results through ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) (see the sketch after this list).
- executeCommand executes the corresponding request.
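As a rough sketch of what that ioctl round trip looks like (simplified from what talkWithDriver does; the helper name, its buffer parameters and the header path are assumptions, and error handling is omitted):

#include <sys/ioctl.h>
#include <linux/binder.h>   // struct binder_write_read, BINDER_WRITE_READ (path may vary)

// Sketch only: one BINDER_WRITE_READ round trip. 'fd' is an open /dev/binder
// descriptor, writeBuf holds outgoing BC_* commands, readBuf receives the
// incoming BR_* commands.
static int binder_write_read_once(int fd,
                                  void *writeBuf, size_t writeSize,
                                  void *readBuf, size_t readCapacity)
{
    struct binder_write_read bwr;
    bwr.write_buffer   = (unsigned long)writeBuf;   // BC_* stream to the driver
    bwr.write_size     = writeSize;
    bwr.write_consumed = 0;
    bwr.read_buffer    = (unsigned long)readBuf;    // BR_* stream from the driver
    bwr.read_size      = readCapacity;
    bwr.read_consumed  = 0;
    return ioctl(fd, BINDER_WRITE_READ, &bwr);
}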
In IPCThreadState::executeCommand(int32_t cmd):
- Requests that manage object lifetime, such as BR_ACQUIRE/BR_RELEASE, are handled directly.
- A BR_TRANSACTION request calls the transact function of the requested object.
The actual object is invoked as follows:
if (tr.target.ptr) {
    sp<BBinder> b((BBinder*)tr.cookie);
    const status_t error = b->transact(tr.code, buffer, &reply, 0);
    if (error < NO_ERROR) reply.setError(error);
} else {
    const status_t error = the_context_object->transact(tr.code, buffer, &reply, 0);
    if (error < NO_ERROR) reply.setError(error);
}
If tr.target.ptr is not NULL, tr.cookie is cast to a Binder object and its transact function is called. If there is no target object, the transact function of the_context_object is called. Curiously, nothing ever initializes the_context_object, so it is a null pointer. The reason this works is that requests for the context_mgr are delivered to the ServiceManager instead, so execution never reaches the else branch.
o The kernel module
android uses a kernel module, binder, to relay messages between processes. The module's source code lives in binder.c; it is a character driver that exchanges data with user-space processes mainly through binder_ioctl. The BINDER_WRITE_READ command is used to read and write data; the data packet carries a cmd field that distinguishes the different requests:
- binder_thread_write is used to send requests or return results.
- binder_thread_read is used to read results.
binder_thread_write calls binder_transaction to relay requests and return results. binder_transaction works as follows.
Handling a request:
- Find the process that hosts the object via the object's handle; if the handle is 0, the object is taken to be the context_mgr, and the request is delivered to the context_mgr's process.
- Put every binder object carried in the request into an RB (red-black) tree.
- Put the request on the target process's queue and wait for the target process to read it.
How does a process become the context_mgr? The kernel module provides the BINDER_SET_CONTEXT_MGR ioctl:
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ...
    case BINDER_SET_CONTEXT_MGR:
        if (binder_context_mgr_node != NULL) {
            printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR already set\n");
            ret = -EBUSY;
            goto err;
        }
        if (binder_context_mgr_uid != -1) {
            if (binder_context_mgr_uid != current->euid) {
                printk(KERN_ERR "binder: BINDER_SET_"
                       "CONTEXT_MGR bad uid %d != %d\n",
                       current->euid, binder_context_mgr_uid);
                ret = -EPERM;
                goto err;
            }
        } else
            binder_context_mgr_uid = current->euid;
        binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
        if (binder_context_mgr_node == NULL) {
            ret = -ENOMEM;
            goto err;
        }
        binder_context_mgr_node->local_weak_refs++;
        binder_context_mgr_node->local_strong_refs++;
        binder_context_mgr_node->has_strong_ref = 1;
        binder_context_mgr_node->has_weak_ref = 1;
        break;
ServiceManager (frameworks/base/cmds/servicemanager) becomes the context_mgr process in the following way:
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}

int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);

    if (binder_become_context_manager(bs)) {
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
o How to obtain a service object's handle
- A service provider obtains the ServiceManager object through defaultServiceManager and then calls addService to register with the service manager.
- A service user obtains the ServiceManager object through defaultServiceManager and then calls getService to look up the handle of the service object by its name (see the sketch after this list).
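A minimal sketch of that lookup from the client side; the service name "media.player" and the IMediaPlayerService interface are used purely as an example of an existing system service:

// Sketch only: look up a service by name and turn the returned IBinder
// (which wraps the handle) into a typed interface.
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.player"));
sp<IMediaPlayerService> player = interface_cast<IMediaPlayerService>(binder);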
o How the service's process is found from the service object's handle
Handle 0 denotes the service manager, and getService can look up the handles of system services. But the handle only identifies the service object, so how does the kernel module find the process that hosts the service from a handle?
- For the ServiceManager: the ServiceManager calls binder_become_context_manager to make itself the context_mgr, so all requests with handle 0 are forwarded to the ServiceManager.
- For system services and application listeners: on their first request to the kernel module (for example, when addService is called), the kernel module records the mapping between the service object and its process in an RB tree:
off_end = (void *)offp + tr->offsets_size;
for (; offp < off_end; offp++) {
    struct flat_binder_object *fp;
    if (*offp > t->buffer->data_size - sizeof(*fp)) {
        binder_user_error("binder: %d:%d got transaction with "
            "invalid offset, %d\n",
            proc->pid, thread->pid, *offp);
        return_error = BR_FAILED_REPLY;
        goto err_bad_offset;
    }
    fp = (struct flat_binder_object *)(t->buffer->data + *offp);
    switch (fp->type) {
    case BINDER_TYPE_BINDER:
    case BINDER_TYPE_WEAK_BINDER: {
        struct binder_ref *ref;
        struct binder_node *node = binder_get_node(proc, fp->binder);
        if (node == NULL) {
            node = binder_new_node(proc, fp->binder, fp->cookie);
            if (node == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_new_node_failed;
            }
            node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
            node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
        }
        if (fp->cookie != node->cookie) {
            binder_user_error("binder: %d:%d sending u%p "
                "node %d, cookie mismatch %p != %p\n",
                proc->pid, thread->pid,
                fp->binder, node->debug_id,
                fp->cookie, node->cookie);
            goto err_binder_get_ref_for_node_failed;
        }
        ref = binder_get_ref_for_node(target_proc, node);
        if (ref == NULL) {
            return_error = BR_FAILED_REPLY;
            goto err_binder_get_ref_for_node_failed;
        }
        if (fp->type == BINDER_TYPE_BINDER)
            fp->type = BINDER_TYPE_HANDLE;
        else
            fp->type = BINDER_TYPE_WEAK_HANDLE;
        fp->handle = ref->desc;
        binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo);
        if (binder_debug_mask & BINDER_DEBUG_TRANSACTION)
            printk(KERN_INFO "        node %d u%p -> ref %d desc %d\n",
                   node->debug_id, node->ptr, ref->debug_id, ref->desc);
    } break;
- When a service is requested, the kernel first finds the target process via the handle and then puts the request on the service process's queue.
o Calling Java from C
So far we have analyzed the C++ code path. For Java code, calling into C is simply done through JNI. But reading requests from the kernel happens in C++ code (executeCommand), so how does the C++ code call services that are implemented in Java?
The JavaBBinder created in android_os_Binder_init wraps the Java Binder object.
JavaBBinder::onTransact calls the execTransact function on the Java side:
jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact,
    code, (int32_t)&data, (int32_t)reply, flags);
jthrowable excep = env->ExceptionOccurred();
if (excep) {
    report_exception(env, excep,
        "*** Uncaught remote exception!  "
        "(Exceptions are not yet supported across processes.)");
    res = JNI_FALSE;

    /* clean up JNI local ref -- we don't return to Java code */
    env->DeleteLocalRef(excep);
}
o Broadcast messages
binder itself does not provide broadcast messages, but broadcasts can be implemented on top of the ActivityManagerService service.
(frameworks/base/core/java/android/app/ActivityManagerNative.java)
To receive broadcast messages, implement a BroadcastReceiver and register it by calling ActivityManagerProxy::registerReceiver.
A broadcast is triggered by calling ActivityManagerProxy::broadcastIntent. (Applications do not call it directly; they call the Context wrapper around it.)