A Few Questions from Studying Android Binder
I have spent quite some time recently studying Android's Binder IPC. Along the way I read many Android books and blog posts about it and built a fairly clear picture of how Android's cross-process communication works, yet I always felt I could not quite tie the pieces together. Only after going through the source code side by side did things finally click.
I will not introduce Binder in detail here; I only record the questions I ran into. For the full picture, see the two excellent articles listed in the references.
1. How do the Server and Client obtain the Service Manager in Binder communication?
Luo Shengyang (罗升阳)'s article 《浅谈Android系统进程间通信(IPC)机制Binder中的Server和Client获得Service Manager接口之路》 describes the flow for obtaining the Service Manager interface in detail. The point I want to stress here is that a client or server obtaining the Service Manager interface merely executes, inside its own process, gDefaultServiceManager = new BpServiceManager(new BpBinder(0)), where 0 is the binder handle. This step involves neither the Binder driver nor the Service Manager at all. All subsequent communication with the Binder driver then uses this handle 0: when the driver sees that a transaction's target handle is 0, it automatically finds the previously registered Service Manager binder node and delivers the transaction to it.
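For reference, the code that produces this object is defaultServiceManager() in IServiceManager.cpp; in older releases it looks roughly like the sketch below (paths and minor details vary by Android version). ProcessState::self()->getContextObject(NULL) returns new BpBinder(0), and interface_cast<IServiceManager>() wraps it in a BpServiceManager:

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            // getContextObject(NULL) hands back a BpBinder with handle 0;
            // no ioctl to the driver and no talk to the Service Manager yet.
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }

    return gDefaultServiceManager;
}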
2. How is a binder's handle (number) generated, and how is the handle used to find the corresponding process and communicate with it?
(1) How a client communicates with the Service Manager using handle 0.
Let us look at how a client uses the binder with handle 0 to talk to the Service Manager, taking add_service as the example.
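As a reminder of where this transaction starts, BpServiceManager::addService in IServiceManager.cpp looks roughly like this in older releases (the exact return handling differs between versions, so treat it as a sketch):

virtual status_t addService(const String16& name, const sp<IBinder>& service)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);   // flattens the service binder into the parcel
    // remote() is the BpBinder(0) obtained above, so this lands in BpBinder::transact
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}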
Let us first look at BpBinder::transact:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
Note that mHandle here is 0 and code is ADD_SERVICE_TRANSACTION. ADD_SERVICE_TRANSACTION was passed in as a parameter from above, but why is mHandle 0? Because this BpBinder represents the Service Manager remote interface, whose handle value is always 0. So how is this 0 stored and carried down into the Binder driver?
Stepping into IPCThreadState::transact, let us see what it does:
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    ......
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    ......
    if (reply) {
        err = waitForResponse(reply);
    } else {
        Parcel fakeReply;
        err = waitForResponse(&fakeReply);
    }
    ......
}
The handle is then passed on to writeTransactionData; let us keep going:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    .......
}
As you can see, the handle (still 0) is stored in target.handle of the binder_transaction_data structure; target is a union, see the definition of binder_transaction_data for details.
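For convenience, here is a trimmed version of that definition as it appeared in the older binder.h the driver code below comes from (newer kernels switch to fixed-width types such as binder_uintptr_t, so take this as a sketch):

struct binder_transaction_data {
    /* Identifies the target of the transaction. */
    union {
        size_t  handle;  /* target descriptor of a command transaction (set to 0 here) */
        void    *ptr;    /* target object pointer of a return transaction */
    } target;
    void         *cookie;       /* target object cookie */
    unsigned int code;          /* transaction command, e.g. ADD_SERVICE_TRANSACTION */

    unsigned int flags;
    pid_t        sender_pid;
    uid_t        sender_euid;
    size_t       data_size;     /* number of bytes of data */
    size_t       offsets_size;  /* number of bytes of offsets */

    union {
        struct {
            const void *buffer;   /* transaction data */
            const void *offsets;  /* offsets of flat_binder_object structs in buffer */
        } ptr;
        uint8_t buf[8];
    } data;
};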
Back in IPCThreadState::transact, execution continues into waitForResponse, which talks to the Binder driver through talkWithDriver(). The relevant code:
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ......
#if defined(HAVE_ANDROID_OS)
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
        err = NO_ERROR;
    else
        err = -errno;
#else
    err = INVALID_OPERATION;
#endif
    ......
}
The ioctl operates directly on the Binder driver file, whose descriptor is mProcess->mDriverFD (opened in ProcessState, once per process). bwr holds the data we hand to the driver, including our handle 0.
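bwr is a struct binder_write_read; in the driver header of the same era it looks roughly like this (again, newer kernels use fixed-width types):

struct binder_write_read {
    signed long   write_size;     /* bytes to write */
    signed long   write_consumed; /* bytes consumed by the driver */
    unsigned long write_buffer;   /* user-space address of the write data
                                     (BC_TRANSACTION + binder_transaction_data) */
    signed long   read_size;      /* bytes to read */
    signed long   read_consumed;  /* bytes consumed by the driver */
    unsigned long read_buffer;    /* user-space address of the read buffer */
};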
Moving into the driver's handling of this ioctl, we only look at the BINDER_WRITE_READ command:
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ......
    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
            printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
                   proc->pid, thread->pid, bwr.write_size, bwr.write_buffer,
                   bwr.read_size, bwr.read_buffer);
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer,
                                      bwr.write_size, &bwr.write_consumed);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (bwr.read_size > 0) {
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer,
                                     bwr.read_size, &bwr.read_consumed,
                                     filp->f_flags & O_NONBLOCK);
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
            printk(KERN_INFO "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
                   proc->pid, thread->pid, bwr.write_consumed, bwr.write_size,
                   bwr.read_consumed, bwr.read_size);
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    ......
    }
    ret = 0;
err:
    ......
    return ret;
}
Here our bwr.write_size > 0, so execution continues into binder_thread_write; we only follow the BC_TRANSACTION branch:
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                        void __user *buffer, int size, signed long *consumed)
{
    uint32_t cmd;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
            binder_stats.bc[_IOC_NR(cmd)]++;
            proc->stats.bc[_IOC_NR(cmd)]++;
            thread->stats.bc[_IOC_NR(cmd)]++;
        }
        switch (cmd) {
        .....
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        ......
        }
        *consumed = ptr - buffer;
    }
    return 0;
}
It first copies the transaction parameters passed in from user space into the local variable struct binder_transaction_data tr, then calls binder_transaction for further processing. Irrelevant code is omitted:
static void binder_transaction(struct binder_proc *proc,
                               struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    size_t *offp, *off_end;
    struct binder_proc *target_proc;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct list_head *target_list;
    wait_queue_head_t *target_wait;
    struct binder_transaction *in_reply_to = NULL;
    struct binder_transaction_log_entry *e;
    uint32_t return_error;
    ......
    if (reply) {
        ......
    } else {
        if (tr->target.handle) {
            ......
        } else {
            target_node = binder_context_mgr_node;
            if (target_node == NULL) {
                return_error = BR_DEAD_REPLY;
                goto err_no_context_mgr_node;
            }
        }
        ......
        target_proc = target_node->proc;
        if (target_proc == NULL) {
            return_error = BR_DEAD_REPLY;
            goto err_dead_binder;
        }
        ......
    }
    if (target_thread) {
        ......
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    ......
    /* TODO: reuse incoming transaction for reply */
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    if (t == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_t_failed;
    }
    ......
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    if (tcomplete == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_tcomplete_failed;
    }
    ......
    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL;
    t->sender_euid = proc->tsk->cred->euid;
    t->to_proc = target_proc;
    t->to_thread = target_thread;
    t->code = tr->code;
    t->flags = tr->flags;
    t->priority = task_nice(current);
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    if (t->buffer == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_binder_alloc_buf_failed;
    }
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);

    offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

    if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
        ......
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }
    if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
        ......
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }
    ......
    off_end = (void *)offp + tr->offsets_size;
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;
        ......
        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        switch (fp->type) {
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            struct binder_ref *ref;
            struct binder_node *node = binder_get_node(proc, fp->binder);
            if (node == NULL) {
                node = binder_new_node(proc, fp->binder, fp->cookie);
                if (node == NULL) {
                    return_error = BR_FAILED_REPLY;
                    goto err_binder_new_node_failed;
                }
                node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
                node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
            }
            if (fp->cookie != node->cookie) {
                ......
                goto err_binder_get_ref_for_node_failed;
            }
            ref = binder_get_ref_for_node(target_proc, node);
            if (ref == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
            if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
            else
                fp->type = BINDER_TYPE_WEAK_HANDLE;
            fp->handle = ref->desc;
            binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo);
            ......
        } break;
        ......
        }
    }
    if (reply) {
        ......
    } else if (!(t->flags & TF_ONE_WAY)) {
        BUG_ON(t->buffer->async_transaction != 0);
        t->need_reply = 1;
        t->from_parent = thread->transaction_stack;
        thread->transaction_stack = t;
    } else {
        ......
    }
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
    ......
}
Notice the check if (tr->target.handle). Remember what is stored there? Right, it is the handle from the Service Manager's BpBinder, which is 0! So the else branch runs: target_node = binder_context_mgr_node;. binder_context_mgr_node was created when the Service Manager declared itself the context manager (a sketch of how that happens follows below). Later in this function target_proc, target_thread, target_node, target_list and target_wait are all determined; their values end up as:
target_node = binder_context_mgr_node;
target_proc = target_node->proc;
target_list = &target_proc->todo;
target_wait = &target_proc->wait;
This is how the process behind the binder with handle 0 is found; waking it up is all that remains before communicating with it. I will not go into further detail here.
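As for where binder_context_mgr_node comes from: when the Service Manager starts, it calls ioctl(fd, BINDER_SET_CONTEXT_MGR, 0) (binder_become_context_manager in the servicemanager's binder.c), and the driver's binder_ioctl handles it roughly as follows (trimmed excerpt from the same era of binder.c; the uid checks are omitted):

case BINDER_SET_CONTEXT_MGR:
    if (binder_context_mgr_node != NULL) {
        ret = -EBUSY;
        goto err;
    }
    ......  /* uid checks omitted */
    /* This is the node every transaction with target handle 0 is routed to. */
    binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
    if (binder_context_mgr_node == NULL) {
        ret = -ENOMEM;
        goto err;
    }
    ......
    break;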
(2) How other binder handles are generated and how the corresponding process is looked up.
Continuing with the add_service flow: inside the driver's binder_transaction, if the transmitted data contains a new binder (for example ActivityManagerService or MediaPlayerService) that the driver has not stored yet (binder_get_node returns NULL), a new binder_node is created in proc via binder_new_node, so that it can be used directly from then on. The code:
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
    struct binder_ref *ref;
    struct binder_node *node = binder_get_node(proc, fp->binder);
    if (node == NULL) {
        node = binder_new_node(proc, fp->binder, fp->cookie);
        if (node == NULL) {
            return_error = BR_FAILED_REPLY;
            goto err_binder_new_node_failed;
        }
        node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
        node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
    }
Here fp->binder is parsed out of the data being transmitted, and proc is the process making this add_service call, i.e. the process the binder lives in.
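Where does fp->binder get its value? When addService called writeStrongBinder, Parcel flattened the local BBinder into a flat_binder_object; flatten_binder in Parcel.cpp handles the local-binder case roughly like this (excerpt trimmed and lightly paraphrased from an older release):

status_t flatten_binder(const sp<ProcessState>& proc,
                        const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (local != NULL) {
            // A binder living in this process: its addresses are sent,
            // and become fp->binder / fp->cookie in binder_transaction().
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = local->getWeakRefs();
            obj.cookie = local;
        } else {
            ......  // a remote binder is sent as BINDER_TYPE_HANDLE instead
        }
    }
    ......
    return finish_flatten_binder(binder, obj, out);
}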
Continuing further:
ref = binder_get_ref_for_node(target_proc, node);
if (ref == NULL) {
    return_error = BR_FAILED_REPLY;
    goto err_binder_get_ref_for_node_failed;
}
if (fp->type == BINDER_TYPE_BINDER)
    fp->type = BINDER_TYPE_HANDLE;
else
    fp->type = BINDER_TYPE_WEAK_HANDLE;
fp->handle = ref->desc;
Here the binder_node *node created above is used to look up a reference in target_proc (we are doing add_service towards the Service Manager, so target_proc is the Service Manager), and the reference's ref->desc is saved into fp->handle. binder_get_ref_for_node does two things. First, if no reference to this binder exists yet, it creates one in target_proc's red-black tree of binder references and assigns it an incrementally generated desc value; that value is the binder's handle within target_proc (the Service Manager), and every later query for the service and every use of the service goes through this handle. Second, if target_proc's reference tree already holds this binder, the existing reference, which already carries the handle, is simply returned (see the excerpt below).
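The part of binder_get_ref_for_node that generates the number looks roughly like this in the older driver (the search and insertion in the red-black trees is trimmed away):

static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
                                                  struct binder_node *node)
{
    struct rb_node *n;
    struct binder_ref *ref, *new_ref;

    ......  /* search proc->refs_by_node; if a ref for this node exists, return it */

    new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
    if (new_ref == NULL)
        return NULL;
    ......
    new_ref->proc = proc;
    new_ref->node = node;
    ......
    /* Handle 0 is reserved for the context manager (Service Manager);
     * any other node gets the smallest descriptor not yet used in proc,
     * found by walking refs_by_desc in ascending order. */
    new_ref->desc = (node == binder_context_mgr_node) ? 0 : 1;
    for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
        ref = rb_entry(n, struct binder_ref, rb_node_desc);
        if (ref->desc > new_ref->desc)
            break;
        new_ref->desc = ref->desc + 1;
    }
    ......  /* insert new_ref into proc->refs_by_desc and the node's ref list */
    return new_ref;
}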
At this point the handle for the service's binder has been generated, and through fp->handle = ref->desc it is delivered to the Service Manager and stored there. When a third-party process later wants to use this service, it queries the Service Manager via getService, the Service Manager returns that handle, and the third-party process uses the handle to communicate with the service process. Execution again reaches binder_transaction in the driver:
if (reply) {
    ......
} else {
    if (tr->target.handle) {
        struct binder_ref *ref;
        ref = binder_get_ref(proc, tr->target.handle);
        if (ref == NULL) {
            binder_user_error("binder: %d:%d got "
                "transaction to invalid handle\n",
                proc->pid, thread->pid);
            return_error = BR_FAILED_REPLY;
            goto err_invalid_target_handle;
        }
        target_node = ref->node;
    } else {
        target_node = binder_context_mgr_node;
        if (target_node == NULL) {
            return_error = BR_DEAD_REPLY;
            goto err_no_context_mgr_node;
        }
    }
    ......
}
When we analyzed this code earlier, tr->target.handle was the Service Manager's handle, 0, so the else branch ran. This time tr->target.handle is the service's handle returned by the Service Manager, a number greater than 0, so the if branch runs: the service's binder reference is found in the third-party process's red-black tree of references, and from that reference the process behind the service's binder is found. Communication with it can then begin.
Did you notice the remaining question here: how did the service's binder reference get into the third-party process's red-black tree of references in the first place?
I will leave that for you to work out yourself; only what you figure out on your own truly becomes yours. A hint: when a third-party process calls getService, the binder found in the Service Manager is used to call binder_get_ref_for_node, which creates a reference in the third-party process's red-black tree.
References:
http://blog.csdn.net/Luoshengyang/article/list/3
http://www.cnblogs.com/innost/archive/2011/01/09/1931456.html
posted on 2015-05-08 18:40 by skywalker0011