Page Fault Exceptions in Detail

First, let's be clear about what a page fault is. Through the address bus the CPU can access every device attached to it, including physical memory and I/O devices, but the address the CPU issues is not the physical bus address of those devices; it is a virtual address. The MMU translates the virtual address into a physical address, which is then placed on the address bus. The virtual-to-physical translations held by the MMU must be created explicitly, and the MMU can additionally mark whether a physical page may be written. When no virtual-to-physical mapping has been created for an address, or the mapping exists but the target physical page is not writable, the MMU notifies the CPU that a page fault has occurred.

The page fault scenarios can be summarized as follows:

1. The MMU has no virtual-to-physical mapping for the address, and no vma of the current process lies at or above the faulting address. This is certainly a programming error, and the process will be killed.

2. The MMU has no virtual-to-physical mapping for the address, but a vma of the current process does exist above it. This is very likely a legitimate page fault, possibly one caused by the stack overflowing its currently mapped range.

3. After a library call or system call such as malloc/mmap that requests address space, Linux does not actually map physical pages into the newly created vma. If the first access is a write, a page fault is raised as in case 2. If the first access is a read, a page fault is also raised, but the address is merely mapped to the default zero page (zero_pfn); a later write then faults again, and this time a physical page must finally be allocated, entering the copy-on-write path.

4. When a child process is created with fork or a similar system call, the child's vmas (whether or not it has its own copies) map the same physical pages as the parent's, but those shared pages are marked read-only; that is, Linux has not really allocated physical pages for the child. When either parent or child writes to such a page, the fault triggers copy-on-write.

As far as I can tell these four cases cover the picture, and they are fairly clean. The important pattern to notice is that Linux defers allocating a physical page until it absolutely has to; keeping that principle in mind makes everything easier to follow. Now let's walk through the fault handling in detail.

On ARM the page fault handler is the do_page_fault function in arch/arm/mm/fault.c. How the exception vectors actually reach this function will, like the previous post on process address space creation, be covered in a dedicated article later; here we only care about the fault handling itself. Here is do_page_fault:

static int __kprobes
do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
        struct task_struct *tsk;
        struct mm_struct *mm;
        int fault, sig, code;

        /* a no-op in this configuration */
        if (notify_page_fault(regs, fsr))
                return 0;

        /* grab the faulting task's descriptor and its memory descriptor */
        tsk = current;
        mm  = tsk->mm;

        /*
         * If we're in an interrupt or have no user
         * context, we must not take the fault..
         */
        /* 1. in_atomic(): did the fault happen in atomic context
         *    (interrupt, deferrable function, critical section)?
         * 2. !mm: is this a kernel thread?  A kernel thread's task
         *    descriptor always has mm == NULL.
         * Either way the fault happened in kernel context: go to
         * label no_context. */
        if (in_atomic() || !mm)
                goto no_context;

        /*
         * As per x86, we may deadlock here.  However, since the kernel only
         * validly references user space from well defined areas of the code,
         * we can bug out early if this is from code which shouldn't.
         */
        if (!down_read_trylock(&mm->mmap_sem)) {
                if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc))
                        goto no_context;
                down_read(&mm->mmap_sem);
        } else {
                /*
                 * The above down_read_trylock() might have succeeded in
                 * which case, we'll have missed the might_sleep() from
                 * down_read()
                 */
                might_sleep();
#ifdef CONFIG_DEBUG_VM
                if (!user_mode(regs) &&
                    !search_exception_tables(regs->ARM_pc))
                        goto no_context;
#endif
        }

        fault = __do_page_fault(mm, addr, fsr, tsk);
        up_read(&mm->mmap_sem);

        /*
         * Handle the "normal" case first - VM_FAULT_MAJOR / VM_FAULT_MINOR
         */
        /* If fault carries none of the bits tested below, it is
         * VM_FAULT_MAJOR or VM_FAULT_MINOR: the fault has been handled, so
         * return.  Normally __do_page_fault returns 0 (VM_FAULT_MINOR) or a
         * similar value, never the error bits checked here. */
        if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP | VM_FAULT_BADACCESS))))
                return 0;

        /* VM_FAULT_OOM is severe enough that the process must die */
        if (fault & VM_FAULT_OOM) {
                /*
                 * We ran out of memory, call the OOM killer, and return to
                 * userspace (which will retry the fault, or kill us if we
                 * got oom-killed)
                 */
                pagefault_out_of_memory();
                return 0;
        }

        /*
         * If we are in kernel mode at this point, we
         * have no context to handle this fault with.
         */
        /* the fault came from kernel space and __do_page_fault could not
         * resolve it: go to no_context */
        if (!user_mode(regs))
                goto no_context;

        /* The two remaining cases, per the English comments: a fault that
         * could not be fixed up, and an access to an illegal address.
         * Both kill the process. */
        if (fault & VM_FAULT_SIGBUS) {
                /*
                 * We had some memory, but were unable to
                 * successfully fix up this page fault.
                 */
                sig = SIGBUS;
                code = BUS_ADRERR;
        } else {
                /*
                 * Something tried to access memory that
                 * isn't in our memory map..
                 */
                sig = SIGSEGV;
                code = fault == VM_FAULT_BADACCESS ?
                        SEGV_ACCERR : SEGV_MAPERR;
        }

        /* deliver the signal, killing the user process */
        __do_user_fault(tsk, addr, fsr, sig, code, regs);
        return 0;

no_context:
        /* a kernel-mode fault; if it cannot be fixed up, the kernel itself
         * must die too */
        __do_kernel_fault(mm, addr, fsr, regs);
        return 0;
}

Let's look at the first point of interest, this snippet:

        /* 1. Did the fault happen in atomic context (interrupt, deferrable
         *    function, critical section)?
         * 2. Is this a kernel thread?  A kernel thread's task descriptor
         *    always has mm == NULL.  Either way the fault happened in
         *    kernel context: jump to label no_context. */
        if (in_atomic() || !mm)
                goto no_context;

If the current flow of execution is in kernel mode, whether inside an atomic context (interrupt / deferred work / critical section) or a kernel thread (whose mm is NULL), the problem occurred in kernel space, and we jump to label no_context, handled by __do_kernel_fault. That function first tries its best to resolve the fault: it searches the exception table for a fixup routine matching the current fault and runs it. (I have never managed to track down where these fixups actually live; if you find it, please leave a comment!) If the fault cannot be resolved through the exception table, the kernel prints its page tables and other state and then dies. Source:

static void
__do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
                  struct pt_regs *regs)
{
        /*
         * Are we prepared to handle this kernel fault?
         */
        /* fixup_exception() searches the exception table and tries to find
         * a fixup routine matching this fault; if found, the routine runs
         * after fixup_exception() returns */
        if (fixup_exception(regs))
                return;

        /*
         * No handler, we'll have to terminate things with extreme prejudice.
         */
        /* Reaching here means the fault really is a kernel bug.  The kernel
         * produces an oops: it prints the CPU registers and the kernel-mode
         * stack to the console and terminates the current task. */
        bust_spinlocks(1);
        printk(KERN_ALERT
                "Unable to handle kernel %s at virtual address %08lx\n",
                (addr < PAGE_SIZE) ? "NULL pointer dereference" :
                "paging request", addr);
        /* print the kernel's first- and second-level page table entries */
        show_pte(mm, addr);
        /* generate the oops, dumping state in preparation for dying */
        die("Oops", regs, fsr);
        bust_spinlocks(0);
        /* and exit */
        do_exit(SIGKILL);
}


Continuing from the previous post:

Back in do_page_fault: if the fault came from a user process rather than the kernel, __do_page_fault is called. This is the heart of this article, which is mainly about user-process page faults. __do_page_fault checks for each of the four cases listed at the beginning. Source:

static int
__do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
                struct task_struct *tsk)
{
        struct vm_area_struct *vma;
        int fault;

        /* find the nearest vma ending above the faulting address */
        vma = find_vma(mm, addr);
        fault = VM_FAULT_BADMAP;
        /* no vma above addr at all: addr is simply a bad address */
        if (unlikely(!vma))
                goto out;
        /* a vma exists above addr but does not contain it; addr may still
         * be valid (a stack address), so check further */
        if (unlikely(vma->vm_start > addr))
                goto check_stack;

        /*
         * Ok, we have a good vm_area for this
         * memory access, so we can handle it.
         */
good_area:
        /* A permission error also returns early: if the fault (encoded in
         * fsr) reports a forbidden write/execute but the vma containing
         * addr is itself not writable/executable, the problem is not a
         * missing page at all; the vma itself forbids the access. */
        if (access_error(fsr, vma)) {
                fault = VM_FAULT_BADACCESS;
                goto out;
        }

        /*
         * If for any reason at all we couldn't handle the fault, make
         * sure we exit gracefully rather than endlessly redo the fault.
         */
        /* Allocate a physical page frame for the faulting process: first
         * make sure each level of page directory entry for the faulting
         * linear address exists, allocating any that are missing.  The
         * frame itself is allocated via handle_pte_fault. */
        fault = handle_mm_fault(mm, vma, addr & PAGE_MASK, (fsr & FSR_WRITE) ? FAULT_FLAG_WRITE : 0);
        if (unlikely(fault & VM_FAULT_ERROR))
                return fault;
        if (fault & VM_FAULT_MAJOR)
                tsk->maj_flt++;
        else
                tsk->min_flt++;
        return fault;

check_stack:
        /* The vma above addr has VM_GROWSDOWN set, i.e. it is a stack vma,
         * so addr may be a stack address that faulted because the stack
         * needs to grow.  If the stack can still be expanded (expand_stack
         * returns 0), addr really is a stack address rather than an illegal
         * one, and we proceed to demand paging at good_area. */
        if (vma->vm_flags & VM_GROWSDOWN && !expand_stack(vma, addr))
                goto good_area;
out:
        return fault;
}

- First, take the faulting virtual address addr and find the nearest vma above it. If none exists at all, the access really is invalid, because addr lies outside every allocated vma of the process. This is a fatal error: the error code (fault) VM_FAULT_BADMAP is returned and the kernel kills the process.

- If a vma exists above addr but addr does not fall inside it, there is still one legitimate possibility. Remember that the stack grows in the opposite direction to the heap, i.e. downward, so addr may actually be a stack address and the vma above it the stack vma: the access faulted because addr lies just below the mapped stack and therefore has no second-level page table mapping yet. So check whether the vma above addr grows downward and whether the stack can still be expanded; that is how addr is classified as a stack address or not. If it is, we enter the normal fault handling path; otherwise the error code (fault) VM_FAULT_BADMAP is again returned and the kernel kills the process.

- A permission error also returns early: if the fault (fsr) reports, say, a forbidden write but the vma itself is not writable, then the problem is not a missing page at all; the vma itself is at fault. The error code (fault) VM_FAULT_BADACCESS is returned; this too is a fatal error and the kernel kills the process.

- Finally, genuine page faults are handled by calling handle_mm_fault, which normally returns VM_FAULT_MAJOR or VM_FAULT_MINOR; the error code fault is returned and the task's maj_flt or min_flt counter is incremented.

handle_mm_fault allocates a physical page frame for the faulting process. It first ensures that each level of page directory entry for the faulting linear address exists, allocating any that are missing; the frame itself is allocated by handle_pte_fault(). Note the last argument, flags: it is derived from fsr and distinguishes write faults from non-write faults, laying the groundwork for deferring the physical allocation even further. Source:

int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma,
                unsigned long address, unsigned int flags)
{
        pgd_t *pgd;
        pud_t *pud;
        pmd_t *pmd;
        pte_t *pte;

        __set_current_state(TASK_RUNNING);
        count_vm_event(PGFAULT);

        if (unlikely(is_vm_hugetlb_page(vma)))
                return hugetlb_fault(mm, vma, address, flags);

        /* the first-level page table entry for address */
        pgd = pgd_offset(mm, address);
        /* on ARM the pud folds back into the pgd */
        pud = pud_alloc(mm, pgd, address);
        if (!pud)
                return VM_FAULT_OOM;
        /* and the pmd folds into the pud, i.e. into the pgd as well */
        pmd = pmd_alloc(mm, pud, address);
        if (!pmd)
                return VM_FAULT_OOM;
        /* the second-level page table entry for address, allocated if
         * absent */
        pte = pte_alloc_map(mm, pmd, address);
        if (!pte)
                return VM_FAULT_OOM;

        /* handle_pte_fault splits on whether the physical frame described
         * by pte is present in main memory:
         * demand paging: the frame is not present and must be allocated;
         *   the cases are linear (anonymous/file) mappings, nonlinear
         *   mappings, and swapped-out pages;
         * copy-on-write: the page is present but read-only and the kernel
         *   needs to write it; the data in the existing read-only page is
         *   copied into a new frame. */
        return handle_pte_fault(mm, vma, address, pte, pmd, flags);
}

Note one detail first: when the second-level page table entry does not exist, it is created here. Execution then reaches handle_pte_fault, whose job the comment above already describes. Source:

static inline int handle_pte_fault(struct mm_struct *mm,
                struct vm_area_struct *vma, unsigned long address,
                pte_t *pte, pmd_t *pmd, unsigned int flags)
{
        pte_t entry;
        spinlock_t *ptl;

        entry = *pte;
        /* Demand paging: linear (anonymous/file) mapping, nonlinear
         * mapping, or swap.  Note that pte_present(entry) == 0 means the
         * physical address mapped by the second-level entry (i.e. *pte) is
         * not present: very likely a demand-paging request. */
        if (!pte_present(entry)) {
                /* pte_none(entry) means no physical address has ever been
                 * written into this second-level entry: no physical page
                 * was ever allocated here */
                if (pte_none(entry)) {
                        /* if the vma's operations include a fault method,
                         * this is a file mapping rather than an anonymous
                         * one, and do_linear_fault allocates the page */
                        if (vma->vm_ops) {
                                if (likely(vma->vm_ops->fault))
                                        return do_linear_fault(mm, vma, address,
                                                pte, pmd, flags, entry);
                        }
                        /* anonymous mapping: allocate the physical page,
                         * ultimately via alloc_pages */
                        return do_anonymous_page(mm, vma, address,
                                                 pte, pmd, flags);
                }
                /* pte_file(entry): a nonlinear mapping; do_nonlinear_fault
                 * allocates the page */
                if (pte_file(entry))
                        return do_nonlinear_fault(mm, vma, address,
                                           pte, pmd, flags, entry);
                /* otherwise a frame was allocated earlier but has since
                 * been swapped out; do_swap_page brings it back */
                return do_swap_page(mm, vma, address,
                                           pte, pmd, flags, entry);
        }

        /* Copy-on-write.  COW applies when a mapped page is written but is
         * not writable.  Two cases: the vma was previously mapped to the
         * zero page (zero_pfn), or the access is to an address space
         * obtained via fork (the child shares the parent's read-only
         * pages).  Common trait: the second-level entry forbids writing;
         * in short, the page is not writable. */
        ptl = pte_lockptr(mm, pmd);
        spin_lock(ptl);
        if (unlikely(!pte_same(*pte, entry)))
                goto unlock;
        /* the fault was raised by a write */
        if (flags & FAULT_FLAG_WRITE) {
                /* the second-level entry forbids writing: trigger COW */
                if (!pte_write(entry))
                        return do_wp_page(mm, vma, address,
                                           pte, pmd, ptl, entry);
                /* mark the page dirty */
                entry = pte_mkdirty(entry);
        }
        entry = pte_mkyoung(entry);
        if (ptep_set_access_flags(vma, address, pte, entry, flags & FAULT_FLAG_WRITE)) {
                update_mmu_cache(vma, address, entry);
        } else {
                /*
                 * This is needed only for protection faults but the arch code
                 * is not yet telling us if this is a protection fault or not.
                 * This still avoids useless tlb flushes for .text page faults
                 * with threads.
                 */
                if (flags & FAULT_FLAG_WRITE)
                        flush_tlb_page(vma, address);
        }
unlock:
        pte_unmap_unlock(pte, ptl);
        return 0;
}

With the four fault cases from the beginning in mind, the code above is easier to read. First the second-level page table entry value, entry, is fetched. In the copy-on-write cases the entry for the faulting addr still exists (at minimum the L_PTE_PRESENT flag is set); only the mapped physical page is not writable. So (!pte_present(entry)) identifies the demand-paging cases.

Within demand paging, if the entry value is 0, i.e. nothing at all, then this address's vma has never mapped any physical page. Whether the vma has a vm_ops operations table, and whether vm_ops provides a fault method, distinguishes a file mapping from an anonymous one; they dispatch to do_linear_fault and do_anonymous_page respectively.

Still within demand paging, if the entry carries the L_PTE_FILE flag, this is a nonlinear file mapping and do_nonlinear_fault allocates the page. Any other case means a frame was once allocated but was later swapped out by Linux, and do_swap_page allocates it again.

Beyond demand paging itself, linear/nonlinear file mappings and swap involve a great deal of file and swap machinery. To keep things simple, the rest of this article describes the user-space fault path for anonymous mappings only; in practice, everyday malloc calls are anonymous mappings anyway.

Anonymous mappings embody Linux's basic attitude toward handing out physical memory to processes: no physical page until there is truly no alternative. When malloc/mmap requests a mapping, the kernel only creates a vma for the process and maps no physical page at all. If the process then reads the requested region, the missing second-level page table entry makes the MMU raise a fault, and even then the kernel merely maps the default zero page, zero_pfn (created at init time, as described in the earlier article on memory page tables), into the vma. Only when the application finally writes the region has the moment of "no alternative" arrived, and the kernel maps a real physical page into the vma. Source:

static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
                unsigned long address, pte_t *page_table, pmd_t *pmd,
                unsigned int flags)
{
        struct page *page;
        spinlock_t *ptl;
        pte_t entry;

        /* For a read fault things are very simple: we are already in the
         * demand-paging path, so this must be the process's first access to
         * this page, and its contents do not matter.  Build entry from the
         * default zero page zero_pfn, postponing the real physical
         * allocation even further, and carry it to label setpte. */
        if (!(flags & FAULT_FLAG_WRITE)) {
                entry = pte_mkspecial(pfn_pte(my_zero_pfn(address),
                                                vma->vm_page_prot));
                ptl = pte_lockptr(mm, pmd);
                spin_lock(ptl);
                /* if this address's second-level entry is somehow already
                 * present, just unlock and return */
                if (!pte_none(*page_table))
                        goto unlock;
                /* label setpte writes the mapping into the second-level
                 * entry; here that mapping is zero_pfn */
                goto setpte;
        }

        /* a write fault: a new physical page must be allocated now */
        /* Allocate our own private page. */
        /* a no-op here */
        pte_unmap(page_table);
        /* allocate an anon_vma instance; reverse-mapping related, can be
         * ignored for now */
        if (unlikely(anon_vma_prepare(vma)))
                goto oom;
        /* calls alloc_page underneath; the page comes back zero-filled */
        page = alloc_zeroed_user_highpage_movable(vma, address);
        if (!page)
                goto oom;
        __SetPageUptodate(page);
        /* a no-op here */
        if (mem_cgroup_newpage_charge(page, mm, GFP_KERNEL))
                goto oom_free_page;

        /* entry gets the page's physical address plus protection bits: the
         * base value of the second-level mapping */
        entry = mk_pte(page, vma->vm_page_prot);
        /* for a writable vma the entry additionally becomes dirty and
         * writable */
        if (vma->vm_flags & VM_WRITE)
                entry = pte_mkwrite(pte_mkdirty(entry));

        /* point page_table at the second-level entry for address */
        page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
        /* if the entry is somehow already present, undo and bail out */
        if (!pte_none(*page_table))
                goto release;
        /* bump mm's rss counter, the number of physical pages given to
         * this process */
        inc_mm_counter(mm, anon_rss);
        /* page_add_new_anon_rmap links the page into the vma's anonymous
         * reverse mapping; can be ignored for now */
        page_add_new_anon_rmap(page, vma, address);
setpte:
        /* write the mapping, entry, into the second-level entry
         * page_table */
        set_pte_at(mm, address, page_table, entry);

        /* No need to invalidate - it was non-present before */
        /* update the MMU state */
        update_mmu_cache(vma, address, entry);
unlock:
        pte_unmap_unlock(page_table, ptl);
        return 0;
release:
        mem_cgroup_uncharge_page(page);
        page_cache_release(page);
        goto unlock;
oom_free_page:
        page_cache_release(page);
oom:
        return VM_FAULT_OOM;
}

With the description above and the comments in the source, the principle and flow of demand paging should now be reasonably clear.

Now for copy-on-write (COW). The first thing to hold onto is that only a write can trigger COW, so the handler always first checks whether the fault flags contain FAULT_FLAG_WRITE, and then whether the second-level page table entry carries L_PTE_WRITE, i.e. whether the physical page is writable. If it is not, we should enter the copy-on-write path, handled by do_wp_page.

So COW applies when a mapped page is written but is not writable. That covers two cases: the first comes from fork; the second from, say, a malloc'd region that was first read (and thus mapped to the zero page zero_pfn) and is now being written. Their common trait is that the virtual address's second-level mapping is present in memory, but the corresponding page is read-only. do_wp_page handles the two cases in essentially the same way.

One more thing worth knowing: if only one process is using the page, the kernel simply makes the page writable instead of doing COW. In short, COW is avoided unless there is no way around it; the kernel's guiding principle for COW is to use it as little as possible.

Source of do_wp_page:

static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
                unsigned long address, pte_t *page_table, pmd_t *pmd,
                spinlock_t *ptl, pte_t orig_pte)
{
        struct page *old_page, *new_page;
        pte_t entry;
        int reuse = 0, ret = 0;
        int page_mkwrite = 0;
        struct page *dirty_page = NULL;

        /* Return the descriptor of the non-writable page.  In COW case one
         * (the readable zero page zero_pfn) this returns NULL and we take
         * the if branch below; in case two (a page shared between parent
         * and child) it returns the page descriptor normally. */
        old_page = vm_normal_page(vma, address, orig_pte);
        if (!old_page) {
                /*
                 * VM_MIXEDMAP !pfn_valid() case
                 *
                 * We should not cow pages in a shared writeable mapping.
                 * Just mark the pages writable as we can't do any dirty
                 * accounting on raw pfn maps.
                 */
                /* a shared, writable vma jumps to label reuse and skips
                 * COW; anything else jumps to label gotten */
                if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
                                     (VM_WRITE|VM_SHARED))
                        goto reuse;
                goto gotten;
        }

        /*
         * Take out anonymous pages first, anonymous shared vmas are
         * not dirty accountable.
         */
        /* The if and else-if branches below both try hard to avoid COW:
         * each attempts to reach label reuse. */

        /* If old_page is anonymous (per its mapping field) and only one
         * process uses it (reuse_swap_page, i.e. the descriptor's
         * _mapcount is 0), skip COW entirely: this process may simply keep
         * using the page. */
        if (PageAnon(old_page) && !PageKsm(old_page)) {
                /* rule out other processes currently using the page, via
                 * the page descriptor's flags */
                if (!trylock_page(old_page)) {
                        page_cache_get(old_page);
                        pte_unmap_unlock(page_table, ptl);
                        lock_page(old_page);
                        page_table = pte_offset_map_lock(mm, pmd, address,
                                                         &ptl);
                        if (!pte_same(*page_table, orig_pte)) {
                                unlock_page(old_page);
                                page_cache_release(old_page);
                                goto unlock;
                        }
                        page_cache_release(old_page);
                }
                /* check whether the page descriptor's _mapcount is 0 */
                reuse = reuse_swap_page(old_page);
                unlock_page(old_page);
        }
        /* if the vma is shared and writable, see whether there is a chance
         * to avoid COW in this case too */
        else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
                                          (VM_WRITE|VM_SHARED))) {
                /*
                 * Only catch write-faults on shared writable pages,
                 * read-only shared pages can get COWed by
                 * get_user_pages(.write=1, .force=1).
                 */
                if (vma->vm_ops && vma->vm_ops->page_mkwrite) {
                        struct vm_fault vmf;
                        int tmp;

                        vmf.virtual_address = (void __user *)(address &
                                                                PAGE_MASK);
                        vmf.pgoff = old_page->index;
                        vmf.flags = FAULT_FLAG_WRITE|FAULT_FLAG_MKWRITE;
                        vmf.page = old_page;

                        /*
                         * Notify the address space that the page is about to
                         * become writable so that it can prohibit this or wait
                         * for the page to get into an appropriate state.
                         *
                         * We do this without the lock held, so that it can
                         * sleep if it needs to.
                         */
                        page_cache_get(old_page);
                        pte_unmap_unlock(page_table, ptl);

                        tmp = vma->vm_ops->page_mkwrite(vma, &vmf);
                        if (unlikely(tmp &
                                        (VM_FAULT_ERROR | VM_FAULT_NOPAGE))) {
                                ret = tmp;
                                goto unwritable_page;
                        }
                        if (unlikely(!(tmp & VM_FAULT_LOCKED))) {
                                lock_page(old_page);
                                if (!old_page->mapping) {
                                        ret = 0; /* retry the fault */
                                        unlock_page(old_page);
                                        goto unwritable_page;
                                }
                        } else
                                VM_BUG_ON(!PageLocked(old_page));

                        /*
                         * Since we dropped the lock we need to revalidate
                         * the PTE as someone else may have changed it.  If
                         * they did, we just return, as we can count on the
                         * MMU to tell us if they didn't also make it writable.
                         */
                        page_table = pte_offset_map_lock(mm, pmd, address,
                                                         &ptl);
                        if (!pte_same(*page_table, orig_pte)) {
                                unlock_page(old_page);
                                page_cache_release(old_page);
                                goto unlock;
                        }
                        page_mkwrite = 1;
                }
                dirty_page = old_page;
                get_page(dirty_page);
                reuse = 1;
        }

        /* reuse: no COW; operate on old_page directly */
        if (reuse) {
reuse:
                flush_cache_page(vma, address, pte_pfn(orig_pte));
                entry = pte_mkyoung(orig_pte);
                /* rewrite the page's second-level attributes: add writable
                 * and dirty */
                entry = maybe_mkwrite(pte_mkdirty(entry), vma);
                if (ptep_set_access_flags(vma, address, page_table, entry, 1))
                        update_mmu_cache(vma, address, entry);
                ret |= VM_FAULT_WRITE;
                goto unlock;
        }

        /*
         * Ok, we need to copy. Oh, well..
         */
        /* the real COW is about to begin */
        /* first take a reference on the old page (get_page(),
         * page->_count) */
        page_cache_get(old_page);
gotten:
        pte_unmap_unlock(page_table, ptl);

        if (unlikely(anon_vma_prepare(vma)))
                goto oom;
        /* COW case one (zero_pfn): allocate a new page and zero it */
        if (is_zero_pfn(pte_pfn(orig_pte))) {
                new_page = alloc_zeroed_user_highpage_movable(vma, address);
                if (!new_page)
                        goto oom;
        }
        /* COW case two (fork): allocate a page and copy old_page's
         * contents (4KB) into new_page */
        else {
                new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
                if (!new_page)
                        goto oom;
                cow_user_page(new_page, old_page, address, vma);
        }
        __SetPageUptodate(new_page);

        /*
         * Don't let another task, with possibly unlocked vma,
         * keep the mlocked page.
         */
        /* in COW case two, if the vma is mlocked, the old page's mlock
         * state must be cleared */
        if ((vma->vm_flags & VM_LOCKED) && old_page) {
                lock_page(old_page);      /* for LRU manipulation */
                clear_page_mlock(old_page);
                unlock_page(old_page);
        }
        /* a no-op here */
        if (mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))
                goto oom_free_new;

        /*
         * Re-check the pte - we dropped the lock
         */
        /* refetch the second-level entry address page_table for the
         * faulting address */
        page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
        if (likely(pte_same(*page_table, orig_pte))) {
                if (old_page) {
                        if (!PageAnon(old_page)) {
                                dec_mm_counter(mm, file_rss);
                                inc_mm_counter(mm, anon_rss);
                        }
                } else
                        inc_mm_counter(mm, anon_rss);
                flush_cache_page(vma, address, pte_pfn(orig_pte));
                /* build the new page's second-level entry: writable and
                 * dirty */
                entry = mk_pte(new_page, vma->vm_page_prot);
                entry = maybe_mkwrite(pte_mkdirty(entry), vma);
                /*
                 * Clear the pte entry and flush it first, before updating the
                 * pte with the new entry. This will avoid a race condition
                 * seen in the presence of one thread doing SMC and another
                 * thread doing COW.
                 */
                ptep_clear_flush(vma, address, page_table);
                page_add_new_anon_rmap(new_page, vma, address);
                /*
                 * We call the notify macro here because, when using secondary
                 * mmu page tables (such as kvm shadow page tables), we want the
                 * new page to be mapped directly into the secondary page table.
                 */
                set_pte_at_notify(mm, address, page_table, entry);
                update_mmu_cache(vma, address, entry);
                if (old_page) {
                        /*
                         * Only after switching the pte to the new page may
                         * we remove the mapcount here. Otherwise another
                         * process may come and find the rmap count decremented
                         * before the pte is switched to the new page, and
                         * "reuse" the old page writing into it while our pte
                         * here still points into it and can be read by other
                         * threads.
                         *
                         * The critical issue is to order this
                         * page_remove_rmap with the ptp_clear_flush above.
                         * Those stores are ordered by (if nothing else,)
                         * the barrier present in the atomic_add_negative
                         * in page_remove_rmap.
                         *
                         * Then the TLB flush in ptep_clear_flush ensures that
                         * no process can access the old page before the
                         * decremented mapcount is visible. And the old page
                         * cannot be reused until after the decremented
                         * mapcount is visible. So transitively, TLBs to
                         * old page will be flushed before it can be reused.
                         */
                        page_remove_rmap(old_page);
                }

                /* Free the old page.. */
                new_page = old_page;
                ret |= VM_FAULT_WRITE;
        }
        else
                mem_cgroup_uncharge_page(new_page);

        if (new_page)
                page_cache_release(new_page);
        if (old_page)
                page_cache_release(old_page);
unlock:
        pte_unmap_unlock(page_table, ptl);
        if (dirty_page) {
                /*
                 * Yes, Virginia, this is actually required to prevent a race
                 * with clear_page_dirty_for_io() from clearing the page dirty
                 * bit after it clear all dirty ptes, but before a racing
                 * do_wp_page installs a dirty pte.
                 *
                 * do_no_page is protected similarly.
                 */
                if (!page_mkwrite) {
                        wait_on_page_locked(dirty_page);
                        set_page_dirty_balance(dirty_page, page_mkwrite);
                }
                put_page(dirty_page);
                if (page_mkwrite) {
                        struct address_space *mapping = dirty_page->mapping;

                        set_page_dirty(dirty_page);
                        unlock_page(dirty_page);
                        page_cache_release(dirty_page);
                        if (mapping) {
                                /*
                                 * Some device drivers do not set page.mapping
                                 * but still dirty their pages
                                 */
                                balance_dirty_pages_ratelimited(mapping);
                        }
                }
                /* file_update_time outside page_lock */
                if (vma->vm_file)
                        file_update_time(vma->vm_file);
        }
        return ret;
oom_free_new:
        page_cache_release(new_page);
oom:
        if (old_page) {
                if (page_mkwrite) {
                        unlock_page(old_page);
                        page_cache_release(old_page);
                }
                page_cache_release(old_page);
        }
        return VM_FAULT_OOM;
unwritable_page:
        page_cache_release(old_page);
        return ret;
}

Returning level by level, we end up back in __do_page_fault, which increments the task's fault counter (maj_flt or min_flt) according to the return value fault, and finally hands fault back to do_page_fault; that function releases the mmap_sem semaphore and, in the normal case, returns 0. The page fault has been handled.

posted @ 2016-07-06 20:13 极客先锋