Linux mem 2.4 Buddy 内存管理机制

1. Buddy 简介

内存是计算机系统中最重要的核心资源之一,Buddy 系统是 Linux 最底层的内存管理机制,它使用 Page 粒度来管理内存。通常情况下一个 Page 的大小为 4K,在 Buddy 系统中分配、释放、回收的最小单位都是 Page。

(图:Buddy 系统的内部组织结构)

上图是 Buddy 系统的内部组织结构,本篇文章只关心未分配的 Free 区域的管理,下篇文章再来分析可回收区域的管理。

一个系统的内存总大小动辄几G几十G,不同的内存区域也有不同的特性。Buddy 使用层次化的结构把这些特性给组织起来:

  • 1、Node。在 NUMA 架构下存在多个 Memory 和 CPU 节点,不同 CPU 访问不同 Memory 节点的速度是不一样的,使用 Node 的形式把各个 Memory 节点的内存管理起来。
  • 2、Zone。某些外设只能访问低 16M 的物理地址,某些外设只能访问低 4G 的物理地址,32bit 的内核空间只能直接映射低 896M 物理地址。根据这些地址空间的限制,把同一个 node 内的内存再划分成多个 zone 。
  • 3、Order Freelist。按照空闲内存块的长度,把内存挂载到不同长度的 freelist 链表中。freelist 管理的块大小以 (2^order x Page) 递增,即 1 page、2 page、4 page … 2^n,通常情况下最大 order 为 10,对应的空闲内存块大小为 4M bytes。在分配时,如果对应 order 的链表为空,就把更大的空闲块从大到小逐级分解成多个 (2^order x Page) 的块来挂载;在释放时,首先把内存释放到对应长度的链表中,随后看看和该内存大小相同、地址相邻的兄弟块(Buddy)是不是 free 的,如果是就和 buddy 块合并成一个大块挂载到更高一阶的链表,并在挂载的时候继续尝试向上合并。这就是 Buddy 的核心思想:以 2 的幂个 page 的长度来管理内存,方便分配和释放,最核心的目的就是减少内存的碎片化(伙伴的配对计算参见本节列表后的示意代码)。
  • 4、Migrate Type。为了进一步减少碎片化,系统对内存按照迁移类型进行了分类,最基本的迁移类型有:不可移动(unmovable)、可移动(movable)、可回收(reclaimable)。初始时一大块空闲内存属于同一种迁移类型,如果其中一小块被分配给了 reclaimable,那么剩下的内存也会被转成 reclaimable。这样容易产生碎片的类型和同类集中到了一起,避免碎片化扩散到其他区域,从而减少多个 Free 区域无法合并的情况。
  • 5、PerCPU 1 Page Cache。大于 1 Page 的内存分配大多发生在内核态,而用户态的内存分配使用的是缺页机制所以分配的大小一般是 1 Page。针对大小为 1 Page 的内存分配,系统设计了一个免锁的 PerCPU cache 来支撑。1 Page (Order = 0) 的空闲内存优先释放到 PCP 中,超过了一定 batch 才会释放到 Order Freelist当中;同样 1 Page 的内存也优先在 PCP 中分配。
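
上面第 3 点中伙伴(Buddy)的配对与合并,可以用下面一段独立的用户态示意代码来理解。这不是内核源码,只是按同样思路写的假设性示例,其中 find_buddy_pfn 对应内核 __find_buddy_pfn() 的计算方式:

#include <stdio.h>

/* 示意:伙伴块的 pfn 与当前块的 pfn 只在 (1 << order) 这一位上不同 */
static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
{
	return pfn ^ (1UL << order);
}

int main(void)
{
	unsigned long pfn = 8;    /* 假设释放的空闲块从 pfn = 8 开始 */
	unsigned int order = 1;   /* 块长度为 2^1 = 2 个 page */

	unsigned long buddy_pfn = find_buddy_pfn(pfn, order);  /* = 10 */
	unsigned long combined_pfn = buddy_pfn & pfn;          /* 合并后大块的起始 pfn = 8 */

	printf("pfn=%lu buddy_pfn=%lu, 可合并成 order=%u 的大块, 起始 pfn=%lu\n",
	       pfn, buddy_pfn, order + 1, combined_pfn);
	return 0;
}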
上述层次结构汇总如下表:

| layer | item | category | descript |
| ----- | ---- | -------- | -------- |
| 0 | node | 0-n | NUMA 结构含有多个 Memory 节点;UMA 结构只有一个 Memory 节点 |
| 1 | zone | ZONE_DMA (<16M) | x86 架构下某些 ISA 外设的 DMA 寻址能力为 16M |
| 1 | zone | ZONE_DMA32 (<4G) | 历史上 32bit 外设的 DMA 寻址能力为 4G |
| 1 | zone | ZONE_NORMAL | - |
| 1 | zone | ZONE_HIGHMEM | 在 x86 32bit 下,超过 896M 的内存无法在内核态线性映射,称为高端内存;x86_64 因为虚拟空间很大,没有这块内存 |
| 1 | zone | ZONE_MOVABLE | 这些区域的物理内存支持动态的热插拔 |
| 1 | zone | ZONE_DEVICE | 为某些设备预留的内存区域 |
| 2.1 | order freelist | 2^0 Page … 2^max_order Page | 按照 2 的幂个 Page 的大小来分类空闲内存 |
| 2.1.1 | migrate type | MIGRATE_UNMOVABLE / MIGRATE_MOVABLE / MIGRATE_RECLAIMABLE / MIGRATE_HIGHATOMIC / MIGRATE_CMA / MIGRATE_ISOLATE | 把每级 order freelist 按照迁移类型分成多个链表 |
| 2.2 | PCP | PerCPU: MIGRATE_UNMOVABLE / MIGRATE_MOVABLE / MIGRATE_RECLAIMABLE | 针对 1 Page 创建的 PerCPU cache,其中只包含 3 种基本的 migrate type |

2. Buddy 初始化

2.1 Struct Page 初始化

以 Page 大小的粒度来管理内存,一个 Page 对应的物理内存称为页框 (Page Frame)。另外为了应对复杂的管理,系统给每个 Page 还分配了一个管理结构 struct page,系统在初始化时会预留这部分的物理内存并且映射到 vmemmap 区域 (参考:内核地址空间布局),内核根据物理页帧的编号 pfn 就能在 vmemmap 区域中找到对应的 struct page 结构。
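
在使能 CONFIG_SPARSEMEM_VMEMMAP 内存模型的情况下,这个转换非常廉价,核心思路就是下面两个宏(参考 include/asm-generic/memory_model.h 中 VMEMMAP 这一分支,此处仅作示意):

/* struct page 数组整体线性映射在 vmemmap 区域,pfn 即数组下标 */
#define __pfn_to_page(pfn)	(vmemmap + (pfn))
#define __page_to_pfn(page)	(unsigned long)((page) - vmemmap)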

struct page 结构存储了很多信息 (参考:Page 页帧管理详解)。在 sparse_init() 时已经把所有的struct page 结构清零,zone_sizes_init() 初始化时主要初始化两部分信息:

  • 1、初始化 struct page :

将 page->flags 中保存的 section、node、zone 设置成对应的 index,这样后续操作 struct page 结构时就能快速地找到对应的 section、node、zone,而不需要重新根据 pfn 来进行计算。page->flags 中的 flag 部分初始化为 0。

另外给 page->_refcount、_mapcount、_last_cpupid、lru 等成员都进行了初始化。

start_kernel() → setup_arch() → x86_init.paging.pagetable_init() → native_pagetable_init() → paging_init() → zone_sizes_init() → free_area_init_nodes() → free_area_init_node() → free_area_init_core():

|→ memmap_init() → memmap_init_zone()

void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
		unsigned long start_pfn, enum memmap_context context)
{

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {

        /* (2) 当前是一个pageblock的第一个page */
		if (!(pfn & (pageblock_nr_pages - 1))) {
			struct page *page = pfn_to_page(pfn);

            /* (2.1) 初始化对应的 struct page 结构 */
			__init_single_page(page, pfn, zone, nid,
					context != MEMMAP_HOTPLUG);
            /* (2.2) 初始化时把所有pageblock的migratetype设置成MIGRATE_MOVABLE */
			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
			cond_resched();
        /* (3) pageblock中的其他page */
		} else {
            /* (3.1) 初始化对应的 struct page 结构 */
			__init_single_pfn(pfn, zone, nid,
					context != MEMMAP_HOTPLUG);
		}
	}
}

↓
__init_single_pfn()
↓

static void __meminit __init_single_page(struct page *page, unsigned long pfn,
				unsigned long zone, int nid, bool zero)
{
    /* (2.1.1) 如果需要,对struct page结构清零 */
	if (zero)
		mm_zero_struct_page(page);
    /* (2.1.2) 设置page->flags中的section index、node index、zone index */
	set_page_links(page, zone, nid, pfn);
    /* (2.1.3) 设置page->_refcount = 1 */
	init_page_count(page);
    /* (2.1.4) 设置page->_mapcount = -1 */
	page_mapcount_reset(page);
    /* (2.1.5) 设置page->_last_cpupid = -1 */
	page_cpupid_reset_last(page);

    /* (2.1.6) 初始化page->lru */
	INIT_LIST_HEAD(&page->lru);
#ifdef WANT_PAGE_VIRTUAL
	/* The shift won't overflow because ZONE_NORMAL is below 4G. */
	if (!is_highmem_idx(zone))
		set_page_address(page, __va(pfn << PAGE_SHIFT));
#endif
}
  • 2、分配并初始化 zone->pageblock_flags

文章开始时说了 migrate type 的概念。系统把内存划分成多个 pageblock,一个 pageblock 对应 (2^pageblock_order x Page)(pageblock_order 通常等于最大 order,即上文的 10;在支持大页的架构上也可能等于大页的 order),每个 pageblock 拥有自己的 migrate type。

系统以 zone 为单位分配空间来保存所有 pageblock 的 migrate type:

start_kernel() → setup_arch() → x86_init.paging.pagetable_init() → native_pagetable_init() → paging_init() → zone_sizes_init() → free_area_init_nodes() → free_area_init_node() → free_area_init_core():

|→ setup_usemap()

static void __init setup_usemap(struct pglist_data *pgdat,
				struct zone *zone,
				unsigned long zone_start_pfn,
				unsigned long zonesize)
{
	unsigned long usemapsize = usemap_size(zone_start_pfn, zonesize);
	zone->pageblock_flags = NULL;
	if (usemapsize)
        /* (1) 分配存储当前zone里所有pageblock的migrate标志 */
		zone->pageblock_flags =
			memblock_virt_alloc_node_nopanic(usemapsize,
							 pgdat->node_id);
}
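
可以粗略估算一下这部分管理开销(以下是假设性的算例:假设 pageblock_order = 9,即一个 pageblock 覆盖 512 个 page、2MB 内存,每个 pageblock 需要 NR_PBLOCK_BITS = 4 bit 来记录迁移类型等标志):

	以一个 1GB (262144 pages) 的 zone 为例:
	pageblock 个数   = 262144 / 512 = 512
	pageblock_flags ≈ 512 * 4 bit = 2048 bit = 256 字节

可见 migrate type 的管理开销相对于被管理的内存来说可以忽略不计。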

pageblock 的初始 migrate type 为 MIGRATE_MOVABLE:

void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
		unsigned long start_pfn, enum memmap_context context)
{

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {

        /* (2) 当前是一个pageblock的第一个page */
		if (!(pfn & (pageblock_nr_pages - 1))) {
			struct page *page = pfn_to_page(pfn);

            /* (2.1) 初始化对应的 struct page 结构 */
			__init_single_page(page, pfn, zone, nid,
					context != MEMMAP_HOTPLUG);
            /* (2.2) 初始化时把所有pageblock的migratetype设置成MIGRATE_MOVABLE */
			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
			cond_resched();
        /* (3) pageblock中的其他page */
		} else {
            /* (3.1) 初始化对应的 struct page 结构 */
			__init_single_pfn(pfn, zone, nid,
					context != MEMMAP_HOTPLUG);
		}
	}
}

pageblock 中第一个分配的内存的 migrate type 决定了整个 pageblock 的 migrate type。

2.2 Buddy 初始化

在内核启动过程中,Buddy 初始化以前,系统使用一个简单的 Memblock 机制来管理内存。在 Buddy 数据结构准备好后,需要把 Memblock 中的内存释放到 Buddy 当中。
(图:Memblock 内存释放到 Buddy 之后的初始状态)

这就是 Buddy 系统初始的状态,除了保留的内存,其他的内存都处于 Free 状态:

start_kernel() → mm_init() → mem_init() → free_all_bootmem():

unsigned long __init free_all_bootmem(void)
{
	unsigned long pages;

    /* (1) 将每个node每个zone管理的page清零:z->managed_pages = 0 */
	reset_all_zones_managed_pages();

    /* (2) 将memblock中的内存转移到buddy系统中 */
	pages = free_low_memory_core_early();
	totalram_pages += pages;

	return pages;
}

↓

static unsigned long __init free_low_memory_core_early(void)
{
	unsigned long count = 0;
	phys_addr_t start, end;
	u64 i;

	memblock_clear_hotplug(0, -1);

    /* (2.1) 遍历memblock中的保留内存,将其对应的`struct page`结构page->flags设置PG_reserved标志 */
	for_each_reserved_mem_region(i, &start, &end)
		reserve_bootmem_region(start, end);

	/*
	 * We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
	 *  because in some case like Node0 doesn't have RAM installed
	 *  low ram will be on Node1
	 */
    /* (2.2) 遍历memblock中尚未分配的内存,将其释放到buddy系统中 */
	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end,
				NULL)
		count += __free_memory_core(start, end);

	return count;
}

↓
__free_memory_core()
↓

static void __init __free_pages_memory(unsigned long start, unsigned long end)
{
	int order;

    /* (2.2.1) 对需要释放的区域,拆分成尽可能大的 2^order 内存块去释放 */
	while (start < end) {
		order = min(MAX_ORDER - 1UL, __ffs(start));

        /* (2.2.1.1) 计算最大的释放长度(2^order)page */
		while (start + (1UL << order) > end)
			order--;

        /* (2.2.1.2) 继续释放 */
		__free_pages_bootmem(pfn_to_page(start), start, order);

		start += (1UL << order);
	}
}

↓

__free_pages_bootmem() → __free_pages_boot_core() → __free_pages()
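
可以用一段用户态示意代码直观感受 __free_pages_memory() 的拆分逻辑(非内核源码,假设 MAX_ORDER = 11,__builtin_ctzl 等价于内核里的 __ffs):

#include <stdio.h>

#define MAX_ORDER 11UL

/* 示意:把任意 [start, end) 的 pfn 区间拆分成地址对齐的 (2^order) 大块 */
static void demo_split(unsigned long start, unsigned long end)
{
	while (start < end) {
		/* start 低位连续 0 的个数决定了它最大能对齐到多大的块 */
		unsigned long order = start ? __builtin_ctzl(start) : MAX_ORDER - 1;

		if (order > MAX_ORDER - 1)
			order = MAX_ORDER - 1;
		/* 不能越过 end,超出则降阶 */
		while (start + (1UL << order) > end)
			order--;

		printf("free pfn [%lu, %lu) order=%lu\n",
		       start, start + (1UL << order), order);
		start += 1UL << order;
	}
}

int main(void)
{
	demo_split(5, 23);	/* 假设要释放 pfn 5~22 这段内存 */
	return 0;
}

对于 [5, 23) 这个区间,输出依次是 order 0、1、3、2、1、0 的几个对齐块,和内核的拆分方式一致。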

具体的释放细节 __free_pages() 在下一节中解析。

3. 内存释放

Buddy 系统中,相比较内存的分配,内存的释放过程更简单,我们先来解析这部分。

这里体现了 Buddy 的核心思想:在内存释放时判断其 buddy 兄弟 page 是不是 order 大小相等的 free page,如果是则合并成更高一阶 order。这样的目的是最大可能的减少内存碎片化。

内存释放最后都会落到 __free_pages() 函数:

void __free_pages(struct page *page, unsigned int order)
{
    /* (1) 对page->_refcount减1后并判断是否为0
            如果引用计数为0了,说明可以释放page了
     */
	if (put_page_testzero(page))
		free_the_page(page, order);
}

↓

static inline void free_the_page(struct page *page, unsigned int order)
{
    /* (1) 单个 page 首先尝试释放到 pcp */
	if (order == 0)		/* Via pcp? */
		free_unref_page(page);
    /* (2) order > 0 的 (2^order) 个 page,释放到对应 order 的 free_area 当中 */
	else
		__free_pages_ok(page, order);
}

↓

static void __free_pages_ok(struct page *page, unsigned int order)
{
	unsigned long flags;
	int migratetype;
	unsigned long pfn = page_to_pfn(page);

    /* (2.1) page释放前的一些动作:
            清理一些成员
            做一些检查
            执行一些回调函数
     */
	if (!free_pages_prepare(page, order, true))
		return;

    /* (2.2) 获取到page所在pageblock的migrate type
            当前page会被释放到对应order free_area的对应 migrate freelist链表当中
     */
	migratetype = get_pfnblock_migratetype(page, pfn);
	local_irq_save(flags);
	__count_vm_events(PGFREE, 1 << order);
    /* (2.3) 向zone中释放page */
	free_one_page(page_zone(page), page, pfn, order, migratetype);
	local_irq_restore(flags);
}

↓
free_one_page()
↓

static inline void __free_one_page(struct page *page,
		unsigned long pfn,
		struct zone *zone, unsigned int order,
		int migratetype)
{
	unsigned long combined_pfn;
	unsigned long uninitialized_var(buddy_pfn);
	struct page *buddy;
	unsigned int max_order;

	max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);

	VM_BUG_ON(!zone_is_initialized(zone));
	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);

	VM_BUG_ON(migratetype == -1);
	if (likely(!is_migrate_isolate(migratetype)))
		__mod_zone_freepage_state(zone, 1 << order, migratetype);

	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
	VM_BUG_ON_PAGE(bad_range(zone, page), page);

continue_merging:
    /* (2.3.1) 尝试对释放的(2^order)长度的page进行逐级向上合并 */
	while (order < max_order - 1) {
        /* (2.3.1.1) 得到当前释放的(2^order)长度page对应的buddy伙伴page指针
                计算伙伴块 pfn 的方法是异或:buddy_pfn = pfn ^ (1 << order),即伙伴块与当前块仅在 (1 << order) 这一位上不同,前半块的伙伴是后半块,后半块的伙伴是前半块
         */
		buddy_pfn = __find_buddy_pfn(pfn, order);
		buddy = page + (buddy_pfn - pfn);

		if (!pfn_valid_within(buddy_pfn))
			goto done_merging;
        /* (2.3.1.2) 判断伙伴page的是否是buddy状态:
                    是否是free状态在buddy系统中(page->_mapcount == PAGE_BUDDY_MAPCOUNT_VALUE)
                    当前的free order和要释放的order相等(page->private == order)
          */
		if (!page_is_buddy(page, buddy, order))
			goto done_merging;
		/*
		 * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
		 * merge with it and move up one order.
		 */
		if (page_is_guard(buddy)) {
			clear_page_guard(zone, buddy, order, migratetype);
		} else {
            /* (2.3.1.3) 如果满足合并的条件,则准备开始合并
                        把伙伴page从原freelist中删除
             */
			list_del(&buddy->lru);
			zone->free_area[order].nr_free--;
            /* 清理page中保存的order信息:
                page->_mapcount = -1
                page->private = 0
            */
			rmv_page_order(buddy);
		}
        /* (2.3.1.4) 组成了更高一级order的空闲内存 */
		combined_pfn = buddy_pfn & pfn;
		page = page + (combined_pfn - pfn);
		pfn = combined_pfn;
		order++;
	}
	if (max_order < MAX_ORDER) {
		/* If we are here, it means order is >= pageblock_order.
         * 如果在这里,意味着order  >= pageblock_order。
		 * We want to prevent merge between freepages on isolate
		 * pageblock and normal pageblock. Without this, pageblock
		 * isolation could cause incorrect freepage or CMA accounting.
         * 我们要防止隔离页面块和正常页面块上的空闲页面合并。 否则,页面块隔离可能导致不正确的空闲页面或CMA计数。
		 *
		 * We don't want to hit this code for the more frequent
		 * low-order merging.
         * 我们不想命中此代码进行频繁的低阶合并。
		 */
		if (unlikely(has_isolate_pageblock(zone))) {
			int buddy_mt;

			buddy_pfn = __find_buddy_pfn(pfn, order);
			buddy = page + (buddy_pfn - pfn);
			buddy_mt = get_pageblock_migratetype(buddy);

			if (migratetype != buddy_mt
					&& (is_migrate_isolate(migratetype) ||
						is_migrate_isolate(buddy_mt)))
				goto done_merging;
		}
		max_order++;
		goto continue_merging;
	}

    /* (2.3.2) 开始挂载合并成order的空闲内存 */
done_merging:
    /* (2.3.2.1) page中保存order大小:
                page->_mapcount = PAGE_BUDDY_MAPCOUNT_VALUE(-128)
                page->private = order
     */
	set_page_order(page, order);

	/*
	 * If this is not the largest possible page, check if the buddy
	 * of the next-highest order is free. If it is, it's possible
	 * that pages are being freed that will coalesce soon. In case,
	 * that is happening, add the free page to the tail of the list
	 * so it's less likely to be used soon and more likely to be merged
	 * as a higher order page
     * 如果这不是最大可能的页面,则检查更高一阶 order 的伙伴是否空闲。如果是,则说明相邻的页面可能即将被释放并很快发生合并。这种情况下,把当前空闲页面添加到链表尾部,使它不容易马上被分配出去,从而更有可能被合并成更高阶的页面
	 */
    /* (2.3.2.2) 将空闲page加到对应order链表的尾部 */
	if ((order < MAX_ORDER-2) && pfn_valid_within(buddy_pfn)) {
		struct page *higher_page, *higher_buddy;
		combined_pfn = buddy_pfn & pfn;
		higher_page = page + (combined_pfn - pfn);
		buddy_pfn = __find_buddy_pfn(combined_pfn, order + 1);
		higher_buddy = higher_page + (buddy_pfn - combined_pfn);
		if (pfn_valid_within(buddy_pfn) &&
		    page_is_buddy(higher_page, higher_buddy, order + 1)) {
			list_add_tail(&page->lru,
				&zone->free_area[order].free_list[migratetype]);
			goto out;
		}
	}

    /* (2.3.2.3) 将空闲page加到对应order链表的开始 */
	list_add(&page->lru, &zone->free_area[order].free_list[migratetype]);
out:
	zone->free_area[order].nr_free++;
}

PageBuddy()用来判断page是否在buddy系统中,还有很多类似的page操作函数都定义在page-flags.h当中:

linux-source-4.15.0\include\linux\page-flags.h:

/*
 * For pages that are never mapped to userspace, page->mapcount may be
 * used for storing extra information about page type. Any value used
 * for this purpose must be <= -2, but it's better start not too close
 * to -2 so that an underflow of the page_mapcount() won't be mistaken
 * for a special page.
 */
#define PAGE_MAPCOUNT_OPS(uname, lname)					\
static __always_inline int Page##uname(struct page *page)		\
{									\
	return atomic_read(&page->_mapcount) ==				\
				PAGE_##lname##_MAPCOUNT_VALUE;		\
}									\
static __always_inline void __SetPage##uname(struct page *page)		\
{									\
	VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);	\
	atomic_set(&page->_mapcount, PAGE_##lname##_MAPCOUNT_VALUE);	\
}									\
static __always_inline void __ClearPage##uname(struct page *page)	\
{									\
	VM_BUG_ON_PAGE(!Page##uname(page), page);			\
	atomic_set(&page->_mapcount, -1);				\
}

/*
 * PageBuddy() indicate that the page is free and in the buddy system
 * (see mm/page_alloc.c).
 */
#define PAGE_BUDDY_MAPCOUNT_VALUE		(-128)
PAGE_MAPCOUNT_OPS(Buddy, BUDDY)

对于单个 page(order = 0),会首先释放到 PerCPU 缓存中:

start_kernel() → mm_init() → mem_init() → free_all_bootmem() → free_low_memory_core_early() → __free_memory_core() → __free_pages_memory() → __free_pages_bootmem() → __free_pages() → free_the_page() → free_unref_page():

↓

void free_unref_page(struct page *page)
{
	unsigned long flags;
	unsigned long pfn = page_to_pfn(page);

    /* (1) 一些初始化准备工作
            page->index = migratetype;
     */
	if (!free_unref_page_prepare(page, pfn))
		return;

	local_irq_save(flags);
    /* (2) 释放page到pcp中 */
	free_unref_page_commit(page, pfn);
	local_irq_restore(flags);
}

↓

static void free_unref_page_commit(struct page *page, unsigned long pfn)
{
	struct zone *zone = page_zone(page);
	struct per_cpu_pages *pcp;
	int migratetype;

    /* (2.1) migratetype = page->index */
	migratetype = get_pcppage_migratetype(page);
	__count_vm_event(PGFREE);

	/*
	 * We only track unmovable, reclaimable and movable on pcp lists.
	 * Free ISOLATE pages back to the allocator because they are being
	 * offlined but treat HIGHATOMIC as movable pages so we can get those
	 * areas back if necessary. Otherwise, we may have to free
	 * excessively into the page allocator
	 */
    /* (2.2) 对于某些migratetype的特殊处理 */
	if (migratetype >= MIGRATE_PCPTYPES) {
        /* (2.2.1) 对于isolate类型,free到全局的freelist中 */
		if (unlikely(is_migrate_isolate(migratetype))) {
			free_one_page(zone, page, pfn, 0, migratetype);
			return;
		}
		migratetype = MIGRATE_MOVABLE;
	}

    /* (2.3) 获取到zone当前cpu pcp的链表头 */
	pcp = &this_cpu_ptr(zone->pageset)->pcp;
    /* (2.4) 将空闲的单page加入到pcp对应链表中 */
	list_add(&page->lru, &pcp->lists[migratetype]);
	pcp->count++;
    /* (2.5) 如果pcp中的page数量过多(大于pcp->high),释放pcp->batch个page到全局free list当中去 */
	if (pcp->count >= pcp->high) {
		unsigned long batch = READ_ONCE(pcp->batch);
		free_pcppages_bulk(zone, batch, pcp);
		pcp->count -= batch;
	}
}

pcp->high 和 pcp->batch 的赋值过程:

start_kernel() → setup_per_cpu_pageset() → setup_zone_pageset() → zone_pageset_init() → pageset_set_high_and_batch():

|→

static int zone_batchsize(struct zone *zone)
{
	int batch;

    /* batch 的大小 ≈ (zone_size / (1024*4)) * (3/2),并向下圆整到 (2^n - 1) */
	batch = zone->managed_pages / 1024;
	if (batch * PAGE_SIZE > 512 * 1024)
		batch = (512 * 1024) / PAGE_SIZE;
	batch /= 4;		/* We effectively *= 4 below */
	if (batch < 1)
		batch = 1;

	/*
	 * Clamp the batch to a 2^n - 1 value. Having a power
	 * of 2 value was found to be more likely to have
	 * suboptimal cache aliasing properties in some cases.
	 *
	 * For example if 2 tasks are alternately allocating
	 * batches of pages, one task can end up with a lot
	 * of pages of one half of the possible page colors
	 * and the other with pages of the other colors.
	 */
	batch = rounddown_pow_of_two(batch + batch/2) - 1;

	return batch;
}

|→

static void pageset_set_batch(struct per_cpu_pageset *p, unsigned long batch)
{
    /* high = 6 * batch */
	pageset_update(&p->pcp, 6 * batch, max(1UL, 1 * batch));
}
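
以一个 managed ≈ 236890 pages(约 925MB)的 zone 为例,可以粗略推算(仅为算例,实际数值以内核启动时的计算为准):

	batch = 236890 / 1024 = 231
	231 * 4KB > 512KB,截断为 batch = 512KB / 4KB = 128
	batch /= 4                             → 32
	rounddown_pow_of_two(32 + 32/2) - 1    → 31
	high  = 6 * batch                      → 186

即每个 CPU 的 pcp 链表最多缓存约 186 个单页,超过 high 后一次性向 Buddy 归还 batch = 31 个 page。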

4. 内存分配

相比较释放,内存分配的策略要复杂的多,要考虑的因素也多很多,让我们一一来解析。

4.1 gfp_mask

gfp_mask是GFP(Get Free Page)相关的一系列标志,控制了分配page的一系列行为。

这些标志按类别汇总如下(格式为:name (define):descript):

(1.1) zone modifiers:物理地址 zone 的修饰符,指定内存分配的区域 zone
  • __GFP_DMA (___GFP_DMA)
  • __GFP_HIGHMEM (___GFP_HIGHMEM)
  • __GFP_DMA32 (___GFP_DMA32)
  • __GFP_MOVABLE (___GFP_MOVABLE)
(1.2) Page mobility and placement hints:页的移动性和位置提示
  • __GFP_MOVABLE (___GFP_MOVABLE):(也是zone修饰符)指示可以在内存压缩期间通过页面迁移来移动页面或将其回收。
  • __GFP_RECLAIMABLE (___GFP_RECLAIMABLE):用来给SLAB_RECLAIM_ACCOUNT的slab分配器使用,这些页可以通过收缩器(shrinker)来释放。
  • __GFP_WRITE (___GFP_WRITE):表示调用者打算弄脏页面。在可能的情况下,这些页面将散布在本地zone之间,以避免所有脏页面都位于一个zone中(公平zone分配策略)。
  • __GFP_HARDWALL (___GFP_HARDWALL):强制执行cpuset内存分配策略。
  • __GFP_THISNODE (___GFP_THISNODE):强制从请求的节点满足分配要求,而没有fallback或强制实施放置策略。
  • __GFP_ACCOUNT (___GFP_ACCOUNT):导致分配计入kmemcg。
(1.3) Watermark modifiers:水位线修饰符,控制对紧急储备的访问
  • __GFP_HIGH (___GFP_HIGH):表示调用者是高优先级,并且在系统可以继续推进之前必须满足该请求。例如,创建IO上下文以清理页面。
  • __GFP_ATOMIC (___GFP_ATOMIC):表示调用者无法回收或进入睡眠状态,并且具有较高的优先级。用户通常是中断处理程序。可以与__GFP_HIGH结合使用。
  • __GFP_MEMALLOC (___GFP_MEMALLOC):允许访问所有内存。仅当调用者保证分配将允许在很短的时间内释放更多内存时,才应使用此选项,例如进程退出或交换。用户应该是MM或与VM紧密配合(例如,通过NFS交换)。
  • __GFP_NOMEMALLOC (___GFP_NOMEMALLOC):用于明确禁止访问紧急储备。如果同时设置了__GFP_MEMALLOC标志,则此标志优先。
(1.4) Reclaim modifiers:回收修饰符
  • __GFP_IO (___GFP_IO):可以启动物理IO。
  • __GFP_FS (___GFP_FS):可以调用低级FS。清除此标志可避免分配器递归到可能已经持有锁的文件系统中。
  • __GFP_DIRECT_RECLAIM (___GFP_DIRECT_RECLAIM):指示调用者可以进入直接回收。当fallback选项可用时,可以清除此标志以避免不必要的延迟。
  • __GFP_KSWAPD_RECLAIM (___GFP_KSWAPD_RECLAIM):指示调用者想在达到low水位标记时唤醒kswapd,并要求其回收页面直到达到high水位标记。当fallback选项可用并且回收很可能破坏系统时,调用者可能希望清除此标志。典型的例子是THP分配,其中fallback便宜,但回收/压缩可能会导致间接停顿。
  • __GFP_RECLAIM (___GFP_DIRECT_RECLAIM | ___GFP_KSWAPD_RECLAIM):是允许/禁止直接和kswapd回收的简写。
  • __GFP_NORETRY (___GFP_NORETRY):VM实现将仅尝试非常轻量级的内存直接回收,以在内存压力下获得一些内存(因此它可以休眠)。它将避免诸如OOM killer之类的破坏性行动。调用者必须处理很可能在沉重的内存压力下发生的失败。当可以以低成本轻松处理失败(例如降低吞吐量)时,该标志是适用的。
  • __GFP_RETRY_MAYFAIL (___GFP_RETRY_MAYFAIL):如果有迹象表明在其他地方已取得进展,则VM实现将重试以前失败的内存回收过程。它可以等待其他任务尝试使用高级方法来释放内存,例如压缩(这会消除碎片)和页面换出。重试次数仍然有一定的限制,但是比__GFP_NORETRY更大。带有此标志的分配可能会失败,但仅当真正的未使用内存很少时才可能。尽管这些分配不会直接触发OOM killer,但它们的失败表明系统可能需要尽快使用OOM killer。调用方必须处理失败,但是可以通过使更高级别的请求失败或仅以效率低得多的方式完成来合理地做到这一点。如果分配确实失败了,并且调用方可以释放一些不必要的内存,那么这样做可以使整个系统受益。
  • __GFP_NOFAIL (___GFP_NOFAIL):VM实现必须无限重试:调用者无法处理分配失败。分配可以无限期地阻塞,但永远不会因失败而返回。测试失败是没有意义的。应该仔细评估新用户(仅在没有合理的失败策略时才使用该标志),但使用该标志绝对好于在分配器外面自己写无休止的重试循环。不鼓励对昂贵的分配使用此标志。
(1.5) Action modifiers:行为修饰符
  • __GFP_NOWARN (___GFP_NOWARN):禁止分配失败报告。
  • __GFP_COMP (___GFP_COMP):以复合页(compound page)的形式组织元数据。
  • __GFP_ZERO (___GFP_ZERO):成功时返回清零的页面。
(2) 常用标志组合:建议子系统尽量使用这些定义,而不是更底层的标志定义
  • GFP_ATOMIC (__GFP_HIGH | __GFP_ATOMIC | __GFP_KSWAPD_RECLAIM):调用者无法睡眠并且需要分配成功。使用较低的水位线以允许访问"原子储备"。
  • GFP_KERNEL (__GFP_RECLAIM | __GFP_IO | __GFP_FS):通常用于内核内部分配。调用者需要ZONE_NORMAL或更低的区域才能直接访问,但可以直接回收。
  • GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT):与GFP_KERNEL相同,除了分配记入kmemcg。
  • GFP_NOWAIT (__GFP_KSWAPD_RECLAIM):用于不因直接回收、启动物理IO或使用任何文件系统回调而停顿的内核分配。
  • GFP_NOIO (__GFP_RECLAIM):将使用直接回收来丢弃不需要启动任何物理IO的干净页面或slab页面。请尽量避免直接使用此标志,而应使用memalloc_noio_{save,restore}标记整个无法执行任何IO的范围,并简要说明原因。所有分配请求将隐式继承GFP_NOIO。
  • GFP_NOFS (__GFP_RECLAIM | __GFP_IO):将使用直接回收,但不会使用任何文件系统接口。请尽量避免直接使用此标志,而应使用memalloc_nofs_{save,restore}标记整个无法/不应递归到FS层的范围,并简要说明原因。所有分配请求将隐式继承GFP_NOFS。
  • GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL):用于用户空间分配,同时也需要内核或硬件直接访问。硬件通常将其用于映射到用户空间(例如图形)、但硬件仍必须通过DMA访问的缓冲区。将对这些分配强制执行cpuset限制。
  • GFP_DMA (__GFP_DMA):由于历史原因而存在,应尽可能避免使用。该标志指示调用者要求使用最低区域(x86-64上为ZONE_DMA或16M)。理想情况下应将其删除,但是由于某些用户确实需要它,而其他人则使用该标志来避免ZONE_DMA中的低内存保留,并将最低区域视为一种紧急保留,因此需要仔细审核。
  • GFP_DMA32 (__GFP_DMA32):与GFP_DMA相似,只是调用者需要一个32位地址。
  • GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM):用于可能映射到用户空间的用户空间分配,不需要由内核直接访问,但是一旦使用便无法移动。一个示例可能是将数据直接映射到用户空间但没有寻址限制的硬件分配。
  • GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE):用于内核不需要直接访问、但可以在需要访问时使用kmap()的用户空间分配。它们可以通过页面回收或页面迁移来移动。通常,LRU上的页面也会分配为GFP_HIGHUSER_MOVABLE。
  • GFP_TRANSHUGE (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM) / GFP_TRANSHUGE_LIGHT ((GFP_HIGHUSER_MOVABLE | __GFP_COMP | __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM):用于THP分配。它们是复合分配,如果内存不可用,通常会很快失败,并且在失败时不会唤醒kswapd/kcompactd。_LIGHT版本根本不尝试回收/压缩,默认情况下在缺页路径中使用_LIGHT版本,而khugepaged使用非light版。
linux-source-4.15.0\include\linux\gfp.h:

/* (1) 最底层的GFP bitmask定义,不要直接使用这些宏 */
/* Plain integer GFP bitmasks. Do not use this directly. */
#define ___GFP_DMA		0x01u
#define ___GFP_HIGHMEM		0x02u
#define ___GFP_DMA32		0x04u
#define ___GFP_MOVABLE		0x08u
#define ___GFP_RECLAIMABLE	0x10u
#define ___GFP_HIGH		0x20u
#define ___GFP_IO		0x40u
#define ___GFP_FS		0x80u
#define ___GFP_NOWARN		0x200u
#define ___GFP_RETRY_MAYFAIL	0x400u
#define ___GFP_NOFAIL		0x800u
#define ___GFP_NORETRY		0x1000u
#define ___GFP_MEMALLOC		0x2000u
#define ___GFP_COMP		0x4000u
#define ___GFP_ZERO		0x8000u
#define ___GFP_NOMEMALLOC	0x10000u
#define ___GFP_HARDWALL		0x20000u
#define ___GFP_THISNODE		0x40000u
#define ___GFP_ATOMIC		0x80000u
#define ___GFP_ACCOUNT		0x100000u
#define ___GFP_DIRECT_RECLAIM	0x400000u
#define ___GFP_WRITE		0x800000u
#define ___GFP_KSWAPD_RECLAIM	0x1000000u
#ifdef CONFIG_LOCKDEP
#define ___GFP_NOLOCKDEP	0x2000000u
#else
#define ___GFP_NOLOCKDEP	0
#endif

/* (2.1) 物理地址zone的修饰符。指定内存分配的区域zone。 */
/*
 * Physical address zone modifiers (see linux/mmzone.h - low four bits)
 *
 * Do not put any conditional on these. If necessary modify the definitions
 * without the underscores and use them consistently. The definitions here may
 * be used in bit comparisons.
 */
#define __GFP_DMA	((__force gfp_t)___GFP_DMA)
#define __GFP_HIGHMEM	((__force gfp_t)___GFP_HIGHMEM)
#define __GFP_DMA32	((__force gfp_t)___GFP_DMA32)
#define __GFP_MOVABLE	((__force gfp_t)___GFP_MOVABLE)  /* ZONE_MOVABLE allowed */
#define GFP_ZONEMASK	(__GFP_DMA|__GFP_HIGHMEM|__GFP_DMA32|__GFP_MOVABLE)

/* (2.2) 页的移动性和位置提示 */
/*
 * Page mobility and placement hints
 *
 * These flags provide hints about how mobile the page is. Pages with similar
 * mobility are placed within the same pageblocks to minimise problems due
 * to external fragmentation.
 * 这些标志提供有关页面移动性的提示。 具有相似移动性的页面放在相同的页面块中,以最大程度地减少由于外部碎片而引起的问题。
 *
 * __GFP_MOVABLE (also a zone modifier) indicates that the page can be
 *   moved by page migration during memory compaction or can be reclaimed.
 * __GFP_MOVABLE(也是zone修饰符)指示可以在内存压缩期间通过页面迁移来移动页面或将其回收。
 *
 * __GFP_RECLAIMABLE is used for slab allocations that specify
 *   SLAB_RECLAIM_ACCOUNT and whose pages can be freed via shrinkers.
 * __GFP_RECLAIMABLE 用来给SLAB_RECLAIM_ACCOUNT的slab分配器使用,这些页可以通过收缩器(shrinker)来释放。
 *
 * __GFP_WRITE indicates the caller intends to dirty the page. Where possible,
 *   these pages will be spread between local zones to avoid all the dirty
 *   pages being in one zone (fair zone allocation policy).
 * __GFP_WRITE 表示调用者打算弄脏页面。 在可能的情况下,这些页面将散布在本地zone之间,以避免所有脏页面都位于一个zone中(公平zone分配策略)。
 *
 * __GFP_HARDWALL enforces the cpuset memory allocation policy.
 * __GFP_HARDWALL 强制执行cpuset内存分配策略。
 *
 * __GFP_THISNODE forces the allocation to be satisified from the requested
 *   node with no fallbacks or placement policy enforcements.
 * __GFP_THISNODE 强制从请求的节点满足分配要求,而没有fallback或强制实施放置策略。
 *
 * __GFP_ACCOUNT causes the allocation to be accounted to kmemcg.
 * __GFP_ACCOUNT 导致分配计入kmemcg。
 */
#define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE)
#define __GFP_WRITE	((__force gfp_t)___GFP_WRITE)
#define __GFP_HARDWALL   ((__force gfp_t)___GFP_HARDWALL)
#define __GFP_THISNODE	((__force gfp_t)___GFP_THISNODE)
#define __GFP_ACCOUNT	((__force gfp_t)___GFP_ACCOUNT)

/* (2.3) 水位线修饰符。控制对紧急储备的访问。 */
/*
 * Watermark modifiers -- controls access to emergency reserves
 *
 * __GFP_HIGH indicates that the caller is high-priority and that granting
 *   the request is necessary before the system can make forward progress.
 *   For example, creating an IO context to clean pages.
 * __GFP_HIGH 表示呼叫者是高优先级,并且在系统可以进行转发之前必须授予请求。例如,创建IO上下文以清理页面。
 *
 * __GFP_ATOMIC indicates that the caller cannot reclaim or sleep and is
 *   high priority. Users are typically interrupt handlers. This may be
 *   used in conjunction with __GFP_HIGH
 * __GFP_ATOMIC 表示呼叫者无法回收或进入睡眠状态,并且具有较高的优先级。 用户通常是中断处理程序。 可以与__GFP_HIGH结合使用
 *
 * __GFP_MEMALLOC allows access to all memory. This should only be used when
 *   the caller guarantees the allocation will allow more memory to be freed
 *   very shortly e.g. process exiting or swapping. Users either should
 *   be the MM or co-ordinating closely with the VM (e.g. swap over NFS).
 * __GFP_MEMALLOC 允许访问所有内存。 仅当调用者保证分配将允许在很短的时间内释放更多内存时,才应使用此选项。例如进程退出或交换。用户应该是MM或与VM紧密配合(例如,通过NFS交换)。
 *
 * __GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves.
 *   This takes precedence over the __GFP_MEMALLOC flag if both are set.
 * __GFP_NOMEMALLOC 用于明确禁止访问紧急储备。如果同时设置了__GFP_MEMALLOC标志,则此命令优先。
 */
#define __GFP_ATOMIC	((__force gfp_t)___GFP_ATOMIC)
#define __GFP_HIGH	((__force gfp_t)___GFP_HIGH)
#define __GFP_MEMALLOC	((__force gfp_t)___GFP_MEMALLOC)
#define __GFP_NOMEMALLOC ((__force gfp_t)___GFP_NOMEMALLOC)

/* (2.4) 回收修饰符 */
/*
 * Reclaim modifiers
 *
 * __GFP_IO can start physical IO.
 * __GFP_IO 可以启动物理IO。
 *
 * __GFP_FS can call down to the low-level FS. Clearing the flag avoids the
 *   allocator recursing into the filesystem which might already be holding
 *   locks.
 * __GFP_FS 可以调用低级FS。清除此标志可避免分配器递归到可能已经持有锁的文件系统中。
 *
 * __GFP_DIRECT_RECLAIM indicates that the caller may enter direct reclaim.
 *   This flag can be cleared to avoid unnecessary delays when a fallback
 *   option is available.
 * __GFP_DIRECT_RECLAIM 指示调用者可以进入直接回收。当fallback选项可用时,可以清除此标志以避免不必要的延迟。
 *
 * __GFP_KSWAPD_RECLAIM indicates that the caller wants to wake kswapd when
 *   the low watermark is reached and have it reclaim pages until the high
 *   watermark is reached. A caller may wish to clear this flag when fallback
 *   options are available and the reclaim is likely to disrupt the system. The
 *   canonical example is THP allocation where a fallback is cheap but
 *   reclaim/compaction may cause indirect stalls.
 * __GFP_KSWAPD_RECLAIM 指示调用者想在达到low水位标记时唤醒kswapd,并要求其回收页面直到达到high水位标记。当fallback选项可用并且回收很可能破坏系统时,呼叫者可能希望清除此标志。典型的例子是THP分配,其中fallback便宜,但回收/压缩可能会导致间接停顿。
 *
 * __GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
 * __GFP_RECLAIM 是允许/禁止直接和kswapd回收的简写。
 *
 * The default allocator behavior depends on the request size. We have a concept
 * of so called costly allocations (with order > PAGE_ALLOC_COSTLY_ORDER).
 * !costly allocations are too essential to fail so they are implicitly
 * non-failing by default (with some exceptions like OOM victims might fail so
 * the caller still has to check for failures) while costly requests try to be
 * not disruptive and back off even without invoking the OOM killer.
 * The following three modifiers might be used to override some of these
 * implicit rules
 * 默认分配器行为取决于请求大小。我们有一个所谓的昂贵分配的概念(order > PAGE_ALLOC_COSTLY_ORDER)。非代价高昂的分配过于重要而无法失败,因此默认情况下它们暗含不会失败(某些异常例如OOM受害者可能会失败,因此调用方仍必须检查失败),而代价高昂的请求尽量避免破坏性并退后,即使没有调用OOM killer。以下三个修饰符可用于覆盖其中的某些隐含规则
 *
 * __GFP_NORETRY: The VM implementation will try only very lightweight
 *   memory direct reclaim to get some memory under memory pressure (thus
 *   it can sleep). It will avoid disruptive actions like OOM killer. The
 *   caller must handle the failure which is quite likely to happen under
 *   heavy memory pressure. The flag is suitable when failure can easily be
 *   handled at small cost, such as reduced throughput
 * __GFP_NORETRY:VM实现将仅尝试非常轻量级的内存直接回收,以在内存压力下获得一些内存(因此它可以休眠)。它将避免诸如OOM杀手之类的破坏性行动。调用者必须处理很可能在沉重的内存压力下发生的故障。当可以以低成本轻松处理故障(例如降低吞吐量)时,该标志是适用的
 *
 * __GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim
 *   procedures that have previously failed if there is some indication
 *   that progress has been made else where.  It can wait for other
 *   tasks to attempt high level approaches to freeing memory such as
 *   compaction (which removes fragmentation) and page-out.
 *   There is still a definite limit to the number of retries, but it is
 *   a larger limit than with __GFP_NORETRY.
 *   Allocations with this flag may fail, but only when there is
 *   genuinely little unused memory. While these allocations do not
 *   directly trigger the OOM killer, their failure indicates that
 *   the system is likely to need to use the OOM killer soon.  The
 *   caller must handle failure, but can reasonably do so by failing
 *   a higher-level request, or completing it only in a much less
 *   efficient manner.
 *   If the allocation does fail, and the caller is in a position to
 *   free some non-essential memory, doing so could benefit the system
 *   as a whole.
 * __GFP_RETRY_MAYFAIL:如果有迹象表明在其他地方已取得进展,则VM实现将重试以前失败的内存回收过程。它可以等待其他任务尝试使用高级方法来释放内存,例如压缩(这会消除碎片)和页面调出。重试次数仍然有一定的限制,但是比__GFP_NORETRY更大。带有此标志的分配可能会失败,但仅当真正的未使用内存很少时才可能。尽管这些分配不会直接触发OOM杀手,但它们的失败表明系统可能需要尽快使用OOM杀手。调用方必须处理失败,但是可以通过使更高级别的请求失败或仅以效率低得多的方式完成来合理地做到这一点。如果分配确实失败了,并且调用方可以释放一些不必要的内存,那么这样做可以使整个系统受益。
 *
 * __GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
 *   cannot handle allocation failures. The allocation could block
 *   indefinitely but will never return with failure. Testing for
 *   failure is pointless.
 *   New users should be evaluated carefully (and the flag should be
 *   used only when there is no reasonable failure policy) but it is
 *   definitely preferable to use the flag rather than opencode endless
 *   loop around allocator.
 *   Using this flag for costly allocations is _highly_ discouraged.
 * __GFP_NOFAIL:VM实现必须无限重试:调用者无法处理分配失败。分配可以无限期地阻塞,但永远不会因失败而返回。测试失败是没有意义的。应该仔细评估新用户(仅在没有合理的失败策略时才使用该标志),但是绝对最好使用该标志,而不是在分配器周围使用opencode无休止循环。不鼓励使用此标志进行昂贵的分配。
 */
#define __GFP_IO	((__force gfp_t)___GFP_IO)
#define __GFP_FS	((__force gfp_t)___GFP_FS)
#define __GFP_DIRECT_RECLAIM	((__force gfp_t)___GFP_DIRECT_RECLAIM) /* Caller can reclaim */
#define __GFP_KSWAPD_RECLAIM	((__force gfp_t)___GFP_KSWAPD_RECLAIM) /* kswapd can wake */
#define __GFP_RECLAIM ((__force gfp_t)(___GFP_DIRECT_RECLAIM|___GFP_KSWAPD_RECLAIM))
#define __GFP_RETRY_MAYFAIL	((__force gfp_t)___GFP_RETRY_MAYFAIL)
#define __GFP_NOFAIL	((__force gfp_t)___GFP_NOFAIL)
#define __GFP_NORETRY	((__force gfp_t)___GFP_NORETRY)

/* (2.5) 行为修饰符 */
/*
 * Action modifiers
 *
 * __GFP_NOWARN suppresses allocation failure reports.
 * __GFP_NOWARN 禁止分配失败报告。
 *
 * __GFP_COMP address compound page metadata.
 * __GFP_COMP 地址复合页面元数据。
 *
 * __GFP_ZERO returns a zeroed page on success.
 * __GFP_ZERO 成功返回零页面。
 */
#define __GFP_NOWARN	((__force gfp_t)___GFP_NOWARN)
#define __GFP_COMP	((__force gfp_t)___GFP_COMP)
#define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)

/* Disable lockdep for GFP context tracking */
#define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)

/* Room for N __GFP_FOO bits */
#define __GFP_BITS_SHIFT (25 + IS_ENABLED(CONFIG_LOCKDEP))
#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))

/* (3) 常用标志组合。建议子系统尽量使用这些定义,而不是更底层的标志 */
/*
 * Useful GFP flag combinations that are commonly used. It is recommended
 * that subsystems start with one of these combinations and then set/clear
 * __GFP_FOO flags as necessary.
 * 常用的有用的GFP标志组合。建议子系统从这些组合之一开始,然后根据需要设置/清除__GFP_FOO标志。
 *
 * GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower
 *   watermark is applied to allow access to "atomic reserves"
 * GFP_ATOMIC 用户无法睡眠并且需要分配成功。较低的水印被应用以允许访问“原子储备”
 *
 * GFP_KERNEL is typical for kernel-internal allocations. The caller requires
 *   ZONE_NORMAL or a lower zone for direct access but can direct reclaim.
 * GFP_KERNEL 通常用于内核内部分配。呼叫者需要ZONE_NORMAL或更低的区域才能直接访问,但可以直接回收。
 *
 * GFP_KERNEL_ACCOUNT is the same as GFP_KERNEL, except the allocation is
 *   accounted to kmemcg.
 * GFP_KERNEL_ACCOUNT 与GFP_KERNEL相同,除了分配记入kmemcg。
 *
 * GFP_NOWAIT is for kernel allocations that should not stall for direct
 *   reclaim, start physical IO or use any filesystem callback.
 * GFP_NOWAIT 用于不因直接回收,启动物理IO或使用任何文件系统回调而停顿的内核分配。
 *
 * GFP_NOIO will use direct reclaim to discard clean pages or slab pages
 *   that do not require the starting of any physical IO.
 *   Please try to avoid using this flag directly and instead use
 *   memalloc_noio_{save,restore} to mark the whole scope which cannot
 *   perform any IO with a short explanation why. All allocation requests
 *   will inherit GFP_NOIO implicitly.
 * GFP_NOIO 将使用直接回收来丢弃不需要启动任何物理IO的干净页面或平板页面。请尝试避免直接使用此标志,而应使用memalloc_noio_ {save,restore}标记无法执行任何IO的整个范围,并简要说明原因。所有分配请求将隐式继承GFP_NOIO。
 *
 * GFP_NOFS will use direct reclaim but will not use any filesystem interfaces.
 *   Please try to avoid using this flag directly and instead use
 *   memalloc_nofs_{save,restore} to mark the whole scope which cannot/shouldn't
 *   recurse into the FS layer with a short explanation why. All allocation
 *   requests will inherit GFP_NOFS implicitly.
 * GFP_NOFS 将使用直接回收,但将不使用任何文件系统接口。请尝试避免直接使用此标志,而应使用memalloc_nofs_ {save,restore}标记无法/不应递归到FS层的整个范围,并简要说明原因。所有分配请求将隐式继承GFP_NOFS。
 *
 * GFP_USER is for userspace allocations that also need to be directly
 *   accessibly by the kernel or hardware. It is typically used by hardware
 *   for buffers that are mapped to userspace (e.g. graphics) that hardware
 *   still must DMA to. cpuset limits are enforced for these allocations.
 * GFP_USER 用于用户空间分配,也需要内核或硬件直接对其进行访问。硬件通常将其用于映射到硬件仍必须通过DMA访问的用户空间(例如图形)的缓冲区。将对这些分配强制执行cpuset限制。
 *
 * GFP_DMA exists for historical reasons and should be avoided where possible.
 *   The flags indicates that the caller requires that the lowest zone be
 *   used (ZONE_DMA or 16M on x86-64). Ideally, this would be removed but
 *   it would require careful auditing as some users really require it and
 *   others use the flag to avoid lowmem reserves in ZONE_DMA and treat the
 *   lowest zone as a type of emergency reserve.
 * GFP_DMA 由于历史原因而存在,应尽可能避免使用。这些标志指示调用者要求使用最低区域(x86-64上为ZONE_DMA或16M)。理想情况下,将其删除,但是由于某些用户确实需要它,而其他人则使用该标志来避免ZONE_DMA中的低内存保留,并将最低区域视为一种紧急保留,因此需要仔细审核。
 *
 * GFP_DMA32 is similar to GFP_DMA except that the caller requires a 32-bit
 *   address.
 * GFP_DMA32 与GFP_DMA相似,只是调用者需要一个32位地址。
 *
 * GFP_HIGHUSER is for userspace allocations that may be mapped to userspace,
 *   do not need to be directly accessible by the kernel but that cannot
 *   move once in use. An example may be a hardware allocation that maps
 *   data directly into userspace but has no addressing limitations.
 * GFP_HIGHUSER 用于可能映射到用户空间的用户空间分配,不需要由内核直接访问,但是一旦使用便无法移动。一个示例可能是将数据直接映射到用户空间但没有寻址限制的硬件分配。
 *
 * GFP_HIGHUSER_MOVABLE is for userspace allocations that the kernel does not
 *   need direct access to but can use kmap() when access is required. They
 *   are expected to be movable via page reclaim or page migration. Typically,
 *   pages on the LRU would also be allocated with GFP_HIGHUSER_MOVABLE.
 * GFP_HIGHUSER_MOVABLE 用于内核不需要直接访问但可以在需要访问时使用kmap()的用户空间分配。它们可以通过页面回收或页面迁移来移动。通常,LRU上的页面也将分配有GFP_HIGHUSER_MOVABLE。
 *
 * GFP_TRANSHUGE and GFP_TRANSHUGE_LIGHT are used for THP allocations. They are
 *   compound allocations that will generally fail quickly if memory is not
 *   available and will not wake kswapd/kcompactd on failure. The _LIGHT
 *   version does not attempt reclaim/compaction at all and is by default used
 *   in page fault path, while the non-light is used by khugepaged.
 * GFP_TRANSHUGE和GFP_TRANSHUGE_LIGHT 用于THP分配。它们是复合分配,如果内存不可用,通常会很快失败,并且在失败时不会唤醒kswapd / kcompactd。 _LIGHT版本根本不尝试回收/压缩,默认情况下在页面错误路径中使用_LIGHT版本,而khugepaged使用非light版
 */
#define GFP_ATOMIC	(__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM)
#define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT)
#define GFP_NOWAIT	(__GFP_KSWAPD_RECLAIM)
#define GFP_NOIO	(__GFP_RECLAIM)
#define GFP_NOFS	(__GFP_RECLAIM | __GFP_IO)
#define GFP_USER	(__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
#define GFP_DMA		__GFP_DMA
#define GFP_DMA32	__GFP_DMA32
#define GFP_HIGHUSER	(GFP_USER | __GFP_HIGHMEM)
#define GFP_HIGHUSER_MOVABLE	(GFP_HIGHUSER | __GFP_MOVABLE)
#define GFP_TRANSHUGE_LIGHT	((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
			 __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM)
#define GFP_TRANSHUGE	(GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)

/* Convert GFP flags to their corresponding migrate type */
#define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE)
#define GFP_MOVABLE_SHIFT 3
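
这些标志最终都是传给 alloc_pages() 等接口使用。下面是一段示意代码(假设运行在某个内核模块的进程上下文中,仅演示几种典型 gfp_mask 的用法,并非特定子系统的真实代码):

#include <linux/gfp.h>
#include <linux/mm.h>

static void gfp_usage_demo(void)
{
	/* 进程上下文:可以睡眠、可以触发回收,最常用的内核分配 */
	struct page *p1 = alloc_pages(GFP_KERNEL, 0);             /* 1 个 page */
	/* 原子上下文(如中断处理):不能睡眠,允许动用部分紧急储备 */
	struct page *p2 = alloc_pages(GFP_ATOMIC, 2);              /* 2^2 = 4 个连续 page */
	/* 用户页:优先使用高端内存,允许页面迁移/回收 */
	struct page *p3 = alloc_pages(GFP_HIGHUSER_MOVABLE, 0);

	/* 用完后按同样的 order 释放回 Buddy */
	if (p1)
		__free_pages(p1, 0);
	if (p2)
		__free_pages(p2, 2);
	if (p3)
		__free_pages(p3, 0);
}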

4.2 node 候选策略

在 NUMA 的情况下,会有多个 memory node 可供选择,系统会根据 policy 选择当前分配的 node。

alloc_pages() → alloc_pages_current():

struct page *alloc_pages_current(gfp_t gfp, unsigned order)
{
    /* (1.1) 使用默认NUMA策略 */
	struct mempolicy *pol = &default_policy;
	struct page *page;

    /* (1.2) 获取当前进程的NUMA策略 */
	if (!in_interrupt() && !(gfp & __GFP_THISNODE))
		pol = get_task_policy(current);

	/*
	 * No reference counting needed for current->mempolicy
	 * nor system default_policy
	 */
	if (pol->mode == MPOL_INTERLEAVE)
		page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
	else
        /* (2) 从NUMA策略指定的首选node和备选node组上,进行内存页面的分配 */
		page = __alloc_pages_nodemask(gfp, order,
				policy_node(gfp, pol, numa_node_id()),
				policy_nodemask(gfp, pol));

	return page;
}

4.3 zone 候选策略

Buddy 系统中对每一个 node 定义了多个类型的 zone :

enum zone_type {
	ZONE_DMA,
	ZONE_DMA32,
	ZONE_NORMAL,
	ZONE_HIGHMEM,
	ZONE_MOVABLE,
	ZONE_DEVICE,
	__MAX_NR_ZONES
};

gfp_mask 中也定义了一系列选择 zone 的flag:

/*
 * Physical address zone modifiers (see linux/mmzone.h - low four bits)
 */
#define __GFP_DMA	((__force gfp_t)___GFP_DMA)
#define __GFP_HIGHMEM	((__force gfp_t)___GFP_HIGHMEM)
#define __GFP_DMA32	((__force gfp_t)___GFP_DMA32)
#define __GFP_MOVABLE	((__force gfp_t)___GFP_MOVABLE)  /* ZONE_MOVABLE allowed */
#define GFP_ZONEMASK	(__GFP_DMA|__GFP_HIGHMEM|__GFP_DMA32|__GFP_MOVABLE)

怎么样根据 gfp_mask 中的 zone modifiers 来选择分配所使用的 zone 呢?系统设计了一套算法来进行转换:

| 序号 | ___GFP_DMA | ___GFP_HIGHMEM | ___GFP_DMA32 | ___GFP_MOVABLE | 组合结果 |
| ---- | ---------- | -------------- | ------------ | -------------- | -------- |
| 0 | 0 | 0 | 0 | 0 | 从 ZONE_NORMAL 中分配 |
| 1 | 1 | 0 | 0 | 0 | 从 ZONE_NORMAL 或 ZONE_DMA 中分配 |
| 2 | 0 | 1 | 0 | 0 | 从 ZONE_NORMAL 或 ZONE_HIGHMEM 中分配 |
| 3 | 1 | 1 | 0 | 0 | 不能同时满足,错误 |
| 4 | 0 | 0 | 1 | 0 | 从 ZONE_NORMAL 或 ZONE_DMA32 中分配 |
| 5 | 1 | 0 | 1 | 0 | 不能同时满足,错误 |
| 6 | 0 | 1 | 1 | 0 | 不能同时满足,错误 |
| 7 | 1 | 1 | 1 | 0 | 不能同时满足,错误 |
| 8 | 0 | 0 | 0 | 1 | 从 ZONE_NORMAL 或 ZONE_MOVABLE 获得 |
| 9 | 1 | 0 | 0 | 1 | 从 ZONE_NORMAL 或 (ZONE_DMA + ZONE_MOVABLE) 获得 |
| a | 0 | 1 | 0 | 1 | 从 ZONE_MOVABLE 获得 |
| b | 1 | 1 | 0 | 1 | 不能同时满足,错误 |
| c | 0 | 0 | 1 | 1 | 从 ZONE_DMA32 获得 |
| d | 1 | 0 | 1 | 1 | 不能同时满足,错误 |
| e | 0 | 1 | 1 | 1 | 不能同时满足,错误 |
| f | 1 | 1 | 1 | 1 | 不能同时满足,错误 |

具体的代码如下:

alloc_pages() → alloc_pages_current() → __alloc_pages_nodemask() → prepare_alloc_pages() → gfp_zone():

static inline enum zone_type gfp_zone(gfp_t flags)
{
	enum zone_type z;

	/* (1) gfp 标志中低4位为 zone modifiers */
	int bit = (__force int) (flags & GFP_ZONEMASK);

	/* (2) 查表得到最后的候选zone
		   内核规定 ___GFP_DMA、___GFP_HIGHMEM 和 ___GFP_DMA32 中的任意两个或全部不能同时存在于 gfp 标志中
	 */
	z = (GFP_ZONE_TABLE >> (bit * GFP_ZONES_SHIFT)) &
					 ((1 << GFP_ZONES_SHIFT) - 1);
	VM_BUG_ON((GFP_ZONE_BAD >> bit) & 1);
	return z;
}

#define GFP_ZONE_TABLE ( \
	(ZONE_NORMAL << 0 * GFP_ZONES_SHIFT)				       \
	| (OPT_ZONE_DMA << ___GFP_DMA * GFP_ZONES_SHIFT)		       \
	| (OPT_ZONE_HIGHMEM << ___GFP_HIGHMEM * GFP_ZONES_SHIFT)	       \
	| (OPT_ZONE_DMA32 << ___GFP_DMA32 * GFP_ZONES_SHIFT)		       \
	| (ZONE_NORMAL << ___GFP_MOVABLE * GFP_ZONES_SHIFT)		       \
	| (OPT_ZONE_DMA << (___GFP_MOVABLE | ___GFP_DMA) * GFP_ZONES_SHIFT)    \
	| (ZONE_MOVABLE << (___GFP_MOVABLE | ___GFP_HIGHMEM) * GFP_ZONES_SHIFT)\
	| (OPT_ZONE_DMA32 << (___GFP_MOVABLE | ___GFP_DMA32) * GFP_ZONES_SHIFT)\
)

#define GFP_ZONE_BAD ( \
	1 << (___GFP_DMA | ___GFP_HIGHMEM)				      \
	| 1 << (___GFP_DMA | ___GFP_DMA32)				      \
	| 1 << (___GFP_DMA32 | ___GFP_HIGHMEM)				      \
	| 1 << (___GFP_DMA | ___GFP_DMA32 | ___GFP_HIGHMEM)		      \
	| 1 << (___GFP_MOVABLE | ___GFP_HIGHMEM | ___GFP_DMA)		      \
	| 1 << (___GFP_MOVABLE | ___GFP_DMA32 | ___GFP_DMA)		      \
	| 1 << (___GFP_MOVABLE | ___GFP_DMA32 | ___GFP_HIGHMEM)		      \
	| 1 << (___GFP_MOVABLE | ___GFP_DMA32 | ___GFP_DMA | ___GFP_HIGHMEM)  \
)
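
以 GFP_HIGHUSER_MOVABLE 为例手工查一次表:其 zone modifier 部分为 __GFP_HIGHMEM | __GFP_MOVABLE,即 bit = 0x02 | 0x08 = 0x0a,正好对应上表中序号 a 的那一行:

	z = (GFP_ZONE_TABLE >> (0x0a * GFP_ZONES_SHIFT)) & ((1 << GFP_ZONES_SHIFT) - 1)
	  = ZONE_MOVABLE

而 GFP_KERNEL 不带任何 zone modifier(bit = 0),查表结果为 ZONE_NORMAL。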

4.4 zone fallback 策略

通过上述的候选策略,我们选定了内存分配的 node 和 zone,然后开始分配。如果分配失败,我们并不会马上启动内存回收,而是通过 fallback 机制尝试从其他低级的 zone 中看看能不能借用一些内存。

fallback 的借用,只能从高级到低级的借用,而不能从低级到高级的借用。比如:原本想分配 Normal zone 的内存,失败的情况下可以尝试从 DMA32 zone 中分配内存,因为能用 normal zone 地址范围的内存肯定也可以用 DMA32 zone 地址范围的内存。但是反过来就不行,原本需要 DMA32 zone 地址范围的内存,你给他一个 normal zone 的内存,地址超过了4G,可能就超过了 DMA 设备的寻址能力。

系统还定义了一个 __GFP_THISNODE 标志,用来限制 fallback 时只能在本 node 上寻找合适的低级 zone。否则会在所有 node 上寻找合适的低级 zone。

该算法的具体实现如下:

  • 1、每个 node 定义了 fallback 时用到的候选 zone 链表:
pgdat->node_zonelists[ZONELIST_FALLBACK]        // 跨 node FALLBACK机制生效,用来链接所有node的所有zone
pgdat->node_zonelists[ZONELIST_NOFALLBACK]      // 如果gfp_mask设置了__GFP_THISNODE标志,跨 node  FALLBACK机制失效,用来链接本node的所有zone

系统启动时初始化这些链表:

start_kernel() → build_all_zonelists() → __build_all_zonelists() → build_zonelists() → build_zonelists_in_node_order()/build_thisnode_zonelists() → build_zonerefs_node():


  • 2、内存分配时确定使用的 fallback 链表:
alloc_pages() → alloc_pages_current() → __alloc_pages_nodemask() → prepare_alloc_pages() → node_zonelist():

static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
{
    /* (1) 根据fallback机制是否使能,来选择候选zone链表 */
	return NODE_DATA(nid)->node_zonelists + gfp_zonelist(flags);
}

static inline int gfp_zonelist(gfp_t flags)
{
#ifdef CONFIG_NUMA
    /* (1.1) 如果gfp_mask指定了__GFP_THISNODE,则跨 node fallback机制失效 */
	if (unlikely(flags & __GFP_THISNODE))
		return ZONELIST_NOFALLBACK;
#endif

    /* (1.2) 否则,跨 node fallback机制生效 */
	return ZONELIST_FALLBACK;
}

alloc_pages() → alloc_pages_current() → __alloc_pages_nodemask() → finalise_ac():

static inline void finalise_ac(gfp_t gfp_mask,
		unsigned int order, struct alloc_context *ac)
{
	/* Dirty zone balancing only done in the fast path */
	ac->spread_dirty_pages = (gfp_mask & __GFP_WRITE);

	/*
	 * The preferred zone is used for statistics but crucially it is
	 * also used as the starting point for the zonelist iterator. It
	 * may get reset for allocations that ignore memory policies.
	 */
	/* (2) 从fallback list中选取最佳候选zone,即本node的符合zone type条件的最高zone */
	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
					ac->high_zoneidx, ac->nodemask);
}
  • 3、从原有zone分配失败时,尝试从 fallback zone 中分配内存:
alloc_pages() → alloc_pages_current() → __alloc_pages_nodemask() → get_page_from_freelist():

static struct page *
get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
						const struct alloc_context *ac)
{
	struct zoneref *z = ac->preferred_zoneref;
	struct zone *zone;

	/* (1) 如果分配失败,遍历 fallback list 中的 zone,逐个尝试分配 */
	for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
								ac->nodemask) {

	}
}

4.5 lowmem reserve 机制

承接上述的 fallback 机制,高等级的 zone 可以借用低等级 zone 的内存。但是从理论上说,低等级的内存更加的宝贵因为它的空间更小,如果被高等级的侵占完了,那么用户需要低层级内存的时候就会分配失败。

为了解决这个问题,系统给每个 zone 能够给其他高等级 zone 借用的内存设置了一个预留值,可以借用内存但是本zone保留的内存不能小于这个值。

我们可以通过命令来查看每个 zone 的 lowmem reserve 大小设置,protection 字段描述了本zone给其他zone借用时必须保留的内存:

pwl@ubuntu:~$ cat /proc/zoneinfo
Node 0, zone      DMA
  pages free     3968
        min      67
        low      83
        high     99
        spanned  4095
        present  3997
        managed  3976
		// 本 zone 为 DMA 
		// 给 DMA zone 借用时必须保留 0 pages
		// 给 DMA32 zone 借用时必须保留 2934 pages
		// 给 Normal/Movable/Device zone 借用时必须保留 3859 pages
        protection: (0, 2934, 3859, 3859, 3859) 

Node 0, zone    DMA32
  pages free     418978
        min      12793
        low      15991
        high     19189
        spanned  1044480
        present  782288
        managed  759701
		// 本 zone 为 DMA32 
		// 给 DMA/DMA32 zone 借用时必须保留 0 pages
		// 给 Normal/Movable/Device zone 借用时必须保留 925 pages
        protection: (0, 0, 925, 925, 925)
      nr_free_pages 418978

Node 0, zone   Normal
  pages free     4999
        min      4034
        low      5042
        high     6050
        spanned  262144
        present  262144
        managed  236890
		// 本 zone 为 Normal 
		// 因为 Movable/Device zone 大小为0,所以给所有 zone 借用时必须保留 0 pages
        protection: (0, 0, 0, 0, 0)

Node 0, zone  Movable
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 0, 0, 0)
Node 0, zone   Device
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 0, 0, 0)		

可以通过lowmem_reserve_ratio来调节这个值的大小:

pwl@ubuntu:~$ cat /proc/sys/vm/lowmem_reserve_ratio
256     256     32      0       0

这部分算法的详细原理,可以参考:Linux内存调节之lowmem reserve
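
这里可以给出一个简化的推算(具体实现见 setup_per_zone_lowmem_reserve(),下面的公式是对其行为的近似描述,数值会随启动后 managed 页数的变化略有出入):

	zone[i] 对 zone[j] (j > i) 的 protection 值 ≈ (zone[i+1..j] 的 managed 页数之和) / lowmem_reserve_ratio[i]

	例如上面的 DMA zone:
	protection[Normal] ≈ (759701 + 236890) / 256 ≈ 3892,与显示的 3859 基本吻合
	protection[DMA32]  ≈ 759701 / 256 ≈ 2967,与显示的 2934 基本吻合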

4.6 order fallback 策略

Buddy 系统中对每一个 zone 又细分了多个 order 的 free_area:

#ifndef CONFIG_FORCE_MAX_ZONEORDER
#define MAX_ORDER 11
#else
#define MAX_ORDER CONFIG_FORCE_MAX_ZONEORDER
#endif

如果在对应 order 的 free_area 中找不到 free 内存的话,会逐个往高级别 order free_area 中查找,直至 max_order。

命中更高级别 order 的 freelist 后,大块内存会被分割:分配出请求的 (2^order) 部分,剩余部分挂载到更低级别 order 的 freelist 中(拆分逻辑见下面的示意代码)。
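
这个拆分动作在内核中由 expand() 完成,下面是一段简化后的示意(去掉了 debug guard page 等细节,仅保留拆分逻辑):

/* 示意:从 high order 的空闲块中切出 low order,剩余部分逐级挂回 freelist
 * 例如请求 order=2 而命中了 order=4 的块:剩余部分被拆成 order=3、order=2 各一块挂回链表 */
static inline void expand_demo(struct zone *zone, struct page *page,
			       int low, int high, struct free_area *area,
			       int migratetype)
{
	unsigned long size = 1 << high;

	while (high > low) {
		area--;
		high--;
		size >>= 1;
		/* 后半块挂到低一级的 freelist,前半块继续往下切,直到等于请求的 order */
		list_add(&page[size].lru, &area->free_list[migratetype]);
		area->nr_free++;
		set_page_order(&page[size], high);
	}
}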

4.7 migrate type 候选策略

Buddy 系统中对每一个 zone 中的每一个 order free_area 又细分了多个 migrate type :

enum migratetype {
	MIGRATE_UNMOVABLE,
	MIGRATE_MOVABLE,
	MIGRATE_RECLAIMABLE,
	MIGRATE_PCPTYPES,	/* the number of types on the pcp lists */
	MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
	MIGRATE_CMA,
	MIGRATE_ISOLATE,	/* can't allocate from here */
	MIGRATE_TYPES
};

gfp_mask 中也定义了一系列选择 migrate type 的flag:

#define __GFP_MOVABLE	((__force gfp_t)___GFP_MOVABLE)  /* ZONE_MOVABLE allowed */
#define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE)
#define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE)

根据 gfp_mask 转换成 migrate type 的代码如下:

alloc_pages() → alloc_pages_current() → __alloc_pages_nodemask() → prepare_alloc_pages() → gfpflags_to_migratetype():

static inline int gfpflags_to_migratetype(const gfp_t gfp_flags)
{
	VM_WARN_ON((gfp_flags & GFP_MOVABLE_MASK) == GFP_MOVABLE_MASK);
	BUILD_BUG_ON((1UL << GFP_MOVABLE_SHIFT) != ___GFP_MOVABLE);
	BUILD_BUG_ON((___GFP_MOVABLE >> GFP_MOVABLE_SHIFT) != MIGRATE_MOVABLE);

	if (unlikely(page_group_by_mobility_disabled))
		return MIGRATE_UNMOVABLE;

	/* Group based on mobility */
	/* (1) 转换的结果仅为3种类型:MIGRATE_UNMOVABLE/MIGRATE_MOVABLE/MIGRATE_RECLAIMABLE */ 
	return (gfp_flags & GFP_MOVABLE_MASK) >> GFP_MOVABLE_SHIFT;
}
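
举例来说(直接按上面的宏做位运算):

	GFP_KERNEL           :不含 GFP_MOVABLE_MASK 中的位 → 0x00 >> 3 = 0 = MIGRATE_UNMOVABLE
	GFP_HIGHUSER_MOVABLE :含 ___GFP_MOVABLE            → 0x08 >> 3 = 1 = MIGRATE_MOVABLE
	__GFP_RECLAIMABLE    :                              → 0x10 >> 3 = 2 = MIGRATE_RECLAIMABLE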

4.8 migrate fallback 策略

在指定 migrate type 的 order 和大于 order 的 free list 分配失败时,可以从同一 zone 的其他 migrate type freelist 中偷取内存。

static int fallbacks[MIGRATE_TYPES][4] = {
	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,   MIGRATE_TYPES },
	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,   MIGRATE_TYPES },
	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_TYPES },
#ifdef CONFIG_CMA
	[MIGRATE_CMA]         = { MIGRATE_TYPES }, /* Never used */
#endif
#ifdef CONFIG_MEMORY_ISOLATION
	[MIGRATE_ISOLATE]     = { MIGRATE_TYPES }, /* Never used */
#endif
};

fallbacks[] 数组定义了当前 migrate type 可以偷取哪些其他 migrate type 的空闲内存,基本就是 MIGRATE_UNMOVABLE、MIGRATE_RECLAIMABLE、MIGRATE_MOVABLE 可以相互偷取。

具体的调用路径如下:

alloc_pages() → alloc_pages_current() → __alloc_pages_nodemask() → get_page_from_freelist() → rmqueue() → __rmqueue() → __rmqueue_fallback():
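
这里不展开完整源码,用一段简化的示意代码说明 __rmqueue_fallback() 的大致流程(假设性示例,省略了 steal_suitable_fallback() 中整块 pageblock 偷取、migratetype 转换等细节):

/* 示意:按 fallbacks[] 的顺序,从其他 migrate type 中找一块尽量大的空闲内存 */
static bool rmqueue_fallback_demo(struct zone *zone, int order, int start_migratetype)
{
	int current_order, i;

	/* 从最大的 order 往下找:要偷就偷一大块,减少后续的碎片化 */
	for (current_order = MAX_ORDER - 1; current_order >= order; current_order--) {
		struct free_area *area = &zone->free_area[current_order];

		for (i = 0; fallbacks[start_migratetype][i] != MIGRATE_TYPES; i++) {
			int fallback_mt = fallbacks[start_migratetype][i];

			if (list_empty(&area->free_list[fallback_mt]))
				continue;

			/* 找到可偷取的块:真实代码中会进一步决定是只偷这一块,
			 * 还是连同整个 pageblock 的 migratetype 一起转过来 */
			return true;
		}
	}
	return false;
}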

4.9 reclaim watermark

分配时如果 freelist 中现有的内存不能满足需求,则会启动内存回收。系统对每个 zone 定义了三种内存水位 high/low/min,针对不同的水位采取不同的回收策略:

pwl@ubuntu:~$ cat /proc/zoneinfo
Node 0, zone      DMA
  pages free     3968
        min      67
        low      83
        high     99

(图:zone 的 min/low/high 三种水位线示意)

具体三种水位的回收策略如下:

| watermark | size | reclaim policy |
| --------- | ---- | -------------- |
| min | ZoneSizeInPages / 128 | 当 zone 中可用的 page 数量低于这个值时,将开启同步回收动作,获取 page 的进程会被阻塞(如果可以阻塞的话)。GFP_ATOMIC 类型的内存可以突破这个限制,不启动阻塞回收直接进行分配,所以 min 以下的内存其实就是给 GFP_ATOMIC 预留的,保证它能在极端情况下分配到内存 |
| low | - | 当 zone 中可用的 page 数量低于这个值时,将唤醒 kswapd 内核线程(每个 node 对应一个 kswapd)进行异步回收 |
| high | - | 当 zone 中可用的 page 数量高于这个值时,kswapd 会休眠,停止回收 |

4.10 reclaim 方式

系统设计了几种回收内存的手段:

| function | para | descript |
| -------- | ---- | -------- |
| node_reclaim() | priority = NODE_RECLAIM_PRIORITY, may_writepage = 0, may_unmap = 0, may_swap = 1 | 快速内存回收,在 get_page_from_freelist() 中调用,最终调用的是 shrink_node() |
| wake_all_kswapds() | - | 唤醒异步回收线程,最终调用的是 shrink_node() |
| __alloc_pages_direct_reclaim() | - | 在当前线程中直接回收内存,最终调用的是 shrink_node() |
| __alloc_pages_direct_compact() | - | 在当前线程中直接通过内存压缩来进行内存回收 |
| __alloc_pages_may_oom() | - | 通过杀进程来回收内存 |

4.11 alloc_pages()

Buddy 内存分配的核心代码实现。

alloc_pages() → alloc_pages_current() → __alloc_pages_nodemask():

struct page *
__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
							nodemask_t *nodemask)
{
	struct page *page;
    /* (1.1) 默认的允许水位为low */
	unsigned int alloc_flags = ALLOC_WMARK_LOW;
	gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
	struct alloc_context ac = { };

	/*
	 * There are several places where we assume that the order value is sane
	 * so bail out early if the request is out of bound.
	 */
    /* (1.2) order长度的合法性判断 */
	if (unlikely(order >= MAX_ORDER)) {
		WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
		return NULL;
	}

    /* (1.3) gfp_mask的过滤 */
	gfp_mask &= gfp_allowed_mask;
	alloc_mask = gfp_mask;
    /* (1.4) 根据gfp_mask,决定的high_zoneidx、候选zone list、migrate type */
	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
		return NULL;

    /* (1.5) 挑选第一个合适的zone */
	finalise_ac(gfp_mask, order, &ac);

	/* First allocation attempt */
    /* (2) 第1次分配:使用low水位尝试直接从free list分配page */
	page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
	if (likely(page))
		goto out;

	/*
	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
	 * resp. GFP_NOIO which has to be inherited for all allocation requests
	 * from a particular context which has been marked by
	 * memalloc_no{fs,io}_{save,restore}.
	 */
    /* (3.1) 如果使用 memalloc_no{fs,io}_{save,restore} 设置了 NOFS和NOIO
			从 current->flags 解析出相应的值,用来清除 gfp_mask 中相应的 __GFP_FS 和 __GFP_IO 标志
	 */
	alloc_mask = current_gfp_context(gfp_mask);
	ac.spread_dirty_pages = false;

	/*
	 * Restore the original nodemask if it was potentially replaced with
	 * &cpuset_current_mems_allowed to optimize the fast-path attempt.
	 */
	/* (3.2) 恢复原有的nodemask */
	if (unlikely(ac.nodemask != nodemask))
		ac.nodemask = nodemask;

    /* (4) 慢速分配路径:使用min水位,以及各种手段进行内存回收后,再尝试分配内存 */
	page = __alloc_pages_slowpath(alloc_mask, order, &ac);

out:
	if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
	    unlikely(memcg_kmem_charge(page, gfp_mask, order) != 0)) {
		__free_pages(page, order);
		page = NULL;
	}

	trace_mm_page_alloc(page, order, alloc_mask, ac.migratetype);

	return page;
}

|→

static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
		int preferred_nid, nodemask_t *nodemask,
		struct alloc_context *ac, gfp_t *alloc_mask,
		unsigned int *alloc_flags)
{
    /* (1.4.1) 根据gfp_mask,获取到可能的最高优先级的zone */
	ac->high_zoneidx = gfp_zone(gfp_mask);
    /* (1.4.2) 根据gfp_mask,获取到可能候选node的所有zone链表 */
	ac->zonelist = node_zonelist(preferred_nid, gfp_mask);
	ac->nodemask = nodemask;
    /* (1.4.3) 根据gfp_mask,获取到migrate type
                MIGRATE_UNMOVABLE/MIGRATE_MOVABLE/MIGRATE_RECLAIMABLE
     */
	ac->migratetype = gfpflags_to_migratetype(gfp_mask);

    /* (1.4.4) 如果cpuset cgroup使能,设置相应标志位 */
	if (cpusets_enabled()) {
		*alloc_mask |= __GFP_HARDWALL;
		if (!ac->nodemask)
			ac->nodemask = &cpuset_current_mems_allowed;
		else
			*alloc_flags |= ALLOC_CPUSET;
	}

    /* (1.4.5) 如果指定了__GFP_FS,则尝试获取fs锁 */
	fs_reclaim_acquire(gfp_mask);
	fs_reclaim_release(gfp_mask);

    /* (1.4.6) 如果指定了__GFP_DIRECT_RECLAIM,判断当前是否是非原子上下文可以睡眠 */
	might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);

	if (should_fail_alloc_page(gfp_mask, order))
		return false;

    /* (1.4.7) 让MIGRATE_MOVABLE可以使用MIGRATE_CMA区域 */
	if (IS_ENABLED(CONFIG_CMA) && ac->migratetype == MIGRATE_MOVABLE)
		*alloc_flags |= ALLOC_CMA;

	return true;
}

4.11.1 get_page_from_freelist()

第一次的快速内存分配,和后续的慢速内存分配,最后都是调用 get_page_from_freelist() 从freelist中获取内存。

static struct page *
get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
						const struct alloc_context *ac)
{
	struct zoneref *z = ac->preferred_zoneref;
	struct zone *zone;
	struct pglist_data *last_pgdat_dirty_limit = NULL;

	/*
	 * Scan zonelist, looking for a zone with enough free.
	 * See also __cpuset_node_allowed() comment in kernel/cpuset.c.
	 */
    /* (2.5.1) 轮询 fallback zonelist链表,在符合条件(idx<=high_zoneidx)的zone中尝试分配内存 */
	for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
								ac->nodemask) {
		struct page *page;
		unsigned long mark;

		if (cpusets_enabled() &&
			(alloc_flags & ALLOC_CPUSET) &&
			!__cpuset_zone_allowed(zone, gfp_mask))
				continue;

        /* (2.5.2) 如果__GFP_WRITE指示了分配页的用途是dirty,平均分布脏页
                    查询node上分配的脏页是否超过限制,超过则换node
         */
		if (ac->spread_dirty_pages) {
			if (last_pgdat_dirty_limit == zone->zone_pgdat)
				continue;

			if (!node_dirty_ok(zone->zone_pgdat)) {
				last_pgdat_dirty_limit = zone->zone_pgdat;
				continue;
			}
		}

        /* (2.5.3) 获取当前分配需要满足(不能低于)的水位线 */
		mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
        /* (2.5.4) 判断当前zone中的free page是否满足条件:
                    1、total free page >= (2^order) + watermark + lowmem_reserve
                    2、是否有符合要求的长度为(2^order)的连续内存
         */
		if (!zone_watermark_fast(zone, order, mark,
				       ac_classzone_idx(ac), alloc_flags)) {
			int ret;
            /* (2.5.5) 如果没有足够的free内存,则进行下列的判断 */

			/* Checked here to keep the fast path fast */
			BUILD_BUG_ON(ALLOC_NO_WATERMARKS < NR_WMARK);
            /* (2.5.6) 如果可以忽略水位线,则直接进行分配尝试 */
			if (alloc_flags & ALLOC_NO_WATERMARKS)
				goto try_this_zone;

			if (node_reclaim_mode == 0 ||
			    !zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
				continue;

            /* (2.5.7) 快速内存回收尝试回收(2^order)个page
					快速回收不能进行unmap,writeback操作,回收priority为4,即最多尝试调用shrink_node进行回收的次数为priority值
					在__node_reclaim()中使用以下 scan_control 参数来调用shrink_node(),
						struct scan_control sc = {
							.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
							.gfp_mask = current_gfp_context(gfp_mask),
							.order = order,
							.priority = NODE_RECLAIM_PRIORITY,
							.may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE), // 默认为0
							.may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),		// 默认为0
							.may_swap = 1,
							.reclaim_idx = gfp_zone(gfp_mask),
						};
			 */
			ret = node_reclaim(zone->zone_pgdat, gfp_mask, order);
			switch (ret) {
			case NODE_RECLAIM_NOSCAN:
				/* did not scan */
				continue;
			case NODE_RECLAIM_FULL:
				/* scanned but unreclaimable */
				continue;
			default:
				/* did we reclaim enough */
                /* (2.5.8) 如果回收成功,重新判断空闲内存是否已经足够 */
				if (zone_watermark_ok(zone, order, mark,
						ac_classzone_idx(ac), alloc_flags))
					goto try_this_zone;

				continue;
			}
		}

try_this_zone:
        /* (2.5.9) 满足条件,尝试实际的从free list中摘取(2^order)个page */
		page = rmqueue(ac->preferred_zoneref->zone, zone, order,
				gfp_mask, alloc_flags, ac->migratetype);
		if (page) {
            /* (2.5.10) 分配到内存后,对 struct page 的一些处理 */
			prep_new_page(page, order, gfp_mask, alloc_flags);

			/*
			 * If this is a high-order atomic allocation then check
			 * if the pageblock should be reserved for the future
			 */
			if (unlikely(order && (alloc_flags & ALLOC_HARDER)))
				reserve_highatomic_pageblock(page, zone, order);

			return page;
		}
	}

	return NULL;
}

||→

static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
		unsigned long mark, int classzone_idx, unsigned int alloc_flags)
{
    /* (2.5.4.1) 获取当前zone中free page的数量  */
	long free_pages = zone_page_state(z, NR_FREE_PAGES);
	long cma_pages = 0;

#ifdef CONFIG_CMA
	/* If allocation can't use CMA areas don't use free CMA pages */
	if (!(alloc_flags & ALLOC_CMA))
		cma_pages = zone_page_state(z, NR_FREE_CMA_PAGES);
#endif

	/*
	 * Fast check for order-0 only. If this fails then the reserves
	 * need to be calculated. There is a corner case where the check
	 * passes but only the high-order atomic reserve are free. If
	 * the caller is !atomic then it'll uselessly search the free
	 * list. That corner case is then slower but it is harmless.
	 */
    /* (2.5.4.2) Fast check for order-0 requests: is free memory sufficient? */
	if (!order && (free_pages - cma_pages) > mark + z->lowmem_reserve[classzone_idx])
		return true;

    /* (2.5.4.3) Slow path: full check of whether free memory is sufficient */
	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
					free_pages);
}

|||→

bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
			 int classzone_idx, unsigned int alloc_flags,
			 long free_pages)
{
	long min = mark;
	int o;
	const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));

	/* free_pages may go negative - that's OK */
    /* (2.5.4.3.1) First subtract the requested (2^order) pages from the free page count, then check whether what is left still clears the watermark */
	free_pages -= (1 << order) - 1;

    /* (2.5.4.3.2) For high-priority (ALLOC_HIGH) allocations the watermark is halved */
	if (alloc_flags & ALLOC_HIGH)
		min -= min / 2;

	/*
	 * If the caller does not have rights to ALLOC_HARDER then subtract
	 * the high-atomic reserves. This will over-estimate the size of the
	 * atomic reserve but it avoids a search.
	 */
    /* (2.5.4.3.3) For non-harder allocations, the nr_reserved_highatomic pages must also be kept in reserve */
	if (likely(!alloc_harder)) {
		free_pages -= z->nr_reserved_highatomic;
    /* (2.5.4.3.4) Harder allocations are urgent: the watermark can be lowered further (by 1/2 or 1/4) */
	} else {
		/*
		 * OOM victims can try even harder than normal ALLOC_HARDER
		 * users on the grounds that it's definitely going to be in
		 * the exit path shortly and free memory. Any allocation it
		 * makes during the free path will be small and short-lived.
		 */
		if (alloc_flags & ALLOC_OOM)
			min -= min / 2;
		else
			min -= min / 4;
	}


#ifdef CONFIG_CMA
	/* If allocation can't use CMA areas don't use free CMA pages */
    /* (2.5.4.3.5) For non-CMA allocations, free CMA pages must be excluded as well */
	if (!(alloc_flags & ALLOC_CMA))
		free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);
#endif

	/*
	 * Check watermarks for an order-0 allocation request. If these
	 * are not met, then a high-order request also cannot go ahead
	 * even if a suitable page happened to be free.
	 */
    /* (2.5.4.3.6) Free memory must also keep the reserve (watermark + lowmem_reserve[classzone_idx]) untouched.
                    If, after subtracting all of the reserves above, the remainder still exceeds the requested order, the total free memory in this zone is enough for the request.
                    Whether a contiguous block of (2^order) pages actually exists still has to be checked below.
     */
	if (free_pages <= min + z->lowmem_reserve[classzone_idx])
		return false;

	/* If this is an order-0 request then the watermark is fine */
    /* (2.5.4.3.7) For order-0 requests no further check is needed: if the total is enough, a single free page can certainly be found */
	if (!order)
		return true;

	/* For a high-order request, check at least one suitable page is free */
    /* (2.5.4.3.8) Walk the freelists of this zone for every order >= the requested order */
	for (o = order; o < MAX_ORDER; o++) {
		struct free_area *area = &z->free_area[o];
		int mt;

		if (!area->nr_free)
			continue;

        /* (2.5.4.3.9) Check every migrate-type list at this order; if any is non-empty, return success */
		for (mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
			if (!list_empty(&area->free_list[mt]))
				return true;
		}

#ifdef CONFIG_CMA
		if ((alloc_flags & ALLOC_CMA) &&
		    !list_empty(&area->free_list[MIGRATE_CMA])) {
			return true;
		}
#endif
		if (alloc_harder &&
			!list_empty(&area->free_list[MIGRATE_HIGHATOMIC]))
			return true;
	}
	return false;
}

4.11.2 rmqueue()

Once a zone with enough free memory has been found, rmqueue() is responsible for taking the pages off the freelist.

rmqueue() → __rmqueue():

static __always_inline struct page *
__rmqueue(struct zone *zone, unsigned int order, int migratetype)
{
	struct page *page;

retry:
	/* (1) Allocate from the freelist of the originally requested migrate type */
	page = __rmqueue_smallest(zone, order, migratetype);
	if (unlikely(!page)) {
		if (migratetype == MIGRATE_MOVABLE)
			page = __rmqueue_cma_fallback(zone, order);

		/* (2) If that failed, try to steal memory from the freelists of other migrate types */
		if (!page && __rmqueue_fallback(zone, order, migratetype))
			goto retry;
	}

	trace_mm_page_alloc_zone_locked(page, order, migratetype);
	return page;
}

↓

static __always_inline
struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
						int migratetype)
{
	unsigned int current_order;
	struct free_area *area;
	struct page *page;

	/* Find a page of the appropriate size in the preferred list */
	/* (1.1) Walk the free_area freelists of this migratetype for every order >= the requested order, looking for free memory */
	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
		area = &(zone->free_area[current_order]);
		page = list_first_entry_or_null(&area->free_list[migratetype],
							struct page, lru);
		if (!page)
			continue;
		/* (1.1.1) Take the block off the freelist */
		list_del(&page->lru);
		/* Clear the order information stored in the page:
			page->_mapcount = -1
			page->private = 0
		*/
		rmv_page_order(page);
		area->nr_free--;
		/* (1.1.2) Put the remainder of the block back onto lower-order freelists */
		expand(zone, page, order, current_order, area, migratetype);
		set_pcppage_migratetype(page, migratetype);
		return page;
	}

	return NULL;
}
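
The leftover part of a block taken off a higher-order freelist is handled by expand(): the unused upper halves are hung back onto successively lower-order freelists until only the requested (2^order) pages remain. The short user-space sketch below (purely illustrative; the pfn numbers are made up) walks through that split for an order-5 block serving an order-2 request:

#include <stdio.h>

int main(void)
{
	unsigned long pfn = 0;		/* hypothetical first pfn of the free block */
	int high = 5, low = 2;		/* a 32-page block is found, 4 pages are needed */
	unsigned long size = 1UL << high;

	while (high > low) {
		high--;
		size >>= 1;
		/* the upper half (starting at pfn + size) goes back onto the
		   order-'high' freelist; the lower half keeps being split */
		printf("free block: pfn %lu..%lu -> order-%d freelist\n",
		       pfn + size, pfn + 2 * size - 1, high);
	}
	printf("allocated: pfn %lu..%lu (order %d)\n",
	       pfn, pfn + (1UL << low) - 1, low);
	return 0;
}

Running it prints the 16-, 8- and 4-page remainders being re-queued on the order-4, order-3 and order-2 freelists, with the first 4 pages handed to the caller.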

4.11.3 __alloc_pages_slowpath()

static inline struct page *
__alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
						struct alloc_context *ac)
{
	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
	const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
	struct page *page = NULL;
	unsigned int alloc_flags;
	unsigned long did_some_progress;
	enum compact_priority compact_priority;
	enum compact_result compact_result;
	int compaction_retries;
	int no_progress_loops;
	unsigned int cpuset_mems_cookie;
	int reserve_flags;

	/*
	 * We also sanity check to catch abuse of atomic reserves being used by
	 * callers that are not in atomic context.
	 */
	if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)) ==
				(__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)))
		gfp_mask &= ~__GFP_ATOMIC;

retry_cpuset:
	compaction_retries = 0;
	no_progress_loops = 0;
	compact_priority = DEF_COMPACT_PRIORITY;
	cpuset_mems_cookie = read_mems_allowed_begin();

	/*
	 * The fast path uses conservative alloc_flags to succeed only until
	 * kswapd needs to be woken up, and to avoid the cost of setting up
	 * alloc_flags precisely. So we do that now.
	 */
	/* (1) Set up the allocation flags:
			ALLOC_WMARK_MIN, lower the watermark to min
			ALLOC_HARDER, for atomic allocations or rt tasks, lower the watermark even further
	 */
	alloc_flags = gfp_to_alloc_flags(gfp_mask);

	/*
	 * We need to recalculate the starting point for the zonelist iterator
	 * because we might have used different nodemask in the fast path, or
	 * there was a cpuset modification and we are retrying - otherwise we
	 * could end up iterating over non-eligible zones endlessly.
	 */
	/* (2) Recalculate the starting point in the fallback zonelist */
	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
					ac->high_zoneidx, ac->nodemask);
	if (!ac->preferred_zoneref->zone)
		goto nopage;

	/* (3) Reaching the slow path means the allocation already failed at the low watermark,
			so first wake up the kswapd asynchronous reclaim threads
	 */
	if (gfp_mask & __GFP_KSWAPD_RECLAIM)
		wake_all_kswapds(order, ac);

	/*
	 * The adjusted alloc_flags might result in immediate success, so try
	 * that first
	 */
	/* (4) 2nd attempt: try to allocate directly from the freelists using the min watermark */
	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
	if (page)
		goto got_pg;

	/*
	 * For costly allocations, try direct compaction first, as it's likely
	 * that we have enough base pages and don't need to reclaim. For non-
	 * movable high-order allocations, do that as well, as compaction will
	 * try prevent permanent fragmentation by migrating from blocks of the
	 * same migratetype.
	 * Don't try this for allocations that are allowed to ignore
	 * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
	 */
	if (can_direct_reclaim &&
			(costly_order ||
			   (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
			&& !gfp_pfmemalloc_allowed(gfp_mask)) {
		/* (5) 3rd attempt: run memory compaction, then try get_page_from_freelist() again */
		page = __alloc_pages_direct_compact(gfp_mask, order,
						alloc_flags, ac,
						INIT_COMPACT_PRIORITY,
						&compact_result);
		if (page)
			goto got_pg;

		/*
		 * Checks for costly allocations with __GFP_NORETRY, which
		 * includes THP page fault allocations
		 */
		if (costly_order && (gfp_mask & __GFP_NORETRY)) {
			/*
			 * If compaction is deferred for high-order allocations,
			 * it is because sync compaction recently failed. If
			 * this is the case and the caller requested a THP
			 * allocation, we do not want to heavily disrupt the
			 * system, so we fail the allocation instead of entering
			 * direct reclaim.
			 */
			if (compact_result == COMPACT_DEFERRED)
				goto nopage;

			/*
			 * Looks like reclaim/compaction is worth trying, but
			 * sync compaction could be very expensive, so keep
			 * using async compaction.
			 */
			compact_priority = INIT_COMPACT_PRIORITY;
		}
	}

retry:
	/* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
	/* (6) Wake up the kswapd reclaim threads once more; the ac parameters may have become stricter by now */
	if (gfp_mask & __GFP_KSWAPD_RECLAIM)
		wake_all_kswapds(order, ac);

	/* (7) Set up the flags:
			ALLOC_NO_WATERMARKS, lower the bar further and ignore the watermarks entirely
	 */
	reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
	if (reserve_flags)
		alloc_flags = reserve_flags;

	/*
	 * Reset the zonelist iterators if memory policies can be ignored.
	 * These allocations are high priority and system rather than user
	 * orientated.
	 */
	if (!(alloc_flags & ALLOC_CPUSET) || reserve_flags) {
		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
					ac->high_zoneidx, ac->nodemask);
	}

	/* Attempt with potentially adjusted zonelist and alloc_flags */
	/* (8) 4th attempt: try to allocate directly from the freelists with the watermarks ignored */
	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
	if (page)
		goto got_pg;

	/* Caller is not willing to reclaim, we can't balance anything */
	/* (9) If direct reclaim is not allowed, bail out and rely on the asynchronous kswapd reclaim */
	if (!can_direct_reclaim)
		goto nopage;

	/* Avoid recursion of direct reclaim */
	/* (10) Avoid recursing into direct reclaim */
	if (current->flags & PF_MEMALLOC)
		goto nopage;

	/* Try direct reclaim and then allocating */
	/* (11) 5th attempt: run direct reclaim, then try get_page_from_freelist() again */
	page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
							&did_some_progress);
	if (page)
		goto got_pg;

	/* Try direct compaction and then allocating */
	/* (12) 6th attempt: run direct compaction, then try get_page_from_freelist() again */
	page = __alloc_pages_direct_compact(gfp_mask, order, alloc_flags, ac,
					compact_priority, &compact_result);
	if (page)
		goto got_pg;

	/* Do not loop if specifically requested */
	/* (13) If allocation still fails and retrying is not allowed, return with an error */
	if (gfp_mask & __GFP_NORETRY)
		goto nopage;

	/*
	 * Do not retry costly high order allocations unless they are
	 * __GFP_RETRY_MAYFAIL
	 */
	if (costly_order && !(gfp_mask & __GFP_RETRY_MAYFAIL))
		goto nopage;

	/* (14) Check whether retrying reclaim is worthwhile */
	if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
				 did_some_progress > 0, &no_progress_loops))
		goto retry;

	/*
	 * It doesn't make any sense to retry for the compaction if the order-0
	 * reclaim is not able to make any progress because the current
	 * implementation of the compaction depends on the sufficient amount
	 * of free memory (see __compaction_suitable)
	 */
	/* (15) Check whether retrying compaction is worthwhile */
	if (did_some_progress > 0 &&
			should_compact_retry(ac, order, alloc_flags,
				compact_result, &compact_priority,
				&compaction_retries))
		goto retry;


	/* Deal with possible cpuset update races before we start OOM killing */
	/* (16) Before starting the OOM killer, check whether a cpuset update makes a retry possible */
	if (check_retry_cpuset(cpuset_mems_cookie, ac))
		goto retry_cpuset;

	/* Reclaim has failed us, start killing things */
	/* (17) 7th attempt: all reclaim attempts have failed; as a last resort, free memory by killing processes */
	page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
	if (page)
		goto got_pg;

	/* Avoid allocations with no watermarks from looping endlessly */
	/* (18) Prevent no-watermark allocations from looping endlessly */
	if (tsk_is_oom_victim(current) &&
	    (alloc_flags == ALLOC_OOM ||
	     (gfp_mask & __GFP_NOMEMALLOC)))
		goto nopage;

	/* Retry as long as the OOM killer is making progress */
	/* (19) Retry as long as the OOM killer is making progress */
	if (did_some_progress) {
		no_progress_loops = 0;
		goto retry;
	}

nopage:
	/* Deal with possible cpuset update races before we fail */
	/* (20) Deal with a possible cpuset update before we fail */
	if (check_retry_cpuset(cpuset_mems_cookie, ac))
		goto retry_cpuset;

	/*
	 * Make sure that __GFP_NOFAIL request doesn't leak out and make sure
	 * we always retry
	 */
	/* (21) If __GFP_NOFAIL was specified, we have no choice but to keep retrying */
	if (gfp_mask & __GFP_NOFAIL) {
		/*
		 * All existing users of the __GFP_NOFAIL are blockable, so warn
		 * of any new users that actually require GFP_NOWAIT
		 */
		if (WARN_ON_ONCE(!can_direct_reclaim))
			goto fail;

		/*
		 * PF_MEMALLOC request from this context is rather bizarre
		 * because we cannot reclaim anything and only can loop waiting
		 * for somebody to do a work for us
		 */
		WARN_ON_ONCE(current->flags & PF_MEMALLOC);

		/*
		 * non failing costly orders are a hard requirement which we
		 * are not prepared for much so let's warn about these users
		 * so that we can identify them and convert them to something
		 * else.
		 */
		WARN_ON_ONCE(order > PAGE_ALLOC_COSTLY_ORDER);

		/*
		 * Help non-failing allocations by giving them access to memory
		 * reserves but do not use ALLOC_NO_WATERMARKS because this
		 * could deplete whole memory reserves which would just make
		 * the situation worse
		 */
		page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_HARDER, ac);
		if (page)
			goto got_pg;

		cond_resched();
		goto retry;
	}
fail:
	/* (22) Emit the allocation-failure warning */
	warn_alloc(gfp_mask, ac->nodemask,
			"page allocation failure: order:%u", order);
got_pg:
	return page;
}
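
To connect these paths back to a caller, the minimal kernel-module sketch below (illustrative only; the module name and the reduced error handling are mine) allocates an order-2 block through the buddy allocator and frees it again. A GFP_KERNEL allocation may go through the whole slow path analysed above (kswapd wakeup, direct reclaim/compaction, OOM), while a GFP_ATOMIC allocation may only dip below the watermarks and otherwise fails straight away:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *pages;

static int __init buddy_demo_init(void)
{
	/* order-2 request: 4 physically contiguous pages from the buddy system */
	pages = alloc_pages(GFP_KERNEL, 2);
	if (!pages)
		return -ENOMEM;

	pr_info("buddy_demo: got 4 pages starting at pfn %lu (struct page %p)\n",
		page_to_pfn(pages), pages);
	return 0;
}

static void __exit buddy_demo_exit(void)
{
	/* the 4 pages go back to the order-2 freelist (or merge with their buddy) */
	__free_pages(pages, 2);
}

module_init(buddy_demo_init);
module_exit(buddy_demo_exit);
MODULE_LICENSE("GPL");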

References:

1.page 页帧管理详解
2.内核地址空间布局详解
3.深入理解Linux内核——MM
4.Understanding the Linux Virtual Memory Manager
5.构建 GFP_ZONE_TABLE
6.通过迁移类型分组来实现反碎片
7.用户态进程如何得到虚拟地址对应的物理地址?
8.linux内存源码分析 - 伙伴系统(初始化和申请页框)
9.linux内存源码分析 - 伙伴系统(释放页框)
10.linux内存源码分析 - 内存回收(整体流程)
11.linux内存源码分析 - 零散知识点
12.深入理解Linux内存回收 - Round One
13.linux内存回收机制
14.【Linux内存源码分析】页面迁移
15.Linux内存调节之lowmem reserve
16.从备用类型总盗用steal page
17.Linux内存管理 - zoned page frame allocator - 2
