Author: @daemon365
This is an original article; please credit the source when reposting: https://www.cnblogs.com/daemon365/p/18607996
OS Memory Management
The unit in which an operating system manages memory is the page, typically 4 KB on Linux. On top of that, the OS uses virtual memory: the addresses a user program sees are not physical addresses but virtual ones. When memory is accessed or modified, the OS maps the virtual address to a physical one; the mapping goes through the page table and is performed by the MMU (Memory Management Unit). Because this translation is performance-critical, the CPU has a dedicated TLB (Translation Lookaside Buffer) that caches page-table entries.
Why use virtual memory?
- Protection: each process has its own virtual address space, so processes cannot read or modify each other's memory.
- Less fragmentation: the virtual address space can be contiguous even when the underlying physical memory is not.
- Swapping: when physical memory runs short, virtual pages can be backed by disk, extending the usable memory.

As the figure above shows, with physical memory alone a request for a contiguous block may simply be impossible to satisfy; that is the fragmentation problem. With virtual memory, page mapping presents a contiguous address range to the program regardless of where the physical pages actually live.
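To make the page-mapping arithmetic concrete, here is a minimal, purely illustrative sketch in Go. The `pageTable` map and `translate` helper are invented for this example; a real page table is a multi-level structure maintained by the kernel and walked by the MMU, with the TLB caching its entries.

```go
package main

import "fmt"

const pageSize = 4096 // 4 KB pages
const pageShift = 12  // log2(pageSize)

// translate splits a virtual address into page number + offset and looks the
// page up in a toy "page table". A missing entry corresponds to a page fault.
func translate(virt uintptr, pageTable map[uintptr]uintptr) (uintptr, bool) {
	vpn := virt >> pageShift        // virtual page number
	offset := virt & (pageSize - 1) // offset inside the page
	phys, ok := pageTable[vpn]
	if !ok {
		return 0, false // no mapping yet
	}
	return phys<<pageShift | offset, true
}

func main() {
	pt := map[uintptr]uintptr{0x12345: 0x00042} // one fake mapping
	if pa, ok := translate(0x12345abc, pt); ok {
		fmt.Printf("virtual 0x12345abc -> physical %#x\n", pa) // 0x42abc
	}
}
```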
Go Memory Management Units
page
In Go, the basic storage unit for memory management is also the page, but a Go page is 8 KB. Memory management is handled by the runtime: it maintains a memory pool from which it allocates and to which it returns memory, avoiding a system call for every allocation and thus improving performance.
mspan
An mspan is the basic unit of Go memory management; one mspan covers one or more pages. There are many kinds of mspan, each serving a different object size (size class).
| class | bytes/obj | bytes/span | objects | tail waste | max waste | min align |
| ----- | --------- | ---------- | ------- | ---------- | --------- | --------- |
| 1     | 8         | 8192       | 1024    | 0          | 87.50%    | 8         |
| 2     | 16        | 8192       | 512     | 0          | 43.75%    | 16        |
| 3     | 24        | 8192       | 341     | 8          | 29.24%    | 8         |
| 4     | 32        | 8192       | 256     | 0          | 21.88%    | 32        |
| 5     | 48        | 8192       | 170     | 32         | 31.52%    | 16        |
| 6     | 64        | 8192       | 128     | 0          | 23.44%    | 64        |
| 7     | 80        | 8192       | 102     | 32         | 19.07%    | 16        |
| 8     | 96        | 8192       | 85      | 32         | 15.95%    | 32        |
| 9     | 112       | 8192       | 73      | 16         | 13.56%    | 16        |
| ...   | ...       | ...        | ...     | ...        | ...       | ...       |
| 64    | 24576     | 24576      | 1       | 0          | 11.45%    | 8192      |
| 65    | 27264     | 81920      | 3       | 128        | 10.00%    | 128       |
| 66    | 28672     | 57344      | 2       | 0          | 4.91%     | 4096      |
| 67    | 32768     | 32768      | 1       | 0          | 12.50%    | 8192      |
- class is the size-class number; each class serves a different object size.
- bytes/obj is the size of each object in that class.
- bytes/span is the total size of the mspan.
- objects is how many objects fit in the mspan.
- tail waste is the space left over at the end of the span, because bytes/span is not evenly divisible by bytes/obj.
- max waste is the worst-case wasted space. For class 1, if every allocation only needs 1 byte, each 8-byte slot wastes 7 bytes, i.e. 7 / 8 = 87.50%. (See the worked example after this list.)
- min align is the guaranteed alignment of objects in this class. A request larger than this class's bytes/obj is served from the next class up.
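The columns can be reproduced with simple arithmetic. The sketch below (the helper names are mine, not from the runtime) recomputes objects, tail waste, and max waste for class 3, assuming the smallest request routed to it is 17 bytes, since class 2 covers everything up to 16 bytes:

```go
package main

import "fmt"

func main() {
	const spanBytes = 8192 // one 8 KB span
	const objBytes = 24    // class 3 object size
	const minObj = 17      // smallest request that falls into class 3

	objects := spanBytes / objBytes           // 341
	tailWaste := spanBytes - objects*objBytes // 8192 - 8184 = 8

	// Worst case: every object is the smallest size that still maps to this class.
	maxWaste := float64((objBytes-minObj)*objects+tailWaste) / spanBytes * 100

	fmt.Println(objects, tailWaste)             // 341 8
	fmt.Printf("max waste: %.2f%%\n", maxWaste) // ~29.24%
}
```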
Data Structures
mspan
```go
type mspan struct {
	// mspans form a doubly linked list.
	next *mspan // next span in list, or nil
	prev *mspan // previous span in list, or nil

	list *mSpanList // the list this span belongs to (for debugging)

	startAddr uintptr // address of the first byte of the span
	npages    uintptr // number of pages in the span

	manualFreeList gclinkptr // list of free objects in mSpanManual spans (e.g. stacks)

	// freeindex is the slot index to begin scanning at when looking for
	// the next free object; slots before it are known to be in use.
	freeindex uint16
	// nelems is the number of object slots in the span.
	nelems uint16
	// freeIndexForScan is like freeindex, but used by the GC scanner.
	freeIndexForScan uint16

	// allocCache caches a 64-bit window of the allocation bitmap starting
	// at freeindex; a 1 bit means the slot is free.
	allocCache uint64

	spanclass spanClass // size class and noscan bit
}
```
spanClass
```go
type spanClass uint8

// makeSpanClass packs a size class and a noscan bit into one byte.
func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
	return spanClass(sizeclass<<1) | spanClass(bool2int(noscan))
}

// sizeclass extracts the size class (upper 7 bits).
func (sc spanClass) sizeclass() int8 {
	return int8(sc >> 1)
}

// noscan reports whether objects in this span contain no pointers.
func (sc spanClass) noscan() bool {
	return sc&1 != 0
}
```
spanClass is a uint8, i.e. 8 bits. The upper 7 bits are the sizeclass from the table above; together with 0, which stands for objects larger than class 67, there are (67 + 1) possible classes. The lowest bit is the noscan flag, which records whether objects in the span contain pointers; the GC uses it to skip scanning pointer-free objects. Combining both fields gives (67 + 1) * 2 distinct span classes.
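A quick standalone sanity check of this encoding (re-implemented here outside the runtime purely for illustration):

```go
package main

import "fmt"

type spanClass uint8

func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
	b := spanClass(0)
	if noscan {
		b = 1
	}
	return spanClass(sizeclass<<1) | b
}

func (sc spanClass) sizeclass() int8 { return int8(sc >> 1) }
func (sc spanClass) noscan() bool    { return sc&1 != 0 }

func main() {
	sc := makeSpanClass(5, true) // size class 5, object contains no pointers
	fmt.Println(uint8(sc), sc.sizeclass(), sc.noscan()) // 11 5 true
}
```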
mspan in Detail

As the figure shows:
- mspans form a doubly linked list; when one span runs out, another is simply linked on.
- startAddr is the starting address of the span and npages is its page count, so startAddr + npages * 8KB gives the end address.
- allocCache is a bitmap in which each bit marks whether one object slot is in use; free slots are found with ctz (count trailing zeros). A standalone illustration follows this list.
- freeindex is the index of the next potentially free object; every slot below it is known to be allocated, so searches start there instead of rescanning from the beginning, which speeds up allocation.
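To show the allocCache idea in isolation (this mirrors what nextFreeFast does later, but on plain values; it is not runtime code):

```go
package main

import (
	"fmt"
	"math/bits"
)

func main() {
	// A 1 bit means "free": slots 0-3 are used, slots 4-7 are free.
	var allocCache uint64 = 0b1111_0000
	freeindex := uint16(0)

	// ctz jumps straight to the first free slot.
	theBit := bits.TrailingZeros64(allocCache) // 4
	slot := freeindex + uint16(theBit)
	fmt.Println("next free object index:", slot) // 4

	// After handing out that slot, shift the cache past it.
	allocCache >>= uint(theBit + 1)
	freeindex = slot + 1
	fmt.Printf("cache=%b freeindex=%d\n", allocCache, freeindex) // 111 5
}
```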
mcache
Every P (processor) has an mcache in its struct; it is a per-P cache. Because only one goroutine runs on a P at a time, the mcache needs no locking. That is exactly the point of its design: reduce lock contention and improve performance.
```go
type p struct {
	// ...
	mcache *mcache
	// ...
}

// mcache is a per-P cache of small objects and stack memory.
type mcache struct {
	_ sys.NotInHeap

	nextSample uintptr // trigger a heap sample after allocating this many bytes
	scanAlloc  uintptr // bytes of scannable heap allocated

	// Tiny allocator: bump-allocates pointer-free objects < 16 B
	// out of a single 16-byte block.
	tiny       uintptr // base of the current tiny block, or 0
	tinyoffset uintptr // current offset into the tiny block
	tinyAllocs uintptr // number of tiny allocations performed

	// One cached mspan per span class.
	alloc [numSpanClasses]*mspan

	// Per-P free lists of small stacks.
	stackcache [_NumStackOrders]stackfreelist

	flushGen atomic.Uint32 // sweep generation this mcache was last flushed at
}
```
mcentral
mcentral is also a cache, but a central one rather than per-P. It too exists to reduce lock contention: without it, every mspan request would have to take a single global lock. With mcentral, only a per-size-class lock is needed; requesting a class 1 mspan takes the class 1 lock and does not interfere with someone requesting a class 2 mspan. This reduces lock contention and improves performance.
```go
// mcentral collects all spans of one span class.
type mcentral struct {
	_ sys.NotInHeap

	spanclass spanClass

	// partial holds spans that still have free objects, full holds spans
	// with no free objects. Each pair is split into a swept set and an
	// unswept set, indexed by the current sweep generation.
	partial [2]spanSet
	full    [2]spanSet
}

// spanSet is a concurrent set of *mspan, organized as a two-level array
// (the "spine" of fixed-size blocks); pushes and pops are mostly lock-free.
type spanSet struct {
	// spineLock only protects growth of the spine; accesses that do not
	// grow it use atomics on the fields below.
	spineLock mutex
	spine     atomicSpanSetSpinePointer // pointer to the spine array
	spineLen  atomic.Uintptr            // current length of the spine
	spineCap  uintptr                   // capacity of the spine

	// index packs the head and tail positions of the set into one
	// atomically updated value.
	index atomicHeadTailIndex
}

type mheap struct {
	central [numSpanClasses]struct {
		mcentral mcentral
		// pad each mcentral out to a cache line to avoid false sharing.
		pad [(cpu.CacheLinePadSize - unsafe.Sizeof(mcentral{})%cpu.CacheLinePadSize) % cpu.CacheLinePadSize]byte
	}
}
```
mheap
mheap is the global memory manager. When mcentral cannot satisfy a request, memory is taken from mheap, which requires the global lock. If mheap itself cannot satisfy the request, it asks the operating system for more memory via a system call; the minimum unit of that request is an arena, 64 MB.
```go
type mheap struct {
	_ sys.NotInHeap

	// lock must only be acquired on the system stack.
	lock mutex

	pages pageAlloc // page allocation data structure (radix tree)

	sweepgen uint32 // sweep generation; see the comment on mspan

	// allspans is a slice of all mspans ever created.
	allspans []*mspan

	// pagesInUse is the number of pages in spans currently in use.
	pagesInUse atomic.Uintptr

	// arenas is the two-level map from arena index to arena metadata.
	arenas [1 << arenaL1Bits]*[1 << arenaL2Bits]*heapArena

	// Fixed-size allocators for runtime-internal structures.
	spanalloc             fixalloc // allocator for mspan
	cachealloc            fixalloc // allocator for mcache
	specialfinalizeralloc fixalloc // allocator for specialfinalizer
	// ...
}
```
heapArena
```go
// heapArena stores metadata for one heap arena. It is allocated off-heap.
type heapArena struct {
	_ sys.NotInHeap

	// spans maps each page in the arena to the mspan that owns it.
	spans [pagesPerArena]*mspan

	// pageInUse is a bitmap marking which pages belong to in-use spans.
	pageInUse [pagesPerArena / 8]uint8

	// pageMarks is a bitmap marking which pages contain marked objects.
	pageMarks [pagesPerArena / 8]uint8

	// pageSpecials is a bitmap marking which pages have specials
	// (finalizers and the like).
	pageSpecials [pagesPerArena / 8]uint8

	checkmarks *checkmarksMap // only used in checkmark debug mode

	// zeroedBase is the offset of the first byte in the arena that has not
	// been handed out yet; memory from here on is known to be zeroed.
	zeroedBase uintptr
}
```
pageAlloc
pageAlloc is the structure that allocates pages. It is a radix tree with 5 levels; each level is an array of summaries used to find runs of free pages quickly.
```go
type pageAlloc struct {
	// summary is the radix tree: one []pallocSum per level, each entry
	// summarizing the free pages in the region below it.
	summary [summaryLevels][]pallocSum

	// chunks is a sparse two-level array of per-chunk allocation bitmaps.
	chunks [1 << pallocChunksL1Bits]*[1 << pallocChunksL2Bits]pallocData

	// searchAddr is an address below which there are no free pages;
	// searches can start here.
	searchAddr offAddr

	// start and end bound the chunks currently managed by the allocator.
	start, end chunkIdx

	// ...
}

// pallocSum packs three 21-bit values into one uint64:
//   start - free pages at the start of the region
//   max   - longest run of free pages anywhere in the region
//   end   - free pages at the end of the region
// If the top bit is set, the whole region is free and all three values
// are maxPackedValue.
type pallocSum uint64

func (p pallocSum) start() uint {
	if uint64(p)&uint64(1<<63) != 0 {
		return maxPackedValue
	}
	return uint(uint64(p) & (maxPackedValue - 1))
}

func (p pallocSum) max() uint {
	if uint64(p)&uint64(1<<63) != 0 {
		return maxPackedValue
	}
	return uint((uint64(p) >> logMaxPackedValue) & (maxPackedValue - 1))
}

func (p pallocSum) end() uint {
	if uint64(p)&uint64(1<<63) != 0 {
		return maxPackedValue
	}
	return uint((uint64(p) >> (2 * logMaxPackedValue)) & (maxPackedValue - 1))
}
```

Memory Allocation Flow
Flow

Go divides objects into three kinds: tiny, small and large. Tiny objects are smaller than 16 B (and contain no pointers), small objects range from 16 B up to 32 KB, and large objects are bigger than 32 KB. The tiny allocator exists mainly to reduce memory fragmentation. The allocation path is as follows (a rough routing sketch follows the list):
- A tiny object is served by the tiny allocator. If the current tiny block (fixed at 16 B) does not have enough room left, a new 16 B object is fetched from the mcache's tiny span and becomes the tiny allocator's block.
- A small object is served, according to its size class, from the corresponding mspan in the mcache.
- If that mspan has no free slot, a new mspan of the same class is fetched from mcentral and the object is carved out of it (this takes the per-class lock).
- If mcentral has no suitable mspan either, the required number of pages is taken from mheap and assembled into a new mspan, from which the object is allocated (this takes the global lock).
- If mheap does not have enough pages, it obtains a new arena from the operating system via a system call, carves it into pages, and then continues from the previous step.
- A large object skips the caches and starts directly at the mheap step (step 4 above).
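A rough, illustrative routing sketch of the three cases. The `route` helper is made up for this example; the thresholds maxTinySize = 16 B and maxSmallSize = 32 KB come from the runtime, and "noscan" means the type contains no pointers:

```go
package main

import "fmt"

func route(size uintptr, noscan bool) string {
	const maxTinySize = 16
	const maxSmallSize = 32 << 10
	switch {
	case noscan && size < maxTinySize:
		return "tiny allocator (inside mcache)"
	case size <= maxSmallSize:
		return "small: size-class span from mcache -> mcentral -> mheap"
	default:
		return "large: pages allocated directly from mheap"
	}
}

func main() {
	fmt.Println(route(8, true))       // tiny
	fmt.Println(route(512, false))    // small
	fmt.Println(route(64<<10, false)) // large
}
```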
mallocgc
```go
// mallocgc allocates an object of size bytes. Small objects come from the
// per-P cache's free lists; large objects (> 32 KB) come straight from the
// heap. (Heavily abridged: instrumentation, GC assist and profiling elided.)
func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
	if gcphase == _GCmarktermination {
		throw("mallocgc called with gcphase == _GCmarktermination")
	}

	// All zero-sized allocations share one address.
	if size == 0 {
		return unsafe.Pointer(&zerobase)
	}

	userSize := size

	// Pin the current M and forbid re-entrant malloc.
	mp := acquirem()
	if mp.mallocing != 0 {
		throw("malloc deadlock")
	}
	if mp.gsignal == getg() {
		throw("malloc during signal")
	}
	mp.mallocing = 1

	shouldhelpgc := false
	dataSize := userSize

	// Per-P cache; no lock is needed on this path.
	c := getMCache(mp)
	if c == nil {
		throw("mallocgc called without a P or outside bootstrapping")
	}
	var span *mspan
	var header **_type
	var x unsafe.Pointer

	// noscan means the object contains no pointers.
	noscan := typ == nil || !typ.Pointers()

	if size <= maxSmallSize-mallocHeaderSize {
		if noscan && size < maxTinySize {
			// Tiny allocator: pack several tiny, pointer-free objects
			// into a single 16-byte block.
			off := c.tinyoffset
			// Align the offset for the requested size.
			if size&7 == 0 {
				off = alignUp(off, 8)
			} else if goarch.PtrSize == 4 && size == 12 {
				off = alignUp(off, 8)
			} else if size&3 == 0 {
				off = alignUp(off, 4)
			} else if size&1 == 0 {
				off = alignUp(off, 2)
			}
			// Fast path: the object fits in the current tiny block.
			if off+size <= maxTinySize && c.tiny != 0 {
				x = unsafe.Pointer(c.tiny + off)
				c.tinyoffset = off + size
				c.tinyAllocs++
				mp.mallocing = 0
				releasem(mp)
				return x
			}
			// Slow path: grab a new 16-byte block from the tiny span.
			span = c.alloc[tinySpanClass]
			v := nextFreeFast(span)
			if v == 0 {
				v, span, shouldhelpgc = c.nextFree(tinySpanClass)
			}
			x = unsafe.Pointer(v)
			(*[2]uint64)(x)[0] = 0
			(*[2]uint64)(x)[1] = 0
			// Keep the new block as the tiny block if it has more room left.
			if !raceenabled && (size < c.tinyoffset || c.tiny == 0) {
				c.tiny = uintptr(x)
				c.tinyoffset = size
			}
			size = maxTinySize
		} else {
			// Small object: round the size up to a size class and allocate
			// from that class's span in the mcache.
			hasHeader := !noscan && !heapBitsInSpan(size)
			if hasHeader {
				size += mallocHeaderSize
			}
			var sizeclass uint8
			if size <= smallSizeMax-8 {
				sizeclass = size_to_class8[divRoundUp(size, smallSizeDiv)]
			} else {
				sizeclass = size_to_class128[divRoundUp(size-smallSizeMax, largeSizeDiv)]
			}
			size = uintptr(class_to_size[sizeclass])
			spc := makeSpanClass(sizeclass, noscan)
			span = c.alloc[spc]
			// Fast path: next free slot from the cached allocCache bitmap.
			v := nextFreeFast(span)
			if v == 0 {
				// Slow path: may refill the span from mcentral/mheap.
				v, span, shouldhelpgc = c.nextFree(spc)
			}
			x = unsafe.Pointer(v)
			if needzero && span.needzero != 0 {
				memclrNoHeapPointers(x, size)
			}
			if hasHeader {
				header = (**_type)(x)
				x = add(x, mallocHeaderSize)
				size -= mallocHeaderSize
			}
		}
	} else {
		// Large object: allocate a dedicated span directly from the heap.
		shouldhelpgc = true
		span = c.allocLarge(size, noscan)
		span.freeindex = 1
		span.allocCount = 1
		size = span.elemsize
		x = unsafe.Pointer(span.base())
		if needzero && span.needzero != 0 {
			delayedZeroing = true
		}
		if !noscan {
			span.largeType = nil
			header = &span.largeType
		}
	}

	// ... (GC bits, profiling, assist and publication barriers elided)
	return x
}
```
nextFreeFast
```go
// nextFreeFast returns the next free object in the span if one is cheaply
// available from allocCache, otherwise 0.
func nextFreeFast(s *mspan) gclinkptr {
	// Index of the first set bit (first free slot) in the cached bitmap.
	theBit := sys.TrailingZeros64(s.allocCache)
	if theBit < 64 {
		result := s.freeindex + uint16(theBit)
		if result < s.nelems {
			freeidx := result + 1
			if freeidx%64 == 0 && freeidx != s.nelems {
				// The cache would need refilling; take the slow path.
				return 0
			}
			// Consume the bit and advance freeindex.
			s.allocCache >>= uint(theBit + 1)
			s.freeindex = freeidx
			s.allocCount++
			// Object address = span base + slot index * element size.
			return gclinkptr(uintptr(result)*s.elemsize + s.base())
		}
	}
	return 0
}
```
nextFree
```go
// nextFree returns the next free object from the cached span if one is
// available; otherwise it refills the cache from mcentral and tries again.
func (c *mcache) nextFree(spc spanClass) (v gclinkptr, s *mspan, shouldhelpgc bool) {
	s = c.alloc[spc]
	shouldhelpgc = false
	freeIndex := s.nextFreeIndex()
	if freeIndex == s.nelems {
		// The span is full: swap in a fresh span from mcentral.
		c.refill(spc)
		shouldhelpgc = true
		s = c.alloc[spc]
		freeIndex = s.nextFreeIndex()
	}

	if freeIndex >= s.nelems {
		throw("freeIndex is not valid")
	}

	v = gclinkptr(uintptr(freeIndex)*s.elemsize + s.base())
	s.allocCount++
	if s.allocCount > s.nelems {
		println("s.allocCount=", s.allocCount, "s.nelems=", s.nelems)
		throw("s.allocCount > s.nelems")
	}
	return
}
```
Free objects are located 64 slots at a time:
```go
// nextFreeIndex returns the index of the next free object in the span, or
// nelems if there are none. It refills allocCache 64 bits at a time.
func (s *mspan) nextFreeIndex() uint16 {
	sfreeindex := s.freeindex
	snelems := s.nelems
	if sfreeindex == snelems {
		return sfreeindex
	}
	if sfreeindex > snelems {
		throw("s.freeindex > s.nelems")
	}

	aCache := s.allocCache
	bitIndex := sys.TrailingZeros64(aCache)
	for bitIndex == 64 {
		// The current 64-bit window has no free slot: move freeindex to the
		// next 64-slot boundary and refill the cache from the bitmap.
		sfreeindex = (sfreeindex + 64) &^ (64 - 1)
		if sfreeindex >= snelems {
			s.freeindex = snelems
			return snelems
		}
		whichByte := sfreeindex / 8
		s.refillAllocCache(whichByte)
		aCache = s.allocCache
		bitIndex = sys.TrailingZeros64(aCache)
	}
	result := sfreeindex + uint16(bitIndex)
	if result >= snelems {
		s.freeindex = snelems
		return snelems
	}

	// Consume the bit and advance freeindex past the returned slot.
	s.allocCache >>= uint(bitIndex + 1)
	sfreeindex = result + 1

	if sfreeindex%64 == 0 && sfreeindex != snelems {
		// Crossed a 64-slot boundary: refill the cache now.
		whichByte := sfreeindex / 8
		s.refillAllocCache(whichByte)
	}
	s.freeindex = sfreeindex
	return result
}

// refill replaces the mcache's span for class spc with a new span that has
// free space, returning the exhausted one to its mcentral.
func (c *mcache) refill(spc spanClass) {
	s := c.alloc[spc]

	if s.allocCount != s.nelems {
		throw("refill of span with free space remaining")
	}
	if s != &emptymspan {
		// Return the exhausted span to its mcentral.
		mheap_.central[spc].mcentral.uncacheSpan(s)
	}

	// Get a new span with free objects from mcentral.
	s = mheap_.central[spc].mcentral.cacheSpan()

	c.alloc[spc] = s
}
```
cacheSpan
```go
// cacheSpan returns a span with free space for an mcache to use.
// (Abridged: tracing, accounting and some declarations elided.)
func (c *mcentral) cacheSpan() *mspan {
	// 1. Try an already-swept span that still has free objects.
	sg := mheap_.sweepgen
	if s = c.partialSwept(sg).pop(); s != nil {
		goto havespan
	}

	sl = sweep.active.begin()
	if sl.valid {
		// 2. Try unswept spans with free space, sweeping them ourselves.
		for ; spanBudget >= 0; spanBudget-- {
			s = c.partialUnswept(sg).pop()
			if s == nil {
				break
			}
			if s, ok := sl.tryAcquire(s); ok {
				s.sweep(true)
				sweep.active.end(sl)
				goto havespan
			}
		}
		// 3. Try unswept full spans: sweeping may free some objects.
		for ; spanBudget >= 0; spanBudget-- {
			s = c.fullUnswept(sg).pop()
			if s == nil {
				break
			}
			if s, ok := sl.tryAcquire(s); ok {
				s.sweep(true)
				freeIndex := s.nextFreeIndex()
				if freeIndex != s.nelems {
					s.freeindex = freeIndex
					sweep.active.end(sl)
					goto havespan
				}
				// Still full after sweeping: move it to the swept-full set.
				c.fullSwept(sg).push(s.mspan)
			}
		}
		sweep.active.end(sl)
	}
	trace = traceAcquire()
	if trace.ok() {
		trace.GCSweepDone()
		traceDone = true
		traceRelease(trace)
	}

	// 4. Nothing reusable: grow by allocating a new span from the heap.
	s = c.grow()
	if s == nil {
		return nil
	}

havespan:
	// Prime freeindex/allocCache so the mcache can allocate immediately.
	freeByteBase := s.freeindex &^ (64 - 1)
	whichByte := freeByteBase / 8
	s.refillAllocCache(whichByte)
	s.allocCache >>= s.freeindex % 64

	return s
}
```
grow
```go
// grow allocates a new, empty span of this central's class from the heap.
func (c *mcentral) grow() *mspan {
	npages := uintptr(class_to_allocnpages[c.spanclass.sizeclass()])
	size := uintptr(class_to_size[c.spanclass.sizeclass()])

	// Ask the heap for npages pages, as a span of our class.
	s := mheap_.alloc(npages, c.spanclass)
	if s == nil {
		return nil
	}

	// Slice the span into objects of this class's size.
	n := s.divideByElemSize(npages << _PageShift)
	s.limit = s.base() + size*n
	s.initHeapBits(false)
	return s
}

// alloc allocates a span of npages pages from the GC'd heap.
func (h *mheap) alloc(npages uintptr, spanclass spanClass) *mspan {
	var s *mspan
	systemstack(func() {
		// To prevent excessive heap growth, sweep before allocating.
		if !isSweepDone() {
			h.reclaim(npages)
		}
		s = h.allocSpan(npages, spanAllocHeap, spanclass)
	})
	return s
}

// allocSpan allocates an mspan of npages pages. (Heavily abridged.)
func (h *mheap) allocSpan(npages uintptr, typ spanAllocType, spanclass spanClass) (s *mspan) {
	// Fast path: small allocations come from the per-P page cache, no lock.
	if !needPhysPageAlign && pp != nil && npages < pageCachePages/4 {
		*c = h.pages.allocToCache()
	}

	lock(&h.lock)

	if needPhysPageAlign {
		// Overallocate so the result can be aligned to a physical page.
		extraPages := physPageSize / pageSize
		base, _ = h.pages.find(npages + extraPages)
	}

	if base == 0 {
		// Try to get the pages from the page allocator.
		base, scav = h.pages.alloc(npages)
		if base == 0 {
			var ok bool
			// Not enough contiguous pages: grow the heap (may map new arenas).
			growth, ok = h.grow(npages)
			if !ok {
				unlock(&h.lock)
				return nil
			}
			base, scav = h.pages.alloc(npages)
			if base == 0 {
				throw("grew heap, but no adequate free space found")
			}
		}
	}
	unlock(&h.lock)

HaveSpan:
	// Initialize the span metadata for the pages we got.
	h.initSpan(s, typ, spanclass, base, npages)

	return s
}

// grow asks the OS for more heap memory, at least npage pages. (Abridged.)
func (h *mheap) grow(npage uintptr) (uintptr, bool) {
	// Round the request up to whole palloc chunks.
	ask := alignUp(npage, pallocChunkPages) * pageSize

	av, asize := h.sysAlloc(ask, &h.arenaHints, true)
	// ...
}

// sysReserveOS reserves address space from the OS without backing it with
// physical memory (PROT_NONE).
func sysReserveOS(v unsafe.Pointer, n uintptr) unsafe.Pointer {
	p, err := mmap(v, n, _PROT_NONE, _MAP_ANON|_MAP_PRIVATE, -1, 0)
	if err != 0 {
		return nil
	}
	return p
}

func mmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) (unsafe.Pointer, int) {
	return sysMmap(addr, n, prot, flags, fd, off)
}
```
Stack Memory
```go
// When a new goroutine is created and no free g can be reused, a g with a
// minimum-size stack is allocated:
if newg == nil {
	newg = malg(stackMin)
}

// stackalloc allocates an n-byte stack. (Abridged.)
func stackalloc(n uint32) stack {
	thisg := getg()

	var v unsafe.Pointer

	// Small stacks come from fixed-size free lists (orders of 2 KB, 4 KB, ...).
	if n < fixedStack<<_NumStackOrders && n < _StackCacheSize {
		// Compute the order: 2 KB -> 0, 4 KB -> 1, 8 KB -> 2, ...
		order := uint8(0)
		n2 := n
		for n2 > fixedStack {
			order++
			n2 >>= 1
		}
		var x gclinkptr
		if stackNoCache != 0 || thisg.m.p == 0 || thisg.m.preemptoff != "" {
			// No P available (or caching disabled): use the global pool, with its lock.
			lock(&stackpool[order].item.mu)
			x = stackpoolalloc(order)
			unlock(&stackpool[order].item.mu)
		} else {
			// Normal case: take a stack from the per-P stack cache, no lock needed.
			c := thisg.m.p.ptr().mcache
			x = c.stackcache[order].list
			if x.ptr() == nil {
				// Cache empty: refill it from the global pool.
				stackcacherefill(c, order)
				x = c.stackcache[order].list
			}
			c.stackcache[order].list = x.ptr().next
			c.stackcache[order].size -= uintptr(n)
		}
		v = unsafe.Pointer(x)
	} else {
		// Large stacks are whole spans, cached in stackLarge by size.
		var s *mspan
		npage := uintptr(n) >> _PageShift
		log2npage := stacklog2(npage)

		// Try to reuse a previously freed large stack span.
		lock(&stackLarge.lock)
		if !stackLarge.free[log2npage].isEmpty() {
			s = stackLarge.free[log2npage].first
			stackLarge.free[log2npage].remove(s)
		}
		unlock(&stackLarge.lock)

		lockWithRankMayAcquire(&mheap_.lock, lockRankMheap)

		if s == nil {
			// Nothing cached: allocate a manually-managed span from the heap.
			s = mheap_.allocManual(npage, spanAllocStack)
			if s == nil {
				throw("out of memory")
			}
			osStackAlloc(s)
			s.elemsize = uintptr(n)
		}
		v = unsafe.Pointer(s.base())
	}

	// The stack occupies [v, v+n): lo = v, hi = v + n.
	return stack{uintptr(v), uintptr(v) + uintptr(n)}
}
```
stackpool and stackpoolalloc
```go
// stackpool is the global pool of spans for small stacks, one entry per
// order, padded to avoid false sharing between the per-order locks.
var stackpool [_NumStackOrders]struct {
	item stackpoolItem
	_    [(cpu.CacheLinePadSize - unsafe.Sizeof(stackpoolItem{})%cpu.CacheLinePadSize) % cpu.CacheLinePadSize]byte
}

type stackpoolItem struct {
	_    sys.NotInHeap
	mu   mutex
	span mSpanList
}

// stackpoolalloc allocates one stack of the given order from the global
// pool. Called with stackpool[order].item.mu held. (Abridged.)
func stackpoolalloc(order uint8) gclinkptr {
	list := &stackpool[order].item.span
	s := list.first
	if s == nil {
		// No free stacks: get a new span from the heap and carve it into
		// stacks of this order.
		s = mheap_.allocManual(_StackCacheSize>>_PageShift, spanAllocStack)
		// ...
	}
	// Pop one stack off the span's manual free list.
	x := s.manualFreeList
	// ...
	return x
}
```
stackcache
```go
type mcache struct {
	// Per-P cache of free stacks, one free list per order.
	stackcache [_NumStackOrders]stackfreelist
}

type stackfreelist struct {
	list gclinkptr // linked list of free stacks
	size uintptr   // total size of stacks in the list
}

// gclinkptr is a pointer that is not traced by the GC.
type gclinkptr uintptr

func (p gclinkptr) ptr() *gclink {
	return (*gclink)(unsafe.Pointer(p))
}
```
stackcacherefill
```go
// stackcacherefill moves stacks from the global pool into the per-P cache
// until the cache holds about half of _StackCacheSize bytes. (Abridged; the
// list/size declarations and the lock are part of the original function.)
func stackcacherefill(c *mcache, order uint8) {
	var list gclinkptr
	var size uintptr
	lock(&stackpool[order].item.mu)
	for size < _StackCacheSize/2 {
		x := stackpoolalloc(order)
		x.ptr().next = list
		list = x
		size += fixedStack << order
	}
	unlock(&stackpool[order].item.mu)
	c.stackcache[order].list = list
	c.stackcache[order].size = size
}
```
Reclaiming Stacks
```go
// gdestroy is called when a goroutine exits. (Abridged.)
func gdestroy(gp *g) {
	// Mark the goroutine dead and detach it from the M.
	casgstatus(gp, _Grunning, _Gdead)
	dropg()
	// Put it on the P's free-g list for reuse.
	gfput(pp, gp)
}

// gfput puts a dead goroutine on pp's local free list; if the list grows
// too long, part of it is moved to the global free list.
func gfput(pp *p, gp *g) {
	stksize := gp.stack.hi - gp.stack.lo

	if stksize != uintptr(startingStackSize) {
		// Non-standard stack size: free the stack now instead of reusing it.
		stackfree(gp.stack)
		gp.stack.lo = 0
		gp.stack.hi = 0
		gp.stackguard0 = 0
	}

	pp.gFree.push(gp)
	pp.gFree.n++

	// Keep the local list bounded: move half of it to the global list.
	if pp.gFree.n >= 64 {
		var (
			inc      int32
			stackQ   gQueue
			noStackQ gQueue
		)
		for pp.gFree.n >= 32 {
			gp := pp.gFree.pop()
			pp.gFree.n--
			if gp.stack.lo == 0 {
				noStackQ.push(gp)
			} else {
				stackQ.push(gp)
			}
			inc++
		}
		lock(&sched.gFree.lock)
		sched.gFree.noStack.pushAll(noStackQ)
		sched.gFree.stack.pushAll(stackQ)
		sched.gFree.n += inc
		unlock(&sched.gFree.lock)
	}
}
```
stackfree
```go
// stackfree frees the stack described by stk. (Abridged.)
func stackfree(stk stack) {
	// Small stacks go back to the per-P cache or the global pool.
	if n < fixedStack<<_NumStackOrders && n < _StackCacheSize {
		// Recompute the order from the size, as in stackalloc.
		order := uint8(0)
		n2 := n
		for n2 > fixedStack {
			order++
			n2 >>= 1
		}
		x := gclinkptr(v)
		if stackNoCache != 0 || gp.m.p == 0 || gp.m.preemptoff != "" {
			// No P available: return it to the global pool under its lock.
			lock(&stackpool[order].item.mu)
			stackpoolfree(x, order)
			unlock(&stackpool[order].item.mu)
		} else {
			// Return it to the per-P cache; flush part of the cache if full.
			c := gp.m.p.ptr().mcache
			if c.stackcache[order].size >= _StackCacheSize {
				stackcacherelease(c, order)
			}
			x.ptr().next = c.stackcache[order].list
			c.stackcache[order].list = x
			c.stackcache[order].size += n
		}
	} else {
		// Large stacks: free the span or cache it in stackLarge.
		s := spanOfUnchecked(uintptr(v))
		if s.state.get() != mSpanManual {
			println(hex(s.base()), v)
			throw("bad span state")
		}
		if gcphase == _GCoff {
			// No GC running: return the span to the heap immediately.
			osStackFree(s)
			mheap_.freeManual(s, spanAllocStack)
		} else {
			// GC is running: defer by caching the span in stackLarge.
			log2npage := stacklog2(s.npages)
			lock(&stackLarge.lock)
			stackLarge.free[log2npage].insert(s)
			unlock(&stackLarge.lock)
		}
	}
}
```